Yes, we know any talk of data quality and statistics is enough to send most of us to sleep, but they are really important. For many PR execs data quality isn't a significant topic of debate until they have a problem which either causes extra work internally, or worse, the client or journalist makes it an issue. Then it is too late!
In our experience there are several quality issues which can cause real problems for PR agencies. The following (in no particular order) is Redshift’s list of the ten most common issues we have seen. Mistakes can occur for many reasons, but being aware of potential pitfalls is essential for avoiding the problems in the first place.
1) The press release isn’t supported by the data
This can be a real headache: the press release goes out and journalists pick up the errors or inconsistencies. Check the press release several times, and ask the polling company to verify the highlighted figures before you issue it.
2) The data tables are wrong
If the data tables are wrong and the error goes unnoticed, there is a real risk of writing a compromised story. At the very least, it could mean having to rewrite the press release at a point when there is little time left to do so.
3) The question types used haven’t provided the right results
Knowing which questions to ask is very much about experience – knowing what works. See our recent article on key survey issues identified by PR professionals. A good agency can always assist with questionnaire design.
4) The question phrasing was ambiguous
If the questions or question code frames haven’t been well phrased, the resulting data could be seriously compromised. Garbage in, garbage out!
5) The sample isn’t truly nationally representative
A lot of people mistake sample size and national coverage for a nationally representative survey. A properly controlled survey should ensure the responses reflect the structure of the UK population. For example, if you have 2,000 interviews but 75% of the responses are from women, the survey is not nationally representative – in the real population, women account for around 52%.
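The skew in that example can be made concrete with a quick calculation. The sketch below (illustrative figures only; the 52%/48% split is the approximate UK adult population used in the example above) compares sample shares to population shares and derives the corrective weight a researcher would typically apply, i.e. population share divided by sample share:

```python
# Hypothetical sketch: spotting a skewed sample and computing
# the post-stratification weight for each group.
sample = {"women": 0.75, "men": 0.25}        # the skewed 2,000-interview example
population = {"women": 0.52, "men": 0.48}    # approximate UK adult split

for group, share in sample.items():
    gap = share - population[group]
    weight = population[group] / share       # weight applied to each response
    print(f"{group}: sample {share:.0%} vs population {population[group]:.0%} "
          f"(gap {gap:+.0%}, weight {weight:.2f})")
```

Women's responses would be weighted down to roughly 0.69 and men's up to 1.92 – and a weight that far from 1.0 is itself a warning sign that the fieldwork, not the weighting, needs fixing.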
6) It's not statistically reliable
There needs to be a minimum number of interviews for the results to be statistically reliable. For consumer surveys this is usually at least 1,000, although sub-groups (e.g. mums with children under 5) can be smaller.
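The 1,000-interview rule of thumb can be checked with the standard margin-of-error formula for a proportion, assuming a simple random sample, 95% confidence, and the worst-case split of 50/50:

```python
import math

# Margin of error (95% confidence) for a proportion from a simple random sample.
# p = 0.5 is the worst case; z = 1.96 is the 95% confidence multiplier.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 250):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
```

At n = 1,000 the margin is about ±3.1 percentage points; drop to a sub-group of 250 and it roughly doubles to ±6.2 – which is why smaller sub-groups need to be interpreted with more caution.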
7) The polling company/panel can’t reach the correct audience in big enough numbers
This is related to number 6. If a survey needs to reach a cross-section of age groups by gender but falls well short in one particular age group, then it is not possible to sub-analyse the data.
8) There are too few interviews by region or city
Don’t write a press release breaking out data for Reading unless the survey was designed to deliver a statistically reliable minimum number of responses for each city you want to write about. A survey of 2,000 interviews won’t allow sub-analysis of 30 cities – at least not reliably.
9) The sample wasn’t truly independent
If a survey is drawn from a particular supplier or website, and the sample isn’t properly representative of the population, the survey results will not be credible.
10) Inadequate quality control measures
This is all about the quality management of the research panel used to conduct the opinion survey. A good panel will enforce minimum quality checks: for example, eliminating made-up responses and speeders (respondents who race through the questionnaire), setting quarantine periods to limit the effect of professional respondents, and using geo-tagging to freeze out people taking surveys from outside the country. It’s not exciting, but if these measures aren’t taken, there is a real risk of poor data being collected.
About the Author: Neil Carey is the director of Redshift Research, a leading market research company.