Insight to bring your story to life
At the core of every good story are drama, human interest and topicality, and market research is the perfect tool to identify these ingredients and bring them to life. We believe stories are powerful things. They make us laugh, they make us cry, they change the way we think and act. Stories are potent, emotional and memorable. At Arlington Research we understand this, and we put stories at the heart of every piece of research we undertake. Does the research grab the imagination, paint a picture, explain what is happening? Is it interesting, and does it pass the so-what test? This guide is a tool to help you get the most out of your research and ask the right questions when building a campaign.
What journalists will look for when reporting on research
- Who conducted the research? Are they members of a professional association, such as the Market Research Society (MRS)?
- Has the research been carried out among an appropriate and clearly defined target audience? Journalists will always want to know the sample size, so ensure you note any limitations.
- Was the methodology appropriate? Research results are only valuable if representative of a clearly defined target audience or population. Journalists will expect technical details to be included within the content, to evaluate whether the research reflects the demographic profile of the target audience/population being represented.
- Were the questions asked in a balanced and unambiguous way or did they lead participants to a particular answer?
- Is the commentary on the results accurate? It should be written in an accurate and balanced way, so that it is a true reflection of the research results.
Published surveys should always include sufficient background and contextual information to enable users and readers to interpret the information. This should include information such as:
- Why was the survey undertaken, and what was measured or asked
- The agency who conducted the survey
- Fieldwork dates
- The sample size and geographic coverage of the sample
- Method/s of obtaining participant responses (e.g., online, face-to-face, phone)
- The audience or subjects represented (e.g., consumers, businesses, employees)
- Whether or not the survey data have been weighted
Things to consider when interpreting data
The following provides a best-practice approach to interpreting and using research results when generating content for research-led campaigns.
01 Sample considerations
- Research should be conducted using a robust sample and be sufficiently large to allow for meaningful analysis. Typically, in the UK, consumer research is carried out with at least 1,000 participants, but most national newspapers prefer 2,000. The sample size for business research is dependent upon the size of the audience, e.g., when surveying IT decision-makers 250-350 interviews is a robust sample.
- If the survey sample is not representative of the target audience, the results will be skewed. For example, it will not be possible to draw reliable conclusions for all UK consumers from a sample of UK adults aged 35-65.
- Regional press releases are popular, but often the sample sizes involved are too small to justify separate analysis and reporting (i.e., they are not statistically reliable). As a guideline, to report the percentage results based on any sub-sample, the sample size should be at least 50; results for a base below that can be used, but you must provide a clear health warning about the small sample size.
02 Reporting considerations
- It is critical you correctly define the respondent base. For example, don’t refer to ‘British consumers’ when the survey was conducted in the UK. Be clear about the sample audience or the proportion of the audience sampled who provided that answer.
- Ensure you don’t subtly or significantly alter the question wording, or summarise the question, so that the meaning changes. Ideally, use the exact words of the question within your content.
- Take care to describe changes in percentage findings correctly. For example, a shift from 40% to 60% is not ‘an increase of 20%’. It is an increase of 50% (the difference, 20, divided by the starting value, 40), or it can be described as ‘an increase of 20 percentage points’.
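The distinction can be sketched in a few lines of Python, using the figures from the example above:

```python
# Illustrative figures from the 40% -> 60% example.
before, after = 40.0, 60.0

# Percentage-point change: simple difference between the two percentages.
point_change = after - before                       # 20 percentage points

# Relative change: difference divided by the starting value.
relative_change = (after - before) / before * 100   # a 50% increase
```

The two numbers answer different questions, so a report should always say which one it is quoting.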
- Do not present differences between sub-sample groups as relevant if they are not statistically significant. Significance testing is done to see if ‘the difference is enough to allow for normal sampling error’ and not caused purely by chance. This should be provided in the data tables for your survey.
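Your agency's data tables should already include significance testing, but for intuition, the underlying check for two sub-sample percentages looks roughly like this two-proportion z-test (all figures here are hypothetical):

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: is the gap between two sub-sample
    proportions larger than normal sampling error would explain?"""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical sub-samples: 55% of 500 men vs 48% of 500 women.
z = two_proportion_z(0.55, 500, 0.48, 500)

# |z| above 1.96 means significant at the conventional 95% confidence level.
significant = abs(z) > 1.96
```

If `significant` is false, the difference between the two groups should not be reported as a finding.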
- When referring to research data use the actual research results. For example, stating ‘most people feel they are not getting value for money’ does not clearly reflect the research findings. Instead, the text should be ‘two-thirds of those surveyed (67%) feel they are not getting value for money’, for example.
- If you exclude ‘don’t know’ responses and re-base findings after taking out those who responded with ‘don’t know’ in your analysis, then your commentary needs to make this clear, e.g., instead of ‘60% of the public’ say ‘60% of those who expressed an opinion’.
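A quick worked example of re-basing, with hypothetical figures:

```python
# Hypothetical survey: 1,000 respondents; 540 'yes', 360 'no', 100 'don't know'.
total = 1000
yes, no, dont_know = 540, 360, 100

# On the full base: '54% of the public'.
pct_of_public = yes / total * 100

# Re-based after excluding 'don't know': '60% of those who expressed an opinion'.
rebased = yes / (total - dont_know) * 100
```

The same 540 respondents yield two different percentages, which is why the commentary must say which base was used.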
- Always check the figures used and how they are described in your content carefully. For example, if the figures relate to a sub-sample of those who ‘ever use a smartphone’, this must be clear. It would be wrong to state ‘x% send at least 15 text messages a day’ as this implies it is x% of all mobile users, whereas it should say ‘x% of those surveyed who ever use a smartphone say they send at least 15 text messages a day’.
GB vs UK
Great Britain consists of England, Wales and Scotland, and the United Kingdom is Great Britain and Northern Ireland.
International vs Global
For a survey to be international, only a couple of countries need to be surveyed, e.g. the UK and US. For it to be global, interviews need to be conducted in countries from at least three continents.
03 Data presentation considerations
- Charts or tables should provide sufficient technical details for the reader to know what the results are based on. If charts or graphs are being used, include the full question wording and the base size for the audience(s) the data is being presented for.
- Report research results as whole numbers – don’t use decimal places. To report results as whole numbers, use this simple rule: round up if the decimal is .5 or above (e.g., 65.7% = 66%), and round down if it is below .5 (e.g., 54.3% = 54%).
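One small trap if you automate this: Python's built-in `round()` rounds halves to the nearest even number (`round(48.5)` gives 48), which does not match the round-half-up rule described above. A minimal round-half-up helper:

```python
import math

def round_half_up(pct):
    """Round a percentage to a whole number, always rounding .5 upwards
    (unlike Python's built-in round(), which rounds halves to even)."""
    return math.floor(pct + 0.5)

round_half_up(65.7)  # 66
round_half_up(54.3)  # 54
round_half_up(49.5)  # 50
```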
- Don’t add up the numbers in a ‘select all that apply’ question. With multi response questions, the answers will always add up to more than 100% as each respondent can select more than one answer to the question. Therefore, you can’t just add up the responses for selected answers, as you will be counting some respondents more than once.
- Don’t use mid-points when analysing results from odd-numbered scale questions. For example, many use 5-point scales in surveys for Agree/Disagree. For these, you can analyse the level of ‘Agree’ (‘Strongly Agree’ + ‘Agree’) and ‘Disagree’ (‘Strongly Disagree’ + ‘Disagree’), but the mid-point ‘Neutral’ responses shouldn’t be included in any analysis.
- Be careful when reversing scores in agree/disagree questions. For example, if 44% of respondents agree with a statement, this doesn’t mean 56% disagreed, as this will also include people who selected the mid-point within the scale (i.e., ‘Neither agree nor disagree’) as well as anyone who selected ‘I don’t know’ or ‘Not applicable’, dependent upon how the question was asked.
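The two scale points above can be illustrated together with a hypothetical 5-point question in which 44% agree, echoing the example figure:

```python
from collections import Counter

# Hypothetical responses from 1,000 participants.
responses = Counter({
    "Strongly agree": 180, "Agree": 260,
    "Neither agree nor disagree": 240,
    "Disagree": 150, "Strongly disagree": 120,
    "Don't know": 50,
})
total = sum(responses.values())

# Combine the two 'agree' and two 'disagree' points; leave the mid-point out.
agree = (responses["Strongly agree"] + responses["Agree"]) / total * 100
disagree = (responses["Disagree"] + responses["Strongly disagree"]) / total * 100

# Reversing the agree score would be wrong: 100 - 44 = 56,
# yet only 27% actually disagreed - the rest are neutral or 'don't know'.
assert agree + disagree < 100
```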
- Take care when using averages. The arithmetic mean is the sum of the values divided by the number of values. The median is the middle number in a sequence of numbers when ordered by rank. The mode is the value that appears most often in a set of data. Only the mean can be described as an ‘average’ without any qualification, but ideally your reporting should specify which average is being used.
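The three averages can differ sharply on the same data, as a short example with Python's `statistics` module shows (hypothetical figures):

```python
import statistics

# Hypothetical data: text messages sent per day by nine respondents.
# Two heavy users pull the mean well above the typical respondent.
values = [2, 3, 3, 3, 5, 6, 7, 30, 31]

mean = statistics.mean(values)      # sum / count -> 10
median = statistics.median(values)  # middle value when ranked -> 5
mode = statistics.mode(values)      # most frequent value -> 3
```

Here ‘the average respondent sends 10 texts a day’ and ‘the typical respondent sends 5’ are both defensible claims, which is exactly why the reporting should name the average used.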