Contrary to the (presumed) belief of the majority of those who push out bad statistics to the airwaves, newspapers, TV bulletins and client presentations… quantitative research is a science and not something that’s up for personal interpretation.
With every aspect, from design to collection and analysis, there is a huge set of rules which needs to be followed, checked and cross-checked in order for your research to be worth anything at all.
Unfortunately, many people either…
a) ‘Can’t cook / won’t cook’ when it comes to learning how, and think ‘Sure anybody can do that, what would I need any certification for?‘ (Although copywriters and creatives, I feel, may suffer even more of this)
b) Just can’t do basic statistical analysis and (worse still) don’t even realize it
Usually, it’s a lethal combination of both.
For the most part, I’ve come to accept it all. And it’s a wonderful release to realize most of the negative, scary stats in the news probably aren’t true – not least because the majority of research doesn’t actually seem to get used for any tangible decision making. And who can blame us?
In all the interviews I’ve ever had with ‘pure’ research companies (6), not one of them asked about previous academic qualifications that would deem me suitable for any role from exec to department head positions. Not one. What they were interested in was what brands I’d worked for and what useful connections I may have had. So, why should we the general public put any weight behind statistics created by people who aren’t even interested in taking themselves seriously?
However, when an institution like the CSO gets it wrong, I really do despair.
Today our newspapers and social feeds are filled with commentary about ‘leprechaun economics’ regarding Ireland’s GDP report for 2015. Well, unfortunately it won’t be the only data from the CSO deemed completely devoid of use this year, because there is also the census.
There are a number of issues with what is essentially the most important piece of quantitative research in the country.
I’ve broken out the issues line by line and included some learnings we can all take from them to better use powerful research tools such as Survey Monkey… rather than erm, being one.
1. The ‘Race Card’
Apparently included for the purposes of examining ‘discrimination’, this question suggests the CSO offices evidently do not have any sense of irony.
Outside of the countless arguments already out there since the ’80s for not including such a question (which I won’t insult your intelligence by spelling out), let’s just take a moment to note how ‘Irish’ is not an option for any other skin colour.
And that is what this is about, skin colour.
So being ‘white Irish’ is enough detail for a white respondent, but being ‘black Irish’ (whatever the hell that means) requires further explanation. The racist research equivalent of going, ‘Ah yeah but where are you really from?!‘. What actionable insights are to be derived from this question, as a standalone or when crossed with other data, completely escapes me.
If you are conducting research, remember that you’re talking to other human beings. Consider your own biases and how they affect what you’re asking and how you go about asking it. Ethics is one of the most important themes of doing research – particularly when it comes to what you do with that data once you have it, where it is stored and who else gets to see it. Make sure you know the rules in your country before embarking on any data collection.
2. Imagine there’s no heaven…
Question 12 is about religion. And that’s just grand. Two issues that relate to statistical error however…
- Agnostic/Atheist/No religion have not been separated out as potential responses. I would strongly argue that putting ‘No religion’ as the only option of those three will have pushed some respondents to tick ‘Roman Catholic’ as the more accurate-feeling answer.
- ‘No religion’ should have been placed above the ‘Other’ box. Below it, it runs the risk of being missed – a lot of people have anecdotally mentioned this.
When designing research, always consider all possible answers and ensure that no one response gets more favourability over another, based on how the question has been laid out or phrased. As a researcher you should be neutral, not leading. And you should never presume your own views will be the ‘average’.
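To make that layout point concrete: if your survey tool lets you control option order (most online tools do), one way to stop any response gaining favourability purely from its position is to randomise the substantive options per respondent while pinning catch-all options to the end. A minimal sketch in Python – the question and options here are invented for illustration, not taken from the census:

```python
import random

def present_options(substantive, pinned):
    """Shuffle the substantive answer options so no option benefits
    from always appearing first; keep catch-all options pinned last."""
    shuffled = substantive[:]  # copy so the master list is untouched
    random.shuffle(shuffled)
    return shuffled + pinned

# Hypothetical religion question, options invented for illustration
options = present_options(
    ["Roman Catholic", "Church of Ireland", "Islam", "Presbyterian"],
    ["No religion", "Other (write in)"],
)
print(options)  # substantive options in random order, pinned ones last
```

Obviously a paper census can’t randomise per respondent, which is exactly why the fixed ordering needs so much more care.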
3. Your Health is your Wealth…
So goes the old adage. I doubt, however, there will be a ‘wealth’ of accurate data from this set of questions. I’m -4.0 in both eyes, so come the inevitable zombie apocalypse, I’m rather screwed. That’s a serious disability! However, in modern life as a middle class citizen in Ireland this really doesn’t pose an issue for me. The same can be said of some hearing impairments. Likewise, is dyslexia to be counted? Depression? Asthma? If the purpose of this question is to be crossed with the next one (on restrictions to activities), how is it to be of use? Or accurately answered?
Ensure that if you are designing research questions that you always give clear set parameters by which people will know if they are answering correctly.
Descriptors such as ‘chronic’ or ‘serious’ don’t help. They’re entirely up to interpretation by the reader. If it’s too hard to set those limitations, simply ask open-ended questions and code back afterwards. It will take more time but at least the answers will actually be worth something!
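Coding back can be as simple as mapping free-text answers against an agreed codebook, with anything unmatched flagged for a human to look at. A sketch of the idea in Python – the keywords and categories here are invented for illustration; a real coding frame is built from the actual responses received:

```python
# Hypothetical keyword -> category codebook for a health question;
# a real frame would be drawn up from the responses themselves.
CODEBOOK = {
    "glasses": "sight",
    "short-sighted": "sight",
    "hearing aid": "hearing",
    "deaf": "hearing",
    "asthma": "respiratory",
}

def code_response(text):
    """Return the categories whose keywords appear in the answer;
    anything unmatched is flagged for manual review."""
    text = text.lower()
    codes = sorted({cat for kw, cat in CODEBOOK.items() if kw in text})
    return codes or ["UNCODED - review manually"]

print(code_response("I'm short-sighted and have mild asthma"))
# → ['respiratory', 'sight']
```

The manual-review bucket is the important bit: it is where the definitional edge cases (is dyslexia in? is asthma?) get decided consistently, once, rather than by each respondent.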
4. Who Cares?
Surely this data is already available from Revenue? This may not seem to matter, until you consider the fact that this is question space wasted, on a survey that only takes place every five years. The longer the survey, the more likely you are to get drop-offs and therefore an unsatisfactory sample. Obviously not an issue for the census, but for other surveys, use the space wisely.
If you have a 50-question survey that takes an hour to do, ask yourself – whether it’s door to door, over the phone, online etc. – are the type of people willing to fill it out to the bitter end really the type of people you want to be gaining insights from? This is particularly important when you’re researching products geared towards the higher end of the market but offering incentives for completion of surveys.
For example – if you’re researching experience with luxury air travel and offer a survey incentive of a Pigsback coupon for Tesco to respondents… fail!
5. Werk, werk, werk, werk, werk…
This question is apparently for the benefit of planning things like transport. Again, is this data not already available from the likes of Revenue?
Asking questions you may already have an answer to is a very common research mistake. So make sure you already know about previous projects undertaken by your business and the publicly available data already out there.
6. Not wanting to force the issue…
I am quite sure an awful lot of us out there have no idea what the answer to this question is, particularly if we are not home owners. Not having an ‘I don’t know’ option will skew this data inaccurately.
This is probably the most common mistake in survey design. Forcing answers leads to false data, and also to people ‘dropping off’ if they morally feel they cannot be of ‘help’ to you. My advice would be to put in a ‘Don’t know’ and a ‘Do not wish to say’ type option as standard, then cull back once you’ve established they are 100% not needed.
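The ‘as standard’ part is the habit worth building in. A tiny sketch of what that might look like when defining closed questions programmatically – the question wording and helper here are my own invention, purely to illustrate the default:

```python
# Non-forcing escape options appended to every closed question by default.
STANDARD_TAIL = ["Don't know", "Prefer not to say"]

def closed_question(text, options, allow_unsure=True):
    """Return the full option list shown to respondents.
    Only set allow_unsure=False once piloting has established
    the escape options are 100% not needed."""
    return options + STANDARD_TAIL if allow_unsure else options

print(closed_question(
    "Does your dwelling have a septic tank?",  # invented example question
    ["Yes", "No"],
))
# → ['Yes', 'No', "Don't know", 'Prefer not to say']
```

Making the escape options opt-out rather than opt-in means forgetting them becomes the hard thing to do.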
7. Two Wrongs do not Make a Right
Of course my favourite one is a media question! Smart TVs are ‘personal computers’, iPads are personal computers, smartphones are personal computers. Unfortunately many of us will have interpreted this question as asking about a desktop computer, on account of calling them ‘PCs’ since they came into existence. I’m not even entirely sure what the census is trying to find out here… The funnest bit is that this question was already asked in the exact same way back in 2011, and flagged as problematic. Well, unfortunately the ability to compare two false results does not improve the value or accuracy of your data. Two wrongs don’t make a right!
A lot of companies and brands sign up to regular research pieces over a set period of time. This is particularly good for a brand that wishes to monitor things like recall, awareness, trial, brand scores etc. But if you feel your previous designs were inaccurate, a backlog of incorrect data from before to compare against ain’t going to suddenly give either mistake value. Just bite the bullet and do it properly this time!
So to summarise, some basic rules for research are:
- Remember it’s a human being that will be answering your research – each uniquely different and flawed. (And isn’t that wonderful?) Always keep them in mind when designing research.
- Leave your own biases at the door.
- Try to sound neutral at all times (Remember the automatic compulsion in humans is to please others. If people think you ‘want’ or ‘agree’ with a certain response, most of the time they will go with that answer).
- Ensure that your survey structure does not create bias either.
- Set clear parameters for measurement, and don’t leave anything up to respondent interpretation.
- Consider who your panel is. Will they be an accurate reflection of your future target audience?
- Consider the research methodology (e.g. can you really conduct phone research about student life when most people in their teens/twenties don’t own a landline?)
- Keep it as short and as simple as possible. Look for opportunities to use question branching where possible. A little harder for you. A whole heap easier for your respondents.
- Know the already-known knowns before you decide what you want to, em, know.
- Sometimes people just don’t know the answer. And that’s OK – if not leading to an insight in itself.
- Two records of false data do not make for an accurate comparison report. Bite the bullet. Start again.
- You can’t BS maths. Take a little time to learn to do statistical analysis properly, or rope in someone who already does. As someone who is shockingly hopeless at pretty much every other branch of mathematics, I can assure you it’s not the worst one to get your head around.
- The zombie apocalypse is inevitable.
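On the ‘can’t BS maths’ point above: one of the first things worth getting your head around is the margin of error on a headline percentage. A minimal sketch, assuming simple random sampling and a 95% confidence level (the standard textbook formula, nothing census-specific):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p from n respondents,
    assuming simple random sampling (z=1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A headline like "52% of people agree" from 1,000 respondents:
moe = margin_of_error(0.52, 1000)
print(f"52% ± {moe * 100:.1f} percentage points")
# → 52% ± 3.1 percentage points
```

In other words, that ‘majority’ of 52% could just as easily be 49% – which is exactly the kind of thing the bad-stats merchants at the top of this post never mention.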