This post is by Darren Woolley, Founder of TrinityP3. With his background as analytical scientist and creative problem solver, Darren brings unique insights and learnings to the marketing process. He is considered a global thought leader on agency remuneration, search and selection and relationship optimisation.
Yet another presentation landed in my inbox that makes a huge number of promises supported by the ‘results’ of some pseudo-statistical market research. I say pseudo because I come from a medical science background: my degree included two years of statistics, followed by six years of statistical analysis in medical research at the Royal Children’s Hospital in Melbourne under the diligent supervision of Dr Xenia Dennett in the Victorian Muscle Unit.
She would, rightly, not let a scientific paper leave the Unit without it being proofed, re-proofed and proofed again, to ensure all references, results and statistics had been checked and that no statement or finding was made without the data and statistical analysis to support it.
Playing fast and loose with statistics
You can imagine my surprise when I landed my first job in advertising and experienced a research debrief on some of the concepts we had developed, which had been placed into qualitative research. We were presented with statistical results such as: 32% of participants did not like Concept A, but 76% liked Concept D. There was even a bar chart showing the relative ‘likeability’ of the four concepts that had been ‘researched’.
My first question was “Is that statistically significant?” There was tittering in the room as those who had been in the industry much longer than me took great delight in what they saw as my naivety. One account director went as far as to point out that my Concept A was disliked by as many people as liked Concept D. Not mathematically true, but I had bigger fish to fry.
The point was the sample size, so I asked if they could remind me of the number of people in the qualitative groups. There were three groups of 12, a total of 36 people, recruited against the target audience of grocery buyers aged 18–54 with a mortgage (home owners). My next question was to the media strategist on the account (in the days before media had completely unbundled):
“How many people are in the target audience?”
The answer was a little over 3 million at the time. So I asked again: was this result statistically significant, given the sample size relative to the population? Was the sample large enough to support the conclusion? And that was without even raising sampling error from the recruitment process, or the interpretive bias that comes from focus groups rather than a survey.
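To make the objection concrete, here is a minimal sketch, in Python, of the uncertainty on a proportion from a sample of 36. This is my own illustration, not anything from the original debrief, and it uses the standard normal approximation under the generous assumption of random sampling (which focus groups are not).

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion.

    Normal approximation; assumes a simple random sample, which a
    recruited focus group is not, so treat this as a best case.
    """
    return z * math.sqrt(p * (1 - p) / n)

# The debrief's 'result': 76% of 36 participants liked Concept D.
moe = margin_of_error(0.76, 36)
print(f"76% +/- {moe:.0%}")  # prints: 76% +/- 14%
```

Even in that best case, 76% from 36 people carries a margin of error of roughly ±14 percentage points. Note too that the population of 3 million barely enters the calculation: precision is driven almost entirely by the absolute sample size, not the sampling fraction.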
Numbers don’t lie, or do they?
This is why the joke that “72% of people are fooled by statistics” is so funny, because it is true. To determine if the result is significant, you need to ask the following questions:
- What was the sample size?
- How does this compare to the population being tested?
- When and where was the research undertaken?
- How were the participants recruited?
- What was the methodology?
- Did it include controls?
Just because someone provides you with numeric results does not mean they are significant, or that you should rely on them to make a decision.
Judging the research
We read about market research every day. Agencies, consultants, media and suppliers all use research to generate publicity and draw attention to their business, services or venture. Most of this research is not particularly robust, and yet some of it gains traction and momentum and becomes an industry meme with very little substance.
One way of sorting the wheat from the chaff in a media or online report is to look for the research source. Where was the research undertaken? Does the source provide the information above on sample sizes, methodology and the like? By going to the source of the press release or story, you can start to assess the validity of the results and the support for the conclusions drawn. Until then, it is best not to trust anything you read, except as interesting water cooler or coffee break gossip.
Back to the sales presentation in the inbox
So this very professional, substantial (almost 50-slide) presentation lands in my inbox, offering marketers the opportunity to get the results they have always dreamed about from their agencies. The offer was supported by research results including:
- 100% of marketers agree that when it comes to managing agencies it’s something you really need to learn through experience
- 40% of marketers don’t think their communication agencies work well together as a team
- 90% of marketers believe that they could do more to maximize the value they get out of their communication agencies
- 100% of marketers agree that there’s always something more they can learn from people who have been in the industry before them
- Only 40% of marketers believe great work is synonymous with great results
Apart from the last point (a concern in itself, since it implies 60% of marketers believe work does not need to be great to deliver results), the statements raise more questions for me than they answer.
The first problem is the results themselves: either they all round up beautifully, or perhaps the researcher only asked ten marketers to participate. Second, many of the results appear quite obvious, and you wonder what questions were asked to elicit these responses. Also, what type of marketers were they? How were they recruited? Are they clients of this company? And was the survey taken before the company’s involvement, or after?
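The ‘only asked ten marketers’ suspicion can be checked with a little arithmetic. With ten respondents, every achievable percentage is a multiple of 10, which fits the deck’s 40%, 90% and 100% perfectly; and even a unanimous ten-out-of-ten result says less than it seems. A sketch of my own (the n = 10 figure is my assumption, not anything the deck discloses), using the exact Clopper–Pearson lower bound for the all-successes case:

```python
# With n respondents, achievable percentages are multiples of 100/n.
n = 10
achievable = sorted({100 * k // n for k in range(n + 1)})
print(achievable)  # prints: [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

# Exact one-sided 95% lower confidence bound when all n respondents agree.
# Clopper-Pearson simplifies to alpha ** (1/n) when x = n.
lower = 0.05 ** (1 / n)
print(f"'100% agree' from n={n}: true rate could be as low as {lower:.0%}")
```

So a deck reporting “100% of marketers agree” could, on a sample this size, be consistent with a true agreement rate of about 74%: hardly the universal endorsement it implies.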
There is no way to know, as no source is given for either the results or the survey itself. Of course, you could argue that it is simply sales puffery and therefore not important.
Big data and statistics
The fact is that statistics have always mattered. Statistical analysis is how we unlock knowledge from data and information. At a time when we are collecting more data on customers and customer behavior than ever before, it is important that we understand and apply the fundamentals to the analysis and interpretation of that data.
This means applying rigor to all aspects of the way we present marketing and the results of our marketing efforts. Continuing to allow the ends to justify the means simply undermines the credibility of marketing in the broader business world.
So next time someone tries to convince you of their point of view with some pseudo-statistics, what are you going to do?