Seven watch-outs with AI-driven marketing mix modelling

The call for greater accountability of marketing investment has never been louder.

And as technology continues to revolutionise the way that people interact and behave, the science of marketing and budget allocation has become increasingly complex.

I’m sure I don’t need to tell you that due to the wide mix of variables at play, the lack of holistic data, and the variety of statistical models being applied, most marketers have a massive headache when attempting to attribute success to all the working parts of their marketing activity.

Trying to understand consumer interaction and behaviour at scale is challenging.

Cue the vendor buzzwords.

“You need Marketing Mix Modelling”

“You need Multi-Touch Attribution”

“You need to apply Machine Learning and AI to solve the challenge”

TrinityP3 is increasingly discussing this challenge with marketers, vendors and agencies.

In this post, I outline some of the pitfalls and watch-outs that we’re seeing, as well as some of the key questions that should be considered to successfully implement a next practice approach in this area.

Marketing Mix Modelling has been around for decades

Marketing Mix Modelling (MMM) has been around for decades; however, I first heard of it in 2000, when working with Nestle.

They were aggregating a wide variety of data sources and using regression analysis to identify pockets of opportunity to improve what was then called above-the-line media advertising – attempting to link ad spend to sales uplift.

Nestle and their media agency were trying to find the strength (or degree) of the relationship between one variable and another.

The graphs looked cool to a young buck like me who had loved maths at school – seeing a line of best fit through a scatter plot of dots.
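To make that concrete, here is a minimal sketch of the kind of regression behind early MMM – fitting a straight line that relates weekly media spend to sales and reading off the strength of the relationship. The numbers are entirely invented for illustration.

```python
import numpy as np

# Hypothetical weekly data: media spend (in $000s) and sales (units)
spend = np.array([120, 80, 150, 60, 200, 90, 170, 110], dtype=float)
sales = np.array([540, 410, 600, 350, 720, 430, 650, 500], dtype=float)

# Ordinary least squares: sales = intercept + slope * spend
X = np.column_stack([np.ones_like(spend), spend])
(intercept, slope), *_ = np.linalg.lstsq(X, sales, rcond=None)

# The correlation coefficient is the 'strength of relationship'
r = np.corrcoef(spend, sales)[0, 1]

print(f"Baseline (zero-spend) sales: {intercept:.0f} units")
print(f"Estimated uplift per $1,000 of extra spend: {slope:.2f} units")
print(f"Correlation between spend and sales: r = {r:.2f}")
```

Modern MMM layers adstock (carry-over) and saturation effects on top of this, but the underlying idea – a model that explains sales as a function of spend – is the same.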

In my early career, I could see that the science of marketing was starting to catch up to gut instinct.

Over the next five years, I went on to prove that a direct-to-consumer communications program could be ROI positive – based on incremental profit margin and share of basket analysis.

The science behind our approach was based on 2-factor and 3-factor product purchase correlations, all at an individual household shopper level, utilising claimed purchase shifts from survey data, proof of actual product purchase through returned barcodes, and assessment of segments against shopper scan-data panels.

It was a wondrous time, but it was slow, costly to prove, and all very linear.

Limitations of MMM

Fast forward to today. With the explosion of digital communication options, and the resulting massive data footprint, we continue to debate whether a predictive model can help marketers allocate budgets, target consumers better across multiple channels, and crack the code of whether correlation = causation (a question I’m sure you’ve heard well challenged by Professor Byron Sharp).

Marketers and creative and media agencies are still grappling with the challenge of optimising the media and messaging mix.

And so, the statistical models are getting sexier.

However, one of the key problems with MMM is aggregating data.

This is particularly true for digital channels, where analysts have typically applied ‘last touch’ or ‘last click’ data.

Evolution to multi-touch attribution

We’ve seen the emergence of multi-touch attribution (MTA), which lets advertisers try to understand the role that different media and formats play, at various stages of a buyer’s journey, in influencing a sale.

However, MTA is typically flawed by the poor link between offline and online channels, as well as by being a retrospective analysis of past behaviour.

And because 100% trackability is rarely achievable, most attribution models are incomplete.

So, a red flag goes up whenever the question of true campaign success is asked.

MTA is also partially flawed because it fails to account for how people actually make decisions, and for the multitude of other positive and negative influences that can impact decision making – friends’ recommendations, word of mouth, interviews with influential people, new competitor product entrants and so on.
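To illustrate the difference (with a made-up journey and channel names), here is a minimal sketch of last-click credit versus a simple linear multi-touch split:

```python
from collections import defaultdict

# Hypothetical journey: the channels a single buyer touched before converting,
# in order, plus the value of the resulting sale.
journey = ["TV", "Social", "Search", "Email", "Search"]
sale_value = 100.0

def last_click(journey, value):
    """Last-click attribution: the final touchpoint gets all the credit."""
    credit = defaultdict(float)
    credit[journey[-1]] += value
    return dict(credit)

def linear_mta(journey, value):
    """A simple linear multi-touch model: credit is split evenly across touches."""
    credit = defaultdict(float)
    share = value / len(journey)
    for channel in journey:
        credit[channel] += share
    return dict(credit)

print("Last click:", last_click(journey, sale_value))
print("Linear MTA:", linear_mta(journey, sale_value))
# Last click: {'Search': 100.0}
# Linear MTA: {'TV': 20.0, 'Social': 20.0, 'Search': 40.0, 'Email': 20.0}
```

Neither view sees the offline influences listed above – a model can only credit the touchpoints it can actually track.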

Is artificial intelligence changing the playing field?

We’ve now started hearing of Unified Marketing Measurement (UMM) – combining MMM with individual person-level data from multi-touch attribution.

This takes the communication impact down to a personal level whilst still considering wider market trends, combining engagement data with conversion data.

Great in theory, but prior to cloud computing it was also data-intensive and costly.

But today, AI and cloud computing are revolutionising the playing field.

Over the last few years, we’ve seen more practical, real-time applications and a resurgence of MMM discussions, bringing a greater level of propensity modelling and prediction to marketing.

However, caveat emptor, “buyer beware”.

There is still a myriad of watch-outs when embarking on optimising budget allocations and media spend with advanced statistics, modelling, and machine-learning algorithms.

Here are seven to get you started:

  • What is your objective in applying AI to your marketing approach? Is it for better market research and insight generation? Is it for a better strategy of who to target, and how to segment and position? Or is it for understanding actions and greater personalisation? One of the biggest challenges we have seen in applying AI is a lack of clarity on the value that marketers want to generate over a defined period – clarity that is needed to assess the success of the new approach and the test cases being conducted.
  • Get beyond the buzzwords. If anyone continually drops neural networks, deep learning, and algorithms into the conversation without any actual explanation or application to your business needs, then run for the hills. With AI complexity comes the challenge of understanding. AI is difficult to translate across organisations and up to senior management who are not always familiar with the technology and terminology. Traditional silo structures need to be broken down by people who have expertise straddling business, technology and analytics, in order to explain real meaning and value.
  • Identify the benefits. There are a myriad of benefits from AI. Relational benefits. Emotional benefits. Functional benefits. But it’s important to identify the actual benefit of your marketing approach. Meaning and value are one thing, but ultimately improving marketing is about creating a stronger attachment to benefits.
  • Not everything is equal. What may work in one industry (e.g. FMCG) may not work in another (e.g. healthcare) due to a myriad of factors. And what works in B2C may not work in a B2B environment, which traditionally involves more nurturing and face-to-face intervention. In many MMM cases, video content is also treated homogeneously, whereas different platforms treat viewability very differently. So, beware when reviewing vendor and agency case studies in this space.
  • Are you using first-party data and existing customer lifetime value? This can allow you to use longitudinal customer data for dynamic personalisation over time, rather than a like-minded or suggestive approach. A golden question to ask: how is ‘product returns’ data being ingested into the AI system? As we enter a cookie-less world with universal IDs, it will also be critical to ensure that you’re using a quality filter with your digital advertising that matches your brand values. Using ROI as the driver, and not cost per thousand, can lead you to low-cost, poor-quality environments that look good in a model but don’t actually convert to a sale.
  • ‘Meta’ may mean mega failure. We’ve seen companies spend years trying to build massive ‘data lakes’ of company-wide data to have one meta-model which gives a single source of truth with ‘golden single customer view records’. But poor data tracking, poor data cleansing, and the lack of consumer context have simply resulted in these companies paying massive price tags without any demonstrable results. The AI system has been doomed to fail before running any use cases.
  • Transparency of the algorithm to remove bias. An algorithm is only as good as the data fed into the system, so if the data is biased, the model and its outcomes may be too. Data can be biased by biased human and ethical decisions and by under-representation, as well as by societal norms and customs that prevent a true picture from being created. We’ve seen plenty of this occur around ethnicity, inequality in gender and attitudes, and cultural linguistics. You may need to intervene with the machine to alter the weighting or help it understand from a real-world perspective (see the sketch after this list).
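As an illustration only (the segments and counts below are invented), ‘altering the weighting’ can be as simple as up-weighting records from an under-represented segment before the model is retrained – for example, with inverse-frequency sample weights:

```python
import numpy as np

# Hypothetical training records tagged with an audience segment.
# Segment B is heavily under-represented relative to its real-world share.
segments = np.array(["A"] * 90 + ["B"] * 10)

# Inverse-frequency weights: under-represented segments get proportionally
# more influence when the model is trained (passed as sample weights).
values, counts = np.unique(segments, return_counts=True)
freq = dict(zip(values, counts / len(segments)))
weights = np.array([1.0 / freq[s] for s in segments])
weights /= weights.mean()  # normalise so the average weight is 1

for segment in values:
    print(segment, "weight:", round(weights[segments == segment][0], 2))
# A weight: 0.56, B weight: 5.0
```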

The opportunities moving forward

As mentioned upfront, I would also like to outline some areas that should be considered when implementing a next practice approach utilising AI-driven MMM.

Here are three important areas.

Firstly, gain greater clarity on your strategy. Strategy is how you aim to achieve your marketing objectives – so define how AI-driven MMM is meant to assist you. If AI-driven MMM is to deliver value, you will need clarity on:

  1. how strategy is developed within your organisation;
  2. who is accountable for it;
  3. and how the resulting marketing performance will be measured.

Secondly, relook at your structures and capability, and gain clarity on how you will manage cross-functional collaboration. As outlined in the watch-outs, it is critical to align your approach to data, technology, insight generation and operational usage. This all sounds easy when spoken; however, we observe major challenges around alignment within organisations.

Thirdly, reset your processes. With AI-driven MMM you will have access to data interpretation at a much faster rate than before. Whether it’s weekly, fortnightly, or monthly, you will be able to refresh data integrations, relook at scenarios and outcomes, and consider more optimal approaches far more quickly. However, speed and expectation can kill. So, it’s important to build an efficient operating process as well as feedback and decision-making loops.

Complexity or simplicity?

Keep in mind, if you proceed down the AI path, even an abundance of data may still not give you a complete picture of reality, nor the truth of which media worked.

For some organisations, it may be better to utilise a limited set of data.

You can always conduct the old A/B split to test one medium against another, one format against another, one creative message against another, one look and feel against another, one region against another – and identify the actual incremental lift in sales and profit.
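As a minimal sketch with invented numbers, reading incremental lift from an A/B split needs little more than two conversion rates and a significance check:

```python
import math

# Hypothetical A/B split: exposed region vs control region
exposed = {"customers": 20000, "sales": 640}   # saw the new activity
control = {"customers": 20000, "sales": 560}   # did not

rate_e = exposed["sales"] / exposed["customers"]
rate_c = control["sales"] / control["customers"]

incremental_lift = (rate_e - rate_c) / rate_c
incremental_sales = (rate_e - rate_c) * exposed["customers"]

# Two-proportion z-test (normal approximation) as a sanity check
pooled = (exposed["sales"] + control["sales"]) / (exposed["customers"] + control["customers"])
se = math.sqrt(pooled * (1 - pooled) * (1 / exposed["customers"] + 1 / control["customers"]))
z = (rate_e - rate_c) / se

print(f"Conversion: exposed {rate_e:.2%}, control {rate_c:.2%}")
print(f"Incremental lift: {incremental_lift:.1%} (~{incremental_sales:.0f} extra sales)")
print(f"z-score: {z:.2f}  (roughly, |z| > 1.96 suggests a real difference)")
```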

I’d love to hear from you if you’re grappling with these dilemmas.

How are you solving them?

Are you looking to expand your AI capability?

Or are you just starting out on your marketing science journey?

And finally, what has been your biggest learning to date?

Are you struggling with the complexity that digital and data offer to business? Let TrinityP3 make sense of the new digital ecosystem for you.