The concept of performance-based remuneration or payment by results appears to be universally embraced. After all, according to the ANA Agency Compensation Trends Survey, 68% of marketers’ arrangements have a performance or results-based payment component. The problem is that it is often poorly executed.
Recently, we updated an article about the “Reasons why performance-based remuneration or payment by results often fail”, which included:
- The stick is bigger than the carrot
- The objective is virtually unobtainable
- The calculation is way too complex (or too expensive)
- The metrics are irrelevant to the business
- Failing to link contribution and value creation to payment
But we have noticed another reason these arrangements often fail, related to point three (the calculation is way too complex or expensive): there are simply too many metrics in the measure.
The typical range of metrics
Typically, these performance metrics fall into three basic categories:
Business Performance (Hard)
Examples include sales, traffic, profit, market share, volume growth, etc. These can be measured by the same criteria the advertiser uses for their internal bonus systems.
Agencies often claim that business results are not within their ‘span of control’, as many factors besides advertising can affect business outcomes.
Advertising Performance (Medium)
Examples include product awareness, ad awareness, consumer measures, attitude ratings, persuasion, purchase intent, awards, brand equity, image, effectiveness awards, etc.
This performance assessment is vulnerable to the choice of research technique, statistical anomalies, debates over creative ‘philosophy’, and disputes about attribution between creative and media: was it the message or the medium that was mostly responsible?
Agency Performance (Soft)
Relates to the evaluation of agency functional areas (account service, creative and media) in terms of performance, service, relationship, cost efficiencies, etc.
This is highly subjective and may be affected by ‘entertainment’ on the upside and personality problems on the downside.
The right type of metrics
The right metrics are more important than the number. So, let’s deal with this first.
What are the right types of metrics?
1. Develop a list of metrics available to you.
We had a client who proposed spending almost a quarter of a million dollars a year buying quarterly data to measure and pay a performance bonus of less than $200,000. A poor investment.
Elements of performance that are already regularly being measured internally (and therefore likely of importance – see point 2 below) are the best ones to include.
2. Rank the order of importance of the metrics to the various stakeholders.
Not all organisational stakeholders have the same objectives (which often explains misalignment and lack of collaboration), so it is important to align metrics to the relevant stakeholder groups. CFOs and CEOs will generally focus on financial metrics, marketers on marketing metrics, and so on. Understanding these alignments is crucial.
3. Evaluate the level of influence the agency has on the metric.
Of course, no one can control everything, but marketing is about influence. Therefore, it is important to understand the metrics the agency can influence and by how much. We had a retail client who wanted to measure the agency against sales, but we pointed out that the agency had more influence on foot traffic through their store, which correlated closely with sales.
4. Plot the importance against the level of influence of the agency.
Create a matrix plotting key stakeholder importance against agency influence. On the vertical axis, the bottom holds the metrics least important to key stakeholders (such as the agency–client relationship), and the top holds those of most significance (for a CEO, typically profitability). On the horizontal axis, the left holds the metrics the agency can most directly influence (again, the client relationship), and the right holds those it can least directly impact (such as company profitability). All the other metrics sit somewhere in between.
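The matrix can be sketched numerically: score each candidate metric on both axes and rank by the combination. All metric names and scores below are hypothetical, purely for illustration.

```python
# Score each metric on stakeholder importance and agency influence (0-10).
# Names and numbers are illustrative only, not real client data.
metrics = {
    # metric: (importance to key stakeholders, agency influence)
    "Company profitability": (10, 2),
    "Market share":          (8, 4),
    "Retail sales":          (7, 5),
    "Store foot traffic":    (6, 8),
    "Brand health":          (6, 6),
    "Client relationship":   (3, 9),
}

# Rank by the product of the two scores: the best candidates
# sit high on both axes, not just one.
ranked = sorted(metrics.items(),
                key=lambda kv: kv[1][0] * kv[1][1],
                reverse=True)

for name, (importance, influence) in ranked:
    print(f"{name:22s} importance={importance:2d} "
          f"influence={influence:2d} score={importance * influence:3d}")
```

With these illustrative scores, store foot traffic ranks first: less important to the CEO than profitability, but far more influenceable by the agency, which is exactly the balancing act step 5 describes.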
5. Select the most appropriate metrics for the situation.
This is a balancing act. It is not valuable to choose metrics the agency cannot directly impact, just as it is not valuable to choose metrics that are not significant to the key stakeholders. But you can look for metrics that the agency can influence, which correlate with the more significant ones, e.g., Store foot traffic and retail sales, brand health and market share, etc.
Equally, it is a mistake to choose metrics that contradict each other. A recent remuneration review we conducted for a major FMCG advertiser revealed that half the agency bonus was tied to the media agency reducing inventory cost, and the other half to “media innovation” (which was also ill-defined). Whilst the two aren’t necessarily mutually exclusive, innovation generally requires investment (e.g. test-and-learn scenarios) that is at odds with reducing cost. Needless to say, the agency focused on the metric it could most influence, cost savings, securing at least 50% of the bonus while spending little to no effort on innovation. Equal attention to both would most likely have meant missing both targets and earning no bonus at all.
The right number of metrics
The mistake often made is that to mitigate risk and to cover as many metrics as possible, the final selection often looks like a shopping list. This is counterproductive for a number of reasons:
- The agency ends up focusing on a multitude of metrics and influencing none at all.
- Or they focus only on the ones they can directly influence, gambling that this will maximise their returns.
- The relative size of the bonus paid for each metric diminishes with the number of metrics being measured, so the bonus’s influence is also diminished.
- The more metrics, the more measuring, the greater the complexity and cost and the less likely to drive focus and effort (from both agency and advertiser).
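The dilution point above is simple arithmetic: the same bonus pool spread over more metrics leaves less riding on each one. The bonus figure below is hypothetical.

```python
# Illustrative only: how a fixed bonus pool dilutes as metrics are added.
total_bonus = 200_000  # hypothetical annual performance bonus pool

for n_metrics in (1, 3, 5, 10):
    per_metric = total_bonus / n_metrics
    print(f"{n_metrics:2d} metrics -> ${per_metric:,.0f} at stake per metric")
```

At ten metrics, only $20,000 rides on any single result, which is rarely enough to change agency behaviour on any of them.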
In our experience, the ideal number is a singular focus on the one most important metric that the agency can directly influence. For those wanting a more diversified range, three is the practical maximum, depending on the size of the performance bonus on offer.
However, neither should this be an “all or nothing” proposition, particularly with just one or two metrics chosen. The agency should be able to earn a proportion of the bonus: achieve half the result, earn half the bonus, and so on. Too often, we see agencies work hard towards the KPI targets, fall marginally short, and receive nothing for the effort, which turns the whole scheme into a disincentive.
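The sliding-scale payout described above can be sketched as a simple formula. The 50% floor and 100% cap below are hypothetical design choices, not a recommendation.

```python
def pro_rata_bonus(achieved: float, target: float, max_bonus: float,
                   floor: float = 0.5, cap: float = 1.0) -> float:
    """Pay the bonus in proportion to the result achieved.

    Nothing is paid below `floor` (here, 50% of target); the payout
    is capped at `cap` (here, 100% of the bonus). Both are assumed
    parameters an advertiser and agency would negotiate.
    """
    ratio = achieved / target
    if ratio < floor:
        return 0.0
    return max_bonus * min(ratio, cap)

# e.g. hitting 80% of a sales target with a $200,000 bonus on offer
print(pro_rata_bonus(achieved=80, target=100, max_bonus=200_000))
```

A cap above 1.0 would also reward overdelivery, one way to address the "nothing extra for going above and beyond" problem.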
Reasons this process fails
There are two main reasons this process fails. The first is the marketer or procurement making the performance model overly complex, trying to turn it into a surgical tool of supplier management without acknowledging that it is, in reality, a blunt instrument.
The second is the agency trying to mitigate risk by adding as many metrics as possible, especially subjective ones. Unlike empirical metrics, which can range from zero to the maximum, subjective scores tend to inhabit a narrow band, usually above the midpoint, making a partial payout almost guaranteed.
However, entering into a performance bonus in order to mitigate risk defeats the purpose of the performance measure in the first place. It is like an athlete wanting to add metrics such as beauty, intelligence and likeability to determine the result of the Olympic 100-metre final.
Critical to success is a significant carrot: a clear and focused metric (or metrics) that the agency can influence and that is significant to the key stakeholders within the organisation.
How are your performance metrics looking? Are they relevant? Are they too complex? Are they focused?
What do you think? Contact us and let us know how well your performance metrics are performing.
Or read more on how we can help you with ‘Agency Commercial Evaluations’, including performance fee models.