The concept of performance-based remuneration, or payment by results, appears to me to be universally embraced. After all, according to the latest ANA Agency Compensation Trends Survey, 68% of marketers’ arrangements have a performance- or results-based payment component. The problem is that it is often poorly executed.
Almost three years ago I wrote about the “Reasons why performance-based remuneration or payment by results often fails”, which included:
- The stick is bigger than the carrot
- The objective is virtually unobtainable
- The calculation is way too complex (or too expensive)
- The metrics are irrelevant to the business
- Linking contribution and value creation to payment
But I have noticed another reason this often fails, similar to point three (the calculation is way too complex or expensive): there are simply too many metrics in the measure.
The typical range of metrics
Typically, these performance metrics fall into three basic categories:
Business Performance (Hard)
Examples include: sales, traffic, profit, market share, volume growth, etc. These can be measured by the same criteria that the advertiser uses for their internal bonus systems.
The agency often claims that business results are not within its ‘span of control’, as many factors besides advertising can affect business outcomes.
Advertising Performance (Medium)
Examples include: product awareness, ad awareness measures, consumer measures, attitude ratings, persuasion, purchase intent, awards, brand equity, image, effectiveness awards, etc.
This kind of performance assessment is vulnerable to the research technique used, statistical anomalies and debates over creative ‘philosophy’.
Agency Performance (Soft)
Relates to the evaluation of agency functional areas: account services, creative and media in terms of: performance, service, relationship, cost efficiencies, etc.
This is highly subjective and may be affected by ‘entertainment’ on the upside and personality problems on the downside.
The right type of metrics
Choosing the right metrics is more important than choosing the right number of them, so let’s deal with this first.
What are the right type of metrics?
1. Develop a list of the metrics available to you.
Consider availability and cost. We had a client who proposed investing almost a quarter of a million dollars a year to buy data on a quarterly basis to measure and pay a performance bonus of less than $200,000. A poor investment.
2. Rank the metrics in order of importance to the various stakeholders.
Not all stakeholders across an organisation have the same objectives (which explains much misalignment and lack of collaboration). It is important to align metrics to the most valuable stakeholder groups: CFOs and CEOs will generally focus on financial metrics, while marketers will have marketing metrics as well, and so on. Understanding these alignments is crucial.
3. Evaluate the level of influence the agency has on the metric.
Of course no-one can control everything, but marketing is about influence. Therefore it is important to understand which metrics the agency can influence, and by how much. We had a retail client who wanted to measure the agency against sales, but we pointed out that the agency had more influence on foot-traffic through the stores, which correlated closely with sales.
4. Plot the importance against the level of influence of the agency.
Create a matrix plotting key-stakeholder importance against agency influence. At the bottom sit the metrics least important to the key stakeholders (let’s say agency relationship performance) and at the top the metric of most significance to the CEO (company profitability). On the left of the matrix sit the metrics the agency can most directly influence, such as the relationship with the client, and on the right those it can least directly impact, such as the profitability of the company. Everything else falls somewhere in between.
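As a rough sketch of the matrix idea, the ranking below scores each metric on both axes and favours those that sit high on importance and influence together. The metric names and 1-to-10 scores are invented for illustration, not real client data.

```python
# Illustrative only: hypothetical metrics scored 1-10 on
# (importance to key stakeholders, agency influence).
metrics = {
    "Company profitability": (10, 2),
    "Market share": (8, 4),
    "Brand health": (6, 6),
    "Store foot-traffic": (5, 8),
    "Agency relationship": (2, 9),
}

def balance_score(importance, influence):
    # A simple product rewards metrics strong on BOTH axes;
    # a metric scoring zero on either axis scores zero overall.
    return importance * influence

ranked = sorted(metrics.items(),
                key=lambda kv: balance_score(*kv[1]),
                reverse=True)

for name, (imp, inf) in ranked:
    print(f"{name}: importance={imp}, influence={inf}, "
          f"score={balance_score(imp, inf)}")
```

With these made-up numbers, store foot-traffic comes out on top: it is not the single most important metric to the CEO, but it is the one the agency can meaningfully move that still matters to the business, which is exactly the balancing act step 5 describes.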
5. Select the most appropriate metrics for the situation.
This is a balancing act. It is not valuable to choose metrics the agency cannot directly impact, just as it is not valuable to choose metrics that are not significant to the key stakeholders. But you can look for metrics the agency can influence that also correlate with the more significant ones, e.g. store foot-traffic and retail sales, or brand health and market share.
The right number of metrics
A common mistake is that, in trying to mitigate risk and cover as many metrics as possible, the final selection ends up looking like a shopping list. This is counterproductive for a number of reasons:
- The agency ends up focusing on a multitude of metrics and actually influencing none at all.
- Or they focus only on the ones they can directly influence, gambling that this will maximise their returns.
- The relative size of the bonus paid for each metric diminishes as the number of metrics grows, and so the influence of the bonus is also diminished.
- The more metrics, the more measuring, the greater the complexity and cost, and the less likely the scheme is to drive focus and effort.
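The dilution point above is simple arithmetic. Assuming, for illustration, a fixed bonus pool of $200,000 split evenly across the metrics (the pool size and even split are assumptions, not a rule):

```python
# Illustrative only: how much bonus rides on each metric as more are added,
# assuming a fixed pool split evenly.
total_bonus = 200_000  # hypothetical annual bonus pool in dollars

per_metric = {n: total_bonus / n for n in (1, 3, 5, 10)}

for n, share in per_metric.items():
    print(f"{n} metric(s) -> ${share:,.0f} riding on each")
```

With one metric, the full $200,000 focuses the agency on a single outcome; with ten, only $20,000 rides on each, which is rarely enough to change behaviour on any of them.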
The ideal number, in my experience, is a singular focus on the one most important metric that the agency can directly influence. For those wanting a more diversified range of metrics, three is the ideal maximum, depending on the size of the performance bonus on offer.
Reasons this process fails
There are two main reasons this process fails. The first is the marketer or procurement making the performance model horribly complex by trying to turn it into a surgical tool for supplier management. These models largely fail because they do not acknowledge the bluntness of the tool.
The second is the agency trying to mitigate its risk by adding as many metrics as possible, particularly subjective ones. Unlike empirical metrics, which can range from zero to the highest level, subjective metrics tend to inhabit a narrow range, usually above the midpoint.
But entering into a performance bonus in order to mitigate risk defeats the purpose of the performance measure in the first place. It is like an athlete wanting to add beauty, intelligence and likeability as factors in determining the result of the Olympic 100-metre sprint.
Critical to success is a significant carrot and a clear, focused metric (or metrics) that the agency can influence and that is significant to the key stakeholders within the organisation.
How are your performance metrics looking? Are they relevant? Are they overly complex? Are they focused?
What do you think?