The traps and pitfalls of developing an output-based agency fee model

Since 2004, we have been using a workload agency fee model to calculate agency resources based on the work to be undertaken and the outputs delivered.

We realised we had a phenomenal amount of data regarding agency workloads and outputs, gathered independently from around the world. Perhaps surprisingly (though not to us), the data showed a remarkably strong correlation across the samples for the outputs measured, which for creative and digital agencies covered more than three thousand outputs.

More importantly, TrinityP3 had more than 15 years of data on media agencies and was able to replicate the same approach based on outputs.

Reading the trade media, it appears that the industry has caught up with this methodology and that many people are today developing their own output-based model. This includes procurement teams, agencies, and other consultants. Having had discussions and reviewed a significant number of these models, it seems appropriate to offer an overview of many of the common traps and pitfalls impacting their development.

Sampling

Sampling is a significant issue for this approach, and the trap is often twofold: Either the sample is too small or narrow, or it is too large. (Yes, it is possible to be too large.) Let me explain. Let’s start with a small or narrow sample.

You sit down with the incumbent agency or agencies and ask them to provide the underlying numbers of hours to deliver particular tasks. The assumption here is that the data is robust and accurate, as in statistical terms, the sample and the data population could be considered equivalent.

However, the issue is that many timesheet systems are flawed, just as many billing systems are also flawed. After all, one of the reasons for moving from a resource cost model to an output-based model is the incredible difficulty in validating the hours taken to undertake the work from a series of spreadsheets of agency hours.

Likewise, simply relying on very large data samples from multiple sources can also be problematic. As we have seen in our own modelling, the resources required in hours will vary dramatically for the same or similar output based on the advertiser team, the advertiser category, and the advertiser complexity. Simply averaging a huge pool of data will result in an average result. Yet, no one wants to pay for average advertising, do they?
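To illustrate the averaging trap, here is a minimal sketch using invented hours figures (none of these numbers come from our benchmark data). A pooled average can sit far from every real segment of the data:

```python
# Hypothetical hours to produce the same output for advertisers of
# differing complexity. All figures are invented for illustration.
hours_by_segment = {
    "low_complexity": [40, 45, 50],
    "medium_complexity": [75, 80, 85],
    "high_complexity": [130, 140, 150],
}

# Pooled average across all samples, ignoring the segments.
all_hours = [h for hours in hours_by_segment.values() for h in hours]
pooled_avg = sum(all_hours) / len(all_hours)

# The segment averages tell a very different story.
segment_avgs = {
    seg: sum(hours) / len(hours) for seg, hours in hours_by_segment.items()
}

print(f"Pooled average: {pooled_avg:.1f} hours")   # ~88.3 hours
for seg, avg in segment_avgs.items():
    print(f"{seg}: {avg:.1f} hours")               # 45.0, 80.0, 140.0
```

In this toy example, pricing every advertiser at the pooled average would overcharge the low-complexity advertiser by almost double and undercharge the high-complexity one by more than a third.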

Like for like

This brings us to the next trap: defining the like-for-like comparison of the output. A product, such as a cake mix or a computer, can be defined by its metrics: weight, ingredients, components, materials, etc. An advertising output has no such ready-made metrics, so one of the complexities to overcome is defining the diversity of outputs in a way that is measurable and comparable.

After all, a Facebook campaign could mean a thousand different things to a thousand different people. But beyond defining the specific and measurable outputs, we have also had to contend with classifying each one: is it a standalone origination, an extension of an existing campaign or communication idea, or an adaptation of previous work, and does it sit at a simple, moderate, or complex level of communication complexity?

Almost a decade ago, we benchmarked the output-based model of a major global advertiser. They had fewer than 100 outputs used across more than eighty markets. The biggest challenge of the project was not the benchmarking; it was defining what each of these outputs, often labelled with terms such as “Brand TVC”, actually meant. We discovered that the exact definition of each one appeared to differ by market, giving us more than eighty different definitions.

But with more than three thousand outputs and a whole range of levels to tweak and set based on the current practices and levels of complexity, the truth is, like well-tailored clothing, we can make an output-based model to fit you perfectly.

Economy of scale

Let’s return to your Facebook updates. Michael Farmer, in his book Madison Avenue Manslaughter, reports that twenty years ago, the average brand produced between 200 and 250 different outputs per year. This was when media choices were largely traditional, and campaign planning was an annual event.

Today, or more specifically in 2015 (and potentially much worse now), the average brand is producing 2,000 to 3,000 or more outputs per year. What has driven this exponential growth in outputs? Social and digital media, and the desire for weekly or daily updates to feed the appetite for new content.

So, the question is, if it takes 80 hours to produce one set of Facebook updates, how long does it take the agency to produce 10 sets, 50 sets or 100 sets of Facebook updates? Well, it depends on how they are produced. If you brief in a set of Facebook ads once a week for a year, that is 52 sets, assuming each set is originated each week. But if you brief in a whole year of Facebook updates at once, approve them all at once, produce them all at once, and then dispatch them weekly, that could be a fraction of the time and price of doing this in a weekly cycle.
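The gap between the two production models can be sketched with simple arithmetic. The 80 hours per set comes from the example above; the reuse factor for batched production is a purely hypothetical assumption, not a benchmarked figure:

```python
HOURS_PER_SET = 80   # from the example above
SETS_PER_YEAR = 52   # one set of Facebook updates per week

# Weekly cycle: every set is briefed, originated and approved from scratch.
weekly_total = HOURS_PER_SET * SETS_PER_YEAR

# Batched cycle: one full origination, then each subsequent set reuses the
# approved creative platform. The 25% reuse factor is illustrative only.
REUSE_FACTOR = 0.25
batched_total = HOURS_PER_SET + HOURS_PER_SET * REUSE_FACTOR * (SETS_PER_YEAR - 1)

print(weekly_total)          # 4160 hours
print(round(batched_total))  # 1100 hours
```

Even with a generous reuse assumption, the two briefing approaches produce totals that differ by a factor of nearly four, which is why an output model cannot price a “set of Facebook updates” without knowing how it will be produced.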

It gets complex, unless you can move beyond simple statistics to develop an algorithm or formula for calculating it. But more on that later. If you over- or underestimate the economies of scale, you pay too little or too much, neither of which is the desired outcome.

Assessing value

Increasingly, the industry is talking about the output fee model as a pricing or value model. The desire is to move beyond the underlying cost of human resources and develop a model based on the value of those outputs. We fully agree with this approach, but the first step is always to understand the current productivity and workload of the relationship in producing those specific outputs.

If the current agency relationship is underproductive, then any output model based on the current resources will continue to be underproductive. What we mean by underproductive is that it takes more time to produce the output than the industry average. This loss of productivity can be due to wasteful processes, duplication of effort or misalignment of expectations.

Our first step in setting output-based fees is to assess the current level of productivity in the relationship. This allows advertisers and their agencies to identify and discuss ways to improve this before moving to an output-based fee model.

Calculating agency fees

As mentioned, TrinityP3 has a significant data sample from many agencies and many markets across a wide range of advertiser categories and agency types. But beyond simply providing a statistical view of the data, we have been able to model the data to develop a formula or algorithm that allows us to accurately calculate the resource requirements for a specific scope of work or combinations of agency outputs based on the Verificom tools.

Using these formulas and adding specific variables to define the relationship, we can accurately calculate the workload of a particular output. While we acknowledge that a television brand ad or a promotional television campaign is not the same for every advertiser and every agency partner, we can calculate and adjust the output model to the specifics of the current relationship.
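As a rough illustration of what such a formula can look like, the sketch below adjusts a benchmark base-hours figure for the specifics of a relationship. The variable names and multipliers are invented for this example; they are not the actual Verificom model or its fitted values:

```python
# Illustrative multipliers; a real model would be fitted to benchmark data.
COMPLEXITY = {"simple": 0.8, "moderate": 1.0, "complex": 1.4}
ORIGINATION = {"adaptation": 0.5, "extension": 0.75, "origination": 1.0}

def estimate_hours(base_hours: float, complexity: str, origination: str) -> float:
    """Adjust a benchmark base-hours figure for one output's specifics."""
    return base_hours * COMPLEXITY[complexity] * ORIGINATION[origination]

# A television brand ad benchmarked at 400 base hours, produced as a
# complex, standalone origination:
print(estimate_hours(400, "complex", "origination"))  # 560.0

# The same output adapted from existing work at a simple level:
print(estimate_hours(400, "simple", "adaptation"))    # 160.0
```

The point of structuring the calculation this way is that the same benchmark output can be priced differently for different relationships without abandoning the like-for-like comparison.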

But that is just one of the many things that the Verificom tools can allow us to do. You can find out the others here. It can even help us assess the value of your agency fees past, present and future.

And if you are developing your own output-based or value-pricing model and wanted to have us assess it to ensure you are not falling for any one of the many traps and pitfalls, we are happy to do that too.

TrinityP3’s Agency Remuneration and Negotiation service ensures that the way in which you pay your agency is optimal. Read more here