In agency pitching, proper evaluation is important. The services you’re procuring are complex. Not everything you’re buying is directly quantifiable as units – in fact, much of it is not quantifiable at all.
There are multiple candidates to keep track of. And there’s a lot of written documentation, presentations, charts, diagrams and general agency sales talk to remember.
So, you need scorecards of some sort. But what sort? Here are some golden rules to consider when developing something fit for purpose.
Make it Internally Compliant. Whatever method you use (and there are lots), make sure that your own stakeholders recognise what you’re doing as valid.
I once managed a pitch for a large multinational in which the local marketing lead made specific demands about how agency scoring should take place, only to be superseded by the regional leadership – at the end of the process, once a decision had been made – who asked me to change the scoring to favour the cheapest agency, because price was the only measure they recognised.
There’s a need for brutal candour about what, actually, is being assessed (and by the way, rather than jumping through needless hoops, you should tailor the process accordingly).
Tailor The Questions and Limit the Generic.
Agree up front what pressure points exist. What is it you’re looking for that separates ‘any agency’ from ‘the right agency’? And make sure your panel has opportunity to score against them.
Yes, there will be some more generic questions about agency skillsets. But intersperse them with questions that go to the heart of what you actually need (this approach helps prompt your panel with questions of their own, too).
Allow for Panel Fluctuation in Methodology
Ideally, the assessment panel should be constant throughout every stage of a pitch. In reality, this is often not the case – people come in, drop out or re-emerge.
If you know that fluctuation is going to be a problem, take simple steps to mitigate it beyond having non-scoring panel members – such as using straight averages as the primary ranking mechanism – to ensure minimal adjustment or inconsistency in approach.
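The averaging mechanism can be sketched in a few lines. This is a minimal illustration with hypothetical agencies and scores, not a real scorecard: each agency keeps only the scores actually submitted, so a panellist who misses a stage has no entry, rather than a zero that would drag the total down.

```python
from statistics import mean

# Hypothetical scores per agency; absent panellists simply have no entry.
scores = {
    "Agency A": [4, 3, 5],       # three panellists scored
    "Agency B": [5, 4],          # one panellist was absent
    "Agency C": [3, 4, 4, 2],    # an extra panellist joined
}

# Rank by straight average, highest first.
ranking = sorted(scores, key=lambda a: mean(scores[a]), reverse=True)
for agency in ranking:
    print(f"{agency}: {mean(scores[agency]):.2f}")
```

Because the ranking uses means rather than totals, an agency assessed by two panellists is compared on equal terms with one assessed by four.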
Don’t Limit to Numbers
Scoring 1-5 or similar, as long as there are some basic criteria for what those numbers mean, is fine. But always ensure that your panel is able to make comments alongside the numbers.
In my experience, the comments can be far more insightful than the numbers for group discussion and decision making.
If the scoring system is too complicated or convoluted, the results will be a mess. Keep the assessment questions in plain English; limit the number of questions to no more than 10, wherever possible; and if the scoring is weighted, keep the number of criteria to a minimum.
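A simple scorecard along these lines might look as follows. The criteria, weights and comments here are purely hypothetical examples of the shape of the thing: a handful of plain-English criteria, weights that sum to one, and a free-text comment sitting alongside every number.

```python
# Hypothetical criteria and weights; weights must sum to 1.
weights = {
    "strategic thinking": 0.4,
    "relevant experience": 0.3,
    "chemistry": 0.2,
    "commercials": 0.1,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # sanity-check the weighting

def weighted_total(criterion_scores: dict) -> float:
    """Combine 1-5 scores per criterion into one weighted figure."""
    return sum(weights[c] * s for c, s in criterion_scores.items())

# One panellist's entry: a number plus a comment for each criterion.
entry = {
    "strategic thinking": (4, "Sharp read of our category"),
    "relevant experience": (3, "Thin on our sector, strong adjacents"),
    "chemistry": (5, "Best team dynamic of the shortlist"),
    "commercials": (2, "Pricing model felt opaque"),
}
total = weighted_total({c: s for c, (s, _) in entry.items()})
print(f"Weighted score: {total:.2f}")
```

Keeping the comment next to the score, rather than in a separate document, makes it easy to surface the qualitative remarks during the group discussion.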
Remove The Agendas
Once in an RFI process, agency assessment should be devoid of emotion (‘I worked with them previously and didn’t like them’; ‘I think their work is better than our incumbent’s’, or similar).
These discussions should have been completed and resolved in the pre-selection/agency longlisting stage, before the agency becomes too invested in the process; and questions pertaining to these sorts of things should not be included in an assessment.
Once the agencies are selected for RFI, they should have a bias-free and equal chance of success, judged entirely within the boundaries of the pitch assessment.
In general terms, having a proper, aligned scoring method is part of pitching due diligence. It’s also part of pitching integrity; in exchange for the work that both you and the agencies inject into a process, it’s only right that all are given proper airtime, on a level and appropriate playing field, to make comparative and informed decisions.