I rate poor development and implementation of Key Performance Indicators (KPIs) second only to poor leadership as a driver of poor performance. Using poor KPIs (or none at all) is like flying without instruments in a hailstorm, attempting to land on a postage-stamp-sized airstrip atop a cliff. Your only option is to be completely reactive and instinctive, trusting your judgement rather than data.

Trusting your judgement works fine for some people, some of the time. It fails most people most of the time and will fail all people at some time.

Operating with poor KPIs not only increases the risk of failure, it reduces the likelihood of success. Companies with good KPIs are generally ahead of their competitors in utilising their assets (financial, physical, intellectual, customers and relationships) and in detecting when changes in the internal or external environment are having a positive or negative impact on the organisation's ability to achieve its goal.

Many mistakes are made in the development and implementation of KPIs. Here are the three most common.

The KPI is not a key indicator of performance

Often, lag (result) indicators of an organisation's performance are offered as KPIs: for example, profit, sales per employee, sales-to-cost ratio, employee satisfaction, brand awareness or customer satisfaction.

These types of indicators are important, but they are not key performance indicators.

For an indicator to wear the mantle of KPI, it must impact several of the factors that are critical to the organisation's ability to achieve its goal. Most of the indicators above would fail that test: they do not impact factors critical to the organisation's success; they merely record the result of a combination of factors.

They also fail a second test: being the responsibility of an individual or a team executing a process or set of processes.

And they fail a third test: having an obvious corrective action.

A more likely KPI than sales per employee is “Customer appointments per week scheduled over the next three months”. This measure can be attributed to a team, its corrective actions are relatively obvious, and it impacts critical success factors (CSFs) under the headings of customer satisfaction, finance (cash flow) and operations (demand planning). It is also a lead indicator for sales and operations and a current or lag indicator for marketing.

KPIs usually lie deeper in an organisation's processes than most organisations realise. For example, Lord King, in sanctioning a review of British Airways' operations, was finally convinced by the consulting team that on-time arrival and departure was a KPI. The result of focusing on this KPI was a turnaround in British Airways' fortunes. The KPI galvanised the airline, from maintenance crews through to food contractors, to get aircraft in and out on time. Customers were happier, crew and other staff were happier, BA missed fewer of its allotted take-off and landing slots, and waste and duplication declined.

I have read many articles insisting that a KPI must be SMART (Specific, Measurable, Actionable, Realistic and Time-based). That is not true. Insisting a KPI be time-based makes it sound more like a target or business objective, e.g. double sales within two years. A KPI must be specific and measurable, and it must indicate what action will bring its numeric result back into an acceptable range. Additionally, the notion of a KPI being realistic only makes sense in relation to the range set for it.

Most organisations confuse targets or Key Result Indicators (KRIs) with KPIs. These are necessary for any organisation to progress, but they are not, in themselves, KPIs.

Too many “KPIs”

When I ran Shell’s business in the South Pacific, I had other roles in addition to my official titles and general management duties: Human Resources, Safety and Renewable Energy were my domain. Our Marketing Manager ran Retail, Commercial, Aviation and Marine, and our Operations Manager ran storage and distribution for Commercial and Retail Fuels, Lubricants, Aviation and Marine, plus a local coastal tanker.

Whilst we developed our business vision, business goal, strategy and tactics locally, with attendant PIs and KPIs, we also had to fit in with the mainstream of Shell, with some elements of our business directed from Melbourne and some from London.

At the completion of our normally exhausting round of business planning, I counted that, across the different divisions of Shell with an influence over our planning, we had more than 100 KPIs on which we had to report at least monthly, and 200 more we were required to record as PIs but not report on. The story of my rebellion is for another time; suffice to say that 100 or so KPIs could in no way all be “Key”. It was not only illogical: given the nature of the “KPIs” and the acceptable ranges set for them, we would have been asking our people to achieve mutually exclusive outcomes.

Having too many KPIs almost guarantees failing another test of KPIs: they must support each other.

At best, with too many KPIs, expect your staff to pick the ones that mean something to them to guide their decision making and day-to-day actions. At worst, your staff, not being caught up in the intellectualisation of KPIs, will see them for what they are, a mish-mash of wishful thinking divorced from reality, and will ignore them.

Ten is a good number for KPIs. For every KPI you set above ten, you increase the risk of unintended consequences by way of mutual exclusivity or lack of engagement.

The KPI cannot be measured well enough

How well do you need to be able to measure a KPI? Well enough that the staff who are accountable for it, and the staff who are responsible for keeping its value within the desired range, believe the reported results.

If your staff do not believe that the reported results have an error range that makes sense given the KPI's target range, they will only pay lip service to it, and at that point it is no longer a KPI. If staff believe that data integrity issues give them less influence over the reported result than random chance has, the KPI fails the test of having an obvious corrective action.

If that is the case, discontinue the KPI and choose one that is perceived to be measured with sufficient accuracy, even if it is less suitable. Make resolving the data integrity issues an action with a SMART target, but do not keep using the original KPI until that target is met.