Let me give you the definitions first, because people use these terms interchangeably and they shouldn't. A metric is any quantitative measure you track. Website sessions. Support tickets opened. Employee headcount. Time to first response. These are all metrics, data points that describe something about your organisation. You can have hundreds of them.
A KPI, a Key Performance Indicator, is a metric that is explicitly linked to a strategic objective and tracked specifically because it tells you whether you're achieving that objective. The word "key" is doing a lot of work in that definition. A KPI is not just important. It is decisive. If it moves in the wrong direction, something significant needs to change.
The practical difference
Here's a way to think about it. Every KPI is a metric, but most metrics are not KPIs. Your website has hundreds of trackable data points: page views, scroll depth, click-through rates, session duration, bounce rate, conversion rate, source attribution, device split, load time. These are all metrics. But if your strategic objective is to grow qualified pipeline, the only measure that rises to the level of a KPI is the number of qualified leads generated through the site. Everything else is context, useful for diagnosis, not for performance management.
The confusion between these two things causes real damage. Teams spend time in weekly reviews going through 30 numbers, most of which have no direct connection to whether the business is succeeding. Leaders feel informed because they're looking at data. But the specific measure that tells them whether the strategy is working gets lost in the noise.
"Having 30 KPIs is the same as having no KPIs. When everything is key, nothing is."
The symptoms of metric confusion
There are a few patterns that signal a team has lost the distinction. The most common is the weekly data dump, a standing agenda item where someone shares a dashboard of 25 numbers and the team looks at each one briefly before moving on. Nothing changes as a result. There's no threshold that, if crossed, triggers a different decision. The metrics are reported, noted, and filed away.
A related symptom is the vanity metric, a number that looks good and goes up, but doesn't indicate business performance. Monthly active users can be a vanity metric if retention is poor. Revenue can be a vanity metric if margin is negative. Newsletter subscribers can be a vanity metric if open rates and click-through rates are zero. Vanity metrics are addictive because they're generally easy to move and make teams feel productive. The test is simple: if this number doubles, does the business materially improve? If not, it's probably not a KPI.
A third symptom is the absence of thresholds. A KPI without a threshold isn't being used as a KPI, it's just being tracked. "Our NPS score is 42" is a metric observation. "Our NPS score is 42, against a target of 55, which means we're 13 points below the threshold that would trigger a customer experience review" is a KPI in use. The threshold is what makes the number actionable.
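If your KPI reporting already lives in a spreadsheet export or a small script, the same idea can be written down directly. A minimal sketch in Python, using the illustrative NPS figures above; the field names and the wording of the action are mine, not a standard:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    current: float
    target: float         # the threshold that makes the number actionable
    action_if_below: str   # what crossing the threshold should trigger

    def review(self) -> str:
        gap = self.target - self.current
        if gap > 0:
            return (f"{self.name} is {self.current:g}, {gap:g} below target "
                    f"{self.target:g}: {self.action_if_below}")
        return f"{self.name} is {self.current:g}, at or above target {self.target:g}: no action triggered"

# Illustrative figures from the example above: NPS of 42 against a target of 55.
nps = KPI("NPS", current=42, target=55, action_if_below="trigger a customer experience review")
print(nps.review())
```

The point isn't the code, it's that the action is attached to the number before the number is reported.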
How to choose the right KPIs
Start from the strategic objectives, not from the data. What does the organisation need to achieve this year? Not what does it want to track, what does it need to achieve? For each objective, ask: what is the one number that would most clearly tell us whether we're achieving this? That's your KPI candidate.
Then apply three tests. First, is it measurable: can you actually get a reliable number at a reasonable frequency? Second, is it within the team's control, or is it downstream of decisions made elsewhere? Third, if this number moves significantly, is there a clear action the team can take? If you can't answer yes to all three, the metric may be interesting but it's probably not a KPI for this team.
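If you keep a metric inventory somewhere structured, the three tests translate into a simple yes/no filter. A sketch, assuming a hand-annotated inventory; the metric names and field names are made up:

```python
# A hand-annotated metric inventory; names and fields are illustrative.
candidates = [
    {"name": "Qualified leads from website", "measurable": True,  "in_team_control": True,  "clear_action": True},
    {"name": "Brand awareness (estimated)",  "measurable": False, "in_team_control": False, "clear_action": False},
    {"name": "Page views",                   "measurable": True,  "in_team_control": True,  "clear_action": False},
]

def passes_three_tests(metric: dict) -> bool:
    # A metric only becomes a KPI candidate if all three answers are yes.
    return metric["measurable"] and metric["in_team_control"] and metric["clear_action"]

kpi_candidates = [m["name"] for m in candidates if passes_three_tests(m)]
print(kpi_candidates)  # ['Qualified leads from website']
```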
Keep the set small. For a business unit or a team, three to five KPIs is usually the right number. For an individual role, one to three. If someone can't remember their own KPIs without looking them up, there are too many.
KPIs and the leading/lagging distinction
One more distinction worth making: the difference between leading and lagging indicators. A lagging indicator tells you what happened: revenue, profit, churn, NPS. A leading indicator tells you what is likely to happen: pipeline value, trial conversions, sales activity volume. Both matter, but for different reasons.
Lagging indicators confirm whether your strategy is working over time. Leading indicators tell you whether it's likely to work before it's too late to change course. A well-designed KPI set usually includes both: a lagging outcome metric that confirms success over the period, and a leading indicator that gives you in-period warning.
A SaaS business might track monthly recurring revenue (lagging) alongside net new trials started (leading). A professional services firm might track revenue billed (lagging) alongside qualified proposals submitted (leading). The lagging metric tells you how you did. The leading metric tells you how you're likely to do.
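Written down as a pair, the SaaS example might look like this; the values and targets are invented for the sketch:

```python
# A lagging outcome paired with a leading indicator, using the SaaS example.
# Values and targets are invented for illustration.
kpi_set = {
    "lagging": {"name": "Monthly recurring revenue", "value": 412_000, "target": 450_000},
    "leading": {"name": "Net new trials started",    "value": 260,     "target": 300},
}

for role, kpi in kpi_set.items():
    status = "on track" if kpi["value"] >= kpi["target"] else "behind target"
    # The leading number gives in-period warning; the lagging number confirms the outcome.
    print(f"{role}: {kpi['name']} = {kpi['value']:,} vs target {kpi['target']:,} ({status})")
```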
One practical exercise
If you want to audit your current measurement approach, try this. Take your current dashboard or weekly report and highlight every metric. Then, for each one, ask: which strategic objective does this connect to, and what decision would we make differently if this number moved significantly? Any metric you can't answer those questions for is not a KPI. You're measuring it, which might be fine, but it shouldn't be in your performance review conversation.
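If the dashboard already lives in a spreadsheet, the audit can be partly mechanised. A sketch, assuming you export it to CSV and annotate each row with the objective it serves and the decision it would change; the file name and column names are mine, not a standard:

```python
import csv

# Flag every metric that has no strategic objective or no decision attached to it.
# Expects columns: metric, objective, decision_if_moved (names are illustrative).
with open("dashboard_metrics.csv", newline="") as f:
    rows = list(csv.DictReader(f))

not_kpis = [r["metric"] for r in rows
            if not (r.get("objective") or "").strip() or not (r.get("decision_if_moved") or "").strip()]

print("Tracked but not KPIs (keep these out of the performance review):")
for name in not_kpis:
    print(" -", name)
```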
Trying to build a cleaner performance framework for your business?
This is a common challenge, and one I help leadership teams work through. Let's talk →
