OKRs without the bureaucracy

OKRs work. The way most companies implement them doesn't. The framework gets blamed for the failure, when the real problem is how it was deployed: too many objectives, no real cadence, and a process that turns a clarity tool into a compliance exercise.

Stewart Masters·17 Mar 2026·6 min read
OKRs bullseye diagram

Most OKR implementations follow the same pattern. The leadership team reads a book, gets excited, and rolls it out company-wide. Three months later, everyone has eight objectives, each with four key results, none of which are measurable, most of which describe activity rather than outcomes, and all of which were forgotten by the end of week two. The quarterly review is a box-ticking exercise. Nothing changes.

The framework gets blamed. But OKRs didn't fail; the implementation did. And it failed in predictable ways that are entirely avoidable.

What OKRs are actually for

OKRs do one thing well: they force a conversation about what matters most, make the answer explicit, and create a shared reference point for decisions. That's it. They're not a performance management tool. They're not a measurement system. They're not a way to track everything the team does. They're a focus mechanism.

When they work, it's because the team can answer three questions at any moment: What are we trying to achieve this quarter? How will we know if we got there? Is what I'm doing today connected to that? When they don't work, it's because no one can answer any of those questions without opening a spreadsheet.

The five ways OKRs become bureaucracy

Too many objectives. If you have more than three objectives at any level, you don't have priorities, you have a list. The point of OKRs is to make hard choices about what matters. A leadership team with eight objectives hasn't made those choices. They've delegated them downward and called it a framework.

Key results that are tasks. "Launch the new onboarding flow" is a task. "Increase week-one retention from 42% to 60%" is a key result. The difference matters: a task can be completed without moving the needle. A genuine key result forces the team to figure out what will actually change the metric, which is where the useful thinking happens.

No connection between levels. When company OKRs don't inform team OKRs, and team OKRs don't inform individual focus, the cascade breaks down and each level optimises for its own objectives in isolation. The whole point is alignment. If a team can hit all its OKRs while the company misses its objectives, the system isn't working.

Set and forget. OKRs written in January and reviewed in March achieved nothing in February. The quarterly cadence needs a weekly or biweekly check-in, not a status update meeting, but a brief conversation: are we on track, what's blocking us, does anything need to change? Without the cadence, the objectives become wallpaper.

Graded on completion, not learning. The original OKR design assumes that consistently hitting 100% means your objectives weren't ambitious enough. A healthy average is somewhere around 70%. But most organisations grade OKRs like exams: anything below 80% is a failure. This guarantees safe, unambitious objectives that describe what the team was going to do anyway.

"An OKR that everyone hits every quarter is a sign that nobody is being ambitious. The discomfort of the stretch is the point."

How to run OKRs without the overhead

Start with fewer objectives than feel comfortable. Three at the company level. Two or three at the team level. One or two for individuals. The constraint forces the real conversation about what actually matters.

Write key results that are outcomes, not outputs. Before finalising any key result, ask: could we complete this without it actually mattering? If yes, it's a task. Replace it with the metric that would move if the task worked.

Run a fifteen-minute weekly check-in. Not a full review, a pulse. Three questions: what did we move this week, what's blocking progress, does anything need escalating? Done standing up, documented in a shared doc. The value is in the conversation, not the report.

Review and reset quarterly, not annually. Markets move. Priorities shift. A set of objectives that was right in January may be wrong by March. Build in a formal reset at the start of each quarter, not just a rollover of whatever wasn't completed.

Grade honestly. At the end of each quarter, score every key result 0–1. Share the scores, discuss what you learned, and use the gaps to inform the next set. Teams that consistently score 0.9 on everything need to raise the bar. Teams that score 0.3 need to understand why: was it the wrong objective, an uncontrollable external factor, or a resourcing issue?
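If you want to make the grading mechanical, the scoring logic above fits in a few lines. This is an illustrative sketch, not a prescribed tool; the key-result names and scores are invented, and the 0.9/0.3 thresholds simply mirror the guidance in this section:

```python
# Illustrative OKR grading sketch. Each key result is scored 0-1;
# the average signals ambition, with roughly 0.7 as the healthy target.

def grade_okrs(scores):
    """Return the average score and a coaching note based on it."""
    avg = sum(scores.values()) / len(scores)
    if avg >= 0.9:
        note = "raise the bar: objectives were not ambitious enough"
    elif avg <= 0.3:
        note = "diagnose: wrong objective, external factor, or resourcing?"
    else:
        note = "healthy stretch (the design target is roughly 0.7)"
    return avg, note

# Hypothetical quarter for one team
scores = {
    "Week-one retention 42% -> 60%": 0.6,
    "NPS 31 -> 45": 0.8,
    "Activation time 3 days -> 1 day": 0.7,
}
avg, note = grade_okrs(scores)
print(f"{avg:.2f} - {note}")  # 0.70 - healthy stretch (the design target is roughly 0.7)
```

The value is not in the arithmetic, which anyone can do by eye, but in forcing every key result to carry a number at all: a key result that can't be scored 0–1 was a task in disguise.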

The organisational condition that makes this hard

The biggest barrier to simple OKRs isn't the framework, it's the organisational desire for comprehensiveness. Leaders want to capture everything the team is doing. Board members want to see that all strategic themes are covered. HR wants OKRs to map to performance reviews. Each of these is understandable, and each of them is corrosive.

OKRs that try to capture everything end up capturing nothing. The tool works because it forces selection. The moment you start using it to track everything, you've turned a compass into a spreadsheet.

The fix is to separate concerns. OKRs are for strategic focus. Performance reviews can reference OKRs but should include more. Operational metrics live in dashboards. Project tracking lives in project tools. Trying to collapse all of these into one system is where the bureaucracy starts.


Stewart Masters

Strategic advisor to founders and operators. 20+ years building and advising businesses across Europe and the Middle East. Based in Barcelona. Guest lecturer at IE Business School and ESADE. Connect on LinkedIn →
