The graveyard of AI projects is littered with excellent technology that never got funded. Not because the use case was wrong, not because the numbers didn't stack up, but because the person building the case made a fundamental mistake: they wrote it for themselves rather than for the decision-maker.
An AI business case has to translate. It has to take what the technology team finds exciting and render it in terms that a finance director, a board, or an executive leadership team can evaluate and approve. That translation is where most cases fall apart.
Why most AI business cases get rejected
There are three patterns I see repeatedly in rejected AI proposals. The first is leading with technology rather than outcomes. The case starts with what the AI does — the model architecture, the training approach, the accuracy metrics — before ever establishing what problem it solves or what it would change in the business. Decision-makers don't know how to evaluate technology. They know how to evaluate outcomes. Lead with the outcome.
The second pattern is false precision on benefits. The case presents a benefit of £2.4 million over three years, stated with spreadsheet confidence, despite being built on assumptions that are themselves uncertain. Sophisticated approvers are not reassured by precision — they are made suspicious by it. They've approved enough projects to know that confident-looking numbers are often invented numbers. A range with explicit assumptions is more credible than a point estimate.
The third pattern is underestimating costs. The AI license or compute cost is modelled. The infrastructure cost may be included. But the change management cost, the training cost, the data preparation cost, the ongoing model maintenance cost, and the opportunity cost of the people involved are typically missing or understated. When an approver has been burned before by undercosted projects, they will look for this gap as a reason to say no.
The four elements of a winning AI business case
A clear problem statement tied to a strategic priority. The problem you're solving has to matter to the organisation — not just to the team proposing it. If your company's strategic priorities this year are revenue growth and cost reduction in operations, your AI business case needs to connect explicitly to one of those. An AI initiative that solves an interesting problem in a strategically irrelevant area will lose to a less interesting initiative that connects directly to a priority the board has already committed to.
A credible, bounded benefit estimate. State the benefit in terms of the outcome metric that the decision-maker cares about: revenue, margin, cost, speed, risk reduction. Be specific about which part of the business experiences the benefit and over what timeframe. Use a range — low case, central case, high case — and state explicitly what assumptions drive each. A well-structured range builds more confidence than a single number because it signals you've thought about the uncertainty.
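The range logic above can be laid out as a small calculation. This is a minimal sketch, and every figure in it is a hypothetical placeholder, not a benchmark — the point is that each case is driven by one explicit, named assumption:

```python
# Illustrative low/central/high benefit range for an AI initiative.
# All figures are hypothetical placeholders, not benchmarks.

tickets_per_year = 120_000           # volume the system would touch (assumed)
saving_per_ticket = {                # £ saved per ticket: the driving assumption
    "low": 5.0,       # partial adoption, modest saving
    "central": 12.5,  # planned adoption, expected saving
    "high": 20.0,     # full adoption, best plausible saving
}

# Annual benefit per case: one line, one explicit assumption behind each number
benefits = {case: tickets_per_year * saving
            for case, saving in saving_per_ticket.items()}

for case, value in benefits.items():
    print(f"{case:>7}: £{value:,.0f} per year")
```

Presented this way, an approver can challenge the assumption (the saving per ticket) rather than the output, which is exactly the conversation you want to have.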
A full cost model including the invisible costs. Technology and licensing are the visible costs. Build out the full picture: data preparation and cleaning (typically underestimated by a factor of two), change management and training, integration with existing systems, ongoing model monitoring and maintenance, internal time at realistic rates. The approver who has approved projects before will add these in their head if you don't — and they'll add a premium for uncertainty. Better to include them yourself than have them inflated in the review.
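As a sketch, with purely hypothetical line items and amounts, the shape of a full cost model looks like this — note how much of the total can sit outside the visible technology spend:

```python
# Illustrative first-year cost model. Line items and amounts are hypothetical;
# the point is that the "invisible" costs are listed explicitly, not absorbed.

visible_costs = {
    "licensing": 60_000,
    "compute_and_infrastructure": 40_000,
}
invisible_costs = {
    "data_preparation_and_cleaning": 80_000,     # often ~2x the first estimate
    "change_management_and_training": 50_000,
    "systems_integration": 45_000,
    "model_monitoring_and_maintenance": 30_000,  # ongoing, not one-off
    "internal_time_at_realistic_rates": 70_000,
}

total = sum(visible_costs.values()) + sum(invisible_costs.values())
share_invisible = sum(invisible_costs.values()) / total

print(f"Total first-year cost: £{total:,}")
print(f"Share outside visible tech spend: {share_invisible:.0%}")
```

In this (invented) example the invisible items are nearly three-quarters of the total — which is precisely the gap an experienced approver will go looking for.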
A risk section that's honest, not defensive. Every AI project has risks — around data quality, model performance, user adoption, regulation, and dependency on third-party providers. Name them. State what the mitigation is. State what the residual risk is after mitigation. A business case that acknowledges risk reads as mature. A business case that pretends risk doesn't exist reads as naive.
How to frame the ask
A common mistake is presenting an AI business case as a binary decision: approve the full programme or don't. This is a bad structure. It puts the entire risk on the approval decision. A better structure is phased: request approval for a proof of concept or pilot that demonstrates the outcome at lower cost and lower risk, with explicit criteria that, if met, would justify the larger investment.
"A business case that asks for £50,000 to prove the model works in production — rather than £500,000 to build the full system — is dramatically easier to approve. And it de-risks the organisation, not just the proposal."
This approach works for two reasons. It reduces the immediate financial commitment and makes the approval decision smaller. And it provides a natural checkpoint at which the sceptics in the room can be converted by results rather than promises.
The language that doesn't work — and what to use instead
Avoid the word "AI" wherever you can replace it with what the AI actually does. "An AI-powered solution" tells a decision-maker nothing about value. "A system that automatically classifies customer support tickets and routes them to the right team, reducing average handling time from 48 hours to 4 hours" tells them everything. The technology is a detail. The outcome is the point.
Similarly, avoid the phrase "we'll learn as we go." It's honest, but it sounds like you haven't done the planning. Instead: "The pilot will include defined decision points at weeks four and eight where we assess progress against the agreed success criteria before committing to the next phase." Same reality, very different impression.
One thing that almost always helps
Find the internal comparison case. If your organisation has approved a similar investment before — a system implementation, a data infrastructure project, an automation initiative — reference it explicitly. "This is structured similarly to [previous project], which was approved in [year] and delivered [outcome]." It reduces perceived novelty, which is one of the main sources of hesitation in approving AI spending. The approver's mental model shifts from "this is new and uncertain" to "this is similar to something that worked."
If you can't find an internal comparison, find an external one — a competitor, a comparable company in another sector, a published case study. The point is to show that this isn't experimental speculation but a category of investment that has a track record.
Building an AI business case for your organisation?
This is work I do regularly — helping leadership teams translate AI opportunity into language that gets approved. Get in touch →
