
The AI problem is adoption, not capability

The tools are good enough. The bottleneck is people, process, and organisational readiness. Here's why most AI implementations stall at adoption.

Stewart Masters · 10 Apr 2026 · 6 min read
[Figure: the gap between AI capability and actual adoption, with the barriers in between]

The conversation about AI in business has been dominated by capability questions. Is the model good enough? Is the technology mature enough? Can it actually do what we need it to do?

These were legitimate questions two or three years ago. They're much less relevant now. For the vast majority of business problems organisations want to apply AI to, the technology is more than capable enough. The tools exist. The APIs are accessible. The cost of AI compute has dropped dramatically.

The bottleneck has shifted. It is now almost entirely an adoption problem — and adoption problems require a completely different set of solutions from capability problems.

What the adoption gap actually looks like

Walk into almost any organisation that has announced an "AI strategy" and you'll find the same pattern. A central team — often small, often reporting to the CDO or CTO — has implemented one or two AI tools. Usage is concentrated in a handful of enthusiastic early adopters. The rest of the organisation continues to work in exactly the same way it did before.

This is not a technology failure. The tools are available, often free or very cheap, often already licensed as part of existing software subscriptions. The failure is that nobody has figured out how to make the tools fit into the actual work that people do, in a way that feels obviously better than the current approach, with enough support to overcome the friction of changing habits.

That's a change management problem. And most AI implementations are treating it as an IT problem.

The five barriers to adoption

Awareness without applicability. Most employees know, in the abstract, that AI tools exist. Very few can answer the question "what would I actually use this for in my job today?" Making the connection between the tool and the specific task is not something people do naturally — it requires deliberate education that goes far beyond a lunch-and-learn.

Trust gaps. People won't use tools they don't trust. And trust in AI is low in many organisations — partly due to well-publicised failures, partly due to legitimate concerns about accuracy, and partly due to a reasonable worry that using AI might expose them to criticism if something goes wrong. A culture that doesn't explicitly address these concerns will have low adoption regardless of how good the tools are.

Process fit. AI tools that don't integrate into existing workflows will fail. Asking someone to open a separate application, reformulate their problem in a new way, and then translate the output back into their existing system is asking them to take on friction in exchange for an uncertain benefit. The tools that get adopted are the ones that appear where people already work.

Skill gaps that nobody has measured. Effective use of AI tools requires skills that are not evenly distributed. Prompt engineering is a learnable skill, but it requires practice. Data literacy — understanding what the AI is doing and when to trust its outputs — is another. These aren't skills that people pick up accidentally.

Lack of visible leadership behaviour. In most organisations, adoption follows senior behaviour. If leaders are visibly using AI tools in their own work, talking about how they use them, and sharing where they've found value, adoption accelerates. If leaders are absent from this conversation, the implicit signal is that it's optional — and optional things don't become habits.

What works: the shift from deployment to embedding

Organisations that achieve meaningful AI adoption typically make a conceptual shift from thinking about deployment (getting the tool in place) to thinking about embedding (changing how work actually happens).

Embedding looks different from deployment in several ways. Instead of rolling out tools to everyone and hoping for uptake, it starts with identifying specific workflows where AI creates clear, demonstrable value — and then redesigning those workflows to incorporate AI as the default approach, not an optional add-on.

Instead of one-off training sessions, it provides ongoing learning infrastructure — regular sharing of what's working, peer coaching, and explicit recognition of people who find new applications. Instead of treating AI adoption as an IT rollout, it assigns it to the people who own the relevant workflows — operations, HR, finance, customer service — with technology in a supporting role.

The measurement problem

Adoption without measurement is invisible. Most organisations have no clear view of how widely AI tools are being used, where they're creating value, and where they're not. Without this data, it's impossible to know what's working and where to focus effort.

Basic adoption metrics — active users, tasks completed with AI assistance, time saved — should be tracked from the beginning of any AI implementation. Not because the numbers matter in themselves, but because they create accountability and reveal the reality of adoption that self-reporting and anecdotes will always obscure.
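As a rough illustration, the three basic metrics above can be computed from whatever usage telemetry the tools already emit. The sketch below is a minimal, hypothetical example — it assumes usage events are available as (user, day, minutes-saved) records, which is not a format any specific tool guarantees; all names are illustrative.

```python
from datetime import date

# Hypothetical usage-event records: (user_id, day, minutes_saved).
# In practice these would come from tool telemetry or audit logs.
events = [
    ("alice", date(2026, 4, 1), 12),
    ("alice", date(2026, 4, 2), 8),
    ("bob",   date(2026, 4, 1), 5),
]

def adoption_metrics(events, headcount):
    """Compute the basic adoption metrics: active users,
    tasks completed with AI assistance, and time saved."""
    users = {user for user, _day, _mins in events}
    return {
        "active_users": len(users),
        "adoption_rate": len(users) / headcount,  # share of staff using the tool
        "tasks_with_ai": len(events),
        "minutes_saved": sum(mins for _user, _day, mins in events),
    }

metrics = adoption_metrics(events, headcount=10)
```

With the sample data above, this reports 2 active users out of 10, 3 AI-assisted tasks, and 25 minutes saved — exactly the kind of baseline that makes week-over-week adoption visible rather than anecdotal.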

The leadership question

Ultimately, the adoption gap is a leadership question. The organisations that are genuinely ahead on AI — not in terms of what they've announced, but in terms of how their people actually work — have leaders who have made AI adoption a visible priority, allocated real resources to change management (not just technology), and created accountability structures for adoption, not just for deployment.

The capability questions will keep getting easier as the technology improves. The adoption questions are hard for reasons that won't go away on their own — they require deliberate, sustained leadership attention. The organisations that figure this out first will have a durable advantage over the ones that are still waiting for the capability to get good enough.

Stewart Masters
Chief Digital Officer · Honest Greens · Barcelona

20 years building and running digital operations inside real businesses. I write about AI, digital systems, and the leadership decisions that determine whether transformation actually happens.
