The hire most companies get wrong is their first head of AI. And they get it wrong in a predictable way: by seeking out the most technically impressive person with AI in their title.
A PhD. A former researcher. Someone who can speak convincingly about model architectures and training pipelines. That person is rarely what the business actually needs.
## The mistake most companies make
Most businesses don't need to build AI models from scratch. They need to embed AI tools into real operations — into workflows, into teams, into products that already exist. That requires a fundamentally different set of skills.
The candidate pool for "head of AI" is dominated by researchers and ML engineers. These are excellent people for certain roles. They are not, in most cases, the right people to run an AI adoption and implementation function inside a company that isn't a technology research lab.
## What the role actually requires
The head of AI in most companies is an operator with technical credibility, not a technical specialist with operator ambitions. They need to understand AI well enough to know what's feasible and what isn't. But the harder part of the job is everything else:
- Building cross-functional relationships in organisations that aren't aligned on AI priorities
- Getting a sceptical operations team to actually change how they work
- Communicating clearly to a board that doesn't know what questions to ask
- Managing vendors, evaluating tools, and navigating procurement
- Designing adoption programmes that don't collapse six months after launch
This is change management with an AI layer on top. The technical layer is necessary but not sufficient.
## Prioritise operators over researchers
The most predictive signal I've seen in these hires is whether the candidate has run things before. Not advised, not researched, not prototyped — but been accountable for delivery inside a real organisation, with real constraints and real consequences.
Ask them about their last significant failure. Not in a gotcha way — genuinely. The best candidates have a clear, considered answer. They know what went wrong, what they'd do differently, and what they learned from it. Researchers often haven't had to sit with operational failure. Operators have.
The skills that actually matter, in rough order of importance:
- Change management experience — formal or demonstrably practical
- Business translation — can they explain AI to a CFO without oversimplifying?
- Track record of taking things from pilot to production, not just from paper to pilot
- Vendor evaluation and integration experience
- Technical depth — necessary, but probably fourth on the list
## The interview that actually reveals something
Run the conversation in reverse. Start with failure: what's the most consequential thing that didn't work out in an AI or digital initiative you owned? Where did something that looked good on paper break down in execution?
Then move to the business translation question: explain a recent AI use case to me as if I were a non-technical CFO who is sceptical about this. You're listening for fluency and precision — not just accessibility.
Finally, ask about their operating model. Who do they need to win over first in a new organisation? How do they handle sustained resistance from a team that's comfortable with the status quo? What does a realistic 90-day plan look like?
## Org readiness matters as much as the hire
The thing most companies underestimate: you can hire exactly the right person and still fail if the organisation isn't ready for them.
If leadership isn't aligned on what AI is for. If there's no executive sponsor with real authority. If IT is a gatekeeper rather than a partner. If the data infrastructure isn't there to build on. The best head of AI in the world can't fix those problems unilaterally; they'll just be the person who tried and eventually left.
Before you hire, run an honest audit of what you're actually asking this person to walk into. Then set them up to succeed rather than to absorb a structural problem that hasn't been acknowledged.