The AI decisions arriving at board level are genuinely hard. Should the company invest in its own AI infrastructure or rely on third-party providers? What is the organisation's liability exposure from AI-generated outputs? How should the board oversee an AI strategy it doesn't fully understand? How do you challenge management's AI assumptions when management knows more about the technology than you do?
These are board-level questions: not technical ones, but matters of governance, strategy, and risk. Answering them well, however, requires enough understanding of how AI actually works to distinguish a credible answer from an impressive-sounding one. That's the challenge for most boards today.
The gap most boards have
The typical board composition was designed for a different era's risk profile. Financial expertise. Industry depth. Governance experience. M&A track record. These remain valuable. But the majority of directors appointed over the last decade don't have meaningful direct experience of how AI systems are built, deployed, or managed, and many boards don't have a plan to close that gap.
This creates a structural problem. Management presents the AI proposals and the AI risk updates. The board asks questions, but those questions are often informed by AI coverage in the general press rather than by operational experience of AI at scale. Management knows this, and the result is a subtle power asymmetry: management has more confidence in what it's saying than the board has in how to challenge it.
"A board doesn't need to build AI. It needs to be able to ask the right questions — and know when it's not getting a straight answer."
What AI fluency actually means at board level
AI fluency for board directors is not about understanding transformer architecture or training data pipelines. It's about understanding five things well enough to govern them.
How AI systems fail. Hallucination, data bias, model drift, adversarial attacks: these are the failure modes the board should know to ask about. If a board director doesn't know what model drift is, they can't ask management whether they're monitoring for it in deployed AI systems.
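To make that concrete, here's a minimal sketch of what monitoring for drift can look like in practice. It's illustrative only: the feature, the figures, and the alert threshold are assumptions, not a production design, and a real deployment would run checks like this across many inputs on a schedule.

```python
# Illustrative drift check: compare the distribution of a live input feature
# against the distribution the model was trained on. The feature (customer
# age), the numbers, and the alert threshold are all hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=1)
training_ages = rng.normal(45, 10, size=10_000)  # what the model learned from
live_ages = rng.normal(52, 10, size=1_000)       # this month's incoming data

statistic, p_value = ks_2samp(training_ages, live_ages)
if p_value < 0.01:
    # The live population no longer looks like the training population, so
    # model performance may be degrading even though nothing has "broken".
    print(f"Possible drift (KS statistic {statistic:.2f}): review the model")
```

The point for a director isn't the statistics. It's that "are we monitoring for drift?" has a concrete, checkable answer, and management should be able to show it.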
What questions to ask about data. AI systems are only as good as the data they're trained and evaluated on. What data is this system trained on? Who owns it? Is it representative? Does using it create any privacy, legal, or reputational exposure? A board director with data fluency can probe these questions meaningfully.
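As a hedged illustration of the representativeness question, one common check compares the make-up of the training data with the population the system actually serves. The groups, proportions, and tolerance below are invented for the example.

```python
# Hypothetical comparison: share of each age group in the training data
# versus the population the deployed system serves. All figures are invented.
training_share = {"age_18_30": 0.12, "age_31_50": 0.55, "age_51_plus": 0.33}
served_share = {"age_18_30": 0.34, "age_31_50": 0.45, "age_51_plus": 0.21}

TOLERANCE = 0.10  # illustrative threshold for "materially misrepresented"

for group, served in served_share.items():
    gap = training_share[group] - served
    if abs(gap) > TOLERANCE:
        print(f"{group}: training share is {gap:+.0%} off the served population")
```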
The regulatory landscape. The EU AI Act is creating tiered obligations that vary by use case. Sector regulators in finance, healthcare, and legal services are issuing AI guidance and beginning to enforce it. A board that isn't tracking this landscape is accumulating regulatory exposure without knowing it.
How to read AI performance claims. "The model is 95% accurate" can mean very different things depending on what the 5% errors are and who they affect. A board needs to be able to ask: accurate at what, compared to what baseline, tested on what population, in what conditions?
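A small worked example, with invented numbers, shows why those questions matter. In a population where only 5% of cases are the ones that count (say, fraudulent transactions), a system that never flags anything is still "95% accurate".

```python
# Invented numbers: 1,000 transactions, 50 of which (5%) are fraudulent.
# A "model" that flags nothing scores 95% accuracy and catches zero fraud.
import numpy as np

labels = np.array([1] * 50 + [0] * 950)     # 1 = fraud, 0 = legitimate
predictions = np.zeros_like(labels)         # this model never flags anything

accuracy = (predictions == labels).mean()
fraud_caught = predictions[labels == 1].mean()

print(f"Accuracy: {accuracy:.0%}")          # 95% -- sounds impressive
print(f"Fraud caught: {fraud_caught:.0%}")  # 0% -- operationally useless
```

The headline number is true and the system is useless. That's exactly what "compared to what baseline, tested on what population" is designed to expose.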
The make vs. buy vs. partner question. Many organisations are making significant AI investment decisions without a clear framework for whether to build capability internally, buy software, or partner with an AI provider. This is a strategic choice with long-term implications for cost, control, and capability that belongs on the board agenda.
What the composition of an AI-ready board looks like
An AI-ready board doesn't mean a board where every director is a technologist. It means a board where the collective capability covers the bases the organisation needs.
At minimum, one director should have genuine operational experience of technology at scale — having run a technology organisation, built software products, or overseen significant digital transformation. This person isn't there to translate everything — they're there to sense-check whether the answers management is giving hold up under scrutiny.
At minimum, one director should have direct experience of AI governance or AI ethics, whether in a regulatory, audit, or operational context. As AI accountability becomes a regulatory matter, the board's ability to demonstrate oversight will be scrutinised.
Every director should have enough AI literacy to ask competent questions and recognise when they're not getting straight answers. This is a training matter, not a composition matter — and it should be a standing part of board development, not a one-time briefing.
How to build AI readiness on an existing board
Most boards can't change their composition quickly. So the question becomes: how do you build AI readiness with the board you have?
The most effective approach I've seen is a standing briefing programme — quarterly sessions where management briefs the board on AI developments relevant to the business, and where external speakers are occasionally brought in to provide an independent perspective. The key is that these sessions are structured around the board's questions, not management's presentation. The agenda should be "what do you need to understand to govern this well," not "here's what we've been working on."
A second lever is selective use of AI-focused advisors or NEDs. If the existing board doesn't have AI experience, a director or advisor who does can raise the floor considerably. They don't need to run a committee — they need to be in the room to ask the questions that others can't.
A third lever is governance structure. Some boards are creating AI oversight as a formal agenda item at every board meeting, or assigning AI governance to the audit or risk committee with specific reporting requirements. The structure creates accountability even when the board is still building fluency.
The board's job is not to run the AI strategy
A final clarification that's worth making explicitly: the board's job is not to direct the AI strategy. That's management's job. The board's job is to satisfy itself that management has a credible AI strategy, that the risks are being appropriately managed, that the regulatory exposure is understood, and that the investment decisions are coherent. That's a governance job, not an operational one — and it's entirely achievable without turning the board into a team of AI engineers.
Working on AI governance at board level?
I work with boards and leadership teams on exactly this. Let's talk →
