
How boards should evaluate AI risk

Boards are being asked to govern AI without the frameworks to do it. Here's a practical approach to evaluating AI risk at board level.

Stewart Masters · 24 Mar 2026 · 6 min read
Board-level framework for evaluating AI risk across three categories

Most boards were built for a different era of corporate risk. Financial risk, legal risk, reputational risk — these are well-mapped. There are frameworks, committees, disclosure requirements, and decades of case law to draw on.

AI introduces a different kind of risk. It's faster-moving. It gets embedded in operations before it appears on a risk register. The technical complexity makes it genuinely hard for non-specialist board members to ask the right questions — let alone evaluate the answers.

The result is a governance gap. Management moves ahead with AI adoption — often responsibly, sometimes not — while boards watch from a distance, unsure what they're supposed to be looking for.

The three categories of AI risk

Operational risk. AI that runs in your business can fail. Models degrade over time. Training data becomes stale. Edge cases emerge that weren't in the test set. The question isn't whether failure will happen — it's how consequential it is and whether there's a recovery path.

Questions boards should ask: Are there human checkpoints on decisions with material consequences? How is model performance monitored over time? What's the fallback when the AI is wrong? Who gets notified when it fails, and how quickly?
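The monitoring question deserves a concrete answer, not a dashboard screenshot. The shape to look for is something like the sketch below: a baseline agreed at sign-off, an automated check, and a defined escalation path. The names here (BASELINE_ACCURACY, notify_oncall) are illustrative, not a real monitoring API.

```python
# Minimal sketch: a periodic performance check with a defined escalation path.
# BASELINE_ACCURACY, ALERT_THRESHOLD and notify_oncall are illustrative names.

BASELINE_ACCURACY = 0.92   # accuracy agreed at deployment sign-off
ALERT_THRESHOLD = 0.05     # degradation that triggers human review

def notify_oncall(message: str) -> None:
    # Placeholder for whatever alerting the organisation already uses.
    print(f"[ALERT] {message}")

def check_model_health(accuracy_last_30_days: float) -> None:
    """Compare recent performance against the sign-off baseline."""
    degradation = BASELINE_ACCURACY - accuracy_last_30_days
    if degradation > ALERT_THRESHOLD:
        # Fallback path: route material decisions to humans and tell the owner.
        notify_oncall(
            f"Model accuracy down {degradation:.1%} vs baseline; "
            "routing material decisions to manual review."
        )

check_model_health(accuracy_last_30_days=0.84)  # triggers the alert path
```

A board doesn't need to read this code. It needs to hear that something with this structure exists: a baseline, a check, a threshold, and a named person who gets woken up.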

Regulatory and legal risk. This is changing fast. The EU AI Act creates tiered obligations depending on use case. Sector-specific regulation in financial services, healthcare, and HR is tightening. The accountability question — when an AI-driven decision causes harm, who is responsible — is not fully resolved, but regulators are clear that "the algorithm did it" is not a defence.

Questions boards should ask: What AI use cases are in scope for the AI Act or sector-specific regulation? Has legal reviewed the high-risk applications? Is there a clear audit trail and documented human oversight for consequential decisions?
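The audit trail, in particular, doesn't need to be exotic. Here's a minimal sketch of the kind of record that makes "documented human oversight" demonstrable; the field names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable, AI-assisted decision. Field names are illustrative."""
    decision_id: str
    model_version: str    # which model produced the recommendation
    inputs_summary: str   # what the model saw, or a reference to it
    model_output: str     # what the model recommended
    human_reviewer: str   # who exercised oversight, if anyone
    final_decision: str   # what the organisation actually did
    overridden: bool      # did the human depart from the model?
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

The regulator-facing question a record like this answers: for decision X, what did the model say, who reviewed it, and did a human actually make the final call?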

Reputational and ethical risk. Some AI risks are hard to quantify but easy to understand once they become public. Using AI for decisions that systematically disadvantage protected groups. Deploying AI in customer interactions without disclosure. Using data in ways that weren't anticipated or consented to. These risks are real before they become legal ones.

Questions boards should ask: Are there documented principles for what AI can and can't be used for? Who has oversight of those principles? Has the organisation done any bias testing on models that affect individuals?
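Bias testing can start simple. One common first pass is comparing selection rates across groups, using something like the four-fifths rule of thumb from US employment practice as a flag for further review. A sketch with made-up numbers; real testing needs legal input on which groups and metrics apply.

```python
# Minimal bias check: compare selection rates across groups.
# All data here is illustrative.

outcomes = {
    # group: (selected, total_applicants)
    "group_a": (80, 200),
    "group_b": (45, 180),
}

rates = {group: sel / total for group, (sel, total) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "REVIEW" if ratio < 0.8 else "ok"   # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.1%}, ratio {ratio:.2f} -> {flag}")
```

This is a screening test, not a verdict. Its value at board level is that it either exists or it doesn't, and "it doesn't" is the answer worth probing.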

What good governance looks like

Board-level AI governance doesn't require technical expertise — it requires the right information and the discipline to ask for it.

At minimum, boards should be receiving a regular update on: where AI is in use across the business; which applications are material enough to warrant board-level visibility; what monitoring is in place; and any incidents or near-misses, including minor ones.
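In practice that can be as lightweight as a structured register that management maintains and the board samples. A sketch of one entry, with hypothetical field names and values:

```python
# Sketch of one AI-register entry, mirroring the four reporting items above.
# All field names and values are illustrative.

register_entry = {
    "system": "invoice-approval-classifier",
    "business_use": "flags invoices for automated approval",
    "material": True,  # warrants board-level visibility
    "monitoring": "weekly accuracy check vs sign-off baseline",
    "human_checkpoint": "approvals above EUR 10k reviewed manually",
    "incidents_last_quarter": 1,  # includes near-misses
    "owner": "Head of Finance Operations",
}
```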

This is normal risk governance applied to a new risk category. The analogy to cybersecurity is useful: boards don't need to understand how attacks work, but they do need to know what the exposure is, what controls are in place, and what the incident response plan looks like. The same logic applies to AI.

What boards often get wrong

The two failure modes I see most often are opposite errors. The first is boards that defer entirely to management — treating AI as too technical to engage with meaningfully. This produces a rubber-stamp dynamic that isn't oversight at all.

The second is boards that become adversarial — blocking AI adoption out of an abundance of caution, without distinguishing between uses that carry real risk and uses that don't. This is equally unhelpful. The competitive pressure to move on AI is real, and leadership needs room to act.

The right posture is neither. It's engaged oversight: asking whether the risk is being taken knowingly and managed appropriately. Not blocking everything. Not approving everything. Understanding what's actually happening.

The questions that matter most

If a board could only ask four questions about AI, they should be: Where is AI in use across the business, and which applications are material? What happens when the AI is wrong, and who finds out? Which applications fall under the AI Act or sector-specific regulation, and has legal reviewed them? Is the risk being taken knowingly, and is it being managed appropriately?

The companies that get this right aren't the ones that move the slowest. They're the ones that move with clear eyes.

Stewart Masters
Chief Digital Officer · Honest Greens · Barcelona

20 years building and running digital operations inside real businesses. I write about AI, digital systems, and the leadership decisions that determine whether transformation actually happens.
