What boards need to know about shadow AI right now

Shadow AI — employees using AI tools without corporate knowledge or approval — is almost certainly already operating inside your organisation. Here's what boards need to understand about the risk, and what a proportionate response looks like.

Stewart Masters·9 Jan 2026·6 min read

Shadow AI is the organisational equivalent of shadow IT — the phenomenon from a decade ago where employees started using Dropbox, Google Docs, and consumer cloud tools without IT department knowledge or approval. But the risk profile is meaningfully different. When employees used shadow IT, the main concerns were data leakage and version control. When employees use shadow AI, the concerns extend to confidential data being processed by third-party AI systems, AI-generated outputs being mistaken for verified information, and regulatory exposure in sectors where AI use is increasingly governed. Boards that aren't asking questions about shadow AI are not asking the right questions.

What shadow AI actually looks like in practice

Shadow AI isn't exotic. It's the salesperson using ChatGPT to draft client proposals containing pricing strategy. It's the analyst feeding quarterly financial data into a free AI tool to build a summary for the CFO. It's the lawyer using Claude to review contract drafts that contain privileged information. It's the HR manager asking an AI assistant to help with performance review language that includes confidential employee data.

In each case, the person using the tool is usually doing something sensible from their own perspective — they're more productive, the output is good, and no one told them not to. The problem is systemic, not individual. And because it's systemic and invisible, the risk accumulates without anyone tracking it.

"The employees using shadow AI aren't doing anything malicious. They're doing their jobs more efficiently with the tools available to them. The governance gap is the organisation's problem, not theirs."

The specific risks boards should understand

Data privacy and regulatory exposure. Consumer AI tools typically use your inputs to train their models unless you have a paid enterprise agreement with specific data terms. Customer data, employee data, and financial data processed through consumer AI tools may violate GDPR, sector-specific regulations (FCA, CQC, DPDPA), and your own contractual obligations to clients. A data breach that originated in shadow AI usage would likely be treated as a foreseeable risk that the organisation failed to mitigate.

Confidentiality and IP leakage. Commercially sensitive information shared with third-party AI tools may be retained, used in training data, or exposed through model outputs to other users. This includes trade secrets, M&A-related information, legal strategy, and proprietary processes.

Reliability and hallucination risk. AI tools generate plausible-sounding outputs that are sometimes factually wrong. When these outputs are used without verification — in client communications, regulatory filings, or board materials — the consequences range from embarrassing to materially harmful. The risk is higher when users aren't trained to understand the limitations of AI-generated content.

Audit trail and accountability. If a business decision was informed by AI-generated analysis that later proves incorrect, who is accountable? In regulated environments, the absence of any audit trail for AI-assisted decision-making is itself a governance gap.

Questions boards should be asking management

The risks above translate directly into questions. At a minimum, boards should be asking: Do we actually know which AI tools are in use across the organisation, and for what? What categories of data — customer, employee, financial, privileged — are reaching third-party AI systems, and under what data terms? Do we have an AI usage policy, and has anyone been trained on it? How are AI-generated outputs verified before they reach clients, regulators, or the board? And where AI informs a material decision, is there a record of it?

If management cannot answer these with evidence rather than assurance, that gap is itself the finding.

A proportionate response

The wrong response to shadow AI is a blanket prohibition. Banning AI tools entirely is both unenforceable and counterproductive — you'll drive usage further underground while watching competitors extract genuine productivity gains from legitimate AI adoption.

The right response is a structured framework that acknowledges AI is being used and creates a sanctioned path that's more attractive than the shadow alternative. This typically means:

- An approved set of AI tools under enterprise agreements with explicit data terms, so inputs aren't used for model training
- A clear, short usage policy that states which categories of data can and cannot go into which tools
- Training that covers the limitations of AI-generated content, including the expectation that outputs are verified before anyone relies on them
- A lightweight record of where AI assists material decisions, so accountability and audit trails survive

The goal is not to stop people using AI. It's to ensure that when they do, they're doing it in a way the organisation has thought about. That's a governance problem with a governance solution — and it belongs on the board agenda now, before a shadow AI incident makes it unavoidable.


Working through AI governance at board or leadership level?
This is an area I work in directly. Happy to have a practical conversation →

Stewart Masters

Strategic advisor to founders and operators. 20+ years building and advising businesses across Europe and the Middle East. Based in Barcelona. Guest lecturer at IE Business School and ESADE. Connect on LinkedIn →
