Shadow AI is the organisational equivalent of shadow IT, the phenomenon of a decade ago in which employees adopted Dropbox, Google Docs, and other consumer cloud tools without the IT department's knowledge or approval. But the risk profile is meaningfully different. With shadow IT, the main concerns were data leakage and version control. With shadow AI, the concerns extend to confidential data being processed by third-party AI systems, AI-generated outputs being mistaken for verified information, and regulatory exposure in sectors where AI use is increasingly governed. A board that isn't asking about shadow AI isn't asking the right questions.
What shadow AI actually looks like in practice
Shadow AI isn't exotic. It's the salesperson using ChatGPT to draft client proposals containing pricing strategy. It's the analyst feeding quarterly financial data into a free AI tool to build a summary for the CFO. It's the lawyer using Claude to review contract drafts that contain privileged information. It's the HR manager asking an AI assistant to help with performance review language that includes confidential employee data.
In each case, the person using the tool is usually doing something sensible from their own perspective — they're more productive, the output is good, and no one told them not to. The problem is systemic, not individual. And because it's systemic and invisible, the risk accumulates without anyone tracking it.
"The employees using shadow AI aren't doing anything malicious. They're doing their jobs more efficiently with the tools available to them. The governance gap is the organisation's problem, not theirs."
The specific risks boards should understand
Data privacy and regulatory exposure. Consumer AI tools typically use your inputs to train their models unless you have a paid enterprise agreement with specific data terms. Customer, employee, and financial data processed through consumer AI tools may violate GDPR, sector-specific regimes (FCA and CQC requirements, for instance), data protection laws in other jurisdictions such as India's DPDPA, and your own contractual obligations to clients. A data breach that originated in shadow AI usage would likely be treated as a foreseeable risk that the organisation failed to mitigate.
Confidentiality and IP leakage. Commercially sensitive information shared with third-party AI tools may be retained, used in training data, or exposed through model outputs to other users. This includes trade secrets, M&A-related information, legal strategy, and proprietary processes.
Reliability and hallucination risk. AI tools generate plausible-sounding outputs that are sometimes factually wrong. When these outputs are used without verification — in client communications, regulatory filings, or board materials — the consequences range from embarrassing to materially harmful. The risk is higher when users aren't trained to understand the limitations of AI-generated content.
Audit trail and accountability. If a business decision was informed by AI-generated analysis that later proves incorrect, who is accountable? In regulated environments, the absence of any audit trail for AI-assisted decision-making is itself a governance gap.
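To make that concrete, the sketch below shows the kind of minimal record an audit trail for AI-assisted work might capture. It is illustrative only: the AIAuditRecord class, its field names, and the classification labels are assumptions for this example, not a prescribed or standard schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    """Hypothetical minimal audit record for one AI-assisted output.

    Field names and structure are illustrative assumptions, not a
    prescribed schema.
    """
    user: str                 # who used the tool
    tool: str                 # which tool and tier, e.g. "ChatGPT (consumer)"
    purpose: str              # what the output was used for
    data_classification: str  # sensitivity of the data supplied as input
    verified_by: str | None = None  # who checked the output, if anyone
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: a record like this would surface unverified AI-assisted work.
record = AIAuditRecord(
    user="j.smith",
    tool="ChatGPT (consumer tier)",
    purpose="draft client proposal",
    data_classification="confidential",
)
if record.verified_by is None:
    print(f"Unverified AI output by {record.user} using {record.tool}")
```

Even a record this simple answers the accountability question: it names who used which tool, on what data, and whether anyone verified the result.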
Questions boards should be asking management
- Do we have a current inventory of which AI tools are being used across the organisation, whether sanctioned or not?
- Does our data classification policy address what information can and cannot be shared with AI tools?
- Do we have enterprise agreements with AI providers that include appropriate data terms, or are employees using consumer-tier accounts?
- Has legal reviewed our exposure under GDPR and sector regulation for current AI tool usage?
- Are there roles or functions — legal, finance, HR, sales — where unsanctioned AI usage is creating specific risk?
- What is our policy on AI-assisted outputs being submitted externally — to clients, regulators, investors?
A proportionate response
The wrong response to shadow AI is a blanket prohibition. Banning AI tools entirely is both unenforceable and counterproductive — you'll drive usage further underground while watching competitors extract genuine productivity gains from legitimate AI adoption.
The right response is a structured framework that acknowledges AI is being used and creates a sanctioned path that's more attractive than the shadow alternative. This typically means:
- An enterprise AI policy that defines what data can be processed by which types of AI tools (a minimal sketch of what this could look like follows this list)
- Approved tools with appropriate data terms for common use cases
- Training for employees on what AI is good at, what it gets wrong, and what not to share
- A reporting mechanism so employees can flag AI tools they're finding useful — feeding demand into the sanctioned framework rather than suppressing it
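On the first item, the core of such a policy can be expressed as a deny-by-default mapping from data classification to approved tool tiers. The sketch below is a hypothetical illustration: the classification labels, the tool tiers, and the is_use_permitted function are assumptions for this example, not a recommended standard.

```python
# Hypothetical policy sketch: which data classifications may be processed
# by which tiers of AI tool. Labels and tiers are illustrative assumptions.
POLICY = {
    "public":       {"consumer", "enterprise", "self_hosted"},
    "internal":     {"enterprise", "self_hosted"},
    "confidential": {"enterprise", "self_hosted"},  # enterprise data terms only
    "restricted":   {"self_hosted"},  # e.g. privileged or M&A material
}

def is_use_permitted(data_classification: str, tool_tier: str) -> bool:
    """Return True if the policy allows this data class on this tool tier.

    Unknown classifications fall through to an empty set, so anything
    not explicitly classified is denied by default.
    """
    return tool_tier in POLICY.get(data_classification, set())

# Pricing strategy in a consumer-tier tool is blocked; the sanctioned
# enterprise tool is allowed.
assert not is_use_permitted("confidential", "consumer")
assert is_use_permitted("confidential", "enterprise")
```

The shape matters more than the specifics: a deny-by-default rule that employees can actually consult is far easier to follow, and to enforce, than prose buried in a policy document.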
The goal is not to stop people using AI. It's to ensure that when they do, they're doing it in a way the organisation has thought about. That's a governance problem with a governance solution — and it belongs on the board agenda now, before a shadow AI incident makes it unavoidable.
Working through AI governance at board or leadership level?
This is an area I work in directly. Happy to have a practical conversation →
