Fifty-four percent of organisations have minimal or no formal AI governance, according to Trustmarque research. Not because they haven't thought about it: the same research found that only 6% of leaders said governance awareness was absent from their organisation entirely.
The problem isn't knowledge. It's execution.
And the reason most governance frameworks fail at the execution stage is straightforward — though it took cognitive science to make it precise. George Miller's 1956 work on working memory, later refined by Nelson Cowan in 2001, established that people hold approximately four meaningful chunks of information at once. Most AI governance frameworks exceed that comfortably. So people simplify informally. They create their own shortcuts. And the gap between what the policy says and what the team actually does quietly opens up.
The Three Simple Rules — Data Traffic Light, Human Wrapper, Prompt Dividend — are designed for the human brain, not the organisation chart. GovernFirst, not AI-First. Three disciplines that cover every moment that matters in AI-assisted agency work. Implementable this week.
Why governance frameworks fail
Here's the thing about governance failure: it almost never happens in the boardroom. The board approved the policy. Leadership signed off the document. Someone archived it carefully.
Then a deadline arrived.
Converging evidence across security policy research, compliance literature, and SME implementation studies shows the same pattern. When a framework demands more cognitive load than people have available under pressure, they don't abandon it deliberately. They simplify it quietly. What the policy says and what the team does diverge — not through defiance, but through the entirely human tendency to manage what working memory can hold.
Three rules fit in working memory. They can be recalled mid-client-meeting. They can be explained to a new team member in ten minutes. They can be checked in the moment a piece of work is about to go out.
That's not a compromise on rigour. It's the architecture that makes rigour possible at the operational level where agency work actually happens.
The Three Simple Rules
RULE 1
Data Traffic Light
Operates before anyone types anything into an AI tool. Three zones: red, amber, green. Red data — personally identifiable information, NDA-protected material, authentication credentials — never enters any AI tool. Amber data — client briefs, campaign strategies, internal documents — can be processed, but only through enterprise tools with a Data Processing Agreement. Green data — public research, published competitor content, generic writing inputs — can go into any approved tool. Three seconds. One decision. The classification that prevents the breach that doesn't have to happen.
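To make the gate concrete, here is a minimal Python sketch of the traffic light: a lookup table plus one yes/no check. The names (ZONE_BY_DATA_TYPE, may_enter_tool) and the data-type list are illustrative assumptions, not a prescribed implementation.

from enum import Enum

class Zone(Enum):
    RED = "red"      # PII, NDA-protected material, credentials
    AMBER = "amber"  # client briefs, campaign strategies, internal documents
    GREEN = "green"  # public research, published content, generic inputs

# Illustrative mapping; a real agency maintains its own list.
ZONE_BY_DATA_TYPE = {
    "pii": Zone.RED,
    "nda_material": Zone.RED,
    "credentials": Zone.RED,
    "client_brief": Zone.AMBER,
    "campaign_strategy": Zone.AMBER,
    "internal_document": Zone.AMBER,
    "public_research": Zone.GREEN,
    "competitor_content": Zone.GREEN,
    "generic_writing_input": Zone.GREEN,
}

def may_enter_tool(data_type: str, tool_has_dpa: bool) -> bool:
    """The three-second gate, applied before anything is pasted into an AI tool."""
    zone = ZONE_BY_DATA_TYPE.get(data_type, Zone.RED)  # unknown data is treated as red
    if zone is Zone.RED:
        return False             # red never enters any AI tool
    if zone is Zone.AMBER:
        return tool_has_dpa      # amber needs an enterprise tool with a DPA
    return True                  # green can go into any approved tool

The fail-safe default is the design choice that matters: anything unclassified is treated as red until someone decides otherwise, so a client brief headed for a consumer tool with no DPA is correctly refused.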
RULE 2
Human Wrapper
Operates after the AI has responded. In July 2025, Deloitte Australia submitted a government report worth approximately AU$442,000 containing fabricated academic references, non-existent researchers, and invented quotations. The tool: Azure OpenAI GPT-4o. QA processes that should have caught the errors didn't. Deloitte confirmed it. This wasn't a junior team on a free tool — this was a Big Four firm with substantial QA infrastructure. Automation bias (the ICO's term for the documented tendency to defer to confident, well-formatted AI outputs without adequate scrutiny) happens at every seniority level. The Human Wrapper is three things documented: who reviewed the output, what they checked it against, what they changed. Two minutes. A retrievable record. The difference between 'we review AI outputs' and 'we have documented human review' is the difference between a verbal assurance and an auditable process.
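The record itself needs almost no structure. As a sketch, here is the Human Wrapper as a Python dataclass; the field names are illustrative assumptions, and the point is simply three facts plus a timestamp, stored somewhere retrievable.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HumanWrapperRecord:
    """Three documented facts, plus a timestamp that makes the record retrievable."""
    reviewer: str          # who reviewed the output
    checked_against: str   # what they checked it against
    changes_made: str      # what they changed ("none, verified as-is" counts)
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = HumanWrapperRecord(
    reviewer="J. Smith",
    checked_against="Client brief v3 and the cited sources",
    changes_made="Corrected two statistics; rewrote the intro paragraph",
)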
RULE 3
Prompt Dividend
Operates after delivery — and it's the rule that pays you rather than simply protecting you. McKinsey's State of AI 2025 found that 8 in 10 organisations report no significant bottom-line gains from AI adoption. The distinguishing factor for those who do capture real value isn't which tools they use. It's whether they redesign workflows around AI capability rather than using AI as a faster version of what they did before. When prompts live only in individual chat histories, the efficiency they create evaporates when someone leaves or changes role. The Prompt Dividend is systematic capture: a shared location where prompts that produced good results are recorded with enough context to be reused. A ten-person agency that does this consistently for six months has institutional AI knowledge no competitor can replicate quickly.
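A minimal sketch of what systematic capture can look like, assuming a shared JSONL file as the library. The location and field names are illustrative; any shared system the team already opens daily would do.

import json
from datetime import datetime, timezone
from pathlib import Path

LIBRARY = Path("shared/prompt_library.jsonl")  # hypothetical shared location

def capture_prompt(prompt: str, tool: str, task: str, outcome_notes: str) -> None:
    """Append a prompt with enough context for someone else to reuse it."""
    entry = {
        "prompt": prompt,
        "tool": tool,
        "task": task,                    # what the prompt was for
        "outcome_notes": outcome_notes,  # why it worked, what to watch for
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
    LIBRARY.parent.mkdir(parents=True, exist_ok=True)
    with LIBRARY.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")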
The Classify → Review → Capture cycle
The three rules aren't independent. They form a complete workflow.
Before you prompt: the Data Traffic Light. A three-second classification before anything enters a tool. The upstream decision that everything downstream depends on.
After AI responds: the Human Wrapper. Substantive review — not rubber-stamping — before output goes anywhere. Not a new system: three new fields in whatever system the agency already uses.
After you deliver: the Prompt Dividend. A moment of capture before moving on. Did this prompt produce something worth keeping? The capture habit adds minutes. The library it builds adds months of accumulated capability.
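Put together, the cycle reads as one function. This is a sketch only: it assumes the three helpers from the earlier sketches are in scope, and run_ai_tool and deliver are stand-ins for whatever tool call and delivery step the agency actually uses.

def run_ai_tool(tool: str, prompt: str) -> str:   # stand-in for the real AI call
    return f"[draft output from {tool}]"

def deliver(output: str, record) -> None:         # stand-in for the real delivery step
    print(f"Delivered; reviewed by {record.reviewer}")

def ai_assisted_task(data_type: str, tool: str, tool_has_dpa: bool, prompt: str) -> None:
    # 1. Classify: the upstream gate everything downstream depends on.
    if not may_enter_tool(data_type, tool_has_dpa):
        raise PermissionError(f"'{data_type}' may not enter {tool} under the traffic light")

    output = run_ai_tool(tool, prompt)

    # 2. Review: substantive human check, documented in three fields.
    record = HumanWrapperRecord(
        reviewer="J. Smith",
        checked_against="Client brief and the cited sources",
        changes_made="Tightened the copy; verified all figures",
    )
    deliver(output, record)

    # 3. Capture: bank the prompt if it earned its keep.
    capture_prompt(prompt, tool, task="Campaign copy draft",
                   outcome_notes="Strong structure; statistics needed checking")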
These three rules, consistently applied, create something UK agencies increasingly need: demonstrable evidence that AI governance exists and functions. Not a policy document. An operational reality you can describe, evidence, and hand to a procurement team when they ask.
Because they will ask.
The gap between knowing governance matters and having governance that functions is where most UK agencies currently sit. The Trustmarque research put it precisely: the blockers aren't awareness or intention — they're ownership, resources, and clarity on next steps.
The Three Simple Rules are the clarity on next steps.
The Done-With-You AI Workflow Build implements all three rules across your specific agency operation — the tools you actually use, the workflows you already have, the evidence your enterprise clients will want to see.
About the book
This chapter is from Shadow AI Governance: The UK Agency Playbook — a book I'm writing in public about making agency AI usage visible, accountable, and commercially defensible. Chapter 7 is where the framework arrives. Not with theory. With three disciplines your team can implement this week, built on the cognitive science of how people actually follow rules under pressure. Chapter 8 is what this looks like in practice — not as a framework on paper, but as a live implementation, week by week, for an agency starting from where most agencies are.
Want the full chapter?
The newsletter covers the three rules and how they connect. The full chapter goes further: the complete Data Traffic Light zone breakdown with the specific data types that sit in each zone, the precise 5-day implementation plan for getting governance operational from scratch, and the honest framing section on what the Three Simple Rules actually operationalise versus what they don't mandate.
Or if you'd rather understand what your agency's AI readiness actually looks like before a client asks the question, the AI Readiness Assessment maps exactly what's running, where your gaps are, and what governance looks like for your specific operation.

