From around 2014, both agencies I was a partner in started feeling the effects of the same external crisis. A major client froze payments. Same market. Same timeline. Same financial pressure.
XEIOH survived. In 2019, Zonke closed.
The difference wasn't talent. It wasn't client relationships. It was structure — formalised governance systems that gave XEIOH a framework to make decisions fast when everything else was under pressure. Those systems weren't designed for resilience. They were imposed on us by pharmaceutical client requirements. The resilience was a side effect I only understood afterwards.
That story matters here because of what it reveals about how governance actually gets built. Not as a grand project. Not as a shutdown-and-redesign exercise. Incrementally. Practically. One layer at a time.
Chapter 8 is the implementation chapter. Four weeks. Specific actions per week. A governance foundation your agency can install without stopping work.
The problem with starting
Most agencies that want to address AI governance have no implementation sequence. Someone reads something uncomfortable — a LinkedIn post about an agency losing work over undisclosed AI use, a procurement questionnaire with questions they can't answer — and the intention to act is genuine. Then nothing happens. Or something starts, loses momentum, and gets quietly shelved.
This isn't laziness. It's structural. McKinsey's research puts it plainly: 70% of change programmes fail to achieve their goals. The organisations that succeed implement differently — they front-load action, embed changes in existing workflows, and don't wait for perfect readiness.
Here's the reframe worth sitting with. 84% of AI-using businesses in the UK already report that humans check AI outputs before they're used (DSIT, January 2026). Most agencies already have an informal Human Wrapper running. The behaviour exists. What's missing is the structure around it.
The Pilot Blueprint isn't asking your team to adopt new behaviours. It's asking you to formalise the ones they already have.
The four-week sequence
Week One: Surface the reality. Run an AI Usage Survey. Keep it short — eight to twelve questions. Genuinely anonymous. Frame it correctly: "We want to understand how you're actually working so we can support it properly." Give it three working days. Complete it yourself, visibly. Map what you find against your client data handling obligations.
Week Two: Install the policy layer. The Three Simple Rules from Chapter 7 become written documents. The Data Traffic Light becomes a classification system. The Human Wrapper becomes your documented review process. Your AI Acceptable Use Policy can be two pages. The test isn't length — it's whether someone reading it knows exactly what to do.
Week Three: Activate the team. A ninety-minute working session. Walk through the Three Simple Rules using real project types from your agency. Install the checklist into your project kickoff template. The Human Wrapper review step goes into your project management system. Zero additional friction is the goal.
Week Four: Lock in and hand over. Spot-check three or four recent projects against the framework. Produce a two-page governance summary document written for a client or procurement audience. Schedule the thirty-day review. The AI Champion leads from here.
Why this matters commercially
51% of UK agencies report that no client has ever asked them to disclose their AI use (CIPR, 2024). That number will shrink quickly. Enterprise procurement teams are already adding AI governance to vendor assessments.
Picture what the governed agency looks like at week five. The MD is in a credentials meeting. The prospect's procurement lead asks about AI policy. The MD pulls out a two-page document and walks through it. The conversation moves on. The competitor next week doesn't have the document. That gap doesn't close in a credentials meeting.
Governance-ready firms experience 37% shorter sales delays compared to those without documented governance (Cisco, 2019). That's not coincidence. That's the commercial translation of having done the work before the client asked.
The four-week blueprint puts you in front of that moment. GovernFirst, not AI-First.
About the book
This newsletter is an excerpt from the latest chapter in Shadow AI Governance: The UK Agency Playbook — a book I'm writing in public about making agency AI usage visible, accountable, and commercially defensible.
Chapter 8 is where the framework becomes operational. The Three Simple Rules from Part 2 now have a week-by-week installation sequence — the kind of practical, no-shutdown implementation guide that turns theory into something your team actually follows.
The next question: what does it look like when you bring someone alongside you to run the blueprint — and what do most agencies discover when they do?
Want the full chapter?
The newsletter covers the four-week sequence at summary level. The full chapter goes further: the complete AI Usage Survey guidance (what to ask, how to frame it, how to get honest answers), and how refining governance incrementally under real conditions maps directly to the spot-check process in Week Four.
Or if you'd rather have someone alongside you to run the blueprint — someone who's done it before, who can calibrate the Data Traffic Light for your specific client mix and install the policy layer with you — that's what the Done-With-You AI Workflow Build is designed for. Four weeks. Your workflows. Governance that fits your operation.