Your agency is being evaluated on AI governance right now.
Most agency MDs don't know the evaluation is happening.
That's not a problem with intent. It's a structural gap — and it has a number attached to it.
The CIPR (Chartered Institute of Public Relations) surveyed 2,016 people in September 2024. Sixty-three per cent of in-house marketing and communications teams said they ask their agencies about AI use. Only 24 per cent of agencies reported being asked.
CIPR, September 2024 (n=2,016). The 39-point gap is a qualification gap: clients are evaluating agencies on AI governance, and many agencies aren't tracking the question as commercially relevant.
Clients are building AI governance questions into their internal briefing processes and supplier assessments. Many of the agencies being evaluated aren't registering it as commercially relevant.
GovernFirst, not AI-First, is a positioning choice. It's also a competitive one. The agency that can answer the AI governance question with specifics — which tools, which controls, what data handling applies — isn't just compliant. It's differentiated. Most of its competitors are answering from belief. The agency that answers from evidence is the one that gets through the qualification round.
The problem with self-assessment
A second study puts the belief-versus-evidence gap in sharper relief.
Skillcast and YouGov surveyed 4,000 UK workers. Eighty-five per cent of managers said data protection practices were fully embedded in their organisation. Thirty-eight per cent were confident they could report a data breach accurately within 72 hours. Same respondents. Same organisation. Different answers depending on whether you ask for a claim or for evidence.
Skillcast/YouGov UK Corporate Compliance Survey (n=4,000). The 47-point gap exists because self-assessed readiness and verifiable readiness are structurally different things.
The same structural difference applies to AI usage in agencies. Microsoft and Censuswide surveyed 2,003 UK workers in October 2025. Seventy-one per cent reported using AI tools not approved by their organisation. Fifty-one per cent did so weekly. The agency principal who believes the team is using approved tools through managed channels is, statistically, almost certainly working from an impression — not a map.
The dimension most agencies haven't mapped
There's a thread in this that most AI governance conversations don't reach.
Freelancers. Medical writers. Motion designers. Disease area specialists. Animators pulled in for a three-month pitch run. UX contractors who know the client's system better than the permanent team does.
These people enter agency workflows carrying their own tools, their own habits, and their own accounts. They're not logging into a managed agency AI environment. They're using whatever they use. Probably on a personal account. Possibly without a Data Processing Agreement in place. Almost certainly without anyone at the agency having asked the question explicitly.
"I managed this problem in a different form at XEIOH — under pharmaceutical client requirements, where the outsourcing clause made third-party disclosure an obligation, not a preference. The back-to-back paperwork was real. But when the question came, we had an answer."
The AI disclosure requirement is the outsourcing clause of this decade. And the freelancer dimension is where most agencies, right now, have a gap they haven't mapped. Not because they're negligent. Because nobody asked.
What the Assessment produces
The AI Readiness Assessment is a two-week diagnostic. Not a conversation about strategy. Not a policy review. An evidence-gathering exercise that produces four specific outputs: a complete tool inventory, a data flow map, a gap analysis, and a governance recommendations report.
Week 1 is discovery — tools in use across permanent team and active freelancers, data classification review, DPA status for platforms handling client data. Week 2 is mapping — how data actually moves through the workflows, where controls are present, where they're absent, what the gaps are.
The output is specific to the agency. Not a framework template. A documented picture of exactly what's in use, exactly what data has moved through it, and exactly what needs to happen to close the identifiable gaps.
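To make the inventory and gap-analysis outputs concrete, here is a minimal sketch in Python. The record structure, field names, and example entries are illustrative assumptions, not the Assessment's actual schema; the point is the shape of the evidence: who uses which tool, on what kind of account, with what data, and whether a DPA covers it.

```python
from dataclasses import dataclass

# Illustrative record structure; field names are assumptions, not a prescribed schema.
@dataclass
class ToolRecord:
    name: str
    used_by: list        # permanent staff and freelancers using the tool
    account_type: str    # "managed" or "personal"
    data_classes: set    # e.g. {"client-confidential", "public"}
    dpa_in_place: bool

def gap_analysis(inventory):
    """Flag tools that handle client data without a DPA or via personal accounts."""
    gaps = []
    for tool in inventory:
        if "client-confidential" in tool.data_classes:
            if not tool.dpa_in_place:
                gaps.append((tool.name, "no DPA for client data"))
            if tool.account_type == "personal":
                gaps.append((tool.name, "personal account handles client data"))
    return gaps

# Hypothetical inventory entries for illustration only.
inventory = [
    ToolRecord("ChatGPT (free tier)", ["freelance medical writer"], "personal",
               {"client-confidential"}, dpa_in_place=False),
    ToolRecord("Copilot (enterprise)", ["permanent team"], "managed",
               {"client-confidential"}, dpa_in_place=True),
]

for name, issue in gap_analysis(inventory):
    print(f"{name}: {issue}")
```

Even a structure this simple makes the freelancer dimension visible: the personal-account entry surfaces two gaps that a policy review would never find, because the tool isn't in any managed environment to begin with.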
Practical pressure point: from 27 April 2026, any Cyber Essentials assessment under IASME Requirements v3.3 must treat all cloud services that store or process organisational data as in scope. AI tools accessed via user accounts that process client data fall within that definition. The inventory work the Assessment produces is exactly what Cyber Essentials now requires.
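The scope rule itself is a single test, and the tool inventory is the input it runs against. A sketch, assuming a simple list of service records (the field names and example services are hypothetical):

```python
def cyber_essentials_scope(services):
    # Under IASME Requirements v3.3, a cloud service is in scope if it stores
    # or processes organisational data; this helper applies that single test.
    return [s["name"] for s in services
            if s.get("stores_org_data") or s.get("processes_org_data")]

# Hypothetical services for illustration.
services = [
    {"name": "Managed LLM workspace", "processes_org_data": True},
    {"name": "Personal image-gen account", "stores_org_data": True},
    {"name": "Public weather API", "stores_org_data": False, "processes_org_data": False},
]

print(cyber_essentials_scope(services))
```

The filter is trivial; the hard part is the input. An agency that hasn't built the inventory can't run the test at all, which is why the Assessment's discovery week comes first.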
What readiness confidence enables
The Trustmarque AI Governance Index surveyed 507 UK IT decision-makers in July 2025. Ninety-three per cent of UK organisations use AI in some form. Seven per cent have fully embedded governance. Fifty-four per cent have minimal governance or none.
The gap between adoption and governance is the norm — not the exception. The agency with documented, verifiable AI governance is genuinely differentiated in most competitive contexts. Not because the bar is high. Because almost nobody has cleared it yet.
What does that confidence enable, practically?
Enterprise client conversations that weren't previously possible. Enterprise and regulated clients are adding AI questions to briefings and procurement processes. The agency that answers with documented specifics doesn't need to hope the client won't follow up.
Procurement responses that don't stall. Where ISBA's Generative AI Supplemental Agreement has been adopted — advisory terms that advertisers can consider as a supplement to the Media Services Framework 2021 — enterprise marketing clients have a ready-made framework for requesting detailed AI disclosure. The agency with existing governance documentation answers from the file.
Team consistency that doesn't depend on individual judgment calls. Clear AI governance means team members — including freelancers — aren't making their own decisions about which data to include in a prompt or which tools are appropriate for which clients. The decisions have been made. They're documented. They're followed.
The question most agency principals ask after the Assessment isn't whether it was worth doing. It's why they waited.
About the book
This is the chapter where the book moves from framework to action. The Three Simple Rules give you the structure. The four-week governance foundation gives you the timeline. But neither works until you know what you're actually building on — which tools are in use, where data is moving, and which parts of your team the current picture doesn't include.
Next week's chapter addresses the question every agency MD asks after the Assessment: how do you embed governance into the way the team actually works — without slowing anything down?
Want the full chapter?
The newsletter covers the four-week sequence at summary level. The full chapter goes further: the complete AI Usage Survey guidance (what to ask, how to frame it, how to get honest answers), and how incremental governance refinement under real conditions maps directly to the spot-check process in Week Four.
Rather have someone do this for your specific agency?
The AI Readiness Assessment covers exactly what this chapter describes — tool inventory, data flow mapping, freelancer and contractor mapping, and a governance recommendations report. Two weeks. Specific to your operation.