There is a question being asked about your agency right now. Chances are, no one in your team has heard it.

The CIPR State of the Profession 2024 surveyed 2,016 communications professionals across in-house and agency roles. Thirty-seven percent of in-house PR professionals say they "often" ask agencies to declare how they use AI. Seven percent of agency respondents say they are "often" asked.

Ben Verinder, the CIPR's head of research, offered an honest note when this data was published: in-house respondents sometimes overstate their own rigour. The real gap could be narrower. But even a sceptic's reading confirms the pattern. The questions are being asked. Most agencies are not hearing them.

The reason matters. Governance questions do not arrive labelled as governance questions. They sit inside IT security assessments, appear as standard RFP clauses on data handling, surface as supplier onboarding questionnaire items that account teams skim past because they look like boilerplate. The agency with documented answers passes those checkpoints as a matter of course. The agency without documentation faces a choice between a vague response and a silence that signals the same thing. Those two options feel different. They don't look different to procurement.

This is what Chapter 12 of Shadow AI Governance: The UK Agency Playbook is about. Not risk. Commercial advantage.

Three questions your clients are already asking

Three categories of AI procurement question are now standard across enterprise and public-sector clients.

The first is tool disclosure. Which AI systems does your agency use? Were any used in preparing this tender submission? The second is data handling. How is client data treated when it passes through AI systems? What prevents it from being used to train third-party models? The third is output accountability. Who is responsible for verifying AI-generated content before it reaches the client? What human review process governs that sign-off?

ISBA's Generative AI Member Survey, conducted in July 2025, found that ten percent of member advertisers had already revised contracts to include GenAI terms, with a further thirty-seven percent in progress. Mainstream UK advertisers, not edge cases.

An agency with documented governance answers those questions in passing. An agency without documentation encounters them mid-pitch, for the first time, with forty-eight hours to respond.

Three gates you cannot see from outside

Beyond open pitches, three procurement structures are now creating eligibility gates that filter out undocumented agencies before the shortlist forms.

The government gate arrived formally in February 2025. Procurement Policy Note 017 applies to central government departments, executive agencies, and non-departmental public bodies — template disclosure questions covering AI tool use in tender submissions and proposed service delivery are now part of the standard framework. The Government Communication Service goes further: contracted and framework suppliers must have safeguards in place for responsible AI use. Any agency carrying government communications work, directly or as a sub-contractor, sits inside that obligation. Whether they know it or not.

The industry standard gate is now live through the AA/IPA/ISBA Best Practice Guide on Generative AI. Ten major holding groups, including WPP, Publicis, Dentsu, and IPG, are aligned to this guidance. Agencies operating as sub-contractors or delivery partners for those groups encounter these standards without necessarily knowing they exist.

The advertiser contract gate is the one most agencies notice first, and usually too late. Contract revision clauses arriving mid-engagement are harder to handle than clauses anticipated during the pitch.

The agencies that have documentation already built clear these gates quickly. The ones that don't spend time improvising documentation that should have existed before the conversation started.

Speed through systems, not through scrambling

During my partnership at XEIOH in South Africa, a major pharmaceutical client issued an RFP with a 72-hour response deadline.

We had the response drafted in eight hours.

Not because we worked faster. Because the governance systems already existed. Data handling protocols? Already written. Team CVs and qualifications? Already current. Case studies with client permissions? Already secured. Approval chains? Already clear. We spent the remaining time refining strategy, not hunting for basic operational documentation.

This was before AI governance existed as a category. The systems were pharmaceutical compliance structures built to satisfy demanding clients. But the pattern is identical to what happens now when an agency with documented AI governance meets an RFP with procurement requirements. The agency that built the documentation before the deadline controls how it spends its time. Everyone else controls nothing.

And yet most agencies are still building it after the question arrives. The agencies clearing AI governance requirements in days are the ones that built that documentation architecture before the deadline existed. The ones that start building now will be the ones with something to show when the question lands.

What the documentation actually looks like

Chapter 12 ends with the AI Assurance Pack: five specific documents that address each procurement category directly.

A tool register answers the tool disclosure question. An AI usage policy answers the data handling question. A risk assessment template documents client-specific risk at the start of each engagement. Disclosure language provides pre-drafted client communication covering AI use, human review stages, and accountability — the text that goes into contracts rather than being improvised each time. A human review workflow documents the checkpoints at which AI-generated output is reviewed and verified before client delivery.

None of these documents is complex individually. The complexity is in having all five structured coherently, maintained as live documents, and positioned consistently across pitches. A tool register updated eight months ago does not answer a procurement question. A usage policy that lives in a founder's head does not pass a contract clause review.

For most agencies of five to twenty staff, the core documentation can be assembled in a matter of weeks. For agencies already working through this book, the materials are mostly there already.

The pitch conversation changes as a result. Most agencies, when asked about AI, say some version of: "We use AI across our workflows to improve efficiency." That describes adoption. It answers nothing about data handling, risk management, or accountability.

The governance-ready agency answers differently: "We can show you exactly how we use AI — which tools, at which stages, with what human oversight, and how your data is handled throughout." That is a description of a system. It answers the question the procurement team is actually asking, whether or not they have put it in those terms.

The room changes when one agency in a pitch can say that and the others cannot.

About the book

This newsletter comes from Shadow AI Governance: The UK Agency Playbook — a book I'm writing in public about making agency AI usage visible, accountable, and commercially defensible.

Chapter 12 is where the book shifts register. Parts 1 and 2 were about building the right systems internally. Part 3 is about what those systems are worth externally. The work done in earlier chapters was never just operational. It was preparation for the commercial conversations this chapter describes.

Chapter 13 picks up where this one leaves off. Building the AI Assurance Pack is the straightforward part. The harder question is what happens as your agency grows — and whether the governance that works clearly at five people still holds coherently at fifteen, thirty, or fifty. That is where the next chapter begins.

Want the full chapter?

The newsletter covers the three procurement question categories, the three access gates, and the AI Assurance Pack. The full chapter goes further: a detailed look at the four mechanisms through which governance documentation creates commercial advantage (Visibility, Access, Speed, and Premium Positioning), the Deloitte Australia cautionary reference point and what it means for agencies, and the full XEIOH pharmaceutical pitch story that shows how constraint-built systems transfer across contexts.

Ready to build this advantage into your agency?

Or if you'd rather have the AI Assurance Pack built properly — tool register, usage policy, risk assessment template, disclosure language, and human review workflow structured for your agency and your client mix — the Done-With-You AI Workflow Build includes that as part of the engagement.
