Here's the test. Not a thought experiment. An actual test you can run this week.
Pick four people in your agency. Someone in strategy. Someone who signs off copy. Your most senior account lead. Your project manager or ops lead. Ask each of them: "If a pharma or healthcare client asked you tomorrow how we use AI across our work, what would you say?"
Then compare the answers.
Most agencies don't do this. Not because the question isn't worth asking, but because the answers are uncomfortable.
AI is already inside the work.
It's in the research summaries. The first-draft copy. The meeting notes that become briefs. The status updates. The campaign territories. The client-facing materials where nobody is entirely sure whether the reviewed version was the AI-assisted one or the one after it.
Leadership knows AI is being used. What leadership often doesn't know is where, by whom, with what client data, and to what standard of review.
That gap is manageable. Right up until the moment a regulated client decides to ask.
And they are starting to ask.
Not always in formal procurement questionnaires. Often in ordinary conversations. A medical director on a quarterly call. A procurement lead at a chemistry meeting. A client services director who reads the news. The question arrives casually, and the agency's answer, if there is one, comes out differently depending on who picks it up.
Below are four questions a pharma or healthcare communications client might reasonably raise. Each one touches a different part of your agency's operations. None of them requires a crisis. All of them require a consistent answer.
Question 1: "How is your team using AI in strategy work for our brand?"
This is a strategy function question, and it's harder to answer than it looks.
The visible answer is usually something about desk research, competitive monitoring, or insight generation. True as far as it goes. But the question underneath the question is: how do you know what AI contributed, and how do you know it's accurate?
AI can produce strategy-shaped material at speed. It can summarise research, cluster themes, generate audience personas, draft recommendation frameworks. It cannot validate its own conclusions. It cannot distinguish a well-evidenced insight from a plausible-sounding one. It cannot flag when a claim needs regulated-market scrutiny rather than normal editorial judgement.
The failure pattern looks like this. AI synthesises research. The strategist uses the synthesis. The brief inherits the synthesis. Creative works from the brief. Copy reflects it. The regulated claim in the third paragraph of the campaign rationale was never checked against the actual source material. The "source material" everyone relied on was the AI summary, and nobody went back further.
For a healthcare or pharma client, that is not a process problem. It is a sign-off problem. And sign-off problems in regulated environments become MLR problems, review escalations, and rework.
An agency that can answer this question consistently can show where strategy starts with human problem definition, where AI enters the workflow, and what human judgement owned the recommendation before it became the brief. Most agencies cannot show that. Not because it isn't happening. It is happening. But nobody has mapped it.
Question 2: "Who reviews AI-assisted copy before it reaches us?"
This is a copywriting function question, and the honest answer in most agencies is: it depends.
It depends on who wrote the brief. Who was under deadline pressure. Which AI tool they used. Whether the draft looked finished enough to move. Whether the reviewer knew AI was involved.
That inconsistency is the risk. Not the AI use itself.
AI has made acceptable copy cheap to produce. Fluent headlines, smooth paragraphs, polished benefit lists. Nothing obviously wrong. Nothing that reads like a machine wrote it. Nothing the reviewer would catch without specifically looking for it. And in regulated copy, "nothing obviously wrong" is not the same as "safe to send."
The failure pattern works quietly. Confidential brief material enters a general AI tool because the copywriter is under deadline pressure. The draft looks clean. The reviewer doesn't know what was pasted in, or into which tool, or whether the data boundaries for this client permit that use. The copy goes out. The issue, if there is one, doesn't surface until weeks later.
Or it works the other way. The copy is fine. The process was fine. But the account director and the MD give completely different answers when the client asks who reviewed it and by what standard. There isn't one. There are individuals with different habits and different review instincts, and nobody has made those habits consistent.
An agency that can answer this question can tell a client: yes, we know which tools were used, we know what data went in, and there is a named human who signed off the copy before it reached you. For the highest-risk work, we can also confirm that a regulated-content review happened. Most agencies would struggle to say that about more than a few recent projects.
Question 3: "What client data does your team put into AI tools?"
This is a client service and account management question, and it's the one that makes founders most uncomfortable. The honest answer is often "I don't know for certain."
The account team might use AI to summarise a client call. To draft a status update. To turn a messy set of meeting notes into a brief. Useful, efficient, well-intentioned. And the client information in those calls: strategy, pipeline, stakeholder politics, adverse event mentions, competitive positioning, pre-approval product details. All of it is in the tool.
Whether that's a problem depends on which tool, on what terms, with what retention controls, and whether this specific client has written any AI-use restrictions into their agreement with you. Most agencies don't have a clear line of sight across all of those.
The question isn't "is the team using AI for client work?" They are. The question is: do the account leads know what's Red, what's Amber, what's Green for each client, and does that rule exist in a place they can actually find it at 5pm on a Thursday when the deadline is tight?
An agency that can answer this question has visible data-use rules that account teams know and follow, client-level AI restrictions documented where they exist, and the ability to say clearly to a client: here is what we do and do not put into AI tools on your account. Most agencies have a policy somewhere. Most teams have not seen it since induction.
Question 4: "How do you make sure AI use is consistent across the people working on our account?"
This is a project management and operations question, and it's the one that exposes everything the other three are pointing at.
If different strategists use different AI tools with different prompts and different review habits, you cannot give a consistent answer to question one. If different copywriters have different standards for what gets disclosed and what gets reviewed, you cannot give a consistent answer to question two. If the account team has different interpretations of what client data can go where, you cannot give a consistent answer to question three.
Operations is where inconsistency compounds. It's also where the cost of that inconsistency shows up: rework, escalations, delivery drag, senior time pulled into fixing problems that started upstream.
The failure pattern is not dramatic. It's a senior project manager who follows one review process. An account lead who follows another. A junior who uses whichever tool is quickest. A founder who assumes the process is consistent because nobody has reported a problem. Nobody has reported a problem because no one has asked the question.
An agency that can answer this question has mapped how AI actually touches each main work type, not how it's supposed to touch it. Who uses what, on which accounts, with which client data, to what review standard. And there is a named human accountable for each piece of client-facing work before it leaves the agency. Most agencies have not mapped that. Not because it would be hard. Because there has not been a reason to look until now.
Why clarity comes before policy, tools, or training
There is a version of this problem that looks like a governance issue. It isn't. Not yet.
Governance is what you build when you know what you're governing. Policy is what you write when you understand what the team is actually doing. Training is what you run when you know which habits need changing.
The first step is visibility. Where is AI actually being used? Which functions? Which tools? Which clients? With what client data? To what review standard? Are those standards consistent across the team?
Most agencies that start with policy skip that step. They write a document that describes a process the team was never operating, circulate it at an all-hands, and assume the problem is addressed. Six weeks later, the habits haven't changed because the document didn't describe reality.
The useful first move is mapping. Not building. Mapping.
What is actually happening across the six core functions where AI is most likely to be inside the work right now: strategy, copywriting, client service, account management, project management, and operations. Not what should be happening. What is.
Once that's visible, the gaps become obvious. So does the order in which to close them.
If the four questions made you uncomfortable
That's useful information.
It doesn't mean your agency is behind. It means there's work to do. And that the work is specific, not abstract.
I'm running AI Workflow Clarity Audits for agency founders who want the real picture across their team before a client or a procurement process asks for it. It's a two-week diagnostic. It costs £500. At the end of it, you have a clear view of where AI is being used across the core functions, where the workflow and data boundaries are unclear, and what needs tightening first.
Not a policy document. Not a training programme. Not a tool recommendation. Just clarity, so you can make the right decisions about what comes next.
If the four questions resonated, it might be worth comparing notes. I'm speaking with agency founders about this now. Happy to have a conversation if it's useful.

