
Your CSD is on a routine call. It is probably a monthly check-in. The client is a medical affairs lead or a senior procurement contact, someone who uses phrases like "quality control framework" in casual conversation.

The agenda moves along. Then, with no particular drama, they ask, "Can you walk me through how your team uses AI in content delivery?"

Not a challenge. Not a compliance inspection. Just a question.

Your CSD answers. Something comes out. It is broadly true. It is not the full picture. And from where you are sitting, you have no idea whether what they just said matches what anyone else in the agency would say if asked the same question tomorrow.

That is the problem. Not the question. The gap between the question and the answer you could be confident about.

Why the CSD catches it first

The client services director is the person who sits closest to the client relationship while also sitting furthest from most of the actual delivery workflow. They know how the work lands. They do not always know how it was made.

In most agencies, the CSD is not running the AI tools. But they are on the calls where the questions land. They are the ones building the relationship capital that gets spent if something goes wrong. And they are increasingly the ones who need to speak with authority about how the agency works, without having a full picture of the AI usage happening at brief stage, in copywriting, in research, in design handoffs, or in project management.

The account lead uses an AI transcription tool in every client call. Does the CSD know? The copywriter uses a drafting tool with client campaign language in the prompt. Is there a rule about that? The project manager uses AI to generate status reports from Slack threads. Who checks them before they go to the client?

None of these are reckless decisions. They are normal decisions happening at normal speed. The problem is that they are happening without a shared operating line, which means no one person has the full picture, and the CSD is the one expected to speak to it.

The questions are already landing

Most agency founders I speak to assume the AI scrutiny question is coming. What they underestimate is how quickly it arrives without warning, and where it lands first. It does not go to the managing partner's inbox. It goes to the person on the call.

Three questions are starting to show up in client conversations at regulated-client agencies:

"Can you walk us through your review process for AI-assisted content?" The client is not asking whether you use AI. They are asking who checks the output, at what stage, and what that review actually involves. They want a process, not a reassurance. "We take quality seriously" is not an answer to this question.

"What is your policy on using our data in AI tools?" They want to know what the rule is and whether your team knows it. Not the policy document, the actual working rule. The one the account executive could repeat under pressure. Most agencies have a vague version of this somewhere. Very few have it visible at the moment someone needs it.

"If we asked three members of your team the same question, would we get the same answer?" Some clients are starting to ask this directly. Others are simply thinking it while they listen to the answer they just received. Either way, if the honest answer is no, the agency has a consistency problem it has probably not yet measured.

These are not hostile questions. They are reasonable ones. The problem is that most agencies are not ready to answer them consistently, because they have never needed to coordinate the answer before.

What consistent looks like. What scattered looks like.

Two agencies. Similar size. Similar client mix. Both have people using AI across account management, copywriting, and project delivery.

The first agency has a shared operating line. The CSD knows which tools are approved for client work, what client data can and cannot go in, and what review sits between AI output and anything that reaches the client. That knowledge is not stored in a policy document. It is visible in how the work actually runs — in brief templates, in handoff notes, in what gets checked before a piece goes out.

When the client asks, the CSD answers. Calmly. Specifically. Without needing to check with anyone. The answer matches what the account director would say. It matches what the senior copywriter would say. There is one version.

The second agency is using AI just as much, possibly more. The founder has thought about AI governance. There is a document somewhere. But the rule is not visible at the moment the account executive opens a tool. The meeting summary goes into the brief without anyone checking whether the client's caveats and open questions survived the AI-generated version. The CSD's answer on the client call is their best reconstruction of how things work, based on what they have seen and what they assume.

It is mostly accurate. It is not the same answer anyone else would give.

That is not a failure. It is what scattered AI usage looks like from the outside. The danger is not that it is dramatically wrong. The danger is that it is inconsistent in ways the founder cannot see, because nobody has ever asked three people the same question at the same time.

One practical prompt for this week

Ask three people on your team the same question: where are you using AI, what data goes in, and how do outputs get reviewed before they reach the client?

You do not need a survey. You do not need a workshop. Just ask three people. This week. An account lead, a copywriter or strategist, and your CSD.

If you get three consistent answers, you are in better shape than most. Record those answers. They are your client-ready baseline.

If you get three different answers, that is not a crisis. It is a visibility problem. The difference matters, because a visibility problem is something you can fix before a client call makes it visible for you.

If that conversation made you pause

The gap between what is actually happening across your team and what your CSD would say on a client call is exactly what the AI Workflow Clarity Audit is designed to surface.

It is a two-week diagnostic. It maps where AI is entering the work across the Core Six functions: client service, strategy, copywriting, creative, design, and project management. It identifies where workflow and review boundaries are unclear, where client-data assumptions have never been tested, and what a consistent agency-wide answer would actually need to contain.

The outcome is a clear picture and the three things worth tightening first. Not a policy document. Not a transformation programme.

It costs £500. It does not require a project team.

If the scenario at the top of this newsletter felt familiar, it is probably worth a conversation.
