When one person holds all your AI expertise, you're not scaling capability—you're concentrating risk. Here's what the cascade looks like.

When 'Ask Sarah' Becomes a Single Point of Failure

How one person's holiday exposed a £180K dependency web most UK agencies have already built

Last summer, a Bristol creative agency missed a major deadline.

Not because the work wasn't done. Because only one person knew how to do it.

Sarah had built an AI workflow for brand strategy research over six months. Client briefs into ChatGPT, competitor insights out, positioning frameworks generated. Three days of work compressed into three hours. The agency loved it.

Then she went on holiday.

A pharmaceutical client needed urgent revisions. The account director tried to replicate Sarah's process. Couldn't. The junior strategist attempted it. Produced nonsense. The creative director stepped in. His outputs looked nothing like Sarah's.

They missed the deadline.

Three months later, that £180K account went to pitch.

The problem wasn't the technology

The agency had built revenue-critical workflows around ungoverned AI. No documentation. No training. No redundancy. When Sarah was unavailable, the capability vanished.

One concentration point. One cascade. One lost client.

Most UK agency owners focus on individual AI risks. Data breaches. Copyright violations. Compliance gaps. They're treating these like separate problems.

They're not.

Shadow AI creates interconnected dependencies that cascade. Tool concentration triggers human concentration. Human concentration creates workflow concentration. Workflow concentration leads to data concentration.

When external pressure arrives—ICO inquiry, client audit, key person departure—one domino falls and the cascade begins.

The dependency web you've already built

Here's how it works.

Tool concentration happens first. Your team finds ChatGPT. It works. They keep using it. Weeks become months. Twenty people rely on it daily. You didn't decide to depend on it—your team did.

When CrowdStrike's faulty software update crashed 8.5 million Windows devices in July 2024, airlines couldn't check passengers in. Banks couldn't process transactions. One vendor dependency. Global cascade.

AI tools face similar dynamics. Pricing changes without notice. Features deprecated. Free tiers eliminated. OpenAI changed ChatGPT's terms six times in 2024.

Human concentration follows. When one tool embeds in workflows, one person leads adoption. They learn it first. They teach others. Everyone starts asking them for help. Within months, they're essential.

UK agencies have 24% annual staff turnover. That's one in four people leaving each year. When that person is your AI expert, their departure takes capabilities with them.

IBM found 42% of workers hold knowledge unique to them that isn't documented anywhere. When those workers use AI tools, that undocumented knowledge gets further concentrated.

Workflow concentration is next. When one person owns the AI expertise, workflows route through them. Their approval becomes required. Their availability becomes critical. Their process becomes the process.

The Bristol agency's brand strategy research had no manual fallback. Sarah's workflow was the only workflow. When she left, the process broke.

Data concentration completes the web. When AI embeds in critical paths, organisational intelligence flows through those paths. Prompts get refined. Outputs get optimised. Learning accumulates.

But it accumulates in the AI tool. Not in your documentation.

Sarah's six months of refinements—all the optimisations, all the learning—lived in her ChatGPT conversation history. When the creative director tried to replicate her work, the prompts were gone. The context was gone. The agency couldn't access what they'd created.

How the cascade works

These four concentrations form a web where one failure triggers multiple breakdowns.

Tool dependency creates human dependency. Human dependency creates workflow dependency. Workflow dependency creates data dependency.

When Sarah went on holiday, all four concentration points failed simultaneously. The agency couldn't access ChatGPT's premium features. Didn't have Sarah's expertise. Couldn't execute the workflow. Couldn't retrieve the data.

One person's absence. Four systems failed. One client lost.

This is what ungoverned AI creates. Not individual risks you can address separately—an interconnected dependency web where failure propagates.

Governance maps dependencies before they cascade

Most agencies discover this pattern under pressure. ICO opens an inquiry. Enterprise client requests security documentation. Key person gives notice.

That's when you learn which dependencies exist and how they interconnect.

Governance maps dependencies before external pressure tests them. It answers three questions: Where are concentration points? How do they connect? What redundancy exists?

The Bristol agency commissioned a Shadow AI audit after missing their deadline. The audit revealed eleven unauthorised tools in active use. Four people held critical AI knowledge with no documentation. Seven revenue-critical workflows had AI embedded with no fallback.

They implemented governance frameworks. Data classification for information risk. Documented review processes for AI outputs. Systems to capture AI intelligence organisationally.

Four months later, their largest pharmaceutical client conducted a vendor assessment.

The agency passed. Their competitors didn't.

Documented governance became commercial advantage. Not just compliance—differentiation.

What to do this week

Look at your agency's AI usage and ask: Who's your Sarah?

Which person holds AI expertise everyone else depends on? What happens if they're unavailable for two weeks? Can someone else replicate their workflows? Is their knowledge documented anywhere?

If you can't answer these questions, you've already built the dependency web. You just haven't mapped it yet.

The cascade effect isn't theoretical. It's operational reality for any agency using ungoverned AI.

The question isn't whether these dependencies exist. The question is whether you've mapped them before they cascade.

Want the full cascade analysis? The complete chapter covers how 71% of UK employees use unauthorised AI tools, why 24% annual staff turnover becomes a capability crisis, what happens when ChatGPT changes its terms six times in one year, how the Bristol agency lost £180K when four concentration points failed simultaneously, and why documented governance became a competitive advantage rather than a compliance burden.

Want to know your agency's Shadow AI exposure?

The £500 Shadow AI Audit I've designed maps these dependencies. It reveals tool adoption, identifies human concentration points, documents workflow dependencies, and assesses data exposure, then shows you where cascade risk lives. Reply if you want to know more.
