Air Canada argued their website chatbot was a "separate legal entity" responsible for its own mistakes. The tribunal's response was blunt: you're responsible for everything on your website. Here's why that precedent matters for UK agencies using Shadow AI.

"AI Did It" Isn't a Defence

In February 2024, Air Canada made legal history by arguing their website chatbot was "a separate legal entity responsible for its own actions."

The airline wasn't joking. The tribunal wasn't amused.

A customer asked the chatbot about bereavement fares. The chatbot provided incorrect information. The customer booked based on that information, then requested the promised discount.

Air Canada refused. They pointed to the correct policy buried elsewhere on their website. The customer sued.

Air Canada's defence was straightforward: the chatbot was wrong, but we're not responsible for what it says. It's a separate legal entity.

The tribunal's response was equally straightforward: "It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot."

Air Canada lost. The customer won. The chatbot defence died in a Canadian tribunal.

This happened in Canada, but the principle travels. UK agencies face equivalent liability under contract law.

Why This Matters for Your Agency

When I audit agencies, I see the same mistake Air Canada made. Teams using AI without governance. Without oversight. Assuming someone else will catch the errors.

They won't.

You're responsible for what you publish. "AI did it" isn't a defence in Canadian law. It isn't a defence in UK law.

Air Canada's mistake wasn't deploying AI. It was deploying AI without accountability structures. No one owned the chatbot's outputs. No one verified its accuracy. No one took responsibility until forced to by a tribunal.

UK agencies make the same mistake with Shadow AI. Just distributed across teams instead of concentrated in one chatbot.

The Cascade Effect No One Talks About

Here's what makes this particularly dangerous for agencies: you don't work with one client. You work with 10, 15, 20 clients simultaneously.

One data breach doesn't affect one client. It affects every client whose data you hold.

Under UK GDPR Article 33, when you become aware of a personal data breach likely to result in a risk to individuals, you must notify the ICO within 72 hours. Not 72 business hours. Just 72 hours.

You have 72 hours to:

  • Determine which clients are affected (you hold data for 15 clients—which datasets were exposed?)

  • Assess whether the breach creates risk to individuals (you need legal advice, fast)

  • Notify the ICO with accurate information (inaccurate or incomplete notifications create further regulatory problems)

  • Prepare client notifications (15 separate conversations, 15 sets of questions you can't answer)

The 72-hour clock doesn't pause while you figure out what happened.

This is the cascade effect. One exposure doesn't create one problem. It creates a portfolio-wide crisis.

I learned this the expensive way. When a major client delayed payment on a multi-million rand project, my South African agency Zonke couldn't withstand the cash flow pressure. We closed.

Our sister agency XEIOH survived the blast radius. The difference? A pharmaceutical client had required formalised governance. XEIOH had documentation. Zonke had relationships.

Informal practices reached their limits when external pressure arrived.

The same pattern plays out with Shadow AI. One team member uploads client data to ChatGPT. Another uses Midjourney for client concepts. A third uses Claude for proposal writing.

None of them are being reckless. They're trying to do excellent work efficiently.

But when the breach happens—and research shows approximately 3.1% of AI prompts contain confidential data—you're managing 15 damaged client relationships simultaneously.

What You Can Do This Week

You don't need to ban AI tools. You need to govern them.

Here's one action you can take this week:

Conduct a 15-minute Shadow AI audit with your team:

  1. Ask each person: "Which AI tools did you use with client work this week?"

  2. Document the answers (no judgment, just documentation)

  3. Look for tools you didn't approve or didn't know about

  4. Identify which client data touched which external systems

That's it. Don't restrict anything yet. Just know what's happening.

The regulatory reality isn't arriving. It's arrived. The ICO has named AI as an enforcement priority. Fines have increased seven-fold year-over-year. And 80% of your potential clients have serious concerns about agency AI governance.

The Air Canada precedent confirmed something agencies need to understand: you can't outsource accountability to algorithms. When AI makes a mistake using your name, you own the consequences.

The Two Paths Forward

You have two options:

Path One: Carry on with ungoverned Shadow AI. Hope you're not the first agency caught when the ICO decides to make an example. Hope your clients don't start asking questions you can't answer.

Path Two: Implement governance before consequences arrive. Document your AI usage. Formalise your tool stack. Create accountability for data handling.

The difference between these paths isn't compliance versus innovation.

It's formalised governance versus informal practices.

As I learned watching Zonke close while XEIOH survived: informal practices reach their limits when external pressure arrives. Formalised governance determines survivability.

The chatbot defence died in a Canadian tribunal because Air Canada couldn't show governance, accountability, or oversight. Don't wait for a UK tribunal to teach you the same lesson.

Want the full chapter analysis? The complete breakdown includes the ICO's seven-fold fine increase, Samsung's £100M exposure in 20 days, why 80% of Enterprise clients have "serious concerns" about agency AI use, and what happens when the 72-hour breach notification clock starts ticking across your entire client portfolio.

Want to know your agency’s Shadow AI exposure?

The £500 Shadow AI Audit identifies your ungoverned AI usage, cascade risk points, and client data exposure in two weeks. It's designed for UK agencies who want answers before clients start asking questions.
