AI Agent Security

Air Canada and the New Liability of Hallucinated AI Intake

The Civil Resolution Tribunal ruling against Air Canada this week is being read narrowly as a consumer-protection decision. It deserves a wider reading. The tribunal held the airline responsible for a bereavement-fare provision that its chatbot fabricated, rejecting the argument that the bot was a separate legal entity whose statements the carrier could disown.

For anyone deploying an AI agent on a customer-facing channel, the implication is direct. The agent is part of your intake surface. Whatever it tells a customer is, for legal purposes, what your organization told that customer. The defense that the model hallucinated is not a defense; it is a description of an unmanaged risk.

This is why intake hardening is no longer optional for regulated or high-trust businesses. Before deployment, AI agents need adversarial testing against the kinds of questions customers actually ask, including the edge cases where a confident wrong answer is more dangerous than a refusal. After deployment, they need monitoring that flags novel responses, not just failed ones.
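To make "flags novel responses" concrete, here is a minimal sketch in Python of a monitor that scores each outbound agent response against a corpus of approved policy language and routes anything sufficiently novel to human review. Everything in it is illustrative, not a Vercon product detail: `APPROVED_POLICY_SNIPPETS` is a hypothetical stand-in for whatever reviewed policy corpus your organization maintains, and the token-overlap score is a deliberately crude placeholder for a real semantic-similarity model.

```python
import re

# Hypothetical corpus of reviewed, approved policy language.
APPROVED_POLICY_SNIPPETS = [
    "Bereavement fares must be requested before travel and cannot be applied retroactively.",
    "Refund requests are reviewed within 30 days of submission.",
]

def tokenize(text: str) -> set[str]:
    """Lowercased word tokens; a production system would use embeddings instead."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def max_overlap(response: str, corpus: list[str]) -> float:
    """Highest Jaccard similarity between the response and any approved snippet."""
    resp = tokenize(response)
    best = 0.0
    for snippet in corpus:
        snip = tokenize(snippet)
        union = resp | snip
        if union:
            best = max(best, len(resp & snip) / len(union))
    return best

def review_response(response: str, threshold: float = 0.3) -> None:
    """Flag responses that resemble no approved policy language, even if no error occurred."""
    score = max_overlap(response, APPROVED_POLICY_SNIPPETS)
    if score < threshold:
        # Novel language: the agent is asserting something no approved policy says.
        # This is the case the Air Canada bot would have triggered.
        print(f"FLAG for human review (similarity={score:.2f}): {response!r}")
    else:
        print(f"OK (similarity={score:.2f})")

if __name__ == "__main__":
    review_response("You can apply for the bereavement fare within 90 days after your flight.")
    review_response("Bereavement fares must be requested before travel.")
```

The same check can gate the pre-deployment side: run an adversarial bank of edge-case customer questions through the agent and fail any response that neither matches approved language nor refuses. The point of the sketch is the architecture, not the scoring function: the monitor watches for confident novelty, which is precisely what per-response error handling misses.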

The Air Canada ruling will not be the last of its kind. Organizations that treat AI intake as a marketing surface will keep being surprised by it. Those that treat it as a regulated communications channel, with the same review discipline as a published policy document, will not.

#case study · #AI liability · #intake

Find out where your communications channels are exposed.

A Vercon Communications Security Assessment delivers an executive-readable risk report and a prioritized remediation roadmap — typically within four weeks.