Lessons From the Recent Wave of AI-Generated Voice Scams Targeting Families
The wave of AI voice-cloning scams in which a cloned voice of a child or grandchild calls relatives, claims to be in trouble, and asks for money has been covered mainly as a consumer-protection story. It is also a preview of what enterprise contact centers will face at scale.
The technology that produces a convincing clone of a stranger's voice from thirty seconds of TikTok audio is the same technology that produces a convincing clone of a CEO, a vendor contact, or a high-value customer. The consumer attacks are happening first because the targets are easier and the payoffs are immediate. The enterprise attacks are coming because the tooling is identical.
The defensive posture for an enterprise should not be to detect deepfakes in real time, which remains an unsolved problem. It should be to design workflows in which a single voice call, however convincing, cannot trigger a high-value action without independent confirmation. That is a process change, not a technology purchase, and it is the cheapest control available.
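To make that process change concrete, here is a minimal sketch of what such a gate could look like. Every name in it (HIGH_VALUE_ACTIONS, ConfirmationStore, send_out_of_band_approval) is hypothetical, chosen only to illustrate the shape of the control: the voice channel can open a high-value request but can never complete one, and the confirmation travels over a channel the caller does not choose.

```python
# Minimal sketch of a "no single-call authorization" gate. Every name here
# (HIGH_VALUE_ACTIONS, ConfirmationStore, send_out_of_band_approval) is
# hypothetical, shown only to make the workflow concrete.

import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Actions that must never complete on the strength of one voice call.
HIGH_VALUE_ACTIONS = {"wire_transfer", "change_payout_account", "reset_mfa"}


@dataclass
class PendingAction:
    action: str
    account_id: str
    token: str
    expires_at: datetime
    confirmed: bool = False


class ConfirmationStore:
    """Holds requests awaiting confirmation on an independent channel."""

    def __init__(self) -> None:
        self._pending: dict[str, PendingAction] = {}

    def open(self, action: str, account_id: str, ttl_minutes: int = 30) -> PendingAction:
        token = secrets.token_urlsafe(16)
        pending = PendingAction(
            action, account_id, token,
            datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
        )
        self._pending[token] = pending
        return pending

    def confirm(self, token: str) -> bool:
        pending = self._pending.get(token)
        if pending is None or datetime.now(timezone.utc) > pending.expires_at:
            return False  # unknown or expired request: the action never runs
        pending.confirmed = True
        return True


def execute(action: str, account_id: str) -> str:
    return f"executed {action} for {account_id}"


def send_out_of_band_approval(account_id: str, token: str) -> None:
    # Placeholder: a push approval in the customer's app, or a callback to
    # the number already on file -- never to a number supplied on this call.
    print(f"approval request for account {account_id} sent on independent channel")


def handle_voice_request(store: ConfirmationStore, action: str, account_id: str) -> str:
    """Entry point for the voice channel: it can request, never complete."""
    if action not in HIGH_VALUE_ACTIONS:
        return execute(action, account_id)
    pending = store.open(action, account_id)
    send_out_of_band_approval(account_id, pending.token)
    return "pending_confirmation"
```

However the confirmation is actually delivered, the property that matters is that the caller cannot redirect it: a cloned voice can start the workflow, but only the legitimate account holder can finish it.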
If your contact center cannot answer which actions a single voice call can authorize, that is the place to start.
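One way to make that question answerable is to keep the mapping from actions to authorizing channels as an explicit, reviewable artifact rather than agent discretion. A minimal sketch, with illustrative action names:

```python
# Hypothetical policy table: which channels may each complete an action on
# their own. The action names are illustrative; the point is that the answer
# to "what can voice alone authorize" is auditable, not tribal knowledge.

AUTHORIZATION_POLICY: dict[str, set[str]] = {
    "check_balance":          {"voice", "app", "web"},
    "update_mailing_address": {"app", "web"},  # voice alone cannot
    "wire_transfer":          set(),           # no single channel suffices
    "change_payout_account":  set(),
}


def voice_alone_can_authorize(action: str) -> bool:
    """The audit question from the paragraph above, as one line."""
    return "voice" in AUTHORIZATION_POLICY.get(action, set())


if __name__ == "__main__":
    for action in AUTHORIZATION_POLICY:
        print(f"{action}: voice alone -> {voice_alone_can_authorize(action)}")
```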