
Bob Sullivan
“I need help. Oh my God! I hit a woman with my car,” the fake Bob says. “It was an accident, but I’m in jail and they won’t let me leave unless I come up with $20,000 for bail … Can you help me? Please tell me you can send the money.”
It’s fake, but it sounds stunningly real. For essentially $1, using an online service available to anyone, an expert was able to fake my voice and use it to create telephone-ready audio files that would deceive my mom. We’ve all heard so much about artificial intelligence – AI – recently. For good reason, there have long been fears that AI-generated deepfake videos of government figures could cause chaos and confusion in an election. But there might be more reason to fear the use of this technology by criminals who want to create confusion in order to steal money from victims.
Already, there are various reports from around North America claiming that criminals are using AI-enhanced audio-generation tools to clone voices and steal from victims. So far, all we have are isolated anecdotes, but after spending a lot of time looking into this recently, and allowing an expert to make deepfakes out of me, I am convinced that there’s plenty of cause for concern.
Reading these words is one thing; hearing voice clones in action is another. So I hope you’ll listen to a recent episode of The Perfect Scam that I hosted on this for AARP. Professor Jonathan Anderson, an expert in artificial intelligence and computer security at Memorial University in Canada, does a great job of demonstrating the problem — using my voice — and explaining why there is cause for … concern, but perhaps not alarm. His main suggestion: All consumers need to raise their digital literacy and become skeptical of everything they read, everything they see, and everything they hear. It’s not so far-fetched; many people now realize that images can be ‘photoshopped’ to show fake evidence. We all need to extend that skepticism to everything we consume. Easier said than done, however.
Still, I wonder: What kind of future are we building? Also in the episode, AI consultant Chloe Autio offers some suggestions about how industry, governments, and other policymakers can make better choices now to avoid the darkest version of the future that I’m worried about.
I must admit I am still skeptical that criminals are using deepfakes to any great extent. Still, if you listen to this episode, you’ll hear Phoenix mom Jennifer DeStefano describe a fake child abduction scheme she endured, in which she was quite sure she heard her own child’s voice crying for help. And you’ll hear warnings from an FBI agent and from the Federal Trade Commission. As Professor Anderson put it, “Based on the technology that is easily, easily available for anyone with a credit card, I could very much believe that that’s something that’s actually going on.”