Imagine this: It’s just past midnight, and your mother’s number appears on your phone screen. You answer, and on the other end you hear her voice, unmistakably hers, crying out in panic: “Let me go!” “Help!” The caller ID confirms it’s her number. Then a man takes over. He tells you there was a break-in at her house, but they found nothing valuable, so they’re holding her hostage. He demands that you send a sum of money immediately, before he hangs up, or they will kill her.
Until now, phone scams have followed a familiar pattern, mostly targeting specific groups of victims. For example, someone would call pretending to be a police officer and tell the victim that a relative had been involved in a serious car accident; to keep the relative out of jail, the victim would be asked to pay a large sum of money. While the majority of victims have been elderly, a recent case in Zografou involved a 13-year-old who handed over €2,500 in cash, jewelry, and watches to scammers. If you think you are unlikely to be scammed, you might want to reconsider: scammers are now leveraging Artificial Intelligence (AI) to create voice clones, introducing a terrifying new type of fraud.
Three seconds of speech. That’s all an AI application needs to clone your voice, or that of a loved one. After that, carrying out “vishing” (voice phishing) requires only access to personal information and contacts, a cloned SIM card (a technique already widespread), and an AI app programmed to deliver the message in the right tone. The scam has become so convincing that even the most cautious individuals can be tricked, and it’s even easier when scammers have access to social media, videos, and other personal data posted online.
How easy is it for your voice, or that of a loved one, to be stolen? A brief phone call is enough, or a short video posted on TikTok, Facebook, or any of the other social platforms where users share countless videos every day.
“Millions of Targets”
Though relatively new, this type of phone scam has spread rapidly in the US and Asia, and it’s now reaching Europe. Last week, the UK’s Starling Bank issued a warning that “millions of people could fall victim to scams using AI programs to clone their voices.”
In the US, this fraud technique gained widespread attention when it was revealed that voice cloning technology had been used to trick thousands of Democrats with calls featuring a fake voice of Joe Biden urging them to abstain from voting. In South Korea, a case shocked the nation when a businessman was convinced he was on a video call with his company’s executives, who were in fact criminals, and lost $25 million as a result.
AI-powered voice impersonation is making phone scams more believable and dangerous than ever, says Steve Grobman, CTO at cybersecurity firm McAfee. Globally, one in four people has either fallen victim to an AI voice scam or knows someone who has, with 77% of victims losing money, according to McAfee’s “Beware the Artificial Imposter” report.
Cybersecurity experts have found that 6.5 billion fraudulent calls are made worldwide each quarter, with the average American receiving 12 scam calls per month. The “modernization” of this criminal industry was inevitable. Breachsense CEO Josh Amisav explains that with the advancement of AI text-to-speech technology, fraud techniques will become significantly more sophisticated. When combined with leaked or stolen data, scammers can impersonate someone the victim knows, adding personal details to increase the scam’s credibility. These scams have grown so prevalent that an Arizona senator proposed legislation to treat AI as a weapon when used to commit crimes, in order to impose harsher penalties on scammers.
It doesn’t take much for scammers to get into the audio deepfake business—just a few hundred euros. Several companies offer AI and deep learning applications that can create voice replicas of such high quality that they can fool anyone. These technologies were originally developed for legitimate uses, like movie production, audiobooks in different languages, and lending celebrity voices to advertisements or other creative applications. For example, a restaurant chain in the US uses the cloned voice of famous football player Keith Byars to take orders, and a South Korean company offers to recreate the voices of deceased loved ones for their families. On the positive side, the technology is also used for health purposes, such as the Voice Keeper app, which helps people suffering from conditions like throat cancer, Parkinson’s, or ALS to preserve their voices. But it was only a matter of time before this technology was misused.
The Scenarios Used
With voice mimicry technology, fraudsters now have a wider range of scenarios to exploit in order to achieve their ultimate goal—extracting as much money as possible from their victims.
The most common scam scenario in the US involves fake kidnappings. Scammers often target families with young children, convincing them their child has been kidnapped and that they must pay a ransom. These payments are frequently made via anonymous payment apps or even handed over in person, as in real kidnappings. Initially, large sums (over $1 million) are demanded, but the criminals often settle for much smaller amounts (around $50,000).
In other cases, the stolen voice is used to extract money in different ways. Using AI, scammers call a friend or family member, who hears their “loved one” claim they were in a car accident in a remote area. “I’m hurt and need surgery. I’ve lost everything—wallet, cards, phone—and the hospital is asking for X amount to proceed because insurance won’t cover it. Please send the money to this account immediately; I’ll repay you once I’m out of surgery.”
Sometimes, “professional” voices are used to convince victims to transfer large sums of money quickly. Who wouldn’t panic on hearing their accountant say they must pay a large tax bill immediately to avoid confiscation or jail? Or a lawyer calling about something similar? Scammers even target corporate accountants, using the voice of a company’s CEO to authorize payments or settle fictitious obligations.
Banks are on high alert, both to block suspicious transactions and because AI is becoming ever more convincing at mimicking human voices. This makes it increasingly likely that criminals will use it to gain access to victims’ bank accounts, refining existing scams and bypassing current security measures. That’s why customers are now required to confirm personal details, such as tax IDs and identity numbers, during phone calls with their banks.
Similarly, voice cloning could be used, as in the fake Joe Biden calls, to spread misinformation or to manipulate processes long considered secure, such as elections. But that’s a whole different category of concern.
Keys to Protection
Experts note that while apps capable of detecting scam calls or AI-generated audio are evolving rapidly, “there will never be a silver bullet.” The best protection is for individuals to be cautious and proactive, making it practically impossible for them to fall victim.
The most important step is learning to recognize scam calls. As with existing fraud schemes, it is crucial to stay cautious, avoid disclosing personal information, and safeguard data that is often exposed online. “If you suspect it’s a scam, it’s better to hang up and end the conversation.”
Perhaps the most practical advice is to use a family password. Experts recommend that families agree on one or more words or phrases to be used in emergencies: words or phrases that no one outside the family would know, and that are never shared with third parties or posted on social media.
This way, grandparents, parents, or children can quickly verify whether the person on the other end of the line, saying “I’m being held hostage” or “I’m in serious trouble and need money immediately,” really is their loved one.