Consider this unsettling scenario: You pick up your phone one day, and to your astonishment, you hear a voice that uncannily resembles your child’s. This voice on the other end claims that your child has been kidnapped and is in desperate need of cash for a ransom. Panic sets in, and you frantically scramble to offer assistance, only to later realize that you’ve fallen victim to a highly sophisticated and deeply unsettling new AI scam phone call.
Jennifer DeStefano, a mother from Arizona, recently shared her harrowing experience before the Senate, shedding light on a disturbing trend that is becoming all too common. As artificial intelligence (AI) technology becomes more affordable and accessible, criminals are harnessing its power to mimic the voices of our loved ones, exploiting our emotions and deceiving us into sending them money. Astonishingly, imposter scams of this nature have collectively siphoned off up to $2.6 billion from unsuspecting Americans in the past year, as reported by the Federal Trade Commission.
But here’s the reassuring news: You can outsmart these scammers at their own game. We’ve consulted with cybersecurity experts to uncover the inner workings of these AI scam calls, why they’re so challenging to detect, how to protect yourself from becoming a victim, what to do if you find yourself targeted, and what the future holds for AI-driven scams.
Understanding the Novel AI Scam
An astute scammer armed with a potent AI program doesn’t need much more than a brief recording of a loved one’s voice to clone it and craft their own narrative. They can then play this manipulated audio over the phone to convince their victims that someone they cherish is in dire straits and urgently needs financial assistance.
These scams are not your typical run-of-the-mill phone cons or Google Voice tricks; they are far more sophisticated. One prevalent example involves parents or grandparents receiving a call purportedly from their children or grandchildren, claiming they need money for ransom or bail, similar to the AI kidnapping scam that Jennifer DeStefano encountered. Nico Dekens, Director of Intelligence and Collection Innovation at ShadowDragon, remarks, “We have seen parents targeted and extorted for money out of fear that their child is in danger.”
Eva Velasquez, President and CEO of the Identity Theft Resource Center, also notes reports of AI scam calls convincing victims that their relatives need money to cover damages from car accidents or other incidents. Other variations include impersonating a manager or executive’s voice in a voicemail instructing someone to pay a bogus invoice, as well as calls masquerading as law enforcement or government officials demanding sensitive information over the phone.
How Do AI Scams Operate?
While executing an AI scam may involve a few steps, the technology accelerates the process to a concerning degree, making these cons alarmingly easy to orchestrate compared to voice scams of the past. In essence, this type of fraud follows these steps:
Step 1: Collect the Recording
To initiate an AI scam call, criminals initially require a five- to ten-second audio snippet of a loved one’s voice, which can often be sourced from platforms like YouTube, Facebook, or Instagram. They then input this audio into an artificial intelligence tool that learns the individual’s voice patterns, pitch, and tone, and crucially, simulates their voice.
These AI tools are readily available and inexpensive, or even free to use, intensifying the threat. Voice-synthesis models such as Microsoft’s VALL-E, for instance, can replicate someone’s voice after analyzing only a three-second training clip of that person speaking. As Aleksander Madry, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory, points out, “This grants scammers a new superpower, and they’ve wasted no time exploiting it.”
Step 2: Feed the AI a Script
Once the AI software comprehends the targeted individual’s voice, fraudsters can instruct it to generate an audio file in which the cloned voice says whatever it desires. The next step is to call you and play this AI-generated clip, commonly referred to as a deepfake. These calls might appear to come from a local area code to deceive you into answering, but be cautious—scammers can easily spoof their phone numbers. Many phone-based fraud schemes originate in countries with substantial call-center operations, such as India, the Philippines, or Russia, according to Velasquez.
Step 3: Set the Trap
The scammer informs you that your loved one is in grave danger and demands immediate, untraceable payment methods, such as cash, wire transfers, or gift cards. Although these tactics are typical indicators of wire fraud or gift card scams, most victims react with panic and comply with the demands. Nico Dekens emphasizes, “These scams exploit fear, creating a heightened emotional state that makes it challenging to take a moment to consider their authenticity.”
Scammers also count on catching their targets off guard, explains Karim Hijazi, Managing Director of SCP & CO, a private investment firm specializing in emerging technology platforms. “Scammers rely on creating enough surprise to catch the targeted individual off balance,” Hijazi says. “Currently, this tactic remains relatively unknown, making it easy for most people to believe they are truly speaking to their loved one, boss, colleague, or law enforcement officer.”
How AI Facilitates Scams and Makes Them Harder to Detect
Impersonation scams have been a longstanding issue, but artificial intelligence has elevated their sophistication and believability. Madry asserts, “AI hasn’t changed the motives behind scams; it has merely provided a new avenue for executing them, whether it’s blackmail, fraud, or misinformation. All of these can now be executed more affordably and persuasively.”
While AI has been available for decades, for both criminal and everyday applications (such as AI password cracking and virtual assistants like Alexa and Siri), it used to be expensive and necessitated significant computing power. Consequently, malevolent actors needed substantial expertise and time, often relying on specialized software to mimic someone’s voice with AI.
However, the landscape has evolved drastically. As Madry highlights, “Today, all of this is accessible to anyone willing to invest some time in tutorials or reading how-to documents and experimenting with internet-downloaded AI systems.”
Additionally, Velasquez observes that previous imposter phone scams often attributed voice differences to poor connections or accidents, providing a plausible explanation for altered voices. However, contemporary technology “has reached such a level of proficiency that it’s nearly impossible for the human ear to discern whether the voice on the other end of the phone is genuine,” says Alex Hammerstone, a director at the security consulting firm TrustedSec.
How to Safeguard Against AI Scams
While AI scam calls may be quicker to orchestrate than traditional imposter scams, they remain labor-intensive for criminals. Consequently, according to Velasquez, the likelihood of being targeted is relatively low. Most fraudsters prefer attacks that can be automated and repeated at scale, whereas a voice-clone scam must be tailored to a specific victim and a specific voice.
Nevertheless, these attacks are expected to increase as technology continues to improve, making it easier for scammers to identify targets and replicate voices. Moreover, as Madry points out, society hasn’t yet developed the instincts and precautions needed to fully combat this emerging threat.
To bolster your online security and reduce the risk of becoming a victim, consider the following expert recommendations:
1. Make Your Social Media Accounts Private
Before sharing audio or video clips of yourself on platforms like Facebook, Instagram, or YouTube, restrict your privacy settings to allow only people you know and trust to view your posts. If you maintain an open profile, consider removing audio and video recordings of yourself and your loved ones from these platforms to deter scammers looking to capture your voice.
2. Implement Multifactor Authentication
Setting up multifactor authentication for your online accounts can complicate fraudsters’ attempts to gain unauthorized access. This system necessitates entering a combination of credentials to verify your identity, such as a single-use, time-sensitive code sent to your phone via text, in addition to your username and password.
If you use biometric verification, opt for methods that use your face or fingerprint instead of your voice to prevent criminals from accessing the resources required for deepfakes.
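The time-sensitive codes used by most multifactor setups follow the time-based one-time-password (TOTP) standard, RFC 6238. As a rough sketch of what an authenticator app does under the hood (the secret string below is a made-up demo value, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, interval=30, digits=6):
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of 30-second intervals since the Unix epoch acts as a counter
    counter = int(for_time if for_time is not None else time.time()) // interval
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A fresh six-digit code every 30 seconds for a hypothetical demo secret:
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code is derived from the current time and a secret that never travels over the phone line, a scammer who has cloned a voice still cannot produce a valid code.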
3. Establish a Secret Phrase
Hijazi suggests devising a secret phrase or password that you can exchange with your loved ones in advance. If you receive a call claiming that they need immediate money, you can use this phrase to confirm the caller’s authenticity. Although this requires some advance planning, it’s a free and effective proactive measure.
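A verbal phrase works because it is a shared secret the scammer cannot clone from public audio. The same idea, formalized for digital channels, is challenge-response authentication; a minimal Python sketch (the phrase below is a made-up example):

```python
import hashlib
import hmac
import os

# Hypothetical family phrase, agreed on in advance and never said on a call
SHARED_SECRET = b"blue-giraffe-umbrella"

def challenge():
    """Send the caller a fresh random nonce so old answers can't be replayed."""
    return os.urandom(16)

def respond(nonce):
    """The genuine loved one proves knowledge of the secret without revealing it."""
    return hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()

def verify(nonce, answer):
    """Check the response in constant time to avoid timing leaks."""
    expected = hmac.new(SHARED_SECRET, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, answer)
```

Over the phone, of course, the "protocol" is simply asking for the phrase; the code only illustrates why a pre-shared secret defeats voice cloning.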
4. Erase Your Digital Footprint
Minimizing your online presence, to the extent possible, is another way to avoid being targeted by these scams. Scammers often rely on the information you publicly share about yourself, from your pet’s name to your high school mascot, to construct convincing scams.
According to Hammerstone, “There is a wealth of freely available information about nearly everyone. It’s straightforward to uncover details about people’s family members, affiliations, and employers, all of which scammers can leverage to craft persuasive scams.” To mitigate this risk, Hammerstone recommends limiting the personal information you share online.
Tools like DeleteMe can automatically remove your personal details from data brokers, making it more challenging for scammers to target you. Google is even developing a “Results About You” tool that will notify you when your personal information appears in search results and simplify the process of requesting removal.
What to Do If You Receive an AI Scam Call
If you receive a suspicious phone call, particularly one in which a loved one is demanding money, it’s crucial not to panic. Velasquez emphasizes that while it can be distressing to hear a loved one in apparent distress, the key is to remain calm and not overreact.
Experts recommend taking the following steps before complying with any monetary demands made over the phone:
- Call Your Loved One Directly: Use a trusted phone number to reach your loved one and verify their situation.
- Contact Family, Friends, or Colleagues: If you cannot reach your loved one directly, attempt to get in touch with them through a family member, friend, or colleague.
- Use a Secret Phrase: Ask the caller to confirm a detail or secret phrase that only your loved one would know.
- Alert Law Enforcement: Reach out to law enforcement authorities to verify the authenticity of the call.
- Listen for Audio Anomalies: Pay attention to any unusual voice modulations or synthetic-sounding voices, which can be indicative of a scam. Deepfake audio may lack natural intonation or exhibit glitches.
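As a toy illustration of the audio-anomaly point above, a few lines of Python can chart the pauses in a recording: natural speech pauses irregularly, while some synthetic audio paces pauses unnaturally evenly or omits breaths entirely. This sketch (with an arbitrary amplitude threshold) only shows the idea of pacing analysis; real deepfake detection is far harder and should not rely on code like this:

```python
import struct
import wave

def silence_gaps(path, threshold=500, frame_ms=20):
    """Measure pause lengths (in milliseconds) in a mono 16-bit WAV file.

    Toy illustration only -- NOT a reliable deepfake detector.
    """
    gaps, quiet_frames = [], 0
    with wave.open(path, "rb") as w:
        assert w.getnchannels() == 1 and w.getsampwidth() == 2
        samples_per_frame = w.getframerate() * frame_ms // 1000
        while True:
            raw = w.readframes(samples_per_frame)
            if not raw:
                break
            peak = max(abs(s) for s in struct.unpack(f"<{len(raw) // 2}h", raw))
            if peak < threshold:
                quiet_frames += 1                     # still inside a pause
            elif quiet_frames:
                gaps.append(quiet_frames * frame_ms)  # pause just ended
                quiet_frames = 0
    return gaps
```

If every returned gap is nearly identical, that uniformity is one weak signal of machine-generated pacing; treat it as a prompt to verify through other channels, never as proof either way.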
If you ascertain that the call is indeed a scam, take the following additional steps:
- Record the Caller: If possible, put the scam caller on speaker and record the audio with a secondary phone. This preserves evidence.
- Block the Number: Block the scammer’s number on your phone to stop repeat calls. Keep in mind that scammers can spoof new numbers, so blocking alone won’t prevent every future attempt.
- Report the Call: Inform your mobile phone carrier of the call, and report the scam to the Federal Trade Commission at ReportFraud.ftc.gov.
In summary, while AI scam calls represent a disturbing and evolving threat, you can protect yourself by practicing caution, securing your online presence, and responding thoughtfully to suspicious phone calls. By staying vigilant and taking proactive steps, you can reduce the risk of falling prey to these increasingly sophisticated scams.