An Arizona family was recently in the news warning others about how they were the target of a ransom call in which scammers used AI (artificial intelligence) to clone their daughter’s voice to convince the parents they had kidnapped their daughter, with the apparent goal of extorting money.
DeLynne Bock, the mother of Payton Bock and the target of the con, said she had always felt she could easily spot a scam call, but this one was on a whole other level.
According to the news story, the scammers called the family’s home, and DeLynne’s husband answered. A man on the other end of the line was screaming and using foul language, claiming their daughter had hit his car in an accident and couldn’t produce her insurance information. From there, he escalated to threats, saying he had her tied up in the back of his truck.
What made the call so convincing was the deepfake of her daughter’s voice on the other end of the line – pleading for help, crying. Unable to reach her daughter by phone, DeLynne called the police while her husband kept the man talking. “I called the police, and they’re saying, ‘This is possibly a scam situation.’ I said, ‘There is no way this is a scam. This is my daughter’s voice,’” DeLynne said. “This wasn’t just some person pretending. As a mother, you know your daughter’s voice, and this was my daughter.”
Apparently, this wasn’t the first time such a call had happened, which is why the police immediately suspected a scam. This is just the latest iteration of hackers using AI to produce deepfakes and extort money. AI and ChatGPT have been in the news recently for a reason – AI is an extremely powerful tool that, in the wrong hands, can do a lot of harm.
It’s not a stretch to imagine scammers using AI to fake a principal’s voice, signature, or writing style in an email, text, call, or instant message to trick a faculty member into sending money or taking actions that would severely harm the organization, such as handing over login credentials or access to the district’s network, data, or critical applications. The same approach could be used to scam clients or patients into giving up confidential information or payments.
A report released by security researchers at Home Security Heroes showed that an AI-powered password-cracking tool could break 51% of common passwords in less than one minute. Both the length and the complexity of a password factored into how quickly it fell – yet even a complex seven-character password mixing uppercase and lowercase letters, numbers, and symbols took just minutes to crack.
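The arithmetic behind claims like this is straightforward: the worst-case search space is the character-pool size raised to the password length, divided by how many guesses the attacker’s hardware can make per second. A minimal sketch of that estimate – the 10-billion-guesses-per-second rate and the 94-character pool are illustrative assumptions, not figures from the report:

```python
def brute_force_seconds(length: int, charset_size: int,
                        guesses_per_second: float) -> float:
    """Worst-case time to exhaust every password of a given length."""
    return charset_size ** length / guesses_per_second

# Assumed attacker speed: 10 billion guesses/second, a rough ballpark
# for modern GPU hardware (not a figure from the report).
RATE = 1e10

# A 7-character password drawing on upper/lowercase letters, digits,
# and ~32 symbols gives a pool of roughly 94 characters.
secs_7 = brute_force_seconds(7, 94, RATE)
print(f"7 chars, full charset: about {secs_7 / 60:.0f} minutes")

# Adding just five more characters pushes the same exhaustive search
# out past a million years at the same guess rate.
secs_12 = brute_force_seconds(12, 94, RATE)
print(f"12 chars, full charset: about {secs_12 / (3600 * 24 * 365):.0f} years")
```

The takeaway matches the report: length buys exponentially more safety than any single extra symbol class, which is why longer passphrases plus multi-factor authentication beat short “complex” passwords.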
This means it is critical that districts no longer rely on strong passwords and basic antivirus software alone to protect their organizations.
First, all schools should have some type of security awareness training for their teachers and students. Simply sharing this article, and others like it that we publish, can go a long way toward keeping everyone on high alert for scams; but the occasional article is not enough. You need ongoing reminders and formal training so that security stays top of mind. Teachers AREN’T “too smart” to fall for these scams. If someone can convince a mother that her daughter has been kidnapped by cloning the daughter’s voice, they can trick a student or faculty member into clicking a link, granting access, or transferring funds – and it’s happening right now to many districts.
Second, you need to ensure your network has robust cybersecurity tools and protections, along with disaster recovery protocols, so that if you are hit with ransomware you can still recover your data. This is not an area to be cheap about. Most people stubbornly believe it won’t happen to them, or that it will be a minor inconvenience rather than the costly, district-crippling disaster a cyber or ransomware attack can be. An ounce of prevention goes a long, long way toward minimizing your risk.