
AI Biometric Spoofing

By Mark Rasch

For more than 40 years, information security professionals have argued for stronger authentication (and access control) protocols. For most, this has meant some form of multi-factor authentication (MFA): some combination of something you ARE (a biometric), something you HAVE (a token or device), and something you KNOW (a password, passphrase, or PIN). Certainly, a combination of these factors, properly implemented, provides stronger authentication and security than any one of them alone. But with increasingly sophisticated attacks, many of which are powered by artificial intelligence, even MFA may be DOA.
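To make the three factors concrete, here is a minimal sketch, in Python and using only the standard library, of a two-factor check: a stored password hash for "something you KNOW" and an RFC 6238 time-based one-time password (TOTP) for "something you HAVE." The record layout and function names are my own illustration, not any particular product's API.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at_time: float, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 time-based one-time password: HMAC-SHA1 over a 30-second counter."""
    counter = struct.pack(">Q", int(at_time // step))
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(password: str, otp: str, record: dict) -> bool:
    """Two of the three factors: KNOW (password) and HAVE (TOTP device)."""
    # Something you KNOW: constant-time comparison of a salted PBKDF2 hash.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                    record["salt"], 100_000)
    knows = hmac.compare_digest(candidate, record["pw_hash"])
    # Something you HAVE: the TOTP secret lives on the user's device.
    has = hmac.compare_digest(totp(record["totp_secret"], time.time()), otp)
    # Something you ARE (a biometric match) would be a third, independent check.
    return knows and has
```

The point of the layering is that an attacker must defeat each check independently. The rest of this piece is about how, one by one, each check can in fact be defeated.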

Take this scenario. You get a phishing e-mail from what appears to be an authenticated e-mail address, requesting that you wire funds to someone you know at a particular bank account. But you are smart. You don't fall for it. First, you call the person you know, at a telephone number you also know. They pick up the phone and you recognize their voice. They authorize the wire transfer and confirm the amount and wire instructions. You are golden, right?

Not so fast.

A recent case illustrates how each of the "authentication" factors we just used can be, and is being, spoofed for profit. We know that e-mail addresses can be easily spoofed. Phone numbers are increasingly compromised by SIM swapping and other techniques in which hackers take over the cell phone number and account. The same technique gives the hacker access to SMS, MMS, and other accounts linked to the phone and phone number, so a SIM swap essentially compromises several channels of authentication at once. That's why it's a favorite tool of hackers. In fact, in a case awaiting sentencing in federal court in Detroit, Colton Jurasic has pleaded guilty to using SIM swapping to steal more than $6.7 million in cryptocurrency. United States v. Jurasic, Dkt. No. 19-20246 (E.D. Mich. Feb. 11, 2020). So e-mail, SMS, and related technologies are simply not as strong authenticators as we might like, particularly against determined and patient hackers.
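A rough sketch makes clear why SIM swapping defeats SMS-based codes. In the Python flow below (send_sms is a hypothetical stand-in for a carrier or SMS-gateway API, not a real library call), the server binds the one-time code to a phone number, and the carrier, not the server, decides which handset that number currently reaches. After a SIM swap, that handset is the attacker's.

```python
import secrets

def send_sms(phone_number: str, body: str) -> None:
    """Hypothetical stand-in for a carrier/SMS-gateway API."""
    ...

pending_codes: dict = {}

def start_sms_verification(user_id: str, phone_number_on_file: str) -> None:
    code = f"{secrets.randbelow(10 ** 6):06d}"
    pending_codes[user_id] = code
    # The trust anchor here is the phone NUMBER, not the person or the handset.
    # After a SIM swap, the carrier routes this message to the attacker's device,
    # so the attacker receives a perfectly legitimate code.
    send_sms(phone_number_on_file, f"Your verification code is {code}")

def check_sms_code(user_id: str, submitted: str) -> bool:
    expected = pending_codes.pop(user_id, "")
    return secrets.compare_digest(expected, submitted)
```

Nothing in check_sms_code can tell whether the code came back from the account holder or from whoever persuaded the carrier to move the number.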

But what about the contents of the phone call? A recent case shows how AI is being used to circumvent that form of authentication too. We are all familiar with the "family member in trouble" e-mail scams, in which scammers claim to be a family member who is in trouble (usually in a foreign country) and needs money wired immediately. Well, a recent variant of that scam was attempted against a lawyer in the US, who received a phone call from his adult son explaining that he had just been wrongfully arrested and needed thousands of dollars wired to the clerk of the court for bond. The son explained that he had had an accident, had injured a pregnant woman, and needed money to get released. In reality, hackers had taken a short snippet of the son's voice (as little as five seconds of sampling will do) and used an AI program to generate the hysterical phone call in the son's actual voice. Similar "AI voice" scams have been used, for example, to convince a UK CEO to wire hundreds of thousands of pounds to a scammer, and AI programs (like Google's Duplex) can fake not only the sound and timbre of someone's voice but the responses to questions as well, generating an entire artificial conversation. Am I REALLY speaking to Alan Turing?

Facial recognition programs can similarly be spoofed by AI. We have all seen the deepfake videos that, for example, have Jordan Peele putting words into President Obama's mouth, or that recast Back to the Future with Robert Downey Jr. and Tom Holland. Combine these with programs like Lyrebird AI or Resemble AI and you can create video and audio doppelgängers that can fool the average person in a video conference. The face is fake. The voice is fake. The conversation is fake. The scam is real. But AI can not only spoof a face well enough to defeat humans; it can spoof one well enough to defeat facial recognition programs as well. By knowing (or deducing) the features the facial recognition software looks for, and presenting a matching set of data points, the AI spoofer tricks the program into authenticating. Even the things we think are the most sophisticated forms of authentication, like DNA sequencing, can be spoofed. (Personally, I just use the DNA of my identical twin, but not everyone has one.)
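That "matching set of data points" attack on a face recognizer can be sketched concretely. Assuming white-box access to (or a good approximation of) the recognizer's embedding model and the victim's enrolled template, both assumptions on my part, a few hundred gradient steps on the input pixels can push an arbitrary image's embedding toward the template until it clears the match threshold. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def spoof_embedding(model: torch.nn.Module,
                    start_image: torch.Tensor,       # (1, 3, H, W), pixels in [0, 1]
                    target_embedding: torch.Tensor,  # victim's enrolled template
                    threshold: float = 0.7,          # recognizer's match cutoff
                    steps: int = 500,
                    lr: float = 0.01) -> torch.Tensor:
    """Gradient-descend on the pixels until the model's embedding of the
    image matches the enrolled template closely enough to 'authenticate'."""
    image = start_image.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        similarity = F.cosine_similarity(model(image), target_embedding).mean()
        if similarity.item() >= threshold:   # the recognizer would now accept
            break
        (-similarity).backward()             # maximize similarity
        optimizer.step()
        with torch.no_grad():
            image.clamp_(0.0, 1.0)           # keep the pixels a valid image
    return image.detach()
```

In practice the attacker rarely has the exact model, but adversarial inputs crafted against a similar model often transfer, so a good approximation can be enough.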

AI programs can spoof something we are. SIM swapping can spoof something we have and something we know. Man-in-the-middle (MitM) attacks can capture other things that we "know." If there's enough of an incentive, anything can be spoofed. Pretty soon it might be AI robots authenticating other AI robots.

For now, we should recognize these inherent weaknesses and create more sophisticated authentication protocols, with the understanding that these too will ultimately be defeated. Cat, meet mouse. Mouse, meet cat.