Cybersecurity has always been an arms race between cybercriminals and defenders. Defenses improve to counter new threats, and attackers respond by refining their tactics to find the next weakness. It's one of the most dynamic environments in computer science.
One of the most successful and increasingly prevalent attack vectors is social engineering, in which criminals manipulate humans directly to gain access to confidential information. Social engineering is more sophisticated than ever, and its most advanced iteration is the topic of today's discussion: deepfakes.
"Deepfake" is a portmanteau of "deep learning" and "fake," a nod to the deep learning models that power the technique. Using AI/ML, deepfakes can generate audio, video, or photographic content that imitates real people, and they can do so with frightening accuracy.
Originally, the technology gained its reputation from its use in entertainment and media. Fake YouTube and TikTok videos are already a common sighting. That said, its implications for cybersecurity are much more alarming. Cybercriminals have been quick to recognize and take advantage of these new capabilities, which has given birth to a new epoch of phishing called "deepfake phishing."
The mechanics of deepfake phishing
Traditional phishing is rather simple: the phisher sends fake emails that appear legitimate to lure victims into handing over sensitive information such as login credentials or financial details. Commonly, this involves scare tactics meant to bypass the victim's rational mind and emotionally manipulate them into acting without second-guessing the authenticity of the request.
[RELATED: 5 Emotions Used in Social Engineering Attacks (with Examples)]
The reason deepfake phishing is so effective is that it amplifies this emotional manipulation. The deepfaked material is accurate enough to catch more people off guard, making it that much easier to bypass their rational minds.
Imagine getting a video call from your CEO, complete with their familiar gestures and tone of voice, asking you to access certain data on the company network. It sounds like science fiction, but it's not. And it's not hard to see how devastating such a scenario could be once it's replicated at scale.
Barriers to entry
The quality of deepfake footage will only improve as the AI industry grows. At the mention of AI, most cybersecurity experts get excited about threat detection, automated incident reports, and easy discovery of polymorphic code.
However, the fact that deepfake phishing will require next to no technical effort, thanks to AI, is a big problem. Today, being a successful "black hat" takes considerable skill: to even catch wind of a potential profit, criminals must find a way into a company's internal systems, which is often difficult. And even if they manage to find a weak point, actually exploiting it is another matter entirely.
Now, consider using deepfake content instead. With a few photos or voice clips and a subscription to AI tools, hackers will be able to, for example, jump on a video call with a company's CFO to authorize a large payment to a fraudulent account with ease. No skills are needed, and everything seems 100% legitimate to the victim.
Bypassing traditional security
Possibly the biggest strength of using deepfakes for phishing is the ability to bypass conventional security measures. Most modern cybersecurity systems are geared against malware, ransomware, and brute-force attacks. Email filters have a chance at blocking traditional phishing attempts, but they're not equipped to handle a legitimate-seeming video call if it seems to originate from a trusted source.
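To make the contrast concrete, here is a deliberately crude sketch of the kind of text-based scoring an email filter might do. The keyword list and threshold are invented for illustration; production filters rely on trained models, header analysis, and sender reputation. The point is that none of this machinery has anything to inspect when the "message" is a live video call.

```python
# Toy urgency/lure scorer for email text. All keywords and thresholds
# here are invented for illustration, not taken from any real product.

SCARE_TERMS = {"urgent", "immediately", "suspended", "verify", "password",
               "wire", "invoice", "overdue", "final notice"}

def phishing_score(email_text):
    """Fraction of known scare/lure terms that appear in the message."""
    text = email_text.lower()
    hits = sum(1 for term in SCARE_TERMS if term in text)
    return hits / len(SCARE_TERMS)

def is_suspicious(email_text, threshold=0.3):
    """Flag the message when enough scare terms appear together."""
    return phishing_score(email_text) >= threshold
```

Even this toy version catches the classic "URGENT: verify your password immediately" pattern, which is exactly the kind of signal that vanishes when the lure arrives as a deepfaked voice or video instead of text.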
What's worse is that the human factor plays such a huge role here. The truth is, technology is limited by human activity. While it can aid us in detecting deepfakes, in the end, it comes down to the person in front of the computer to make the right choices.
Adapting to the threat: detection and prevention
In order to combat the threat of deepfakes, there will need to be many changes in the cybersecurity arena that combine technology with training and procedure changes. That's right, relying on technology alone isn't enough. Businesses will have to change their practices in order to adjust to the new threat. This will involve:
- Training and awareness: Companies should conduct regular training sessions to educate employees about deepfakes and the risks involved. This includes the signs of a deepfake phishing attempt and protocols for scrutinizing communications, even ones that appear genuine on the surface.
- Multi-factor authentication (MFA): If an employee receives a suspicious request, MFA can save the day when used correctly. Even if a phisher tricks an employee with a deepfake, they'll still need additional verification to continue, which deters attacks.
- Communication protocols: Every sensitive request should follow a risk-minimizing protocol. For example, financial transfers or data-sharing requests above a certain confidentiality level must always be verified through a secondary channel, ideally an offline method like a phone call.
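As a sketch of what "additional verification" can look like in code, here is a minimal time-based one-time password (TOTP) generator in the style of RFC 6238, using only the Python standard library. This is illustrative only; the function names are my own, and a real deployment should use a vetted MFA product rather than hand-rolled crypto.

```python
# Minimal RFC 6238-style TOTP sketch (standard library only).
# Illustrative: use an audited MFA library/product in production.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Derive a time-based one-time code from a shared Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_sensitive_request(secret_b32, supplied_code, t=None):
    """Only proceed with a sensitive request if the out-of-band code matches."""
    return hmac.compare_digest(totp(secret_b32, t=t), supplied_code)
```

The key property for deepfake defense is that the code arrives over a second, pre-established channel (an authenticator app), so a convincing face or voice on a call contributes nothing toward producing it.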
Real-world implications and cases
The threat of deepfake phishing is not just theoretical; there are already notable cases that showcase its real-world implications. One example is an incident involving the Brazilian crypto exchange BlueBenx, which was defrauded by criminals using AI to impersonate Binance COO Patrick Hillmann. The exchange was tricked into sending $200,000 and 25 million BNX tokens, all because of a convincing Zoom call.
If scammers can fool a crypto exchange, despite all the safety features involved, they can fool anyone. Incidents like these should be a bright red warning sign and serve as a wake-up call for any business not yet alert to this threat.
Industry forecasts expect the global AI market to balloon to more than $300 billion by 2025, and a large part of that will be cybersecurity companies providing AI-driven deepfake-busting software.
White hats are already working on defensive algorithms that will be able to detect artificial videos, pinpoint anomalies, and even track the source and maker of the deepfake content. But that's tomorrow; nowadays, businesses have to fend for themselves. Here are a few ways to stay proactive as deepfake phishing is becoming more sophisticated:
- Invest in research and development: Companies should consider investing in R&D specifically targeted at deepfake detection. Collaborating with academic institutions or tech startups is a great option, as innovation often comes from these sources.
- Open source collaboration: The cybersecurity community benefits from shared knowledge. Open source projects, where professionals from around the world collaborate, can lead to breakthroughs in deepfake detection.
- Regularly update policies: Cybersecurity policies should be living documents, frequently updated to reflect the latest threats and best practices. This cannot be stressed enough.
- Stay informed: Cyber threats are ever-evolving. Attend seminars, workshops, and conferences, and encourage others in your company to do the same so you can stay ahead of the curve.
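As a toy illustration of the anomaly-spotting idea behind detection research, the sketch below flags abrupt brightness jumps between consecutive frames of a clip. Everything here is invented for illustration: real detectors look for learned artifacts such as blending boundaries, unnatural blink rates, and lip-sync error, not raw brightness.

```python
# Toy frame-level anomaly flagger. The threshold and the brightness
# heuristic are illustrative only; real deepfake detectors use trained
# models over far richer features.

def frame_brightness(frame):
    """Mean pixel intensity of one flattened grayscale frame."""
    return sum(frame) / len(frame)

def flag_anomalies(frames, jump=40.0):
    """Return indices of frames whose brightness jumps sharply
    from the previous frame, a crude stand-in for 'artifact detection'."""
    suspects = []
    prev = frame_brightness(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame_brightness(frame)
        if abs(cur - prev) > jump:
            suspects.append(i)
        prev = cur
    return suspects
```

On a clip whose lighting changes smoothly, nothing is flagged; a spliced or regenerated frame that shifts the scene abruptly would be. Production detectors apply the same "compare against what's expected" logic, just with learned features instead of a hand-picked threshold.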
Conclusion
Cybercriminals are cunning and adaptable, which is why they are unfortunately often successful. Deepfake phishing is simply their newest way of deploying their scams.
That said, organizations and individuals are not helpless against them. By making use of technological advances and strengthening protocols to shore up the gaps in human psychology, they can stop many of these attacks before they succeed.
Finally, by understanding the threat, investing in measures that keep them ahead of the curve, and fostering a work culture of awareness and healthy skepticism, they can truly blunt the success of these attacks.