Concern is growing about the use of generative artificial intelligence (AI) models for malicious purposes. Security researchers have demonstrated that generative AI can write code for polymorphic malware and create convincing lures for phishing emails, and that the guardrails put in place to prevent generative AI tools such as ChatGPT from being used for malicious purposes can be easily circumvented. Further, alternative tools such as WormGPT and FraudGPT are available specifically for use by cybercriminals. What is largely unknown is the extent to which cybercriminals are taking advantage of generative AI. Mandiant has found evidence to suggest that cybercriminals have been using generative AI, although only for limited purposes such as phishing, business email compromise (BEC) attacks, and image manipulation to defeat know-your-customer (KYC) requirements.
AI and Social Engineering Experts Go Head-to-Head
Researchers at IBM Security’s X-Force Red team have shown how effective generative AI tools are at generating convincing phishing emails that appear to have been written by humans. The emails were so convincing that the researchers created a test that squared off AI against humans to see which was better at phishing.
Stephanie Carruthers, Chief People Hacker for IBM X-Force Red, said her team was able to circumvent the guardrails of ChatGPT and develop convincing phishing emails with five simple prompts. The campaign took just 5 minutes to create from start to finish, not including the time it would take to set up the infrastructure. The prompts her team used covered identifying the top areas of concern for employees in the healthcare industry, determining the best social engineering techniques to use, identifying the individuals and companies that should be impersonated for the best results, and generating a phishing email template based on that information. Carruthers writes phishing emails for a living and said it would typically take her team around 16 hours to develop a phishing campaign. At just 5 minutes, ChatGPT saves phishers almost two days of work, since 16 hours equates to two 8-hour workdays.
For the head-to-head test, a team of seasoned X-Force Red social engineers was tasked with creating a campaign. Through open-source intelligence (OSINT) gathering, the team identified the launch of an employee wellness program that would serve as an ideal lure and got to work constructing their phishing email. The two emails were then compared through A/B testing, with the results measured by click rates and reporting rates.
Humans Still Have the Edge, but the Margins Are Small
The good news is that humans still have the edge when it comes to phishing, achieving a click rate of 14% compared to 11% for the AI-generated emails. The AI-generated emails were also more likely to be reported as suspicious, with a reporting rate of 59% compared to 52% for the human-generated emails. The bad news is that the margins were small. Seasoned phishers may be able to outperform AI, but the AI-generated emails had a perfectly acceptable click rate and reporting rate, and the campaign took only 5 minutes to create rather than 16 hours.
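To put those margins in perspective, here is a quick back-of-envelope comparison using the figures quoted above (a minimal sketch; the variable names are illustrative, not from the X-Force study):

```python
# A/B results reported by X-Force Red: click rate and reporting rate
# for the human-written vs. AI-generated phishing emails.
human = {"click": 0.14, "report": 0.52}
ai = {"click": 0.11, "report": 0.59}

click_gap = human["click"] - ai["click"]    # 3 percentage points
relative_gap = click_gap / human["click"]   # AI landed within ~21% of the human result

print(f"Click-rate gap: {click_gap:.0%} absolute, {relative_gap:.0%} relative")
print(f"Reporting-rate gap: {ai['report'] - human['report']:.0%}")
```

In other words, the AI campaign landed within a few percentage points of seasoned professionals at a tiny fraction of the effort.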
The test showed humans still have the upper hand when it comes to social engineering because they are better than AI at emotional manipulation. “Humans understand emotions in ways that AI can only dream of. We can weave narratives that tug at the heartstrings and sound more realistic, making recipients more likely to click on a malicious link. For example, humans chose a legitimate example within the organization, while AI chose a broad topic, making the human-generated phish more believable,” explained Carruthers. The human emails also had greater personalization and used shorter and more succinct subject lines.
Carruthers said her team has not observed wide-scale use of generative AI in current campaigns, but cybercriminal use of generative AI is increasing. AI is also improving and will, at some point, reach parity with and then outperform humans. Carruthers offers five tips for preparing for AI-generated phishing emails: if in doubt, call the sender; don’t assume phishing emails will have poor grammar; revamp and improve social engineering programs to account for AI; strengthen identity and access management controls; and constantly adapt and innovate, because that is what cybercriminals are doing.
“We have seen, as predicted, Generative AI being used to perfect the content distributed through phishing emails. The focus must remain on the impersonation aspect of phishing which renders the content irrelevant. We need to verify senders and embedded links which will eliminate the need to worry about how convincing the text might be,” Dror Liwer, co-founder of cybersecurity company Coro, told the HIPAA Journal.
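To illustrate the kind of embedded-link verification Liwer describes, here is a minimal sketch (hypothetical helper code, not Coro’s product logic; standard library only) that flags a classic phishing tell: an anchor whose visible text shows one domain while its href points to another.

```python
# Minimal sketch of embedded-link verification, in the spirit of the
# advice quoted above. Hypothetical helper code, not Coro's product logic.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collect (visible_text, href) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []  # list of (visible_text, href) tuples

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None


def visible_domain(text: str) -> str:
    """If the anchor text looks like a URL or bare domain, return its host."""
    host = urlparse(text if "://" in text else "http://" + text).netloc
    return host.lower() if "." in host else ""


def registrable(host: str) -> str:
    # Crude eTLD+1 approximation; a production system would consult
    # the Public Suffix List rather than taking the last two labels.
    return ".".join(host.lower().split(".")[-2:]) if host else ""


def suspicious_links(html_body: str) -> list:
    """Flag links whose displayed domain differs from the real target."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for text, href in auditor.links:
        shown = registrable(visible_domain(text))
        actual = registrable(urlparse(href).netloc)
        if shown and actual and shown != actual:
            flagged.append((text, href))
    return flagged


if __name__ == "__main__":
    body = '<p>Reset your password at <a href="http://evil.example/reset">mybank.com</a></p>'
    print(suspicious_links(body))  # [('mybank.com', 'http://evil.example/reset')]
```

Pairing a check like this with sender verification via SPF, DKIM, and DMARC alignment addresses the impersonation side of the problem that Liwer highlights.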
While the bad guys can take advantage of AI, AI can also be leveraged to improve defenses, as Roger Grimes, data-driven defense evangelist at KnowBe4 explained. “KnowBe4 has been using AI-enabled technology for over 10 years. We know that our AI-enabled technology improves the educational experience for customers and decreases cybersecurity risk. It isn’t like AI is just being used by the bad guys. The good guys invented it and have been using it even longer. The question is how the increased use of AI by the good side ends up compared to the increase in AI used by the bad side? Who gets the bigger benefit? I wouldn’t absolutely bet that AI only benefits the attacker.”
Further Information on AI-Augmented Phishing and the Threat to Healthcare
On October 26, 2023, the Health Sector Cybersecurity Coordination Center (HC3) published a white paper outlining the risks to the healthcare and public health (HPH) sector from AI-augmented phishing and offering advice on countermeasures and mitigations that HPH organizations can implement to improve their defenses – HC3: White Paper: AI-Augmented Phishing and the Threat to the Health Sector.