Research shows humans beat ChatGPT at crafting phishing attacks

Cybersecurity training services provider Hoxhunt Ltd. today released the results of a new study on the effectiveness of phishing attacks created by ChatGPT, and although the technology continues to improve, humans still performed better.

The study analyzed more than 53,000 email users in over 100 countries, comparing the effectiveness of simulated phishing attacks created by human social engineers with those created by large artificial intelligence language models. Although ChatGPT's capabilities can be exploited for malicious phishing activity, human social engineers still outperform AI at inducing clicks on malicious links.

The difference between the success rates of human-generated phishing emails and ChatGPT's was significant: human red teams achieved a 4.2% click-through rate versus 2.9% for ChatGPT across the sampled population of email users. Overall, the study reports that humans outperformed AI by 69% at persuading other humans to act.

The study also found that users with more experience in a security awareness and behavior change program showed significantly better protection against both human- and AI-generated phishing emails, with failure rates dropping from 14% among less trained users to between 2% and 4% among experienced users.

The idea that ChatGPT can be used for both good and evil is not new, but its malicious potential has not yet been thoroughly explored. The study found that AI creates opportunities for attackers and defenders alike. Although phishing attacks augmented with large language models still don't perform as well as human social engineering, the researchers say the gap is likely to close, and attackers are already putting AI to use.

“Security awareness and behavior change training needs to be dynamic with the evolving threat landscape to protect people and organizations from attacks,” the study notes.

Discussing the study’s findings, Melissa Bischoping, director of endpoint security research at endpoint management company Tanium Inc., told SiliconANGLE that AI genuinely opens up new opportunities for the effectiveness, creativity and personalization of phishing lures. But she said it’s important to remember that defenses against such attacks remain largely unchanged.

“It may be a good opportunity to update awareness training programs to educate employees about emerging technologies and trends in phishing/smishing/vishing tactics to encourage vigilance and a ‘think before you click’ culture,” Bischoping explained. “We will potentially see an increase in highly customized and persuasive lures at scale. Today, it’s much easier and much faster for a threat actor to ask an AI to write a message asking someone in a specific field to do something and to incorporate relevant and compelling details.”

Hoxhunt co-founder and CEO Mika Aalto said, “We now know from our research that effective, existing security awareness and behavior change programs protect against AI-enhanced phishing attacks.”

“As part of your holistic cybersecurity strategy, be sure to focus on your people and their email behavior, because that’s what our adversaries are doing with their new AI tools,” Aalto added. “Embed security as a shared responsibility throughout the organization, with ongoing training that empowers users to spot suspicious messages and rewards them for reporting threats, until human threat detection becomes a habit.”

Image: Hoxhunt

