Artificial intelligence in phishing: Do attackers or defenders benefit more?

As cybercrime has increased, the cyber security industry has had to adopt advanced technologies to keep up. Artificial intelligence (AI) has quickly become one of the most useful tools for stopping cyberattacks, but attackers can also use it. Recent phishing trends are a great example of both sides of the issue.

Phishing is the most common type of cybercrime today. As more companies have become aware of this growing threat, more have implemented AI tools to stop it. However, cybercriminals are also stepping up their use of AI in phishing. Here’s how both sides are using this technology and who’s benefiting more from it.

How AI Helps Fight Phishing

Phishing attacks take advantage of people’s natural inclination towards curiosity and fear. Because this social engineering is so effective, one of the best ways to protect yourself from it is to not see it in the first place. That’s where AI comes in.

Anti-phishing AI tools usually come in the form of advanced email filters. These programs scan your incoming messages for signs of phishing attempts and automatically send suspicious emails to your junk folder. Some newer solutions can detect phishing emails with 99.9% accuracy by training on real examples and generated variations of deceptive messages, which helps them spot new variants.
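To make the filtering idea concrete, here is a toy Python sketch of how an email filter might score a message and route it. Real AI filters use trained models with far richer features; the patterns, weights, and threshold below are invented for illustration only.

```python
import re

# Invented suspicious patterns and weights -- a real filter learns these
# from labeled training data rather than hard-coding them.
SUSPICIOUS_PATTERNS = {
    r"verify your account": 0.4,
    r"urgent(ly)?": 0.3,
    r"click (here|the link)": 0.3,
    r"password": 0.2,
    r"https?://\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}": 0.5,  # raw-IP links
}

def phishing_score(email_body: str) -> float:
    """Return a score in [0, 1]; higher means more phishing-like."""
    text = email_body.lower()
    score = sum(w for pat, w in SUSPICIOUS_PATTERNS.items()
                if re.search(pat, text))
    return min(score, 1.0)

def route_email(email_body: str, threshold: float = 0.5) -> str:
    """Send high-scoring messages to junk, everything else to the inbox."""
    return "junk" if phishing_score(email_body) >= threshold else "inbox"
```

A message like "URGENT: verify your account, click here" would trip several patterns and land in junk, while an ordinary note would pass through untouched.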

As security researchers discover more phishing emails, they can feed these models more data, making them more accurate. AI’s continuous learning capabilities also help refine models to reduce false positives.

AI can also help stop phishing attacks after you click on a malicious link. Automated monitoring software can establish a baseline of normal behavior to detect the anomalies that typically occur when someone else uses your account. It can then lock the account and alert security teams before the intruder does too much damage.
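The baseline-and-anomaly idea above can be sketched in a few lines. This is a deliberately simplified illustration: real monitoring tools model many signals (location, device, access patterns), whereas this example tracks a single invented metric, emails sent per hour, and flags values far from the historical mean.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], observed: float,
                 z_threshold: float = 3.0) -> bool:
    """Return True when `observed` lies more than `z_threshold`
    standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

# Illustrative baseline: a user who normally sends 5-8 emails per hour.
history = [5, 6, 7, 8, 6, 5, 7]
```

With this baseline, a sudden burst of 120 emails in an hour, the kind of volume a hijacked account might produce, would be flagged, while ordinary activity would not.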

How attackers use AI in phishing

AI’s potential to stop phishing attacks is impressive, but it’s also a powerful tool for crafting phishing emails. As generative AI like ChatGPT becomes more available, it makes phishing attacks more effective.

Spearphishing, which uses personal information to create customized messages for users, is one of the most effective types of phishing. An email that gets all your personal information right will naturally be much more persuasive. However, creating these messages has traditionally been difficult and time-consuming, especially at scale. This is no longer the case with generative AI.

Artificial intelligence can create a large number of customized phishing messages in a fraction of the time it would take a human. It’s also better than people at writing convincing fakes. In a 2021 study, AI-generated phishing emails saw significantly higher click-through rates than those people wrote, and that was before ChatGPT was released.

Just as marketers use artificial intelligence to tailor their customer engagement campaigns, cybercriminals can use it to craft effective, user-specific phishing messages. As generative AI improves, these hoaxes will only become more convincing.

Human frailties keep attackers in the lead

As attackers and defenders take advantage of AI, which side has seen the most obvious benefits? If you look at the latest trends in cybercrime, you will see that cybercriminals have thrived despite more sophisticated defenses.

Business email compromise (BEC) attacks increased by 81% in the second half of 2022, and employees opened 28% of those messages. That’s part of a longer-term 175% increase over the past two years, indicating that phishing is growing faster than ever. These attacks are also effective, stealing $17,700 per minute, which is likely why they are behind 91% of cyberattacks.

Why has phishing increased so much despite AI improving anti-phishing defenses? Perhaps it comes down to the human element. Employees must actually use these tools for them to be effective. Beyond that, employees may engage in other dangerous activities that make them susceptible to phishing attempts, such as accessing their work accounts with unauthorized, unprotected personal devices.

The previously mentioned survey also found that workers report only 2.1% of attacks. This lack of communication can make it difficult to see where and how security measures need to be improved.

How to protect yourself from the growing number of phishing attacks

Given this alarming trend, businesses and individual users must take steps to stay safe. Implementing AI anti-phishing tools is a good start, but it can’t be your only recourse. Only 7% of security teams don’t use or plan to use AI, yet phishing continues to dominate, so companies need to address the human element as well.

Since people are the weakest link against phishing attacks, they should be the focus of mitigation efforts. Organizations must make security best practices a more prominent part of employee engagement and ongoing training. These programs should cover how to spot phishing attacks and why they are a problem, and include simulations to test employees’ retention of knowledge after training.

Using stronger identity and access management tools is also important, as they help contain breaches after an attacker signs in to an account. Even experienced employees can make mistakes, so you need to be able to spot and stop compromised accounts before they cause major damage.

AI is a powerful tool for both good and bad

AI is one of the most disruptive technologies in recent history. Whether that’s good or bad depends on how it’s used.

It’s important to realize that AI can help cybercriminals as much as, if not more than, cybersecurity professionals. When organizations acknowledge these risks, they can take more effective steps to combat the growing number of phishing attacks.
