AI on both sides of cybersecurity: ally and threat in the digital world
Artificial intelligence is neither good nor bad in itself; as with any tool, everything depends on how it is used. In cybersecurity it plays a dual role: while security teams use its capabilities to bolster defenses, cybercriminals use it to hone their attacks.
Header image generated with Midjourney (AI)
Imagine receiving an email notifying you of a temporary suspension of your bank account, another alerting you to an undelivered package, and a third proclaiming that you've won a gift card. Each message includes a link, urging you to provide your personal information. At first glance, these emails may appear legitimate, but they are, in fact, anything but. They are prime examples of social engineering—a sophisticated toolkit of manipulation tactics crafted to deceive individuals and extract sensitive information for illicit gain. With the advent of generative artificial intelligence, these deceptive strategies have not only proliferated but have also evolved in complexity, making them increasingly difficult to detect.
Targeted phishing attacks, a widespread form of social engineering, traditionally required extensive research on the intended victim. This labor-intensive and costly process was largely manual, which naturally limited the frequency of such attacks. However, the emergence of generative artificial intelligence now enables the automation of these preparatory steps, allowing for the execution of targeted phishing campaigns on a massive scale. According to a report by the U.S. cybersecurity firm Zscaler, phishing attacks leveraging generative AI surged by 60% globally between January and December 2023.
In addition, generative AI enables the immediate creation of convincing messages that are more likely to deceive victims: emails, calls and text messages that appear to come from legitimate entities, such as a social network, a bank or a government institution.
How does AI affect phishing?
The Internet lets fraudsters strike without exposing their physical presence, which emboldens them with a sense of security. The digital landscape also offers numerous opportunities for automation, enabling cybercriminals to target vast numbers of potential victims with minimal effort.
As companies and individuals increasingly embrace digitalization, social engineering tactics on the Internet have evolved in parallel. What began as email-based schemes known as 'phishing' has expanded to a variety of new channels. Scammers have adapted their techniques to exploit text messages and instant messaging apps ('smishing'), 'lost' USB drives left as bait ('baiting'), phone calls ('vishing') and, most recently, QR codes, which are now prevalent in both physical and digital spaces ('QRishing').
Over time, social engineering attacks have grown markedly more sophisticated. What once involved mass mailings with generic content has evolved into highly targeted campaigns. These attacks now focus on specific groups, using carefully crafted messages that resonate with the intended audience, making the deception much harder to detect.
Phishing attacks are sometimes disguised as messages from legitimate contacts or crafted to mimic authentic processes within a victim's organization, a tactic known as spear phishing. Despite accounting for only 0.1% of all emails sent, spear phishing is alarmingly effective: it is responsible for 66% of all security breaches, according to a report by the U.S. security firm Barracuda.
As a result, cyberattacks in Spain have surged dramatically. In 2023, the country recorded 107,777 incidents, a 94% increase over 2022, according to a report by the Spanish National Cryptology Center (CCN). Consequently, cybersecurity has become the top concern for 48% of Spanish companies, leading them to boost their budgets for information technology professionals by an average of 4.7 million euros, as reported by the Hiscox Cyber Readiness Report.
The targets of social engineering have also evolved significantly over time. Initially, attackers focused on obtaining easily monetizable information, such as bank passwords, or directly tricking victims into making payments. However, as user identity verification systems—like biometrics—have improved, the focus has shifted. Today, attackers increasingly aim to install malware on a victim's device, granting them control and access to perform a wide range of malicious activities.
Image generated with Midjourney (AI).
The advent of AI
Given the rapid advancements in social engineering, one might wonder whether we've reached the pinnacle of sophistication or if new avenues remain to be explored. This is where the significant recent breakthroughs in artificial intelligence come into play.
How can AI make social engineering attacks more dangerous?
Generative AI has significantly lowered the barrier for conducting phishing and other digital scams, making these activities easier than ever before. This advancement has given rise to programs like WormGPT, an AI language model specifically trained on fraudulent data to assist hackers in their criminal endeavors.
Generative AI-powered scams are thus more sophisticated and can take a variety of forms, according to an MIT Technology Review article:
- Phishing. The rise of ChatGPT has coincided with a marked increase in phishing emails. Tools like GoMail Pro are already incorporating this generative AI to translate or refine the messages they send to their victims, thereby amplifying the effectiveness of these fraudulent schemes.
- Deepfakes. Deepfakes are AI-generated synthetic images, videos, and audio that convincingly mimic reality. A recent high-profile case involved global pop star Taylor Swift, whose face was digitally recreated without her consent to produce explicit images. Another striking incident hit the Hong Kong office of a multinational company, which lost HK$200 million (about €24 million) in a scam: cybercriminals deceived an employee by creating digital replicas of his superiors on a video call and tricking him into transferring the funds.
- Jailbreak-as-a-service. Some users have found ways to manipulate generative AI systems into producing output that violates their safety policies, such as code for ransomware or text for fraudulent emails, and these working jailbreak prompts are now sold to other criminals as a paid service.
- Doxxing. Doxxing is the online practice of exposing individuals' private and identifying information. This phenomenon has been amplified by generative AI, which utilizes models trained on vast amounts of personal data available on the internet. Researchers have demonstrated that, even from minimal clues, these models can uncover detailed information about individuals, including their ethnicity, location, and occupation.
"It’s getting harder and harder to believe what you read, see and hear online. That’s worrisome both because you are going to have people victimized by deepfakes and because there will be people who will falsely claim the “AI defense” to avoid accountability," explained Hany Farid, a computer science professor at the University of California, in an interview.
A powerful ally
While artificial intelligence is often leveraged to refine cyberattacks, it is also a powerful tool for enhancing digital security. AI systems have long been employed to detect anomalies that could signal a cyberattack or fraud. Recent advancements have significantly improved the accuracy of these defense systems, allowing them to detect and neutralize increasingly sophisticated threats more effectively and at earlier stages.
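As a rough illustration of how this kind of anomaly detection works, the sketch below trains scikit-learn's IsolationForest on synthetic "normal" traffic and flags sessions that deviate from it. The feature set, numbers and threshold are invented for the example and do not come from any particular security product.

```python
# Minimal sketch of AI-based anomaly detection for security monitoring.
# The features, numbers and threshold are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [bytes sent (KB), login attempts, distinct hosts contacted]
normal_traffic = rng.normal(loc=[500.0, 1.0, 3.0], scale=[100.0, 0.5, 1.0], size=(1000, 3))

# Train on traffic assumed to be mostly benign, so the model learns what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new sessions: a prediction of -1 flags an anomaly worth escalating to an analyst.
new_sessions = np.array([
    [480.0, 1.0, 3.0],       # looks like ordinary activity
    [50_000.0, 40.0, 90.0],  # exfiltration-like burst with many login attempts
])
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "ANOMALY - alert the security team" if label == -1 else "normal"
    print(session, "->", status)
```

In a real deployment the features would come from live telemetry, and flagged sessions would feed an alert queue for analysts rather than a print statement.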
Artificial intelligence systems can also automate cybersecurity monitoring tasks, completing them faster and eliminating many of the human errors involved. This has several advantages for cyber defense, according to an IBM article:
- AI enhances data protection in a hybrid cloud environment—where public and private cloud services are combined—by monitoring for anomalies in data access. When irregularities are detected, AI systems promptly alert cybersecurity professionals, enabling them to respond to potential threats swiftly.
- AI-driven risk analysis is more accurate: it produces incident summaries and automates incident responses, speeding up alert investigation and triage by an average of 55 percent.
- AI reduces fraud by up to 90 percent with mechanisms such as user authentication: systems can distinguish between real people and malicious activity by analyzing behavioral data (a toy sketch of this idea follows the list).
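To make that last point concrete, here is a toy sketch of behavioral authentication based on typing rhythm: it compares observed inter-keystroke intervals against a stored user profile. The profile values, threshold and sample data are assumptions made up for this illustration.

```python
# Toy sketch of behavioral user authentication based on keystroke dynamics.
# The stored profile, threshold and sample data are invented for illustration.
from statistics import mean

def looks_like_user(intervals_ms: list[float], profile_mean: float,
                    profile_std: float, z_threshold: float = 3.0) -> bool:
    """Compare observed inter-keystroke intervals with the user's stored profile.

    Returns False when the typing rhythm deviates too far from the profile,
    which could indicate a bot or a different person at the keyboard.
    """
    z_score = abs(mean(intervals_ms) - profile_mean) / profile_std
    return z_score <= z_threshold

# Stored profile: this user normally types with ~180 ms between keys (std 40 ms).
PROFILE_MEAN_MS, PROFILE_STD_MS = 180.0, 40.0

human_like = [170, 195, 160, 210, 185]  # within the user's normal rhythm
bot_like = [8, 9, 7, 8, 9]              # machine-speed, near-uniform keystrokes

print(looks_like_user(human_like, PROFILE_MEAN_MS, PROFILE_STD_MS))  # True
print(looks_like_user(bot_like, PROFILE_MEAN_MS, PROFILE_STD_MS))    # False
```

Real systems combine many more signals, such as mouse movement, device fingerprints and navigation patterns, and use learned models rather than a single threshold.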
While having defense mechanisms against fraudulent attacks is crucial, educating users to recognize scams is equally important. To this end, the Spanish National Cybersecurity Institute (INCIBE) provides a series of guidelines to help individuals identify and avoid such fraud:
- Do not open e-mails from unknown senders.
- Do not reply to such e-mails or send personal information, such as passwords or bank details.
- Keep all devices and software up to date.
- Verify who is sending a message before providing any information.
- Do not click on a link before checking which website it actually leads to (see the sketch after this list).
- Do not download files attached to the message.
- Use up-to-date security software.
- Enable two-factor authentication whenever an online service allows it.
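For the link-checking advice above, hovering over the link is usually enough, but the destination can also be resolved programmatically. The sketch below, which assumes the third-party requests library and uses made-up example URLs, follows a link's redirects and flags it when the final hostname does not match the domain it claims to belong to.

```python
# Sketch: reveal where a link actually leads before trusting it.
# Assumes the third-party `requests` library; the URLs are made-up examples.
# Caution: this contacts the target server, so run it only in a safe environment.
from urllib.parse import urlparse

import requests

def final_destination(url: str) -> str:
    """Follow redirects with a HEAD request and return the final hostname."""
    response = requests.head(url, allow_redirects=True, timeout=5)
    return urlparse(response.url).hostname or ""

def link_is_suspicious(url: str, expected_domain: str) -> bool:
    """Flag the link when it does not resolve to the domain it claims to belong to."""
    host = final_destination(url)
    return not (host == expected_domain or host.endswith("." + expected_domain))

# A message claiming to come from a bank should resolve to the bank's real domain;
# "mybank.example" stands in for the legitimate site.
print(link_is_suspicious("https://short.example/abc123", "mybank.example"))
```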
The two faces of AI
As social engineering techniques have evolved to become more profitable for attackers, defense systems have simultaneously advanced, bolstering their ability to protect against these increasingly sophisticated threats.
Artificial intelligence has the potential to revolutionize the cybersecurity landscape, impacting both attackers and defense teams. With AI enabling more advanced social engineering tactics, only the strategic application of AI techniques can effectively safeguard us against emerging cyber threats.
In light of these evolving trends, both users and companies must exercise utmost vigilance to identify fraud before it escalates. It is crucial to promptly alert relevant parties upon detecting any fraudulent attempts and to seek expert assistance when uncertainties arise.