Why We Need Human Risk Management Against AI-Enabled Cyber Threats
Apr 2025
by Tommaso de Zan, written in partnership with Martin Kraemer, KnowBe4

In February 2024, the CFO of Arup, a British engineering firm with nearly 20,000 employees worldwide, joined an internal company meeting online and instructed an employee in Hong Kong to execute several bank transfers. Trusting the apparent authenticity of the request, the employee made 15 transfers totalling GBP 20 million (USD 26.8 million) to five bank accounts before realizing he had been conned: Cybercriminals had perfectly recreated the CFO’s face, voice, and mannerisms using AI deepfakes.

The Worrying Trend

This incident reflects a larger unsettling trend: AI-powered polymorphic phishing campaigns surged by 47% from 2024 to 2025. In 2024, 76% of all phishing attacks featured at least one polymorphic characteristic. Deepfake-related financial fraud incidents have increased by 300%, with average losses exceeding USD 2.5 million per event. This is further compounded by the rise of ransomware delivered via phishing, which increased by 23% in early 2025, often disguised as routine HR communications or supply chain invoices.

The Effects of AI-powered Polymorphic Phishing

Polymorphic phishing refers to an advanced phishing campaign that randomizes email components, such as content or subject lines, to evade technical defences. These new techniques create subtle but deceptive variations that can overcome security controls such as blocklists, static signatures, secure email gateways, and native security tools. With recent AI advancements, phishing emails also have become increasingly personalized, significantly boosting attack success rates.

AI-generated and polymorphic phishing is becoming more effective by:

  • Tailoring distinct email content for every recipient, making it harder for secure email gateways to spot malicious attempts.
  • Mining victims’ social media to make attacks more credible and personalized, improving both persuasion tactics and victim selection.
  • Adapting to the victim’s behavior and changing its content and actions. For example, if a user clicks a URL but does not enter their credentials, AI can send a follow-up message urging the person to complete the action.
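The evasion mechanism behind the first point can be sketched in a few lines. This is a toy illustration, not a real detection engine: the lure text, URL, and hash-based blocklist are all hypothetical. It shows why a static signature that matches one captured sample misses every randomized variant of the same message.

```python
import hashlib
from itertools import product

# Hypothetical lure components a polymorphic campaign might randomize.
GREETINGS = ["Hi", "Hello", "Dear colleague"]
SUBJECTS = ["Urgent: invoice overdue", "Action required: unpaid invoice"]

def make_body(greeting: str, subject: str) -> str:
    """One variant of the same lure, with randomized components."""
    return f"{greeting},\n{subject}.\nPay here: http://example.invalid/pay"

def signature(body: str) -> str:
    """Static signature: a hash of the exact message body."""
    return hashlib.sha256(body.encode()).hexdigest()

# Six variants of one campaign; the defender blocklists the first sample.
variants = [make_body(g, s) for g, s in product(GREETINGS, SUBJECTS)]
blocklist = {signature(variants[0])}

# Every other variant hashes differently and slips past the blocklist.
evaded = [v for v in variants[1:] if signature(v) not in blocklist]
print(len(evaded))  # 5
```

Because each variant produces a distinct hash, a signature list grows linearly with the attacker's (essentially free) ability to generate variations, which is why the article's point about blocklists and static signatures holds.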

Are We Doomed?

AI-powered polymorphic phishing supercharges traditional phishing attempts, creating a formidable cognitive trap for victims. It adds a fake human element – such as voice and/or video – to a traditionally text-based email or SMS phishing attempt, making it very difficult for victims to recognize the attempt as fraudulent.

While the threat is significant, organizations can still avoid costly breaches by adopting a new mindset and using different training methods to counter these threats.

A revealing example comes from Italian car manufacturer Ferrari. In July 2024, a Ferrari executive began receiving WhatsApp messages purportedly from CEO Benedetto Vigna. After some initial exchanges, the employee received a direct phone call from “Vigna.” The CEO, claiming to use a different phone number due to the confidential nature of the matter, asked the employee to execute an unspecified currency-hedge transaction. While the CEO’s voice was remarkably similar, doubt crept into the employee’s mind. The employee then asked for the title of the book Vigna had recently recommended (it was “The Decalogue of Complexity: Acting, Learning and Adapting in the Incessant Becoming of the World”). The call ended abruptly: a clear sign of attempted fraud.
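The employee’s instinct can be generalized into a simple out-of-band verification pattern: challenge the caller with a shared fact an impersonator is unlikely to know, and halt the transaction on any failure or evasion. The sketch below is a hypothetical illustration of that pattern; the challenge store, function names, and halt-on-hang-up policy are assumptions, not Ferrari’s actual procedure.

```python
from typing import Optional

# Hypothetical shared-secret challenges agreed on out of band,
# e.g. "which book did you last recommend to me?"
SHARED_CHALLENGES = {
    "book_recommendation": "the decalogue of complexity",
}

def verify_caller(challenge_key: str, caller_answer: Optional[str]) -> bool:
    """Return True only when the caller supplies the expected answer.

    A missing answer (the caller dodges the question or hangs up, as in
    the Ferrari case) is treated the same as a wrong answer: verification
    fails and the transaction is halted.
    """
    expected = SHARED_CHALLENGES.get(challenge_key)
    if expected is None or caller_answer is None:
        return False
    return caller_answer.strip().lower() == expected

# The impersonator ends the call rather than answer the challenge:
print(verify_caller("book_recommendation", None))  # False -> halt transfer
```

The key design choice is fail-closed behavior: any outcome other than a correct answer, including silence, stops the money from moving.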

How Human Risk Management (HRM) Can Help

Ferrari’s near miss illustrates that Human Risk Management (HRM) is essential in the fight against AI deepfakes, complementing other sophisticated security controls and technologies. HRM solutions mitigate cybersecurity risks posed by humans in three ways:

  1. Measuring and quantifying human behaviors and associated security risks
  2. Developing training interventions based on such risks and concrete outcomes
  3. Enabling the workforce while building a positive security culture

A 2024 systematic review found that behavioral cybersecurity training positively impacts organizational cybersecurity behaviors. HRM goes beyond traditional security awareness by treating human risk as quantifiable and requiring continuous monitoring and training. Effective HRM programs focus on three elements:

  • Risk prioritization: using phishing simulation data to identify employees at higher risk, also depending on the criticality of their roles;
  • Continuous nudges: employing real-time training when employees handle sensitive data, not just annual compliance training;
  • A security culture first: promoting an organizational culture where authority can be challenged and questioned for security’s sake, as demonstrated by Ferrari’s case.
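As a rough sketch of the risk-prioritization step, the toy model below combines phishing-simulation click rates with role criticality to rank employees for targeted nudges. The scoring formula, weights, and employee records are all assumptions for illustration, not any vendor’s actual HRM methodology.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    sims_clicked: int        # phishing simulations where the user clicked
    sims_total: int          # phishing simulations received
    role_criticality: float  # 0.0 (low) .. 1.0 (e.g. finance, executives)

def risk_score(e: Employee) -> float:
    """Click rate weighted by role criticality (assumed 60/40 weights)."""
    click_rate = e.sims_clicked / e.sims_total if e.sims_total else 0.0
    return round(0.6 * click_rate + 0.4 * e.role_criticality, 3)

staff = [
    Employee("analyst", 1, 10, 0.2),
    Employee("cfo_assistant", 3, 10, 0.9),
    Employee("engineer", 0, 10, 0.3),
]

# Highest-risk employees receive continuous nudges first.
for e in sorted(staff, key=risk_score, reverse=True):
    print(e.name, risk_score(e))
```

A ranking like this is what lets an HRM program move from annual one-size-fits-all training toward the continuous, role-aware interventions described above.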

Way Forward

While phishing is not a recent social engineering technique, AI-powered polymorphic campaigns elevate the likelihood of successful attacks to a new level. Protection against these sophisticated threats requires a comprehensive approach, and Ferrari’s example demonstrates the need to put human beings at the center of organizations’ security posture through effective HRM solutions.