
Artificial intelligence has radically transformed IT—but at the same pace, it has accelerated cybercrime. Today, AI is used for large-scale phishing, voice and video deepfakes, and impersonation of well-known brands and individuals, which directly translates into financial and reputational losses for organizations. The line between legitimate communication and manipulation is increasingly blurred, while attacks are becoming more tailored, automated, and harder to detect.
Employee training and awareness alone are no longer enough. Effective defense requires an assume-breach mindset: layered controls, strict processes (e.g., the four-eyes principle), Zero Trust, strong identity control, and rapid incident response. In the AI era, security is not about prevention at all costs, but about an organization’s resilience to the impact of an attack.
Artificial intelligence is a breakthrough that has changed how the entire IT ecosystem operates. However, this progress has a darker, more dangerous side: cybercriminals are just as eager to use the very same technologies. An example? In June 2022, the FBI issued a warning about a wave of reports involving fake candidates who used deepfakes during recruitment processes. AI-generated faces and voices allowed criminals to impersonate IT specialists, often in order to gain access to corporate infrastructure.
And this is not an exception. According to “Gartner’s 2024 AI Security Survey,” as many as 73% of enterprises experienced at least one AI-related incident in the last 12 months. These are no longer isolated cases, but a new reality in which every organization must account for this type of threat.
AI changes the rules of the game. If anyone can now generate realistic video, stitch together a synthetic voice from a few samples, or write an email at a native-speaker level, it means the boundary between reality and manipulation is slowly disappearing. And we—both as users and as organizations—are increasingly forced to make decisions in a world where nothing is as obvious as it used to be.
In 2024, CERT Orange Polska warned about a smishing campaign in which cybercriminals impersonated the well-known and popular brand Revolut. Using fake domains with valid SSL certificates and identical branding, the attackers directed victims to spoofed login pages that harvested data, documents, and photos. The campaign ran at scale, with the SMS and phishing email content generated by AI tools.
This is only one of many similar examples. In recent months, we have increasingly observed phishing campaigns in which AI generates entire fake e-commerce sites (selling, for example, counterfeit versions of famous brands), customer support chats run by bots impersonating human consultants, and malicious advertising campaigns built on AI-generated deepfakes of well-known influencers and celebrities. Scammers use voice, image, and text generators whose output can be genuinely difficult to distinguish from the real thing. It is also worth emphasizing that automated AI campaigns run in real time and at scale: cybercriminals test multiple variants of messages, domains, and visuals using A/B testing algorithms, exactly as marketing teams do. As a result, the effectiveness of these attacks increases, and rapid detection becomes more difficult.
Companies in sectors such as e-commerce, fintech, logistics, and healthcare are particularly exposed: their customers make purchasing decisions based on trust in the brand. When that trust is undermined, it can lead to both financial and reputational consequences. That is why brand protection is becoming one of the most important elements of a cybersecurity strategy in the era of ubiquitous AI.
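What can brand protection look like in practice? One common building block is monitoring newly observed domains for lookalikes of your own. Below is a minimal sketch in Python; it assumes you already have a feed of candidate domain names (for example, from certificate transparency logs), and the BRAND_DOMAINS list and distance threshold are illustrative assumptions, not a production rule set.

```python
# Minimal lookalike-domain check: flags candidate domains whose first
# label is suspiciously close to a protected brand's domain label.
# Standard library only; real deployments also handle homoglyphs,
# subdomain tricks, and internationalized domain names.

BRAND_DOMAINS = ["revolut.com", "yourbrand.com"]  # illustrative examples

def levenshtein(a: str, b: str) -> int:
    """Classic edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

def suspicious(candidate: str, max_distance: int = 2) -> list[str]:
    """Return the brand domains this candidate may be imitating."""
    cand_label = candidate.lower().rstrip(".").split(".")[0]
    hits = []
    for brand in BRAND_DOMAINS:
        brand_label = brand.split(".")[0]
        close = 0 < levenshtein(cand_label, brand_label) <= max_distance
        embedded = brand_label in cand_label and cand_label != brand_label
        if close or embedded:
            hits.append(brand)
    return hits

if __name__ == "__main__":
    for domain in ["revo1ut.com", "revolut-login.net", "example.org"]:
        print(domain, "->", suspicious(domain))
```

Commercial brand-protection services add homoglyph detection, screenshot comparison, and automated takedown workflows, but the core idea is the same: treat every near-miss of your domain as a potential phishing site in preparation.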
Imagine this situation: a finance employee receives a phone call from the “CEO,” asking for an urgent transfer. The voice sounds familiar, and the situation seems genuinely urgent. But it is not the CEO on the line: it is an AI-generated voice clone, built from recordings of previous public appearances.
For many of us, this may sound like a hypothetical scenario, but similar incidents have already happened. In early 2024, the Hong Kong office of a multinational company lost over $25 million after an employee was manipulated into transferring funds following a video call in which the “CFO” and other participants were AI-generated deepfakes.
As AI tools evolve, phishing is moving to an entirely new level. Email content is generated to match the communication style of a specific employee, including organizational context and even industry-specific tone.
Video and audio deepfakes are especially common in social media scams, for example campaigns selling fake products using the AI-generated likeness of a well-known influencer. Victims are often unable to distinguish synthetic content from genuine content, especially when the attack exploits trust in a particular person.
In a world dominated by automated attacks, real security is about limiting the impact of compromise, responding quickly, and designing smart processes.
One such process is the authorization of sensitive actions such as bank transfers. More and more organizations are introducing the “four-eyes principle,” which requires every such action to be approved by at least two people. This model does slow processes down and can be inconvenient, but for now it is one of the most effective barriers against social engineering attacks. Why? Because even if one link fails, the second can react in time.
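To make this concrete, here is a minimal sketch of a four-eyes flow in Python. It assumes a simple in-memory model; the FourEyesGate and TransferRequest names are illustrative, and a real system would persist requests, authenticate approvers, and log every step.

```python
# Minimal four-eyes authorization sketch: a sensitive action executes
# only after two *different* authorized people approve it.
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    requester: str
    amount: float
    beneficiary: str
    approvals: set[str] = field(default_factory=set)
    executed: bool = False

class FourEyesGate:
    REQUIRED_APPROVALS = 2

    def __init__(self, authorized_approvers: set[str]):
        self.approvers = authorized_approvers

    def approve(self, request: TransferRequest, approver: str) -> None:
        if approver not in self.approvers:
            raise PermissionError(f"{approver} is not an authorized approver")
        if approver == request.requester:
            raise PermissionError("requester cannot approve their own transfer")
        request.approvals.add(approver)  # a set ignores duplicate approvals

    def execute(self, request: TransferRequest) -> None:
        if len(request.approvals) < self.REQUIRED_APPROVALS:
            raise PermissionError("four-eyes check failed: not enough approvals")
        request.executed = True
        print(f"Transfer of {request.amount} to {request.beneficiary} executed")

if __name__ == "__main__":
    gate = FourEyesGate({"alice", "bob", "carol"})
    req = TransferRequest(requester="dave", amount=25_000.0, beneficiary="ACME Ltd")
    gate.approve(req, "alice")
    gate.approve(req, "bob")  # second, independent pair of eyes
    gate.execute(req)
```

The key property is that no single compromised or convincingly impersonated account is ever sufficient to move money on its own.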
In this context, employee awareness still matters, but it should not be treated as the foundation of the entire strategy. Regular training and phishing simulations help, but history shows there will always be someone who ignores an alert, forgets a procedure, or gets caught off guard in a moment of distraction. That is why it is so important that security mechanisms do not rely solely on human vigilance. Traditional security processes must evolve: in addition to two-factor authentication and behavioral verification, Zero Trust principles are gaining increasing importance.
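What does “never trust, always verify” look like at the policy level? Below is a minimal, illustrative sketch: the signals and thresholds are assumptions made for the example, and real deployments rely on dedicated identity and device-posture platforms rather than hand-written rules.

```python
# Minimal Zero Trust-style policy check: every request is evaluated on
# identity, device posture, and context, never on network location alone.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    mfa_passed: bool           # second factor verified for this session
    device_compliant: bool     # e.g., managed, patched, disk-encrypted
    geo_country: str
    resource_sensitivity: str  # "low" or "high"

ALLOWED_COUNTRIES = {"PL", "DE"}  # illustrative policy input

def evaluate(request: AccessRequest) -> str:
    """Return 'allow', 'step_up' (re-verify), or 'deny'."""
    if not request.mfa_passed:
        return "deny"     # identity not strongly proven
    if not request.device_compliant:
        return "deny"     # untrusted device posture
    if request.geo_country not in ALLOWED_COUNTRIES:
        return "step_up"  # unusual context: require re-verification
    if request.resource_sensitivity == "high":
        return "step_up"  # sensitive resources always get a fresh check
    return "allow"

if __name__ == "__main__":
    req = AccessRequest(user="j.kowalski", mfa_passed=True,
                        device_compliant=True, geo_country="PL",
                        resource_sensitivity="low")
    print(evaluate(req))  # -> allow
```

The specific rules matter less than the shape of the decision: access is evaluated per request, based on current identity, device, and context signals, rather than granted once per network session.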
The biggest challenge remains the fact that cybercriminals can use AI to act quickly and at massive scale. That is why defense must be layered—combining process, technology, and risk analysis in a way that assumes one thing: humans will always be the weakest link.
If you want to talk about how to secure your organization against AI-enabled abuse, contact us. We will analyze your environment and select solutions that genuinely increase your company’s resilience against new forms of cyber threats.
This article was prepared by a 4Prime expert and subsequently edited with the support of artificial intelligence tools.
