Separating Hype from Reality: How Cybercriminals Actually Use AI

From boardroom conversations to industry events, “artificial intelligence” is the buzzword reshaping how we collectively think about the future of security. Views are diverse, to say the least: some insist AI is the long-overdue silver bullet, while others believe it will gradually undermine the digital society as we know it.
As with most emerging technologies, these hype cycles and the bold claims that accompany them often bear little resemblance to reality. Threat actors absolutely use AI to amplify and streamline their efforts, but the sensational scenarios we so often hear about remain largely theoretical.
Rather than more fear, uncertainty, and doubt, defenders need a clear-eyed assessment of how AI is transforming the cybercrime ecosystem, along with insight into how that will evolve. By separating fact from fiction, security teams everywhere can adjust their defense strategies for AI-enabled cybercrime, anticipate how attackers will use AI next, and effectively protect critical assets.
The democratization of AI provides new capabilities for attackers
As with any new technology, it is easy to assume that cybercriminals are using AI to create entirely new attack vectors. Rather than using AI to completely reshape the threat landscape, however, attackers mainly use these tools to turbocharge their existing operations. Threat actors rely on AI to boost the efficiency, accuracy, and scale of techniques such as social engineering and malware deployment. For example, cybercriminals are using AI-powered tools such as FraudGPT and ElevenLabs to craft phishing and vishing attacks that mimic the tone and style of company executives, making it increasingly difficult for recipients to recognize a potential threat.
Adversaries also use these AI tools for native-language support, making it easy to develop lures that can be used and reused almost anywhere in the world.
The democratization of AI is driving these shifts in attacker capability, and even novice threat actors can now carry out successful (and profitable) attacks. Much of what an operation once required, from substantial coding expertise to complex logistical planning, can now be obtained easily through AI. Threat actors rely on AI as an “easy button,” using the technology to automate labor-intensive tasks such as scaling their reconnaissance efforts, developing highly personalized and context-aware social engineering communications, and optimizing existing malicious code to evade detection.
AI is also reshaping what is available on the dark web, with AI-as-a-Service offerings booming across the cybercriminal underground. Much like the ransomware-as-a-service model that became commonplace over the past decade, adversaries can now purchase AI-enhanced services: reconnaissance tools, deepfake generators, social engineering kits tailored to specific industries or languages, and more. The result is cybercrime that is cheaper, faster, more targeted, and harder to detect.
The evolution of artificial intelligence: preparing for tomorrow’s threats
As security professionals develop our defense strategies, we must consider how AI will reshape cybercrime in the coming years, anticipate the fundamental pivots attackers will make, and understand what this evolution means for our entire industry. AI will inevitably influence vulnerability discovery, enable the creation of novel attack vectors, and promote the use of swarms of autonomous agents. Future AI advances will also accelerate the discovery of zero-day vulnerabilities, a prospect that deserves defenders’ serious attention.
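To make the vulnerability-discovery point concrete, here is a minimal sketch of mutation-based fuzzing, the kind of automated technique that already underpins much of this work and that AI is expected to make faster and smarter. The parse_record target and its bug are hypothetical, invented purely for illustration.

```python
# Minimal mutation-based fuzzing sketch (illustrative only).
# parse_record stands in for real target software; its bug is contrived.
import random

def parse_record(data: bytes) -> bytes:
    """Parse a record laid out as [length][payload...][checksum]."""
    if len(data) < 3:
        raise ValueError("record too short")
    length = data[0]
    payload = data[1:1 + length]
    # Bug: trusts the length byte, so this index can run past the buffer.
    checksum = data[1 + length]
    if checksum != sum(payload) % 256:
        raise ValueError("bad checksum")
    return payload

def mutate(seed: bytes) -> bytes:
    """Randomly flip, insert, or delete a few bytes in a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        op = random.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            data[random.randrange(len(data))] ^= random.randrange(1, 256)
        elif op == "insert":
            data.insert(random.randrange(len(data) + 1), random.randrange(256))
        elif op == "delete" and data:
            del data[random.randrange(len(data))]
    return bytes(data)

payload = b"hello"
seed = bytes([len(payload)]) + payload + bytes([sum(payload) % 256])
for i in range(10_000):
    candidate = mutate(seed)
    try:
        parse_record(candidate)
    except ValueError:
        pass  # graceful rejection: the parser handled the bad input
    except Exception as exc:  # unhandled crash = a lead worth triaging
        print(f"crash after {i} attempts: {candidate!r} -> {type(exc).__name__}")
        break
```

A handful of random byte edits is usually enough to trip the unchecked index; the promise (and threat) of AI here is replacing blind mutation with models that learn which inputs are most likely to break a target.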
In addition to using AI to mine for new vulnerabilities, cybercriminals will also use AI to develop new attack vectors. Although this is not happening at scale today, it is a concept likely to become reality. For example, attackers could exploit vulnerabilities within AI systems themselves or mount sophisticated data poisoning attacks against the machine learning models organizations depend on.
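As a toy sketch of the data poisoning idea (synthetic data and an arbitrary 40% flip rate, chosen purely for demonstration), consider how quietly relabeling malicious training samples as benign degrades the detector an organization ultimately deploys:

```python
# Toy data-poisoning sketch (synthetic data; all rates are illustrative).
# Scenario: an attacker relabels "malicious" training samples as "benign"
# so the retrained detector waves real attacks through.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Detector trained on trustworthy labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoning: flip 40% of the malicious (label 1) training samples to benign.
rng = np.random.default_rng(0)
malicious_idx = np.flatnonzero(y_train == 1)
flipped = rng.choice(malicious_idx, size=int(0.4 * len(malicious_idx)),
                     replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

# Detector retrained on the tampered labels.
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Detection rate (recall on the malicious class) before and after poisoning.
for name, model in (("clean", clean), ("poisoned", poisoned)):
    rate = recall_score(y_test, model.predict(X_test))
    print(f"{name} detector catches {rate:.0%} of malicious samples")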
Finally, while swarms of autonomous agents conducting entire cyberattacks may not seem plausible just yet, it is crucial that the cybersecurity community monitor the incremental ways threat actors are leveraging automation to turbocharge their operations.
Build a future of cyber resilience
Responding to more advanced AI-driven threats requires us to evolve our defenses in kind, and the good news is that many security practitioners have already begun to adapt. Teams are using frameworks such as MITRE ATT&CK to map attack chains and are deploying AI for predictive modeling and anomaly detection. Defenders should also invest in activities such as AI-powered threat hunting and hyperautomated incident response, and they may need to rethink their security architecture along the way.
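As a minimal sketch of what that anomaly-detection layer can look like, the example below trains an isolation forest on synthetic login telemetry; the features and values are hypothetical, and a real deployment would draw on far richer signals:

```python
# Minimal anomaly-detection sketch (synthetic telemetry; illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event: [hour of day, MB transferred,
# failed attempts before success]. Baseline = normal business activity.
baseline = np.column_stack([
    rng.normal(13, 2, 1000),   # mid-day logins
    rng.normal(50, 15, 1000),  # modest data transfer
    rng.poisson(0.2, 1000),    # rare failed attempts
])

# Fit the model to the baseline; ~1% of events are assumed anomalous.
model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

new_events = np.array([
    [14, 55, 0],   # typical workday login
    [3, 900, 7],   # 3 a.m., exfil-sized transfer, many failed attempts
])
for event, verdict in zip(new_events, model.predict(new_events)):
    label = "ANOMALY" if verdict == -1 else "normal"
    print(f"{event} -> {label}")
```

An isolation forest needs no labeled attack data, which is why this style of unsupervised detection pairs well with the threat-hunting workflows described above: it surfaces the outliers, and humans (or automated playbooks) triage them.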
Let’s not forget that AI has given cybercriminals a new level of agility that security practitioners can struggle to match. To close this gap, security leaders need to examine how bureaucracy or siloed responsibilities hinder their defense strategies and adjust accordingly. Malicious actors are already using AI to accelerate the attack lifecycle, and we need to be able to withstand their efforts without necessarily expanding human involvement at the same pace.
In addition to strategic and tactical adjustments to our defensive capabilities, public-private partnerships are equally critical to our collective success. These efforts must also inform policy, driving the active development of new frameworks and standardized norms for AI use and abuse that are accepted and adhered to around the world.
AI will continue to affect every aspect of cybersecurity, and no organization can successfully navigate this transition alone, regardless of its resources or expertise. Success depends not only on technology, but also on collaboration, flexibility, and our ability to adapt to a changing reality.
Read more about how AI is transforming cybercrime.