AI Security Status in 2025: Key Insights from Cisco Report

As more businesses adopt AI, understanding their security risks is becoming more important than ever. AI is reshaping industries and workflows, but it also introduces new security challenges that organizations must address. Protecting AI systems is essential to maintaining trust, protecting privacy and ensuring smooth business operations. This article summarizes key insights from Cisco’s recent “State of AI Security 2025” report. It outlines the location of AI security today and the company’s considerations for the future.
The security threat to AI is increasing
If 2024 teaches us anything, it is that AI adoption is faster than many organizations can ensure. Cisco’s report notes that about 72% of organizations now use AI in their business functions, but only 13% are fully prepared to safely achieve their potential. The gap between adoption and readiness is primarily driven by security concerns, which remains a major barrier to the wider use of enterprise AI. What makes this situation even more worrying is that AI introduces new types of threats that traditional cybersecurity methods cannot be fully handled. Unlike conventional cybersecurity that usually protects fixed systems, AI poses unpredictable dynamic and adaptive threats. The report highlights several emerging threats organizations should pay attention to:
- Infrastructure attacks: Artificial intelligence infrastructure has become the main target of attackers. A notable example is the compromise of NVIDIA’s container toolkit, which allows attackers to access the file system, run malicious code, and upgrade privileges. Similarly, Ray is an open source AI framework managed by GPUs, and was compromised in one of the first real-world AI framework attacks. These cases show how weaknesses in AI infrastructure affect many users and systems.
- Supply Chain Risks: AI supply chain vulnerabilities raise another important question. About 60% of organizations rely on open source AI components or ecosystems. This creates risks because attackers can compromise these widely used tools. The report mentions a technique called “sleepy kimchi” that can tamper with AI models even after distribution. This makes detection extremely difficult.
- AI-specific attacks: New attack technology is developing rapidly. Methods such as timely injection, jailbreak and training data extraction allow attackers to bypass security controls and access sensitive information contained in the training dataset.
Attack vectors for AI systems
The report highlights the emergence of attack vectors used by malicious actors to exploit weaknesses in AI systems. These attacks can occur at all stages of the AI lifecycle, from data collection and model training to deployment and reasoning. The goal is often to make AI act in unexpected ways, leak private data or perform harmful actions.
In recent years, these attack methods have become more advanced and difficult to detect. The report highlights several types of attack vectors:
- prison Break: The technology involves developing adversarial tips to bypass the model’s security measures. Despite improvements in AI defense capabilities, Cisco’s research shows that even simple jailbreaks still work for advanced models like DeepSeek R1.
- Indirect prompt for injection: Unlike direct attacks, this attack vector involves manipulating the context of input data or the indirect use of AI models. An attacker may provide damaged source material such as malicious PDFs or web pages, resulting in unexpected or harmful output from the AI. These attacks are particularly dangerous because they do not require direct access to the AI system, thus bypassing many traditional defenses.
- Training data extraction and poisoning: Cisco researchers show that chatbots can be tricked into revealing part of their training data. This has attracted serious concerns about data privacy, intellectual property and compliance. Attackers can also poison training data by injecting malicious input. Shockingly, only 0.01% of large data sets (such as Laion-400m or Coyo-700m) poisoning affects model behavior, which can be done with a smaller budget (about $60), which allows many bad actors to use these attacks.
The report highlights serious concerns about the current state of these attacks, with researchers reaching a 100% success rate for advanced models such as DeepSeek R1 and Llama 2. This reveals critical security vulnerabilities and their potential risks associated with their use. In addition, the report also identified the emergence of new threats, such as voice-based jailbreaks, which were specifically designed for multimodal AI models.
Cisco AI Security Research Discovery
The Cisco Research Team evaluates all aspects of AI security and reveals several key findings:
- Going out of prison: Researchers show that even top AI models can automatically cheat. The researchers used a method called “attack tree” (TAP), bypassing protections from GPT-4 and Llama 2.
- Fine-tuning risks: Many businesses fine-tune the basic models to improve relevance to specific areas. However, researchers have found that fine-tuning can weaken the internal safety guardrail. The fine-tuned version is triple more than the jailbreak, and it is 22 times more likely to produce harmful content than the original model.
- Training data extraction: Cisco researchers used a simple breakdown method to trick chatbots into reproducing clips of news articles, allowing them to reconstruct the source of the material. This poses a risk of exposure to sensitive or proprietary data.
- Data poisoning: Data poisoning: Cisco’s team demonstrates the ease and inexpensiveness of poisoning large-scale network datasets. At about $60, the researchers managed to poison 0.01% of the data sets of Laion-400m or Coyo-700m. Furthermore, they stressed that this level of poisoning is sufficient to cause significant changes in model behavior.
The role of AI in cybercrime
AI is not only a goal—it is also a tool for cybercriminals. The report notes that automation and AI-driven social engineering make attacks more efficient and difficult to detect. From phishing scams to voice cloning, AI can help criminals create compelling and personalized attacks. The report also identified the rise of malicious AI tools such as “Darkgpt” that are designed specifically to aid cybercrime by generating phishing emails or exploiting vulnerabilities. What makes these tools particularly worrying is their accessibility. Now even low-skill criminals can create highly personalized attacks to escape traditional defensive abilities.
Ensure best practices for AI
Given the turbulent AI security, Cisco has suggested several practical steps for the organization:
- Risks of managing the AI life cycle: From data procurement and model training to deployment and monitoring, it is crucial to identify and reduce risks at every stage of the AI life cycle. This also includes ensuring third-party components, applying a powerful guardrail, and strictly controlling access points.
- Using established cybersecurity practices: Although AI is unique, traditional cybersecurity best practices are still crucial. Technologies such as the scope of prevention can play a vital role.
- Focus on weak areas: Organizations should focus on areas that are most likely to be targeted, such as supply chains and third-party AI applications. By understanding where the vulnerability lies, enterprises can implement more targeted defenses.
- Education and training staff: With the general development of AI tools, it is important to train users to be responsible for AI usage and risk awareness. A good workforce helps reduce unexpected data exposure and abuse.
Looking to the future
AI adoption will continue to grow, and with it, security risks will continue to evolve. Governments and organizations around the world are recognizing these challenges and starting to develop policies and regulations to guide AI security. As the Cisco report highlights, the balance between AI security and advancement will define the next era of AI development and deployment. Organizations that prioritize security and innovation at the same time will be the ability to meet challenges and seize emerging opportunities.