Cyber Security

Firewalls may need to be upgraded soon, as traditional tools fail under AI security

Traditional security tools are constantly encountering threats introduced by LLM and proxy AI systems, and legacy defenses are not intended to stop. From timely injection to model extraction, the attack surface of AI applications is very strange.

“Traditional security tools like WAF and API Gateway are largely insufficient to protect generative AI systems, mainly because they do not point, read, and interact with AI and do not know how to interpret them,” said Gartner, a distinguished VP analyst.

Artificial intelligence threats could be zero days

While AI systems and applications can greatly automate business workflows and threat detection and response routines, they bring their own problems into a mix of problems that have never existed before. Security threats evolve from vulnerabilities in SQL injection or cross-site scripting to behavioral manipulation, in which case adversaries trick models into leaking data, bypass filters, or operate in unpredictable ways.

Gartner’s litan said that although AI threats like model extraction have existed for many years, some people are very novel and difficult to solve. “National states and non-compliant competitors have been the most advanced AI models that others have created for years.”

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button