Rapid injection defects in Gitlab Duo highlight risks for AI assistants

The researchers found that GitLab’s coding assistant duo could parse malicious AI prompts hidden in comments, source code, merge request descriptions, and submit messages in public repositories. This technology allows them to trick chatbots into making malicious code suggestions to users, sharing malicious links and injecting Rogue HTML code, which block the code of private projects.
“GitLab patches HTML injection, which is great, but the bigger lesson is clear: AI tools are now part of the surface of your application’s attack,” researchers at Application Security Security Security Security Security Security Security Security Security Security Security Security Security Security Research said in a report. “If they read from the page, they need to treat that input like data provided by other users – untrusted, confusing and potentially dangerous.”
Timely injection is an attack technique for large language models (LLMs) to manipulate its output to users. While this is not a new attack, it will become increasingly important as enterprises develop AI agents to parse user-generated data and take independent measures based on that content.