
Code security in the AI era: speed and security under new EU regulations

The rapid adoption of AI-generated code is nothing short of remarkable, and it is fundamentally changing how software development teams operate. According to the 2024 Stack Overflow Developer Survey, 82% of developers now use AI tools to write code. Major tech companies now rely on AI to create a large portion of their new software: Alphabet's CEO reported in the third quarter of 2024 that AI generates about 25% of new code at Google. Given how rapidly AI has advanced since then, that percentage may now be much higher.

However, while AI can greatly improve efficiency and accelerate software development, AI-generated code poses serious security risks, and new EU rules are raising the bar for code security. Companies find themselves caught between two competing imperatives: maintaining the development speed necessary to compete while ensuring their code meets increasingly stringent security requirements.

The core problem with AI-generated code is that the large language models (LLMs) behind coding assistants are trained on billions of lines of public code that has not been filtered for quality or security. As a result, these models can replicate existing bugs and security vulnerabilities in software that incorporates this unvetted AI-generated code.

Although the quality of AI-generated code continues to improve, security analysts have identified several common weaknesses that appear frequently. These include improper input validation, deserialization of untrusted data, OS command injection, path traversal vulnerabilities, unrestricted upload of dangerous file types, and insufficiently protected credentials (CWE-522).
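Two of the weaknesses listed above, OS command injection and path traversal, are concrete enough to sketch. The minimal Python example below (with hypothetical function names, not drawn from any tool mentioned in this article) contrasts a vulnerable pattern of the kind often seen in generated code with a safer alternative:

```python
# Illustrative sketch of two weaknesses commonly flagged in AI-generated
# code; the function names here are hypothetical examples, not a real API.
from pathlib import Path

# OS command injection (CWE-78): interpolating user input into a shell
# string lets a host like "example.com; rm -rf ~" smuggle in a command.
def ping_command_unsafe(host: str) -> str:
    return f"ping -c 1 {host}"  # vulnerable if run with shell=True

# Safer: build the command as an argument list so no shell parses it.
def ping_host_safe(host: str) -> list[str]:
    return ["ping", "-c", "1", host]

# Path traversal (CWE-22): resolve the requested path and confirm it is
# still inside the intended base directory before using it.
def safe_join(base_dir: str, user_path: str) -> Path:
    base = Path(base_dir).resolve()
    target = (base / user_path).resolve()
    if not target.is_relative_to(base):  # requires Python 3.9+
        raise ValueError("path escapes base directory")
    return target
```

The pattern in both cases is the same: never hand attacker-controllable input to an interpreter (a shell, a filesystem) without first constraining it to the values you intended.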

Black Duck CEO Jason Schmitt sees parallels between the security issues raised by AI-generated code and similar issues in the early days of open source.

“The open source movement unlocked faster time to market and innovation because people could focus on the areas of expertise they had in the market, rather than spending time and resources building underlying capabilities, such as networks and infrastructure, that they are not good at,” Schmitt said.

Regulatory Response: The EU Cyber Resilience Act

European regulators have taken note of these emerging risks. The EU Cyber Resilience Act will impose comprehensive security requirements on manufacturers of all products containing digital elements starting in December 2027.

Specifically, the Act requires security to be considered at every stage of the product life cycle: planning, design, development, and maintenance. Companies must provide ongoing security updates by default, with customers opting out rather than opting in. Products classified as critical will require a third-party security assessment before they can be sold in the EU market.

Violations carry serious penalties, with fines of up to €15 million or 2.5% of annual revenue for the previous fiscal year. These stiff penalties underscore the urgency for organizations to adopt strong security measures now.

“Software is becoming a regulated industry,” Schmitt said. “Software has become so pervasive in every organization, from companies to schools to governments, that the risk that poor quality or flawed security poses to society has become far-reaching.”

Even so, despite these security challenges and regulatory pressures, organizations cannot afford to slow development. Market dynamics demand rapid release cycles, and AI has become a key tool for accelerating development. McKinsey research highlights the productivity gains: AI tools allow developers to document code functionality in half the time, write new code in nearly half the time, and refactor existing code about one-third faster. In competitive markets, those who forgo AI-assisted development risk missing critical market windows and handing an advantage to more agile competitors.

The challenge for organizations, then, is not choosing between speed and security, but finding a way to achieve both.

The Way Forward: Security Without Sacrificing Speed

The solution lies in technical approaches that do not force a compromise between AI's capabilities and the requirements of modern secure software development. Effective partners provide:

  • Comprehensive automation tools that integrate seamlessly into development pipelines, detecting vulnerabilities without disrupting workflows.
  • AI-enabled security solutions that can keep pace with the speed and scale of AI-generated code, identifying vulnerability patterns that might otherwise go undetected.
  • Scalable approaches that ensure security coverage does not become a bottleneck as development operations grow and code generation accelerates.
  • Deep experience addressing security challenges across industries and development methodologies.

As AI continues to transform software development, the organizations that thrive will be those that embrace both the speed of AI-generated code and the security measures needed to protect it.

Black Duck cut its teeth providing security solutions that helped organizations adopt open source code securely and quickly, and it now offers a comprehensive set of tools to protect software in a regulated, AI-powered world.

Learn more about how Black Duck secures AI-generated code without sacrificing speed.
