Generative AI (GenAI) has had a huge impact on app development over the past two years, with tools such as ChatGPT and GitHub Copilot changing the face of the industry. The gains to be had are potentially enormous, helping developers to work faster and more efficiently. In fact, according to Snyk’s 2023 AI Code Security Report, 71.7% of respondents said that AI code suggestions were making them and their teams somewhat or much more productive.
Recognizing the security risks of AI-generated code
The use of GenAI can pose security challenges for those writing code, however. 56.4% of respondents in Snyk’s report say that insecure AI suggestions are common, yet few have changed their processes to improve AI security. A Stanford University research paper revealed that participants with access to an AI assistant are more likely to believe they’ve written secure code than those without. This risk of misplaced confidence in the security of AI-assisted code grows as development speed outpaces traditional security practices.
Worse still, developers using AI assistants have been shown to produce code with potential flaws that are hard to detect without adequate safeguards. Stanford’s research found that not only are AI-assisted developers more likely to produce less secure code, they’re likely to place higher trust in code generated by AI, assuming correctness and security without proper validation.
The dangers are compounded when AI tools draw from outdated or flawed data, potentially replicating known vulnerabilities. With the rapid pace at which developers work, and the speed at which new vulnerabilities are discovered, the required security checks may be overlooked or delayed, creating a window of exposure for cyber risks.
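A classic illustration of the kind of flaw that slips through is SQL injection. The sketch below is hypothetical (the table and function names are invented for illustration), but it contrasts the query-building pattern an AI assistant might plausibly suggest with the parameterized alternative a careful review should insist on:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Insecure: interpolating user input into SQL enables injection,
    # e.g. username = "' OR '1'='1" returns every row in the table.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Secure: a parameterized query treats the input purely as data.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

# Demonstrate the difference on an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "' OR '1'='1"
print(len(find_user_unsafe(conn, malicious)))  # 2 - injection dumps all rows
print(len(find_user_safe(conn, malicious)))    # 0 - no user has that literal name
```

Both functions look superficially similar, which is exactly why this class of flaw is easy to wave through when reviewing AI suggestions at speed.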
Applying the right policies and practices
Generative AI shouldn’t be overlooked or avoided, however. After all, the productivity benefits are clear, and the risks can be mitigated by comprehensive policies and tooling that guide the secure and effective use of AI-generated code. Businesses should treat GenAI as they would a junior developer, for example, and implement continuous code reviews. Just as junior coders require constant review and mentorship, AI-generated code requires rigorous checks before sign-off.
The shift-left approach, which focuses on security testing earlier in the software development lifecycle, is also key. Organizations can’t afford for security checks to be relegated to later stages of development as this causes costly rework and delays. Real-time security testing should be integrated directly into the developer’s workflow. Modern security tools and platforms enable developers to identify vulnerabilities immediately as both human and AI-generated code is produced, enabling remediation before flaws become embedded in the production application.
Education is also essential for the adoption of secure AI code assistance. Developers need to understand the limitations of AI models, including their inability to reason and their reliance on historical data that may not account for the latest vulnerabilities. Developers should actively scrutinize AI-generated suggestions to ensure code is both secure and relevant.
While GenAI itself can introduce risks, it can also help mitigate them when paired with security-focused tools. AI-powered security platforms can act as a security companion for AI-generated code. These tools analyze code for vulnerabilities in real-time, flagging issues such as insecure coding practices and outdated dependencies.
By promoting a collaborative DevSecOps culture where development, security and operations are combined within project management workflows and automated tools, organizations can align functions and ensure security becomes an enabler rather than an obstacle to innovation. Security champions within dev teams can play a vital role in advocating for best practices, reinforcing the importance of secure coding without slowing down the pace of innovation.
AI-generated code should always be used with appropriate safeguards to minimize security risks. The adoption of secure practices can go a long way, but organizations will also benefit from integrating security platforms and tools alongside them.
Developers using GenAI should run code through security tools that integrate with popular development environments like IDEs, Git and CI/CD pipelines. This ensures that security checks run in the background as work progresses, complementing and empowering the shift-left mentality. Real-time feedback loops enable developers to fix vulnerabilities immediately, and security tools can monitor AI-generated code continuously, performing static analysis and highlighting security issues as the code evolves.
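At its simplest, static analysis of this kind means walking the syntax tree of newly written code and flagging patterns that are rarely safe. The following is a minimal sketch, not a real product, and the deny-list is deliberately tiny for illustration; commercial tools apply far richer, context-aware rules:

```python
import ast

# Illustrative deny-list: calls that are almost never safe on untrusted input.
RISKY_CALLS = {"eval", "exec"}

def flag_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, call name) for each risky call in the source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

# A snippet an AI assistant might plausibly generate for "run this expression".
snippet = "user_input = input()\nresult = eval(user_input)\n"
print(flag_risky_calls(snippet))  # [(2, 'eval')]
```

Hooking a check like this into a pre-commit hook or CI step is what makes the feedback loop real-time: the flaw is surfaced seconds after the suggestion is accepted, not weeks later in a security review.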
Organizations can also save time and reduce risks by leveraging tools that not only flag vulnerabilities but also suggest or automatically apply fixes. The best tools offer one-click remediation capabilities, enabling developers to secure their code rapidly without needing to rewrite large sections.
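To make the idea of automated remediation concrete, here is a hypothetical sketch of a one-shot fix: rewriting uses of a weak hash algorithm (MD5) to a stronger one (SHA-256) via a syntax-tree transform. Real remediation engines are far more context-aware than this, and blindly swapping algorithms can break code that depends on a specific digest, so treat this purely as an illustration of the mechanism:

```python
import ast

class UpgradeWeakHash(ast.NodeTransformer):
    """Rewrite hashlib.md5(...) calls to hashlib.sha256(...)."""
    def visit_Attribute(self, node):
        self.generic_visit(node)
        if (isinstance(node.value, ast.Name)
                and node.value.id == "hashlib"
                and node.attr == "md5"):
            node.attr = "sha256"  # swap the weak algorithm for a stronger one
        return node

before = "import hashlib\ndigest = hashlib.md5(data).hexdigest()\n"
tree = UpgradeWeakHash().visit(ast.parse(before))
after = ast.unparse(ast.fix_missing_locations(tree))
print(after)
```

The appeal of this pattern is precision: the transform edits exactly the flagged call and nothing else, which is why tools built on it can offer one-click fixes without forcing developers to rewrite surrounding code.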
Embracing a holistic security approach
As GenAI continues to gain traction in software development, the need for robust security practices becomes paramount. Sure, GenAI tools like ChatGPT and GitHub Copilot can dramatically improve development speed and efficiency, but they should never be assumed to generate secure code out of the box.
Instead, organizations should combine the power of GenAI with comprehensive security frameworks to maximize the benefits of AI-generated code while minimizing associated risks. Ultimately, success requires a balance between AI-driven innovation and stringent security protocols, with the right policies, practices and tools empowering developers and organizations to use AI responsibly while avoiding the pitfalls of insecure software.
This article was produced as part of TechRadarPro’s Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro