Securely working with AI-generated code

Generative AI (GenAI) has had a huge impact on application development over the past two years, with tools such as ChatGPT and GitHub Copilot changing the face of the industry. The potential gains are enormous, helping developers work faster and more efficiently. In fact, according to Snyk's 2023 AI Code Security Report, 71.7% of respondents said that AI code suggestions were making them and their teams somewhat or much more productive.

Recognizing the security risks of AI-generated code

The use of GenAI can pose security challenges for those writing code, however. In Snyk's report, 56.4% of respondents said that insecure AI suggestions are common, yet few had changed their processes to improve AI security. A Stanford University research paper also found that participants with access to an AI assistant were more likely to believe they had written secure code than those without one. This risk of misplaced confidence in the security of AI-assisted code grows as development speed outpaces traditional security practices.
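
To make the risk concrete, consider a hypothetical illustration (not drawn from the Snyk or Stanford studies): an AI assistant may suggest a database query built by string interpolation, a classic SQL injection pattern, where a parameterized query would be the safer choice. The function and table names below are assumptions for the sake of the sketch.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern sometimes seen in AI suggestions: the query is built by
    # string interpolation, so attacker-controlled input can alter the SQL.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Safer version: a parameterized query lets the database driver handle
    # the value separately from the SQL, preventing injection.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both versions compile and return the same results for benign input, which is exactly why a developer reviewing an AI suggestion at speed may not notice the difference.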


