Computer scientists at Stanford University have found that programmers who rely on AI coding assistants like GitHub Copilot produce less secure code for the applications they develop.
A team of researchers affiliated with Stanford has published a paper titled “Do Users Write More Insecure Code with AI Assistants?” The study finds that code-generating systems provided by vendors like GitHub carry unexpected pitfalls.
Neil Perry, a PhD candidate at Stanford and the lead co-author of the study, believes that AI code-generating systems cannot yet replace human developers. Although they may provide some assistance, software engineers who use such systems are more likely to build applications with inherent security vulnerabilities.
The results were disappointing for the technology’s prospects. The researchers found that study participants with access to Codex wrote incorrect and insecure solutions to programming problems more often than a control group. They also tended to over-rely on the AI tool, believing their insecure answers to be secure, whereas developers who were fully in control of their coding tasks questioned their solutions more often.
However, Megha Srivastava, a postgraduate student at Stanford and co-author of the study, stressed that code-generating systems are quite reliable for tasks that aren’t high-risk, so they shouldn’t be abandoned altogether. At the same time, developers who use such tools should carefully double-check their outputs and deepen their security expertise to better spot code vulnerabilities.