Presentation + Paper
15 June 2023 Double-edged sword of LLMs: mitigating security risks of AI-generated code
Ramesh Bharadwaj, Ilya Parker
Abstract
With the increasing reliance on collaborative and cloud-based systems, attack surfaces and code vulnerabilities are growing drastically. Automation is key for fielding and defending software systems at scale. Researchers in symbolic AI have had considerable success in finding flaws in human-created code, and run-time testing methods such as fuzzing also uncover numerous bugs. However, both approaches share a major deficiency: they cannot fix the errors they discover, scale poorly, and resist automation. Static analysis methods additionally suffer from the false-positive problem: an overwhelming number of reported flaws are not real bugs. This brings up an interesting conundrum: symbolic approaches can actually have a detrimental impact on programmer productivity, and therefore do not necessarily contribute to improved code quality. What is needed is to combine automated code generation using large language models (LLMs) with scalable defect elimination based on symbolic AI, creating an environment for the automated generation of defect-free code.
Conference Presentation
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Ramesh Bharadwaj and Ilya Parker "Double-edged sword of LLMs: mitigating security risks of AI-generated code", Proc. SPIE 12542, Disruptive Technologies in Information Sciences VII, 125420I (15 June 2023); https://doi.org/10.1117/12.2664116
KEYWORDS: Artificial intelligence, Error analysis, Software development, Transformers, Machine learning