

Poster

ProSec: Fortifying Code LLMs with Proactive Security Alignment

Xiangzhe Xu · Zian Su · Jinyao Guo · Kaiyuan Zhang · Zhenting Wang · Xiangyu Zhang

East Exhibition Hall A-B #E-2609
Tue 15 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

While recent code-specific large language models (LLMs) have greatly enhanced code generation capabilities, the safety of these models remains under-explored, posing potential risks as insecure code generated by these models may introduce vulnerabilities into real-world systems. Existing methods collect security-focused datasets from real-world vulnerabilities for instruction tuning in order to mitigate such issues. However, they are largely constrained by the data sparsity of vulnerable code and have limited applicability in the multi-stage post-training workflows of modern LLMs. In this paper, we propose ProSec, a novel proactive security alignment approach designed to align code LLMs with secure coding practices. ProSec systematically exposes the vulnerabilities in a code LLM by synthesizing vulnerability-inducing coding scenarios from Common Weakness Enumerations (CWEs) and generates fixes to vulnerable code snippets, allowing the model to learn secure practices through preference learning objectives. The scenarios synthesized by ProSec trigger 25× more vulnerable code than a normal instruction-tuning dataset, resulting in a security-focused alignment dataset 7× larger than the previous work. Experiments show that models trained with ProSec are 25.2% to 35.4% more secure compared to previous work, without degrading the models' utility.
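The pipeline described in the abstract can be pictured as a small data-construction loop: synthesize a CWE-grounded coding scenario, sample code from the target model, keep only cases where a weakness is actually triggered, generate a fix, and record the (secure, vulnerable) pair for preference learning. The sketch below is an illustration only; the helper names (`synthesize_scenario`, `detect_cwe`, `generate_fix`) and the stubbed LLM/analyzer calls are assumptions for readability, not the authors' implementation.

```python
"""Minimal sketch of a ProSec-style proactive alignment data pipeline.
All LLM and analyzer calls are hypothetical stubs."""
from dataclasses import dataclass

CWE_LIST = ["CWE-22: Path Traversal", "CWE-78: OS Command Injection"]

@dataclass
class PreferencePair:
    instruction: str   # vulnerability-inducing coding scenario
    chosen: str        # fixed, secure implementation
    rejected: str      # the model's original vulnerable code

def synthesize_scenario(cwe: str) -> str:
    # Stand-in for an LLM prompt that turns a CWE description
    # into a realistic coding task likely to elicit the weakness.
    return f"Write a small utility that handles user input relevant to {cwe}."

def generate_code(instruction: str) -> str:
    # Stand-in for sampling from the code LLM being aligned.
    return "def handler(user_input): ...  # possibly vulnerable draft"

def detect_cwe(code: str, cwe: str) -> bool:
    # Stand-in for a static analyzer / vulnerability detector.
    return True

def generate_fix(code: str, cwe: str) -> str:
    # Stand-in for an LLM-produced patch that removes the weakness.
    return code.replace("possibly vulnerable", "patched secure")

def build_alignment_dataset() -> list[PreferencePair]:
    pairs = []
    for cwe in CWE_LIST:
        instruction = synthesize_scenario(cwe)
        draft = generate_code(instruction)
        if detect_cwe(draft, cwe):  # keep only scenarios that expose a flaw
            fixed = generate_fix(draft, cwe)
            pairs.append(PreferencePair(instruction, chosen=fixed, rejected=draft))
    return pairs

if __name__ == "__main__":
    # The (chosen, rejected) pairs would then feed a preference-learning
    # objective (e.g., DPO-style training) on top of the code LLM.
    for pair in build_alignment_dataset():
        print(pair.instruction)
```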

Lay Summary:

AI models are powerful at writing code but may produce insecure code vulnerable to attackers. Existing methods rely on scarce real-world bug examples, limiting their coverage. Our system, ProSec, automatically generates realistic and diverse coding tasks where models tend to write insecure code, then creates paired secure and insecure implementations to teach the model to generate secure code. This process yields over 20× more vulnerable samples and a dataset 7× larger than previous efforts. Models trained with ProSec are 25–35% more secure without losing their coding performance.
