

Poster

Safe Delta: Consistently Preserving Safety when Fine-Tuning LLMs on Diverse Datasets

Ning LU · Shengcai Liu · Jiahao Wu · Weiyu CHEN · Zhirui Zhang · Yew Soon ONG · Qi Wang · Ke Tang

East Exhibition Hall A-B #E-1000
[ Project Page ]
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Large language models (LLMs) have shown great potential as general-purpose AI assistants across various domains. To fully leverage this potential in specific applications, many companies provide fine-tuning API services, enabling users to upload their own data for LLM customization. However, fine-tuning services introduce a new safety threat: user-uploaded data, whether harmful or benign, can break the model’s alignment, leading to unsafe outputs. Moreover, existing defense methods struggle to address the diversity of fine-tuning datasets (e.g., varying sizes, tasks), often sacrificing utility for safety or vice versa. To address this issue, we propose Safe Delta, a safety-aware post-training defense method that adjusts the delta parameters (i.e., the parameter change before and after fine-tuning). Specifically, Safe Delta estimates the safety degradation, selects delta parameters to maximize utility while limiting overall safety loss, and applies a safety compensation vector to mitigate residual safety loss. Through extensive experiments on four diverse datasets with varying settings, our approach consistently preserves safety while ensuring that the utility gain from benign datasets remains unaffected.
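The abstract describes Safe Delta only at a high level: compute the delta parameters, select a subset that maximizes utility under a safety-loss budget, then apply a compensation vector. The PyTorch sketch below illustrates one way these steps could fit together. Everything in it is an illustrative assumption rather than the paper's actual algorithm: the per-parameter `utility_score` and `safety_cost` estimates, the greedy budgeted selection, and the form of the compensation vector are all placeholders for the estimators the paper defines.

```python
import torch

def safe_delta_sketch(theta_base, theta_ft, utility_score, safety_cost, safety_budget):
    """Minimal sketch of the Safe Delta idea from the abstract.

    theta_base / theta_ft: flattened parameters before / after fine-tuning.
    utility_score / safety_cost: HYPOTHETICAL per-parameter estimates of the
    utility gain and safety degradation attributed to each delta entry; the
    abstract does not specify how these are computed.
    safety_budget: cap on the total estimated safety loss allowed.
    """
    # Delta parameters: the change introduced by fine-tuning.
    delta = theta_ft - theta_base

    # Greedy budgeted selection (an assumption, standing in for the paper's
    # selection step): keep delta entries with the best utility-per-safety
    # ratio until the accumulated safety loss would exceed the budget.
    ratio = utility_score / (safety_cost + 1e-12)
    order = torch.argsort(ratio, descending=True)
    mask = torch.zeros_like(delta)
    spent = 0.0
    for i in order.tolist():
        cost = safety_cost[i].item()
        if spent + cost > safety_budget:
            continue
        mask[i] = 1.0
        spent += cost

    selected = delta * mask

    # Hypothetical safety compensation vector: here modeled as a small step
    # against the discarded (safety-harmful) part of the delta, to offset
    # residual safety loss from the entries that were kept.
    compensation = -0.1 * (delta - selected)

    return theta_base + selected + compensation
```

The sketch treats selection as a per-parameter knapsack-style choice; whether Safe Delta operates per parameter, per layer, or over low-rank components, and how the compensation vector is actually constructed, would need to be checked against the paper itself.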

Lay Summary:

Problem: When companies allow users to customize powerful AI models (like ChatGPT) with their own data, this "fine-tuning" process can accidentally or intentionally make the AI unsafe, leading it to produce harmful content. Current safety methods struggle to adapt to different user datasets, often failing to prevent harm or unnecessarily reducing the AI's usefulness.

Solution: We developed "Safe Delta," a new technique that works after an AI has been customized. It intelligently assesses the changes made, figuring out which ones improve performance and which pose safety risks. Safe Delta then carefully adjusts these modifications to maximize usefulness while ensuring the AI remains safe.

Impact: This research offers a more reliable way for AI providers to offer customization services. Safe Delta helps ensure that AI models can be effectively tailored for diverse needs without compromising their safety, leading to more trustworthy and beneficial AI applications for everyone.
