

Poster
in
Workshop: CODEML: Championing Open-source DEvelopment in Machine Learning

ZKLoRA: Efficient Zero-Knowledge Proofs for LoRA Verification

Bidhan Roy · Peter Potash · Marcos Villagra

[ Project Page ]
Fri 18 Jul 2:15 p.m. PDT — 3 p.m. PDT

Abstract:

Low-Rank Adaptation (LoRA) is a widely adopted method for customizing large-scale language models. In distributed, untrusted training environments, an open-source base model user may want to use LoRA weights created by an external contributor, leading to two requirements: (1) the base model user must confirm that the LoRA weights are effective when paired with the intended base model, and (2) the LoRA contributor must keep their proprietary weights private until agreed conditions are met that allow the contributor to release them. We present ZKLoRA, a zero-knowledge verification protocol that relies on succinct proofs and our novel Multi-Party Inference procedure to verify LoRA–base model compatibility without exposing LoRA weights. ZKLoRA produces deterministic correctness guarantees and validates each LoRA module in only 1–2 seconds on state-of-the-art large language models. This low-latency approach enables nearly real-time verification and promotes secure collaboration among geographically decentralized teams and in contract-based training pipelines. The protocol ensures that the delivered LoRA module works as claimed, safeguarding the contributor's intellectual property while providing the base model user with verification of compatibility and lineage.
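The abstract does not spell out the protocol itself, but the setting it describes can be sketched in a few lines. The following Python snippet is a minimal illustration of the problem setup only, not the ZKLoRA protocol: it shows a LoRA-adapted forward pass (W + BA) and uses a plain hash commitment as a stand-in for a succinct zero-knowledge proof. All names, dimensions, and the commitment scheme are illustrative assumptions.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: base weight W is (d_out x d_in); the LoRA update has rank r.
d_in, d_out, r = 8, 8, 2
W = rng.standard_normal((d_out, d_in))   # public base model weight
A = rng.standard_normal((r, d_in))       # contributor's private LoRA factor
B = rng.standard_normal((d_out, r))      # contributor's private LoRA factor

def adapted_forward(x):
    """Forward pass with the LoRA update applied: (W + B @ A) @ x."""
    return W @ x + B @ (A @ x)

# The contributor commits to the private LoRA weights without revealing them.
# (A hash commitment stands in here for a real succinct ZK proof system.)
commitment = hashlib.sha256(A.tobytes() + B.tobytes()).hexdigest()

# The base-model user supplies a probe input; the contributor returns the
# adapted output, which the user can check against the claimed behavior
# while the weights A and B themselves stay private.
x = rng.standard_normal(d_in)
y = adapted_forward(x)
print(commitment[:16], y.shape)
```

In the actual protocol, the hash commitment would be replaced by succinct proofs and the probe exchange by the paper's Multi-Party Inference procedure, so that the user gains a deterministic correctness guarantee rather than a single spot check.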
