

Poster
in
Workshop: CODEML: Championing Open-source DEvelopment in Machine Learning

LUQ: Language Models Uncertainty Quantification Toolkit

Alexander V Nikitin · Martin Trapp · Pekka Marttinen

[ Project Page ]
Fri 18 Jul 2:15 p.m. PDT — 3 p.m. PDT

Abstract:

Uncertainty quantification is a principled approach to ensuring the robustness, reliability, and safety of large language models (LLMs). However, progress in this field is hindered by the lack of a unified framework for benchmarking these methods. Additionally, creating suitable datasets for uncertainty quantification is computationally demanding, because it often requires sampling from an LLM multiple times for each input. In this work, we propose and describe a software framework that (i) unifies the benchmarking of uncertainty quantification methods for language models, and (ii) provides an easy-to-use tool for practitioners aiming to develop more robust and safer LLM applications.
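The sampling-based approach mentioned above can be sketched with a minimal example: draw several answers from a model for the same prompt and score uncertainty as the Shannon entropy of the empirical answer distribution. This is an illustrative sketch, not the LUQ toolkit's actual API; the function name `predictive_entropy` and the toy sample lists are hypothetical.

```python
import math
from collections import Counter

def predictive_entropy(samples):
    """Estimate uncertainty as the Shannon entropy (in nats) of the
    empirical distribution over sampled model answers.
    Higher entropy means the model's answers disagree more."""
    counts = Counter(samples)
    total = len(samples)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# A model that always returns the same answer is maximally confident:
low = predictive_entropy(["Paris"] * 5)

# A model whose samples disagree yields positive entropy:
high = predictive_entropy(["Paris", "Lyon", "Nice", "Paris", "Lyon"])

print(low, high)
```

In practice, repeating this sampling across a whole benchmark is what makes dataset creation expensive: each evaluation example costs several full generations rather than one.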
