
uqlm: Uncertainty Quantification for Language Models


UQLM is a Python library for Large Language Model (LLM) hallucination detection using state-of-the-art uncertainty quantification techniques.

Hallucination Detection

UQLM provides a suite of response-level scorers for quantifying the uncertainty of Large Language Model (LLM) outputs. Each scorer returns a confidence score between 0 and 1, where higher scores indicate a lower likelihood of errors or hallucinations. These scorers fall into four main types:

Comparison of Scorer Types

| Scorer Type | Added Latency | Added Cost | Compatibility | Off-the-Shelf / Effort |
|---|---|---|---|---|
| Black-Box Scorers | ⏱️ Medium-High (multiple generations & comparisons) | 💸 High (multiple LLM calls) | 🌍 Universal (works with any LLM) | ✅ Off-the-shelf |
| White-Box Scorers | ⚡ Minimal (token probabilities already returned) | ✔️ None (no extra LLM calls) | 🔒 Limited (requires access to token probabilities) | ✅ Off-the-shelf |
| LLM-as-a-Judge Scorers | ⏳ Low-Medium (additional judge calls add latency) | 💵 Low-High (depends on number of judges) | 🌍 Universal (any LLM can serve as judge) | ✅ Off-the-shelf; can be customized |
| Ensemble Scorers | 🔀 Flexible (combines various scorers) | 🔀 Flexible (combines various scorers) | 🔀 Flexible (combines various scorers) | ✅ Off-the-shelf (beginner-friendly); 🛠️ Can be tuned (best for advanced users) |

1. Black-Box Scorers (Consistency-Based)


These scorers assess uncertainty by measuring the consistency of multiple responses generated from the same prompt. They are compatible with any LLM, intuitive to use, and don’t require access to internal model states or token probabilities.
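The consistency idea can be illustrated with a minimal, self-contained sketch. This is not UQLM's API: it substitutes token-level Jaccard overlap for the semantic-similarity measures a real black-box scorer would use, and assumes the candidate responses have already been sampled from the LLM.

```python
# Conceptual sketch of consistency-based (black-box) scoring: sample several
# responses to the same prompt and measure how much they agree. Jaccard token
# overlap is a toy stand-in for a proper semantic-similarity measure.
from itertools import combinations


def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two responses."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)


def consistency_score(responses: list[str]) -> float:
    """Mean pairwise similarity across sampled responses.

    Returns a value in [0, 1]: 1.0 means all responses are identical
    (high confidence), values near 0 mean they disagree (likely hallucination).
    """
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 1.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


consistent = ["the capital of france is paris"] * 5
inconsistent = ["paris", "the answer is lyon", "it is marseille", "nice", "bordeaux"]
print(consistency_score(consistent))    # 1.0
print(consistency_score(inconsistent))  # much lower
```

The key design point is that only the generated text is inspected, which is why this family of scorers works with any LLM but multiplies generation cost.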

2. White-Box Scorers (Token-Probability-Based)


These scorers leverage token probabilities to estimate uncertainty. They are significantly faster and cheaper than black-box methods, but require access to the LLM’s internal probabilities, meaning they are not necessarily compatible with all LLMs/APIs.
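One common token-probability heuristic is the length-normalized sequence probability: exponentiate the mean token log-probability of the response. The sketch below assumes the per-token log-probabilities have already been retrieved from the LLM API; it illustrates the idea, not UQLM's implementation.

```python
# Conceptual sketch of white-box scoring from token log-probabilities.
import math


def token_prob_confidence(logprobs: list[float]) -> float:
    """Length-normalized sequence probability: exp of the mean token log-prob.

    Returns a value in (0, 1]; low average token probability (the model was
    unsure at each step) yields a low confidence score.
    """
    if not logprobs:
        return 0.0
    return math.exp(sum(logprobs) / len(logprobs))


confident = [-0.01, -0.02, -0.05]  # every token near probability 1
uncertain = [-2.3, -1.9, -2.7]     # every token near probability 0.1
print(token_prob_confidence(confident))  # close to 1
print(token_prob_confidence(uncertain))  # close to 0.1
```

Because the log-probabilities arrive with the original generation, this adds no extra LLM calls, which is the latency and cost advantage noted above.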

3. LLM-as-a-Judge Scorers


These scorers use one or more LLMs to evaluate the reliability of the original LLM’s response. They offer high customizability through prompt engineering and the choice of judge LLM(s).
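The panel-of-judges pattern can be sketched as follows. The `toy_judge` here is a hypothetical rule-based stand-in; in practice each judge would be an LLM call prompted to rate whether the response answers the question, returning a score in [0, 1].

```python
# Conceptual sketch of LLM-as-a-judge scoring with a panel of judges.
from statistics import mean
from typing import Callable

# A judge maps (question, response) to a confidence score in [0, 1].
Judge = Callable[[str, str], float]


def panel_score(question: str, response: str, judges: list[Judge]) -> float:
    """Average the verdicts of several judges into one confidence score."""
    return mean(judge(question, response) for judge in judges)


def make_toy_judge(expected: str) -> Judge:
    # Hypothetical stand-in for an LLM judge: full credit only if the
    # expected answer appears in the response.
    return lambda question, response: 1.0 if expected.lower() in response.lower() else 0.0


judges = [make_toy_judge("paris"), make_toy_judge("paris")]
print(panel_score("Capital of France?", "It is Paris.", judges))  # 1.0
print(panel_score("Capital of France?", "It is Lyon.", judges))   # 0.0
```

Swapping in different judge LLMs or judge prompts is where the customizability of this scorer family comes from.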

4. Ensemble Scorers


These scorers leverage a weighted average of multiple individual scorers to provide a more robust uncertainty/confidence estimate. They offer high flexibility and customizability, allowing you to tailor the ensemble to specific use cases.
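The weighted-average combination reduces to a few lines. This sketch assumes each component scorer has already produced a score in [0, 1]; the scorer names and weights are illustrative, not UQLM defaults.

```python
# Conceptual sketch of ensemble scoring: a weighted average of the
# confidence scores produced by individual scorers.
def ensemble_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of component scores; weights are normalized to sum to 1."""
    total_weight = sum(weights.values())
    return sum(weights[name] * score for name, score in scores.items()) / total_weight


# Hypothetical component scores and weights for one response.
scores = {"consistency": 0.9, "token_prob": 0.8, "judge": 1.0}
weights = {"consistency": 0.5, "token_prob": 0.3, "judge": 0.2}
print(ensemble_score(scores, weights))  # 0.5*0.9 + 0.3*0.8 + 0.2*1.0 = 0.89
```

Tuning the weights on a labeled dataset is what the "best for advanced users" option in the comparison table refers to: with uniform weights the ensemble works off-the-shelf, while tuned weights can emphasize the scorers that discriminate best for a given use case.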

Contents