uqlm: Uncertainty Quantification for Language Models#
Get Started → | View Examples →
UQLM is a Python library for Large Language Model (LLM) hallucination detection using state-of-the-art uncertainty quantification techniques.
Hallucination Detection#
UQLM provides a suite of response-level scorers for quantifying the uncertainty of Large Language Model (LLM) outputs. Each scorer returns a confidence score between 0 and 1, where higher scores indicate a lower likelihood of errors or hallucinations. We categorize these scorers into four main types:
| Scorer Type | Added Latency | Added Cost | Compatibility | Off-the-Shelf / Effort |
|---|---|---|---|---|
| Black-Box Scorers | ⏱️ Medium-High (multiple generations & comparisons) | 💸 High (multiple LLM calls) | 🌍 Universal (works with any LLM) | ✅ Off-the-shelf |
| White-Box Scorers | ⚡ Minimal (token probabilities already returned) | ✔️ None (no extra LLM calls) | 🔒 Limited (requires access to token probabilities) | ✅ Off-the-shelf |
| LLM-as-a-Judge Scorers | ⏳ Low-Medium (additional judge calls add latency) | 💵 Low-High (depends on number of judges) | 🌍 Universal (any LLM can serve as judge) | ✅ Off-the-shelf; can be customized |
| Ensemble Scorers | 🔀 Flexible (combines various scorers) | 🔀 Flexible (combines various scorers) | 🔀 Flexible (combines various scorers) | ✅ Off-the-shelf (beginner-friendly); 🛠️ Can be tuned (best for advanced users) |
1. Black-Box Scorers (Consistency-Based)#
These scorers assess uncertainty by measuring the consistency of multiple responses generated from the same prompt. They are compatible with any LLM, intuitive to use, and don't require access to internal model states or token probabilities.
Discrete Semantic Entropy (Farquhar et al., 2024; Kuhn et al., 2023)
Number of Semantic Sets (Lin et al., 2024; Vashurin et al., 2025; Kuhn et al., 2023)
Non-Contradiction Probability (Chen & Mueller, 2023; Lin et al., 2025; Manakul et al., 2023)
Entailment Probability (Chen & Mueller, 2023; Lin et al., 2025; Manakul et al., 2023)
Exact Match (Cole et al., 2023; Chen & Mueller, 2023)
BERT-score (Manakul et al., 2023; Zhang et al., 2020)
Cosine Similarity (Shorinwa et al., 2024; HuggingFace)
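To make the consistency idea concrete, here is a minimal sketch (not the uqlm API) of the simplest black-box scorer, Exact Match: sample several extra responses to the same prompt and score the fraction that match the original after light normalization.

```python
# Illustrative sketch of an exact-match consistency score
# (Cole et al., 2023; Chen & Mueller, 2023). This is not the uqlm API.

def exact_match_score(original: str, samples: list[str]) -> float:
    """Fraction of sampled responses that match the original response.

    Higher values indicate more self-consistency, hence higher confidence.
    """
    def normalize(s: str) -> str:
        # Light normalization: lowercase and collapse whitespace.
        return " ".join(s.lower().split())

    matches = sum(normalize(s) == normalize(original) for s in samples)
    return matches / len(samples)


# Hypothetical samples for the prompt "What is the capital of France?"
samples = ["Paris", "paris", "Lyon", "Paris"]
print(exact_match_score("Paris", samples))  # 0.75
```

The other black-box scorers follow the same recipe but replace exact matching with a softer similarity signal (NLI entailment, BERT-score, or embedding cosine similarity).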
2. White-Box Scorers (Token-Probability-Based)#
These scorers leverage token probabilities to estimate uncertainty. They offer single-generation scoring, which is significantly faster and cheaper than black-box methods, but require access to the LLM's internal probabilities, meaning they are not necessarily compatible with all LLMs/APIs. The following single-generation scorers are available:
Minimum token probability (Manakul et al., 2023)
Length-Normalized Joint Token Probability (Malinin & Gales, 2021)
Sequence Probability (Vashurin et al., 2024)
Mean Top-K Token Negentropy (Scalena et al., 2025; Manakul et al., 2023)
Min Top-K Token Negentropy (Scalena et al., 2025; Manakul et al., 2023)
Probability Margin (Farr et al., 2024)
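As a rough sketch of how two of these scores fall out of per-token log-probabilities (the function names below are illustrative, not the uqlm API):

```python
import math

# Illustrative single-generation white-box scores computed from the
# per-token log-probabilities many LLM APIs return. Not the uqlm API.

def length_normalized_prob(logprobs: list[float]) -> float:
    # Geometric mean of token probabilities (Malinin & Gales, 2021):
    # exp of the mean log-probability, so longer answers aren't penalized
    # simply for having more tokens.
    return math.exp(sum(logprobs) / len(logprobs))

def min_token_prob(logprobs: list[float]) -> float:
    # Probability of the least confident token (Manakul et al., 2023):
    # a single very uncertain token is enough to flag the response.
    return math.exp(min(logprobs))


logprobs = [-0.1, -0.2, -1.5]  # hypothetical per-token log-probabilities
print(round(length_normalized_prob(logprobs), 3))  # 0.549
print(round(min_token_prob(logprobs), 3))          # 0.223
```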
UQLM also offers sampling-based white-box methods, which incur higher cost and latency but tend to have superior hallucination detection performance. The following sampling-based white-box scorers are available:
Monte Carlo sequence probability (Kuhn et al., 2023)
Consistency and Confidence (CoCoA) (Vashurin et al., 2025)
Semantic Entropy (Farquhar et al., 2024)
Semantic Density (Qiu et al., 2024)
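The core computation behind semantic entropy can be sketched as follows (a simplified illustration, not the uqlm API). We assume the sampled responses have already been grouped into clusters of equivalent meaning; in practice an NLI model performs this clustering.

```python
import math
from collections import Counter

# Illustrative discrete semantic entropy (Farquhar et al., 2024),
# simplified and not the uqlm API. `cluster_ids` assigns each sampled
# response to a semantic-equivalence cluster (clustering assumed done
# upstream, e.g. by an NLI model). The entropy of the cluster
# distribution is rescaled to a confidence score in [0, 1], where 1
# means all samples agree.

def semantic_entropy_confidence(cluster_ids: list[int]) -> float:
    n = len(cluster_ids)
    probs = [count / n for count in Counter(cluster_ids).values()]
    entropy = -sum(p * math.log(p) for p in probs)
    max_entropy = math.log(n)  # entropy if every sample were distinct
    return 1.0 - entropy / max_entropy if max_entropy > 0 else 1.0


print(semantic_entropy_confidence([0, 0, 0, 0, 0]))  # 1.0 -- full agreement
print(semantic_entropy_confidence([0, 1, 2, 3, 4]))  # 0.0 -- all distinct
```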
Lastly, UQLM offers the P(True) scorer, a self-reflection method that requires one additional generation per response.
P(True) (Kadavath et al., 2022)
3. LLM-as-a-Judge Scorers#
These scorers use one or more LLMs to evaluate the reliability of the original LLM's response. They offer high customizability through prompt engineering and the choice of judge LLM(s).
Categorical LLM-as-a-Judge (Manakul et al., 2023; Chen & Mueller, 2023; Luo et al., 2023)
Continuous LLM-as-a-Judge (Xiong et al., 2024)
Likert Scale Scoring (Bai et al., 2023)
Panel of LLM Judges (Verga et al., 2024)
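A panel's verdicts can be aggregated in many ways; a minimal sketch (not the uqlm API, with a hypothetical verdict-to-score mapping) is to map each judge's categorical verdict to a number and average:

```python
# Illustrative aggregation for a panel of LLM judges (Verga et al., 2024).
# Not the uqlm API: the verdict labels and score mapping below are
# hypothetical choices; each verdict stands in for one judge LLM's
# categorical assessment of the original response.

VERDICT_SCORES = {"correct": 1.0, "uncertain": 0.5, "incorrect": 0.0}

def panel_confidence(verdicts: list[str]) -> float:
    """Average the judges' mapped scores into one confidence value."""
    return sum(VERDICT_SCORES[v] for v in verdicts) / len(verdicts)


print(round(panel_confidence(["correct", "correct", "uncertain"]), 3))  # 0.833
```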
4. Ensemble Scorers#
These scorers leverage a weighted average of multiple individual scorers to provide a more robust uncertainty/confidence estimate. They offer high flexibility and customizability, allowing you to tailor the ensemble to specific use cases.
BS Detector (Chen & Mueller, 2023)
Generalized Ensemble (Bouchard & Chauhan, 2025)
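The weighted-average idea can be sketched in a few lines (not the uqlm API; the scorer names and weights below are hypothetical, and in practice the weights can be tuned on graded responses):

```python
# Illustrative generalized ensemble: a weighted average of individual
# scorer outputs (Bouchard & Chauhan, 2025). Not the uqlm API; scorer
# names and weights are hypothetical examples.

def ensemble_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-scorer confidence scores, normalized so
    the weights need not sum to 1."""
    total = sum(weights.values())
    return sum(weights[name] * scores[name] for name in weights) / total


scores = {"semantic_entropy": 0.9, "min_token_prob": 0.7, "judge": 0.8}
weights = {"semantic_entropy": 0.5, "min_token_prob": 0.25, "judge": 0.25}
print(round(ensemble_score(scores, weights), 3))  # 0.825
```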