

uqlm: Uncertainty Quantification for Language Models#
Get Started → | View Examples →
UQLM is a Python library for Large Language Model (LLM) hallucination detection using state-of-the-art uncertainty quantification techniques.
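UQLM is available on PyPI and can be installed with `pip install uqlm`.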
Hallucination Detection#
UQLM provides a suite of response-level scorers for quantifying the uncertainty of Large Language Model (LLM) outputs. Each scorer returns a confidence score between 0 and 1, where higher scores indicate a lower likelihood of errors or hallucinations. We categorize these scorers into four main types:
| Scorer Type | Added Latency | Added Cost | Compatibility | Off-the-Shelf / Effort |
|---|---|---|---|---|
| Black-Box Scorers | ⏱️ Medium-High (multiple generations & comparisons) | 💸 High (multiple LLM calls) | 🌍 Universal (works with any LLM) | ✅ Off-the-shelf |
| White-Box Scorers | ⚡ Minimal (token probabilities already returned) | ✔️ None (no extra LLM calls) | 🔒 Limited (requires access to token probabilities) | ✅ Off-the-shelf |
| LLM-as-a-Judge Scorers | ⏳ Low-Medium (additional judge calls add latency) | 💵 Low-High (depends on number of judges) | 🌍 Universal (any LLM can serve as judge) | ✅ Off-the-shelf; can be customized |
| Ensemble Scorers | 🔀 Flexible (combines various scorers) | 🔀 Flexible (combines various scorers) | 🔀 Flexible (combines various scorers) | ✅ Off-the-shelf (beginner-friendly); 🛠️ Can be tuned (best for advanced users) |
1. Black-Box Scorers (Consistency-Based)#


These scorers assess uncertainty by measuring the consistency of multiple responses generated from the same prompt. They are compatible with any LLM, intuitive to use, and don't require access to internal model states or token probabilities. A minimal usage sketch follows the scorer list below.
Non-Contradiction Probability (Chen & Mueller, 2023; Lin et al., 2025; Manakul et al., 2023)
Semantic Entropy (Farquhar et al., 2024; Kuhn et al., 2023)
Exact Match (Cole et al., 2023; Chen & Mueller, 2023)
BERT-score (Manakul et al., 2023; Zhang et al., 2020)
BLEURT (Sellam et al., 2020)
Cosine Similarity (Shorinwa et al., 2024; HuggingFace)
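The sketch below shows the basic black-box workflow. The choice of `ChatOpenAI`, the prompt, and `num_responses=5` are illustrative assumptions; any LangChain-compatible chat model can be substituted.

```python
import asyncio

from langchain_openai import ChatOpenAI
from uqlm import BlackBoxUQ


async def main():
    # Any LangChain chat model works; temperature > 0 so sampled responses vary
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=1.0)
    bbuq = BlackBoxUQ(llm=llm, scorers=["semantic_negentropy"])

    # Generate num_responses candidates per prompt and score their consistency
    results = await bbuq.generate_and_score(
        prompts=["Who wrote 'On the Origin of Species'?"],
        num_responses=5,
    )
    print(results.to_df())  # responses plus confidence scores in [0, 1]


asyncio.run(main())
```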
2. White-Box Scorers (Token-Probability-Based)#


These scorers leverage token probabilities to estimate uncertainty. They are significantly faster and cheaper than black-box methods, but require access to the LLM's internal probabilities, meaning they are not necessarily compatible with all LLMs/APIs. A minimal usage sketch follows the scorer list below.
Minimum token probability (Manakul et al., 2023)
Length-Normalized Joint Token Probability (Malinin & Gales, 2021)
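A minimal sketch along the same lines as above. The model must come from an API that exposes token log-probabilities; OpenAI is assumed here purely for illustration.

```python
import asyncio

from langchain_openai import ChatOpenAI
from uqlm import WhiteBoxUQ


async def main():
    # Requires a model/API that returns token probabilities
    llm = ChatOpenAI(model="gpt-4o-mini")
    wbuq = WhiteBoxUQ(llm=llm, scorers=["min_probability"])

    # One generation per prompt; confidence comes from the returned token probabilities
    results = await wbuq.generate_and_score(
        prompts=["In what year did Apollo 11 land on the Moon?"]
    )
    print(results.to_df())


asyncio.run(main())
```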
3. LLM-as-a-Judge Scorers#


These scorers use one or more LLMs to evaluate the reliability of the original LLM's response. They offer high customizability through prompt engineering and the choice of judge LLM(s). A minimal usage sketch follows the scorer list below.
Categorical LLM-as-a-Judge (Manakul et al., 2023; Chen & Mueller, 2023; Luo et al., 2023)
Continuous LLM-as-a-Judge (Xiong et al., 2024)
Panel of LLM Judges (Verga et al., 2024)
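A minimal sketch with a two-judge panel. The judge models are illustrative assumptions; any LangChain-compatible chat models, including models from different providers, can serve as judges.

```python
import asyncio

from langchain_openai import ChatOpenAI
from uqlm import LLMPanel


async def main():
    llm = ChatOpenAI(model="gpt-4o-mini")  # model under evaluation

    # Judges can be any LangChain chat models; two are used here for illustration
    judges = [ChatOpenAI(model="gpt-4o-mini"), ChatOpenAI(model="gpt-4o")]
    panel = LLMPanel(llm=llm, judges=judges)

    # Each judge scores the original response; scores are aggregated across the panel
    results = await panel.generate_and_score(
        prompts=["How many planets are in the Solar System?"]
    )
    print(results.to_df())


asyncio.run(main())
```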
4. Ensemble Scorers#


These scorers leverage a weighted average of multiple individual scorers to provide a more robust uncertainty/confidence estimate. They offer high flexibility and customizability, allowing you to tailor the ensemble to specific use cases. A minimal usage sketch follows the list below.
BS Detector (Chen & Mueller, 2023)
Generalized Ensemble (uses any combination of the above)
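A minimal sketch combining black-box scorers with an LLM judge, mirroring the components used by BS Detector (consistency scoring plus LLM self-evaluation). The scorer selection, model, and prompt are illustrative assumptions.

```python
import asyncio

from langchain_openai import ChatOpenAI
from uqlm import UQEnsemble


async def main():
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=1.0)

    # Mix black-box scorer names with an LLM judge; component weights
    # can be tuned for a specific use case
    uqe = UQEnsemble(llm=llm, scorers=["exact_match", "noncontradiction", llm])

    results = await uqe.generate_and_score(
        prompts=["What is the capital of Australia?"],
        num_responses=5,
    )
    print(results.to_df())  # ensemble confidence scores in [0, 1]


asyncio.run(main())
```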