🎯 Black-Box Uncertainty Quantification#
Black-box Uncertainty Quantification (UQ) methods treat the LLM as a black box and evaluate consistency of multiple responses generated from the same prompt to estimate response-level confidence. This demo provides an illustration of how to use state-of-the-art black-box UQ methods with uqlm. The following scorers are available:
Non-Contradiction Probability (Chen & Mueller, 2023; Lin et al., 2025; Manakul et al., 2023)
Semantic Negentropy (based on Farquhar et al., 2024; Kuhn et al., 2023)
Exact Match (Cole et al., 2023; Chen & Mueller, 2023)
BERTScore (Manakul et al., 2023; Zhang et al., 2020)
BLEURT (Sellam et al., 2020)
Normalized Cosine Similarity (Shorinwa et al., 2024; HuggingFace)
📊 What You’ll Do in This Demo#
1. Set up LLM and prompts: Set up LLM instance and load example data prompts.
2. Generate LLM responses and confidence scores: Generate and score LLM responses to the example questions using the `BlackBoxUQ()` class.
3. Evaluate hallucination detection performance: Visualize model accuracy at different thresholds of the various black-box UQ confidence scores; compute precision, recall, and F1-score of hallucination detection.
⚖️ Advantages & Limitations#
Pros
- Universal Compatibility: Works with any LLM
- Intuitive: Easy to understand and implement
- No Internal Access Required: Doesn't need token probabilities or model internals

Cons
- Higher Cost: Requires multiple generations per prompt
- Slower: Multiple generations and comparison calculations increase latency
[2]:
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score
from uqlm import BlackBoxUQ
from uqlm.utils import load_example_dataset, math_postprocessor, plot_model_accuracies, Tuner
## 1. Set up LLM and Prompts
In this demo, we will illustrate this approach using a set of math questions from the SVAMP benchmark. To implement with your use case, simply replace the example prompts with your data.
[3]:
# Load example dataset (SVAMP)
svamp = load_example_dataset("svamp", n=75)
svamp.head()
Loading dataset - svamp...
Processing dataset...
Dataset ready!
[3]:
|   | question | answer |
|---|---|---|
| 0 | There are 87 oranges and 290 bananas in Philip... | 145 |
| 1 | Marco and his dad went strawberry picking. Mar... | 19 |
| 2 | Edward spent $ 6 to buy 2 books each book cost... | 3 |
| 3 | Frank was reading through his favorite book. T... | 198 |
| 4 | There were 78 dollars in Olivia's wallet. She ... | 63 |
[4]:
# Define prompts
MATH_INSTRUCTION = (
"When you solve this math problem only return the answer with no additional text.\n"
)
prompts = [MATH_INSTRUCTION + prompt for prompt in svamp.question]
In this example, we use `ChatVertexAI` to instantiate our LLM, but any LangChain Chat Model may be used. Be sure to replace with your LLM of choice.
[5]:
# import sys
# !{sys.executable} -m pip install langchain-google-vertexai
from langchain_google_vertexai import ChatVertexAI
llm = ChatVertexAI(model="gemini-pro")
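For example, a hedged sketch swapping in OpenAI via LangChain instead (assumes the `langchain-openai` package is installed and `OPENAI_API_KEY` is set; the model name is an illustrative choice):

```python
# !pip install langchain-openai  (if not already installed)
from langchain_openai import ChatOpenAI

# Any LangChain chat model works in place of ChatVertexAI
llm = ChatOpenAI(model="gpt-4o-mini", temperature=1)  # hypothetical model choice
```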
## 2. Generate LLM Responses and Confidence Scores
BlackBoxUQ() - Generate LLM responses and compute consistency-based confidence scores for each response.#
📋 Class Attributes#
| Parameter | Type & Default | Description |
|---|---|---|
| llm | BaseChatModel, default=None | A langchain llm object. |
| scorers | List[str], default=None | Specifies which black-box (consistency) scorers to include. Must be a subset of ['semantic_negentropy', 'noncontradiction', 'exact_match', 'bert_score', 'bleurt', 'cosine_sim']. If None, defaults to ["semantic_negentropy", "noncontradiction", "exact_match", "cosine_sim"]. Note that the "bleurt" scorer requires installation of the bleurt package (`pip install --user git+https://github.com/google-research/bleurt.git`). |
| device | str or torch.device, default="cpu" | Specifies the device that the NLI model uses for prediction. Only applies to the 'semantic_negentropy' and 'noncontradiction' scorers. Pass a torch.device to leverage GPU. |
| use_best | bool, default=True | Specifies whether to swap the original response for the uncertainty-minimized response among all sampled responses based on semantic entropy clusters. Only used if "semantic_negentropy" is included in scorers. |
| system_prompt | str or None, default="You are a helpful assistant." | Optional argument for user to provide custom system prompt for the LLM. |
| max_calls_per_min | int, default=None | Specifies how many API calls to make per minute to avoid rate limit errors. By default, no limit is specified. |
| use_n_param | bool, default=False | Specifies whether to use the n parameter for BaseChatModel. Not compatible with all BaseChatModel classes. If used, it speeds up the generation process substantially when num_responses is large. |
| postprocessor | callable, default=None | A user-defined function that takes a string input and returns a string. Used for postprocessing outputs. |
| sampling_temperature | float, default=1 | The 'temperature' parameter for the LLM to use when generating sampled LLM responses. Must be greater than 0. |
| nli_model_name | str, default="microsoft/deberta-large-mnli" | Specifies which NLI model to use. Must be acceptable input to AutoTokenizer.from_pretrained() and AutoModelForSequenceClassification.from_pretrained(). |
| max_length | int, default=2000 | Specifies the maximum allowed string length for LLM responses for NLI computation. Responses longer than this value will be truncated in NLI computations to avoid OutOfMemoryError. |
🔍 Parameter Groups#
- 🧠 LLM-Specific: `llm`, `system_prompt`, `sampling_temperature`
- 📊 Confidence Scores: `scorers`, `use_best`, `nli_model_name`, `postprocessor`
- 🖥️ Hardware: `device`
- ⚡ Performance: `max_calls_per_min`, `use_n_param`
💻 Usage Examples#
# Basic usage with default parameters
bbuq = BlackBoxUQ(llm=llm)
# Using GPU acceleration, default scorers
bbuq = BlackBoxUQ(llm=llm, device=torch.device("cuda"))
# Custom scorer list
bbuq = BlackBoxUQ(llm=llm, scorers=["semantic_negentropy", "exact_match", "cosine_sim"])
# High-throughput configuration with rate limiting
bbuq = BlackBoxUQ(llm=llm, max_calls_per_min=200, use_n_param=True)
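One more sketch, for this demo's math setting: the `postprocessor` argument can standardize responses before consistency comparison. Here we reuse the `math_postprocessor` imported earlier; the exact parameter combination shown is illustrative:

```python
# Standardize math answers (e.g., extract the numeric value) before scoring
bbuq = BlackBoxUQ(llm=llm, postprocessor=math_postprocessor, sampling_temperature=1.0)
```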
[6]:
import torch
# Set the torch device
if torch.cuda.is_available(): # NVIDIA GPU
device = torch.device("cuda")
elif torch.backends.mps.is_available(): # macOS
device = torch.device("mps")
else:
device = torch.device("cpu") # CPU
print(f"Using {device.type} device")
Using cuda device
[8]:
bbuq = BlackBoxUQ(
llm=llm,
max_calls_per_min=250,
device=device,
scorers=["semantic_negentropy", "exact_match", "cosine_sim"],
)
Some weights of the model checkpoint at microsoft/deberta-large-mnli were not used when initializing DebertaForSequenceClassification: ['config']
- This IS expected if you are initializing DebertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DebertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
🔄 Class Methods#
| Method | Description & Parameters |
|---|---|
| BlackBoxUQ.generate_and_score | Generate LLM responses, sampled LLM (candidate) responses, and compute confidence scores for the provided prompts. Parameters: `prompts` (list of prompt strings) and `num_responses` (number of sampled responses per prompt), as used in the cell below. Returns: UQResult containing data (prompts, responses, sampled responses, and confidence scores) and metadata. 💡 Best For: Complete end-to-end uncertainty quantification when starting with prompts. |
| BlackBoxUQ.score | Compute confidence scores on provided LLM responses. Should only be used if responses and sampled responses are already generated. Parameters: `responses` (original responses) and `sampled_responses` (lists of candidate responses), as in the commented alternative below. Returns: UQResult containing data (responses, sampled responses, and confidence scores) and metadata. 💡 Best For: Computing uncertainty scores when responses are already generated elsewhere. |
[9]:
results = await bbuq.generate_and_score(
prompts=prompts,
num_responses=10, # for lower cost and latency, use smaller value of num_responses
)
# # alternative approach: directly score if responses already generated
# results = bbuq.score(responses=responses, sampled_responses=sampled_responses)
Generating responses...
Generating candidate responses...
Computing confidence scores...
[10]:
result_df = results.to_df()
result_df.head(5)
[10]:
|   | response | sampled_responses | prompt | exact_match | cosine_sim | semantic_negentropy |
|---|---|---|---|---|---|---|
| 0 | 145 | [145 \n, 145 Each group should contain 145 ban... | When you solve this math problem only return t... | 0.8 | 0.972315 | 1.000000 |
| 1 | 19 pounds | [19 pounds, 19, 19 pounds, 19 pounds, 19, 19, ... | When you solve this math problem only return t... | 0.5 | 0.909083 | 1.000000 |
| 2 | $3 | [$4, $4\n, $3, $ 3.00, $3, $3, $ 3.00, $3.00, ... | When you solve this math problem only return t... | 0.3 | 0.915118 | 0.802269 |
| 3 | 198 | [198, ```\n198\n```, 198, 198, 198, 198, 198, ... | When you solve this math problem only return t... | 0.9 | 0.990422 | 1.000000 |
| 4 | 63 | [63, 63, 63, 63, 63, 63, 63, 63, 63, 63] | When you solve this math problem only return t... | 1.0 | 1.000000 | 1.000000 |
## 3. Evaluate Hallucination Detection Performance
To evaluate hallucination detection performance, we 'grade' the responses against an answer key. Note that the `math_postprocessor` is specific to our use case (math questions). If you are using your own prompts/questions, update the grading method accordingly.
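For example, a hedged sketch of a simple grader for short free-text answers (`grade_response` is a hypothetical helper; substitute whatever notion of correctness fits your task):

```python
# Hypothetical grader for non-math tasks: normalized exact string match
def grade_response(response: str, answer: str) -> bool:
    return response.strip().lower() == answer.strip().lower()
```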
[11]:
# Populate correct answers
result_df["answer"] = svamp.answer
# Grade responses against correct answers
result_df["response_correct"] = [
math_postprocessor(r) == a for r, a in zip(result_df["response"], svamp["answer"])
]
result_df.head(5)
[11]:
|   | response | sampled_responses | prompt | exact_match | cosine_sim | semantic_negentropy | answer | response_correct |
|---|---|---|---|---|---|---|---|---|
| 0 | 145 | [145 \n, 145 Each group should contain 145 ban... | When you solve this math problem only return t... | 0.8 | 0.972315 | 1.000000 | 145 | True |
| 1 | 19 pounds | [19 pounds, 19, 19 pounds, 19 pounds, 19, 19, ... | When you solve this math problem only return t... | 0.5 | 0.909083 | 1.000000 | 19 | True |
| 2 | $3 | [$4, $4\n, $3, $ 3.00, $3, $3, $ 3.00, $3.00, ... | When you solve this math problem only return t... | 0.3 | 0.915118 | 0.802269 | 3 | True |
| 3 | 198 | [198, ```\n198\n```, 198, 198, 198, 198, 198, ... | When you solve this math problem only return t... | 0.9 | 0.990422 | 1.000000 | 198 | True |
| 4 | 63 | [63, 63, 63, 63, 63, 63, 63, 63, 63, 63] | When you solve this math problem only return t... | 1.0 | 1.000000 | 1.000000 | 63 | True |
[12]:
print(f"""Baseline LLM accuracy: {np.mean(result_df["response_correct"])}""")
Baseline LLM accuracy: 0.72
3.1 Filtered LLM Accuracy Evaluation#
Here, we explore ‘filtered accuracy’ as a metric for evaluating the performance of our confidence scores. Filtered accuracy measures the change in LLM performance when responses with confidence scores below a specified threshold are excluded. By adjusting the confidence score threshold, we can observe how the accuracy of the LLM improves as less certain responses are filtered out.
We will plot the filtered accuracy across various confidence score thresholds to visualize the relationship between confidence and LLM accuracy. This analysis helps in understanding the trade-off between response coverage (measured by sample size below) and LLM accuracy, providing insights into the reliability of the LLM’s outputs. We conduct this analysis separately for each of our scorers.
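Before plotting, here is a minimal sketch of what 'filtered accuracy' means at a single threshold (0.9 is an arbitrary choice for demonstration):

```python
# Filtered accuracy at one threshold: accuracy among the responses we keep
threshold = 0.9
kept = result_df[result_df["cosine_sim"] >= threshold]
filtered_accuracy = kept["response_correct"].mean()
coverage = len(kept) / len(result_df)  # fraction of responses retained
print(f"Filtered accuracy: {filtered_accuracy:.3f}, coverage: {coverage:.2f}")
```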
[13]:
for confidence_score in ["semantic_negentropy", "exact_match", "cosine_sim"]:
plot_model_accuracies(
scores=result_df[confidence_score],
correct_indicators=result_df.response_correct,
title=f"LLM Accuracy by {confidence_score} Threshold",
)



3.2 Precision, Recall, F1-Score of Hallucination Detection#
Lastly, we compute the optimal threshold for binarizing confidence scores, using F1-score as the objective. Using this threshold, we compute precision, recall, and F1-score for black-box scorer predictions of whether responses are correct.
[14]:
# instantiate UQLM tuner object for threshold selection
t = Tuner()
correct_indicators = result_df.response_correct * 1  # whether each response is actually correct
for confidence_score in ["semantic_negentropy", "exact_match", "cosine_sim"]:
y_scores = result_df[confidence_score] # confidence score
# Solve for threshold that maximizes F1-score
best_threshold = t.tune_threshold(
y_scores=y_scores,
correct_indicators=correct_indicators,
thresh_objective="fbeta_score",
)
y_pred = [
(s > best_threshold) * 1 for s in y_scores
] # predicts whether response is correct based on confidence score
print(f"{confidence_score} F1-optimal threshold: {best_threshold}")
print(" ")
# evaluate precision, recall, and f1-score of predictions of correctness
print(
f"{confidence_score} precision: {precision_score(y_true=correct_indicators, y_pred=y_pred)}"
)
print(
f"{confidence_score} recall: {recall_score(y_true=correct_indicators, y_pred=y_pred)}"
)
print(
f"{confidence_score} f1-score: {f1_score(y_true=correct_indicators, y_pred=y_pred)}"
)
print(" ")
print(" ")
semantic_negentropy F1-optimal threshold: 0.59
semantic_negentropy precision: 0.8333333333333334
semantic_negentropy recall: 0.9259259259259259
semantic_negentropy f1-score: 0.8771929824561403
exact_match F1-optimal threshold: 0.2
exact_match precision: 0.7910447761194029
exact_match recall: 0.9814814814814815
exact_match f1-score: 0.8760330578512396
cosine_sim F1-optimal threshold: 0.88
cosine_sim precision: 0.8305084745762712
cosine_sim recall: 0.9074074074074074
cosine_sim f1-score: 0.8672566371681416
## 4. Scorer Definitions
Below we define the scorers offered by the `BlackBoxUQ` class. These scorers exploit variation in LLM responses to the same prompt to measure semantic consistency. All scorers have outputs ranging from 0 to 1, with higher values indicating higher confidence.
For a given prompt \(x_i\), these approaches involve generating \(m\) responses \(\tilde{\mathbf{y}}_i = \{ \tilde{y}_{i1},...,\tilde{y}_{im}\}\) from the same prompt, using a non-zero temperature, and comparing these responses to the original response \(y_i\). We provide detailed descriptions of each scorer below.
Exact Match Rate (`exact_match`)#
Exact Match Rate (EMR) computes the proportion of candidate responses that are identical to the original response.
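Written out in the notation used throughout this section (our formula, matching the verbal definition above):
\begin{equation}
EMR(y_i; \tilde{\mathbf{y}}_i) = \frac{1}{m} \sum_{j=1}^m \mathbb{1}\{\tilde{y}_{ij} = y_i\}.
\end{equation}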
For more on this scorer, refer to Cole et al., 2023.
Non-Contradiction Probability (`noncontradiction`)#
Non-contradiction probability (NCP) computes the mean non-contradiction probability estimated by a natural language inference (NLI) model. This score is formally defined as follows:
\begin{equation}
NCP(y_i; \tilde{\mathbf{y}}_i) = \frac{1}{m} \sum_{j=1}^m (1 - p_j),
\end{equation}
where
\begin{equation}
p_j = \frac{\eta(y_i, \tilde{y}_{ij}) + \eta(\tilde{y}_{ij}, y_i)}{2}.
\end{equation}
Above, \(\eta(\tilde{y}_{ij},y_i)\) denotes the contradiction probability estimated by the NLI model for response \(y_i\) and candidate \(\tilde{y}_{ij}\). For more on this scorer, refer to Chen & Mueller, 2023, Lin et al., 2025, or Manakul et al., 2023.
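A minimal sketch of this computation using the default NLI model and standard transformers calls (the helpers `contradiction_prob` and `ncp` are illustrative, not uqlm internals):

```python
# Illustrative NCP sketch; not the uqlm implementation
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "microsoft/deberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL)

def contradiction_prob(premise: str, hypothesis: str) -> float:
    """Estimate eta(premise, hypothesis) with the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(nli_model(**inputs).logits, dim=-1)[0]
    # Label 0 is CONTRADICTION for this checkpoint; verify via nli_model.config.id2label
    return probs[0].item()

def ncp(response: str, candidates: list[str]) -> float:
    """Mean non-contradiction probability across candidate responses."""
    p = [
        (contradiction_prob(response, c) + contradiction_prob(c, response)) / 2
        for c in candidates
    ]
    return 1 - sum(p) / len(p)
```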
Normalized Semantic Negentropy (`semantic_negentropy`)#
Normalized Semantic Negentropy (NSN) normalizes the standard computation of discrete semantic entropy so that it increases with higher confidence and has \([0,1]\) support. In contrast to EMR and NCP, semantic entropy does not distinguish between an original response and candidate responses. Instead, this approach computes a single metric value on a list of responses generated from the same prompt. Under this approach, responses are clustered using an NLI model based on mutual entailment. We consider the discrete version of semantic entropy (SE), computed over the final set of clusters as follows:
\begin{equation}
SE(y_i; \tilde{\mathbf{y}}_i) = - \sum_{C \in \mathcal{C}} P(C|y_i, \tilde{\mathbf{y}}_i) \log P(C|y_i, \tilde{\mathbf{y}}_i),
\end{equation}
where \(P(C|y_i, \tilde{\mathbf{y}}_i)\) denotes the probability that a randomly selected response \(y \in \{y_i\} \cup \tilde{\mathbf{y}}_i\) belongs to cluster \(C\), and \(\mathcal{C}\) denotes the full set of clusters of \(\{y_i\} \cup \tilde{\mathbf{y}}_i\).
To ensure that we have a normalized confidence score with \([0,1]\) support and with higher values corresponding to higher confidence, we implement the following normalization to arrive at Normalized Semantic Negentropy (NSN):
\begin{equation}
NSN(y_i; \tilde{\mathbf{y}}_i) = 1 - \frac{SE(y_i; \tilde{\mathbf{y}}_i)}{\log m},
\end{equation}
where \(\log m\) is included to normalize the support.
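As a hedged numerical sketch, assuming the NLI clustering has already been done and we only have the resulting cluster sizes:

```python
import numpy as np

def normalized_semantic_negentropy(cluster_sizes: list[int]) -> float:
    """NSN from semantic cluster sizes; illustrative helper, not uqlm's API."""
    total = sum(cluster_sizes)              # original response plus m candidates
    p = np.array(cluster_sizes) / total     # empirical cluster probabilities
    se = -np.sum(p * np.log(p))             # discrete semantic entropy
    m = total - 1                           # number of sampled responses
    return 1 - se / np.log(m)               # normalize; higher = more confident

# 11 responses (original + 10 candidates) falling into clusters of sizes 9, 1, 1:
print(round(normalized_semantic_negentropy([9, 1, 1]), 2))  # ~0.74
```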
BERTScore (`bert_score`)#
Let a tokenized text sequence be denoted as \(\textbf{t} = \{t_1,...,t_L\}\) and the corresponding contextualized word embeddings as \(\textbf{E} = \{\textbf{e}_1,...,\textbf{e}_L\}\), where \(L\) is the number of tokens in the text. The BERTScore precision, recall, and F1-scores between two tokenized texts \(\textbf{t}, \textbf{t}'\) are respectively defined as follows:
\begin{equation}
BertP(\textbf{t}, \textbf{t}') = \frac{1}{|\textbf{t}|} \sum_{t \in \textbf{t}} \max_{t' \in \textbf{t}'} \textbf{e} \cdot \textbf{e}',
\end{equation}
\begin{equation}
BertR(\textbf{t}, \textbf{t}') = \frac{1}{|\textbf{t}'|} \sum_{t' \in \textbf{t}'} \max_{t \in \textbf{t}} \textbf{e} \cdot \textbf{e}',
\end{equation}
\begin{equation}
BertF(\textbf{t}, \textbf{t}') = 2\,\frac{BertP(\textbf{t}, \textbf{t}')\,BertR(\textbf{t}, \textbf{t}')}{BertP(\textbf{t}, \textbf{t}') + BertR(\textbf{t}, \textbf{t}')},
\end{equation}
where \(\textbf{e}, \textbf{e}'\) respectively correspond to \(t, t'\). We compute our BERTScore-based confidence scores as follows:
\begin{equation}
BertConf(y_i; \tilde{\mathbf{y}}_i) = \frac{1}{m} \sum_{j=1}^m BertF(y_i, \tilde{y}_{ij}),
\end{equation}
i.e. the average BERTScore F1 across pairings of the original response with all candidate responses. For more on BERTScore, refer to Zhang et al., 2020.
BLEURT (`bleurt`)#
In contrast to the aforementioned scorers, BLEURT is specifically pre-trained and fine-tuned to learn human judgments of text similarity. Our BLEURT confidence score is the average BLEURT value across pairings of the original response with all candidate responses:
\begin{equation}
BLEURTConf(y_i; \tilde{\mathbf{y}}_i) = \frac{1}{m} \sum_{j=1}^m BLEURT(y_i, \tilde{y}_{ij}).
\end{equation}
For more on this scorer, refer to Sellam et al., 2020.
Normalized Cosine Similarity (`cosine_sim`)#
This scorer leverages a sentence transformer to map LLM outputs to an embedding space and measure similarity using those sentence embeddings. Let \(\mathbf{V}: \mathcal{Y} \rightarrow \mathbb{R}^d\) denote the sentence transformer, where \(d\) is the dimension of the embedding space. The average cosine similarity across pairings of the original response with all candidate responses is given as follows:
\begin{equation}
CS(y_i; \tilde{\mathbf{y}}_i) = \frac{1}{m} \sum_{j=1}^m \frac{\mathbf{V}(y_i) \cdot \mathbf{V}(\tilde{y}_{ij})}{\lVert \mathbf{V}(y_i) \rVert \, \lVert \mathbf{V}(\tilde{y}_{ij}) \rVert}.
\end{equation}
To ensure a standardized support of \([0, 1]\), we normalize cosine similarity to obtain confidence scores as follows:
\begin{equation}
NCS(y_i; \tilde{\mathbf{y}}_i) = \frac{CS(y_i; \tilde{\mathbf{y}}_i) + 1}{2}.
\end{equation}
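A minimal sketch of NCS, assuming the sentence-transformers package ("all-MiniLM-L6-v2" is an illustrative encoder choice, not necessarily the one uqlm uses internally):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical encoder choice

def normalized_cosine_sim(response: str, candidates: list[str]) -> float:
    """Average cosine similarity to candidates, rescaled to [0, 1]."""
    embs = encoder.encode([response] + candidates)  # shape (m + 1, d)
    orig, cands = embs[0], embs[1:]
    cs = np.mean([
        c @ orig / (np.linalg.norm(c) * np.linalg.norm(orig)) for c in cands
    ])
    return (cs + 1) / 2  # map cosine similarity from [-1, 1] to [0, 1]
```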
© 2025 CVS Health and/or one of its affiliates. All rights reserved.