Entailment Probability#
Entailment Probability (EP) computes the mean entailment probability estimated by a natural language inference (NLI) model.
Definition#
This score is formally defined as follows:

\[\text{EP}_i = \frac{1}{m} \sum_{j=1}^{m} \frac{p_{\text{entail}}(y_i, \tilde{y}_{ij}) + p_{\text{entail}}(\tilde{y}_{ij}, y_i)}{2},\]

where \(p_{\text{entail}}(a, b)\) denotes the (asymmetric) probability, estimated by the NLI model, that \(a\) entails \(b\); \(y_i\) is the original response and \(\tilde{y}_{ij}\) is the \(j\)-th of \(m\) sampled candidates.
**Key Properties:**

- The bidirectional averaging \((p_{\text{entail}}(a, b) + p_{\text{entail}}(b, a))/2\) accounts for the asymmetric nature of NLI.
- Higher EP values indicate that the original response is more likely to be entailed by (and to entail) the sampled responses.
- Score range: \([0, 1]\), where 1 indicates strong mutual entailment.
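The bidirectional averaging can be sketched in plain Python. The helper below is illustrative only (not part of the uqlm API), and the probabilities are made-up numbers standing in for an NLI model's output:

```python
def entailment_probability(p_forward, p_backward):
    """Mean bidirectional entailment probability.

    p_forward[j]:  p_entail(y_i, y_ij) -- original entails candidate j
    p_backward[j]: p_entail(y_ij, y_i) -- candidate j entails original
    """
    # Symmetrize each pair, then average over all m candidates
    pair_scores = [(f + b) / 2 for f, b in zip(p_forward, p_backward)]
    return sum(pair_scores) / len(pair_scores)


# Two candidates with hypothetical NLI probabilities
print(entailment_probability([0.9, 0.8], [0.7, 0.6]))  # -> 0.75
```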
How It Works#
1. Generate multiple candidate responses \(\tilde{\mathbf{y}}_i\) from the same prompt.
2. For each pair of the original response \(y_i\) and a candidate \(\tilde{y}_{ij}\), compute the entailment probability in both directions using an NLI model.
3. Average the bidirectional entailment probabilities for each pair.
4. Average across all candidates to obtain the mean entailment probability.
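The steps above can be simulated end to end with a toy stand-in for the NLI model. Here `toy_p_entail` is a hypothetical word-overlap stub used only to make the pipeline runnable; a real scorer would call an NLI model such as the one configured below:

```python
def toy_p_entail(a, b):
    # Stand-in for an NLI model's entailment probability:
    # fraction of b's words that also appear in a (asymmetric, in [0, 1]).
    a_words, b_words = set(a.lower().split()), set(b.lower().split())
    return len(a_words & b_words) / len(b_words) if b_words else 0.0


def mean_entailment_probability(original, candidates, p_entail=toy_p_entail):
    # Steps 2-4: score each candidate in both directions,
    # symmetrize each pair, then average over all candidates.
    scores = [
        (p_entail(original, c) + p_entail(c, original)) / 2
        for c in candidates
    ]
    return sum(scores) / len(scores)


# Step 1 would sample candidates from the LLM; here they are fixed.
original = "the capital of france is paris"
candidates = ["paris is the capital of france", "the capital is lyon"]
print(mean_entailment_probability(original, candidates))  # -> 0.8125
```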
Parameters#
When using BlackBoxUQ, specify "entailment" in the scorers list.
Example#
```python
from uqlm import BlackBoxUQ

# Initialize with entailment scorer
bbuq = BlackBoxUQ(
    llm=llm,
    scorers=["entailment"],
    nli_model_name="microsoft/deberta-large-mnli"
)

# Generate responses and compute scores
results = await bbuq.generate_and_score(prompts=prompts, num_responses=5)

# Access the entailment scores
print(results.to_df()["entailment"])
```
References#
Chen, J. & Mueller, J. (2023). Quantifying Uncertainty in Answers from any Language Model and Enhancing their Trustworthiness. arXiv.
Lin, Z., et al. (2024). Generating with Confidence: Uncertainty Quantification for Black-box Large Language Models. arXiv.
See Also#
- BlackBoxUQ - Main class for black-box uncertainty quantification
- Non-Contradiction Probability - Related scorer measuring non-contradiction probability