uqlm.scorers.entropy.SemanticEntropy#
- class uqlm.scorers.entropy.SemanticEntropy(llm=None, postprocessor=None, device=None, use_best=True, system_prompt='You are a helpful assistant.', max_calls_per_min=None, use_n_param=False, sampling_temperature=1.0, verbose=False, nli_model_name='microsoft/deberta-large-mnli', max_length=2000, discrete=True)#
Bases: UncertaintyQuantifier
- __init__(llm=None, postprocessor=None, device=None, use_best=True, system_prompt='You are a helpful assistant.', max_calls_per_min=None, use_n_param=False, sampling_temperature=1.0, verbose=False, nli_model_name='microsoft/deberta-large-mnli', max_length=2000, discrete=True)#
Class for computing Discrete Semantic Entropy-based confidence scores. For more on semantic entropy, refer to Farquhar et al. (2024) [1].
- Parameters:
llm (langchain BaseChatModel, default=None) – A langchain BaseChatModel. The user is responsible for specifying temperature and other relevant parameters in the constructor of the llm object.
postprocessor (callable, default=None) – A user-defined function that takes a string input and returns a string. Used for postprocessing outputs.
device (str or torch.device, default="cpu") – Specifies the device that the NLI model uses for prediction. Pass a torch.device to leverage a GPU.
use_best (bool, default=True) – Specifies whether to swap the original response for the uncertainty-minimized response based on semantic entropy clusters.
system_prompt (str or None, default="You are a helpful assistant.") – Optional argument for the user to provide a custom system prompt.
max_calls_per_min (int, default=None) – Specifies how many API calls to make per minute to avoid rate limit errors. By default, no limit is applied.
use_n_param (bool, default=False) – Specifies whether to use the n parameter of BaseChatModel. Not compatible with all BaseChatModel classes. If supported, it substantially speeds up generation when num_responses > 1.
sampling_temperature (float, default=1.0) – The temperature parameter used by the LLM to generate sampled responses. Must be greater than 0.
verbose (bool, default=False) – Specifies whether to print the index of the response currently being scored.
nli_model_name (str, default="microsoft/deberta-large-mnli") – Specifies which NLI model to use. Must be an acceptable input to AutoTokenizer.from_pretrained() and AutoModelForSequenceClassification.from_pretrained().
max_length (int, default=2000) – Specifies the maximum allowed string length. Responses longer than this value will be truncated to avoid OutOfMemoryError.
discrete (bool, default=True) – Specifies whether to use the discrete formulation of semantic entropy, computed from the frequencies of semantic clusters.
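A minimal construction sketch follows; the use of langchain_openai.ChatOpenAI and the model name are illustrative assumptions, since any langchain BaseChatModel may be passed as llm.

    # Sketch: constructing a SemanticEntropy scorer.
    # ChatOpenAI and the model name are assumptions for illustration.
    from langchain_openai import ChatOpenAI

    from uqlm.scorers.entropy import SemanticEntropy

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=1.0)
    se = SemanticEntropy(
        llm=llm,
        device="cpu",  # pass a torch.device to run the NLI model on a GPU
        sampling_temperature=1.0,  # temperature used for sampled responses
        nli_model_name="microsoft/deberta-large-mnli",
    )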
Methods
- __init__([llm, postprocessor, device, ...]) – Class for computing Discrete Semantic Entropy-based confidence scores.
- generate_and_score(prompts[, num_responses]) – Evaluate discrete semantic entropy score on LLM responses for the provided prompts.
- generate_candidate_responses(prompts) – Generates multiple responses for uncertainty estimation.
- generate_original_responses(prompts) – Generates original responses for uncertainty estimation.
- score([responses, sampled_responses]) – Evaluate discrete semantic entropy score on the provided LLM responses.
- async generate_and_score(prompts, num_responses=5)#
Evaluate discrete semantic entropy score on LLM responses for the provided prompts.
- Parameters:
prompts (list of str) – A list of input prompts for the model.
num_responses (int, default=5) – The number of sampled responses used to compute consistency.
- Returns:
UQResult, containing data (prompts, responses, and semantic entropy scores) and metadata
- Return type:
UQResult
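A usage sketch, assuming the se instance constructed above (generate_and_score is a coroutine and must be awaited):

    import asyncio

    async def main():
        # Generates one original response plus num_responses sampled
        # responses per prompt, clusters them with the NLI model, and
        # computes discrete semantic entropy scores.
        result = await se.generate_and_score(
            prompts=["What is the capital of France?"],
            num_responses=5,
        )
        print(result.data)  # prompts, responses, and semantic entropy scores

    asyncio.run(main())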
- async generate_candidate_responses(prompts)#
This method generates multiple sampled responses for uncertainty estimation. If a postprocessor is specified, all responses are postprocessed using the user-defined callable.
- Return type:
List[List[str]]
- async generate_original_responses(prompts)#
This method generates original responses for uncertainty estimation. If a postprocessor is specified, all responses are postprocessed using the user-defined callable.
- Return type:
List[str]
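These two helpers can be used to generate responses up front and score them later; a sketch, again assuming the se instance from above:

    async def generate_only(prompts):
        # One "original" response per prompt.
        responses = await se.generate_original_responses(prompts)
        # Multiple sampled responses per prompt (the sample count is
        # configured on the instance), used to form semantic clusters.
        sampled_responses = await se.generate_candidate_responses(prompts)
        return responses, sampled_responses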
- score(responses=None, sampled_responses=None)#
Evaluate discrete semantic entropy scores on the provided LLM responses.
- Parameters:
responses (list of str, default=None) – A list of model responses for the prompts. If not provided, responses will be generated with the provided LLM.
sampled_responses (list of list of str, default=None) – A list of lists of sampled model responses for each prompt. These will be used to compute consistency scores by comparing to the corresponding response from responses. If not provided, sampled_responses will be generated with the provided LLM.
- Returns:
UQResult, containing data (responses, sampled responses, and semantic entropy scores) and metadata
- Return type:
UQResult
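When responses have already been generated, whether by the helpers above or elsewhere, score can be called directly; a sketch with hand-written responses for illustration:

    responses = ["Paris is the capital of France."]
    sampled_responses = [[
        "Paris.",
        "The capital of France is Paris.",
        "The capital is Paris.",
    ]]
    # Clusters semantically equivalent responses with the NLI model and
    # computes a discrete semantic entropy score per original response.
    result = se.score(responses=responses, sampled_responses=sampled_responses)
    print(result.data)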
References
[1] Farquhar, S., Kossen, J., Kuhn, L., & Gal, Y. (2024). Detecting hallucinations in large language models using semantic entropy. Nature, 630, 625–630.