uqlm.scorers.longform.baseclass.uncertainty.LongFormUQ#
- class uqlm.scorers.longform.baseclass.uncertainty.LongFormUQ(llm=None, scorers=None, granularity='claim', aggregation='mean', claim_decomposition_llm=None, response_refinement=False, claim_filtering_scorer=None, device=None, system_prompt=None, max_calls_per_min=None, use_n_param=False)#
Bases: UncertaintyQuantifier
- __init__(llm=None, scorers=None, granularity='claim', aggregation='mean', claim_decomposition_llm=None, response_refinement=False, claim_filtering_scorer=None, device=None, system_prompt=None, max_calls_per_min=None, use_n_param=False)#
Parent class for long-form uncertainty quantification of LLM responses.
- Parameters:
llm (BaseChatModel) – A langchain llm object to be passed to the chain constructor. The user is responsible for specifying temperature and other relevant parameters in the constructor of their llm object.
scorers (List[str], default=None) – Specifies which black-box (consistency) scorers to include.
granularity (str, default="claim") – Specifies whether to decompose and score at claim-level or sentence-level granularity. Must be either "claim" or "sentence".
aggregation (str, default="mean") – Specifies how to aggregate claim/sentence-level scores to response-level scores. Must be one of "min" or "mean".
claim_decomposition_llm (langchain BaseChatModel, default=None) – A langchain BaseChatModel to be used for decomposing responses into individual claims. Also used for claim refinement. If granularity="claim" and claim_decomposition_llm is None, the provided llm will be used for claim decomposition.
response_refinement (bool, default=False) – Specifies whether to refine responses with uncertainty-aware decoding. This approach removes claims with confidence scores below the response_refinement_threshold and uses the claim_decomposition_llm to reconstruct the response from the retained claims. Only available for claim-level granularity. For more details, refer to Jiang et al., 2024: https://arxiv.org/abs/2410.20783
claim_filtering_scorer (Optional[str], default=None) – Specifies which scorer to use to filter claims if response_refinement is True. If not provided, defaults to the first element of self.scorers.
device (str or torch.device, default="cpu") – Specifies the device that the NLI model uses for prediction. Only applies to the 'semantic_negentropy' and 'noncontradiction' scorers. Pass a torch.device to leverage GPU.
system_prompt (str, default=None) – Optional argument for the user to provide a custom system prompt. If prompts are a list of strings and system_prompt is None, defaults to "You are a helpful assistant."
max_calls_per_min (int, default=None) – Specifies how many API calls to make per minute to avoid a rate limit error. By default, no limit is specified.
use_n_param (bool, default=False) – Specifies whether to use the n parameter for BaseChatModel. Not compatible with all BaseChatModel classes. If used, it speeds up the generation process substantially when num_responses > 1.
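Since LongFormUQ is a parent class, it would normally be constructed through a subclass, but the constructor arguments can be illustrated directly. A minimal sketch, assuming a langchain ChatOpenAI model (model name and rate limit are hypothetical) and the "noncontradiction" scorer mentioned under the device parameter:

```python
from langchain_openai import ChatOpenAI

from uqlm.scorers.longform.baseclass.uncertainty import LongFormUQ

# The user is responsible for temperature and other model parameters.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=1.0)

uq = LongFormUQ(
    llm=llm,
    scorers=["noncontradiction"],  # black-box (consistency) scorer
    granularity="claim",           # decompose responses into claims
    aggregation="mean",            # aggregate claim scores to response scores
    response_refinement=True,      # drop low-confidence claims, then rewrite
    max_calls_per_min=60,          # throttle API calls to avoid rate limits
)
```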
Methods
__init__([llm, scorers, granularity, ...]) – Parent class for long-form uncertainty quantification of LLM responses.
generate_candidate_responses(prompts[, ...]) – This method generates multiple responses for uncertainty estimation.
generate_original_responses(prompts[, ...]) – This method generates original responses for uncertainty estimation.
uncertainty_aware_decode(claim_sets, claim_scores[, ...]) – This method filters out low-confidence claims and reconstructs refined responses.
- async generate_candidate_responses(prompts, num_responses=5, progress_bar=None)#
This method generates multiple responses for uncertainty estimation. If specified in the child class, all responses are postprocessed using the callable function defined by the user.
- Parameters:
prompts (List[Union[str, List[BaseMessage]]]) – List of prompts from which LLM responses will be generated. Prompts in the list may be strings or lists of BaseMessage. If providing input of type List[List[BaseMessage]], refer to https://python.langchain.com/docs/concepts/messages/#langchain-messages for support.
num_responses (int, default=5) – The number of sampled responses used to compute consistency.
progress_bar (rich.progress.Progress, default=None) – A progress bar object to display progress.
- Returns:
A list of sampled responses for each prompt.
- Return type:
list of list of str
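A hedged usage sketch of this async call, reusing the uq instance from the constructor example above; the prompt text is hypothetical:

```python
import asyncio

prompts = ["Summarize the history of the Hubble Space Telescope."]  # hypothetical

async def sample():
    # Returns List[List[str]]: num_responses sampled answers per prompt.
    sampled = await uq.generate_candidate_responses(prompts, num_responses=5)
    assert len(sampled[0]) == 5

asyncio.run(sample())
```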
- async generate_original_responses(prompts, top_k_logprobs=None, progress_bar=None)#
This method generates original responses for uncertainty estimation. If specified in the child class, all responses are postprocessed using the callable function defined by the user.
- Parameters:
prompts (List[Union[str, List[BaseMessage]]]) – List of prompts from which LLM responses will be generated. Prompts in the list may be strings or lists of BaseMessage. If providing input of type List[List[BaseMessage]], refer to https://python.langchain.com/docs/concepts/messages/#langchain-messages for support.
progress_bar (rich.progress.Progress, default=None) – A progress bar object to display progress.
- Returns:
A list of original responses for each prompt.
- Return type:
list of str
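The same async pattern applies, assuming the prompts list and uq instance from the previous sketch:

```python
import asyncio

async def generate():
    # Returns List[str]: one original response per prompt.
    originals = await uq.generate_original_responses(prompts)
    print(originals[0])

asyncio.run(generate())
```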
- async uncertainty_aware_decode(claim_sets, claim_scores, response_refinement_threshold=0.3333333333333333, show_progress_bars=True)#
This method filters out claims with confidence scores below response_refinement_threshold and reconstructs each refined response from the retained claims.
- Parameters:
claim_sets (List[List[str]]) – List of original responses decomposed into lists of claims.
claim_scores (List[List[float]]) – List of lists of claim-level confidence scores to be used for uncertainty-aware filtering.
response_refinement_threshold (float, default=1/3) – Threshold for uncertainty-aware filtering. Claims with confidence scores below this threshold are dropped from the refined response. Only used if response_refinement is True.
show_progress_bars (bool, default=True) – If True, displays progress bars during uncertainty-aware decoding.
- Returns:
A list of refined responses reconstructed from the retained claims.
- Return type:
List[str]
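A sketch with hand-made claim sets and scores (both hypothetical); the second claim falls below the default 1/3 threshold, so the refined response would be reconstructed from the first claim only:

```python
import asyncio

# Hypothetical claims and confidence scores; in practice these come from
# claim decomposition and the claim_filtering_scorer.
claim_sets = [["The telescope launched in 1990.", "It orbits Mars."]]
claim_scores = [[0.92, 0.10]]

async def refine():
    refined = await uq.uncertainty_aware_decode(
        claim_sets,
        claim_scores,
        response_refinement_threshold=1 / 3,  # claims scoring below 1/3 are dropped
    )
    print(refined[0])

asyncio.run(refine())
```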
References
Jiang et al., 2024. https://arxiv.org/abs/2410.20783