uqlm.scorers.baseclass.uncertainty.UncertaintyQuantifier#
- class uqlm.scorers.baseclass.uncertainty.UncertaintyQuantifier(llm=None, device=None, system_prompt='You are a helpful assistant', max_calls_per_min=None, use_n_param=False, postprocessor=None)#
Bases: object
- __init__(llm=None, device=None, system_prompt='You are a helpful assistant', max_calls_per_min=None, use_n_param=False, postprocessor=None)#
Parent class for uncertainty quantification of LLM responses
- Parameters:
llm (BaseChatModel) – A LangChain LLM object to be passed to the chain constructor. The user is responsible for specifying temperature and other relevant parameters in the constructor of their llm object.
device (str or torch.device, default="cpu") – Specifies the device that the NLI model uses for prediction. Only applies to the 'semantic_negentropy' and 'noncontradiction' scorers. Pass a torch.device to leverage GPU.
system_prompt (str or None, default="You are a helpful assistant") – Optional argument allowing the user to provide a custom system prompt.
max_calls_per_min (int, default=None) – Specifies how many API calls to make per minute to avoid rate limit errors. By default, no limit is imposed.
use_n_param (bool, default=False) – Specifies whether to use the n parameter of BaseChatModel. Not compatible with all BaseChatModel classes. Where supported, it substantially speeds up generation when num_responses > 1.
postprocessor (callable, default=None) – A user-defined function that takes a string input and returns a string. Used for postprocessing outputs.
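Since any callable mapping a string to a string can serve as a postprocessor, a minimal sketch might normalize responses before scoring. The function name and normalization rules below are illustrative, not part of uqlm:

```python
def normalize_answer(text: str) -> str:
    """Strip whitespace and lowercase so semantically equal answers compare equal."""
    return text.strip().lower()

# This callable would be supplied as postprocessor=normalize_answer
# when constructing a scorer.
print(normalize_answer("  Paris \n"))
```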
Methods
__init__([llm, device, system_prompt, ...]) – Parent class for uncertainty quantification of LLM responses
generate_candidate_responses(prompts) – This method generates multiple responses for uncertainty estimation.
generate_original_responses(prompts) – This method generates original responses for uncertainty estimation.
- async generate_candidate_responses(prompts)#
This method generates multiple candidate responses per prompt for uncertainty estimation. If a postprocessor is specified in the child class, all responses are postprocessed using that user-defined callable.
- Return type:
List[List[str]]
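The List[List[str]] return shape holds one inner list of candidate responses per prompt. As a sketch of how such output could feed an uncertainty estimate, the snippet below computes a simple per-prompt agreement fraction; the data and the agreement function are illustrative assumptions, not uqlm APIs:

```python
from collections import Counter

# Hypothetical output shape: one inner list of candidates per prompt.
candidates = [
    ["Paris", "Paris", "Lyon"],
    ["4", "4", "4"],
]

def agreement(responses: list[str]) -> float:
    """Fraction of candidates matching the most common response (illustrative)."""
    top_count = Counter(responses).most_common(1)[0][1]
    return top_count / len(responses)

scores = [agreement(r) for r in candidates]
```

Higher agreement among candidates suggests lower uncertainty; uqlm's scorers implement more sophisticated versions of this idea.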
- async generate_original_responses(prompts)#
This method generates one original response per prompt for uncertainty estimation. If a postprocessor is specified in the child class, all responses are postprocessed using that user-defined callable.
- Return type:
List[str]
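Because both generation methods are coroutines, callers must await them inside an event loop. The sketch below shows only the async call pattern: StubQuantifier is a stand-in that echoes prompts instead of calling an LLM, and is not part of uqlm:

```python
import asyncio

class StubQuantifier:
    """Stand-in for a uqlm child class; a real one would await LLM calls."""

    async def generate_original_responses(self, prompts: list[str]) -> list[str]:
        # Echo each prompt instead of querying an LLM.
        return [f"response to: {p}" for p in prompts]

async def main() -> list[str]:
    uq = StubQuantifier()
    return await uq.generate_original_responses(["What is 2+2?"])

responses = asyncio.run(main())  # List[str], one response per prompt
```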