uqlm.utils.response_generator.ResponseGenerator
- class uqlm.utils.response_generator.ResponseGenerator(llm=None, max_calls_per_min=None, use_n_param=False)
Bases: object
- __init__(llm=None, max_calls_per_min=None, use_n_param=False)
Class for generating data from a provided set of prompts
- Parameters:
llm (langchain BaseChatModel, default=None) – A LangChain BaseChatModel. The user is responsible for specifying temperature and other relevant parameters in the constructor of their llm object.
max_calls_per_min (int, default=None) – Used to control rate limiting.
use_n_param (bool, default=False) – Specifies whether to use the n parameter of BaseChatModel. Not compatible with all BaseChatModel classes. If used, it substantially speeds up generation when count > 1.
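A minimal constructor sketch follows. It is illustrative rather than prescriptive: any LangChain BaseChatModel can be supplied, and the ChatOpenAI model name, temperature, and rate limit shown here are placeholder choices.

```python
# Illustrative sketch: wrap any LangChain BaseChatModel in a ResponseGenerator.
# ChatOpenAI and the settings below are placeholder choices, not requirements.
from langchain_openai import ChatOpenAI
from uqlm.utils.response_generator import ResponseGenerator

llm = ChatOpenAI(model="gpt-4o-mini", temperature=1.0)  # temperature is set by the user on the llm object
generator = ResponseGenerator(llm=llm, max_calls_per_min=60)
```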
Methods
__init__([llm, max_calls_per_min, use_n_param]) – Class for generating data from a provided set of prompts
generate_responses(prompts[, system_prompt, ...]) – Generates responses from a provided set of prompts.
- async generate_responses(prompts, system_prompt=None, count=1, progress_bar=None)
Generates responses from a provided set of prompts. For each prompt, count responses are generated.
- Parameters:
prompts (List[Union[str, List[BaseMessage]]]) – List of prompts from which LLM responses will be generated. Prompts in the list may be strings or lists of BaseMessage. If providing input of type List[List[BaseMessage]], refer to https://python.langchain.com/docs/concepts/messages/#langchain-messages for the supported message format.
system_prompt (str, default=None) – Optional argument for the user to provide a custom system prompt. If prompts are a list of strings and system_prompt is None, defaults to “You are a helpful assistant.”
count (int, default=1) – Specifies the number of responses to generate for each prompt.
progress_bar (rich.progress.Progress, default=None) – If provided, displays a progress bar while responses are generated.
- Returns:
A dictionary with two keys: ‘data’ and ‘metadata’.
- ‘data’ : dict
  A dictionary containing the prompts and responses.
  - ‘prompt’ : list
    A list of prompts.
  - ‘response’ : list
    A list of responses corresponding to the prompts.
- ‘metadata’ : dict
  A dictionary containing metadata about the generation process.
  - ‘temperature’ : float
    The temperature parameter used in the generation process.
  - ‘count’ : int
    The value of the count parameter used in generation, i.e. the number of responses generated per prompt.
  - ‘system_prompt’ : str
    The system prompt used for generating responses.
- Return type:
Dict[str, Any]
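Because generate_responses is a coroutine, it must be awaited or driven with an event loop. The sketch below is illustrative only: the prompts, system prompt, and count are placeholders, and generator is assumed to be the ResponseGenerator instance constructed earlier.

```python
# Illustrative sketch (assumes `generator` is the ResponseGenerator built above).
import asyncio

prompts = ["What is the capital of France?", "Summarize the water cycle in two sentences."]

# Generate two responses per prompt with a custom system prompt.
results = asyncio.run(
    generator.generate_responses(
        prompts,
        system_prompt="You are a helpful assistant.",
        count=2,
    )
)

print(results["data"]["prompt"])    # list of prompts
print(results["data"]["response"])  # list of responses corresponding to the prompts
print(results["metadata"])          # includes temperature, count, and system_prompt
```

In an environment with an already running event loop (for example, a Jupyter notebook), call await generator.generate_responses(...) directly instead of using asyncio.run.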