uqlm.utils.grader.LLMGrader#
- class uqlm.utils.grader.LLMGrader(llm, max_calls_per_min=None)#
Bases: object
- __init__(llm, max_calls_per_min=None)#
Class for grading LLM responses against a provided set of ground-truth (ideal) answers for the given prompts.
- Parameters:
llm (langchain BaseChatModel) – A langchain BaseChatModel. The user is responsible for specifying temperature and other relevant parameters in the constructor of their llm object.
max_calls_per_min (int, default=None) – Specifies the maximum number of API calls to make per minute to avoid rate-limit errors. By default, no limit is applied.
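A minimal construction sketch; the ChatOpenAI model and its settings here are illustrative assumptions, since any langchain BaseChatModel can be passed:

    from langchain_openai import ChatOpenAI  # illustrative; any langchain BaseChatModel works
    from uqlm.utils.grader import LLMGrader

    # Temperature and other generation settings are configured on the llm object itself.
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

    # Optionally cap requests at 60 API calls per minute to avoid rate-limit errors.
    grader = LLMGrader(llm=llm, max_calls_per_min=60)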
Methods
__init__(llm[, max_calls_per_min]) – Class for grading LLM responses against a provided set of ground-truth (ideal) answers for the given prompts.
grade_responses(prompts, responses, answers) – Grade LLM responses against the provided ideal (correct) answers.
- async grade_responses(prompts, responses, answers, progress_bar=None)#
- Return type:
List[bool]
- Parameters:
prompts (list of str) – A list of input prompts for the model.
responses (list of str) – A list of model responses for the prompts.
answers (list of str) – A list of ideal (correct) responses.
progress_bar (rich.progress.Progress, default=None) – If provided, displays a progress bar while scoring responses.
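Example usage, as a hedged sketch: the model choice, prompts, and expected output are illustrative, and the coroutine is driven with asyncio.run:

    import asyncio
    from langchain_openai import ChatOpenAI  # illustrative; any langchain BaseChatModel works
    from uqlm.utils.grader import LLMGrader

    grader = LLMGrader(llm=ChatOpenAI(model="gpt-4o-mini", temperature=0))

    prompts = ["What is the capital of France?"]
    responses = ["The capital of France is Paris."]
    answers = ["Paris"]

    # grade_responses is a coroutine, so run it inside an event loop; it returns
    # one boolean grade per (prompt, response, answer) triple.
    grades = asyncio.run(grader.grade_responses(prompts=prompts, responses=responses, answers=answers))
    print(grades)  # e.g. [True]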