{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# ๐ŸŽฏ Tunable Ensemble for LLM Uncertainty (Advanced)\n", "\n", "
\n", "Ensemble UQ methods combine multiple individual scorers to provide a more robust uncertainty estimate. They offer high flexibility and customizability, allowing you to tailor the ensemble to specific use cases. This ensemble can leverage any combination of black-box, white-box, or LLM-as-a-Judge scorers offered by uqlm. Below is a list of the available scorers:\n",
"\n",
"#### Black-Box (Consistency) Scorers\n",
"* Non-Contradiction Probability ([Chen & Mueller, 2023](https://arxiv.org/abs/2308.16175); [Lin et al., 2025](https://arxiv.org/abs/2305.19187); [Manakul et al., 2023](https://arxiv.org/abs/2303.08896))\n",
"* Semantic Negentropy (based on [Farquhar et al., 2024](https://www.nature.com/articles/s41586-024-07421-0); [Kuhn et al., 2023](https://arxiv.org/pdf/2302.09664))\n",
"* Exact Match ([Cole et al., 2023](https://arxiv.org/abs/2305.14613); [Chen & Mueller, 2023](https://arxiv.org/abs/2308.16175))\n",
"* BERT-score ([Manakul et al., 2023](https://arxiv.org/abs/2303.08896); [Zhang et al., 2020](https://arxiv.org/abs/1904.09675))\n",
"* BLEURT ([Sellam et al., 2020](https://arxiv.org/abs/2004.04696))\n",
"* Normalized Cosine Similarity ([Shorinwa et al., 2024](https://arxiv.org/pdf/2412.05563); [HuggingFace](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2))\n",
"\n",
"#### White-Box (Token-Probability-Based) Scorers\n",
"* Minimum token probability ([Manakul et al., 2023](https://arxiv.org/abs/2303.08896))\n",
"* Length-Normalized Joint Token Probability ([Malinin & Gales, 2021](https://arxiv.org/pdf/2002.07650))\n",
"\n",
"#### LLM-as-a-Judge Scorers\n",
"* Categorical LLM-as-a-Judge ([Manakul et al., 2023](https://arxiv.org/abs/2303.08896); [Chen & Mueller, 2023](https://arxiv.org/abs/2308.16175); [Luo et al., 2023](https://arxiv.org/pdf/2303.15621))\n",
"* Continuous LLM-as-a-Judge ([Xiong et al., 2024](https://arxiv.org/pdf/2306.13063))\n",
"
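\n",
"Each of these scorers can be referenced by name (or, for LLM-as-a-Judge, by passing a chat model instance) in the `scorers` argument of `UQEnsemble`, which is documented in Section 2 below. A minimal sketch (`llm` stands in for any LangChain chat model):\n",
"\n",
"```python\n",
"scorers = [\"noncontradiction\", \"min_probability\", llm]  # black-box, white-box, LLM-as-a-Judge\n",
"```\n",
"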
## 📊 What You'll Do in This Demo\n",
"\n",
"1. **Set up LLM and prompts.** Set up the LLM instance and load example data prompts.\n",
"\n",
"2. **Tune Ensemble Weights.** Tune the ensemble weights on a set of tuning prompts. You will execute a single `UQEnsemble.tune()` method that will generate responses, compute confidence scores, and optimize weights using a provided answer key corresponding to the provided questions.\n",
"\n",
"3. **Generate LLM Responses and Confidence Scores with Tuned Ensemble.** Generate and score LLM responses to the example questions using the tuned `UQEnsemble()` object.\n",
"\n",
"4. **Evaluate Hallucination Detection Performance.** Visualize LLM accuracy at different thresholds of the ensemble score that combines the various scorers. Compute precision, recall, and F1-score of hallucination detection.\n",
"\n",
"## ⚖️ Advantages & Limitations\n",
"\n",
"
**Pros:** Highly flexible and customizable; the ensemble can combine any mix of black-box, white-box, and LLM-as-a-Judge scorers, with weights that can be tuned to a specific use case.\n",
"\n",
"**Cons:** Weight tuning requires a set of tuning prompts with graded responses (an answer key or grader function), and running several component scorers adds generation and compute cost.\n",
"
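\n",
"At a glance, the tuned-ensemble workflow in this demo looks like the following sketch (abbreviated; `answers` is a placeholder for the answer key, and full usage appears in the sections below):\n",
"\n",
"```python\n",
"from uqlm import UQEnsemble\n",
"\n",
"uqe = UQEnsemble(llm=llm, scorers=scorers)  # 1. set up LLM and scorers\n",
"tune_results = await uqe.tune(prompts=tune_prompts, ground_truth_answers=answers)  # 2. tune weights\n",
"results = await uqe.generate_and_score(prompts=test_prompts)  # 3. generate and score\n",
"df = results.to_df()  # 4. evaluate hallucination detection\n",
"```\n",
"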
" ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "tags": [] }, "outputs": [], "source": [ "import numpy as np\n", "from sklearn.metrics import precision_score, recall_score, f1_score\n", "\n", "from uqlm import UQEnsemble\n", "from uqlm.utils import load_example_dataset, math_postprocessor, plot_model_accuracies" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 1. Set up LLM and Prompts" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this demo, we will illustrate this approach using a set of math questions from the [GSM8K benchmark](https://github.com/openai/grade-school-math). To implement with your use case, simply **replace the example prompts with your data**. " ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loading dataset - gsm8k...\n", "Processing dataset...\n", "Dataset ready!\n" ] }, { "data": { "text/html": [ "
" ], "text/plain": [ " question answer\n", "0 Natalia sold clips to 48 of her friends in Apr... 72\n", "1 Weng earns $12 an hour for babysitting. Yester... 10\n", "2 Betty is saving money for a new wallet which c... 5\n", "3 Julie is reading a 120-page book. Yesterday, s... 42\n", "4 James writes a 3-page letter to 2 different fr... 624" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Load example dataset (GSM8K)\n", "gsm8k = load_example_dataset(\"gsm8k\", n=100)\n", "gsm8k.head()" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "tags": [] }, "outputs": [], "source": [ "gsm8k_tune = gsm8k.iloc[0:50]\n", "gsm8k_test = gsm8k.iloc[51:100]" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "tags": [] }, "outputs": [], "source": [ "# Define prompts\n",
"MATH_INSTRUCTION = (\n",
"    \"When you solve this math problem only return the answer with no additional text.\\n\"\n",
")\n",
"tune_prompts = [MATH_INSTRUCTION + prompt for prompt in gsm8k_tune.question]\n",
"test_prompts = [MATH_INSTRUCTION + prompt for prompt in gsm8k_test.question]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, we use `ChatVertexAI` and `AzureChatOpenAI` to instantiate our LLMs, but any [LangChain Chat Model](https://js.langchain.com/docs/integrations/chat/) may be used. Be sure to **replace with your LLM of choice.**" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "tags": [] }, "outputs": [], "source": [ "# import sys\n",
"# !{sys.executable} -m pip install python-dotenv\n",
"# !{sys.executable} -m pip install langchain-openai\n",
"\n",
"# User to populate .env file with API credentials. In this step, replace with your LLM of choice.\n",
"import os\n",
"\n",
"from dotenv import load_dotenv, find_dotenv\n",
"from langchain_openai import AzureChatOpenAI\n",
"\n",
"load_dotenv(find_dotenv())\n",
"gpt = AzureChatOpenAI(\n",
"    deployment_name=os.getenv(\"DEPLOYMENT_NAME\"),\n",
"    openai_api_key=os.getenv(\"API_KEY\"),\n",
"    azure_endpoint=os.getenv(\"API_BASE\"),\n",
"    openai_api_type=os.getenv(\"API_TYPE\"),\n",
"    openai_api_version=os.getenv(\"API_VERSION\"),\n",
"    temperature=1,  # User to set temperature\n",
")" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "tags": [] }, "outputs": [], "source": [ "# import sys\n",
"# !{sys.executable} -m pip install langchain-google-vertexai\n",
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"gemini = ChatVertexAI(model=\"gemini-pro\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 2. Tune Ensemble" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### `UQEnsemble()` - Ensemble of uncertainty scorers\n",
"\n",
"#### 📋 Class Attributes\n",
"\n",
"
| Parameter | Type & Default | Description |\n",
"| --- | --- | --- |\n",
"| `llm` | `BaseChatModel`, default=None | A langchain llm `BaseChatModel`. User is responsible for specifying temperature and other relevant parameters to the constructor of the provided `llm` object. |\n",
"| `scorers` | List, default=None | Specifies which black-box, white-box, or LLM-as-a-Judge scorers to include in the ensemble. List containing instances of `BaseChatModel`, `LLMJudge`, black-box scorer names from ['semantic_negentropy', 'noncontradiction', 'exact_match', 'bert_score', 'bleurt', 'cosine_sim'], or white-box scorer names from ['normalized_probability', 'min_probability']. If None, defaults to the off-the-shelf BS Detector ensemble by Chen & Mueller, 2023, which uses the components ['noncontradiction', 'exact_match', 'self_reflection'] with respective weights of [0.56, 0.14, 0.3]. |\n",
"| `device` | str or torch.device, default=\"cpu\" | Specifies the device that the NLI model uses for prediction. Only applies to the 'semantic_negentropy' and 'noncontradiction' scorers. Pass a torch.device to leverage GPU. |\n",
"| `use_best` | bool, default=True | Specifies whether to swap the original response for the uncertainty-minimized response among all sampled responses, based on semantic entropy clusters. Only used if `scorers` includes 'semantic_negentropy' or 'noncontradiction'. |\n",
"| `system_prompt` | str or None, default=\"You are a helpful assistant.\" | Optional argument for user to provide a custom system prompt for the LLM. |\n",
"| `max_calls_per_min` | int, default=None | Specifies how many API calls to make per minute to avoid rate limit errors. By default, no limit is specified. |\n",
"| `use_n_param` | bool, default=False | Specifies whether to use the n parameter for `BaseChatModel`. Not compatible with all `BaseChatModel` classes. If used, it speeds up the generation process substantially when num_responses is large. |\n",
"| `postprocessor` | callable, default=None | A user-defined function that takes a string input and returns a string. Used for postprocessing outputs. |\n",
"| `sampling_temperature` | float, default=1 | The 'temperature' parameter used by the LLM to generate sampled responses. Must be greater than 0. |\n",
"| `weights` | list of floats, default=None | Specifies the weight for each component in the ensemble. If None and `scorers` is not None, defaults to equal weights for each scorer. These weights are updated when the tune method is executed. |\n",
"| `nli_model_name` | str, default=\"microsoft/deberta-large-mnli\" | Specifies which NLI model to use. Must be acceptable input to AutoTokenizer.from_pretrained() and AutoModelForSequenceClassification.from_pretrained(). |\n",
"
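\n",
"For example, the NLI-based scorers can be pointed at a different device or entailment checkpoint (a sketch; per the table above, any checkpoint accepted by AutoModelForSequenceClassification.from_pretrained() should work):\n",
"\n",
"```python\n",
"uqe = UQEnsemble(\n",
"    llm=llm,\n",
"    scorers=[\"noncontradiction\", \"semantic_negentropy\"],\n",
"    nli_model_name=\"microsoft/deberta-large-mnli\",  # the default, shown for illustration\n",
"    device=torch.device(\"cuda\"),\n",
")\n",
"```\n",
"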
\n", "\n", "#### ๐Ÿ” Parameter Groups\n", "\n", "
* 🧠 LLM-Specific\n",
"* 📊 Confidence Scores\n",
"* 🖥️ Hardware\n",
"* ⚡ Performance\n",
"\n",
"#### 💻 Usage Examples\n",
"\n",
"```python\n",
"# Basic usage with default parameters\n",
"uqe = UQEnsemble(llm=llm)\n",
"\n",
"# Using GPU acceleration\n",
"uqe = UQEnsemble(llm=llm, device=torch.device(\"cuda\"))\n",
"\n",
"# Custom scorer list\n",
"uqe = UQEnsemble(llm=llm, scorers=[\"bert_score\", \"exact_match\", llm])\n",
"\n",
"# High-throughput configuration with rate limiting\n",
"uqe = UQEnsemble(llm=llm, max_calls_per_min=200, use_n_param=True)\n",
"```" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Using cuda device\n" ] } ], "source": [ "import torch\n",
"\n",
"# Set the torch device\n",
"if torch.cuda.is_available():  # NVIDIA GPU\n",
"    device = torch.device(\"cuda\")\n",
"elif torch.backends.mps.is_available():  # macOS\n",
"    device = torch.device(\"mps\")\n",
"else:\n",
"    device = torch.device(\"cpu\")  # CPU\n",
"print(f\"Using {device.type} device\")" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "tags": [] }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "Some weights of the model checkpoint at microsoft/deberta-large-mnli were not used when initializing DebertaForSequenceClassification: ['config']\n", "- This IS expected if you are initializing DebertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n", "- This IS NOT expected if you are initializing DebertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n" ] } ], "source": [ "scorers = [\n",
"    \"exact_match\",  # Measures proportion of candidate responses that match original response (black-box)\n",
"    \"noncontradiction\",  # Mean non-contradiction probability between candidate responses and original response (black-box)\n",
"    \"normalized_probability\",  # Length-normalized joint token probability (white-box)\n",
"    gpt,  # LLM-as-a-judge (self)\n",
"    gemini,  # LLM-as-a-judge (separate LLM)\n",
"]\n",
"\n",
"uqe = UQEnsemble(\n",
"    llm=gpt,\n",
"    device=device,\n",
"    max_calls_per_min=175,\n",
"    # postprocessor=math_postprocessor,\n",
"    use_n_param=True,  # Set True if using AzureChatOpenAI for faster generation\n",
"    scorers=scorers,\n",
")" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "### 🔄 Class Methods: Tuning\n",
"\n",
"![Sample Image](https://raw.githubusercontent.com/cvs-health/uqlm/develop/assets/images/uqensemble_tune.png)\n",
"\n",
"
**`UQEnsemble.tune`**\n",
"\n",
"Generate responses from the provided prompts, grade responses with the provided grader function, and tune the ensemble weights. If the weight and threshold objectives match, joint optimization is performed; otherwise, weights and threshold are optimized sequentially. If an optimization problem has fewer than three choice variables, grid search is used.\n",
"\n",
"Parameters:\n",
"\n",
"* `prompts` - (list of str) A list of input prompts for the model.\n",
"* `ground_truth_answers` - (List[str]) A list of ideal (correct) responses.\n",
"* `grader_function` - (callable, default=None) A user-defined function that takes a response and a ground truth 'answer' and returns a boolean indicator of whether the response is correct. If not provided, vectara's HHEM is used: https://huggingface.co/vectara/hallucination_evaluation_model\n",
"* `num_responses` - (int, default=5) The number of sampled responses used to compute consistency.\n",
"* `thresh_objective` - (str, default='fbeta_score') Objective function for threshold optimization via grid search. One of {'fbeta_score', 'accuracy_score', 'balanced_accuracy_score', 'roc_auc', 'log_loss'}.\n",
"* `thresh_bounds` - (tuple of floats, default=(0,1)) Bounds to search for the threshold.\n",
"* `n_trials` - (int, default=100) Indicates how many trials to search over with the optuna optimizer.\n",
"* `step_size` - (float, default=0.01) Indicates the step size in grid search, if used.\n",
"* `fscore_beta` - (float, default=1) Value of beta in fbeta_score.\n",
"\n",
"Returns: UQResult containing data (prompts, responses, sampled responses, and confidence scores) and metadata.\n",
"\n",
"💡 **Best For:** Tuning an optimized ensemble for detecting hallucinations in a specific use case.\n",
"
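\n",
"For example, the optimizer's objective and search budget can be customized (a sketch; parameter names as documented above, variables as defined elsewhere in this notebook):\n",
"\n",
"```python\n",
"tune_results = await uqe.tune(\n",
"    prompts=tune_prompts,\n",
"    ground_truth_answers=gsm8k_tune[\"answer\"],\n",
"    grader_function=grade_response,\n",
"    thresh_objective=\"fbeta_score\",  # objective for threshold grid search\n",
"    n_trials=100,  # optuna trials for weight optimization\n",
")\n",
"```\n",
"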
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that below, we are providing a grader function that is specific to our use case (math questions). If you are running this example notebook with your own prompts/questions, update the grader function accordingly. Note that the default grader function, `vectara/hallucination_evaluation_model`, is used if no grader function is provided and generally works well across use cases. " ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "tags": [] }, "outputs": [], "source": [ "def grade_response(response: str, answer: str) -> bool:\n", " return (math_postprocessor(response) == answer)" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Generating responses...\n", "Generating candidate responses...\n", "Computing confidence scores...\n", "Generating LLMJudge scores...\n", "Generating LLMJudge scores...\n", "Grading responses with grader function...\n", "Optimizing weights...\n", "Optimizing threshold with grid search...\n" ] } ], "source": [ "tune_results = await uqe.tune(\n", " prompts=tune_prompts, # prompts for tuning (responses will be generated from these prompts)\n", " ground_truth_answers=gsm8k_tune[\"answer\"], # correct answers to 'grade' LLM responses against\n", " grader_function=grade_response, # grader function to grade responses against provided answers\n", ")" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "tags": [] }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " prompt response \\\n", "0 When you solve this math problem only return t... 72 \n", "1 When you solve this math problem only return t... $10 \n", "2 When you solve this math problem only return t... $20 \n", "3 When you solve this math problem only return t... 48 \n", "4 When you solve this math problem only return t... 624 \n", "\n", " sampled_responses ensemble_score exact_match noncontradiction \\\n", "0 [72, 72, 72, 72, 72] 0.952566 1.0 1.000000 \n", "1 [$10, $10, $10, $10, $10] 0.895037 1.0 1.000000 \n", "2 [$20, $20, $20, $20, $10] 0.762877 0.8 0.801301 \n", "3 [48, 48, 48, 48, 48] 0.941798 1.0 1.000000 \n", "4 [624, 624, 624, 624, 624] 0.999969 1.0 1.000000 \n", "\n", " normalized_probability judge_1 judge_2 \n", "0 0.999188 1.0 0.5 \n", "1 0.999019 0.0 0.5 \n", "2 0.946584 1.0 0.0 \n", "3 0.996091 0.0 1.0 \n", "4 0.999828 1.0 1.0 " ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "result_df = tune_results.to_df()\n", "result_df.head()" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Weight for exact_match: 0.14848838407926002\n", "Weight for noncontradiction: 0.5195905565435162\n", "Weight for normalized_probability: 0.17984565170407496\n", "Weight for judge_1: 0.05749867602904491\n", "Weight for judge_2: 0.09457673164410399\n" ] } ], "source": [ "for i, weight in enumerate(uqe.weights):\n", " print(f\"Weight for {uqe.component_names[i]}: {weight}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 3. Generate LLM Responses and Confidence Scores" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To evaluate hallucination detection performance, we will generate responses and corresponding confidence scores on a holdout set using the tuned ensemble." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### ๐Ÿ”„ Class Methods: Generation + Scoring\n", "\n", "![Sample Image](https://raw.githubusercontent.com/cvs-health/uqlm/develop/assets/images/uqensemble_generate_score.png)\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
**`UQEnsemble.generate_and_score`**\n",
"\n",
"Generate LLM responses, sampled LLM (candidate) responses, and compute confidence scores for the provided prompts.\n",
"\n",
"Parameters:\n",
"\n",
"* `prompts` - (list of str) A list of input prompts for the model.\n",
"* `num_responses` - (int, default=5) The number of sampled responses used to compute consistency.\n",
"\n",
"Returns: UQResult containing data (prompts, responses, sampled responses, and confidence scores) and metadata.\n",
"\n",
"💡 **Best For:** Complete end-to-end uncertainty quantification when starting with prompts.\n",
"\n",
"**`UQEnsemble.score`**\n",
"\n",
"Compute confidence scores on provided LLM responses. Should only be used if responses and sampled responses are already generated.\n",
"\n",
"Parameters:\n",
"\n",
"* `prompts` - (list of str) A list of input prompts for the LLM.\n",
"* `responses` - (list of str) A list of LLM responses for the prompts.\n",
"* `sampled_responses` - (list of list of str, default=None) A list of lists of sampled LLM responses for each prompt. These will be used to compute consistency scores by comparison to the corresponding response from `responses`. Must be provided if using black-box scorers.\n",
"* `logprobs_results` - (list of logprobs_result, default=None) List of lists of dictionaries, each returned by BaseChatModel.agenerate. Must be provided if using white-box scorers.\n",
"\n",
"Returns: UQResult containing data (responses, sampled responses, and confidence scores) and metadata.\n",
"\n",
"💡 **Best For:** Computing uncertainty scores when responses are already generated elsewhere.\n",
"
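\n",
"For instance, if responses were generated elsewhere, scoring them directly looks like the following sketch (`responses` and `sampled_responses` are hypothetical variables shaped as described above):\n",
"\n",
"```python\n",
"# responses: list[str]; sampled_responses: list[list[str]]\n",
"# logprobs_results would also be needed here, since this ensemble includes a white-box scorer\n",
"score_results = await uqe.score(\n",
"    prompts=test_prompts,\n",
"    responses=responses,\n",
"    sampled_responses=sampled_responses,\n",
")\n",
"```\n",
"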
" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Generating responses...\n", "Generating candidate responses...\n", "Computing confidence scores...\n", "Generating LLMJudge scores...\n", "Generating LLMJudge scores...\n" ] } ], "source": [ "test_results = await uqe.generate_and_score(prompts=test_prompts, num_responses=5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 4. Evaluate Hallucination Detection Performance" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To evaluate hallucination detection performance, we 'grade' the responses against an answer key. Again, note that the `grade_response` function is specific to our use case (math questions). **If you are using your own prompts/questions, update the grading method accordingly**." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "tags": [] }, "outputs": [ { "data": { "text/html": [ "
" ], "text/plain": [ " prompt response \\\n",
"0 When you solve this math problem only return t... 160 \n",
"1 When you solve this math problem only return t... 12 \n",
"2 When you solve this math problem only return t... $36 \n",
"3 When you solve this math problem only return t... 9 \n",
"4 When you solve this math problem only return t... 75% \n",
"\n",
" sampled_responses ensemble_score exact_match noncontradiction \\\n",
"0 [68, 176, 152, 80, 72] 0.030060 0.0 0.021150 \n",
"1 [12, 14, 18, 11, 13] 0.158690 0.2 0.240650 \n",
"2 [$36, $36, $36, $36, 36] 0.870801 0.8 0.994231 \n",
"3 [$3, $9, 3, $10, 9] 0.359167 0.2 0.452459 \n",
"4 [75%, 75., 75%, 75%, 75%] 0.873314 0.8 0.998718 \n",
"\n",
" normalized_probability judge_1 judge_2 response_correct \n",
"0 0.106042 0.0 0.0 True \n",
"1 0.021982 0.0 0.0 False \n",
"2 0.989287 1.0 0.0 True \n",
"3 0.205047 1.0 0.0 False \n",
"4 0.990297 1.0 0.0 True " ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "test_result_df = test_results.to_df()\n",
"test_result_df[\"response_correct\"] = [\n",
"    grade_response(r, a) for r, a in zip(test_result_df[\"response\"], gsm8k_test[\"answer\"])\n",
"]\n",
"test_result_df.head(5)" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Baseline LLM accuracy: 0.5714285714285714\n" ] } ], "source": [ "print(f\"\"\"Baseline LLM accuracy: {np.mean(test_result_df[\"response_correct\"])}\"\"\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.1 Filtered LLM Accuracy Evaluation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here, we explore 'filtered accuracy' as a metric for evaluating the performance of our confidence scores. Filtered accuracy measures the change in LLM performance when responses with confidence scores below a specified threshold are excluded. By adjusting the confidence score threshold, we can observe how the accuracy of the LLM improves as less certain responses are filtered out.\n",
"\n",
"We will plot the filtered accuracy across various confidence score thresholds to visualize the relationship between confidence and LLM accuracy. This analysis helps in understanding the trade-off between response coverage (measured by sample size below) and LLM accuracy, providing insights into the reliability of the LLM's outputs."
] }, { "cell_type": "code", "execution_count": 18, "metadata": { "tags": [] }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAjcAAAHECAYAAADFxguEAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjkuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8ekN5oAAAACXBIWXMAAA9hAAAPYQGoP6dpAAB2j0lEQVR4nO3dd1iT19sH8G/YG1EUQVEUxVFx771x1KrYuvdobR1Vah1114HWqljrqgP3aBVHK3VR/TlqxYVbFEVxMUQFQZk57x/PSzCyEggkhO/nunL1yXnWfUKQu+c5QyaEECAiIiLSEwbaDoCIiIhIk5jcEBERkV5hckNERER6hckNERER6RUmN0RERKRXmNwQERGRXmFyQ0RERHqFyQ0RERHpFSY3REREpFeY3BAVMTKZDAcOHNB2GHny22+/wdnZGQYGBvDx8cGcOXNQu3btbM8ZOnQoevToUSDxkeTUqVOQyWR48+ZNgd538+bNKFasWJ6u8ejRI8hkMgQFBWV5jLbqRzljckP5Lqc/Ki4uLvDx8cl0X9o/MIaGhnj27JnSvhcvXsDIyAgymQyPHj3KMY5du3bB0NAQY8aMUSN6UlVSUhJ++ukn1KpVCxYWFrC3t0ezZs3g6+uL5ORkjd0nNjYWY8eOxZQpU/Ds2TN8+eWXmDRpEgICAjR2D226du0aPvvsM5QqVQpmZmZwcXFBnz59EBkZqe3QlMhksmxfc+bM0XaIVIQxuaFCoUyZMti6datS2ZYtW1CmTBmVr7Fx40ZMnjwZu3btQkJCgqZDVEtSUpJW769pSUlJ8PDwwKJFi/Dll1/i33//RWBgIMaMGYOVK1fi1q1bGrtXWFgYkpOT0bVrVzg6OsLCwgJWVlYoUaKExu6hLVFRUWjXrh2KFy+Oo0eP4s6dO/D19YWTkxPi4+Pz7b65ST5fvHihePn4+MDGxkapbNKkSbmKRd9+N0g7mNxQoTBkyBD4+voqlfn6+mLIkCEqnR8aGop///0XU6dOhZubG/z8/DIcs2nTJnzyyScwNTWFo6Mjxo4dq9j35s0bfPXVV3BwcICZmRlq1KiBv/76CwAyfSTi4+MDFxcXxfu01qsFCxbAyckJVapUAQBs27YN9evXh7W1NUqXLo3+/ftn+D/0W7du4dNPP4WNjQ2sra3RokULPHjwAKdPn4axsTHCw8OVjp8wYQJatGiR7efx4sULdO7cGebm5qhYsSL27t2r2Ne2bVulugPSH10TE5MsW0d8fHxw+vRpBAQEYMyYMahduzYqVqyI/v3748KFC6hcuTIAIDExEePHj1e0SjRv3hwXL15UXCetmT8gIAD169eHhYUFmjZtiuDgYADS4wZ3d3cAQMWKFRWtdh//DFJTU+Hl5YVixYqhRIkSmDx5Mj5eI1gul8Pb2xsVKlSAubk5atWqpfQ55BRLmj///BMNGjSAmZkZ7O3t0bNnT8W+xMRETJo0CWXKlIGlpSUaNWqEU6dOZflzOXfuHGJiYrBhwwbUqVMHFSpUQJs2bbB8+XJUqFBBcVxW34m0ev34448oW7YsTE1NUbt2bRw5ckRxblpr6J49e9CqVSuYmZlhx44dAIANGzagWrVqMDMzQ9WqVbF69eosYy1durTiZWtrC5lMplRmZWWlOPby5ctZfoZpP7sNGzagQoUKMDMzAyD9zo0cORIlS5aEjY0N2rZti2vXrinOu3btGtq0aQNra2vY2NigXr16uHTpklKMR48eRbVq1WBlZYVOnTrhxYsXin05fU6Z8ff3h5ubG8zNzdGmTRuVWoxJSwRRPhsyZIjo3r17lvvLly8vli9fnum+0NBQAUAEBgYKe3t7cebMGSGEEGfOnBElS5YUgYGBAoAIDQ3NNoaZM2eKzz//XAghxMqVK0Xbtm2V9q9evVqYmZkJHx8fERwcLAIDAxUxpaamisaNG4tPPvlEHDt2TDx48ED8+eefwt/fXwghxOzZs0WtWrWUrrd8+XJRvnx5pc/AyspKDBo0SNy8eVPcvHlTCCHExo0bhb+/v3jw4IE4f/68aNKkiejcubPivKdPn4rixYsLT09PcfHiRREcHCw2bdok7t69K4QQws3NTfz000+K45OSkoS9vb3YtGlTlp8FAFGiRAmxfv16ERwcLGbMmCEMDQ3F7du3hRBC7NixQ9jZ2YmEhATFOcuWLRMuLi5CLpdnes2aNWuKjh07ZnnPNOPHjxdOTk7C399f3Lp1SwwZMkTY2dmJ6OhoIYQQJ0+eFABEo0aNxKlTp8StW7dEixYtRNOmTYUQQrx7906cOHFC8Z148eKFSElJyfAzWLx4sbCzsxP79u0Tt2/fFiNGjBDW1tZK38P58+eLqlWriiNHjogHDx4IX19fYWpqKk6dOqVSLEII8ddffwlDQ0Mxa9Yscfv2bREUFCQWLlyo2D9y5EjRtGlTcfr0aRESEiKWLFkiTE1Nxb179zL9fM6fPy8AiN9//z3Lzzqn78SyZcuEjY2N2LVrl7h7966YPHmyMDY2Vtwz7XfKxcVF7Nu3Tzx8+FA8f/5cbN++XTg6OirK9u3bJ4oXLy42b96c48/V19dX2NraZihX5TOcPXu2sLS0FJ06dRJXrlwR165dE0II0b59e9GtWzdx8eJFce/ePfHdd9+JEiVKKL4rn3zyiRg4cKC4c+eOuHfvnvj9999FUFCQIh5jY2PRvn17cfHiRXH58mVRrVo10b9/f8V9Vf2crl69KoQQIiwsTJiamgovLy9x9+5dsX37duHg4CAAiNevX+f4GVHBYnJD+U4Tyc3Vq1fFhAkTxLBhw4QQQgwbNkxMnDhRXL16NcfkJjU1VTg7O4sDBw4IIYSIiooSJiYm4uHDh4pjnJycxPTp0zM9/+jRo8LAwEAEBwdnul/V5MbBwUEkJiZmGacQQly8eFEAEG/fvhVCCDFt2jRRoUIFkZSUlOnxixcvFtWqVVO837dvn7CyshJxcXFZ3gOAGD16tFJZo0aNxNdffy2EEOL9+/fCzs5O7NmzR7G/Zs2aYs6cOVle09zcXIwfPz7busXFxQljY2OxY8cORVlSUpJwcnJSJGhpfwxPnDihOObw4cMCgHj//r0QQmT6M//4Z+Do6KiU9CUnJ4uyZcsqvocJCQnCwsJC/Pvvv0oxjhgxQvTr10/lWJo0aSIGDBiQaX0fP34sDA0NxbNnz5TK27VrJ6ZNm5bl5/TDDz8IIyMjUbx4cdGpUyfx008/ifDwcMX+nL4TTk5OYsGCBUplDRo0EN98840QIv13ysfHR+kYV1dXsXPnTqWyefPmiSZNmmQZa5qckpvsPsPZs2cLY2NjERkZqTjmzJkzwsbGRinBTotx3bp1QgghrK2ts0y8fH19BQAREhKiKFu1apVwcHBQvFf1c0pLbqZNmyaqV6+udPyUKVOY3OgoPpaiQmP48OH4448/EB4ej
j/++APDhw9X6bzjx48jPj4eXbp0AQDY29ujQ4cO2LRpEwAgMjISz58/R7t27TI9PygoCGXLloWbm1ue4nd3d4eJiYlS2eXLl9GtWzeUK1cO1tbWaNWqFQCpX0navVu0aAFjY+NMrzl06FCEhITgv//+AyA9tunduzcsLS2zjaVJkyYZ3t+5cwcAYGZmhkGDBik+nytXruDmzZsYOnRoltcTHz3yycyDBw+QnJyMZs2aKcqMjY3RsGFDxb3T1KxZU7Ht6OgIACp3qI2JicGLFy/QqFEjRZmRkRHq16+veB8SEoJ3796hQ4cOsLKyUry2bt2qeLyjSixBQUFZfm9u3LiB1NRUuLm5Kd3jf//7X4Z7fGjBggUIDw/H2rVr8cknn2Dt2rWoWrUqbty4obhnVt+J2NhYPH/+XOkzBoBmzZpl+Iw//Dzi4+Px4MEDjBgxQinW+fPnZxurqnL6eZYvXx4lS5ZUvL927Rri4uJQokQJpXhCQ0MV8Xh5eWHkyJFo3749Fi1alCFOCwsLuLq6Kt037Z7qfE5p7ty5o/SdAjL+HpHuMNJ2AESqcnd3R9WqVdGvXz9Uq1YNNWrUyHaYZpqNGzfi1atXMDc3V5TJ5XJcv34dc+fOVSrPTE77DQwMMvxxz6yD5scJR3x8PDw8PODh4YEdO3agZMmSCAsLg4eHh6JTZU73LlWqFLp16wZfX19UqFABf//9d7Z9OlQ1cuRI1K5dG0+fPoWvry/atm2L8uXLZ3m8m5sb7t69m+f7pvnwD7dMJgMg/cw0JS4uDgBw+PDhDJ3STU1NVY4lu59PXFwcDA0NcfnyZRgaGirt+7A/SmZKlCiBL774Al988QUWLlyIOnXq4Oeff8aWLVty/E6o6sPvY9rnsX79+gx/wD+OPTdy+nl+/LsRFxcHR0fHTL/LaUO858yZg/79++Pw4cP4+++/MXv2bOzevVvR5+nj5E8mk6mUhJN+YMsNFSrDhw/HqVOnVG61iY6OxsGDB7F7924EBQUpXlevXsXr169x7NgxWFtbw8XFJcvOsjVr1sTTp09x7969TPeXLFkS4eHhSv9wqpJ03b17F9HR0Vi0aBFatGiBqlWrZmidqFmzJs6cOZPtaJaRI0diz549+O233+Dq6prh/0Yzk9bS8+H7atWqKd67u7ujfv36WL9+PXbu3Jnj592/f3+cOHECV69ezbAvOTkZ8fHxcHV1hYmJCc6dO6e07+LFi6hevXqOMavK1tYWjo6OuHDhgqIsJSUFly9fVryvXr06TE1NERYWhkqVKim9nJ2dVb5XzZo1s/ze1KlTB6mpqYiMjMxwj9KlS6t8DxMTE7i6uipGS2X3nbCxsYGTk5PSZwxIHZWz+4wdHBzg5OSEhw8fZoj1w47MBaVu3boIDw+HkZFRhnjs7e0Vx7m5uWHixIk4duwYPD09Mww6yEpuPqdq1aohMDBQqezj3yPSIdp9KkZFwZAhQ0Tr1q3F1atXlV5hYWFCCKnPzaRJkzLsf/XqVYbn3snJySIqKkokJycLITLvf/Gh5cuXC0dHx0w7Z/bu3VvRyXjz5s3CzMxMrFixQty7d09cvnxZ/PLLL4pjW7duLWrUqCGOHTsmHj58KPz9/cXff/8thBDi9u3bQiaTiUWLFomQkBDx66+/Cjs7uwx9bj7udxQZGSlMTEzE999/Lx48eCAOHjwo3NzclOr78uVLUaJECUXn0Xv37omtW7cqOo8Kkd6nyMTERCxatCjHnwcAYW9vLzZu3CiCg4PFrFmzhIGBgbh165bScb/99pswMTERdnZ2iv4RWUlISBAtWrQQdnZ24tdffxVBQUHiwYMHYs+ePaJu3bqK+nz77bfCyclJ/P3330odil+9eiWESO+j8WEfho9/xqr0uVm0aJEoXry42L9/v7hz544YNWpUhg7F06dPFyVKlBCbN28WISEhip95Wj8OVWI5efKkMDAwUHQovn79utLPYMCAAUoddy9cuCAWLlwo/vrrr0w/xz///FMMGDBA/PnnnyI4OFjcvXtXLFmyRBgaGoqtW7cKIXL+TixfvlzY2NiI3bt3i7t374opU6Zk21E2zfr164W5ublYsWKFCA4OFtevXxebNm0SS5cuzfLnnianPjfZfYaZ9VmTy+WiefPmolatWuLo0aMiNDRUnDt3Tvzwww/i4sWL4t27d2LMmDHi5MmT4tGjR+Ls2bPC1dVVTJ48Oct49u/fLz78k6fu5/T48WNhYmIiJk2aJO7evSt27NghSpcuzT43OorJDeW7IUOGCAAZXiNGjBBCSMlNZvu3bduW5T/EaXJKbtzd3RUdBD+2Z88eYWJiIqKiooQQQqxdu1ZUqVJFGBsbC0dHRzFu3DjFsdHR0WLYsGGiRIkSwszMTNSoUUPpD9SaNWuEs7OzsLS0FIMHDxYLFizIMbkRQoidO3cKFxcXYWpqKpo0aSIOHTqUob7Xrl0THTt2FBYWFsLa2lq0aNFCPHjwQOk6M2fOFIaGhuL58+eZ1vVDAMSqVatEhw4dhKmpqXBxcVHqPJzm7du3wsLCIsvP72MJCQnC29tbuLu7CzMzM1G8eHHRrFkzsXnzZkUy+v79ezFu3Dhhb28vTE1NRbNmzURgYKDiGppKbpKTk8W3334rbGxsRLFixYSXl5cYPHiw0s9ALpcLHx8fxc+8ZMmSwsPDQ/zvf/9TORYhpE7ctWvXFiYmJsLe3l54enoq9iUlJYlZs2YJFxcXxfeqZ8+e4vr165l+hg8ePBCjRo0Sbm5uwtzcXBQrVkw0aNBA+Pr6Kh2X3XciNTVVzJkzR5QpU0YYGxuLWrVqKRJxIbJOboSQRsql1cXOzk60bNlS+Pn5ZRrrhzSd3AghRGxsrBg3bpxwcnISxsbGwtnZWQwYMECEhYWJxMRE0bdvX0VS7+TkJMaOHatIwlVJbnLzOf3555+iUqVKwtTUVLRo0UJs2rSJyY2OkgnBh5BEhd2IESMQFRWFQ4cOaeyajx49gqurKy5evIi6detq7LpERPmNHYqJCrGYmBjcuHEDO3fu1Fhik5ycjOjoaMyYMQONGzdmYkNEhQ6TG6JCrHv37ggMDMTo0aPRoUMHjVzz3LlzaNOmDdzc3JRm7CUiKiz4WIqIiIj0ilaHgp8+fRrdunWDk5MTZDIZDhw4kO3xL168QP/+/eHm5gYDAwNMmDChQOIkIiKiwkOryU18fDxq1aqFVatWqXR8YmIiSpYsiRkzZqBWrVr5HB0REREVRlrtc9O5c2d07txZ5eNdXFywYsUKAFBMDU9ERET0Ib3vUJyYmIjExETFe7lcjlevXqFEiRKKacCJiIhItwkh8PbtWzg5OcHAIPsHT3qf3Hh7e2Pu3LnaDoOIiIg04MmTJyhbtmy2x+h9cjNt2jR4eXkp3sfExKBcuXJ48uQJbGxstBgZERERqSo2NhbOzs6wtrbO8Vi9T25MTU0zrPILSAunMbkhIiIqXFTpUsJVwYmIiEivaLXlJi4uDiEhIYr3oaGhCAoKQvHixVGuXDlMmzYNz549w9atWxXHBAUFKc6NiopCUFAQ
TExMslymnoiIiIoWrc5QfOrUKbRp0yZD+ZAhQ7B582YMHToUjx49wqlTpxT7MmuOKl++PB49eqTSPWNjY2Fra4uYmBg+liIiIiok1Pn7XeSWX2ByQ0SkO1JTU5GcnKztMEhHmJiYZDnMW52/33rfoZiIiHSPEALh4eF48+aNtkMhHWJgYIAKFSrAxMQkT9dhckNERAUuLbEpVaoULCwsOKkqQS6X4/nz53jx4gXKlSuXp+8EkxsiIipQqampisSmRIkS2g6HdEjJkiXx/PlzpKSkwNjYONfX4VBwIiIqUGl9bCwsLLQcCematMdRqampeboOkxsiItIKPoqij2nqO8HkhoiIiPQKkxsiIiLSK0xuiIiI1HT+/HkYGhqia9eu2g6FMsHkhoiISE0bN27EuHHjcPr0aTx//lxrcSQlJWnt3rqMyQ0RERVK9+8DV65kfN2/n7/3jYuLw549e/D111+ja9eu2Lx5s9L+P//8Ew0aNICZmRns7e3Rs2dPxb7ExERMmTIFzs7OMDU1RaVKlbBx40YAwObNm1GsWDGlax04cECpk+2cOXNQu3ZtbNiwARUqVICZmRkA4MiRI2jevDmKFSuGEiVK4NNPP8WDBw+UrvX06VP069cPxYsXh6WlJerXr48LFy7g0aNHMDAwwKVLl5SO9/HxQfny5SGXy/P6kRU4znNDRESFzv37gJtb1vvv3QMqV86fe//++++oWrUqqlSpgoEDB2LChAmYNm0aZDIZDh8+jJ49e2L69OnYunUrkpKS4O/vrzh38ODBOH/+PH755RfUqlULoaGhePnypVr3DwkJwb59++Dn5wdDQ0MAQHx8PLy8vFCzZk3ExcVh1qxZ6NmzJ4KCgmBgYIC4uDi0atUKZcqUwaFDh1C6dGlcuXIFcrkcLi4uaN++PXx9fVG/fn3FfXx9fTF06NAsl0PQaaKIiYmJEQBETEyMtkMhIiqS3r9/L27fvi3ev3+f62tcviwEkPXr8mUNBvyRpk2bCh8fHyGEEMnJycLe3l6cPHlSCCFEkyZNxIABAzI9Lzg4WAAQx48fz3S/r6+vsLW1VSrbv3+/+PBP9ezZs4WxsbGIjIzMNsaoqCgBQNy4cUMIIcS6deuEtbW1iI6OzvT4PXv2CDs7O5GQkCCEEOLy5ctCJpOJ0NDQbO+jadl9N9T5+10I0zEiIiLtCA4ORmBgIPr16wcAMDIyQp8+fRSPloKCgtCuXbtMzw0KCoKhoSFatWqVpxjKly+PkiVLKpXdv38f/fr1Q8WKFWFjYwMXFxcAQFhYmOLederUQfHixTO9Zo8ePWBoaIj9+/cDkB6RtWnTRnGdwoaPpYiIiFS0ceNGpKSkwMnJSVEmhICpqSl+/fVXmJubZ3ludvsAadFIIYRSWWYrpltaWmYo69atG8qXL4/169fDyckJcrkcNWrUUHQ4zuneJiYmGDx4MHx9feHp6YmdO3dixYoV2Z6jy9hyQ0REpIKUlBRs3boVS5cuRVBQkOJ17do1ODk5YdeuXahZsyYCAgIyPd/d3R1yuRz/+9//Mt1fsmRJvH37FvHx8YqyoKCgHOOKjo5GcHAwZsyYgXbt2qFatWp4/fq10jE1a9ZEUFAQXr16leV1Ro4ciRMnTmD16tVISUmBp6dnjvfWVWy5ISIiUsFff/2F169fY8SIEbC1tVXa16tXL2zcuBFLlixBu3bt4Orqir59+yIlJQX+/v6YMmUKXFxcMGTIEAwfPlzRofjx48eIjIxE79690ahRI1hYWOCHH37A+PHjceHChQwjsTJjZ2eHEiVK4LfffoOjoyPCwsIwdepUpWP69euHhQsXokePHvD29oajoyOuXr0KJycnNGnSBABQrVo1NG7cGFOmTMHw4cNzbO3RZWy5ISKiQsfaOm/7c2Pjxo1o3759hsQGkJKbS5cuoXjx4vjjjz9w6NAh1K5dG23btkVgYKDiuDVr1uDzzz/HN998g6pVq2LUqFGKlprixYtj+/bt8Pf3h7u7O3bt2oU5c+bkGJeBgQF2796Ny5cvo0aNGpg4cSKWLFmidIyJiQmOHTuGUqVKoUuXLnB3d8eiRYsUo63SjBgxAklJSRg+fHguPiHdIRMfP+DTc7GxsbC1tUVMTAxsbGy0HQ4RUZGTkJCA0NBQpXlacuP+feDt24zl1tb5Nwxc382bNw9//PEHrl+/rpX7Z/fdUOfvNx9LERFRocQERnPi4uLw6NEj/Prrr5g/f762w8kzPpYiIiIq4saOHYt69eqhdevWhf6RFMCWGyIioiJv8+bNKnVeLizYckNERER6hckNERER6RUmN0RERKRXmNwQERGRXmFyQ0RERHqFyQ0RERHpFSY3REREekImk+HAgQP5eo85c+agdu3a+XqPvOI8N0REpDNuPI0p0Pu5l824TlR2oqKiMGvWLBw+fBgRERGws7NDrVq1MGvWLDRr1iyfoixY+/fvx+LFi3Hnzh3I5XKUK1cOHTp0gI+PDwBg0qRJGDdunHaDzAGTGyIiIhX16tULSUlJ2LJlCypWrIiIiAgEBAQgOjpa26FpREBAAPr06YMFCxbgs88+g0wmw+3bt3H8+HHFMVZWVrCystJilDnjYykiIiIVvHnzBmfOnMHixYvRpk0blC9fHg0bNsS0adPw2WefKY5btmwZ3N3dYWlpCWdnZ3zzzTeIi4tT7N+8eTOKFSuGv/76C1WqVIGFhQU+//xzvHv3Dlu2bIGLiwvs7Owwfvx4pKamKs5zcXHBvHnz0K9fP1haWqJMmTJYtWpVtjE/efIEvXv3RrFixVC8eHF0794djx49yvL4P//8E82aNcP333+PKlWqwM3NDT169FC6z8ePpWQyWYaXi4uLYv/NmzfRuXNnWFlZwcHBAYMGDcLLly9V+MRzj8kNERGRCtJaLA4cOIDExMQsjzMwMMAvv/yCW7duYcuWLfjnn38wefJkpWPevXuHX375Bbt378aRI0dw6tQp9OzZE/7+/vD398e2bduwbt067N27V+m8JUuWoFatWrh69SqmTp2Kb7/9VqlV5UPJycnw8PCAtbU1zpw5g3PnzsHKygqdOnVCUlJSpueULl0at27dws2bN1X+XF68eKF4hYSEoFKlSmjZsiUAKSFs27Yt6tSpg0uXLuHIkSOIiIhA7969Vb5+bvCxFBERkQqMjIywefNmjBo1CmvXrkXdunXRqlUr9O3bFzVr1lQcN2HCBMW2i4sL5s+fj9GjR2P16tWK8uTkZKxZswaurq4AgM8//xzbtm1DREQErKysUL16dbRp0wYnT55Enz59FOc1a9YMU6dOBQC4ubnh3LlzWL58OTp06JAh3j179kAul2PDhg2QyWQAAF9fXxQrVgynTp1Cx44dM5wzbtw4nDlzBu7u7ihfvjwaN26Mjh07YsCAATA1Nc30cyldujQAQAiBXr16wdbWFuvWrQMA/Prrr6hTpw4WLlyoOH7Tpk1wdnbGvXv34Obmlv2HnktsuSEiIlJRr1698Pz5cxw6dAidOnXCqVOnULduXaVFJ0+cOIF27dqhTJkysLa2xqB
BgxAdHY13794pjrGwsFAkNgDg4OAAFxcXpb4sDg4OiIyMVLp/kyZNMry/c+dOprFeu3YNISEhsLa2VrQ6FS9eHAkJCXjw4EGm51haWuLw4cMICQnBjBkzYGVlhe+++w4NGzZUij8zP/zwA86fP4+DBw/C3NxcEcPJkycV97eyskLVqlUBIMsYNIEtN0RERGowMzNDhw4d0KFDB8ycORMjR47E7NmzMXToUDx69Aiffvopvv76ayxYsADFixfH2bNnMWLECCQlJcHCwgIAYGxsrHRNmUyWaZlcLs91nHFxcahXrx527NiRYV/JkiWzPdfV1RWurq4YOXIkpk+fDjc3N+zZswfDhg3L9Pjt27dj+fLlOHXqFMqUKaMUQ7du3bB48eIM5zg6OqpZI9UxuSEiIsqD6tWrK+aWuXz5MuRyOZYuXQoDA+nhyO+//66xe/33338Z3lerVi3TY+vWrYs9e/agVKlSsLGxyfU9XVxcYGFhgfj4+Ez3nz9/HiNHjsS6devQuHHjDDHs27cPLi4uMDIquJSDj6WIiIhUEB0djbZt22L79u24fv06QkND8ccff+Cnn35C9+7dAQCVKlVCcnIyVq5ciYcPH2Lbtm1Yu3atxmI4d+4cfvrpJ9y7dw+rVq3CH3/8gW+//TbTYwcMGAB7e3t0794dZ86cQWhoKE6dOoXx48fj6dOnmZ4zZ84cTJ48GadOnUJoaCiuXr2K4cOHIzk5OdN+PeHh4ejZsyf69u0LDw8PhIeHIzw8HFFRUQCAMWPG4NWrV+jXrx8uXryIBw8e4OjRoxg2bJjSSDBNY3JDRESkAisrKzRq1AjLly9Hy5YtUaNGDcycOROjRo3Cr7/+CgCoVasWli1bhsWLF6NGjRrYsWMHvL29NRbDd999h0uXLqFOnTqYP38+li1bBg8Pj0yPtbCwwOnTp1GuXDl4enqiWrVqGDFiBBISErJsyWnVqhUePnyIwYMHo2rVqujcuTPCw8Nx7NgxVKlSJcPxd+/eRUREBLZs2QJHR0fFq0GDBgAAJycnnDt3DqmpqejYsSPc3d0xYcIEFCtWTNGylR9kQgiRb1fXQbGxsbC1tUVMTEyemumIiCh3EhISEBoaigoVKsDMzEzb4RQaLi4umDBhgtJoLH2T3XdDnb/fbLkhIiIivcLkhoiIiPQKR0sREREVAtktm0DK2HJDREREeoXJDRERaUURG89CKtDUd4LJDRERFai0mXhzms6fip60BT0NDQ3zdB32uSEiogJlaGiIYsWKKdZNsrCwUCzsSEWXXC5HVFQULCws8jybMZMbIiIqcGkrSX+8MCQVbQYGBihXrlyek10mN0REVOBkMhkcHR1RqlQpJCcnazsc0hEmJiYambmYyQ0REWmNoaFhnvtXEH2MHYqJiIhIrzC5ISIiIr3C5IaIiIj0ilaTm9OnT6Nbt25wcnKCTCbDgQMHcjzn1KlTqFu3LkxNTVGpUiVs3rw53+MkIiKijO7fB65cyfi6f1+7cWm1Q3F8fDxq1aqF4cOHw9PTM8fjQ0ND0bVrV4wePRo7duxAQEAARo4cCUdHR3h4eBRAxERERARICYybW9b7790DKlcuuHg+pNXkpnPnzujcubPKx69duxYVKlTA0qVLAQDVqlXD2bNnsXz5ciY3REREBejt27ztz0+Fqs/N+fPn0b59e6UyDw8PnD9/PstzEhMTERsbq/QiIiIi/VWo5rkJDw+Hg4ODUpmDgwNiY2Px/v17mJubZzjH29sbc+fOVes+N57GAACSEhNx9uRxPHsaBiMjI7i6VUXDpi0zHO9e1lat6xMREVH+KVTJTW5MmzYNXl5eivexsbFwdnbO8bzAf09jptc3sLaxxaOHIajbsAn2bN0ICwtLLF+/HQ6OTvkZNhEREeVSoXosVbp0aURERCiVRUREwMbGJtNWGwAwNTWFjY2N0ksVP8+bgd92HcTeY+ewee/fsC/lgAP/XIBn/yFYOGNSnutCRERE+aNQJTdNmjRBQECAUtnx48fRpEkTjd9LyOUoX8EVAFCjdl08uHcXAPB5/yF4GHJP4/cjIiIqTKyt87Y/P2k1uYmLi0NQUBCCgoIASEO9g4KCEBYWBkB6pDR48GDF8aNHj8bDhw8xefJk3L17F6tXr8bvv/+OiRMnajw2C0srBP57GgBw7PBBFC9RUuP3ICIiKqxMTABjY2l71Srg8uX0lzaHgQNa7nNz6dIltGnTRvE+rW/MkCFDsHnzZrx48UKR6ABAhQoVcPjwYUycOBErVqxA2bJlsWHDhnwZBv79rIWY+OUgvHkVDXsHB6zYsBMA8DIyAl17fqHx+xERERUmc+YAyclA27bA118DMpm2I0onE0IIbQdRkGJjY2Fra4uYmJgs+9+kjZYCgDevX6GYXXEAwB/bffHFwGEZjudoKSIiKkpu3wbc3QG5HPjvP6BRo/y/pyp/v9Po/Wip3Dp5zD9D2epl3rAvJQ1Fb9OxS0GHREREpBNmzJASm549CyaxUReTmyxMGDkAteo1hHHaA0UAcbGx2L5hNSCTMbkhIqIi6cIFYP9+wMAAmD9f29FkjslNFuYuWQm/3dswadYCVKtRCwDQqWlNbPz9Ly1HRkREpB1CANOmSduDBwPVq2s3nqwwuclCjz4D0bBZS8yZPB51GzbBqHGTINOl3lJEREQF7MQJ4ORJaaTUnDnajiZrhWqem4LmVLYc1u3YD3NzCwzt1RnJiUnaDomIiEgrPmy1+fproHx57caTHbbc5EAmk2HIV+PQrHV7XAnMeoFOIiIifbZvnzSHjaUl8MMP2o4me0xuVFSpSjVUqlJN22EQEREVuJQUaYQUAHz3HVCqlHbjyQkfSxEREVG2tmwBgoOBEiWk5EbXMbkhIiKiLCUkpHce/uEHQMX1p7WKyQ0RERFlafVq4OlToGxZ4JtvtB2NapjcEBERUaZiY4GFC6XtOXMAMzOthqMyJjdERESUqaVLgehooEoVYMgQbUejOiY3RERElEFkJLBsmbQ9fz5gVIjGVzO5ISIiogwWLgTi4oB69YBevbQdjXqY3BAREZGSx4+BNWukbW9voLCtPsTkhoiIiJTMmQMkJQFt2gDt22s7GvUxuSEiIiKF27eBrVul7cLYagMwuSEiIqIPzJgByOVAjx5Ao0bajiZ3mNwQERERACAwENi/HzAwkEZIFVaFaGAXqevG05hMy2PfvIFNsWKZ7nMva5uPERERkS6bNk3676BBwCefaDeWvGDLjZ7bvnGNYvtp2CP0bNcY7epXRaemNXHvzi0tRkZERLrkxAngn38AE5P0taQKKyY3eu7Q3l2K7V8Wz0OfQSNwMSQc382Yh5/nTddiZEREpCuESG+1GT0acHHRajh5xuSmCHl4/y76Dh0FAOjQpTteR7/UckRERKQL/PyAS5cAS0tguh78fy/73Oi5t7ExOHX8bwghkJKSorRPCKGlqIiISFekpKQnNF5eQKlS2o1HE5jc6DlHp7LYtn4VAKCEfUlEvHgOB0cnRL+MgrGxiZajIyIibdu6FQgOBkqUAL77TtvRaA
aTGz236Y/DGcr+2O6LXv2HYNPejPuIiKjoSEgAZs+WtqdNA2z1ZMAskxs9d/KYf4ay1cu8YV/KAQDQpmOXgg6JiIh0xJo1wNOnQNmywDffaDsazWFyo+cmjByAWvUawtjYWFEWFxuL7RtWAzIZkxsioiIqNhZYsEDanj0bMDfXbjyaxORGz81dshJ+u7dh0qwFqFajFgCgU9Oa2Pj7X1qOjIiItGnZMiA6GnBzA4YO1XY0msXkRs/16DMQDZu1xJzJ41G3YROMGjcJssK4ChoREWlMVBSwdKm0PX8+YKRn2QDnuSkCnMqWw7od+2FuboGhvTojOTFJ2yEREZEWLVwIxMUB9eoBvXppOxrN07NcjbIik8kw5KtxaNa6Pa4Entd2OEREpCWPHwOrV0vbCxdKi2TqGyY3RUylKtVQqUo1bYehlswWAE1JScH9u7dQtpwLrG0yjl3kAqBERJmbOxdISgLatAE6dNB2NPlDD/M10kcXzv0PLdwroGXNirh0/iwG9+iIqeNGoWvzOrh0/qy2wyMiKhRu3wa2bJG2Fy4E9LULJltuqFBYsehHrN99EG9jY+D11WAsWeOLRs1a4cbVy/h53gxs8ftb2yESEem8mTMBuRzo0QNo3Fjb0eSfPCU3iYmJMDU11VQsRFlKSU5C1U9qAgCsbWzRqFkrAIB7nXp4/y5Om6ERERUKgYHSApkymTRCSp+p9Vjq77//xpAhQ1CxYkUYGxvDwsICNjY2aNWqFRYsWIDnz5/nV5xUxMnlcsV2x097KO1LTU0t4GiIiAqfH36Q/jt4MPDJJ9qNJb+plNzs378fbm5uGD58OIyMjDBlyhT4+fnh6NGj2LBhA1q1aoUTJ06gYsWKGD16NKKiovI7bipiqrvXRtzbWADAt1NnK8qfPAqFlbWNtsIiIioUTpwAAgIAY2NgzhxtR5P/ZEIIkdNBTZo0wYwZM9C5c2cYZDNm7NmzZ1i5ciUcHBwwceJEjQaqKbGxsbC1tUVMTAxsbDL/o5jZ6Jzs6OrIHHXrAehmXbKqxx/bfeHZbzCSk5Jg9tG84bpeD7lcnuF3KfbNG9gUK6ZUpov1IKLCRQigYUPg0iVg/HhgxQptR5Q7qvz9TqNSn5vz51WbF6VMmTJYtGiRSscSqUNfFgC9de0qvvt6CKIiwtGiTQfMWrwCxUvYAwBG9vsMv/99WssREpG+8fOTEhtLS2D6dG1HUzA4WooKBX1ZAPSnuT/gh3lLULNuA2zfsBrDPu+C33YegIOjk/S/V0REGpSSAsyYIW17eQGlSmk3noKiUnLj5eWl8gWXLVuW62CIsqIvC4C+fxeHlu08AABjv58BF9fKGNn3M6zfdUB/J5wgIq3ZuhW4excoXhz47jttR1NwVEpurl69qvT+ypUrSElJQZUqVQAA9+7dg6GhIerVq6f5CImgPwuAvn//Xqm/zaeefWBkZIxRfbsjKSlRy9ERkT5JSEjvPPzDD4BtEerCp1Jyc/LkScX2smXLYG1tjS1btsDOzg4A8Pr1awwbNgwtWrTInyiJkL4A6Nbffi20C4DWqd8IZ/45hlbtOynKOn3mCZlMhmnffqnFyIhI36xZAzx5ApQpA3zzjbajKVgqjZb6UJkyZXDs2DF88tEg+Zs3b6Jjx446P9cNR0tlTxfrklk9QoLv4ErgefQeNDzTcwpLPQBp1NcXA4dluk8X60FEui82FnB1BV6+BNavB0aO1HZEeafx0VIfXzyzeWyioqLw9u1bdS9HlCuFcQFQQH9GfRGRblu2TEps3NyAoUO1HU3BUzu56dmzJ4YNG4alS5eiYcOGAIALFy7g+++/h6enp8YDJNIn+jLqi4h0V1QUsHSptD1/PmBUBMdFq13ltWvXYtKkSejfvz+Sk5OlixgZYcSIEViyZInGAyTSJ/oy6ouIdNfChUBcHFC3LtCrl7aj0Q61kxsLCwusXr0aS5YswYMHDwAArq6usLS01HhwRPpGX0Z9EZFuCgsDVq+Wtr29gWwWFdBrua72ixcv8OLFC1SuXBmWlpZQs18yUZGVNurL3Nyi0I76IiLdNGcOkJQEtG4NdOig7Wi0R+2Wm+joaPTu3RsnT56ETCbD/fv3UbFiRYwYMQJ2dnZYmvagj4iyJJPJMOSrcWjWuj2uBKq2vAkRUXbu3AG2bJG2vb2L9rygarfcTJw4EcbGxggLC4OFhYWivE+fPjhy5IhGgyPSd5WqVMtyODsRkTpmzADkcqB7d6BxY21Ho11qJzfHjh3D4sWLUbZsWaXyypUr4/Hjx7kKYtWqVXBxcYGZmRkaNWqEwMDALI9NTk7Gjz/+CFdXV5iZmaFWrVpMqoiIqEi7eFFaIFMmAxYs0HY02qd2chMfH6/UYpPm1atXMDU1VTuAPXv2wMvLC7Nnz8aVK1dQq1YteHh4IDIyMtPjZ8yYgXXr1mHlypW4ffs2Ro8ejZ49e2ZYIoKIiKiomDZN+u+gQcBHc+wWSWonNy1atMDWrVsV72UyGeRyOX766Se0adNG7QCWLVuGUaNGYdiwYahevTrWrl0LCwsLbNq0KdPjt23bhh9++AFdunRBxYoV8fXXX6NLly7s60NEREXSiRNAQABgbAzMnavtaHSD2h2Kf/rpJ7Rr1w6XLl1CUlISJk+ejFu3buHVq1c4d+6cWtdKSkrC5cuXMS0t5QRgYGCA9u3b4/z5zDtZJiYmwszMTKnM3NwcZ8+ezfL4xMT0BQljY2PVipGIiEhXCSEtigkAX38NuLhoNRydoXbLTY0aNXDv3j00b94c3bt3R3x8PDw9PXH16lW4urqqda2XL18iNTUVDg4OSuUODg4IDw/P9BwPDw8sW7YM9+/fh1wux/Hjx+Hn54cXL15kery3tzdsbW0VL2dnZ7ViJCIi0lX790v9bSwtgenTtR2N7lCr5SY5ORmdOnXC2rVrMV1Ln+KKFSswatQoVK1aFTKZDK6urhg2bFiWj7GmTZsGLy8vxfvY2FgmOEREVOilpEgjpADAywsoVUq78egStVpujI2Ncf36dY3d3N7eHoaGhoiIiFAqj4iIQOnSpTM9p2TJkjhw4ADi4+Px+PFj3L17F1ZWVqhYsWKmx5uamsLGxkbpRUREVNht2ybNbVO8OPDdd9qORreo/Vhq4MCB2Lhxo0ZubmJignr16iEgIEBRJpfLERAQgCZNmmR7rpmZGcqUKYOUlBTs27cP3bt310hMREREui4hQZqNGJD63NjaajUcnaN2h+KUlBRs2rQJJ06cQL169TKsKbVs2TK1rufl5YUhQ4agfv36aNiwIXx8fBAfH49hw4YBAAYPHowyZcrA29sbgLQC+bNnz1C7dm08e/YMc+bMgVwux+TJk9WtChERUaG0dq20jlSZMsA332g7Gt2jdnJz8+ZN1K1bFwBw7949pX25WQCwT58+iIqKwqxZsxAeHo7atWvjyJEjik7GYWFhMPhg5a+EhATMmDEDDx8+hJWVFbp06YJt27ahWLFiat+biIiosHn7Nn2ivjlzAHNzrYajk2SiiK14GRsbC1tbW8TExGTZ/+bG0xi1r
uleVjfbA9WtB6CbdWE9iIjSzZ0rJTVubsCtW4CR2s0UhZMqf7/T5HpV8JCQEBw9ehTv378HAK4KTkRElM+iooC0OWvnzy86iY261E5uoqOj0a5dO7i5uaFLly6K+WVGjBiB79hdm4iIKN94e0uPperWBXr10nY0uourghMRERUCYWHA6tXStrc3YJDrZy/6T+0GrWPHjuHo0aMaXRWciIiIsjd3LpCYCLRuDXTooO1odJvWVwUnIiKi7N29C2zeLG17ewO5GJxcpKjdcpO2Kvi8efMA5H1VcCIiIlXdvy/1OfmYtTVQuXLBx1NQZswA5HKge3egcWNtR6P7tLoqOBERkaru35eGP2fl3j39THAuXgT27ZNaa9Lmt6HsqZ3cpK0K/uuvv8La2hpxcXHw9PTEmDFj4OjomB8xEpEOufE0BrExb2BjW0zlczhfD2lCZi026uwvrH74QfrvoEHAJ59oN5bCQu3kJiwsDM7OzpmuCh4WFoZy5cppJDAi0l2t61RG89bt4dlvMFq281CaRZyINCcgADhxAjA2ljoUk2rU/hepQoUKiIqKylAeHR2NChUqaCQoItJtZZzLo16jpvDxnoMODarDx3sOHj0M0XZYVMTNng0cPAj8/9yyhZ4Q6a02o0cDLi5aDadQUTu5EUJkuoZUXFwczMzMNBIUEek2cwsLDPlqHA78cwFL123B61fR6Ne1DYb26oxDe3dpOzwqov76C+jRA7C3Bzw9gW3bgNevtR1V7h04AAQGApaWQCYPSygbKj+W8vLyAiCNjpo5c6bScPDU1FRcuHABtWvX1niARKTbatdvhNr1G2HK3EU4cmgf9u3cgs8+76ftsEgPyeXZ7+/XDzh3Tprsbv9+6WVoKM0L07OnNNLooynadFZqanpCM3Ei8P9rSZOKVE5url69CkBqublx4wZMTEwU+0xMTFCrVi1MmjRJ8xESke7JZC05CwtLePYdDM++g7UQEBUFBw5kv3/uXKBSJeDqVenY/fuBmzelfisBAcDYsUCDBlKi06MHUK1aAQSdS9u2AXfuAMWLA/zTqj61VwUfNmwYVqxYkeOKnLqKq4JnTxfrwnrolhtPYxDz+jVs7eyUymPfvIFNsWKZnqOL9aDCJTgYqF0bSEgApk4FvvhCeX9W89yEhKQnOufPK+flVapISU7PnlLSoyv94hMTpSHvYWHAkiVMbtLk66rgvr6+hTaxISLNCH/xFF94NEfvzi0REnwHY4b0RvsG1dCxUQ3cu3NT2+GRnklJAQYPlhKbDh2AhQulhSM/fGU1v02lSlJycO4c8Pw5sG4d0LmzNPooOBhYvFiaFM/ZGfjmG+D4cSA5uWDr97G1a6XEpkwZYMwY7cZSWKn0WMrT01PlC/r5+eU6GCIqHBbNmoKvJ07F29gYjBnyBcZ+PwOrtvyOf478haXzZmLdzv3aDpH0yOLFUsdaW1tg06bcLz1QujTw5ZfSKzYW8PeXWnX8/aXEZ80a6VWsGNC1q9Si06mT1KG3oLx9C8yfL23Png2YmxfcvfWJSi03tra2Kr+ISP/Fx71F206fonvvARAC6NarLwCgbadP8So641QRRLkVFJQ+v8vKlZrrEGxjA/TtC+zeDURFSQnOqFFAqVLAmzfAjh3A559LI68++wzw9QVevtTMvbOzfLl0Hzc3YNiw/L+fvlKp5cbX1ze/4yCiQuTDrnoNmjbPch9RXiQmSo+jkpOlVpSBA/PnPqam0qOqzp2llpv//ksfbfXwIfDnn9LLwABo0SK9Q3L58pqN4+VL4Oefpe158wAjtafZpTQ60n2KiAqTEvalEPc2FgCwYPlaRXlURDhMTDnfFWnGnDnAjRtAyZJSP5SCWAnb0BBo1kxKMkJCgOvXpZajOnWkoej/+x8wYYI0oV7dulIScuNGpgMI1ebtLT2WqltXajWi3FNptFTdunUREBAAOzs71KlTJ9NJ/NJcuXJFowFqGkdLZU8X68J66Jbs6vE2NgZxb2PhWMZZqVwX60G67fx5oHlzKaHw85NaS7Tt0SOpj86BA8CZM8rz7ri6po+8atxYSpLU8eSJ1Ck6MRE4cgTw8NBc3PpCndFSKjV6de/eHaampgCAHj165DlAItJP1ja2sLZhIkN5Ex8PDBkiJQ+DBulGYgNIrTUTJkivqChpRuT9+4Fjx4AHD4ClS6WXg4PUT6dnT6BtW+mxV07mzpUSm9atgY4d87ceRYHK89xs2rQJAwYMUCQ5hRVbbrKni3VhPXSLvtSDdNe4ccCvv0pDoW/elEYv6bK4OODoUSnR+esvIOaDXxFra6BLFynR6dxZ6sgMAPfvp69iHhoK9O4tJXO//55xDh+S5Ms8N6NGjULMBz8xJycnPHr0KNdBEhERfSwgQEpsAGnYt64nNgBgZQX06gVs3w5ERkotOV9/DTg6SgnMnj3SyKySJaVEZ948aTRUvXrS6/PP0x9x9e4tJT6UNyonNx838Lx9+xbynBb6ICIiUlFMTPrw56+/LpyPZ0xMpIkGV68Gnj6VRl5NmSIlM0lJwN9/A7NmZX+NtBYdyj2OliIiIp0wcaLUsbZiReCnn7QdTd4ZGACNGgGLFkmzId++Lc2u/Mkn2o5M/6k8il4mkymNkvr4PRERUW79+ac0UZ5MBmzZIj3q0TfVqkkvDw/pcRTlH5WTGyEE3NzcFAlNXFwc6tSpA4OPVhp79eqVZiMkIiK99vKlNDswAHz3nTQEnCgvVE5uOEsxERFpmhBS/5qICOlxzbx52o6I9IHKyc2QIUPyMw4iIiqCdu8G9u6VlhrYsgUwKwITXFtb520/5Uyl5EYIwf41RESkUc+fA2PGSNszZhSdfiiVKwP37mU+KsraWtpPeaNScvPJJ59g1qxZ8PT0hImJSZbH3b9/H8uWLUP58uUxdepUjQVJRET6RQhg5Ejg9WspqfnhB21HVLCYwOQvlZKblStXYsqUKfjmm2/QoUMH1K9fH05OTjAzM8Pr169x+/ZtnD17Frdu3cLYsWPx9ddf53fcRERUiG3YIM35YmoKbN0KGBtrOyLSJyolN+3atcOlS5dw9uxZ7NmzBzt27MDjx4/x/v172Nvbo06dOhg8eDAGDBgAOzu7/I6ZiIgKsdBQwMtL2l6wAKheXbvxkP5RuUMxADRv3hzNOUaPiPRAVmtkxb55A5ss5vznGll5J5cDQ4dK6zG1aCEtQkmkaZyhmIiKrLu3ruMLj+bo3bklQoLvYMyQ3mjfoBo6NqqBe3duajs8vbRiBXD6NGBpCWzeDBgaajsi0kdMboioyFo8eyq+njgVA4Z9hTFDvkCnzzwReP8Fps5dhKXzZmo7PL1z5w4wbZq0vXSptMwCUX5gckNERVZ83Fu07fQpuvceACGAbr36AgDadvoUr6KjtBydfklJAQYPBhITgU6dgC+/1HZEpM+Y3BBRkSWEUGw3aNo8y32Ud97ewKVLQLFi0kgpTp1G+YnJDREVWSXsSyHubSwAYMHytYryqIhwmJgWgalyC8iVK8CPP0rbv/4KlCmj3XhI/6md3LRq1Qpbt27F+/fv
8yMeIqICs3aHH6ysbZTKYt+8gZm5OZau3aydoPRMYqL0OColBejVC+jfX9sRUVGgdnJTp04dTJo0CaVLl8aoUaPw33//5UdcRET5Lvj2jUxHS/Xq0AxvY2O1HZ5emDULuHULKFUKWLOGj6OoYKid3Pj4+OD58+fw9fVFZGQkWrZsierVq+Pnn39GREREfsRIRJQvFs2aks1oqRnaDq/Q+/dfYMkSafu334CSJbUbDxUduepzY2RkBE9PTxw8eBBPnz5F//79MXPmTDg7O6NHjx74559/NB0nEZHGcbRU/omPlx5HCQEMGQJ0767tiKgoyVOH4sDAQMyePRtLly5FqVKlMG3aNNjb2+PTTz/FpEmTNBUjEVG+4Gip/DN5MvDgAVC2LODjo+1oqKhRa/kFAIiMjMS2bdvg6+uL+/fvo1u3bti1axc8PDwg+/+HqUOHDkWnTp3w888/azxgIiJNSRstZWVtw9FSGnT8OLB6tbTt6ysN/yYqSGonN2XLloWrqyuGDx+OoUOHomQmD1Fr1qyJBg0aaCRAIqL8snaHX6blHC2Ve2/eAMOHS9tjxgDt22s1HCqi1E5uAgIC0KJFi2yPsbGxwcmTJ3MdFBGRNlnb2MLahotk5sa33wJPnwKVKgGLF2s7Giqq1O5zU7ZsWdy/fz9D+f379/Ho0SNNxERERIXQgQPA1q2AgQGwZYu0OCaRNqid3AwdOhT//vtvhvILFy5g6NChmoiJiIgKmago4KuvpO3vvweaNtVuPFS0qZ3cXL16Fc2aNctQ3rhxYwQFBWkiJiIiKkSEAEaPBiIjgRo1gLlztR0RFXVqJzcymQxv377NUB4TE4PU1FSNBEVERIXHzp2Anx9gZCQ9ljI11XZEVNSpndy0bNkS3t7eSolMamoqvL290bx582zOJCIiffPsGTB2rLQ9axZQp4524yECcpHcLF68GP/88w+qVKmCYcOGYdiwYahSpQpOnz6NJWnzbKtp1apVcHFxgZmZGRo1aoTAwMBsj/fx8UGVKlVgbm4OZ2dnTJw4EQkJCbm6NxER5Y4QwIgR0vDvBg2AadO0HRGRRO3kpnr16rh+/Tp69+6NyMhIvH37FoMHD8bdu3dRo0YNtQPYs2cPvLy8MHv2bFy5cgW1atWCh4cHIiMjMz1+586dmDp1KmbPno07d+5g48aN2LNnD3744Qe1701ERLn322/A0aOAmZn0OMpI7clFiPJHrr6KTk5OWLhwoUYCWLZsGUaNGoVhw4YBANauXYvDhw9j06ZNmDp1aobj//33XzRr1gz9+/cHALi4uKBfv364cOGCRuIhIqKcPXgAfPedtL1wIVC1qnbjIfpQrvPsd+/eISwsDElJSUrlNWvWVPkaSUlJuHz5MqZ90JZpYGCA9u3b4/z585me07RpU2zfvh2BgYFo2LAhHj58CH9/fwwaNCjT4xMTE5GYmKh4Hxsbq3J8RESUUWoqMHSotDhmq1bSxH1EukTt5CYqKgrDhg3D33//nel+dUZMvXz5EqmpqXBwcFAqd3BwwN27dzM9p3///nj58iWaN28OIQRSUlIwevToLB9LeXt7Yy7HJRIRaYyPD3D2LGBlJa0dZZCnJZiJNE/tr+SECRPw5s0bXLhwAebm5jhy5Ai2bNmCypUr49ChQ/kRo5JTp05h4cKFWL16Na5cuQI/Pz8cPnwY8+bNy/T4adOmISYmRvF68uRJvsdIRKSvbt0Cpk+XtpctAypU0G48RJlRu+Xmn3/+wcGDB1G/fn0YGBigfPny6NChA2xsbODt7Y2uXbuqfC17e3sYGhoiIiJCqTwiIgKlS5fO9JyZM2di0KBBGDlyJADA3d0d8fHx+PLLLzF9+nQYfPS/EKampjDlpAtERHmWnAwMGQIkJgKdOwP//88wkc5RO7mJj49HqVKlAAB2dnaIioqCm5sb3N3dceXKFbWuZWJignr16iEgIAA9evQAAMjlcgQEBGBs2sQJH3n37l2GBMbQ0BAAIIRQszZERIXfjacxAICkxEScPXkcz56GwcjICK5uVdGwacsMx7uXzd2ioAsXApcvA3Z2wIYNgEyWp7CJ8o3ayU2VKlUQHBwMFxcX1KpVC+vWrYOLiwvWrl0LR0dHtQPw8vLCkCFDUL9+fTRs2BA+Pj6Ij49XjJ4aPHgwypQpA29vbwBAt27dsGzZMtSpUweNGjVCSEgIZs6ciW7duimSHCKioibw39OY6fUNrG1s8ehhCOo2bII9WzfCwsISy9dvh4OjU56uf/kyMH++tL1qFeCUt8sR5Su1k5tvv/0WL168AADMnj0bnTp1wo4dO2BiYoLNmzerHUCfPn0QFRWFWbNmITw8HLVr18aRI0cUnYzDwsKUWmpmzJgBmUyGGTNm4NmzZyhZsiS6deuGBQsWqH1vIiJ98fO8Gfht10GUr+CKm0FXsHPzOvy28wD27tyChTMmYcXGnbm+dkICMHgwkJICfPEF0LevBgMnygcykcdnOe/evcPdu3dRrlw52NvbayqufBMbGwtbW1vExMTAxsYm02PSmnhVldsm3vymbj0A3awL66FbWA/dc+NpDL7waI4/jp5VlPXp0gp7/P8HAOjWqj7+/N8lxT516/H998DPPwMODsDNm0Ah+Kee9JAqf7/TqDVaKjk5Ga6urrhz546izMLCAnXr1i0UiQ0Rkb6ysLRC4L+nAQDHDh9E8RIlNXLdM2eApUul7fXrmdhQ4aDWYyljY2Ou4UREpIO+n7UQE78chNevXqKkQ2ms2CA9hnoZGYGuPb/I1TXj4qTJ+oQAhg0DunXTYMBE+UjtPjdjxozB4sWLsWHDBhhxIREiIp1Qo3ZdHA+8hTevX6GYXXFFuX0pB4yeMCVX1/z+e+DhQ6BcOWD5ck1FSpT/1M5OLl68iICAABw7dgzu7u6wtLRU2u/n56ex4IiISDVPHz/C7Mnj8PxpGNp27IrxU2bB1MwMADCwewdsP3hcresdPQqsXStt+/oCtrrZ3YgoU2onN8WKFUOvXr3yIxYiIsql+T94oUPnz1CzbgNs37QGo/p1x5pte2FpZY2kRPW6E7x+DYwYIW2PGwe0bZsPARPlI7WTG19f3/yIg4iI8uBVdBT6Dh0FAFjosw7rVy7FqL7dsW7nfrVn2xs/Hnj2DKhcGVi0KD+iJcpf7DRDRKQHPh7sMWrcdzA2NsGovt3xLi5O5ev4+QHbt0uLYW7dClhYaDpSovyndnJToUIFyLL5v4CHDx/mKSAiIlJfxUpuOHvyBJq3aa8oGzp6HAwMZFg6f6ZK14iMBL76StqeMgVo3Dg/IiXKf2onNxMmTFB6n5ycjKtXr+LIkSP4/vvvNRUXERGp4adVmzItH/zlWHh088zxfCGkxOblS6BmTWD2bE1HSFRwcrX8QmZWrVqFS5cuZbqPiIjyl4mpaZb7VFlXats24MABwNhYehyVzeWIdJ5aMxRnp3Pnzti3b5+mLkdERAXkyROpEzEAzJkD1Kql1XCI8kxjyc3evXtRvHjxnA8kIiKdIYQ07DsmBmjUCJg8WdsREeWd2o+l6tSpo9S
hWAiB8PBwREVFYfXq1RoNjoiI8teaNcDx44C5ObBlC8CJ50kfqP017tGjh9J7AwMDlCxZEq1bt0bVqlU1FRcREeWzkBBpiQVAms+mShXtxkOkKWonN7PZhZ6IqNBLTZUWxXz3DmjTBhg7VtsREWmO2n1u/P39cfTo0QzlR48exd9//62RoIiIKH8tWwacOwdYWwObNkmT9hHpC7W/zlOnTkVqamqGciEEpk6dqpGgiIhIs8KeAHfvSq/ffwemT5fKp04FXFy0GhqRxqn9WOr+/fuoXr16hvKqVasiJCREI0EREZHmhD0BPHtK20IAyZHp+6ZPB774QlpHikhfqN1yY2trm+kSCyEhIbC0tNRIUEREpDnv4qX/CgGkxpll2P/2bQEHRJTP1E5uunfvjgkTJuDBgweKspCQEHz33Xf47LPPNBocERHl3eOHBkiJNUNylA3k7zj1MOk/tR9L/fTTT+jUqROqVq2KsmXLAgCePn2KFi1a4Oeff9Z4gEREpL538cDfh4yxb6cJbgZ98E+9gRyQs/cw6Te1kxtbW1v8+++/OH78OK5duwZzc3PUrFkTLVu2zI/4iIhIRUIAt64ZYt8uE/x90Bjv4qUJVw0NBeRGKTA0T4LMJAXJkbZajpQof+VqLkqZTIaOHTuiY8eOmo6HiIjUFBsD+O83wb5dJgi+bagoL18hFT37JaF67WSMHSe0GCFRwVI7uRk/fjwqVaqE8WmrrP2/X3/9FSEhIfDx8dFUbERElAUhpHlq1q8Hfv/dBgkJUiuNialA+87J6NU/CfUbp0Imk0ZLZcfaugACJipAaic3+/btw6FDhzKUN23aFIsWLWJyQ0SUj16+BLZuBTZsAO7cSSuVwdUtFb36J6GbZzJs7ZRbaco5A37700dNuX6wxrG1NYeBk/5RO7mJjo6GrW3G57U2NjZ4+fKlRoIiIqJ0cjlw6pTUSuPnByQlSeUWFkCfPkDb7nGoWVdqpclKOef0bfey+Roukdap3WW+UqVKOHLkSIbyv//+GxUrVtRIUEREBISHSwtaurkB7doBu3dLiU3dutJq3s+fS0sn1KqXfWJDVNSo3XLj5eWFsWPHIioqCm3btgUABAQEYOnSpXwkRUSUR6mpwLFjUivNn38CKSlSubU1MGAAMGqUlNwQUdbUTm6GDx+OxMRELFiwAPPmzQMAuLi4YM2aNRg8eLDGAyQiKgqePJFaYTZtAsLC0ssbN5YSmj59AH2fBP7G05hMy39dMh9jv5+Rody9LIe0U+ZyNRT866+/xtdff42oqCiYm5vDysoKAPDq1SsUL148h7OJiAgAkpOBw4elVpojR6S+NQBgZwcMGiQlNTVqaDfGgrZj09oMZb9v2wS7EvYAgAHDRxd0SFQI5Sq5SVOyZEkAwLFjx7Bhwwb8+eefeP/+vUYCIyLSVw8fSqOdfH2lfjVpWrWSEppevQCzjEtAFQk//zgdLdp2hG0xO0VZUlIS7t68Dhk7FpGKcp3cPH78GJs2bcKWLVvw+vVrdO7cGVu3btVkbEREeiMxETh4UGqlOXEivbxkSWDoUGDkSKnjcFG3docfViz6Eb36D0Gr9p0AABf/O4t5y1ZrOTIqTNRKbpKSkuDn54cNGzbg3LlzaN++PZ4+fYqrV6/C3d09v2IkIiq07t6VWmm2bJHmqAEAmQzo0EFqpfnsM8DERLsx6pJGzVrht537sXDm9zjhfwhT5i5iiw2pTeXkZty4cdi1axcqV66MgQMHYs+ePShRogSMjY1haGiY8wWIiIqI9++BvXulVpozZ9LLnZyA4cOlV4UK2otP11lZ22ChzzocO3wQw7/oisSEBG2HRIWMysnNmjVrMGXKFEydOhXWnKubiHIQ9iR9RtzkyPRyfZ4R9/p1KaHZvh1480YqMzAAunSRWmm6dAGM8tTTsWjp2LU76jVsgts3grQdChUyKv+abdu2DZs2bYKjoyO6du2KQYMGoXPnzvkZGxEVUmFPAM+e6e+TIpT337tXeBKcnJK0uDhpcr3164HAwPT95csDI0YAw4YBZTkjsMqePArFnMnj8PzZE7Tt2BXjp8xCi7bSIs0Du3fA9oPHtRwhFQYqJzf9+vVDv379EBoais2bN2PMmDF49+4d5HI5bt++jerVq+dnnERUiKQlAwAgUjP2l7h7F7CykkYEmZkBpqZSC4euySlJ690b8PeXEhxAapXp3l1qpWnfHuATe/UtmP4dOnTpjpp1G2D7pjUY1a871mzbC0srayQl8vEUqUbtBtIKFSpg7ty5mDNnDo4dO4aNGzdi4MCBmDBhAjw9PfHLL7/kR5xEVAjJE4yQEmORofyzzzIea2KinOykbavyys3x4a9lMDUFTE0FTEwzf1z0YZKWmd9/l/5bqZKU0AwZAjg45OKDIoVX0VHoO3QUAGChzzqsX7kUo/p2x7qd+8E1JkhVuX76K5PJ4OHhAQ8PD7x69Qpbt26Fr6+vJmMjokJMnmj4/4lNxj9IZmbS0GjxweLVSUnSKza2oCK0UXpnaCglOWnJjqmpAGRAcrTIrAoAgE6dgClTpPlp+HdXMxI+6jw8atx3MDY2wai+3fEurYmMKAca6dpWvHhxTJgwARMmTNDE5YiokHt4zwApMZYAZJCZJkMkGivtP3cOqFNHWjcpIUF6JSamb6v6UvecD49/nyCQkpyekaSmyvD+HfD+nepZyoIFXOdJ0ypWcsPZkyfQvE17RdnQ0eNgYCDD0vkztRgZFSbst09EGvXooQEWTrcEhAwykxQY2b5DcmTGNYBkMsDYWHppYwDmjaexSE39/xajBBkSE4GkRCAxMW1bhpD7wMKFMkAAQsiQGpvxERtp1k+rNmVaPvjLsfDo5lnA0VBhxeSGiDQmMlyG0QMsEfvGADKjFBjZxmf6uEZXZpMwNATMzQFz8w+ejyF928wKMDBN35NaYI/Mii4TU9Ms9zk4OhVgJFSYMbkhIo2IfQOMHmiJ508NUL5CKhasfAeT//875frBerqFaZ4bixxW4daVJI2IlDG5IaI8e/8eGDfcEiHBhihZSo61O+JRxjm9BcS9kM7zUs4Z8NufPmqqsCZpREWNyslNWFiYSseVK1cu18EQUeGTnAx8/7UFrl40grWtwJrtyolNYVfOOX27sCZpREWNyslNhQ8WQhH/P37zw8XMhBCQyWRITU3VYHhEpMvkcmDOZHOcDjCGqanAyk3xcKsm13ZYRFTEqZzcyGQylC1bFkOHDkW3bt1gxAVSiIq85QvN8OdeExgaCixZ8w51G/J/bohI+1TOUJ4+fYotW7bA19cXa9euxcCBAzFixAhUq1YtP+MjIh3lu9YEW9ZJPYbnLHmP1h1StBwREZFE5dVcSpcujSlTpuDu3bvYu3cvXr9+jUaNGqFx48ZYv3495HI2RRMVFQf2GGP5AnMAgNf09+j+RbKWIyIiSperpeqaN2+OjRs34v79+7CwsMDo0aPx5s0bDYdGRLro5DEjzJ0iJTZDRydi6OgkLUdERKQsV8nNv//+i5EjR8LNzQ1xcXFYtWoVihUrpuHQiEjXnD
(remainder of base64-encoded PNG data omitted: plot of LLM accuracy at varying ensemble confidence-score thresholds)", "text/plain": [ "<Figure size 640x480 with 1 Axes>
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "plot_model_accuracies(\n", " scores=test_result_df.ensemble_score,\n", " correct_indicators=test_result_df.response_correct,\n", ")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 4.2 Precision, Recall, F1-Score of Hallucination Detection" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Lastly, we compute the optimal threshold for binarizing confidence scores, using F1-score as the objective. Using this threshold, we compute precision, recall, and F1-score for black box scorer predictions of whether responses are correct." ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ensemble F1-optimal threshold: 0.36\n" ] } ], "source": [ "# extract optimal threshold\n", "best_threshold = uqe.thresh\n", "\n", "# Define score vector and corresponding correct indicators (i.e. ground truth)\n", "y_scores = test_result_df[\"ensemble_score\"] # confidence score\n", "correct_indicators = (\n", " test_result_df.response_correct\n", ") * 1 # Whether responses is actually correct\n", "y_pred = [\n", " (s > best_threshold) * 1 for s in y_scores\n", "] # predicts whether response is correct based on confidence score\n", "print(f\"Ensemble F1-optimal threshold: {best_threshold}\")" ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Ensemble precision: 0.6585365853658537\n", "Ensemble recall: 0.9642857142857143\n", "Ensemble f1-score: 0.782608695652174\n" ] } ], "source": [ "# evaluate precision, recall, and f1-score of semantic entropy predictions of correctness\n", "print(\n", " f\"Ensemble precision: {precision_score(y_true=correct_indicators, y_pred=y_pred)}\"\n", ")\n", "print(f\"Ensemble recall: {recall_score(y_true=correct_indicators, y_pred=y_pred)}\")\n", "print(f\"Ensemble f1-score: {f1_score(y_true=correct_indicators, y_pred=y_pred)}\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 5. Scorer Definitions\n", "\n", "### Black-Box Scorers\n", "Black-Box UQ scorers exploit variation in LLM responses to the same prompt to measure semantic consistency. All scorers have outputs ranging from 0 to 1, with higher values indicating higher confidence. \n", "\n", "For a given prompt $x_i$, these approaches involves generating $m$ responses $\\tilde{\\mathbf{y}}_i = \\{ \\tilde{y}_{i1},...,\\tilde{y}_{im}\\}$, using a non-zero temperature, from the same prompt and comparing these responses to the original response $y_{i}$. We provide detailed descriptions of each below.\n", "\n", "#### Exact Match Rate (`exact_match`)\n", "Exact Match Rate (EMR) computes the proportion of candidate responses that are identical to the original response.\n", "$$ EMR(y_i; \\tilde{\\mathbf{y}}_i) = \\frac{1}{m} \\sum_{j=1}^m \\mathbb{I}(y_i=\\tilde{y}_{ij}). $$\n", "\n", "For more on this scorer, refer to [Cole et al., 2023](https://arxiv.org/abs/2305.14613).\n", "\n", "#### Non-Contradiction Probability (`noncontradiction`)\n", "Non-contradiction probability (NCP) computes the mean non-contradiction probability estimated by a natural language inference (NLI) model. 
"\n",
"#### Non-Contradiction Probability (`noncontradiction`)\n",
"Non-contradiction probability (NCP) computes the mean non-contradiction probability estimated by a natural language inference (NLI) model. This score is formally defined as follows:\n",
"\n",
"\\begin{equation}\n",
"    NCP(y_i; \\tilde{\\mathbf{y}}_i) = \\frac{1}{m} \\sum_{j=1}^m(1 - p_j),\n",
"\\end{equation}\n",
"where\n",
"\n",
"\\begin{equation}\n",
"    p_j = \\frac{\\eta(y_{i}, \\tilde{y}_{ij}) + \\eta(\\tilde{y}_{ij},y_i)}{2}.\n",
"\\end{equation}\n",
"\n",
"Above, $\\eta(\\tilde{y}_{ij}, y_i)$ denotes the contradiction probability estimated by the NLI model for candidate $\\tilde{y}_{ij}$ and response $y_i$. For more on this scorer, refer to [Chen & Mueller, 2023](https://arxiv.org/abs/2308.16175), [Lin et al., 2025](https://arxiv.org/abs/2305.19187), or [Manakul et al., 2023](https://arxiv.org/abs/2303.08896).\n",
"\n",
"#### Normalized Semantic Negentropy (`semantic_negentropy`)\n",
"Normalized Semantic Negentropy (NSN) normalizes the standard computation of discrete semantic entropy so that it increases with higher confidence and has $[0,1]$ support. In contrast to EMR and NCP, semantic entropy does not distinguish between an original response and candidate responses. Instead, this approach computes a single metric value on a list of responses generated from the same prompt. Under this approach, responses are clustered based on mutual entailment, as determined by an NLI model. We consider the discrete version of semantic entropy (SE), which is computed over the final set of clusters as follows:\n",
"\n",
"\\begin{equation}\n",
"    SE(y_i; \\tilde{\\mathbf{y}}_i) = - \\sum_{C \\in \\mathcal{C}} P(C|y_i, \\tilde{\\mathbf{y}}_i)\\log P(C|y_i, \\tilde{\\mathbf{y}}_i),\n",
"\\end{equation}\n",
"where $P(C|y_i, \\tilde{\\mathbf{y}}_i)$ denotes the probability that a randomly selected response $y \\in \\{y_i\\} \\cup \\tilde{\\mathbf{y}}_i$ belongs to cluster $C$, and $\\mathcal{C}$ denotes the full set of clusters of $\\{y_i\\} \\cup \\tilde{\\mathbf{y}}_i$.\n",
"\n",
"To ensure that we have a normalized confidence score with $[0,1]$ support and with higher values corresponding to higher confidence, we implement the following normalization to arrive at *Normalized Semantic Negentropy* (NSN):\n",
"\\begin{equation}\n",
"    NSN(y_i; \\tilde{\\mathbf{y}}_i) = 1 - \\frac{SE(y_i; \\tilde{\\mathbf{y}}_i)}{\\log m},\n",
"\\end{equation}\n",
"where $\\log m$ is included to normalize the support.\n",
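"\n",
"As a sketch of the normalization step, suppose an NLI-based clustering has already assigned each response a cluster label (the `cluster_ids` input below is hypothetical; uqlm performs the clustering internally):\n",
"\n",
"```python\n",
"import math\n",
"from collections import Counter\n",
"\n",
"def normalized_semantic_negentropy(cluster_ids: list[int]) -> float:\n",
"    # One cluster label per response in {y_i} U candidates (here m counts\n",
"    # all responses); assumes at least two responses so log(m) is nonzero\n",
"    m = len(cluster_ids)\n",
"    probs = [count / m for count in Counter(cluster_ids).values()]\n",
"    se = -sum(p * math.log(p) for p in probs)  # discrete semantic entropy\n",
"    return 1 - se / math.log(m)\n",
"\n",
"normalized_semantic_negentropy([0, 0, 0, 1])  # closer to 1 = more consistent\n",
"```\n",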
"\n",
"#### BERTScore (`bert_score`)\n",
"Let a tokenized text sequence be denoted as $\\textbf{t} = \\{t_1,...,t_L\\}$ and the corresponding contextualized word embeddings as $\\textbf{E} = \\{\\textbf{e}_1,...,\\textbf{e}_L\\}$, where $L$ is the number of tokens in the text. The BERTScore precision, recall, and F1-scores between two tokenized texts $\\textbf{t}, \\textbf{t}'$ are respectively defined as follows:\n",
"\n",
"\\begin{equation}\n",
"    BertP(\\textbf{t}, \\textbf{t}') = \\frac{1}{| \\textbf{t}|} \\sum_{t \\in \\textbf{t}} \\max_{t' \\in \\textbf{t}'} \\textbf{e} \\cdot \\textbf{e}'\n",
"\\end{equation}\n",
"\n",
"\\begin{equation}\n",
"    BertR(\\textbf{t}, \\textbf{t}') = \\frac{1}{| \\textbf{t}'|} \\sum_{t' \\in \\textbf{t}'} \\max_{t \\in \\textbf{t}} \\textbf{e} \\cdot \\textbf{e}'\n",
"\\end{equation}\n",
"\n",
"\\begin{equation}\n",
"    BertF(\\textbf{t}, \\textbf{t}') = 2\\frac{BertP(\\textbf{t}, \\textbf{t}') \\, BertR(\\textbf{t}, \\textbf{t}')}{BertP(\\textbf{t}, \\textbf{t}') + BertR(\\textbf{t}, \\textbf{t}')},\n",
"\\end{equation}\n",
"where $\\textbf{e}, \\textbf{e}'$ respectively correspond to tokens $t, t'$. We compute our BERTScore-based confidence scores as follows:\n",
"\\begin{equation}\n",
"    BertConfidence(y_i; \\tilde{\\mathbf{y}}_i) = \\frac{1}{m} \\sum_{j=1}^m BertF(y_i, \\tilde{y}_{ij}),\n",
"\\end{equation}\n",
"i.e. the average BERTScore F1 across pairings of the original response with all candidate responses. For more on BERTScore, refer to [Zhang et al., 2020](https://arxiv.org/abs/1904.09675).\n",
"\n",
"#### BLEURT (`bleurt`)\n",
"In contrast to the aforementioned scorers, BLEURT is specifically pre-trained and fine-tuned to learn human judgments of text similarity. Our BLEURT confidence score is the average BLEURT value across pairings of the original response with all candidate responses:\n",
"\n",
"\\begin{equation}\n",
"    BLEURTConfidence(y_i; \\tilde{\\mathbf{y}}_i) = \\frac{1}{m} \\sum_{j=1}^m BLEURT(y_i, \\tilde{y}_{ij}).\n",
"\\end{equation}\n",
"\n",
"For more on this scorer, refer to [Sellam et al., 2020](https://arxiv.org/abs/2004.04696).\n",
"\n",
"#### Normalized Cosine Similarity (`cosine_sim`)\n",
"This scorer leverages a sentence transformer to map LLM outputs to an embedding space and measures similarity using those sentence embeddings. Let $V: \\mathcal{Y} \\xrightarrow{} \\mathbb{R}^d$ denote the sentence transformer, where $d$ is the dimension of the embedding space. The average cosine similarity across pairings of the original response with all candidate responses is given as follows:\n",
"\n",
"\\begin{equation}\n",
"    CS(y_i; \\tilde{\\mathbf{y}}_i) = \\frac{1}{m} \\sum_{j=1}^m \\frac{\\mathbf{V}(y_i) \\cdot \\mathbf{V}(\\tilde{y}_{ij})}{\\lVert \\mathbf{V}(y_i) \\rVert \\lVert \\mathbf{V}(\\tilde{y}_{ij}) \\rVert}.\n",
"\\end{equation}\n",
"\n",
"To ensure a standardized support of $[0, 1]$, we normalize cosine similarity to obtain confidence scores as follows:\n",
"\n",
"\\begin{equation}\n",
"    NCS(y_i; \\tilde{\\mathbf{y}}_i) = \\frac{CS(y_i; \\tilde{\\mathbf{y}}_i) + 1}{2}.\n",
"\\end{equation}\n",
"\n",
"### White-Box UQ Scorers\n",
"White-box UQ scorers leverage token probabilities of the LLM's generated response to quantify uncertainty. All scorers have outputs ranging from 0 to 1, with higher values indicating higher confidence. We define two white-box UQ scorers below.\n",
"\n",
"#### Length-Normalized Token Probability (`normalized_probability`)\n",
"Let the tokenization of LLM response $y_i$ be denoted as $\\{t_1,...,t_{L_i}\\}$, where $L_i$ denotes the number of tokens in the response. Length-normalized token probability (LNTP) computes a length-normalized analog of the joint token probability:\n",
"\n",
"\\begin{equation}\n",
"    LNTP(y_i) = \\prod_{t \\in y_i} p_t^{\\frac{1}{L_i}},\n",
"\\end{equation}\n",
"where $p_t$ denotes the token probability for token $t$. Note that this score is equivalent to the geometric mean of token probabilities for response $y_i$. For more on this scorer, refer to [Malinin & Gales, 2021](https://arxiv.org/pdf/2002.07650).\n",
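"\n",
"A numerically stable sketch of LNTP, computing the geometric mean in log space (`token_probs` is a hypothetical list of per-token probabilities extracted from the LLM's logprobs):\n",
"\n",
"```python\n",
"import math\n",
"\n",
"def lntp(token_probs: list[float]) -> float:\n",
"    # Geometric mean of token probabilities, computed in log space\n",
"    # to avoid underflow on long responses\n",
"    log_sum = sum(math.log(p) for p in token_probs)\n",
"    return math.exp(log_sum / len(token_probs))\n",
"\n",
"lntp([0.9, 0.8, 0.95])  # ~0.88\n",
"```\n",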
"\n",
"#### Minimum Token Probability (`min_probability`)\n",
"Minimum token probability (MTP) uses the minimum among the token probabilities of a given response as a confidence score:\n",
"\n",
"\\begin{equation}\n",
"    MTP(y_i) = \\min_{t \\in y_i} p_t,\n",
"\\end{equation}\n",
"where $t$ and $p_t$ follow the same definitions as above. For more on this scorer, refer to [Manakul et al., 2023](https://arxiv.org/abs/2303.08896).\n",
"\n",
"### LLM-as-a-Judge Scorers\n",
"Under the LLM-as-a-Judge approach, either the same LLM that was used to generate the original responses or a different LLM is asked to form a judgment about a pre-generated response. Below, we define two LLM-as-a-Judge scorer templates.\n",
"\n",
"#### Categorical Judge Template (`true_false_uncertain`)\n",
"We follow the approach proposed by [Chen & Mueller, 2023](https://arxiv.org/abs/2308.16175) in which an LLM is instructed to score a question-answer concatenation as either *incorrect*, *uncertain*, or *correct* using a carefully constructed prompt. These categories are respectively mapped to numerical scores of 0, 0.5, and 1. We denote the LLM-as-a-Judge scorer as $J: \\mathcal{Y} \\xrightarrow[]{} \\{0, 0.5, 1\\}$. Formally, we can write this scorer function as follows:\n",
"\n",
"\\begin{equation}\n",
"J(y_i) = \\begin{cases}\n",
"    0 & \\text{LLM states response is incorrect} \\\\\n",
"    0.5 & \\text{LLM states that it is uncertain} \\\\\n",
"    1 & \\text{LLM states response is correct}.\n",
"\\end{cases}\n",
"\\end{equation}\n",
"\n",
"#### Continuous Judge Template (`continuous`)\n",
"For the continuous template, the LLM is asked to directly score a question-answer concatenation's correctness on a scale of 0 to 1." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "© 2025 CVS Health and/or one of its affiliates. All rights reserved." ] } ], "metadata": { "environment": { "kernel": "uqlm", "name": "workbench-notebooks.m126", "type": "gcloud", "uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m126" }, "kernelspec": { "display_name": "uqlm", "language": "python", "name": "uqlm" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.21" } }, "nbformat": 4, "nbformat_minor": 4 }