Metric
Metric ¶
Base class for metrics.
Source code in flexeval/core/metric/base.py
evaluate (abstractmethod) ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Evaluate the outputs of LanguageModel against the references.
Parameters:
- lm_outputs (list[str]) – List of model outputs.
- references_list (list[list[str]]) – List of reference outputs.
- task_inputs_list (list[dict[str, str]] | None, default: None) – List of task inputs.
Source code in flexeval/core/metric/base.py
MetricResult (dataclass) ¶
A dataclass representing the result of a metric evaluation.
Source code in flexeval/core/metric/base.py
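Custom metrics subclass Metric and implement evaluate, returning a MetricResult with a corpus-level summary and per-instance details. Below is a minimal sketch, assuming only the interfaces documented above; the import path and the example metric itself are illustrative, not part of the library.

from flexeval.core.metric.base import Metric, MetricResult  # path as documented above


class PrefixMatch(Metric):  # hypothetical example metric
    """Count an output as correct if it starts with any of its references."""

    def evaluate(
        self,
        lm_outputs: list[str],
        references_list: list[list[str]],
        task_inputs_list: list[dict[str, str]] | None = None,
    ) -> MetricResult:
        matches = [
            any(output.startswith(reference) for reference in references)
            for output, references in zip(lm_outputs, references_list)
        ]
        return MetricResult(
            summary={"prefix_match": sum(matches) / len(matches)},
            instance_details=[{"prefix_match": match} for match in matches],
        )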
BLEU ¶
An implementation of BLEU. The calculation is based on the sacrebleu library.
Parameters:
- tokenize_option (str | None, default: None) – Tokenization option for sacrebleu. If None, sacrebleu will use the default tokenization.
- lm_output_processor (StringProcessor | list[StringProcessor] | None, default: None) – StringProcessor or a list of StringProcessor to be applied to the model outputs before comparison.
- reference_processor (StringProcessor | list[StringProcessor] | None, default: None) – StringProcessor or a list of StringProcessor to apply to the references before comparison.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
Examples:
>>> from flexeval import BLEU
>>> bleu = BLEU()
>>> lm_outputs = ["I am a student .", "I am a teacher ."]
>>> references_list = [["I am a student .", "I am a learner ."], ["I am a teacher ."]]
>>> result = bleu.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(
summary={
'bleu_score': 1.0,
'bleu_bp': 1.0,
'bleu_signature': nrefs:1|case:mixed|eff:no|tok:13a|smooth:exp|version:2.4.1},
instance_details=[
{'bleu_score': 1.0, 'bleu_bp': 1.0},
{'bleu_score': 1.0, 'bleu_bp': 1.0}
]
)
Source code in flexeval/core/metric/bleu.py
__init__ ¶
__init__(
tokenize_option: str | None = None,
lm_output_processor: StringProcessor
| list[StringProcessor]
| None = None,
reference_processor: StringProcessor
| list[StringProcessor]
| None = None,
category_key: str | None = None,
) -> None
Source code in flexeval/core/metric/bleu.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/bleu.py
CharF1 ¶
A metric that calculates the character-level F1 score, i.e., how many of the characters in the output string are also included in the expected output. If there are multiple expected outputs, the highest score is adopted.
Parameters:
- lm_output_processor (StringProcessor | list[StringProcessor] | None, default: None) – StringProcessor or a list of StringProcessor to apply to the model outputs before comparison.
- reference_processor (StringProcessor | list[StringProcessor] | None, default: None) – StringProcessor or a list of StringProcessor to apply to the references before comparison.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
Examples:
>>> from flexeval import CharF1
>>> char_f1 = CharF1()
>>> lm_outputs = ["abcd", "efgh"]
>>> references_list = [["abcd", "ABCD"], ["efGH"]]
>>> result = char_f1.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(summary={'char_f1': 0.75}, instance_details=[{'char_f1': 1.0}, {'char_f1': 0.5}])
Source code in flexeval/core/metric/char_f1.py
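For intuition, the scores in the example above are consistent with a character-level similarity such as difflib.SequenceMatcher.ratio(). The following is a rough sketch of that idea, an assumption for illustration rather than the library's exact implementation.

from difflib import SequenceMatcher


def char_f1(output: str, references: list[str]) -> float:
    # With multiple references, the highest score is adopted.
    return max(SequenceMatcher(None, output, reference).ratio() for reference in references)


print(char_f1("abcd", ["abcd", "ABCD"]))  # 1.0
print(char_f1("efgh", ["efGH"]))          # 0.5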
__init__ ¶
__init__(
lm_output_processor: StringProcessor
| list[StringProcessor]
| None = None,
reference_processor: StringProcessor
| list[StringProcessor]
| None = None,
category_key: str | None = None,
) -> None
Source code in flexeval/core/metric/char_f1.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/char_f1.py
CodeEval ¶
A metric that evaluates generated code with test cases.
Parameters:
- code_template (str | None, default: None) – A Jinja2 template string used to construct the generated code. The template can contain variables from the task inputs. If None, the code prompt will be the generated text itself.
- lm_output_processor (StringProcessor | list[StringProcessor] | None, default: None) – String processors applied to model outputs before evaluation.
- evaluate_module (str, default: 'code_eval') – The evaluate module to use.
Examples:
>>> from flexeval import CodeEval
>>> code_eval = CodeEval()
>>> lm_outputs = ["def add(a, b):\n return a + b", "def is_equal(a, b):\n return a = b"]
>>> references_list = [["assert add(1, 2) == 3"], ["assert is_equal(1, 2) == False"]]
>>> result = code_eval.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(
summary={'pass@1': 0.5},
instance_details=[
{'passed': True, 'result': 'passed'},
{'passed': False, 'result': 'failed: invalid syntax (<string>, line 2)'}
]
)
Source code in flexeval/core/metric/code_eval.py
__init__ ¶
__init__(
code_template: str | None = None,
lm_output_processor: StringProcessor
| list[StringProcessor]
| None = None,
evaluate_module: str = "code_eval",
) -> None
Source code in flexeval/core/metric/code_eval.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/code_eval.py
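Note that the default evaluate_module, code_eval from the HuggingFace evaluate library, executes model-generated code and requires an explicit opt-in via the HF_ALLOW_CODE_EVAL environment variable; this is a property of that library rather than of this metric, and it should only be run in a sandboxed environment.

import os

# Opt in to code execution for the HuggingFace `evaluate` code_eval module.
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

from flexeval import CodeEval

code_eval = CodeEval()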
CommonPrefixLength ¶
A metric that calculates the length of the longest common prefix between the model output and the reference.
Examples:
>>> from flexeval import CommonPrefixLength
>>> common_prefix_length = CommonPrefixLength()
>>> lm_outputs = ["ABCDEFG"]
>>> references_list = [["ABCdefg"]]
>>> result = common_prefix_length.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(
summary={"average_common_prefix_length": 3.0, "longest_common_prefix_length": 3},
instance_details=[{"common_prefix_length": 3}],
)
Source code in flexeval/core/metric/common_prefix_length.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/common_prefix_length.py
CommonStringLength ¶
A metric that calculates the length of the longest common substring between the model output and the reference.
Examples:
>>> from flexeval import CommonStringLength
>>> common_string_length = CommonStringLength()
>>> lm_outputs = ["aBCDEFG"]
>>> references_list = [["ABCDefg"]]
>>> result = common_string_length.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(
summary={"average_common_string_length": 3.0, "longest_common_string_length": 3},
instance_details=[{"common_string_length": 3}],
)
Source code in flexeval/core/metric/common_string_length.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/common_string_length.py
Correlation ¶
Correlation metric to compute Pearson, Spearman, or Kendall correlation coefficients. The lm_outputs and references should be numeric values, optionally preprocessed by StringProcessor.
Parameters:
- method (Literal['pearson', 'spearman', 'kendall'], default: 'pearson') – The correlation method to use ('pearson', 'spearman', 'kendall').
- lm_output_processor (StringProcessor | list[StringProcessor] | None, default: None) – StringProcessor or a list of StringProcessor to be applied to the model outputs before computing the correlation. If a list is provided, the processors are applied in order.
- reference_processor (StringProcessor | list[StringProcessor] | None, default: None) – StringProcessor or a list of StringProcessor to be applied to the references before computing the correlation. If a list is provided, the processors are applied in order.
Examples:
>>> from flexeval import Correlation
>>> correlation = Correlation(method='pearson')
>>> lm_outputs = ["1", "2", "3", "4", "5"]
>>> references = [["5"], ["4"], ["3"], ["2"], ["1"]]
>>> result = correlation.evaluate(lm_outputs, references)
>>> print(result)
MetricResult(
summary={"pearson_correlation": -1.0, "pearson_pvalue": 0.0},
instance_details=[],
)
Source code in flexeval/core/metric/correlation.py
__init__ ¶
__init__(
method: Literal[
"pearson", "spearman", "kendall"
] = "pearson",
lm_output_processor: StringProcessor
| list[StringProcessor]
| None = None,
reference_processor: StringProcessor
| list[StringProcessor]
| None = None,
) -> None
Source code in flexeval/core/metric/correlation.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/correlation.py
ExactMatch ¶
Exact match metric. If there are multiple references, the output is considered correct if it matches any of the references.
Parameters:
- lm_output_processor (StringProcessor | list[StringProcessor] | None, default: None) – StringProcessor or a list of StringProcessor to be applied to the model outputs before comparison.
- reference_processor (StringProcessor | list[StringProcessor] | None, default: None) – StringProcessor or a list of StringProcessor to apply to the references before comparison.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
Examples:
>>> from flexeval import ExactMatch
>>> exact_match = ExactMatch()
>>> lm_outputs = ["ABC", "DEF"]
>>> references_list = [["ABC"], ["DEFG"]]
>>> result = exact_match.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(
summary={"exact_match": 0.5},
instance_details=[{"exact_match": True}, {"exact_match": False}],
)
Source code in flexeval/core/metric/exact_match.py
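When category_key is set, pass task_inputs_list to evaluate so that category-wise mean scores can be added to the summary. A usage sketch follows; the "domain" key and the exact names of the per-category summary keys are illustrative assumptions.

from flexeval import ExactMatch

exact_match = ExactMatch(category_key="domain")
lm_outputs = ["ABC", "DEF"]
references_list = [["ABC"], ["DEFG"]]
task_inputs_list = [{"domain": "news"}, {"domain": "chat"}]
result = exact_match.evaluate(lm_outputs, references_list, task_inputs_list)
# result.summary contains the overall exact_match plus a mean score per "domain" value.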
__init__ ¶
__init__(
lm_output_processor: StringProcessor
| list[StringProcessor]
| None = None,
reference_processor: StringProcessor
| list[StringProcessor]
| None = None,
category_key: str | None = None,
) -> None
Source code in flexeval/core/metric/exact_match.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/exact_match.py
ChatLLMGEvalScore ¶
A metric that evaluates the output of LanguageModel.batch_generate_chat_response.
Unlike ChatLLMScore, this metric lets the model output logprobs for all valid scores and calculates a weighted score over them.
Note that, due to a constraint of OpenAI models, the number of valid scores must not exceed 20.
Parameters:
- language_model (required) – An instance of LanguageModel to evaluate the output of the model.
- prompt_template (required) – An instance of PromptTemplate to embed the input for the evaluator.
- valid_score_range (required) – A tuple of two integers representing the valid score range. If the parsed score is out of the range, it will be ignored.
- batch_size (int, default: 4) – The batch size for the evaluator.
- system_message (str | PromptTemplate | None, default: None) – A system message to be prepended to the input for the evaluator.
- disable_tqdm (bool, default: False) – Whether to disable the progress bar.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
- prob_threshold (float, default: 0) – Guards against low-probability predictions: None (invalid) is returned if the total probability over all valid scores is less than this value.
Examples:
>>> from flexeval import ChatLLMGEvalScore, HuggingFaceLM, Jinja2PromptTemplate
>>> language_model = HuggingFaceLM("Qwen/Qwen2.5-0.5B-Instruct")
>>> template = "Evaluate the quality of this text.\n`{{ lm_output }}`\nOutput only a number from 1 to 5."
>>> prompt_template = Jinja2PromptTemplate(template)
>>> system_message = "This is the system message."
>>> llm_score = ChatLLMGEvalScore(language_model, prompt_template, [1, 5], system_message=system_message)
>>> lm_outputs = ["Hello, world!", "Good morning!"]
>>> llm_score.evaluate(lm_outputs)
MetricResult(
summary={'llm_geval_score': 1.179980414173022, 'num_failed_score_parses': 0},
instance_details=[
{
'llm_geval_score': 1.1509989197179789,
'llm_geval_score_input': [
{'role': 'system', 'content': 'This is the system message.'},
{'role': 'user', 'content': 'Evaluate the quality of this text...'}
],
'llm_geval_score_logprobs': {
'1': -0.06977498531341553,
'2': -3.687819004058838,
'3': -3.937819480895996,
'4': -5.812800884246826,
'5': -3.937807083129883
},
'llm_geval_score_generation_probs': {
1: 0.932603645815178,
2: 0.02502652531327666,
3: 0.01949066821765914,
4: 0.002989046364034347,
5: 0.019490909859903
}
},
{
'llm_geval_score': 1.208961908628065,
'llm_geval_score_input': [
{'role': 'system', 'content': 'This is the system message.'},
{'role': 'user', 'content': 'Evaluate the quality of this text...'}
],
'llm_geval_score_logprobs': {
'1': -0.13043057918548584,
'2': -2.8754935264587402,
'3': -3.000467538833618,
'4': -4.750283241271973,
'5': -5.000345706939697
},
'llm_geval_score_generation_probs': {
1: 0.8777174226922144,
2: 0.05638830351569556,
3: 0.04976379642068341,
4: 0.008649245032977617,
5: 0.006735618046639277
}
}
])
Source code in flexeval/core/metric/llm_geval_score.py
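The weighted score is obtained by converting the logprobs of the valid score labels into probabilities and taking their probability-weighted mean. A worked illustration using the logprobs of the first instance above (plain arithmetic, not a call into the library):

import math

logprobs = {1: -0.0698, 2: -3.6878, 3: -3.9378, 4: -5.8128, 5: -3.9378}
probs = {score: math.exp(logprob) for score, logprob in logprobs.items()}
weighted = sum(score * p for score, p in probs.items()) / sum(probs.values())
print(round(weighted, 3))  # ~1.151, matching llm_geval_score of the first instance
# If sum(probs.values()) were below prob_threshold, the score would be treated as invalid (None).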
valid_labels (instance-attribute) ¶
valid_labels = [
str(score)
for score in range(
valid_score_range[0], valid_score_range[1] + 1
)
]
__init__ ¶
__init__(
language_model: LanguageModel,
prompt_template: PromptTemplate,
valid_score_range: tuple[int, int],
batch_size: int = 4,
system_message: str | PromptTemplate | None = None,
disable_tqdm: bool = False,
category_key: str | None = None,
prob_threshold: float = 0,
) -> None
Source code in flexeval/core/metric/llm_geval_score.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]] | None = None,
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/llm_geval_score.py
__repr__ ¶
__repr__() -> str
Source code in flexeval/core/metric/llm_geval_score.py
LLMGEvalScore ¶
Lets a LanguageModel evaluate the output of another LanguageModel. Unlike LLMScore, this metric lets the model output logprobs for all valid scores and calculates a weighted score over them. Note that, due to a constraint of OpenAI models, the number of valid scores must not exceed 20. For details, see https://aclanthology.org/2023.emnlp-main.153/
You can specify the evaluation criteria in PromptTemplate.
Parameters:
- language_model (required) – An instance of LanguageModel to evaluate the output of the model.
- prompt_template (required) – An instance of PromptTemplate to embed the input for the evaluator.
- valid_score_range (required) – A tuple of two integers representing the valid score range. If the parsed score is out of the range, it will be ignored.
- batch_size (int, default: 4) – The batch size for the evaluator.
- disable_tqdm (bool, default: False) – Whether to disable the progress bar.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
- prob_threshold (float, default: 0) – Guards against low-probability predictions: None (invalid) is returned if the total probability over all valid scores is less than this value.
Examples:
>>> from flexeval import LLMGEvalScore, HuggingFaceLM, Jinja2PromptTemplate
>>> language_model = HuggingFaceLM("Qwen/Qwen2.5-0.5B-Instruct")
>>> template = "Evaluate the quality of this text.\n`{{ lm_output }}`\nOutput only a number from 1 to 5."
>>> prompt_template = Jinja2PromptTemplate(template)
>>> llm_score = LLMGEvalScore(language_model, prompt_template, [1, 5])
>>> lm_outputs = ["Hello, world!", "Good morning!"]
>>> llm_score.evaluate(lm_outputs)
MetricResult(
summary={'llm_geval_score': 1.4399980931290486, 'num_failed_score_parses': 0},
instance_details=[
{
'llm_geval_score': 1.418920817254956,
'llm_geval_score_input': 'Evaluate the quality of this text...',
'llm_geval_score_logprobs': {
'1': -4.0625,
'2': -7.75,
'3': -8.25,
'4': -8.0625,
'5': -6.4375
},
'llm_geval_score_generation_probs': {
1: 0.017205950425851383,
2: 0.00043074254057568753,
3: 0.00026125855730166754,
4: 0.000315137974737356,
5: 0.0016004026902445643
}
},
{
'llm_geval_score': 1.461075369003141,
'llm_geval_score_input': 'Evaluate the quality of this text...',
'llm_geval_score_logprobs': {
'1': -4.25,
'2': -8.1875,
'3': -8.375,
'4': -8.125,
'5': -6.5
},
'llm_geval_score_generation_probs': {
1: 0.014264233908999256,
2: 0.00027810828659249914,
3: 0.00023055986759244163,
4: 0.0002960447300568554,
5: 0.0015034391929775724
}
}
]
)
Source code in flexeval/core/metric/llm_geval_score.py
valid_labels (instance-attribute) ¶
valid_labels = [
str(score)
for score in range(
valid_score_range[0], valid_score_range[1] + 1
)
]
__init__ ¶
__init__(
language_model: LanguageModel,
prompt_template: PromptTemplate,
valid_score_range: tuple[int, int],
batch_size: int = 4,
disable_tqdm: bool = False,
category_key: str | None = None,
prob_threshold: float = 0,
) -> None
Source code in flexeval/core/metric/llm_geval_score.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]] | None = None,
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/llm_geval_score.py
__repr__ ¶
__repr__() -> str
Source code in flexeval/core/metric/llm_geval_score.py
ChatLLMLabel ¶
A metric that evaluates the output of LanguageModel.batch_generate_chat_response.
Parameters:
- language_model (LanguageModel) – An instance of LanguageModel to evaluate the output of the model.
- prompt_template (PromptTemplate) – An instance of PromptTemplate to embed the input for the evaluator.
- label_names (list[str]) – A list of valid label names.
- label_points (list[float | int] | None, default: None) – A list of points for each label specified in label_names.
- system_message (str | PromptTemplate | None, default: None) – A system message to be prepended to the input for the evaluator.
- batch_size (int, default: 4) – The batch size for the evaluator.
- disable_tqdm (bool, default: False) – Whether to disable the progress bar.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
Examples:
>>> from flexeval import ChatLLMLabel, OpenAIChatAPI, Jinja2PromptTemplate
>>> language_model = OpenAIChatAPI(model_name="gpt-3.5-turbo")
>>> template = "Evaluate the quality of this text on a scale of Good/Bad.\n`{{ lm_output }}`\nPut the label at the end like [[Good]]."
>>> prompt_template = Jinja2PromptTemplate(template)
>>> system_message = "This is the system message."
>>> label_names = ["Good", "Bad"]
>>> label_points = [1.0, 0.0]
>>> llm_label = ChatLLMLabel(language_model, prompt_template, label_names, label_points)
>>> lm_outputs = ["Hello, world!", "Good morning!"]
>>> result = llm_label.evaluate(lm_outputs)
>>> print(result)
MetricResult(
summary={'llm_score': 0.5, 'llm_label_distribution': {'Good': 0.5, 'Bad': 0.5}, 'num_failed_score_parses': 0},
instance_details=[
{
'llm_label': 'Good',
'llm_score': 1.0,
'llm_label_input': 'Evaluate the quality of this text...',
'llm_label_output': 'This text is natural, ... [[Good]]'
},
{
'llm_label': 'Bad',
'llm_score': 0.0,
'llm_label_input': 'Evaluate the quality of this text on a scale of Good/Bad.\n`Good mrrrning!`\nPut the label at the end like [[Good]].',
'llm_label_output': 'This text contains a spelling error, ... [[Bad]]'
}
]
)
Source code in flexeval/core/metric/llm_label.py
__init__ ¶
__init__(
language_model: LanguageModel,
prompt_template: PromptTemplate,
label_names: list[str],
label_points: list[float | int] | None = None,
system_message: str | PromptTemplate | None = None,
batch_size: int = 4,
disable_tqdm: bool = False,
category_key: str | None = None,
) -> None
Source code in flexeval/core/metric/llm_label.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]] | None = None,
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/llm_label.py
__repr__ ¶
__repr__() -> str
Source code in flexeval/core/metric/llm_label.py
LLMLabel ¶
Lets a LanguageModel evaluate the output of another LanguageModel.
You can specify the evaluation criteria in PromptTemplate.
The last label value found in the output of the evaluator is used to compute the evaluation score. You can assign a score to each label. The final output is the average score and the distribution of the labels.
Parameters:
- language_model (LanguageModel) – An instance of LanguageModel to evaluate the output of the model.
- prompt_template (PromptTemplate) – An instance of PromptTemplate to embed the input for the evaluator.
- label_names (list[str]) – A list of valid label names.
- label_points (list[float | int] | None, default: None) – A list of points for each label specified in label_names.
- batch_size (int, default: 4) – The batch size for the evaluator.
- disable_tqdm (bool, default: False) – Whether to disable the progress bar.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
Examples:
>>> from flexeval import OpenAIChatAPI, Jinja2PromptTemplate, LLMLabel
>>> language_model = OpenAIChatAPI(model="gpt-3.5-turbo")
>>> template = "Evaluate the quality of this text on a scale of Good/Bad.\n`{{ lm_output }}`\nPut the label at the end like [[Good]]."
>>> prompt_template = Jinja2PromptTemplate(template)
>>> label_names = ["Good", "Bad"]
>>> label_points = [1.0, 0.0]
>>> llm_label = LLMLabel(language_model, prompt_template, label_names, label_points)
>>> lm_outputs = ["Hello, world!", "Good mrrrning!"]
>>> result = llm_label.evaluate(lm_outputs)
>>> print(result)
MetricResult(
summary={'llm_score': 0.5, 'llm_label_distribution': {'Good': 0.5, 'Bad': 0.5}, 'num_failed_score_parses': 0},
instance_details=[
{
'llm_label': 'Good',
'llm_score': 1.0,
'llm_label_input': 'Evaluate the quality of this text...',
'llm_label_output': 'This text is natural, ... [[Good]]'
},
{
'llm_label': 'Bad',
'llm_score': 0.0,
'llm_label_input': 'Evaluate the quality of this text on a scale of Good/Bad.\n`Good mrrrning!`\nPut the label at the end like [[Good]].',
'llm_label_output': 'This text contains a spelling error, ... [[Bad]]'
}
]
)
Source code in flexeval/core/metric/llm_label.py
__init__ ¶
__init__(
language_model: LanguageModel,
prompt_template: PromptTemplate,
label_names: list[str],
label_points: list[float | int] | None = None,
batch_size: int = 4,
disable_tqdm: bool = False,
valid_score_range: tuple[int, int] | None = None,
category_key: str | None = None,
) -> None
Source code in flexeval/core/metric/llm_label.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]] | None = None,
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/llm_label.py
__repr__ ¶
__repr__() -> str
Source code in flexeval/core/metric/llm_label.py
ChatLLMScore ¶
A metric that evaluates the output of LanguageModel.batch_generate_chat_response.
Parameters:
- language_model (LanguageModel) – An instance of LanguageModel to evaluate the output of the model.
- prompt_template (PromptTemplate) – An instance of PromptTemplate to embed the input for the evaluator.
- system_message (str | PromptTemplate | None, default: None) – A system message to be prepended to the input for the evaluator.
- batch_size (int, default: 4) – The batch size for the evaluator.
- disable_tqdm (bool, default: False) – Whether to disable the progress bar.
- valid_score_range (tuple[int, int] | None, default: None) – A tuple of two integers representing the valid score range. If the parsed score is out of the range, it will be ignored.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
Examples:
>>> from flexeval import ChatLLMScore, OpenAIChatAPI, Jinja2PromptTemplate
>>> language_model = OpenAIChatAPI(model_name="gpt-3.5-turbo")
>>> template = "Evaluate the quality of this text.\n`{{ lm_output }}`\nPut the score at the end like [[5]]."
>>> prompt_template = Jinja2PromptTemplate(template)
>>> system_message = "This is the system message."
>>> llm_score = ChatLLMScore(language_model, prompt_template, system_message)
>>> lm_outputs = ["Hello, world!", "Good morning!"]
>>> result = llm_score.evaluate(lm_outputs)
>>> print(result)
MetricResult(
summary={'llm_score': 3.0, 'num_failed_score_parses': 0},
instance_details=[
{
'llm_score': 2,
'llm_score_input': [{'role': 'user', 'content': 'Evaluate the quality of this text...'}],
'llm_score_output': 'This text is very simple,... Therefore, its quality is average. [[2]]'},
{
'llm_score': 4,
'llm_score_input': [{'role': 'user', 'content': 'Evaluate the quality of this text...'}],
'llm_score_output': '... Overall, the quality of the text is good but basic. [[4]]'}
]
)
Source code in flexeval/core/metric/llm_score.py
__init__ ¶
__init__(
language_model: LanguageModel,
prompt_template: PromptTemplate,
system_message: str | PromptTemplate | None = None,
batch_size: int = 4,
disable_tqdm: bool = False,
valid_score_range: tuple[int, int] | None = None,
category_key: str | None = None,
) -> None
Source code in flexeval/core/metric/llm_score.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]] | None = None,
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/llm_score.py
__repr__ ¶
__repr__() -> str
Source code in flexeval/core/metric/llm_score.py
LLMScore ¶
Lets a LanguageModel evaluate the output of another LanguageModel.
You can specify the evaluation criteria in PromptTemplate.
The last integer value in the output of the evaluator is used as the evaluation score.
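A minimal sketch of how the last integer in an evaluator output such as "... [[4]]" can be extracted; this illustrates the described behavior and is not necessarily the library's exact parsing code.

import re


def parse_last_int(evaluator_output: str) -> int | None:
    # Take the last integer found anywhere in the evaluator's output.
    matches = re.findall(r"-?\d+", evaluator_output)
    return int(matches[-1]) if matches else None


print(parse_last_int("Overall, the quality of the text is good but basic. [[4]]"))  # 4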
Parameters:
- language_model (LanguageModel) – An instance of LanguageModel to evaluate the output of the model.
- prompt_template (PromptTemplate) – An instance of PromptTemplate to embed the input for the evaluator.
- batch_size (int, default: 4) – The batch size for the evaluator.
- disable_tqdm (bool, default: False) – Whether to disable the progress bar.
- valid_score_range (tuple[int, int] | None, default: None) – A tuple of two integers representing the valid score range. If the parsed score is out of the range, it will be ignored.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
Examples:
>>> from flexeval import LLMScore, OpenAIChatAPI, Jinja2PromptTemplate
>>> language_model = OpenAIChatAPI(model_name="gpt-3.5-turbo")
>>> template = "Evaluate the quality of this text.\n`{{ lm_output }}`\nPut the score at the end like [[5]]."
>>> prompt_template = Jinja2PromptTemplate(template)
>>> llm_score = LLMScore(language_model, prompt_template)
>>> lm_outputs = ["Hello, world!", "Good morning!"]
>>> result = llm_score.evaluate(lm_outputs)
>>> print(result)
MetricResult(
summary={'llm_score': 3.0, 'num_failed_score_parses': 0},
instance_details=[
{
'llm_score': 2,
'llm_score_input': 'Evaluate the quality of this text...',
'llm_score_output': 'This text is very simple,... Therefore, its quality is average. [[2]]'},
{
'llm_score': 4,
'llm_score_input': 'Evaluate the quality of this text...',
'llm_score_output': '... Overall, the quality of the text is good but basic. [[4]]'}
]
)
Source code in flexeval/core/metric/llm_score.py
__init__ ¶
__init__(
language_model: LanguageModel,
prompt_template: PromptTemplate,
batch_size: int = 4,
disable_tqdm: bool = False,
valid_score_range: tuple[int, int] | None = None,
category_key: str | None = None,
) -> None
Source code in flexeval/core/metric/llm_score.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]] | None = None,
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/llm_score.py
__repr__ ¶
__repr__() -> str
Source code in flexeval/core/metric/llm_score.py
OutputLengthStats ¶
Compute statistics on the length of the outputs.
Examples:
>>> from flexeval import OutputLengthStats
>>> output_length_stats = OutputLengthStats()
>>> lm_outputs = ["123456", "123456789"]
>>> result = output_length_stats.evaluate(lm_outputs)
>>> print(result)
MetricResult(
summary={'avg_output_length': 7.5, 'max_output_length': 9, 'min_output_length': 6},
instance_details=[{'output_length': 6}, {'output_length': 9}]
)
Source code in flexeval/core/metric/output_length_stats.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]] | None = None,
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/output_length_stats.py
PerspectiveAPI ¶
A metric that evaluates text outputs using the Perspective API.
Please set the PERSPECTIVE_API_KEY environment variable.
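For example (the key value is a placeholder):

import os

os.environ["PERSPECTIVE_API_KEY"] = "<your-api-key>"  # placeholder; use your own key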
Parameters:
- languages (list[str]) – A list of languages to analyze.
Examples:
>>> from flexeval import PerspectiveAPI
>>> perspective_api = PerspectiveAPI(languages=["en"])
>>> lm_outputs = ["I love you", "I hate you"]
>>> result = perspective_api.evaluate(lm_outputs)
>>> print(result)
MetricResult(
summary={'TOXICITY': 0.35407552, ..., 'THREAT': 0.0265799825},
instance_details=[
{'TOXICITY': 0.02543884, ..., 'THREAT': 0.009204263},
{'TOXICITY': 0.6827122, ..., 'THREAT': 0.043955702}
]
)
Source code in flexeval/core/metric/perspective_api.py
client (instance-attribute) ¶
client = build(
"commentanalyzer",
"v1alpha1",
developerKey=PERSPECTIVE_API_KEY,
discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
static_discovery=False,
)
attributes (instance-attribute) ¶
attributes = [
"TOXICITY",
"SEVERE_TOXICITY",
"IDENTITY_ATTACK",
"INSULT",
"PROFANITY",
"THREAT",
]
__init__ ¶
__init__(languages: list[str]) -> None
Source code in flexeval/core/metric/perspective_api.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]] | None = None,
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/perspective_api.py
RepetitionCount ¶
A metric that counts the number of repetitions of the most repeated pattern in the model's output.
Parameters:
- lm_output_processor (StringProcessor | list[StringProcessor] | None, default: None) – StringProcessor or a list of StringProcessor to apply to the model outputs before analysis.
Examples:
>>> from flexeval import RepetitionCount
>>> repetition_count = RepetitionCount()
>>> lm_outputs = ["hello hello hello hello hello hello hello hello hello hello"]
>>> references_list = [[]] # Not used for this metric
>>> result = repetition_count.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(
summary={'repetition_ratio': 1.0},
instance_details=[{'most_repeated_pattern': 'hello hell', 'repetition_count': 9, 'is_repetition': True}]
)
Source code in flexeval/core/metric/repetition_count.py
__init__ ¶
__init__(
count_threshold: int = 30,
threshold_length: int = 10,
lm_output_processor: StringProcessor
| list[StringProcessor]
| None = None,
) -> None
Source code in flexeval/core/metric/repetition_count.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/repetition_count.py
ROUGE ¶
An implementation of ROUGE.
The calculation is based on the rouge library.
Parameters:
- tokenizer (Tokenizer) – An instance of Tokenizer to tokenize the input and output strings.
Examples:
>>> from flexeval import ROUGE
>>> from flexeval import WhitespaceTokenizer
>>> tokenizer = WhitespaceTokenizer()
>>> rouge = ROUGE(tokenizer)
>>> lm_outputs = ["I am a student .", "I am a teacher ."]
>>> references_list = [["I am a student .", "I am a learner ."], ["I am a teacher ."]]
>>> result = rouge.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(
summary={'rouge1': 0.999999995, 'rouge2': 0.999999995, 'rougeL': 0.999999995},
instance_details=[
{'rouge1': 0.999999995, 'rouge2': 0.999999995, 'rougeL': 0.999999995},
{'rouge1': 0.999999995, 'rouge2': 0.999999995, 'rougeL': 0.999999995}
]
)
Source code in flexeval/core/metric/rouge.py
__init__ ¶
__init__(tokenizer: Tokenizer) -> None
Source code in flexeval/core/metric/rouge.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/rouge.py
SARI ¶
An implementation of SARI, a metric for evaluating text simplification.
Based on the original implementation [1], modified to allow configurable settings for the maximum n-gram size and tokenizer. Additionally, it fixes a bug present in the original implementation [2]. When used with the default parameters, it produces scores that are consistent with the HuggingFace/evaluate implementation [3].
[1] https://github.com/cocoxu/simplification/blob/master/SARI.py
[2] https://github.com/cocoxu/simplification/issues/6
[3] https://huggingface.co/spaces/evaluate-metric/sari/blob/main/sari.py
Parameters:
- tokenizer (Tokenizer | Literal['default'], default: 'default') – An instance of Tokenizer to tokenize the input and output strings.
- max_ngrams (int, default: 4) – The maximum n-gram order to consider. Defaults to 4.
- category_key (str | None, default: None) – A key to create category-wise mean scores. The category key is expected to be in the task inputs.
- lm_output_processor (StringProcessor | list[StringProcessor] | None | Literal['default'], default: 'default') – StringProcessor or a list of StringProcessor to be applied to the model outputs before comparison.
- reference_processor (StringProcessor | list[StringProcessor] | None | Literal['default'], default: 'default') – StringProcessor or a list of StringProcessor to apply to the references before comparison.
- source_processor (StringProcessor | list[StringProcessor] | None | Literal['default'], default: 'default') – StringProcessor or a list of StringProcessor to apply to the source sentences before comparison.
Examples:
>>> from flexeval import SARI
>>> sari_scorer = SARI(source_key="source")
>>> lm_outputs = ["About 95 you now get in."]
>>> references_list = [["About 95 species are currently known.", "About 95 species are now accepted.", "95 species are now accepted."]]
>>> task_inputs_list = [{"source": "About 95 species are currently accepted."}]
>>> result = sari_scorer.evaluate(lm_outputs, references_list, task_inputs_list)
>>> print(result)
MetricResult(
summary={
'sari_score': 0.2695360195360195,
'sari_add': 0.08333333333333333,
'sari_keep': 0.22527472527472525,
'sari_del': 0.5
},
instance_details=[{'sari_score': 0.2695360195360195, 'sari_add': 0.08333333333333333, 'sari_keep': 0.22527472527472525, 'sari_del': 0.5}]
)
Source code in flexeval/core/metric/sari.py
__init__ ¶
__init__(
source_key: str,
tokenizer: Tokenizer | Literal["default"] = "default",
max_ngrams: int = 4,
category_key: str | None = None,
source_processor: StringProcessor
| list[StringProcessor]
| None
| Literal["default"] = "default",
lm_output_processor: StringProcessor
| list[StringProcessor]
| None
| Literal["default"] = "default",
reference_processor: StringProcessor
| list[StringProcessor]
| None
| Literal["default"] = "default",
) -> None
Source code in flexeval/core/metric/sari.py
evaluate ¶
evaluate(
lm_outputs, references_list, task_inputs_list=None
) -> MetricResult
Source code in flexeval/core/metric/sari.py
SubstringMatch ¶
A metric that calculates how many outputs contain any of the expected substrings.
Parameters:
- mode (Literal['any', 'all'], default: 'any') – The mode used to calculate the substring match. "any": if any of the expected substrings is in the output, it is a match. "all": if all of the expected substrings are in the output, it is a match.
- category_key (str | None, default: None) – Optional key to group scores by category from task_inputs_list.
Examples:
>>> from flexeval import SubstringMatch
>>> substring_match = SubstringMatch()
>>> lm_outputs = ["This is a cat .", "This is a dog ."]
>>> references_list = [["cat", "dog"], ["mouse"]]
>>> result = substring_match.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(
summary={'substring_match': 0.5},
instance_details=[{'substring_match': True}, {'substring_match': False}]
)
Source code in flexeval/core/metric/substring_match.py
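With mode="all", every expected substring must appear in the output for an instance to count as a match. A usage sketch based on the documented semantics (the inputs are illustrative):

from flexeval import SubstringMatch

substring_match_all = SubstringMatch(mode="all")
lm_outputs = ["This is a cat and a dog .", "This is a cat ."]
references_list = [["cat", "dog"], ["cat", "dog"]]
result = substring_match_all.evaluate(lm_outputs, references_list)
# The first output contains both "cat" and "dog" (match); the second lacks "dog" (no match).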
__init__ ¶
__init__(
mode: Literal["any", "all"] = "any",
category_key: str | None = None,
) -> None
Source code in flexeval/core/metric/substring_match.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/substring_match.py
XER ¶
Calculate the Character Error Rate (CER) and Word Error Rate (WER) between the model outputs and the references. The calculation is based on the jiwer library.
Parameters:
- tokenizer (Tokenizer | None, default: None) – An instance of Tokenizer to tokenize the input and output strings.
Examples:
>>> from flexeval import XER
>>> xer = XER()
>>> lm_outputs = ["I am a student .", "I am a teacher ."]
>>> references_list = [["I am a student .", "I am a learner ."], ["Are you the student ?"]]
>>> result = xer.evaluate(lm_outputs, references_list)
>>> print(result)
MetricResult(
summary={'cer_score': 0.43243243243243246, 'wer_score': 0.5},
instance_details=[{'cer_score': 0.0, 'wer_score': 0.0}, {'cer_score': 0.7619047619047619, 'wer_score': 1.0}
]
)
Source code in flexeval/core/metric/xer.py
__init__ ¶
__init__(tokenizer: Tokenizer | None = None) -> None
Source code in flexeval/core/metric/xer.py
evaluate ¶
evaluate(
lm_outputs: list[str],
references_list: list[list[str]],
task_inputs_list: list[dict[str, str]] | None = None,
) -> MetricResult
Source code in flexeval/core/metric/xer.py