mirror of https://github.com/open-compass/opencompass.git
synced 2025-05-30 16:03:24 +08:00
Update CascadeEvaluator
This commit is contained in:
parent 3d1760aba2
commit 16e9884c2f
@@ -60,7 +60,7 @@ Just like a compass guides us on our journey, OpenCompass will guide you through
- **\[2025.04.01\]** OpenCompass now supports `CascadeEvaluator`, a flexible evaluation mechanism that allows multiple evaluators to work in sequence. This enables creating customized evaluation pipelines for complex assessment scenarios. Check out the [documentation](docs/en/advanced_guides/llm_judge.md) for more details! 🔥🔥🔥
- **\[2025.03.11\]** We have added support for evaluating `SuperGPQA`, a great benchmark for measuring LLM knowledge ability 🔥🔥🔥
- **\[2025.02.28\]** We have added a tutorial for the `DeepSeek-R1` series models; please check [Evaluating Reasoning Models](docs/en/user_guides/deepseek_r1.md) for more details! 🔥🔥🔥
- **\[2025.02.15\]** We have added two powerful evaluation tools: `GenericLLMEvaluator` for LLM-as-judge evaluations and `MATHEvaluator` for mathematical reasoning assessments. Check out the documentation for [LLM Judge](docs/en/advanced_guides/llm_judge.md) and [Math Evaluation](docs/en/advanced_guides/general_math.md) for more details! 🔥🔥🔥
- **\[2025.02.15\]** We have added two powerful evaluation tools: `GenericLLMEvaluator` for LLM-as-judge evaluations and `MATHVerifyEvaluator` for mathematical reasoning assessments. Check out the documentation for [LLM Judge](docs/en/advanced_guides/llm_judge.md) and [Math Evaluation](docs/en/advanced_guides/general_math.md) for more details! 🔥🔥🔥
- **\[2025.01.16\]** We now support the [InternLM3-8B-Instruct](https://huggingface.co/internlm/internlm3-8b-instruct) model, which has enhanced performance on reasoning and knowledge-intensive tasks.
- **\[2024.12.17\]** We have provided the evaluation script for the December [CompassAcademic](examples/eval_academic_leaderboard_202412.py) leaderboard, which allows users to easily reproduce the official evaluation results by configuring it.
- **\[2024.11.14\]** OpenCompass now offers support for a sophisticated benchmark designed to evaluate complex reasoning skills — [MuSR](https://arxiv.org/pdf/2310.16049). Check out the [demo](examples/eval_musr.py) and give it a spin! 🔥🔥🔥
@@ -246,7 +246,7 @@ Currently, OpenCompass has provided standard recommended configurations for dat
opencompass --datasets aime2024_gen --models hf_internlm2_5_1_8b_chat

# Recommended Evaluation Config based on LLM Judge
opencompass --datasets aime2024_llm_judge_gen --models hf_internlm2_5_1_8b_chat
opencompass --datasets aime2024_llmjudge_gen --models hf_internlm2_5_1_8b_chat
```

If you want to use multiple GPUs to evaluate the model in data-parallel mode, you can use `--max-num-worker`.
@@ -60,7 +60,7 @@
- **\[2025.04.01\]** OpenCompass now supports `CascadeEvaluator`, which lets multiple evaluators run in sequence so that custom evaluation pipelines can be built for more complex scenarios. See the [documentation](docs/zh_cn/advanced_guides/llm_judge.md) for usage details! 🔥🔥🔥
- **\[2025.03.11\]** `SuperGPQA` is now supported, a knowledge benchmark covering 285 graduate-level disciplines. Give it a try! 🔥🔥🔥
- **\[2025.02.28\]** We have added a tutorial for the `DeepSeek-R1` series models; see [Evaluating Reasoning Models](docs/zh_cn/user_guides/deepseek_r1.md) for more details! 🔥🔥🔥
- **\[2025.02.15\]** We have added two practical evaluation tools: `GenericLLMEvaluator` for LLM-as-judge evaluation and `MATHEvaluator` for mathematical reasoning assessment. See the [LLM Judge](docs/zh_cn/advanced_guides/llm_judge.md) and [Math Evaluation](docs/zh_cn/advanced_guides/general_math.md) documentation for more details! 🔥🔥🔥
- **\[2025.02.15\]** We have added two practical evaluation tools: `GenericLLMEvaluator` for LLM-as-judge evaluation and `MATHVerifyEvaluator` for mathematical reasoning assessment. See the [LLM Judge](docs/zh_cn/advanced_guides/llm_judge.md) and [Math Evaluation](docs/zh_cn/advanced_guides/general_math.md) documentation for more details! 🔥🔥🔥
- **\[2025.01.16\]** We now support the [InternLM3-8B-Instruct](https://huggingface.co/internlm/internlm3-8b-instruct) model, which achieves the best performance at its scale on reasoning and knowledge-intensive tasks. Give it a try!
- **\[2024.12.17\]** We have provided the evaluation script for the December [CompassAcademic](configs/eval_academic_leaderboard_202412.py) leaderboard, so the official results can be reproduced with a simple configuration.
- **\[2024.10.14\]** OpenAI's multilingual question-answering dataset [MMMLU](https://huggingface.co/datasets/openai/MMMLU) is now supported. Give it a try! 🔥🔥🔥
@@ -237,7 +237,7 @@ humaneval, triviaqa, commonsenseqa, tydiqa, strategyqa, cmmlu, lambada, piqa, ce
opencompass --datasets aime2024_gen --models hf_internlm2_5_1_8b_chat

# Recommended evaluation config based on LLM Judge
opencompass --datasets aime2024_llm_judge_gen --models hf_internlm2_5_1_8b_chat
opencompass --datasets aime2024_llmjudge_gen --models hf_internlm2_5_1_8b_chat
```

In addition, if you want to run model inference on multiple GPUs, you can use the `--max-num-worker` argument.
@@ -303,7 +303,7 @@
    category: Examination
    paper: https://huggingface.co/datasets/Maxwell-Jia/AIME_2024
    configpath: opencompass/configs/datasets/aime2024/aime2024_gen.py
    configpath_llmjudge: opencompass/configs/datasets/aime2024/aime2024_llm_judge_gen.py
    configpath_llmjudge: opencompass/configs/datasets/aime2024/aime2024_llmjudge_gen.py
- anli:
    name: Adversarial NLI
    category: Reasoning
@@ -278,7 +278,7 @@ Here's an example of how to configure the CascadeEvaluator:

```python
# Define a rule-based evaluator
rule_evaluator = dict(type=MATHEvaluator)
rule_evaluator = dict(type=MATHVerifyEvaluator)

# Define an LLM judge evaluator
llm_judge_evaluator = dict(
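For orientation, here is a minimal sketch of how the two evaluators defined in this snippet are typically combined (modeled on the cascade configs touched later in this commit; the judge prompt template and judge model settings are omitted, and `eval_cfg` is just an illustrative name):

```python
# Minimal sketch, not the verbatim documentation example.
# `llm_judge_evaluator` stands in for the GenericLLMEvaluator dict begun above;
# its prompt template and judge settings are omitted here.
from opencompass.evaluator import (
    CascadeEvaluator,
    GenericLLMEvaluator,
    MATHVerifyEvaluator,
)

rule_evaluator = dict(type=MATHVerifyEvaluator)
llm_judge_evaluator = dict(type=GenericLLMEvaluator)  # judge prompt/model config omitted

cascade_evaluator = dict(
    type=CascadeEvaluator,
    rule_evaluator=rule_evaluator,      # cheap rule-based check runs first
    llm_evaluator=llm_judge_evaluator,  # LLM judge re-grades only samples the rule check fails
)

# The cascade is then plugged into a dataset eval config like any other evaluator:
eval_cfg = dict(evaluator=cascade_evaluator)
```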
@@ -2,7 +2,7 @@

## Introduction

Mathematical reasoning is a crucial capability for large language models (LLMs). To evaluate a model's mathematical abilities, we need to test its capability to solve mathematical problems step by step and provide accurate final answers. OpenCompass provides a convenient way to evaluate mathematical reasoning through the CustomDataset and MATHEvaluator components.
Mathematical reasoning is a crucial capability for large language models (LLMs). To evaluate a model's mathematical abilities, we need to test its capability to solve mathematical problems step by step and provide accurate final answers. OpenCompass provides a convenient way to evaluate mathematical reasoning through the CustomDataset and MATHVerifyEvaluator components.

## Dataset Format

@@ -61,7 +61,7 @@ math_infer_cfg = dict(

```python
math_eval_cfg = dict(
    evaluator=dict(type=MATHEvaluator),
    evaluator=dict(type=MATHVerifyEvaluator),
)
```
@@ -86,11 +86,11 @@ math_datasets = [
]
```

## MATHEvaluator
## MATHVerifyEvaluator

The MATHEvaluator is specifically designed to evaluate mathematical answers. It is developed based on the math_verify library, which provides mathematical expression parsing and verification capabilities, supporting extraction and equivalence verification for both LaTeX and general expressions.
The MATHVerifyEvaluator is specifically designed to evaluate mathematical answers. It is developed based on the math_verify library, which provides mathematical expression parsing and verification capabilities, supporting extraction and equivalence verification for both LaTeX and general expressions.

The MATHEvaluator implements:
The MATHVerifyEvaluator implements:

1. Extracts answers from both predictions and references using LaTeX extraction
2. Handles various LaTeX formats and environments
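Because the evaluator delegates answer checking to math_verify, its behavior can be previewed by calling that library directly. A minimal sketch, assuming the `math_verify` package is installed and exposes `parse`/`verify` as in its public API; the expressions are illustrative:

```python
# Illustrative sketch of the parse/verify workflow the evaluator builds on.
from math_verify import parse, verify

gold = parse('$\\frac{1}{2}$')       # parse the reference answer
pred = parse('The answer is $0.5$')  # extract the answer from a model response

print(verify(gold, pred))  # expected True: 1/2 and 0.5 are checked for equivalence
```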
@@ -133,7 +133,7 @@ Here's a complete example of how to set up math evaluation:
from mmengine.config import read_base
from opencompass.models import TurboMindModelwithChatTemplate
from opencompass.datasets import CustomDataset
from opencompass.openicl.icl_evaluator.math_evaluator import MATHEvaluator
from opencompass.evaluator import MATHVerifyEvaluator
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
@@ -160,7 +160,7 @@ math_infer_cfg = dict(

# Evaluation configuration
math_eval_cfg = dict(
    evaluator=dict(type=MATHEvaluator),
    evaluator=dict(type=MATHVerifyEvaluator),
)

# Dataset configuration
@@ -277,7 +277,7 @@ OpenCompass also provides the cascade evaluator `CascadeEvaluator`, which combines rule-based

```python
# Define a rule-based evaluator
rule_evaluator = dict(type=MATHEvaluator)
rule_evaluator = dict(type=MATHVerifyEvaluator)

# Define an LLM judge evaluator
llm_judge_evaluator = dict(
@@ -2,7 +2,7 @@

## Introduction

Mathematical reasoning is a key capability of large language models (LLMs). To evaluate a model's mathematical abilities, we need to test whether it can solve mathematical problems step by step and arrive at accurate final answers. OpenCompass offers a convenient way to evaluate mathematical reasoning through the CustomDataset and MATHEvaluator components.
Mathematical reasoning is a key capability of large language models (LLMs). To evaluate a model's mathematical abilities, we need to test whether it can solve mathematical problems step by step and arrive at accurate final answers. OpenCompass offers a convenient way to evaluate mathematical reasoning through the CustomDataset and MATHVerifyEvaluator components.

## Dataset Format

@@ -61,7 +61,7 @@ math_infer_cfg = dict(

```python
math_eval_cfg = dict(
    evaluator=dict(type=MATHEvaluator),
    evaluator=dict(type=MATHVerifyEvaluator),
)
```
@@ -86,11 +86,11 @@ math_datasets = [
]
```

## MATHEvaluator
## MATHVerifyEvaluator

MATHEvaluator is an evaluator designed specifically for assessing mathematical answers. It is built on the math_verify library, which provides mathematical expression parsing and verification, supporting extraction and equivalence checking for both LaTeX and general expressions.
MATHVerifyEvaluator is an evaluator designed specifically for assessing mathematical answers. It is built on the math_verify library, which provides mathematical expression parsing and verification, supporting extraction and equivalence checking for both LaTeX and general expressions.

MATHEvaluator provides the following functionality:
MATHVerifyEvaluator provides the following functionality:

1. Extracts answers from both predictions and reference answers using a LaTeX extractor
2. Handles various LaTeX formats and environments
@@ -133,7 +133,7 @@ MATHEvaluator provides the following functionality:
from mmengine.config import read_base
from opencompass.models import TurboMindModelwithChatTemplate
from opencompass.datasets import CustomDataset
from opencompass.openicl.icl_evaluator.math_evaluator import MATHEvaluator
from opencompass.evaluator import MATHVerifyEvaluator
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
@@ -160,7 +160,7 @@ math_infer_cfg = dict(

# Evaluation configuration
math_eval_cfg = dict(
    evaluator=dict(type=MATHEvaluator),
    evaluator=dict(type=MATHVerifyEvaluator),
)

# Dataset configuration
@@ -7,9 +7,12 @@ from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.evaluator import GenericLLMEvaluator, CascadeEvaluator
from opencompass.evaluator import (
    GenericLLMEvaluator,
    CascadeEvaluator,
    MATHVerifyEvaluator,
)
from opencompass.datasets import generic_llmjudge_postprocess
from opencompass.openicl.icl_evaluator import MATHEvaluator
from opencompass.datasets import (
    MATHDataset,
    math_postprocess_v2,
@@ -94,7 +97,7 @@ llm_judge_evaluator = dict(
    judge_cfg=dict(),
)

rule_evaluator = dict(type=MATHEvaluator)
rule_evaluator = dict(type=MATHVerifyEvaluator)
cascade_evaluator = dict(type=CascadeEvaluator,
    llm_evaluator=llm_judge_evaluator,
    rule_evaluator=rule_evaluator,
@@ -1,87 +0,0 @@
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import Aime2024Dataset, MATHEvaluator, math_postprocess_v2
from opencompass.openicl.icl_evaluator import LMEvaluator
from opencompass.datasets import generic_llmjudge_postprocess

aime2024_reader_cfg = dict(
    input_columns=['question'],
    output_column='answer'
)


aime2024_infer_cfg = dict(
    prompt_template=dict(
        type=PromptTemplate,
        template=dict(
            round=[
                dict(role='HUMAN', prompt='{question}\nRemember to put your final answer within \\boxed{}.'),
            ],
        )
    ),
    retriever=dict(type=ZeroRetriever),
    inferencer=dict(type=GenInferencer, max_out_len=2048)
)


GRADER_TEMPLATE = """
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.

Here are some evaluation criteria:
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
5. If the prediction is given with \\boxed{}, please ignore the \\boxed{} and only judge whether the candidate's answer is consistent with the standard answer.

Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
A: CORRECT
B: INCORRECT
Just return the letters "A" or "B", with no text around it.

Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.


<Original Question Begin>: \n{question}\n<Original Question End>\n\n
<Gold Target Begin>: \n{answer}\n<Gold Target End>\n\n
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n

Judging the correctness of candidates' answers:
""".strip()


aime2024_eval_cfg = dict(
    evaluator=dict(
        type=LMEvaluator,
        prompt_template=dict(
            type=PromptTemplate,
            template=dict(
                begin=[
                    dict(
                        role='SYSTEM',
                        fallback_role='HUMAN',
                        prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
                ],
                round=[
                    dict(
                        role='HUMAN',
                        prompt=GRADER_TEMPLATE
                    ),
                ]),
        ),
        dict_postprocessor=dict(type=generic_llmjudge_postprocess),
    ),
    pred_role='BOT',
)


aime2024_datasets = [
    dict(
        abbr='aime2024',
        type=Aime2024Dataset,
        path='opencompass/aime2024',
        reader_cfg=aime2024_reader_cfg,
        infer_cfg=aime2024_infer_cfg,
        eval_cfg=aime2024_eval_cfg,
        mode='singlescore',
    )
]
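The grader template above asks the judge to reply with a bare letter. As a hypothetical illustration only (the real `generic_llmjudge_postprocess` used in these configs may differ), such a verdict can be mapped to a boolean like this:

```python
# Hypothetical helper for illustration; not the OpenCompass implementation.
def judge_verdict_is_correct(judge_output: str) -> bool:
    """Map a judge reply such as 'A', 'A: CORRECT' or 'B: INCORRECT' to a boolean."""
    return judge_output.strip().upper().startswith('A')
```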
@@ -1,28 +1,44 @@
"""
Summary: A config for AIME-2024 Evaluation.
Setting:
    Shot: 0-shot
    Evaluator:
        - CascadeEvaluator
            - MATHVerifyEvaluator
            - GenericLLMEvaluator
Available Models:
    - Instruct/Chat Models
"""
from opencompass.datasets.arc_prize_public_evaluation import pad_array_with_value
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import Aime2024Dataset, MATHEvaluator, math_postprocess_v2
from opencompass.evaluator import GenericLLMEvaluator
from opencompass.datasets import generic_llmjudge_postprocess
from opencompass.utils import xml_tag_postprocessor

aime2024_reader_cfg = dict(
    input_columns=['question'],
    output_column='answer'
from opencompass.datasets import Aime2024Dataset
from opencompass.evaluator import (
    CascadeEvaluator,
    GenericLLMEvaluator,
    MATHVerifyEvaluator
)

from opencompass.datasets import generic_llmjudge_postprocess

aime2024_reader_cfg = dict(input_columns=['question'], output_column='answer')


aime2024_infer_cfg = dict(
    prompt_template=dict(
        type=PromptTemplate,
        template=dict(
            round=[
                dict(role='HUMAN', prompt='{question}\nRemember to put your final answer within \\boxed{}.'),
                dict(
                    role='HUMAN',
                    prompt='{question}\nRemember to put your final answer within \\boxed{}.',
                ),
            ],
        )
        ),
    ),
    retriever=dict(type=ZeroRetriever),
    inferencer=dict(type=GenInferencer, max_out_len=2048)
    inferencer=dict(type=GenInferencer),
)


@@ -51,35 +67,45 @@ GRADER_TEMPLATE = """
Judging the correctness of candidates' answers:
""".strip()

aime2024_eval_cfg = dict(
    evaluator=dict(
        type=GenericLLMEvaluator,
        prompt_template=dict(
            type=PromptTemplate,
            template=dict(
                begin=[
                    dict(
                        role='SYSTEM',
                        fallback_role='HUMAN',
                        prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
                ],
                round=[
                    dict(
                        role='HUMAN',
                        prompt=GRADER_TEMPLATE
                    ),
                ]),
        ),
        dataset_cfg=dict(
            type=Aime2024Dataset,
            path='opencompass/aime2024',
            reader_cfg=aime2024_reader_cfg,
        ),
        judge_cfg=dict(),
        dict_postprocessor=dict(type=generic_llmjudge_postprocess),
        pred_postprocessor=dict(type=xml_tag_postprocessor, tag='<conclude>'),
cascade_evaluator = dict(
    type=CascadeEvaluator,
    rule_evaluator=dict(
        type=MATHVerifyEvaluator,
    ),
    pred_role='BOT',
    llm_evaluator=dict(
        dict(
            type=GenericLLMEvaluator,
            prompt_template=dict(
                type=PromptTemplate,
                template=dict(
                    begin=[
                        dict(
                            role='SYSTEM',
                            fallback_role='HUMAN',
                            prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.",
                        )
                    ],
                    round=[
                        dict(role='HUMAN', prompt=GRADER_TEMPLATE),
                    ],
                ),
            ),
            dataset_cfg=dict(
                type=Aime2024Dataset,
                path='opencompass/aime2024',
                reader_cfg=aime2024_reader_cfg,
                n=2,
            ),
            judge_cfg=dict(),
            dict_postprocessor=dict(type=generic_llmjudge_postprocess),
        )
    ),
    # parallel=False,
)


aime2024_eval_cfg = dict(
    evaluator=cascade_evaluator,
)

aime2024_datasets = [
@@ -90,6 +116,6 @@ aime2024_datasets = [
        reader_cfg=aime2024_reader_cfg,
        infer_cfg=aime2024_infer_cfg,
        eval_cfg=aime2024_eval_cfg,
        mode='singlescore',
        n=2,  # Evaluate the dataset 2 times
    )
]
]
@@ -2,7 +2,7 @@ from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import CustomDataset
from opencompass.openicl.icl_evaluator.math_evaluator import MATHEvaluator
from opencompass.evaluator import MATHVerifyEvaluator

math_reader_cfg = dict(input_columns=['problem'], output_column='solution')

@@ -24,7 +24,7 @@ math_infer_cfg = dict(


math_eval_cfg = dict(
    evaluator=dict(type=MATHEvaluator),
    evaluator=dict(type=MATHVerifyEvaluator),
)

math_datasets = [
@@ -2,7 +2,7 @@ from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.datasets import MATHDataset
from opencompass.openicl.icl_evaluator import MATHEvaluator
from opencompass.evaluator import MATHVerifyEvaluator

math_reader_cfg = dict(input_columns=['problem'], output_column='solution')

@@ -24,7 +24,7 @@ math_infer_cfg = dict(
    inferencer=dict(type=GenInferencer))

math_eval_cfg = dict(
    evaluator=dict(type=MATHEvaluator)
    evaluator=dict(type=MATHVerifyEvaluator)
)

math_datasets = [
@@ -1,7 +1,7 @@
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.openicl.icl_retriever import ZeroRetriever
from opencompass.openicl.icl_inferencer import GenInferencer
from opencompass.openicl.icl_evaluator import MATHEvaluator
from opencompass.evaluator import MATHVerifyEvaluator
from opencompass.datasets import (
    MATHDataset,
    math_postprocess_v2,
@@ -28,7 +28,7 @@ math_infer_cfg = dict(

# postprocess v2
math_eval_cfg = dict(
    evaluator=dict(type=MATHEvaluator)
    evaluator=dict(type=MATHVerifyEvaluator)
)

math_datasets = [
@@ -1,2 +1,3 @@
from .cascade_evaluator import CascadeEvaluator # noqa
from .generic_llm_evaluator import GenericLLMEvaluator # noqa
from .math_evaluator import MATHVerifyEvaluator # noqa
@@ -181,8 +181,10 @@ class CascadeEvaluator(BaseEvaluator):
        original_out_dir = getattr(self.llm_evaluator, '_out_dir', None)
        self.llm_evaluator._out_dir = f'{self._out_dir}_llm_judge'

        # Append the dataset replica index to the results path
        llm_results_path = f'{self.llm_evaluator._out_dir}_{self.dataset_replica_idx}'  # noqa

        # Check if results already exist to avoid re-evaluation
        llm_results_path = f'{self.llm_evaluator._out_dir}.json'
        if os.path.exists(llm_results_path):
            self.logger.info(
                f'Loading existing LLM evaluation results from '
@@ -212,7 +214,9 @@ class CascadeEvaluator(BaseEvaluator):
            # Use GenericLLMEvaluator to evaluate samples
            # unset dataset_cfg for GenericLLMEvaluator to
            # directly use test_set
            self.llm_evaluator.output_path = llm_results_path
            self.llm_evaluator.dataset_cfg = None

            llm_results = self.llm_evaluator.score(
                predictions=failed_predictions,
                references=failed_references,
@@ -1,5 +1,6 @@
import os
import os.path as osp
from copy import deepcopy
from typing import Dict, List, Optional

import mmengine
@@ -54,12 +55,16 @@ class GenericLLMEvaluator(BaseEvaluator):
        self.dict_postprocessor = dict_postprocessor
        self.pred_postprocessor = pred_postprocessor

    def build_inferencer(self, ):
    def build_inferencer(self):
        """Build LLM Inference."""
        output_path = self._out_dir
        self.output_path = f'{output_path}.json'
        out_dir, out_name = osp.split(output_path)
        if not self.output_path:
            # output_path = self._out_dir
            # self.output_path = f'{output_path}.json'
            self.output_path = self._out_dir

        out_dir, out_name = osp.split(self.output_path)
        out_name = f'{out_name}.json'
        self.output_path = osp.join(out_dir, out_name)

        self.logger.info(
            f'Set self.output_path to {self.output_path} for current task')
@@ -96,9 +101,9 @@ class GenericLLMEvaluator(BaseEvaluator):
        assert len(predictions) == len(
            references), 'predictions and references must have the same length'

        # import pdb;pdb.set_trace()
        # -------------- Build Inferencer ----------------
        self.build_inferencer()

        # ---------------- Process Predictions ------------------
        predictions = self.pred_postprocess(predictions)

@@ -178,7 +183,7 @@ class GenericLLMEvaluator(BaseEvaluator):
        if self.dict_postprocessor is None:
            return output
        else:
            kwargs = self.dict_postprocessor
            kwargs = deepcopy(self.dict_postprocessor)
            proc = DICT_POSTPROCESSORS.get(kwargs.pop('type'))
            sig = inspect.signature(proc)
            if 'dataset' in sig.parameters:
@@ -192,7 +197,8 @@ class GenericLLMEvaluator(BaseEvaluator):
    @property
    def default_judge_cfg(self):
        from opencompass.models import OpenAISDK

        self.logger.info('Please set your judge model in `OC_JUDGE_MODEL`, \
            `OC_JUDGE_API_KEY`, `OC_JUDGE_API_BASE` environment variables.')
        DEFAULT_JUDGE_CFG = dict(
            type=OpenAISDK,
            path=os.environ['OC_JUDGE_MODEL'],
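`default_judge_cfg` reads the judge model from environment variables. A minimal sketch of supplying them before launching an evaluation; the model name, key, and endpoint below are placeholders, not recommendations:

```python
# Placeholders only: substitute your own judge model, API key, and endpoint.
import os

os.environ['OC_JUDGE_MODEL'] = 'your-judge-model-name'
os.environ['OC_JUDGE_API_KEY'] = 'your-api-key'
os.environ['OC_JUDGE_API_BASE'] = 'https://your-openai-compatible-endpoint/v1'
```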
@@ -3,7 +3,7 @@ from opencompass.registry import ICL_EVALUATORS


@ICL_EVALUATORS.register_module()
class MATHEvaluator(BaseEvaluator):
class MATHVerifyEvaluator(BaseEvaluator):

    def score(self, predictions, references):
        try:
@@ -14,4 +14,3 @@ from .icl_misc_evaluator import AveragePPLEvaluator # noqa
from .icl_plugin_evaluator import TEvalEvaluator # noqa
from .icl_toxic_evaluator import ToxicEvaluator # noqa
from .lm_evaluator import LMEvaluator # noqa
from .math_evaluator import MATHEvaluator # noqa
@@ -47,6 +47,10 @@ class BaseEvaluator:
        # please see opencompass/opencompass/tasks/openicl_eval.py Line 197-200
        return self._out_dir

    @property
    def dataset_replica_idx(self):
        return self._dataset_replica_idx

    def group(self, n: int, details: List[Dict[str, Any]],
              test_set: Dataset) -> Dict[str, Any]:
        example2replications = {}
@@ -102,6 +106,7 @@ class BaseEvaluator:
        all_details = []
        all_results = []
        for i in range(n):
            self._dataset_replica_idx = i

            def select_fn(i, real_size, x):
                if isinstance(x, Dataset):
@@ -111,11 +116,13 @@ class BaseEvaluator:
                else:
                    return x

            results = self.score(
                **{
                    key: select_fn(i, real_size, value)
                    for key, value in score_kwargs.items()
                })
            current_params = {
                key: select_fn(i, real_size, value)
                for key, value in score_kwargs.items()
            }

            results = self.score(**current_params)

            details = results.pop('details', None)
            if details is not None:
                if isinstance(details, Dict):
@@ -138,8 +145,6 @@ class BaseEvaluator:
                    eval_results.pop(key)
                else:
                    eval_results[key] = np.mean(eval_results[key])
            else:
                eval_results[key] = eval_results[key][0]

        grouped_examples = self.group(n, all_details, original_dataset)
        can_calculate = False
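To make the replica handling above concrete: when a dataset is configured with `n` replicas (as in the `n=2` AIME config earlier in this commit), `evaluate` calls `score` once per replica and list-valued metrics are then reduced with `np.mean`. A small illustrative sketch of that reduction, with made-up metric values:

```python
# Illustrative only: mimics the np.mean reduction applied to per-replica metrics.
import numpy as np

eval_results = {'accuracy': [80.0, 86.7]}  # e.g. one accuracy value per replica (n=2)
final = {k: float(np.mean(v)) if isinstance(v, list) else v
         for k, v in eval_results.items()}
print(final)  # {'accuracy': 83.35}
```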