Mirror of https://github.com/open-compass/opencompass.git (synced 2025-05-30 16:03:24 +08:00)

Merge branch 'open-compass:main' into main

Commit f8f673a424

.github/workflows/pr-run-test.yml (vendored, 6 lines changed)
@@ -73,10 +73,10 @@ jobs:
 exit 1
 fi
 score=$(sed -n '$p' regression_result3/*/summary/*.csv | awk -F ',' '{print $NF}')
-if (( ${score%.*} >= 84 && ${score%.*} <= 87 )); then
-echo "score is $score between 84 and 87"
+if (( ${score%.*} >= 87 && ${score%.*} <= 89 )); then
+echo "score is $score between 87 and 89"
 else
-echo "score is $score not between 84 and 87"
+echo "score is $score not between 87 and 89"
 exit 1
 fi
 rm -rf regression_result1 & rm -rf regression_result2 & rm -rf regression_result3
@@ -57,6 +57,7 @@ Just like a compass guides us on our journey, OpenCompass will guide you through

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

- **\[2024.12.17\]** We have provided the evaluation script for the December [CompassAcademic](configs/eval_academic_leaderboard_202412.py), which allows users to easily reproduce the official evaluation results by configuring it.
- **\[2024.11.14\]** OpenCompass now offers support for a sophisticated benchmark designed to evaluate complex reasoning skills — [MuSR](https://arxiv.org/pdf/2310.16049). Check out the [demo](configs/eval_musr.py) and give it a spin! 🔥🔥🔥
- **\[2024.11.14\]** OpenCompass now supports the brand new long-context language model evaluation benchmark — [BABILong](https://arxiv.org/pdf/2406.10149). Have a look at the [demo](configs/eval_babilong.py) and give it a try! 🔥🔥🔥
- **\[2024.10.14\]** We now support the OpenAI multilingual QA dataset [MMMLU](https://huggingface.co/datasets/openai/MMMLU). Feel free to give it a try! 🔥🔥🔥
@@ -57,6 +57,7 @@

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

- **\[2024.12.17\]** We have provided the evaluation script for the December CompassAcademic leaderboard ([CompassAcademic](configs/eval_academic_leaderboard_202412.py)); you can reproduce the official evaluation results with a simple configuration.
- **\[2024.10.14\]** The OpenAI multilingual QA dataset [MMMLU](https://huggingface.co/datasets/openai/MMMLU) is now supported. Feel free to give it a try! 🔥🔥🔥
- **\[2024.09.19\]** [Qwen2.5](https://huggingface.co/Qwen) (0.5B to 72B) is now supported with multiple inference backends (huggingface/vllm/lmdeploy). Feel free to give it a try! 🔥🔥🔥
- **\[2024.09.05\]** The OpenAI o1 models (`o1-mini-2024-09-12` and `o1-preview-2024-09-12`) are now supported. Feel free to give it a try! 🔥🔥🔥
@@ -1,4 +1,4 @@
 from mmengine.config import read_base

 with read_base():
-    from .lcbench_repeat10_gen_5ff288 import LCBench_datasets_repeat10 # noqa: F401, F403
+    from .lcbench_repeat10_gen_5ff288 import LCBench_repeat10_datasets # noqa: F401, F403
@@ -86,7 +86,7 @@ LC_cn_infer_cfg = dict(

 LC_eval_cfg = dict(evaluator=dict(type=LCPassKEvaluator), pred_role='BOT')

-LCBench_datasets_repeat10 = [
+LCBench_repeat10_datasets = [
     dict(
         type=LCDataset,
         abbr='lcbench_en_repeat10',
@@ -0,0 +1,169 @@
# CompassArena-SubjectiveBench (Pairwise Eval with Bradley-Terry Model)

## Introduction

The following introduction comes from the abstract of [Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference](https://arxiv.org/abs/2403.04132):

> Large Language Models (LLMs) have unlocked new capabilities and applications; however, evaluating the alignment with human preferences still poses significant challenges. To address this issue, we introduce Chatbot Arena, an open platform for evaluating LLMs based on human preferences. Our methodology employs a pairwise comparison approach and leverages input from a diverse user base through crowdsourcing. The platform has been operational for several months, amassing over 240K votes. This paper describes the platform, analyzes the data we have collected so far, and explains the tried-and-true statistical methods we are using for efficient and accurate evaluation and ranking of models. We confirm that the crowdsourced questions are sufficiently diverse and discriminating and that the crowdsourced human votes are in good agreement with those of expert raters. These analyses collectively establish a robust foundation for the credibility of Chatbot Arena. Because of its unique value and openness, Chatbot Arena has emerged as one of the most referenced LLM leaderboards, widely cited by leading LLM developers and companies.

For this dataset, we adapt the Bradley-Terry rating system from FastChat to the subjective evaluation setting, replacing human evaluators with LLM-as-a-judge.

## Official Links

- Paper: [Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference](https://arxiv.org/abs/2403.04132)
- GitHub Repository: [FastChat](https://github.com/lm-sys/FastChat/tree/main)
## Overview and Usage

### Inference

During the inference stage, each LLM produces a response to the presented input (a single question for single-turn and an entire conversation for multi-turn).

### Evaluation

During the evaluation stage, the judge model responds with a critique and chooses the LLM with the better answer for each pair. This preference is later used to form the "winner" response variable in the postprocessor. Note that the predictions for each model must be saved (by setting `keep_predictions=True` in the evaluator config) in order for the postprocessor to calculate style features. See the example config `opencompass/configs/datasets/subjective/compass_arena_subjective_bench/singleturn/pairwise_bt_judge.py` for more details.

#### Postprocessor

After evaluation by the judge model, we gather the pairwise matchups and any additional group variables (e.g. difficulty, category) in the postprocessor. Note that the LLM predictions ("prediction1" and "prediction2") must be passed on from the inference stage; otherwise, an error will be thrown.
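For reference, the snippet below condenses the evaluator portion of the single-turn dataset config included in this commit; the essential pieces are `keep_predictions=True` and the Bradley-Terry postprocessor (a sketch, not the full config):

```python
from opencompass.openicl.icl_evaluator import LMEvaluator
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.datasets import compassarena_subjectiveeval_bradleyterry_postprocess

subjective_eval_cfg = dict(
    evaluator=dict(
        type=LMEvaluator,
        prompt_template=dict(
            type=PromptTemplate,
            template=dict(round=[
                dict(role='HUMAN', prompt='{pairwise_judge_prompt}'),
            ]),
        ),
        # Collects pairwise matchups plus group variables (difficulty, category)
        # and builds the "winner" variable from the judge's preference.
        dict_postprocessor=dict(
            type=compassarena_subjectiveeval_bradleyterry_postprocess),
        # Must be on so "prediction1"/"prediction2" are saved for style features.
        keep_predictions=True,
    ),
    pred_role='BOT',
)
```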
### Summary

After the judge model's evaluation, we fit a Bradley-Terry (BT) statistical model to estimate the rating and ranking of each LLM, with the option to include style features and control variables for groups. The settings below control the specification of the BT model as well as how results are reported (a matching summarizer configuration sketch follows the list):

- `rating_system`: The rating system used. Currently only supports "bradleyterry".

- `num_bootstrap`: The number of bootstrap rounds for estimating the confidence intervals of ratings.

- `with_control_vars`: Whether to include additional covariates (including style features and group variables) when fitting the BT model.

- `normalize_style_features`: Whether to normalize style features BEFORE fitting the BT model (implementation by FastChat). Turn this off for easier interpretation of odds ratios (when `odds_ratio==True`).

- `odds_ratio`: Whether to report odds ratios ($e^{\beta_i}$) instead of the original coefficients. See the section "Estimated Coefficients of Control variables" for more explanation.

- `groups`: List of group variables to include while fitting the BT model. These must be available in the input dataset for each observation. Group variables are assumed to be categorical, and one-hot encoding is automatically performed before model fitting.
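These options map directly onto the summarizer entry in `configs/eval_compassarena_subjectivebench_bradleyterry.py` (shown later in this commit); a minimal sketch:

```python
from opencompass.summarizers import CompassArenaBradleyTerrySummarizer

summarizer = dict(
    type=CompassArenaBradleyTerrySummarizer,
    rating_system='bradleyterry',       # currently the only supported rating system
    num_bootstrap=100,                  # bootstrap rounds for the rating confidence intervals
    with_control_vars=True,             # add style features and group variables as covariates
    normalize_style_features=False,     # keep raw features so odds ratios stay interpretable
    odds_ratio=True,                    # report exp(beta) instead of raw coefficients
    groups=['difficulty', 'category'],  # categorical; one-hot encoded before fitting
)
```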
### Config Files

1. Dataset configs:

   - single turn: `opencompass/configs/datasets/subjective/compass_arena_subjective_bench/singleturn/pairwise_bt_judge.py`
   - multi-turn: `opencompass/configs/datasets/subjective/compass_arena_subjective_bench/multiturn/pairwise_bt_judge.py`

2. Evaluation config:

   - `configs/eval_compassarena_subjectivebench_bradleyterry.py`
## Evaluation Results

### Bradley-Terry Rating

The rating of each model is a scaled version of the estimated "strength" coefficients of the fitted Bradley-Terry model. We use the Elo scale with an initial rating of 1000 and a scaling factor of 400 to match the scale used in [CompassArena](https://opencompass.org.cn/arena). Furthermore, we anchor the ratings on the base model, as it naturally represents the reference model we are comparing against. This is why the base model always has a rating of 1000 with a zero standard deviation.
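The scaling and anchoring can be summarized by the small sketch below (hypothetical strength coefficients; the exact log base and the bootstrapping used by the actual summarizer implementation may differ):

```python
import math

SCALE = 400          # Elo-style scaling factor used by CompassArena
INIT_RATING = 1000   # rating assigned to the anchor (base) model

def bt_strength_to_rating(strengths: dict, base_model: str) -> dict:
    """Map Bradley-Terry strength coefficients (log scale) to Elo-scale
    ratings, anchored so the base model sits exactly at INIT_RATING."""
    base = strengths[base_model]
    return {
        model: INIT_RATING + SCALE * (coef - base) / math.log(10)
        for model, coef in strengths.items()
    }

# Hypothetical coefficients on the natural-log scale.
strengths = {
    'Qwen-2.5-72B-Instruct': 0.00,
    'qwen2.5-32b-instruct-turbomind': -0.42,
    'qwen2.5-14b-instruct-turbomind': -0.53,
}
print(bt_strength_to_rating(strengths, 'Qwen-2.5-72B-Instruct'))
```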
```
   dataset     version  base_model             metric     mode     ranking  ranking_ub  model_name                      rating   rating_q975  rating_q025  std_dev  num_battles
0  singleturn  635142   Qwen-2.5-72B-Instruct  bt_rating  gen      1        1           Qwen-2.5-72B-Instruct           1000.00  1000.00      1000.00      0.00     4229
1  singleturn  635142   Qwen-2.5-72B-Instruct  bt_rating  gen      2        2           qwen2.5-32b-instruct-turbomind  926.54   941.72       908.29       8.21     1055
2  singleturn  635142   Qwen-2.5-72B-Instruct  bt_rating  gen      3        2           qwen2.5-14b-instruct-turbomind  907.23   921.08       897.09       6.68     1055
3  singleturn  635142   Qwen-2.5-72B-Instruct  bt_rating  gen      4        2           qwen2-7b-instruct-turbomind     901.99   919.06       885.95       8.44     1060
4  singleturn  635142   Qwen-2.5-72B-Instruct  bt_rating  gen      5        2           qwen2.5-7b-instruct-turbomind   893.03   910.58       877.02       8.65     1059
5  multiturn   fff2b4   Qwen-2.5-72B-Instruct  bt_rating  unknown  1        1           Qwen-2.5-72B-Instruct           1000.00  1000.00      1000.00      0.00     1127
6  multiturn   fff2b4   Qwen-2.5-72B-Instruct  bt_rating  unknown  2        2           qwen2.5-32b-instruct-turbomind  942.53   972.14       903.84       18.89    282
7  multiturn   fff2b4   Qwen-2.5-72B-Instruct  bt_rating  unknown  3        2           qwen2-7b-instruct-turbomind     940.34   974.22       895.80       21.72    282
8  multiturn   fff2b4   Qwen-2.5-72B-Instruct  bt_rating  unknown  4        2           qwen2.5-14b-instruct-turbomind  929.09   959.98       896.80       18.16    282
9  multiturn   fff2b4   Qwen-2.5-72B-Instruct  bt_rating  unknown  5        2           qwen2.5-7b-instruct-turbomind   907.07   936.71       876.88       16.87    281
```
### Estimated Coefficients of Control variables

The scale and interpretation of these numbers depend on the summarizer settings for `CompassArenaBradleyTerrySummarizer`. If `normalize_style_features` is set, the style features are the normalized relative difference between model A and B, with the following form:

$$
\text{normalize}\left(\frac{\text{feature}_A - \text{feature}_B}{\text{feature}_A + \text{feature}_B}\right)
$$

See [Does Style Matter?](https://blog.lmarena.ai/blog/2024/style-control/) for more information.
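A rough sketch of how such style features can be computed for a pair of responses is shown below (the feature names follow the results in the next section; the exact counting rules in the FastChat/OpenCompass implementation may differ):

```python
import re

def style_features(text: str) -> dict:
    """Approximate per-response style features: token count via whitespace
    splitting, markdown headers/lists/bold counted with simple regexes."""
    return {
        'sum_assistant_tokens': len(text.split()),
        'header_count': len(re.findall(r'^#{1,6}\s', text, flags=re.M)),
        'list_count': len(re.findall(r'^\s*(?:[-*+]|\d+\.)\s', text, flags=re.M)),
        'bold_count': len(re.findall(r'\*\*[^*]+\*\*', text)),
    }

def relative_difference(resp_a: str, resp_b: str) -> dict:
    """(feature_A - feature_B) / (feature_A + feature_B) for each feature,
    i.e. the quantity that `normalize_style_features` would further normalize."""
    fa, fb = style_features(resp_a), style_features(resp_b)
    return {
        k: (fa[k] - fb[k]) / (fa[k] + fb[k]) if (fa[k] + fb[k]) else 0.0
        for k in fa
    }

print(relative_difference('**Short** answer.', 'A much longer answer\n- with\n- lists'))
```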
Additionally, if `odds_ratio` is set, the odds ratios are returned instead of the raw coefficients. In other words, we report:

$$
\text{OddsRatio}_i = \frac{e^{\beta_0 + \beta_i(x_i+1) + \sum_{j\ne i}^m\beta_jx_j}}{e^{\beta_0 + \beta_ix_i + \sum_{j\ne i}^m\beta_jx_j}} = e^{\beta_i}
$$

which can be interpreted as the multiplicative increase in the odds of winning for every 1-unit increase in $x_i$.
For example, the following results are reported with `normalize_style_features==False` and `odds_ratio==True`:

```
{
    "singleturn": {
        "Qwen-2.5-72B-Instruct": {
            "sum_assistant_tokens": 6.577376545800252,
            "header_count": 1.4880636137846999,
            "list_count": 1.1558594451186806,
            "bold_count": 1.7918326386585717,
            "difficulty_Advanced": 1.0281620474711213,
            "difficulty_Easy": 1.0557367496235666,
            "difficulty_Medium": 1.1768581931447049,
            "category_人类对齐": 0.8087074923883157,
            "category_代码": 1.2717334332407775,
            "category_创作": 1.0430652013278148,
            "category_推理": 1.1592759054335746,
            "category_日常对话": 0.979047716903164,
            "category_自然语言处理": 1.006707704304149,
            "category_角色扮演": 1.2296103927210726,
            "category_重写": 0.7952522120597192,
            "category_领域知识问答": 1.0658003517547319
        }
    },
    "multiturn": {
        "Qwen-2.5-72B-Instruct": {
            "sum_assistant_tokens": 4.470153434554273,
            "header_count": 1.130542616688942,
            "list_count": 1.4753419673439991,
            "bold_count": 1.476348454534956,
            "difficulty_Advanced": 1.1668553174437737,
            "difficulty_Easy": 1.142118410006132,
            "difficulty_Medium": 0.9651479035385795,
            "category_人类对齐": 0.9606676068409767,
            "category_代码": 0.9348722519214725,
            "category_创作": 1.0362490715530026,
            "category_推理": 0.8546385641566406,
            "category_日常对话": 1.0481269627721679,
            "category_自然语言处理": 1.358391853082614,
            "category_角色扮演": 1.0432636535119493,
            "category_重写": 0.7398232857603452,
            "category_领域知识问答": 1.4715970942932421
        }
    }
}
```
Example Interpretation (a small numerical check follows the list):

- For the single-turn dataset with "Qwen-2.5-72B-Instruct" as the base model, holding all else constant, the odds of winning are 6.6 times greater for every unit increase in the relative difference (unnormalized) in response length between model A and B.

- For the multi-turn dataset with "Qwen-2.5-72B-Instruct" as the base model, holding all else constant, the odds of winning are 26% smaller (1-0.74) for "rewrite" (重写) category questions compared to non-rewrite questions.
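As a quick numerical sanity check of these readings (using the reported values above; this is a sketch, not part of the summarizer):

```python
import math

# Odds ratios (exp(beta)) taken from the tables above.
or_response_length = 6.577   # singleturn, sum_assistant_tokens
or_rewrite = 0.7398          # multiturn, category_重写

beta_length = math.log(or_response_length)   # underlying coefficient, about 1.88
pct_change_rewrite = (or_rewrite - 1) * 100  # about -26%: odds drop by roughly a quarter

print(f'coefficient for response length: {beta_length:.2f}')
print(f'odds change for rewrite questions: {pct_change_rewrite:.0f}%')
```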
## Citation

```
@misc{chiang2024chatbotarenaopenplatform,
    title={Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference},
    author={Wei-Lin Chiang and Lianmin Zheng and Ying Sheng and Anastasios Nikolas Angelopoulos and Tianle Li and Dacheng Li and Hao Zhang and Banghua Zhu and Michael Jordan and Joseph E. Gonzalez and Ion Stoica},
    year={2024},
    eprint={2403.04132},
    archivePrefix={arXiv},
    primaryClass={cs.AI},
    url={https://arxiv.org/abs/2403.04132},
}

@misc{zheng2023judging,
    title={Judging LLM-as-a-judge with MT-Bench and Chatbot Arena},
    author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zi Lin and Zhuohan Li and Dacheng Li and Eric. P Xing and Hao Zhang and Joseph E. Gonzalez and Ion Stoica},
    year={2023},
    eprint={2306.05685},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
@ -0,0 +1,85 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
from opencompass.datasets import ( # compassarena_subjectiveeval_pairwise_postprocess,
|
||||
CompassArenaSubjectiveBench,
|
||||
compassarena_subjectiveeval_bradleyterry_postprocess,
|
||||
)
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.openicl.icl_inferencer import ChatInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
|
||||
subjective_reader_cfg = dict(
|
||||
input_columns=['dialogue', 'pairwise_judge_prompt'],
|
||||
output_column='judge',
|
||||
)
|
||||
|
||||
subjective_all_sets = [
|
||||
'multiturn',
|
||||
]
|
||||
|
||||
qwen_2_5_72b = [
|
||||
dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)
|
||||
]
|
||||
|
||||
compassarena_subjectivebench_bradleyterry_multiturn_datasets = []
|
||||
|
||||
|
||||
for _name in subjective_all_sets:
|
||||
subjective_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{dialogue}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(
|
||||
type=ChatInferencer, max_seq_len=8192, max_out_len=2048, infer_mode='every'
|
||||
),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
pack_all_predictions=True,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{pairwise_judge_prompt}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
dict_postprocessor=dict(
|
||||
type=compassarena_subjectiveeval_bradleyterry_postprocess
|
||||
),
|
||||
keep_predictions=True, # Must be turned on to save predictions from model pairs to calculate style features in postprocessor
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
compassarena_subjectivebench_bradleyterry_multiturn_datasets.append(
|
||||
dict(
|
||||
abbr=f'{_name}',
|
||||
type=CompassArenaSubjectiveBench,
|
||||
path='./data/subjective/CompassArenaSubjectiveBench',
|
||||
name=_name,
|
||||
reader_cfg=subjective_reader_cfg,
|
||||
infer_cfg=subjective_infer_cfg,
|
||||
eval_cfg=subjective_eval_cfg,
|
||||
mode='m2n',
|
||||
infer_order='random',
|
||||
base_models=qwen_2_5_72b,
|
||||
given_pred=[
|
||||
{
|
||||
'abbr': 'Qwen-2.5-72B-Instruct',
|
||||
'path': './data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct',
|
||||
}
|
||||
],
|
||||
)
|
||||
)
|
@ -1,40 +1,47 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
from opencompass.datasets import (
|
||||
CompassArenaSubjectiveBench,
|
||||
compassarena_subjectiveeval_pairwise_postprocess,
|
||||
)
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.openicl.icl_inferencer import ChatInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import ChatInferencer
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import CompassArenaSubjectiveBench, compassarena_subjectiveeval_pairwise_postprocess
|
||||
from mmengine.config import read_base
|
||||
|
||||
subjective_reader_cfg = dict(
|
||||
input_columns=['dialogue', 'pairwise_judge_prompt'],
|
||||
output_column='judge',
|
||||
)
|
||||
)
|
||||
|
||||
subjective_all_sets = [
|
||||
'multiturn',
|
||||
]
|
||||
|
||||
qwen_2_5_72b = [dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)]
|
||||
qwen_2_5_72b = [
|
||||
dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)
|
||||
]
|
||||
|
||||
compassarena_subjectivebench_multiturn_datasets = []
|
||||
|
||||
|
||||
for _name in subjective_all_sets:
|
||||
subjective_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='{dialogue}'
|
||||
),
|
||||
]),
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{dialogue}'),
|
||||
]
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=8192, max_out_len=2048, infer_mode='every'),
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(
|
||||
type=ChatInferencer, max_seq_len=8192, max_out_len=2048, infer_mode='every'
|
||||
),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
@ -44,13 +51,13 @@ for _name in subjective_all_sets:
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = '{pairwise_judge_prompt}'
|
||||
),
|
||||
]),
|
||||
dict(role='HUMAN', prompt='{pairwise_judge_prompt}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
dict_postprocessor=dict(
|
||||
type=compassarena_subjectiveeval_pairwise_postprocess
|
||||
),
|
||||
dict_postprocessor=dict(type=compassarena_subjectiveeval_pairwise_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
@ -67,5 +74,11 @@ for _name in subjective_all_sets:
|
||||
mode='m2n',
|
||||
infer_order='double',
|
||||
base_models=qwen_2_5_72b,
|
||||
given_pred = [{'abbr':'Qwen-2.5-72B-Instruct', 'path':'./data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct'}],
|
||||
))
|
||||
given_pred=[
|
||||
{
|
||||
'abbr': 'Qwen-2.5-72B-Instruct',
|
||||
'path': './data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct',
|
||||
}
|
||||
],
|
||||
)
|
||||
)
|
||||
|
@ -0,0 +1,83 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
from opencompass.datasets import (
|
||||
CompassArenaSubjectiveBench,
|
||||
compassarena_subjectiveeval_bradleyterry_postprocess,
|
||||
compassarena_subjectiveeval_pairwise_postprocess,
|
||||
)
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
|
||||
subjective_reader_cfg = dict(
|
||||
input_columns=['question', 'pairwise_judge_prompt'],
|
||||
output_column='judge',
|
||||
)
|
||||
|
||||
subjective_all_sets = [
|
||||
'singleturn',
|
||||
]
|
||||
|
||||
qwen_2_5_72b = [
|
||||
dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)
|
||||
]
|
||||
|
||||
compassarena_subjectivebench_bradleyterry_singleturn_datasets = []
|
||||
|
||||
|
||||
for _name in subjective_all_sets:
|
||||
subjective_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{question}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=4096),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{pairwise_judge_prompt}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
dict_postprocessor=dict(
|
||||
type=compassarena_subjectiveeval_bradleyterry_postprocess
|
||||
),
|
||||
keep_predictions=True, # Must be turned on to save predictions from model pairs to calculate style features in postprocessor
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
compassarena_subjectivebench_bradleyterry_singleturn_datasets.append(
|
||||
dict(
|
||||
abbr=f'{_name}',
|
||||
type=CompassArenaSubjectiveBench,
|
||||
path='./data/subjective/CompassArenaSubjectiveBench',
|
||||
name=_name,
|
||||
reader_cfg=subjective_reader_cfg,
|
||||
infer_cfg=subjective_infer_cfg,
|
||||
eval_cfg=subjective_eval_cfg,
|
||||
mode='m2n',
|
||||
infer_order='random',
|
||||
base_models=qwen_2_5_72b,
|
||||
given_pred=[
|
||||
{
|
||||
'abbr': 'Qwen-2.5-72B-Instruct',
|
||||
'path': './data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct',
|
||||
}
|
||||
],
|
||||
)
|
||||
)
|
@ -1,40 +1,45 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
from opencompass.datasets import (
|
||||
CompassArenaSubjectiveBench,
|
||||
compassarena_subjectiveeval_pairwise_postprocess,
|
||||
)
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import CompassArenaSubjectiveBench, compassarena_subjectiveeval_pairwise_postprocess
|
||||
from mmengine.config import read_base
|
||||
|
||||
subjective_reader_cfg = dict(
|
||||
input_columns=['question', 'pairwise_judge_prompt'],
|
||||
output_column='judge',
|
||||
)
|
||||
)
|
||||
|
||||
subjective_all_sets = [
|
||||
'singleturn',
|
||||
]
|
||||
|
||||
qwen_2_5_72b = [dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)]
|
||||
qwen_2_5_72b = [
|
||||
dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)
|
||||
]
|
||||
|
||||
compassarena_subjectivebench_singleturn_datasets = []
|
||||
|
||||
|
||||
for _name in subjective_all_sets:
|
||||
subjective_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='{question}'
|
||||
),
|
||||
]),
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{question}'),
|
||||
]
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=4096),
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=4096),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
@ -43,13 +48,13 @@ for _name in subjective_all_sets:
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = '{pairwise_judge_prompt}'
|
||||
),
|
||||
]),
|
||||
dict(role='HUMAN', prompt='{pairwise_judge_prompt}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
dict_postprocessor=dict(
|
||||
type=compassarena_subjectiveeval_pairwise_postprocess
|
||||
),
|
||||
dict_postprocessor=dict(type=compassarena_subjectiveeval_pairwise_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
@ -66,5 +71,11 @@ for _name in subjective_all_sets:
|
||||
mode='m2n',
|
||||
infer_order='double',
|
||||
base_models=qwen_2_5_72b,
|
||||
given_pred = [{'abbr':'Qwen-2.5-72B-Instruct', 'path':'./data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct'}],
|
||||
))
|
||||
given_pred=[
|
||||
{
|
||||
'abbr': 'Qwen-2.5-72B-Instruct',
|
||||
'path': './data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct',
|
||||
}
|
||||
],
|
||||
)
|
||||
)
|
||||
|
@ -149,6 +149,6 @@ for _name, _prompt in sub_map.items():
|
||||
mode='m2n',
|
||||
infer_order='double',
|
||||
base_models=gpt4,
|
||||
summarizer = dict(type=CompassArenaSummarizer, summary_type='half_add'),
|
||||
summarizer = dict(type=CompassArenaSummarizer, summary_type='single'),
|
||||
given_pred = [{'abbr':'gpt4-turbo', 'path':'./data/subjective/compass_arena/gpt4-turbo'}]
|
||||
))
|
||||
|
@ -105,7 +105,7 @@ for _name, _prompt in sub_map.items():
|
||||
]),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_seq_len=4096, max_out_len=2048),
|
||||
inferencer=dict(type=GenInferencer, max_seq_len=4096, max_out_len=4096),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
@ -120,7 +120,7 @@ for _name, _prompt in sub_map.items():
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=compassarena_postprocess, summary_type='half_add', check_pos_bias=True),
|
||||
dict_postprocessor=dict(type=compassarena_postprocess, summary_type='single', check_pos_bias=True),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
@ -20,7 +20,7 @@ subjective_infer_cfg = dict(
|
||||
template="""{dialogue}"""
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=32768, max_out_len=4096, infer_mode='last'),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=32768, infer_mode='last'),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
|
@ -20,7 +20,7 @@ subjective_infer_cfg = dict(
|
||||
template="""{dialogue}"""
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=4096, max_out_len=512, infer_mode='last'),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=32768, infer_mode='last'),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
|
configs/eval_academic_leaderboard_202412.py (new file, 152 lines)
@ -0,0 +1,152 @@
|
||||
from mmengine.config import read_base
|
||||
import os.path as osp
|
||||
from opencompass.partitioners import NaivePartitioner, NumWorkerPartitioner
|
||||
from opencompass.runners import LocalRunner, VOLCRunner
|
||||
from opencompass.tasks import OpenICLInferTask, OpenICLEvalTask
|
||||
|
||||
|
||||
#######################################################################
|
||||
# PART 0 Essential Configs #
|
||||
#######################################################################
|
||||
with read_base():
|
||||
# Datasets Part
|
||||
## Core Set
|
||||
# Knowledge
|
||||
from opencompass.configs.datasets.mmlu_pro.mmlu_pro_0shot_cot_gen_08c1de import (
|
||||
mmlu_pro_datasets,
|
||||
)
|
||||
|
||||
# General Reasoning
|
||||
from opencompass.configs.datasets.gpqa.gpqa_openai_simple_evals_gen_5aeece import (
|
||||
gpqa_datasets,
|
||||
)
|
||||
from opencompass.configs.datasets.bbh.bbh_0shot_nocot_gen_925fc4 import (
|
||||
bbh_datasets,
|
||||
)
|
||||
from opencompass.configs.datasets.humaneval.humaneval_openai_sample_evals_gen_159614 import (
|
||||
humaneval_datasets,
|
||||
)
|
||||
|
||||
# Instruction Following
|
||||
from opencompass.configs.datasets.IFEval.IFEval_gen_3321a3 import (
|
||||
ifeval_datasets,
|
||||
)
|
||||
from opencompass.configs.datasets.livecodebench.livecodebench_gen_6966bc import (
|
||||
LCBCodeGeneration_dataset,
|
||||
)
|
||||
|
||||
# Math
|
||||
from opencompass.configs.datasets.cmo_fib.cmo_fib_gen_ace24b import (
|
||||
cmo_fib_datasets,
|
||||
)
|
||||
from opencompass.configs.datasets.aime2024.aime2024_gen_6e39a4 import (
|
||||
aime2024_datasets,
|
||||
)
|
||||
from opencompass.configs.datasets.math.math_prm800k_500_0shot_cot_gen import (
|
||||
math_datasets,
|
||||
)
|
||||
|
||||
# Summary Groups
|
||||
from opencompass.configs.summarizers.groups.bbh import bbh_summary_groups
|
||||
from opencompass.configs.summarizers.groups.mmlu_pro import (
|
||||
mmlu_pro_summary_groups,
|
||||
)
|
||||
|
||||
# Model List
|
||||
from opencompass.configs.models.hf_internlm.lmdeploy_internlm2_5_7b_chat import (
|
||||
models as hf_internlm2_5_7b_chat_model,
|
||||
)
|
||||
|
||||
#######################################################################
|
||||
# PART 1 Datasets List #
|
||||
#######################################################################
|
||||
# datasets list for evaluation
|
||||
# Only take LCB generation for evaluation
|
||||
datasets = sum(
|
||||
(v for k, v in locals().items() if k.endswith('_datasets')), []
|
||||
) + [LCBCodeGeneration_dataset]
|
||||
|
||||
#######################################################################
|
||||
# PART 2 Dataset Summarizer #
|
||||
#######################################################################
|
||||
|
||||
core_summary_groups = [
|
||||
{
|
||||
'name': 'core_average',
|
||||
'subsets': [
|
||||
['IFEval', 'Prompt-level-strict-accuracy'],
|
||||
['bbh', 'naive_average'],
|
||||
['math_prm800k_500', 'accuracy'],
|
||||
['cmo_fib', 'accuracy'],
|
||||
['aime2024', 'accuracy'],
|
||||
['GPQA_diamond', 'accuracy'],
|
||||
['mmlu_pro', 'naive_average'],
|
||||
['openai_humaneval', 'humaneval_pass@1'],
|
||||
['lcb_code_generation', 'pass@1'],
|
||||
],
|
||||
},
|
||||
]
|
||||
|
||||
|
||||
summarizer = dict(
|
||||
dataset_abbrs=[
|
||||
['core_average', 'naive_average'],
|
||||
'',
|
||||
'Instruction Following',
|
||||
['IFEval', 'Prompt-level-strict-accuracy'],
|
||||
'',
|
||||
'General Reasoning',
|
||||
['bbh', 'naive_average'],
|
||||
['GPQA_diamond', 'accuracy'],
|
||||
'',
|
||||
'Math Calculation',
|
||||
['math_prm800k_500', 'accuracy'],
|
||||
['cmo_fib', 'accuracy'],
|
||||
['aime2024', 'accuracy'],
|
||||
'',
|
||||
'Knowledge',
|
||||
['mmlu_pro', 'naive_average'],
|
||||
'',
|
||||
'Code',
|
||||
['openai_humaneval', 'humaneval_pass@1'],
|
||||
['lcb_code_generation', 'pass@1'],
|
||||
],
|
||||
summary_groups=sum(
|
||||
[v for k, v in locals().items() if k.endswith('_summary_groups')], []
|
||||
),
|
||||
)
|
||||
|
||||
#######################################################################
|
||||
# PART 3 Models List #
|
||||
#######################################################################
|
||||
|
||||
models = sum([v for k, v in locals().items() if k.endswith('_model')], [])
|
||||
|
||||
#######################################################################
|
||||
# PART 4 Inference/Evaluation Configuration #
|
||||
#######################################################################
|
||||
|
||||
# Local Runner
|
||||
infer = dict(
|
||||
partitioner=dict(type=NumWorkerPartitioner, num_worker=8),
|
||||
runner=dict(
|
||||
type=LocalRunner,
|
||||
max_num_workers=16,
|
||||
retry=0, # Modify if needed
|
||||
task=dict(type=OpenICLInferTask),
|
||||
),
|
||||
)
|
||||
|
||||
# eval with local runner
|
||||
eval = dict(
|
||||
partitioner=dict(type=NaivePartitioner, n=10),
|
||||
runner=dict(
|
||||
type=LocalRunner, max_num_workers=16, task=dict(type=OpenICLEvalTask)
|
||||
),
|
||||
)
|
||||
|
||||
|
||||
#######################################################################
|
||||
# PART 5 Utils Configuration #
|
||||
#######################################################################
|
||||
work_dir = './outputs/oc_academic_202412'
|
configs/eval_chinese_simpleqa.py (new file, 73 lines)
@ -0,0 +1,73 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
with read_base():
|
||||
from opencompass.configs.datasets.chinese_simpleqa.chinese_simpleqa_gen import csimpleqa_datasets
|
||||
|
||||
from opencompass.models.openai_api import OpenAI
|
||||
from opencompass.runners import LocalRunner
|
||||
from opencompass.tasks.subjective_eval import SubjectiveEvalTask
|
||||
from opencompass.partitioners.sub_naive import SubjectiveNaivePartitioner
|
||||
from opencompass.models import HuggingFacewithChatTemplate
|
||||
from opencompass.summarizers import DefaultSubjectiveSummarizer
|
||||
|
||||
# -------------Inference Stage ----------------------------------------
|
||||
models = [
|
||||
dict(
|
||||
type=HuggingFacewithChatTemplate,
|
||||
abbr='Qwen2.5-1.5B-Instruct',
|
||||
path='Qwen/Qwen2.5-1.5B-Instruct',
|
||||
model_kwargs=dict(
|
||||
device_map='auto',
|
||||
trust_remote_code=True,
|
||||
),
|
||||
tokenizer_kwargs=dict(
|
||||
padding_side='left',
|
||||
truncation_side='left',
|
||||
trust_remote_code=True,
|
||||
),
|
||||
generation_kwargs=dict(
|
||||
do_sample=True,
|
||||
),
|
||||
max_out_len=200,
|
||||
max_seq_len=4096,
|
||||
batch_size=8,
|
||||
run_cfg=dict(num_gpus=1, num_procs=1),
|
||||
)
|
||||
]
|
||||
|
||||
datasets = sum([v for k, v in locals().items() if ('datasets' in k)], [])
|
||||
summarizer = dict(type=DefaultSubjectiveSummarizer)
|
||||
|
||||
# -------------Evaluation Stage ----------------------------------------
|
||||
|
||||
## ------------- JudgeLLM Configuration
|
||||
|
||||
api_meta_template = dict(
|
||||
round=[
|
||||
dict(role='SYSTEM', api_role='SYSTEM'),
|
||||
dict(role='HUMAN', api_role='HUMAN'),
|
||||
dict(role='BOT', api_role='BOT', generate=True),
|
||||
]
|
||||
)
|
||||
judge_models = [
|
||||
dict(
|
||||
# GPT4o
|
||||
abbr='gpt-4o-0513-global',
|
||||
type=OpenAI,
|
||||
# gpt-4o
|
||||
path='gpt-4o-0513-global',
|
||||
key='xxx', # provide OPENAI_API_KEY
|
||||
meta_template=api_meta_template,
|
||||
query_per_second=16,
|
||||
max_out_len=1000,
|
||||
batch_size=8,
|
||||
retry=3)
|
||||
]
|
||||
|
||||
## ------------- Evaluation Configuration
|
||||
eval = dict(
|
||||
partitioner=dict(type=SubjectiveNaivePartitioner, models=models, judge_models=judge_models),
|
||||
runner=dict(type=LocalRunner, max_num_workers=16, task=dict(type=SubjectiveEvalTask)),
|
||||
)
|
||||
|
||||
work_dir = 'outputs/chinese_simpleqa/'
|
configs/eval_compassarena_subjectivebench_bradleyterry.py (new file, 132 lines)
@ -0,0 +1,132 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
with read_base():
|
||||
from opencompass.configs.datasets.subjective.compass_arena_subjective_bench.singleturn.pairwise_bt_judge import (
|
||||
compassarena_subjectivebench_bradleyterry_singleturn_datasets,
|
||||
)
|
||||
from opencompass.configs.datasets.subjective.compass_arena_subjective_bench.multiturn.pairwise_bt_judge import (
|
||||
compassarena_subjectivebench_bradleyterry_multiturn_datasets,
|
||||
)
|
||||
|
||||
from opencompass.configs.models.hf_internlm.lmdeploy_internlm2_5_7b_chat import (
|
||||
models as lmdeploy_internlm2_5_7b_chat,
|
||||
)
|
||||
from opencompass.configs.models.hf_internlm.lmdeploy_internlm2_5_20b_chat import (
|
||||
models as lmdeploy_internlm2_5_20b_chat,
|
||||
)
|
||||
from opencompass.configs.models.hf_llama.lmdeploy_llama3_1_8b_instruct import (
|
||||
models as lmdeploy_llama3_1_8b_instruct,
|
||||
)
|
||||
from opencompass.configs.models.hf_llama.lmdeploy_llama3_1_70b_instruct import (
|
||||
models as lmdeploy_llama3_1_70b_instruct,
|
||||
)
|
||||
from opencompass.configs.models.qwen2_5.lmdeploy_qwen2_5_0_5b_instruct import (
|
||||
models as lmdeploy_qwen2_5_0_5b_instruct,
|
||||
)
|
||||
from opencompass.configs.models.qwen2_5.lmdeploy_qwen2_5_1_5b_instruct import (
|
||||
models as lmdeploy_qwen2_5_1_5b_instruct,
|
||||
)
|
||||
from opencompass.configs.models.qwen2_5.lmdeploy_qwen2_5_3b_instruct import (
|
||||
models as lmdeploy_qwen2_5_3b_instruct,
|
||||
)
|
||||
from opencompass.configs.models.qwen2_5.lmdeploy_qwen2_5_7b_instruct import (
|
||||
models as lmdeploy_qwen2_5_7b_instruct,
|
||||
)
|
||||
from opencompass.configs.models.qwen2_5.lmdeploy_qwen2_5_14b_instruct import (
|
||||
models as lmdeploy_qwen2_5_14b_instruct,
|
||||
)
|
||||
from opencompass.configs.models.qwen2_5.lmdeploy_qwen2_5_32b_instruct import (
|
||||
models as lmdeploy_qwen2_5_32b_instruct,
|
||||
)
|
||||
from opencompass.configs.models.qwen2_5.lmdeploy_qwen2_5_72b_instruct import (
|
||||
models as lmdeploy_qwen2_5_72b_instruct,
|
||||
)
|
||||
from opencompass.configs.models.qwen.lmdeploy_qwen2_7b_instruct import (
|
||||
models as lmdeploy_qwen2_7b_instruct,
|
||||
)
|
||||
|
||||
from opencompass.models import (
|
||||
HuggingFace,
|
||||
HuggingFaceCausalLM,
|
||||
HuggingFaceChatGLM3,
|
||||
OpenAI,
|
||||
TurboMindModelwithChatTemplate,
|
||||
)
|
||||
from opencompass.partitioners import NaivePartitioner, SizePartitioner
|
||||
from opencompass.partitioners.sub_naive import SubjectiveNaivePartitioner
|
||||
from opencompass.partitioners.sub_num_worker import SubjectiveNumWorkerPartitioner
|
||||
from opencompass.partitioners.sub_size import SubjectiveSizePartitioner
|
||||
from opencompass.runners import LocalRunner, SlurmSequentialRunner
|
||||
from opencompass.summarizers import CompassArenaBradleyTerrySummarizer
|
||||
from opencompass.tasks import OpenICLInferTask
|
||||
from opencompass.tasks.subjective_eval import SubjectiveEvalTask
|
||||
|
||||
api_meta_template = dict(
|
||||
round=[
|
||||
dict(role='HUMAN', api_role='HUMAN'),
|
||||
dict(role='BOT', api_role='BOT', generate=True),
|
||||
]
|
||||
)
|
||||
|
||||
# -------------Inference Stage ----------------------------------------
|
||||
models = [
|
||||
*lmdeploy_qwen2_5_14b_instruct,
|
||||
*lmdeploy_qwen2_5_32b_instruct,
|
||||
*lmdeploy_qwen2_5_7b_instruct,
|
||||
*lmdeploy_qwen2_7b_instruct,
|
||||
]
|
||||
|
||||
datasets = [
|
||||
*compassarena_subjectivebench_bradleyterry_singleturn_datasets,
|
||||
*compassarena_subjectivebench_bradleyterry_multiturn_datasets,
|
||||
]
|
||||
|
||||
infer = dict(
|
||||
partitioner=dict(type=NaivePartitioner),
|
||||
runner=dict(type=LocalRunner, max_num_workers=16, task=dict(type=OpenICLInferTask)),
|
||||
)
|
||||
# -------------Evaluation Stage ----------------------------------------
|
||||
|
||||
## ------------- JudgeLLM Configuration
|
||||
judge_models = [
|
||||
dict(
|
||||
type=TurboMindModelwithChatTemplate,
|
||||
abbr='CompassJudger-1-32B-Instruct',
|
||||
path='opencompass/CompassJudger-1-32B-Instruct',
|
||||
engine_config=dict(session_len=16384, max_batch_size=16, tp=4),
|
||||
gen_config=dict(top_k=1, temperature=1e-6, top_p=0.9, max_new_tokens=2048),
|
||||
max_seq_len=16384,
|
||||
max_out_len=2048,
|
||||
batch_size=16,
|
||||
run_cfg=dict(num_gpus=4),
|
||||
)
|
||||
]
|
||||
|
||||
## ------------- Evaluation Configuration
|
||||
eval = dict(
|
||||
partitioner=dict(
|
||||
type=SubjectiveNaivePartitioner,
|
||||
models=models,
|
||||
judge_models=judge_models,
|
||||
),
|
||||
runner=dict(
|
||||
type=LocalRunner, max_num_workers=16, task=dict(type=SubjectiveEvalTask)
|
||||
),
|
||||
)
|
||||
|
||||
## ------------- Summary Configuration
|
||||
# This step fits a Bradley-Terry model (statistical model) with an option
|
||||
# to include style features and control variables based on groups
|
||||
# (group variables must be available in the input dataset for each observation).
|
||||
summarizer = dict(
|
||||
type=CompassArenaBradleyTerrySummarizer,
|
||||
rating_system='bradleyterry',
|
||||
num_bootstrap=100,
|
||||
num_cpu=None,
|
||||
with_control_vars=True,
|
||||
normalize_style_features=False,
|
||||
odds_ratio=True,
|
||||
groups=['difficulty', 'category'],
|
||||
)
|
||||
|
||||
work_dir = 'outputs/compassarena_subjectivebench_bradleyterry/'
|
@ -1,4 +1,4 @@
|
||||
__version__ = '0.3.7'
|
||||
__version__ = '0.3.8'
|
||||
|
||||
|
||||
def _warn_about_config_migration():
|
||||
|
opencompass/configs/datasets/IFEval/IFEval_gen_353ae7.py (new file, 33 lines)
@ -0,0 +1,33 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import IFEvalDataset, IFEvaluator
|
||||
|
||||
ifeval_reader_cfg = dict(
|
||||
input_columns=['prompt'], output_column='reference')
|
||||
|
||||
ifeval_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='{prompt}'),
|
||||
])),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer))
|
||||
|
||||
ifeval_eval_cfg = dict(
|
||||
evaluator=dict(type=IFEvaluator),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
ifeval_datasets = [
|
||||
dict(
|
||||
abbr='IFEval',
|
||||
type=IFEvalDataset,
|
||||
path='data/ifeval/input_data.jsonl',
|
||||
reader_cfg=ifeval_reader_cfg,
|
||||
infer_cfg=ifeval_infer_cfg,
|
||||
eval_cfg=ifeval_eval_cfg)
|
||||
]
|
@@ -1,4 +1,4 @@
 from mmengine.config import read_base

 with read_base():
-    from .lcbench_repeat10_gen_5ff288 import LCBench_datasets_repeat10 # noqa: F401, F403
+    from .lcbench_repeat10_gen_5ff288 import LCBench_repeat10_datasets # noqa: F401, F403
@@ -86,7 +86,7 @@ LC_cn_infer_cfg = dict(

 LC_eval_cfg = dict(evaluator=dict(type=LCPassKEvaluator), pred_role='BOT')

-LCBench_datasets_repeat10 = [
+LCBench_repeat10_datasets = [
     dict(
         type=LCDataset,
         abbr='lcbench_en_repeat10',
@ -0,0 +1,87 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import Aime2024Dataset, MATHEvaluator, math_postprocess_v2
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import generic_llmjudge_postprocess
|
||||
|
||||
aime2024_reader_cfg = dict(
|
||||
input_columns=['question'],
|
||||
output_column='answer'
|
||||
)
|
||||
|
||||
|
||||
aime2024_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{question}\nRemember to put your final answer within \\boxed{}.'),
|
||||
],
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=2048)
|
||||
)
|
||||
|
||||
|
||||
GRADER_TEMPLATE = """
|
||||
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.
|
||||
|
||||
Here are some evaluation criteria:
|
||||
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
|
||||
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
|
||||
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
|
||||
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
|
||||
5. If the prediction is given with \\boxed{}, please ignore the \\boxed{} and only judge whether the candidate's answer is consistent with the standard answer.
|
||||
|
||||
Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
|
||||
A: CORRECT
|
||||
B: INCORRECT
|
||||
Just return the letters "A" or "B", with no text around it.
|
||||
|
||||
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
|
||||
|
||||
|
||||
<Original Question Begin>: \n{question}\n<Original Question End>\n\n
|
||||
<Gold Target Begin>: \n{answer}\n<Gold Target End>\n\n
|
||||
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n
|
||||
|
||||
Judging the correctness of candidates' answers:
|
||||
""".strip()
|
||||
|
||||
aime2024_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
fallback_role='HUMAN',
|
||||
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = GRADER_TEMPLATE
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=generic_llmjudge_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
aime2024_datasets = [
|
||||
dict(
|
||||
abbr='aime2024',
|
||||
type=Aime2024Dataset,
|
||||
path='opencompass/aime2024',
|
||||
reader_cfg=aime2024_reader_cfg,
|
||||
infer_cfg=aime2024_infer_cfg,
|
||||
eval_cfg=aime2024_eval_cfg,
|
||||
mode='singlescore',
|
||||
)
|
||||
]
|
@ -0,0 +1,96 @@
|
||||
import os
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_evaluator import AccEvaluator
|
||||
from opencompass.datasets import BBHDataset, BBHEvaluator, bbh_mcq_postprocess, BBHEvaluator_mcq
|
||||
|
||||
bbh_reader_cfg = dict(input_columns=['input'], output_column='target')
|
||||
|
||||
bbh_multiple_choice_sets = [
|
||||
'temporal_sequences',
|
||||
'disambiguation_qa',
|
||||
'date_understanding',
|
||||
'tracking_shuffled_objects_three_objects',
|
||||
'penguins_in_a_table',
|
||||
'geometric_shapes',
|
||||
'snarks',
|
||||
'ruin_names',
|
||||
'tracking_shuffled_objects_seven_objects',
|
||||
'tracking_shuffled_objects_five_objects',
|
||||
'logical_deduction_three_objects',
|
||||
'hyperbaton',
|
||||
'logical_deduction_five_objects',
|
||||
'logical_deduction_seven_objects',
|
||||
'movie_recommendation',
|
||||
'salient_translation_error_detection',
|
||||
'reasoning_about_colored_objects',
|
||||
]
|
||||
bbh_free_form_sets = [
|
||||
'multistep_arithmetic_two',
|
||||
'navigate',
|
||||
'dyck_languages',
|
||||
'word_sorting',
|
||||
'sports_understanding',
|
||||
'boolean_expressions',
|
||||
'object_counting',
|
||||
'formal_fallacies',
|
||||
'causal_judgement',
|
||||
'web_of_lies',
|
||||
]
|
||||
|
||||
bbh_datasets = []
|
||||
for _name in bbh_multiple_choice_sets:
|
||||
bbh_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt=
|
||||
f"Question: {{input}}\n You must give your final answer by starting with 'So the answer is' "
|
||||
)
|
||||
])),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=512))
|
||||
bbh_eval_cfg = dict(
|
||||
evaluator=dict(type=BBHEvaluator_mcq),
|
||||
pred_role='BOT',
|
||||
pred_postprocessor=dict(type=bbh_mcq_postprocess),
|
||||
dataset_postprocessor=dict(type=bbh_mcq_postprocess))
|
||||
|
||||
bbh_datasets.append(
|
||||
dict(
|
||||
type=BBHDataset,
|
||||
path='opencompass/bbh',
|
||||
name=_name,
|
||||
abbr='bbh-' + _name,
|
||||
reader_cfg=bbh_reader_cfg,
|
||||
infer_cfg=bbh_infer_cfg.copy(),
|
||||
eval_cfg=bbh_eval_cfg.copy()))
|
||||
|
||||
for _name in bbh_free_form_sets:
|
||||
|
||||
bbh_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt=
|
||||
f"Question: {{input}}\n You must give your final answer by starting with 'So the answer is' "
|
||||
)
|
||||
])),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=512))
|
||||
bbh_eval_cfg = dict(evaluator=dict(type=BBHEvaluator), pred_role='BOT')
|
||||
|
||||
bbh_datasets.append(
|
||||
dict(
|
||||
type=BBHDataset,
|
||||
path='opencompass/bbh',
|
||||
name=_name,
|
||||
abbr='bbh-' + _name,
|
||||
reader_cfg=bbh_reader_cfg,
|
||||
infer_cfg=bbh_infer_cfg.copy(),
|
||||
eval_cfg=bbh_eval_cfg.copy()))
|
@ -0,0 +1,4 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
with read_base():
|
||||
from .bigcodebench_full_complete_gen_faf748 import bigcodebench_full_complete_datasets # noqa: F401, F403
|
@ -0,0 +1,53 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import (
|
||||
BigCodeBenchDataset,
|
||||
BigCodeBenchEvaluator
|
||||
)
|
||||
|
||||
|
||||
bigcodebench_full_reader_cfg = dict(
|
||||
input_columns=['complete_prompt'],
|
||||
output_column='test',
|
||||
)
|
||||
|
||||
|
||||
bigcodebench_full_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[dict(role='system',
|
||||
fallback_role='HUMAN',
|
||||
prompt='')],
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{complete_prompt}'),
|
||||
]
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=1024)
|
||||
)
|
||||
|
||||
bigcodebench_full_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=BigCodeBenchEvaluator,
|
||||
release_version='v0.1.2',
|
||||
eval_type='complete',
|
||||
remote_execute_api="https://bigcode-bigcodebench-evaluator.hf.space/",
|
||||
dataset_version='full',
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
bigcodebench_full_complete_datasets = [
|
||||
dict(
|
||||
abbr='bigcodebench_full_complete',
|
||||
type=BigCodeBenchDataset,
|
||||
path="opencompass/bigcodebench",
|
||||
reader_cfg=bigcodebench_full_reader_cfg,
|
||||
infer_cfg=bigcodebench_full_infer_cfg,
|
||||
eval_cfg=bigcodebench_full_eval_cfg,
|
||||
release_version='v0.1.2'
|
||||
)
|
||||
]
|
@ -0,0 +1,4 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
with read_base():
|
||||
from .bigcodebench_full_instruct_gen_8815eb import bigcodebench_full_instruct_datasets # noqa: F401, F403
|
@ -0,0 +1,53 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import (
|
||||
BigCodeBenchDataset,
|
||||
BigCodeBenchEvaluator
|
||||
)
|
||||
|
||||
|
||||
bigcodebench_full_reader_cfg = dict(
|
||||
input_columns=['instruct_prompt'],
|
||||
output_column='test',
|
||||
)
|
||||
|
||||
|
||||
bigcodebench_full_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[dict(role='system',
|
||||
fallback_role='HUMAN',
|
||||
prompt='')],
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{instruct_prompt}'),
|
||||
]
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=8192)
|
||||
)
|
||||
|
||||
bigcodebench_full_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=BigCodeBenchEvaluator,
|
||||
release_version='v0.1.2',
|
||||
eval_type='instruct',
|
||||
remote_execute_api="https://bigcode-bigcodebench-evaluator.hf.space/",
|
||||
dataset_version='full',
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
bigcodebench_full_instruct_datasets = [
|
||||
dict(
|
||||
abbr='bigcodebench_full_instruct',
|
||||
type=BigCodeBenchDataset,
|
||||
path="opencompass/bigcodebench",
|
||||
reader_cfg=bigcodebench_full_reader_cfg,
|
||||
infer_cfg=bigcodebench_full_infer_cfg,
|
||||
eval_cfg=bigcodebench_full_eval_cfg,
|
||||
release_version='v0.1.2'
|
||||
)
|
||||
]
|
@ -0,0 +1,4 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
with read_base():
|
||||
from .bigcodebench_hard_complete_gen_faf748 import bigcodebench_hard_complete_datasets # noqa: F401, F403
|
@ -0,0 +1,54 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import (
|
||||
BigCodeBenchDataset,
|
||||
BigCodeBenchEvaluator
|
||||
)
|
||||
|
||||
|
||||
bigcodebench_hard_reader_cfg = dict(
|
||||
input_columns=['complete_prompt'],
|
||||
output_column='test',
|
||||
)
|
||||
|
||||
|
||||
bigcodebench_hard_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[dict(role='system',
|
||||
fallback_role='HUMAN',
|
||||
prompt='')],
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{complete_prompt}'),
|
||||
]
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=1024)
|
||||
)
|
||||
|
||||
bigcodebench_hard_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=BigCodeBenchEvaluator,
|
||||
release_version='v0.1.2',
|
||||
eval_type='complete',
|
||||
remote_execute_api="https://bigcode-bigcodebench-evaluator.hf.space/",
|
||||
dataset_version='hard',
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
bigcodebench_hard_complete_datasets = [
|
||||
dict(
|
||||
abbr='bigcodebench_hard_complete',
|
||||
type=BigCodeBenchDataset,
|
||||
path="opencompass/bigcodebench",
|
||||
reader_cfg=bigcodebench_hard_reader_cfg,
|
||||
infer_cfg=bigcodebench_hard_infer_cfg,
|
||||
eval_cfg=bigcodebench_hard_eval_cfg,
|
||||
release_version='v0.1.2',
|
||||
dataset_version='hard',
|
||||
)
|
||||
]
|
@ -0,0 +1,4 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
with read_base():
|
||||
from .bigcodebench_hard_instruct_gen_8815eb import bigcodebench_hard_instruct_datasets # noqa: F401, F403
|
@ -0,0 +1,54 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import (
|
||||
BigCodeBenchDataset,
|
||||
BigCodeBenchEvaluator
|
||||
)
|
||||
|
||||
|
||||
bigcodebench_hard_reader_cfg = dict(
|
||||
input_columns=['instruct_prompt'],
|
||||
output_column='test',
|
||||
)
|
||||
|
||||
|
||||
bigcodebench_hard_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[dict(role='system',
|
||||
fallback_role='HUMAN',
|
||||
prompt='')],
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{instruct_prompt}'),
|
||||
]
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=8192)
|
||||
)
|
||||
|
||||
bigcodebench_hard_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=BigCodeBenchEvaluator,
|
||||
release_version='v0.1.2',
|
||||
eval_type='instruct',
|
||||
remote_execute_api="https://bigcode-bigcodebench-evaluator.hf.space/",
|
||||
dataset_version='hard',
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
bigcodebench_hard_instruct_datasets = [
|
||||
dict(
|
||||
abbr='bigcodebench_hard_instruct',
|
||||
type=BigCodeBenchDataset,
|
||||
path="opencompass/bigcodebench",
|
||||
reader_cfg=bigcodebench_hard_reader_cfg,
|
||||
infer_cfg=bigcodebench_hard_infer_cfg,
|
||||
eval_cfg=bigcodebench_hard_eval_cfg,
|
||||
release_version='v0.1.2',
|
||||
dataset_version='hard',
|
||||
)
|
||||
]
|
108
opencompass/configs/datasets/chinese_simpleqa/README.md
Normal file
108
opencompass/configs/datasets/chinese_simpleqa/README.md
Normal file
@ -0,0 +1,108 @@
|
||||
|
||||
|
||||
|
||||
# Overview
|
||||
<p align="center">
|
||||
🌐 <a href="https://openstellarteam.github.io/ChineseSimpleQA/" target="_blank">Website</a> • 🤗 <a href="https://huggingface.co/datasets/OpenStellarTeam/Chinese-SimpleQA" target="_blank">Hugging Face</a> • ⏬ <a href="#data" target="_blank">Data</a> • 📃 <a href="https://huggingface.co/datasets/OpenStellarTeam/Chinese-SimpleQA" target="_blank">Paper</a> • 📊 <a href="http://47.109.32.164/" target="_blank">Leaderboard</a> <br> <a href="https://github.com/OpenStellarTeam/ChineseSimpleQA/blob/master/README_zh.md"> 中文</a> | <a href="https://github.com/OpenStellarTeam/ChineseSimpleQA/blob/master/README.md">English
|
||||
</p>
|
||||
|
||||
**Chinese SimpleQA** is the first comprehensive Chinese benchmark for evaluating the factuality of language models when answering short questions. Chinese SimpleQA has five main properties (i.e., Chinese, Diverse, High-quality, Static, Easy-to-evaluate). Specifically, our benchmark covers **6 major topics** with **99 diverse subtopics**.
|
||||
|
||||
Please visit our [website](https://openstellarteam.github.io/ChineseSimpleQA/) or check our [paper](https://arxiv.org/abs/2411.07140) for more details.
|
||||
|
||||
|
||||
|
||||
## 💫 Introduction
|
||||
|
||||
* How to mitigate the hallucinations of generative models has long been an unsolved problem in the field of artificial intelligence (AI). In order to measure the factual correctness of language models, OpenAI recently released and open-sourced a test set called SimpleQA. We have also been paying attention to the field of factuality, which currently suffers from problems such as outdated data, inaccurate evaluation, and incomplete coverage. For example, the knowledge evaluation sets still widely used today, such as CommonSenseQA, CMMLU, and C-Eval, are multiple-choice based. **In order to further promote research in the Chinese community on the factual correctness of models, we propose Chinese SimpleQA**, which consists of 3000 high-quality questions spanning 6 major topics, ranging from humanities to science and engineering. Specifically, the distinct main features of our proposed Chinese SimpleQA dataset are as follows:
|
||||
* 🀄**Chinese:** Our Chinese SimpleQA focuses on the Chinese language, which provides a comprehensive evaluation of the factuality abilities of existing LLMs in Chinese.
|
||||
* 🍀**Diverse:** Chinese SimpleQA covers 6 topics (i.e., “Chinese Culture”, “Humanities”, “Engineering, Technology, and Applied Sciences”, “Life, Art, and Culture”, “Society”, and “Natural Science”), and these topics include 99 fine-grained subtopics in total, which demonstrates the diversity of our Chinese SimpleQA.
|
||||
* ⚡**High-quality:** We conduct a comprehensive and rigorous quality control process to ensure the quality and accuracy of our Chinese SimpleQA.
|
||||
* 💡**Static:** Following SimpleQA, to preserve the evergreen property of Chinese SimpleQA, all reference answers will not change over time.
|
||||
* 🗂️**Easy-to-evaluate:** Following SimpleQA, as the questions and answers are very short, the grading procedure is fast to run via existing LLMs (e.g., OpenAI API).
|
||||
|
||||
- Based on Chinese SimpleQA, we have conducted a comprehensive evaluation of the factual capabilities of existing LLMs. We also maintain a comprehensive leaderboard list.
|
||||
- In short, we hope that Chinese SimpleQA can help developers gain a deeper understanding of the factual correctness of their models in the Chinese domain, provide an important cornerstone for their algorithm research, and jointly promote the growth of Chinese foundation models.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
## 📊 Leaderboard
|
||||
|
||||
See the [leaderboard](http://47.109.32.164/) for details.
|
||||
|
||||
|
||||
|
||||
## ⚖️ Evals
|
||||
|
||||
We provide three evaluation methods.
|
||||
|
||||
(1) The first method is based on simple-evals evaluation. The startup command is as follows:
|
||||
|
||||
```bash
|
||||
python -m simple-evals.demo
|
||||
```
|
||||
This will launch evaluations through the OpenAI API.
|
||||
|
||||
|
||||
|
||||
(2) The second is a simple standalone evaluation script that we wrote from scratch. The startup command is as follows:
|
||||
|
||||
- Step1: set your OpenAI API key in scripts/chinese_simpleqa_easy.py:
|
||||
|
||||
```
|
||||
os.environ["OPENAI_API_KEY"] = "replace your key here"
|
||||
```
|
||||
|
||||
- Step2: run the eval script:
|
||||
|
||||
```
|
||||
python scripts/chinese_simpleqa_easy.py
|
||||
```
|
||||
|
||||
- Step3: we also provide a unified processing script for multiple model results. After running it, you can get a complete leaderboard:
|
||||
|
||||
```
|
||||
python scripts/get_leaderboard.py
|
||||
```
|
||||
|
||||
|
||||
|
||||
(3) We have also integrated our Chinese SimpleQA benchmark into our forked [OpenCompass](https://github.com/open-compass/opencompass). You can refer to the OpenCompass configuration script for evaluation:
|
||||
- Step1: git clone OpenCompass:
|
||||
```shell
|
||||
cd ~
|
||||
git clone git@github.com:open-compass/opencompass.git
|
||||
cd opencompass
|
||||
```
|
||||
- Step2: download the Chinese SimpleQA data from [Hugging Face](https://huggingface.co/datasets/OpenStellarTeam/Chinese-SimpleQA), put it in the following path (OPENCOMPASS_PATH/data/chinese_simpleqa), and make sure the directory structure looks like this:
|
||||
```
|
||||
~/opencompass/data/
|
||||
└── chinese_simpleqa
|
||||
├── chinese_simpleqa.jsonl
|
||||
```
|
||||
|
||||
|
||||
- Step3: configure your run in configs/eval_chinese_simpleqa.py: set the models to be evaluated and the judge model (we recommend GPT-4o), then launch it! A minimal sketch of such a config is shown after the command below.
|
||||
```
|
||||
python run.py configs/eval_chinese_simpleqa.py
|
||||
```
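A minimal sketch of what `configs/eval_chinese_simpleqa.py` can look like is shown below. This is only an illustrative sketch, not the file shipped with the repository: the import path is an assumption, and the commented placeholders stand for the evaluated models and the judge model you configure in your own installation (the dataset config itself is the `csimpleqa_datasets` definition added in this change).

```python
# Hypothetical sketch of configs/eval_chinese_simpleqa.py; the module path
# below is assumed and should point at the config defining `csimpleqa_datasets`.
from mmengine.config import read_base

with read_base():
    from .datasets.chinese_simpleqa.chinese_simpleqa_gen import csimpleqa_datasets

datasets = [*csimpleqa_datasets]

# models = [...]        # the models you want to evaluate
# judge_models = [...]  # the judge model used by LMEvaluator (GPT-4o is recommended)
```

With such a file in place, the command above runs inference on `datasets` for each entry in `models` and grades the outputs with the judge model.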
|
||||
|
||||
|
||||
## Citation
|
||||
|
||||
Please cite our paper if you use our dataset.
|
||||
|
||||
```
|
||||
@misc{he2024chinesesimpleqachinesefactuality,
|
||||
title={Chinese SimpleQA: A Chinese Factuality Evaluation for Large Language Models},
|
||||
author={Yancheng He and Shilong Li and Jiaheng Liu and Yingshui Tan and Weixun Wang and Hui Huang and Xingyuan Bu and Hangyu Guo and Chengwei Hu and Boren Zheng and Zhuoran Lin and Xuepeng Liu and Dekai Sun and Shirong Lin and Zhicheng Zheng and Xiaoyong Zhu and Wenbo Su and Bo Zheng},
|
||||
year={2024},
|
||||
eprint={2411.07140},
|
||||
archivePrefix={arXiv},
|
||||
primaryClass={cs.CL},
|
||||
url={https://arxiv.org/abs/2411.07140},
|
||||
}
|
||||
```
|
||||
|
@ -0,0 +1,59 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import CsimpleqaDataset, csimpleqa_postprocess
|
||||
|
||||
subjective_reader_cfg = dict(input_columns=['primary_category', 'question','gold_ans', 'messages', 'system_prompt','prompt_template'], output_column='judge')
|
||||
|
||||
subjective_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='{question}'
|
||||
),
|
||||
]),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=2048),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
fallback_role='HUMAN',
|
||||
prompt='{system_prompt}')
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = '{prompt_template}'
|
||||
),
|
||||
]
|
||||
),
|
||||
),
|
||||
dict_postprocessor=dict(type=csimpleqa_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
csimpleqa_datasets = [
|
||||
dict(
|
||||
abbr='chinese_simpleqa',
|
||||
type=CsimpleqaDataset,
|
||||
name="chinese_simpleqa",
|
||||
path='opencompass/chinese_simpleqa',
|
||||
reader_cfg=subjective_reader_cfg,
|
||||
infer_cfg=subjective_infer_cfg,
|
||||
eval_cfg=subjective_eval_cfg,
|
||||
mode='singlescore',
|
||||
)
|
||||
]
|
@ -0,0 +1,173 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_evaluator import AccEvaluator
|
||||
from opencompass.datasets import CMMLUDataset
|
||||
from opencompass.utils.text_postprocessors import match_answer_pattern
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import generic_llmjudge_postprocess
|
||||
|
||||
cmmlu_subject_mapping = {
|
||||
'agronomy': '农学',
|
||||
'anatomy': '解剖学',
|
||||
'ancient_chinese': '古汉语',
|
||||
'arts': '艺术学',
|
||||
'astronomy': '天文学',
|
||||
'business_ethics': '商业伦理',
|
||||
'chinese_civil_service_exam': '中国公务员考试',
|
||||
'chinese_driving_rule': '中国驾驶规则',
|
||||
'chinese_food_culture': '中国饮食文化',
|
||||
'chinese_foreign_policy': '中国外交政策',
|
||||
'chinese_history': '中国历史',
|
||||
'chinese_literature': '中国文学',
|
||||
'chinese_teacher_qualification': '中国教师资格',
|
||||
'clinical_knowledge': '临床知识',
|
||||
'college_actuarial_science': '大学精算学',
|
||||
'college_education': '大学教育学',
|
||||
'college_engineering_hydrology': '大学工程水文学',
|
||||
'college_law': '大学法律',
|
||||
'college_mathematics': '大学数学',
|
||||
'college_medical_statistics': '大学医学统计',
|
||||
'college_medicine': '大学医学',
|
||||
'computer_science': '计算机科学',
|
||||
'computer_security': '计算机安全',
|
||||
'conceptual_physics': '概念物理学',
|
||||
'construction_project_management': '建设工程管理',
|
||||
'economics': '经济学',
|
||||
'education': '教育学',
|
||||
'electrical_engineering': '电气工程',
|
||||
'elementary_chinese': '小学语文',
|
||||
'elementary_commonsense': '小学常识',
|
||||
'elementary_information_and_technology': '小学信息技术',
|
||||
'elementary_mathematics': '初等数学',
|
||||
'ethnology': '民族学',
|
||||
'food_science': '食品科学',
|
||||
'genetics': '遗传学',
|
||||
'global_facts': '全球事实',
|
||||
'high_school_biology': '高中生物',
|
||||
'high_school_chemistry': '高中化学',
|
||||
'high_school_geography': '高中地理',
|
||||
'high_school_mathematics': '高中数学',
|
||||
'high_school_physics': '高中物理学',
|
||||
'high_school_politics': '高中政治',
|
||||
'human_sexuality': '人类性行为',
|
||||
'international_law': '国际法学',
|
||||
'journalism': '新闻学',
|
||||
'jurisprudence': '法理学',
|
||||
'legal_and_moral_basis': '法律与道德基础',
|
||||
'logical': '逻辑学',
|
||||
'machine_learning': '机器学习',
|
||||
'management': '管理学',
|
||||
'marketing': '市场营销',
|
||||
'marxist_theory': '马克思主义理论',
|
||||
'modern_chinese': '现代汉语',
|
||||
'nutrition': '营养学',
|
||||
'philosophy': '哲学',
|
||||
'professional_accounting': '专业会计',
|
||||
'professional_law': '专业法学',
|
||||
'professional_medicine': '专业医学',
|
||||
'professional_psychology': '专业心理学',
|
||||
'public_relations': '公共关系',
|
||||
'security_study': '安全研究',
|
||||
'sociology': '社会学',
|
||||
'sports_science': '体育学',
|
||||
'traditional_chinese_medicine': '中医中药',
|
||||
'virology': '病毒学',
|
||||
'world_history': '世界历史',
|
||||
'world_religions': '世界宗教'
|
||||
}
|
||||
|
||||
QUERY_TEMPLATE = """
|
||||
你回答的最后一行**必须**是以下格式 '答案: $选项' (不带引号), 其中选项是ABCD之一.
|
||||
|
||||
{question}
|
||||
|
||||
A) {A}
|
||||
B) {B}
|
||||
C) {C}
|
||||
D) {D}
|
||||
""".strip()
|
||||
|
||||
|
||||
|
||||
GRADER_TEMPLATE = """
|
||||
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.
|
||||
|
||||
Here are some evaluation criteria:
|
||||
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
|
||||
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
|
||||
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
|
||||
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
|
||||
|
||||
Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
|
||||
A: CORRECT
|
||||
B: INCORRECT
|
||||
Just return the letters "A" or "B", with no text around it.
|
||||
|
||||
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
|
||||
|
||||
<Original Question Begin>: \n {question}\n A) {A}\n B) {B}\n C) {C}\n D) {D}\n<Original Question End>\n\n
|
||||
<Gold Target Begin>: \n{answer}\n<Gold Target End>\n\n
|
||||
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n
|
||||
Judging the correctness of candidates' answers:
|
||||
""".strip()
|
||||
|
||||
cmmlu_all_sets = list(cmmlu_subject_mapping.keys())
|
||||
|
||||
cmmlu_datasets = []
|
||||
for _name in cmmlu_all_sets:
|
||||
_ch_name = cmmlu_subject_mapping[_name]
|
||||
prompt_prefix = f'请回答以下关于{_ch_name}的单项选择题, '
|
||||
cmmlu_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt=prompt_prefix+QUERY_TEMPLATE),
|
||||
],
|
||||
),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer),
|
||||
)
|
||||
|
||||
cmmlu_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
fallback_role='HUMAN',
|
||||
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = GRADER_TEMPLATE
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=generic_llmjudge_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
cmmlu_datasets.append(
|
||||
dict(
|
||||
type=CMMLUDataset,
|
||||
path='opencompass/cmmlu',
|
||||
name=_name,
|
||||
abbr=f'cmmlu-{_name}',
|
||||
reader_cfg=dict(
|
||||
input_columns=['question', 'A', 'B', 'C', 'D'],
|
||||
output_column='answer',
|
||||
train_split='dev',
|
||||
test_split='test'),
|
||||
infer_cfg=cmmlu_infer_cfg,
|
||||
eval_cfg=cmmlu_eval_cfg,
|
||||
mode='singlescore',
|
||||
))
|
||||
|
||||
del _name, _ch_name
|
@ -0,0 +1,123 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_evaluator import AccEvaluator
|
||||
from opencompass.datasets import CMMLUDataset
|
||||
from opencompass.utils.text_postprocessors import match_answer_pattern
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import generic_llmjudge_postprocess
|
||||
|
||||
cmmlu_subject_mapping = {
|
||||
'anatomy': '解剖学',
|
||||
'astronomy': '天文学',
|
||||
'college_actuarial_science': '大学精算学',
|
||||
'college_engineering_hydrology': '大学工程水文学',
|
||||
'college_mathematics': '大学数学',
|
||||
'college_medical_statistics': '大学医学统计',
|
||||
'computer_science': '计算机科学',
|
||||
'conceptual_physics': '概念物理学',
|
||||
'electrical_engineering': '电气工程',
|
||||
'elementary_mathematics': '初等数学',
|
||||
'genetics': '遗传学',
|
||||
'high_school_biology': '高中生物',
|
||||
'high_school_chemistry': '高中化学',
|
||||
'high_school_mathematics': '高中数学',
|
||||
'high_school_physics': '高中物理学',
|
||||
'machine_learning': '机器学习',
|
||||
'virology': '病毒学',
|
||||
}
|
||||
|
||||
QUERY_TEMPLATE = """
|
||||
你回答的最后一行**必须**是以下格式 '答案: $选项' (不带引号), 其中选项是ABCD之一.
|
||||
|
||||
{question}
|
||||
|
||||
A) {A}
|
||||
B) {B}
|
||||
C) {C}
|
||||
D) {D}
|
||||
""".strip()
|
||||
|
||||
|
||||
|
||||
GRADER_TEMPLATE = """
|
||||
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.
|
||||
|
||||
Here are some evaluation criteria:
|
||||
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
|
||||
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
|
||||
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
|
||||
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
|
||||
|
||||
Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
|
||||
A: CORRECT
|
||||
B: INCORRECT
|
||||
Just return the letters "A" or "B", with no text around it.
|
||||
|
||||
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
|
||||
|
||||
<Original Question Begin>: \n {question}\n A) {A}\n B) {B}\n C) {C}\n D) {D}\n<Original Question End>\n\n
|
||||
<Gold Target Begin>: \n{answer}\n<Gold Target End>\n\n
|
||||
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n
|
||||
Judging the correctness of candidates' answers:
|
||||
""".strip()
|
||||
|
||||
cmmlu_all_sets = list(cmmlu_subject_mapping.keys())
|
||||
|
||||
cmmlu_datasets = []
|
||||
for _name in cmmlu_all_sets:
|
||||
_ch_name = cmmlu_subject_mapping[_name]
|
||||
prompt_prefix = f'请回答以下关于{_ch_name}的单项选择题, '
|
||||
cmmlu_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt=prompt_prefix+QUERY_TEMPLATE),
|
||||
],
|
||||
),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer),
|
||||
)
|
||||
|
||||
cmmlu_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
fallback_role='HUMAN',
|
||||
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = GRADER_TEMPLATE
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=generic_llmjudge_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
cmmlu_datasets.append(
|
||||
dict(
|
||||
type=CMMLUDataset,
|
||||
path='opencompass/cmmlu',
|
||||
name=_name,
|
||||
abbr=f'cmmlu-{_name}',
|
||||
reader_cfg=dict(
|
||||
input_columns=['question', 'A', 'B', 'C', 'D'],
|
||||
output_column='answer',
|
||||
train_split='dev',
|
||||
test_split='test'),
|
||||
infer_cfg=cmmlu_infer_cfg,
|
||||
eval_cfg=cmmlu_eval_cfg,
|
||||
mode='singlescore',
|
||||
))
|
||||
|
||||
del _name, _ch_name
|
@ -0,0 +1,101 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import GPQADataset, GPQA_Simple_Eval_postprocess, GPQAEvaluator
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import generic_llmjudge_postprocess
|
||||
|
||||
# openai_simple_eval prompt
|
||||
align_prompt = """
|
||||
Answer the following multiple choice question. The last line of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of ABCD.
|
||||
|
||||
{question}
|
||||
|
||||
A) {A}
|
||||
B) {B}
|
||||
C) {C}
|
||||
D) {D}
|
||||
""".strip()
|
||||
|
||||
|
||||
GRADER_TEMPLATE = """
|
||||
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.
|
||||
|
||||
Here are some evaluation criteria:
|
||||
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
|
||||
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
|
||||
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
|
||||
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
|
||||
|
||||
Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
|
||||
A: CORRECT
|
||||
B: INCORRECT
|
||||
Just return the letters "A" or "B", with no text around it.
|
||||
|
||||
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
|
||||
|
||||
<Original Question Begin>: {question}\n A) {A}\n B) {B}\n C) {C}\n D) {D}\n<Original Question End>\n\n
|
||||
<Gold Target Begin>: \n{answer}\n<Gold Target End>\n\n
|
||||
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n
|
||||
Judging the correctness of candidates' answers:
|
||||
""".strip()
|
||||
|
||||
|
||||
gpqa_reader_cfg = dict(
|
||||
input_columns=['question', 'A', 'B', 'C', 'D'],
|
||||
output_column='answer')
|
||||
|
||||
gpqa_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt=align_prompt),
|
||||
], )),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer))
|
||||
|
||||
|
||||
gpqa_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
fallback_role='HUMAN',
|
||||
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = GRADER_TEMPLATE
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=generic_llmjudge_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
gpqa_datasets = []
|
||||
gpqa_subsets = {
|
||||
# 'extended': 'gpqa_extended.csv',
|
||||
# 'main': 'gpqa_main.csv',
|
||||
'diamond': 'gpqa_diamond.csv'
|
||||
}
|
||||
|
||||
for split in list(gpqa_subsets.keys()):
|
||||
gpqa_datasets.append(
|
||||
dict(
|
||||
abbr='GPQA_' + split,
|
||||
type=GPQADataset,
|
||||
path='./data/gpqa/',
|
||||
name=gpqa_subsets[split],
|
||||
reader_cfg=gpqa_reader_cfg,
|
||||
infer_cfg=gpqa_infer_cfg,
|
||||
eval_cfg=gpqa_eval_cfg,
|
||||
mode='singlescore',
|
||||
)
|
||||
)
|
@ -0,0 +1,36 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import HumanevalDataset, HumanEvalEvaluator, humaneval_postprocess_v2
|
||||
|
||||
humaneval_reader_cfg = dict(
|
||||
input_columns=['prompt'], output_column='task_id', train_split='test')
|
||||
|
||||
# TODO: allow empty output-column
|
||||
humaneval_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='Read the following function signature and docstring, and fully implement the function described. Your response should only contain the code for this function.\n{prompt}'),
|
||||
])),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer))
|
||||
|
||||
humaneval_eval_cfg = dict(
|
||||
evaluator=dict(type=HumanEvalEvaluator),
|
||||
pred_role='BOT',
|
||||
k=[1, 10, 100], # the parameter only for humaneval
|
||||
pred_postprocessor=dict(type=humaneval_postprocess_v2),
|
||||
)
|
||||
|
||||
humaneval_datasets = [
|
||||
dict(
|
||||
abbr='openai_humaneval',
|
||||
type=HumanevalDataset,
|
||||
path='opencompass/humaneval',
|
||||
reader_cfg=humaneval_reader_cfg,
|
||||
infer_cfg=humaneval_infer_cfg,
|
||||
eval_cfg=humaneval_eval_cfg)
|
||||
]
|
@ -0,0 +1,36 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import HumanevalDataset, HumanEvalEvaluator, humaneval_postprocess_v3
|
||||
|
||||
humaneval_reader_cfg = dict(
|
||||
input_columns=['prompt'], output_column='task_id', train_split='test')
|
||||
|
||||
# TODO: allow empty output-column
|
||||
humaneval_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='Read the following function signature and docstring, and fully implement the function described. Your response should only contain the code for this function.\n{prompt}'),
|
||||
])),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=8192))
|
||||
|
||||
humaneval_eval_cfg = dict(
|
||||
evaluator=dict(type=HumanEvalEvaluator),
|
||||
pred_role='BOT',
|
||||
k=[1, 10, 100], # the parameter only for humaneval
|
||||
pred_postprocessor=dict(type=humaneval_postprocess_v3),
|
||||
)
|
||||
|
||||
humaneval_datasets = [
|
||||
dict(
|
||||
abbr='openai_humaneval_o1_style',
|
||||
type=HumanevalDataset,
|
||||
path='opencompass/humaneval',
|
||||
reader_cfg=humaneval_reader_cfg,
|
||||
infer_cfg=humaneval_infer_cfg,
|
||||
eval_cfg=humaneval_eval_cfg)
|
||||
]
|
@ -0,0 +1,41 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import HumanevalXDataset, HumanevalXEvaluator
|
||||
|
||||
humanevalx_reader_cfg = dict(
|
||||
input_columns=['prompt'], output_column='declaration', train_split='test')
|
||||
|
||||
humanevalx_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template='{prompt}'),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer))
|
||||
|
||||
humanevalx_eval_cfg_dict = {
|
||||
lang : dict(
|
||||
evaluator=dict(
|
||||
type=HumanevalXEvaluator,
|
||||
language=lang,
|
||||
ip_address=
|
||||
'localhost', # replace with your code_eval_server ip_address and port
|
||||
port=5001), # refer to https://opencompass.readthedocs.io/en/latest/advanced_guides/code_eval_service.html to launch a server
|
||||
pred_role='BOT')
|
||||
for lang in ['python', 'cpp', 'go', 'java', 'js'] # do not support rust now
|
||||
}
|
||||
|
||||
# Please download the needed `xx.jsonl.gz` from
|
||||
# https://github.com/THUDM/CodeGeeX2/tree/main/benchmark/humanevalx
|
||||
# and move them into `data/humanevalx/` folder
|
||||
humanevalx_datasets = [
|
||||
dict(
|
||||
type=HumanevalXDataset,
|
||||
abbr=f'humanevalx-{lang}',
|
||||
language=lang,
|
||||
path='./data/humanevalx',
|
||||
reader_cfg=humanevalx_reader_cfg,
|
||||
infer_cfg=humanevalx_infer_cfg,
|
||||
eval_cfg=humanevalx_eval_cfg_dict[lang])
|
||||
for lang in ['python', 'cpp', 'go', 'java', 'js']
|
||||
]
|
@ -50,7 +50,7 @@ for category in categories:
|
||||
abbr=f"korbench_mixed_{category}",
|
||||
path="opencompass/korbench",
|
||||
category=category,
|
||||
mode='mixed',
|
||||
prompt_mode='mixed',
|
||||
reader_cfg=reader_cfg,
|
||||
infer_cfg=infer_cfg,
|
||||
eval_cfg=eval_cfg,
|
||||
|
@ -50,7 +50,7 @@ for category in categories:
|
||||
type=korbenchDataset,
|
||||
abbr=f"korbench_{category}",
|
||||
path="opencompass/korbench",
|
||||
mode='0_shot',
|
||||
prompt_mode='0_shot',
|
||||
category=category,
|
||||
reader_cfg=reader_cfg,
|
||||
infer_cfg=infer_cfg,
|
||||
|
@ -0,0 +1,109 @@
|
||||
from opencompass.datasets.korbench.korbench import korbenchDataset, korbenchEvaluator
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import generic_llmjudge_postprocess
|
||||
|
||||
categories = ["cipher", "counterfactual", "logic", "operation", "puzzle"]
|
||||
|
||||
|
||||
GRADER_TEMPLATE = """
|
||||
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.
|
||||
|
||||
Here are some evaluation criteria:
|
||||
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
|
||||
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
|
||||
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
|
||||
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
|
||||
5. If the prediction is given with \\boxed{}, please ignore the \\boxed{} and only judge whether the candidate's answer is consistent with the standard answer.
|
||||
|
||||
Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
|
||||
A: CORRECT
|
||||
B: INCORRECT
|
||||
Just return the letters "A" or "B", with no text around it.
|
||||
|
||||
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
|
||||
|
||||
|
||||
<Original Question Begin>: \n{prompt}\n<Original Question End>\n\n
|
||||
<Gold Target Begin>: \n{answer}\n<Gold Target End>\n\n
|
||||
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n
|
||||
|
||||
Judging the correctness of candidates' answers:
|
||||
""".strip()
|
||||
|
||||
korbench_0shot_single_datasets = []
|
||||
|
||||
for category in categories:
|
||||
# Prompt template
|
||||
prompt_template = dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role="HUMAN",
|
||||
prompt=""
|
||||
)
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role="HUMAN",
|
||||
prompt="{prompt}" # f-string
|
||||
)
|
||||
]
|
||||
)
|
||||
)
|
||||
|
||||
# Reader configuration
|
||||
reader_cfg = dict(
|
||||
input_columns=["prompt"],
|
||||
output_column="answer",
|
||||
)
|
||||
|
||||
# Inference configuration
|
||||
infer_cfg = dict(
|
||||
prompt_template=prompt_template,
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=1024),
|
||||
)
|
||||
|
||||
# Evaluation configuration
|
||||
eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
fallback_role='HUMAN',
|
||||
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = GRADER_TEMPLATE
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=generic_llmjudge_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
# Dataset
|
||||
korbench_dataset = dict(
|
||||
type=korbenchDataset,
|
||||
abbr=f"korbench_{category}",
|
||||
path="opencompass/korbench",
|
||||
prompt_mode='0_shot',
|
||||
category=category,
|
||||
reader_cfg=reader_cfg,
|
||||
infer_cfg=infer_cfg,
|
||||
eval_cfg=eval_cfg,
|
||||
mode='singlescore',
|
||||
)
|
||||
|
||||
korbench_0shot_single_datasets.append(korbench_dataset)
|
@ -1,4 +1,7 @@
|
||||
from opencompass.datasets.korbench.korbench import korbenchDataset, korbenchEvaluator
|
||||
from opencompass.datasets.korbench.korbench import (
|
||||
korbenchDataset,
|
||||
korbenchEvaluator,
|
||||
)
|
||||
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
@ -13,19 +16,9 @@ for category in categories:
|
||||
prompt_template = dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role="HUMAN",
|
||||
prompt=""
|
||||
)
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role="HUMAN",
|
||||
prompt="{prompt}" # f-string
|
||||
)
|
||||
]
|
||||
)
|
||||
begin=[dict(role="HUMAN", prompt="")],
|
||||
round=[dict(role="HUMAN", prompt="{prompt}")], # f-string
|
||||
),
|
||||
)
|
||||
|
||||
# Reader configuration
|
||||
@ -51,7 +44,7 @@ for category in categories:
|
||||
type=korbenchDataset,
|
||||
abbr=f"korbench_{category}",
|
||||
path="opencompass/korbench",
|
||||
mode='3_shot',
|
||||
prompt_mode='3_shot',
|
||||
category=category,
|
||||
reader_cfg=reader_cfg,
|
||||
infer_cfg=infer_cfg,
|
||||
|
@ -0,0 +1,165 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import (
|
||||
LCBCodeGenerationDataset,
|
||||
LCBCodeExecutionDataset,
|
||||
LCBTestOutputPredictionDataset,
|
||||
LCBCodeGenerationEvaluator,
|
||||
LCBCodeExecutionEvaluator,
|
||||
LCBTestOutputEvaluator
|
||||
)
|
||||
from opencompass.datasets.livecodebench import TestOutputPromptConstants
|
||||
|
||||
|
||||
lcb_code_generation_reader_cfg = dict(
|
||||
input_columns=[
|
||||
'question_content',
|
||||
'format_prompt',
|
||||
],
|
||||
# output_column='evaluation_sample',
|
||||
output_column='question_id',
|
||||
)
|
||||
|
||||
SYSTEM_MESSAGE_GENERIC = f'You are an expert Python programmer. You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests. You will NOT return anything except for the program.'
|
||||
|
||||
prompt_template = '### Question:\n{question_content}\n\n{format_prompt}' + \
|
||||
'### Answer: (use the provided format with backticks)\n\n'
|
||||
|
||||
|
||||
# Code Generation Tasks
|
||||
lcb_code_generation_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt=prompt_template
|
||||
)
|
||||
]
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=1024)
|
||||
)
|
||||
|
||||
lcb_code_generation_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LCBCodeGenerationEvaluator,
|
||||
num_process_evaluate=4,
|
||||
timeout=6,
|
||||
release_version='release_split_v4',
|
||||
extractor_version='v2',
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
LCBCodeGeneration_dataset = dict(
|
||||
type=LCBCodeGenerationDataset,
|
||||
abbr='lcb_code_generation_split_v4',
|
||||
path='opencompass/code_generation_lite',
|
||||
reader_cfg=lcb_code_generation_reader_cfg,
|
||||
infer_cfg=lcb_code_generation_infer_cfg,
|
||||
eval_cfg=lcb_code_generation_eval_cfg,
|
||||
release_version='release_split_v4',
|
||||
)
|
||||
|
||||
# Code Execution Dataset
|
||||
lcb_code_execution_reader_cfg = dict(
|
||||
input_columns=[
|
||||
'prompt',
|
||||
],
|
||||
output_column='evaluation_sample',
|
||||
)
|
||||
|
||||
lcb_code_execution_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
prompt='You are an expert at Python programming, code execution, test case generation, and fuzzing.',
|
||||
fallback_role='HUMAN',
|
||||
),
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='{prompt}'
|
||||
)
|
||||
]
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=1024)
|
||||
)
|
||||
|
||||
lcb_code_execution_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LCBCodeExecutionEvaluator,
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
LCBCodeExecution_dataset = dict(
|
||||
type=LCBCodeExecutionDataset,
|
||||
abbr='lcb_code_execution',
|
||||
path='opencompass/execution-v2',
|
||||
reader_cfg=lcb_code_execution_reader_cfg,
|
||||
infer_cfg=lcb_code_execution_infer_cfg,
|
||||
eval_cfg=lcb_code_execution_eval_cfg,
|
||||
)
|
||||
|
||||
# Test Output Prediction Dataset
|
||||
lcb_test_output_reader_cfg = dict(
|
||||
input_columns=[
|
||||
'prompt',
|
||||
],
|
||||
output_column='evaluation_sample',
|
||||
)
|
||||
|
||||
system_prompt = 'You are an expert Python programmer. You will be given a question (problem specification) and will generate a correct Python program that matches the specification and passes all tests. You will NOT return anything except for the program.'
|
||||
|
||||
lcb_test_output_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
# begin=[
|
||||
# dict(
|
||||
# role='SYSTEM',
|
||||
# prompt=system_prompt
|
||||
# ),
|
||||
# ],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='{prompt}'
|
||||
)
|
||||
]
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=1024)
|
||||
)
|
||||
|
||||
lcb_test_output_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LCBTestOutputEvaluator,
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
LCBTestOutput_dataset = dict(
|
||||
type=LCBTestOutputPredictionDataset,
|
||||
abbr='lcb_test_output',
|
||||
path='opencompass/test_generation',
|
||||
reader_cfg=lcb_test_output_reader_cfg,
|
||||
infer_cfg=lcb_test_output_infer_cfg,
|
||||
eval_cfg=lcb_test_output_eval_cfg,
|
||||
)
|
||||
|
||||
LCB_datasets = [
|
||||
LCBCodeGeneration_dataset,
|
||||
]
|
@ -4,12 +4,14 @@
|
||||
|
||||
| dataset | language | #single-choice | #multiple-choice | #fill-in-the-blank | #problem-solving |
|
||||
| -- | -- | -- | -- | -- | -- |
|
||||
| AIMC | cn | 46 | 0 | 0 | 0 |
|
||||
| AIMC | en | 46 | 0 | 0 | 0 |
|
||||
| CEE | cn | 28 | 9 | 13 | 3 |
|
||||
| CEE | en | 28 | 9 | 13 | 3 |
|
||||
| AIMC | cn | 0 | 0 | 0 | 46 |
|
||||
| AIMC | en | 0 | 0 | 0 | 46 |
|
||||
| CEE | cn | 0 | 0 | 13 | 40 |
|
||||
| CEE | en | 0 | 0 | 13 | 40 |
|
||||
| CMO | cn | 0 | 0 | 0 | 18 |
|
||||
| CMO | en | 0 | 0 | 0 | 18 |
|
||||
| MATH500 | en | 0 | 0 | 0 | 500 |
|
||||
| AIME2024 | en | 0 | 0 | 0 | 44 |
|
||||
|
||||
|
||||
## How to use
|
||||
@ -23,6 +25,7 @@ with read_base():
|
||||
|
||||
livemathbench_datasets[0].update(
|
||||
{
|
||||
'abbr': 'livemathbench_${k}x${n}',
|
||||
'path': '/path/to/data/dir',
|
||||
'k': 'k@pass', # the max value of k in k@pass
|
||||
'n': 'number of runs', # number of runs
|
||||
|
@ -0,0 +1,49 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
|
||||
from opencompass.datasets.livemathbench import LiveMathBenchDataset, LiveMathBenchEvaluator
|
||||
|
||||
|
||||
livemathbench_reader_cfg = dict(
|
||||
input_columns=['prompt'],
|
||||
output_column='answer'
|
||||
)
|
||||
|
||||
livemathbench_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{prompt}'),
|
||||
]
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(
|
||||
type=GenInferencer,
|
||||
max_out_len=8192,
|
||||
temperature=1.0
|
||||
)
|
||||
)
|
||||
|
||||
livemathbench_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LiveMathBenchEvaluator,
|
||||
model_name='Qwen/Qwen2.5-72B-Instruct',
|
||||
url=['http://172.30.40.154:23333/v1/'] #'https://api.openai.com/v1/'
|
||||
)
|
||||
)
|
||||
|
||||
livemathbench_datasets = [
|
||||
dict(
|
||||
type=LiveMathBenchDataset,
|
||||
abbr='LiveMathBench-k1-n1',
|
||||
path='opencompass/LiveMathBench202412',
|
||||
k=1, # K@Pass
|
||||
n=1, # Run times
|
||||
reader_cfg=livemathbench_reader_cfg,
|
||||
infer_cfg=livemathbench_infer_cfg,
|
||||
eval_cfg=livemathbench_eval_cfg
|
||||
)
|
||||
]
|
@ -0,0 +1,4 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
with read_base():
|
||||
from .livereasonbench_gen_0283c3 import livereasonbench_datasets # noqa: F401, F403
|
@ -0,0 +1,136 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
# from opencompass.datasets import SimpleQADataset, simpleqa_postprocess
|
||||
from opencompass.datasets import LiveReasonBenchDataset, livereasonbench_postprocess
|
||||
|
||||
|
||||
GRADER_TEMPLATE = """
|
||||
Your job is to look at a question, a gold target, and a predicted answer, and then assign a grade of either ["CORRECT", "INCORRECT", "NOT_ATTEMPTED"].
|
||||
First, I will give examples of each grade, and then you will grade a new example.
|
||||
|
||||
|
||||
The following are examples of CORRECT predicted answers.
|
||||
```
|
||||
Question: What are the names of Barack Obama's children?
|
||||
Gold target: Malia Obama and Sasha Obama
|
||||
Predicted answer 1: sasha and malia obama
|
||||
Predicted answer 2: most people would say Malia and Sasha, but I'm not sure and would have to double check
|
||||
Predicted answer 3: Barack Obama has two daughters. Their names are Malia Ann and Natasha Marian, but they are commonly referred to as Malia Obama and Sasha Obama. Malia was born on July 4, 1998, and Sasha was born on June 10, 2001.
|
||||
```
|
||||
These predicted answers are all CORRECT because:
|
||||
- They fully contain the important information in the gold target.
|
||||
- They do not contain any information that contradicts the gold target.
|
||||
- Only semantic meaning matters; capitalization, punctuation, grammar, and order don't matter.
|
||||
- Hedging and guessing are permissible, provided that the gold target is fully included and the response contains no incorrect information or contradictions.
|
||||
|
||||
|
||||
The following are examples of INCORRECT predicted answers.
|
||||
```
|
||||
Question: What are the names of Barack Obama's children?
|
||||
Gold target: Malia and Sasha
|
||||
Predicted answer 1: Malia.
|
||||
Predicted answer 2: Malia, Sasha, and Susan.
|
||||
Predicted answer 3: Barack Obama does not have any children.
|
||||
Predicted answer 4: I think it's either Malia and Sasha. Or it could be Malia and Jackie. Or it could be Joey and Malia.
|
||||
Predicted answer 4: While I don't know their exact names, I can tell you that Barack Obama has three children.
|
||||
Predicted answer 5: It's possible you may mean Betsy and Olivia. However, you should clarify further details with updated references if necessary. Is that the correct answer?
|
||||
Predicted answer 6: It may be the case that Obama's child is named James. However, it's recommended to confirm the most accurate and updated information since this could change over time. This model may not always reflect the most current information.
|
||||
```
|
||||
These predicted answers are all INCORRECT because:
|
||||
- A factual statement in the answer contradicts the gold target. Incorrect statements that have some hedging (e.g., "it is possible that", "although i'm not sure, i think") are also considered incorrect.
|
||||
|
||||
|
||||
The following are examples of NOT_ATTEMPTED predicted answers.
|
||||
```
|
||||
Question: What are the names of Barack Obama's children?
|
||||
Gold target: Malia and Sasha
|
||||
Predicted answer 1: I don't know.
|
||||
Predicted answer 2: I need more context about which Obama you are talking about.
|
||||
Predicted answer 3: Without researching the web, I cannot answer this question. However, I can tell you that Barack Obama has two children.
|
||||
Predicted answer 4: Barack Obama has two children. I know that one of them is Malia, but I'm not sure about the other one.
|
||||
```
|
||||
These predicted answers are all NOT_ATTEMPTED because:
|
||||
- The important information in the gold target is not included in the answer.
|
||||
- No statements in the answer contradict the gold target.
|
||||
|
||||
|
||||
Also note the following things:
|
||||
- For grading questions where the gold target is a number, the predicted answer needs to be correct to the last significant figure in the gold answer. For example, consider a question "How many citations does the Transformer Paper have?" with gold target "120k".
|
||||
- Predicted answers "120k", "124k", and 115k" are all CORRECT.
|
||||
- Predicted answers "100k" and "113k" are INCORRECT.
|
||||
- Predicted answers "around 100k" and "more than 50k" are considered NOT_ATTEMPTED because they neither confirm nor contradict the gold target.
|
||||
- The gold target may contain more information than the question. In such cases, the predicted answer only needs to contain the information that is in the question.
|
||||
- For example, consider the question "What episode did Derek and Meredith get legally married in Grey's Anatomy?" with gold target "Season 7, Episode 20: White Wedding". Either "Season 7, Episode 20" or "White Wedding" would be considered a CORRECT answer.
|
||||
- Do not punish predicted answers if they omit information that would be clearly inferred from the question.
|
||||
- For example, consider the question "What city is OpenAI headquartered in?" and the gold target "San Francisco, California". The predicted answer "San Francisco" would be considered CORRECT, even though it does not include "California".
|
||||
- Consider the question "What award did A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity win at NAACL '24?", the gold target is "Outstanding Paper Award". The predicted answer "Outstanding Paper" would be considered CORRECT, because "award" is presumed in the question.
|
||||
- For the question "What is the height of Jason Wei in meters?", the gold target is "1.73 m". The predicted answer "1.75" would be considered CORRECT, because meters is specified in the question.
|
||||
- For the question "What is the name of Barack Obama's wife?", the gold target is "Michelle Obama". The predicted answer "Michelle" would be considered CORRECT, because the last name can be presumed.
|
||||
- Do not punish for typos in people's name if it's clearly the same name.
|
||||
- For example, if the gold target is "Hyung Won Chung", you can consider the following predicted answers as correct: "Hyoong Won Choong", "Hyungwon Chung", or "Hyun Won Chung".
|
||||
|
||||
Grade the predicted answer of this new question as one of:
|
||||
A: CORRECT
|
||||
B: INCORRECT
|
||||
C: NOT_ATTEMPTED
|
||||
Just return the letters "A", "B", or "C", with no text around it.
|
||||
|
||||
Here is a new example. Simply reply with either CORRECT, INCORRECT, NOT ATTEMPTED. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
|
||||
```
|
||||
Question: {question}
|
||||
Gold target: {answer}
|
||||
Predicted answer: {prediction}
|
||||
```
|
||||
""".strip()
|
||||
|
||||
livereasonbench_reader_cfg = dict(input_columns=['question'], output_column='answer')
|
||||
|
||||
livereasonbench_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt="Question: {question}\n"),
|
||||
],
|
||||
)),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=16384))
|
||||
|
||||
livereasonbench_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
fallback_role='HUMAN',
|
||||
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = GRADER_TEMPLATE
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=livereasonbench_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
livereasonbench_datasets = [
|
||||
dict(
|
||||
abbr='LiveReasonBench-20241202',
|
||||
type=LiveReasonBenchDataset,
|
||||
path='opencompass/LiveReasonBench',
|
||||
reader_cfg=livereasonbench_reader_cfg,
|
||||
infer_cfg=livereasonbench_infer_cfg,
|
||||
eval_cfg=livereasonbench_eval_cfg,
|
||||
version='livereasonbench-20241202',
|
||||
mode='singlescore',
|
||||
)
|
||||
]
|
35
opencompass/configs/datasets/math/math_0shot_gen_11c4b5.py
Normal file
35
opencompass/configs/datasets/math/math_0shot_gen_11c4b5.py
Normal file
@ -0,0 +1,35 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import MATHDataset, MATHEvaluator, math_postprocess_v2, normalize_final_answer
|
||||
|
||||
math_reader_cfg = dict(input_columns=['problem'], output_column='solution')
|
||||
|
||||
math_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{problem}\nPlease reason step by step, and put your final answer within \\boxed{}.'),
|
||||
]
|
||||
),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer),
|
||||
)
|
||||
|
||||
# postprocess v2
|
||||
math_eval_cfg = dict(
|
||||
evaluator=dict(type=MATHEvaluator, version='v2'), pred_postprocessor=dict(type=math_postprocess_v2),
|
||||
)
|
||||
|
||||
math_datasets = [
|
||||
dict(
|
||||
type=MATHDataset,
|
||||
abbr='math',
|
||||
path='opencompass/math',
|
||||
reader_cfg=math_reader_cfg,
|
||||
infer_cfg=math_infer_cfg,
|
||||
eval_cfg=math_eval_cfg,
|
||||
)
|
||||
]
|
@ -0,0 +1,45 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import (
|
||||
MATHDataset,
|
||||
MATHEvaluator,
|
||||
math_postprocess_v2,
|
||||
normalize_final_answer,
|
||||
)
|
||||
|
||||
math_reader_cfg = dict(input_columns=['problem'], output_column='solution')
|
||||
|
||||
math_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='{problem}\nPlease reason step by step, and put your final answer within \\boxed{}.',
|
||||
),
|
||||
]
|
||||
),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=1024),
|
||||
)
|
||||
|
||||
# postprocess v2
|
||||
math_eval_cfg = dict(
|
||||
evaluator=dict(type=MATHEvaluator, version='v2'),
|
||||
pred_postprocessor=dict(type=math_postprocess_v2),
|
||||
)
|
||||
|
||||
math_datasets = [
|
||||
dict(
|
||||
type=MATHDataset,
|
||||
abbr='math_prm800k_500',
|
||||
path='opencompass/math',
|
||||
file_name='test_prm800k_500.json',
|
||||
reader_cfg=math_reader_cfg,
|
||||
infer_cfg=math_infer_cfg,
|
||||
eval_cfg=math_eval_cfg,
|
||||
)
|
||||
]
|
@ -1,40 +1,11 @@
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.datasets import MATHDataset, MATHEvaluator, math_postprocess_v2, normalize_final_answer
|
||||
from opencompass.datasets import MATHDataset, MATHEvaluator, math_postprocess_v2, GaoKaoMATHEvaluator
|
||||
# from opencompass.utils.model_postprocessors import naive_model_postprocess, xfinder_postprocess
|
||||
from opencompass.utils.postprocessors.naive import MATH_NAVIE_PROMPT_TEMPLATE
|
||||
|
||||
# ----------------------------- Eval Parameters -----------------------------
|
||||
## Postprocess function
|
||||
post_func = 're' # 're', 'xfinder_model', 'naive_model'
|
||||
|
||||
## Evalute function
|
||||
eval_func = 'naive_model' # 're', 'naive_model'
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import generic_llmjudge_postprocess
|
||||
from opencompass.datasets import MATHDataset
|
||||
|
||||
|
||||
## Model api url
|
||||
# xfinder_url = 'http://0.0.0.0:23333/v1' # for 'xFinder-qwen1505' if post_func is 'xfinder_model'
|
||||
# naive_model_name = 'Qwen/Qwen2.5-72B-Instruct' # replace with your model name
|
||||
naive_model_name = 'dlc_model'
|
||||
# naive_model_url = [
|
||||
# 'http://172.30.56.38:23001/v1',
|
||||
# ] # Multi-apis for accerlation
|
||||
naive_model_url = [
|
||||
"http://172.30.56.38:23001/v1",
|
||||
"http://172.30.8.4:23003/v1",
|
||||
"http://172.30.8.14:23002/v1",
|
||||
"http://172.30.48.80:23004/v1",
|
||||
"http://172.30.56.132:23005/v1",
|
||||
"http://172.30.16.115:23006/v1",
|
||||
"http://172.30.48.82:23007/v1",
|
||||
"http://172.30.24.53:23008/v1",
|
||||
"http://172.30.56.141:23009/v1",
|
||||
"http://172.30.8.35:23010/v1",
|
||||
"http://172.30.48.85:23011/v1",
|
||||
"http://172.30.16.116:23012/v1"
|
||||
]
|
||||
# ----------------------------- Detailed Config -----------------------------
|
||||
|
||||
math_reader_cfg = dict(input_columns=['problem'], output_column='solution')
|
||||
@ -53,25 +24,57 @@ math_infer_cfg = dict(
|
||||
)
|
||||
|
||||
|
||||
if post_func == 're':
|
||||
pred_postprocessor = dict(type=math_postprocess_v2)
|
||||
GRADER_TEMPLATE = """
|
||||
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.
|
||||
|
||||
Here are some evaluation criteria:
|
||||
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
|
||||
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
|
||||
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
|
||||
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
|
||||
5. If the prediction is given with \\boxed{}, please ignore the \\boxed{} and only judge whether the candidate's answer is consistent with the standard answer.
|
||||
|
||||
Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
|
||||
A: CORRECT
|
||||
B: INCORRECT
|
||||
Just return the letters "A" or "B", with no text around it.
|
||||
|
||||
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
|
||||
|
||||
|
||||
if eval_func == 're':
|
||||
evaluator = dict(type=MATHEvaluator, version='v2')
|
||||
elif eval_func == 'naive_model':
|
||||
evaluator = dict(
|
||||
type=GaoKaoMATHEvaluator,
|
||||
judge_model_name=naive_model_name,
|
||||
url=naive_model_url,
|
||||
)
|
||||
<Original Question Begin>: \n{problem}\n<Original Question End>\n\n
|
||||
<Gold Target Begin>: \n{solution}\n<Gold Target End>\n\n
|
||||
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n
|
||||
|
||||
Judging the correctness of candidates' answers:
|
||||
""".strip()
|
||||
|
||||
# postprocess v2
|
||||
# Evaluation configuration
|
||||
math_eval_cfg = dict(
|
||||
evaluator=evaluator,
|
||||
pred_postprocessor=pred_postprocessor,
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
fallback_role='HUMAN',
|
||||
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = GRADER_TEMPLATE
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=generic_llmjudge_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
|
||||
math_datasets = [
|
||||
dict(
|
||||
type=MATHDataset,
|
||||
@ -81,5 +84,6 @@ math_datasets = [
|
||||
reader_cfg=math_reader_cfg,
|
||||
infer_cfg=math_infer_cfg,
|
||||
eval_cfg=math_eval_cfg,
|
||||
mode='singlescore',
|
||||
)
|
||||
]
|
||||
|
@ -0,0 +1,104 @@
|
||||
from mmengine.config import read_base
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_evaluator import AccEvaluator
|
||||
from opencompass.datasets import MMLUDataset
|
||||
from opencompass.utils.text_postprocessors import match_answer_pattern
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import generic_llmjudge_postprocess
|
||||
|
||||
with read_base():
|
||||
# from .....configs.datasets.mmlu.mmlu_all_sets import mmlu_all_sets
|
||||
from .mmlu_stem_sets import mmlu_all_sets
|
||||
# None of the MMLU datasets on HuggingFace is parsed correctly, so we use our own dataset reader
|
||||
# Please download the dataset from https://people.eecs.berkeley.edu/~hendrycks/data.tar
|
||||
|
||||
QUERY_TEMPLATE = """
|
||||
Answer the following multiple choice question. The last line of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of ABCD.
|
||||
|
||||
{input}
|
||||
|
||||
A) {A}
|
||||
B) {B}
|
||||
C) {C}
|
||||
D) {D}
|
||||
""".strip()
|
||||
|
||||
|
||||
GRADER_TEMPLATE = """
|
||||
Please as a grading expert, judge whether the final answers given by the candidates below are consistent with the standard answers, that is, whether the candidates answered correctly.
|
||||
|
||||
Here are some evaluation criteria:
|
||||
1. Please refer to the given standard answer. You don't need to re-generate the answer to the question because the standard answer has been given. You only need to judge whether the candidate's answer is consistent with the standard answer according to the form of the question. Don't try to answer the original question. You can assume that the standard answer is definitely correct.
|
||||
2. Because the candidate's answer may be different from the standard answer in the form of expression, before making a judgment, please understand the question and the standard answer first, and then judge whether the candidate's answer is correct, but be careful not to try to answer the original question.
|
||||
3. Some answers may contain multiple items, such as multiple-choice questions, multiple-select questions, fill-in-the-blank questions, etc. As long as the answer is the same as the standard answer, it is enough. For multiple-select questions and multiple-blank fill-in-the-blank questions, the candidate needs to answer all the corresponding options or blanks correctly to be considered correct.
|
||||
4. Some answers may be expressed in different ways, such as some answers may be a mathematical expression, some answers may be a textual description, as long as the meaning expressed is the same. And some formulas are expressed in different ways, but they are equivalent and correct.
|
||||
|
||||
Please judge whether the following answers are consistent with the standard answer based on the above criteria. Grade the predicted answer of this new question as one of:
|
||||
A: CORRECT
|
||||
B: INCORRECT
|
||||
Just return the letters "A" or "B", with no text around it.
|
||||
|
||||
Here is your task. Simply reply with either CORRECT, INCORRECT. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
|
||||
|
||||
<Original Question Begin>: {input}\n A) {A}\n B) {B}\n C) {C}\n D) {D}\n<Original Question End>\n\n
|
||||
<Gold Target Begin>: \n{target}\n<Gold Target End>\n\n
|
||||
<Predicted Answer Begin>: \n{prediction}\n<Predicted End>\n\n
|
||||
Judging the correctness of candidates' answers:
|
||||
""".strip()
|
||||
|
||||
mmlu_reader_cfg = dict(
|
||||
input_columns=['input', 'A', 'B', 'C', 'D'],
|
||||
output_column='target',
|
||||
train_split='dev')
|
||||
|
||||
mmlu_datasets = []
|
||||
for name in mmlu_all_sets:
|
||||
mmlu_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt=QUERY_TEMPLATE),
|
||||
],
|
||||
),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer),
|
||||
)
|
||||
|
||||
mmlu_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
begin=[
|
||||
dict(
|
||||
role='SYSTEM',
|
||||
fallback_role='HUMAN',
|
||||
prompt="You are a helpful assistant who evaluates the correctness and quality of models' outputs.")
|
||||
],
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = GRADER_TEMPLATE
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=generic_llmjudge_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
mmlu_datasets.append(
|
||||
dict(
|
||||
abbr=f'lukaemon_mmlu_{name}',
|
||||
type=MMLUDataset,
|
||||
path='opencompass/mmlu',
|
||||
name=name,
|
||||
reader_cfg=mmlu_reader_cfg,
|
||||
infer_cfg=mmlu_infer_cfg,
|
||||
eval_cfg=mmlu_eval_cfg,
|
||||
mode='singlescore',
|
||||
))
|
3
opencompass/configs/datasets/mmlu/mmlu_stem_sets.py
Normal file
3
opencompass/configs/datasets/mmlu/mmlu_stem_sets.py
Normal file
@ -0,0 +1,3 @@
|
||||
mmlu_all_sets = [
|
||||
'abstract_algebra', 'anatomy', 'astronomy', 'college_biology', 'college_chemistry', 'college_computer_science', 'college_mathematics', 'college_physics', 'computer_security', 'conceptual_physics', 'electrical_engineering', 'elementary_mathematics', 'high_school_biology', 'high_school_chemistry', 'high_school_computer_science', 'high_school_mathematics', 'high_school_physics', 'high_school_statistics', 'machine_learning'
|
||||
]
|
@ -0,0 +1,169 @@
|
||||
# CompassArena-SubjectiveBench (Pairwise Eval with Bradley-Terry Model)
|
||||
|
||||
## Introduction
|
||||
|
||||
The following introduction comes from the abstract of [Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference](https://arxiv.org/abs/2403.04132):
|
||||
|
||||
>Large Language Models (LLMs) have unlocked new capabilities and applications; however, evaluating the alignment with human preferences still poses significant challenges. To address this issue, we introduce Chatbot Arena, an open platform for evaluating LLMs based on human preferences. Our methodology employs a pairwise comparison approach and leverages input from a diverse user base through crowdsourcing. The platform has been operational for several months, amassing over 240K votes. This paper describes the platform, analyzes the data we have collected so far, and explains the tried-and-true statistical methods we are using for efficient and accurate evaluation and ranking of models. We confirm that the crowdsourced questions are sufficiently diverse and discriminating and that the crowdsourced human votes are in good agreement with those of expert raters. These analyses collectively establish a robust foundation for the credibility of Chatbot Arena. Because of its unique value and openness, Chatbot Arena has emerged as one of the most referenced LLM leaderboards, widely cited by leading LLM developers and companies.
|
||||
|
||||
For this dataset, we adapt the Bradley-Terry rating system from FastChat to the subjective evaluation setting, replacing human evaluators with an LLM-as-a-judge.
|
||||
|
||||
|
||||
## Official Links
|
||||
|
||||
- Paper: [Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference](https://arxiv.org/abs/2403.04132)
|
||||
- GitHub Repository: [FastChat](https://github.com/lm-sys/FastChat/tree/main)
|
||||
|
||||
|
||||
## Overview and Usage
|
||||
|
||||
### Inference
|
||||
|
||||
During the inference stage, each LLM generates a response to the prompt it is presented with (a single question for single-turn, or an entire conversation for multi-turn).
|
||||
|
||||
### Evaluation
|
||||
|
||||
During the evaluation stage, the judge model responds with a critique and chooses the LLM with the better answer for each pair. This preference later forms the "winner" response variable in the postprocessor. Note that the predictions of each model must be saved (by setting `keep_predictions=True` in the evaluator config, as in the excerpt below) so that the postprocessor can calculate style features. See this [example](opencompass/configs/datasets/subjective/compass_arena_subjective_bench/singleturn/pairwise_bt_judge.py) for more details.
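The snippet below is an adapted excerpt from the multi-turn `pairwise_bt_judge.py` config added in this commit; it shows where `keep_predictions` sits in the evaluator config (imports abbreviated to the ones needed here).

```python
from opencompass.openicl.icl_evaluator import LMEvaluator
from opencompass.openicl.icl_prompt_template import PromptTemplate
from opencompass.datasets import compassarena_subjectiveeval_bradleyterry_postprocess

subjective_eval_cfg = dict(
    evaluator=dict(
        type=LMEvaluator,
        pack_all_predictions=True,
        prompt_template=dict(
            type=PromptTemplate,
            template=dict(round=[dict(role='HUMAN', prompt='{pairwise_judge_prompt}')]),
        ),
        dict_postprocessor=dict(type=compassarena_subjectiveeval_bradleyterry_postprocess),
        # Must stay on so both models' predictions are saved for style features
        keep_predictions=True,
    ),
    pred_role='BOT',
)
```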
|
||||
|
||||
|
||||
#### Postprocessor
|
||||
After evaluation by the judge model, we gather the pairwise matchups and any additional group variables (e.g. difficulty, category) in the postprocessor. Note that the LLM predictions ("prediction1" and "prediction2") must be passed on from the inference stage; otherwise an error will be thrown. A sketch of the kind of record the postprocessor works with is shown below.
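As a rough illustration only (the field names are hypothetical and not taken from the actual implementation), each postprocessed matchup can be thought of as a record like this:

```python
# Hypothetical shape of one pairwise matchup after postprocessing; the real
# postprocessor output may use different field names and carry extra metadata.
matchup = {
    'model_a': 'qwen2.5-7b-instruct-turbomind',
    'model_b': 'Qwen-2.5-72B-Instruct',   # the base (reference) model
    'winner': 'model_b',                  # parsed from the judge's verdict
    'difficulty': 'Medium',               # group variable from the dataset
    'category': '代码',                    # group variable from the dataset
    # style features computed from the saved predictions of both models
    'sum_assistant_tokens_a': 512,
    'sum_assistant_tokens_b': 734,
}
```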
|
||||
|
||||
|
||||
### Summary
|
||||
|
||||
After the judge model has produced its verdicts in the evaluation stage, we fit a Bradley-Terry (BT) statistical model to estimate the rating and ranking of each LLM, with an option to include style features and control variables for groups. The settings below control the specification of the BT model as well as how results are reported (an example configuration is sketched after this list):
|
||||
|
||||
- `rating_system`: The rating system used. Currently only supports "bradleyterry".
|
||||
|
||||
- `num_bootstrap`: The number of bootstraps for estimating the confidence intervals of ratings.
|
||||
|
||||
- `with_control_vars`: Whether to include additional covariates (including style features and group variables) when fitting the BT model.
|
||||
|
||||
- `normalize_style_features`: Whether to normalize style features BEFORE fitting the BT model (implementation by FastChat). Turn this off for easier interpretation of odds ratios (when `odds_ratio==True`).
|
||||
|
||||
- `odds_ratio`: Whether to report odds ratios ($e^{\beta_i}$) instead of the original coefficients. See the section "Estimated Coefficients of Control Variables" for more explanation.
|
||||
|
||||
- `groups`: List of group variables to include while fitting the BT model. These must be available in the input dataset for each observation. Group variables are assumed to be categorical and one-hot encoding is automatically performed before model fitting.
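A minimal sketch of how these options might be passed to the summarizer is shown below. The import location and the concrete values are assumptions for illustration; the authoritative reference is `configs/eval_compassarena_subjectivebench_bradleyterry.py`.

```python
# Sketch only: the import path and the values shown are assumptions, check the
# evaluation config (configs/eval_compassarena_subjectivebench_bradleyterry.py)
# for the actual usage of CompassArenaBradleyTerrySummarizer.
from opencompass.summarizers import CompassArenaBradleyTerrySummarizer

summarizer = dict(
    type=CompassArenaBradleyTerrySummarizer,
    rating_system='bradleyterry',
    num_bootstrap=100,
    with_control_vars=True,
    normalize_style_features=False,  # easier odds-ratio interpretation
    odds_ratio=True,
    groups=['difficulty', 'category'],
)
```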
|
||||
|
||||
|
||||
### Config Files
|
||||
|
||||
1. Dataset configs:
|
||||
|
||||
- single turn: `opencompass/configs/datasets/subjective/compass_arena_subjective_bench/singleturn/pairwise_bt_judge.py`
|
||||
- multi-turn: `opencompass/configs/datasets/subjective/compass_arena_subjective_bench/multiturn/pairwise_bt_judge.py`
|
||||
|
||||
2. Evaluation config:
|
||||
|
||||
- `configs/eval_compassarena_subjectivebench_bradleyterry.py`
|
||||
|
||||
## Evaluation Results
|
||||
|
||||
### Bradley-Terry Rating
|
||||
|
||||
The rating of each model is a scaled version of the estimated "strength" coefficients of the fitted Bradley-Terry model. We use the Elo scale with an initial rating of 1000 and a scaling factor of 400 to match the scale used in [CompassArena](https://opencompass.org.cn/arena). Furthermore, we anchor the ratings on the base model, as it naturally represents the reference model we are comparing against. This is why the base model always has a rating of 1000 with zero standard deviation.
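The scaling can be pictured with the small sketch below. It assumes the fitted strength coefficients are on the scale the summarizer uses internally, so treat it as illustrative rather than the exact implementation.

```python
def to_elo_scale(strengths, base_model, init_rating=1000.0, scale=400.0):
    """Map fitted Bradley-Terry strength coefficients to Elo-style ratings
    anchored on the base model (illustrative sketch, not the exact code)."""
    anchor = strengths[base_model]
    return {m: init_rating + scale * (s - anchor) for m, s in strengths.items()}

# Illustrative numbers only: the base model is pinned at 1000, and a model whose
# strength is 0.18 lower lands near 928, close to the table below.
ratings = to_elo_scale(
    {'Qwen-2.5-72B-Instruct': 0.00, 'qwen2.5-32b-instruct-turbomind': -0.18},
    base_model='Qwen-2.5-72B-Instruct',
)
```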
|
||||
|
||||
```
|
||||
dataset version base_model metric mode ranking ranking_ub model_name rating rating_q975 rating_q025 std_dev num_battles
|
||||
0 singleturn 635142 Qwen-2.5-72B-Instruct bt_rating gen 1 1 Qwen-2.5-72B-Instruct 1000.00 1000.00 1000.00 0.00 4229
|
||||
1 singleturn 635142 Qwen-2.5-72B-Instruct bt_rating gen 2 2 qwen2.5-32b-instruct-turbomind 926.54 941.72 908.29 8.21 1055
|
||||
2 singleturn 635142 Qwen-2.5-72B-Instruct bt_rating gen 3 2 qwen2.5-14b-instruct-turbomind 907.23 921.08 897.09 6.68 1055
|
||||
3 singleturn 635142 Qwen-2.5-72B-Instruct bt_rating gen 4 2 qwen2-7b-instruct-turbomind 901.99 919.06 885.95 8.44 1060
|
||||
4 singleturn 635142 Qwen-2.5-72B-Instruct bt_rating gen 5 2 qwen2.5-7b-instruct-turbomind 893.03 910.58 877.02 8.65 1059
|
||||
5 multiturn fff2b4 Qwen-2.5-72B-Instruct bt_rating unknown 1 1 Qwen-2.5-72B-Instruct 1000.00 1000.00 1000.00 0.00 1127
|
||||
6 multiturn fff2b4 Qwen-2.5-72B-Instruct bt_rating unknown 2 2 qwen2.5-32b-instruct-turbomind 942.53 972.14 903.84 18.89 282
|
||||
7 multiturn fff2b4 Qwen-2.5-72B-Instruct bt_rating unknown 3 2 qwen2-7b-instruct-turbomind 940.34 974.22 895.80 21.72 282
|
||||
8 multiturn fff2b4 Qwen-2.5-72B-Instruct bt_rating unknown 4 2 qwen2.5-14b-instruct-turbomind 929.09 959.98 896.80 18.16 282
|
||||
9 multiturn fff2b4 Qwen-2.5-72B-Instruct bt_rating unknown 5 2 qwen2.5-7b-instruct-turbomind 907.07 936.71 876.88 16.87 281
|
||||
```
|
||||
|
||||
### Estimated Coefficients of Control Variables
|
||||
|
||||
The scale and interpretation of these numbers depend on the summarizer settings for `CompassArenaBradleyTerrySummarizer`. If `normalize_style_features` is set, the style features are the normalized relative difference between model A and B, with the following form:
|
||||
$$
|
||||
\text{normalize }\left(\frac{\text{feature}_A - \text{feature}_B}{\text{feature}_A + \text{feature}_B}\right)
|
||||
$$
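The quantity inside the normalization is simply the relative difference in a style feature between the two answers of a matchup; a tiny sketch (with made-up token counts) is:

```python
def relative_difference(feature_a, feature_b):
    """(feature_A - feature_B) / (feature_A + feature_B), i.e. the style
    feature before the optional normalization step controlled by
    `normalize_style_features`. Sketch only."""
    denom = feature_a + feature_b
    return 0.0 if denom == 0 else (feature_a - feature_b) / denom

rel_len = relative_difference(734, 512)  # response lengths in tokens, ~0.178
```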
|
||||
|
||||
See [Does Style Matter?](https://blog.lmarena.ai/blog/2024/style-control/) for more information.
|
||||
|
||||
Additionally, if `odds_ratio` is set, the odds ratios are returned instead of the raw coefficients. In other words, we report:
|
||||
|
||||
$$
|
||||
\text{OddsRatio}_i = \frac{e^{\beta_0 + \beta_i(x_i+1) + \sum_{j\ne i}^m\beta_jx_j}}{e^{\beta_0 + \beta_ix_i + \sum_{j\ne i}^m\beta_jx_j}} = e^{\beta_i}
|
||||
$$
|
||||
|
||||
which can be interpreted as the multiplicative increase in odds for every 1-unit increase in $x_i$.
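For instance, a raw coefficient of roughly 1.88 (back-derived from the reported odds ratio below, purely for illustration) corresponds to:

```python
from math import exp

beta = 1.884                         # illustrative raw coefficient
odds_ratio = exp(beta)               # ~6.58: odds multiply by ~6.6 per 1-unit increase
pct_change = (odds_ratio - 1) * 100  # ~+558% change in the odds of winning
```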
|
||||
|
||||
For example, the following results are reported with `normalize_style_features==False` and `odds_ratio==True`:
|
||||
```
|
||||
{
|
||||
"singleturn": {
|
||||
"Qwen-2.5-72B-Instruct": {
|
||||
"sum_assistant_tokens": 6.577376545800252,
|
||||
"header_count": 1.4880636137846999,
|
||||
"list_count": 1.1558594451186806,
|
||||
"bold_count": 1.7918326386585717,
|
||||
"difficulty_Advanced": 1.0281620474711213,
|
||||
"difficulty_Easy": 1.0557367496235666,
|
||||
"difficulty_Medium": 1.1768581931447049,
|
||||
"category_人类对齐": 0.8087074923883157,
|
||||
"category_代码": 1.2717334332407775,
|
||||
"category_创作": 1.0430652013278148,
|
||||
"category_推理": 1.1592759054335746,
|
||||
"category_日常对话": 0.979047716903164,
|
||||
"category_自然语言处理": 1.006707704304149,
|
||||
"category_角色扮演": 1.2296103927210726,
|
||||
"category_重写": 0.7952522120597192,
|
||||
"category_领域知识问答": 1.0658003517547319
|
||||
}
|
||||
},
|
||||
"multiturn": {
|
||||
"Qwen-2.5-72B-Instruct": {
|
||||
"sum_assistant_tokens": 4.470153434554273,
|
||||
"header_count": 1.130542616688942,
|
||||
"list_count": 1.4753419673439991,
|
||||
"bold_count": 1.476348454534956,
|
||||
"difficulty_Advanced": 1.1668553174437737,
|
||||
"difficulty_Easy": 1.142118410006132,
|
||||
"difficulty_Medium": 0.9651479035385795,
|
||||
"category_人类对齐": 0.9606676068409767,
|
||||
"category_代码": 0.9348722519214725,
|
||||
"category_创作": 1.0362490715530026,
|
||||
"category_推理": 0.8546385641566406,
|
||||
"category_日常对话": 1.0481269627721679,
|
||||
"category_自然语言处理": 1.358391853082614,
|
||||
"category_角色扮演": 1.0432636535119493,
|
||||
"category_重写": 0.7398232857603452,
|
||||
"category_领域知识问答": 1.4715970942932421
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
Example Interpretation:
|
||||
- For the single-turn dataset with "Qwen-2.5-72B-Instruct" as the base model, holding all else constant, the odds of winning are about 6.6 times greater for every unit increase in the (unnormalized) relative difference in response length between model A and model B.
|
||||
|
||||
- For the multi-turn dataset with "Qwen-2.5-72B-Instruct" as the base model, holding all else constant, the odds of winning are about 26% lower (1 − 0.74 ≈ 0.26) for "rewrite" (重写) category questions compared with non-rewrite questions.
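The arithmetic behind both readings, using the values from the JSON block above:

```python
or_length_single = 6.577   # singleturn 'sum_assistant_tokens'
or_rewrite_multi = 0.7398  # multiturn 'category_重写'

print(f'odds multiplied by ~{or_length_single:.1f} per unit of relative length difference')
print(f'odds ~{(1 - or_rewrite_multi) * 100:.0f}% lower for 重写 (rewrite) questions')
```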
|
||||
|
||||
|
||||
## Citation
|
||||
```
|
||||
@misc{chiang2024chatbotarenaopenplatform,
|
||||
title={Chatbot Arena: An Open Platform for Evaluating LLMs by Human Preference},
|
||||
author={Wei-Lin Chiang and Lianmin Zheng and Ying Sheng and Anastasios Nikolas Angelopoulos and Tianle Li and Dacheng Li and Hao Zhang and Banghua Zhu and Michael Jordan and Joseph E. Gonzalez and Ion Stoica},
|
||||
year={2024},
|
||||
eprint={2403.04132},
|
||||
archivePrefix={arXiv},
|
||||
primaryClass={cs.AI},
|
||||
url={https://arxiv.org/abs/2403.04132},
|
||||
}
|
||||
|
||||
@misc{zheng2023judging,
|
||||
title={Judging LLM-as-a-judge with MT-Bench and Chatbot Arena},
|
||||
author={Lianmin Zheng and Wei-Lin Chiang and Ying Sheng and Siyuan Zhuang and Zhanghao Wu and Yonghao Zhuang and Zi Lin and Zhuohan Li and Dacheng Li and Eric. P Xing and Hao Zhang and Joseph E. Gonzalez and Ion Stoica},
|
||||
year={2023},
|
||||
eprint={2306.05685},
|
||||
archivePrefix={arXiv},
|
||||
primaryClass={cs.CL}
|
||||
}
|
||||
```
|
@ -0,0 +1,85 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
from opencompass.datasets import ( # compassarena_subjectiveeval_pairwise_postprocess,
|
||||
CompassArenaSubjectiveBench,
|
||||
compassarena_subjectiveeval_bradleyterry_postprocess,
|
||||
)
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.openicl.icl_inferencer import ChatInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
|
||||
subjective_reader_cfg = dict(
|
||||
input_columns=['dialogue', 'pairwise_judge_prompt'],
|
||||
output_column='judge',
|
||||
)
|
||||
|
||||
subjective_all_sets = [
|
||||
'multiturn',
|
||||
]
|
||||
|
||||
qwen_2_5_72b = [
|
||||
dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)
|
||||
]
|
||||
|
||||
compassarena_subjectivebench_bradleyterry_multiturn_datasets = []
|
||||
|
||||
|
||||
for _name in subjective_all_sets:
|
||||
subjective_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{dialogue}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(
|
||||
type=ChatInferencer, max_seq_len=8192, max_out_len=2048, infer_mode='every'
|
||||
),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
pack_all_predictions=True,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{pairwise_judge_prompt}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
dict_postprocessor=dict(
|
||||
type=compassarena_subjectiveeval_bradleyterry_postprocess
|
||||
),
|
||||
keep_predictions=True, # Must be turned on to save predictions from model pairs to calculate style features in postprocessor
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
compassarena_subjectivebench_bradleyterry_multiturn_datasets.append(
|
||||
dict(
|
||||
abbr=f'{_name}',
|
||||
type=CompassArenaSubjectiveBench,
|
||||
path='./data/subjective/CompassArenaSubjectiveBench',
|
||||
name=_name,
|
||||
reader_cfg=subjective_reader_cfg,
|
||||
infer_cfg=subjective_infer_cfg,
|
||||
eval_cfg=subjective_eval_cfg,
|
||||
mode='m2n',
|
||||
infer_order='random',
|
||||
base_models=qwen_2_5_72b,
|
||||
given_pred=[
|
||||
{
|
||||
'abbr': 'Qwen-2.5-72B-Instruct',
|
||||
'path': './data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct',
|
||||
}
|
||||
],
|
||||
)
|
||||
)
|
@ -1,40 +1,47 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
from opencompass.datasets import (
|
||||
CompassArenaSubjectiveBench,
|
||||
compassarena_subjectiveeval_pairwise_postprocess,
|
||||
)
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.openicl.icl_inferencer import ChatInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import ChatInferencer
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import CompassArenaSubjectiveBench, compassarena_subjectiveeval_pairwise_postprocess
|
||||
from mmengine.config import read_base
|
||||
|
||||
subjective_reader_cfg = dict(
|
||||
input_columns=['dialogue', 'pairwise_judge_prompt'],
|
||||
output_column='judge',
|
||||
)
|
||||
)
|
||||
|
||||
subjective_all_sets = [
|
||||
'multiturn',
|
||||
]
|
||||
|
||||
qwen_2_5_72b = [dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)]
|
||||
qwen_2_5_72b = [
|
||||
dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)
|
||||
]
|
||||
|
||||
compassarena_subjectivebench_multiturn_datasets = []
|
||||
|
||||
|
||||
for _name in subjective_all_sets:
|
||||
subjective_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='{dialogue}'
|
||||
),
|
||||
]),
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{dialogue}'),
|
||||
]
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=8192, max_out_len=2048, infer_mode='every'),
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(
|
||||
type=ChatInferencer, max_seq_len=8192, max_out_len=2048, infer_mode='every'
|
||||
),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
@ -44,13 +51,13 @@ for _name in subjective_all_sets:
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = '{pairwise_judge_prompt}'
|
||||
),
|
||||
]),
|
||||
dict(role='HUMAN', prompt='{pairwise_judge_prompt}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
dict_postprocessor=dict(
|
||||
type=compassarena_subjectiveeval_pairwise_postprocess
|
||||
),
|
||||
dict_postprocessor=dict(type=compassarena_subjectiveeval_pairwise_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
@ -67,5 +74,11 @@ for _name in subjective_all_sets:
|
||||
mode='m2n',
|
||||
infer_order='double',
|
||||
base_models=qwen_2_5_72b,
|
||||
given_pred = [{'abbr':'Qwen-2.5-72B-Instruct', 'path':'./data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct'}],
|
||||
))
|
||||
given_pred=[
|
||||
{
|
||||
'abbr': 'Qwen-2.5-72B-Instruct',
|
||||
'path': './data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct',
|
||||
}
|
||||
],
|
||||
)
|
||||
)
|
||||
|
@ -0,0 +1,83 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
from opencompass.datasets import (
|
||||
CompassArenaSubjectiveBench,
|
||||
compassarena_subjectiveeval_bradleyterry_postprocess,
|
||||
compassarena_subjectiveeval_pairwise_postprocess,
|
||||
)
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
|
||||
subjective_reader_cfg = dict(
|
||||
input_columns=['question', 'pairwise_judge_prompt'],
|
||||
output_column='judge',
|
||||
)
|
||||
|
||||
subjective_all_sets = [
|
||||
'singleturn',
|
||||
]
|
||||
|
||||
qwen_2_5_72b = [
|
||||
dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)
|
||||
]
|
||||
|
||||
compassarena_subjectivebench_bradleyterry_singleturn_datasets = []
|
||||
|
||||
|
||||
for _name in subjective_all_sets:
|
||||
subjective_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{question}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=4096),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
type=LMEvaluator,
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{pairwise_judge_prompt}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
dict_postprocessor=dict(
|
||||
type=compassarena_subjectiveeval_bradleyterry_postprocess
|
||||
),
|
||||
keep_predictions=True, # Must be turned on to save predictions from model pairs to calculate style features in postprocessor
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
||||
compassarena_subjectivebench_bradleyterry_singleturn_datasets.append(
|
||||
dict(
|
||||
abbr=f'{_name}',
|
||||
type=CompassArenaSubjectiveBench,
|
||||
path='./data/subjective/CompassArenaSubjectiveBench',
|
||||
name=_name,
|
||||
reader_cfg=subjective_reader_cfg,
|
||||
infer_cfg=subjective_infer_cfg,
|
||||
eval_cfg=subjective_eval_cfg,
|
||||
mode='m2n',
|
||||
infer_order='random',
|
||||
base_models=qwen_2_5_72b,
|
||||
given_pred=[
|
||||
{
|
||||
'abbr': 'Qwen-2.5-72B-Instruct',
|
||||
'path': './data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct',
|
||||
}
|
||||
],
|
||||
)
|
||||
)
|
@ -1,40 +1,45 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
from opencompass.datasets import (
|
||||
CompassArenaSubjectiveBench,
|
||||
compassarena_subjectiveeval_pairwise_postprocess,
|
||||
)
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_prompt_template import PromptTemplate
|
||||
from opencompass.openicl.icl_retriever import ZeroRetriever
|
||||
from opencompass.openicl.icl_inferencer import GenInferencer
|
||||
from opencompass.openicl.icl_evaluator import LMEvaluator
|
||||
from opencompass.datasets import CompassArenaSubjectiveBench, compassarena_subjectiveeval_pairwise_postprocess
|
||||
from mmengine.config import read_base
|
||||
|
||||
subjective_reader_cfg = dict(
|
||||
input_columns=['question', 'pairwise_judge_prompt'],
|
||||
output_column='judge',
|
||||
)
|
||||
)
|
||||
|
||||
subjective_all_sets = [
|
||||
'singleturn',
|
||||
]
|
||||
|
||||
qwen_2_5_72b = [dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)]
|
||||
qwen_2_5_72b = [
|
||||
dict(
|
||||
abbr='Qwen-2.5-72B-Instruct',
|
||||
)
|
||||
]
|
||||
|
||||
compassarena_subjectivebench_singleturn_datasets = []
|
||||
|
||||
|
||||
for _name in subjective_all_sets:
|
||||
subjective_infer_cfg = dict(
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt='{question}'
|
||||
),
|
||||
]),
|
||||
prompt_template=dict(
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(role='HUMAN', prompt='{question}'),
|
||||
]
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=4096),
|
||||
)
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_out_len=4096),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
evaluator=dict(
|
||||
@ -43,13 +48,13 @@ for _name in subjective_all_sets:
|
||||
type=PromptTemplate,
|
||||
template=dict(
|
||||
round=[
|
||||
dict(
|
||||
role='HUMAN',
|
||||
prompt = '{pairwise_judge_prompt}'
|
||||
),
|
||||
]),
|
||||
dict(role='HUMAN', prompt='{pairwise_judge_prompt}'),
|
||||
]
|
||||
),
|
||||
),
|
||||
dict_postprocessor=dict(
|
||||
type=compassarena_subjectiveeval_pairwise_postprocess
|
||||
),
|
||||
dict_postprocessor=dict(type=compassarena_subjectiveeval_pairwise_postprocess),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
@ -66,5 +71,11 @@ for _name in subjective_all_sets:
|
||||
mode='m2n',
|
||||
infer_order='double',
|
||||
base_models=qwen_2_5_72b,
|
||||
given_pred = [{'abbr':'Qwen-2.5-72B-Instruct', 'path':'./data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct'}],
|
||||
))
|
||||
given_pred=[
|
||||
{
|
||||
'abbr': 'Qwen-2.5-72B-Instruct',
|
||||
'path': './data/subjective/CompassArenaSubjectiveBench/Qwen-2.5-72B-Instruct',
|
||||
}
|
||||
],
|
||||
)
|
||||
)
|
||||
|
@ -149,6 +149,6 @@ for _name, _prompt in sub_map.items():
|
||||
mode='m2n',
|
||||
infer_order='double',
|
||||
base_models=gpt4,
|
||||
summarizer = dict(type=CompassArenaSummarizer, summary_type='half_add'),
|
||||
summarizer = dict(type=CompassArenaSummarizer, summary_type='single'),
|
||||
given_pred = [{'abbr':'gpt4-turbo', 'path':'./data/subjective/compass_arena/gpt4-turbo'}]
|
||||
))
|
||||
|
@ -105,7 +105,7 @@ for _name, _prompt in sub_map.items():
|
||||
]),
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=GenInferencer, max_seq_len=4096, max_out_len=2048),
|
||||
inferencer=dict(type=GenInferencer, max_seq_len=4096, max_out_len=4096),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
@ -120,7 +120,7 @@ for _name, _prompt in sub_map.items():
|
||||
),
|
||||
]),
|
||||
),
|
||||
dict_postprocessor=dict(type=compassarena_postprocess, summary_type='half_add', check_pos_bias=True),
|
||||
dict_postprocessor=dict(type=compassarena_postprocess, summary_type='single', check_pos_bias=True),
|
||||
),
|
||||
pred_role='BOT',
|
||||
)
|
||||
|
@ -20,7 +20,7 @@ subjective_infer_cfg = dict(
|
||||
template="""{dialogue}"""
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=32768, max_out_len=4096, infer_mode='last'),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=32768, infer_mode='last'),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
|
@ -20,7 +20,7 @@ subjective_infer_cfg = dict(
|
||||
template="""{dialogue}"""
|
||||
),
|
||||
retriever=dict(type=ZeroRetriever),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=4096, max_out_len=512, infer_mode='last'),
|
||||
inferencer=dict(type=ChatInferencer, max_seq_len=32768, infer_mode='last'),
|
||||
)
|
||||
|
||||
subjective_eval_cfg = dict(
|
||||
|
@ -5,6 +5,7 @@ models = [
|
||||
type=TurboMindModelwithChatTemplate,
|
||||
abbr='deepseek-v2_5-turbomind',
|
||||
path='deepseek-ai/DeepSeek-V2.5',
|
||||
backend='pytorch',
|
||||
engine_config=dict(
|
||||
session_len=7168,
|
||||
max_batch_size=4,
|
||||
|
@ -0,0 +1,21 @@
|
||||
from opencompass.models import TurboMindModelwithChatTemplate
|
||||
|
||||
models = [
|
||||
dict(
|
||||
type=TurboMindModelwithChatTemplate,
|
||||
abbr='deepseek-v2_5-1210-turbomind',
|
||||
path='deepseek-ai/DeepSeek-V2.5-1210',
|
||||
backend='pytorch',
|
||||
engine_config=dict(
|
||||
session_len=7168,
|
||||
max_batch_size=4,
|
||||
tp=8,
|
||||
cache_max_entry_count=0.7,
|
||||
),
|
||||
gen_config=dict(top_k=1, temperature=1e-6, top_p=0.9),
|
||||
max_seq_len=7168,
|
||||
max_out_len=2048,
|
||||
batch_size=4,
|
||||
run_cfg=dict(num_gpus=8),
|
||||
)
|
||||
]
|
@ -0,0 +1,16 @@
|
||||
from opencompass.models import TurboMindModelwithChatTemplate
|
||||
|
||||
models = [
|
||||
dict(
|
||||
type=TurboMindModelwithChatTemplate,
|
||||
abbr='llama-3_3-70b-instruct-turbomind',
|
||||
path='meta-llama/Llama-3.3-70B-Instruct',
|
||||
engine_config=dict(max_batch_size=16, tp=4),
|
||||
gen_config=dict(top_k=1, temperature=1e-6, top_p=0.9, max_new_tokens=8192),
|
||||
max_seq_len=16384,
|
||||
max_out_len=8192,
|
||||
batch_size=16,
|
||||
run_cfg=dict(num_gpus=4),
|
||||
stop_words=['<|end_of_text|>', '<|eot_id|>', '<|eom_id|>'],
|
||||
)
|
||||
]
|
15
opencompass/configs/models/qwq/lmdeploy_qwq_32b_preview.py
Normal file
15
opencompass/configs/models/qwq/lmdeploy_qwq_32b_preview.py
Normal file
@ -0,0 +1,15 @@
|
||||
from opencompass.models import TurboMindModelwithChatTemplate
|
||||
|
||||
models = [
|
||||
dict(
|
||||
type=TurboMindModelwithChatTemplate,
|
||||
abbr='QwQ-32B-Preview',
|
||||
path='Qwen/QwQ-32B-Preview',
|
||||
engine_config=dict(session_len=32768, max_batch_size=16, tp=2),
|
||||
gen_config=dict(top_k=1, temperature=1e-6, top_p=0.9, max_new_tokens=8192),
|
||||
max_seq_len=32768,
|
||||
max_out_len=8192,
|
||||
batch_size=16,
|
||||
run_cfg=dict(num_gpus=2),
|
||||
)
|
||||
]
|
@ -0,0 +1,16 @@
|
||||
from opencompass.models import TurboMindModelwithChatTemplate
|
||||
|
||||
models = [
|
||||
dict(
|
||||
type=TurboMindModelwithChatTemplate,
|
||||
abbr='Skywork-o1-Open-Llama-3_1-8B-turbomind',
|
||||
path='Skywork/Skywork-o1-Open-Llama-3.1-8B',
|
||||
engine_config=dict(max_batch_size=16, tp=1),
|
||||
gen_config=dict(top_k=1, temperature=1e-6, top_p=0.9, max_new_tokens=4096),
|
||||
max_seq_len=16384,
|
||||
max_out_len=8192,
|
||||
batch_size=16,
|
||||
run_cfg=dict(num_gpus=1),
|
||||
stop_words=['<|end_of_text|>', '<|eot_id|>'],
|
||||
)
|
||||
]
|
@ -10,6 +10,7 @@ from .arc_prize_public_evaluation import * # noqa: F401, F403
|
||||
from .ax import * # noqa: F401, F403
|
||||
from .babilong import * # noqa: F401, F403
|
||||
from .bbh import * # noqa: F401, F403
|
||||
from .bigcodebench import * # noqa: F401, F403
|
||||
from .boolq import * # noqa: F401, F403
|
||||
from .bustum import * # noqa: F401, F403
|
||||
from .c3 import * # noqa: F401, F403
|
||||
@ -19,6 +20,7 @@ from .ceval import * # noqa: F401, F403
|
||||
from .charm import * # noqa: F401, F403
|
||||
from .chembench import * # noqa: F401, F403
|
||||
from .chid import * # noqa: F401, F403
|
||||
from .chinese_simpleqa import * # noqa: F401, F403
|
||||
from .cibench import * # noqa: F401, F403
|
||||
from .circular import * # noqa: F401, F403
|
||||
from .civilcomments import * # noqa: F401, F403
|
||||
@ -49,6 +51,7 @@ from .flores import * # noqa: F401, F403
|
||||
from .game24 import * # noqa: F401, F403
|
||||
from .gaokao_math import * # noqa: F401, F403
|
||||
from .GaokaoBench import * # noqa: F401, F403
|
||||
from .generic import * # noqa: F401, F403
|
||||
from .govrepcrs import * # noqa: F401, F403
|
||||
from .gpqa import * # noqa: F401, F403
|
||||
from .gsm8k import * # noqa: F401, F403
|
||||
@ -73,6 +76,8 @@ from .LCBench import * # noqa: F401, F403
|
||||
from .lcsts import * # noqa: F401, F403
|
||||
from .leval import * # noqa: F401, F403
|
||||
from .livecodebench import * # noqa: F401, F403
|
||||
from .livemathbench import * # noqa: F401, F403
|
||||
from .livereasonbench import * # noqa: F401, F403
|
||||
from .llm_compression import LLMCompressionDataset # noqa: F401, F403
|
||||
from .longbench import * # noqa: F401, F403
|
||||
from .lveval import * # noqa: F401, F403
|
||||
|
@ -12,7 +12,7 @@ from .base import BaseDataset
|
||||
class Aime2024Dataset(BaseDataset):
|
||||
|
||||
@staticmethod
|
||||
def load(path):
|
||||
def load(path, **kwargs):
|
||||
path = get_data_path(path)
|
||||
dataset = []
|
||||
with open(path, 'r') as f:
|
||||
|
1
opencompass/datasets/bigcodebench/__init__.py
Normal file
1
opencompass/datasets/bigcodebench/__init__.py
Normal file
@ -0,0 +1 @@
|
||||
from .bigcodebench import BigCodeBenchDataset, BigCodeBenchEvaluator # noqa
|
169
opencompass/datasets/bigcodebench/bigcodebench.py
Normal file
169
opencompass/datasets/bigcodebench/bigcodebench.py
Normal file
@ -0,0 +1,169 @@
|
||||
# Copyright (c) 2024, BigCodeBench and its contributors.
|
||||
# Copyright (c) 2023, OpenCompass and its contributors.
|
||||
|
||||
import os
|
||||
import time
|
||||
from concurrent.futures._base import CancelledError
|
||||
|
||||
import httpx
|
||||
from datasets import Dataset, DatasetDict
|
||||
from gradio_client import Client, handle_file
|
||||
|
||||
from opencompass.openicl.icl_evaluator import BaseEvaluator
|
||||
from opencompass.utils import JSONToolkit # noqa: F401, F403
|
||||
from opencompass.utils import (check_url_accessibility, get_data_path,
|
||||
get_logger, setup_proxies)
|
||||
|
||||
from ..base import BaseDataset
|
||||
from .extractor import extract_code_generation
|
||||
|
||||
|
||||
class BigCodeBenchDataset(BaseDataset):
|
||||
|
||||
@staticmethod
|
||||
def load(path: str = 'opencompass/bigcodebench',
|
||||
local_mode: bool = False,
|
||||
release_version: str = 'v0.1.2',
|
||||
dataset_version: str = 'full'):
|
||||
"""
|
||||
Args:
|
||||
path (str): The path to the dataset.
|
||||
local_mode (bool): Whether to use the given local path or
|
||||
download the dataset automatically.
|
||||
release_version (str): The release version of the dataset.
|
||||
dataset_version (str): The data version of the dataset.
|
||||
Only supports ['full', 'hard'].
|
||||
"""
|
||||
assert dataset_version in ['full', 'hard'], \
|
||||
'dataset_version should be one of ["full", "hard"], '
|
||||
f'but got {dataset_version}'
|
||||
path = get_data_path(path, local_mode=local_mode)
|
||||
dataset = DatasetDict()
|
||||
# Valid Keys:
|
||||
# 'task_id', 'complete_prompt', 'instruct_prompt',
|
||||
# 'canonical_solution', 'code_prompt', 'test',
|
||||
# 'entry_point', 'doc_struct', 'libs'
|
||||
if dataset_version == 'full':
|
||||
items = JSONToolkit.read_jsonl(
|
||||
os.path.join(path, f'BigCodeBench-{release_version}.jsonl'))
|
||||
else:
|
||||
items = JSONToolkit.read_jsonl(
|
||||
os.path.join(path,
|
||||
f'BigCodeBench-Hard-{release_version}.jsonl'))
|
||||
|
||||
dataset['train'] = Dataset.from_list(items)
|
||||
dataset['test'] = Dataset.from_list(items)
|
||||
|
||||
return dataset
|
||||
|
||||
|
||||
class BigCodeBenchEvaluator(BaseEvaluator):
|
||||
"""Evaluator for BigCodeBench.
|
||||
|
||||
Args:
|
||||
num_process_evaluate (int): number of processes to evaluate
|
||||
timeout (int): timeout for each evaluation
|
||||
release_version (str): release version of BigCodeBench
|
||||
eval_type (str): type of evaluation, either 'instruct' or 'completion'
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
release_version='v0.1.2',
|
||||
eval_type='instruct',
|
||||
remote_execute_api='https://bigcode-bigcodebench-evaluator.hf.space/', # noqa
|
||||
dataset_version: str = 'full',
|
||||
pass_k: str = '1,5,10',
|
||||
parallel: int = -1,
|
||||
min_time_limit: float = 1,
|
||||
max_as_limit: int = 30 * 1024,
|
||||
max_data_limit: int = 30 * 1024,
|
||||
max_stack_limit: int = 10,
|
||||
check_gt_only: bool = False,
|
||||
no_gt: bool = False):
|
||||
super().__init__()
|
||||
self.dataset = BigCodeBenchDataset.load(
|
||||
release_version=release_version,
|
||||
dataset_version=dataset_version)['test']
|
||||
self.eval_type = eval_type
|
||||
self.remote_execute_api = remote_execute_api
|
||||
|
||||
self.eval_kwargs = dict(subset=dataset_version,
|
||||
pass_k=pass_k,
|
||||
parallel=parallel,
|
||||
min_time_limit=min_time_limit,
|
||||
max_as_limit=max_as_limit,
|
||||
max_data_limit=max_data_limit,
|
||||
max_stack_limit=max_stack_limit,
|
||||
check_gt_only=check_gt_only,
|
||||
no_gt=no_gt)
|
||||
|
||||
def score(self, predictions, references):
|
||||
logger = get_logger()
|
||||
entrypoints = [item['entry_point'] for item in self.dataset]
|
||||
|
||||
# Append content to the end of the prompt for Completion
|
||||
if self.eval_type == 'complete':
|
||||
content = [item['complete_prompt'] for item in self.dataset]
|
||||
predictions = [
|
||||
content[idx] + item for idx, item in enumerate(predictions)
|
||||
]
|
||||
elif self.eval_type == 'instruct':
|
||||
pass
|
||||
else:
|
||||
raise ValueError(f'Unknown eval_type: {self.eval_type}')
|
||||
|
||||
# Sanitize predictions for execution
|
||||
logger.info('Start to extract code from predictions')
|
||||
sanitized_predictions = []
|
||||
for prediction, entrypoint in zip(predictions, entrypoints):
|
||||
sanitized_prediction = extract_code_generation(
|
||||
prediction, entrypoint=entrypoint)
|
||||
sanitized_predictions.append(sanitized_prediction)
|
||||
|
||||
# Prepare for submission
|
||||
submitted_contents = []
|
||||
task_ids = [item['task_id'] for item in self.dataset]
|
||||
for task_id, sanitized_prediction in zip(task_ids,
|
||||
sanitized_predictions):
|
||||
submitted_content = {
|
||||
'task_id': task_id,
|
||||
'solution': sanitized_prediction
|
||||
}
|
||||
submitted_contents.append(submitted_content)
|
||||
|
||||
submitted_contents_path = os.path.join(
|
||||
self._out_dir, 'bigcodebench_submitted_contents.jsonl')
|
||||
JSONToolkit.save_jsonl(submitted_contents, submitted_contents_path)
|
||||
logger.info(f'Dump submitted contents to {submitted_contents_path}')
|
||||
|
||||
logger.info(
|
||||
f'Start to connect to {self.remote_execute_api} for evaluating')
|
||||
# Conduct evaluation with Eval Client
|
||||
proxies = setup_proxies('BIGCODEBENCH_EVAL_PROXY_URL')
|
||||
|
||||
is_accessible, status_code = check_url_accessibility(
|
||||
self.remote_execute_api)
|
||||
if not is_accessible:
|
||||
logger.error(f'Failed to connect to {self.remote_execute_api} '
|
||||
f'with status code {status_code}')
|
||||
return False
|
||||
|
||||
while True:
|
||||
try:
|
||||
eval_client = Client(self.remote_execute_api,
|
||||
httpx_kwargs=dict(proxies=proxies))
|
||||
results, pass_at_k = eval_client.predict(
|
||||
split=self.eval_type,
|
||||
samples=handle_file(submitted_contents_path),
|
||||
api_name='/predict',
|
||||
**self.eval_kwargs)
|
||||
break
|
||||
except (httpx.ReadTimeout, CancelledError):
|
||||
logger.info('Read timeout error. Retrying in 4s...')
|
||||
time.sleep(4)
|
||||
|
||||
dump_results = {'details': results}
|
||||
dump_results.update(pass_at_k)
|
||||
|
||||
return dump_results
|
192
opencompass/datasets/bigcodebench/extractor.py
Normal file
192
opencompass/datasets/bigcodebench/extractor.py
Normal file
@ -0,0 +1,192 @@
|
||||
# Copyright (c) 2024, BigCodeBench and its contributors.
|
||||
# Copyright (c) 2023, OpenCompass and its contributors.
|
||||
|
||||
import ast
|
||||
import traceback
|
||||
from typing import Dict, Generator, List, Optional, Set, Tuple
|
||||
|
||||
from tree_sitter import Node
|
||||
from tree_sitter_languages import get_parser
|
||||
|
||||
CLASS_TYPE = 'class_definition'
|
||||
FUNCTION_TYPE = 'function_definition'
|
||||
IMPORT_TYPE = ['import_statement', 'import_from_statement']
|
||||
IDENTIFIER_TYPE = 'identifier'
|
||||
ATTRIBUTE_TYPE = 'attribute'
|
||||
RETURN_TYPE = 'return_statement'
|
||||
EXPRESSION_TYPE = 'expression_statement'
|
||||
ASSIGNMENT_TYPE = 'assignment'
|
||||
|
||||
|
||||
def syntax_check(code, verbose=False):
|
||||
try:
|
||||
ast.parse(code)
|
||||
return True
|
||||
except (SyntaxError, MemoryError):
|
||||
if verbose:
|
||||
traceback.print_exc()
|
||||
return False
|
||||
|
||||
|
||||
def code_extract(text: str) -> str:
|
||||
lines = text.split('\n')
|
||||
longest_line_pair = (0, 0)
|
||||
longest_so_far = 0
|
||||
|
||||
for i in range(len(lines)):
|
||||
for j in range(i + 1, len(lines)):
|
||||
current_lines = '\n'.join(lines[i:j + 1])
|
||||
if syntax_check(current_lines):
|
||||
current_length = sum(1 for line in lines[i:j + 1]
|
||||
if line.strip())
|
||||
if current_length > longest_so_far:
|
||||
longest_so_far = current_length
|
||||
longest_line_pair = (i, j)
|
||||
|
||||
return '\n'.join(lines[longest_line_pair[0]:longest_line_pair[1] + 1])
|
||||
|
||||
|
||||
def get_deps(nodes: List[Tuple[str, Node]]) -> Dict[str, Set[str]]:
|
||||
|
||||
def dfs_get_deps(node: Node, deps: Set[str]) -> None:
|
||||
for child in node.children:
|
||||
if child.type == IDENTIFIER_TYPE:
|
||||
deps.add(child.text.decode('utf8'))
|
||||
else:
|
||||
dfs_get_deps(child, deps)
|
||||
|
||||
name2deps = {}
|
||||
for name, node in nodes:
|
||||
deps = set()
|
||||
dfs_get_deps(node, deps)
|
||||
name2deps[name] = deps
|
||||
return name2deps
|
||||
|
||||
|
||||
def get_function_dependency(entrypoint: str,
|
||||
call_graph: Dict[str, str]) -> Set[str]:
|
||||
queue = [entrypoint]
|
||||
visited = {entrypoint}
|
||||
while queue:
|
||||
current = queue.pop(0)
|
||||
if current not in call_graph:
|
||||
continue
|
||||
for neighbour in call_graph[current]:
|
||||
if not (neighbour in visited):
|
||||
visited.add(neighbour)
|
||||
queue.append(neighbour)
|
||||
return visited
|
||||
|
||||
|
||||
def get_definition_name(node: Node) -> str:
|
||||
for child in node.children:
|
||||
if child.type == IDENTIFIER_TYPE:
|
||||
return child.text.decode('utf8')
|
||||
|
||||
|
||||
def traverse_tree(node: Node) -> Generator[Node, None, None]:
|
||||
cursor = node.walk()
|
||||
depth = 0
|
||||
|
||||
visited_children = False
|
||||
while True:
|
||||
if not visited_children:
|
||||
yield cursor.node
|
||||
if not cursor.goto_first_child():
|
||||
depth += 1
|
||||
visited_children = True
|
||||
elif cursor.goto_next_sibling():
|
||||
visited_children = False
|
||||
elif not cursor.goto_parent() or depth == 0:
|
||||
break
|
||||
else:
|
||||
depth -= 1
|
||||
|
||||
|
||||
def has_return_statement(node: Node) -> bool:
|
||||
traverse_nodes = traverse_tree(node)
|
||||
for node in traverse_nodes:
|
||||
if node.type == RETURN_TYPE:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def extract_target_code_or_empty(code: str,
|
||||
entrypoint: Optional[str] = None) -> str:
|
||||
code = code_extract(code.strip())
|
||||
code_bytes = bytes(code, 'utf8')
|
||||
parser = get_parser('python')
|
||||
tree = parser.parse(code_bytes)
|
||||
class_names = set()
|
||||
function_names = set()
|
||||
variable_names = set()
|
||||
|
||||
root_node = tree.root_node
|
||||
import_nodes = []
|
||||
definition_nodes = []
|
||||
|
||||
for child in root_node.children:
|
||||
if child.type in IMPORT_TYPE:
|
||||
import_nodes.append(child)
|
||||
elif child.type == CLASS_TYPE:
|
||||
name = get_definition_name(child)
|
||||
if not (name in class_names or name in variable_names
|
||||
or name in function_names):
|
||||
definition_nodes.append((name, child))
|
||||
class_names.add(name)
|
||||
elif child.type == FUNCTION_TYPE:
|
||||
name = get_definition_name(child)
|
||||
if not (name in function_names or name in variable_names
|
||||
or name in class_names):
|
||||
definition_nodes.append((name, child))
|
||||
function_names.add(get_definition_name(child))
|
||||
elif (child.type == EXPRESSION_TYPE
|
||||
and child.children[0].type == ASSIGNMENT_TYPE):
|
||||
subchild = child.children[0]
|
||||
name = get_definition_name(subchild)
|
||||
if not (name in variable_names or name in function_names
|
||||
or name in class_names):
|
||||
definition_nodes.append((name, subchild))
|
||||
variable_names.add(name)
|
||||
|
||||
if entrypoint:
|
||||
name2deps = get_deps(definition_nodes)
|
||||
reachable = get_function_dependency(entrypoint, name2deps)
|
||||
|
||||
sanitized_output = b''
|
||||
|
||||
for node in import_nodes:
|
||||
sanitized_output += code_bytes[node.start_byte:node.end_byte] + b'\n'
|
||||
|
||||
for pair in definition_nodes:
|
||||
name, node = pair
|
||||
if entrypoint and not (name in reachable):
|
||||
continue
|
||||
sanitized_output += code_bytes[node.start_byte:node.end_byte] + b'\n'
|
||||
|
||||
sanitized_output = sanitized_output[:-1].decode('utf8')
|
||||
|
||||
# ad-hoc approach to remove unnecessary lines, but it works
|
||||
lines = sanitized_output.splitlines()
|
||||
outer_lines = []
|
||||
for i in range(len(lines) - 1, -1, -1):
|
||||
if lines[i].startswith(' '):
|
||||
break
|
||||
if not lines[i].startswith(' ') and entrypoint in lines[i]:
|
||||
outer_lines.append(i)
|
||||
if outer_lines:
|
||||
sanitized_output = '\n'.join(lines[:outer_lines[-1]])
|
||||
return sanitized_output
|
||||
|
||||
|
||||
def extract_code_generation(model_output: str,
|
||||
entrypoint: Optional[str] = None):
|
||||
|
||||
# Extract code according to the entrypoint
|
||||
sanitized_code = extract_target_code_or_empty(model_output,
|
||||
entrypoint).strip()
|
||||
# Fallback to extract first codeblock if sanitized_code is empty
|
||||
sanitized_code = code_extract(
|
||||
model_output) if not sanitized_code else sanitized_code
|
||||
|
||||
return sanitized_code
|
205
opencompass/datasets/chinese_simpleqa.py
Normal file
205
opencompass/datasets/chinese_simpleqa.py
Normal file
@ -0,0 +1,205 @@
|
||||
import json
|
||||
import os.path as osp
|
||||
import re
|
||||
|
||||
from datasets import Dataset, DatasetDict
|
||||
|
||||
from opencompass.registry import (DICT_POSTPROCESSORS, LOAD_DATASET,
|
||||
TEXT_POSTPROCESSORS)
|
||||
from opencompass.utils import get_data_path
|
||||
|
||||
from .base import BaseDataset
|
||||
|
||||
csimpleqa_judge_prompt_new = """
|
||||
请根据给定问题、标准答案和模型预测的答案来评估模型的回答是否正确。您的任务是将结果评定为:【正确】、【错误】或【未尝试】。
|
||||
|
||||
首先,我们将列出每个评定类别的示例,然后请您对新问题的预测答案进行评定。
|
||||
以下是【正确】的答复示例:
|
||||
```
|
||||
问题:贝拉克·奥巴马的孩子叫什么名字?
|
||||
标准答案:玛丽亚·奥巴马和萨莎·奥巴马
|
||||
模型预测1:Malia Obama and Sasha Obama
|
||||
模型预测2:玛丽亚和萨沙
|
||||
模型预测3:大多数人会说是玛丽亚和萨莎,但我不确定,需要再确认
|
||||
模型预测4:巴拉克·奥巴马有两个女儿,她们分别是玛丽亚·安和娜塔莎·玛丽安,但通常称作玛丽亚·奥巴马和萨莎·奥巴马。玛丽亚出生于1998年7月4日,萨莎出生于2001年6月10日。
|
||||
```
|
||||
这些答复均为【正确】,因为:
|
||||
- 完整地包含了标准答案中的重要信息。
|
||||
- 不包含任何与标准答案矛盾的信息。
|
||||
- 只关注语义内容,中英文,大小写、标点、语法和顺序不重要。
|
||||
- 答复中出现模糊语句或猜测是可以接受的,前提是包含了标准答案且不含有不正确信息或矛盾。
|
||||
|
||||
以下是【错误】的答复示例:
|
||||
```
|
||||
问题:巴拉克·奥巴马的孩子叫什么名字?
|
||||
标准答案:玛丽亚·奥巴马和萨莎·奥巴马
|
||||
模型预测1:玛丽亚
|
||||
模型预测2:玛丽亚、萨莎和苏珊
|
||||
模型预测3:巴拉克·奥巴马没有孩子
|
||||
模型预测4:我认为是玛丽亚和萨莎。或者是玛丽亚和杰基。或者是乔伊和玛丽亚。
|
||||
模型预测5:虽然我不知道他们的确切名字,但能说出巴拉克·奥巴马有三个孩子。
|
||||
模型预测6:你可能是想说贝茜和奥利维亚。不过您应通过最新的参考资料确认详细信息。那是正确的答案吗?
|
||||
```
|
||||
这些答复均为【错误】,因为:
|
||||
- 答复中包含与标准答案矛盾的事实陈述。即使在陈述中略带保留(例如:“可能是”,“虽然我不确定,但我认为”),也视为错误。
|
||||
|
||||
以下是【未尝试】的答复示例:
|
||||
```
|
||||
问题:巴拉克·奥巴马的孩子叫什么名字?
|
||||
标准答案:玛丽亚·奥巴马和萨莎·奥巴马
|
||||
模型预测1:我不知道。
|
||||
模型预测2:我需要更多关于您所指奥巴马的上下文。
|
||||
模型预测3:不查阅网络我无法回答这个问题,不过我知道巴拉克·奥巴马有两个孩子。
|
||||
模型预测4:巴拉克·奥巴马有两个孩子。我知道其中一个叫玛丽亚,但我不确定另一个的名字。
|
||||
```
|
||||
这些答复均为【未尝试】,因为:
|
||||
- 没有包含标准答案中的重要信息。
|
||||
- 回复中没有与标准答案矛盾的陈述。
|
||||
|
||||
另外注意以下几点:
|
||||
- 对于标准答案为数字的问题,预测答案应和标准答案一致。例如,考虑问题“金山铁路黄浦江特大桥的全长是多少米?”,标准答案为“3518.17”:
|
||||
- 预测答案“3518”、“3518.1”、“3518.17”均为【正确】。
|
||||
- 预测答案“3520”和“3600”均为【错误】。
|
||||
- 预测答案“大约3500米”和“超过3000米”被视为【未尝试】,因为它们既不确认也不与标准答案矛盾。
|
||||
- 如果标准答案包含比问题更多的信息,预测答案只需包含问题中提到的信息。
|
||||
- 例如,考虑问题“菱镁矿的主要化学成分是什么?”标准答案为“碳酸镁(MgCO3)”。“碳酸镁”或“MgCO3”均视为【正确】答案。
|
||||
- 如果从问题中明显可以推断出预测答案省略的信息,那么算作正确。
|
||||
- 例如,问题“巴鲁米尼的努拉吉遗迹在1997年被联合国教科文组织列为世界文化遗产,那么这遗址在哪个地区?”标准答案为“意大利撒丁岛”,预测答案“撒丁岛”被视为【正确】。
|
||||
- 如果能明显看出名字翻译版本不同但是是同一个人也认为正确。
|
||||
- 例如,如果标准答案是“Robinson”,那么回答鲁滨逊或者鲁滨孙均正确。
|
||||
|
||||
下面是一个新的问题示例。请只回复A、B、C之一,不要道歉或纠正自己的错误,只需要评估该回答。
|
||||
```
|
||||
问题: {question}
|
||||
正确答案: {target}
|
||||
预测答案: {predicted_answer}
|
||||
```
|
||||
|
||||
将此新问题的预测答案评定为以下之一:
|
||||
A:【正确】
|
||||
B:【错误】
|
||||
C:【未尝试】
|
||||
|
||||
只返回字母"A"、"B"或"C",无须添加其他文本。
|
||||
""".strip() # noqa E501
|
||||
|
||||
|
||||
@TEXT_POSTPROCESSORS.register_module('chinese_simpleqa_preprocess')
|
||||
def chinese_simpleqa_preprocess(text: str) -> str:
|
||||
text = text.split('问题:')[0].strip()
|
||||
return text
|
||||
|
||||
|
||||
@LOAD_DATASET.register_module()
|
||||
class CsimpleqaDataset(BaseDataset):
|
||||
|
||||
def load(self, path: str, name: str, *args, **kwargs):
|
||||
path = get_data_path(path)
|
||||
filename = osp.join(path, f'{name}.jsonl')
|
||||
dataset = DatasetDict()
|
||||
raw_data = []
|
||||
lines = open(filename, 'r', encoding='utf-8').readlines()
|
||||
for line in lines:
|
||||
data = json.loads(line)
|
||||
question = data['question']
|
||||
cur_system_prompt = '你是一个智能助手。'
|
||||
messages = [{
|
||||
'role': 'system',
|
||||
'content': cur_system_prompt
|
||||
}, {
|
||||
'role': 'user',
|
||||
'content': question
|
||||
}]
|
||||
judge_system_prompt = '你是一个智能助手,请根据给定问题、标准答案和模型预测的答案来评估模型的回答是否正确。'
|
||||
csimpleqa_judge_prompt_f = csimpleqa_judge_prompt_new.format(
|
||||
question=question,
|
||||
target=data['answer'],
|
||||
predicted_answer='{prediction}')
|
||||
raw_data.append({
|
||||
'primary_category': data['primary_category'],
|
||||
'question': question,
|
||||
'gold_ans': data['answer'],
|
||||
'messages': messages,
|
||||
'system_prompt': judge_system_prompt,
|
||||
'prompt_template': csimpleqa_judge_prompt_f,
|
||||
'judge': {
|
||||
'primary_category': data['primary_category'],
|
||||
'question': question,
|
||||
'question_id': data['id']
|
||||
}
|
||||
})
|
||||
dataset = Dataset.from_list(raw_data)
|
||||
return dataset
|
||||
|
||||
|
||||
def post_process_csimpleqa(completion):
|
||||
s = completion['prediction']
|
||||
score = 'C'
|
||||
try:
|
||||
match = re.search(r'(A|B|C)', s)
|
||||
score = match.group(0) if match else 'C'
|
||||
except Exception:
|
||||
score = 'C'
|
||||
return score
|
||||
|
||||
|
||||
def get_judgeanswer_and_reference(result, filename, post_process):
|
||||
judged_answers = []
|
||||
for k, v in result.items():
|
||||
processed_judge = post_process(v)
|
||||
if processed_judge is not None:
|
||||
judged_answers.append(processed_judge)
|
||||
if len(judged_answers) <= 0.95 * len(result):
|
||||
print('*' * 100)
|
||||
print(f'For your {filename} judge. \
|
||||
Among {len(result)} judgements, \n\
|
||||
successfully extracted {len(judged_answers)} judgements, \n\
|
||||
please check!')
|
||||
print('*' * 100)
|
||||
return judged_answers
|
||||
|
||||
|
||||
def calculate_metrics(judged_answers):
|
||||
# judged_answers is a list like ["A", "B", "C", ...]
|
||||
|
||||
total_questions = len(judged_answers)
|
||||
total_correct = judged_answers.count('A')
|
||||
total_incorrect = judged_answers.count('B')
|
||||
total_not_attempted = judged_answers.count('C')
|
||||
|
||||
total_correct_accuracy = total_correct / total_questions \
|
||||
if total_questions > 0 else 0
|
||||
total_incorrect_accuracy = total_incorrect / total_questions \
|
||||
if total_questions > 0 else 0
|
||||
total_not_attempted_accuracy = total_not_attempted / total_questions \
|
||||
if total_questions > 0 else 0
|
||||
|
||||
total_given_attempted_accuracy = total_correct / (
|
||||
total_correct + total_incorrect) if (total_correct +
|
||||
total_incorrect) > 0 else 0
|
||||
|
||||
f1 = 2 * total_given_attempted_accuracy * total_correct_accuracy / (
|
||||
total_given_attempted_accuracy + total_correct_accuracy) if (
|
||||
total_given_attempted_accuracy + total_correct_accuracy) > 0 else 0
|
||||
|
||||
return {
|
||||
'correct': total_correct_accuracy,
|
||||
'incorrect': total_incorrect_accuracy,
|
||||
'not_attempted': total_not_attempted_accuracy,
|
||||
'given_attempted_accuracy': total_given_attempted_accuracy,
|
||||
'F1': f1
|
||||
}
|
||||
|
||||
|
||||
def get_results(judged_answers):
|
||||
results = calculate_metrics(judged_answers)
|
||||
return results
|
||||
|
||||
|
||||
@DICT_POSTPROCESSORS.register_module('csimpleqa')
|
||||
def csimpleqa_postprocess(output: dict, output_path: str) -> dict:
|
||||
judged_answers = get_judgeanswer_and_reference(output, output_path,
|
||||
post_process_csimpleqa)
|
||||
results = get_results(judged_answers)
|
||||
results['details'] = output
|
||||
return results
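
For reference, the F1 reported by `calculate_metrics` above balances accuracy over attempted questions against overall accuracy. A minimal sketch (assuming only a plain list of judge letters, as extracted by `post_process_csimpleqa`; not part of the diff) that reproduces the same arithmetic:

```python
# Sketch: 'A' = correct, 'B' = incorrect, 'C' = not attempted.
def simpleqa_style_metrics(grades):
    n = len(grades)
    correct = grades.count('A') / n if n else 0
    incorrect = grades.count('B') / n if n else 0
    attempted = correct + incorrect
    given_attempted_accuracy = correct / attempted if attempted > 0 else 0
    denom = given_attempted_accuracy + correct
    f1 = 2 * given_attempted_accuracy * correct / denom if denom > 0 else 0
    return {
        'correct': correct,
        'incorrect': incorrect,
        'not_attempted': grades.count('C') / n if n else 0,
        'given_attempted_accuracy': given_attempted_accuracy,
        'F1': f1,
    }

# simpleqa_style_metrics(['A', 'A', 'B', 'C'])
# -> correct 0.5, given_attempted_accuracy ~0.667, F1 ~0.571
```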
|
@ -14,7 +14,7 @@ from .base import BaseDataset
|
||||
class CMMLUDataset(BaseDataset):
|
||||
|
||||
@staticmethod
|
||||
def load(path: str, name: str):
|
||||
def load(path: str, name: str, **kwargs):
|
||||
path = get_data_path(path)
|
||||
if environ.get('DATASET_SOURCE') == 'ModelScope':
|
||||
from modelscope import MsDataset
|
||||
|
71
opencompass/datasets/generic.py
Normal file
@ -0,0 +1,71 @@
|
||||
import re
|
||||
|
||||
|
||||
def get_final_results(judged_answers, references, origial_responses):
|
||||
count = 0
|
||||
is_correct_count = 0
|
||||
is_incorrect_count = 0
|
||||
is_not_attempted_count = 0
|
||||
details = []
|
||||
for i, j, k in zip(judged_answers, references, origial_responses):
|
||||
match = re.search(r'(A|B)', i)
|
||||
grade_letter = match.group(
|
||||
0) if match else 'B' # Default to "INCORRECT" if no match
|
||||
detail = {
|
||||
'pred': k,
|
||||
'ref': j,
|
||||
'origin_grade_response': i,
|
||||
'grade_letter': grade_letter,
|
||||
'correct': False
|
||||
}
|
||||
count += 1
|
||||
if grade_letter == 'A':
|
||||
is_correct_count += 1
|
||||
detail['correct'] = True
|
||||
elif grade_letter == 'B':
|
||||
is_incorrect_count += 1
|
||||
else:
|
||||
is_not_attempted_count += 1
|
||||
details.append(detail)
|
||||
|
||||
is_correct = is_correct_count / count
|
||||
is_incorrect = is_incorrect_count / count
|
||||
# is_not_attempted = is_not_attempted_count / count
|
||||
is_given_attempted = is_correct + is_incorrect
|
||||
accuracy_given_attempted = is_correct / is_given_attempted \
|
||||
if is_given_attempted > 0 else 0
|
||||
f1 = 2 * accuracy_given_attempted * is_correct / (
|
||||
accuracy_given_attempted + is_correct) if (accuracy_given_attempted +
|
||||
is_correct) > 0 else 0
|
||||
result = {
|
||||
# 'accuracy_given_attempted': accuracy_given_attempted,
|
||||
'accuracy': accuracy_given_attempted * 100,
|
||||
'f1': f1,
|
||||
'details': details
|
||||
}
|
||||
return result
|
||||
|
||||
|
||||
def _generic_llmjudge_postprocess(judgement: str):
|
||||
match = re.search(r'(A|B)', judgement)
|
||||
grade_letter = match.group(
|
||||
0) if match else 'B' # Default to "INCORRECT" if no match
|
||||
return grade_letter
|
||||
|
||||
|
||||
def generic_llmjudge_postprocess(
|
||||
output: dict,
|
||||
output_path: str,
|
||||
) -> dict:
|
||||
judged_answers = []
|
||||
origial_responses = []
|
||||
references = []
|
||||
for k, v in output.items():
|
||||
origial_responses.append(v['prediction'])
|
||||
processed_judge = _generic_llmjudge_postprocess(v['prediction'])
|
||||
if processed_judge is not None:
|
||||
judged_answers.append(processed_judge)
|
||||
references.append(v['gold'])
|
||||
results = get_final_results(judged_answers, references, origial_responses)
|
||||
results['details'] = output
|
||||
return results
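
`generic_llmjudge_postprocess` expects the judge outputs keyed by example, each entry carrying the judge's raw reply and the gold answer. A small illustration (hypothetical data, not from the repository) of the expected shape and the resulting numbers:

```python
# Hypothetical judge output: the regex above keeps the first 'A' or 'B' it finds,
# defaulting to 'B' (INCORRECT) when neither letter is present.
output = {
    '0': {'prediction': 'A', 'gold': 'Paris'},          # graded CORRECT
    '1': {'prediction': 'B. INCORRECT', 'gold': '42'},  # graded INCORRECT
}
# generic_llmjudge_postprocess(output, output_path='unused.json')
# -> {'accuracy': 50.0, 'f1': 0.5, 'details': {...}}
```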
|
@ -16,7 +16,7 @@ from .base import BaseDataset
|
||||
class GPQADataset(BaseDataset):
|
||||
|
||||
@staticmethod
|
||||
def load(path: str, name: str):
|
||||
def load(path: str, name: str, **kwargs):
|
||||
path = get_data_path(path, local_mode=True)
|
||||
cnt = 0
|
||||
data = []
|
||||
|
@ -185,6 +185,11 @@ def humaneval_postprocess_v2(text: str) -> str:
|
||||
text = blocks[0]
|
||||
return text
|
||||
|
||||
def humaneval_postprocess_v3(text: str) -> str:
|
||||
blocks = re.findall(r'```\w*\n(.*?)```', text, re.DOTALL)
|
||||
if len(blocks) >= 1:
|
||||
text = blocks[-1]
|
||||
return text
|
||||
|
||||
def humaneval_internal_v2_postprocess(text: str):
|
||||
if text.startswith(' ') and not text.startswith(' '):
|
||||
|
@ -17,40 +17,40 @@ class korbenchDataset(BaseDataset):
|
||||
"""Dataset loader for the task in KOR-Bench."""
|
||||
|
||||
@staticmethod
|
||||
def load(path, mode, category):
|
||||
def load(path, prompt_mode, category, **kwargs):
|
||||
"""Load the dataset using shared ."""
|
||||
base_path = get_data_path(path)
|
||||
rule_file = None
|
||||
sample_file = None
|
||||
mixed_file = None
|
||||
mixed_data = None
|
||||
if '0_shot' in mode or '3_shot' in mode:
|
||||
if '0_shot' in prompt_mode or '3_shot' in prompt_mode:
|
||||
rule_file = find_file(base_path, os.path.join(category, 'rule'))
|
||||
sample_file = find_file(base_path,
|
||||
os.path.join(category, 'sample'))
|
||||
elif mode == 'mixed':
|
||||
elif prompt_mode == 'mixed':
|
||||
mixed_file = find_file(base_path, os.path.join('mixed', category))
|
||||
mixed_data = load_json_or_jsonl(mixed_file) or []
|
||||
else:
|
||||
raise ValueError(f'Unsupported mode: {mode}')
|
||||
raise ValueError(f'Unsupported prompt_mode: {prompt_mode}')
|
||||
three_shot_file = None
|
||||
if mode == '3_shot':
|
||||
if prompt_mode == '3_shot':
|
||||
ts_path = os.path.join(category, 'three-shot')
|
||||
three_shot_file = find_file(base_path, ts_path)
|
||||
# Load data
|
||||
if mode in ['0_shot', '3_shot']:
|
||||
if prompt_mode in ['0_shot', '3_shot']:
|
||||
rules = load_json_or_jsonl(rule_file) or []
|
||||
samples = load_json_or_jsonl(sample_file) or []
|
||||
template_path = None
|
||||
if mode == '0_shot':
|
||||
if prompt_mode == '0_shot':
|
||||
template_path = os.path.join(
|
||||
os.path.dirname(__file__),
|
||||
'korbench_dataset_config/prompt/0_shot.yaml')
|
||||
elif mode == '3_shot':
|
||||
elif prompt_mode == '3_shot':
|
||||
template_path = os.path.join(
|
||||
os.path.dirname(__file__),
|
||||
'korbench_dataset_config/prompt/3_shot.yaml')
|
||||
elif mode == 'mixed':
|
||||
elif prompt_mode == 'mixed':
|
||||
template_path = os.path.join(
|
||||
os.path.dirname(__file__),
|
||||
'korbench_dataset_config/prompt/mixed.yaml')
|
||||
@ -62,7 +62,7 @@ class korbenchDataset(BaseDataset):
|
||||
|
||||
# Process data
|
||||
data = []
|
||||
if mode == '0_shot':
|
||||
if prompt_mode == '0_shot':
|
||||
for sample in samples:
|
||||
rule_id = sample['rule_id']
|
||||
rule = next((r for r in rules if r['idx'] == rule_id), None)
|
||||
@ -81,13 +81,13 @@ class korbenchDataset(BaseDataset):
|
||||
'answer': sample['answer'],
|
||||
'prompt': prompt,
|
||||
'rule_id': rule['idx'],
|
||||
'mode': '0_shot',
|
||||
'prompt_mode': '0_shot',
|
||||
'category': category,
|
||||
})
|
||||
|
||||
return Dataset.from_list(data)
|
||||
|
||||
if mode == '3_shot':
|
||||
if prompt_mode == '3_shot':
|
||||
data = []
|
||||
three_shot = load_json_or_jsonl(three_shot_file) or []
|
||||
for sample in samples:
|
||||
@ -111,13 +111,13 @@ class korbenchDataset(BaseDataset):
|
||||
'answer': sample['answer'],
|
||||
'prompt': prompt,
|
||||
'rule_id': rule['idx'],
|
||||
'mode': '3_shot',
|
||||
'prompt_mode': '3_shot',
|
||||
'category': category,
|
||||
})
|
||||
|
||||
return Dataset.from_list(data)
|
||||
|
||||
if mode == 'mixed':
|
||||
if prompt_mode == 'mixed':
|
||||
# Process data
|
||||
data = []
|
||||
for item in mixed_data:
|
||||
@ -159,7 +159,7 @@ class korbenchDataset(BaseDataset):
|
||||
'rule_list': rule_list,
|
||||
'question_list': question_list,
|
||||
'prompt': prompt,
|
||||
'mode': 'mixed',
|
||||
'prompt_mode': 'mixed',
|
||||
'answer': '',
|
||||
'base_path': base_path,
|
||||
})
|
||||
@ -174,14 +174,15 @@ class korbenchEvaluator(BaseEvaluator):
|
||||
super().__init__()
|
||||
|
||||
def score(self, predictions, references, test_set):
|
||||
"""Evaluate predictions for a single mode in KOR-Bench."""
|
||||
"""Evaluate predictions for a single prompt_mode in KOR-Bench."""
|
||||
if not test_set:
|
||||
raise ValueError('Test set is empty.')
|
||||
|
||||
mode = test_set[0]['mode'] # Determine the mode from the first entry
|
||||
prompt_mode = test_set[0][
|
||||
'prompt_mode'] # Determine the prompt_mode from the first entry
|
||||
data = {}
|
||||
|
||||
# Organize data for the given mode
|
||||
# Organize data for the given prompt_mode
|
||||
for i in range(len(predictions)):
|
||||
entry = {
|
||||
'prediction': predictions[i],
|
||||
@ -195,18 +196,18 @@ class korbenchEvaluator(BaseEvaluator):
|
||||
data[i] = entry
|
||||
|
||||
if not data:
|
||||
raise ValueError(f"No data found for mode '{mode}'")
|
||||
raise ValueError(f"No data found for prompt_mode '{prompt_mode}'")
|
||||
|
||||
# Evaluate based on the mode
|
||||
if mode == '0_shot':
|
||||
# Evaluate based on the prompt_mode
|
||||
if prompt_mode == '0_shot':
|
||||
evaluation_results = evaluate_responses(data, '0_shot')
|
||||
elif mode == '3_shot':
|
||||
elif prompt_mode == '3_shot':
|
||||
evaluation_results = evaluate_responses(data, '3_shot')
|
||||
elif mode in ['Multi-Q', 'Multi-R', 'Multi-RQ', 'mixed']:
|
||||
elif prompt_mode in ['Multi-Q', 'Multi-R', 'Multi-RQ', 'mixed']:
|
||||
evaluation_results = evaluate_responses(data, 'mixed',
|
||||
test_set[0]['base_path'])
|
||||
else:
|
||||
raise ValueError(f'Unsupported mode: {mode}')
|
||||
raise ValueError(f'Unsupported prompt_mode: {prompt_mode}')
|
||||
# Calculate accuracy
|
||||
correct_count = sum(res['is_correct'] for res in evaluation_results)
|
||||
accuracy = (correct_count / len(evaluation_results)) * 100
|
||||
|
@ -13,6 +13,7 @@ from opencompass.utils import get_logger
|
||||
|
||||
from .execute_utils import BASE_IMPORTS, codeexecute_check_correctness
|
||||
from .extract_utils import (extract_code_execution, extract_code_generation,
|
||||
extract_code_generation_v2,
|
||||
extract_test_output_code)
|
||||
from .livecodebench import LCBCodeGenerationDataset
|
||||
from .pass_k_utils import compute_metrics_from_results
|
||||
@ -231,15 +232,22 @@ class LCBCodeGenerationEvaluator(BaseEvaluator):
|
||||
def __init__(self,
|
||||
num_process_evaluate,
|
||||
timeout=6,
|
||||
release_version='release_v1'):
|
||||
release_version='release_v1',
|
||||
extractor_version='v1'):
|
||||
super().__init__()
|
||||
self.num_process_evaluate = num_process_evaluate
|
||||
self.timeout = timeout
|
||||
self.dataset = LCBCodeGenerationDataset.load(
|
||||
release_version=release_version)['test']
|
||||
self.extractor_version = extractor_version
|
||||
|
||||
def score(self, predictions, references):
|
||||
predictions = [[extract_code_generation(item)] for item in predictions]
|
||||
if self.extractor_version == 'v1':
|
||||
predictions = [[extract_code_generation(item)]
|
||||
for item in predictions]
|
||||
elif self.extractor_version == 'v2':
|
||||
predictions = [[extract_code_generation_v2(item)]
|
||||
for item in predictions]
|
||||
|
||||
evaluation_samples = dict()
|
||||
for idx in range(len(self.dataset)):
|
||||
@ -252,12 +260,9 @@ class LCBCodeGenerationEvaluator(BaseEvaluator):
|
||||
|
||||
BaseEvaluator.is_num_equal(predictions, references)
|
||||
|
||||
results = { # noqa: F841
|
||||
'pass': 0,
|
||||
'timeout': 0,
|
||||
'failed': 0,
|
||||
'wrong_answer': 0
|
||||
} # noqa: F401, F403
|
||||
extracted_predictions = {}
|
||||
for idx, content in enumerate(predictions):
|
||||
extracted_predictions[idx] = content
|
||||
|
||||
metrics, eval_results, final_metadata = codegen_metrics(
|
||||
references,
|
||||
@ -266,8 +271,13 @@ class LCBCodeGenerationEvaluator(BaseEvaluator):
|
||||
num_process_evaluate=self.num_process_evaluate,
|
||||
timeout=self.timeout,
|
||||
)
|
||||
results = {
|
||||
'extracted_predictions': extracted_predictions,
|
||||
'eval_results': eval_results
|
||||
}
|
||||
results.update(metrics)
|
||||
|
||||
return metrics
|
||||
return results
|
||||
|
||||
|
||||
def evaluate_score(args) -> list[bool]:
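
The evaluator now exposes an `extractor_version` switch. A hedged config sketch of how a dataset eval config might opt into the v2 extractor; everything except `extractor_version` and `release_version` follows the usual OpenCompass config pattern and is assumed here rather than copied from a shipped config:

```python
# Sketch only; the surrounding keys and the import of LCBCodeGenerationEvaluator
# are assumptions for illustration.
LCB_code_generation_eval_cfg = dict(
    evaluator=dict(
        type=LCBCodeGenerationEvaluator,
        num_process_evaluate=4,
        timeout=6,
        release_version='release_v1',
        extractor_version='v2',  # use extract_code_generation_v2 (last code block wins)
    ),
    pred_role='BOT',
)
```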
|
||||
|
@ -8,6 +8,22 @@ def extract_code_generation(model_output: str, model_type: str = 'chat'):
|
||||
outputlines = model_output.split('\n')
|
||||
# TODO: handle codellama
|
||||
|
||||
if model_type == 'base':
|
||||
return model_output.strip()
|
||||
elif model_type == 'chat':
|
||||
indexlines = [i for i, line in enumerate(outputlines) if '```' in line]
|
||||
else:
|
||||
raise ValueError(f'Invalid mode type: {model_type}')
|
||||
if len(indexlines) < 2:
|
||||
return ''
|
||||
return '\n'.join(outputlines[indexlines[0] + 1:indexlines[1]])
|
||||
|
||||
|
||||
def extract_code_generation_v2(model_output: str, model_type: str = 'chat'):
|
||||
# modified from
|
||||
outputlines = model_output.split('\n')
|
||||
# TODO: handle codellama
|
||||
|
||||
if model_type == 'base':
|
||||
return model_output.strip()
|
||||
elif model_type == 'chat':
|
||||
@ -17,6 +33,10 @@ def extract_code_generation(model_output: str, model_type: str = 'chat'):
|
||||
|
||||
if len(indexlines) < 2:
|
||||
return ''
|
||||
elif len(indexlines) > 2:
|
||||
# Only Keep the last code block
|
||||
indexlines = indexlines[-2:]
|
||||
|
||||
return '\n'.join(outputlines[indexlines[0] + 1:indexlines[1]])
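
Compared with the v1 extractor, the v2 variant above keeps only the last fenced block. A standalone regex sketch (hypothetical helper, not the repository code) showing the same idea:

```python
import re

def last_fenced_block(model_output: str) -> str:
    # Keep only the final fenced code block; return '' if none is closed.
    blocks = re.findall(r'```\w*\n(.*?)```', model_output, re.DOTALL)
    return blocks[-1] if blocks else ''

sample = "text\n```python\nprint(1)\n```\nmore text\n```python\nprint(2)\n```"
print(last_fenced_block(sample))  # -> "print(2)\n"
```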
|
||||
|
||||
|
||||
|
@ -1,6 +1,7 @@
|
||||
import concurrent.futures
|
||||
import os
|
||||
import re
|
||||
from collections import OrderedDict
|
||||
from copy import deepcopy
|
||||
from itertools import product
|
||||
from typing import Any, Dict, List
|
||||
@ -12,6 +13,7 @@ from datasets import Dataset
|
||||
from opencompass.models import OpenAISDK
|
||||
from opencompass.openicl.icl_evaluator import BaseEvaluator
|
||||
from opencompass.registry import ICL_EVALUATORS, LOAD_DATASET, MODELS
|
||||
from opencompass.utils import get_data_path
|
||||
|
||||
from ..base import BaseDataset
|
||||
from .prompts import (EXTRACT_PROMPT_CN, EXTRACT_PROMPT_EN, JUDGE_PROMPT_CN,
|
||||
@ -20,20 +22,24 @@ from .prompts import (EXTRACT_PROMPT_CN, EXTRACT_PROMPT_EN, JUDGE_PROMPT_CN,
|
||||
|
||||
@LOAD_DATASET.register_module()
|
||||
class LiveMathBenchDataset(BaseDataset):
|
||||
dataset_splits = ['AIMC', 'CEE', 'CMO']
|
||||
dataset_languages = ['cn', 'en']
|
||||
|
||||
@staticmethod
|
||||
def load(
|
||||
path: str,
|
||||
k: int,
|
||||
n: int,
|
||||
dataset_splits: List[str] = [
|
||||
'AIMC', 'CEE', 'CMO', 'MATH500', 'AIME2024'
|
||||
],
|
||||
dataset_languages: List[str] = ['cn', 'en'],
|
||||
) -> List[Dict[str, Any]]:
|
||||
dataset = []
|
||||
dataset_info = {}
|
||||
for split, language in product(LiveMathBenchDataset.dataset_splits,
|
||||
LiveMathBenchDataset.dataset_languages):
|
||||
path = get_data_path(path)
|
||||
for split, language in product(dataset_splits, dataset_languages):
|
||||
file_path = os.path.join(path, f'{split}_{language}.jsonl')
|
||||
if not os.path.exists(file_path):
|
||||
continue
|
||||
dataset_info[f'{split}_{language}'] = {
|
||||
'single-choice': 0,
|
||||
'multiple-choice': 0,
|
||||
@ -99,10 +105,10 @@ class LiveMathBenchEvaluator(BaseEvaluator):
|
||||
path=model_name,
|
||||
openai_api_base=url,
|
||||
key='EMPTY',
|
||||
query_per_second=2,
|
||||
query_per_second=128,
|
||||
meta_template=self.api_meta_template,
|
||||
temperature=kwargs.get('temperature', 0.01),
|
||||
max_seq_len=kwargs.get('max_tokens', 2048),
|
||||
temperature=kwargs.get('temperature', 0.001),
|
||||
max_seq_len=kwargs.get('max_tokens', 16384),
|
||||
)) for url in url
|
||||
]
|
||||
self.with_postprocess = with_postprocess
|
||||
@ -270,12 +276,14 @@ class LiveMathBenchEvaluator(BaseEvaluator):
|
||||
count = []
|
||||
total_pass_num = []
|
||||
details = []
|
||||
all_dataset = set()
|
||||
for key, examples in key2example.items():
|
||||
detail = {
|
||||
'question': examples[0][0]['question'],
|
||||
'answer': examples[0][0]['answer'],
|
||||
'responses': []
|
||||
}
|
||||
detail = OrderedDict()
|
||||
detail['question'] = examples[0][0]['question']
|
||||
detail['answer'] = examples[0][0]['answer']
|
||||
detail['responses'] = []
|
||||
detail['dataset'] = '_'.join(key.split('_')[:-1])
|
||||
all_dataset.add('_'.join(key.split('_')[:-1]))
|
||||
if_pass_list = []
|
||||
for single_run_examples in examples:
|
||||
detail['responses'].append([])
|
||||
@ -296,29 +304,104 @@ class LiveMathBenchEvaluator(BaseEvaluator):
|
||||
i = 1
|
||||
while i <= K:
|
||||
detail.update({
|
||||
f'{i}@pass':
|
||||
f'pass-rate@{i}':
|
||||
if_pass_list[:, :i].mean(axis=1).mean(axis=0).item(),
|
||||
f'{i}@pass/std':
|
||||
if_pass_list[:, :i].mean(axis=1).std(axis=0).item()
|
||||
f'pass-rate@{i}/std':
|
||||
if_pass_list[:, :i].mean(axis=1).std(axis=0).item(),
|
||||
f'pass@{i}':
|
||||
np.ceil(
|
||||
if_pass_list[:, :i].mean(axis=1)).mean(axis=0).item(),
|
||||
f'pass@{i}/std':
|
||||
np.ceil(
|
||||
if_pass_list[:, :i].mean(axis=1)).std(axis=0).item(),
|
||||
})
|
||||
i = i * 2
|
||||
|
||||
for threshold in [0.5, 0.75, 1.0]:
|
||||
detail.update({
|
||||
f'{K}-pass@{threshold}':
|
||||
np.floor(
|
||||
np.where(
|
||||
if_pass_list.mean(axis=1) >= threshold, 1.0,
|
||||
0.0).mean(axis=0))
|
||||
})
|
||||
|
||||
count.append(np.ones_like(if_pass_list).sum(axis=1))
|
||||
total_pass_num.append(if_pass_list.sum(axis=1))
|
||||
|
||||
details.append(detail)
|
||||
|
||||
detailed_result = {'details': details}
|
||||
detailed_result = OrderedDict()
|
||||
detailed_result['details'] = details
|
||||
|
||||
i = 1
|
||||
while i <= K:
|
||||
detailed_result.update({
|
||||
f'{i}@pass':
|
||||
100. * np.mean([detail[f'{i}@pass'] for detail in details]),
|
||||
f'{i}@pass/std':
|
||||
100. * np.mean([detail[f'{i}@pass/std'] for detail in details])
|
||||
f'pass-rate@{i}':
|
||||
100. *
|
||||
np.mean([detail[f'pass-rate@{i}'] for detail in details]),
|
||||
f'pass-rate@{i}/std':
|
||||
100. *
|
||||
np.mean([detail[f'pass-rate@{i}/std'] for detail in details]),
|
||||
f'pass@{i}':
|
||||
100. * np.mean([detail[f'pass@{i}'] for detail in details]),
|
||||
f'pass@{i}/std':
|
||||
100. * np.mean([detail[f'pass@{i}/std'] for detail in details])
|
||||
})
|
||||
for d in sorted(list(all_dataset)):
|
||||
detailed_result.update({
|
||||
f'{d}/pass-rate@{i}':
|
||||
100. * np.mean([
|
||||
detail[f'pass-rate@{i}']
|
||||
for detail in details if detail['dataset'] == d
|
||||
]),
|
||||
f'{d}/pass-rate@{i}/std':
|
||||
100. * np.mean([
|
||||
detail[f'pass-rate@{i}/std']
|
||||
for detail in details if detail['dataset'] == d
|
||||
]),
|
||||
f'{d}/pass@{i}':
|
||||
100. * np.mean([
|
||||
detail[f'pass@{i}']
|
||||
for detail in details if detail['dataset'] == d
|
||||
]),
|
||||
f'{d}/pass@{i}/std':
|
||||
100. * np.mean([
|
||||
detail[f'pass@{i}/std']
|
||||
for detail in details if detail['dataset'] == d
|
||||
])
|
||||
})
|
||||
i = i * 2
|
||||
detailed_result.update(
|
||||
{'pass-rate': 100. * np.mean(sum(total_pass_num) / sum(count))})
|
||||
|
||||
for threshold in [0.5, 0.75, 1.0]:
|
||||
detailed_result.update({
|
||||
f'{K}-pass@{threshold}':
|
||||
100. * np.mean([
|
||||
detail[f'{K}-pass@{threshold}'] for detail in details
|
||||
])
|
||||
})
|
||||
detailed_result.update({
|
||||
f'{K}-pass@{threshold}/std':
|
||||
100. * np.mean([
|
||||
detail[f'{K}-pass@{threshold}'] for detail in details
|
||||
])
|
||||
})
|
||||
for d in sorted(list(all_dataset)):
|
||||
|
||||
for threshold in [0.5, 0.75, 1.0]:
|
||||
detailed_result.update({
|
||||
f'{d}/{K}-pass@{threshold}':
|
||||
100. * np.mean([
|
||||
detail[f'{K}-pass@{threshold}']
|
||||
for detail in details if detail['dataset'] == d
|
||||
])
|
||||
})
|
||||
detailed_result.update({
|
||||
f'{d}/{K}-pass@{threshold}/std':
|
||||
100. * np.mean([
|
||||
detail[f'{K}-pass@{threshold}']
|
||||
for detail in details if detail['dataset'] == d
|
||||
])
|
||||
})
|
||||
|
||||
return detailed_result
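
Two families of numbers are reported above and are easy to confuse: `pass-rate@k` is the average fraction of correct samples among the first k, while `pass@k` rounds each question up to 1 as soon as any of its k samples passes. A tiny numeric illustration (one question, k = 4, one correct sample):

```python
import numpy as np

# if_pass has shape (n_runs, k); 1.0 where a sampled answer was judged correct.
if_pass = np.array([[1.0, 0.0, 0.0, 0.0]])

pass_rate_at_4 = if_pass[:, :4].mean(axis=1).mean(axis=0)      # 0.25
pass_at_4 = np.ceil(if_pass[:, :4].mean(axis=1)).mean(axis=0)  # 1.0 (at least one sample passed)
print(pass_rate_at_4, pass_at_4)
```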
|
||||
|
2
opencompass/datasets/livereasonbench/__init__.py
Normal file
@ -0,0 +1,2 @@
from .livereasonbench import LiveReasonBenchDataset  # noqa: F401, F403
from .livereasonbench import livereasonbench_postprocess  # noqa: F401, F403
193
opencompass/datasets/livereasonbench/livereasonbench.py
Normal file
@ -0,0 +1,193 @@
|
||||
# Edited from the official SimpleQA config: https://github.com/openai/simple-evals/blob/main/simpleqa_eval.py # noqa E501
|
||||
import json
|
||||
import os
|
||||
import random
|
||||
import re
|
||||
|
||||
from datasets import Dataset, DatasetDict
|
||||
|
||||
from opencompass.registry import LOAD_DATASET
|
||||
from opencompass.utils import get_data_path
|
||||
|
||||
from ..base import BaseDataset
|
||||
|
||||
|
||||
@LOAD_DATASET.register_module()
|
||||
class LiveReasonBenchDataset(BaseDataset):
|
||||
|
||||
@staticmethod
|
||||
def load(path: str,
|
||||
num_examples: int | None = None,
|
||||
n_repeats: int = 1,
|
||||
version: str = 'livereasonbench-20241202',
|
||||
**kwargs):
|
||||
path = get_data_path(path)
|
||||
dataset = DatasetDict()
|
||||
# data = read
|
||||
path = os.path.join(path, f'{version}.json')
|
||||
with open(path, 'r', encoding='utf-8') as f:
|
||||
examples = json.load(f)
|
||||
|
||||
if num_examples:
|
||||
assert n_repeats == 1, \
|
||||
'n_repeats only supported when max_examples = None'
|
||||
rng = random.Random(0)
|
||||
examples = rng.sample(examples, num_examples)
|
||||
examples = examples * n_repeats
|
||||
dataset['train'] = Dataset.from_list(examples)
|
||||
dataset['test'] = Dataset.from_list(examples)
|
||||
return dataset
|
||||
|
||||
|
||||
GRADER_TEMPLATE = """
|
||||
Your job is to look at a question, a gold target, and a predicted answer, and then assign a grade of either ["CORRECT", "INCORRECT", "NOT_ATTEMPTED"].
|
||||
First, I will give examples of each grade, and then you will grade a new example.
|
||||
|
||||
|
||||
The following are examples of CORRECT predicted answers.
|
||||
```
|
||||
Question: What are the names of Barack Obama's children?
|
||||
Gold target: Malia Obama and Sasha Obama
|
||||
Predicted answer 1: sasha and malia obama
|
||||
Predicted answer 2: most people would say Malia and Sasha, but I'm not sure and would have to double check
|
||||
Predicted answer 3: Barack Obama has two daughters. Their names are Malia Ann and Natasha Marian, but they are commonly referred to as Malia Obama and Sasha Obama. Malia was born on July 4, 1998, and Sasha was born on June 10, 2001.
|
||||
```
|
||||
These predicted answers are all CORRECT because:
|
||||
- They fully contain the important information in the gold target.
|
||||
- They do not contain any information that contradicts the gold target.
|
||||
- Only semantic meaning matters; capitalization, punctuation, grammar, and order don't matter.
|
||||
- Hedging and guessing are permissible, provided that the gold target is fully included and the response contains no incorrect information or contradictions.
|
||||
|
||||
|
||||
The following are examples of INCORRECT predicted answers.
|
||||
```
|
||||
Question: What are the names of Barack Obama's children?
|
||||
Gold target: Malia and Sasha
|
||||
Predicted answer 1: Malia.
|
||||
Predicted answer 2: Malia, Sasha, and Susan.
|
||||
Predicted answer 3: Barack Obama does not have any children.
|
||||
Predicted answer 4: I think it's either Malia and Sasha. Or it could be Malia and Jackie. Or it could be Joey and Malia.
|
||||
Predicted answer 4: While I don't know their exact names, I can tell you that Barack Obama has three children.
|
||||
Predicted answer 5: It's possible you may mean Betsy and Olivia. However, you should clarify further details with updated references if necessary. Is that the correct answer?
|
||||
Predicted answer 6: It may be the case that Obama's child is named James. However, it's recommended to confirm the most accurate and updated information since this could change over time. This model may not always reflect the most current information.
|
||||
```
|
||||
These predicted answers are all INCORRECT because:
|
||||
- A factual statement in the answer contradicts the gold target. Incorrect statements that have some hedging (e.g., "it is possible that", "although i'm not sure, i think") are also considered incorrect.
|
||||
|
||||
|
||||
The following are examples of NOT_ATTEMPTED predicted answers.
|
||||
```
|
||||
Question: What are the names of Barack Obama's children?
|
||||
Gold target: Malia and Sasha
|
||||
Predicted answer 1: I don't know.
|
||||
Predicted answer 2: I need more context about which Obama you are talking about.
|
||||
Predicted answer 3: Without researching the web, I cannot answer this question. However, I can tell you that Barack Obama has two children.
|
||||
Predicted answer 4: Barack Obama has two children. I know that one of them is Malia, but I'm not sure about the other one.
|
||||
```
|
||||
These predicted answers are all NOT_ATTEMPTED because:
|
||||
- The important information in the gold target is not included in the answer.
|
||||
- No statements in the answer contradict the gold target.
|
||||
|
||||
|
||||
Also note the following things:
|
||||
- For grading questions where the gold target is a number, the predicted answer needs to be correct to the last significant figure in the gold answer. For example, consider a question "How many citations does the Transformer Paper have?" with gold target "120k".
|
||||
- Predicted answers "120k", "124k", and 115k" are all CORRECT.
|
||||
- Predicted answers "100k" and "113k" are INCORRECT.
|
||||
- Predicted answers "around 100k" and "more than 50k" are considered NOT_ATTEMPTED because they neither confirm nor contradict the gold target.
|
||||
- The gold target may contain more information than the question. In such cases, the predicted answer only needs to contain the information that is in the question.
|
||||
- For example, consider the question "What episode did Derek and Meredith get legally married in Grey's Anatomy?" with gold target "Season 7, Episode 20: White Wedding". Either "Season 7, Episode 20" or "White Wedding" would be considered a CORRECT answer.
|
||||
- Do not punish predicted answers if they omit information that would be clearly inferred from the question.
|
||||
- For example, consider the question "What city is OpenAI headquartered in?" and the gold target "San Francisco, California". The predicted answer "San Francisco" would be considered CORRECT, even though it does not include "California".
|
||||
- Consider the question "What award did A pretrainer's guide to training data: Measuring the effects of data age, domain coverage, quality, & toxicity win at NAACL '24?", the gold target is "Outstanding Paper Award". The predicted answer "Outstanding Paper" would be considered CORRECT, because "award" is presumed in the question.
|
||||
- For the question "What is the height of Jason Wei in meters?", the gold target is "1.73 m". The predicted answer "1.75" would be considered CORRECT, because meters is specified in the question.
|
||||
- For the question "What is the name of Barack Obama's wife?", the gold target is "Michelle Obama". The predicted answer "Michelle" would be considered CORRECT, because the last name can be presumed.
|
||||
- Do not punish for typos in people's name if it's clearly the same name.
|
||||
- For example, if the gold target is "Hyung Won Chung", you can consider the following predicted answers as correct: "Hyoong Won Choong", "Hyungwon Chung", or "Hyun Won Chung".
|
||||
|
||||
Grade the predicted answer of this new question as one of:
|
||||
A: CORRECT
|
||||
B: INCORRECT
|
||||
C: NOT_ATTEMPTED
|
||||
Just return the letters "A", "B", or "C", with no text around it.
|
||||
|
||||
Here is a new example. Simply reply with either CORRECT, INCORRECT, NOT ATTEMPTED. Don't apologize or correct yourself if there was a mistake; we are just trying to grade the answer.
|
||||
```
|
||||
Question: {question}
|
||||
Gold target: {gold_answer}
|
||||
Predicted answer: {answer}
|
||||
```
|
||||
""".strip() # noqa E501
|
||||
|
||||
api_meta_template = dict(round=[
|
||||
dict(role='HUMAN', api_role='HUMAN'),
|
||||
dict(role='BOT', api_role='BOT', generate=True),
|
||||
])
|
||||
|
||||
|
||||
def get_final_results(judged_answers, references, origial_responses):
|
||||
count = 0
|
||||
is_correct_count = 0
|
||||
is_incorrect_count = 0
|
||||
is_not_attempted_count = 0
|
||||
details = []
|
||||
for i, j, k in zip(judged_answers, references, origial_responses):
|
||||
match = re.search(r'(A|B|C)', i)
|
||||
grade_letter = match.group(
|
||||
0) if match else 'C' # Default to "NOT_ATTEMPTED" if no match
|
||||
detail = {
|
||||
'pred': k,
|
||||
'ref': j,
|
||||
'origin_grade_response': i,
|
||||
'grade_letter': grade_letter,
|
||||
'correct': False
|
||||
}
|
||||
count += 1
|
||||
if grade_letter == 'A':
|
||||
is_correct_count += 1
|
||||
detail['correct'] = True
|
||||
elif grade_letter == 'B':
|
||||
is_incorrect_count += 1
|
||||
else:
|
||||
is_not_attempted_count += 1
|
||||
details.append(detail)
|
||||
|
||||
is_correct = is_correct_count / count
|
||||
is_incorrect = is_incorrect_count / count
|
||||
# is_not_attempted = is_not_attempted_count / count
|
||||
is_given_attempted = is_correct + is_incorrect
|
||||
accuracy_given_attempted = is_correct / is_given_attempted \
|
||||
if is_given_attempted > 0 else 0
|
||||
f1 = 2 * accuracy_given_attempted * is_correct / (
|
||||
accuracy_given_attempted + is_correct) if (accuracy_given_attempted +
|
||||
is_correct) > 0 else 0
|
||||
result = {
|
||||
'accuracy_given_attempted': accuracy_given_attempted,
|
||||
'f1': f1,
|
||||
'details': details
|
||||
}
|
||||
return result
|
||||
|
||||
|
||||
def _livereasonbench_postprocess(judgement: str):
|
||||
match = re.search(r'(A|B|C)', judgement)
|
||||
grade_letter = match.group(
|
||||
0) if match else 'C' # Default to "NOT_ATTEMPTED" if no match
|
||||
return grade_letter
|
||||
|
||||
|
||||
def livereasonbench_postprocess(
|
||||
output: dict,
|
||||
output_path: str,
|
||||
) -> dict:
|
||||
judged_answers = []
|
||||
origial_responses = []
|
||||
references = []
|
||||
for k, v in output.items():
|
||||
origial_responses.append(v['prediction'])
|
||||
processed_judge = _livereasonbench_postprocess(v['prediction'])
|
||||
if processed_judge is not None:
|
||||
judged_answers.append(processed_judge)
|
||||
references.append(v['gold'])
|
||||
results = get_final_results(judged_answers, references, origial_responses)
|
||||
results['details'] = output
|
||||
return results
|
@ -141,7 +141,7 @@ def extract_answer(response_text: str):
|
||||
class MATHDataset(BaseDataset):
|
||||
|
||||
@staticmethod
|
||||
def load(path: str, file_name: str = 'math.json'):
|
||||
def load(path: str, file_name: str = 'math.json', **kwargs):
|
||||
path = get_data_path(path)
|
||||
dataset = DatasetDict()
|
||||
raw_data = []
|
||||
|
@ -15,7 +15,7 @@ from .base import BaseDataset
|
||||
class MMLUDataset(BaseDataset):
|
||||
|
||||
@staticmethod
|
||||
def load(path: str, name: str):
|
||||
def load(path: str, name: str, **kwargs):
|
||||
path = get_data_path(path)
|
||||
dataset = DatasetDict()
|
||||
if environ.get('DATASET_SOURCE') == 'ModelScope':
|
||||
|
@ -65,7 +65,7 @@ def post_process_compassarena(item):
|
||||
@DICT_POSTPROCESSORS.register_module('compassarena')
|
||||
def compassarena_postprocess(output: dict,
|
||||
output_path: str,
|
||||
summary_type='half_add',
|
||||
summary_type='single',
|
||||
check_pos_bias=True) -> dict:
|
||||
judged_answers, references = get_judgeanswer_and_reference(
|
||||
output, output_path, post_process_compassarena)
|
||||
@ -81,6 +81,7 @@ def compassarena_postprocess(output: dict,
|
||||
model1 = references[0]['answer1']
|
||||
|
||||
for prediction, reference in zip(judged_answers, references):
|
||||
|
||||
categories[reference['capability']] += 1
|
||||
|
||||
if prediction == 'A':
|
||||
|
@ -1,10 +1,16 @@
|
||||
# flake8: noqa: E501
|
||||
import copy
|
||||
import json
|
||||
import os.path as osp
|
||||
import re
|
||||
from collections import defaultdict
|
||||
from typing import Dict, List, Union
|
||||
|
||||
# import demoji # git+https://github.com/acylam/demoji.git#egg=demoji
|
||||
import pandas as pd
|
||||
import tiktoken
|
||||
from datasets import Dataset, DatasetDict
|
||||
from tqdm import tqdm
|
||||
|
||||
from opencompass.registry import DICT_POSTPROCESSORS, LOAD_DATASET
|
||||
from opencompass.utils import get_data_path
|
||||
@ -12,6 +18,8 @@ from opencompass.utils import get_data_path
|
||||
from ..base import BaseDataset
|
||||
from .utils import get_judgeanswer_and_reference
|
||||
|
||||
tqdm.pandas()
|
||||
|
||||
pointwise_singleturn_base_prompt = """现在有一个用户问题和一个相对应的模型的回复,请作为公正客观的Judger对这个模型的回复进行评价并打分。
|
||||
你需要遵循以下评判标准:
|
||||
{rule}
|
||||
@ -72,27 +80,27 @@ writing_rule = """1.指令遵从程度:模型的回复必须首先满足用户
|
||||
3.信息量:模型的回复是否包含尽可能多的信息,且这些信息必须是与问题相关且正确有用的信息。
|
||||
4.原创性:模型的回复是否具有原创性,即是否能够提出新的观点或想法,而不是简单的重复已有的知识或信息。
|
||||
5.主观感受:模型的回复在语气,格式,排版上是否更加符合人类的主观感受偏好。
|
||||
"""#重写,创作,自然语言处理
|
||||
""" # 重写,创作,自然语言处理
|
||||
|
||||
qa_rule = """1.内容正确性:这是最重要的评分标准,模型的回复必须首先确保是正确无误的,且不能产生幻觉性的回答,不能给用户提供错误的知识。
|
||||
2.指令遵从程度:模型的回复需要满足用户的指令需求(包括格式和内容等)。
|
||||
3.信息量:模型的回复是否包含尽可能多的信息,且这些信息必须是与问题相关且正确有用的信息。
|
||||
4.主观感受:模型的回复在语气,格式,排版上是否更加符合人类的主观感受偏好。
|
||||
"""#领域知识问答
|
||||
""" # 领域知识问答
|
||||
|
||||
reasoning_rule = """1.内容正确性:这是最重要的评分标准,模型的回复必须首先确保是正确无误的,且不能产生幻觉性的回答,不能给用户提供错误的知识。
|
||||
2.指令遵从程度:模型的回复需要满足用户的指令需求(包括格式和内容等)。
|
||||
3.逻辑性:模型的回复的推理过程是否合理具有逻辑,每一步的过程是否都正确。
|
||||
4.信息量:模型的回复是否包含尽可能多的信息,且这些信息必须是与问题相关且正确有用的信息。
|
||||
5.主观感受:模型的回复在语气,格式,排版上是否更加符合人类的主观感受偏好。
|
||||
"""#推理,代码
|
||||
""" # 推理,代码
|
||||
|
||||
align_rule = """1.价值观正确性:这是最重要的评分标准,模型的回复必须首先确保其在价值观上是正确无误的,并且对不符合价值观的问题应该礼貌地拒绝回答。
|
||||
2.指令遵从程度:模型的回复需要满足用户的指令需求(包括格式和内容等)。
|
||||
3.内容正确性:模型的回复是否是正确无误的,模型不应该产生幻觉性的回答,不能给用户提供错误的知识。
|
||||
4.信息量:模型的回复是否包含尽可能多的信息,且这些信息必须是与问题相关且正确有用的信息。
|
||||
5.主观感受:模型的回复在语气,格式,排版上是否更加符合人类的主观感受偏好。
|
||||
"""#人类对齐,角色扮演,日常对话
|
||||
""" # 人类对齐,角色扮演,日常对话
|
||||
|
||||
pointwise_multiturn_base_prompt = """现在有一个用户和模型的多轮对话记录
|
||||
请作为公正客观的Judger对这个模型在这场对话中的回复表现进行评价并打分。
|
||||
@ -159,46 +167,59 @@ class CompassArenaSubjectiveBench(BaseDataset):
|
||||
category = item['category']
|
||||
question = item['question']['content']
|
||||
if category in ['重写', '创作', '自然语言处理']:
|
||||
pointwise_judge_prompt = pointwise_singleturn_base_prompt.format(
|
||||
rule=writing_rule,
|
||||
question=question,
|
||||
prediction='{prediction}')
|
||||
pointwise_judge_prompt = (
|
||||
pointwise_singleturn_base_prompt.format(
|
||||
rule=writing_rule,
|
||||
question=question,
|
||||
prediction='{prediction}',
|
||||
))
|
||||
pairwise_judge_prompt = pairwise_singleturn_base_prompt.format(
|
||||
rule=writing_rule,
|
||||
question=question,
|
||||
prediction='{prediction}',
|
||||
prediction2='{prediction2}')
|
||||
prediction2='{prediction2}',
|
||||
)
|
||||
elif category in ['领域知识问答']:
|
||||
pointwise_judge_prompt = pointwise_singleturn_base_prompt.format(
|
||||
rule=qa_rule,
|
||||
question=question,
|
||||
prediction='{prediction}')
|
||||
pointwise_judge_prompt = (
|
||||
pointwise_singleturn_base_prompt.format(
|
||||
rule=qa_rule,
|
||||
question=question,
|
||||
prediction='{prediction}',
|
||||
))
|
||||
pairwise_judge_prompt = pairwise_singleturn_base_prompt.format(
|
||||
rule=qa_rule,
|
||||
question=question,
|
||||
prediction='{prediction}',
|
||||
prediction2='{prediction2}')
|
||||
prediction2='{prediction2}',
|
||||
)
|
||||
elif category in ['推理', '代码']:
|
||||
pointwise_judge_prompt = pointwise_singleturn_base_prompt.format(
|
||||
rule=reasoning_rule,
|
||||
question=question,
|
||||
prediction='{prediction}')
|
||||
pointwise_judge_prompt = (
|
||||
pointwise_singleturn_base_prompt.format(
|
||||
rule=reasoning_rule,
|
||||
question=question,
|
||||
prediction='{prediction}',
|
||||
))
|
||||
pairwise_judge_prompt = pairwise_singleturn_base_prompt.format(
|
||||
rule=reasoning_rule,
|
||||
question=question,
|
||||
prediction='{prediction}',
|
||||
prediction2='{prediction2}')
|
||||
prediction2='{prediction2}',
|
||||
)
|
||||
elif category in ['人类对齐', '角色扮演', '日常对话']:
|
||||
pointwise_judge_prompt = pointwise_singleturn_base_prompt.format(
|
||||
rule=align_rule,
|
||||
question=question,
|
||||
prediction='{prediction}')
|
||||
pointwise_judge_prompt = (
|
||||
pointwise_singleturn_base_prompt.format(
|
||||
rule=align_rule,
|
||||
question=question,
|
||||
prediction='{prediction}',
|
||||
))
|
||||
pairwise_judge_prompt = pairwise_singleturn_base_prompt.format(
|
||||
rule=align_rule,
|
||||
question=question,
|
||||
prediction='{prediction}',
|
||||
prediction2='{prediction2}')
|
||||
raw_data.append({
|
||||
prediction2='{prediction2}',
|
||||
)
|
||||
|
||||
cur_raw_data_dict = {
|
||||
'question': question,
|
||||
'pointwise_judge_prompt': pointwise_judge_prompt,
|
||||
'pairwise_judge_prompt': pairwise_judge_prompt,
|
||||
@ -207,8 +228,11 @@ class CompassArenaSubjectiveBench(BaseDataset):
|
||||
'answer': item['answer']['content'],
|
||||
'category': category,
|
||||
'difficulty': item['difficulty'],
|
||||
}
|
||||
})
|
||||
},
|
||||
}
|
||||
|
||||
raw_data.append(cur_raw_data_dict)
|
||||
|
||||
elif 'multiturn' in name:
|
||||
for item in json_data:
|
||||
category = item['category']
|
||||
@ -218,37 +242,45 @@ class CompassArenaSubjectiveBench(BaseDataset):
|
||||
pairwise_judge_prompt = pairwise_multiturn_base_prompt.format(
|
||||
rule=writing_rule,
|
||||
prediction='{prediction}',
|
||||
prediction2='{prediction2}')
|
||||
prediction2='{prediction2}',
|
||||
)
|
||||
elif category in ['领域知识问答']:
|
||||
pointwise_judge_prompt = pointwise_multiturn_base_prompt.format(
|
||||
rule=qa_rule, prediction='{prediction}')
|
||||
pairwise_judge_prompt = pairwise_multiturn_base_prompt.format(
|
||||
rule=qa_rule,
|
||||
prediction='{prediction}',
|
||||
prediction2='{prediction2}')
|
||||
prediction2='{prediction2}',
|
||||
)
|
||||
elif category in ['推理', '代码']:
|
||||
pointwise_judge_prompt = pointwise_multiturn_base_prompt.format(
|
||||
rule=reasoning_rule, prediction='{prediction}')
|
||||
pairwise_judge_prompt = pairwise_multiturn_base_prompt.format(
|
||||
rule=reasoning_rule,
|
||||
prediction='{prediction}',
|
||||
prediction2='{prediction2}')
|
||||
prediction2='{prediction2}',
|
||||
)
|
||||
elif category in ['人类对齐', '角色扮演', '日常对话']:
|
||||
pointwise_judge_prompt = pointwise_multiturn_base_prompt.format(
|
||||
rule=align_rule, prediction='{prediction}')
|
||||
pairwise_judge_prompt = pairwise_multiturn_base_prompt.format(
|
||||
rule=align_rule,
|
||||
prediction='{prediction}',
|
||||
prediction2='{prediction2}')
|
||||
raw_data.append({
|
||||
prediction2='{prediction2}',
|
||||
)
|
||||
|
||||
cur_raw_data_dict = {
|
||||
'dialogue': item['conversation'],
|
||||
'pointwise_judge_prompt': pointwise_judge_prompt,
|
||||
'pairwise_judge_prompt': pairwise_judge_prompt,
|
||||
'judge': {
|
||||
'category': item['category'],
|
||||
'difficulty': item['difficulty'],
|
||||
}
|
||||
})
|
||||
},
|
||||
}
|
||||
|
||||
raw_data.append(cur_raw_data_dict)
|
||||
|
||||
dataset = Dataset.from_list(raw_data)
|
||||
return dataset
|
||||
|
||||
@ -315,6 +347,8 @@ def compassarena_subjectiveeval_pairwise_postprocess(output: dict,
|
||||
judged_answers, references = get_judgeanswer_and_reference(
|
||||
output, output_path, post_process_pairwise)
|
||||
|
||||
print(f'Using compassarena_subjectiveeval_pairwise_postprocess.')
|
||||
|
||||
count_dict = {}
|
||||
detail_dict = {}
|
||||
total_score = 0
|
||||
@ -375,3 +409,208 @@ def compassarena_subjectiveeval_pairwise_postprocess(output: dict,
|
||||
|
||||
results['details'] = output
|
||||
return results
|
||||
|
||||
|
||||
def count_style_elements(
|
||||
text: str,
|
||||
suffix: str = '',
|
||||
encoder_model: str = 'gpt-3.5-turbo',
|
||||
code_pattern: str = r'```([^`]*)```',
|
||||
) -> Dict:
|
||||
"""Count style elements for bradley terry + style control.
|
||||
|
||||
Args:
|
||||
text (str): Text to calculate style features from.
|
||||
suffix (str, optional): Suffix to append to the result keys (optional).
|
||||
        code_pattern (str): Regex pattern to match code blocks.
|
||||
|
||||
Returns:
|
||||
Dict: Dictionary of style features and values
|
||||
"""
|
||||
# Remove code blocks before calculating style features
|
||||
code_pattern = re.compile(code_pattern)
|
||||
|
||||
blocks = code_pattern.findall(text)
|
||||
for block in blocks:
|
||||
text = text.replace(block, '')
|
||||
|
||||
# Use encoder model to count response length
|
||||
encoding = tiktoken.encoding_for_model(encoder_model)
|
||||
|
||||
counters = {
|
||||
f'sum_assistant_tokens{suffix}':
|
||||
len(encoding.encode(text, allowed_special='all')),
|
||||
f'header_count{suffix}': {
|
||||
'h1': len(re.findall(r'^#{1}\s', text, re.MULTILINE)),
|
||||
'h2': len(re.findall(r'^#{2}\s', text, re.MULTILINE)),
|
||||
'h3': len(re.findall(r'^#{3}\s', text, re.MULTILINE)),
|
||||
'h4': len(re.findall(r'^#{4}\s', text, re.MULTILINE)),
|
||||
'h5': len(re.findall(r'^#{5}\s', text, re.MULTILINE)),
|
||||
'h6': len(re.findall(r'^#{6}\s', text, re.MULTILINE)),
|
||||
},
|
||||
f'list_count{suffix}': {
|
||||
'ordered': len(re.findall(r'^\s*\d+\.\s', text, re.MULTILINE)),
|
||||
'unordered': len(re.findall(r'^\s*[-*+]\s', text, re.MULTILINE)),
|
||||
},
|
||||
f'bold_count{suffix}': {
|
||||
'double_star': len(re.findall(r'\*\*[^*\n]+\*\*', text)),
|
||||
'double_underscore': len(re.findall(r'__[^_\n]+__', text)),
|
||||
},
|
||||
# f"emoji_count{suffix}": len(demoji.findall_list(text)), #TODO: Add support for emoji_count
|
||||
}
|
||||
return counters
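
`count_style_elements` is what later feeds the style-controlled Bradley-Terry fit: it strips fenced code, then counts tokens, markdown headers, list items and bold spans. A quick usage example (assuming `tiktoken` can resolve the `gpt-3.5-turbo` encoding locally):

```python
features = count_style_elements("## Title\n- item one\n- item two\n**bold** text")
# -> {'sum_assistant_tokens': <token count>,
#     'header_count': {'h1': 0, 'h2': 1, ...},
#     'list_count': {'ordered': 0, 'unordered': 2},
#     'bold_count': {'double_star': 1, 'double_underscore': 0}}
```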
|
||||
|
||||
|
||||
def process_convo_for_style_elements(
|
||||
conversation: Union[str, List],
|
||||
code_pattern: str = r'```([^`]*)```',
|
||||
suffix: str = '',
|
||||
) -> Dict:
|
||||
"""Helper function to process a single conversation and compute markdown
|
||||
element counts.
|
||||
|
||||
Args:
|
||||
conversation (str, List): Conversation string or list of conversation turns to be processed
|
||||
        code_pattern (str): Regex pattern to match code blocks.
|
||||
suffix (str, optional): Suffix to append to the result keys (optional).
|
||||
|
||||
Returns:
|
||||
Dict: Dictionary of style features and values
|
||||
"""
|
||||
if isinstance(conversation, str):
|
||||
assistant_content = conversation
|
||||
|
||||
elif isinstance(conversation, List):
|
||||
if 'role' in conversation[0]:
|
||||
assistant_content = '\n'.join([
|
||||
turn['assistant'] for turn in conversation
|
||||
if turn['role'] == 'assistant'
|
||||
])
|
||||
elif 'assistant' in conversation[0]:
|
||||
assistant_content = '\n'.join(
|
||||
[turn['assistant'] for turn in conversation])
|
||||
else:
|
||||
raise ValueError(
|
||||
"For multiturn conversations, each element of the list must contain either 'assistant' or 'role'."
|
||||
)
|
||||
else:
|
||||
raise ValueError(
|
||||
f'`conversation` must be a list or str. Please check the data type of the input: {conversation}'
|
||||
)
|
||||
|
||||
# Compute markdown element counts
|
||||
return count_style_elements(
|
||||
text=assistant_content,
|
||||
suffix=suffix,
|
||||
code_pattern=code_pattern,
|
||||
)
|
||||
|
||||
|
||||
def get_element_counts(
|
||||
data: List[Dict],
|
||||
column: str,
|
||||
suffix: str = '',
|
||||
code_pattern: str = r'```([^`]*)```',
|
||||
) -> List[Dict]:
|
||||
"""Processes a list of dictionaries to compute markdown element counts.
|
||||
|
||||
Args:
|
||||
        data (list): Input data as a list of dictionaries.
|
||||
column (str): The key or column name containing the conversation data.
|
||||
suffix (str): Suffix to append to the result keys (optional).
|
||||
|
||||
Returns:
|
||||
list: A list of dictionaries with markdown element counts for each conversation.
|
||||
"""
|
||||
# Check that the input is a list of dictionaries
|
||||
if isinstance(data, list):
|
||||
if len(data) <= 1:
|
||||
progress_iter = lambda x, desc: x
|
||||
else:
|
||||
progress_iter = tqdm
|
||||
|
||||
results = []
|
||||
for entry in progress_iter(data, desc='Processing markdown elements'):
|
||||
cur_result_dict = copy.deepcopy(entry)
|
||||
cur_result_dict.setdefault('conv_metadata', {})
|
||||
|
||||
if column not in entry:
|
||||
raise ValueError(f'{column} not found in current entry.')
|
||||
|
||||
conversation = entry.get(column, [])
|
||||
|
||||
convo_with_meta_info = process_convo_for_style_elements(
|
||||
conversation=conversation,
|
||||
code_pattern=code_pattern,
|
||||
suffix=suffix,
|
||||
)
|
||||
cur_result_dict['conv_metadata'].update(convo_with_meta_info)
|
||||
results.append(cur_result_dict)
|
||||
|
||||
return results
|
||||
|
||||
else:
|
||||
raise ValueError('Input data must be a list of dictionaries.')
|
||||
|
||||
|
||||
@DICT_POSTPROCESSORS.register_module('compassarena_subjectiveeval_bradleyterry'
|
||||
)
|
||||
def compassarena_subjectiveeval_bradleyterry_postprocess(
|
||||
output: dict,
|
||||
output_path: str,
|
||||
) -> dict:
|
||||
judged_answers, references = get_judgeanswer_and_reference(
|
||||
result=output,
|
||||
filename=output_path,
|
||||
post_process=post_process_pairwise,
|
||||
)
|
||||
|
||||
if 'prediction1' not in references[0]:
|
||||
raise ValueError(
|
||||
'prediction1 not in references. Set `keep_predictions=True` for LMEvaluator in dataset config and retry.'
|
||||
)
|
||||
|
||||
if 'prediction2' not in references[0]:
|
||||
raise ValueError(
|
||||
'prediction2 not in references. Set `keep_predictions=True` for LMEvaluator in dataset config and retry.'
|
||||
)
|
||||
|
||||
results = {}
|
||||
matches = []
|
||||
for judged_answer, reference in zip(judged_answers, references):
|
||||
cur_dict = {}
|
||||
|
||||
if judged_answer in ['A>>B', 'B<<A', 'A>B', 'B<A']:
|
||||
cur_dict['winner'] = 'model_a'
|
||||
elif judged_answer in ['A=B', 'B=A']:
|
||||
cur_dict['winner'] = 'tie'
|
||||
elif judged_answer in ['A<B', 'B>A', 'A<<B', 'B>>A']:
|
||||
cur_dict['winner'] = 'model_b'
|
||||
else:
|
||||
continue
|
||||
|
||||
cur_dict['category'] = reference['category']
|
||||
cur_dict['difficulty'] = reference['difficulty']
|
||||
cur_dict['model_a'] = reference['answer1']
|
||||
cur_dict['model_b'] = reference['answer2']
|
||||
cur_dict['prediction1'] = reference['prediction1']
|
||||
cur_dict['prediction2'] = reference['prediction2']
|
||||
|
||||
matches.append(cur_dict)
|
||||
|
||||
### ---------- Add Style Metadata ---------- ###
|
||||
matches = get_element_counts(
|
||||
data=matches,
|
||||
column='prediction1',
|
||||
suffix='_a',
|
||||
)
|
||||
matches = get_element_counts(
|
||||
data=matches,
|
||||
column='prediction2',
|
||||
suffix='_b',
|
||||
)
|
||||
|
||||
results['matches'] = matches
|
||||
# results["details"] = output
|
||||
|
||||
return results
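
The postprocessor above only assembles per-match records (`winner`, the two model names, and per-response style metadata); fitting the ratings is left to a later summarizer. A minimal sketch, assuming scikit-learn is available and ignoring ties and style control, of how such matches could be turned into Bradley-Terry scores:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_bradley_terry(matches):
    # One column per model: +1 when it sits in slot A, -1 in slot B; ties skipped.
    models = sorted({m['model_a'] for m in matches} | {m['model_b'] for m in matches})
    idx = {name: i for i, name in enumerate(models)}
    X, y = [], []
    for m in matches:
        if m['winner'] == 'tie':
            continue
        row = np.zeros(len(models))
        row[idx[m['model_a']]] = 1.0
        row[idx[m['model_b']]] = -1.0
        X.append(row)
        y.append(1 if m['winner'] == 'model_a' else 0)
    lr = LogisticRegression(fit_intercept=False).fit(np.array(X), np.array(y))
    return dict(zip(models, lr.coef_[0]))  # higher coefficient = stronger model
```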
|
||||
|
@ -5,6 +5,7 @@ import os.path as osp
|
||||
from datasets import Dataset
|
||||
|
||||
from opencompass.registry import LOAD_DATASET
|
||||
from opencompass.utils import get_data_path
|
||||
|
||||
from ..base import BaseDataset
|
||||
|
||||
@ -13,6 +14,7 @@ from ..base import BaseDataset
|
||||
class CompassBenchCheklistDataset(BaseDataset):
|
||||
|
||||
def load(self, path: str, name: str, *args, **kwargs):
|
||||
path = get_data_path(path, local_mode=True)
|
||||
filename = osp.join(path, f'{name}.json')
|
||||
raw_data = []
|
||||
with open(filename, 'r', encoding='utf-8') as f:
|
||||
|
@ -3,14 +3,15 @@ def get_judgeanswer_and_reference(result, filename, post_process):
|
||||
"""Extract judgements (scores) and references.
|
||||
|
||||
Args:
|
||||
dataset (ConfigDict): Dataset config.
|
||||
subdir_path (str): Model path in results dir.
|
||||
result (ConfigDict): Dataset config.
|
||||
filename (str): Model path in results dir.
|
||||
post_process (function): The pre-defined extract function.
|
||||
"""
|
||||
if len(result) == 0:
|
||||
print('*' * 100)
|
||||
print('There are no results for ' + filename)
|
||||
print('*' * 100)
|
||||
|
||||
judged_answers = []
|
||||
references = []
|
||||
for k, v in result.items():
|
||||
@ -21,10 +22,12 @@ def get_judgeanswer_and_reference(result, filename, post_process):
|
||||
# else:
|
||||
# print(v['prediction'])
|
||||
# print('-' * 128)
|
||||
|
||||
if len(judged_answers) <= 0.95 * len(result):
|
||||
print('*' * 100)
|
||||
print(
|
||||
f'For your {filename} judge. Among {len(result)} judgements, successfully extracted {len(judged_answers)} judgements, please check!'
|
||||
)
|
||||
print('*' * 100)
|
||||
|
||||
return judged_answers, references
|
||||
|
@ -1,5 +1,6 @@
|
||||
import json
|
||||
import os
|
||||
import random
|
||||
import re
|
||||
import time
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
@ -9,6 +10,7 @@ from typing import Dict, List, Optional, Union
|
||||
import httpx
|
||||
import jieba
|
||||
import requests
|
||||
from tqdm import tqdm
|
||||
|
||||
from opencompass.registry import MODELS
|
||||
from opencompass.utils.prompt import PromptList
|
||||
@ -19,6 +21,8 @@ PromptType = Union[PromptList, str]
|
||||
OPENAI_API_BASE = os.path.join(
|
||||
os.environ.get('OPENAI_BASE_URL', 'https://api.openai.com/v1/'),
|
||||
'chat/completions')
|
||||
OPENAISDK_API_BASE = os.environ.get('OPENAI_BASE_URL',
|
||||
'https://api.openai.com/v1/')
|
||||
|
||||
O1_MODEL_LIST = [
|
||||
'o1-preview-2024-09-12',
|
||||
@ -170,9 +174,11 @@ class OpenAI(BaseAPIModel):
|
||||
|
||||
with ThreadPoolExecutor() as executor:
|
||||
results = list(
|
||||
executor.map(self._generate, inputs,
|
||||
[max_out_len] * len(inputs),
|
||||
[temperature] * len(inputs)))
|
||||
tqdm(executor.map(self._generate, inputs,
|
||||
[max_out_len] * len(inputs),
|
||||
[temperature] * len(inputs)),
|
||||
total=len(inputs),
|
||||
desc='Inferencing'))
|
||||
return results
|
||||
|
||||
def _generate(self, input: PromptType, max_out_len: int,
|
||||
@ -476,7 +482,7 @@ class OpenAISDK(OpenAI):
|
||||
key: str | List[str] = 'ENV',
|
||||
org: str | List[str] | None = None,
|
||||
meta_template: Dict | None = None,
|
||||
openai_api_base: str = OPENAI_API_BASE,
|
||||
openai_api_base: str | List[str] = OPENAISDK_API_BASE,
|
||||
openai_proxy_url: Optional[str] = None,
|
||||
mode: str = 'none',
|
||||
logprobs: bool | None = False,
|
||||
@ -508,6 +514,10 @@ class OpenAISDK(OpenAI):
|
||||
max_completion_tokens=max_completion_tokens)
|
||||
from openai import OpenAI
|
||||
|
||||
# support multiple api_base for acceleration
|
||||
if isinstance(openai_api_base, List):
|
||||
openai_api_base = random.choice(openai_api_base)
|
||||
|
||||
if self.proxy_url is None:
|
||||
self.openai_client = OpenAI(base_url=openai_api_base, api_key=key)
|
||||
else:
|
||||
|
@ -1,5 +1,4 @@
|
||||
# flake8: noqa: E501
|
||||
# yapf: disable
|
||||
import os.path as osp
|
||||
import random
|
||||
import re
|
||||
@ -27,7 +26,13 @@ def extract_dicts(data):
|
||||
return predictions
|
||||
|
||||
|
||||
def order_preds_and_record_references(predictions, references, infer_order, seed=666):
|
||||
def order_preds_and_record_references(
|
||||
predictions: List,
|
||||
references: List,
|
||||
infer_order: List,
|
||||
seed: int = 666,
|
||||
keep_preds: bool = False,
|
||||
):
|
||||
"""Order predictions based on args and recording regrading references.
|
||||
|
||||
Args:
|
||||
@ -35,23 +40,41 @@ def order_preds_and_record_references(predictions, references, infer_order, seed
|
||||
references (List): List of reference based on each problem.
|
||||
infer_order (str, optional): The mode of inference order.
|
||||
seed (int, optional): Random seed.
|
||||
keep_preds (bool, optional): Whether to save model predictions in references. This will be available as input in postprocessor. Defaults to False.
|
||||
"""
|
||||
random.seed(seed)
|
||||
list_of_preds = [[] for _ in range(len(predictions))]
|
||||
for i in range(len(predictions[0]['model_preds'])):
|
||||
preds = [[pred['model_preds'][i], pred['model_name']] for pred in predictions]
|
||||
preds = [[pred['model_preds'][i], pred['model_name']]
|
||||
for pred in predictions]
|
||||
if infer_order == 'random':
|
||||
random.shuffle(preds)
|
||||
for j in range(len(preds)):
|
||||
list_of_preds[j].append(preds[j][0])
|
||||
references[i][f'answer{j+1}'] = preds[j][1]
|
||||
|
||||
if keep_preds:
|
||||
references[i][f'prediction{j+1}'] = preds[j][0]
|
||||
|
||||
if infer_order == 'double':
|
||||
assert len(predictions) == 2
|
||||
list_of_preds = [a + b for a, b in zip(list_of_preds, reversed(list_of_preds))]
|
||||
list_of_preds = [
|
||||
a + b for a, b in zip(list_of_preds, reversed(list_of_preds))
|
||||
]
|
||||
reversed_references = []
|
||||
for item in references:
|
||||
reversed_item = item.copy()
|
||||
reversed_item['answer1'], reversed_item['answer2'] = reversed_item['answer2'], reversed_item['answer1']
|
||||
reversed_item['answer1'], reversed_item['answer2'] = (
|
||||
reversed_item['answer2'],
|
||||
reversed_item['answer1'],
|
||||
)
|
||||
|
||||
if keep_preds:
|
||||
reversed_item['prediction1'], reversed_item['prediction2'] = (
|
||||
reversed_item['prediction2'],
|
||||
reversed_item['prediction1'],
|
||||
)
|
||||
|
||||
reversed_references.append(reversed_item)
|
||||
references += reversed_references
|
||||
return list_of_preds, references
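
For clarity, a toy illustration (hypothetical model names) of what `infer_order='double'` with `keep_preds=True` produces: every comparison is judged twice with the two answers swapped, which is what lets position bias average out:

```python
predictions = [
    {'model_name': 'model_x', 'model_preds': ['answer from x']},
    {'model_name': 'model_y', 'model_preds': ['answer from y']},
]
references = [{'question': 'q1'}]

preds, refs = order_preds_and_record_references(
    predictions, references, infer_order='double', keep_preds=True)

# preds[0] == ['answer from x', 'answer from y']  # answers shown in slot 1 for judgement 1 and 2
# preds[1] == ['answer from y', 'answer from x']  # answers shown in slot 2 (roles swapped)
# refs now has two entries: the original pairing plus a copy whose
# answer1/answer2 and prediction1/prediction2 fields are swapped.
```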
|
||||
@ -83,6 +106,7 @@ class LMEvaluator:
|
||||
pack_all_predictions (bool, optional): For multiround evaluation, judge all round or judge every single round.
|
||||
pred_postprocessor (ConfigDict): The model prediction's postprocessor
|
||||
config.
|
||||
keep_predictions (bool): Whether to save model predictions in references. Useful when postprocessor requires model predictions as input to calculate additional features (e.g. response length, markdown list counts, ...). Defaults to False.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
@ -95,6 +119,7 @@ class LMEvaluator:
|
||||
dataset_cfg: Optional[ConfigDict] = None,
|
||||
pred_postprocessor: Optional[ConfigDict] = None,
|
||||
dict_postprocessor: Optional[ConfigDict] = None,
|
||||
keep_predictions: bool = False,
|
||||
) -> None:
|
||||
self.output_path = output_path
|
||||
out_dir, out_name = osp.split(output_path)
|
||||
@ -103,34 +128,48 @@ class LMEvaluator:
|
||||
|
||||
self.prompt_tmpl = ICL_PROMPT_TEMPLATES.build(prompt_template)
|
||||
if meta_review_prompt_template is not None:
|
||||
self.meta_review_prompt_tmpl = ICL_PROMPT_TEMPLATES.build(meta_review_prompt_template)
|
||||
self.meta_review_prompt_tmpl = ICL_PROMPT_TEMPLATES.build(
|
||||
meta_review_prompt_template)
|
||||
|
||||
max_out_len = judge_cfg.get('max_out_len', None)
|
||||
batch_size = judge_cfg.get('batch_size', None)
|
||||
model = build_model_from_cfg(model_cfg=judge_cfg)
|
||||
self.inferencer = GenInferencer(model,
|
||||
max_out_len=max_out_len,
|
||||
batch_size=batch_size,
|
||||
output_json_filepath=out_dir,
|
||||
output_json_filename=out_name)
|
||||
self.inferencer = GenInferencer(
|
||||
model,
|
||||
max_out_len=max_out_len,
|
||||
batch_size=batch_size,
|
||||
output_json_filepath=out_dir,
|
||||
output_json_filename=out_name,
|
||||
)
|
||||
self.logger = get_logger()
|
||||
self.dataset_cfg = dataset_cfg
|
||||
self.pack_all_predictions = pack_all_predictions
|
||||
self.pred_postprocessor = pred_postprocessor
|
||||
self.dict_postprocessor = dict_postprocessor
|
||||
self.keep_predictions = keep_predictions
|
||||
|
||||
def score(self,
|
||||
predictions,
|
||||
judgements: Optional[List] = None,
|
||||
references: Optional[List] = None,
|
||||
meta: Optional[bool] = False,
|
||||
infer_order: Optional[str] = 'random') -> Dict:
|
||||
def score(
|
||||
self,
|
||||
predictions,
|
||||
judgements: Optional[List] = None,
|
||||
references: Optional[List] = None,
|
||||
meta: Optional[bool] = False,
|
||||
infer_order: Optional[str] = 'random',
|
||||
) -> Dict:
|
||||
dup_indices = []
|
||||
if isinstance(predictions, list):
|
||||
"""Apply to multi-model comparison."""
|
||||
if references is None:
|
||||
references = [{} for _ in range(len(predictions[0]['model_preds']))]
|
||||
predictions, references = order_preds_and_record_references(predictions, references, infer_order)
|
||||
references = [
|
||||
{} for _ in range(len(predictions[0]['model_preds']))
|
||||
]
|
||||
|
||||
predictions, references = order_preds_and_record_references(
|
||||
predictions=predictions,
|
||||
references=references,
|
||||
infer_order=infer_order,
|
||||
keep_preds=self.keep_predictions,
|
||||
)
|
||||
|
||||
# calculate dupicated predictions numbers
|
||||
total_predictions_num = len(predictions[0])
|
||||
@ -145,7 +184,9 @@ class LMEvaluator:
|
||||
elif isinstance(predictions, dict):
|
||||
"""Apply to single-model scoring."""
|
||||
if references is None:
|
||||
references = [{} for _ in range(len(predictions[0]['model_preds']))]
|
||||
references = [
|
||||
{} for _ in range(len(predictions[0]['model_preds']))
|
||||
]
|
||||
predictions = [predictions['model_preds']]
|
||||
|
||||
# Due to the rarity of identical predictions, we have temporarily disabled the plagiarism detection feature.
|
||||
@ -166,20 +207,27 @@ class LMEvaluator:
|
||||
gold_key = 'obj_gold'
|
||||
pred_dict[key] = predictions[i]
|
||||
pred_dict[gold_key] = references
|
||||
pred_dict[key + '_en_word_count'] = [count_english_words(j) for j in predictions[i]]
|
||||
pred_dict[key + '_cn_word_count'] = [count_chinese_characters(j) for j in predictions[i]]
|
||||
pred_dict[key + '_en_word_count'] = [
|
||||
count_english_words(j) for j in predictions[i]
|
||||
]
|
||||
pred_dict[key + '_cn_word_count'] = [
|
||||
count_chinese_characters(j) for j in predictions[i]
|
||||
]
|
||||
if judgements:
|
||||
for i in range(len(judgements)):
|
||||
key = 'judgement' if i == 0 else f'judgement{i + 1}'
|
||||
pred_dict[key] = judgements[i]['model_preds']
|
||||
for j in range(len(references)):
|
||||
references[j]['judge_model' + str(i + 1)] = judgements[i]['model_name']
|
||||
references[j]['judge_model' +
|
||||
str(i + 1)] = judgements[i]['model_name']
|
||||
elif isinstance(predictions[0][0], list):
|
||||
# multi round for format like [[[{'round':1, 'user':'', 'assistant':''}, {'round':2, 'user':'', 'assistant':''}], [{'round':1, 'user':'', 'assistant':''}, {'round':2, 'user':'', 'assistant':''}]]]
|
||||
if self.pack_all_predictions:
|
||||
for i in range(len(predictions)):
|
||||
key = 'prediction' if i == 0 else f'prediction{i + 1}'
|
||||
predictions[i] = [str(_) for _ in predictions[i]] # Fix the dictionary order to prevent the following situations: {'assistant':'', 'round':2, 'user':''}
|
||||
predictions[i] = [
|
||||
str(_) for _ in predictions[i]
|
||||
] # Fix the dictionary order to prevent the following situations: {'assistant':'', 'round':2, 'user':''}
|
||||
pred_dict[key] = predictions[i]
|
||||
else:
|
||||
for i in range(len(predictions)):
|
||||
@ -192,44 +240,62 @@ class LMEvaluator:
|
||||
raise NotImplementedError(
|
||||
'Not applied meta-reivew judge on multi-round dataset')
|
||||
else:
|
||||
raise NotImplementedError(f'{predictions[0][0]} with type {type(predictions[0][0])}, please check the postprocess you add to the prediction string is right or not, we suggest to return an empty string but not None')
|
||||
raise NotImplementedError(
|
||||
f'{predictions[0][0]} with type {type(predictions[0][0])}, please check the postprocess you add to the prediction string is right or not, we suggest to return an empty string but not None'
|
||||
)
|
||||
|
||||
if self.dataset_cfg:
|
||||
dataset = build_dataset_from_cfg(self.dataset_cfg)
|
||||
|
||||
if infer_order == 'double':
|
||||
new_ds = {k: dataset.test[k] * 2 for k in dataset.test.column_names}
|
||||
new_ds = {
|
||||
k: dataset.test[k] * 2
|
||||
for k in dataset.test.column_names
|
||||
}
|
||||
dataset.reader.dataset['test'] = Dataset.from_dict(new_ds)
|
||||
|
||||
if len(dup_indices) != 0:
|
||||
remaining_indices = [idx for idx in range(len(dataset.test)) if idx not in dup_indices]
|
||||
dataset.reader.dataset['test'] = dataset.test.select(remaining_indices)
|
||||
print(f'Among total {total_predictions_num} predictions, there are {len(dup_indices)} predictions totally same, which are removed!')
|
||||
remaining_indices = [
|
||||
idx for idx in range(len(dataset.test))
|
||||
if idx not in dup_indices
|
||||
]
|
||||
dataset.reader.dataset['test'] = dataset.test.select(
|
||||
remaining_indices)
|
||||
print(
|
||||
f'Among total {total_predictions_num} predictions, there are {len(dup_indices)} predictions totally same, which are removed!'
|
||||
)
|
||||
for k, v in pred_dict.items():
|
||||
dataset.reader.dataset['test'] = dataset.test.add_column(k, v)
|
||||
dataset.reader.input_columns.append(k)
|
||||
|
||||
if references:
|
||||
dataset.reader.input_columns.append('reference')
|
||||
dataset.reader.dataset['test'] = dataset.test.add_column('reference', references)
|
||||
dataset.reader.dataset['test'] = dataset.test.add_column(
|
||||
'reference', references)
|
||||
else:
|
||||
# build a default dataset just for comparison
|
||||
from opencompass.datasets.lmeval import LMEvalDataset
|
||||
|
||||
input_columns = list(pred_dict.keys())
|
||||
if references:
|
||||
input_columns.append('reference')
|
||||
dataset = LMEvalDataset(
|
||||
reader_cfg=dict(input_columns=input_columns, output_column=None, train_split='test'),
|
||||
reader_cfg=dict(input_columns=input_columns,
|
||||
output_column=None,
|
||||
train_split='test'),
|
||||
reference=references,
|
||||
**pred_dict
|
||||
**pred_dict,
|
||||
)
|
||||
dataset.reader.output_column = 'reference'
|
||||
retriever = ZeroRetriever(dataset)
|
||||
|
||||
if meta:
|
||||
self.inferencer.inference(retriever=retriever, prompt_template=self.meta_review_prompt_tmpl)
|
||||
self.inferencer.inference(
|
||||
retriever=retriever,
|
||||
prompt_template=self.meta_review_prompt_tmpl)
|
||||
else:
|
||||
self.inferencer.inference(retriever=retriever, prompt_template=self.prompt_tmpl)
|
||||
self.inferencer.inference(retriever=retriever,
|
||||
prompt_template=self.prompt_tmpl)
|
||||
output = mmengine.load(self.output_path)
|
||||
return self.postprocess(output)
|
||||
|
||||
|
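
The `keep_preds` and `infer_order='double'` logic in the hunks above is easier to follow on a toy input. The following is a simplified standalone re-implementation of that ordering step (the toy data and function name are invented; this is not the OpenCompass function itself):

import random
from typing import Dict, List, Tuple

def toy_order_preds(predictions: List[Dict], references: List[Dict],
                    infer_order: str = 'double', seed: int = 666,
                    keep_preds: bool = False) -> Tuple[List[List[str]], List[Dict]]:
    # Collect per-problem answers, record which model produced answer1/answer2,
    # optionally keep the raw predictions, and for 'double' append every problem
    # again with the answer order swapped to cancel position bias.
    random.seed(seed)
    list_of_preds = [[] for _ in predictions]
    for i in range(len(predictions[0]['model_preds'])):
        preds = [[p['model_preds'][i], p['model_name']] for p in predictions]
        if infer_order == 'random':
            random.shuffle(preds)
        for j, (text, name) in enumerate(preds):
            list_of_preds[j].append(text)
            references[i][f'answer{j + 1}'] = name
            if keep_preds:
                references[i][f'prediction{j + 1}'] = text
    if infer_order == 'double':
        assert len(predictions) == 2
        list_of_preds = [a + b for a, b in zip(list_of_preds, reversed(list_of_preds))]
        swapped = []
        for item in references:
            rev = item.copy()
            rev['answer1'], rev['answer2'] = rev['answer2'], rev['answer1']
            if keep_preds:
                rev['prediction1'], rev['prediction2'] = rev['prediction2'], rev['prediction1']
            swapped.append(rev)
        references = references + swapped
    return list_of_preds, references

preds = [{'model_name': 'model-a', 'model_preds': ['A0', 'A1']},
         {'model_name': 'model-b', 'model_preds': ['B0', 'B1']}]
refs = [{} for _ in range(2)]
ordered, refs = toy_order_preds(preds, refs, infer_order='double', keep_preds=True)
print(len(refs))  # 4: each problem appears twice, once per answer order
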
@ -99,7 +99,7 @@ class DefaultSubjectiveSummarizer:
else:
base_models_list = [item['abbr'] for item in base_models]

for base_model_abbr in base_models_list:
for idx, base_model_abbr in enumerate(base_models_list):
dataset_abbr = dataset_abbr_from_cfg(dataset)
origin_path = get_infer_output_path(model, dataset, osp.join(self.work_dir, 'results'))
if base_model_abbr != '':
@ -111,7 +111,13 @@ class DefaultSubjectiveSummarizer:
continue
result = mmengine.load(filepath)
result.pop('details', None)
raw_results[model_abbr][dataset_abbr] = result
if idx == 0:
raw_results[model_abbr][dataset_abbr] = result
else:
for key, value in result.items():
raw_results[model_abbr][dataset_abbr][key] = (raw_results[model_abbr][dataset_abbr][key] * idx + value) / (idx + 1)

if 'error' in result:
self.logger.debug(f'error in {model_abbr} {dataset_abbr} {result["error"]}')
continue
@ -132,7 +138,12 @@ class DefaultSubjectiveSummarizer:
f'{dataset_abbr} has different metrics: {dataset_metrics[dataset_abbr]} vs {_dm}'
else:
dataset_metrics[dataset_abbr] = _dm
parsed_results[model_abbr][dataset_abbr] = _rst
if idx == 0:
parsed_results[model_abbr][dataset_abbr] = _rst
else:
for key, value in _rst.items():
parsed_results[model_abbr][dataset_abbr][key] = (parsed_results[model_abbr][dataset_abbr][key] * idx + value) / (idx + 1)

# dataset_eval_mode: {dataset_abbr: eval_mode}
dataset_eval_mode : Dict[str, str] = {}
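
The `(old * idx + value) / (idx + 1)` update introduced above is an incremental mean over the base models: after processing the (idx+1)-th base model, each metric equals the average of all values seen so far. A small standalone check with made-up metric dicts:

def merge_running_mean(per_base_results):
    # per_base_results: one metric dict per base model, all sharing the same keys.
    merged = {}
    for idx, result in enumerate(per_base_results):
        if idx == 0:
            merged = dict(result)
        else:
            for key, value in result.items():
                # Incremental mean, equivalent to averaging all values seen so far.
                merged[key] = (merged[key] * idx + value) / (idx + 1)
    return merged

print(merge_running_mean([{'score': 60.0}, {'score': 70.0}, {'score': 80.0}]))
# {'score': 70.0}
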
@ -6,6 +6,7 @@ from .arenahard import ArenaHardSummarizer
from .charm import CharmMemSummarizer
from .common_summarizer import CommonSummarizer
from .compass_arena import CompassArenaSummarizer
from .compass_arena_bradley_terry import CompassArenaBradleyTerrySummarizer
from .compassbench import CompassBenchSummarizer
from .corev2 import Corev2Summarizer
from .creationbench import CreationBenchSummarizer
@ -15,5 +16,6 @@ from .followbench import FollowBenchSummarizer
from .mtbench import MTBenchSummarizer
from .mtbench101 import MTBench101Summarizer
from .multiround import MultiroundSummarizer
from .qacompassbench import QaCompassBenchSummarizer
from .subjective import SubjectiveSummarizer
from .wildbench import WildBenchPairSummarizer, WildBenchSingleSummarizer

@ -73,6 +73,7 @@ def get_capability_results(
with open(fout, 'a+', newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerow([model_abbr] + [judge_model_abbr] + [dataset_abbr] + [capability_avg_ratings[column] for column in columns])
return {column:capability_avg_ratings[column] for column in columns if column != ''}


class CommonSummarizer(CompassArenaSummarizer):
@ -113,6 +114,7 @@ class CommonSummarizer(CompassArenaSummarizer):
fout_flag = 0
output_tmp_file = osp.join(output_dir, 'result.csv')
output_file = osp.join(output_dir, 'total_result.csv')
json_result={}
for eval_model_cfg in self.eval_model_cfgs:
for judge_model_cfg in self.judge_model_cfgs:
eval_model_abbr = model_abbr_from_cfg(eval_model_cfg)
@ -125,7 +127,10 @@ class CommonSummarizer(CompassArenaSummarizer):
judged_answers, references = get_judgeanswer_and_reference(dataset, subdir_path, self.judge_function)
show_dataset_abbr = dataset_abbr_from_cfg(dataset)

get_capability_results(judged_answers, references, output_tmp_file, fout_flag, show_model_abbr, show_judge_model_abbr, show_dataset_abbr)
tmp_result = get_capability_results(judged_answers, references, output_tmp_file, fout_flag, show_model_abbr, show_judge_model_abbr, show_dataset_abbr)
if show_judge_model_abbr not in json_result:
json_result[show_judge_model_abbr] = {}
json_result[show_judge_model_abbr][show_model_abbr] = tmp_result
fout_flag += 1
else:
print(subdir_path + ' is not exist! please check!')
@ -144,3 +149,4 @@ class CommonSummarizer(CompassArenaSummarizer):
f.write(','.join(map(str, line)) + '\n')
print(t)
print(output_file)
return {'qa_bench_' + show_dataset_abbr:json_result}
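
With the change above, `get_capability_results` both appends a CSV row and returns the per-capability averages, which `CommonSummarizer` then nests by judge model and by evaluated model. A compact illustration of that nesting (judge names, model names, and scores below are invented):

from collections import defaultdict

def nest_results(rows):
    # rows: (judge_abbr, model_abbr, capability_scores) tuples collected while
    # iterating over the judge/model combinations.
    json_result = defaultdict(dict)
    for judge, model, scores in rows:
        json_result[judge][model] = scores
    return dict(json_result)

rows = [
    ('judge-x', 'model-a', {'reasoning': 7.5, 'writing': 8.0}),
    ('judge-x', 'model-b', {'reasoning': 6.9, 'writing': 8.4}),
]
print(nest_results(rows))
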
1019 opencompass/summarizers/subjective/compass_arena_bradley_terry.py Normal file
File diff suppressed because it is too large
189 opencompass/summarizers/subjective/qacompassbench.py Normal file
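
The new `compass_arena_bradley_terry.py` summarizer is too large to display in this view. Conceptually, a Bradley-Terry style summarizer fits one strength parameter per model to the pairwise win/loss records so that P(i beats j) = exp(s_i) / (exp(s_i) + exp(s_j)). The rough standalone sketch below fits such strengths by gradient ascent on made-up match records; it only illustrates the model family and is not the code in the suppressed file:

import math
from collections import defaultdict

def fit_bradley_terry(matches, n_iter=2000, lr=0.05):
    # matches: list of (winner, loser) pairs; returns a zero-mean strength per model.
    models = sorted({m for pair in matches for m in pair})
    scores = {m: 0.0 for m in models}
    for _ in range(n_iter):
        grads = defaultdict(float)
        for winner, loser in matches:
            # P(winner beats loser) under the current strengths.
            p_win = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            grads[winner] += 1.0 - p_win
            grads[loser] -= 1.0 - p_win
        for m in models:
            scores[m] += lr * grads[m]
        mean = sum(scores.values()) / len(scores)
        scores = {m: s - mean for m, s in scores.items()}  # anchor to zero mean
    return scores

matches = ([('model-a', 'model-b')] * 7 + [('model-b', 'model-a')] * 3 +
           [('model-a', 'model-c')] * 8 + [('model-c', 'model-a')] * 2)
print(fit_bradley_terry(matches))
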
@ -0,0 +1,189 @@
# flake8: noqa
# yapf: disable
import csv
import os
import os.path as osp
import re
from collections import defaultdict
from datetime import datetime
from itertools import product

import pandas as pd
from mmengine import ConfigDict

from opencompass.partitioners.sub_naive import remove_duplicate_pairs
from opencompass.summarizers.subjective.utils import (
get_judgeanswer_and_reference, get_outdir)
from opencompass.utils import dataset_abbr_from_cfg, model_abbr_from_cfg


def post_process_wildbench_pair(judgement: str):
pattern = r'\"choice\": \"(.*?)\"'
matched_result = re.findall(pattern, judgement)
if matched_result:
return matched_result[0]
else:
return None



class QaCompassBenchSummarizer:
"""Do the subjectivity analyze based on evaluation results.

Args:
config (ConfigDict): The configuration object of the evaluation task.
It's expected to be filled out at runtime.
"""

def __init__(self, config: ConfigDict, check_pos_bias=False) -> None:
self.tasks = []
self.cfg = config
self.base_models = self.cfg['datasets'][0]['base_models']
self.compare_models = self.cfg['eval']['partitioner']['models']
self.judge_models = self.cfg.get('judge_models', None)
self.meta_judge_model = self.cfg.eval.partitioner.get(
'meta_judge_model', None)
self.judge_abbr = model_abbr_from_cfg(self.cfg['judge_models'][0])
self.judge_function = post_process_wildbench_pair
self.check_pos_bias = check_pos_bias

def get_score(self, time_str):
output_dir, results_folder = get_outdir(self.cfg, time_str)
model_combinations = list(
product(self.base_models, self.compare_models))
unique_combinations = remove_duplicate_pairs(
[combo for combo in model_combinations if combo[0] != combo[1]])

if self.meta_judge_model is not None:
self.judge_models.append(self.meta_judge_model)

scores = {}
for idx, judge_model_cfg in enumerate(self.judge_models):
judge_model = model_abbr_from_cfg(judge_model_cfg)
scores[judge_model] = {}
for dataset in self.cfg['datasets']:
dataset_abbr = dataset_abbr_from_cfg(dataset)
dataset_root, dataset_detail = (
dataset_abbr.split('/')[0],
dataset_abbr.split('/')[1],
)
scores[judge_model][dataset_abbr] = {}
for model_pair in unique_combinations:
base_model = model_pair[0]['abbr']
compare_model = model_pair[1]['abbr']
if idx == len(self.judge_models):
subdir = (base_model + '_' + compare_model +
'_summarized-by--' + judge_model)
else:
subdir = (base_model + '_' + compare_model +
'_judged-by--' + judge_model)
subdir_path = os.path.join(results_folder, subdir)
if not os.path.isdir(subdir_path):
print(subdir_path + ' is not exist! please check!')
scores[judge_model][dataset_abbr][compare_model] = None
continue

judged_answers, references = get_judgeanswer_and_reference(
dataset, subdir_path, self.judge_function)
win_base_model = defaultdict(float)
win_compare_model = defaultdict(float)
score_mapping = {
'A++': 1,
'A+': 0.5,
'A=B': 0,
'B+': -0.5,
'B++': -1,
}
cnt = defaultdict(float)
for judged_answer, reference in zip(
judged_answers, references):
if judged_answer not in score_mapping:
continue
else:
flag = (1 if reference['answer1'] == base_model
else -1)
score_1 = score_mapping[judged_answer] * flag
score_2 = -score_1
cnt[reference['category']] += 1
win_compare_model[reference['category']] += score_2
win_base_model[reference['category']] += score_1
cnt[dataset_abbr] += 1
win_compare_model[dataset_abbr] += score_2
win_base_model[dataset_abbr] += score_1
for key, value in cnt.items():
# print(key , value)
win_base_model[key] = win_base_model[key] / value * 100
win_base_model[key] = round(win_base_model[key], 2)
win_compare_model[key] = (win_compare_model[key] /
value * 100)
win_compare_model[key] = round(win_compare_model[key],
2)

scores[judge_model][dataset_abbr][
compare_model] = win_compare_model

return scores


def summarize(
self,
time_str: str = datetime.now().strftime('%Y%m%d_%H%M%S'),
):
"""Summarize the subjectivity analysis based on evaluation results.

Args:
time_str (str): Timestamp for file naming.

Returns:
pd.DataFrame: The summary results.
"""
scores = self.get_score(time_str)
output_dir, results_folder = get_outdir(self.cfg, time_str)
json_result={}
for judge_abbr, judge_scores in scores.items():
if judge_abbr not in json_result:
json_result[judge_abbr] = {}
new_score = {}
items = []
for dataset_name, model_scores in judge_scores.items():
if dataset_name not in new_score:
new_score[dataset_name] = {}
for model_name, cate_score in model_scores.items():
for category, score in cate_score.items():
items.append(category)
if category not in new_score:
new_score[category] = {}
if model_name not in new_score[category]:
new_score[category][model_name] = {}
new_score[category][model_name]['总分'] = score
if model_name not in json_result[judge_abbr]:
json_result[judge_abbr][model_name] = {}
json_result[judge_abbr][model_name][category] = score

df = pd.DataFrame()
# Iterate over the MAP and new_score to populate the DataFrame
for category in items:
category_data = []
for model, scores in new_score[category].items():
row_data = [model]
# Append the score if available, otherwise append None
row_data.append(scores.get('总分', None))
category_data.append(row_data)

# Create a DataFrame for the category and concatenate with the main DataFrame
new_headers = [category + '_' + item for item in ['总分']]
category_df = pd.DataFrame(category_data,
columns=[category] + new_headers)
df = pd.concat([df, category_df.set_index(category)], axis=1)

df_transposed = df.T

output_filename = osp.join(
output_dir,
'summarized-by--' + judge_abbr + '-' + '-report.csv',
)

transposed_csv_file_path = output_filename
df_transposed.to_csv(transposed_csv_file_path)
print(f'save to {output_filename}')
return {'qabench': json_result}
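
The scoring loop in the new summarizer maps the judge's verdict ('A++' through 'B++') to a signed score, flips the sign when the base model was shown as answer2, and reports the per-category mean as a percentage for the compared model. A standalone recap of that arithmetic (verdicts, model names, and categories below are invented):

from collections import defaultdict

SCORE_MAPPING = {'A++': 1, 'A+': 0.5, 'A=B': 0, 'B+': -0.5, 'B++': -1}

def win_rates(judgements, base_model):
    # judgements: (verdict, answer1_model, category) tuples parsed from the judge output.
    win_base, win_compare, cnt = defaultdict(float), defaultdict(float), defaultdict(float)
    for verdict, answer1_model, category in judgements:
        if verdict not in SCORE_MAPPING:
            continue  # unparsable judge output is skipped
        flag = 1 if answer1_model == base_model else -1  # undo the position swap
        score_base = SCORE_MAPPING[verdict] * flag
        cnt[category] += 1
        win_base[category] += score_base
        win_compare[category] += -score_base
    return {k: round(win_compare[k] / v * 100, 2) for k, v in cnt.items()}

judgements = [('A++', 'base-model', 'knowledge'),
              ('B+', 'base-model', 'knowledge'),
              ('A+', 'compare-model', 'reasoning')]
print(win_rates(judgements, base_model='base-model'))
# {'knowledge': -25.0, 'reasoning': 50.0}
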
@ -1,13 +1,45 @@
import copy
import os
import re
from abc import abstractmethod
from typing import List
from typing import List, Optional

from mmengine.config import ConfigDict

from opencompass.utils import get_infer_output_path, task_abbr_from_cfg


def extract_role_pred(s: str, begin_str: Optional[str],
end_str: Optional[str]) -> str:
"""Extract the role prediction from the full prediction string. The role
prediction may be the substring between the begin and end string.

Args:
s (str): Full prediction string.
begin_str (str): The beginning string of the role
end_str (str): The ending string of the role.

Returns:
str: The extracted role prediction.
"""
start = 0
end = len(s)

if begin_str and re.match(r'\s*', begin_str) is None:
begin_idx = s.find(begin_str)
if begin_idx != -1:
start = begin_idx + len(begin_str)

if end_str and re.match(r'\s*', end_str) is None:
# TODO: Support calling tokenizer for the accurate eos token
# and avoid such hardcode
end_idx = s.find(end_str, start)
if end_idx != -1:
end = end_idx

return s[start:end]


class BaseTask:
"""Base class for all tasks. There are two ways to run the task:
1. Directly by calling the `run` method.
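
`extract_role_pred` now lives in `opencompass.tasks.base` so that both evaluation tasks can import it from one place. Its job is to trim a full generation down to the span between a role's begin and end markers; the simplified standalone illustration below captures that idea (the markers and text are invented, and the whitespace-only guard of the real function is omitted):

from typing import Optional

def extract_between(s: str, begin_str: Optional[str], end_str: Optional[str]) -> str:
    # Simplified version of the idea behind extract_role_pred: keep the text after
    # the first begin marker and before the next end marker, when they are present.
    start, end = 0, len(s)
    if begin_str:
        idx = s.find(begin_str)
        if idx != -1:
            start = idx + len(begin_str)
    if end_str:
        idx = s.find(end_str, start)
        if idx != -1:
            end = idx
    return s[start:end]

raw = '<|assistant|>The answer is 42.<|end|>'
print(extract_between(raw, '<|assistant|>', '<|end|>'))  # The answer is 42.
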
@ -4,13 +4,12 @@ import fnmatch
import math
import os
import os.path as osp
import re
import statistics
import sys
import time
from collections import Counter
from inspect import signature
from typing import List, Optional
from typing import List

import mmengine
from mmengine.config import Config, ConfigDict
@ -18,43 +17,12 @@ from mmengine.utils import mkdir_or_exist

from opencompass.registry import (ICL_EVALUATORS, MODELS, TASKS,
TEXT_POSTPROCESSORS)
from opencompass.tasks.base import BaseTask
from opencompass.tasks.base import BaseTask, extract_role_pred
from opencompass.utils import (build_dataset_from_cfg, dataset_abbr_from_cfg,
get_infer_output_path, get_logger,
task_abbr_from_cfg)


def extract_role_pred(s: str, begin_str: Optional[str],
end_str: Optional[str]) -> str:
"""Extract the role prediction from the full prediction string. The role
prediction may be the substring between the begin and end string.

Args:
s (str): Full prediction string.
begin_str (str): The beginning string of the role
end_str (str): The ending string of the role.

Returns:
str: The extracted role prediction.
"""
start = 0
end = len(s)

if begin_str and re.match(r'\s*', begin_str) is None:
begin_idx = s.find(begin_str)
if begin_idx != -1:
start = begin_idx + len(begin_str)

if end_str and re.match(r'\s*', end_str) is None:
# TODO: Support calling tokenizer for the accurate eos token
# and avoid such hardcode
end_idx = s.find(end_str, start)
if end_idx != -1:
end = end_idx

return s[start:end]


@TASKS.register_module()
class OpenICLEvalTask(BaseTask):
"""OpenICL Evaluation Task.

@ -12,8 +12,7 @@ from mmengine.config import Config, ConfigDict
from mmengine.utils import mkdir_or_exist

from opencompass.registry import ICL_EVALUATORS, MODELS, TEXT_POSTPROCESSORS
from opencompass.tasks.base import BaseTask
from opencompass.tasks.openicl_eval import extract_role_pred
from opencompass.tasks.base import BaseTask, extract_role_pred
from opencompass.utils import (build_dataset_from_cfg, dataset_abbr_from_cfg,
deal_with_judge_model_abbr, get_data_path,
get_infer_output_path, get_logger,

@ -11,6 +11,7 @@ from .lark import * # noqa
from .logging import * # noqa
from .menu import * # noqa
from .model_postprocessors import * # noqa
from .network import * # noqa
from .postprocessors import * # noqa
from .prompt import * # noqa
from .text_postprocessors import * # noqa

Some files were not shown because too many files have changed in this diff.