Linchen Xiao
408f5caff4
[Dataset] Add SuperGPQA subfield configs ( #2124 )
* update
* fix lint
* fix lint
* update precommit
* update precommit
* fix lint
2025-05-28 14:12:58 +08:00
Songyang Zhang
aa2b89b6f8
[Update] Add CascadeEvaluator with Data Replica ( #2022 )
* Update CascadeEvaluator
* Update CascadeEvaluator
* Update CascadeEvaluator
* Update Config
* Update
* Update
* Update
* Update
* Update
* Update
* Update
* Update
* Update
* Update
* Update
* Update
* Update
* Update
* Update
2025-05-20 16:46:55 +08:00
bittersweet1999
9eaa1f6fec
Update icl_judge_evaluator.py ( #2095 )
2025-05-13 10:44:24 +08:00
Dongsheng Zhu
2c79dc5227
[Dataset] Add human_eval/mbpp pro ( #2092 )
* add bench
* update
* bug fix
* time update
* add index
* fix repeat bug
2025-05-12 18:38:13 +08:00
bittersweet1999
ddc9cc0afb
[Add] add a config to Judge dataset all ( #2077 )
* fix pip version
* fix pip version
* add judgedatasetall
* add judgedatasetall
* add judgedatasetall
2025-05-07 10:57:23 +08:00
bittersweet1999
37cbaf8d92
[Add] Add Judgerbenchv2 ( #2067 )
* fix pip version
* fix pip version
* add judgerbenchv2
* Update __init__.py
2025-04-30 17:12:34 +08:00
Taolin Zhang
b6148aa198
add Judgebench ( #2066 )
* add rewardbench
* add rewardbench
* add rmb datasets
* add rmb datasets
* add judgebench
* add judgebench
2025-04-30 15:01:10 +08:00
bittersweet1999
527a80947b
[Add] Add writingbench ( #2028 )
* fix pip version
* fix pip version
* add writingbench
* add writingbench
* add writingbench
* add writingbench
2025-04-29 16:29:32 +08:00
Taolin Zhang
8c74e6a39e
add RMB Bench ( #2056 )
* add rewardbench
* add rewardbench
* add rmb datasets
* add rmb datasets
2025-04-27 16:26:01 +08:00
Taolin Zhang
c69110361b
[Add] add rewardbench ( #2029 )
* add rewardbench
* add rewardbench
2025-04-21 17:18:51 +08:00
Junnan Liu
20660ab507
[Fix] Fix compare error when k is list in base_evaluator ( #2010 )
* fix gpass compare error of list k
* fix compare error in 177
2025-04-10 19:47:21 +08:00
Linchen Xiao
12213207b6
[Refactor] Refactorize openicl eval task ( #1990 )
* [Refactor] Refactorize openicl eval task
* update
2025-04-09 15:52:23 +08:00
Myhs_phz
fd82bea747
[Fix] OpenICL Math Evaluator Config ( #2007 )
* fix
* fix recommended
* fix
* fix
* fix
* fix
2025-04-08 14:38:35 +08:00
Linchen Xiao
db96161a4e
[Update] Add SuperGPQA subset metrics ( #1966 )
2025-03-24 14:25:12 +08:00
Dongsheng Zhu
8a5029b121
[Feature] Add MultiPL-E & Code Evaluator ( #1963 )
* multiple_code develop
* multiple_code update
* comments update
* index update
2025-03-21 20:09:25 +08:00
Linchen Xiao
854c6bf025
[Update] Update requirement and base evaluator
2025-03-13 20:52:50 +08:00
Kangreen
59e49aedf1
[Feature] Support SuperGPQA ( #1924 )
* support supergpqa
* remove unnecessary code
* remove unnecessary code
* Add Readme
* Add Readme
* fix lint
* fix lint
* update
* update
---------
Co-authored-by: mkj3085003 <mkj3085003@gmail.com>
Co-authored-by: MaiziXiao <xxllcc1993@gmail.com>
2025-03-11 19:32:08 +08:00
Linchen Xiao
e403fd21be
[Fix] Fix math-verify evaluator ( #1917 )
* update
* update
* update
2025-03-11 17:35:04 +08:00
Linchen Xiao
6a573f671b
[Fix] Fix compatibility issue
2025-03-03 15:35:57 +08:00
Junnan Liu
73c80953c6
[Feature] Support Dataset Repeat and G-Pass Compute for Each Evaluator ( #1886 )
* support dataset repeat and g-pass compute for each evaluator
* fix pre-commit errors
* delete print
* delete gpassk_evaluator and fix potential errors
* change `repeat` to `n`
* fix `repeat` to `n` in openicl_eval
* update doc for multi-run and g-pass
* update latex equation in doc
* update eng doc for multi-run and g-pass
* update datasets.md
* update datasets.md
* fix multi-line equation
* fix multi-line equation
* fix multi-line equation
* fix multi-line equation
* fix multi-line equation
* fix multi-line equation
* fix multi-line equation in zh_cn user_guides
* modify pre-commit-zh-cn
* recover pre-commit and edit math expr in doc
* del [TIP]
* del cite tag in doc
* del extract_model param in livemathbench config
2025-02-26 19:43:12 +08:00
Songyang Zhang
fd6fbf01a2
[Update] Support AIME-24 Evaluation for DeepSeek-R1 series ( #1888 )
* Update
* Update
* Update
* Update
2025-02-25 20:34:41 +08:00
Linchen Xiao
27c916661d
[Feature] Math Verify with model post_processor ( #1881 )
* update
* [Feature] Update model post_processor
* update
* update
* update
2025-02-20 19:32:12 +08:00
bittersweet1999
f407930475
[Feature] Support subjective evaluation for reasoning model ( #1868 )
* fix pip version
* fix pip version
* add subeval for reasoning model
* add subeval for reasoning model
* update configs
* update config
* update config
* update config
* update files
2025-02-20 12:19:46 +08:00
Alexander Lam
dc6035cfcb
[Feature] Added Bradley-Terry subjective evaluation
2024-12-31 11:01:23 +08:00
Junnan Liu
8e8d4f1c64
[Feature] Support G-Pass@k and LiveMathBench ( #1772 )
* support G-Pass@k and livemathbench
* fix bugs
* fix comments of GPassKEvaluator
* update saved details of GPassKEvaluator
* update saved details of GPassKEvaluator
* fix eval api configs & update openai_api for ease of debugging
* update huggingface path
* fix method name of G-Pass@k
* fix default value of eval_model_name
* refactor G-Pass@k evaluator
* log generation params for each backend
* fix evaluation resume
* add NotImplementedError
2024-12-30 16:59:39 +08:00
Alexander Lam
1bd594fc62
[Feature] Added CompassArena-SubjectiveBench with Bradley-Terry Model ( #1751 )
* fix lint issues
* updated gitignore
* changed infer_order from random to double for pairwise_judge.py (not changed for pairwise_bt_judge.py)
* added return statement to CompassArenaBradleyTerrySummarizer to return overall score for each judger model
2024-12-16 13:41:28 +08:00
Yufeng Zhao
300adc31e8
[Feature] Add Korbench dataset ( #1713 )
* first version for korbench
* first stage for korbench
* korbench_1
* korbench_1
* korbench_1
* korbench_1
* korbench_1_revised
* korbench_combined_1
* korbench_combined_1
* kor_combined
* kor_combined
* update
---------
Co-authored-by: MaiziXiao <xxllcc1993@gmail.com>
2024-11-25 20:11:27 +08:00
bittersweet1999
a0853c939d
[Add] Add CompassArenaSubjectiveBench ( #1645 )
* fix pip version
* fix pip version
* add compassarenasubjectivebench
* add compassarenasubjectivebench
* add compassarenabench
2024-11-01 13:52:22 +08:00
Songyang Zhang
a4d5a6c81b
[Feature] Support LiveCodeBench ( #1617 )
* Update
* Update LCB
* Update
* Update
* Update
* Update
* Update
2024-10-21 20:50:39 +08:00
bittersweet1999
a11e2b2fd4
[Fix] Compatible with old versions ( #1616 )
* fix pip version
* fix pip version
* Compatible with old versions
* compatible with old versions
* compatible with old versions
* compatible with old versions
* update configs
2024-10-21 10:16:29 +08:00
bittersweet1999
fa54aa62f6
[Feature] Add Judgerbench and reorg subeval ( #1593 )
* fix pip version
* fix pip version
* update (#1522 )
Co-authored-by: zhulin1 <zhulin1@pjlab.org.cn>
* [Feature] Update Models (#1518 )
* Update Models
* Update
* Update humanevalx
* Update
* Update
* [Feature] Dataset prompts update for ARC, BoolQ, Race (#1527 )
* add judgerbench and reorg sub
* add judgerbench and reorg subeval
* add judgerbench and reorg subeval
* add judgerbench and reorg subeval
* add judgerbench and reorg subeval
* add judgerbench and reorg subeval
* add judgerbench and reorg subeval
---------
Co-authored-by: zhulinJulia24 <145004780+zhulinJulia24@users.noreply.github.com>
Co-authored-by: zhulin1 <zhulin1@pjlab.org.cn>
Co-authored-by: Songyang Zhang <tonysy@users.noreply.github.com>
Co-authored-by: Linchen Xiao <xxllcc1993@gmail.com>
2024-10-15 16:36:05 +08:00
Songyang Zhang
ee058e25b2
[Feature] Support verbose for OpenAI API ( #1546 )
2024-09-20 17:12:52 +08:00
Linchen Xiao
245664f4c0
[Feature] Fullbench v0.1 language update ( #1463 )
* update
* update
* update
* update
2024-08-28 14:01:05 +08:00
CHEN PENGAN
463231c651
[Feature] Add icl_sliding_k_retriever.py and update __init__.py ( #1305 )
* Add icl_sliding_k_retriever.py and update __init__.py
* Fix flake8, isort, and yapf issues for Sliding Window Retriever
2024-08-23 17:18:31 +08:00
Hari Seldon
14b4b735cb
[Feature] Add support for SciCode ( #1417 )
* add SciCode
* add SciCode
* add SciCode
* add SciCode
* add SciCode
* add SciCode
* add SciCode
* add SciCode w/ bg
* add scicode
* Update README.md
* Update README.md
* Delete configs/eval_SciCode.py
* rename
* 1
* rename
* Update README.md
* Update scicode.py
* Update scicode.py
* fix some bugs
* Update
* Update
---------
Co-authored-by: root <HariSeldon0>
Co-authored-by: tonysy <sy.zhangbuaa@gmail.com>
2024-08-22 13:42:25 +08:00
Que Haoran
a244453d9e
[Feature] Support inference ppl datasets ( #1315 )
* commit inference ppl datasets
* revised format
* revise
* revise
* revise
* revise
* revise
* revise
2024-07-22 17:59:30 +08:00
Fengzhe Zhou
a32f21a356
[Sync] Sync with internal codes 2024.06.28 ( #1279 )
2024-06-28 14:16:34 +08:00
bittersweet1999
982e024540
[Feature] add dataset Fofo ( #1224 )
* add fofo dataset
* add dataset fofo
2024-06-06 11:40:48 +08:00
Fengzhe Zhou
2954913d9b
[Sync] bump version ( #1204 )
2024-05-28 23:09:59 +08:00
Fengzhe Zhou
2b3d4150f3
[Sync] update evaluator ( #1175 )
2024-05-21 14:22:46 +08:00
Fengzhe Zhou
7505b3cadf
[Feature] Add huggingface apply_chat_template ( #1098 )
* add TheoremQA with 5-shot
* add huggingface_above_v4_33 classes
* use num_worker partitioner in cli
* update theoremqa
* update TheoremQA
* add TheoremQA
* rename theoremqa -> TheoremQA
* update TheoremQA output path
* rewrite many model configs
* update huggingface
* further update
* refine configs
* update configs
* update configs
* add configs/eval_llama3_instruct.py
* add summarizer multi faceted
* update bbh datasets
* update configs/models/hf_llama/lmdeploy_llama3_8b_instruct.py
* rename class
* update readme
* update hf above v4.33
2024-05-14 14:50:16 +08:00
Alexander Lam
35c94d0cde
[Feature] Adding support for LLM Compression Evaluation ( #1108 )
* fixed formatting based on pre-commit tests
* fixed typo in comments; reduced the number of models in the eval config
* fixed a bug in LLMCompressionDataset, where setting samples=None would result in passing test[:None] to load_dataset
* removed unnecessary variable in _format_table_pivot; changed lark_reporter message to English
2024-04-30 10:51:01 +08:00
bittersweet1999
6ba1c4937d
[Feature] Support Math evaluation via judgemodel ( #1094 )
* support openai math evaluation
* support openai math evaluation
* support openai math evaluation
* support math llm judge
* support math llm judge
2024-04-26 14:56:23 +08:00
bittersweet1999
6f98c8d9ab
[Fix] Fix MultiRound Subjective Evaluation ( #1043 )
* fix multiround
* fix
2024-04-22 12:06:03 +08:00
Fengzhe Zhou
b39f501563
[Sync] update taco ( #1030 )
2024-04-09 17:50:23 +08:00
bittersweet1999
2d4e559763
[Feature] Add multi-model judge and fix some problems ( #1016 )
* support multi-model judge and moe judge
* test_moe
* test_moe
* test
* add moe judge
* support multi-judge-model
2024-04-02 11:52:06 +08:00
Fengzhe Zhou
ab6cdb2be8
[Sync] Bump version 0.2.3 ( #957 )
2024-03-12 11:51:56 +08:00
bittersweet1999
848e7c8a76
[Fix] add different temp for different questions in mtbench ( #954 )
* add temp for mtbench
* add document for mtbench
* add document for mtbench
2024-03-11 17:24:39 +08:00
Yang Yong
3829be87b1
Fix LightllmApi ppl test ( #951 )
2024-03-08 12:04:44 +08:00
Fengzhe Zhou
9afbfa3639
[Sync] Fix TEvalEvaluator ( #929 )
2024-02-28 16:05:30 +08:00