| Name | Last commit | Last commit date |
| --- | --- | --- |
| api_examples | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| dataset_collections | [Sync] Sync with internal codes 2024.06.28 (#1279) | 2024-06-28 14:16:34 +08:00 |
| datasets | Update MathBench summarizer & fix cot setting (#1282) | 2024-07-01 21:51:17 +08:00 |
| models | [Feature] Add InternLM2.5 (#1286) | 2024-07-04 20:10:31 +08:00 |
| subjective | [Sync] bump version (#1204) | 2024-05-28 23:09:59 +08:00 |
| summarizers | Update MathBench summarizer & fix cot setting (#1282) | 2024-07-01 21:51:17 +08:00 |
| eval_alaya.py | Add support for DataCanvas Alaya LM (#612) | 2023-11-21 17:51:30 +08:00 |
| eval_attack.py | [Feat] implementation for support promptbench (#239) | 2023-09-15 15:06:53 +08:00 |
| eval_bluelm_32k_lveval.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_charm.py | [Sync] bump version (#1204) | 2024-05-28 23:09:59 +08:00 |
| eval_chat_agent_baseline.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_chat_agent.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_chat_cibench_api.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_chat_cibench.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_chat_last.py | [Feature] Support chat style inferencer. (#643) | 2023-11-30 14:00:06 +08:00 |
| eval_chembench.py | [Feature] Add ChemBench (#1032) | 2024-04-12 08:46:26 +08:00 |
| eval_cibench.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_circular.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_claude.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_code_passk_repeat_dataset.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_code_passk.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_codeagent.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_codegeex2.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_contamination.py | [Docs] Update contamination docs (#775) | 2024-01-08 16:37:28 +08:00 |
| eval_demo.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_ds1000_interpreter.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_gpt3.5.py | [Feature] Calculate max_out_len without hard code for OpenAI model (#158) | 2023-08-08 15:16:56 +08:00 |
| eval_gpt4.py | [Enhancement] Add humaneval postprocessor for GPT models & eval config for GPT4, enhance the original humaneval postprocessor (#129) | 2023-08-10 16:31:12 +08:00 |
| eval_hf_llama2.py | [Sync] Sync with internal codes 2024.06.28 (#1279) | 2024-06-28 14:16:34 +08:00 |
| eval_hf_llama_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm2_chat_keyset.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_internlm2_keyset.py | [Sync] Sync with internal codes 2024.06.28 (#1279) | 2024-06-28 14:16:34 +08:00 |
| eval_internlm_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm_chat_lmdeploy_apiserver.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_internlm_chat_lmdeploy_pytorch.py | Support lmdeploy pytorch engine (#875) | 2024-02-22 03:46:07 -03:00 |
| eval_internlm_chat_lmdeploy_tis.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_internlm_chat_turbomind_tis.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_internlm_chat_turbomind.py | [Feature] Add end_str for turbomind (#859) | 2024-02-01 22:31:14 +08:00 |
| eval_internlm_flames_chat.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_internlm_lmdeploy_apiserver.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_internlm_math_chat.py | [Sync] Merge branch 'dev' into zfz/update-keyset-demo (#876) | 2024-02-05 23:29:10 +08:00 |
| eval_internlm_turbomind_tis.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_internlm_turbomind.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_internLM.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_lightllm.py | [Sync] Sync with internal codes 2024.06.28 (#1279) | 2024-06-28 14:16:34 +08:00 |
| eval_llama2_7b_lveval.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_llama2_7b.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_llama3_instruct.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_llm_compression.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_math_llm_judge.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_mathbench.py | Update MathBench summarizer & fix cot setting (#1282) | 2024-07-01 21:51:17 +08:00 |
| eval_mmlu_pro.py | [Sync] Sync with internal codes 2024.06.28 (#1279) | 2024-06-28 14:16:34 +08:00 |
| eval_mmlu_with_zero_retriever_overwritten.py | [Feature] add global retriever config (#842) | 2024-02-07 00:30:20 +08:00 |
| eval_multi_prompt_demo.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_needlebench.py | [Doc] Update NeedleInAHaystack Docs (#1102) | 2024-04-28 18:51:47 +08:00 |
| eval_qwen_7b_chat_lawbench.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_qwen_7b_chat.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_qwen_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_rwkv5_3b.py | add rwkv-5-3b model (#666) | 2023-12-12 18:15:19 +08:00 |
| eval_subjective_alignbench.py | [Sync] bump version (#1204) | 2024-05-28 23:09:59 +08:00 |
| eval_subjective_alpacaeval_oc.py | [Sync] bump version (#1204) | 2024-05-28 23:09:59 +08:00 |
| eval_subjective_alpacaeval_official.py | [Fix] fix pip version (#1228) | 2024-06-06 11:48:07 +08:00 |
| eval_subjective_arena_hard.py | [Fix] fix summarizer (#1217) | 2024-05-31 11:40:47 +08:00 |
| eval_subjective_compassarena.py | [Sync] bump version (#1204) | 2024-05-28 23:09:59 +08:00 |
| eval_subjective_compassbench.py | [Sync] format (#1214) | 2024-05-30 00:21:58 +08:00 |
| eval_subjective_creationbench.py | [Sync] bump version (#1204) | 2024-05-28 23:09:59 +08:00 |
| eval_subjective_fofo.py | [Feature] add dataset Fofo (#1224) | 2024-06-06 11:40:48 +08:00 |
| eval_subjective_functional_multiround.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_subjective_judge_pandalm.py | [Sync] bump version (#1204) | 2024-05-28 23:09:59 +08:00 |
| eval_subjective_mtbench101.py | MT-Bench-101 (#1215) | 2024-06-03 14:52:12 +08:00 |
| eval_subjective_mtbench.py | [Sync] bump version (#1204) | 2024-05-28 23:09:59 +08:00 |
| eval_subjective_wildbench_pair.py | Support wildbench (#1266) | 2024-06-24 13:16:27 +08:00 |
| eval_subjective_wildbench_single.py | Support wildbench (#1266) | 2024-06-24 13:16:27 +08:00 |
| eval_teval.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_TheoremQA.py | [Format] Add config lints (#892) | 2024-05-14 15:35:58 +08:00 |
| eval_with_model_dataset_combinations.py | [Sync] minor test (#683) | 2023-12-11 17:42:53 +08:00 |