| Name | Last commit message | Last commit date |
| --- | --- | --- |
| api_examples | [Fix] Update Zhipu API and fix min_out_len issue of API models (#847) | 2024-01-28 14:52:43 +08:00 |
| datasets | [Feature] Add support for LLM Compression Evaluation (#1108) | 2024-04-30 10:51:01 +08:00 |
| models | Fix Llama-3 meta template (#1079) | 2024-04-24 16:46:25 +08:00 |
| subjective | [Feature] Support AlpacaEval_V2 (#1006) | 2024-03-28 16:49:04 +08:00 |
| summarizers | Update CIBench (#1089) | 2024-04-26 18:46:02 +08:00 |
| eval_alaya.py | Add support for DataCanvas Alaya LM (#612) | 2023-11-21 17:51:30 +08:00 |
| eval_attack.py | [Feat] Implement support for promptbench (#239) | 2023-09-15 15:06:53 +08:00 |
| eval_bluelm_32k_lveval.py | [Feature] Add lveval benchmark (#914) | 2024-03-04 11:22:03 +08:00 |
| eval_chat_agent_baseline.py | [Feat] Update math/agent (#716) | 2023-12-19 21:20:42 +08:00 |
| eval_chat_agent.py | [Feat] Update math/agent (#716) | 2023-12-19 21:20:42 +08:00 |
| eval_chat_cibench_api.py | [Feat] Minor agent-related updates (#839) | 2024-01-26 14:15:51 +08:00 |
| eval_chat_cibench.py | [Sync] Minor test (#683) | 2023-12-11 17:42:53 +08:00 |
| eval_chat_last.py | [Feature] Support chat-style inferencer (#643) | 2023-11-30 14:00:06 +08:00 |
| eval_chembench.py | [Feature] Add ChemBench (#1032) | 2024-04-12 08:46:26 +08:00 |
| eval_cibench.py | [Feat] Support CIBench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_circular.py | [Feature] Add circular eval (#610) | 2023-11-23 16:45:47 +08:00 |
| eval_claude.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_code_passk_repeat_dataset.py | [Sync] Deprecate old mbpps (#1064) | 2024-04-19 20:49:46 +08:00 |
| eval_code_passk.py | [Sync] Deprecate old mbpps (#1064) | 2024-04-19 20:49:46 +08:00 |
| eval_codeagent.py | [Feat] Support CIBench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_codegeex2.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_contamination.py | [Docs] Update contamination docs (#775) | 2024-01-08 16:37:28 +08:00 |
| eval_demo.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_ds1000_interpreter.py | [Feat] Support CIBench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_gpt3.5.py | [Feature] Calculate max_out_len without hard-coding for the OpenAI model (#158) | 2023-08-08 15:16:56 +08:00 |
| eval_gpt4.py | [Enhancement] Add humaneval postprocessor for GPT models and eval config for GPT-4; enhance the original humaneval postprocessor (#129) | 2023-08-10 16:31:12 +08:00 |
| eval_hf_llama2.py | [Sync] Sync with internal codes 2023.01.08 (#777) | 2024-01-08 14:07:24 +00:00 |
| eval_hf_llama_7b.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm2_chat_keyset.py | [Sync] Deprecate old mbpps (#1064) | 2024-04-19 20:49:46 +08:00 |
| eval_internlm2_keyset.py | [Sync] Deprecate old mbpps (#1064) | 2024-04-19 20:49:46 +08:00 |
| eval_internlm_7b.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm_chat_lmdeploy_apiserver.py | Support get_ppl for TurbomindModel (#878) | 2024-03-06 11:44:19 +08:00 |
| eval_internlm_chat_lmdeploy_pytorch.py | Support lmdeploy pytorch engine (#875) | 2024-02-22 03:46:07 -03:00 |
| eval_internlm_chat_lmdeploy_tis.py | [Feature] Add lmdeploy tis python backend model (#1014) | 2024-04-23 14:27:11 +08:00 |
| eval_internlm_chat_turbomind_tis.py | [Fix] Fix turbomind_tis (#992) | 2024-03-22 15:50:12 +08:00 |
| eval_internlm_chat_turbomind.py | [Feature] Add end_str for turbomind (#859) | 2024-02-01 22:31:14 +08:00 |
| eval_internlm_flames_chat.py | Fix prompt template (#1104) | 2024-04-28 21:54:30 +08:00 |
| eval_internlm_lmdeploy_apiserver.py | Support get_ppl for TurbomindModel (#878) | 2024-03-06 11:44:19 +08:00 |
| eval_internlm_math_chat.py | [Sync] Merge branch 'dev' into zfz/update-keyset-demo (#876) | 2024-02-05 23:29:10 +08:00 |
| eval_internlm_turbomind_tis.py | Integrate turbomind python api (#484) | 2023-11-21 22:34:46 +08:00 |
| eval_internlm_turbomind.py | [Fix] Fix turbomind and update docs (#808) | 2024-01-18 14:41:35 +08:00 |
| eval_internLM.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_lightllm.py | Support prompt template for LightllmApi and update LightllmApi token bucket (#945) | 2024-03-06 15:33:53 +08:00 |
| eval_llama2_7b_lveval.py | [Feature] Add lveval benchmark (#914) | 2024-03-04 11:22:03 +08:00 |
| eval_llama2_7b.py | [Feature] Add GPQA Dataset (#729) | 2024-01-01 15:54:40 +08:00 |
| eval_llm_compression.py | [Feature] Add support for LLM Compression Evaluation (#1108) | 2024-04-30 10:51:01 +08:00 |
| eval_math_llm_judge.py | [Fix] Fix math evaluation with judge model evaluator and add README (#1103) | 2024-04-28 21:58:58 +08:00 |
| eval_mixtral_8x7b.py | [Fix] Fix a bug in configs/eval_mixtral_8x7b.py (#706) | 2023-12-15 14:15:32 +08:00 |
| eval_mmlu_with_zero_retriever_overwritten.py | [Feature] Add global retriever config (#842) | 2024-02-07 00:30:20 +08:00 |
| eval_multi_prompt_demo.py | [Feature] Add multi-prompt generation demo (#568) | 2023-11-20 16:16:37 +08:00 |
| eval_needlebench.py | [Doc] Update NeedleInAHaystack docs (#1102) | 2024-04-28 18:51:47 +08:00 |
| eval_qwen_7b_chat_lawbench.py | [Feature] Add lawbench (#460) | 2023-10-13 06:51:36 -05:00 |
| eval_qwen_7b_chat.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_qwen_7b.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_rwkv5_3b.py | Add rwkv-5-3b model (#666) | 2023-12-12 18:15:19 +08:00 |
| eval_subjective_alignbench.py | [Feature] Add multi-model judge and fix some problems (#1016) | 2024-04-02 11:52:06 +08:00 |
| eval_subjective_alpacaeval_oc.py | [Feature] Add multi-model judge and fix some problems (#1016) | 2024-04-02 11:52:06 +08:00 |
| eval_subjective_alpacaeval.py | [Feature] Support AlpacaEval_V2 (#1006) | 2024-03-28 16:49:04 +08:00 |
| eval_subjective_arena_hard.py | [Feature] Support ArenaHard evaluation (#1096) | 2024-04-26 15:42:00 +08:00 |
| eval_subjective_compassarena.py | [Feature] Add multi-model judge and fix some problems (#1016) | 2024-04-02 11:52:06 +08:00 |
| eval_subjective_creationbench.py | [Fix] Quick fix (#995) | 2024-03-22 19:54:19 +08:00 |
| eval_subjective_functional_multiround.py | [Fix] Fix multi-round subjective evaluation (#1043) | 2024-04-22 12:06:03 +08:00 |
| eval_subjective_judge_pandalm.py | [Feature] Add multi-model judge and fix some problems (#1016) | 2024-04-02 11:52:06 +08:00 |
| eval_subjective_mtbench.py | [Feature] Add multi-model judge and fix some problems (#1016) | 2024-04-02 11:52:06 +08:00 |
| eval_teval.py | [Sync] Merge branch 'dev' into zfz/update-keyset-demo (#876) | 2024-02-05 23:29:10 +08:00 |
| eval_TheoremQA.py | [Feature] Add TheoremQA with 5-shot (#1048) | 2024-04-22 15:22:04 +08:00 |
| eval_with_model_dataset_combinations.py | [Sync] Minor test (#683) | 2023-12-11 17:42:53 +08:00 |
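
The `eval_*.py` files listed above are evaluation entry-point configs: each one pulls dataset and model definitions from the `datasets` and `models` directories and exposes top-level `datasets` and `models` lists to the runner. The snippet below is a minimal sketch of that shared pattern, assuming the OpenCompass-style `read_base` layout; the specific import paths (`siqa_gen`, `hf_llama2_7b`) are illustrative placeholders rather than a quote of any file in this listing.

```python
# Minimal sketch of an eval_*.py entry-point config (illustrative paths).
from mmengine.config import read_base

with read_base():
    # Hypothetical imports: real configs pull in whatever dataset and model
    # definitions the benchmark needs from configs/datasets and configs/models.
    from .datasets.siqa.siqa_gen import siqa_datasets
    from .models.hf_llama.hf_llama2_7b import models as hf_llama2_7b_models

# The runner looks for these two top-level lists.
datasets = [*siqa_datasets]
models = [*hf_llama2_7b_models]
```

Such a config is typically launched with something like `python run.py configs/eval_demo.py`, with results written to the chosen work directory.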