OpenCompass/configs
Last updated: 2024-01-03 10:58:57 +08:00
| Name | Last commit | Last commit date |
| ---- | ----------- | ---------------- |
| api_examples/ | [Feature] Add support of qwen api (#735) | 2024-01-02 20:47:12 +08:00 |
| datasets/ | Support devops-eval | 2024-01-03 10:58:57 +08:00 |
| models/ | [Feature] Support LLaMA2-Accessory (#732) | 2024-01-02 20:48:51 +08:00 |
| multimodal/ | docs: fix typos in markdown files (#530) | 2023-11-01 16:16:16 +08:00 |
| summarizers/ | [Feature] Add InfiniteBench (#739) | 2023-12-26 15:36:27 +08:00 |
| eval_alaya.py | Add support for DataCanvas Alaya LM (#612) | 2023-11-21 17:51:30 +08:00 |
| eval_attack.py | [Feat] implementation for support promptbench (#239) | 2023-09-15 15:06:53 +08:00 |
| eval_chat_agent_baseline.py | [Feat] Update math/agent (#716) | 2023-12-19 21:20:42 +08:00 |
| eval_chat_agent.py | [Feat] Update math/agent (#716) | 2023-12-19 21:20:42 +08:00 |
| eval_chat_cibench.py | [Sync] minor test (#683) | 2023-12-11 17:42:53 +08:00 |
| eval_chat_last.py | [Feature] Support chat style inferencer. (#643) | 2023-11-30 14:00:06 +08:00 |
| eval_cibench.py | [Feat] Support cibench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_circular.py | [Feature] Add circular eval (#610) | 2023-11-23 16:45:47 +08:00 |
| eval_claude.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_code_passk_repeat_dataset.py | [Feature] Support sanitized MBPP dataset (#745) | 2023-12-27 22:17:23 +08:00 |
| eval_code_passk.py | [Feature] Support sanitized MBPP dataset (#745) | 2023-12-27 22:17:23 +08:00 |
| eval_codeagent.py | [Feat] Support cibench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_codegeex2.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_contamination.py | [Feature] Add Data Contamination Analysis (#639) | 2023-12-08 10:00:11 +08:00 |
| eval_demo.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_ds1000_interpreter.py | [Feat] Support cibench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_gpt3.5.py | [Feature] Calculate max_out_len without hard code for OpenAI model (#158) | 2023-08-08 15:16:56 +08:00 |
| eval_gpt4.py | [Enhancement] Add humaneval postprocessor for GPT models & eval config for GPT4, enhance the original humaneval postprocessor (#129) | 2023-08-10 16:31:12 +08:00 |
| eval_hf_internlm_chat_20b_cdme.py | [Update] Change NeedleInAHaystackDataset to dynamic dataset loading (#754) | 2024-01-02 17:22:56 +08:00 |
| eval_hf_llama_7b.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm_7b.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm_chat_turbomind_api.py | add turbomind restful api support (#693) | 2023-12-24 01:40:00 +08:00 |
| eval_internlm_chat_turbomind_tis.py | Integrate turbomind python api (#484) | 2023-11-21 22:34:46 +08:00 |
| eval_internlm_chat_turbomind.py | [Feature] Update configs for evaluating chat models like qwen, baichuan, llama2 using turbomind backend (#721) | 2023-12-21 18:22:17 +08:00 |
| eval_internlm_turbomind_api.py | add turbomind restful api support (#693) | 2023-12-24 01:40:00 +08:00 |
| eval_internlm_turbomind_tis.py | Integrate turbomind python api (#484) | 2023-11-21 22:34:46 +08:00 |
| eval_internlm_turbomind.py | [Feature] Update configs for evaluating chat models like qwen, baichuan, llama2 using turbomind backend (#721) | 2023-12-21 18:22:17 +08:00 |
| eval_internLM.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_lightllm.py | Update LightllmApi and Fix mmlu bug (#738) | 2023-12-27 13:49:08 +08:00 |
| eval_llama2_7b.py | [Feature] Add GPQA Dataset (#729) | 2024-01-01 15:54:40 +08:00 |
| eval_mixtral_8x7b.py | [Fix] fix a bug on configs/eval_mixtral_8x7b.py (#706) | 2023-12-15 14:15:32 +08:00 |
| eval_multi_prompt_demo.py | [Feature] Add multi-prompt generation demo (#568) | 2023-11-20 16:16:37 +08:00 |
| eval_qwen_7b_chat_lawbench.py | [Feature] Add lawbench (#460) | 2023-10-13 06:51:36 -05:00 |
| eval_qwen_7b_chat.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_qwen_7b.py | [Feature] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_rwkv5_3b.py | add rwkv-5-3b model (#666) | 2023-12-12 18:15:19 +08:00 |
| eval_subjective_alignbench.py | [Feature] Add other judgelm prompts for Alignbench (#731) | 2023-12-27 17:54:53 +08:00 |
| eval_subjective_compare.py | [Fix] Update alignmentbench (#704) | 2023-12-14 18:24:21 +08:00 |
| eval_subjective_judge_pandalm.py | [Feature] Add JudgeLLMs (#710) | 2023-12-19 18:40:25 +08:00 |
| eval_subjective_score.py | [Fix] Update alignmentbench (#704) | 2023-12-14 18:24:21 +08:00 |
| eval_with_model_dataset_combinations.py | [Sync] minor test (#683) | 2023-12-11 17:42:53 +08:00 |
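
Each `eval_*.py` file above is a plain-Python OpenCompass config that wires dataset definitions from `datasets/` and model definitions from `models/` into two top-level lists, `datasets` and `models`, which the runner picks up. Below is a minimal sketch in the style of `eval_demo.py`, assuming the standard `read_base()` pattern; the specific dataset and model import paths are illustrative and may not match the exact files in this directory.

```python
# Minimal sketch of an OpenCompass eval config (illustrative import paths).
from mmengine.config import read_base

with read_base():
    # Pull dataset and model definitions from the sibling config packages.
    from .datasets.siqa.siqa_gen import siqa_datasets
    from .models.hf_llama.hf_llama2_7b import models as hf_llama2_7b

# OpenCompass reads these two top-level lists when the config is run,
# e.g. `python run.py configs/eval_demo.py`.
datasets = [*siqa_datasets]
models = [*hf_llama2_7b]
```

The more specialized configs in the listing (turbomind, lightllm, subjective, agent, etc.) follow the same pattern but additionally override inference or evaluation settings for their respective backends and benchmarks.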