| Name | Last commit | Last updated |
| --- | --- | --- |
| api_examples | [Enhancement] Update API Interface and Mixtral (#681) | 2023-12-10 13:29:26 +08:00 |
| datasets | [Feature] Add NeedleInAHaystack Test Support (#714) | 2023-12-23 12:00:51 +08:00 |
| models | [Feature] Add JudgeLLMs (#710) | 2023-12-19 18:40:25 +08:00 |
| multimodal | docs: fix typos in markdown files (#530) | 2023-11-01 16:16:16 +08:00 |
| summarizers | [Feat] Update math/agent (#716) | 2023-12-19 21:20:42 +08:00 |
| eval_alaya.py | Add support for DataCanvas Alaya LM (#612) | 2023-11-21 17:51:30 +08:00 |
| eval_attack.py | [Feat] implementation for support promptbench (#239) | 2023-09-15 15:06:53 +08:00 |
| eval_chat_agent_baseline.py | [Feat] Update math/agent (#716) | 2023-12-19 21:20:42 +08:00 |
| eval_chat_agent.py | [Feat] Update math/agent (#716) | 2023-12-19 21:20:42 +08:00 |
| eval_chat_cibench.py | [Sync] minor test (#683) | 2023-12-11 17:42:53 +08:00 |
| eval_chat_last.py | [Feature] Support chat style inferencer. (#643) | 2023-11-30 14:00:06 +08:00 |
| eval_cibench.py | [Feat] Support cibench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_circular.py | [Feature] Add circular eval (#610) | 2023-11-23 16:45:47 +08:00 |
| eval_claude.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_code_passk_repeat_dataset.py | [Feat] support humaneval and mbpp pass@k (#598) | 2023-11-16 21:22:06 +08:00 |
| eval_code_passk.py | [Feat] support humaneval and mbpp pass@k (#598) | 2023-11-16 21:22:06 +08:00 |
| eval_codeagent.py | [Feat] Support cibench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_codegeex2.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_contamination.py | [Feature] Add Data Contamination Analysis (#639) | 2023-12-08 10:00:11 +08:00 |
| eval_demo.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_ds1000_interpreter.py | [Feat] Support cibench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_gpt3.5.py | [Feature] Calculate max_out_len without hard code for OpenAI model (#158) | 2023-08-08 15:16:56 +08:00 |
| eval_gpt4.py | [Enhancement] Add humaneval postprocessor for GPT models & eval config for GPT4, enhance the original humaneval postprocessor (#129) | 2023-08-10 16:31:12 +08:00 |
| eval_hf_internlm_chat_20b_cdme.py | [Feature] Add NeedleInAHaystack Test Support (#714) | 2023-12-23 12:00:51 +08:00 |
| eval_hf_llama_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm_chat_turbomind_tis.py | Integrate turbomind python api (#484) | 2023-11-21 22:34:46 +08:00 |
| eval_internlm_chat_turbomind.py | [Feature] Update configs for evaluating chat models like qwen, baichuan, llama2 using turbomind backend (#721) | 2023-12-21 18:22:17 +08:00 |
| eval_internlm_turbomind_tis.py | Integrate turbomind python api (#484) | 2023-11-21 22:34:46 +08:00 |
| eval_internlm_turbomind.py | [Feature] Update configs for evaluating chat models like qwen, baichuan, llama2 using turbomind backend (#721) | 2023-12-21 18:22:17 +08:00 |
| eval_internLM.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_lightllm.py | [Feature] Support Lightllm API (#613) | 2023-11-21 19:18:40 +08:00 |
| eval_llama2_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_mixtral_8x7b.py | [Fix] fix a bug on configs/eval_mixtral_8x7b.py (#706) | 2023-12-15 14:15:32 +08:00 |
| eval_multi_prompt_demo.py | [Feature] Add multi-prompt generation demo (#568) | 2023-11-20 16:16:37 +08:00 |
| eval_qwen_7b_chat_lawbench.py | [Feature] Add lawbench (#460) | 2023-10-13 06:51:36 -05:00 |
| eval_qwen_7b_chat.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_qwen_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_rwkv5_3b.py | add rwkv-5-3b model (#666) | 2023-12-12 18:15:19 +08:00 |
| eval_subjective_alignbench.py | [Fix] Update alignmentbench (#704) | 2023-12-14 18:24:21 +08:00 |
| eval_subjective_compare.py | [Fix] Update alignmentbench (#704) | 2023-12-14 18:24:21 +08:00 |
| eval_subjective_judge_pandalm.py | [Feature] Add JudgeLLMs (#710) | 2023-12-19 18:40:25 +08:00 |
| eval_subjective_score.py | [Fix] Update alignmentbench (#704) | 2023-12-14 18:24:21 +08:00 |
| eval_with_model_dataset_combinations.py | [Sync] minor test (#683) | 2023-12-11 17:42:53 +08:00 |