| Name | Last commit | Last commit date |
| --- | --- | --- |
| datasets | Add aritch to mathbench (#607) | 2023-11-20 19:40:41 +08:00 |
| models | Add support for DataCanvas Alaya LM (#612) | 2023-11-21 17:51:30 +08:00 |
| multimodal | docs: fix typos in markdown files (#530) | 2023-11-01 16:16:16 +08:00 |
| summarizers | Mathbench update postprocess (#600) | 2023-11-20 16:48:55 +08:00 |
| eval_alaya.py | Add support for DataCanvas Alaya LM (#612) | 2023-11-21 17:51:30 +08:00 |
| eval_attack.py | [Feat] implementation for support promptbench (#239) | 2023-09-15 15:06:53 +08:00 |
| eval_cibench.py | [Feat] Support cibench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_claude.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_code_passk_repeat_dataset.py | [Feat] support humaneval and mbpp pass@k (#598) | 2023-11-16 21:22:06 +08:00 |
| eval_code_passk.py | [Feat] support humaneval and mbpp pass@k (#598) | 2023-11-16 21:22:06 +08:00 |
| eval_codeagent.py | [Feat] Support cibench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_codegeex2.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_demo.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_ds1000_interpreter.py | [Feat] Support cibench (#538) | 2023-11-07 19:11:44 +08:00 |
| eval_gpt3.5.py | [Feature] Calculate max_out_len without hard code for OpenAI model (#158) | 2023-08-08 15:16:56 +08:00 |
| eval_gpt4.py | [Enhancement] Add humaneval postprocessor for GPT models & eval config for GPT4, enhance the original humaneval postprocessor (#129) | 2023-08-10 16:31:12 +08:00 |
| eval_hf_llama_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_internlm_chat_7b_turbomind.py | Integrate turbomind inference via its RPC API instead of its python API (#414) | 2023-10-07 10:27:48 +08:00 |
| eval_internLM.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_lightllm.py | [Feature] Support Lightllm API (#613) | 2023-11-21 19:18:40 +08:00 |
| eval_llama2_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_minimax.py | [Fix] fix log re-direct (#564) | 2023-11-09 19:34:19 +08:00 |
| eval_multi_prompt_demo.py | [Feature] Add multi-prompt generation demo (#568) | 2023-11-20 16:16:37 +08:00 |
| eval_openai_agent.py | Support GSM8k evaluation with tools by Lagent and LangChain (#277) | 2023-09-22 15:28:22 +08:00 |
| eval_qwen_7b_chat_lawbench.py | [Feature] Add lawbench (#460) | 2023-10-13 06:51:36 -05:00 |
| eval_qwen_7b_chat.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_qwen_7b.py | [Feaure] Add new models: baichuan2, tigerbot, vicuna v1.5 (#373) | 2023-09-08 15:41:20 +08:00 |
| eval_xunfei.py | [Feature] Add support for MiniMax API (#548) | 2023-11-06 21:57:32 +08:00 |
| eval_zhipu.py | [Fix] fix filename typo (#549) | 2023-11-07 14:00:26 +08:00 |
| subjective.py | Fix bugs in subjective evaluation (#589) | 2023-11-14 16:11:55 +08:00 |
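Each `eval_*.py` file above is an evaluation config that wires datasets and models together. As a minimal sketch, assuming the mmengine-style `read_base` convention these configs typically follow, such a file might look like the example below; the specific dataset and model modules imported are illustrative placeholders, not the exact contents of any file listed here.

```python
# Minimal sketch of an eval_*.py config (assumed mmengine read_base convention).
from mmengine.config import read_base

with read_base():
    # Pull in dataset and model definitions from the sibling
    # `datasets/` and `models/` folders (hypothetical example modules).
    from .datasets.siqa.siqa_gen import siqa_datasets
    from .models.opt.hf_opt_125m import opt125m

# The runner consumes these two top-level lists.
datasets = [*siqa_datasets]
models = [opt125m]
```

Variants such as `eval_code_passk.py` or `eval_lightllm.py` follow the same structure, differing mainly in which dataset, model, and (where relevant) API-backend configs they import.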