Mirror of https://github.com/open-compass/opencompass.git, synced 2025-05-30 16:03:24 +08:00
[Update] Update model support list (#1353)

* fix pip version
* fix pip version
* update model support
This commit is contained in:
parent cf3e942f73
commit 86b6d18731
README.md | 21
README.md | 21
@@ -462,20 +462,21 @@ Through the command line or configuration files, OpenCompass also supports evalu
 <tr valign="top">
 <td>

+- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
+- [Baichuan](https://github.com/baichuan-inc)
+- [BlueLM](https://github.com/vivo-ai-lab/BlueLM)
+- [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)
+- [ChatGLM3](https://github.com/THUDM/ChatGLM3-6B)
+- [Gemma](https://huggingface.co/google/gemma-7b)
 - [InternLM](https://github.com/InternLM/InternLM)
 - [LLaMA](https://github.com/facebookresearch/llama)
 - [LLaMA3](https://github.com/meta-llama/llama3)
-- [Vicuna](https://github.com/lm-sys/FastChat)
-- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
-- [Baichuan](https://github.com/baichuan-inc)
-- [WizardLM](https://github.com/nlpxucan/WizardLM)
-- [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)
-- [ChatGLM3](https://github.com/THUDM/ChatGLM3-6B)
-- [TigerBot](https://github.com/TigerResearch/TigerBot)
 - [Qwen](https://github.com/QwenLM/Qwen)
-- [BlueLM](https://github.com/vivo-ai-lab/BlueLM)
-- [Gemma](https://huggingface.co/google/gemma-7b)
-- ...
+- [TigerBot](https://github.com/TigerResearch/TigerBot)
+- [Vicuna](https://github.com/lm-sys/FastChat)
+- [WizardLM](https://github.com/nlpxucan/WizardLM)
+- [Yi](https://github.com/01-ai/Yi)
+- ……

 </td>
 <td>
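Both hunks sit under the README's note that, through the command line or configuration files, OpenCompass also supports evaluating these open-source models. As a rough illustration of the configuration-file route, below is a minimal sketch of an evaluation config for one HuggingFace model from the list; the dataset config paths, the HuggingFaceCausalLM field values, and the checkpoint id are assumptions drawn from OpenCompass's documented config layout, not part of this commit.

```python
# Minimal OpenCompass config sketch (hypothetical file: configs/eval_llama_demo.py).
# Assumes the dataset configs shipped under configs/datasets/ and the
# HuggingFaceCausalLM wrapper; exact paths and field defaults may differ.
from mmengine.config import read_base

from opencompass.models import HuggingFaceCausalLM

with read_base():
    # Dataset lists exposed by the bundled configs (paths are assumptions).
    from .datasets.ceval.ceval_ppl import ceval_datasets
    from .datasets.mmlu.mmlu_ppl import mmlu_datasets

datasets = [*ceval_datasets, *mmlu_datasets]

models = [
    dict(
        type=HuggingFaceCausalLM,
        abbr='llama-7b-hf',
        path='huggyllama/llama-7b',            # illustrative HF checkpoint id (hypothetical choice)
        tokenizer_path='huggyllama/llama-7b',
        max_out_len=100,
        max_seq_len=2048,
        batch_size=8,
        run_cfg=dict(num_gpus=1),
    )
]
```

A config like this would typically be passed to the launcher, e.g. `python run.py configs/eval_llama_demo.py`, which parallels the CLI form shown in the next hunk's header.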
@@ -463,19 +463,20 @@ python run.py --datasets ceval_ppl mmlu_ppl --hf-type base --hf-path huggyllama/
 <tr valign="top">
 <td>

+- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
+- [Baichuan](https://github.com/baichuan-inc)
+- [BlueLM](https://github.com/vivo-ai-lab/BlueLM)
+- [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)
+- [ChatGLM3](https://github.com/THUDM/ChatGLM3-6B)
+- [Gemma](https://huggingface.co/google/gemma-7b)
 - [InternLM](https://github.com/InternLM/InternLM)
 - [LLaMA](https://github.com/facebookresearch/llama)
 - [LLaMA3](https://github.com/meta-llama/llama3)
-- [Vicuna](https://github.com/lm-sys/FastChat)
-- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
-- [Baichuan](https://github.com/baichuan-inc)
-- [WizardLM](https://github.com/nlpxucan/WizardLM)
-- [ChatGLM2](https://github.com/THUDM/ChatGLM2-6B)
-- [ChatGLM3](https://github.com/THUDM/ChatGLM3-6B)
-- [TigerBot](https://github.com/TigerResearch/TigerBot)
 - [Qwen](https://github.com/QwenLM/Qwen)
-- [BlueLM](https://github.com/vivo-ai-lab/BlueLM)
-- [Gemma](https://huggingface.co/google/gemma-7b)
+- [TigerBot](https://github.com/TigerResearch/TigerBot)
+- [Vicuna](https://github.com/lm-sys/FastChat)
+- [WizardLM](https://github.com/nlpxucan/WizardLM)
+- [Yi](https://github.com/01-ai/Yi)
 - ……

 </td>
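The header of this second hunk carries the README's quick-start command (`python run.py --datasets ceval_ppl mmlu_ppl --hf-type base --hf-path ...`, truncated in the hunk header). The snippet below is a small sketch of driving that same CLI from Python for one of the newly listed Yi models; only the flags already shown above are used, and the Yi checkpoint id is a hypothetical example, not taken from the diff.

```python
# Sketch: invoking the run.py CLI from Python for a model on the support list.
# Assumes the working directory is an OpenCompass checkout containing run.py.
import subprocess

hf_path = "01-ai/Yi-6B"  # hypothetical HuggingFace id for the newly listed Yi family

subprocess.run(
    [
        "python", "run.py",
        "--datasets", "ceval_ppl", "mmlu_ppl",
        "--hf-type", "base",   # evaluate the base (pretrained) weights
        "--hf-path", hf_path,
    ],
    check=True,  # raise if the evaluation command exits non-zero
)
```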