OpenCompass/configs/models/mistral/vllm_mixtral_8x7b_instruct_v0_1.py
from opencompass.models import VLLM

# Mistral/Mixtral instruct chat format: user turns are wrapped in
# [INST] ... [/INST] and assistant turns terminate with the </s> EOS token.
_meta_template = dict(
    begin='<s>',
    round=[
        dict(role='HUMAN', begin='[INST]', end='[/INST]'),
        dict(role='BOT', begin='', end='</s>', generate=True),
    ],
)

models = [
    dict(
        type=VLLM,
        abbr='mixtral-8x7b-instruct-v0.1-vllm',
        path='mistralai/Mixtral-8x7B-Instruct-v0.1',
        # Shard the model across 2 GPUs via vLLM tensor parallelism.
        model_kwargs=dict(tensor_parallel_size=2),
        meta_template=_meta_template,
        max_out_len=100,
        max_seq_len=2048,
        batch_size=32,
        # Greedy decoding for reproducible evaluation results.
        generation_kwargs=dict(temperature=0),
        end_str='</s>',
        run_cfg=dict(num_gpus=2, num_procs=1),
    )
]
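
To use this model config in an evaluation, it is typically imported into a top-level config via read_base(). A minimal sketch, assuming the standard OpenCompass config layout; the file name eval_mixtral_vllm.py and the GSM8K dataset choice are illustrative, not part of the original file:

# Hypothetical top-level config, e.g. configs/eval_mixtral_vllm.py
from mmengine.config import read_base

with read_base():
    # Pull in the Mixtral-8x7B-Instruct vLLM model definition above.
    from .models.mistral.vllm_mixtral_8x7b_instruct_v0_1 import models
    # Example dataset config; swap in whatever benchmarks you need.
    from .datasets.gsm8k.gsm8k_gen import gsm8k_datasets

datasets = gsm8k_datasets

The evaluation can then be launched with python run.py configs/eval_mixtral_vllm.py (or the opencompass CLI entry point, depending on the installed version).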