OpenCompass/configs/models/mistral/vllm_mistral_7b_v0_1.py
Last commit: 7505b3cadf by Fengzhe Zhou, 2024-05-14
[Feature] Add huggingface apply_chat_template (#1098)


from opencompass.models import VLLM

models = [
    dict(
        type=VLLM,
        abbr='mistral-7b-v0.1-vllm',
        path='mistralai/Mistral-7B-v0.1',
        # OpenCompass-side cap on generated tokens; this is the effective
        # limit even though max_tokens below is larger.
        max_out_len=100,
        max_seq_len=2048,
        batch_size=32,
        model_kwargs=dict(dtype='bfloat16'),
        # Token id 2 is Mistral's EOS token (</s>).
        generation_kwargs=dict(temperature=0, top_p=1, max_tokens=2048, stop_token_ids=[2]),
        run_cfg=dict(num_gpus=1, num_procs=1),
        # Stop if the model starts emitting a new instruction turn.
        stop_words=['[INST]'],
    )
]
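
A model config like this is not run on its own; it is pulled into an evaluation config together with one or more dataset configs. Below is a minimal sketch of such an entry point, assuming the standard OpenCompass layout: the `read_base` import mechanism is real, but the eval filename and the choice of the `gsm8k_gen` dataset config are illustrative assumptions, not part of this file.

```python
# Hypothetical configs/eval_mistral_vllm.py (filename is an assumption).
from mmengine.config import read_base

with read_base():
    # Pull in the vLLM-backed Mistral model defined above.
    from .models.mistral.vllm_mistral_7b_v0_1 import models
    # Any dataset config works here; gsm8k_gen is just an example choice.
    from .datasets.gsm8k.gsm8k_gen import gsm8k_datasets

datasets = gsm8k_datasets
```

Such a config would then typically be launched from the repository root with `python run.py configs/eval_mistral_vllm.py`.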