OpenCompass/configs/models/others/vllm_orionstar_14b_longchat.py
Fengzhe Zhou 7505b3cadf
[Feature] Add huggingface apply_chat_template (#1098)
* add TheoremQA with 5-shot

* add huggingface_above_v4_33 classes

* use num_worker partitioner in cli

* update theoremqa

* update TheoremQA

* add TheoremQA

* rename theoremqa -> TheoremQA

* update TheoremQA output path

* rewrite many model configs

* update huggingface

* further update

* refine configs

* update configs

* update configs

* add configs/eval_llama3_instruct.py

* add summarizer multi faceted

* update bbh datasets

* update configs/models/hf_llama/lmdeploy_llama3_8b_instruct.py

* rename class

* update readme

* update hf above v4.33
2024-05-14 14:50:16 +08:00


from opencompass.models import VLLM

_meta_template = dict(
    begin='<s>',
    round=[
        dict(role='HUMAN', begin='Human: ', end='\n'),
        dict(role='BOT', begin='Assistant: ', end='</s>', generate=True),
    ],
)

models = [
    dict(
        abbr='orionstar-14b-longchat-vllm',
        type=VLLM,
        path='OrionStarAI/Orion-14B-LongChat',
        model_kwargs=dict(tensor_parallel_size=4),
        generation_kwargs=dict(temperature=0),
        meta_template=_meta_template,
        max_out_len=100,
        max_seq_len=4096,
        batch_size=32,
        run_cfg=dict(num_gpus=4, num_procs=1),
        end_str='<|endoftext|>',
    )
]
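
To illustrate what the `_meta_template` above does, here is a minimal sketch (not OpenCompass's actual implementation; `apply_meta_template` is a hypothetical helper) of how the per-role `begin`/`end` markers wrap a conversation into a prompt string. The role marked `generate=True` is left open so the model produces the completion and its `end` token itself.

```python
# Hypothetical sketch of meta_template rendering; OpenCompass's real
# prompt-building logic lives in its PromptTemplate machinery.
def apply_meta_template(template, messages):
    parts = [template.get('begin', '')]
    # index the round definitions by role name
    rounds = {r['role']: r for r in template['round']}
    for msg in messages:
        r = rounds[msg['role']]
        parts.append(r['begin'] + msg['content'])
        # the generating role's end token is emitted by the model, not us
        if not r.get('generate', False):
            parts.append(r['end'])
    return ''.join(parts)

_meta_template = dict(
    begin='<s>',
    round=[
        dict(role='HUMAN', begin='Human: ', end='\n'),
        dict(role='BOT', begin='Assistant: ', end='</s>', generate=True),
    ],
)

prompt = apply_meta_template(_meta_template, [
    dict(role='HUMAN', content='Hello'),
    dict(role='BOT', content=''),
])
print(repr(prompt))  # '<s>Human: Hello\nAssistant: '
```

Under this reading, `end_str='<|endoftext|>'` in the model config tells the evaluator where to truncate the raw generation, independent of the template markers.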