OpenCompass/configs/models/hf_llama/lmdeploy_llama3_8b_instruct.py
Fengzhe Zhou 7505b3cadf
[Feature] Add huggingface apply_chat_template (#1098)
* add TheoremQA with 5-shot

* add huggingface_above_v4_33 classes

* use num_worker partitioner in cli

* update theoremqa

* update TheoremQA

* add TheoremQA

* rename theoremqa -> TheoremQA

* update TheoremQA output path

* rewrite many model configs

* update huggingface

* further update

* refine configs

* update configs

* update configs

* add configs/eval_llama3_instruct.py

* add summarizer multi faceted

* update bbh datasets

* update configs/models/hf_llama/lmdeploy_llama3_8b_instruct.py

* rename class

* update readme

* update hf above v4.33
2024-05-14 14:50:16 +08:00


from opencompass.models import TurboMindModel

# Llama 3 chat format: each turn is wrapped as
# <|start_header_id|>{role}<|end_header_id|>\n\n{content}<|eot_id|>
# (<|begin_of_text|> is the BOS token and is not part of the per-turn header)
_meta_template = dict(
    round=[
        dict(role='HUMAN', begin='<|start_header_id|>user<|end_header_id|>\n\n', end='<|eot_id|>'),
        dict(role='BOT', begin='<|start_header_id|>assistant<|end_header_id|>\n\n', end='<|eot_id|>', generate=True),
    ],
)

models = [
    dict(
        type=TurboMindModel,
        abbr='llama-3-8b-instruct-lmdeploy',
        path='meta-llama/Meta-Llama-3-8B-Instruct',
        engine_config=dict(session_len=4096, max_batch_size=16, tp=1),
        # stop_words are token ids: 128001 = <|end_of_text|>, 128009 = <|eot_id|>
        gen_config=dict(top_k=1, temperature=1, top_p=0.9, max_new_tokens=1024, stop_words=[128001, 128009]),
        max_out_len=1024,
        max_seq_len=4096,
        batch_size=16,
        concurrency=16,
        meta_template=_meta_template,
        run_cfg=dict(num_gpus=1),
    )
]