OpenCompass/configs/models/aquila/hf_aquilachat2_7b_16k.py
Fengzhe Zhou 7505b3cadf
[Feature] Add huggingface apply_chat_template (#1098)
2024-05-14 14:50:16 +08:00


from opencompass.models import HuggingFaceCausalLM

_meta_template = dict(
    begin='###',
    round=[
        dict(role='HUMAN', begin='Human: ', end='###'),
        dict(role='BOT', begin='Assistant: ', end='</s>', generate=True),
    ],
)

models = [
    dict(
        type=HuggingFaceCausalLM,
        abbr='aquilachat2-7b-16k-hf',
        path='BAAI/AquilaChat2-7B-16K',
        tokenizer_path='BAAI/AquilaChat2-7B-16K',
        model_kwargs=dict(
            device_map='auto',
            trust_remote_code=True,
        ),
        tokenizer_kwargs=dict(
            padding_side='left',
            truncation_side='left',
            trust_remote_code=True,
            use_fast=False,
        ),
        meta_template=_meta_template,
        max_out_len=100,
        max_seq_len=4096,
        batch_size=8,
        run_cfg=dict(num_gpus=1, num_procs=1),
    )
]
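To see what the `_meta_template` above produces, here is a minimal sketch of how such a template could be rendered into a prompt string. The `render_prompt` helper is hypothetical, written only for illustration; it is not OpenCompass's actual rendering code, which handles system prompts, reserved-role fallbacks, and truncation as well.

```python
# Hypothetical illustration of meta-template rendering, not OpenCompass internals.
meta_template = dict(
    begin='###',
    round=[
        dict(role='HUMAN', begin='Human: ', end='###'),
        dict(role='BOT', begin='Assistant: ', end='</s>', generate=True),
    ],
)

def render_prompt(template, turns):
    """Concatenate each turn's content with its role's begin/end markers."""
    parts = [template.get('begin', '')]
    role_specs = {spec['role']: spec for spec in template['round']}
    for role, content in turns:
        spec = role_specs[role]
        # The generating role gets no end marker: the model continues from here.
        suffix = '' if spec.get('generate') else spec['end']
        parts.append(spec['begin'] + content + suffix)
    return ''.join(parts)

prompt = render_prompt(meta_template, [('HUMAN', 'Hello'), ('BOT', '')])
print(prompt)  # ###Human: Hello###Assistant:
```

Note that `end='</s>'` on the `BOT` round matches the model's EOS token, so generation stops cleanly at the end of the assistant turn.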