OpenCompass/configs/eval_llama2_7b_lveval.py
yuantao2108 bbec7d8733
[Feature] add lveval benchmark (#914)
* add lveval benchmark

* add LVEval readme file

* update LVEval readme file

* Update configs/eval_bluelm_32k_lveval.py

* Update configs/eval_llama2_7b_lveval.py

---------

Co-authored-by: yuantao <yuantao@infini-ai.com>
Co-authored-by: Mo Li <82895469+DseidLi@users.noreply.github.com>
2024-03-04 11:22:03 +08:00


from mmengine.config import read_base

with read_base():
    # Compose the run from predefined pieces: the LV-Eval datasets, the
    # Llama-2-7B-Chat HuggingFace model, and the LV-Eval summarizer.
    from .datasets.lveval.lveval import LVEval_datasets as datasets
    from .models.hf_llama.hf_llama2_7b_chat import models
    from .summarizers.lveval import summarizer

# Point the model and its tokenizer at the local HuggingFace checkpoint.
models[0]["path"] = "/path/to/your/huggingface_models/Llama-2-7b-chat-hf"
models[0]["tokenizer_path"] = "/path/to/your/huggingface_models/Llama-2-7b-chat-hf"
# Llama-2's 4k context window; longer LV-Eval prompts must be truncated.
models[0]["max_seq_len"] = 4096
# Greedy decoding for reproducible scores.
models[0]["generation_kwargs"] = dict(do_sample=False)
# Truncate overlong prompts in the middle, keeping the head and tail.
models[0]["mode"] = "mid"