OpenCompass/opencompass/configs/models/gemma/vllm_gemma_3_4b_it.py
Mo Li ff3275edf0
[Update] Add Long-Context configs for Gemma, OREAL, and Qwen2.5 models (#2048)
* [Update] Update Gemma, Oreal, Qwen Config

* fix lint
2025-05-08 19:06:56 +08:00


from opencompass.models import VLLMwithChatTemplate

models = [
    dict(
        type=VLLMwithChatTemplate,
        abbr='gemma-3-4b-it-vllm',
        path='google/gemma-3-4b-it',
        model_kwargs=dict(
            tensor_parallel_size=2,
            # for long context
            rope_scaling={'factor': 8.0, 'rope_type': 'linear'},
        ),
        max_seq_len=140000,
        max_out_len=4096,
        batch_size=1,
        generation_kwargs=dict(temperature=0),
        run_cfg=dict(num_gpus=2),
    )
]
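
For context, a model config like this is not run directly: it is usually pulled into an evaluation entry config through OpenCompass's `read_base` mechanism and combined with dataset configs. A minimal sketch, assuming this file's import path within the installed package (the dataset import is omitted and would be chosen per benchmark):

```python
# Sketch of an OpenCompass evaluation entry config (assumptions noted above).
from mmengine.config import read_base

with read_base():
    # Reuse the `models` list defined in vllm_gemma_3_4b_it.py.
    from opencompass.configs.models.gemma.vllm_gemma_3_4b_it import models
    # A dataset config would be imported here the same way.
```

The entry config is then passed to the `opencompass` CLI (e.g. `opencompass my_eval.py`), which launches vLLM with the `model_kwargs` shown: `tensor_parallel_size=2` shards the model across the two GPUs declared in `run_cfg`, and the linear RoPE scaling with `factor=8.0` stretches the positional encoding to cover the 140000-token `max_seq_len`.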