mirror of https://github.com/open-compass/opencompass.git
synced 2025-05-30 16:03:24 +08:00

[Deprecate] Remove multi-modal related stuff (#1072)

* Remove MultiModal
* update index.rst
* update README
* remove mmbench codes
* update news

---------

Co-authored-by: Leymore <zfz-960727@163.com>

parent f1ee11de14
commit 3a232db471
@@ -70,6 +70,7 @@ Just like a compass guides us on our journey, OpenCompass will guide you through

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

- **\[2024.04.26\]** We have deprecated the multimodal evaluation function in OpenCompass; the related implementation has moved to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), which we welcome you to use! 🔥🔥🔥
- **\[2024.04.26\]** We now support the evaluation of [ArenaHard](configs/eval_subjective_arena_hard.py), welcome to try it! 🔥🔥🔥
- **\[2024.04.22\]** We now support the evaluation of [LLaMA3](configs/models/hf_llama/hf_llama3_8b.py) and [LLaMA3-Instruct](configs/models/hf_llama/hf_llama3_8b_instruct.py), welcome to try it! 🔥🔥🔥
- **\[2024.02.29\]** We now support MT-Bench, AlpacaEval, and AlignBench; more information can be found [here](https://opencompass.readthedocs.io/en/latest/advanced_guides/subjective_evaluation.html).
@@ -60,7 +60,7 @@

🚩🚩🚩 Welcome to join OpenCompass! We are currently **hiring full-time researchers/engineers and interns**. If you are passionate about LLMs and OpenCompass, feel free to reach out to us via [email](mailto:zhangsongyang@pjlab.org.cn). We look forward to talking with you!

🔥🔥🔥 Congratulations: **OpenCompass has been officially recommended by Meta AI as a standard evaluation tool for large models**. See Llama's [getting-started guide](https://ai.meta.com/llama/get-started/#validation) for more information.

> **Note**<br />
> We have officially launched the OpenCompass co-building program and sincerely invite community users to contribute more representative and trustworthy objective evaluation datasets to OpenCompass!
@@ -69,6 +69,7 @@

## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>

- **\[2024.04.26\]** We have deprecated OpenCompass's multimodal large-model evaluation feature; the related functionality has moved to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit), which we recommend using! 🔥🔥🔥
- **\[2024.04.26\]** We now support [ArenaHard evaluation](configs/eval_subjective_arena_hard.py), welcome to try it! 🔥🔥🔥
- **\[2024.04.22\]** We now support the evaluation of [LLaMA3](configs/models/hf_llama/hf_llama3_8b.py) and [LLaMA3-Instruct](configs/models/hf_llama/hf_llama3_8b_instruct.py), welcome to try it! 🔥🔥🔥
- **\[2024.02.29\]** We now support MT-Bench, AlpacaEval, and AlignBench; more information can be found [here](https://opencompass.readthedocs.io/en/latest/advanced_guides/subjective_evaluation.html).
@@ -1,49 +0,0 @@

# InstructBLIP

### Prepare the environment

```sh
git clone https://github.com/salesforce/LAVIS.git
cd ./LAVIS
pip install -e .
```

### Modify the config

Modify the InstructBLIP config, such as the model paths of the LLM and the Q-Former; a sketch of the relevant fields follows.
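A minimal sketch of the fields that usually need local paths (the keys below mirror the dataset configs shipped in this directory; the path values are placeholders you must point at your own checkpoints):

```python
# Illustrative excerpt of an InstructBLIP dataset config; only the
# path-like fields normally need editing.
instruct_blip_model = dict(
    type='blip2-vicuna-instruct',
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',  # local Vicuna weights used as the LLM
)
# Q-Former / projection checkpoint loaded on top of the frozen backbones
instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
```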
Then update `tasks.py` like the following code snippet.

```python
from mmengine.config import read_base

with read_base():
    from .instructblip.instructblip_mmbench import (instruct_blip_dataloader,
                                                    instruct_blip_evaluator,
                                                    instruct_blip_load_from,
                                                    instruct_blip_model)

models = [instruct_blip_model]
datasets = [instruct_blip_dataloader]
evaluators = [instruct_blip_evaluator]
load_froms = [instruct_blip_load_from]
num_gpus = 8
num_procs = 8
launcher = 'pytorch'  # or 'slurm'
```

### Start evaluation

#### Slurm

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval --slurm -p $PARTITION
```

#### PyTorch

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval
```
@@ -1,53 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipCOCOCaotionPromptConstructor,
    InstructBlipCOCOCaptionPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(384, 384),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
]

dataset = dict(type='mmpretrain.COCOCaption',
               data_root='data/coco',
               data_prefix=dict(img_path='images'),
               ann_file='annotations/coco_karpathy_val.json',
               pipeline=val_pipeline)

instruct_blip_coco_caption_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False))

# model settings
instruct_blip_coco_caption_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipCOCOCaotionPromptConstructor),
    post_processor=dict(type=InstructBlipCOCOCaptionPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    img_size=384,
    is_caption_task=True,
)

# evaluation settings
instruct_blip_coco_caption_evaluator = [
    dict(
        type='mmpretrain.COCOCaption',
        ann_file='data/coco/annotations/coco_karpathy_val_gt.json',
    )  # noqa
]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,54 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipCOCOCaotionPromptConstructor,
    InstructBlipCOCOCaptionPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(384, 384),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
]

dataset = dict(type='mmpretrain.Flickr30kCaption',
               data_root='data/flickr30k',
               ann_file='annotations/dataset_flickr30k.json',
               data_prefix='images',
               split='val',
               pipeline=val_pipeline)

instruct_blip_flickr30k_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False))

# model settings
instruct_blip_flickr30k_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipCOCOCaotionPromptConstructor),
    post_processor=dict(type=InstructBlipCOCOCaptionPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    img_size=384,
    is_caption_task=True,
)

# evaluation settings
instruct_blip_flickr30k_evaluator = [
    dict(
        type='mmpretrain.COCOCaption',
        ann_file='data/flickr30k/annotations/flickr30k_val_gt.json',
    )  # noqa
]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,52 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipVQAPromptConstructor,
    InstructBlipVQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.GQA',
               data_root='data/gqa',
               data_prefix='images',
               ann_file='annotations/testdev_balanced_questions.json',
               pipeline=val_pipeline)

instruct_blip_gqa_dataloader = dict(batch_size=1,
                                    num_workers=4,
                                    dataset=dataset,
                                    collate_fn=dict(type='pseudo_collate'),
                                    sampler=dict(type='DefaultSampler',
                                                 shuffle=False))

# model settings
instruct_blip_gqa_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipVQAPromptConstructor),
    post_processor=dict(type=InstructBlipVQAPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    max_output_txt_len=10,
)

# evaluation settings
instruct_blip_gqa_evaluator = [dict(type='mmpretrain.GQAAcc')]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,51 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipMMBenchPromptConstructor, InstructBlipMMBenchPostProcessor)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs',
         algorithm_keys=[
             'question', 'category', 'l2-category', 'context', 'index',
             'options_dict', 'options', 'split'
         ])
]

dataset = dict(type='opencompass.MMBenchDataset',
               data_file='data/mmbench/mmbench_test_20230712.tsv',
               pipeline=val_pipeline)

instruct_blip_dataloader = dict(batch_size=1,
                                num_workers=4,
                                dataset=dataset,
                                collate_fn=dict(type='pseudo_collate'),
                                sampler=dict(type='DefaultSampler',
                                             shuffle=False))

# model settings
instruct_blip_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipMMBenchPromptConstructor),
    post_processor=dict(type=InstructBlipMMBenchPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    sys_prompt=  # noqa: E251
    '###Human: What is the capital of China? There are several options:\nA. Beijing\nB. Shanghai\nC. Guangzhou\nD. Shenzhen\n###Assistant: A\n'
)

# evaluation settings
instruct_blip_evaluator = [
    dict(
        type='opencompass.DumpResults',
        save_path=  # noqa: E251
        'work_dirs/instructblip_vicuna7b/instructblipvicuna_mmbench.xlsx')
]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed'
@@ -1,51 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipVQAPromptConstructor,
    InstructBlipVQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.OCRVQA',
               data_root='data/ocrvqa',
               ann_file='annotations/dataset.json',
               split='test',
               data_prefix='images',
               pipeline=val_pipeline)

instruct_blip_ocr_vqa_dataloader = dict(batch_size=1,
                                        num_workers=4,
                                        dataset=dataset,
                                        collate_fn=dict(type='pseudo_collate'),
                                        sampler=dict(type='DefaultSampler',
                                                     shuffle=False))

# model settings
instruct_blip_ocr_vqa_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipVQAPromptConstructor),
    post_processor=dict(type=InstructBlipVQAPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
)

# evaluation settings
instruct_blip_ocr_vqa_evaluator = [dict(type='mmpretrain.VQAAcc')]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,54 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipVQAPromptConstructor,
    InstructBlipVQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(
    type='mmpretrain.COCOVQA',
    data_root='data/okvqa',
    question_file='annotations/OpenEnded_mscoco_val2014_questions.json',
    ann_file='annotations/mscoco_val2014_annotations.json',
    pipeline=val_pipeline,
    data_prefix='images/val2014',
)

instruct_blip_ok_vqa_dataloader = dict(batch_size=1,
                                       num_workers=4,
                                       dataset=dataset,
                                       collate_fn=dict(type='pseudo_collate'),
                                       sampler=dict(type='DefaultSampler',
                                                    shuffle=False))

# model settings
instruct_blip_ok_vqa_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipVQAPromptConstructor),
    post_processor=dict(type=InstructBlipVQAPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    max_output_txt_len=10,
)

# evaluation settings
instruct_blip_ok_vqa_evaluator = [dict(type='mmpretrain.VQAAcc')]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,53 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipScienceQAPromptConstructor,
    InstructBlipScienceQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs',
         algorithm_keys=[
             'question', 'gt_answer', 'choices', 'hint', 'lecture', 'solution', 'has_image'
         ])
]

dataset = dict(type='mmpretrain.ScienceQA',
               data_root='./data/scienceqa',
               split='val',
               split_file='pid_splits.json',
               ann_file='problems.json',
               image_only=True,
               data_prefix=dict(img_path='val'),
               pipeline=val_pipeline)

instruct_blip_scienceqa_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False))

# model settings
instruct_blip_scienceqa_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipScienceQAPromptConstructor),
    post_processor=dict(type=InstructBlipScienceQAPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    max_output_txt_len=10,
)

# evaluation settings
instruct_blip_scienceqa_evaluator = [dict(type='mmpretrain.ScienceQAMetric')]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,53 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipVQAPromptConstructor,
    InstructBlipVQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(
    type='mmpretrain.TextVQA',
    data_root='data/textvqa',
    ann_file='annotations/TextVQA_0.5.1_val.json',
    pipeline=val_pipeline,
    data_prefix='images/train_images',
)

instruct_blip_textvqa_dataloader = dict(batch_size=1,
                                        num_workers=4,
                                        dataset=dataset,
                                        collate_fn=dict(type='pseudo_collate'),
                                        sampler=dict(type='DefaultSampler',
                                                     shuffle=False))

# model settings
instruct_blip_textvqa_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipVQAPromptConstructor),
    post_processor=dict(type=InstructBlipVQAPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    max_output_txt_len=10,
)

# evaluation settings
instruct_blip_textvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,51 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipVQAPromptConstructor,
    InstructBlipVQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.VizWiz',
               data_root='data/vizwiz/',
               data_prefix='Images/val',
               ann_file='Annotations/val.json',
               pipeline=val_pipeline)

instruct_blip_vizwiz_dataloader = dict(batch_size=1,
                                       num_workers=4,
                                       dataset=dataset,
                                       collate_fn=dict(type='pseudo_collate'),
                                       sampler=dict(type='DefaultSampler',
                                                    shuffle=False))

# model settings
instruct_blip_vizwiz_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipVQAPromptConstructor),
    post_processor=dict(type=InstructBlipVQAPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    max_output_txt_len=10,
)

# evaluation settings
instruct_blip_vizwiz_evaluator = [dict(type='mmpretrain.VQAAcc')]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,53 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipVQAPromptConstructor,
    InstructBlipVQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(
    type='mmpretrain.COCOVQA',
    data_root='data/coco',
    data_prefix='images/val2014',
    question_file='annotations/v2_OpenEnded_mscoco_val2014_questions.json',
    ann_file='annotations/v2_mscoco_val2014_annotations.json',
    pipeline=val_pipeline)

instruct_blip_vqav2_dataloader = dict(batch_size=1,
                                      num_workers=4,
                                      dataset=dataset,
                                      collate_fn=dict(type='pseudo_collate'),
                                      sampler=dict(type='DefaultSampler',
                                                   shuffle=False))

# model settings
instruct_blip_vqav2_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipVQAPromptConstructor),
    post_processor=dict(type=InstructBlipVQAPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    max_output_txt_len=10,
)

# evaluation settings
instruct_blip_vqav2_evaluator = [dict(type='mmpretrain.VQAAcc')]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,51 +0,0 @@
from opencompass.multimodal.models.instructblip import (
    InstructBlipVSRPromptConstructor,
    InstructBlipVSRPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.VSR',
               data_root='data/vsr/',
               data_prefix='images/',
               ann_file='annotations/test.json',
               pipeline=val_pipeline)

instruct_blip_vsr_dataloader = dict(batch_size=1,
                                    num_workers=4,
                                    dataset=dataset,
                                    collate_fn=dict(type='pseudo_collate'),
                                    sampler=dict(type='DefaultSampler',
                                                 shuffle=False))

# model settings
instruct_blip_vsr_model = dict(
    type='blip2-vicuna-instruct',
    prompt_constructor=dict(type=InstructBlipVSRPromptConstructor),
    post_processor=dict(type=InstructBlipVSRPostProcessor),
    freeze_vit=True,
    low_resource=False,
    llm_model='/path/to/vicuna-7b/',
    max_output_txt_len=10,
)

# evaluation settings
instruct_blip_vsr_evaluator = [dict(type='mmpretrain.GQAAcc')]

instruct_blip_load_from = '/path/to/instruct_blip_vicuna7b_trimmed.pth'
@@ -1,24 +0,0 @@

# Llama Adapter V2

### Prepare the environment

```sh
cd opencompass/multimodal/models/llama_adapter_v2_multimodal
git clone https://github.com/OpenGVLab/LLaMA-Adapter.git
```
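`tasks.py` then needs to point at the Llama Adapter V2 config, following the same pattern as the InstructBLIP example above. A minimal sketch, assuming the MMBench config module is named `llama_adapter_v2_mmbench` under a `llama_adapter_v2_multimodal` config directory (the variable names below come from that config; only the import path is an assumption):

```python
from mmengine.config import read_base

with read_base():
    # module path is an assumption; adjust it to the actual config location
    from .llama_adapter_v2_multimodal.llama_adapter_v2_mmbench import (
        llama_adapter_mmbench_dataloader, llama_adapter_mmbench_evaluator,
        llama_adapter_mmbench_load_from, llama_adapter_mmbench_model)

models = [llama_adapter_mmbench_model]
datasets = [llama_adapter_mmbench_dataloader]
evaluators = [llama_adapter_mmbench_evaluator]
load_froms = [llama_adapter_mmbench_load_from]
num_gpus = 8
num_procs = 8
launcher = 'pytorch'  # or 'slurm'
```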
### Start evaluation

#### Slurm

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval --slurm -p $PARTITION
```

#### PyTorch

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval
```
@@ -1,48 +0,0 @@
from opencompass.multimodal.models.llama_adapter_v2_multimodal import (
    LlamaAadapterMMBenchPostProcessor, LlamaAadapterMMBenchPromptConstructor)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs',
         algorithm_keys=[
             'question', 'answer', 'options', 'category', 'l2-category',
             'index', 'context', 'options_dict'
         ])
]

dataset = dict(type='opencompass.MMBenchDataset',
               data_file='data/mmbench/mmbench_test_20230712.tsv',
               pipeline=val_pipeline)

llama_adapter_mmbench_dataloader = dict(batch_size=1,
                                        num_workers=4,
                                        dataset=dataset,
                                        collate_fn=dict(type='pseudo_collate'),
                                        sampler=dict(type='DefaultSampler', shuffle=False))

# model settings
llama_adapter_mmbench_model = dict(
    type='LLaMA-adapter-v2',
    llama_dir=  # noqa
    '/llama_adapter_v2_multimodal',
    prompt_constructor=dict(type=LlamaAadapterMMBenchPromptConstructor),
    post_processor=dict(type=LlamaAadapterMMBenchPostProcessor)
)

# evaluation settings
llama_adapter_mmbench_evaluator = [
    dict(
        type='opencompass.DumpResults',
        save_path='work_dirs/llama-adapter-v2-multimodal-mmagibench-v0.1.0.xlsx'
    )
]

llama_adapter_mmbench_load_from = None  # noqa
@@ -1,10 +0,0 @@

# LLaVA

### Prepare the environment

```sh
cd opencompass/multimodal/models/llava
git clone https://github.com/haotian-liu/LLaVA.git
```

Then prepare the environment according to the [installation instructions](https://github.com/haotian-liu/LLaVA/tree/main#install).
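To evaluate LLaVA, `tasks.py` is updated in the same way as for the other models. A minimal sketch, assuming the MMBench config module is named `llava_mmbench` under a `llava` config directory (the variable names come from that config; LLaVA loads its weights from `model_path`, so no `load_froms` entry is needed):

```python
from mmengine.config import read_base

with read_base():
    # module path is an assumption; adjust it to the actual config location
    from .llava.llava_mmbench import (llava_mmbench_dataloader,
                                      llava_mmbench_evaluator,
                                      llava_mmbench_model)

models = [llava_mmbench_model]
datasets = [llava_mmbench_dataloader]
evaluators = [llava_mmbench_evaluator]
num_gpus = 8
num_procs = 8
launcher = 'pytorch'  # or 'slurm'
```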
@@ -1,50 +0,0 @@
from opencompass.multimodal.models.llava import LLaVABasePromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id']),
]

dataset = dict(type='mmpretrain.COCOCaption',
               data_root='data/coco',
               data_prefix=dict(img_path='images'),
               ann_file='annotations/coco_karpathy_val.json',
               pipeline=val_pipeline)

llava_coco_caption_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_coco_caption_model = dict(
    type='llava',
    model_path='/path/to/llava',
    is_caption_task=True,
    prompt_constructor=dict(type=LLaVABasePromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_coco_caption_evaluator = [
    dict(
        type='mmpretrain.COCOCaption',
        ann_file='data/coco/annotations/coco_karpathy_val_gt.json',
    )  # noqa
]
@@ -1,52 +0,0 @@
from opencompass.multimodal.models.llava import LLaVABasePromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id']),
]

dataset = dict(type='mmpretrain.Flickr30kCaption',
               data_root='data/flickr30k',
               ann_file='annotations/dataset_flickr30k.json',
               data_prefix='images',
               split='val',
               pipeline=val_pipeline)

llava_flickr30k_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_flickr30k_model = dict(
    type='llava',
    model_path='/path/to/llava',
    is_caption_task=True,
    prompt_constructor=dict(type=LLaVABasePromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_flickr30k_evaluator = [
    dict(
        type='mmpretrain.COCOCaption',
        ann_file='data/flickr30k/annotations/flickr30k_val_gt.json',
    )  # noqa
]
@@ -1,49 +0,0 @@
from opencompass.multimodal.models.llava import LLaVAVQAPromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.GQA',
               data_root='data/gqa',
               data_prefix='images',
               ann_file='annotations/testdev_balanced_questions.json',
               pipeline=val_pipeline)

llava_gqa_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_gqa_model = dict(
    type='llava',
    model_path='/path/to/llava',
    prompt_constructor=dict(type=LLaVAVQAPromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_gqa_evaluator = [dict(type='mmpretrain.GQAAcc')]
@@ -1,47 +0,0 @@
from opencompass.multimodal.models.llava import LLaVAMMBenchPromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=[
            'question', 'category', 'l2-category', 'context', 'index',
            'options_dict', 'options', 'split'
        ],
    ),
]

dataset = dict(type='opencompass.MMBenchDataset',
               data_file='data/mmbench/mmbench_test_20230712.tsv',
               pipeline=val_pipeline)

llava_mmbench_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_mmbench_model = dict(
    type='llava',
    model_path='/path/to/llava',
    prompt_constructor=dict(type=LLaVAMMBenchPromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_mmbench_evaluator = [
    dict(type='opencompass.DumpResults',
         save_path='work_dirs/llava-7b-mmbench.xlsx')
]
@@ -1,49 +0,0 @@
from opencompass.multimodal.models.llava import LLaVAVQAPromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.OCRVQA',
               data_root='data/ocrvqa',
               ann_file='annotations/dataset.json',
               split='test',
               data_prefix='images',
               pipeline=val_pipeline)

llava_ocrvqa_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_ocrvqa_model = dict(
    type='llava',
    model_path='/path/to/llava',
    prompt_constructor=dict(type=LLaVAVQAPromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_ocrvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
@@ -1,51 +0,0 @@
from opencompass.multimodal.models.llava import LLaVAVQAPromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(
    type='mmpretrain.COCOVQA',
    data_root='data/okvqa',
    question_file='annotations/OpenEnded_mscoco_val2014_questions.json',
    ann_file='annotations/mscoco_val2014_annotations.json',
    pipeline=val_pipeline,
    data_prefix='images/val2014',
)

llava_okvqa_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_okvqa_model = dict(
    type='llava',
    model_path='/path/to/llava',
    prompt_constructor=dict(type=LLaVAVQAPromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_okvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
@@ -1,50 +0,0 @@
from opencompass.multimodal.models.llava import LLaVAScienceQAPromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(type='mmpretrain.PackInputs',
         algorithm_keys=[
             'question', 'gt_answer', 'choices', 'hint', 'lecture', 'solution', 'has_image'
         ])
]

dataset = dict(type='mmpretrain.ScienceQA',
               data_root='./data/scienceqa',
               split='val',
               split_file='pid_splits.json',
               ann_file='problems.json',
               image_only=True,
               data_prefix=dict(img_path='val'),
               pipeline=val_pipeline)

llava_scienceqa_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_scienceqa_model = dict(
    type='llava',
    model_path='/path/to/llava',
    prompt_constructor=dict(type=LLaVAScienceQAPromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_scienceqa_evaluator = [dict(type='mmpretrain.ScienceQAMetric')]
@@ -1,50 +0,0 @@
from opencompass.multimodal.models.llava import LLaVAVQAPromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(
    type='mmpretrain.TextVQA',
    data_root='data/textvqa',
    ann_file='annotations/TextVQA_0.5.1_val.json',
    pipeline=val_pipeline,
    data_prefix='images/train_images',
)

llava_textvqa_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_textvqa_model = dict(
    type='llava',
    model_path='/path/to/llava',
    prompt_constructor=dict(type=LLaVAVQAPromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_textvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
@@ -1,48 +0,0 @@
from opencompass.multimodal.models.llava import LLaVAVQAPromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.VizWiz',
               data_root='data/vizwiz/',
               data_prefix='Images/val',
               ann_file='Annotations/val.json',
               pipeline=val_pipeline)

llava_vizwiz_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_vizwiz_model = dict(
    type='llava',
    model_path='/path/to/llava',
    prompt_constructor=dict(type=LLaVAVQAPromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_vizwiz_evaluator = [dict(type='mmpretrain.VQAAcc')]
@@ -1,50 +0,0 @@
from opencompass.multimodal.models.llava import LLaVAVQAPromptConstructor, LLaVABasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(
    type='mmpretrain.COCOVQA',
    data_root='data/coco',
    data_prefix='images/val2014',
    question_file='annotations/v2_OpenEnded_mscoco_val2014_questions.json',
    ann_file='annotations/v2_mscoco_val2014_annotations.json',
    pipeline=val_pipeline)

llava_vqav2_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_vqav2_model = dict(
    type='llava',
    model_path='/path/to/llava',
    prompt_constructor=dict(type=LLaVAVQAPromptConstructor),
    post_processor=dict(type=LLaVABasePostProcessor)
)  # noqa

# evaluation settings
llava_vqav2_evaluator = [dict(type='mmpretrain.VQAAcc')]
@@ -1,48 +0,0 @@
from opencompass.multimodal.models.llava import LLaVAVQAPromptConstructor, LLaVAVSRPostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.VSR',
               data_root='data/vsr/',
               data_prefix='images/',
               ann_file='annotations/test.json',
               pipeline=val_pipeline)

llava_vsr_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
llava_vsr_model = dict(
    type='llava',
    model_path='/path/to/llava',
    prompt_constructor=dict(type=LLaVAVQAPromptConstructor),
    post_processor=dict(type=LLaVAVSRPostProcessor)
)  # noqa

# evaluation settings
llava_vsr_evaluator = [dict(type='mmpretrain.GQAAcc')]
@@ -1,26 +0,0 @@

# MiniGPT-4

### Prepare the environment

```sh
cd opencompass/multimodal/models/minigpt_4
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
```

Then prepare the environment according to this [doc](https://github.com/Vision-CAIR/MiniGPT-4).
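`tasks.py` is then pointed at the MiniGPT-4 config in the same way as for the other models. A minimal sketch, assuming the MMBench config module is named `minigpt_4_mmbench` under a `minigpt_4` config directory (the variable names below come from that config; only the import path is an assumption):

```python
from mmengine.config import read_base

with read_base():
    # module path is an assumption; adjust it to the actual config location
    from .minigpt_4.minigpt_4_mmbench import (minigpt_4_mmbench_dataloader,
                                              minigpt_4_mmbench_evaluator,
                                              minigpt_4_mmbench_load_from,
                                              minigpt_4_mmbench_model)

models = [minigpt_4_mmbench_model]
datasets = [minigpt_4_mmbench_dataloader]
evaluators = [minigpt_4_mmbench_evaluator]
load_froms = [minigpt_4_mmbench_load_from]
num_gpus = 8
num_procs = 8
launcher = 'pytorch'  # or 'slurm'
```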
### Start evaluation

#### Slurm

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval --slurm -p $PARTITION
```

#### PyTorch

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval
```
@@ -1,53 +0,0 @@
from opencompass.multimodal.models.minigpt_4 import (
    MiniGPT4COCOCaotionPromptConstructor,
    MiniGPT4COCOCaptionPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(384, 384),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
]

dataset = dict(type='mmpretrain.COCOCaption',
               data_root='data/coco',
               data_prefix=dict(img_path='images'),
               ann_file='annotations/coco_karpathy_val.json',
               pipeline=val_pipeline)

minigpt_4_coco_caption_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False))

# model settings
minigpt_4_coco_caption_model = dict(
    type='minigpt-4',
    low_resource=False,
    img_size=384,
    llama_model='/path/to/vicuna_weights_7b/',
    is_caption_task=True,
    prompt_constructor=dict(type=MiniGPT4COCOCaotionPromptConstructor,
                            image_prompt='###Human: <Img><ImageHere></Img>',
                            reply_prompt='###Assistant:'),
    post_processor=dict(type=MiniGPT4COCOCaptionPostProcessor))

# evaluation settings
minigpt_4_coco_caption_evaluator = [
    dict(
        type='mmpretrain.COCOCaption',
        ann_file='data/coco/annotations/coco_karpathy_val_gt.json',
    )  # noqa
]

minigpt_4_coco_caption_load_from = '/path/to/prerained_minigpt4_7b.pth'  # noqa
@@ -1,54 +0,0 @@
from opencompass.multimodal.models.minigpt_4 import (
    MiniGPT4COCOCaotionPromptConstructor,
    MiniGPT4COCOCaptionPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(384, 384),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
]

dataset = dict(type='mmpretrain.Flickr30kCaption',
               data_root='data/flickr30k',
               ann_file='annotations/dataset_flickr30k.json',
               data_prefix='images',
               split='val',
               pipeline=val_pipeline)

minigpt_4_flickr30k_dataloader = dict(batch_size=1,
                                      num_workers=4,
                                      dataset=dataset,
                                      collate_fn=dict(type='pseudo_collate'),
                                      sampler=dict(type='DefaultSampler',
                                                   shuffle=False))

# model settings
minigpt_4_flickr30k_model = dict(
    type='minigpt-4',
    low_resource=False,
    img_size=384,
    llama_model='/path/to/vicuna_weights_7b/',
    is_caption_task=True,
    prompt_constructor=dict(type=MiniGPT4COCOCaotionPromptConstructor,
                            image_prompt='###Human: <Img><ImageHere></Img>',
                            reply_prompt='###Assistant:'),
    post_processor=dict(type=MiniGPT4COCOCaptionPostProcessor))

# evaluation settings
minigpt_4_flickr30k_evaluator = [
    dict(
        type='mmpretrain.COCOCaption',
        ann_file='data/flickr30k/annotations/flickr30k_val_gt.json',
    )  # noqa
]

minigpt_4_flickr30k_load_from = '/path/to/prerained_minigpt4_7b.pth'  # noqa
@@ -1,52 +0,0 @@
from opencompass.multimodal.models.minigpt_4 import (
    MiniGPT4VQAPromptConstructor,
    MiniGPT4VQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.GQA',
               data_root='data/gqa',
               data_prefix='images',
               ann_file='annotations/testdev_balanced_questions.json',
               pipeline=val_pipeline)

minigpt_4_gqa_dataloader = dict(batch_size=1,
                                num_workers=4,
                                dataset=dataset,
                                collate_fn=dict(type='pseudo_collate'),
                                sampler=dict(type='DefaultSampler',
                                             shuffle=False))

# model settings
minigpt_4_gqa_model = dict(type='minigpt-4',
                           low_resource=False,
                           img_size=224,
                           max_length=10,
                           llama_model='/path/to/vicuna_weights_7b/',
                           prompt_constructor=dict(
                               type=MiniGPT4VQAPromptConstructor,
                               image_prompt='###Human: <Img><ImageHere></Img>',
                               reply_prompt='###Assistant:'),
                           post_processor=dict(type=MiniGPT4VQAPostProcessor))

# evaluation settings
minigpt_4_gqa_evaluator = [dict(type='mmpretrain.GQAAcc')]

minigpt_4_gqa_load_from = '/path/to/prerained_minigpt4_7b.pth'  # noqa
@@ -1,47 +0,0 @@
from opencompass.multimodal.models.minigpt_4 import (
    MiniGPT4MMBenchPromptConstructor, MiniGPT4MMBenchPostProcessor)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs',
         algorithm_keys=[
             'question', 'category', 'l2-category', 'context', 'index',
             'options_dict', 'options', 'split'
         ])
]

dataset = dict(type='opencompass.MMBenchDataset',
               data_file='data/mmbench/mmbench_test_20230712.tsv',
               pipeline=val_pipeline)

minigpt_4_mmbench_dataloader = dict(batch_size=1,
                                    num_workers=4,
                                    dataset=dataset,
                                    collate_fn=dict(type='pseudo_collate'),
                                    sampler=dict(type='DefaultSampler',
                                                 shuffle=False))

# model settings
minigpt_4_mmbench_model = dict(
    type='minigpt-4',
    low_resource=False,
    llama_model='/path/to/vicuna-7b/',
    prompt_constructor=dict(type=MiniGPT4MMBenchPromptConstructor,
                            image_prompt='###Human: <Img><ImageHere></Img>',
                            reply_prompt='###Assistant:'),
    post_processor=dict(type=MiniGPT4MMBenchPostProcessor))

# evaluation settings
minigpt_4_mmbench_evaluator = [
    dict(type='opencompass.DumpResults',
         save_path='work_dirs/minigpt-4-7b-mmbench.xlsx')
]

minigpt_4_mmbench_load_from = '/path/to/prerained_minigpt4_7b.pth'  # noqa
@@ -1,43 +0,0 @@
from opencompass.multimodal.models.minigpt_4 import (MiniGPT4MMEPostProcessor, MiniGPT4MMEPromptConstructor)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs',
         algorithm_keys=[
             'question', 'answer', 'task'
         ])
]

dataset = dict(type='opencompass.MMEDataset',
               data_dir='/path/to/MME',
               pipeline=val_pipeline)

minigpt_4_mme_dataloader = dict(batch_size=1,
                                num_workers=4,
                                dataset=dataset,
                                collate_fn=dict(type='pseudo_collate'),
                                sampler=dict(type='DefaultSampler', shuffle=False))

# model settings
minigpt_4_model = dict(
    type='minigpt-4',
    low_resource=False,
    llama_model='/path/to/vicuna/',
    prompt_constructor=dict(type=MiniGPT4MMEPromptConstructor),
    post_processor=dict(type=MiniGPT4MMEPostProcessor))

# evaluation settings
minigpt_4_mme_evaluator = [
    dict(type='opencompass.MMEMetric')
]

minigpt_4_load_from = '/path/to/prerained_minigpt4_7b.pth'  # noqa
@@ -1,53 +0,0 @@
from opencompass.multimodal.models.minigpt_4 import (
    MiniGPT4VQAPromptConstructor,
    MiniGPT4VQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(type='mmpretrain.OCRVQA',
               data_root='data/ocrvqa',
               ann_file='annotations/dataset.json',
               split='test',
               data_prefix='images',
               pipeline=val_pipeline)

minigpt_4_ocr_vqa_dataloader = dict(batch_size=1,
                                    num_workers=4,
                                    dataset=dataset,
                                    collate_fn=dict(type='pseudo_collate'),
                                    sampler=dict(type='DefaultSampler',
                                                 shuffle=False))

# model settings
minigpt_4_ocr_vqa_model = dict(
    type='minigpt-4',
    low_resource=False,
    img_size=224,
    max_length=10,
    llama_model='/path/to/vicuna_weights_7b/',
    prompt_constructor=dict(type=MiniGPT4VQAPromptConstructor,
                            image_prompt='###Human: <Img><ImageHere></Img>',
                            reply_prompt='###Assistant:'),
    post_processor=dict(type=MiniGPT4VQAPostProcessor))

# evaluation settings
minigpt_4_ocr_vqa_evaluator = [dict(type='mmpretrain.VQAAcc')]

minigpt_4_ocr_vqa_load_from = '/path/to/prerained_minigpt4_7b.pth'  # noqa
@@ -1,55 +0,0 @@
from opencompass.multimodal.models.minigpt_4 import (
    MiniGPT4VQAPromptConstructor,
    MiniGPT4VQAPostProcessor,
)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.LoadImageFromFile'),
    dict(type='mmpretrain.ToPIL', to_rgb=True),
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
        meta_keys=['question_id', 'image_id'],
    )
]

dataset = dict(
    type='mmpretrain.COCOVQA',
    data_root='data/okvqa',
    question_file='annotations/OpenEnded_mscoco_val2014_questions.json',
    ann_file='annotations/mscoco_val2014_annotations.json',
    pipeline=val_pipeline,
    data_prefix='images/val2014',
)

minigpt_4_ok_vqa_dataloader = dict(batch_size=1,
                                   num_workers=4,
                                   dataset=dataset,
                                   collate_fn=dict(type='pseudo_collate'),
                                   sampler=dict(type='DefaultSampler',
                                                shuffle=False))

# model settings
minigpt_4_ok_vqa_model = dict(
    type='minigpt-4',
    low_resource=False,
    img_size=224,
    max_length=10,
    llama_model='/path/to/vicuna_weights_7b/',
    prompt_constructor=dict(type=MiniGPT4VQAPromptConstructor,
                            image_prompt='###Human: <Img><ImageHere></Img>',
                            reply_prompt='###Assistant:'),
    post_processor=dict(type=MiniGPT4VQAPostProcessor))

# evaluation settings
minigpt_4_ok_vqa_evaluator = [dict(type='mmpretrain.VQAAcc')]

minigpt_4_ok_vqa_load_from = '/path/to/prerained_minigpt4_7b.pth'  # noqa
@ -1,52 +0,0 @@
|
||||
from opencompass.multimodal.models import (MiniGPT4ScienceQAPromptConstructor,
|
||||
MiniGPT4ScienceQAPostProcessor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'gt_answer', 'choices', 'hint', 'lecture', 'solution', 'has_image'
|
||||
])
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.ScienceQA',
|
||||
data_root='./data/scienceqa',
|
||||
split='val',
|
||||
split_file='pid_splits.json',
|
||||
ann_file='problems.json',
|
||||
image_only=True,
|
||||
data_prefix=dict(img_path='val'),
|
||||
pipeline=val_pipeline)
|
||||
|
||||
minigpt_4_scienceqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler',
|
||||
shuffle=False))
|
||||
|
||||
# model settings
|
||||
minigpt_4_scienceqa_model = dict(
|
||||
type='minigpt-4',
|
||||
low_resource=False,
|
||||
img_size=224,
|
||||
max_length=10,
|
||||
llama_model='/path/to/vicuna_weights_7b/',
|
||||
prompt_constructor=dict(type=MiniGPT4ScienceQAPromptConstructor,
|
||||
image_prompt='###Human: <Img><ImageHere></Img>',
|
||||
reply_prompt='###Assistant:'),
|
||||
post_processor=dict(type=MiniGPT4ScienceQAPostProcessor))
|
||||
|
||||
# evaluation settings
|
||||
minigpt_4_scienceqa_evaluator = [dict(type='mmpretrain.ScienceQAMetric')]
|
||||
|
||||
minigpt_4_scienceqa_load_from = '/path/to/prerained_minigpt4_7b.pth' # noqa
|
@ -1,63 +0,0 @@
|
||||
from opencompass.multimodal.models.minigpt_4 import MiniGPT4SEEDBenchPromptConstructor # noqa
|
||||
|
||||
# dataloader settings
|
||||
image_pipeline = [
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'answer', 'choices', 'data_type', 'question_type_id',
|
||||
'index', 'data_path', 'question_id'
|
||||
])
|
||||
]
|
||||
video_pipeline = [
|
||||
dict(type='mmaction.Resize', scale=(224, 224), interpolation='bicubic'),
|
||||
dict(type='mmaction.CenterCrop', crop_size=224),
|
||||
dict(type='Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'answer', 'choices', 'data_type', 'question_type_id',
|
||||
'index', 'data_path', 'question_id'
|
||||
])
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='opencompass.SEEDBenchDataset',
|
||||
ann_file='data/seedbench/SEED-Bench.json',
|
||||
cc3m_path='data/seedbench/SEED-Bench-image',
|
||||
sthv2_path='data/seedbench/sthv2/videos',
|
||||
epic_kitchens_path='data/seedbench/3h91syskeag572hl6tvuovwv4d/videos/test',
|
||||
breakfast_path='data/seedbench/BreakfastII_15fps_qvga_sync',
|
||||
image_pipeline=image_pipeline,
|
||||
video_pipeline=video_pipeline,
|
||||
only_image=True)
|
||||
|
||||
minigpt_4_seedbench_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler',
|
||||
shuffle=False))
|
||||
|
||||
# model settings
|
||||
minigpt_4_seedbench_model = dict(
|
||||
type='minigpt-4',
|
||||
low_resource=False,
|
||||
llama_model='/path/to/vicuna/',
|
||||
prompt_constructor=dict(type=MiniGPT4SEEDBenchPromptConstructor,
|
||||
image_prompt='###Human: <Img><ImageHere></Img>',
|
||||
reply_prompt='###Assistant:'),
|
||||
post_processor=None,
|
||||
mode='loss')
|
||||
|
||||
# evaluation settings
|
||||
minigpt_4_seedbench_evaluator = [dict(type='opencompass.SEEDBenchAcc')]
|
||||
|
||||
minigpt_4_load_from = '/path/to/prerained_minigpt4_7b.pth'
|
@ -1,55 +0,0 @@
|
||||
from opencompass.multimodal.models.minigpt_4 import (
|
||||
MiniGPT4VQAPromptConstructor,
|
||||
MiniGPT4VQAPostProcessor,
|
||||
)
|
||||
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.TextVQA',
|
||||
data_root='data/textvqa',
|
||||
ann_file='annotations/TextVQA_0.5.1_val.json',
|
||||
pipeline=val_pipeline,
|
||||
data_prefix='images/train_images',
|
||||
)
|
||||
|
||||
minigpt_4_textvqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler',
|
||||
shuffle=False))
|
||||
|
||||
# model settings
|
||||
minigpt_4_textvqa_model = dict(
|
||||
type='minigpt-4',
|
||||
low_resource=False,
|
||||
img_size=224,
|
||||
max_length=10,
|
||||
llama_model='/path/to/vicuna_weights_7b/',
|
||||
prompt_constructor=dict(type=MiniGPT4VQAPromptConstructor,
|
||||
image_prompt='###Human: <Img><ImageHere></Img>',
|
||||
reply_prompt='###Assistant:'),
|
||||
post_processor=dict(type=MiniGPT4VQAPostProcessor))
|
||||
|
||||
# evaluation settings
|
||||
minigpt_4_textvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
||||
|
||||
minigpt_4_textvqa_load_from = '/path/to/prerained_minigpt4_7b.pth' # noqa
|
@ -1,52 +0,0 @@
|
||||
from opencompass.multimodal.models.minigpt_4 import (
|
||||
MiniGPT4VQAPromptConstructor,
|
||||
MiniGPT4VQAPostProcessor,
|
||||
)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.VizWiz',
|
||||
data_root='data/vizwiz/',
|
||||
data_prefix='Images/val',
|
||||
ann_file='Annotations/val.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
minigpt_4_vizwiz_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler',
|
||||
shuffle=False))
|
||||
|
||||
# model settings
|
||||
minigpt_4_vizwiz_model = dict(
|
||||
type='minigpt-4',
|
||||
low_resource=False,
|
||||
img_size=224,
|
||||
max_length=10,
|
||||
llama_model='/path/to/vicuna_weights_7b/',
|
||||
prompt_constructor=dict(type=MiniGPT4VQAPromptConstructor,
|
||||
image_prompt='###Human: <Img><ImageHere></Img>',
|
||||
reply_prompt='###Assistant:'),
|
||||
post_processor=dict(type=MiniGPT4VQAPostProcessor))
|
||||
|
||||
# evaluation settings
|
||||
minigpt_4_vizwiz_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
||||
|
||||
minigpt_4_vizwiz_load_from = '/path/to/prerained_minigpt4_7b.pth' # noqa
|
@ -1,55 +0,0 @@
|
||||
from opencompass.multimodal.models.minigpt_4 import (
|
||||
MiniGPT4VQAPromptConstructor,
|
||||
MiniGPT4VQAPostProcessor,
|
||||
)
|
||||
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.COCOVQA',
|
||||
data_root='data/coco',
|
||||
data_prefix='images/val2014',
|
||||
question_file='annotations/v2_OpenEnded_mscoco_val2014_questions.json',
|
||||
ann_file='annotations/v2_mscoco_val2014_annotations.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
minigpt_4_vqav2_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler',
|
||||
shuffle=False))
|
||||
|
||||
# model settings
|
||||
minigpt_4_vqav2_model = dict(
|
||||
type='minigpt-4',
|
||||
low_resource=False,
|
||||
img_size=224,
|
||||
max_length=10,
|
||||
llama_model='/path/to/vicuna_weights_7b/',
|
||||
prompt_constructor=dict(type=MiniGPT4VQAPromptConstructor,
|
||||
image_prompt='###Human: <Img><ImageHere></Img>',
|
||||
reply_prompt='###Assistant:'),
|
||||
post_processor=dict(type=MiniGPT4VQAPostProcessor))
|
||||
|
||||
# evaluation settings
|
||||
minigpt_4_vqav2_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
||||
|
||||
minigpt_4_vqav2_load_from = '/path/to/prerained_minigpt4_7b.pth' # noqa
|
@ -1,52 +0,0 @@
|
||||
from opencompass.multimodal.models.minigpt_4 import (
|
||||
MiniGPT4VSRPromptConstructor,
|
||||
MiniGPT4VSRPostProcessor,
|
||||
)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.VSR',
|
||||
data_root='data/vsr/',
|
||||
data_prefix='images/',
|
||||
ann_file='annotations/test.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
minigpt_4_vsr_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler',
|
||||
shuffle=False))
|
||||
|
||||
# model settings
|
||||
minigpt_4_vsr_model = dict(
|
||||
type='minigpt-4',
|
||||
low_resource=False,
|
||||
img_size=224,
|
||||
max_length=10,
|
||||
llama_model='/path/to/vicuna_weights_7b/',
|
||||
prompt_constructor=dict(type=MiniGPT4VSRPromptConstructor,
|
||||
image_prompt='###Human: <Img><ImageHere></Img>',
|
||||
reply_prompt='###Assistant:'),
|
||||
post_processor=dict(type=MiniGPT4VSRPostProcessor))
|
||||
|
||||
# evaluation settings
|
||||
minigpt_4_vsr_evaluator = [dict(type='mmpretrain.GQAAcc')]
|
||||
|
||||
minigpt_4_vsr_load_from = '/path/to/prerained_minigpt4_7b.pth' # noqa
|
@ -1,24 +0,0 @@
# MplugOwl

### Prepare the environment

```sh
cd opencompass/multimodal/models/mplug_owl
git clone https://github.com/X-PLUG/mPLUG-Owl.git
```

### Start evaluation

#### Slurm

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval --slurm -p $PARTITION
```

#### PyTorch

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval
```
@ -1,48 +0,0 @@
from opencompass.multimodal.models.mplug_owl import (
    MplugOwlMMBenchPostProcessor, MplugOwlMMBenchPromptConstructor)

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(
        type='mmpretrain.torchvision/Normalize',
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type='mmpretrain.PackInputs',
        algorithm_keys=[
            'question', 'answer', 'category', 'l2-category', 'context',
            'index', 'options_dict', 'options'
        ],
    ),
]

dataset = dict(type='opencompass.MMBenchDataset',
               data_file='data/mmbench/mmbench_test_20230712.tsv',
               pipeline=val_pipeline)

mplug_owl_mmbench_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type='pseudo_collate'),
    sampler=dict(type='DefaultSampler', shuffle=False),
)

# model settings
mplug_owl_mmbench_model = dict(
    type='mplug_owl-7b',
    model_path='/mplug-owl-llama-7b-ft',
    prompt_constructor=dict(type=MplugOwlMMBenchPromptConstructor),
    post_processor=dict(type=MplugOwlMMBenchPostProcessor)
)  # noqa

# evaluation settings
mplug_owl_mmbench_evaluator = [
    dict(type='opencompass.DumpResults',
         save_path='work_dirs/mplug_owl-7b-mmagibench-v0.1.0.xlsx')
]
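`DumpResults` writes the raw predictions to the configured `save_path` rather than computing a score locally. A small sketch for inspecting the dumped sheet afterwards; it assumes the evaluation has already produced that file and that pandas with an xlsx engine (e.g. openpyxl) is installed, and it does not assert any particular column layout.

```python
# Peek at the dumped MMBench predictions (sketch; path taken from the
# evaluator's save_path above, column names depend on DumpResults).
import pandas as pd

df = pd.read_excel('work_dirs/mplug_owl-7b-mmagibench-v0.1.0.xlsx')
print(df.shape)
print(df.head())
```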
@ -1,21 +0,0 @@
# OpenFlamingo

### Prepare the environment

Install [MMPretrain](https://github.com/open-mmlab/mmpretrain) according to this [doc](https://mmpretrain.readthedocs.io/en/latest/get_started.html#installation)

### Start evaluation

#### Slurm

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval --slurm -p $PARTITION
```

#### PyTorch

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval
```
@ -1,75 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoCaptionPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.COCOCaption',
|
||||
data_root='data/coco',
|
||||
data_prefix=dict(img_path='images'),
|
||||
ann_file='annotations/coco_karpathy_val.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
openflamingo_coco_caption_dataloader = dict(
|
||||
batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_coco_caption_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='caption',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoCaptionPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_coco_caption_evaluator = [
|
||||
dict(
|
||||
type='mmpretrain.COCOCaption',
|
||||
ann_file='data/coco/annotations/coco_karpathy_val_gt.json',
|
||||
) # noqa
|
||||
]
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
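The OpenFlamingo configs normalize raw 0-255 pixels inside `mmpretrain.MultiModalDataPreprocessor`, whereas the earlier pipelines normalize `[0, 1]` tensors; the constants are the same CLIP statistics, just scaled by 255. A quick sanity check in plain Python:

```python
# The 0-255 mean/std above are the CLIP normalization constants times 255.
clip_mean = (0.48145466, 0.4578275, 0.40821073)
clip_std = (0.26862954, 0.26130258, 0.27577711)

print([m * 255 for m in clip_mean])  # ~[122.770938, 116.746012, 104.093736]
print([s * 255 for s in clip_std])   # ~[68.500533, 66.632158, 70.323163]
```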
@ -1,76 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoCaptionPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.Flickr30kCaption',
|
||||
data_root='data/flickr30k',
|
||||
ann_file='annotations/dataset_flickr30k.json',
|
||||
data_prefix='images',
|
||||
split='val',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
openflamingo_flickr30k_dataloader = dict(
|
||||
batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_flickr30k_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='caption',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoCaptionPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_flickr30k_evaluator = [
|
||||
dict(
|
||||
type='mmpretrain.COCOCaption',
|
||||
ann_file='data/flickr30k/annotations/flickr30k_val_gt.json',
|
||||
) # noqa
|
||||
]
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,75 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoVQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.GQA',
|
||||
data_root='data/gqa',
|
||||
data_prefix='images',
|
||||
ann_file='annotations/testdev_balanced_questions.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
openflamingo_gqa_dataloader = dict(
|
||||
batch_size=8,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_gqa_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='vqa',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_gqa_evaluator = [dict(type='mmpretrain.GQAAcc')]
|
||||
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,77 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoMMBenchPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.PILToNumpy'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'options', 'category', 'l2-category', 'index',
|
||||
'context', 'options_dict'
|
||||
])
|
||||
]
|
||||
|
||||
dataset = dict(type='opencompass.MMBenchDataset',
|
||||
data_file='data/mmbench/mmbench_test_20230712.tsv',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
openflamingo_mmbench_dataloader = dict(
|
||||
batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_mmbench_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='vqa',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoMMBenchPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_mmbench_evaluator = [
|
||||
dict(
|
||||
type='opencompass.DumpResults',
|
||||
save_path= # noqa: E251
|
||||
'work_dirs/9b-flamingo/9b-flamingo-mmbench.xlsx')
|
||||
]
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,75 +0,0 @@
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.OCRVQA',
|
||||
data_root='data/ocrvqa',
|
||||
ann_file='annotations/dataset.json',
|
||||
split='test',
|
||||
data_prefix='images',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
openflamingo_ocrvqa_dataloader = dict(
|
||||
batch_size=8,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoVQAPromptConstructor
|
||||
|
||||
# model settings
|
||||
openflamingo_ocrvqa_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='vqa',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_ocrvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,77 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoVQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.COCOVQA',
|
||||
data_root='data/okvqa',
|
||||
question_file='annotations/OpenEnded_mscoco_val2014_questions.json',
|
||||
ann_file='annotations/mscoco_val2014_annotations.json',
|
||||
pipeline=val_pipeline,
|
||||
data_prefix='images/val2014',
|
||||
)
|
||||
|
||||
openflamingo_okvqa_dataloader = dict(
|
||||
batch_size=8,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_okvqa_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='vqa',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_okvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,76 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoScienceQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'gt_answer', 'choices', 'hint', 'lecture', 'solution'
|
||||
])
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.ScienceQA',
|
||||
data_root='./data/scienceqa',
|
||||
split='val',
|
||||
split_file='pid_splits.json',
|
||||
ann_file='problems.json',
|
||||
image_only=True,
|
||||
data_prefix=dict(img_path='val'),
|
||||
pipeline=val_pipeline)
|
||||
|
||||
openflamingo_scienceqa_dataloader = dict(
|
||||
batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_scienceqa_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='vqa',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoScienceQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_scienceqa_evaluator = [dict(type='mmpretrain.ScienceQAMetric')]
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,76 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoVQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.TextVQA',
|
||||
data_root='data/textvqa',
|
||||
ann_file='annotations/TextVQA_0.5.1_val.json',
|
||||
pipeline=val_pipeline,
|
||||
data_prefix='images/train_images',
|
||||
)
|
||||
|
||||
openflamingo_textvqa_dataloader = dict(
|
||||
batch_size=8,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_textvqa_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='vqa',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_textvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,74 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoVQAPromptConstructor
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.VizWiz',
|
||||
data_root='data/vizwiz/',
|
||||
data_prefix='Images/val',
|
||||
ann_file='Annotations/val.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
openflamingo_vizwiz_dataloader = dict(
|
||||
batch_size=8,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_vizwiz_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='vqa',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_vizwiz_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
||||
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,75 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoVQAPromptConstructor
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.COCOVQA',
|
||||
data_root='data/coco',
|
||||
data_prefix='images/val2014',
|
||||
question_file='annotations/v2_OpenEnded_mscoco_val2014_questions.json',
|
||||
ann_file='annotations/v2_mscoco_val2014_annotations.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
openflamingo_vqav2_dataloader = dict(
|
||||
batch_size=8,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_vqav2_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='vqa',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_vqav2_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,75 +0,0 @@
|
||||
from opencompass.multimodal.models.openflamingo import OpenFlamingoVQAPromptConstructor, OpenFlamingoVSRPostProcessor
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ResizeEdge',
|
||||
scale=224,
|
||||
interpolation='bicubic',
|
||||
backend='pillow'),
|
||||
dict(type='CenterCrop', crop_size=(224, 224)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.VSR',
|
||||
data_root='data/vsr/',
|
||||
data_prefix='images/',
|
||||
ann_file='annotations/test.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
openflamingo_vsr_dataloader = dict(
|
||||
batch_size=8,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
sampler=dict(type='DefaultSampler', shuffle=False),
|
||||
collate_fn=dict(type='default_collate'),
|
||||
persistent_workers=True,
|
||||
)
|
||||
|
||||
# model settings
|
||||
openflamingo_vsr_model = dict(
|
||||
type='openflamingo',
|
||||
data_preprocessor=dict(
|
||||
type='mmpretrain.MultiModalDataPreprocessor',
|
||||
mean=[122.770938, 116.7460125, 104.09373615],
|
||||
std=[68.5005327, 66.6321579, 70.32316305],
|
||||
to_rgb=True,
|
||||
),
|
||||
tokenizer=dict(type='mmpretrain.LlamaTokenizer',
|
||||
name_or_path='decapoda-research/llama-7b-hf'),
|
||||
vision_encoder=dict(
|
||||
type='mmpretrain.VisionTransformer',
|
||||
arch='l',
|
||||
patch_size=14,
|
||||
pre_norm=True,
|
||||
norm_cfg=dict(type='LN', eps=1e-5),
|
||||
layer_cfgs=dict(act_cfg=dict(type='mmpretrain.QuickGELU')),
|
||||
final_norm=False,
|
||||
out_type='raw',
|
||||
pretrained= # noqa: E251
|
||||
'/path/to/vision/encoder', # noqa
|
||||
),
|
||||
lang_encoder=dict(
|
||||
base=dict(type='mmpretrain.AutoModelForCausalLM',
|
||||
name_or_path=
|
||||
'decapoda-research/llama-7b-hf',
|
||||
local_files_only=True),
|
||||
adapter=dict(type='mmpretrain.FlamingoLMAdapter',
|
||||
vis_hidden_size=1024,
|
||||
cross_attn_every_n_layers=4,
|
||||
use_media_placement_augmentation=False),
|
||||
),
|
||||
task='vqa',
|
||||
generation_cfg=dict(num_beams=3, max_new_tokens=20, length_penalty=-2.0),
|
||||
prompt_constructor=dict(type=OpenFlamingoVQAPromptConstructor, shot_prompt=('The cat is behind the laptop. Short Answer:yes<|endofchunk|>' # noqa: E501
|
||||
'The cow is ahead of the person. Short Answer:no<|endofchunk|>')),
|
||||
post_processor=dict(type=OpenFlamingoVSRPostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
openflamingo_vsr_evaluator = [dict(type='mmpretrain.GQAAcc')]
|
||||
|
||||
openflamingo_load_from = '/path/to/pretrained/weights' # noqa
|
@ -1,24 +0,0 @@
# OTTER: Multi-modal In-context Instruction Tuning

### Prepare the environment

```sh
pip install otter_ai
```

### Start evaluation

#### Slurm

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval --slurm -p $PARTITION
```

#### PyTorch

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval
```
@ -1,43 +0,0 @@
# dataloader settings
from opencompass.multimodal.models.otter import (
    OTTERMMBenchPromptConstructor, OTTERMMBenchPostProcessor)

val_pipeline = [
    dict(type="mmpretrain.torchvision/Resize", size=(224, 224), interpolation=3),
    dict(type="mmpretrain.torchvision/ToTensor"),
    dict(
        type="mmpretrain.torchvision/Normalize",
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
    dict(
        type="mmpretrain.PackInputs",
        algorithm_keys=["question", "answer", "options", "category", "l2-category", "context", "index", "options_dict"],
    ),
]

dataset = dict(
    type="opencompass.MMBenchDataset", data_file="/path/to/mmbench/mmbench_test_20230712.tsv", pipeline=val_pipeline
)

otter_9b_mmbench_dataloader = dict(
    batch_size=1,
    num_workers=4,
    dataset=dataset,
    collate_fn=dict(type="pseudo_collate"),
    sampler=dict(type="DefaultSampler", shuffle=False),
)

# model settings
otter_9b_mmbench_model = dict(
    type="otter-9b",
    model_path="/path/to/OTTER-Image-MPT7B/",  # noqa
    load_bit="bf16",
    prompt_constructor=dict(type=OTTERMMBenchPromptConstructor,
                            model_label='GPT',
                            user_label='User'),
    post_processor=dict(type=OTTERMMBenchPostProcessor)
)

# evaluation settings
otter_9b_mmbench_evaluator = [dict(type="opencompass.DumpResults", save_path="work_dirs/otter-9b-mmbench.xlsx")]
@ -1,41 +0,0 @@
from opencompass.multimodal.models.qwen import QwenVLMMBenchPromptConstructor, QwenVLBasePostProcessor

# dataloader settings
val_pipeline = [
    dict(type='mmpretrain.torchvision/Resize',
         size=(448, 448),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs',
         algorithm_keys=[
             'question', 'options', 'category', 'l2-category', 'context',
             'index', 'options_dict'
         ])
]

dataset = dict(type='opencompass.MMBenchDataset',
               data_file='data/mmbench/mmbench_test_20230712.tsv',
               pipeline=val_pipeline)

qwen_mmbench_dataloader = dict(batch_size=1,
                               num_workers=4,
                               dataset=dataset,
                               collate_fn=dict(type='pseudo_collate'),
                               sampler=dict(type='DefaultSampler',
                                            shuffle=False))

# model settings
qwen_model = dict(
    type='qwen-vl-base',
    pretrained_path='Qwen/Qwen-VL',  # or Huggingface repo id
    prompt_constructor=dict(type=QwenVLMMBenchPromptConstructor),
    post_processor=dict(type=QwenVLBasePostProcessor)
)

# evaluation settings
qwen_mmbench_evaluator = [
    dict(type='opencompass.DumpResults',
         save_path='work_dirs/qwenvl-base-7b-mmbench.xlsx')
]
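For reference, the checkpoint named by `pretrained_path` is an ordinary Hugging Face repo; outside of the OpenCompass wrapper it is typically loaded with `transformers`, which for Qwen-VL requires `trust_remote_code=True`. A minimal sketch, assuming network access to the Hub (not part of the config above):

```python
# Load the Qwen-VL base checkpoint directly with transformers (sketch).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen-VL', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('Qwen/Qwen-VL',
                                             trust_remote_code=True).eval()
```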
@ -1,44 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['image_id'])
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.COCOCaption',
|
||||
data_root='data/coco',
|
||||
data_prefix=dict(img_path='images'),
|
||||
ann_file='annotations/coco_karpathy_val.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
qwen_coco_caption_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_coco_caption_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatPromptConstructor, prompt='Describe the image.'),
|
||||
is_caption_task=True,
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_coco_caption_evaluator = [
|
||||
dict(
|
||||
type='mmpretrain.COCOCaption',
|
||||
ann_file='data/coco/annotations/coco_karpathy_val_gt.json',
|
||||
) # noqa
|
||||
]
|
@ -1,44 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.Flickr30kCaption',
|
||||
data_root='data/flickr30k',
|
||||
ann_file='annotations/dataset_flickr30k.json',
|
||||
data_prefix='images',
|
||||
split='val',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
qwen_flickr30k_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_flickr30k_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatPromptConstructor, prompt='Describe the image.'),
|
||||
is_caption_task=True,
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_flickr30k_evaluator = [
|
||||
dict(
|
||||
type='mmpretrain.COCOCaption',
|
||||
ann_file='data/flickr30k/annotations/flickr30k_val_gt.json',
|
||||
) # noqa
|
||||
]
|
@ -1,41 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatVQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.GQA',
|
||||
data_root='data/gqa',
|
||||
data_prefix='images',
|
||||
ann_file='annotations/testdev_balanced_questions.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
qwen_gqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_gqa_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_gqa_evaluator = [dict(type='mmpretrain.GQAAcc')]
|
@ -1,40 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLMMBenchPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'options', 'category', 'l2-category', 'context',
|
||||
'index', 'options_dict'
|
||||
])
|
||||
]
|
||||
|
||||
dataset = dict(type='opencompass.MMBenchDataset',
|
||||
data_file='data/mmbench/mmbench_test_20230712.tsv',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
qwen_mmbench_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLMMBenchPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_mmbench_evaluator = [
|
||||
dict(type='opencompass.DumpResults',
|
||||
save_path='work_dirs/qwenvl-chat-7b-mmbench.xlsx')
|
||||
]
|
@ -1,41 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLMMBenchPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'options', 'category', 'l2-category', 'context',
|
||||
'index', 'options_dict'
|
||||
])
|
||||
]
|
||||
|
||||
dataset = dict(type='opencompass.MMBenchDataset',
|
||||
data_file='/mnt/petrelfs/share_data/yuanyike/cnbench_v010_rolling.tsv',
|
||||
pipeline=val_pipeline,
|
||||
sys_prompt='请从以下选项中选择一个正确选项。')
|
||||
|
||||
qwen_mmbench_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLMMBenchPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_mmbench_evaluator = [
|
||||
dict(type='opencompass.DumpResults',
|
||||
save_path='work_dirs/qwenvl-chat-7b-cnbench-v010.xlsx')
|
||||
]
|
@ -1,42 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatVQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.OCRVQA',
|
||||
data_root='data/ocrvqa',
|
||||
ann_file='annotations/dataset.json',
|
||||
split='test',
|
||||
data_prefix='images',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
qwen_ocrvqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_ocrvqa_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_ocrvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,44 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatVQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.COCOVQA',
|
||||
data_root='data/okvqa',
|
||||
question_file='annotations/OpenEnded_mscoco_val2014_questions.json',
|
||||
ann_file='annotations/mscoco_val2014_annotations.json',
|
||||
pipeline=val_pipeline,
|
||||
data_prefix='images/val2014',
|
||||
)
|
||||
|
||||
qwen_okvqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_okvqa_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_okvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,43 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatScienceQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'gt_answer', 'choices', 'hint', 'lecture', 'solution'
|
||||
])
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.ScienceQA',
|
||||
data_root='./data/scienceqa',
|
||||
split='val',
|
||||
split_file='pid_splits.json',
|
||||
ann_file='problems.json',
|
||||
image_only=True,
|
||||
data_prefix=dict(img_path='val'),
|
||||
pipeline=val_pipeline)
|
||||
|
||||
qwen_scienceqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_scienceqa_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatScienceQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_scienceqa_evaluator = [dict(type='mmpretrain.ScienceQAMetric')]
|
@ -1,43 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatVQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.TextVQA',
|
||||
data_root='data/textvqa',
|
||||
ann_file='annotations/TextVQA_0.5.1_val.json',
|
||||
pipeline=val_pipeline,
|
||||
data_prefix='images/train_images',
|
||||
)
|
||||
|
||||
qwen_textvqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_textvqa_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_textvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,41 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatVQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.VizWiz',
|
||||
data_root='data/vizwiz/',
|
||||
data_prefix='Images/val',
|
||||
ann_file='Annotations/val.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
qwen_vizwiz_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_vizwiz_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_vizwiz_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,43 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatVQAPromptConstructor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.COCOVQA',
|
||||
data_root='data/coco',
|
||||
data_prefix='images/val2014',
|
||||
question_file='annotations/v2_OpenEnded_mscoco_val2014_questions.json',
|
||||
ann_file='annotations/v2_mscoco_val2014_annotations.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
qwen_vqav2_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_vqav2_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatVQAPromptConstructor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_vqav2_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,42 +0,0 @@
|
||||
from opencompass.multimodal.models.qwen import QwenVLChatVQAPromptConstructor, QwenVLChatVSRPostProcessor
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(448, 448),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.VSR',
|
||||
data_root='data/vsr/',
|
||||
data_prefix='images/',
|
||||
ann_file='annotations/test.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
qwen_vsr_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
qwen_vsr_model = dict(
|
||||
type='qwen-vl-chat',
|
||||
pretrained_path='Qwen/Qwen-VL-Chat', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=QwenVLChatVQAPromptConstructor),
|
||||
post_processor=dict(type=QwenVLChatVSRPostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
qwen_vsr_evaluator = [dict(type='mmpretrain.GQAAcc')]
|
@ -1,16 +0,0 @@
|
||||
from mmengine.config import read_base
|
||||
|
||||
with read_base():
|
||||
from .minigpt_4.minigpt_4_7b_mmbench import (minigpt_4_mmbench_dataloader,
|
||||
minigpt_4_mmbench_evaluator,
|
||||
minigpt_4_mmbench_load_from,
|
||||
minigpt_4_mmbench_model)
|
||||
|
||||
models = [minigpt_4_mmbench_model]
|
||||
datasets = [minigpt_4_mmbench_dataloader]
|
||||
evaluators = [minigpt_4_mmbench_evaluator]
|
||||
load_froms = [minigpt_4_mmbench_load_from]
|
||||
|
||||
num_gpus = 8
|
||||
num_procs = 8
|
||||
launcher = 'pytorch'
|
@ -1,45 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMBasePromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
|
||||
]
|
||||
|
||||
|
||||
dataset = dict(type='mmpretrain.COCOCaption',
|
||||
data_root='data/coco',
|
||||
data_prefix=dict(img_path='images'),
|
||||
ann_file='annotations/coco_karpathy_val.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
visualglm_coco_caption_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_coco_caption_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
is_caption_task=True,
|
||||
prompt_constructor=dict(type=VisualGLMBasePromptConstructor, system_prompt='Describe the image.'),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_coco_caption_evaluator = [
|
||||
dict(
|
||||
type='mmpretrain.COCOCaption',
|
||||
ann_file='data/coco/annotations/coco_karpathy_val_gt.json',
|
||||
) # noqa
|
||||
]
|
@ -1,46 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMBasePromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs', algorithm_keys=['image_id'])
|
||||
]
|
||||
|
||||
|
||||
dataset = dict(type='mmpretrain.Flickr30kCaption',
|
||||
data_root='data/flickr30k',
|
||||
ann_file='annotations/dataset_flickr30k.json',
|
||||
data_prefix='images',
|
||||
split='val',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
visualglm_flickr30k_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_flickr30k_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
is_caption_task=True,
|
||||
prompt_constructor=dict(type=VisualGLMBasePromptConstructor, system_prompt='Describe the image.'),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_flickr30k_evaluator = [
|
||||
dict(
|
||||
type='mmpretrain.COCOCaption',
|
||||
ann_file='data/flickr30k/annotations/flickr30k_val_gt.json',
|
||||
) # noqa
|
||||
]
|
@ -1,42 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMVQAPromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.GQA',
|
||||
data_root='data/gqa',
|
||||
data_prefix='images',
|
||||
ann_file='annotations/testdev_balanced_questions.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
visualglm_gqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_gqa_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=VisualGLMVQAPromptConstructor),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_gqa_evaluator = [dict(type='mmpretrain.GQAAcc')]
|
@ -1,42 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMMMBenchPromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'options', 'category', 'l2-category', 'context',
|
||||
'index', 'options_dict'
|
||||
])
|
||||
]
|
||||
|
||||
dataset = dict(type='opencompass.MMBenchDataset',
|
||||
data_file='data/mmbench/mmbench_test_20230712.tsv',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
visualglm_mmbench_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_mmbench_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=VisualGLMMMBenchPromptConstructor),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor),
|
||||
gen_kwargs=dict(max_new_tokens=50,num_beams=5,do_sample=False,repetition_penalty=1.0,length_penalty=-1.0)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_mmbench_evaluator = [
|
||||
dict(type='opencompass.DumpResults',
|
||||
save_path='work_dirs/visualglm-6b-mmbench.xlsx')
|
||||
]
|
@ -1,43 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMVQAPromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.OCRVQA',
|
||||
data_root='data/ocrvqa',
|
||||
ann_file='annotations/dataset.json',
|
||||
split='test',
|
||||
data_prefix='images',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
visualglm_ocrvqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_ocrvqa_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=VisualGLMVQAPromptConstructor),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_ocrvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,45 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMVQAPromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.COCOVQA',
|
||||
data_root='data/okvqa',
|
||||
question_file='annotations/OpenEnded_mscoco_val2014_questions.json',
|
||||
ann_file='annotations/mscoco_val2014_annotations.json',
|
||||
pipeline=val_pipeline,
|
||||
data_prefix='images/val2014',
|
||||
)
|
||||
|
||||
visualglm_okvqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_okvqa_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=VisualGLMVQAPromptConstructor),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_okvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,44 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMScienceQAPromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'gt_answer', 'choices', 'hint', 'lecture', 'solution', 'has_image'
|
||||
])
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.ScienceQA',
|
||||
data_root='./data/scienceqa',
|
||||
split='val',
|
||||
split_file='pid_splits.json',
|
||||
ann_file='problems.json',
|
||||
image_only=True,
|
||||
data_prefix=dict(img_path='val'),
|
||||
pipeline=val_pipeline)
|
||||
|
||||
visualglm_scienceqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_scienceqa_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=VisualGLMScienceQAPromptConstructor),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_scienceqa_evaluator = [dict(type='mmpretrain.ScienceQAMetric')]
|
@ -1,44 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMVQAPromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.TextVQA',
|
||||
data_root='data/textvqa',
|
||||
ann_file='annotations/TextVQA_0.5.1_val.json',
|
||||
pipeline=val_pipeline,
|
||||
data_prefix='images/train_images',
|
||||
)
|
||||
|
||||
visualglm_textvqa_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_textvqa_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=VisualGLMVQAPromptConstructor),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_textvqa_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,42 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMVQAPromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(type='mmpretrain.VizWiz',
|
||||
data_root='data/vizwiz/',
|
||||
data_prefix='Images/val',
|
||||
ann_file='Annotations/val.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
visualglm_vizwiz_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_vizwiz_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=VisualGLMVQAPromptConstructor),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_vizwiz_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,44 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMBasePostProcessor, VisualGLMVQAPromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
dataset = dict(
|
||||
type='mmpretrain.COCOVQA',
|
||||
data_root='data/coco',
|
||||
data_prefix='images/val2014',
|
||||
question_file='annotations/v2_OpenEnded_mscoco_val2014_questions.json',
|
||||
ann_file='annotations/v2_mscoco_val2014_annotations.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
visualglm_vqav2_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_vqav2_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=VisualGLMVQAPromptConstructor),
|
||||
post_processor=dict(type=VisualGLMBasePostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_vqav2_evaluator = [dict(type='mmpretrain.VQAAcc')]
|
@ -1,43 +0,0 @@
|
||||
from opencompass.multimodal.models.visualglm import (VisualGLMVSRPostProcessor, VisualGLMVQAPromptConstructor)
|
||||
|
||||
# dataloader settings
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.LoadImageFromFile'),
|
||||
dict(type='mmpretrain.ToPIL', to_rgb=True),
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(
|
||||
type='mmpretrain.PackInputs',
|
||||
algorithm_keys=['question', 'gt_answer', 'gt_answer_weight'],
|
||||
meta_keys=['question_id', 'image_id'],
|
||||
)
|
||||
]
|
||||
|
||||
|
||||
dataset = dict(type='mmpretrain.VSR',
|
||||
data_root='data/vsr/',
|
||||
data_prefix='images/',
|
||||
ann_file='annotations/test.json',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
visualglm_vsr_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler', shuffle=False))
|
||||
|
||||
# model settings
|
||||
visualglm_vsr_model = dict(
|
||||
type='visualglm',
|
||||
pretrained_path='/path/to/visualglm', # or Huggingface repo id
|
||||
prompt_constructor=dict(type=VisualGLMVQAPromptConstructor),
|
||||
post_processor=dict(type=VisualGLMVSRPostProcessor)
|
||||
)
|
||||
|
||||
# evaluation settings
|
||||
visualglm_vsr_evaluator = [dict(type='mmpretrain.GQAAcc')]
|
@ -1,132 +0,0 @@
# Evaluation pipeline on MMBench

## Intro to each data sample in MMBench

MMBench is split into **dev** and **test** splits, and each data sample in each split contains the following fields:

```
img: the raw data of an image
question: the question
options: the concatenated options
category: the leaf category
l2-category: the l2-level category
options_dict: the dict that contains all options
index: the unique identifier of the current question
context (optional): the context of a question
answer: the target answer to the current question (only exists in the dev split; it is kept confidential for the test split on our evaluation server)
```

## Load MMBench

We provide a code snippet as an example of loading MMBench:

```python
import base64
import io
import random

import pandas as pd
from PIL import Image
from torch.utils.data import Dataset

def decode_base64_to_image(base64_string):
    image_data = base64.b64decode(base64_string)
    image = Image.open(io.BytesIO(image_data))
    return image

class MMBenchDataset(Dataset):
    def __init__(self,
                 data_file,
                 sys_prompt='There are several options:'):
        self.df = pd.read_csv(data_file, sep='\t')
        self.sys_prompt = sys_prompt

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        index = self.df.iloc[idx]['index']
        image = self.df.iloc[idx]['image']
        image = decode_base64_to_image(image)
        question = self.df.iloc[idx]['question']
        answer = self.df.iloc[idx]['answer'] if 'answer' in self.df.iloc[0].keys() else None
        category = self.df.iloc[idx]['category']
        l2_category = self.df.iloc[idx]['l2-category']

        option_candidate = ['A', 'B', 'C', 'D', 'E']
        options = {
            cand: self.load_from_df(idx, cand)
            for cand in option_candidate
            if self.load_from_df(idx, cand) is not None
        }
        options_prompt = f'{self.sys_prompt}\n'
        for key, item in options.items():
            options_prompt += f'{key}. {item}\n'

        hint = self.load_from_df(idx, 'hint')
        data = {
            'img': image,
            'question': question,
            'answer': answer,
            'options': options_prompt,
            'category': category,
            'l2-category': l2_category,
            'options_dict': options,
            'index': index,
            'context': hint,
        }
        return data

    def load_from_df(self, idx, key):
        if key in self.df.iloc[idx] and not pd.isna(self.df.iloc[idx][key]):
            return self.df.iloc[idx][key]
        else:
            return None
```
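
For reference, a minimal usage sketch of the loader above (the TSV path follows the config files in this repository; adjust it to whichever split you downloaded):

```python
# Build the dataset and inspect one sample; requires the class defined above
# and a local MMBench TSV file (the path below is illustrative).
dataset = MMBenchDataset('data/mmbench/mmbench_test_20230712.tsv')
print(len(dataset))

sample = dataset[0]
print(sample['question'])
print(sample['options'])  # options already concatenated behind the system prompt
print(sample['index'], sample['category'])
```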

## How to construct the inference prompt

```python
if data_sample['context'] is not None:
    prompt = data_sample['context'] + ' ' + data_sample['question'] + ' ' + data_sample['options']
else:
    prompt = data_sample['question'] + ' ' + data_sample['options']
```

For example:
Question: Which category does this image belong to?
A. Oil Painting
B. Sketch
C. Digital art
D. Photo

<div align=center>
<img src="https://github-production-user-asset-6210df.s3.amazonaws.com/34324155/255581681-1364ef43-bd27-4eb5-b9e5-241327b1f920.png" width="50%"/>
</div>

```python
prompt = """
###Human: Question: Which category does this image belong to?
There are several options: A. Oil Painting, B. Sketch, C. Digital art, D. Photo
###Assistant:
"""
```

You can make custom modifications to the prompt.
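
For instance, a hypothetical variant that folds the optional context (hint) into the same template:

```python
# Hypothetical customization: prepend the context when the sample provides one.
context = data_sample['context']
prefix = f'{context} ' if context is not None else ''
prompt = (f"###Human: {prefix}Question: {data_sample['question']}\n"
          f"{data_sample['options']}\n###Assistant:")
```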

## How to save results

You should dump your model's predictions into an Excel (.xlsx) file, and this file should contain the following fields:

```
question: the question
A: The first choice
B: The second choice
C: The third choice
D: The fourth choice
prediction: The prediction of your model for the current question
category: the leaf category
l2_category: the l2-level category
index: the question index
```

If there are any questions with fewer than four options, simply leave those fields blank.
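
As a rough sketch (not part of the original tooling; assumes `pandas` plus an xlsx engine such as `openpyxl`), the predictions could be collected and written like this:

```python
import pandas as pd

# One record per question, mirroring the required fields above; the values here
# are placeholders based on the example question, not real predictions.
records = [{
    'index': 0,
    'question': 'Which category does this image belong to?',
    'A': 'Oil Painting',
    'B': 'Sketch',
    'C': 'Digital art',
    'D': 'Photo',
    'prediction': 'D',
    'category': '<leaf category>',
    'l2_category': '<l2-level category>',
}]
pd.DataFrame(records).to_excel('work_dirs/my-model-mmbench.xlsx', index=False)
```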
@ -1,108 +0,0 @@
# Multi-modality Evaluation

We support several multi-modality datasets, such as [MMBench](https://opencompass.org.cn/MMBench) and [SEED-Bench](https://github.com/AILab-CVC/SEED-Bench), to evaluate multi-modality models. Before starting, please make sure you have downloaded the evaluation datasets following the official instructions.

## Start Evaluation

Before evaluation, you can modify `tasks.py` or create a new file like `tasks.py` to evaluate your own model.

Generally, we use the commands below to run the evaluation.

### Slurm

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval --slurm -p $PARTITION
```

### PyTorch

```sh
cd $root
python run.py configs/multimodal/tasks.py --mm-eval
```

## Configuration File

We adopt the new config format of [MMEngine](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta).

### Task File

Here is the example config of `configs/multimodal/tasks.py`.

```python
from mmengine.config import read_base

with read_base():
    from .minigpt_4.minigpt_4_7b_mmbench import (minigpt_4_mmbench_dataloader,
                                                 minigpt_4_mmbench_evaluator,
                                                 minigpt_4_mmbench_load_from,
                                                 minigpt_4_mmbench_model)

models = [minigpt_4_mmbench_model]
datasets = [minigpt_4_mmbench_dataloader]
evaluators = [minigpt_4_mmbench_evaluator]
load_froms = [minigpt_4_mmbench_load_from]

# set the platform and resources
num_gpus = 8
num_procs = 8
launcher = 'pytorch'
```
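
The same pattern works for the other model configs under `configs/multimodal/`. As a hedged sketch (the exact config module name is assumed here; the VisualGLM config defines no `load_from`, so `load_froms` is omitted):

```python
from mmengine.config import read_base

with read_base():
    # The module path below is an assumption; adjust it to the actual
    # VisualGLM MMBench config file under configs/multimodal/visualglm/.
    from .visualglm.visualglm_6b_mmbench import (visualglm_mmbench_dataloader,
                                                 visualglm_mmbench_evaluator,
                                                 visualglm_mmbench_model)

models = [visualglm_mmbench_model]
datasets = [visualglm_mmbench_dataloader]
evaluators = [visualglm_mmbench_evaluator]

num_gpus = 8
num_procs = 8
launcher = 'pytorch'
```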

### Details of Task

Here is an example of MiniGPT-4 with MMBench, and we provide some comments to help users understand the meaning of the keys in the config.

```python
from opencompass.multimodal.models.minigpt_4 import (
    MiniGPT4MMBenchPromptConstructor, MiniGPT4MMBenchPostProcessor)

# dataloader settings
# Here we use Transforms in MMPreTrain to process images
val_pipeline = [
    dict(type='mmpretrain.torchvision/Resize',
         size=(224, 224),
         interpolation=3),
    dict(type='mmpretrain.torchvision/ToTensor'),
    dict(type='mmpretrain.torchvision/Normalize',
         mean=(0.48145466, 0.4578275, 0.40821073),
         std=(0.26862954, 0.26130258, 0.27577711)),
    dict(type='mmpretrain.PackInputs',
         algorithm_keys=[
             'question', 'category', 'l2-category', 'context', 'index',
             'options_dict', 'options', 'split'
         ])
]

# The defined MMBench dataset to load evaluation data
dataset = dict(type='opencompass.MMBenchDataset',
               data_file='data/mmbench/mmbench_test_20230712.tsv',
               pipeline=val_pipeline)

minigpt_4_mmbench_dataloader = dict(batch_size=1,
                                    num_workers=4,
                                    dataset=dataset,
                                    collate_fn=dict(type='pseudo_collate'),
                                    sampler=dict(type='DefaultSampler',
                                                 shuffle=False))

# model settings
minigpt_4_mmbench_model = dict(
    type='minigpt-4',  # the tested multimodal algorithm; the type is registered in `opencompass/multimodal/models/minigpt_4.py` via `@MM_MODELS.register_module('minigpt-4')`
    low_resource=False,
    llama_model='/path/to/vicuna-7b/',  # the model path of the LLM
    prompt_constructor=dict(type=MiniGPT4MMBenchPromptConstructor,  # the PromptConstructor to construct the prompt
                            image_prompt='###Human: <Img><ImageHere></Img>',
                            reply_prompt='###Assistant:'),
    post_processor=dict(type=MiniGPT4MMBenchPostProcessor))  # the PostProcessor processes the output into the required format

# evaluation settings
minigpt_4_mmbench_evaluator = [
    dict(type='opencompass.DumpResults',  # the evaluator dumps results to save_path; code can be found in `opencompass/metrics/dump_results.py`
         save_path='work_dirs/minigpt-4-7b-mmbench.xlsx')
]

minigpt_4_mmbench_load_from = '/path/to/prerained_minigpt4_7b.pth'  # the path of the linear layer between the Q-Former and the LLM in MiniGPT-4
```
@ -64,7 +64,6 @@ We always welcome *PRs* and *Issues* for the betterment of OpenCompass.
   advanced_guides/evaluation_lightllm.md
   advanced_guides/code_eval.md
   advanced_guides/code_eval_service.md
   advanced_guides/multimodal_eval.md
   advanced_guides/prompt_attack.md
   advanced_guides/longeval.md
   advanced_guides/subjective_evaluation.md

@ -2,7 +2,7 @@

## Evaluation Targets

The primary evaluation targets of this algorithm library are large language models and multimodal large models. We introduce specific model types for evaluation using the large language model as an example.
The primary evaluation targets of this algorithm library are large language models. We introduce specific model types for evaluation using the large language model as an example.

- base Model: Typically obtained through training on massive textual data in a self-supervised manner (e.g., OpenAI's GPT-3, Meta's LLaMA). These models usually have powerful text continuation capabilities.

@ -1,107 +0,0 @@
|
||||
# 多模态评测
|
||||
|
||||
我们支持了多个多模态数据集,例如 [MMBench](https://opencompass.org.cn/MMBench),[SEED-Bench](https://github.com/AILab-CVC/SEED-Bench),来对多模态模型进行评测。在开始评测之前,请确保您已经按照官方教程下载了评测数据集。
|
||||
|
||||
## 开始评测
|
||||
|
||||
在评测前,您需要先修改 `tasks.py` 或者创建一个类似的新文件 `tasks_your_model.py` 来对您的模型进行评测。
|
||||
|
||||
一般来说我们使用下列命令启动评测。
|
||||
|
||||
### Slurm
|
||||
|
||||
```sh
|
||||
cd $root
|
||||
python run.py configs/multimodal/tasks.py --mm-eval --slurm -p $PARTITION
|
||||
```
|
||||
|
||||
### PyTorch
|
||||
|
||||
```sh
|
||||
cd $root
|
||||
python run.py configs/multimodal/tasks.py --mm-eval
|
||||
```
|
||||
|
||||
## 配置文件
|
||||
|
||||
We adapt the new config format of [MMEngine](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/config.html#a-pure-python-style-configuration-file-beta).
|
||||
|
||||
### 任务文件
|
||||
|
||||
这是 `configs/multimodal/tasks.py` 的示例。
|
||||
|
||||
```python
|
||||
from mmengine.config import read_base
|
||||
|
||||
with read_base():
|
||||
from .minigpt_4.minigpt_4_7b_mmbench import (minigpt_4_mmbench_dataloader,
|
||||
minigpt_4_mmbench_evaluator,
|
||||
minigpt_4_mmbench_load_from,
|
||||
minigpt_4_mmbench_model)
|
||||
|
||||
models = [minigpt_4_mmbench_model]
|
||||
datasets = [minigpt_4_mmbench_dataloader]
|
||||
evaluators = [minigpt_4_mmbench_evaluator]
|
||||
load_froms = [minigpt_4_mmbench_load_from]
|
||||
|
||||
# set the platform and resources
|
||||
num_gpus = 8
|
||||
num_procs = 8
|
||||
launcher = 'pytorch'
|
||||
```
|
||||
|
||||
### 细节配置
|
||||
|
||||
这是使用 MMBench 对 MiniGPT-4 进行评测的示例,我们提供了部分注释方便用户理解配置文件的含义。
|
||||
|
||||
```python
|
||||
from opencompass.multimodal.models.minigpt_4 import (
|
||||
MiniGPT4MMBenchPromptConstructor, MiniGPT4MMBenchPostProcessor)
|
||||
|
||||
# dataloader settings
|
||||
# 我们使用 MMPreTrain 中的 transforms 对图像数据进行处理
|
||||
val_pipeline = [
|
||||
dict(type='mmpretrain.torchvision/Resize',
|
||||
size=(224, 224),
|
||||
interpolation=3),
|
||||
dict(type='mmpretrain.torchvision/ToTensor'),
|
||||
dict(type='mmpretrain.torchvision/Normalize',
|
||||
mean=(0.48145466, 0.4578275, 0.40821073),
|
||||
std=(0.26862954, 0.26130258, 0.27577711)),
|
||||
dict(type='mmpretrain.PackInputs',
|
||||
algorithm_keys=[
|
||||
'question', 'category', 'l2-category', 'context', 'index',
|
||||
'options_dict', 'options', 'split'
|
||||
])
|
||||
]
|
||||
|
||||
# 定义 MMBench dataset 来读取对应的数据
|
||||
dataset = dict(type='opencompass.MMBenchDataset',
|
||||
data_file='data/mmbench/mmbench_test_20230712.tsv',
|
||||
pipeline=val_pipeline)
|
||||
|
||||
minigpt_4_mmbench_dataloader = dict(batch_size=1,
|
||||
num_workers=4,
|
||||
dataset=dataset,
|
||||
collate_fn=dict(type='pseudo_collate'),
|
||||
sampler=dict(type='DefaultSampler',
|
||||
shuffle=False))
|
||||
|
||||
# model settings
|
||||
minigpt_4_mmbench_model = dict(
|
||||
type='minigpt-4', # 被测试的多模模型,type 在 `opencompass/multimodal/models/minigpt_4.py` 的 `@MM_MODELS.register_module('minigpt-4')` 中有定义
|
||||
low_resource=False,
|
||||
llama_model='/path/to/vicuna-7b/', # LLM 的模型路径
|
||||
prompt_constructor=dict(type=MiniGPT4MMBenchPromptConstructor, # 使用 PromptConstructor 来构建 LLM 的输入 prompt
|
||||
image_prompt='###Human: <Img><ImageHere></Img>',
|
||||
reply_prompt='###Assistant:'),
|
||||
post_processor=dict(type=MiniGPT4MMBenchPostProcessor)) # 使用 PostProcessor 来处理模型输出,使其符合输出格式的要求
|
||||
|
||||
# evaluation settings
|
||||
minigpt_4_mmbench_evaluator = [
|
||||
dict(type='opencompass.DumpResults', # evaluator 将结果保存在 save_path,代码在 `opencompass/metrics/dump_results.py`
|
||||
save_path='work_dirs/minigpt-4-7b-mmbench.xlsx')
|
||||
]
|
||||
|
||||
minigpt_4_mmbench_load_from = '/path/to/prerained_minigpt4_7b.pth' # 线性层的模型路径(MiniGPT-4 中 Q-Former 和 LLM 之间的线性投影层)
|
||||
```
|
@ -64,7 +64,6 @@ OpenCompass 上手路线
   advanced_guides/evaluation_lightllm.md
   advanced_guides/code_eval.md
   advanced_guides/code_eval_service.md
   advanced_guides/multimodal_eval.md
   advanced_guides/prompt_attack.md
   advanced_guides/longeval.md
   advanced_guides/subjective_evaluation.md

@ -6,13 +6,12 @@ from datetime import datetime

from mmengine.config import Config, DictAction

from opencompass.partitioners import MultimodalNaivePartitioner
from opencompass.registry import PARTITIONERS, RUNNERS, build_from_cfg
from opencompass.runners import SlurmRunner
from opencompass.summarizers import DefaultSummarizer
from opencompass.utils import LarkReporter, get_logger
from opencompass.utils.run import (exec_mm_infer_runner, fill_eval_cfg,
                                   fill_infer_cfg, get_config_from_arg)
from opencompass.utils.run import (fill_eval_cfg, fill_infer_cfg,
                                   get_config_from_arg)


def parse_args():
@ -34,11 +33,6 @@ def parse_args():
                        help='Whether to force tasks to run on dlc. If '
                        'True, `--aliyun-cfg` must be set. Defaults'
                        ' to False')
    # multi-modal support
    parser.add_argument('--mm-eval',
                        help='Whether or not enable multimodal evaluation',
                        action='store_true',
                        default=False)
    # Add shortcut parameters (models, datasets and summarizer)
    parser.add_argument('--models', nargs='+', help='', default=None)
    parser.add_argument('--datasets', nargs='+', help='', default=None)
@ -278,13 +272,6 @@ def main():
            'also specified --slurm or --dlc. '
            'The "infer" configuration will be overridden by '
            'your runtime arguments.')
    # Check whether run multimodal evaluation
    if args.mm_eval:
        partitioner = MultimodalNaivePartitioner(
            osp.join(cfg['work_dir'], 'predictions/'))
        tasks = partitioner(cfg)
        exec_mm_infer_runner(tasks, args, cfg)
        return

    if args.dlc or args.slurm or cfg.get('infer', None) is None:
        fill_infer_cfg(cfg, args)

@ -1,6 +0,0 @@
from .mmbench import MMBenchDataset  # noqa: F401, F403
from .mme import MMEDataset  # noqa: F401, F403
from .seedbench import SEEDBenchDataset  # noqa: F401, F403

__all__ = ['MMBenchDataset'
           'SEEDBenchDataset', 'MMEDataset']
@ -1,84 +0,0 @@
import base64
import io
from typing import List, Optional

import pandas as pd
from mmengine.dataset import Compose
from PIL import Image
from torch.utils.data import Dataset

from opencompass.registry import DATASETS


def decode_base64_to_image(base64_string) -> Image:
    """Convert raw data into Pillow image."""
    image_data = base64.b64decode(base64_string)
    image = Image.open(io.BytesIO(image_data))
    return image


@DATASETS.register_module()
class MMBenchDataset(Dataset):
    """Dataset to load MMBench dataset.

    Args:
        data_file (str): The path of the dataset.
        pipeline (dict): The data augmentation.
        sys_prompt (str): The system prompt added to the head
            of these options. Defaults to
            There are several options:
    """

    def __init__(self,
                 data_file: str,
                 pipeline: List[dict],
                 sys_prompt: str = 'There are several options:') -> None:
        self.df = pd.read_csv(data_file, sep='\t')
        self.pipeline = Compose(pipeline)
        self.sys_prompt = sys_prompt

    def __len__(self) -> None:
        return len(self.df)

    def __getitem__(self, idx: int) -> dict:
        # Mandatory Fields Begin
        index = self.df.iloc[idx]['index']
        image = self.df.iloc[idx]['image']
        image = decode_base64_to_image(image)
        question = self.df.iloc[idx]['question']

        option_candidate = ['A', 'B', 'C', 'D', 'E']
        options = {
            cand: self.load_from_df(idx, cand)
            for cand in option_candidate
            if self.load_from_df(idx, cand) is not None
        }
        options_prompt = f'{self.sys_prompt}\n'
        for key, item in options.items():
            options_prompt += f'{key}. {item}\n'
        # Mandatory Fields End

        # Optional Fields Begin
        hint = self.load_from_df(idx, 'hint')
        category = self.load_from_df(idx, 'category')
        l2_catetory = self.load_from_df(idx, 'l2-category')
        # Optional Fields End

        data = {
            'img': image,
            'question': question,
            'options': options_prompt,
            'category': category,
            'l2-category': l2_catetory,
            'options_dict': options,
            'index': index,
            'context': hint,
        }
        data = self.pipeline(data)
        return data

    def load_from_df(self, idx: int, key: str) -> Optional[str]:
        if key in self.df.iloc[idx] and not pd.isna(self.df.iloc[idx][key]):
            return self.df.iloc[idx][key]
        else:
            return None
@ -1,74 +0,0 @@
|
||||
import os
|
||||
from typing import List
|
||||
|
||||
from mmengine.dataset import Compose
|
||||
from torch.utils.data import Dataset
|
||||
|
||||
from opencompass.registry import DATASETS
|
||||
|
||||
|
||||
@DATASETS.register_module()
|
||||
class MMEDataset(Dataset):
|
||||
"""Dataset to load MME dataset.
|
||||
|
||||
Args:
|
||||
data_dir (str): The path of the dataset.
|
||||
pipeline (List[dict]): The data augmentation.
|
||||
"""
|
||||
tasks = [
|
||||
'artwork', 'celebrity', 'code_reasoning', 'color',
|
||||
'commonsense_reasoning', 'count', 'existence', 'landmark',
|
||||
'numerical_calculation', 'OCR', 'position', 'posters', 'scene',
|
||||
'text_translation'
|
||||
]
|
||||
sub_dir_name = ('images', 'questions_answers_YN')
|
||||
|
||||
def __init__(self, data_dir: str, pipeline: List[dict]) -> None:
|
||||
self.pipeline = Compose(pipeline)
|
||||
self.load_data(data_dir)
|
||||
|
||||
def load_data(self, data_dir: str):
|
||||
self.data_list = []
|
||||
image_dir, question_dir = self.sub_dir_name
|
||||
for task in self.tasks:
|
||||
if os.path.exists(os.path.join(data_dir, task, question_dir)):
|
||||
q_list = os.listdir(os.path.join(data_dir, task, question_dir))
|
||||
i_list = os.listdir(os.path.join(data_dir, task, image_dir))
|
||||
q_prefix = os.path.join(data_dir, task, question_dir)
|
||||
i_prefix = os.path.join(data_dir, task, image_dir)
|
||||
else:
|
||||
fn_list = os.listdir(os.path.join(data_dir, task))
|
||||
q_list = [fn for fn in fn_list if '.txt' in fn]
|
||||
i_list = [fn for fn in fn_list if fn not in q_list]
|
||||
q_prefix = i_prefix = os.path.join(data_dir, task)
|
||||
|
||||
q_list.sort()
|
||||
i_list.sort()
|
||||
assert len(q_list) == len(i_list)
|
||||
for q_fn, i_fn in zip(q_list, i_list):
|
||||
assert q_fn.split('.')[0] == i_fn.split('.')[0]
|
||||
q_path = os.path.join(q_prefix, q_fn)
|
||||
image_path = os.path.join(i_prefix, i_fn)
|
||||
with open(q_path, 'r') as f:
|
||||
q1, a1 = f.readline().strip().split('\t')
|
||||
q2, a2 = f.readline().strip().split('\t')
|
||||
self.data_list.append({
|
||||
'img_path': image_path,
|
||||
'question': q1,
|
||||
'answer': a1,
|
||||
'task': task
|
||||
})
|
||||
self.data_list.append({
|
||||
'img_path': image_path,
|
||||
'question': q2,
|
||||
'answer': a2,
|
||||
'task': task
|
||||
})
|
||||
|
||||
def __len__(self) -> None:
|
||||
return len(self.data_list)
|
||||
|
||||
def __getitem__(self, idx: int) -> dict:
|
||||
data_sample = self.data_list[idx]
|
||||
data_sample = self.pipeline(data_sample)
|
||||
return data_sample
|
@ -1,174 +0,0 @@
|
||||
import importlib
|
||||
import json
|
||||
import os.path as osp
|
||||
from typing import List
|
||||
|
||||
import numpy as np
|
||||
import torch
|
||||
from decord import VideoReader, cpu
|
||||
from mmengine.dataset import Compose
|
||||
from PIL import Image
|
||||
from torch.utils.data import Dataset
|
||||
|
||||
from opencompass.registry import DATASETS
|
||||
|
||||
|
||||
@DATASETS.register_module()
|
||||
class SEEDBenchDataset(Dataset):
|
||||
"""Dataset to load SEED-Bench dataset.
|
||||
|
||||
Args:
|
||||
ann_file (str): The path of the annotation file.
|
||||
cc3m_path (str): The data path of the image dimension(1-9).
|
||||
sthv2_path (str): The data path of the dimension 10.
|
||||
epic_kitchens_path (str): The data path of the dimension 11.
|
||||
breakfast_path (str): The data path of the dimension 12.
|
||||
image_pipeline (List[dict]): The data transforms for image.
|
||||
video_pipeline (List[dict]): The data transforms for video.
|
||||
only_image (bool): Whether run SEED-Bench only with image data.
|
||||
Defaults to True.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
ann_file: str,
|
||||
cc3m_path: str,
|
||||
sthv2_path: str,
|
||||
epic_kitchens_path: str,
|
||||
breakfast_path: str,
|
||||
image_pipeline: List[dict],
|
||||
video_pipeline: List[dict],
|
||||
only_image: bool = True,
|
||||
) -> None:
|
||||
ann_file = json.load(open(ann_file, 'rb'))
|
||||
if 'questions' in ann_file.keys():
|
||||
self.ann_file = ann_file['questions']
|
||||
self.cc3m_path = cc3m_path
|
||||
self.sthv2_path = sthv2_path
|
||||
self.epic_kitchens_path = epic_kitchens_path
|
||||
self.breakfast_path = breakfast_path
|
||||
self.image_pipeline = Compose(image_pipeline)
|
||||
if only_image:
|
||||
image_ann_file = [
|
||||
ann for ann in self.ann_file if ann['data_type'] == 'image'
|
||||
]
|
||||
self.ann_file = image_ann_file
|
||||
if not only_image:
|
||||
raise NotImplementedError
|
||||
self.video_pipeline = Compose(video_pipeline)
|
||||
|
||||
def __len__(self) -> None:
|
||||
return len(self.ann_file)
|
||||
|
||||
def __getitem__(self, idx: str) -> dict:
|
||||
item = self.ann_file[idx]
|
||||
data = {
|
||||
'question':
|
||||
item['question'],
|
||||
'answer':
|
||||
item['answer'],
|
||||
'choices': [
|
||||
item['choice_a'], item['choice_b'], item['choice_c'],
|
||||
item['choice_d']
|
||||
],
|
||||
'data_type':
|
||||
item['data_type'],
|
||||
'question_id':
|
||||
item['question_id'],
|
||||
'question_type_id':
|
||||
item['question_type_id'],
|
||||
'index':
|
||||
idx,
|
||||
}
|
||||
|
||||
if item['data_type'] == 'image':
|
||||
data_path = osp.join(self.cc3m_path, item['data_id'])
|
||||
raw_image = Image.open(open(data_path, 'rb')).convert('RGB')
|
||||
data['data_path'] = data_path
|
||||
data['img'] = raw_image
|
||||
data = self.image_pipeline(data)
|
||||
elif item['data_type'] == 'video':
|
||||
if item['question_type_id'] == 10:
|
||||
data_path = osp.join(self.sthv2_path, item['data_id'])
|
||||
data['data_path'] = data_path
|
||||
elif item['question_type_id'] == 11:
|
||||
data_path = osp.join(self.epic_kitchens_path, item['data_id'])
|
||||
data['data_path'] = data_path
|
||||
data['segment'] = item['segment']
|
||||
elif item['question_type_id'] == 12:
|
||||
data_path = osp.join(self.breakfast_path, item['data_id'])
|
||||
data['data_path'] = data_path
|
||||
data['segment'] = item['segment']
|
||||
else:
|
||||
raise ValueError('The question type id is not valid.')
|
||||
|
||||
# preprocessing videos in evaluation dimension 10-12
|
||||
use_pyav = False
|
||||
if 'segment' in data.keys():
|
||||
segment = data['segment']
|
||||
if isinstance(segment[0], int):
|
||||
# using pyav for decoding videos in evaluation dimension 12
|
||||
use_pyav = True
|
||||
start, end = segment[0], segment[1]
|
||||
else:
|
||||
start = 0.0
|
||||
end = 0.0
|
||||
|
||||
if use_pyav:
|
||||
# using pyav for videos in evaluation dimension 12
|
||||
av = importlib.importmodule('av')
|
||||
reader = av.open(data_path)
|
||||
frames = [
|
||||
torch.from_numpy(f.to_rgb().to_ndarray())
|
||||
for f in reader.decode(video=0)
|
||||
]
|
||||
video_len = len(frames)
|
||||
start_frame, end_frame = start, end
|
||||
end_frame = min(end_frame, video_len)
|
||||
offset = self.get_index(end_frame - start_frame, 8)
|
||||
frame_indices = offset + start_frame
|
||||
buffer = torch.stack([frames[idx] for idx in frame_indices])
|
||||
buffer = buffer.numpy()
|
||||
else:
|
||||
# using decord for videos in evaluating dimension 10-11
|
||||
import io
|
||||
|
||||
import mmengine.fileio as fileio
|
||||
file_obj = io.BytesIO(fileio.get(data_path))
|
||||
vr = VideoReader(file_obj, num_threads=1, ctx=cpu(0))
|
||||
video_len = len(vr)
|
||||
fps = vr.get_avg_fps()
|
||||
if 'segment' in data.keys():
|
||||
# obtain start and end frame for the video segment
|
||||
# in evaluation dimension 11
|
||||
start_frame = int(min(max(start * fps, 0), video_len - 1))
|
||||
end_frame = int(min(max(end * fps, 0), video_len - 1))
|
||||
tot_frames = int(end_frame - start_frame)
|
||||
offset = self.get_index(tot_frames, 8)
|
||||
frame_indices = offset + start_frame
|
||||
else:
|
||||
# sample frames of the video in evaluation dimension 10
|
||||
frame_indices = self.get_index(video_len - 1, 8)
|
||||
vr.seek(0)
|
||||
buffer = vr.get_batch(frame_indices)
|
||||
buffer = buffer.asnumpy()
|
||||
data['imgs'] = buffer
|
||||
data = self.video_pipeline(data)
|
||||
|
||||
else:
|
||||
raise ValueError('The data type is not valid.')
|
||||
|
||||
return data
|
||||
|
||||
def get_index(self, num_frames, num_segments):
|
||||
if num_segments > num_frames:
|
||||
offsets = np.array([idx for idx in range(num_frames)])
|
||||
else:
|
||||
# uniform sampling
|
||||
seg_size = float(num_frames - 1) / num_segments
|
||||
start = int(seg_size / 2)
|
||||
offsets = np.array([
|
||||
start + int(np.round(seg_size * idx))
|
||||
for idx in range(num_segments)
|
||||
])
|
||||
return offsets
|
@ -1,24 +0,0 @@
import os.path as osp

from opencompass.utils import satisfy_requirement

if satisfy_requirement('salesforce-lavis'):
    from .instructblip import *  # noqa: F401, F403

if osp.exists('opencompass/multimodal/models/minigpt_4/MiniGPT-4'):
    from .minigpt_4 import *  # noqa: F401, F403

if osp.exists(
        'opencompass/multimodal/models/llama_adapter_v2_multimodal/LLaMA-Adapter'  # noqa
):
    from .llama_adapter_v2_multimodal import *  # noqa: F401, F403

from .llava import *  # noqa: F401, F403

if osp.exists('opencompass/multimodal/models/mplug_owl/mPLUG-Owl'):
    from .mplug_owl import *  # noqa: F401, F403

from .openflamingo import *  # noqa: F401, F403
from .otter import *  # noqa: F401, F403
from .qwen import *  # noqa: F401, F403
from .visualglm import *  # noqa: F401, F403
@ -1,25 +0,0 @@
from .blip2_vicuna_instruct import InstructBlipInferencer
from .post_processor import (InstructBlipCOCOCaptionPostProcessor,
                             InstructBlipMMBenchPostProcessor,
                             InstructBlipScienceQAPostProcessor,
                             InstructBlipVQAPostProcessor,
                             InstructBlipVSRPostProcessor)
from .prompt_constructor import (InstructBlipCOCOCaotionPromptConstructor,
                                 InstructBlipMMBenchPromptConstructor,
                                 InstructBlipScienceQAPromptConstructor,
                                 InstructBlipVQAPromptConstructor,
                                 InstructBlipVSRPromptConstructor)

__all__ = [
    'InstructBlipInferencer',
    'InstructBlipMMBenchPromptConstructor',
    'InstructBlipMMBenchPostProcessor',
    'InstructBlipCOCOCaotionPromptConstructor',
    'InstructBlipCOCOCaptionPostProcessor',
    'InstructBlipVQAPromptConstructor',
    'InstructBlipVQAPostProcessor',
    'InstructBlipScienceQAPromptConstructor',
    'InstructBlipScienceQAPostProcessor',
    'InstructBlipVSRPromptConstructor',
    'InstructBlipVSRPostProcessor',
]
@ -1,248 +0,0 @@
"""Requires Transformers 4.28 and above; the implementation may change
according to the Llama implementation."""
import logging

import mmengine
import torch
import torch.nn as nn
from lavis.models.blip2_models.blip2 import Blip2Base, disabled_train
from mmengine.device import get_device
from transformers import LlamaForCausalLM, LlamaTokenizer

from opencompass.registry import MM_MODELS


@MM_MODELS.register_module('blip2-vicuna-instruct')
class InstructBlipInferencer(Blip2Base):

    def __init__(
        self,
        prompt_constructor: dict,
        post_processor: dict,
        vit_model: str = 'eva_clip_g',
        img_size: int = 224,
        drop_path_rate: float = 0,
        use_grad_checkpoint: bool = False,
        vit_precision: str = 'fp16',
        freeze_vit: bool = True,
        num_query_token: int = 32,
        llm_model: str = '',
        sys_prompt: str = '',
        prompt: str = '',
        max_txt_len: int = 128,
        max_output_txt_len: int = 256,
        qformer_text_input: bool = True,
        low_resource: bool = False,
        mode: str = 'generation',
        is_caption_task=False,
    ):
        super().__init__()
        self.mode = mode
        self.prompt_constructor = mmengine.registry.build_from_cfg(
            prompt_constructor, MM_MODELS)
        self.post_processor = mmengine.registry.build_from_cfg(
            post_processor, MM_MODELS)

        self.tokenizer = self.init_tokenizer(truncation_side='left')

        self.visual_encoder, self.ln_vision = self.init_vision_encoder(
            vit_model, img_size, drop_path_rate, use_grad_checkpoint,
            vit_precision)
        if freeze_vit:
            for name, param in self.visual_encoder.named_parameters():
                param.requires_grad = False
            self.visual_encoder = self.visual_encoder.eval()
            self.visual_encoder.train = disabled_train
            logging.info('freeze vision encoder')

        self.Qformer, self.query_tokens = self.init_Qformer(
            num_query_token, self.visual_encoder.num_features)

        if not qformer_text_input:
            self.Qformer.bert.embeddings.word_embeddings = None
            self.Qformer.bert.embeddings.position_embeddings = None
            for layer in self.Qformer.bert.encoder.layer:
                layer.output = None
                layer.intermediate = None
        else:
            self.Qformer.resize_token_embeddings(len(self.tokenizer))
        self.Qformer.cls = None

        self.llm_tokenizer = LlamaTokenizer.from_pretrained(
            llm_model, use_fast=False, truncation_side='left')

        if low_resource:
            self.llm_model = LlamaForCausalLM.from_pretrained(
                llm_model,
                torch_dtype=torch.float16,
                load_in_8bit=True,
                device_map={'': 0})
        else:
            self.llm_model = LlamaForCausalLM.from_pretrained(
                llm_model, torch_dtype=torch.float16)
        self.llm_tokenizer.add_special_tokens({'pad_token': '[PAD]'})
        self.llm_tokenizer.add_special_tokens({'bos_token': '</s>'})
        self.llm_tokenizer.add_special_tokens({'eos_token': '</s>'})
        self.llm_tokenizer.add_special_tokens({'unk_token': '</s>'})

        self.llm_model.resize_token_embeddings(len(self.llm_tokenizer))

        for name, param in self.llm_model.named_parameters():
            param.requires_grad = False

        self.llm_proj = nn.Linear(self.Qformer.config.hidden_size,
                                  self.llm_model.config.hidden_size)

        self.max_txt_len = max_txt_len
        self.max_output_txt_len = max_output_txt_len
        self.sys_prompt = sys_prompt
        self.prompt = prompt
        self.is_caption_task = is_caption_task

        self._lemmatizer = None

        self.qformer_text_input = qformer_text_input

    def forward(self, batch):
        if self.mode == 'generation':
            return self.generate(batch)
        else:
            raise RuntimeError(f'Invalid mode "{self.mode}".')

    def concat_text_input_output(self, input_ids, input_atts, output_ids,
                                 output_atts):
        input_part_targets_len = []
        llm_tokens = {'input_ids': [], 'attention_mask': []}
        for i in range(input_ids.size(0)):
            this_input_ones = input_atts[i].sum()
            input_part_targets_len.append(this_input_ones)
            llm_tokens['input_ids'].append(
                torch.cat([
                    input_ids[i][:this_input_ones], output_ids[i][1:],
                    input_ids[i][this_input_ones:]
                ]))
            llm_tokens['attention_mask'].append(
                torch.cat([
                    input_atts[i][:this_input_ones], output_atts[i][1:],
                    input_atts[i][this_input_ones:]
                ]))
        llm_tokens['input_ids'] = torch.stack(llm_tokens['input_ids'])
        llm_tokens['attention_mask'] = torch.stack(
            llm_tokens['attention_mask'])
        return llm_tokens, input_part_targets_len

    def pack_inputs(self, batch):
        images = [image.unsqueeze(0) for image in batch['inputs']]
        data_samples = [data_sample for data_sample in batch['data_samples']]
        images = torch.cat(images, dim=0).to(get_device())
        inputs = {'image': images, 'data_samples': data_samples}
        return inputs

    @torch.no_grad()
    def generate(
        self,
        batch,
        use_nucleus_sampling=False,
        num_beams=5,
        max_length=256,
        min_length=1,
        top_p=0.9,
        repetition_penalty=1.5,
        length_penalty=1,
        num_captions=1,
        temperature=1,
    ):
        inputs = self.pack_inputs(batch)
        inputs = self.prompt_constructor(inputs)
        image = inputs['image']
        prompt = inputs['prompt']
        data_samples = inputs['data_samples']

        self.llm_tokenizer.padding_side = 'left'

        bs = image.size(0)

        if isinstance(prompt, str):
            prompt = [prompt] * bs
        else:
            assert len(
                prompt
            ) == bs, 'The number of prompts must be equal to the batch size.'

        query_tokens = self.query_tokens.expand(bs, -1, -1)
        if self.qformer_text_input:
            text_Qformer = self.tokenizer(
                prompt,
                padding='longest',
                truncation=True,
                max_length=self.max_txt_len,
                return_tensors='pt',
            ).to(image.device)
            query_atts = torch.ones(query_tokens.size()[:-1],
                                    dtype=torch.long).to(image.device)
            Qformer_atts = torch.cat([query_atts, text_Qformer.attention_mask],
                                     dim=1)

        with self.maybe_autocast():
            image_embeds = self.ln_vision(self.visual_encoder(image))
        image_atts = torch.ones(image_embeds.size()[:-1],
                                dtype=torch.long).to(image.device)

        if self.qformer_text_input:
            query_output = self.Qformer.bert(
                text_Qformer.input_ids,
                attention_mask=Qformer_atts,
                query_embeds=query_tokens,
                encoder_hidden_states=image_embeds,
                encoder_attention_mask=image_atts,
                return_dict=True,
            )
        else:
            query_output = self.Qformer.bert(
                query_embeds=query_tokens,
                encoder_hidden_states=image_embeds,
                encoder_attention_mask=image_atts,
                return_dict=True,
            )

        inputs_llm = self.llm_proj(
            query_output.last_hidden_state[:, :query_tokens.size(1), :])
        atts_llm = torch.ones(inputs_llm.size()[:-1],
                              dtype=torch.long).to(image.device)

        prompt = ['###Human: ' + p + '###Assistant:' for p in prompt]
        prompt = [self.sys_prompt + p for p in prompt]
        llm_tokens = self.llm_tokenizer(prompt,
                                        padding='longest',
                                        return_tensors='pt').to(image.device)

        with self.maybe_autocast():
            inputs_embeds = self.llm_model.get_input_embeddings()(
                llm_tokens.input_ids)
            inputs_embeds = torch.cat([inputs_llm, inputs_embeds], dim=1)
            attention_mask = torch.cat([atts_llm, llm_tokens.attention_mask],
                                       dim=1)

            outputs = self.llm_model.generate(
                inputs_embeds=inputs_embeds,
                attention_mask=attention_mask,
                do_sample=use_nucleus_sampling,
                top_p=top_p,
                temperature=temperature,
                num_beams=num_beams,
                max_length=self.max_output_txt_len,
                min_length=min_length,
                repetition_penalty=repetition_penalty,
                length_penalty=length_penalty,
                num_return_sequences=num_captions,
            )

        for i, data_sample in enumerate(data_samples):
            output_token = outputs[i]
            output_text = self.post_processor(output_token, self.llm_tokenizer)
            if self.is_caption_task:
                data_sample.pred_caption = output_text
            else:
                data_sample.pred_answer = output_text
            data_samples[i] = data_sample
        return data_samples
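# Summary of the generation path implemented above: the frozen ViT encodes the
# image, the Q-Former turns it into num_query_token query embeddings
# (conditioned on the prompt text when qformer_text_input is True), llm_proj
# maps those embeddings into the LLM hidden size, and they are prepended to
# the embedded sys_prompt + '###Human: ... ###Assistant:' prompt before
# self.llm_model.generate decodes the answer (beam search with num_beams=5 by
# default, since do_sample defaults to False).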
@ -1,111 +0,0 @@
import random
import re

import torch


class InstructBlipMMBenchPostProcessor:
    """Post processor for InstructBlip on MMBench."""

    def __init__(self) -> None:
        pass

    def __call__(self, output_token: torch.tensor, tokenizer) -> str:
        # convert output id 0 to 2 (eos_token_id)
        output_token[output_token == 0] = 2
        output_text = tokenizer.decode(output_token,
                                       add_special_tokens=False)  # noqa
        output_text = self._extract_key_words(output_text.strip())
        return output_text

    def _extract_key_words(self, output_text: str) -> str:

        output_text = output_text.split('###')[0]
        output_text = output_text.split('Assistant:')[-1].strip()
        output_text = output_text.strip('</s><s>')
        output_text = output_text.strip('</Img>')
        output_text = output_text.strip()
        pattern = re.compile(r'([A-Z]\.)')
        res = pattern.findall(output_text)
        if len(res) > 0:
            output_text = res[0][:-1]
        return output_text


class InstructBlipCOCOCaptionPostProcessor:
    """Post processor for InstructBlip on COCO Caption."""

    def __init__(self) -> None:
        pass

    def __call__(self, output_token: torch.tensor, tokenizer) -> str:

        output_token[output_token == 0] = 2
        output_text = tokenizer.decode(output_token,
                                       add_special_tokens=False)  # noqa
        output_text = output_text.split('###')[0]
        output_text = output_text.split('Assistant:')[-1].strip()
        output_text = output_text.strip('</s><s>')
        output_text = output_text.strip('</Img>')
        output_text = output_text.strip()
        return output_text


class InstructBlipVQAPostProcessor:
    """Post processor for InstructBlip on VQA."""

    def __init__(self) -> None:
        pass

    def __call__(self, output_token: torch.tensor, tokenizer) -> str:
        output_token[output_token == 0] = 2
        output_text = tokenizer.decode(output_token,
                                       add_special_tokens=False)  # noqa
        output_text = output_text.split('###')[0]
        output_text = output_text.split('Assistant:')[-1].strip()
        output_text = output_text.strip('</s><s>')
        output_text = output_text.strip('</Img>')
        output_text = output_text.strip()
        return output_text


class InstructBlipScienceQAPostProcessor:
    """Post processor for InstructBlip on ScienceQA."""

    def __init__(self) -> None:
        pass

    def __call__(self, output_token: torch.tensor, tokenizer) -> str:

        output_token[output_token == 0] = 2
        output_text = tokenizer.decode(output_token,
                                       add_special_tokens=False)  # noqa
        output_text = output_text.split('###')[0]
        output_text = output_text.split('Assistant:')[-1].strip()
        output_text = output_text.strip('</s><s>')
        output_text = output_text.strip('</Img>')
        output_text = output_text.strip()
        pattern = re.compile(r'\(([A-Z])\)')
        output_text = pattern.findall(output_text)
        if len(output_text) == 0:
            output_text = random.choice(['A', 'B', 'C', 'D'])
        else:
            output_text = output_text[0]
        return output_text


class InstructBlipVSRPostProcessor:
    """Post processor for InstructBlip on VSR."""

    def __init__(self) -> None:
        pass

    def __call__(self, output_token: torch.tensor, tokenizer) -> str:

        output_token[output_token == 0] = 2
        output_text = tokenizer.decode(output_token, add_special_tokens=False)
        pattern = r'yes|no|Yes|No'
        output_text = re.findall(pattern, output_text)
        if len(output_text) > 0:
            output_text = output_text[0].lower()
        return output_text
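# A worked example for the MMBench post processor above (the decoded string is
# hypothetical): given 'Assistant: B. The cat is on the mat###Human:', the
# text is cut at '###', the part after 'Assistant:' is kept, the wrapper
# tokens are stripped, and the first capital-letter-plus-period match ('B.')
# is reduced to 'B', which becomes the predicted option.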
@ -1,122 +0,0 @@
from typing import List

from mmpretrain.structures import DataSample


class InstructBlipMMBenchPromptConstructor:
    """Prompt constructor for InstructBlip on MMBench.

    Args:
        image_prompt (str): Image prompt.
        reply_prompt (str): Reply prompt.
    """

    def __init__(self, image_prompt: str = '', reply_prompt: str = '') -> None:
        self.image_prompt = image_prompt
        self.reply_prompt = reply_prompt

    def __call__(self, inputs: dict) -> dict:
        """Construct prompt.

        Args:
            inputs (dict): Input data containing image and data_samples.

        Returns:
            dict: A dict containing prompt, images and data_samples.
        """
        data_samples = inputs['data_samples']
        prompt = self._process(data_samples)
        inputs.update({'prompt': prompt})

        return inputs

    def _process(self, data_samples: List[DataSample]) -> str:
        """Process data sample to prompt.

        Args:
            data_samples (List[DataSample]): A list of data_samples.

        Returns:
            str: Prompt.
        """
        assert len(data_samples) == 1, 'Only support batch size 1.'
        questions = [
            data_sample.get('question') for data_sample in data_samples
        ]
        options = [data_sample.get('options') for data_sample in data_samples]
        contexts = [data_sample.get('context') for data_sample in data_samples]
        question = questions[0]
        option = options[0]
        context = contexts[0]
        if context is not None:
            prompt = self.image_prompt + ' ' + context + ' ' + question + ' ' + option + ' ' + self.reply_prompt  # noqa
        else:
            prompt = self.image_prompt + ' ' + question + ' ' + option + ' ' + self.reply_prompt  # noqa
        return prompt


class InstructBlipCOCOCaotionPromptConstructor(
        InstructBlipMMBenchPromptConstructor):
    """Prompt constructor for InstructBlip on COCO Caption."""

    def _process(self, data_samples: List[DataSample]) -> str:
        assert len(data_samples) == 1, 'Only support batch size 1.'
        prompt = self.image_prompt + ' ' + 'a photo of' + self.reply_prompt
        return prompt


class InstructBlipVQAPromptConstructor(InstructBlipMMBenchPromptConstructor):
    """Prompt constructor for InstructBlip on VQA."""

    def _process(self, data_samples: List[DataSample]) -> str:
        assert len(data_samples) == 1, 'Only support batch size 1.'
        questions = [
            data_sample.get('question') for data_sample in data_samples
        ]
        question = questions[0]
        prompt = self.image_prompt + ' ' + question + ' ' + 'Answer this question in a single word.' + ' ' + self.reply_prompt  # noqa
        return prompt


class InstructBlipScienceQAPromptConstructor(
        InstructBlipMMBenchPromptConstructor):
    """Prompt constructor for InstructBlip on ScienceQA."""

    choice_mapping = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F'}

    def _process(self, data_samples: List[DataSample]) -> str:
        assert len(data_samples) == 1, 'Only support batch size 1.'
        questions = [
            'Question: ' + data_sample.get('question') + '\n'
            for data_sample in data_samples
        ]  # noqa
        choices = [data_sample.get('choices') for data_sample in data_samples]
        choices = [[
            f'({self.choice_mapping[i]}) ' + item
            for i, item in enumerate(choice)
        ] for choice in choices]
        choices = [
            'Choices: ' + ' '.join(choice) + '\n' for choice in choices
        ]  # noqa
        contexts = [
            'Context: ' + data_sample.get('hint') + '\n'
            for data_sample in data_samples
        ]  # noqa
        question = questions[0]
        choice = choices[0]
        context = contexts[0]
        prompt = self.image_prompt + ' ' + context + ' ' + question + ' ' + choice + self.reply_prompt + ' ' + 'The answer is'  # noqa
        return prompt


class InstructBlipVSRPromptConstructor(InstructBlipMMBenchPromptConstructor):
    """Prompt constructor for InstructBlip on VSR."""

    def _process(self, data_samples: List[DataSample]) -> str:
        assert len(data_samples) == 1, 'Only support batch size 1.'
        questions = [
            data_sample.get('question') for data_sample in data_samples
        ]
        question = questions[0]
        prompt = self.image_prompt + ' ' + question + ' ' + 'Is the above description correct? Answer yes or no.' + ' ' + self.reply_prompt  # noqa
        return prompt
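# For instance (all values hypothetical), with image_prompt='<image>',
# reply_prompt='Answer:', question='What is in the image?',
# options='A. cat B. dog' and no context, the MMBench constructor above
# yields '<image> What is in the image? A. cat B. dog Answer:'. The VQA and
# VSR variants insert their fixed instruction sentence before reply_prompt in
# the same way.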
@ -1,8 +0,0 @@
from .llama_adapter import LLaMA_adapter_v2
from .post_processor import LlamaAadapterMMBenchPostProcessor
from .prompt_constructor import LlamaAadapterMMBenchPromptConstructor  # noqa

__all__ = [
    'LLaMA_adapter_v2', 'LlamaAadapterMMBenchPostProcessor',
    'LlamaAadapterMMBenchPromptConstructor'
]
Some files were not shown because too many files have changed in this diff.