
InstructBLIP

Prepare the environment

git clone https://github.com/salesforce/LAVIS.git
cd ./LAVIS
pip install -e .

Modify the config

Modify the config of InstructBLIP, such as the model paths of the LLM and the Q-Former.
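
The exact structure depends on the config file you use, so check it before editing; as a rough sketch, the paths you point at local checkpoints might look like the following (field names and paths here are illustrative placeholders, not shipped defaults):

```python
# Hypothetical sketch of the model entry in an InstructBLIP config.
# Field names and paths are illustrative; verify against the actual
# config file (e.g. instructblip_mmbench.py) before editing.
instruct_blip_model = dict(
    type='blip2-vicuna-instruct',
    llm_model='/path/to/vicuna-7b',   # local path to the LLM weights
    qformer='/path/to/qformer.pth',   # local path to the Q-Former weights
)
```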

Then update tasks.py as in the following snippet.

from mmengine.config import read_base

with read_base():
    from .instructblip.instructblip_mmbench import (instruct_blip_dataloader,
                                                    instruct_blip_evaluator,
                                                    instruct_blip_load_from,
                                                    instruct_blip_model)

models = [instruct_blip_model]
datasets = [instruct_blip_dataloader]
evaluators = [instruct_blip_evaluator]
load_froms = [instruct_blip_load_from]
num_gpus = 8
num_procs = 8
launcher = 'pytorch'  # or 'slurm'
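
To evaluate a different dataset, swap the imported config; each config in this directory (e.g. instructblip_coco_caption.py, instructblip_vqav2.py) is expected to export analogous objects. A sketch, assuming the same variable names are used across configs (verify against the target file before use):

```python
from mmengine.config import read_base

# Assumption: instructblip_coco_caption.py exports the same four names
# as instructblip_mmbench.py; check the file if the import fails.
with read_base():
    from .instructblip.instructblip_coco_caption import (
        instruct_blip_dataloader, instruct_blip_evaluator,
        instruct_blip_load_from, instruct_blip_model)
```

The rest of tasks.py (models, datasets, evaluators, load_froms, and the launcher settings) stays unchanged.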

Start evaluation

Slurm

cd $root
python run.py configs/multimodal/tasks.py --mm-eval --slurm -p $PARTITION

PyTorch

cd $root
python run.py configs/multimodal/tasks.py --mm-eval