---
license: apache-2.0
datasets:
- michelecafagna26/hl
language:
- en
metrics:
- sacrebleu
- rouge
- meteor
- spice
- cider
library_name: pytorch
tags:
- pytorch
- image-to-text
---
# Model Card: VinVL for Captioning 🖼️
[Microsoft's VinVL](https://github.com/microsoft/Oscar) base fine-tuned on [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) for **rationale description generation** downstream task.
# Model fine-tuning 🏋️‍♂️
The model has been fine-tuned for 10 epochs on the rationale captions of the [HL dataset](https://arxiv.org/abs/2302.12189?context=cs.CL) (available on the 🤗 Hub: [michelecafagna26/hl](https://huggingface.co/datasets/michelecafagna26/hl)).
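For reference, the dataset can be loaded directly with the 🤗 `datasets` library. A minimal sketch follows; the field names are an assumption, so check the dataset card for the exact schema:
```python
from datasets import load_dataset

# Load the HL dataset from the Hub
hl = load_dataset("michelecafagna26/hl")

# Inspect one training example to see the actual fields; names such as
# "captions" or "rationale" are assumptions, not guaranteed by this card.
sample = hl["train"][0]
print(sample.keys())
```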
# Test set metrics 📊
Scores obtained with beam size 5 and max length 20.
| Bleu-1 | Bleu-2 | Bleu-3 | Bleu-4 | METEOR | ROUGE-L | CIDEr | SPICE |
|--------|--------|--------|--------|--------|---------|-------|-------|
| 0.55 | 0.38 | 0.23 | 0.15 | 0.17 | 0.44 | 0.44 | 0.10 |
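BLEU, ROUGE-L, and METEOR can be reproduced with the 🤗 `evaluate` library; CIDEr and SPICE typically require `pycocoevalcap` and are omitted here. A minimal sketch with illustrative strings:
```python
import evaluate

predictions = ["he is on leisure"]   # model outputs
references = ["he is on vacation"]   # ground-truth rationales (illustrative)

sacrebleu = evaluate.load("sacrebleu")
rouge = evaluate.load("rouge")
meteor = evaluate.load("meteor")

# sacrebleu expects one list of references per prediction
print(sacrebleu.compute(predictions=predictions, references=[[r] for r in references])["score"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
print(meteor.compute(predictions=predictions, references=references)["meteor"])
```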
# Usage and Installation
More info about how to install and use this model can be found here: [michelecafagna26/VinVL](https://github.com/michelecafagna26/VinVL)
# Feature extraction ⚙️
This model relies on a separate visual backbone to extract the image features.
More info about:
- the model: [michelecafagna26/vinvl_vg_x152c4](https://huggingface.co./michelecafagna26/vinvl_vg_x152c4)
- the usage: [michelecafagna26/vinvl-visualbackbone](https://github.com/michelecafagna26/vinvl-visualbackbone)
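The 2054-dimensional features expected by the captioning model are, by the usual VinVL convention, 2048-dim region features concatenated with 6 normalized box-geometry values. The extractor linked above produces these for you; the sketch below only illustrates the assumed layout:
```python
import numpy as np

def build_vinvl_features(region_feats, boxes, img_w, img_h):
    """Concatenate 2048-d region features with 6 normalized box values.

    region_feats: (num_boxes, 2048) array from the visual backbone
    boxes:        (num_boxes, 4) array of [x1, y1, x2, y2] pixel coordinates
    """
    x1, y1, x2, y2 = boxes.T
    # normalized corners plus normalized width/height (assumed convention)
    box_geom = np.stack(
        [x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h,
         (x2 - x1) / img_w, (y2 - y1) / img_h],
        axis=1,
    )
    return np.concatenate([region_feats, box_geom], axis=1)  # (num_boxes, 2054)

# toy example: 10 random regions on a 640x480 image
feat_obj = build_vinvl_features(
    np.random.rand(10, 2048).astype(np.float32),
    np.array([[0, 0, 50, 50]] * 10, dtype=np.float32),
    640, 480,
)
print(feat_obj.shape)  # (10, 2054)
```
An array shaped like this `feat_obj` is what the quick-start snippet below consumes.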
# Quick start 🚀
```python
import torch

from transformers.pytorch_transformers import BertConfig, BertTokenizer
from oscar.modeling.modeling_bert import BertForImageCaptioning
from oscar.wrappers import OscarTensorizer

ckpt = "path/to/the/checkpoint"
device = "cuda" if torch.cuda.is_available() else "cpu"

# original code
config = BertConfig.from_pretrained(ckpt)
tokenizer = BertTokenizer.from_pretrained(ckpt)
model = BertForImageCaptioning.from_pretrained(ckpt, config=config).to(device)

# This takes care of the preprocessing
tensorizer = OscarTensorizer(tokenizer=tokenizer, device=device)

# feat_obj is a numpy array with shape (num_boxes, feat_size) produced by the
# feature extractor; feat_size is 2054 by default in VinVL.
# unsqueeze(0) adds the batch dimension: (1, num_boxes, feat_size)
visual_features = torch.from_numpy(feat_obj).to(device).unsqueeze(0)

# object labels are usually returned by the feature extractor
labels = [['boat', 'boat', 'boat', 'bottom', 'bush', 'coat', 'deck', 'deck', 'deck', 'dock', 'hair', 'jacket']]

inputs = tensorizer.encode(visual_features, labels=labels)
outputs = model(**inputs)
pred = tensorizer.decode(outputs)

# the output looks like this:
# pred = {0: [{'caption': 'he is on leisure', 'conf': 0.7070220112800598}]}
```
# Citations 🧾
HL Dataset paper:
```BibTeX
@inproceedings{cafagna2023hl,
  title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
address = {Prague, Czech Republic},
year={2023}
}
```
Please also consider citing the original project and the VinVL paper:
```BibTeX
@misc{han2021image,
title={Image Scene Graph Generation (SGG) Benchmark},
author={Xiaotian Han and Jianwei Yang and Houdong Hu and Lei Zhang and Jianfeng Gao and Pengchuan Zhang},
year={2021},
eprint={2107.12604},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{zhang2021vinvl,
  title={{VinVL}: Revisiting visual representations in vision-language models},
author={Zhang, Pengchuan and Li, Xiujun and Hu, Xiaowei and Yang, Jianwei and Zhang, Lei and Wang, Lijuan and Choi, Yejin and Gao, Jianfeng},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={5579--5588},
year={2021}
}
``` |