---
tags:
- clip
- RAG
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: mit
datasets:
- JianLiao/spectrum-icons
language:
- en
base_model:
- laion/CLIP-ViT-L-14-laion2B-s32B-b82K
---
# Model card for CLIP-ViT-L-14-spectrum-icons-23k
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
7. [How To Get Started With the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
This is a fine-tuned CLIP ViT-L/14 model based on the pretrained [`laion/CLIP-ViT-L-14-laion2B-s32B-b82K`](https://huggingface.co./laion/CLIP-ViT-L-14-laion2B-s32B-b82K) from LAION, adapted for improved text-to-image and image-to-text retrieval using a custom dataset of 23,000 PNG-caption pairs ([JianLiao/spectrum-icons](https://huggingface.co./datasets/JianLiao/spectrum-icons)). The fine-tuning used the OpenCLIP library and NVIDIA GPUs to specialize the model for abstract visual features and to improve retrieval-augmented generation (RAG) performance.
The base model was originally trained on the LAION-2B dataset, leveraging natural language supervision to align visual and textual embeddings. This fine-tuning task aimed to adapt the model further for specific domains while maintaining generalization.
# Uses
## Direct Use
- Zero-shot image classification.
- Text-to-image and image-to-text retrieval.
- Improving text-image alignment in abstract visual contexts.
## Downstream Use
- Fine-tuning for domain-specific image-text retrieval tasks.
- Integration into applications requiring enhanced semantic search (see the sketch below).
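As a concrete, hypothetical illustration of the semantic-search use case, the sketch below embeds a folder of icons once and ranks them against a free-text query with OpenCLIP. The `icons/*.png` path and the query string are placeholders, and the hub identifier follows the example at the end of this card; this is a sketch, not a packaged search pipeline.

```python
# Hypothetical semantic icon search: embed icons once, then rank them by a text query.
import glob
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:JianLiao/CLIP-ViT-L-14-spectrum-icons-20k"
)
tokenizer = open_clip.get_tokenizer("hf-hub:JianLiao/CLIP-ViT-L-14-spectrum-icons-20k")
model.eval()

paths = sorted(glob.glob("icons/*.png"))  # placeholder icon directory
with torch.no_grad():
    # Encode and L2-normalize all icons (do this once and cache for a real application).
    images = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
    image_features = model.encode_image(images)
    image_features /= image_features.norm(dim=-1, keepdim=True)

    # Encode the text query and score it against every icon.
    query = tokenizer(["a magnifying glass search icon"])  # placeholder query
    text_features = model.encode_text(query)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    scores = (text_features @ image_features.T).squeeze(0)
    for idx in scores.topk(min(5, len(paths))).indices:
        print(paths[idx], scores[idx].item())
```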
# Training Details
## Training Data
The model was fine-tuned on 23,000 image-text caption pairs. The dataset was designed to include diverse and abstract visual elements paired with detailed textual descriptions to enhance the model's capability in handling abstract queries and retrieval tasks.
## Training Procedure
The fine-tuning was conducted using the OpenCLIP library on a machine with 6 NVIDIA RTX-3090 GPUs. Key hyperparameters include:
- **Learning Rate**: `5e-6` with cosine decay.
- **Batch Size**: `64` per GPU, effective global batch size of `384`.
- **Epochs**: `40`.
- **Precision**: Mixed precision (`amp_bf16`) for improved efficiency.
- **Augmentations**:
- Color Jitter: `(0.2, 0.2, 0.1, 0.0)` with a probability of `0.7`.
- Grayscale Probability: `0.2`.
The training incorporated gradient checkpointing, distributed data parallelism (NCCL), and regular evaluations for zero-shot performance. Validation was performed after each epoch.
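For illustration, the sketch below shows how these settings might map onto open_clip's Python API. It is not the actual training script (the run above used the OpenCLIP training launcher across 6 GPUs), and it assumes a recent `open_clip_torch` release whose `aug_cfg` accepts `color_jitter_prob` and `gray_scale_prob`; the step count is derived from the figures above.

```python
# Illustrative sketch only -- not the exact fine-tuning script used for this model.
import torch
import open_clip

model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms(
    "ViT-L-14",
    pretrained="laion2b_s32b_b82k",
    aug_cfg={  # training-time augmentations listed above
        "color_jitter": (0.2, 0.2, 0.1, 0.0),  # brightness, contrast, saturation, hue
        "color_jitter_prob": 0.7,
        "gray_scale_prob": 0.2,
    },
)
model.set_grad_checkpointing()  # gradient checkpointing: trade compute for memory on ViT-L/14

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-6)
# Cosine decay of the learning rate over 40 epochs; steps per epoch is a rough
# estimate from ~23k pairs and a global batch size of 384.
steps_per_epoch = 23_000 // 384
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=40 * steps_per_epoch)
```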
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
The model was evaluated on the validation set split from the 23,000 image-text pairs. Metrics were computed for both **image-to-text** and **text-to-image** retrieval tasks.
### Metrics
1. **Recall at K**:
- R@1, R@5, R@10 for image-to-text and text-to-image retrieval.
2. **Mean Rank** and **Median Rank**:
- Average and median positions of the correct match in retrieval.
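The metrics above can be computed from an image-text similarity matrix in which row *i* is the image matching caption *i* (e.g. `similarity = image_features @ text_features.T` with both sides L2-normalized). The following is a minimal sketch of that computation, not the exact evaluation code used here:

```python
# Minimal sketch: recall@K plus mean/median rank from a square similarity matrix.
import torch

def retrieval_metrics(similarity: torch.Tensor, ks=(1, 5, 10)):
    # similarity: [N, N], where similarity[i, j] scores image i against caption j
    # and the correct caption for image i is column i.
    n = similarity.size(0)
    targets = torch.arange(n)
    order = similarity.argsort(dim=-1, descending=True)
    # Rank (1 = best) of the correct caption for each image.
    ranks = (order == targets[:, None]).float().argmax(dim=-1) + 1
    metrics = {f"R@{k}": (ranks <= k).float().mean().item() for k in ks}
    metrics["mean_rank"] = ranks.float().mean().item()
    metrics["median_rank"] = ranks.float().median().item()
    return metrics
```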
## Results
- **Image-to-Text Retrieval**:
- R@1: ~70.0%
- R@5: ~96.0%
- R@10: ~97.8%
- Mean Rank: ~2.24
- Median Rank: ~1.0
- **Text-to-Image Retrieval**:
- R@1: ~70.3%
- R@5: ~96.4%
- R@10: ~98.1%
- Mean Rank: ~2.17
- Median Rank: ~1.0
The results demonstrate robust alignment between visual and textual embeddings, with strong performance on both retrieval tasks.
# Acknowledgements
- The pretrained base model was developed by LAION and trained on the LAION-2B dataset.
# Citation
**BibTeX:**
```bibtex
@inproceedings{cherti2023reproducible,
title={Reproducible scaling laws for contrastive language-image learning},
author={Cherti, Mehdi and Beaumont, Romain and Wightman, Ross and Wortsman, Mitchell and Ilharco, Gabriel and Gordon, Cade and Schuhmann, Christoph and Schmidt, Ludwig and Jitsev, Jenia},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2818--2829},
year={2023}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
# How to Get Started with the Model
Install the required dependencies (e.g. `pip install open_clip_torch`) and load the fine-tuned model with OpenCLIP:
```python
import torch
from PIL import Image
import open_clip

# Load the fine-tuned model, its preprocessing transforms, and the tokenizer from the Hub.
model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:JianLiao/CLIP-ViT-L-14-spectrum-icons-20k"
)
tokenizer = open_clip.get_tokenizer("hf-hub:JianLiao/CLIP-ViT-L-14-spectrum-icons-20k")
model.eval()

# Example: score an image against candidate text descriptions.
image = preprocess(Image.open("/path/to/image.png")).unsqueeze(0)
text_inputs = tokenizer(["a description of the image", "another description of the image"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text_inputs)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1).numpy()
```