CLIP-Spanish

CLIP-Spanish is a CLIP-like model for the Spanish language. It combines BERTIN as the language encoder with the ViT-B/32 image encoder from CLIP. The model is implemented in Flax, and the repository includes the training scripts (see training.md). This work is part of the Flax/JAX Community Week, organised by HuggingFace, with TPU usage sponsored by Google.
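To make the dual-encoder layout concrete, the sketch below composes the two towers with transformers' FlaxVisionTextDualEncoderModel and scores Spanish captions against an image. The checkpoint names ("bertin-project/bertin-roberta-base-spanish", "openai/clip-vit-base-patch32") and the choice of this class are assumptions for illustration only: the trained clip-spanish weights come from the training scripts in this repository, and a model composed from the base checkpoints as shown still has untrained projection layers.

```python
import jax
import requests
from PIL import Image
from transformers import (
    AutoTokenizer,
    CLIPImageProcessor,
    FlaxVisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Compose the two towers named above: a BERTIN text encoder and the
# ViT-B/32 CLIP vision encoder. Built this way the projection heads are
# randomly initialised, so this only illustrates the architecture; the
# actual clip-spanish weights are produced by the repo's training scripts.
model = FlaxVisionTextDualEncoderModel.from_vision_text_pretrained(
    "openai/clip-vit-base-patch32",                # image encoder (ViT-B/32)
    "bertin-project/bertin-roberta-base-spanish",  # language encoder (BERTIN)
)
tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-roberta-base-spanish")
image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

# Score a set of Spanish captions against an example image.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
    text=["una foto de un gato", "una foto de un perro"],
    images=image,
    return_tensors="np",
    padding=True,
)
outputs = model(**inputs)
probs = jax.nn.softmax(outputs.logits_per_image, axis=-1)
print(probs)  # per-caption probabilities for the image
```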

Spanish WIT

We used a subset of 141,230 Spanish captions from the WIT dataset for training.
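As a rough illustration, a Spanish subset like this can be extracted from the public WIT TSV dumps by filtering on the language column. The shard file name, the choice of caption field, and the exact filtering used for this model are assumptions, not the pipeline actually used here.

```python
# Rough sketch: filter a WIT TSV shard down to Spanish image-caption pairs.
# The shard name below is hypothetical; the public WIT dumps expose
# (among other columns) `language`, `image_url` and several caption fields.
import pandas as pd

wit = pd.read_csv(
    "wit_v1.train.all-00000-of-00010.tsv.gz",  # hypothetical shard name
    sep="\t",
    compression="gzip",
    usecols=["language", "image_url", "caption_reference_description"],
)

spanish = wit[
    (wit["language"] == "es")
    & wit["caption_reference_description"].notna()
]
spanish.to_csv("wit_spanish_captions.tsv", sep="\t", index=False)
print(f"{len(spanish)} Spanish image-caption pairs")
```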

Team members

Useful links
