CLIPSeg model

CLIPSeg model with reduced dimension 64 (rd64), refined with a more complex convolution. It was introduced in the paper Image Segmentation Using Text and Image Prompts by Lüddecke et al. and first released in this repository.

Intended use cases

This model is intended for zero-shot and one-shot image segmentation.

Usage

Refer to the documentation.
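A minimal sketch of zero-shot segmentation with the Hugging Face transformers API. The blank image and the prompt strings are placeholders for illustration; in practice you would load a real photo.

```python
from PIL import Image
import torch
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

# Load the processor and model from this repository.
processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

# A blank placeholder image stands in for a real photo here.
image = Image.new("RGB", (352, 352), color="white")
prompts = ["a cat", "a remote", "a blanket"]

# Pass one copy of the image per text prompt.
inputs = processor(text=prompts, images=[image] * len(prompts),
                   padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One low-resolution segmentation logit map per prompt.
logits = outputs.logits
print(logits.shape)
```

Applying a sigmoid to each logit map yields a per-pixel probability that the pixel matches the corresponding text prompt.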

Model size: 151M parameters (safetensors; tensor types I64 and F32).