stable-diffusion-3.5-large-GGUF

Original Model

stabilityai/stable-diffusion-3.5-large

Run with sd-api-server

  • Version: coming soon

Quantized GGUF Models

| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| clip_g-Q8_0.gguf | Q8_0 | 8 | 739 MB | |
| clip_g.safetensors | f16 | 16 | 1.39 GB | |
| clip_l.safetensors | f16 | 16 | 246 MB | |
| sd3.5_large-Q4_0.gguf | Q4_0 | 4 | 5.11 GB | |
| sd3.5_large-Q4_1.gguf | Q4_1 | 4 | 5.61 GB | |
| sd3.5_large-Q5_0.gguf | Q5_0 | 5 | 6.11 GB | |
| sd3.5_large-Q5_1.gguf | Q5_1 | 5 | 6.61 GB | |
| sd3.5_large-Q8_0.gguf | Q8_0 | 8 | 9.11 GB | |
| sd3.5_large.safetensors | f16 | 16 | 16.5 GB | |
| t5xxl-Q4_0.gguf | Q4_0 | 4 | 2.75 GB | |
| t5xxl-Q4_1.gguf | Q4_1 | 4 | 3.06 GB | |
| t5xxl-Q5_0.gguf | Q5_0 | 5 | 3.36 GB | |
| t5xxl-Q5_1.gguf | Q5_1 | 5 | 3.67 GB | |
| t5xxl_fp16.safetensors | f16 | 16 | 9.79 GB | |

Quantized with stable-diffusion.cpp master-ac54e00.
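Each file in the table can be fetched directly from this repo over Hugging Face's standard `resolve` URL scheme. A minimal sketch with `curl`, using the Q4_0 main model as an example (the repo path is an assumption based on this card's name):

```shell
# Build the download URL for one quantized file from this repo.
REPO="second-state/stable-diffusion-3.5-large-GGUF"   # assumed repo id
FILE="sd3.5_large-Q4_0.gguf"                          # pick any file from the table
URL="https://huggingface.co/${REPO}/resolve/main/${FILE}"
echo "$URL"
# curl -L -O "$URL"   # uncomment to download (~5.11 GB for this file)
```

Note that a full text-to-image pipeline also needs the text encoders listed above (`clip_g`, `clip_l`, and one of the `t5xxl` variants), not just the diffusion model itself.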

