Where is 't5xxl.safetensors' ?
Help. I downloaded https://huggingface.co./stabilityai/stable-diffusion-3.5-medium/blob/main/text_encoders/t5xxl_fp8_e4m3fn.safetensors
and renamed it to t5xxl.safetensors because I could not find a file with that exact name anywhere on Hugging Face.
Inference starts but fails with FileNotFoundError: No such file or directory: "models/sd3.5_large.safetensors" when using my renamed safetensors file.
I am trying to run on a low-end consumer device. The chart indicates an Nvidia RTX 3060 should work with 3.5 Medium. Almost there.
Where is 3.5 Medium supported with the proper text encoder?
thanks,
This one downloaded automatically for me:
google/t5-v1_1-xxl
Or skip the T5 text encoder entirely by passing:
text_encoder_3=None,
tokenizer_3=None
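
If you are using the diffusers pipeline (which is what downloads google/t5-v1_1-xxl automatically) rather than the reference GitHub code, a minimal sketch of that second option could look like this; it assumes diffusers, torch, and accelerate are installed and that you have accepted the model license on Hugging Face:

import torch
from diffusers import StableDiffusion3Pipeline

# Load SD 3.5 Medium without the T5-XXL encoder to cut VRAM/RAM usage.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # helps keep a 12 GB card like the RTX 3060 within budget

image = pipe("cute wallpaper art of a cat", num_inference_steps=28).images[0]
image.save("cat.png")

Prompt adherence drops somewhat without the T5 encoder, but the two CLIP encoders alone are usually fine for short prompts.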
How do I get a safetensors file from t5-v1_1-xxl? At 45 GB it is too large for a low-end consumer device.
Thanks all, it finally worked for me (eGPU-connected Nvidia RTX 3060, 16-core AMD Ryzen CPU with 64 GB RAM, Ubuntu 22.04, Nvidia CUDA 12.6, driver 560).
Since the code on GitHub is a reference implementation, I had to make the following changes:
- Rename either "t5xxl_fp16.safetensors" or "t5xxl_fp8_e4m3fn.safetensors" to t5xxl.safetensors (see the download/rename sketch after this list)
- Change sd_infer.py to set MODEL = "models/sd3.5_medium.safetensors" (it pointed at the Large checkpoint by default, which is what caused the sd3.5_large.safetensors error above)
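
For anyone following along, here is a minimal sketch of that download-and-rename step using huggingface_hub. The repo and text-encoder path come from the link in the original post; the root-level sd3.5_medium.safetensors filename and the models/ target directory are assumptions based on what the reference implementation looks for. It also assumes you are logged in (huggingface-cli login) with access to the gated repo:

import os
import shutil
from huggingface_hub import hf_hub_download

os.makedirs("models", exist_ok=True)

# Fetch the fp8 T5-XXL text encoder from the SD 3.5 Medium repo.
t5_path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-3.5-medium",
    filename="text_encoders/t5xxl_fp8_e4m3fn.safetensors",
)
shutil.copy(t5_path, "models/t5xxl.safetensors")  # the name the reference code expects

# Fetch the Medium checkpoint so MODEL = "models/sd3.5_medium.safetensors" resolves.
ckpt_path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-3.5-medium",
    filename="sd3.5_medium.safetensors",
)
shutil.copy(ckpt_path, "models/sd3.5_medium.safetensors")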
Image generation from the prompt "cute wallpaper art of a cat" took 68 to 71 seconds and used only about half of the RTX 3060's memory (6 GB of 12 GB VRAM; system RAM peaked at 32 GB, averaging 28 GB of the 50 GB available).
I think this should run faster if I can get it to use more of the 3060's VRAM.