Diffusers documentation
Hybrid Inference
Empowering local AI builders with Hybrid Inference
Hybrid Inference is an experimental feature. Feedback is welcome.
Why use Hybrid Inference?
Hybrid Inference offers a fast and simple way to offload parts of the generation pipeline to remote endpoints, reducing local hardware requirements.
- 🚀 Reduced Requirements: Access powerful models without expensive hardware.
- 💎 Without Compromise: Achieve the highest quality without sacrificing performance.
- 💰 Cost Effective: It’s free! 🤑
- 🎯 Diverse Use Cases: Fully compatible with Diffusers 🧨 and the wider community.
- 🔧 Developer-Friendly: Simple requests, fast responses.
Available Models
- VAE Decode 🖼️: Quickly decode latent representations into high-quality images without compromising performance or workflow speed.
- VAE Encode 🔢 (coming soon): Efficiently encode images into latent representations for generation and training.
- Text Encoders 📃 (coming soon): Compute text embeddings for your prompts quickly and accurately, ensuring a smooth and high-quality workflow.
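To make the VAE Decode idea concrete: a diffusion pipeline produces a compact latent tensor (for Stable Diffusion-style models, 4 channels at 1/8 the image resolution), and the decode step turns it into pixels. A minimal sketch of the client side of such an offload, assuming a hypothetical remote endpoint, is shown below; the `serialize_latents` helper and the payload format are illustrative assumptions, not the actual Diffusers API.

```python
import io
import numpy as np

def serialize_latents(latents: np.ndarray) -> bytes:
    """Pack a latent tensor into bytes for a hypothetical remote VAE endpoint."""
    buf = io.BytesIO()
    np.save(buf, latents)  # .npy format carries shape and dtype with the data
    return buf.getvalue()

def deserialize_latents(payload: bytes) -> np.ndarray:
    """Unpack the bytes back into a tensor (what the server would do)."""
    return np.load(io.BytesIO(payload))

# An SD-style latent for a 512x512 image: 4 channels at 64x64 (512 / 8).
latents = np.random.randn(1, 4, 64, 64).astype(np.float32)

payload = serialize_latents(latents)
# In a real client this payload would be POSTed to the remote VAE Decode
# endpoint, which returns the decoded image; here we only verify the
# round trip of the latent payload itself.
restored = deserialize_latents(payload)
assert restored.shape == (1, 4, 64, 64)
```

The key point is that only the small latent tensor crosses the network, while the memory-heavy VAE decode runs remotely.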
Integrations
- SD.Next: All-in-one UI with built-in support for Hybrid Inference.
- ComfyUI-HFRemoteVae: ComfyUI node for Hybrid Inference.
Contents
The documentation is organized into two sections:
- VAE Decode: Learn the basics of how to use VAE Decode with Hybrid Inference.
- API Reference: Dive into task-specific settings and parameters.