AWS Neuron

Diffusers functionalities are available on AWS Inf2 instances, which are EC2 instances powered by Neuron machine learning accelerators. These instances aim to provide better compute performance (higher throughput, lower latency) with good cost-efficiency, making them good candidates for deploying diffusion models to production on AWS.

Optimum Neuron is the interface between Hugging Face libraries and AWS Accelerators, including AWS Trainium and AWS Inferentia. It supports many of the features in Diffusers with similar APIs, so it is easier to learn if you’re already familiar with Diffusers. Once you have created an AWS Inf2 instance, install Optimum Neuron.

python -m pip install --upgrade-strategy eager optimum[neuronx]

We provide a pre-built Hugging Face Neuron Deep Learning AMI (DLAMI) and Optimum Neuron containers for Amazon SageMaker; using them is the recommended way to correctly set up your environment.

The example below demonstrates how to generate images with the Stable Diffusion XL model on an inf2.8xlarge instance (you can switch to cheaper inf2.xlarge instances once the model is compiled). To generate some images, use the NeuronStableDiffusionXLPipeline class, which is similar to the StableDiffusionXLPipeline class in Diffusers.

Unlike in Diffusers, the models in the pipeline must first be compiled to the Neuron format (.neuron). Run the following command to export the model.

optimum-cli export neuron --model stabilityai/stable-diffusion-xl-base-1.0 \
  --batch_size 1 \
  --height 1024 `# height in pixels of generated image, e.g. 768, 1024` \
  --width 1024 `# width in pixels of generated image, e.g. 768, 1024` \
  --num_images_per_prompt 1 `# number of images to generate per prompt, defaults to 1` \
  --auto_cast matmul `# cast only matrix multiplication operations` \
  --auto_cast_type bf16 `# cast operations from FP32 to BF16` \
  sd_neuron_xl/
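
If you'd rather stay in Python, Optimum Neuron also exposes an export path through NeuronStableDiffusionXLPipeline.from_pretrained with export=True. The sketch below mirrors the compilation options of the CLI call above; treat the exact argument names as an assumption and double-check them against the Optimum Neuron documentation.

>>> from optimum.neuron import NeuronStableDiffusionXLPipeline

>>> # Assumed Python equivalent of the CLI export: batch size 1, 1024x1024 images,
>>> # and BF16 autocast restricted to matrix multiplications.
>>> compiler_args = {"auto_cast": "matmul", "auto_cast_type": "bf16"}
>>> input_shapes = {"batch_size": 1, "height": 1024, "width": 1024}
>>> pipe = NeuronStableDiffusionXLPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-xl-base-1.0",
...     export=True,
...     **compiler_args,
...     **input_shapes,
... )
>>> pipe.save_pretrained("sd_neuron_xl/")  # write the compiled .neuron artifacts to disk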

Now generate some images with the pre-compiled SDXL model.

>>> from optimum.neuron import NeuronStableDiffusionXLPipeline

>>> stable_diffusion_xl = NeuronStableDiffusionXLPipeline.from_pretrained("sd_neuron_xl/")
>>> prompt = "a pig with wings flying in floating US dollar banknotes in the air, skyscrapers behind, warm color palette, muted colors, detailed, 8k"
>>> image = stable_diffusion_xl(prompt).images[0]
Example output: a winged pig generated by SDXL on an Inf2 instance.
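
The pipeline returns standard PIL images, so you can save the result like any other image; the filename below is just a placeholder.

>>> image.save("sdxl_generated.png")  # write the generated image to disk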

Feel free to check out more guides and examples on different use cases from the Optimum Neuron documentation!
