Unable to run on CPU

#33
by kartikpodugu - opened

I took this basic script from https://huggingface.co./stabilityai/sdxl-turbo

from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained("stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16")
#pipe.to("cuda")

prompt = "A cinematic shot of a baby racoon wearing an intricate italian priest robe."

image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]

Since I don't have an Nvidia GPU on my machine, I commented out the line pipe.to("cuda").

I am unable to generate an image.
Can't I run SDXL Turbo on CPU?

Hello! Yes, you can. You must change the torch_dtype param to torch.float32 when using the CPU. This works for me:
UPD: I moved the pipeline creation to the top level, outside of the function, to prevent memory leaks and ensure the pipeline is instantiated only once.

import io
import base64
import torch
from diffusers import AutoPipelineForText2Image

USE_GPU = torch.cuda.is_available()

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo",
    # fp16 weights don't run on CPU, so fall back to float32 there
    torch_dtype=torch.float16 if USE_GPU else torch.float32,
    variant="fp16",  # download the fp16 checkpoint files; they are cast to torch_dtype on load
    cache_dir="/models-cache"
)
if USE_GPU:
    pipe = pipe.to("cuda")


def prompt_to_base_64(prompt: str) -> str:
    """
    Generate image from prompt and return it as base64 string.
    :param prompt: Prompt for image generation (sanitized)
    :return: Base64 string of image
    """
    image = pipe(prompt=prompt, num_inference_steps=1, guidance_scale=0.0).images[0]
    image_bytes = io.BytesIO()
    image.save(image_bytes, format='JPEG')
    return base64.b64encode(image_bytes.getvalue()).decode('utf-8')
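
For example, the returned string can be dropped straight into an HTML data URI (preview.html is just an arbitrary output name for this sketch):

b64 = prompt_to_base_64("A cinematic shot of a baby racoon wearing an intricate italian priest robe.")
with open("preview.html", "w") as f:
    f.write(f'<img src="data:image/jpeg;base64,{b64}">')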

But you must have 28-32 GB of RAM and ~20-25 GB of free space on the local drive.
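
If you are close to that RAM limit, one mitigation (my addition, not from the original post; it trades some speed for memory) is diffusers' attention slicing:

# Compute attention in slices instead of all at once to lower peak RAM usage.
pipe.enable_attention_slicing()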

Thank you

Hi, I noticed that in the code I shared earlier, the pipeline was being created inside the function. To avoid memory leaks, it's better to move the pipeline creation to the top level, outside of the function, so that it gets instantiated only once and can be reused globally. This prevents excessive memory consumption over time, especially when handling multiple requests. The snippet above already reflects this change, along with a -> str return annotation on prompt_to_base_64.
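
If you would rather not load the model at import time, a minimal alternative sketch is lazy one-time initialization with functools.lru_cache (get_pipe is a hypothetical helper, not part of the original code):

import functools
import torch
from diffusers import AutoPipelineForText2Image

@functools.lru_cache(maxsize=1)
def get_pipe():
    # Runs once on first call; every later call returns the cached pipeline.
    use_gpu = torch.cuda.is_available()
    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo",
        torch_dtype=torch.float16 if use_gpu else torch.float32,
        variant="fp16",
    )
    return pipe.to("cuda") if use_gpu else pipe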

