
Ryzen AI Inference pipelines

The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks.

Currently, the supported tasks include image classification and object detection, both demonstrated below.

The pipeline abstraction

The pipeline abstraction is a wrapper around all the available pipelines for specific tasks. The pipeline() function automatically loads a default model and tokenizer/feature-extractor capable of performing inference for your task.

  1. Start by creating a pipeline by specifying an inference task:
>>> from optimum.amd.ryzenai import pipeline

>>> detector = pipeline("object-detection")
  2. Pass your input image to the pipeline:
>>> import requests
>>> from PIL import Image

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> detector(image)
[   {   'box': {'xmax': 325, 'xmin': 2, 'ymax': 465, 'ymin': 50},
        'label': 15.0,
        'score': 0.7081549763679504},
    {   'box': {'xmax': 630, 'xmin': 347, 'ymax': 381, 'ymin': 16},
        'label': 15.0,
        'score': 0.6494212746620178},
    {   'box': {'xmax': 369, 'xmin': 335, 'ymax': 187, 'ymin': 76},
        'label': 65.0,
        'score': 0.6064183115959167},
    {   'box': {'xmax': 645, 'xmin': 2, 'ymax': 475, 'ymin': 4},
        'label': 57.0,
        'score': 0.599224865436554},
    {   'box': {'xmax': 174, 'xmin': 40, 'ymax': 116, 'ymin': 73},
        'label': 65.0,
        'score': 0.408765971660614}]
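
The labels in this output are raw numeric class indices. As a minimal post-processing sketch, assuming they are zero-indexed COCO-80 class IDs (which the values above are consistent with, but verify against your model's label map), you can convert them to readable names:

>>> # Assumption: labels are zero-indexed COCO-80 class IDs; check your
>>> # model's configuration for the actual label mapping.
>>> COCO80_EXCERPT = {15: "cat", 57: "couch", 65: "remote"}

>>> for detection in detector(image):
...     label_id = int(detection["label"])
...     name = COCO80_EXCERPT.get(label_id, f"class_{label_id}")
...     print(f"{name}: {detection['score']:.3f} at {detection['box']}")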

Inference with RyzenAI models from the Hugging Face Hub

The pipeline() function can load Ryzen AI-supported models from the Hugging Face Hub. Once you have picked an appropriate model, you can create the pipeline() by specifying the model repository id:

>>> import requests
>>> from PIL import Image

>>> from optimum.amd.ryzenai import pipeline

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Hugging Face Hub model id of the quantized ONNX model
>>> model_id = "mohitsha/timm-resnet18-onnx-quantized-ryzen"

>>> pipe = pipeline("image-classification", model=model_id)
>>> print(pipe(image))
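
Assuming the pipeline returns the standard image-classification output format (a list of {"label": ..., "score": ...} dicts), you can pick out the top prediction as follows; this is a sketch, not a documented Ryzen AI API:

>>> # Assumption: standard image-classification output format.
>>> predictions = pipe(image)
>>> best = max(predictions, key=lambda p: p["score"])
>>> print(f"Top prediction: {best['label']} ({best['score']:.2%})")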

Using the RyzenAIModelForXXX class

It is also possible to load the model with a dedicated RyzenAIModelForXXX class. For example, here is how you can load RyzenAIModelForImageClassification for image classification:

>>> import requests
>>> from PIL import Image

>>> from optimum.amd.ryzenai import RyzenAIModelForImageClassification
>>> from optimum.amd.ryzenai import pipeline


>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Hugging Face Hub model id of the quantized ONNX model
>>> model_id = "mohitsha/timm-resnet18-onnx-quantized-ryzen"
>>> model = RyzenAIModelForImageClassification.from_pretrained(model_id)

>>> pipe = pipeline("image-classification", model=model)
>>> print(pipe(image))
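
Because the model is loaded once up front, you can reuse the same pipeline across several inputs. A minimal sketch, assuming the pipeline accepts a list of images as standard transformers pipelines do (the rotated copy is just a placeholder second input):

>>> # Sketch: reuse the loaded model for multiple inputs. The rotated
>>> # copy is a placeholder; substitute your own images.
>>> images = [image, image.rotate(90)]
>>> for result in pipe(images):
...     print(result)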

Using the model_type argument

For some models, model_type and/or image_preprocessor must be provided in addition to the model_id for inference. For example, here is how you can run inference with YOLOX:

>>> import requests
>>> from PIL import Image

>>> from optimum.amd.ryzenai import pipeline

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> model_id = "amd/yolox-s"

>>> pipe = pipeline("object-detection", model=model_id, model_type="yolox")
>>> print(pipe(image))
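
To visualize the detections, you can draw the predicted boxes onto the image with PIL. A minimal sketch, assuming the output uses the same {'box', 'label', 'score'} format shown earlier:

>>> from PIL import ImageDraw

>>> # Sketch: draw each detected box and its label/score on a copy of
>>> # the input image. Assumes the output format shown earlier.
>>> annotated = image.copy()
>>> draw = ImageDraw.Draw(annotated)
>>> for detection in pipe(image):
...     box = detection["box"]
...     draw.rectangle(
...         (box["xmin"], box["ymin"], box["xmax"], box["ymax"]),
...         outline="red",
...         width=2,
...     )
...     draw.text(
...         (box["xmin"], box["ymin"]),
...         f"{detection['label']}: {detection['score']:.2f}",
...         fill="red",
...     )
>>> annotated.save("detections.png")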