---
license: apache-2.0
tags:
- RyzenAI
- image-classification
- onnx
datasets:
- imagenet-1k
metrics:
- accuracy
---
## MobileNetV2
MobileNetV2 is an image classification model pre-trained on the ImageNet-1k dataset at resolution 224x224. It was introduced in the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler et al. and first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).
We provide a modified version that is supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com/en/latest/).
## Model description
MobileNetV2 is a simple network architecture that allows building a family of highly efficient mobile models with memory-efficient inference. MobileNetV2 is typically used for image classification, but it can also serve as a backbone for object detection and image segmentation, delivering competitive results across all of these tasks.
Models are named **mobilenet_v2_depth_size**, for example **mobilenet_v2_1.4_224**, where **1.4** is the depth multiplier and **224** is the resolution of the input images the model was trained on.
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co./models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
## How to use
### Installation
1. Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
2. Run the following command to install the prerequisites for this model.
```shell
pip install -r requirements.txt
```
### Test & Evaluation
- Inference one image (Image Classification):
```python
import sys

import onnxruntime
import torch
import torchvision.transforms as transforms
from PIL import Image

image_path = sys.argv[1]
onnx_model = sys.argv[2]

# Standard ImageNet preprocessing: resize, center-crop to 224x224, normalize.
normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
img_transformer = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize])
img_tensor = img_transformer(Image.open(image_path).convert("RGB")).unsqueeze(0)
# The model expects NHWC input, so move the channel axis to the end.
img_tensor = torch.permute(img_tensor, (0, 2, 3, 1))

so = onnxruntime.SessionOptions()
ort_session = onnxruntime.InferenceSession(
    onnx_model, so,
    providers=['CPUExecutionProvider'],
    provider_options=None)

ort_input = {ort_session.get_inputs()[0].name: img_tensor.numpy()}
output = ort_session.run(None, ort_input)

# Convert logits to probabilities and report the five most likely classes.
top5_probabilities, top5_class_indices = torch.topk(
    torch.nn.functional.softmax(torch.tensor(output[0]), dim=1), k=5)
print(top5_probabilities)
print(top5_class_indices)
```
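  The snippet reads the image path and the model path from the command line, so it can be saved to a file and invoked directly. Assuming it is saved as `infer.py` (a hypothetical name; any filename works):
```shell
python infer.py ./test_image.jpg ./mobilenetv2_int8.onnx
```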
- Evaluate the ImageNet validation dataset (50,000 images) using `eval_onnx.py`:
- Test the accuracy of the quantized model on CPU:
```shell
python eval_onnx.py --onnx_model=./mobilenetv2_int8.onnx --data_dir=./{DATA_PATH}
```
- Test the accuracy of the quantized model on IPU (a sketch of the underlying provider setup follows below):
```shell
python eval_onnx.py --onnx_model=./mobilenetv2_int8.onnx --data_dir=./{DATA_PATH} --ipu --provider_config Path\To\vaip_config.json
```
- The `vaip_config.json` file is provided in the `voe-4.0-win_amd64` folder of the `ryzen-ai-sw-1.0.zip` package.
`DATA_PATH`: path to the ImageNet dataset directory that contains the `validation` folder.
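For reference, the `--ipu` flag corresponds to running the session through ONNX Runtime's Vitis AI execution provider, which is how Ryzen AI dispatches work to the IPU/NPU. A minimal sketch of that session setup, assuming the Ryzen AI 1.0 ONNX Runtime package with `VitisAIExecutionProvider` is installed:
```python
import onnxruntime

# Sketch: open the quantized model on the Ryzen AI IPU via the Vitis AI
# execution provider; 'config_file' points at the vaip_config.json shipped
# with the Ryzen AI SDK (see the note above).
session = onnxruntime.InferenceSession(
    "mobilenetv2_int8.onnx",
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": r"Path\To\vaip_config.json"}],
)
```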
### Performance
Dataset: ImageNet validation dataset (50,000 images).
| Metric         | Accuracy on IPU |
| :------------: | :-------------: |
| Top-1 accuracy |     75.62%      |
| Top-5 accuracy |     92.52%      |
## Citation
```bibtex
@article{sandler2018mobilenetv2,
  author  = {Mark Sandler and
             Andrew G. Howard and
             Menglong Zhu and
             Andrey Zhmoginov and
             Liang{-}Chieh Chen},
  title   = {MobileNetV2: Inverted Residuals and Linear Bottlenecks},
  journal = {CoRR},
  volume  = {abs/1801.04381},
  year    = {2018},
  url     = {http://arxiv.org/abs/1801.04381},
}
``` |