zhengrongzhang committed on
Commit
6f8454f
1 Parent(s): b9424a5

init model

Files changed (4)
  1. README.md +122 -0
  2. eval_onnx.py +161 -0
  3. mobilenetv2_int8.onnx +3 -0
  4. requirements.txt +5 -0
README.md ADDED
@@ -0,0 +1,122 @@
+ ---
+ license: apache-2.0
+ tags:
+ - RyzenAI
+ - image-classification
+ - onnx
+ datasets:
+ - imagenet-1k
+ metrics:
+ - accuracy
+ ---
+
+ ## MobileNetV2
+
+ MobileNetV2 is an image classification model pre-trained on the ImageNet-1k dataset at resolution 224x224. It was introduced in the paper [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381) by Mark Sandler et al. and first released in [this repository](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet).
+
+ We developed a modified version that is supported by [AMD Ryzen AI](https://ryzenai.docs.amd.com/en/latest/).
+
+
+ ## Model description
+
+ MobileNetV2 is a simple network architecture that makes it possible to build a family of highly efficient mobile models and supports memory-efficient inference. It is typically used for image classification, but it can also serve object detection and image segmentation tasks, with competitive results across all three.
+
+ Each variant is named **mobilenet_v2_depth_size**, for example **mobilenet_v2_1.4_224**, where **1.4** is the depth multiplier and **224** is the resolution of the input images the model was trained on.
+
+
+ ## Intended uses & limitations
+
+ You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilenet_v2) to look for fine-tuned versions on a task that interests you.
+
+
+ ## How to use
+
+ ### Installation
+
+ 1. Follow [Ryzen AI Installation](https://ryzenai.docs.amd.com/en/latest/inst.html) to prepare the environment for Ryzen AI.
+
+ 2. Run the following command to install the prerequisites for this model.
+
+ ```shell
+ pip install -r requirements.txt
+ ```
+
+ ### Test & Evaluation
+
+ - Run inference on a single image (image classification):
+
+ ```python
+ import sys
+ import onnxruntime
+ import torch
+ import torchvision.transforms as transforms
+ from PIL import Image
+
+ # Usage: python <script>.py <image_path> <onnx_model>
+ image_path = sys.argv[1]
+ onnx_model = sys.argv[2]
+
+ # Standard ImageNet preprocessing: resize, center-crop to 224x224, normalize.
+ normalize = transforms.Normalize(
+     mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ img_transformer = transforms.Compose([
+     transforms.Resize(256),
+     transforms.CenterCrop(224),
+     transforms.ToTensor(),
+     normalize])
+ # Convert to RGB so grayscale/RGBA inputs also work.
+ img_tensor = img_transformer(Image.open(image_path).convert('RGB')).unsqueeze(0)
+
+ so = onnxruntime.SessionOptions()
+ ort_session = onnxruntime.InferenceSession(
+     onnx_model, so,
+     providers=['CPUExecutionProvider'],
+     provider_options=None)
+ ort_input = {ort_session.get_inputs()[0].name: img_tensor.numpy()}
+
+ output = ort_session.run(None, ort_input)
+ # Softmax over the class dimension, then take the five most likely classes.
+ top5_probabilities, top5_class_indices = torch.topk(
+     torch.nn.functional.softmax(torch.tensor(output[0]), dim=1), k=5)
+ ```
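+
+ The returned `top5_class_indices` are ImageNet-1k class IDs. As a minimal sketch, assuming a local `imagenet_classes.txt` file with one class name per line in index order (not included in this repository), they can be mapped to labels like this:
+
+ ```python
+ # Hypothetical label file: one ImageNet class name per line, index-aligned.
+ with open("imagenet_classes.txt") as f:
+     categories = [line.strip() for line in f]
+
+ for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
+     print(f"{categories[int(idx)]}: {float(prob):.4f}")
+ ```
+
+ To run the same example on IPU instead of CPU, construct the session with the Vitis AI execution provider, mirroring the provider setup in `eval_onnx.py` (`vaip_config.json` comes from the Ryzen AI installation, see below):
+
+ ```python
+ ort_session = onnxruntime.InferenceSession(
+     onnx_model, so,
+     providers=['VitisAIExecutionProvider'],
+     provider_options=[{'config_file': 'vaip_config.json'}])
+ ```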
+
+ - Evaluate the ImageNet validation dataset (50,000 images) using `eval_onnx.py`.
+
+   - Test accuracy of the quantized model on CPU.
+
+   ```shell
+   python eval_onnx.py --onnx_model=./mobilenetv2_int8.onnx --data_dir=./{DATA_PATH}
+   ```
+
+   - Test accuracy of the quantized model on IPU.
+
+   ```shell
+   python eval_onnx.py --onnx_model=./mobilenetv2_int8.onnx --data_dir=./{DATA_PATH} --ipu --provider_config Path\To\vaip_config.json
+   ```
+   - Users can find `vaip_config.json` in the `voe-4.0-win_amd64` folder of the `ryzen-ai-sw-1.0.zip` file.
+
+ `DATA_PATH`: path to the ImageNet dataset; it must contain the `validation` folder.
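+
+ Because `eval_onnx.py` loads images with `torchvision.datasets.ImageFolder`, the `validation` folder is expected to contain one subdirectory per class (the folder and file names below are only illustrative):
+
+ ```
+ {DATA_PATH}/
+ └── validation/
+     ├── n01440764/
+     │   ├── ILSVRC2012_val_00000293.JPEG
+     │   └── ...
+     ├── n01443537/
+     │   └── ...
+     └── ...
+ ```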
+
+ ### Performance
+
+ Dataset: ImageNet validation dataset (50,000 images).
+
+ | Metric                 | Accuracy on IPU |
+ | :--------------------: | :-------------: |
+ | Top-1 / Top-5 accuracy | 75.62% / 92.52% |
+
+ ## Citation
+
+
+ ```bibtex
+ @article{mobilenetv2,
+   author  = {Mark Sandler and
+              Andrew G. Howard and
+              Menglong Zhu and
+              Andrey Zhmoginov and
+              Liang{-}Chieh Chen},
+   title   = {MobileNetV2: Inverted Residuals and Linear Bottlenecks},
+   year    = {2018},
+   url     = {http://arxiv.org/abs/1801.04381},
+ }
+ ```
eval_onnx.py ADDED
@@ -0,0 +1,161 @@
+ #!/usr/bin/env python
+
+ from typing import Tuple
+
+ import argparse
+ import onnxruntime
+ import os
+ import sys
+ import time
+ import torch
+ import torchvision.datasets as datasets
+ import torchvision.transforms as transforms
+ from torchvision.transforms import InterpolationMode
+ from torch.utils.data import DataLoader
+ from tqdm import tqdm
+
+ parser = argparse.ArgumentParser()
+ parser.add_argument(
+     "--onnx_model", default="model.onnx", help="Input onnx model")
+ parser.add_argument(
+     "--data_dir",
+     default="/workspace/dataset/imagenet",
+     help="Directory of dataset")
+ parser.add_argument(
+     "--batch_size", default=1, type=int, help="Evaluation batch size")
+ parser.add_argument(
+     "--ipu",
+     action="store_true",
+     help="Use IPU for inference.",
+ )
+ parser.add_argument(
+     "--provider_config",
+     type=str,
+     default="vaip_config.json",
+     help="Path of the config file for setting provider_options.",
+ )
+ args = parser.parse_args()
+
+ class AverageMeter(object):
+     """Computes and stores the average and current value"""
+
+     def __init__(self, name, fmt=':f'):
+         self.name = name
+         self.fmt = fmt
+         self.reset()
+
+     def reset(self):
+         self.val = 0
+         self.avg = 0
+         self.sum = 0
+         self.count = 0
+
+     def update(self, val, n=1):
+         self.val = val
+         self.sum += val * n
+         self.count += n
+         self.avg = self.sum / self.count
+
+     def __str__(self):
+         fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
+         return fmtstr.format(**self.__dict__)
+
+ def accuracy(output: torch.Tensor,
+              target: torch.Tensor,
+              topk: Tuple[int] = (1,)) -> Tuple[float]:
+     """Computes the accuracy over the k top predictions for the specified values of k.
+     Args:
+         output: Prediction of the model.
+         target: Ground truth labels.
+         topk: Topk accuracy to compute.
+     Returns:
+         Accuracy results according to 'topk'.
+     """
+
+     with torch.no_grad():
+         maxk = max(topk)
+         batch_size = target.size(0)
+
+         _, pred = output.topk(maxk, 1, True, True)
+         pred = pred.t()
+         correct = pred.eq(target.view(1, -1).expand_as(pred))
+
+         res = []
+         for k in topk:
+             correct_k = correct[:k].contiguous().view(-1).float().sum(0, keepdim=True)
+             res.append(correct_k.mul_(100.0 / batch_size))
+         return res
+
+ def prepare_data_loader(data_dir: str,
+                         batch_size: int = 100,
+                         workers: int = 8) -> torch.utils.data.DataLoader:
+     """Returns a validation data loader of ImageNet by given `data_dir`.
+     Args:
+         data_dir: Directory where the images are stored. There must be a subdirectory named
+             'validation' that stores the validation set of ImageNet.
+         batch_size: Batch size of data loader.
+         workers: How many subprocesses to use for data loading.
+     Returns:
+         An object of torch.utils.data.DataLoader.
+     """
+
+     valdir = os.path.join(data_dir, 'validation')
+
+     normalize = transforms.Normalize(
+         mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+     val_dataset = datasets.ImageFolder(
+         valdir,
+         transforms.Compose([
+             transforms.Resize(256, interpolation=InterpolationMode.BICUBIC),
+             transforms.CenterCrop(224),
+             transforms.ToTensor(),
+             normalize,
+         ]))
+
+     return torch.utils.data.DataLoader(
+         val_dataset,
+         batch_size=batch_size,
+         shuffle=False,
+         num_workers=workers,
+         pin_memory=True)
+
+ def val_imagenet():
+     """Validate ONNX model on ImageNet dataset."""
+     print(f'Current onnx model: {args.onnx_model}')
+
+     if args.ipu:
+         providers = ["VitisAIExecutionProvider"]
+         provider_options = [{"config_file": args.provider_config}]
+     else:
+         providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
+         provider_options = None
+     ort_session = onnxruntime.InferenceSession(
+         args.onnx_model, providers=providers, provider_options=provider_options)
+
+     val_loader = prepare_data_loader(args.data_dir, args.batch_size)
+
+     top1 = AverageMeter('Acc@1', ':6.2f')
+     top5 = AverageMeter('Acc@5', ':6.2f')
+
+     start_time = time.time()
+     val_loader = tqdm(val_loader, file=sys.stdout)
+     with torch.no_grad():
+         for batch_idx, (images, targets) in enumerate(val_loader):
+             inputs = images.numpy()
+             ort_inputs = {ort_session.get_inputs()[0].name: inputs}
+
+             outputs = ort_session.run(None, ort_inputs)
+             outputs = torch.from_numpy(outputs[0])
+
+             acc1, acc5 = accuracy(outputs, targets, topk=(1, 5))
+             top1.update(acc1, images.size(0))
+             top5.update(acc5, images.size(0))
+
+     current_time = time.time()
+     print('Test Top1 {:.2f}%\tTop5 {:.2f}%\tTime {:.2f}s\n'.format(
+         float(top1.avg), float(top5.avg), (current_time - start_time)))
+
+     return top1.avg, top5.avg
+
+ if __name__ == '__main__':
+     val_imagenet()
mobilenetv2_int8.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:370de7c9cd44e725221de3019fa0235c3fef2c0a9c436b5c4bc29eb5564690ca
+ size 24459517
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ torch>=1.12.0
+ torchvision>=0.13.0
+ numpy
+ tqdm
+ #onnxruntime