---
license: apache-2.0
datasets:
  - COCO
metrics:
  - mAP
language:
  - en
tags:
  - RyzenAI
  - object-detection
  - vision
  - YOLO
  - Pytorch
---

# YOLOv3 model trained on COCO

YOLOv3 was trained on the COCO object detection dataset (118k annotated images) at a resolution of 416x416. It was released in https://github.com/ultralytics/yolov3/tree/v8.

We have developed a modified version that is supported by AMD Ryzen AI.

## Model description

YOLOv3 🚀 is the world's most loved vision AI, representing Ultralytics open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development.

## Intended uses & limitations

You can use the raw model for object detection. See the model hub to find all available YOLOv3 models.

## How to use

### Installation

Follow the Ryzen AI Installation guide to prepare the environment for Ryzen AI, then run the following command to install the prerequisites for this model.

```bash
pip install -r requirements.txt
```
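To sanity-check the setup, you can list the execution providers ONNX Runtime can see. On a correctly configured Ryzen AI environment the list should include `VitisAIExecutionProvider` (an assumption based on the Ryzen AI documentation; this snippet is only an optional check):

```python
# Optional sanity check: confirm the Ryzen AI execution provider is registered.
import onnxruntime

providers = onnxruntime.get_available_providers()
print(providers)
# "VitisAIExecutionProvider" is assumed to be the provider name registered
# by the Ryzen AI ONNX Runtime package.
assert "VitisAIExecutionProvider" in providers, "Ryzen AI execution provider not found"
```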

### Data Preparation (optional: for accuracy evaluation)

The MSCOCO2017 dataset contains 118,287 images for training and 5,000 images for validation.

1. Download the COCO dataset.
2. Run `general_json2yolo.py` to generate the `labels` folder and `val2017.txt` (a conceptual sketch of this conversion appears after the directory layout below):

```bash
python general_json2yolo.py
```

Finally, the COCO dataset should look like this:

```
+ coco/
    + annotations/
        + instances_val2017.json
        + ...
    + images/
        + val2017/
            + 000000000139.jpg
            + 000000000285.jpg
            + ...
    + labels/
        + val2017/
            + 000000000139.txt
            + 000000000285.txt
            + ...
    + val2017.txt
```
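For reference, the core of that conversion is mapping COCO's absolute `[x_min, y_min, width, height]` boxes to YOLO's normalized `class cx cy w h` lines, one `.txt` file per image. A minimal sketch of the idea (illustrative only; the repo's `general_json2yolo.py` is the authoritative converter and also remaps COCO category ids to contiguous classes):

```python
import json

def coco_box_to_yolo(box, img_w, img_h):
    """Convert a COCO bbox to normalized YOLO (cx, cy, w, h)."""
    x, y, w, h = box                       # COCO: absolute [x_min, y_min, w, h]
    cx = (x + w / 2) / img_w               # normalized box-center x
    cy = (y + h / 2) / img_h               # normalized box-center y
    return cx, cy, w / img_w, h / img_h

with open("coco/annotations/instances_val2017.json") as f:
    data = json.load(f)

images = {im["id"]: im for im in data["images"]}
for ann in data["annotations"]:
    im = images[ann["image_id"]]
    cx, cy, w, h = coco_box_to_yolo(ann["bbox"], im["width"], im["height"])
    line = f'{ann["category_id"]} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}'
    # The real script appends each line to labels/val2017/<image_stem>.txt
```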

### Test & Evaluation

The following snippet shows how the quantized ONNX model is run end to end: pre-process an image, run the session, post-process, apply NMS, and draw boxes. The helper functions (`pre_process`, `post_process`, `non_max_suppression`, `scale_coords`, `plot_one_box`) and the `names`, `opt`, `providers`, and `provider_options` variables are defined in this repo's inference scripts.

```python
import os
import random

import cv2
import onnxruntime

# Create an ONNX Runtime session; `providers` and `provider_options`
# select the Ryzen AI (Vitis AI) execution provider.
onnx_path = "yolov3-8.onnx"
onnx_model = onnxruntime.InferenceSession(
    onnx_path, providers=providers, provider_options=provider_options)

path = opt.img
new_path = os.path.join(opt.out, "demo_infer.jpg")

conf_thres, iou_thres, classes, agnostic_nms, max_det = 0.25, \
    0.45, None, False, 1000

# Read the image, pre-process it, and run inference.
img0 = cv2.imread(path)
img = pre_process(img0)
onnx_input = {onnx_model.get_inputs()[0].name: img}
onnx_output = onnx_model.run(None, onnx_input)
onnx_output = post_process(onnx_output)

# Filter overlapping detections with non-maximum suppression.
pred = non_max_suppression(
    onnx_output[0],
    conf_thres,
    iou_thres,
    multi_label=False,
    classes=classes,
    agnostic=agnostic_nms)

# One random color per class for drawing.
colors = [[random.randint(0, 255) for _ in range(3)]
          for _ in range(len(names))]
det = pred[0]
im0 = img0.copy()

if len(det):
    # Rescale boxes from imgsz to im0 size
    det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round()

    # Write results
    for *xyxy, conf, cls in reversed(det):
        label = '%s %.2f' % (names[int(cls)], conf)
        plot_one_box(xyxy, im0, label=label, color=colors[int(cls)])

# Save the annotated result image.
cv2.imwrite(new_path, im0)
```
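The `pre_process` helper is not shown above. A minimal sketch, assuming this model follows the usual Ultralytics YOLOv3 convention (letterbox resize to 416x416, BGR to RGB, HWC to CHW, scale to [0, 1]); the repo's own implementation may differ in details:

```python
import cv2
import numpy as np

def pre_process(img0, size=416):
    """Illustrative YOLOv3-style preprocessing (assumed, not the repo's exact code)."""
    h0, w0 = img0.shape[:2]
    r = min(size / h0, size / w0)              # scale ratio, preserving aspect
    new_w, new_h = int(round(w0 * r)), int(round(h0 * r))
    img = cv2.resize(img0, (new_w, new_h))
    # Pad to size x size with gray borders (letterbox).
    top = (size - new_h) // 2
    left = (size - new_w) // 2
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)
    canvas[top:top + new_h, left:left + new_w] = img
    x = canvas[:, :, ::-1].transpose(2, 0, 1)  # BGR -> RGB, HWC -> CHW
    x = np.ascontiguousarray(x, dtype=np.float32) / 255.0
    return x[None]                             # add batch dimension
```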
- Run inference for a single image:

```bash
python onnx_inference.py --img INPUT_IMG_PATH --out OUTPUT_DIR --ipu --provider_config Path\To\vaip_config.json
```

Note: `vaip_config.json` is located in the Ryzen AI setup package (refer to Installation).
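The `providers` / `provider_options` passed to the `InferenceSession` point ONNX Runtime at this config file. A sketch, assuming the `VitisAIExecutionProvider` name and `config_file` option described in the Ryzen AI documentation:

```python
import onnxruntime

# Assumed values based on the Ryzen AI documentation; adjust the path to
# where your Ryzen AI setup package places vaip_config.json.
providers = ["VitisAIExecutionProvider"]
provider_options = [{"config_file": "Path/To/vaip_config.json"}]

session = onnxruntime.InferenceSession(
    "yolov3-8.onnx", providers=providers, provider_options=provider_options)
```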

- Test accuracy of the quantized model:

```bash
python onnx_test.py --ipu --provider_config Path\To\vaip_config.json
```

## Performance

| Metric | Accuracy on IPU |
| :----: | :----: |
| mAP@0.50:0.95 | 0.389 |
## Citation

```bibtex
@misc{redmon2018yolov3,
      title={YOLOv3: An Incremental Improvement},
      author={Joseph Redmon and Ali Farhadi},
      year={2018},
      eprint={1804.02767},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```