Model description

PAN is a lightweight image super-resolution method with pixel attention. It was introduced in the paper Efficient Image Super-Resolution Using Pixel Attention by Hengyuan Zhao et al. and first released in this repository.

We changed the negative slope of the leaky ReLU activations of the original model and replaced the sigmoid activation with a hard sigmoid to make the model compatible with AMD Ryzen AI. We then loaded the published model parameters and fine-tuned them on the DIV2K dataset.
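
As a rough illustration of the activation change, here is a minimal sketch of PAN's pixel attention (PA) branch with the hard-sigmoid substitution. This is an illustrative PyTorch module, not the exact fine-tuned architecture; the layer shapes are assumptions.

    import torch.nn as nn

    class PixelAttention(nn.Module):
        # PA branch: a 1x1 convolution produces a per-pixel, per-channel
        # attention map in [0, 1] that rescales the input features.
        def __init__(self, channels: int):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, kernel_size=1)
            self.act = nn.Hardsigmoid()  # replaces the original sigmoid for Ryzen AI

        def forward(self, x):
            return x * self.act(self.conv(x))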

Intended uses & limitations

You can use the raw model for image super-resolution. See the model hub for all available PAN models.

How to use

Installation

Follow the Ryzen AI Installation instructions to prepare the environment for Ryzen AI. Then run the following command to install the prerequisites for this model.

pip install -r requirements.txt 

Data Preparation (optional: for accuracy evaluation)

  1. Download the benchmark dataset from https://cv.snu.ac.kr/research/EDSR/benchmark.tar.
  2. Unzip the dataset and place it under the project folder, organized as follows:
PAN
└── dataset
    └── benchmark
        ├── Set5
        │   ├── HR
        │   │   ├── baby.png
        │   │   └── ...
        │   └── LR_bicubic
        │       └── X2
        │           ├── babyx2.png
        │           └── ...
        ├── Set14
        └── ...
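
A quick, illustrative way to sanity-check this layout (the babyx2.png naming rule follows the tree above; this script is not part of the repository):

    import glob
    import os

    # Pair each Set5 HR image with its X2 LR counterpart (baby.png <-> babyx2.png).
    hr_dir = os.path.join('dataset', 'benchmark', 'Set5', 'HR')
    lr_dir = os.path.join('dataset', 'benchmark', 'Set5', 'LR_bicubic', 'X2')
    for hr_path in sorted(glob.glob(os.path.join(hr_dir, '*.png'))):
        name, ext = os.path.splitext(os.path.basename(hr_path))
        lr_path = os.path.join(lr_dir, name + 'x2' + ext)
        assert os.path.exists(lr_path), 'missing LR image: ' + lr_path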

Test & Evaluation

The core of the inference script, infer_onnx.py, looks like this:

    import argparse

    import cv2
    import numpy as np
    import onnxruntime

    # tiling_inference (tile-wise SR over the ONNX session) is defined in this
    # repository's inference code; see the sketch after this block for the idea.

    parser = argparse.ArgumentParser(description='PAN SR')
    parser.add_argument('--onnx_path',
                        type=str,
                        default='PAN_int8.onnx',
                        help='Path to the ONNX model.')
    parser.add_argument('--image_path',
                        type=str,
                        default='test_data/test.png',
                        help='Path to your input image.')
    parser.add_argument('--output_path',
                        type=str,
                        default='test_data/sr.png',
                        help='Path to your output image.')
    parser.add_argument('--provider_config',
                        type=str,
                        default='vaip_config.json',
                        help='Path of the config file for setting provider_options.')
    parser.add_argument('--ipu', action='store_true', help='Use IPU for inference.')

    args = parser.parse_args()

    onnx_file_name = args.onnx_path
    image_path = args.image_path
    output_path = args.output_path

    # Select the Vitis AI execution provider for the IPU, or fall back to CPU.
    if args.ipu:
        providers = ["VitisAIExecutionProvider"]
        provider_options = [{"config_file": args.provider_config}]
    else:
        providers = ['CPUExecutionProvider']
        provider_options = None
    ort_session = onnxruntime.InferenceSession(onnx_file_name, providers=providers, provider_options=provider_options)

    # BGR HWC uint8 -> NCHW float32, as expected by the model.
    lr = cv2.imread(image_path)[np.newaxis, :, :, :].transpose((0, 3, 1, 2)).astype(np.float32)
    sr = tiling_inference(ort_session, lr, 8, (56, 56))
    sr = np.clip(sr, 0, 255)
    sr = sr.squeeze().transpose((1, 2, 0)).astype(np.uint8)
    cv2.imwrite(output_path, sr)
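
For intuition, here is a minimal sketch of what a tiled-inference helper like tiling_inference does. This is an illustrative reimplementation, not the repository's actual function; it assumes an X2 model and an input at least one tile large.

    import numpy as np

    def tiled_sr(session, lr, overlap=8, tile=(56, 56), scale=2):
        # Run the fixed-input-size ONNX model over overlapping tiles of the
        # low-resolution input and write each upscaled tile into the output;
        # later tiles simply overwrite the overlap region in this sketch.
        _, c, h, w = lr.shape
        th, tw = tile
        sr = np.zeros((1, c, h * scale, w * scale), dtype=np.float32)
        input_name = session.get_inputs()[0].name
        for y in range(0, h, th - overlap):
            for x in range(0, w, tw - overlap):
                y0, x0 = min(y, h - th), min(x, w - tw)  # clamp tiles to the border
                patch = lr[:, :, y0:y0 + th, x0:x0 + tw]
                out = session.run(None, {input_name: patch})[0]
                sr[:, :, y0 * scale:(y0 + th) * scale,
                         x0 * scale:(x0 + tw) * scale] = out
        return sr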
  • Run inference for a single image
python infer_onnx.py --onnx_path PAN_int8.onnx --image_path /Path/To/Your/Image --ipu --provider_config Path\To\vaip_config.json
  • Test accuracy of the quantized model
python eval_onnx.py --onnx_path PAN_int8.onnx --data_test Set5 --ipu --provider_config Path\To\vaip_config.json

Note: vaip_config.json is located in the Ryzen AI setup package (refer to Installation above).

Performance

Method           Scale  FLOPs  Set5 (PSNR / SSIM)
PAN (float)      X2     141G   38.00 / 0.961
PAN_amd (float)  X2     141G   37.859 / 0.960
PAN_amd (int8)   X2     141G   37.18 / 0.952
  • Note: FLOPs are computed for an output resolution of 360x640.
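
The PSNR half of each "PSNR / SSIM" pair can be reproduced along these lines (a minimal sketch for 8-bit images; the exact evaluation protocol, e.g. Y-channel conversion and border cropping, is handled by eval_onnx.py and is not shown here):

    import numpy as np

    def psnr(sr: np.ndarray, hr: np.ndarray) -> float:
        # Peak signal-to-noise ratio between two uint8 images, in dB.
        mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
        return float('inf') if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)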
Citation

@inproceedings{zhao2020efficient,
  title={Efficient image super-resolution using pixel attention},
  author={Zhao, Hengyuan and Kong, Xiangtao and He, Jingwen and Qiao, Yu and Dong, Chao},
  booktitle={European Conference on Computer Vision},
  pages={56--72},
  year={2020},
  organization={Springer}
}