---
tags:
  - vision
  - image-classification
datasets:
  - imagenet
metrics:
  - accuracy
library_tag: MogaNet
license: apache-2.0
language:
  - en
library_name: timm
pipeline_tag: image-classification
widget:
  - src: >-
      https://huggingface.co./datasets/mishig/sample_images/resolve/main/tiger.jpg
    example_title: Tiger
---

# Model card for moganet_tiny_224_in1k

MogaNet is a new family of efficient ConvNets with preferable parameter-performance trade-offs. This checkpoint was trained on ImageNet-1k (1.28 million images, 1,000 classes). MogaNet was first introduced in the paper [Efficient Multi-order Gated Aggregation Network](https://arxiv.org/abs/2211.03295) and released in [Westlake/MogaNet](https://github.com/Westlake-AI/MogaNet) and [Westlake/openmixup](https://github.com/Westlake-AI/openmixup).

## Description

Since the recent success of Vision Transformers (ViTs), explorations toward ViT-style architectures have triggered the resurgence of ConvNets. In this work, we explore the representation ability of modern ConvNets from a novel view of multi-order game-theoretic interaction, which reflects inter-variable interaction effects w.r.t. contexts of different scales based on game theory. Within the modern ConvNet framework, we tailor the two feature mixers with conceptually simple yet effective depthwise convolutions to facilitate middle-order information across spatial and channel spaces respectively. In this light, a new family of pure ConvNet architectures, dubbed MogaNet, is proposed, which shows excellent scalability and attains competitive results among state-of-the-art models with more efficient use of parameters on ImageNet and multifarious typical vision benchmarks, including COCO object detection, ADE20K semantic segmentation, 2D & 3D human pose estimation, and video prediction. Typically, MogaNet hits 80.0% and 87.8% top-1 accuracy with 5.2M and 181M parameters on ImageNet, outperforming ParC-Net-S and ConvNeXt-L while saving 59% FLOPs and 17M parameters.

*(MogaNet model architecture figure)*
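To make the idea of depthwise-convolution feature mixers concrete, here is a minimal, hypothetical sketch of a gated spatial mixer in PyTorch. It only illustrates the general pattern described above (a gating branch modulating multi-scale depthwise context); it is not the actual MogaNet module, whose real implementation lives in the repository.

```python
import torch
import torch.nn as nn

class ToyGatedDWMixer(nn.Module):
    """Illustrative sketch only (NOT the actual MogaNet block): a spatial
    feature mixer built from depthwise convolutions, where a gating branch
    modulates context aggregated at multiple scales."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Conv2d(dim, dim, kernel_size=1)  # gating branch
        # local context: 3x3 depthwise conv
        self.dw_small = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)
        # wider context: dilated 7x7 depthwise conv (effective 19x19 field)
        self.dw_large = nn.Conv2d(dim, dim, 7, padding=9, dilation=3, groups=dim)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.act = nn.SiLU()

    def forward(self, x):
        context = self.dw_large(self.dw_small(x))        # multi-scale context
        return self.proj(self.act(self.gate(x)) * context)  # gate * context

x = torch.randn(1, 64, 56, 56)
print(ToyGatedDWMixer(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```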

## Model Usage

Set up the repository before using the model; the usage examples below import the MogaNet model definitions from it:

```bash
git clone https://github.com/Westlake-AI/MogaNet
cd MogaNet
```

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm
import models  # registers MogaNet models with timm (from the cloned repo)

img = Image.open(
    urlopen('https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('moganet_tiny_1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
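To read out the prediction, the two returned tensors can be iterated directly; this is a small follow-up sketch (mapping the indices to human-readable ImageNet labels is left out):

```python
# Print the top-5 class indices with their probabilities (in percent).
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f"class index {idx.item():4d}: {prob.item():5.2f}%")
```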

### Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm
import models  # registers MogaNet models with timm (from the cloned repo)

img = Image.open(
    urlopen('https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'moganet_tiny_1k',
    pretrained=True,
    fork_feat=True,  # return intermediate feature maps instead of logits
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    print(o.shape)
```
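If a single embedding per image is needed instead of the raw maps, one option, assuming `output` is a list of 4D stage features with the coarsest stage last, is to global-average-pool the final feature map:

```python
import torch.nn.functional as F

# Assumes output[-1] is the lowest-resolution stage feature map, shape (1, C, H, W).
embedding = F.adaptive_avg_pool2d(output[-1], 1).flatten(1)  # -> (1, C)
print(embedding.shape)
```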

## Model Comparison

| Model | Resolution | Params (M) | FLOPs (G) | Top-1 / Top-5 (%) | Download |
|---|---|---|---|---|---|
| moganet_xtiny_224_in1k | 224x224 | 2.97 | 0.80 | 76.5 / 93.4 | GitHub \| Hugging Face🤗 |
| moganet_xtiny_256_in1k | 256x256 | 2.97 | 1.04 | 77.2 / 93.8 | GitHub \| Hugging Face🤗 |
| moganet_tiny_224_in1k | 224x224 | 5.20 | 1.10 | 79.0 / 94.6 | GitHub \| Hugging Face🤗 |
| moganet_tiny_256_in1k | 256x256 | 5.20 | 1.44 | 79.6 / 94.9 | GitHub \| Hugging Face🤗 |
| moganet_small_224_in1k | 224x224 | 25.3 | 4.97 | 83.4 / 96.9 | GitHub \| Hugging Face🤗 |
| moganet_base_224_in1k | 224x224 | 43.9 | 9.93 | 84.3 / 97.0 | GitHub \| Hugging Face🤗 |
| moganet_large_224_in1k | 224x224 | 82.5 | 15.9 | 84.7 / 97.1 | GitHub \| Hugging Face🤗 |
| moganet_xlarge_224_in1k | 224x224 | 180.8 | 34.5 | 85.1 / 97.4 | GitHub \| Hugging Face🤗 |
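As a quick sanity check against the Params column above, one can count the parameters of a loaded model (a sketch reusing the `model` created in the usage examples; moganet_tiny should come out near 5.2M):

```python
# Count parameters of the loaded model and compare with the table.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.2f}M parameters")
```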

## Citation

```bibtex
@article{Li2022MogaNet,
  title={Efficient Multi-order Gated Aggregation Network},
  author={Siyuan Li and Zedong Wang and Zicheng Liu and Cheng Tan and Haitao Lin and Di Wu and Zhiyuan Chen and Jiangbin Zheng and Stan Z. Li},
  journal={ArXiv},
  year={2022},
  volume={abs/2211.03295}
}
```