---
tags:
- vision
- image-classification
datasets:
- imagenet
metrics:
- accuracy
library_tag: MogaNet
license: apache-2.0
language:
- en
library_name: timm
pipeline_tag: image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
---

# Model card for moganet_xtiny_256_in1k

MogaNet is a new family of efficient ConvNets with favorable parameter-performance trade-offs; this model was trained on ImageNet-1k (1 million images, 1,000 classes). It was first introduced in the paper [MogaNet](https://arxiv.org/abs/2211.03295) and released in [Westlake/MogaNet](https://github.com/Westlake-AI/MogaNet) and [Westlake/openmixup](https://github.com/Westlake-AI/openmixup).

## Description

Since the recent success of Vision Transformers (ViTs), explorations toward ViT-style architectures have triggered the resurgence of ConvNets. In this work, we explore the representation ability of modern ConvNets from a novel view of multi-order game-theoretic interaction, which reflects inter-variable interaction effects w.r.t. contexts of different scales based on game theory. Within the modern ConvNet framework, we tailor the two feature mixers with conceptually simple yet effective depthwise convolutions to facilitate middle-order information across spatial and channel spaces, respectively. In this light, a new family of pure ConvNet architectures, dubbed MogaNet, is proposed, which shows excellent scalability and attains competitive results among state-of-the-art models, with more efficient use of parameters, on ImageNet and various typical vision benchmarks, including COCO object detection, ADE20K semantic segmentation, 2D & 3D human pose estimation, and video prediction. Typically, MogaNet hits 80.0% and 87.8% top-1 accuracy with 5.2M and 181M parameters on ImageNet, outperforming ParC-Net-S and ConvNeXt-L while saving 59% FLOPs and 17M parameters.
27
+
28
+ ![model image](https://user-images.githubusercontent.com/44519745/224821476-843a1814-1894-4fa7-b919-551f0a183856.jpg)
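
For intuition only, here is a minimal, hypothetical sketch of a gated depthwise-convolution spatial mixer in the spirit of the description above. It is a simplified stand-in, not the actual MogaNet block (see [Westlake/MogaNet](https://github.com/Westlake-AI/MogaNet) for the real implementation); the `GatedDWConvMixer` name and all hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class GatedDWConvMixer(nn.Module):
    """Illustrative spatial mixer: a depthwise-conv context branch
    modulated by a pointwise gating branch. A simplified stand-in for
    the paper's multi-order gated aggregation, not the MogaNet block."""

    def __init__(self, dim: int, kernel_size: int = 5):
        super().__init__()
        self.gate = nn.Conv2d(dim, dim, kernel_size=1)  # gating branch
        self.dwconv = nn.Conv2d(dim, dim, kernel_size,
                                padding=kernel_size // 2,
                                groups=dim)  # depthwise context branch
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)  # output projection
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # element-wise gating of spatial context, then projection
        return self.proj(self.act(self.gate(x)) * self.act(self.dwconv(x)))

# quick shape check: the mixer preserves the input shape
x = torch.randn(1, 32, 56, 56)
print(GatedDWConvMixer(32)(x).shape)  # torch.Size([1, 32, 56, 56])
```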

## Model Usage

Set up the code before using the model:
```bash
git clone https://github.com/Westlake-AI/MogaNet
cd MogaNet
pip install timm  # the examples below require torch and timm
```

### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm
import models  # registers MogaNet models with timm (run from the cloned repo)

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('moganet_xtiny_1k_sz256', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
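
To make the top-5 predictions human-readable, you can map the class indices back to label names. A minimal sketch, assuming the standard 1,000-class ImageNet label list hosted in the pytorch/hub repository (an external file, not part of this model release):

```python
from urllib.request import urlopen

# standard ImageNet-1k label names (assumption: this external file is available)
labels = urlopen(
    'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
).read().decode('utf-8').splitlines()

for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'{labels[idx.item()]}: {prob.item():.2f}%')
```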

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
import models  # registers MogaNet models with timm (run from the cloned repo)

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'moganet_xtiny_1k_sz256',
    pretrained=True,
    fork_feat=True,  # return per-stage feature maps instead of logits
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    print(o.shape)
```
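
If you want a single embedding vector rather than per-stage maps, one simple option (an illustrative sketch, not part of the repository's API) is to global-average-pool the final, lowest-resolution feature map:

```python
# global average pooling over the spatial dimensions of the last feature map;
# the channel width C depends on the model variant
embedding = output[-1].mean(dim=(2, 3))  # shape: (1, C)
print(embedding.shape)
```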

## Model Comparison

| Model | Resolution | Params (M) | FLOPs (G) | Top-1 / Top-5 (%) | Download |
|---|:---:|:---:|:---:|:---:|:---:|
| moganet_xtiny_224_in1k | 224x224 | 2.97 | 0.80 | 76.5 / 93.4 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_xtiny_sz224_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_xtiny_224_in1k) |
| moganet_xtiny_256_in1k | 256x256 | 2.97 | 1.04 | 77.2 / 93.8 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_xtiny_sz256_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_xtiny_256_in1k) |
| moganet_tiny_224_in1k | 224x224 | 5.20 | 1.10 | 79.0 / 94.6 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_tiny_sz224_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_tiny_224_in1k) |
| moganet_tiny_256_in1k | 256x256 | 5.20 | 1.44 | 79.6 / 94.9 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_tiny_sz256_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_tiny_256_in1k) |
| moganet_small_224_in1k | 224x224 | 25.3 | 4.97 | 83.4 / 96.9 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_small_sz224_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_small_224_in1k) |
| moganet_base_224_in1k | 224x224 | 43.9 | 9.93 | 84.3 / 97.0 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_base_sz224_8xbs128_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_base_224_in1k) |
| moganet_large_224_in1k | 224x224 | 82.5 | 15.9 | 84.7 / 97.1 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_large_sz224_8xbs64_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_large_224_in1k) |
| moganet_xlarge_224_in1k | 224x224 | 180.8 | 34.5 | 85.1 / 97.4 | [GitHub](https://github.com/Westlake-AI/MogaNet/releases/download/moganet-in1k-weights/moganet_xlarge_sz224_8xbs64_ep300.pth.tar) \| [Hugging Face🤗](https://huggingface.co/MogaNet/moganet_xlarge_224_in1k) |
100
+ ## Citation
101
+ ```bibtex
102
+ @article{Li2022MogaNet,
103
+ title={Efficient Multi-order Gated Aggregation Network},
104
+ author={Siyuan Li and Zedong Wang and Zicheng Liu and Cheng Tan and Haitao Lin and Di Wu and Zhiyuan Chen and Jiangbin Zheng and Stan Z. Li},
105
+ journal={ArXiv},
106
+ year={2022},
107
+ volume={abs/2211.03295}
108
+ }
109
+ ```