qubvel-hf committed
Commit f6e1613 · verified · 1 Parent(s): 0918519

Upload folder using huggingface_hub

Files changed (4)
  1. README.md +245 -0
  2. config.json +141 -0
  3. model.safetensors +3 -0
  4. preprocessor_config.json +22 -0
README.md ADDED
@@ -0,0 +1,245 @@
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: keypoint-detection
---

# ViTPose (base-sized model, simple decoder)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/ZuIwMdomy2_6aJ_JTE1Yd.png)

ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation and ViTPose+: Vision Transformer Foundation Model for Generic Body Pose Estimation. It obtains 81.1 AP on the MS COCO Keypoint test-dev set.

## Model Details

Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking advantage of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategy, as well as dealing with multiple pose tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state-of-the-art, i.e., 80.9 AP on the MS COCO test-dev set. The code and models are available at https://github.com/ViTAE-Transformer/ViTPose

### Model Description

This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** Yufei Xu, Jing Zhang, Qiming Zhang, Dacheng Tao
- **Funded by:** ARC FL-170100117 and IH-180100002
- **License:** Apache-2.0
- **Ported to 🤗 Transformers by:** Sangbum Choi and Niels Rogge

### Model Sources

- **Original repository:** https://github.com/ViTAE-Transformer/ViTPose
- **Paper:** https://arxiv.org/pdf/2204.12484
- **Demo:** https://huggingface.co/spaces?sort=trending&search=vitpose

## Uses

The ViTPose model, developed by the ViTAE-Transformer team, is primarily designed for pose estimation tasks. Some direct uses of the model:

- **Human pose estimation:** estimate the poses of humans in images or videos by locating key body joints such as the head, shoulders, elbows, wrists, hips, knees, and ankles.
- **Action recognition:** by analyzing poses over time, the model can help recognize various human actions and activities.
- **Surveillance:** in security and surveillance applications, ViTPose can be used to monitor and analyze human behavior in public spaces or private premises.
- **Health and fitness:** track and analyze exercise poses in fitness apps, providing feedback on form and technique (a minimal joint-angle sketch follows this list).
- **Gaming and animation:** integrate ViTPose into gaming and animation systems to create more realistic character movements and interactions.

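To illustrate the fitness use case, joint angles can be derived directly from the predicted keypoints. The sketch below is illustrative only: the `joint_angle` helper is hypothetical, and the hard-coded coordinates are copied from the L_Shoulder, L_Elbow, and L_Wrist values in the example output later in this card; in practice you would read them from the ViTPose predictions.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by the segments b->a and b->c."""
    a, b, c = np.asarray(a, dtype=float), np.asarray(b, dtype=float), np.asarray(c, dtype=float)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# (x, y) coordinates for one arm, copied from the example output below:
# L_Shoulder (id 5), L_Elbow (id 7), L_Wrist (id 9)
shoulder, elbow, wrist = (440.02, 177.15), (436.88, 197.90), (431.45, 218.66)
print(f"Left elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")
```
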
## Bias, Risks, and Limitations

In this paper, we propose a simple yet effective vision transformer baseline for pose estimation, i.e., ViTPose. Despite having no elaborate structural designs, ViTPose obtains SOTA performance on the MS COCO dataset. However, the potential of ViTPose is not fully explored with more advanced technologies, such as complex decoders or FPN structures, which may further improve the performance. Besides, although ViTPose demonstrates exciting properties such as simplicity, scalability, flexibility, and transferability, more research effort could be made, e.g., exploring prompt-based tuning to further demonstrate the flexibility of ViTPose. In addition, we believe ViTPose can also be applied to other pose estimation datasets, e.g., animal pose estimation [47, 9, 45] and face keypoint detection [21, 6]. We leave these for future work.

## How to Get Started with the Model

Use the code below to get started with the model. It runs in two stages (top-down pose estimation): an off-the-shelf person detector first proposes bounding boxes, and ViTPose then predicts keypoints for each detected person.

```python
import torch
import requests
import numpy as np

from PIL import Image

from transformers import (
    AutoProcessor,
    RTDetrForObjectDetection,
    VitPoseForPoseEstimation,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

url = "http://images.cocodataset.org/val2017/000000000139.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# ------------------------------------------------------------------------
# Stage 1. Detect humans in the image
# ------------------------------------------------------------------------

# You can use any person detector of your choice
person_image_processor = AutoProcessor.from_pretrained("PekingU/rtdetr_r50vd_coco_o365")
person_model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd_coco_o365", device_map=device)

inputs = person_image_processor(images=image, return_tensors="pt").to(device)

with torch.no_grad():
    outputs = person_model(**inputs)

results = person_image_processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([(image.height, image.width)]), threshold=0.3
)
result = results[0]  # take first image results

# The "person" label corresponds to index 0 in the COCO dataset
person_boxes = result["boxes"][result["labels"] == 0]
person_boxes = person_boxes.cpu().numpy()

# Convert boxes from VOC (x1, y1, x2, y2) to COCO (x1, y1, w, h) format
person_boxes[:, 2] = person_boxes[:, 2] - person_boxes[:, 0]
person_boxes[:, 3] = person_boxes[:, 3] - person_boxes[:, 1]

# ------------------------------------------------------------------------
# Stage 2. Detect keypoints for each person found
# ------------------------------------------------------------------------

image_processor = AutoProcessor.from_pretrained("usyd-community/vitpose-base-simple")
model = VitPoseForPoseEstimation.from_pretrained("usyd-community/vitpose-base-simple", device_map=device)

inputs = image_processor(image, boxes=[person_boxes], return_tensors="pt").to(device)

with torch.no_grad():
    outputs = model(**inputs)

pose_results = image_processor.post_process_pose_estimation(outputs, boxes=[person_boxes], threshold=0.3)
image_pose_result = pose_results[0]  # results for first image

for i, person_pose in enumerate(image_pose_result):
    print(f"Person #{i}")
    for keypoint, label, score in zip(
        person_pose["keypoints"], person_pose["labels"], person_pose["scores"]
    ):
        keypoint_name = model.config.id2label[label.item()]
        x, y = keypoint
        print(f" - {keypoint_name}: x={x.item():.2f}, y={y.item():.2f}, score={score.item():.2f}")
```
Output:
```
Person #0
 - Nose: x=428.72, y=170.61, score=0.92
 - L_Eye: x=429.47, y=167.83, score=0.90
 - R_Eye: x=428.73, y=168.16, score=0.79
 - L_Ear: x=433.88, y=167.35, score=0.94
 - R_Ear: x=441.09, y=166.86, score=0.90
 - L_Shoulder: x=440.02, y=177.15, score=0.93
 - R_Shoulder: x=446.28, y=178.39, score=0.74
 - L_Elbow: x=436.88, y=197.90, score=0.92
 - R_Elbow: x=433.35, y=201.22, score=0.54
 - L_Wrist: x=431.45, y=218.66, score=0.88
 - R_Wrist: x=420.09, y=212.80, score=0.96
 - L_Hip: x=444.81, y=224.16, score=0.81
 - R_Hip: x=452.33, y=223.91, score=0.82
 - L_Knee: x=442.24, y=256.03, score=0.83
 - R_Knee: x=451.12, y=255.20, score=0.82
 - L_Ankle: x=443.20, y=288.18, score=0.60
 - R_Ankle: x=456.03, y=285.76, score=0.82
Person #1
 - Nose: x=398.12, y=181.71, score=0.87
 - L_Eye: x=398.45, y=179.73, score=0.82
 - R_Eye: x=396.07, y=179.45, score=0.90
 - R_Ear: x=388.85, y=180.22, score=0.88
 - L_Shoulder: x=397.24, y=194.16, score=0.76
 - R_Shoulder: x=384.60, y=190.74, score=0.64
 - L_Wrist: x=402.25, y=207.03, score=0.33
```

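To inspect the predictions qualitatively, the keypoints and skeleton can be drawn on the image. Below is a minimal sketch, assuming the `image`, `image_pose_result`, and `model` variables from the snippet above; the limb connectivity is read from `model.config.edges` (the `edges` field in `config.json` below), and the colors, point radius, and output filename are arbitrary choices.

```python
from PIL import ImageDraw

draw_image = image.copy()
draw = ImageDraw.Draw(draw_image)

for person_pose in image_pose_result:
    # Map keypoint id -> (x, y) for the keypoints kept after thresholding
    points = {
        label.item(): (kpt[0].item(), kpt[1].item())
        for kpt, label in zip(person_pose["keypoints"], person_pose["labels"])
    }
    # Draw a line for every skeleton edge whose endpoints were both detected
    for start_id, end_id in model.config.edges:
        if start_id in points and end_id in points:
            draw.line([points[start_id], points[end_id]], fill="green", width=2)
    # Draw the keypoints themselves
    for x, y in points.values():
        draw.ellipse([x - 3, y - 3, x + 3, y + 3], fill="red")

draw_image.save("pose_result.jpg")
```
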
## Training Details

### Training Data

**Dataset details.** We use the MS COCO [28], AI Challenger [41], MPII [3], and CrowdPose [22] datasets for training and evaluation. The OCHuman [54] dataset is only involved in the evaluation stage to measure the models’ performance in dealing with occluded people. The MS COCO dataset contains 118K images and 150K human instances with at most 17 annotated keypoints per instance for training; it is released under the CC-BY-4.0 license. The MPII dataset is under the BSD license and contains 15K images and 22K human instances for training, with at most 16 keypoints annotated per instance. AI Challenger is much bigger and contains over 200K training images and 350K human instances, with at most 14 keypoints annotated per instance. OCHuman contains human instances with heavy occlusion and is used only for the val and test sets; it includes 4K images and 8K instances.

#### Training Hyperparameters

- **Training regime:** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/Gj6gGcIGO3J5HD2MAB_4C.png)

#### Speeds, Sizes, Times

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/rsCmn48SAvhi8xwJhX8h5.png)

## Evaluation

**OCHuman val and test set.** To evaluate the performance of human pose estimation models on human instances with heavy occlusion, we test the ViTPose variants and representative models on the OCHuman val and test sets with ground-truth bounding boxes. We do not adopt extra human detectors, since not all human instances are annotated in the OCHuman datasets; a human detector would produce many “false positive” bounding boxes and could not reflect the true ability of the pose estimation models. Specifically, the decoder head of ViTPose corresponding to the MS COCO dataset is used, as the keypoint definitions are the same in the MS COCO and OCHuman datasets.

**MPII val set.** We evaluate the performance of ViTPose and representative models on the MPII val set with the ground-truth bounding boxes. Following the default settings of MPII, we use PCKh as the metric for performance evaluation.

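For reference (this definition is not part of the original card), PCKh@α counts a predicted keypoint as correct when its distance to the ground truth is at most a fraction α of the person's head segment length; the commonly reported PCKh@0.5 uses α = 0.5:

$$
\text{PCKh@}\alpha = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\left[ \lVert \hat{p}_i - p_i \rVert_2 \le \alpha \, h_i \right]
$$

where $\hat{p}_i$ and $p_i$ are the predicted and ground-truth locations of keypoint $i$, and $h_i$ is the head segment length of the corresponding person.
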
### Results

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/FcHVFdUmCuT2m0wzB8QSS.png)

### Model Architecture and Objective

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6579e0eaa9e58aec614e9d97/kf3e1ifJkVtOMbISvmMsM.png)

#### Hardware

The models are trained on 8 A100 GPUs using the mmpose codebase.

## Citation

**BibTeX:**

```bibtex
@article{xu2022vitposesimplevisiontransformer,
  title={ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation},
  author={Yufei Xu and Jing Zhang and Qiming Zhang and Dacheng Tao},
  year={2022},
  eprint={2204.12484},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2204.12484}
}
```
config.json ADDED
@@ -0,0 +1,141 @@
{
  "architectures": [
    "VitPoseForPoseEstimation"
  ],
  "backbone": null,
  "backbone_config": {
    "model_type": "vitpose_backbone",
    "out_features": [
      "stage12"
    ],
    "out_indices": [
      12
    ],
    "part_features": 0
  },
  "backbone_kwargs": null,
  "edges": [
    [15, 13], [13, 11], [16, 14], [14, 12], [11, 12], [5, 11], [6, 12],
    [5, 6], [5, 7], [6, 8], [7, 9], [8, 10], [1, 2], [0, 1], [0, 2],
    [1, 3], [2, 4], [3, 5], [4, 6]
  ],
  "id2label": {
    "0": "Nose",
    "1": "L_Eye",
    "2": "R_Eye",
    "3": "L_Ear",
    "4": "R_Ear",
    "5": "L_Shoulder",
    "6": "R_Shoulder",
    "7": "L_Elbow",
    "8": "R_Elbow",
    "9": "L_Wrist",
    "10": "R_Wrist",
    "11": "L_Hip",
    "12": "R_Hip",
    "13": "L_Knee",
    "14": "R_Knee",
    "15": "L_Ankle",
    "16": "R_Ankle"
  },
  "initializer_range": 0.02,
  "label2id": {
    "L_Ankle": 15,
    "L_Ear": 3,
    "L_Elbow": 7,
    "L_Eye": 1,
    "L_Hip": 11,
    "L_Knee": 13,
    "L_Shoulder": 5,
    "L_Wrist": 9,
    "Nose": 0,
    "R_Ankle": 16,
    "R_Ear": 4,
    "R_Elbow": 8,
    "R_Eye": 2,
    "R_Hip": 12,
    "R_Knee": 14,
    "R_Shoulder": 6,
    "R_Wrist": 10
  },
  "model_type": "vitpose",
  "scale_factor": 4,
  "torch_dtype": "float32",
  "transformers_version": "4.47.0.dev0",
  "use_pretrained_backbone": false,
  "use_simple_decoder": true,
  "use_timm_backbone": false
}

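The `edges` field encodes the COCO-17 skeleton as `[start_id, end_id]` pairs that index into `id2label`. A minimal sketch for listing the limb connections by name (the repository id is the one used in the usage example above):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("usyd-community/vitpose-base-simple")

# Each edge is a [start_id, end_id] pair referring to the id2label mapping
for start_id, end_id in config.edges:
    print(f"{config.id2label[start_id]} -> {config.id2label[end_id]}")
```
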
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:85375373893ddd3641f3912821073e53f5435f9e966e1dca59d004454bfe4fdf
size 343673076
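
Note: this is a Git LFS pointer; the checkpoint it references is 343,673,076 bytes, which at 4 bytes per float32 parameter corresponds to roughly 86M parameters, consistent with a ViT-Base backbone plus the lightweight simple decoder described in the model card above.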
preprocessor_config.json ADDED
@@ -0,0 +1,22 @@
{
  "do_affine_transform": true,
  "do_normalize": true,
  "do_rescale": true,
  "image_mean": [
    0.485,
    0.456,
    0.406
  ],
  "image_processor_type": "VitPoseImageProcessor",
  "image_std": [
    0.229,
    0.224,
    0.225
  ],
  "normalize_factor": 200.0,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 256,
    "width": 192
  }
}
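
For reference, a rough numeric sketch of what `do_rescale` and `do_normalize` mean for a single person crop that has already been resized to the model input size of 256×192 (the affine box-to-crop step controlled by `do_affine_transform` and `normalize_factor` is not modeled here, and the random pixel data is purely illustrative):

```python
import numpy as np

# Hypothetical uint8 crop, already at the model input size (height=256, width=192)
crop = np.random.randint(0, 256, size=(256, 192, 3), dtype=np.uint8)

image_mean = np.array([0.485, 0.456, 0.406])
image_std = np.array([0.229, 0.224, 0.225])

# do_rescale: multiply by rescale_factor (1/255); do_normalize: subtract mean, divide by std
pixels = crop.astype(np.float32) * 0.00392156862745098
pixels = (pixels - image_mean) / image_std

# Channels-first array of shape (3, 256, 192), one such array per bounding box
pixel_values = pixels.transpose(2, 0, 1)
print(pixel_values.shape)
```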