
Normal-Sapiens-1B

Model Details

Sapiens is a family of vision transformers pretrained on 300 million human images at 1024 x 1024 resolution. When finetuned for human-centric vision tasks, the pretrained models generalize remarkably well to in-the-wild data, even when labeled data is scarce or entirely synthetic. Sapiens-1B natively supports 1K high-resolution inference.

  • Developed by: Meta
  • Model type: Vision Transformer
  • License: Creative Commons Attribution-NonCommercial 4.0
  • Task: normal
  • Format: original
  • File: sapiens_1b_normal_render_people_epoch_115.pth

Model Card

  • Image Size: 1024 x 768 (H x W)
  • Num Parameters: 1.169 B
  • FLOPs: 4.647 TFLOPs
  • Patch Size: 16 x 16
  • Embedding Dimensions: 1536
  • Num Layers: 40
  • Num Heads: 24
  • Feedforward Channels: 6144
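
As a sanity check on these numbers: a 1024 x 768 input split into 16 x 16 patches gives a 64 x 48 grid, i.e. 3072 patch tokens, each projected to a 1536-dimensional embedding. The short sketch below works this out (it ignores any extra tokens, such as class or register tokens, that the implementation may add).

```python
# Token geometry implied by the specs above (pure arithmetic, no model needed).
image_h, image_w = 1024, 768   # native input resolution (H x W)
patch = 16                     # patch size
embed_dim = 1536               # embedding dimensions

grid_h, grid_w = image_h // patch, image_w // patch
num_tokens = grid_h * grid_w

print(f"patch grid: {grid_h} x {grid_w} -> {num_tokens} tokens")
print(f"token embeddings: ({num_tokens}, {embed_dim})")
```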

More Resources

Uses

The Sapiens-1B normal model can be used to estimate surface normals (XYZ) for human images.
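
The checkpoint listed above is in the original (PyTorch state-dict) format and is meant to be loaded through the Sapiens codebase. As a rough illustration of the inference flow, the sketch below assumes a TorchScript export of the normal model; the file name and the normalization statistics are placeholders, so consult the repository for the exact preprocessing.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

# Hypothetical path to a TorchScript export; the raw .pth state dict above
# must instead be instantiated through the Sapiens code before loading.
model = torch.jit.load("sapiens_1b_normal_torchscript.pt2").eval().cuda()

# Resize to the native 1024 x 768 (H x W) input; the normalization constants
# below are assumed ImageNet-style values, not taken from this model card.
preprocess = transforms.Compose([
    transforms.Resize((1024, 768)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("person.jpg").convert("RGB")
x = preprocess(image).unsqueeze(0).cuda()        # (1, 3, 1024, 768)

with torch.inference_mode():
    normals = model(x)                           # (1, 3, H', W') XYZ map

# Upsample to the input resolution and renormalize to unit-length vectors.
normals = F.interpolate(normals, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
normals = F.normalize(normals, dim=1)
```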
