---
license: apache-2.0
---

# 0.1% Data Makes Segment Anything Slim

Zigeng Chen, Gongfan Fang, Xinyin Ma, Xinchao Wang

Learning and Vision Lab, National University of Singapore

Paper: [Arxiv] | GitHub: [SlimSAM]

SlimSAM is a novel SAM compression method that efficiently reuses pre-trained SAMs without the need for extensive retraining. This is achieved through a unified pruning-distillation framework. To enhance knowledge inheritance from the original SAM, we employ an innovative alternate slimming strategy that partitions the compression process into a progressive procedure. Diverging from prior pruning techniques, we meticulously prune and distill decoupled model structures in an alternating fashion. Furthermore, a novel label-free pruning criterion is proposed to align the pruning objective with the optimization target, thereby boosting post-distillation performance after pruning. Compared to the original SAM-H, SlimSAM achieves approaching performance while reducing the parameter count to 0.9% (5.7M), MACs to 0.8% (21G), and requiring merely 0.1% (10k) of the training data. Extensive experiments demonstrate that our method realizes significantly superior performance while utilizing over 10 times less training data than other SAM compression methods.
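
The alternating prune-and-distill idea can be pictured with a small, self-contained PyTorch sketch. This is only an illustration under simplifying assumptions, not the SlimSAM implementation: the toy two-layer encoder, the magnitude-based channel importance, and the plain feature-space MSE distillation loss stand in for SAM's ViT image encoder, SlimSAM's label-free pruning criterion, and its progressive alternate slimming schedule.

```python
# Illustrative sketch only; all module and function names here are hypothetical.
import copy
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for an image encoder: embedding -> bottleneck -> embedding."""
    def __init__(self, embed_dim=64, bottleneck_dim=256):
        super().__init__()
        self.fc1 = nn.Linear(embed_dim, bottleneck_dim)   # "bottleneck" structure
        self.fc2 = nn.Linear(bottleneck_dim, embed_dim)   # back to "embedding" width

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def prune_bottleneck(model, keep_ratio):
    """Structured pruning of the hidden (bottleneck) channels by weight magnitude."""
    w = model.fc1.weight.data                             # (bottleneck, embed)
    scores = w.abs().sum(dim=1)                           # importance per hidden channel
    keep = scores.topk(max(1, int(keep_ratio * w.size(0)))).indices.sort().values
    slim = ToyEncoder(model.fc1.in_features, keep.numel())
    slim.fc1.weight.data = w[keep].clone()
    slim.fc1.bias.data = model.fc1.bias.data[keep].clone()
    slim.fc2.weight.data = model.fc2.weight.data[:, keep].clone()
    slim.fc2.bias.data = model.fc2.bias.data.clone()
    return slim

def distill(student, teacher, data, steps=100, lr=1e-3):
    """Label-free distillation: match the frozen teacher's output features with MSE."""
    teacher.eval()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        x = data[torch.randint(len(data), (32,))]
        with torch.no_grad():
            target = teacher(x)
        loss = nn.functional.mse_loss(student(x), target)
        opt.zero_grad(); loss.backward(); opt.step()
    return student

# Alternate slimming: prune one decoupled structure, distill, then move to the next.
teacher = ToyEncoder()
unlabeled = torch.randn(512, 64)                          # tiny unlabeled image-feature set
student = copy.deepcopy(teacher)
student = distill(prune_bottleneck(student, keep_ratio=0.5), teacher, unlabeled)
# A second stage would prune the other (embedding) structure and distill again.
```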