Model Card for Yellowtree/LLaMA2-7B_2-by-4_Sparse

This repo contains a 2:4 sparse version of the LLaMA2-7B model, trained with the methods from the AAAI 2025 paper "Pruning Large Language Models with Semi-Structural Adaptive Sparse Training".
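
A minimal loading sketch using the Hugging Face `transformers` library; the repo id comes from this page, but the dtype and device settings are illustrative assumptions:

```python
# Sketch: load the checkpoint and generate text
# (assumes the repo is in standard Hugging Face format;
# fp16 and device_map="auto" are assumptions, not requirements).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Yellowtree/LLaMA2-7B_2-by-4_Sparse"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```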

Model Description

Same architecture as LLaMA2-7B, but the weights of the linear layers conform to a 2:4 sparse pattern (at most two nonzero values in every contiguous group of four).
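
The 2:4 pattern can be verified directly from a weight tensor. Below is a small sketch of such a check (the helper name and the toy tensor are hypothetical, for illustration only):

```python
# Sketch: check whether a 2-D weight tensor follows the 2:4 pattern,
# i.e. every contiguous group of 4 values along the input dimension
# contains at most 2 nonzeros.
import torch

def is_two_by_four_sparse(weight: torch.Tensor) -> bool:
    out_features, in_features = weight.shape
    if in_features % 4 != 0:
        return False
    groups = weight.reshape(out_features, in_features // 4, 4)
    return bool(((groups != 0).sum(dim=-1) <= 2).all())

# Toy example: each 4-wide group keeps exactly 2 nonzero values.
w = torch.tensor([[1.0, 0.0, 2.0, 0.0, 0.0, 3.0, 0.0, 4.0]])
print(is_two_by_four_sparse(w))  # True
```

Applied to this model, the same check can be run over every `torch.nn.Linear` weight via `model.named_modules()`.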
