Model Card for oopere/pruned10-llama-3.2-3B
This model is a pruned version of the Llama-3.2-3B model, with a 10% parameter reduction in the MLP layers. The pruning process aims to enhance computational efficiency while maintaining acceptable performance across specific tasks. The model is not intended to be used directly, but rather to be fine-tuned for specific tasks, where it can achieve performance equal to or better than fine-tuning the base model on the same task.
Model Details
- Model Type: Pruned version of LLaMA-3.2 using structured pruning
- Original Model: meta-llama/Llama-3.2-3B
- Pruning Method: Structured pruning of MLP layers using importance scores based on absolute maximum weights
- Size Reduction: 7.47% (from 3.21B to 3B parameters)
- Architecture: Same as original LLaMA but with reduced MLP layer sizes
- Language(s): Same as original model
- License: Same as original model
- Developed by: Pere Martra
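The pruned checkpoint loads like any other Hugging Face causal language model. Below is a minimal usage sketch; the repository id is taken from this card, and the generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "oopere/pruned10-llama-3.2-3B"  # repository id from this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # requires `accelerate`; remove to load on CPU
)

prompt = "Paris is the capital of"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

As noted above, the model is primarily intended as a starting point for task-specific fine-tuning rather than for direct use.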
Performance on Standard Benchmarks
| Benchmark | Original Model | Pruned Model | Relative Change |
|---|---|---|---|
| ARC-Easy | 65.19% | 60.69% | -6.9% |
| BoolQ | 64.16% | 51.22% | -20.2% |
| LAMBADA-OpenAI | 62.20% | 59.64% | -4.1% |
| LAMBADA-Standard | 53.46% | 54.61% | +2.2% |
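The benchmark names above correspond to standard tasks in EleutherAI's lm-evaluation-harness. The card does not specify the exact evaluation setup, so the snippet below is only a hedged sketch of how comparable numbers could be obtained; the task names, batch size, and repository id are assumptions.

```python
import lm_eval  # EleutherAI lm-evaluation-harness (v0.4+)

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=oopere/pruned10-llama-3.2-3B",
    tasks=["arc_easy", "boolq", "lambada_openai", "lambada_standard"],
    batch_size=8,
)
print(results["results"])
```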
Key Findings
- Surprisingly, an improvement is observed on the LAMBADA-Standard benchmark, with a 2.2% relative increase in accuracy.
- The largest degradation appears on binary classification tasks (BoolQ), with a 20.2% relative decrease in accuracy.
- Moderate degradation observed on reasoning tasks (ARC-Easy), with a 6.9% relative decrease in accuracy.
- Minimal impact on long-range comprehension (LAMBADA-OpenAI), with only a 4.1% relative decrease in accuracy.
Limitations
- Reduced performance on tasks requiring complex reasoning, with moderate degradation observed on benchmarks like ARC-Easy.
- Noticeable decrease in accuracy on binary classification tasks, as seen in BoolQ.
- Mixed results on long-range dependencies, with minimal degradation on LAMBADA-OpenAI but variability across benchmarks.
- May not be suitable for applications requiring consistently high accuracy across diverse language tasks.
Implementation Details
- Pruning Notebook: Detailed implementation and methodology
- GitHub Repository: LLM Course
Pruning Method
- Technique: Structured pruning targeting MLP layers
- Pruning Ratio: 10% of neurons removed from MLP layers
- Selection Criteria: Importance scoring based on absolute maximum weights
- Architecture Specifics: Maintained GLU structure during pruning
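The exact code is in the pruning notebook linked above; the sketch below only illustrates the idea for a single LLaMA MLP block: score each intermediate neuron by the absolute maximum of its gate/up projection weights, keep the highest-scoring 90%, and slice all three GLU projections with the same indices so the structure stays consistent. Helper names and the way the two scores are combined are assumptions, not the notebook's actual implementation.

```python
import torch
import torch.nn as nn

def prune_llama_mlp(mlp, prune_ratio=0.10):
    """Structurally prune one LLaMA MLP block (gate_proj / up_proj / down_proj)."""
    intermediate = mlp.gate_proj.out_features
    keep = int(intermediate * (1.0 - prune_ratio))

    # Importance score per intermediate neuron: absolute maximum weight
    # across the gate and up projections (assumed selection criterion).
    scores = torch.maximum(
        mlp.gate_proj.weight.abs().max(dim=1).values,
        mlp.up_proj.weight.abs().max(dim=1).values,
    )
    keep_idx = scores.topk(keep).indices.sort().values  # preserve neuron order

    def slice_linear(linear, idx, dim):
        # Build a smaller nn.Linear from the selected rows (dim=0) or columns (dim=1).
        w = linear.weight.index_select(dim, idx)
        new = nn.Linear(w.shape[1], w.shape[0], bias=linear.bias is not None)
        new.weight = nn.Parameter(w)
        if linear.bias is not None:
            new.bias = nn.Parameter(
                linear.bias if dim == 1 else linear.bias.index_select(0, idx)
            )
        return new

    # The same intermediate neurons index the rows of gate_proj/up_proj and the
    # columns of down_proj, so slicing all three keeps the GLU structure intact.
    mlp.gate_proj = slice_linear(mlp.gate_proj, keep_idx, dim=0)
    mlp.up_proj = slice_linear(mlp.up_proj, keep_idx, dim=0)
    mlp.down_proj = slice_linear(mlp.down_proj, keep_idx, dim=1)
    return keep
```

After pruning every decoder layer this way, `model.config.intermediate_size` must be updated to the new width so the checkpoint reloads correctly.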
Hardware Requirements
- Reduced memory footprint compared to original model
- Can run on hardware with roughly 7.5% less memory than the original, in line with the parameter reduction
Acknowledgments
- Thanks to Mariusz Kurman for creating llama-pruning, a library that extends and improves this pruning methodology.