---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
---

# GPTQ Algorithm with `auto-gptq` Integration

## Model Description

The GPTQ algorithm, developed by Frantar et al., compresses transformer-based language models to lower bit-widths with minimal performance degradation. The `auto-gptq` library, which implements the GPTQ algorithm, is integrated into 🤗 Transformers, enabling users to load and work with models quantized using GPTQ.
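
A minimal sketch of the loading path, assuming `auto-gptq`, `optimum`, and `accelerate` are installed; the model id below is a placeholder, not a specific published checkpoint:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id for a GPTQ-quantized checkpoint on the Hub; substitute a real one.
model_id = "your-username/your-model-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantization config stored with the checkpoint is detected automatically.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("GPTQ quantization lets you", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0]))
```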

## Features

- **Quantization**: Compress transformer-based language models with minimal performance loss.
- **Integration with 🤗 Transformers**: Directly load models quantized with the GPTQ algorithm.
- **Flexibility**: Supports two workflows (see the example below):
  1. Quantize a language model from scratch.
  2. Load a pre-quantized model from the 🤗 Hub.
- **Calibration**: Runs model inference on calibration data to fit the quantized weights and limit quantization error.
- **Custom Dataset Support**: Models can be quantized with either a supported dataset or a custom dataset.
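
A sketch of the first workflow, quantizing from scratch with the `GPTQConfig` API in 🤗 Transformers; the base model id and settings are illustrative, and `auto-gptq` plus `optimum` must be installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "facebook/opt-125m"  # any causal LM; a small model keeps the example quick
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit quantization calibrated on the "c4" dataset.
gptq_config = GPTQConfig(bits=4, dataset="c4", tokenizer=tokenizer)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=gptq_config,
)

# The quantized weights can then be saved locally or pushed to the Hub for later reuse.
model.save_pretrained("opt-125m-gptq")
tokenizer.save_pretrained("opt-125m-gptq")
```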

## Intended Use

This integration is intended for users who want to compress their transformer-based language models without significant performance loss. It's especially useful for deployment scenarios where model size is a constraint.
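
As a quick way to check the size reduction, 🤗 Transformers exposes `get_memory_footprint()` on loaded models; this assumes `model` is a quantized model loaded as in the examples above:

```python
# Reports the in-memory size of the model's parameters (and buffers), in bytes.
size_gb = model.get_memory_footprint() / 1e9
print(f"Quantized model memory footprint: {size_gb:.2f} GB")
```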

## Limitations and Considerations

- The quality of quantization may vary with the dataset used for calibration. For best results, use calibration data closely related to the model's domain.
- While the GPTQ algorithm minimizes performance degradation, some loss in quality is expected, especially at lower bit-widths.

## Training Data

The GPTQ algorithm requires calibration data for quantization. Users can either use supported datasets such as "c4" or "wikitext2", or provide a custom dataset for calibration.
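
A sketch of the custom-dataset case: `GPTQConfig` also accepts a list of strings in place of a named dataset. The base model id and sentences below are placeholders:

```python
from transformers import AutoTokenizer, GPTQConfig

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")  # illustrative base model

# Domain-specific calibration texts; in practice, use a few hundred representative samples.
calibration_samples = [
    "Example sentence drawn from the target domain.",
    "Another representative passage used only for calibration.",
]

# Passing a list of strings instead of a dataset name calibrates on the custom texts.
gptq_config = GPTQConfig(bits=4, dataset=calibration_samples, tokenizer=tokenizer)
```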

## Evaluation Results

Performance after quantization depends on the calibration dataset and the bit precision chosen. It's recommended to evaluate the quantized model on relevant tasks to confirm it meets the desired performance criteria.
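
One rough sanity check is sliding-window perplexity on a held-out corpus. A minimal sketch, assuming `model` and `tokenizer` are the quantized model and tokenizer from the examples above, the `datasets` library is installed, and the window sizes are illustrative:

```python
import torch
from datasets import load_dataset

# Concatenate the WikiText-2 test split into one long token sequence.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt")

max_length, stride = 1024, 512
nlls = []
for begin in range(0, encodings.input_ids.size(1) - max_length, stride):
    input_ids = encodings.input_ids[:, begin : begin + max_length].to(model.device)
    with torch.no_grad():
        # For causal LMs, passing labels=input_ids returns the average next-token loss.
        nlls.append(model(input_ids, labels=input_ids).loss)

print(f"Perplexity: {torch.exp(torch.stack(nlls).mean()).item():.2f}")
```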

## References

- Frantar et al., "GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers" ([arXiv:2210.17323](https://arxiv.org/abs/2210.17323))
- [AutoGPTQ GitHub Repository](https://github.com/PanQiWei/AutoGPTQ)