Update README.md
README.md
CHANGED
This is a version of the Mixtral-8x7B-v0.1 model (https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) quantized with a mix of 4-bit and 2-bit via Half-Quadratic Quantization (HQQ).

More specifically, the attention layers are quantized to 4-bit and the experts are quantized to 2-bit. This simple change yields a huge improvement in perplexity vs. the all 2-bit model (4.69 vs. 5.90) for a slight increase in model size (18.2GB vs. 18GB).
This idea was suggested by Artem Eliseev (@lavawolfiee) and Denis Mazur (@dvmazur) [in this GitHub discussion](https://github.com/mobiusml/hqq/issues/2).
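
One way to picture this mixed setting is through per-layer quantization configs. The snippet below is only an illustrative sketch: `BaseQuantizeConfig` and the per-layer dictionary follow the HQQ examples, the layer names follow the Mixtral implementation in `transformers`, and the group sizes are assumptions rather than the exact values used for this checkpoint.

``` Python
from hqq.core.quantize import BaseQuantizeConfig

# 4-bit settings for the attention projections, 2-bit for the expert MLPs.
# Group sizes are illustrative assumptions, not necessarily this model's settings.
attn_params    = BaseQuantizeConfig(nbits=4, group_size=64)
experts_params = BaseQuantizeConfig(nbits=2, group_size=16)

# Map Mixtral layer names (as used in `transformers`) to their quantization settings.
quant_config = {}
for proj in ['q_proj', 'k_proj', 'v_proj', 'o_proj']:
    quant_config[f'self_attn.{proj}'] = attn_params
for w in ['w1', 'w2', 'w3']:
    quant_config[f'block_sparse_moe.experts.{w}'] = experts_params
```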
### Basic Usage

To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:

``` Python
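# A minimal usage sketch. Assumptions: the HQQ `engine.hf` interface
# (`HQQModelForCausalLM.from_quantized`) as shown in the HQQ examples, and a
# placeholder repo id standing in for this model's Hugging Face path.
from transformers import AutoTokenizer
from hqq.engine.hf import HQQModelForCausalLM

model_id  = 'mobiuslabsgmbh/Mixtral-8x7B-v0.1-hf-attn-4bit-moe-2bit-HQQ'  # placeholder repo id

# Load the tokenizer and the pre-quantized model.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model     = HQQModelForCausalLM.from_quantized(model_id)

# Quick generation check (assumes the quantized model was loaded onto the GPU).
inputs  = tokenizer("Mixture-of-experts models are", return_tensors='pt')
outputs = model.generate(**inputs.to('cuda'), max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```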