Update README.md
README.md CHANGED
@@ -10,6 +10,8 @@ This is a version of the Mixtral-8x7B-Instruct-v0.1 model (https://huggingface.c
 More specifically, the attention layers are quantized to 4-bit and the experts are quantized to 2-bit.
 This model should perform a lot better compared to the all 2-bit model for a slight increase in model size (18.2GB vs. 18GB).
 
+This idea was suggested by Artem Eliseev (@lavawolfiee) and Denis Mazur (@dvmazur) [in this GitHub discussion](https://github.com/mobiusml/hqq/issues/2).
+
 ### Basic Usage
 To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:
 ``` Python
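The hunk above describes a mixed-precision setup: 4-bit for the attention projections, 2-bit for the MoE experts. As a rough illustration of how such a split is expressed in HQQ, here is a minimal sketch; the `group_size` values and the Mixtral layer tags are assumptions for illustration, not settings taken from this commit:

```python
from hqq.core.quantize import BaseQuantizeConfig

# Sketch only: 4-bit config for attention projections, 2-bit for the MoE experts.
# The group_size values below are illustrative assumptions.
attn_params    = BaseQuantizeConfig(nbits=4, group_size=64)
experts_params = BaseQuantizeConfig(nbits=2, group_size=16)

# Per-layer mapping using Mixtral's module names (assumed tag convention,
# with one shared tag per expert projection rather than per-expert entries).
quant_config = {
    "self_attn.q_proj": attn_params,
    "self_attn.k_proj": attn_params,
    "self_attn.v_proj": attn_params,
    "self_attn.o_proj": attn_params,
    "block_sparse_moe.experts.w1": experts_params,
    "block_sparse_moe.experts.w2": experts_params,
    "block_sparse_moe.experts.w3": experts_params,
}
```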
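The diff cuts off right at the opening of the README's code block. For completeness, a hedged sketch of the loading path the README points to; the `from_quantized` call and the model id below are assumptions based on the HQQ library's Hugging Face engine, not content shown in this commit:

```python
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

# Hypothetical repo id for this mixed 4-bit/2-bit checkpoint.
model_id = "mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-HQQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# from_quantized() fetches pre-quantized weights, so no calibration or
# on-the-fly quantization pass is needed (assumes a CUDA device is available).
model = HQQModelForCausalLM.from_quantized(model_id)

# Generate a short completion using Mixtral's instruct prompt format.
prompt = "<s> [INST] Explain quantization in one sentence. [/INST] "
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```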