---
license: apache-2.0
tags:
- moe
train: false
inference: false
pipeline_tag: text-generation
---

## Mixtral-8x7B-v0.1-hf-attn-4bit-moe-2bit-HQQ

This is a version of the [Mixtral-8x7B-v0.1 model](https://huggingface.co./mistralai/Mixtral-8x7B-v0.1) quantized with a mix of 4-bit and 2-bit precision via Half-Quadratic Quantization (HQQ).

More specifically, the attention layers are quantized to 4-bit and the expert layers to 2-bit. This simple change yields a large improvement in perplexity over the all-2-bit model (4.69 vs. 5.90) for only a slight increase in model size (18.2 GB vs. 18 GB).
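
To see why this trade-off is so cheap, note that the experts hold the vast majority of Mixtral's weights. A back-of-the-envelope sketch (parameter counts taken from the public Mixtral-8x7B config; quantization metadata ignored):

``` Python
# Rough parameter counts from the Mixtral-8x7B config:
# hidden 4096, intermediate 14336, 32 layers, 8 KV heads of dim 128, 8 experts
hidden, inter, layers, experts = 4096, 14336, 32, 8
head_dim, kv_heads = 128, 8

attn = layers * (2 * hidden * hidden + 2 * hidden * kv_heads * head_dim)  # q/o + k/v projections
moe  = layers * experts * 3 * hidden * inter                              # w1, w2, w3 per expert

print(f"attention: {attn/1e9:.2f}B params, experts: {moe/1e9:.1f}B params")
# Raising attention from 2-bit to 4-bit costs 2 extra bits per weight:
print(f"extra size: {attn * 2 / 8 / 1e9:.2f} GB")  # ~0.34 GB, consistent with 18 GB -> 18.2 GB
```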

This idea was suggested by Artem Eliseev (@lavawolfiee) and Denis Mazur (@dvmazur) [in this GitHub discussion](https://github.com/mobiusml/hqq/issues/2).

### Basic Usage

To run the model, install the HQQ library from https://github.com/mobiusml/hqq and use it as follows:

``` Python
model_id = 'mobiuslabsgmbh/Mixtral-8x7B-v0.1-hf-attn-4bit-moe-2bit-HQQ'

# Load the quantized model and its tokenizer
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = HQQModelForCausalLM.from_quantized(model_id)

# Optional: use the compiled PyTorch backend for faster dequantization
from hqq.core.quantize import *
HQQLinear.set_backend(HQQBackend.PYTORCH_COMPILE)
```
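
Once loaded, the model behaves like a standard `transformers` causal LM. A minimal generation call, assuming a CUDA device (the prompt and generation settings below are illustrative):

``` Python
# Illustrative example: prompt and generation settings are arbitrary
prompt = "Mixture-of-experts models are"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```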

### Quantization

You can reproduce the model using the following quant configs:

``` Python
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-v0.1"
hf_auth = None     # your Hugging Face access token, if one is required
cache_path = None  # optional local cache directory
model = HQQModelForCausalLM.from_pretrained(model_id, use_auth_token=hf_auth, cache_dir=cache_path)

# Quantization params: 4-bit for attention, 2-bit for the experts
from hqq.core.quantize import *
attn_params = BaseQuantizeConfig(nbits=4, group_size=64, quant_zero=True, quant_scale=True)
attn_params['scale_quant_params']['group_size'] = 256
experts_params = BaseQuantizeConfig(nbits=2, group_size=16, quant_zero=True, quant_scale=True)

quant_config = {}
# Attention layers (4-bit)
quant_config['self_attn.q_proj'] = attn_params
quant_config['self_attn.k_proj'] = attn_params
quant_config['self_attn.v_proj'] = attn_params
quant_config['self_attn.o_proj'] = attn_params
# Expert layers (2-bit)
quant_config['block_sparse_moe.experts.w1'] = experts_params
quant_config['block_sparse_moe.experts.w2'] = experts_params
quant_config['block_sparse_moe.experts.w3'] = experts_params

# Quantize the model in place
model.quantize_model(quant_config=quant_config)
```
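
To confirm the mixed precision took effect, you can inspect the quantized layers. A minimal sketch, assuming `HQQLinear` keeps its quantization settings in a `meta` dict (as in recent versions of the library):

``` Python
# Print the bit-width chosen for each quantized linear layer
# (assumes HQQLinear stores its quantization settings in .meta)
from hqq.core.quantize import HQQLinear

for name, module in model.named_modules():
    if isinstance(module, HQQLinear):
        print(name, module.meta['nbits'])  # attention -> 4, experts -> 2
```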