mpt-7b-int4-ov

Description

This is the mosaicml/mpt-7b-instruct model converted to the OpenVINO™ IR (Intermediate Representation) format, with weights compressed to INT4 by NNCF.

Quantization Parameters

Weight compression was performed using nncf.compress_weights with the following parameters:

  • mode: INT4_SYM
  • group_size: 128
  • ratio: 1.0

For more information on quantization, check the OpenVINO model optimization guide.
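
As a point of reference, the compression can be reproduced with a minimal NNCF sketch like the one below. The IR paths are placeholders, and the call simply mirrors the parameters listed above:

import nncf
import openvino as ov

core = ov.Core()
# Placeholder path to the full-precision OpenVINO IR exported from mosaicml/mpt-7b-instruct.
model = core.read_model("mpt-7b-instruct/openvino_model.xml")

# INT4 symmetric weight compression with group-wise scales of 128 elements,
# applied to all eligible layers (ratio=1.0), matching the parameters above.
compressed_model = nncf.compress_weights(
    model,
    mode=nncf.CompressWeightsMode.INT4_SYM,
    group_size=128,
    ratio=1.0,
)
ov.save_model(compressed_model, "mpt-7b-int4/openvino_model.xml")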

Compatibility

The provided OpenVINO™ IR model is compatible with:

  • OpenVINO version 2024.2.0 and higher
  • Optimum Intel 1.17.0 and higher
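
If needed, both requirements can be pinned at install time. This is a sketch, and the exact pins depend on your environment; the optimum-intel[openvino] extra is assumed to pull in a compatible OpenVINO runtime:

pip install "openvino>=2024.2.0" "optimum-intel[openvino]>=1.17.0"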

Running Model Inference

  1. Install packages required for using Optimum Intel integration with the OpenVINO backend:
pip install optimum[openvino]
  2. Run model inference:
from transformers import AutoTokenizer
from optimum.intel.openvino import OVModelForCausalLM

model_id = "OpenVINO/mpt-7b-int4-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)

# Tokenize the prompt; OVModelForCausalLM accepts PyTorch-style tensors.
inputs = tokenizer("What is OpenVINO?", return_tensors="pt")

# Generate up to 200 tokens and decode the first (and only) sequence.
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)

For more examples and possible optimizations, refer to the OpenVINO Large Language Model Inference Guide.
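
As one example of the options the guide covers, the model can be compiled for a specific device with a runtime performance hint at load time. The device and ov_config arguments below are standard OVModelForCausalLM.from_pretrained options; the particular values shown are assumptions to illustrate the pattern:

from optimum.intel.openvino import OVModelForCausalLM

model_id = "OpenVINO/mpt-7b-int4-ov"

# Compile for a chosen device ("CPU" here; "GPU" is also possible) and
# pass an OpenVINO runtime property favoring single-stream latency.
model = OVModelForCausalLM.from_pretrained(
    model_id,
    device="CPU",
    ov_config={"PERFORMANCE_HINT": "LATENCY"},
)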

Limitations

Check the original model card for limitations.

Legal information

The original model is distributed under the Apache 2.0 license. More details can be found in mosaicml/mpt-7b-instruct.

Disclaimer

Intel is committed to respecting human rights and avoiding causing or contributing to adverse impacts on human rights. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to adverse impacts on human rights.
