This is an INT4 quantized version of the microsoft/Phi-3-mini-128k-instruct model. The following Python packages were used to create it:

```
openvino==2024.5.0rc1
optimum==1.23.3
optimum-intel==1.20.1
nncf==2.13.0
torch==2.5.1
transformers==4.46.2
```
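
To reproduce this environment, the pinned versions above can be installed with pip, for example:

```bash
pip install openvino==2024.5.0rc1 optimum==1.23.3 optimum-intel==1.20.1 nncf==2.13.0 torch==2.5.1 transformers==4.46.2
```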

The quantized model was created using the following command:

```bash
optimum-cli export openvino --model "microsoft/Phi-3-mini-128k-instruct" --weight-format int4 --group-size 128 --sym --ratio 1 --all-layers ./Phi-3-mini-128k-instruct-ov-int4
```

For more details on the available options, run `optimum-cli export openvino --help` from your Python environment.

During quantization, NNCF reported the following bitwidth distribution statistics:

| Num bits (N) | % all parameters (layers) | % ratio-defining parameters (layers) |
|--------------|---------------------------|--------------------------------------|
| 4            | 100% (130 / 130)          | 100% (130 / 130)                     |
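
As a minimal inference sketch (not part of the original export workflow), the quantized model can be loaded with optimum-intel's `OVModelForCausalLM`; the repository id, prompt, and generation settings below are illustrative assumptions:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Assumed repository id for this quantized model.
model_id = "jojo1899/Phi-3-mini-128k-instruct-ov-int4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # loads the INT4 OpenVINO IR

# Phi-3 is an instruct model, so format the prompt with its chat template.
messages = [{"role": "user", "content": "Explain INT4 weight quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```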