# Phi-4-mlx-int4

This is an INT4-quantized version of Phi-4 built with Apple's MLX framework. It can be deployed on Apple Silicon devices (M1, M2, M3, M4, and later).

Note: This is an unofficial version, intended for testing and development only.

## Installation

```bash
pip install -U mlx-lm
```
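To confirm the package installed correctly (mlx-lm only runs on Apple Silicon), you can check it from Python. This is a minimal sanity check, assuming `__version__` is exported by your installed release, as it is in recent versions of mlx-lm:

```python
import mlx_lm

# Print the installed mlx-lm version to verify the import works.
print(mlx_lm.__version__)
```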

## Conversion

Convert the original Phi-4 weights to MLX format and quantize them in one step (`-q` enables quantization, which defaults to 4-bit):

```bash
python -m mlx_lm.convert --hf-path {your Phi-4 path} -q
```
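The same conversion is also available from Python. A minimal sketch using the `convert` function exported by mlx-lm; the source ID `microsoft/phi-4` and the output directory name are illustrative assumptions:

```python
from mlx_lm import convert

# Convert and quantize in one call; quantization defaults to 4-bit.
convert(
    "microsoft/phi-4",          # source weights: Hub ID or local path (assumed)
    mlx_path="Phi-4-mlx-int4",  # output directory for the quantized model
    quantize=True,              # equivalent to the -q CLI flag
)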

## Samples

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from a local path.
model, tokenizer = load("Your Phi-4-mlx-int4 Path")

# Build a chat-formatted prompt from a single user message.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "I have $20,000 in my savings account, where I receive a 4% profit per year and payments twice a year. Can you please tell me how long it will take for me to become a millionaire? Also, can you please explain the math step by step as if you were explaining it to an uneducated person?"}],
    tokenize=False,
    add_generation_prompt=True,
)

# Generate a response; verbose=True streams tokens to stdout as they decode.
response = generate(model, tokenizer, prompt=prompt, max_tokens=1024, verbose=True)
```
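The same `load`/`generate` pair extends naturally to multi-turn chat: keep a running message list and re-apply the chat template each turn. A minimal sketch, reusing the path placeholder from above; the example questions and `max_tokens` value are arbitrary:

```python
from mlx_lm import load, generate

model, tokenizer = load("Your Phi-4-mlx-int4 Path")

messages = []  # running conversation history
for user_turn in ["What is the capital of France?", "Roughly how many people live there?"]:
    messages.append({"role": "user", "content": user_turn})
    # Re-render the full history into a prompt each turn.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    reply = generate(model, tokenizer, prompt=prompt, max_tokens=512)
    # Append the model's reply so the next turn sees the full context.
    messages.append({"role": "assistant", "content": reply})
    print(reply)
```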