
Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k

This model was converted to MLX format from mlx-community/Mistral-7B-Instruct-v0.2-8-bit-mlx. Refer to the original model card for more details on the model.
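
The exact conversion command is not recorded on this card; as a rough sketch, a comparable 4-bit MLX quantization can be done with the mlx-lm package's convert helper (the package and the source checkpoint below are assumptions for illustration, not taken from this card):

from mlx_lm import convert

# Sketch of a 4-bit MLX quantization; the source repo here is illustrative,
# whereas this card's weights were derived from
# mlx-community/Mistral-7B-Instruct-v0.2-8-bit-mlx.
convert(
    "mistralai/Mistral-7B-Instruct-v0.2",
    mlx_path="mlx_model",
    quantize=True,
    q_bits=4,
)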

Fine-tuned with DPO using a preference dataset from Argilla.
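
If you want to inspect the preference data, it can be loaded with the datasets library; the repository id below is an assumption inferred from the model name, so verify it against Argilla's datasets on the Hugging Face Hub:

from datasets import load_dataset

# NOTE: hypothetical dataset id inferred from the model name, not stated on this card.
dpo_data = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")
print(dpo_data.column_names)  # expect prompt / chosen / rejected style columns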

Use with mlx

pip install mlx
git clone https://github.com/ml-explore/mlx-examples.git
cd mlx-examples/llms/hf_llm
python generate.py --model mlx-community/Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k --prompt "What weighs more, 1kg of feathers or 0.5kg of steel?"
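
Alternatively, a minimal Python sketch using the mlx-lm package (an assumption, installed with pip install mlx-lm rather than the commands above):

from mlx_lm import load, generate

# Download the 4-bit weights and tokenizer from the Hub.
model, tokenizer = load("mlx-community/Mistral7B-Inst-v0.2-4bit-mlx-distilabel-capybara-dpo-7k")

# Generate a short completion; verbose=True streams tokens as they are produced.
response = generate(
    model,
    tokenizer,
    prompt="What weighs more, 1kg of feathers or 0.5kg of steel?",
    max_tokens=128,
    verbose=True,
)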
Safetensors model size: 1.24B params. Tensor types: BF16, U32, F32.