# mlx-community/Molmo-7B-D-0924-6bit

This model was converted to MLX format from allenai/Molmo-7B-D-0924 using mlx-vlm version 0.1.0. Refer to the original model card for more details on the model.

## Use with mlx

```bash
pip install -U mlx-vlm
python -m mlx_vlm.generate --model mlx-community/Molmo-7B-D-0924-6bit --max-tokens 100 --temp 0.0
```
Base model: Qwen/Qwen2-7B