Velvet-14B model - quantized and converted to MLX
Moritz (moot20)
AI & ML interests
MLX conversion & more
Recent Activity
updated a model about 17 hours ago: moot20/Velvet-14B-MLX-8bits
updated a collection about 18 hours ago: Velvet
published a model about 18 hours ago: moot20/Velvet-14B-MLX-8bits
Organizations
None yet
Collections (9)
Mistral-Small-24B-2501 model - quantized and converted to MLX
moot20/Mistral-Small-24B-Base-2501-MLX-4bit • Text Generation • Updated • 17
moot20/Mistral-Small-24B-Base-2501-MLX-6bits • Text Generation • Updated • 7
moot20/Mistral-Small-24B-Base-2501-MLX-8bits • Text Generation • Updated • 6
moot20/Mistral-Small-24B-Instruct-2501-MLX-4bit • Text Generation • Updated • 34
Models (73)
moot20/Velvet-14B-MLX-8bits • Text Generation • Updated
moot20/Velvet-14B-MLX-6bits • Text Generation • Updated
moot20/Velvet-14B-MLX-4bits • Text Generation • Updated
moot20/Mistral-Small-24B-Instruct-2501-MLX-8bits • Text Generation • Updated • 7
moot20/Mistral-Small-24B-Instruct-2501-MLX-6bits • Text Generation • Updated • 7
moot20/SmolVLM-256M-Base-MLX • Image-Text-to-Text • Updated • 2
moot20/SmolVLM-500M-Instruct-MLX • Image-Text-to-Text • Updated • 4
moot20/SmolVLM-256M-Instruct-MLX • Image-Text-to-Text • Updated • 2
moot20/SmolVLM-500M-Base-MLX • Image-Text-to-Text • Updated • 6
moot20/SmolVLM-500M-Base-MLX-8bits • Image-Text-to-Text • Updated • 2
Datasets
None public yet
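Since the text-generation repositories above are MLX quantizations, they can typically be run with the mlx-lm package on Apple Silicon. The sketch below is a minimal, hedged example: it assumes mlx-lm is installed (pip install mlx-lm), uses the repo id moot20/Velvet-14B-MLX-8bits as listed above, and the prompt string is purely illustrative; exact generate() arguments may vary slightly across mlx-lm versions.

```python
# Minimal sketch: loading one of the MLX-quantized text-generation models
# with the mlx-lm package. Requires Apple Silicon and `pip install mlx-lm`.
from mlx_lm import load, generate

# Repo id taken from the model list above; the 4-bit and 6-bit
# conversions load the same way.
model, tokenizer = load("moot20/Velvet-14B-MLX-8bits")

# Illustrative prompt, not from the model card.
prompt = "Summarize what MLX quantization does in one sentence."
text = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
print(text)
```

The Image-Text-to-Text SmolVLM conversions follow the same idea but are served by the separate mlx-vlm package rather than mlx-lm.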