Use Models from the Hugging Face Hub in LM Studio

Community Article Published November 28, 2024

You can run MLX and llama.cpp (GGUF) LLMs, VLMs, and embedding models from the Hugging Face Hub locally on your machine by downloading them directly within LM Studio.


LM Studio is a desktop application for experimenting & developing with local AI models directly on your computer. It works on Mac (Apple Silicon), Windows, and Linux!

Getting models from Hugging Face into LM Studio

Use the 'Use this model' button right from Hugging Face

For any GGUF or MLX LLM, click the "Use this model" dropdown and select LM Studio. This will run the model directly in LM Studio if you already have it, or show you a download option if you don't.


Try it out with a trending model! Find them here: https://huggingface.co./models?library=gguf&sort=trending

Use LM Studio's in-app downloader:

Press ⌘ + Shift + M on Mac, or Ctrl + Shift + M on PC (M stands for Models) and search for any model.


You can even paste entire Hugging Face URLs into the search bar!

Use lms, LM Studio's CLI:

If you prefer a terminal-based workflow, use lms, LM Studio's CLI.


Download any model from Hugging Face by specifying {user}/{repo}

# lms get {user}/{repo}
lms get qwen/qwen2.5-coder-32b-instruct-gguf

Use a full Hugging Face URL

lms get https://huggingface.co./lmstudio-community/granite-3.0-2b-instruct-GGUF

Choose a quantization with the @ qualifier

lms get qwen/qwen2.5-coder-32b-instruct-gguf@Q4_K_M

Search models by keyword from the terminal

lms get qwen

Get GGUF or MLX results

lms get qwen --mlx # or --gguf
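Once a model is downloaded and loaded, you can also call it from code: LM Studio exposes an OpenAI-compatible local server (started from the app's Developer tab or with `lms server start`). Below is a minimal sketch, assuming the server is running on the default port 1234 and that a model identified as `qwen2.5-coder-32b-instruct` is loaded — both the port and the model name are assumptions to adapt to your setup:

```python
import json
from urllib import request

# Standard OpenAI-style chat-completions request body.
# The model name here is an example assumption; use the identifier
# your loaded model reports in LM Studio.
payload = {
    "model": "qwen2.5-coder-32b-instruct",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
    "temperature": 0.7,
}
body = json.dumps(payload).encode("utf-8")

# Build the request against LM Studio's default local endpoint.
req = request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Sending it requires the LM Studio server to be running locally:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, existing OpenAI client libraries can also be pointed at it by overriding the base URL.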

Keeping tabs with the latest models

Follow the LM Studio Community page on Hugging Face to stay updated on the latest & greatest local LLMs as soon as they come out.