---
license: apache-2.0
tags:
- moe
- merge
- epfl-llm/meditron-7b
- chaoyi-wu/PMC_LLAMA_7B_10_epoch
- allenai/tulu-2-dpo-7b
- microsoft/Orca-2-7b
- TensorBlock
- GGUF
base_model: Technoculture/Mediquad-4x7b
---
Feedback and support: TensorBlock's Twitter/X, Telegram Group and Discord server
This repo contains GGUF format model files for Technoculture/Mediquad-4x7b.
The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4242.
First, install the Hugging Face CLI:
pip install -U "huggingface_hub[cli]"
Then, download an individual model file to a local directory:
huggingface-cli download tensorblock/Mediquad-4x7b-GGUF --include "Mediquad-4x7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
If you want to download multiple model files matching a pattern (e.g., *Q4_K*gguf), you can run:
huggingface-cli download tensorblock/Mediquad-4x7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
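The `--include` flag selects files with a shell-style glob pattern. A minimal sketch of how such a pattern filters filenames, using Python's standard `fnmatch` module (the file names below are illustrative of the usual llama.cpp quant naming, not an exhaustive listing of this repo):

```python
import fnmatch

# Illustrative candidate file names following the typical quant naming scheme.
files = [
    "Mediquad-4x7b-Q2_K.gguf",
    "Mediquad-4x7b-Q4_K_S.gguf",
    "Mediquad-4x7b-Q4_K_M.gguf",
    "Mediquad-4x7b-Q8_0.gguf",
]

# The same glob pattern passed to --include above:
# "*" matches any run of characters, so this keeps names that
# contain "Q4_K" and end in "gguf".
matches = fnmatch.filter(files, "*Q4_K*gguf")
print(matches)  # only the Q4_K variants
```

Only the files whose names match the pattern are downloaded; everything else in the repo is skipped.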