Tags: GGUF, English, Mixture of Experts, olmo, olmoe, Inference Endpoints, conversational

This is a GGUF version of https://huggingface.co./allenai/OLMoE-1B-7B-0924-Instruct.
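These files should run with any llama.cpp-based runtime. Below is a minimal sketch using the llama-cpp-python bindings; the filename glob is an assumption, so check the repository's file list for the exact quantization names.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are
# installed (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="allenai/OLMoE-1B-7B-0924-Instruct-GGUF",
    filename="*q4_k_m.gguf",  # hypothetical glob: pick the quant you want
    n_ctx=4096,               # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly, what is a mixture-of-experts model?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

The same files also work directly with the llama.cpp CLI and server.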

Citation:

```bibtex
@misc{muennighoff2024olmoeopenmixtureofexpertslanguage,
      title={OLMoE: Open Mixture-of-Experts Language Models},
      author={Niklas Muennighoff and Luca Soldaini and Dirk Groeneveld and Kyle Lo and Jacob Morrison and Sewon Min and Weijia Shi and Pete Walsh and Oyvind Tafjord and Nathan Lambert and Yuling Gu and Shane Arora and Akshita Bhagia and Dustin Schwenk and David Wadden and Alexander Wettig and Binyuan Hui and Tim Dettmers and Douwe Kiela and Ali Farhadi and Noah A. Smith and Pang Wei Koh and Amanpreet Singh and Hannaneh Hajishirzi},
      year={2024},
      eprint={2409.02060},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2409.02060},
}
```
Format: GGUF
Model size: 6.92B params
Architecture: olmoe

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
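Lower-bit quantizations are smaller and faster at some cost in quality. Below is a sketch for listing and fetching a single quantization file with huggingface_hub; the filenames are discovered at runtime, since they are not reproduced here.

```python
# Sketch: list the available .gguf quantizations and download one.
# Uses only huggingface_hub; which file you pick is up to you.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "allenai/OLMoE-1B-7B-0924-Instruct-GGUF"
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)  # inspect the exact quantization filenames

# Download the first match; substitute the quant you actually want.
path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print(path)
```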

