Mixtral 8X7B Instruct v0.1 - Llamafile πŸ¦™

Overview

This model card describes the mixtral-8x7b-instruct-v0.1.Q3_K_M.llamafile, a single-file executable version of the Mixtral 8X7B Instruct v0.1 model.
It is built upon the original work by TheBloke and Mistral AI, repackaged for ease of use as a standalone application.
See here

Like many of you, I am GPU poor. The goal behind this approach was to have easy access to a good open-source model with limited GPU resources, such as a MacBook Pro M1 with 32 GB of RAM.
This is not the full-precision model, but the Q3_K_M quantization is the most feasible option given the resource constraints - see here for notes on performance.

Usage

Because the model is packaged as a llamafile, it can be executed on any OS with no additional installation required. Read more about llamafile here.
To use this model, ensure you have execution permissions set:

chmod +x mixtral-8x7b-instruct-v0.1.Q3_K_M.llamafile
./mixtral-8x7b-instruct-v0.1.Q3_K_M.llamafile
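By default the executable opens a browser chat UI, but it also accepts the usual llama.cpp command-line flags. A minimal sketch, assuming the standard `-p` (one-shot prompt) and `-ngl` (GPU layer offload) options and Mixtral's `[INST] ... [/INST]` instruction format; the haiku prompt is just an illustrative example:

```shell
# Guard on the file so the snippet degrades gracefully if the
# llamafile hasn't been downloaded yet.
MODEL=./mixtral-8x7b-instruct-v0.1.Q3_K_M.llamafile
if [ -x "$MODEL" ]; then
  # -ngl 9999 offloads as many layers as fit onto the GPU
  # (Metal on Apple Silicon); use -ngl 0 to stay CPU-only.
  "$MODEL" -ngl 9999 -p '[INST] Write a haiku about llamas. [/INST]'
else
  echo "model file not found; download it first"
fi
```

On a 32 GB M1 machine, partial GPU offload via `-ngl` is usually the difference between usable and painfully slow generation, so it is worth experimenting with the value.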

See here for local API server details.
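When run as a server, llamafile exposes an HTTP endpoint you can script against. A hedged sketch, assuming the default port 8080 and an OpenAI-compatible `/v1/chat/completions` route (check the server's startup output for the actual address and routes); the model name and prompt are illustrative placeholders:

```shell
# JSON request body for the chat completions endpoint.
BODY='{
  "model": "mixtral-8x7b-instruct",
  "messages": [
    {"role": "user", "content": "Explain quantization in one sentence."}
  ],
  "temperature": 0.7
}'

# With the server running locally, send it with curl:
# curl -s http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$BODY"

# Sanity-check the payload without needing the server up:
echo "$BODY" | python3 -m json.tool > /dev/null && echo "request body is valid JSON"
```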

Credits and Acknowledgements

This executable is a derivative of TheBloke's GGUF quantization of Mistral AI's Mixtral model, repurposed for easier deployment. It is licensed under the same terms as TheBloke's model.

Limitations

As with the original Mixtral model, this executable does not include moderation mechanisms and should be used with consideration for its capabilities and limitations.

Additional Information

For more detailed instructions and insights, please refer to the original model documentation provided by TheBloke and Mistral AI.
