Mixtral_AI_MasterTron-GGUF

Quantized GGUF model files for Mixtral_AI_MasterTron from LeroyDyer

Original Model Card:

To get a truly great model, merge the series into a single model, i.e. the Tron Series: a very powerful merge, using different methods and a different philosophy at each stage.

Uploaded model

  • Developed by: LeroyDyer
  • License: apache-2.0
  • Finetuned from model: LeroyDyer/Mixtral_AI_MasterMind_II

Updated to include function calling, upgraded to Mistral 3.0, and extended with some Bible text.

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
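
The Unsloth + TRL workflow referenced above usually follows the pattern sketched below. This is a minimal sketch, not the author's actual training run: the dataset file, LoRA settings, and training hyperparameters are illustrative assumptions; only the base model name comes from this card.

```python
# Minimal Unsloth + TRL fine-tuning sketch (illustrative, not the original recipe).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Base model named in this card; loaded in 4-bit to save memory.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_MasterMind_II",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (ranks and target modules are assumed, typical defaults).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical instruction dataset with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        logging_steps=10,
    ),
)
trainer.train()
```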

Downloads last month: 33

GGUF
  • Model size: 7.24B params
  • Architecture: llama
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

Inference Examples
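A minimal sketch of running one of the quantized GGUF files locally with llama-cpp-python. The file name below is an assumption; substitute whichever quantization level you downloaded from this repo.

```python
# Load a quantized GGUF file and run a single chat completion (illustrative).
from llama_cpp import Llama

llm = Llama(
    model_path="Mixtral_AI_MasterTron.Q4_K_M.gguf",  # hypothetical 4-bit file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU only
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the parable of the sower."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```

Lower-bit files (2-bit, 3-bit) trade answer quality for smaller downloads and lower RAM use; the 5-bit, 6-bit, and 8-bit files stay closer to the original weights at the cost of size.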