IntelligentEstate/Chocolat_Bite-14B-Q4_K_M-GGUF


This model was converted to GGUF format from the always amazing work of Jpacifico, jpacifico/Chocolatine-2-14B-Instruct-v2.0b2, using llama.cpp. Refer to the original model card for more details on the model.

Made as a larger (but still under 10 GB) base-station GGUF backbone for the Estate/Enterprise system, Project CutPurse (API FREEDOM) quant test.

Set it up as the base or writing layer of your swarm agent, or in your server, for quick and reliable inference (much better than ChatGPT o1/R1 when tied to tool use from Pancho and web queries from RSS feeds and so on) while keeping all your data and your clients'/family's financials secure.

Use with a Limit Crossing AGI template for your own Agent of Cohesion or Chaos. !!(Use LimitCrossing with Extreme Caution)!! Paper in Files.

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.
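For example, llama.cpp can pull the quant straight from this repo with its `--hf-repo`/`--hf-file` flags. The exact `.gguf` filename below is an assumption; check the repo's Files tab for the real name.

```shell
# One-shot generation with the CLI
# (--hf-file name is a guess; verify it in the repo's file listing)
llama-cli --hf-repo IntelligentEstate/Chocolat_Bite-14B-Q4_K_M-GGUF \
  --hf-file chocolat_bite-14b-q4_k_m.gguf \
  -p "Write one sentence about estate planning."

# Or serve an OpenAI-compatible endpoint on localhost:8080
llama-server --hf-repo IntelligentEstate/Chocolat_Bite-14B-Q4_K_M-GGUF \
  --hf-file chocolat_bite-14b-q4_k_m.gguf \
  -c 2048
```

The server route is what you want for the swarm/agent setup described above, since agents can hit it over the local network without any data leaving your machine.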

GGUF details

- Model size: 14.8B params
- Architecture: qwen2
- Quantization: 4-bit (Q4_K_M)

