# Keynote-Technology/TinyKAI-3B-beta-GGUF

Quantized GGUF model files for TinyKAI-3B-beta from Keynote-Technology.
| Name | Quant method | Size |
| --- | --- | --- |
| tinykai-3b-beta.q2_k.gguf | q2_k | 2.15 GB |
| tinykai-3b-beta.q3_k_m.gguf | q3_k_m | 2.27 GB |
| tinykai-3b-beta.q4_k_m.gguf | q4_k_m | 2.58 GB |
| tinykai-3b-beta.q5_k_m.gguf | q5_k_m | 2.76 GB |
| tinykai-3b-beta.q6_k.gguf | q6_k | 3.64 GB |
| tinykai-3b-beta.q8_0.gguf | q8_0 | 3.64 GB |
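Each file above is a single GGUF container holding the quantized weights plus model metadata. As a minimal sketch of how a GGUF file is identified, the snippet below parses the fixed-size header (little-endian magic `b"GGUF"`, a uint32 version, then uint64 tensor and metadata counts, per the GGUF specification); the function name and the synthetic header bytes are illustrative, not part of this repository.

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size GGUF header.

    Layout (little-endian): 4-byte magic b"GGUF", uint32 format version,
    uint64 tensor count, uint64 metadata key/value count.
    """
    magic = data[:4]
    if magic != b"GGUF":
        raise ValueError(f"not a GGUF file (magic={magic!r})")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Synthetic header for demonstration: version 3, 2 tensors, 5 metadata entries.
sample = b"GGUF" + struct.pack("<IQQ", 3, 2, 5)
print(read_gguf_header(sample))
```

In practice you would not parse these files by hand; runtimes such as llama.cpp (or bindings like llama-cpp-python) load GGUF files directly by path.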
## Original Model Card
TinyKAI 3B is a fine-tuned LLM (Large Language Model) based on OpenLlama 3B v2. The TinyKAI models are a series of lightweight LLMs under 5 billion parameters, typically used for research.
### Direct Use
TinyKAI 3B is optimal for research on large language models, specifically the influence of web data on the properties of large language models (fairness, safety, limitations, capabilities, etc.).
### Training
This model was trained on a mixture of the Falcon refined-web dataset, the StarCoder dataset, and the wikipedia, arxiv, book, and stackexchange parts of the RedPajama dataset.
### Banned Use
Production use without adequate assessment of risks and mitigations; any use case that could be considered irresponsible, harmful, or insulting to any person or group.
### Limitations
TinyKAI 3B was trained on English data only and will not generate reasonable content in other languages. Because it was trained on data representative of the web, it carries the stereotypes and biases commonly encountered online.
### Recommendations
We recommend that users of TinyKAI 3B fine-tune it for their own use case, and that precautions be taken before any commercial use.
### WARNING!
This model depends on an older version of transformers (v4.10.0) and may therefore be unstable.
## Model tree for afrideva/TinyKAI-3B-beta-GGUF

Base model: Keynote-Technology/TinyKAI-3B-v0.1