---
base_model: Dimensity/Dimensity-3B
inference: false
language:
- en
license: mit
model_creator: Dimensity
model_name: Dimensity-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- sft
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# Dimensity/Dimensity-3B-GGUF
Quantized GGUF model files for [Dimensity-3B](https://huggingface.co./Dimensity/Dimensity-3B) from [Dimensity](https://huggingface.co./Dimensity).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [dimensity-3b.fp16.gguf](https://huggingface.co./afrideva/Dimensity-3B-GGUF/resolve/main/dimensity-3b.fp16.gguf) | fp16 | 5.59 GB |
| [dimensity-3b.q2_k.gguf](https://huggingface.co./afrideva/Dimensity-3B-GGUF/resolve/main/dimensity-3b.q2_k.gguf) | q2_k | 1.20 GB |
| [dimensity-3b.q3_k_m.gguf](https://huggingface.co./afrideva/Dimensity-3B-GGUF/resolve/main/dimensity-3b.q3_k_m.gguf) | q3_k_m | 1.39 GB |
| [dimensity-3b.q4_k_m.gguf](https://huggingface.co./afrideva/Dimensity-3B-GGUF/resolve/main/dimensity-3b.q4_k_m.gguf) | q4_k_m | 1.71 GB |
| [dimensity-3b.q5_k_m.gguf](https://huggingface.co./afrideva/Dimensity-3B-GGUF/resolve/main/dimensity-3b.q5_k_m.gguf) | q5_k_m | 1.99 GB |
| [dimensity-3b.q6_k.gguf](https://huggingface.co./afrideva/Dimensity-3B-GGUF/resolve/main/dimensity-3b.q6_k.gguf) | q6_k | 2.30 GB |
| [dimensity-3b.q8_0.gguf](https://huggingface.co./afrideva/Dimensity-3B-GGUF/resolve/main/dimensity-3b.q8_0.gguf) | q8_0 | 2.97 GB |
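As a minimal sketch of how these files can be used, the snippet below downloads one of the quantized variants and loads it with `llama-cpp-python`. The package names, the chosen quantization file, and the context size are assumptions for illustration; any GGUF-compatible runtime will work.

```python
# Sketch: fetch a quantized file from this repo and load it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; APIs may differ by version.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/Dimensity-3B-GGUF",
    filename="dimensity-3b.q4_k_m.gguf",  # pick any quant from the table above
)

llm = Llama(model_path=model_path, n_ctx=2048)
```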
## Original Model Card:
```Dimensity-3B```
# Model Details
Dimensity-3B is a fine-tuned version of StableLM trained on a variety of conversational data. It contains 3 billion parameters.
# Intended Uses
This model is intended for conversational AI applications. It can engage in open-ended dialogue by generating responses to user prompts.
# Training Data
The model was trained on a large dataset of over 100 million conversational exchanges extracted from Reddit comments, customer support logs, and other online dialogues.
# Prompt Template
The model was finetuned using the following prompt template:
```
### Human: {prompt}
### Assistant:
```
This prompts the model to take on an assistant role.
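For illustration, the template can be filled in with a plain string before generation. The sketch below reuses the hypothetical `llm` object from the loading example above; the stop sequence and token limit are assumptions.

```python
def build_prompt(user_message: str) -> str:
    # Wrap the user message in the fine-tuning prompt template shown above.
    return f"### Human: {user_message}\n### Assistant:"

prompt = build_prompt("What is the capital of France?")
# Stop at the next "### Human:" turn so the model answers only once.
output = llm(prompt, max_tokens=128, stop=["### Human:"])
print(output["choices"][0]["text"].strip())
```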
# Ethical Considerations
As the model was trained on public conversational data, it may generate responses that contain harmful stereotypes or toxic content. The model should be used with caution in sensitive contexts.
# Caveats and Recommendations
This model is designed for open-ended conversation. It may sometimes generate plausible-sounding but incorrect information. Outputs should be validated against external sources.