---
base_model: Dimensity/Dimensity-3B
inference: false
language:
  - en
license: mit
model_creator: Dimensity
model_name: Dimensity-3B
pipeline_tag: text-generation
quantized_by: afrideva
tags:
  - sft
  - gguf
  - ggml
  - quantized
  - q2_k
  - q3_k_m
  - q4_k_m
  - q5_k_m
  - q6_k
  - q8_0
---

# Dimensity/Dimensity-3B-GGUF

Quantized GGUF model files for Dimensity-3B from Dimensity

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| dimensity-3b.fp16.gguf | fp16 | 5.59 GB |
| dimensity-3b.q2_k.gguf | q2_k | 1.20 GB |
| dimensity-3b.q3_k_m.gguf | q3_k_m | 1.39 GB |
| dimensity-3b.q4_k_m.gguf | q4_k_m | 1.71 GB |
| dimensity-3b.q5_k_m.gguf | q5_k_m | 1.99 GB |
| dimensity-3b.q6_k.gguf | q6_k | 2.30 GB |
| dimensity-3b.q8_0.gguf | q8_0 | 2.97 GB |
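
These files can be loaded with any GGUF-compatible runtime. As a minimal sketch (not part of the original card), here is how one of the quantized files could be run locally with llama-cpp-python, assuming the file has already been downloaded; the file name comes from the table above and the context size and sampling settings are illustrative:

```python
# Minimal sketch: running a quantized GGUF file with llama-cpp-python
# (pip install llama-cpp-python). The file name is taken from the table
# above; n_ctx and max_tokens are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="dimensity-3b.q4_k_m.gguf",  # any file from the table works
    n_ctx=2048,                             # assumed context window
)

# Prompt formatted with the template described in the original model card.
prompt = "### Human: What is GGUF quantization?\n\n### Assistant:"
output = llm(
    prompt,
    max_tokens=256,
    stop=["### Human:"],  # stop before the model starts a new human turn
)
print(output["choices"][0]["text"])
```

In general, the smaller quantizations (q2_k, q3_k_m) trade output quality for lower memory use, while q6_k and q8_0 stay closest to the fp16 weights.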

## Original Model Card:

# Dimensity-3B

## Model Details

Dimensity-3B is a fine-tuned version of StableLM trained on a variety of conversational data. It contains 3 billion parameters.

## Intended Uses

This model is intended for conversational AI applications. It can engage in open-ended dialogue by generating responses to user prompts.

## Training Data

The model was trained on a large dataset of over 100 million conversational exchanges extracted from Reddit comments, customer support logs, and other online dialogues.

## Prompt Template

The model was finetuned using the following prompt template:

```
### Human: {prompt}

### Assistant:
```

This prompts the model to take on an assistant role.
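
As an illustrative sketch (the helper name and example message are assumptions, not part of the original card), the template can be applied programmatically before sending text to the model:

```python
# Illustrative helper for the prompt template above; the function name and
# example message are assumptions, not part of the original model card.
PROMPT_TEMPLATE = "### Human: {prompt}\n\n### Assistant:"

def build_prompt(user_message: str) -> str:
    """Wrap a user message in the Human/Assistant template."""
    return PROMPT_TEMPLATE.format(prompt=user_message)

if __name__ == "__main__":
    print(build_prompt("Summarize the plot of Hamlet in two sentences."))
```

When generating, a stop sequence such as `### Human:` keeps the model from continuing past its own turn, as shown in the loading example above.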

## Ethical Considerations

As the model was trained on public conversational data, it may generate responses that contain harmful stereotypes or toxic content. The model should be used with caution in sensitive contexts.

## Caveats and Recommendations

This model is designed for open-ended conversation. It may sometimes generate plausible-sounding but incorrect information. Outputs should be validated against external sources.