---
base_model: Felladrin/llama2_xs_460M_experimental_evol_instruct
datasets:
- KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format
inference: false
model_creator: Felladrin
model_name: llama2_xs_460M_experimental_evol_instruct
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- autotrain
- text-generation
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: '### Instruction:
Find me a list of some nice places to visit around the world.
### Response:'
- text: '### Instruction:
Tell me all you know about the Earth.
### Response:'
---
# Felladrin/llama2_xs_460M_experimental_evol_instruct-GGUF
Quantized GGUF model files for [llama2_xs_460M_experimental_evol_instruct](https://huggingface.co./Felladrin/llama2_xs_460M_experimental_evol_instruct) from [Felladrin](https://huggingface.co./Felladrin).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama2_xs_460m_experimental_evol_instruct.q2_k.gguf](https://huggingface.co./afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF/resolve/main/llama2_xs_460m_experimental_evol_instruct.q2_k.gguf) | q2_k | 212.56 MB |
| [llama2_xs_460m_experimental_evol_instruct.q3_k_m.gguf](https://huggingface.co./afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF/resolve/main/llama2_xs_460m_experimental_evol_instruct.q3_k_m.gguf) | q3_k_m | 238.87 MB |
| [llama2_xs_460m_experimental_evol_instruct.q4_k_m.gguf](https://huggingface.co./afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF/resolve/main/llama2_xs_460m_experimental_evol_instruct.q4_k_m.gguf) | q4_k_m | 288.52 MB |
| [llama2_xs_460m_experimental_evol_instruct.q5_k_m.gguf](https://huggingface.co./afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF/resolve/main/llama2_xs_460m_experimental_evol_instruct.q5_k_m.gguf) | q5_k_m | 333.29 MB |
| [llama2_xs_460m_experimental_evol_instruct.q6_k.gguf](https://huggingface.co./afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF/resolve/main/llama2_xs_460m_experimental_evol_instruct.q6_k.gguf) | q6_k | 380.87 MB |
| [llama2_xs_460m_experimental_evol_instruct.q8_0.gguf](https://huggingface.co./afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF/resolve/main/llama2_xs_460m_experimental_evol_instruct.q8_0.gguf) | q8_0 | 492.67 MB |
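
Any of the files above can be fetched with `huggingface_hub`. A minimal sketch (the repo id and filename come from the table; substitute whichever quantization you want):

```python
from huggingface_hub import hf_hub_download

# Download one of the GGUF files listed above; hf_hub_download returns
# the local path to the cached file.
model_path = hf_hub_download(
    repo_id="afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF",
    filename="llama2_xs_460m_experimental_evol_instruct.q4_k_m.gguf",
)
print(model_path)
```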
## Original Model Card:
# ahxt's llama2_xs_460M_experimental trained on WizardLM's Evol Instruct dataset using AutoTrain
- Base model: [ahxt/llama2_xs_460M_experimental](https://huggingface.co./ahxt/llama2_xs_460M_experimental)
- Dataset: [KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format](https://huggingface.co./datasets/KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format)
- Training: 13.5h under [these parameters](https://huggingface.co./Felladrin/llama2_xs_460M_experimental_evol_instruct/blob/cc151c5669ea37c3ef972e375c74f2d9bfd92b49/training_params.json)
## Recommended Prompt Format
```
### Instruction:
<instruction>
### Response:
```
## Recommended Inference Parameters
```yml
repetition_penalty: 1.15
do_sample: true
temperature: 0.5
top_p: 0.5
```
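
As an illustration, a GGUF file from this repository can be run with `llama-cpp-python`, combining the prompt format and parameters above. This is a sketch, not part of the original card: `repetition_penalty` maps onto llama.cpp's `repeat_penalty`, and `do_sample: true` has no separate switch since sampling is the default.

```python
from llama_cpp import Llama

# Path to one of the GGUF files listed above (downloaded beforehand).
llm = Llama(model_path="llama2_xs_460m_experimental_evol_instruct.q4_k_m.gguf")

# Prompt in the recommended "### Instruction: / ### Response:" format.
prompt = (
    "### Instruction:\n"
    "Tell me all you know about the Earth.\n"
    "### Response:"
)

output = llm(
    prompt,
    max_tokens=256,
    temperature=0.5,
    top_p=0.5,
    repeat_penalty=1.15,          # corresponds to repetition_penalty above
    stop=["### Instruction:"],    # stop before the model invents a new turn
)
print(output["choices"][0]["text"])
```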