---
tags:
- autotrain
- text-generation
base_model: ahxt/llama2_xs_460M_experimental
datasets:
- KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format
widget:
- text: |-
    ### Instruction:
    Find me a list of some nice places to visit around the world.
    
    ### Response:
- text: |-
    ### Instruction:
    Tell me a story about some magical place.
    
    ### Response:
- text: |-
    ### Instruction:
    Tell me all you know about the Earth.
    
    ### Response:
inference:
  parameters:
    max_new_tokens: 32
    repetition_penalty: 1.15
    do_sample: true
    temperature: 0.5
    top_p: 0.5
---

# ahxt's llama2_xs_460M_experimental trained on WizardLM's Evol Instruct dataset using AutoTrain

- Base model: [ahxt/llama2_xs_460M_experimental](https://huggingface.co./ahxt/llama2_xs_460M_experimental)
- Dataset: [KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format](https://huggingface.co./datasets/KnutJaegersberg/WizardLM_evol_instruct_V2_196k_instruct_format)
- [Training hyperparameters](https://huggingface.co./Felladrin/llama2_xs_460M_experimental_evol_instruct/blob/cc151c5669ea37c3ef972e375c74f2d9bfd92b49/training_params.json)
- Availability in other ML formats:
  - GGUF: [afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF](https://huggingface.co./afrideva/llama2_xs_460M_experimental_evol_instruct-GGUF)
  - ONNX: [Felladrin/onnx-llama2_xs_460M_experimental_evol_instruct](https://huggingface.co./Felladrin/onnx-llama2_xs_460M_experimental_evol_instruct)

## Recommended Prompt Format

```
### Instruction:
<instruction>

### Response:
```
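
For illustration, a minimal Python sketch of wrapping an instruction in this template (the `build_prompt` helper is hypothetical, not part of the model):

```python
def build_prompt(instruction: str) -> str:
    # Wrap a user instruction in the template expected by this model.
    return f"### Instruction:\n{instruction}\n\n### Response:"

print(build_prompt("Tell me a story about some magical place."))
```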

## Recommended Inference Parameters

```yml
repetition_penalty: 1.15
do_sample: true
temperature: 0.5
top_p: 0.5
```
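
As a usage sketch, these parameters can be passed to the Transformers text-generation pipeline (the model id is this repository; `max_new_tokens` mirrors the widget config above and can be raised as needed):

```python
from transformers import pipeline

# Load this model through the standard text-generation pipeline.
generate = pipeline(
    "text-generation",
    model="Felladrin/llama2_xs_460M_experimental_evol_instruct",
)

prompt = "### Instruction:\nTell me all you know about the Earth.\n\n### Response:"

# Recommended inference parameters from this card.
output = generate(
    prompt,
    max_new_tokens=32,
    repetition_penalty=1.15,
    do_sample=True,
    temperature=0.5,
    top_p=0.5,
)
print(output[0]["generated_text"])
```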