---
base_model: cnmoro/ahxt_llama2_xs_460M_experimental_ptbr_instruct
inference: false
language:
- en
- pt
model_creator: cnmoro
model_name: ahxt_llama2_xs_460M_experimental_ptbr_instruct
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- text: "### Instruction: \nSua instrução aqui\n\n### Response:\n"
---
# cnmoro/ahxt_llama2_xs_460M_experimental_ptbr_instruct-GGUF

Quantized GGUF model files for [ahxt_llama2_xs_460M_experimental_ptbr_instruct](https://huggingface.co./cnmoro/ahxt_llama2_xs_460M_experimental_ptbr_instruct) from [cnmoro](https://huggingface.co./cnmoro).
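
To grab one of the quantized files locally, here is a minimal download sketch using `huggingface_hub` (an assumption; you can equally download the file from the table below by hand). The filename is just one example quant level:

```python
# Minimal sketch, assuming the huggingface_hub package is installed.
# Pick any filename from the table below; q4_k_m is used here as an example.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="afrideva/ahxt_llama2_xs_460M_experimental_ptbr_instruct-GGUF",
    filename="ahxt_llama2_xs_460m_experimental_ptbr_instruct.q4_k_m.gguf",
)
print(model_path)  # local path to the cached GGUF file
```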


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [ahxt_llama2_xs_460m_experimental_ptbr_instruct.fp16.gguf](https://huggingface.co./afrideva/ahxt_llama2_xs_460M_experimental_ptbr_instruct-GGUF/resolve/main/ahxt_llama2_xs_460m_experimental_ptbr_instruct.fp16.gguf) | fp16 | 925.45 MB  |
| [ahxt_llama2_xs_460m_experimental_ptbr_instruct.q2_k.gguf](https://huggingface.co./afrideva/ahxt_llama2_xs_460M_experimental_ptbr_instruct-GGUF/resolve/main/ahxt_llama2_xs_460m_experimental_ptbr_instruct.q2_k.gguf) | q2_k | 212.56 MB  |
| [ahxt_llama2_xs_460m_experimental_ptbr_instruct.q3_k_m.gguf](https://huggingface.co./afrideva/ahxt_llama2_xs_460M_experimental_ptbr_instruct-GGUF/resolve/main/ahxt_llama2_xs_460m_experimental_ptbr_instruct.q3_k_m.gguf) | q3_k_m | 238.87 MB  |
| [ahxt_llama2_xs_460m_experimental_ptbr_instruct.q4_k_m.gguf](https://huggingface.co./afrideva/ahxt_llama2_xs_460M_experimental_ptbr_instruct-GGUF/resolve/main/ahxt_llama2_xs_460m_experimental_ptbr_instruct.q4_k_m.gguf) | q4_k_m | 288.52 MB  |
| [ahxt_llama2_xs_460m_experimental_ptbr_instruct.q5_k_m.gguf](https://huggingface.co./afrideva/ahxt_llama2_xs_460M_experimental_ptbr_instruct-GGUF/resolve/main/ahxt_llama2_xs_460m_experimental_ptbr_instruct.q5_k_m.gguf) | q5_k_m | 333.29 MB  |
| [ahxt_llama2_xs_460m_experimental_ptbr_instruct.q6_k.gguf](https://huggingface.co./afrideva/ahxt_llama2_xs_460M_experimental_ptbr_instruct-GGUF/resolve/main/ahxt_llama2_xs_460m_experimental_ptbr_instruct.q6_k.gguf) | q6_k | 380.87 MB  |
| [ahxt_llama2_xs_460m_experimental_ptbr_instruct.q8_0.gguf](https://huggingface.co./afrideva/ahxt_llama2_xs_460M_experimental_ptbr_instruct-GGUF/resolve/main/ahxt_llama2_xs_460m_experimental_ptbr_instruct.q8_0.gguf) | q8_0 | 492.67 MB  |
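
As a rough inference sketch, the files can be run with any GGUF-compatible runtime; the example below assumes `llama-cpp-python` and uses the `### Instruction:` / `### Response:` prompt format shown in the widget above. The instruction text and generation parameters are illustrative only:

```python
# Minimal sketch, assuming llama-cpp-python is installed and the q4_k_m file
# was downloaded as shown earlier. Parameters are illustrative, not tuned.
from llama_cpp import Llama

llm = Llama(model_path="ahxt_llama2_xs_460m_experimental_ptbr_instruct.q4_k_m.gguf")

# Prompt format from the model card widget; the instruction itself is an example.
prompt = "### Instruction: \nEscreva uma frase sobre o Brasil.\n\n### Response:\n"

output = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```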



## Original Model Card:
Fine-tuned version of [ahxt/llama2\_xs\_460M\_experimental](https://huggingface.co./ahxt/llama2_xs_460M_experimental), trained on a PT-BR (Brazilian Portuguese) instruction dataset.

Like the base model, this is experimental and intended for research purposes only.