GGUFs of Llama-3-8B-16K

GGUF conversion and quantization of https://huggingface.co./mattshumer/Llama-3-8B-16K

Done with Maxime Labonne's AutoGGUF
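For reference, AutoGGUF automates roughly the following llama.cpp pipeline. This is a hedged sketch: the local directory and output file names are illustrative assumptions, and it prints the commands rather than executing them, since it must be run from a llama.cpp checkout with the tools built.

```shell
# Sketch of the manual llama.cpp steps that AutoGGUF automates.
# MODEL_DIR is an assumed local clone of the HF repo.
MODEL_DIR=Llama-3-8B-16K
F16=llama-3-8b-16k.f16.gguf

for cmd in \
  "python convert_hf_to_gguf.py $MODEL_DIR --outfile $F16 --outtype f16" \
  "./llama-quantize $F16 llama-3-8b-16k.Q4_K_M.gguf Q4_K_M" \
  "./llama-quantize $F16 llama-3-8b-16k.Q8_0.gguf Q8_0"
do
  echo "$cmd"   # print each step for review before running it
done
```

Run the printed commands in order: the first converts the HF checkpoint to a full-precision GGUF, the rest quantize it to the desired bit widths.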

Original model card

This is an extended-context (16K) version of Llama 3 8B (the base model, not instruct). It was trained for five hours on 8x A6000 GPUs using the Yukang/LongAlpaca-16k-length dataset.

rope_theta was set to 1000000.0. Trained with Axolotl.
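As a minimal sketch of why raising rope_theta helps at longer contexts: each RoPE dimension pair rotates with wavelength 2π·θ^(2i/d), so a larger base θ stretches the longest wavelengths, keeping distant positions distinguishable. The head dimension of 128 and Llama 3's default θ of 500,000 are assumptions here, not taken from this card.

```python
import math

def rope_wavelengths(theta, dim=128):
    # Wavelength of each rotary dimension pair: 2*pi * theta**(2i/dim).
    # dim=128 assumes Llama 3 8B's head dimension.
    return [2 * math.pi * theta ** (2 * i / dim) for i in range(dim // 2)]

# Llama 3's assumed default base vs. this model's extended base
default_max = max(rope_wavelengths(500_000.0))
extended_max = max(rope_wavelengths(1_000_000.0))
print(default_max, extended_max)
```

The larger base roughly doubles the longest wavelength, which is the lever long-context finetunes like this one pull before training on long samples.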

Model size: 8.03B params
Architecture: llama

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit


Dataset used to train olafgeibig/Llama-3-8B-16K-GGUF: Yukang/LongAlpaca-16k-length