---
base_model: Maykeye/TinyLLama-v0
inference: false
license: apache-2.0
model_creator: Maykeye
model_name: TinyLLama-v0
pipeline_tag: text-generation
quantized_by: afrideva
tags:
  - gguf
  - ggml
  - quantized
  - q2_k
  - q3_k_m
  - q4_k_m
  - q5_k_m
  - q6_k
  - q8_0
---

# Maykeye/TinyLLama-v0-GGUF

Quantized GGUF model files for TinyLLama-v0 from Maykeye.

| Name | Quant method | Size |
| --- | --- | --- |
| tinyllama-v0.fp16.gguf | fp16 | 11.08 MB |
| tinyllama-v0.q2_k.gguf | q2_k | 5.47 MB |
| tinyllama-v0.q3_k_m.gguf | q3_k_m | 5.63 MB |
| tinyllama-v0.q4_k_m.gguf | q4_k_m | 5.79 MB |
| tinyllama-v0.q5_k_m.gguf | q5_k_m | 5.95 MB |
| tinyllama-v0.q6_k.gguf | q6_k | 6.72 MB |
| tinyllama-v0.q8_0.gguf | q8_0 | 6.75 MB |
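These files are intended for llama.cpp-compatible runtimes. As a minimal sketch (not part of the original card), the snippet below loads one of the quantized files with the llama-cpp-python bindings; the chosen file, path, prompt, and generation parameters are illustrative assumptions:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Assumes tinyllama-v0.q4_k_m.gguf was downloaded into the working directory.
llm = Llama(model_path="tinyllama-v0.q4_k_m.gguf")

# TinyStories-style models work best with simple story openings.
output = llm("Once upon a time", max_tokens=64)
print(output["choices"][0]["text"])
```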

## Original Model Card:

This is a first version of recreating roneneldan/TinyStories-1M, but using the Llama architecture.

* The full training process is included in the notebook train.ipynb. Recreating it is as simple as downloading TinyStoriesV2-GPT4-train.txt and TinyStoriesV2-GPT4-valid.txt into the same folder as the notebook and running the cells (see the download sketch after this list). Validation content is not used by the training script, so you can put anything in it.

* The Backup directory has a script, do_backup, that I used to copy weights from the remote machine to my local one. Weights were generated too quickly for the script to keep up: by the time it had copied weight N+1, newer weights already existed.

* This is an extremely proof-of-concept version. Training truncates stories that are longer than the context size and doesn't use a sliding window, so stories are only ever trained from the start.

* Training took approximately 9 hours (3 hours per epoch) on a 40 GB A100; ~30 GB of VRAM was used.

* I use the tokenizer from open_llama_3b. I had trouble with it locally (https://github.com/openlm-research/open_llama/issues/69), but no trouble on the cloud machine with preinstalled libraries.

* The demo script is demo.py.

* A validation script is provided: valid.py. Use it like `python valid.py path/to/TinyStoriesV2-GPT4-valid.txt [optional-model-id-or-path]`. After training I decided that it's not necessary to break validation into chunks.

* This version also uses a very naive caching mechanism to shuffle stories for training: it keeps a cache of the N most recently loaded chunks, so when the random shuffle asks for a story, it is served either from the cache or from a freshly loaded chunk (see the sketch after this list). The training dataset is small enough that this isn't needed, so I will get rid of it in future versions.
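A minimal sketch of the download step mentioned in the first bullet, assuming the two text files are hosted in the roneneldan/TinyStories dataset repo on the Hub (the repo id and filenames are assumptions, not taken from the original card):

```python
from huggingface_hub import hf_hub_download

# Assumed location: the roneneldan/TinyStories dataset repo on the Hugging Face Hub.
for filename in ("TinyStoriesV2-GPT4-train.txt", "TinyStoriesV2-GPT4-valid.txt"):
    hf_hub_download(
        repo_id="roneneldan/TinyStories",
        repo_type="dataset",
        filename=filename,
        local_dir=".",  # same folder as train.ipynb
    )
```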
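The chunk cache described in the last bullet could look roughly like this (a hypothetical sketch; `ChunkCache` and `load_chunk` are illustrative names, not the actual training code):

```python
import random
from collections import OrderedDict

class ChunkCache:
    """Keep the N most recently loaded chunks of stories in memory."""

    def __init__(self, load_chunk, num_chunks, cache_size):
        self.load_chunk = load_chunk  # callable: chunk index -> list of stories
        self.num_chunks = num_chunks  # total number of chunks on disk
        self.cache_size = cache_size  # N: how many chunks to keep around
        self.cache = OrderedDict()    # chunk index -> list of stories

    def random_story(self):
        # Pick a random chunk; serve from cache if possible, else load it.
        idx = random.randrange(self.num_chunks)
        if idx not in self.cache:
            self.cache[idx] = self.load_chunk(idx)
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)  # evict the oldest chunk
        return random.choice(self.cache[idx])
```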

Example usage with 🤗 Transformers (a minimal sketch: it loads the base Maykeye/TinyLLama-v0 checkpoint; the prompt and generation settings below are illustrative):
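```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the original fp16 checkpoint (the GGUF files above are for llama.cpp runtimes).
model = AutoModelForCausalLM.from_pretrained("Maykeye/TinyLLama-v0")
tokenizer = AutoTokenizer.from_pretrained("Maykeye/TinyLLama-v0")

# Generate a short TinyStories-style continuation.
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```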