
Planck-OpenLAiNN-10M-GGUF 🤗

Hey there fellow researchers, developers, and AI enthusiasts! Today I'm releasing a new family of models, Planck LAiNN. These are probably some of the smallest LLMs on HF. They aren't super useful, but it was a fun experiment!

These are the GGUF quants of the models. The original models can be found here.
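
If you want to grab one of the quants programmatically, a minimal sketch using huggingface_hub follows; the repo id and file name are assumptions, so substitute the actual values from this repo's file list.

```python
# Minimal sketch: download a quant with huggingface_hub.
# The repo id and file name are assumptions -- substitute the actual
# values from this repo's "Files and versions" tab.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="UUFO-Aigis/Planck-OpenLAiNN-10M-GGUF",  # hypothetical repo id
    filename="Planck-OpenLAiNN-10M.Q8_0.gguf",       # hypothetical file name
)
print(path)  # local cache path of the downloaded GGUF file
```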

Models Overview

  • Planck-OpenLAiNN-10M: A truly tiny model at just 10 million parameters. It's probably borderline useless, but it IS functional (see the run sketch after this list).
  • Planck-OpenLAiNN-25M: The second-smallest model at 25 million parameters; it's not that much better.
  • Planck-OpenLAiNN-50M: Surprisingly smart for its 50 million parameters; it could potentially, maybe, possibly even be useful ;)
  • Planck-OpenLAiNN-75M: The current "heavy" weight of the Planck-OpenLAiNN models.
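
Since these are GGUF files, any llama.cpp-based runtime should be able to load them. Here's a minimal sketch using llama-cpp-python; the file name is an assumption, not an officially tested snippet.

```python
# Minimal sketch: run a downloaded quant with llama-cpp-python.
# The file name is an assumption; use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Planck-OpenLAiNN-10M.Q8_0.gguf",  # hypothetical file name
    n_ctx=1024,  # matches the 1024-token pretraining context
)

out = llm("Once upon a time,", max_tokens=32)
print(out["choices"][0]["text"])
```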

Pretraining Details

Planck-OpenLAiNN was trained on 32B tokens of the Fineweb dataset, the same one used for the Pico-LAiNN family of models. The models were pretrained with a context length of 1024 tokens.
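
For intuition only, here's a rough sketch of what packing a streamed Fineweb dump into 1024-token training sequences can look like; the dataset id, tokenizer choice, and packing logic are assumptions, not the actual training pipeline.

```python
# Rough sketch (NOT the actual training pipeline): stream Fineweb and
# pack the text into fixed 1024-token sequences for pretraining.
# The dataset id and tokenizer choice are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed tokenizer
ds = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

buffer, ctx_len = [], 1024
for row in ds:
    buffer.extend(tok(row["text"]).input_ids)
    while len(buffer) >= ctx_len:
        seq, buffer = buffer[:ctx_len], buffer[ctx_len:]
        # `seq` is one 1024-token training example
```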

Other information:

  • Compatibility: Built to be compatible with existing projects that use Llama 2's tokenizer and architecture (see the loading sketch after this list).
  • Ease of Use: No need to reinvent the wheel. These models are ready to be plugged into your applications.
  • Open Source: Fully open source, so you can tweak, tune, and twist them to your heart's content.
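
Because the models use Llama 2's tokenizer and architecture, the original (non-GGUF) checkpoints should load with plain transformers calls. A minimal sketch, assuming a hypothetical repo id:

```python
# Sketch: load one of the original (non-GGUF) checkpoints with transformers.
# The repo id is an assumption -- point it at the actual model repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "UUFO-Aigis/Planck-OpenLAiNN-10M"  # hypothetical repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

ids = tok("Tiny models can still", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20)
print(tok.decode(out[0], skip_special_tokens=True))
```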

Benchy

Task             Value    Stderr
arc_challenge    0.1766   ± 0.0111
arc_easy         0.3144   ± 0.0095
boolq            0.5847   ± 0.0086
hellaswag        0.2622   ± 0.0044
lambada_openai   0.0047   ± 0.0009
piqa             0.5718   ± 0.0115
winogrande       0.4957   ± 0.0141
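
The task names and ± stderr columns match the output format of EleutherAI's lm-evaluation-harness. If you want to re-run the numbers, a hedged sketch using the harness's Python API is below; the repo id is an assumption, and the harness is assumed rather than confirmed as the tool behind this table.

```python
# Sketch: re-running these tasks with EleutherAI's lm-evaluation-harness
# (pip install lm-eval). The repo id is hypothetical, and the harness is
# assumed -- not confirmed -- to be the tool behind the table above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=UUFO-Aigis/Planck-OpenLAiNN-10M",  # hypothetical repo id
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag",
           "lambada_openai", "piqa", "winogrande"],
)
print(results["results"])
```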

Future Plans

  • More Models: I'm currently training the bigger siblings of Pico-OpenLAiNN, including a 1B-parameter version and beyond; 2-4 billion-parameter versions are planned. These will be released as OpenLAiNN.
  • New Architecture: This is still up in the air and still in development; things are going well, and I'll post updates.
  • Paper: A detailed paper or the training details will be posted at some point.

Credit Where Credit's Due

If you find these models useful and decide to use them, a link to this repository would be highly appreciated. I'm a one-man show running this, and I'm doing it for free. Thanks! 🤗

Contact

If you have questions, please reach out to me at [email protected].
