UUFO-Aigis committed (verified)
Commit 8817adb · 1 Parent(s): 35f007a

Added Models.

I fucking love breakcore.

.gitattributes CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Planck-OpenLAiNN-10M_F16.gguf filter=lfs diff=lfs merge=lfs -text
+ Planck-OpenLAiNN-10M_Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Planck-OpenLAiNN-10M_Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Planck-OpenLAiNN-10M_F16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c60e6de6e1a1ee6e350d0f09b3dcee921c39f547cb917e6065492dbc275219f8
+ size 26703680
Planck-OpenLAiNN-10M_Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:150d0c4c19ad9250e3b38874479397332c1206a53a275789b30cbda1d952ab4f
+ size 11104864
Planck-OpenLAiNN-10M_Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b7ff57afdc5d91b00426da5c0ec61b9a3874fde1053ef669e9648401a0c4cd32
+ size 14527072
README.md ADDED
@@ -0,0 +1,48 @@
+ # Planck-OpenLAiNN-10M-GGUF 🤗
+
+ Hey there, fellow researchers, developers, and AI enthusiasts! Today I'm releasing a new family of models, Planck LAiNN. These are probably some of the smallest LLMs on HF. They aren't super useful, but it was a fun experiment!~
+
+ These are the GGUF quants of the models. You can find the original models [here](https://huggingface.co/UUFO-Aigis/Planck-OpenLAiNN-10M).
+
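+ If you want to grab one of these quants programmatically, here is a minimal sketch using `huggingface_hub`. The repo id below is assumed from this page's title rather than stated in the README, so treat it as an assumption; the filenames are the `.gguf` files added in this commit.
+
+ ```python
+ # Minimal sketch: download a single quant from the Hub.
+ # Assumption: the repo id below matches this repository; the filenames are
+ # the .gguf files added in this commit.
+ from huggingface_hub import hf_hub_download
+
+ model_path = hf_hub_download(
+     repo_id="UUFO-Aigis/Planck-OpenLAiNN-10M-GGUF",  # assumed repo id
+     filename="Planck-OpenLAiNN-10M_Q4_0.gguf",       # or _Q8_0 / _F16
+ )
+ print(model_path)  # local path to the cached .gguf file
+ ```
+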
+ ## Models Overview
+
+ - **Planck-OpenLAiNN-10M**: A truly tiny model with just 10 million parameters. It's probably borderline useless, but it *is* functional.
+ - **Planck-OpenLAiNN-25M**: The second-smallest model, at 25 million parameters. It's not that much better.
+ - **Planck-OpenLAiNN-50M**: Surprisingly smart. At 50 million parameters it could potentially, maybe, possibly even be useful ;)
+ - **Planck-OpenLAiNN-75M**: The current *"heavy"* weight of the Planck-OpenLAiNN models.
+
+ ## Pretraining Details
+
+ Planck-OpenLAiNN was trained on 32B tokens of the Fineweb dataset, the same data used for the Pico-LAiNN family of models. The model was pretrained with a context length of 1024 tokens.
+
+ ## Other Information
+
+ - **Compatibility**: Built to be compatible with existing projects that use Llama 2's tokenizer and architecture.
+ - **Ease of Use**: No need to reinvent the wheel; these models are ready to be plugged into your applications (see the sketch after this list).
+ - **Open Source**: Fully open source, so you can tweak, tune, and twist them to your heart's content.
+
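+ As a concrete starting point, here's a minimal sketch of loading the 10M Q4_0 quant with the `llama-cpp-python` bindings. This usage path is an assumption on my part rather than something spelled out in this repo: any llama.cpp-based runtime should behave the same way, and `n_ctx=1024` simply matches the pretraining context length above.
+
+ ```python
+ # Minimal sketch: run a local Planck-OpenLAiNN GGUF quant with llama-cpp-python.
+ # Assumptions: `pip install llama-cpp-python` has been run, and the .gguf file
+ # (e.g. from the download snippet earlier) is in the working directory.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="Planck-OpenLAiNN-10M_Q4_0.gguf",
+     n_ctx=1024,  # matches the pretraining context length
+ )
+
+ out = llm("Once upon a time", max_tokens=64, temperature=0.8)
+ print(out["choices"][0]["text"])
+ ```
+
+ Don't expect coherent prose at 10M parameters, but it will happily generate tokens.
+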
+ ## Benchy
+
+ | Tasks          |  Value |   | Stderr |
+ |----------------|-------:|---|-------:|
+ | arc_challenge  | 0.1766 | ± | 0.0111 |
+ | arc_easy       | 0.3144 | ± | 0.0095 |
+ | boolq          | 0.5847 | ± | 0.0086 |
+ | hellaswag      | 0.2622 | ± | 0.0044 |
+ | lambada_openai | 0.0047 | ± | 0.0009 |
+ | piqa           | 0.5718 | ± | 0.0115 |
+ | winogrande     | 0.4957 | ± | 0.0141 |
+
+ (Yes, the lambada_openai score really is 0.0047.)
+
+ ## Future Plans
+
+ - **More Models**: I'm currently training the bigger siblings of Pico-OpenLAiNN, including a 1B-parameter version and beyond; 2-4 billion parameter versions are planned. These will be released as OpenLAiNN.
+ - **New Architecture**: This is still up in the air and I'm still developing it. Things are going well, and I'll post updates.
+ - **Paper**: A detailed paper or the training data will be posted at some point.
+
+ ## Credit Where Credit's Due
+
+ If you find these models useful and decide to use them, a link to this repository would be highly appreciated. I'm a one-man show running this, and I'm doing it for free. Thanks 🤗
+
+ ## Contact
+
+ If you have questions, please reach out to me at [email protected].
+
+ <p align="center">
+ <img src="UUFO.png" alt="U.U.F.O Research Logo" width="250"/>
+ </p>
UUFO.png ADDED