Triangle104 committed on
Commit db6c7e2 · verified · 1 Parent(s): d9d3a27

Update README.md

Files changed (1)
  1. README.md +30 -0
README.md CHANGED
@@ -10,6 +10,36 @@ tags:
  This model was converted to GGUF format from [`SicariusSicariiStuff/2B_or_not_2B`](https://huggingface.co/SicariusSicariiStuff/2B_or_not_2B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/2B_or_not_2B) for more details on the model.

+ ---
+ The model's name is fully credited to invisietch and Shakespeare; without them, this model would not have existed.
+
+ Regarding the question, I am happy to announce that it is, in fact, 2B, as it is so stated on the original Google model card for the model this one was finetuned from.
+
+ If there's one thing we can count on, it is Google to tell us what is true and what is misinformation. You should always trust and listen to your elders, and especially to your big brother.
+
+ This model was finetuned on a whimsical whim, on my poor laptop. It's not really poor, the GPU is a 4090 16GB, but... it is driver-locked to 80 watts because NVIDIA probably does not have the resources to make better drivers for Linux. I hope NVIDIA will manage to recover, as I have seen poor Jensen in the same old black leather jacket for years upon years. The stock is already down about 22% this month (August 11th, 2024).
+
+ Finetuning took about 4 hours, while the laptop was on my lap and while I was talking about books and stuff on Discord. Luckily, the laptop wasn't too hot, as 80 watts is not the 175 W I was promised, which would surely have been hot enough to make an omelette. Always remain an optimist, fellas!
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
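The diff context stops at the install line above. As a minimal sketch of how that section typically continues under the GGUF-my-repo template, with placeholder repo and file names (the actual quant artifacts of this commit may differ):

```bash
# Install llama.cpp via Homebrew (works on macOS and Linux)
brew install llama.cpp

# Run the GGUF straight from the Hugging Face Hub.
# The --hf-repo and --hf-file values are placeholders; substitute the quant you actually use.
llama-cli --hf-repo Triangle104/2B_or_not_2B-Q4_K_M-GGUF \
  --hf-file 2b_or_not_2b-q4_k_m.gguf \
  -p "To be, or not to be"
```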