This model was converted to GGUF format from [`SicariusSicariiStuff/2B-ad`](https://huggingface.co/SicariusSicariiStuff/2B-ad) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/2B-ad) for more details on the model.
---
## Model details

This is a Gemma-2 2B finetune with surprisingly good role-play capabilities for its small 2B size.

Update:

The size is not exactly 2B; it's closer to 3B. It's a model I did some merges on a long time ago and forgot about, then finetuned on top of.

Also, the increased size seems to stem from an old mergekit Gemma-2 quirk: the way the previous version of mergekit handled the lm_head. Anyway, it turned out pretty awesome, even at 3B size. The base is presented in FP32.

Details:

- Censorship level: Low (7.3 / 10, where 10 is completely uncensored)
- Intended use: Creative Writing, Role-Play, General tasks.

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
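A minimal sketch of the usual workflow for GGUF repos like this one. The `.gguf` filename below is hypothetical; substitute the actual quantized file you download from this repo.

```shell
# Install llama.cpp (Homebrew formula works on macOS and Linux)
brew install llama.cpp

# Run the model interactively with llama-cli.
# Replace the filename with the quant you downloaded (e.g. Q4_K_M, Q8_0).
llama-cli -m ./2b-ad-q4_k_m.gguf -p "Once upon a time"

# Or serve it over an OpenAI-compatible HTTP API:
llama-server -m ./2b-ad-q4_k_m.gguf -c 2048
```

`-c` sets the context length; adjust it to taste within what the base model supports.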