  This model was converted to GGUF format from [`SicariusSicariiStuff/2B-ad`](https://huggingface.co/SicariusSicariiStuff/2B-ad) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/SicariusSicariiStuff/2B-ad) for more details on the model.
---

Model details:

This is a Gemma-2 2B finetune with surprisingly good role-play capabilities for its small size.

Update: the size is not exactly 2B; it is closer to 3B. It is built on a model I merged a long time ago and forgot about, then finetuned on top of.

Due to a quirk in an older version of mergekit, the increased size appears to come from the way that version handled the lm_head for Gemma-2. Either way, it turned out pretty awesome, even at 3B size. The base is presented in FP32.

Censorship level: Low

7.3 / 10 (10 = completely uncensored)

Intended use: Creative Writing, Role-Play, General tasks.

---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
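A minimal sketch of the setup, assuming Homebrew is installed. The `llama-cli` flags `--hf-repo` and `--hf-file` are the standard way to pull a GGUF directly from a Hugging Face repo; the repo name and quant filename below are assumptions, so substitute the actual values from this repository's file list:

```shell
# Install llama.cpp (provides llama-cli and llama-server)
brew install llama.cpp

# Run inference, downloading the GGUF from Hugging Face on first use.
# Repo and filename are placeholders; adjust to the quant you want.
llama-cli --hf-repo Triangle104/2B-ad-Q4_K_M-GGUF \
          --hf-file 2b-ad-q4_k_m.gguf \
          -p "The meaning to life and the universe is"
```

The same `--hf-repo`/`--hf-file` pair also works with `llama-server` if you want an OpenAI-compatible HTTP endpoint instead of a one-shot CLI run.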