---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- bluuwhale/L3-SAO-MIX-8B-V1
- Sao10K/L3-8B-Niitama-v1
- Sao10K/L3-8B-Lunaris-v1
- Sao10K/L3-8B-Tamamo-v1
- Sao10K/L3-8B-Stheno-v3.2
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc-GGUF

This is a quantized version of [Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc](https://huggingface.co/Casual-Autopsy/L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc), created using llama.cpp.
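
If you want to try the quants locally, below is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The `.gguf` filename is an assumption; substitute whichever quant you download from this repo.

```python
# Minimal sketch, assuming `pip install llama-cpp-python` and a downloaded quant.
# The filename below is hypothetical -- use whichever .gguf file you grabbed.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc.Q4_K_M.gguf",
    n_ctx=8192,  # Llama-3 8B's native context length
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a one-line greeting."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```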

# Original Model Card

# L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc

This is a remerge of [bluuwhale's merge](https://huggingface.co/bluuwhale/L3-SAO-MIX-8B-V1) using the exact same YAML config, the only difference being that the merge calculations are done in fp32 instead of bf16.

I've done this because I plan to use it in another merge, but you can use it as is if you wish.
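
As a toy illustration of why the computation dtype matters (this is not mergekit's actual code), combining several small-valued tensors entirely in bf16 accumulates rounding error that fp32 accumulation avoids:

```python
import torch

# Toy illustration (not mergekit's code): average four tensors in bf16 vs fp32.
torch.manual_seed(0)
tensors = [torch.randn(4096) * 1e-3 for _ in range(4)]

bf16_avg = sum(t.to(torch.bfloat16) for t in tensors) / 4  # all math in bf16
fp32_avg = (sum(tensors) / 4).to(torch.bfloat16)           # math in fp32, cast at the end

print((bf16_avg.float() - fp32_avg.float()).abs().max())   # small but nonzero drift
```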

## Merge Details

### Merge Method

This model was merged using the della merge method, with [Sao10K/L3-8B-Niitama-v1](https://huggingface.co/Sao10K/L3-8B-Niitama-v1) as the base.
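
For readers unfamiliar with della: it stochastically prunes each model's delta from the base, with keep probabilities tied to parameter magnitude, then rescales the survivors before combining. Below is a rough sketch of that core idea only; the function name, parameter names, and the exact probability schedule are illustrative assumptions, not mergekit's implementation.

```python
import torch

# Rough sketch of della-style magnitude-based drop-and-rescale
# (illustrative only; mergekit's actual implementation differs in details).
def magnitude_prune(delta: torch.Tensor, density: float = 0.5, epsilon: float = 0.1) -> torch.Tensor:
    flat = delta.flatten()
    ranks = flat.abs().argsort().argsort().float()  # 0 = smallest magnitude
    # Keep-probabilities spread around `density`, higher for larger magnitudes.
    keep_p = (density - epsilon / 2) + epsilon * ranks / max(flat.numel() - 1, 1)
    keep_p = keep_p.clamp(1e-3, 1.0)
    keep = torch.bernoulli(keep_p).bool()
    pruned = torch.zeros_like(flat)
    pruned[keep] = flat[keep] / keep_p[keep]  # rescale so the expectation matches delta
    return pruned.view_as(delta)
```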

### Models Merged

The following models were included in the merge:

* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [Sao10K/L3-8B-Tamamo-v1](https://huggingface.co/Sao10K/L3-8B-Tamamo-v1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/L3-8B-Lunaris-v1
    parameters:
      weight: 1.0
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      weight: 1.0
  - model: Sao10K/L3-8B-Niitama-v1
    parameters:
      weight: 1.0
  - model: Sao10K/L3-8B-Tamamo-v1
    parameters:
      weight: 1.0
base_model: Sao10K/L3-8B-Niitama-v1
merge_method: della
dtype: float32
out_dtype: bfloat16
```
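
To reproduce the merge, save the config above (e.g., as `config.yaml`) and run it through mergekit's `mergekit-yaml` CLI. A minimal sketch, assuming `pip install mergekit`; the output path is an arbitrary choice:

```python
import subprocess

# Minimal sketch: drive mergekit's CLI from Python. Paths are assumptions.
subprocess.run(
    ["mergekit-yaml", "config.yaml", "./L3-bluuwhale-SAO-MIX-8B-V1_fp32-merge-calc"],
    check=True,  # raise if the merge fails
)
```

Adding `--cuda` to the command offloads the merge computation to a GPU if one is available.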