MaziyarPanahi committed
Commit 3e70180 · verified · 1 parent: 31ff1f6

Upload folder using huggingface_hub (#1)


- f89f7c908e96177ce34f64ff55f32bfdd99dc55d628be7eb6e51d87f84bf58a4 (9e8dce16c1246b1f5bd41ba346cfeadc75f62db3)
- 00b7029901af04438fbea54c044949086ded5390a84ea7a5c04e83d517027373 (903245d7893b11aeb3dbcc298d873268d5fec875)
- 6cac99da178ae5683d01d3f0d83242b3dc6237e57e4814880e9a5781827392fc (b2015b1a01790bac3b00544194f11f53d7f05cd5)
- d29ca24139f1ae272823952ca8aaa8f61cfab9c6b45f935cc384973d67de41cc (21976f06a3fbc3d8a0c5dd0fb3fa9db2199d1e6f)
- b5f6fcff9dd3f416cb3a09dc7737b164ddbdb66c711922c160aace6a9ee7897f (ce11fa6254776c6d89f91afedf94787eb08a4545)
- 3d1d37981721ebea56ad3b4e7880902fc68ad5c56961e236caf6e2cd507eb30f (6af130a1dcebc1b966f36fb4d6a42443a318a43d)

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+MN-Chinofun-12B-3.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+MN-Chinofun-12B-3.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+MN-Chinofun-12B-3.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+MN-Chinofun-12B-3.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+MN-Chinofun-12B-3.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+MN-Chinofun-12B-3-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
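The attribute lines in this hunk route matching paths through Git LFS instead of plain Git storage. For the simple glob patterns used here, the routing decision can be sketched with Python's `fnmatch` (a simplification: full gitattributes matching has extra rules such as `**` and path anchoring that this ignores):

```python
import fnmatch

# Patterns from this .gitattributes hunk: three pre-existing context lines
# plus the six patterns added by this commit.
LFS_PATTERNS = [
    "*.zip",
    "*.zst",
    "*tfevents*",
    "MN-Chinofun-12B-3.Q5_K_M.gguf",
    "MN-Chinofun-12B-3.Q5_K_S.gguf",
    "MN-Chinofun-12B-3.Q6_K.gguf",
    "MN-Chinofun-12B-3.Q8_0.gguf",
    "MN-Chinofun-12B-3.fp16.gguf",
    "MN-Chinofun-12B-3-GGUF_imatrix.dat",
]

def tracked_by_lfs(path: str) -> bool:
    """Approximate gitattributes pattern matching with shell-style globs."""
    return any(fnmatch.fnmatch(path, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("MN-Chinofun-12B-3.Q5_K_M.gguf"))  # True
print(tracked_by_lfs("README.md"))                       # False
```

This is why the README added later in this commit is stored as a normal Git blob while every `.gguf` file appears only as a small LFS pointer.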
MN-Chinofun-12B-3-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73ce42ed8ddc558b82a1487ea770582b691faf2120451943b8741b982db23d84
+size 7054394
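Each large file in this commit is checked in as a three-line Git LFS pointer like the one above (spec version, content hash, byte size), not as the file itself. A minimal sketch of parsing that format (`parse_lfs_pointer` is a hypothetical helper, not part of any LFS tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The imatrix pointer added in this commit:
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:73ce42ed8ddc558b82a1487ea770582b691faf2120451943b8741b982db23d84
size 7054394
"""
info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:73ce42ed...
print(int(info["size"]))  # 7054394
```

The `oid` is the SHA-256 of the actual file content, which LFS uses to fetch the real blob from the LFS store on checkout.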
MN-Chinofun-12B-3.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c4d6fcdc34160c0974e2aac5272add25d106aa22f6588f2a1af2e7ac5f6dd2a
+size 8727632640
MN-Chinofun-12B-3.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1d2feb8e76a6b1bdf1617824266ae0e5436e448ebf11bd0aa4d316833550059
+size 8518736640
MN-Chinofun-12B-3.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1b8ba31fa4d9cd1b416bf36d75af1fe0562251b59a22c190e4a1f9d54369be7
+size 10056211200
MN-Chinofun-12B-3.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:163293a0a5bd612a35e8e0798b65212c43cb76279660c56874518fdb2434e94d
+size 13022370560
MN-Chinofun-12B-3.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bdacdcadfb65bd378fa722088e81b5636408d2c8030f5194c85035b0fbcc24bb
+size 24504277536
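The pointer `size` fields above give each quant's on-disk footprint in bytes, which is the main input when choosing a quant for a given amount of RAM/VRAM. A quick sketch converting the sizes from this commit to GiB:

```python
# LFS pointer sizes (bytes) for the quants uploaded in this commit.
SIZES = {
    "Q5_K_S": 8_518_736_640,
    "Q5_K_M": 8_727_632_640,
    "Q6_K": 10_056_211_200,
    "Q8_0": 13_022_370_560,
    "fp16": 24_504_277_536,
}

def to_gib(n_bytes: int) -> float:
    """Convert a byte count to GiB (2**30 bytes)."""
    return n_bytes / 2**30

for name, size in SIZES.items():
    print(f"{name}: {to_gib(size):.2f} GiB")
```

Note that actual memory use at inference time is somewhat higher than the file size, since the KV cache and compute buffers come on top of the weights.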
README.md ADDED
@@ -0,0 +1,45 @@
+---
+base_model: djuna/MN-Chinofun-12B-3
+inference: false
+model_creator: djuna
+model_name: MN-Chinofun-12B-3-GGUF
+pipeline_tag: text-generation
+quantized_by: MaziyarPanahi
+tags:
+- quantized
+- 2-bit
+- 3-bit
+- 4-bit
+- 5-bit
+- 6-bit
+- 8-bit
+- GGUF
+- text-generation
+---
+# [MaziyarPanahi/MN-Chinofun-12B-3-GGUF](https://huggingface.co/MaziyarPanahi/MN-Chinofun-12B-3-GGUF)
+- Model creator: [djuna](https://huggingface.co/djuna)
+- Original model: [djuna/MN-Chinofun-12B-3](https://huggingface.co/djuna/MN-Chinofun-12B-3)
+
+## Description
+[MaziyarPanahi/MN-Chinofun-12B-3-GGUF](https://huggingface.co/MaziyarPanahi/MN-Chinofun-12B-3-GGUF) contains GGUF-format model files for [djuna/MN-Chinofun-12B-3](https://huggingface.co/djuna/MN-Chinofun-12B-3).
+
+### About GGUF
+
+GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
+* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+* [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
+* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+* [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server. Note: as of the time of writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.