Kquant03 committed
Commit a53ef0a · verified · 1 Parent(s): c9c42e5

Update README.md

Files changed (1): README.md +12 -1
README.md CHANGED
@@ -21,7 +21,18 @@ An augmentation of the script I used for Eukaryote...hoping that this one does e
  - [alnrg2arg/test2_4](https://huggingface.co/alnrg2arg/blockchainlabs_7B_merged_test2_4) - expert #6
  - [mlabonne/Beagle14-7B](https://huggingface.co/mlabonne/Beagle14-7B) - expert #7
  - [eren23/slerp-test-turdus-beagle](https://huggingface.co/eren23/slerp-test-turdus-beagle) - expert #8
-
+ ## Provided files
+
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [Q2_K Tiny](https://huggingface.co/Kquant03/Prokaryote-8x7B-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 15.6 GB | 17.6 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Q3_K_M](https://huggingface.co/Kquant03/Prokaryote-8x7B-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 20.4 GB | 22.4 GB | very small, high quality loss |
+ | [Q4_0](https://huggingface.co/Kquant03/Prokaryote-8x7B-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 26.4 GB | 28.4 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Q4_K_M](https://huggingface.co/Kquant03/Prokaryote-8x7B-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | ~26.4 GB | ~28.4 GB | medium, balanced quality - recommended |
+ | [Q5_0](https://huggingface.co/Kquant03/Prokaryote-8x7B-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 32.2 GB | 34.2 GB | legacy; large, balanced quality |
+ | [Q5_K_M](https://huggingface.co/Kquant03/Prokaryote-8x7B-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~32.2 GB | ~34.2 GB | large, balanced quality - recommended |
+ | [Q6 XL](https://huggingface.co/Kquant03/Prokaryote-8x7B-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 38.4 GB | 40.4 GB | very large, extremely minor degradation |
+ | [Q8 XXL](https://huggingface.co/Kquant03/Prokaryote-8x7B-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 49.6 GB | 51.4 GB | very large, extremely minor degradation - not recommended |
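
For reference, here is a minimal sketch of loading one of the files above with the llama-cpp-python bindings (`pip install llama-cpp-python`). The file name matches the Q4_K_M row; every setting shown is illustrative, not a recommendation from this repo.

```python
# Minimal sketch, assuming llama-cpp-python is installed and the GGUF file
# from the table above has been downloaded locally. Settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="ggml-model-q4_k_m.gguf",  # any file from the table works
    n_ctx=2048,      # context window; raise it if you have RAM to spare
    n_gpu_layers=0,  # >0 offloads layers to VRAM, lowering the system-RAM
                     # figures quoted in the "Max RAM required" column
)

out = llm("Q: What is a Mixture of Experts?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```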
  # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
  ### (from the MistralAI papers...click the quoted question above to navigate to it directly.)
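
The linked post explains the idea at length. Purely as a toy illustration (not this model's or Mixtral's actual code), top-2 gating over eight experts can be sketched in PyTorch; the names `gate`, `experts`, and `moe_forward` are hypothetical.

```python
# Toy sketch of top-2 expert routing: each token's gate logits pick two
# experts, their outputs are blended with softmax-normalized weights.
import torch
import torch.nn.functional as F

def moe_forward(x, gate, experts, top_k=2):
    """x: (tokens, dim); gate: Linear(dim, n_experts); experts: list of FFNs."""
    logits = gate(x)                                  # (tokens, n_experts)
    weights, idx = torch.topk(logits, top_k, dim=-1)  # 2 experts per token
    weights = F.softmax(weights, dim=-1)              # normalize over the pair
    out = torch.zeros_like(x)
    for k in range(top_k):
        for e in range(len(experts)):
            mask = idx[:, k] == e                     # tokens routed to expert e
            if mask.any():
                out[mask] += weights[mask, k, None] * experts[e](x[mask])
    return out

# toy usage: 8 tiny "experts" over 16-dim tokens
experts = [torch.nn.Linear(16, 16) for _ in range(8)]
gate = torch.nn.Linear(16, 8)
tokens = torch.randn(5, 16)
print(moe_forward(tokens, gate, experts).shape)  # torch.Size([5, 16])
```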