Update README.md
README.md CHANGED

@@ -11,7 +11,7 @@ TL;DR: this model has had certain weights manipulated to "inhibit" the model's a
 ## GGUF quants
 Uploaded quants:
 
-fp16 (in main) - good for converting to other platforms or getting the quantization you actually want, not recommended
+fp16 (in main) - good for converting to other platforms or getting the quantization you actually want, not recommended but obviously highest quality
 
 q8_0 (in own branch) - if you've got the spare capacity, might as well?
 
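Since the q8_0 quant sits in its own branch rather than in main, a minimal sketch of fetching it with `huggingface_hub` via the `revision` parameter is shown below; the repo id and GGUF filename are placeholders, not the actual paths in this repo.

```python
# Minimal sketch: download a quant stored in its own branch.
# NOTE: repo_id and filename are hypothetical placeholders -- substitute
# the real model repo and GGUF filename from the repo's file listing.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="your-username/your-model-GGUF",  # placeholder repo id
    filename="model-q8_0.gguf",               # placeholder GGUF filename
    revision="q8_0",                          # branch that holds the q8_0 quant
)
print(path)  # local cache path of the downloaded file
```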