mradermacher committed cbdf046 (1 parent: ae8a985)

auto-patch README.md

Files changed (1): README.md (+3 −1)
README.md CHANGED
@@ -5,6 +5,9 @@ datasets:
 - pankajmathur/WizardLM_Orca
 language:
 - en
+ - de
+ - es
+ - fr
 library_name: transformers
 quantized_by: mradermacher
 ---
@@ -42,7 +45,6 @@ more details, including on how to concatenate multi-part files.
 | [PART 1](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.Q8_0.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.Q8_0.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.Q8_0.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.Q8_0.gguf.part4of4) | Q8_0 | 189.8 | fast, best quality |
 | [P1](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.SOURCE.gguf.part1of8) [P2](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.SOURCE.gguf.part2of8) [P3](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.SOURCE.gguf.part3of8) [P4](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.SOURCE.gguf.part4of8) [P5](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.SOURCE.gguf.part5of8) [P6](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.SOURCE.gguf.part6of8) [P7](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.SOURCE.gguf.part7of8) [P8](https://huggingface.co/mradermacher/falcon-180B-WizardLM_Orca-GGUF/resolve/main/falcon-180B-WizardLM_Orca.SOURCE.gguf.part8of8) | SOURCE | 357.2 | source gguf, only provided when it was hard to come by |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
 
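The diff context above refers to concatenating multi-part files: the split GGUF parts in the table must be joined, in order, into a single `.gguf` file before loading. A minimal sketch of the reassembly with `cat`, using tiny dummy stand-in files (the real Q8_0 parts are ~47 GB each, so this only illustrates the mechanics):

```shell
# Dummy stand-ins for the real part files (names here are illustrative):
printf 'AAAA' > model.gguf.part1of2
printf 'BBBB' > model.gguf.part2of2

# Concatenate the parts in order to reassemble the single GGUF file:
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

With the actual files from the table, the equivalent command would be along the lines of `cat falcon-180B-WizardLM_Orca.Q8_0.gguf.part1of4 falcon-180B-WizardLM_Orca.Q8_0.gguf.part2of4 falcon-180B-WizardLM_Orca.Q8_0.gguf.part3of4 falcon-180B-WizardLM_Orca.Q8_0.gguf.part4of4 > falcon-180B-WizardLM_Orca.Q8_0.gguf` — check the model card for the authoritative instructions.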