Text Generation
Transformers
Safetensors
llama
finetuned
quantized
4-bit precision
gptq
dataset:ai2_arc
dataset:unalignment/spicy-3.1
dataset:codeparrot/apps
dataset:facebook/belebele
dataset:boolq
dataset:jondurbin/cinematika-v0.1
dataset:drop
dataset:lmsys/lmsys-chat-1m
dataset:TIGER-Lab/MathInstruct
dataset:cais/mmlu
dataset:Muennighoff/natural-instructions
dataset:openbookqa
dataset:piqa
dataset:Vezora/Tested-22k-Python-Alpaca
dataset:cakiki/rosetta-code
dataset:Open-Orca/SlimOrca
dataset:spider
dataset:squad_v2
dataset:migtissera/Synthia-v1.3
dataset:winogrande
dataset:nvidia/HelpSteer
dataset:Intel/orca_dpo_pairs
dataset:unalignment/toxic-dpo-v0.1
dataset:jondurbin/truthy-dpo-v0.1
dataset:allenai/ultrafeedback_binarized_cleaned
dataset:Squish42/bluemoon-fandom-1-1-rp-cleaned
dataset:LDJnr/Capybara
dataset:JULIELab/EmoBank
dataset:kingbri/PIPPA-shareGPT
Inference Endpoints
text-generation-inference
has_space
conversational
Eval Results
Commit 562f616
Parent(s): 7094ef1

Adding Evaluation Results (#1)

- Adding Evaluation Results (f582c4c5f613098adc5ff30e64845d2caad067d2)

Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>
README.md CHANGED

@@ -50,6 +50,109 @@ inference: false
 model_creator: one-man-army
 pipeline_tag: text-generation
 quantized_by: MaziyarPanahi
+model-index:
+- name: UNA-34Beagles-32K-bf16-v1-GPTQ
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 26.11
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 26.29
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 24.43
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 47.27
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 50.83
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 0.0
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ
+      name: Open LLM Leaderboard
 ---
 # Description
 [MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ](https://huggingface.co/MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ) is a quantized (GPTQ) version of [one-man-army/UNA-34Beagles-32K-bf16-v1](https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1)
@@ -97,4 +200,17 @@ pipe = pipeline(
 
 outputs = pipe("What is a large language model?")
 print(outputs[0]["generated_text"])
-```
+```
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__UNA-34Beagles-32K-bf16-v1-GPTQ)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |29.15|
+|AI2 Reasoning Challenge (25-Shot)|26.11|
+|HellaSwag (10-Shot)              |26.29|
+|MMLU (5-Shot)                    |24.43|
+|TruthfulQA (0-shot)              |47.27|
+|Winogrande (5-shot)              |50.83|
+|GSM8k (5-shot)                   | 0.00|
+
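
The hunks above show only fragments of the card's usage snippet (`pipe = pipeline(` in the second hunk header and the two context lines around it). For readers of this commit, here is a minimal, self-contained sketch of what loading a GPTQ checkpoint like this one with `transformers` typically looks like. Apart from the model id and the final two lines (both visible in the diff), every argument is an illustrative assumption rather than something taken from the card, and it additionally assumes GPTQ support is available (the `optimum` and `auto-gptq` packages) along with `accelerate` for device placement.

```python
# Minimal sketch, assuming transformers with GPTQ support (via the
# optimum and auto-gptq packages) plus accelerate are installed.
# Only the model id and the last two lines come from the diff above;
# all other arguments are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "MaziyarPanahi/UNA-34Beagles-32K-bf16-v1-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# GPTQ quantization parameters are read from the repo's config files,
# so no explicit quantization arguments are needed at load time.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=256,  # assumed generation setting
)

outputs = pipe("What is a large language model?")
print(outputs[0]["generated_text"])
```

`device_map="auto"` is one common choice for a 34B 4-bit checkpoint since it lets `accelerate` spread the weights across available devices; it is an assumption here, not a documented requirement of this repo.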