---
language:
- en
license: other
tags:
- axolotl
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- science
- physics
- chemistry
- biology
- math
- qwen
- qwen2
- llama-cpp
- gguf-my-repo
base_model: Weyaxi/Einstein-v7-Qwen2-7B
datasets:
- allenai/ai2_arc
- camel-ai/physics
- camel-ai/chemistry
- camel-ai/biology
- camel-ai/math
- metaeval/reclor
- openbookqa
- mandyyyyii/scibench
- derek-thomas/ScienceQA
- TIGER-Lab/ScienceEval
- jondurbin/airoboros-3.2
- LDJnr/Capybara
- Cot-Alpaca-GPT4-From-OpenHermes-2.5
- STEM-AI-mtl/Electrical-engineering
- knowrohit07/saraswati-stem
- sablo/oasst2_curated
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- bigbio/med_qa
- meta-math/MetaMathQA-40K
- openbookqa
- piqa
- metaeval/reclor
- derek-thomas/ScienceQA
- scibench
- sciq
- Open-Orca/SlimOrca
- migtissera/Synthia-v1.3
- TIGER-Lab/ScienceEval
- allenai/WildChat
- microsoft/orca-math-word-problems-200k
- openchat/openchat_sharegpt4_dataset
- teknium/GPTeacher-General-Instruct
- m-a-p/CodeFeedback-Filtered-Instruction
- totally-not-an-llm/EverythingLM-data-V3
- HuggingFaceH4/no_robots
- OpenAssistant/oasst_top1_2023-08-25
- WizardLM/WizardLM_evol_instruct_70k
- abacusai/SystemChat-1.1
- H-D-T/Buzz-V1.2
model-index:
- name: Einstein-v7-Qwen2-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 41.0
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v7-Qwen2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 32.84
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v7-Qwen2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 15.18
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v7-Qwen2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.6
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v7-Qwen2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.06
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v7-Qwen2-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 34.4
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Weyaxi/Einstein-v7-Qwen2-7B
      name: Open LLM Leaderboard
---

# AIronMind/Einstein-v7-Qwen2-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Weyaxi/Einstein-v7-Qwen2-7B`](https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Weyaxi/Einstein-v7-Qwen2-7B) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo AIronMind/Einstein-v7-Qwen2-7B-Q4_K_M-GGUF --hf-file einstein-v7-qwen2-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo AIronMind/Einstein-v7-Qwen2-7B-Q4_K_M-GGUF --hf-file einstein-v7-qwen2-7b-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo AIronMind/Einstein-v7-Qwen2-7B-Q4_K_M-GGUF --hf-file einstein-v7-qwen2-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo AIronMind/Einstein-v7-Qwen2-7B-Q4_K_M-GGUF --hf-file einstein-v7-qwen2-7b-q4_k_m.gguf -c 2048
```
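
Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API that you can query from any client. A minimal sketch of a chat-completion request, assuming the server from the command above is listening on the default port 8080; the prompt and `max_tokens` value are illustrative:

```bash
# JSON body for an OpenAI-compatible chat completion request (illustrative prompt)
PAYLOAD='{"messages":[{"role":"user","content":"Explain entropy in one sentence."}],"max_tokens":128}'
echo "$PAYLOAD"

# With llama-server running (see above), send it:
# curl -s http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```

The response follows the OpenAI chat-completion schema, so the generated text is under `choices[0].message.content`.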