---
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
license: apache-2.0
tags:
- merge
- llama-cpp
- gguf-my-repo
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
pipeline_tag: text-generation
base_model: Epiculous/Violet_Twilight-v0.2
model-index:
- name: Violet_Twilight-v0.2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 45.32
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 23.94
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 2.72
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 2.13
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.61
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 23.45
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Epiculous/Violet_Twilight-v0.2
      name: Open LLM Leaderboard
---

# AIronMind/Violet_Twilight-v0.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`Epiculous/Violet_Twilight-v0.2`](https://huggingface.co/Epiculous/Violet_Twilight-v0.2) using llama.cpp, via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Epiculous/Violet_Twilight-v0.2) for more details on the model.

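If you prefer fetching the quantized file from Python rather than letting the llama.cpp tools download it on first use, a minimal sketch using the `huggingface_hub` package (an assumption: the package is installed via `pip install huggingface_hub`; the repo and file names are the ones used in the commands below):

```python
# Sketch: download the quantized checkpoint into the local HF cache.
# REPO_ID and FILENAME match the --hf-repo / --hf-file arguments used
# elsewhere in this card.
REPO_ID = "AIronMind/Violet_Twilight-v0.2-Q4_K_M-GGUF"
FILENAME = "violet_twilight-v0.2-q4_k_m.gguf"

def download_model() -> str:
    """Fetch the GGUF file and return its local path."""
    # Deferred import so the constants are usable without the package.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=REPO_ID, filename=FILENAME)

if __name__ == "__main__":
    print(download_model())
```

The returned path can then be passed directly to `llama-cli -m <path>` or `llama-server -m <path>`.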
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo AIronMind/Violet_Twilight-v0.2-Q4_K_M-GGUF --hf-file violet_twilight-v0.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo AIronMind/Violet_Twilight-v0.2-Q4_K_M-GGUF --hf-file violet_twilight-v0.2-q4_k_m.gguf -c 2048
```
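Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API (by default on port 8080; the port here is an assumption, adjust it if you launched the server with `--port`). A minimal sketch of querying it from Python using only the standard library:

```python
# Sketch: send a chat completion request to a locally running llama-server.
import json
import urllib.request

def build_chat_request(prompt: str, n_predict: int = 128) -> dict:
    # Payload for POST /v1/chat/completions (OpenAI-compatible endpoint).
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": n_predict,
    }

def query_server(prompt: str,
                 url: str = "http://127.0.0.1:8080/v1/chat/completions") -> str:
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(query_server("The meaning to life and the universe is"))
```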

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo AIronMind/Violet_Twilight-v0.2-Q4_K_M-GGUF --hf-file violet_twilight-v0.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo AIronMind/Violet_Twilight-v0.2-Q4_K_M-GGUF --hf-file violet_twilight-v0.2-q4_k_m.gguf -c 2048
```