Orenguteng committed
Commit 26b840c
1 Parent(s): 93f5174

Update README.md

Files changed (1): README.md (+112 −3)
README.md CHANGED
@@ -1,5 +1,100 @@
 ---
 license: llama3.1
+model-index:
+- name: Llama-3.1-8B-Lexi-Uncensored-V2
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 77.92
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 29.69
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 16.92
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 4.36
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 7.77
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 30.9
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
+      name: Open LLM Leaderboard
 ---
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/7mnEJyioRzQaWz8xLM4KI.png)
@@ -8,18 +103,18 @@ VERSION 2 Update Notes:
 ---
 - More compliant
 - Smarter
-- Not fully evaluated, but scores higher on Winogrande compared to the original instruct model. 0.77901 vs 0.78848
 - For best response, use this system prompt (feel free to expand upon it as you wish):
 
 Think step by step with a logical reasoning and intellectual sense before you provide any response.
 
 - For more uncensored and compliant response, you can expand the system message differently, or simply enter a dot "." as system message.
 
-- IMPORTANT:
-Upon further investigation, the Q4 seems to have refusal issues sometimes.
+- IMPORTANT: Upon further investigation, the Q4 seems to have refusal issues sometimes.
 There seems to be some of the fine-tune loss happening due to the quantization. I will look into it for V3.
 Until then, I suggest you run F16 or Q8 if possible.
 
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/zaHhRjsk3rvo_YewgXV2Z.png)
+
 GENERAL INFO:
 ---
 
@@ -43,3 +138,17 @@ If you find any issues or have suggestions for improvements, feel free to leave
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/644ad182f434a6a63b18eee6/uqJv-R1LeJEfMxi1nmTH5.png)
 
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Orenguteng__Llama-3.1-8B-Lexi-Uncensored-V2)
+
+| Metric |Value|
+|-------------------|----:|
+|Avg. |27.93|
+|IFEval (0-Shot) |77.92|
+|BBH (3-Shot) |29.69|
+|MATH Lvl 5 (4-Shot)|16.92|
+|GPQA (0-shot) | 4.36|
+|MuSR (0-shot) | 7.77|
+|MMLU-PRO (5-shot) |30.90|
+
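The diff above recommends a specific system prompt (or a bare "." for a more compliant persona). As a minimal usage sketch, the snippet below assembles a single-turn prompt string in the standard Llama 3.1 chat format by hand; the special tokens are the ones documented for Llama 3.1 and should be treated as an assumption here. In practice, `tokenizer.apply_chat_template` from `transformers` builds the same string from a messages list.

```python
# Build a Llama 3.1-style prompt carrying the README's recommended system
# message. Special tokens follow the standard Llama 3.1 chat format.
SYSTEM_PROMPT = (
    "Think step by step with a logical reasoning and intellectual sense "
    "before you provide any response."
)

def build_prompt(user_message: str, system_prompt: str = SYSTEM_PROMPT) -> str:
    """Assemble a single-turn Llama 3.1 chat prompt as a raw string."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system_prompt}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt("Summarize the rules of chess in two sentences.")
```

For the more uncensored behavior the notes describe, pass `system_prompt="."` instead of the default.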