---
license: mit
base_model:
- DevQuasar/DevQuasar-R1-Uncensored-Llama-8B
pipeline_tag: text-generation
---
# DevQuasar-R1-Uncensored-Llama-8B
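The card is tagged `pipeline_tag: text-generation`. Below is a minimal, hedged usage sketch with the Hugging Face `transformers` pipeline; the prompt is illustrative, and the sampling settings simply mirror the `gen_kwargs` used in the eval run reported further down rather than an official recommendation.

```python
# Minimal usage sketch, assuming transformers, torch and accelerate are installed.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="DevQuasar/DevQuasar-R1-Uncensored-Llama-8B",
    torch_dtype="auto",   # assumption: let transformers pick a suitable dtype
    device_map="auto",    # assumption: automatic device placement (needs accelerate)
)

# The prompt is illustrative; temperature/top_p mirror the eval gen_kwargs below.
out = pipe(
    "Explain the difference between supervised and reinforcement learning.",
    max_new_tokens=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
print(out[0]["generated_text"])
```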
## Eval results
`hf (pretrained=DevQuasar/DevQuasar-R1-Uncensored-Llama-8B,parallelize=True,dtype=float16), gen_kwargs: (temperature=0.6,top_p=0.95,do_sample=True), limit: None, num_fewshot: None, batch_size: auto:4 (1,16,64,64)`
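The line above is the lm-evaluation-harness configuration reported for this run. As a hedged reproduction sketch, roughly the same settings can be expressed through the harness's Python API; the task selection (`hellaswag` plus the `leaderboard` group) is an assumption inferred from the results table below, and exact behaviour may vary across harness versions.

```python
# Hedged reproduction sketch, assuming `pip install lm_eval`
# (EleutherAI lm-evaluation-harness); not the exact command used by the author.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=DevQuasar/DevQuasar-R1-Uncensored-Llama-8B,"
        "parallelize=True,dtype=float16"
    ),
    tasks=["hellaswag", "leaderboard"],  # assumption: tasks inferred from the table below
    gen_kwargs="temperature=0.6,top_p=0.95,do_sample=True",
    batch_size="auto:4",
)
print(results["results"])  # per-task metrics, as summarized in the table below
```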
| Tasks |Version|Filter|n-shot| Metric | |Value | |Stderr|
|----------------------------------------------------------|-------|------|-----:|-----------------------|---|-----:|---|------|
|hellaswag | 1|none | 0|acc |↑ |0.6052|± |0.0049|
| | |none | 0|acc_norm |↑ |0.8021|± |0.0040|
|leaderboard_bbh | N/A| | | | | | | |
| - leaderboard_bbh_boolean_expressions | 1|none | 3|acc_norm |↑ |0.8360|± |0.0235|
| - leaderboard_bbh_causal_judgement | 1|none | 3|acc_norm |↑ |0.6043|± |0.0359|
| - leaderboard_bbh_date_understanding | 1|none | 3|acc_norm |↑ |0.4840|± |0.0317|
| - leaderboard_bbh_disambiguation_qa | 1|none | 3|acc_norm |↑ |0.6360|± |0.0305|
| - leaderboard_bbh_formal_fallacies | 1|none | 3|acc_norm |↑ |0.5680|± |0.0314|
| - leaderboard_bbh_geometric_shapes | 1|none | 3|acc_norm |↑ |0.2760|± |0.0283|
| - leaderboard_bbh_hyperbaton | 1|none | 3|acc_norm |↑ |0.5440|± |0.0316|
| - leaderboard_bbh_logical_deduction_five_objects | 1|none | 3|acc_norm |↑ |0.4320|± |0.0314|
| - leaderboard_bbh_logical_deduction_seven_objects | 1|none | 3|acc_norm |↑ |0.4640|± |0.0316|
| - leaderboard_bbh_logical_deduction_three_objects | 1|none | 3|acc_norm |↑ |0.6440|± |0.0303|
| - leaderboard_bbh_movie_recommendation | 1|none | 3|acc_norm |↑ |0.7600|± |0.0271|
| - leaderboard_bbh_navigate | 1|none | 3|acc_norm |↑ |0.6240|± |0.0307|
| - leaderboard_bbh_object_counting | 1|none | 3|acc_norm |↑ |0.5440|± |0.0316|
| - leaderboard_bbh_penguins_in_a_table | 1|none | 3|acc_norm |↑ |0.4658|± |0.0414|
| - leaderboard_bbh_reasoning_about_colored_objects | 1|none | 3|acc_norm |↑ |0.5640|± |0.0314|
| - leaderboard_bbh_ruin_names | 1|none | 3|acc_norm |↑ |0.7160|± |0.0286|
| - leaderboard_bbh_salient_translation_error_detection | 1|none | 3|acc_norm |↑ |0.4920|± |0.0317|
| - leaderboard_bbh_snarks | 1|none | 3|acc_norm |↑ |0.5899|± |0.0370|
| - leaderboard_bbh_sports_understanding | 1|none | 3|acc_norm |↑ |0.6880|± |0.0294|
| - leaderboard_bbh_temporal_sequences | 1|none | 3|acc_norm |↑ |0.2200|± |0.0263|
| - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1|none | 3|acc_norm |↑ |0.1880|± |0.0248|
| - leaderboard_bbh_tracking_shuffled_objects_seven_objects| 1|none | 3|acc_norm |↑ |0.1320|± |0.0215|
| - leaderboard_bbh_tracking_shuffled_objects_three_objects| 1|none | 3|acc_norm |↑ |0.3040|± |0.0292|
| - leaderboard_bbh_web_of_lies | 1|none | 3|acc_norm |↑ |0.4760|± |0.0316|
|leaderboard_gpqa | N/A| | | | | | | |
| - leaderboard_gpqa_diamond | 1|none | 0|acc_norm |↑ |0.3232|± |0.0333|
| - leaderboard_gpqa_extended | 1|none | 0|acc_norm |↑ |0.3498|± |0.0204|
| - leaderboard_gpqa_main | 1|none | 0|acc_norm |↑ |0.3527|± |0.0226|
|leaderboard_ifeval | 3|none | 0|inst_level_loose_acc |↑ |0.4628|± | N/A|
| | |none | 0|inst_level_strict_acc |↑ |0.4365|± | N/A|
| | |none | 0|prompt_level_loose_acc |↑ |0.3216|± |0.0201|
| | |none | 0|prompt_level_strict_acc|↑ |0.2902|± |0.0195|
|leaderboard_math_hard | N/A| | | | | | | |
| - leaderboard_math_algebra_hard | 2|none | 4|exact_match |↑ |0.5798|± |0.0282|
| - leaderboard_math_counting_and_prob_hard | 2|none | 4|exact_match |↑ |0.2276|± |0.0380|
| - leaderboard_math_geometry_hard | 2|none | 4|exact_match |↑ |0.1970|± |0.0347|
| - leaderboard_math_intermediate_algebra_hard | 2|none | 4|exact_match |↑ |0.1036|± |0.0182|
| - leaderboard_math_num_theory_hard | 2|none | 4|exact_match |↑ |0.3377|± |0.0382|
| - leaderboard_math_prealgebra_hard | 2|none | 4|exact_match |↑ |0.4715|± |0.0360|
| - leaderboard_math_precalculus_hard | 2|none | 4|exact_match |↑ |0.1111|± |0.0271|
|leaderboard_mmlu_pro | 0.1|none | 5|acc |↑ |0.3608|± |0.0044|
|leaderboard_musr | N/A| | | | | | | |
| - leaderboard_musr_murder_mysteries | 1|none | 0|acc_norm |↑ |0.5920|± |0.0311|
| - leaderboard_musr_object_placements | 1|none | 0|acc_norm |↑ |0.3867|± |0.0305|
| - leaderboard_musr_team_allocation | 1|none | 0|acc_norm |↑ |0.3560|± |0.0303|
### Compare to base DeepSeek-R1-Distill-Llama-8B
The model shows improvements on most of these tests:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e6d37e02dee9bcb9d9fa18/x1KIerKZylkEbv8eK5gqN.png)
#### Links to eval results
[DevQuasar-R1-Uncensored-Llama-8B](https://github.com/csabakecskemeti/lm_eval_results/blob/main/DevQuasar__DevQuasar-R1-Uncensored-Llama-8B/results_2025-01-28T21-04-03.910794.json)
[DeepSeek-R1-Distill-Llama-8B](https://github.com/csabakecskemeti/lm_eval_results/blob/main/deepseek-ai__DeepSeek-R1-Distill-Llama-8B/results_2025-01-26T22-29-00.931915.json)
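For a programmatic side-by-side look at the two runs, the raw JSON files behind the links above can be pulled straight from GitHub. A small hedged sketch follows: the raw.githubusercontent.com URLs are derived from the blob links above, and nothing about the file layout is assumed beyond lm-eval's top-level `results` key.

```python
# Hedged sketch: fetch both linked lm-eval result files and list the reported tasks.
import json
import urllib.request

URLS = {
    "DevQuasar-R1-Uncensored-Llama-8B": (
        "https://raw.githubusercontent.com/csabakecskemeti/lm_eval_results/main/"
        "DevQuasar__DevQuasar-R1-Uncensored-Llama-8B/results_2025-01-28T21-04-03.910794.json"
    ),
    "DeepSeek-R1-Distill-Llama-8B": (
        "https://raw.githubusercontent.com/csabakecskemeti/lm_eval_results/main/"
        "deepseek-ai__DeepSeek-R1-Distill-Llama-8B/results_2025-01-26T22-29-00.931915.json"
    ),
}

for name, url in URLS.items():
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # "results" is lm-eval's usual top-level key; .get() keeps this robust if it differs.
    print(name, sorted(data.get("results", {}).keys())[:10])
```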