Adding Evaluation Results

#6
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -57,4 +57,17 @@ model = AutoModelForCausalLM.from_pretrained(
  
  
   ## Benchmarks
 - ![benchmarks](imgs/benchmarks.png "Benchmark Scores")
 + ![benchmarks](imgs/benchmarks.png "Benchmark Scores")
 + # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
 + Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_LeoLM__leo-hessianai-13b)
 +
 + | Metric               | Value |
 + |----------------------|-------|
 + | Avg.                 | 45.97 |
 + | ARC (25-shot)        | 57.25 |
 + | HellaSwag (10-shot)  | 81.94 |
 + | MMLU (5-shot)        | 53.65 |
 + | TruthfulQA (0-shot)  | 38.03 |
 + | Winogrande (5-shot)  | 76.09 |
 + | GSM8K (5-shot)       | 8.95  |
 + | DROP (3-shot)        | 5.91  |
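
For context, the reported Avg. is the plain arithmetic mean of the seven benchmark scores in the table: (57.25 + 81.94 + 53.65 + 38.03 + 76.09 + 8.95 + 5.91) / 7 ≈ 45.97. The sketch below shows one way to check that figure and to pull the per-task details from the linked dataset. It assumes the `datasets` library is installed; the config name `harness_arc_challenge_25` is an illustrative guess, so check the dataset card for the configs and splits the leaderboard actually publishes.

```python
# Sketch only: verify the leaderboard average and fetch detailed results.
# Assumes `datasets` is installed; the config name below is a guess -- see
# the dataset card for the exact configs/splits the leaderboard publishes.
from datasets import load_dataset

# The leaderboard average is the plain mean of the seven task scores.
scores = [57.25, 81.94, 53.65, 38.03, 76.09, 8.95, 5.91]
print(round(sum(scores) / len(scores), 2))  # -> 45.97

# Per-sample predictions behind the table live in the details dataset.
details = load_dataset(
    "open-llm-leaderboard/details_LeoLM__leo-hessianai-13b",
    "harness_arc_challenge_25",  # hypothetical config name for ARC (25-shot)
    split="latest",
)
print(details[0])
```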