Modalities: Text
Formats: json
Size: < 1K rows
Libraries: Datasets, Dask
Dataset viewer preview (all columns are strings except private, a bool, params, an int64 in the range 0–8, and submitted_time, a timestamp[s]):

| model | base_model | revision | private | precision | params | architectures | weight_type | status | submitted_time | model_type | job_id | job_start_time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| hooking-dev/Jennifer-v1.0 | | c775edefa2f3b65ff4618cb1685c80d6135432a0 | false | float16 | 0 | MistralForCausalLM | Original | FINISHED | 2024-05-26T02:37:53 | 🔶 : 🔶 fine-tuned on domain-specific datasets | 5302029 | 2024-05-27T00:11:47.739086 |
| hooking-dev/Monah-8b-Uncensored-v0.2 | | e56c3addcb821fcf8d59dd9019331a764debd0db | false | float16 | 8 | LlamaForCausalLM | Original | FINISHED | 2024-05-17T20:49:46 | 🔶 : 🔶 fine-tuned on domain-specific datasets | 5245848 | 2024-05-24T23:48:39.399632 |
| hooking-dev/Monah-8b | | main | false | float16 | 8 | LlamaForCausalLM | Original | FINISHED | 2024-04-29T15:39:04 | 🔶 : fine-tuned on domain-specific datasets | 4074459 | 2024-04-29T16:31:30.251476 |


Open LLM Leaderboard Requests

This repository contains the request files of models that have been submitted to the Open LLM Leaderboard.
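
For programmatic access, the request files can also be loaded as a dataset. Below is a minimal sketch using the Hugging Face Datasets library; the repository id is taken from this page, while the split name and exact column availability are assumptions that may vary with how the JSON files are laid out.

```python
# Minimal sketch: load the request files with the Hugging Face Datasets library.
# Assumptions: the repo id from this page and a "train" split; the columns
# should mirror those in the preview table above.
from datasets import load_dataset

requests = load_dataset("open-llm-leaderboard-old/requests", split="train")

# Print a few fields from the first rows.
for row in requests.select(range(min(3, len(requests)))):
    print(row["model"], row["status"], row["precision"], row["submitted_time"])
```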

You can check the current status of your model by finding its request file in this dataset. If your model failed, feel free to open an issue on the Open LLM Leaderboard! (We don't monitor issues in this repository as closely.)
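
If you would rather inspect a single request file directly, the sketch below uses huggingface_hub; the exact file naming inside the repository is an assumption, so it filters the repository listing by model name instead of hard-coding a path.

```python
# Minimal sketch: locate and read a model's request file(s) via huggingface_hub.
# Assumption: request files live under "<org>/<model>_*.json"; filtering the
# repo listing by model name avoids relying on the exact naming scheme.
import json

from huggingface_hub import hf_hub_download, list_repo_files

REPO_ID = "open-llm-leaderboard-old/requests"
MODEL = "hooking-dev/Monah-8b"  # example model taken from the preview table

request_files = [
    f for f in list_repo_files(REPO_ID, repo_type="dataset")
    if f.startswith(f"{MODEL}_") and f.endswith(".json")
]

for path in request_files:
    local_path = hf_hub_download(REPO_ID, path, repo_type="dataset")
    with open(local_path) as fh:
        request = json.load(fh)
    print(path, "->", request.get("status"))
```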

Evaluation Methodology

The evaluation process runs your model against several benchmarks from the Eleuther AI Language Model Evaluation Harness, a unified framework for measuring the effectiveness of generative language models. Below is a brief overview of each benchmark:

  1. AI2 Reasoning Challenge (ARC) - Grade-School Science Questions (25-shot)
  2. HellaSwag - Commonsense Inference (10-shot)
  3. MMLU - Massive Multitask Language Understanding, knowledge across 57 domains (5-shot)
  4. TruthfulQA - Propensity to Produce Falsehoods (0-shot)
  5. Winogrande - Adversarial Winograd Schema Challenge (5-shot)
  6. GSM8k - Grade-School Math Word Problems Requiring Multi-Step Mathematical Reasoning (5-shot)

Together, these benchmarks assess a model's knowledge, reasoning, and mathematical ability across a variety of scenarios.
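
As a rough illustration, the settings above can be written down as harness task names and few-shot counts. The task identifiers below are assumptions (they have changed across harness versions), and the simple_evaluate call is only an illustrative sketch of the harness's Python API, not the leaderboard's exact pipeline.

```python
# Hedged sketch: map the leaderboard benchmarks to lm-evaluation-harness tasks.
# The shot counts come from the list above; the task identifiers are assumed
# and may differ between harness versions.
import lm_eval

LEADERBOARD_TASKS = {
    "arc_challenge": 25,  # AI2 Reasoning Challenge
    "hellaswag": 10,      # HellaSwag
    "mmlu": 5,            # MMLU, 57 domains
    "truthfulqa_mc": 0,   # TruthfulQA
    "winogrande": 5,      # Winogrande
    "gsm8k": 5,           # GSM8k
}

# Illustrative evaluation of a single task (API details vary between versions).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=hooking-dev/Monah-8b,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=LEADERBOARD_TASKS["arc_challenge"],
)
print(results["results"])
```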

Accessing Your Results

To view the numerical results of your evaluated models, visit the dedicated Hugging Face Dataset at https://huggingface.co./datasets/open-llm-leaderboard/results. This dataset offers a thorough breakdown of each model's performance on the individual benchmarks.
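
If you want to fetch those result files programmatically, a hedged sketch is below; the per-model folder layout inside the results repository is an assumption, so it filters a repository listing rather than guessing exact file names.

```python
# Minimal sketch: list a model's result files in the results dataset.
# Assumption: results are stored as JSON files under a folder per model;
# listing the repo and filtering by model name avoids guessing exact names.
from huggingface_hub import list_repo_files

RESULTS_REPO = "open-llm-leaderboard/results"
MODEL = "hooking-dev/Monah-8b"  # example model from the requests preview table

result_files = [
    f for f in list_repo_files(RESULTS_REPO, repo_type="dataset")
    if f.startswith(MODEL + "/") and f.endswith(".json")
]
print(result_files)  # typically one timestamped JSON file per evaluation run
```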

Exploring Model Details

For further insights into the inputs and outputs of specific models, locate the "📄" emoji associated with the desired model within this repository. Clicking on this icon will direct you to the respective GitHub page containing detailed information about the model's behavior during the evaluation process.
