🌐 NoMIRACL Dataset [EMNLP'24]
A collection of multilingual relevance assessment datasets. We also provide SFT fine-tuned models (Mistral-7B & Llama-3 8B).
This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the nthakur/nomiracl-instruct dataset. It achieves the following results on the evaluation set:

- Loss: 1.6358

Model description, intended uses & limitations, and training and evaluation data: more information needed.
Training results:
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.6576        | 0.2981 | 200  | 1.6656          |
| 1.6447        | 0.5961 | 400  | 1.6409          |
| 1.6245        | 0.8942 | 600  | 1.6358          |
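The training curve can be inspected programmatically. A minimal sketch that uses only the checkpoint values reported in the table above:

```python
# Checkpoints reported in the training results table:
# (step, training loss, validation loss).
history = [
    (200, 1.6576, 1.6656),
    (400, 1.6447, 1.6409),
    (600, 1.6245, 1.6358),
]

def is_decreasing(values):
    """True if every value is strictly lower than the previous one."""
    return all(b < a for a, b in zip(values, values[1:]))

train_losses = [t for _, t, _ in history]
val_losses = [v for _, _, v in history]

# Both curves decrease monotonically over the reported checkpoints,
# i.e. no sign of overfitting within the first epoch of fine-tuning.
print(is_decreasing(train_losses), is_decreasing(val_losses))
```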
Base model: meta-llama/Meta-Llama-3-8B-Instruct
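Since the model was fine-tuned for relevance assessment, a usage sketch may help. The helper below is only an illustrative stand-in: the real instruct template is defined by the nthakur/nomiracl-instruct dataset, and the wording and answer labels here are assumptions, not the actual training format:

```python
def build_relevance_prompt(query: str, passages: list[str]) -> str:
    """Illustrative NoMIRACL-style prompt: ask the model whether any of the
    retrieved passages answers the query. The exact template used for
    fine-tuning lives in the nthakur/nomiracl-instruct dataset; this
    function is only a hypothetical stand-in for demonstration."""
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "I will give you a question and several retrieved passages.\n"
        f"Question: {query}\n"
        f"Passages:\n{numbered}\n"
        "Answer 'Yes' if at least one passage answers the question, "
        "otherwise answer 'No'."
    )

prompt = build_relevance_prompt(
    "When was MIRACL released?",
    [
        "MIRACL is a multilingual retrieval dataset.",
        "The Nile is the longest river in Africa.",
    ],
)
print(prompt)
```

The resulting prompt can then be passed as a user message to the model through `transformers`, e.g. via `pipeline("text-generation", model=...)`, substituting this fine-tuned checkpoint's repo id for the base model id.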