Update README.md
README.md
datasets:
- ruslandev/tagengo-rus-gpt-4o
---

# Llama-3 8B GPT-4o-RU1.0

[[Dataset]](https://huggingface.co/datasets/ruslandev/tagengo-rus-gpt-4o)

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
The idea behind this model is to train on a dataset derived from a smaller subset of [tagengo-gpt4](https://huggingface.co/datasets/lightblue/tagengo-gpt4), but with improved data quality.
I aimed for higher data quality by prompting GPT-4o, OpenAI's latest LLM with stronger multilingual capabilities. The training objective focuses primarily on the Russian language (80% of the training examples).
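
The training data is available through the standard `datasets` API; below is a minimal loading sketch. The `train` split name is the usual default and the column layout is not assumed here, so check the dataset page for the exact schema:

```python
# A minimal sketch, assuming the standard Hugging Face `datasets` API
# and the default "train" split; the column schema is not assumed.
from datasets import load_dataset

ds = load_dataset("ruslandev/tagengo-rus-gpt-4o", split="train")
print(ds)     # column names and row count
print(ds[0])  # inspect one example before relying on its fields
```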

The model shows promising results on the MT-Bench evaluation benchmark, surpassing GPT-3.5-turbo and scoring on par with [Suzume](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual) in Russian, even though the latter was trained on an 8x bigger and more diverse dataset.

## Evaluation scores

I achieved the following scores on Ru/En MT-Bench:

|            | meta-llama/Meta-Llama-3-8B-Instruct | ruslandev/llama-3-8b-gpt-4o-ru1.0 | lightblue/suzume-llama-3-8B-multilingual | Nexusflow/Starling-LM-7B-beta | gpt-3.5-turbo |
|:----------:|:-----------------------------------:|:---------------------------------:|:----------------------------------------:|:-----------------------------:|:-------------:|
| Russian 🇷🇺 | NaN                                 | 8.12                              | 8.19                                     | 8.06                          | 7.94          |
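
## Usage

A minimal inference sketch, assuming the standard `transformers` chat-template API and that the model repo ships the Llama-3 chat template; the prompt and generation settings are illustrative, not the MT-Bench configuration:

```python
# A minimal inference sketch, assuming the standard transformers chat API.
# Generation settings are illustrative, not the MT-Bench setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ruslandev/llama-3-8b-gpt-4o-ru1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

messages = [{"role": "user", "content": "Расскажи о себе."}]  # "Tell me about yourself."
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```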