---

license: apache-2.0
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_examples: 142178930
  - name: validation
    num_examples: 71208
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
task_categories:
- text-generation
language:
- ru
size_categories:
- 100M<n<1B
---


# Cultura-Ru-Edu

The `Cultura-Ru-Edu` dataset consists of Russian educational web pages filtered from the [`uonlp/CulturaX`](https://huggingface.co./datasets/uonlp/CulturaX) dataset.

The dataset creation was inspired by [`HuggingFaceFW/fineweb-edu`](https://huggingface.co./datasets/HuggingFaceFW/fineweb-edu), but with a focus on the Russian language.
Filtering on educational criteria yields a dataset that is both high-quality and large enough to train a Russian-focused language model for tasks requiring world knowledge.

## Dataset curation

To create this dataset, we annotated a subset of the data with the `Meta-Llama-3-70B-Instruct` model, trained a classifier on these annotations, and then applied the classifier to the entire dataset, keeping only the high-quality samples.

### Annotation

See [`deepvk/cultura_ru_edu_llama3_annotations`](https://huggingface.co./datasets/deepvk/cultura_ru_edu_llama3_annotations) for details on how the annotation dataset was created.

### Training classifier

We trained a classifier based on the [`USER-base`](https://huggingface.co./deepvk/USER-base) model.
Unlike the original FineWeb-Edu pipeline, we used binary classification, where the positive class includes samples with a score of 3 and higher.
We found this approach more stable due to the high imbalance in the annotation dataset.
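
Below is a minimal sketch of this binarization and fine-tuning step. The column names (`text`, `score`), hyperparameters, and training setup are illustrative assumptions, not the exact configuration used here.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumption: the annotation dataset exposes a "text" column and an
# integer educational "score" assigned by Meta-Llama-3-70B-Instruct.
annotations = load_dataset("deepvk/cultura_ru_edu_llama3_annotations", split="train")

# Binarize: the positive class is any sample scored 3 or higher.
annotations = annotations.map(lambda x: {"label": int(x["score"] >= 3)})

tokenizer = AutoTokenizer.from_pretrained("deepvk/USER-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "deepvk/USER-base", num_labels=2
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = annotations.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="edu-classifier", num_train_epochs=1),
    train_dataset=tokenized,
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```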

### Dataset scoring

We converted the classifier to ONNX format and applied it to the Russian part of the [`uonlp/CulturaX`](https://huggingface.co./datasets/uonlp/CulturaX) dataset.
The original dataset contained approximately 800 million documents; after filtering, only about 140 million documents remained (~17.5% of the original).
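
A hedged sketch of this scoring step is shown below; the exported model path, threshold, and input handling are assumptions, since the exact inference pipeline is not published here.

```python
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepvk/USER-base")
# Hypothetical path to the classifier exported to ONNX format.
session = ort.InferenceSession("edu-classifier.onnx")

def is_educational(text: str) -> bool:
    # Tokenize straight to NumPy arrays; the feed keys must match the
    # input names of the exported graph.
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="np")
    logits = session.run(None, dict(inputs))[0]
    # Keep the document when the positive ("educational") class wins.
    return bool(logits[0].argmax() == 1)
```

Documents for which the classifier returns the negative class are dropped, which is how the corpus shrinks from ~800M to ~140M documents.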

## Dataset information

Each sample contains a single field, `text`, which holds the original text document.

Some notes:
- This dataset is a filtered version of the larger, multilingual [`uonlp/CulturaX`](https://huggingface.co./datasets/uonlp/CulturaX) dataset. No other information was added or removed.
- Since the original dataset consists of parsed web pages, there may still be artifacts in the text header or footer. Future work may include detecting and removing such blocks.

## Usage

To use this dataset, simply load it with the `datasets` library:

```python
from datasets import load_dataset

cultura_ru_edu = load_dataset("deepvk/cultura_ru_edu", split="train", streaming=True)
```

Note that the dataset size is approximately 500GB, so it is better to use streaming or download it directly via Git LFS.
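
For example, the first few documents can be inspected in streaming mode without downloading anything beyond the shards being read:

```python
# Peek at a handful of samples; `take` keeps the dataset lazy.
for sample in cultura_ru_edu.take(3):
    print(sample["text"][:200])
```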

## Citations

```
@misc{deepvk2024cultura-ru-edu,
    title={Cultura-Ru-Edu},
    author={Spirin, Egor and Sokolov, Andrey},
    url={https://huggingface.co./datasets/deepvk/cultura_ru_edu},
    publisher={Hugging Face},
    year={2024},
}
```