---
license: apache-2.0
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_examples: 142178930
    - name: validation
      num_examples: 71208
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
task_categories:
  - text-generation
language:
  - ru
size_categories:
  - 100M<n<1B
---
# Cultura-Ru-Edu | |
The `Cultura-Ru-Edu` dataset consists of Russian educational web pages filtered from the [`uonlp/CulturaX`](https://huggingface.co./datasets/uonlp/CulturaX) dataset.
The dataset creation was inspired by [`HuggingFaceFW/fineweb-edu`](https://huggingface.co./datasets/HuggingFaceFW/fineweb-edu), but with a focus on the Russian language.
By filtering the dataset based on educational criteria, the `Cultura-Ru-Edu` dataset is both high-quality and large enough to train a Russian-focused language model for tasks requiring knowledge of the world.
## Dataset curation
To create this dataset, we annotated a subset of the data with the `Meta-Llama-3-70B-Instruct` model, trained a classifier on those annotations, and then applied the classifier to the entire dataset, keeping only the high-quality samples.
### Annotation
See [`deepvk/cultura_ru_edu_llama3_annotations`](https://huggingface.co./datasets/deepvk/cultura_ru_edu_llama3_annotations) for details on how the annotation dataset was created.
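For orientation, here is a minimal sketch of what LLM-based annotation of this kind can look like. The actual prompt and scoring scale are documented in the annotation dataset linked above; the prompt wording, truncation length, and pipeline setup below are placeholders, not the authors' code.
```python
# Hypothetical annotation sketch; the real prompt and scale are described in
# deepvk/cultura_ru_edu_llama3_annotations.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    device_map="auto",
)

# Placeholder rubric, loosely following the FineWeb-Edu style of scoring.
PROMPT = (
    "Rate the educational value of the following Russian web page on a "
    "scale from 0 to 5. Answer with a single digit.\n\n{text}"
)

def annotate(text: str) -> str:
    messages = [{"role": "user", "content": PROMPT.format(text=text[:4000])}]
    out = generator(messages, max_new_tokens=4)
    # Chat pipelines return the full conversation; the last message is the reply.
    return out[0]["generated_text"][-1]["content"]
```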
### Training classifier
We trained a classifier based on the [`USER-base`](https://huggingface.co./deepvk/USER-base) model.
Unlike the original FineWeb-Edu pipeline, we used binary classification, where the positive class includes samples with a score of 3 or higher.
We found this approach more stable given the high class imbalance in the annotation dataset.
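A minimal fine-tuning sketch under these assumptions: the annotation dataset exposes `text` and `score` columns, and the binarization rule is the one described above (score of 3 or higher is positive). Hyperparameters are placeholders, not the authors' training configuration.
```python
# Sketch: fine-tune USER-base as a binary educational-quality classifier.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("deepvk/USER-base")
model = AutoModelForSequenceClassification.from_pretrained("deepvk/USER-base", num_labels=2)

# Assumed column names ("text", "score") for the annotation dataset.
annotations = load_dataset("deepvk/cultura_ru_edu_llama3_annotations", split="train")

def preprocess(example):
    enc = tokenizer(example["text"], truncation=True, max_length=512)
    enc["label"] = int(example["score"] >= 3)  # binarize: scores 3+ are positive
    return enc

train_data = annotations.map(preprocess, remove_columns=annotations.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="user-base-edu-classifier", num_train_epochs=1),
    train_dataset=train_data,
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```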
### Dataset scoring
We converted the classifier to ONNX format and applied it to the Russian part of the [`uonlp/CulturaX`](https://huggingface.co./datasets/uonlp/CulturaX) dataset.
The original dataset contained approximately 800 million documents; after filtering, only 140 million documents remained (~17.5% of the original dataset).
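As an illustration of this step, here is a sketch using `optimum` for the ONNX export and inference. The authors' actual export and batching code is not part of this card, and the local model path is hypothetical.
```python
# Sketch: export the fine-tuned classifier to ONNX and use it to filter documents.
import torch
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

MODEL_DIR = "user-base-edu-classifier"  # hypothetical path to the trained classifier
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = ORTModelForSequenceClassification.from_pretrained(MODEL_DIR, export=True)

def keep_mask(texts: list[str]) -> list[bool]:
    inputs = tokenizer(texts, truncation=True, max_length=512, padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Keep documents the classifier labels as educational (class 1).
    return (logits.argmax(dim=-1) == 1).tolist()
```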
## Dataset information
Each sample contains a single field, `text`, holding the original text document.
Some notes:
- This dataset is a filtered version of the larger, multilingual [`uonlp/CulturaX`](https://huggingface.co./datasets/uonlp/CulturaX) dataset. No other information was added or removed.
- Since the original dataset consists of parsed web pages, artifacts such as page headers or footers may remain in the text. Future work may include detecting and removing such blocks.
## Usage
The dataset can be loaded with the `datasets` library:
```python
from datasets import load_dataset

cultura_ru_edu = load_dataset("deepvk/cultura_ru_edu", split="train", streaming=True)
```
Note that the dataset size is approximately 500 GB, so it is better to stream it or download it directly via Git LFS.
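With `streaming=True`, samples are fetched lazily as you iterate, so you can inspect the data without downloading the full dataset:
```python
# Peek at the first few documents; only the shards being read are fetched.
for i, sample in enumerate(cultura_ru_edu):
    print(sample["text"][:200])
    if i == 2:
        break
```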
## Citations
```
@misc{deepvk2024cultura-ru-edu,
    title={Cultura-Ru-Edu},
    author={Spirin, Egor and Sokolov, Andrey},
    url={https://huggingface.co./datasets/deepvk/cultura_ru_edu},
    publisher={Hugging Face},
    year={2024}
}
```