---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_bytes: 3977615851
    num_examples: 2293647
  download_size: 1879839994
  dataset_size: 3977615851
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
This dataset is an Arabic sample extracted from the [FineWeb2](https://huggingface.co./datasets/HuggingFaceFW/fineweb-2)
Arabic subset (arb_Arab), which is intended to be Modern Standard Arabic.
|
There are around 2.3 million rows in this sample. First, the whole Arabic subset (57.8M rows) was scanned and rows
were kept if over 95% of their words were Arabic. This 2.3M sample was then drawn at random from the _mostly Arabic_ data.
Note that `language_score` is not an accurate measure. The filtering also did not exclude slang, dialects, or inappropriate
content (no editing was done to any row and all columns were kept).
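The Arabic-word filter described above can be sketched as follows. This is a minimal illustration, not the actual filtering code: the tokenization (whitespace split) and the Unicode range used to detect Arabic characters are assumptions.

```python
import re

# Basic Arabic Unicode block (assumed detection range for this sketch)
ARABIC_RE = re.compile(r"[\u0600-\u06FF]")

def arabic_word_ratio(text: str) -> float:
    """Fraction of whitespace-separated words containing an Arabic character."""
    words = text.split()
    if not words:
        return 0.0
    arabic = sum(1 for w in words if ARABIC_RE.search(w))
    return arabic / len(words)

def keep_row(row: dict) -> bool:
    """Keep rows whose text is over 95% Arabic words."""
    return arabic_word_ratio(row["text"]) > 0.95
```

A filter like this could then be applied to the full subset with `dataset.filter(keep_row)` before random sampling.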
|
The main purpose of this dataset is educational; I hope it helps researchers in designing and developing pre-processing
for the main FineWeb2 dataset (or any other Arabic corpora).
|
Example:

```python
import random
from pprint import pprint

from datasets import load_dataset

ds = load_dataset("akhooli/fineweb2_ar_24_sample")
max_n = len(ds["train"])
index = random.randrange(max_n)  # random row index in [0, max_n)
pprint(ds["train"][index]["text"])  # article text
```
|