Most of the data is duplicated?

#7
by underspirit - opened

After downloading all the files in the data directory, I deduplicated on the text field and found that 75% of the documents are duplicates; only 25% of the data remains after exact deduplication.
@anton-l

Deduplication method: compute an MD5 hash of the text field and drop any row whose hash has already been seen (sketched below).
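
For reference, a minimal sketch of that check over locally downloaded shards. The `data/**/*.parquet` glob and the `text` column name are assumptions; at ~1.3B documents you would want to shard the hash set rather than keep it in memory on one machine.

```python
import glob
import hashlib
import pyarrow.parquet as pq

# Exact dedup by MD5 of the text field: count distinct text hashes
# across all locally downloaded parquet shards.
seen, total = set(), 0
for path in glob.glob("data/**/*.parquet", recursive=True):
    table = pq.read_table(path, columns=["text"])
    for text in table.column("text").to_pylist():
        seen.add(hashlib.md5(text.encode("utf-8")).hexdigest())
        total += 1

print(f"{len(seen)} unique texts out of {total} ({len(seen) / total:.1%})")
```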

Is it because FineWeb uses the "INDIVIDUAL DUMP DEDUP" deduplication strategy?

FWIW I see the same: 324M docs have unique text out of about 1.28B. Yes, it's almost certainly from the same docs appearing across crawls, since deduplication in FineWeb is only done within each individual crawl.

I suppose I'm surprised, but the FineWeb blog does go into detail about deduping by crawl. Does that logic extend to not deduping subsets extracted from the whole dataset? Not sure.
In a similar context, I had chosen to dedupe the results I extracted from FineWeb.

HuggingFaceFW org

Hi, this is indeed a filtered subset of FineWeb, which only does individual dump deduplication.

If you apply MinHash deduplication across dumps on FineWeb-Edu, you get about 200B unique tokens. We ran an ablation after the release and saw neither an improvement nor a performance degradation from deduplication (1.8B model trained on 350B tokens), so it should be fine to deduplicate if you don't need more tokens.
[attached image: ablation results]
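
If you want to run a similar cross-dump dedup downstream, here is a toy sketch using the datasketch library (this is not the datatrove pipeline FineWeb itself uses, and the word 5-gram shingles and 0.8 Jaccard threshold are illustrative choices, not the released configuration):

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from word 5-gram shingles."""
    m = MinHash(num_perm=num_perm)
    tokens = text.split()
    for i in range(max(len(tokens) - 4, 1)):
        m.update(" ".join(tokens[i:i + 5]).encode("utf-8"))
    return m

lsh = MinHashLSH(threshold=0.8, num_perm=128)

def keep_if_novel(doc_id: str, text: str) -> bool:
    """Keep a doc only if no already-kept doc is a near-duplicate of it."""
    m = minhash_of(text)
    if lsh.query(m):      # a near-duplicate was already kept, possibly from another dump
        return False
    lsh.insert(doc_id, m)
    return True
```

You would stream documents from every dump through `keep_if_novel` in a single pass, so duplicates get dropped regardless of which crawl they come from.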

That's a reasonable test: training on 350B tokens with 200B unique tokens, you expect roughly 43% of the deduped training run to be repeats (150B of the 350B), versus roughly 84% of a run on the full, non-deduped set. Different setups, though not wildly different, and you see no difference at this scale.

It'd be more different if, say, you trained for only 200B tokens: the deduped dataset would have no repeats at all, versus ~84% for the full set.
And as you train for longer, in the limit they get more similar. It's the same 200B tokens repeated as many times as you like either way, just weighted differently.
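
The arithmetic behind those percentages, taking 200B unique tokens and (my assumption) roughly 1.3T tokens for the full non-deduped set:

```python
# Repeat fraction at a given training budget, deduped vs. full dataset.
UNIQUE = 200e9   # unique tokens after cross-dump dedup
FULL = 1.3e12    # assumed total size of the non-deduped dataset

for budget in (200e9, 350e9, 1e12):
    deduped_repeats = max(budget - UNIQUE, 0) / budget   # cycling past the unique tokens
    full_repeats = (FULL - UNIQUE) / FULL                # ~constant share of duplicate content
    print(f"{budget / 1e9:.0f}B tokens: deduped {deduped_repeats:.0%} repeats, "
          f"full set ~{full_repeats:.0%} repeats")
```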

For that reason, IMHO it might have made sense to exact-dedupe the dataset, because I can roughly recreate the dataset-with-duplicates simply by repeating the tokens / training longer. It's not exactly the same thing, since not every doc reappears at the same rate, for better or worse. It'd be a smaller dataset too, FWIW.

That said, it is of course not hard at all to dedupe it downstream if one wants to!


Fun fact: the 10BT sample has 399,067 exact duplicates out of 9,672,101 documents (4.12% of the dataset is duplicates).

This one: https://huggingface.co./datasets/HuggingFaceFW/fineweb-edu/tree/main/sample/10BT
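
For anyone who wants to reproduce that count, a short sketch (the `sample-10BT` config name is assumed from the repo's folder layout):

```python
import hashlib
from collections import Counter
from datasets import load_dataset

# Stream the 10BT sample and count documents whose text is an exact
# duplicate of an earlier document's text.
ds = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-10BT",
                  split="train", streaming=True)

counts = Counter(hashlib.md5(row["text"].encode("utf-8")).hexdigest() for row in ds)
total = sum(counts.values())
dupes = total - len(counts)
print(f"{dupes} exact duplicates out of {total} documents ({dupes / total:.2%})")
```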
