Support parallel downloading of data by providing multiple compressed files

#12
opened by albertvillanova (HF staff)

Currently, the data is provided as a single ZIP archive containing all the JSON Lines files. This structure prevents downloading files in parallel, which could otherwise substantially speed up data retrieval.

You could enable parallel downloading by providing the data as multiple individually compressed files. This would offer the following advantages:

  • Parallelism: Users can download multiple files concurrently, speeding up data transfer, especially for large datasets (see the sketch after this list).
  • Selective Downloads: Users can download only the parts of the dataset they need, reducing unnecessary bandwidth usage.
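
To illustrate the first point, here is a minimal sketch of concurrent retrieval with `huggingface_hub` once the data is split; the shard file names are hypothetical, since no layout has been decided in this thread.

```python
from concurrent.futures import ThreadPoolExecutor

from huggingface_hub import hf_hub_download

# Hypothetical shard names; the real layout depends on how the data is split.
files = [f"enwiki_{i:04d}.jsonl.gz" for i in range(8)]

def download(filename: str) -> str:
    # Each call fetches one shard; independent files can be fetched concurrently.
    return hf_hub_download(
        repo_id="wikimedia/structured-wikipedia",
        filename=filename,
        repo_type="dataset",
    )

# Fetch up to 4 shards at a time instead of one monolithic archive.
with ThreadPoolExecutor(max_workers=4) as pool:
    local_paths = list(pool.map(download, files))
```

For whole-repo use, `huggingface_hub.snapshot_download` also exposes `allow_patterns` and `max_workers` parameters, which would cover both selective and parallel retrieval once the data is sharded.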

This change could significantly improve data accessibility and usability, especially in distributed or large-scale data processing workflows.

Parquet is the best option, as you said in https://huggingface.co./datasets/wikimedia/structured-wikipedia/discussions/11

Ideally, this dataset would be provided as multiple Parquet files.
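
For illustration, a minimal sketch of the selective loading that multiple Parquet files would enable with the `datasets` library; the `data_files` pattern and shard names are assumptions, since the actual sharding scheme is still open.

```python
from datasets import load_dataset

# Hypothetical shard layout for one config; the actual naming is still open.
# Only the Parquet files matching the pattern are downloaded, not the full dataset.
ds = load_dataset(
    "wikimedia/structured-wikipedia",
    data_files="en/chunk_*.parquet",  # assumed pattern, for illustration
    split="train",
)
print(ds)
```

Beyond parallelism, Parquet's columnar layout generally pairs well with this kind of per-file selection.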

Wikimedia org

Regardless of the file format, this thread is intended to discuss using multiple files per config, instead of a single one, so that parallel downloading is supported.

Wikimedia org

I deleted the comment (which mentioned .jsonl.gz as a possible data format), so that it is now clearer that this discussion applies to any data file format.

Wikimedia org

Thanks for this feedback - input like this is exactly why this beta dataset is shared here, so it's good to know that providing the data in smaller files would be useful for people.
