Dataset differs in size from original Common Voice data?
When loading the Dutch dataset using the code below, I get a dataset that has 10930 rows/instances, while the data from https://commonvoice.mozilla.org/en/datasets has 84823 rows/instances, which is a massive difference.

from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_13_0", "nl", split="validation", streaming=False)
Looking at the release_stats.py file (https://huggingface.co./datasets/mozilla-foundation/common_voice_13_0/blob/main/release_stats.py), it is actually supposed to have 84823 instances.
Does anyone know where this difference comes from?
Hi there @RikRaes,
There are two issues here:
Splits
I think the issue here is that in the load_dataset() command you are loading the validation split of the data, not all the data.
Common Voice by default has 3 splits of data: train, dev and test (the dev split is exposed as validation on the Hugging Face Hub). The default splits are explained in more detail on GitHub.
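As a quick check, you can list the splits a Hub dataset config actually exposes before downloading anything. A minimal sketch, assuming you have accepted the dataset's terms on the Hub and are authenticated (e.g. via huggingface-cli login):

from datasets import get_dataset_split_names

# List the splits exposed by the "nl" config of Common Voice 13.
print(get_dataset_split_names("mozilla-foundation/common_voice_13_0", "nl"))
# e.g. ['train', 'validation', 'test', 'other', 'invalidated']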
Different languages have different numbers of recordings
The Dutch v13 dataset has 10930 recorded sentences in the dev split, according to the cv-datasets metadata for nl, which you can learn more about here:
'nl': {'duration': 411681817,
       'buckets': {'dev': 10930, 'invalidated': 5331, 'other': 2723, 'reported': 334, 'test': 10936, 'train': 31906, 'validated': 86798},
       'reportedSentences': 334,
       'clips': 94852,
       'splits': {'accent': {'': 1},
                  'age': {'': 0.41, 'twenties': 0.21, 'fourties': 0.15, 'thirties': 0.11, 'teens': 0.02, 'fifties': 0.08, 'sixties': 0.02, 'nineties': 0, 'eighties': 0, 'seventies': 0},
                  'gender': {'': 0.42, 'male': 0.47, 'female': 0.11, 'other': 0}},
       'users': 1610,
       'size': 2808697434,
       'checksum': '2a8edc9005bbc8a3623ce25bfe95979bc9144e49a09468e8fd574ea76de30d94',
       'avgDurationSecs': 4.34,
       'validDurationSecs': 376725.407,
       'totalHrs': 114.35,
       'validHrs': 104.64},
Hope this helps!
We have a Common Voice Discourse forum at https://discourse.mozilla.org/c/voice/239
Thanks for the response @KathyReid. I agree that I am loading the validation split; however, what I want to load is the validated dataset, and that is not available. When loading this data without picking a split, I obtain an object showing that there is no validated split (see the sketch below). I would like to load all 86798 instances, which can be downloaded from the Common Voice project itself, using load_dataset(), but this does not seem possible. I have also tried this for other languages, but there does not seem to be a way to load the validated split.
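For illustration, that printout looks roughly like this. A sketch only: the features are omitted, and the row counts for other and invalidated are taken from the release_stats.py buckets quoted above:

from datasets import load_dataset

# Loading without a split returns a DatasetDict keyed by split name.
cv_nl = load_dataset("mozilla-foundation/common_voice_13_0", "nl")
print(cv_nl)
# DatasetDict({
#     train: Dataset({features: [...], num_rows: 31906})
#     validation: Dataset({features: [...], num_rows: 10930})
#     test: Dataset({features: [...], num_rows: 10936})
#     other: Dataset({features: [...], num_rows: 2723})
#     invalidated: Dataset({features: [...], num_rows: 5331})
# })
# No "validated" split is listed, so the 86798 validated clips cannot be
# requested directly by split name.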
You have to pick a split. In fact, you can pick more than one split, which is what you want: split='train+validation+test'
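A minimal sketch of that combined load. Note that, going by the release_stats.py buckets quoted above, train + dev + test adds up to 31906 + 10930 + 10936 = 53772 clips, still fewer than the 86798 in the validated bucket, since not every validated clip ends up in the default splits:

from datasets import load_dataset

# Concatenate the three default splits in a single call.
combined = load_dataset(
    "mozilla-foundation/common_voice_13_0",
    "nl",
    split="train+validation+test",
)
print(combined.num_rows)  # 53772 for the v13 nl stats quoted above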