SC09 Dataset

SC09 is a raw audio waveform dataset used in the paper "It's Raw! Audio Generation with State-Space Models". It was previously used as a challenging problem for unconditional audio generation by Donahue et al. (2019), and was originally introduced as a dataset for keyword spotting by Warden (2018). The SC09 dataset consists of 1s clips of utterances of the digits zero through nine across a variety of speakers, with diverse accents and noise conditions.

We include an sc09.zip file that contains:

  • folders zero through nine, each containing audio files sampled at 16kHz corresponding to utterances of that digit (see the loading sketch after this list)
  • validation_list.txt containing the list of validation utterances
  • testing_list.txt containing the list of testing utterances
  • the original LICENSE file
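
For reference, here is a minimal loading sketch. It assumes sc09.zip has been extracted to a local sc09/ directory, and uses soundfile, which is just one of several libraries (librosa, torchaudio) that can read these 16kHz WAV files.

# A minimal sketch of iterating over the SC09 clips; assumes sc09.zip
# has been extracted to ./sc09.
from pathlib import Path
import soundfile as sf

DIGITS = ["zero", "one", "two", "three", "four",
          "five", "six", "seven", "eight", "nine"]

def iter_sc09(root="sc09"):
    """Yield (waveform, sample_rate, label) for every clip."""
    for label in DIGITS:
        for wav_path in sorted(Path(root, label).glob("*.wav")):
            audio, sr = sf.read(str(wav_path))  # 1-D float array, sr == 16000
            yield audio, sr, label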

We split the data into train, validation, and test sets for training SaShiMi models and baselines, following the splits provided in validation_list.txt and testing_list.txt.
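
The split can be reproduced with a short script like the sketch below. It assumes each line of the two list files is a relative path of the form "<digit>/<filename>.wav", as in the original Speech Commands release.

# Sketch of reproducing the train/val/test split from the list files.
from pathlib import Path

root = Path("sc09")

def read_list(name):
    # Assumes one relative path per line, e.g. "eight/<filename>.wav"
    with open(root / name) as f:
        return {line.strip() for line in f if line.strip()}

val_files = read_list("validation_list.txt")
test_files = read_list("testing_list.txt")

splits = {"train": [], "valid": [], "test": []}
for wav_path in sorted(root.glob("*/*.wav")):
    rel = wav_path.relative_to(root).as_posix()
    if rel in val_files:
        splits["valid"].append(wav_path)
    elif rel in test_files:
        splits["test"].append(wav_path)
    else:
        splits["train"].append(wav_path)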

We also include a sc09_quantized.zip file, which contains the examples used in our MTurk study (details can be found in the SaShiMi paper). In particular, we take 50 random examples from each digit class and run each through a round of mu-law quantization followed by dequantization. This mimics the quantization noise present in samples generated by autoregressive models trained with mu-law quantization.
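
For reference, a minimal numpy sketch of this quantize/dequantize round trip. Standard 8-bit mu-law (mu = 255) on waveforms in [-1, 1] is an assumption here; the exact settings used for the study are described in the SaShiMi paper.

# Sketch of a mu-law quantize/dequantize round trip (8-bit, mu = 255,
# assumed), applied to a waveform x with values in [-1, 1].
import numpy as np

def mu_law_quantize(x, mu=255):
    """Compress x with mu-law, then bin into mu + 1 integer levels."""
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return np.round((compressed + 1) / 2 * mu).astype(np.int64)

def mu_law_dequantize(q, mu=255):
    """Map integer levels back to [-1, 1] and invert the compression."""
    compressed = 2 * (q / mu) - 1
    return np.sign(compressed) * ((1 + mu) ** np.abs(compressed) - 1) / mu

# mu_law_dequantize(mu_law_quantize(x)) adds the quantization noise
# described above.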

If you use this dataset in your research, please use the following BibTeX entries to cite the relevant prior work:

@article{goel2022sashimi,
  title={It's Raw! Audio Generation with State-Space Models},
  author={Goel, Karan and Gu, Albert and Donahue, Chris and R\'{e}, Christopher},
  journal={arXiv preprint arXiv:2202.09729},
  year={2022}
}

@inproceedings{donahue2019adversarial,
  title={Adversarial Audio Synthesis},
  author={Donahue, Chris and McAuley, Julian and Puckette, Miller},
  booktitle={International Conference on Learning Representations},
  year={2019}
}

@article{warden2018speech,
  title={Speech Commands: A Dataset for Limited-Vocabulary Speech Recognition},
  author={Warden, Pete},
  journal={arXiv preprint arXiv:1804.03209},
  year={2018}
}