mls_without_script (#15)
- Upload dataset (56cb16aa5347685f0059abd744f098c2e0716eb1)
- Upload dataset (b2688928afd3a23afaca076626ee58346feef469)
- Upload dataset (c82617d933b9de71a3ceebce12cc2d8da66bc319)
- Upload dataset (8ac09f12ed9cc3a6676c13be759a02de56af3eea)
- Upload dataset (part 00000-of-00002) (124f036150ad37664355f25c1cb79988b674f199)
- Upload dataset (part 00001-of-00002) (12bf0eb4b1e903aafdb0a4c39f2353872e07d8f6)
- Upload dataset (0d33592b032b0c6d9c8fc2530bc7f48147f7b9d9)
- Upload dataset (part 00000-of-00002) (0a4a1bac8fbb68632722ced68b81645c1faec46b)
- Upload dataset (part 00001-of-00002) (c22a1da6206b7a069e9d8d1e69ac423b771e560d)
- Update README.md (326dcbdd5fd51afc48fc11d276991c2b404317c5)
- Create create_dataset.py (8a439ef1cea73b319377c4f688a691359c91017f)
- Delete multilingual_librispeech.py (314c28d8ff4b9546bd8e1c956731ece5e6984158)
Co-authored-by: Yoach Lacombe <[email protected]>
Files changed:
- README.md +416 -14
- create_dataset.py +106 -0
- dutch/1_hours-00000-of-00001.parquet +3 -0
- dutch/9_hours-00000-of-00001.parquet +3 -0
- dutch/dev-00000-of-00001.parquet +3 -0
- dutch/test-00000-of-00001.parquet +3 -0
- dutch/train-00000-of-00048.parquet +3 -0
- dutch/train-00001-of-00048.parquet +3 -0
- dutch/train-00002-of-00048.parquet +3 -0
- dutch/train-00003-of-00048.parquet +3 -0
- dutch/train-00004-of-00048.parquet +3 -0
- dutch/train-00005-of-00048.parquet +3 -0
- dutch/train-00006-of-00048.parquet +3 -0
- dutch/train-00007-of-00048.parquet +3 -0
- dutch/train-00008-of-00048.parquet +3 -0
- dutch/train-00009-of-00048.parquet +3 -0
- dutch/train-00010-of-00048.parquet +3 -0
- dutch/train-00011-of-00048.parquet +3 -0
- dutch/train-00012-of-00048.parquet +3 -0
- dutch/train-00013-of-00048.parquet +3 -0
- dutch/train-00014-of-00048.parquet +3 -0
- dutch/train-00015-of-00048.parquet +3 -0
- dutch/train-00016-of-00048.parquet +3 -0
- dutch/train-00017-of-00048.parquet +3 -0
- dutch/train-00018-of-00048.parquet +3 -0
- dutch/train-00019-of-00048.parquet +3 -0
- dutch/train-00020-of-00048.parquet +3 -0
- dutch/train-00021-of-00048.parquet +3 -0
- dutch/train-00022-of-00048.parquet +3 -0
- dutch/train-00023-of-00048.parquet +3 -0
- dutch/train-00024-of-00048.parquet +3 -0
- dutch/train-00025-of-00048.parquet +3 -0
- dutch/train-00026-of-00048.parquet +3 -0
- dutch/train-00027-of-00048.parquet +3 -0
- dutch/train-00028-of-00048.parquet +3 -0
- dutch/train-00029-of-00048.parquet +3 -0
- dutch/train-00030-of-00048.parquet +3 -0
- dutch/train-00031-of-00048.parquet +3 -0
- dutch/train-00032-of-00048.parquet +3 -0
- dutch/train-00033-of-00048.parquet +3 -0
- dutch/train-00034-of-00048.parquet +3 -0
- dutch/train-00035-of-00048.parquet +3 -0
- dutch/train-00036-of-00048.parquet +3 -0
- dutch/train-00037-of-00048.parquet +3 -0
- dutch/train-00038-of-00048.parquet +3 -0
- dutch/train-00039-of-00048.parquet +3 -0
- dutch/train-00040-of-00048.parquet +3 -0
- dutch/train-00041-of-00048.parquet +3 -0
- dutch/train-00042-of-00048.parquet +3 -0
- dutch/train-00043-of-00048.parquet +3 -0
README.md
@@ -1,5 +1,4 @@
 ---
-pretty_name: MultiLingual LibriSpeech
 annotations_creators:
 - expert-generated
 language_creators:
@@ -13,17 +12,387 @@ language:
 - es
 - pt
 - pl
+- en
 license:
 - cc-by-4.0
 multilinguality:
 - multilingual
-paperswithcode_id: multilingual-librispeech
 size_categories:
 - 100K<n<1M
 source_datasets:
 - original
 task_categories:
 - automatic-speech-recognition
+- text-to-speech
+- text-to-audio
+paperswithcode_id: multilingual-librispeech
+pretty_name: MultiLingual LibriSpeech
+dataset_info:
+- config_name: dutch
+  features:
+  - name: audio
+    dtype: audio
+  - name: original_path
+    dtype: string
+  - name: begin_time
+    dtype: float64
+  - name: end_time
+    dtype: float64
+  - name: transcript
+    dtype: string
+  - name: audio_duration
+    dtype: float64
+  - name: speaker_id
+    dtype: string
+  - name: chapter_id
+    dtype: string
+  - name: file
+    dtype: string
+  - name: id
+    dtype: string
+  splits:
+  - name: dev
+    num_bytes: 199959986
+    num_examples: 3095
+  - name: test
+    num_bytes: 199298575
+    num_examples: 3075
+  - name: train
+    num_bytes: 23931679031
+    num_examples: 374287
+  - name: 9_hours
+    num_bytes: 139884664.668
+    num_examples: 2153
+  - name: 1_hours
+    num_bytes: 15462181
+    num_examples: 234
+  download_size: 24376256629
+  dataset_size: 24486284437.668
+- config_name: french
+  features:
+  - name: audio
+    dtype: audio
+  - name: original_path
+    dtype: string
+  - name: begin_time
+    dtype: float64
+  - name: end_time
+    dtype: float64
+  - name: transcript
+    dtype: string
+  - name: audio_duration
+    dtype: float64
+  - name: speaker_id
+    dtype: string
+  - name: chapter_id
+    dtype: string
+  - name: file
+    dtype: string
+  - name: id
+    dtype: string
+  splits:
+  - name: dev
+    num_bytes: 157923970.696
+    num_examples: 2416
+  - name: test
+    num_bytes: 158352158.582
+    num_examples: 2426
+  - name: train
+    num_bytes: 16984935842.04
+    num_examples: 258213
+  - name: 9_hours
+    num_bytes: 142796680.609
+    num_examples: 2167
+  - name: 1_hours
+    num_bytes: 15675831
+    num_examples: 241
+  download_size: 17381581776
+  dataset_size: 17459684482.927002
+- config_name: german
+  features:
+  - name: audio
+    dtype: audio
+  - name: original_path
+    dtype: string
+  - name: begin_time
+    dtype: float64
+  - name: end_time
+    dtype: float64
+  - name: transcript
+    dtype: string
+  - name: audio_duration
+    dtype: float64
+  - name: speaker_id
+    dtype: string
+  - name: chapter_id
+    dtype: string
+  - name: file
+    dtype: string
+  - name: id
+    dtype: string
+  splits:
+  - name: dev
+    num_bytes: 224293581.302
+    num_examples: 3469
+  - name: test
+    num_bytes: 225756069.096
+    num_examples: 3394
+  - name: train
+    num_bytes: 31050881388
+    num_examples: 469942
+  - name: 9_hours
+    num_bytes: 142777983.118
+    num_examples: 2194
+  - name: 1_hours
+    num_bytes: 15714704
+    num_examples: 241
+  download_size: 31526161821
+  dataset_size: 31659423725.516
+- config_name: italian
+  features:
+  - name: audio
+    dtype: audio
+  - name: original_path
+    dtype: string
+  - name: begin_time
+    dtype: float64
+  - name: end_time
+    dtype: float64
+  - name: transcript
+    dtype: string
+  - name: audio_duration
+    dtype: float64
+  - name: speaker_id
+    dtype: string
+  - name: chapter_id
+    dtype: string
+  - name: file
+    dtype: string
+  - name: id
+    dtype: string
+  splits:
+  - name: dev
+    num_bytes: 81607596.048
+    num_examples: 1248
+  - name: test
+    num_bytes: 83216752.046
+    num_examples: 1262
+  - name: train
+    num_bytes: 3896742625
+    num_examples: 59623
+  - name: 9_hours
+    num_bytes: 141671904.428
+    num_examples: 2173
+  - name: 1_hours
+    num_bytes: 15560398
+    num_examples: 240
+  download_size: 4200633596
+  dataset_size: 4218799275.522
+- config_name: polish
+  features:
+  - name: audio
+    dtype: audio
+  - name: original_path
+    dtype: string
+  - name: begin_time
+    dtype: float64
+  - name: end_time
+    dtype: float64
+  - name: transcript
+    dtype: string
+  - name: audio_duration
+    dtype: float64
+  - name: speaker_id
+    dtype: string
+  - name: chapter_id
+    dtype: string
+  - name: file
+    dtype: string
+  - name: id
+    dtype: string
+  splits:
+  - name: dev
+    num_bytes: 32746725
+    num_examples: 512
+  - name: test
+    num_bytes: 33735044
+    num_examples: 520
+  - name: train
+    num_bytes: 1638889846
+    num_examples: 25043
+  - name: 9_hours
+    num_bytes: 142005461
+    num_examples: 2173
+  - name: 1_hours
+    num_bytes: 15681216
+    num_examples: 238
+  download_size: 1855342312
+  dataset_size: 1863058292
+- config_name: portuguese
+  features:
+  - name: audio
+    dtype: audio
+  - name: original_path
+    dtype: string
+  - name: begin_time
+    dtype: float64
+  - name: end_time
+    dtype: float64
+  - name: transcript
+    dtype: string
+  - name: audio_duration
+    dtype: float64
+  - name: speaker_id
+    dtype: string
+  - name: chapter_id
+    dtype: string
+  - name: file
+    dtype: string
+  - name: id
+    dtype: string
+  splits:
+  - name: dev
+    num_bytes: 57533473
+    num_examples: 826
+  - name: test
+    num_bytes: 59141979
+    num_examples: 871
+  - name: train
+    num_bytes: 2518553713.946
+    num_examples: 37533
+  - name: 9_hours
+    num_bytes: 141641902.42
+    num_examples: 2116
+  - name: 1_hours
+    num_bytes: 15697139
+    num_examples: 236
+  download_size: 2780836500
+  dataset_size: 2792568207.366
+- config_name: spanish
+  features:
+  - name: audio
+    dtype: audio
+  - name: original_path
+    dtype: string
+  - name: begin_time
+    dtype: float64
+  - name: end_time
+    dtype: float64
+  - name: transcript
+    dtype: string
+  - name: audio_duration
+    dtype: float64
+  - name: speaker_id
+    dtype: string
+  - name: chapter_id
+    dtype: string
+  - name: file
+    dtype: string
+  - name: id
+    dtype: string
+  splits:
+  - name: dev
+    num_bytes: 157804903.144
+    num_examples: 2408
+  - name: test
+    num_bytes: 158526899.32
+    num_examples: 2385
+  - name: train
+    num_bytes: 14562584188
+    num_examples: 220701
+  - name: 9_hours
+    num_bytes: 142473624.48
+    num_examples: 2110
+  - name: 1_hours
+    num_bytes: 15702048
+    num_examples: 233
+  download_size: 14971394533
+  dataset_size: 15037091662.944
+configs:
+- config_name: dutch
+  data_files:
+  - split: dev
+    path: dutch/dev-*
+  - split: test
+    path: dutch/test-*
+  - split: train
+    path: dutch/train-*
+  - split: 9_hours
+    path: dutch/9_hours-*
+  - split: 1_hours
+    path: dutch/1_hours-*
+- config_name: french
+  data_files:
+  - split: dev
+    path: french/dev-*
+  - split: test
+    path: french/test-*
+  - split: train
+    path: french/train-*
+  - split: 9_hours
+    path: french/9_hours-*
+  - split: 1_hours
+    path: french/1_hours-*
+- config_name: german
+  data_files:
+  - split: dev
+    path: german/dev-*
+  - split: test
+    path: german/test-*
+  - split: train
+    path: german/train-*
+  - split: 9_hours
+    path: german/9_hours-*
+  - split: 1_hours
+    path: german/1_hours-*
+- config_name: italian
+  data_files:
+  - split: dev
+    path: italian/dev-*
+  - split: test
+    path: italian/test-*
+  - split: train
+    path: italian/train-*
+  - split: 9_hours
+    path: italian/9_hours-*
+  - split: 1_hours
+    path: italian/1_hours-*
+- config_name: polish
+  data_files:
+  - split: dev
+    path: polish/dev-*
+  - split: test
+    path: polish/test-*
+  - split: train
+    path: polish/train-*
+  - split: 9_hours
+    path: polish/9_hours-*
+  - split: 1_hours
+    path: polish/1_hours-*
+- config_name: portuguese
+  data_files:
+  - split: dev
+    path: portuguese/dev-*
+  - split: test
+    path: portuguese/test-*
+  - split: train
+    path: portuguese/train-*
+  - split: 9_hours
+    path: portuguese/9_hours-*
+  - split: 1_hours
+    path: portuguese/1_hours-*
+- config_name: spanish
+  data_files:
+  - split: dev
+    path: spanish/dev-*
+  - split: test
+    path: spanish/test-*
+  - split: train
+    path: spanish/train-*
+  - split: 9_hours
+    path: spanish/9_hours-*
+  - split: 1_hours
+    path: spanish/1_hours-*
 ---
 
 # Dataset Card for MultiLingual LibriSpeech
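The `configs` block in the new front-matter is what lets `datasets` resolve each language's parquet shards directly, replacing the deleted `multilingual_librispeech.py` loading script. A quick sanity check (a minimal sketch, assuming a recent `datasets` release; the expected output reflects the configs declared in this PR):

```python
from datasets import get_dataset_config_names, get_dataset_split_names

# Both are resolved from the YAML front-matter above; no loading script involved.
print(get_dataset_config_names("facebook/multilingual_librispeech"))
# ['dutch', 'french', 'german', 'italian', 'polish', 'portuguese', 'spanish']
print(get_dataset_split_names("facebook/multilingual_librispeech", "german"))
# ['dev', 'test', 'train', '9_hours', '1_hours']
```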
@@ -66,11 +435,12 @@ This is a streamable version of the Multilingual LibriSpeech (MLS) dataset.
 The data archives were restructured from the original ones from [OpenSLR](http://www.openslr.org/94) to make it easier to stream.
 
 MLS dataset is a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of
-8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish.
+8 languages - English, German, Dutch, Spanish, French, Italian, Portuguese, Polish. It includes about 44.5K hours of English and a total of about 6K hours for other languages.
 
 ### Supported Tasks and Leaderboards
 
 - `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active leaderboard which can be found at https://paperswithcode.com/dataset/multilingual-librispeech and ranks models based on their WER.
+- `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS).
 
 ### Languages
 
@@ -83,16 +453,13 @@ The `datasets` library allows you to load and pre-process your dataset in pure P
 For example, to download the German config, simply specify the corresponding language config name (i.e., "german" for German):
 ```python
 from datasets import load_dataset
-
 mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
 ```
 
 Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
 ```python
 from datasets import load_dataset
-
 mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
-
 print(next(iter(mls)))
 ```
 
@@ -103,7 +470,6 @@ Local:
 ```python
 from datasets import load_dataset
 from torch.utils.data.sampler import BatchSampler, RandomSampler
-
 mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
 batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
 dataloader = DataLoader(mls, batch_sampler=batch_sampler)
@@ -114,7 +480,6 @@ Streaming:
 ```python
 from datasets import load_dataset
 from torch.utils.data import DataLoader
-
 mls = load_dataset("facebook/multilingual_librispeech", "german", split="train", streaming=True)
 dataloader = DataLoader(mls, batch_size=32)
 ```
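Note that the "Local" snippet in the card uses `DataLoader` without importing it. A self-contained version of both setups (a sketch, assuming `datasets` and `torch` are installed; batching the nested `audio` dicts may additionally require a custom `collate_fn`):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader  # missing from the card's local example
from torch.utils.data.sampler import BatchSampler, RandomSampler

# Local: download the split once, then draw shuffled batches from it.
mls = load_dataset("facebook/multilingual_librispeech", "german", split="train")
batch_sampler = BatchSampler(RandomSampler(mls), batch_size=32, drop_last=False)
dataloader = DataLoader(mls, batch_sampler=batch_sampler)

# Streaming: an IterableDataset, so the batch size goes to the DataLoader directly.
mls_streamed = load_dataset("facebook/multilingual_librispeech", "german",
                            split="train", streaming=True)
streaming_dataloader = DataLoader(mls_streamed, batch_size=32)
```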
@@ -155,12 +520,11 @@ A typical data point comprises the path to the audio file, usually called `file`
 - id: unique id of the data sample.
 
 - speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
-
 - chapter_id: id of the audiobook chapter which includes the transcription.
 
 ### Data Splits
 
-
+| Number of samples | Train | Train.9h | Train.1h | Dev | Test |
 | ----- | ------ | ----- | ---- | ---- | ---- |
 | german | 469942 | 2194 | 241 | 3469 | 3394 |
 | dutch | 374287 | 2153 | 234 | 3095 | 3075 |
@@ -170,8 +534,6 @@ A typical data point comprises the path to the audio file, usually called `file`
 | portuguese | 37533 | 2116 | 236 | 826 | 871 |
 | polish | 25043 | 2173 | 238 | 512 | 520 |
 
-
-
 ## Dataset Creation
 
 ### Curation Rationale
@@ -238,7 +600,47 @@ Public Domain, Creative Commons Attribution 4.0 International Public License ([C
 }
 ```
 
+
+### Data Statistics
+
+| Duration (h) | Train | Dev | Test |
+|--------------|-----------|-------|-------|
+| English | 44,659.74 | 15.75 | 15.55 |
+| German | 1,966.51 | 14.28 | 14.29 |
+| Dutch | 1,554.24 | 12.76 | 12.76 |
+| French | 1,076.58 | 10.07 | 10.07 |
+| Spanish | 917.68 | 9.99 | 10 |
+| Italian | 247.38 | 5.18 | 5.27 |
+| Portuguese | 160.96 | 3.64 | 3.74 |
+| Polish | 103.65 | 2.08 | 2.14 |
+
+| # Speakers | Train | | Dev | | Test | |
+|------------|-------|------|-----|----|------|----|
+| Gender | M | F | M | F | M | F |
+| English | 2742 | 2748 | 21 | 21 | 21 | 21 |
+| German | 81 | 95 | 15 | 15 | 15 | 15 |
+| Dutch | 9 | 31 | 3 | 3 | 3 | 3 |
+| French | 62 | 80 | 9 | 9 | 9 | 9 |
+| Spanish | 36 | 50 | 10 | 10 | 10 | 10 |
+| Italian | 22 | 43 | 5 | 5 | 5 | 5 |
+| Portuguese | 26 | 16 | 5 | 5 | 5 | 5 |
+| Polish | 6 | 5 | 2 | 2 | 2 | 2 |
+
+| # Hours / Gender | Dev | | Test | |
+|------------------|------|------|------|------|
+| Gender | M | F | M | F |
+| English | 7.76 | 7.99 | 7.62 | 7.93 |
+| German | 7.06 | 7.22 | 7 | 7.29 |
+| Dutch | 6.44 | 6.32 | 6.72 | 6.04 |
+| French | 5.13 | 4.94 | 5.04 | 5.02 |
+| Spanish | 4.91 | 5.08 | 4.78 | 5.23 |
+| Italian | 2.5 | 2.68 | 2.38 | 2.9 |
+| Portuguese | 1.84 | 1.81 | 1.83 | 1.9 |
+| Polish | 1.12 | 0.95 | 1.09 | 1.05 |
+
+
+
+
 ### Contributions
 
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten)
-and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) and [@polinaeterna](https://github.com/polinaeterna) for adding this dataset.
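The Train.9h and Train.1h columns above correspond to the `9_hours` and `1_hours` splits declared in the YAML configs, so the limited-supervision subsets load like any other split (sample counts per the table; `datasets` assumed installed):

```python
from datasets import load_dataset

# Limited-supervision subsets are plain splits in the parquet layout.
mls_9h = load_dataset("facebook/multilingual_librispeech", "german", split="9_hours")
mls_1h = load_dataset("facebook/multilingual_librispeech", "german", split="1_hours")
print(len(mls_9h), len(mls_1h))  # 2194 241 for German, per the splits table
```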
create_dataset.py
@@ -0,0 +1,106 @@
+import os
+from datasets import DatasetDict, Audio
+import pandas as pd
+from datasets.table import embed_table_storage
+import argparse
+
+
+if __name__ == "__main__":
+    parser = argparse.ArgumentParser()
+
+
+    parser.add_argument("main_folder_path", type=str, help="Path of the base mls folder")
+    parser.add_argument("configuration", type=str, help="Dataset configuration to use, if necessary. Here corresponds to the language name.")
+    parser.add_argument("output_dir", type=str, help="Save the dataset on disk with this path.")
+
+    parser.add_argument("--cpu_num_workers", default=1, type=int, help="Number of CPU workers.")
+    parser.add_argument("--csv_folder_path", default=None, type=str, help="Path where to save intermediate csv, by default will be main_foldr_path")
+    parser.add_argument("--repo_id", default="facebook/multilingual_librispeech", type=str, help="Push the dataset to the hub.")
+
+
+    args = parser.parse_args()
+
+    main_folder_path = args.main_folder_path
+    csv_folder_path = args.csv_folder_path if args.csv_folder_path is not None else main_folder_path
+    if not os.path.exists(csv_folder_path):
+        os.makedirs(csv_folder_path)
+
+    splits = ["dev", "test", "train"]
+
+    # total_length_per_split = 10_000 * 60 * 60 # in sec -> 10k hours
+
+    csv_dict = {}
+    for split in splits:
+        segment_path = os.path.join(main_folder_path, split, "segments.txt")
+        transcript_path = os.path.join(main_folder_path, split, "transcripts.txt")
+
+        segments = pd.read_csv(segment_path, sep='\t', names=["audio", "original_path", "begin_time", "end_time"],
+                               index_col="audio")
+        transcripts = pd.read_csv(transcript_path, sep='\t', names=["audio", "transcript"], index_col="audio")
+
+        df = pd.concat([segments, transcripts], axis=1, join="inner")
+        print(
+            f"Segments and transcripts of {split} has been joined: new length {len(df)}, old lengths {(len(segments), len(transcripts))}")
+
+        # add audio duration
+        df["audio_duration"] = df["end_time"] - df["begin_time"]
+        df["split"] = split
+
+        print(f"len df {len(df)}")
+
+        df.to_csv(os.path.join(csv_folder_path, f"{split}.csv"))
+        csv_dict[split] = os.path.join(csv_folder_path, f"{split}.csv")
+
+        # take care of /limited_supervision
+        if split == "train":
+            nine_hours_segment_path = os.path.join(main_folder_path, "train/limited_supervision/9hr/handles.txt")
+            nine_hours_segment = pd.read_csv(nine_hours_segment_path, sep='\t', names=["audio"], index_col="audio").index
+            nine_hours_df = df.filter(items=nine_hours_segment, axis=0)
+            nine_hours_df.to_csv(os.path.join(csv_folder_path, f"9_hours.csv"))
+            csv_dict["9_hours"] = os.path.join(csv_folder_path, f"9_hours.csv")
+
+            one_hours_segments = [ os.path.join(f.path, "handles.txt") for f in os.scandir( os.path.join(main_folder_path, "train/limited_supervision/1hr")) if f.is_dir()]
+            one_hours_segments = pd.concat([pd.read_csv(one, sep='\t', names=["audio"], index_col="audio") for one in one_hours_segments], axis=0).index
+            one_hours_df = df.filter(items=one_hours_segments, axis=0)
+            one_hours_df.to_csv(os.path.join(csv_folder_path, f"1_hours.csv"))
+            csv_dict["1_hours"] = os.path.join(csv_folder_path, f"1_hours.csv")
+
+
+
+
+    dataset = DatasetDict.from_csv(csv_dict)
+
+    def extract_speaker_id_and_format_path(audio, split):
+        speaker_id = audio.split("_")[0]
+        chapter_id = audio.split("_")[1]
+        file = f"{audio}.opus"
+
+        path = os.path.join(main_folder_path, split, "audio", speaker_id, chapter_id, file)
+        return {"audio": path, "speaker_id": speaker_id, "chapter_id": chapter_id, "file": file, "id": audio}
+
+    # correct audio path
+    dataset = dataset.map(extract_speaker_id_and_format_path, input_columns=["audio", "split"], num_proc=args.cpu_num_workers, remove_columns=["split"])
+    dataset = dataset.cast_column("audio", Audio())
+
+    print(dataset)
+    print(dataset["dev"][0])
+
+    print("Embed table storage")
+
+    # load_dataset(...)
+    format = dataset["train"].format
+    dataset = dataset.with_format("arrow")
+    dataset = dataset.map(embed_table_storage, batched=True, num_proc=args.cpu_num_workers)
+    dataset = dataset.with_format(**format)
+
+
+    dataset.save_to_disk(args.output_dir, num_proc=args.cpu_num_workers)
+
+    if args.repo_id:
+        pushed = False
+        while not pushed:
+            try:
+                dataset.push_to_hub(args.repo_id, args.configuration, revision="refs/pr/15")
+                pushed = True
+            except:
+                pass
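For context, the script runs once per language over an extracted OpenSLR archive, along the lines of `python create_dataset.py /data/mls_german german ./mls_german_hf --cpu_num_workers 8` (paths here are illustrative). The `DatasetDict` it writes with `save_to_disk` can then be reloaded locally (a sketch, reusing the hypothetical output path):

```python
from datasets import load_from_disk

# Reload the DatasetDict that create_dataset.py wrote with save_to_disk(...).
dataset = load_from_disk("./mls_german_hf")  # hypothetical output_dir
print(dataset)  # splits: dev, test, train, 9_hours, 1_hours
```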
dutch/1_hours-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ba5b95170c0452a71580c730104d42fce130ef291922fa5bec8c94abd4b24bb
+size 15398863

dutch/9_hours-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:173586e5abe761359d5d6906d630f6826a66e41a588945c0add7a8ecdb883c74
+size 139332979

dutch/dev-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0c40d8db30f3cc86041430fb43f091768cba1e65b0b27d8cc3cda02e28571881
+size 198983422

dutch/test-00000-of-00001.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:feb88a282be746e7c14e1f09c017b4f69b07df9c25c94a3943eaac5ca7794ad5
+size 198390898

dutch/train-00000-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5197c6697455438f0f22a32e8108fd9295b8f21071d46c06f8ef5822e879b1b
+size 498278678

dutch/train-00001-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e09d94d9eb6f8a8df26ff85377c1a5b1d8db68f5fdd175e6dbcf7610b18e1c6
+size 493730836

dutch/train-00002-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da9939252200d99e8e1c7922b16b720ee9f0798779ee4446d357de84d4c6c50a
+size 492751421

dutch/train-00003-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19f5d62017ffd56dfcbd2ddb37d816c81c9c5b0c81b5468f727c9fd659b6e5a1
+size 492439483

dutch/train-00004-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38a9df9aa261cdd5a19ab234552a6de54da5760e681f91f7b43667fc2a169fb2
+size 507283150

dutch/train-00005-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84e5473fc0d4e1820ace39eb130b843ffc4550fabd53dbdfd3de1adbd52942df
+size 506228652

dutch/train-00006-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bca27f4aad28aeb155544c321205e1c3316791b6492d1e7e4290eff18be54f95
+size 503469896

dutch/train-00007-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3c2977b37bd03ac58acd5227575cafd13604aa42f88fb266a67a7b594fa6e2a
+size 505732565

dutch/train-00008-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c58e2cd6150500777ffd19067db6474419bfeaa8517accc37febd7d79ca0cfb4
+size 497204004

dutch/train-00009-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1045ccd20f04d49acf356c29145c349edf31d5ecb4879b012e62f4b290ccf73f
+size 499720663

dutch/train-00010-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6131ece5adc923f7a4d78a90c7f537726eb535c684535205ddffb60e5ef0edc
+size 509800583

dutch/train-00011-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a7aee3d7a9d5bf05ca454f85cd1c11244115f1656239cc21ca6d3aa104982b2
+size 496752382

dutch/train-00012-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7bef88920a3c837426536529c50538e92c22a9dde91b3997b061d1a49cb07b4
+size 490141218

dutch/train-00013-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:70f1ccf17d7071dee92a91dcf17ac170d94a7559016070d3ee693538c0251a7c
+size 495052581

dutch/train-00014-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae9d4a435e0c0316ae50fe2cf5d97317a69b01ed9ca127e6b008433469e4ec13
+size 497074003

dutch/train-00015-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56a2f5bb81abd080e60a58400a1618dd7deb997089df6592dbbd6757c26742d1
+size 496902151

dutch/train-00016-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f3828df35f9fac1e0f3938f652050dd8c1b7ba67438939aae4002fc1986544d8
+size 500414203

dutch/train-00017-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be08e3f22832d9c1855acc6155192bd39a0af5595584bde51e9ce85fd2ef0f1c
+size 497591344

dutch/train-00018-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e706257981e9558d694407154d1c894d3fb424f2a969c0237f283d06ac6c658
+size 490266080

dutch/train-00019-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a1e2db14534a9464fca84339b35472ea9d3af9fb7b15084319e71ee7e92087e
+size 490653457

dutch/train-00020-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61f23455642dadd84b2f968edc5b00ecc674db749dcd12d9a3a38967d52b1500
+size 495251474

dutch/train-00021-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:773ff43f23d2764093b60c9827e1584c0a62bafe14e9c782f38c7083de799590
+size 495863633

dutch/train-00022-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4ad84fa298641ee71f899e7806b5dac2decfeb714f3be31d1ce01bd450f0255
+size 498791755

dutch/train-00023-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a52e17a2b7e521139c21c623f6be7ca0e176cbad1e8539b4b623da723c33e68f
+size 494115983

dutch/train-00024-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f55d61f3301ef7f59463129a13b2b4bb0e969d616f3250be621a8024a515bc10
+size 496357824

dutch/train-00025-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5bccbdee0d7c3a55faa52caa8d4354f1b53b8f5ae8eb417053b0054930caa8bf
+size 496049208

dutch/train-00026-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d3ef6dc3b3e3c139e98d462ae96fcbf0e0b83011fa1735064fce98ab285a1e2
+size 492055241

dutch/train-00027-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6948db6a68c4a6192f4a8d135aa7c6b9fce89ee09c9bd46ee2437525130905ae
+size 497549729

dutch/train-00028-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36afd357fae1ec6e042c72e90e39832c92e4f02fe506f1e3c520dd88e5492bb9
+size 491405199

dutch/train-00029-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0b5550bafa417f5c81611125bb5f244b21e40012191b73fc58ffc92b4647d9b5
+size 490596710

dutch/train-00030-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d541892b1ad46a6988c96196d832f5e12b17c81bfb1842121206ad86ed2c9322
+size 492830835

dutch/train-00031-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ceb71076ebc5d3b0b20edd16597d0cdc894e2fb69b9157f5f2156be539275c3b
+size 492267570

dutch/train-00032-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4b8697914269d883adabdb59746e793efdf166e26b7191b6d83663aeab6b002
+size 498742993

dutch/train-00033-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27467712ad4a83a8a07dac91f2811db7af6ce1e4fbc65976d5bc701af7941591
+size 489899475

dutch/train-00034-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:38ce47c438759c725ca1d71d01e253492d2c3e62975a4df71be313037655c215
+size 495329697

dutch/train-00035-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00e5af34f05f99d093444b222e2a735a9b339d795b5a10acd5b0754e51bd9645
+size 493591605

dutch/train-00036-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8da20769d1b9422d3cca118b3f0e621dee00ddd86cb824810ecd290e90655b57
+size 495242385

dutch/train-00037-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3163a6f718dabb3c87784fa48d5f7101a078c23e0f0c0a650b2fe175882f420
+size 497938540

dutch/train-00038-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5efa29e0a500ed52f32c84852da8dc74d629d73b91fa99c7aa81dcd67a8044be
+size 493616604

dutch/train-00039-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9694d9e6eacf8d63e3bad43e4f3cba401caf1215c2cc243923369880c6e7d78
+size 498235792

dutch/train-00040-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:32dcb54c6060b4f69a63cdbf65046d137875d2732af2cf846dfac57f2b813d21
+size 492895497

dutch/train-00041-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7824d10dc265289c5ed1d1e8b24b7e3c6b1a52aeacf4777521947188956ce0a
+size 495966896

dutch/train-00042-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87f323c63bcbfd02ab606dfafaef0d01f9a16591dfbf45ed8ee5ea2702c9e1a3
+size 495535322

dutch/train-00043-of-00048.parquet
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c9b9319fa8870c4befedd1d4535bf4e488719b598662ba75bbed78fbd0a81f0
+size 495532373