---
configs:
- config_name: default
  data_files:
  - split: original
    path: data/original-*
  - split: lat
    path: data/lat-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: original
    num_bytes: 19244856855
    num_examples: 39712
  - name: lat
    num_bytes: 13705512346
    num_examples: 39712
  download_size: 16984559355
  dataset_size: 32950369201
annotations_creators:
- no-annotation
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
multilinguality:
- monolingual
language:
- uz
size_categories:
- 10M<n<100M
pretty_name: UzBooks
license: apache-2.0
tags:
- uz
- books
---

# Dataset Card for UzBooks

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://tahrirchi.uz/grammatika-tekshiruvi](https://tahrirchi.uz/grammatika-tekshiruvi)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 16.98 GB
- **Size of the generated dataset:** 32.95 GB
- **Total amount of disk used:** 49.93 GB

### Dataset Summary

In an effort to democratize research on low-resource languages, we release the UzBooks dataset, a cleaned book corpus of 39,712 books in the Uzbek language. It is divided into two branches: "original", containing the OCRed texts (in both Latin and Cyrillic script), and "lat", containing fully Latin versions of the same texts.

Please refer to our [blog post](https://tahrirchi.uz/grammatika-tekshiruvi) and paper (coming soon) for further details.

To load the dataset, run:

```python
from datasets import load_dataset

uz_books = load_dataset("tahrirchi/uz-books")
```

## Dataset Structure

### Data Instances

#### plain_text

- **Size of downloaded dataset files:** 16.98 GB
- **Size of the generated dataset:** 32.95 GB
- **Total amount of disk used:** 49.93 GB
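The reported sizes can be cross-checked against the per-split `num_bytes` values in the metadata header above:

```python
# num_bytes per split and download size, taken from the dataset_info metadata
original_bytes = 19_244_856_855
lat_bytes = 13_705_512_346
download_bytes = 16_984_559_355

dataset_bytes = original_bytes + lat_bytes
assert dataset_bytes == 32_950_369_201  # matches dataset_size (32.95 GB)

# Total disk used = downloaded archives + generated dataset
total_gb = (dataset_bytes + download_bytes) / 1e9
print(f"{total_gb:.2f} GB")  # → 49.93 GB
```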

An example from the `original` split looks as follows.
```
{
    "text": "Hamsa\nAlisher Navoiy ..."
}
```

### Data Fields

The data fields are the same among all splits.

#### plain_text
- `text`: a `string` feature that contains text of the books.

### Data Splits

| name            | examples |
|-----------------|---------:|
| original        |   39712  |
| lat             |   39712  |

## Dataset Creation

The books were crawled from various internet sources and digitized with the [Tesseract OCR Engine](https://github.com/tesseract-ocr/tesseract). The "lat" branch was then produced by converting the original texts to the Latin script with carefully curated scripts, so that research and development can proceed in a single orthography.
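To illustrate the kind of conversion involved, here is a minimal Cyrillic-to-Latin transliteration sketch. The mapping below is a simplified, hypothetical example (it ignores context-dependent rules such as word-initial "е" → "ye") and is not the curated scripts used to build the "lat" split:

```python
# Simplified Uzbek Cyrillic -> Latin mapping (illustrative only, NOT the
# curated conversion scripts used to produce the "lat" split).
CYR_TO_LAT = {
    "а": "a", "б": "b", "в": "v", "г": "g", "д": "d", "е": "e",
    "ё": "yo", "ж": "j", "з": "z", "и": "i", "й": "y", "к": "k",
    "л": "l", "м": "m", "н": "n", "о": "o", "п": "p", "р": "r",
    "с": "s", "т": "t", "у": "u", "ф": "f", "х": "x", "ц": "ts",
    "ч": "ch", "ш": "sh", "ъ": "ʼ", "э": "e", "ю": "yu", "я": "ya",
    "ў": "oʻ", "қ": "q", "ғ": "gʻ", "ҳ": "h",
}

def transliterate(text: str) -> str:
    """Convert Uzbek Cyrillic text to Latin, character by character."""
    out = []
    for ch in text:
        lat = CYR_TO_LAT.get(ch.lower(), ch)
        # Preserve capitalisation of the original character.
        out.append(lat.capitalize() if ch.isupper() else lat)
    return "".join(out)

print(transliterate("китоб"))  # kitob
print(transliterate("Ўзбек"))  # Oʻzbek
```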


## Citation

Please cite this dataset using the following format:

```
@online{Mamasaidov2023UzBooks,
    author    = {Mukhammadsaid Mamasaidov and Abror Shopulatov},
    title     = {UzBooks dataset},
    year      = {2023},
    url       = {https://huggingface.co./datasets/tahrirchi/uz-books},
    note      = {Accessed: 2023-10-28}, % change this date
    urldate   = {2023-10-28} % change this date
}
```

## Gratitude

We are thankful to these awesome organizations and people for helping to make it happen:

 - [Ilya Gusev](https://github.com/IlyaGusev/): for advice throughout the process
 - [David Dale](https://daviddale.ru): for advice throughout the process

## Contacts

We hope this work will enable and inspire enthusiasts around the world to uncover the hidden beauty of low-resource languages, in particular Uzbek.

For further development of, or issues with, the dataset, please contact [email protected] or [email protected].