---
language:
- bn
- en
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
- ur
license: cc-by-4.0
size_categories:
- 1M<n<10M
---

# PRALEKHA

**PRALEKHA** is a large-scale benchmark for evaluating document-level alignment techniques. It includes 2M+ documents, covering 11 Indic languages and English, with a mix of aligned and unaligned pairs.

---

## Dataset Description

**PRALEKHA** covers 12 languages: Bengali (`ben`), Gujarati (`guj`), Hindi (`hin`), Kannada (`kan`), Malayalam (`mal`), Marathi (`mar`), Odia (`ori`), Punjabi (`pan`), Tamil (`tam`), Telugu (`tel`), Urdu (`urd`), and English (`eng`). It includes a mixture of high- and medium-resource languages, covering 11 different scripts. The dataset spans two broad domains: **news bulletins** and **podcast scripts**, offering both written and spoken forms of data. All the data is human-written or human-verified, ensuring high quality.

The dataset contains a **2:1 ratio of aligned to unaligned documents** (see the statistics below), making it ideal for benchmarking cross-lingual document alignment techniques.

### Data Fields

Each data sample includes the following fields (an illustrative record is shown after the data sources below):

- **`n_id`:** Unique identifier for aligned document pairs.
- **`doc_id`:** Unique identifier for individual documents.
- **`lang`:** Language of the document (ISO-3 code).
- **`text`:** The textual content of the document.

### Data Sources

1. **News Bulletins:** Data was custom-scraped from the [Indian Press Information Bureau (PIB)](https://pib.gov.in) website. Documents were aligned by matching bulletin IDs, which interlink bulletins across languages.
2. **Podcast Scripts:** Data was sourced from [Mann Ki Baat](https://www.pmindia.gov.in/en/mann-ki-baat), a radio program hosted by the Indian Prime Minister. The program, originally spoken in Hindi, was manually transcribed and translated into various Indian languages.
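To make the schema concrete, the sketch below loads the aligned split and inspects a single record. The record shape shown in the comment is illustrative only; the identifier formats and text are invented for demonstration, not copied from the dataset.

```python
from datasets import load_dataset

# Load the aligned split (see the Usage section below for more download options).
dataset = load_dataset("ai4bharat/pralekha", split="aligned")

# Each record is a dict with the four fields described above.
# Illustrative shape (invented values, not an actual row):
# {
#     "n_id": "123456",        # shared by all documents aligned with one another
#     "doc_id": "123456-eng",  # unique to this individual document
#     "lang": "eng",           # ISO-3 language code
#     "text": "The Prime Minister today inaugurated ...",
# }
print(dataset[0])
```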
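Because all documents in an aligned pair share the same `n_id`, cross-lingual document pairs can be reconstructed by grouping the aligned split on that field. Below is a minimal sketch of one way to do this for English-Hindi; the grouping logic is our own illustration, not a utility shipped with the dataset.

```python
from collections import defaultdict

from datasets import load_dataset

aligned = load_dataset("ai4bharat/pralekha", split="aligned")

# Collect the text of each English and Hindi document under its shared n_id.
texts_by_n_id = defaultdict(dict)
for row in aligned:
    if row["lang"] in ("eng", "hin"):
        texts_by_n_id[row["n_id"]][row["lang"]] = row["text"]

# Keep only identifiers for which both sides of the pair are present.
pairs = [
    (docs["eng"], docs["hin"])
    for docs in texts_by_n_id.values()
    if "eng" in docs and "hin" in docs
]
print(f"Reconstructed {len(pairs)} English-Hindi document pairs")
```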
### Dataset Size Statistics

| Split         | Number of Documents | Size (bytes)   |
|---------------|---------------------|----------------|
| **Aligned**   | 1,566,404           | 10,274,361,211 |
| **Unaligned** | 783,197             | 4,466,506,637  |
| **Total**     | 2,349,601           | 14,740,867,848 |

### Language-wise Statistics

| Language (`ISO-3`)  | Aligned Documents | Unaligned Documents | Total Documents |
|---------------------|-------------------|---------------------|-----------------|
| Bengali (`ben`)     | 95,813            | 47,906              | 143,719         |
| English (`eng`)     | 298,111           | 149,055             | 447,166         |
| Gujarati (`guj`)    | 67,847            | 33,923              | 101,770         |
| Hindi (`hin`)       | 204,809           | 102,404             | 307,213         |
| Kannada (`kan`)     | 61,998            | 30,999              | 92,997          |
| Malayalam (`mal`)   | 67,760            | 33,880              | 101,640         |
| Marathi (`mar`)     | 135,301           | 67,650              | 202,951         |
| Odia (`ori`)        | 46,167            | 23,083              | 69,250          |
| Punjabi (`pan`)     | 108,459           | 54,229              | 162,688         |
| Tamil (`tam`)       | 149,637           | 74,818              | 224,455         |
| Telugu (`tel`)      | 110,077           | 55,038              | 165,115         |
| Urdu (`urd`)        | 220,425           | 110,212             | 330,637         |

---

# Usage

You can use the following commands to download and explore the dataset:

## Downloading the Entire Dataset

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/pralekha")
```

## Downloading a Specific Split (aligned or unaligned)

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/pralekha", split="<split>")

# For example:
dataset = load_dataset("ai4bharat/pralekha", split="aligned")
```

## Downloading a Specific Language from a Split

```python
from datasets import load_dataset

dataset = load_dataset("ai4bharat/pralekha", split="<split>/<lang>")

# For example:
dataset = load_dataset("ai4bharat/pralekha", split="aligned/ben")
```

---

## License

This dataset is released under the [**CC BY 4.0**](https://creativecommons.org/licenses/by/4.0/) license.

---

## Contact

For any questions or feedback, please contact:

- Raj Dabre ([raj.dabre@cse.iitm.ac.in](mailto:raj.dabre@cse.iitm.ac.in))
- Sanjay Suryanarayanan ([sanj.ai@outlook.com](mailto:sanj.ai@outlook.com))
- Haiyue Song ([haiyue.song@nict.go.jp](mailto:haiyue.song@nict.go.jp))
- Mohammed Safi Ur Rahman Khan ([safikhan2000@gmail.com](mailto:safikhan2000@gmail.com))

Please get in touch with us for any copyright concerns.