GlotCC: An Open Broad-Coverage CommonCrawl Corpus and Pipeline for Minority Languages
Abstract
The need for large text corpora has increased with the advent of pretrained language models and, in particular, the discovery of scaling laws for these models. Most available corpora have sufficient data only for languages with large dominant communities. However, there is no corpus available that (i) covers a wide range of minority languages; (ii) is generated by an open-source reproducible pipeline; and (iii) is rigorously cleaned of noise, making it trustworthy to use. We present GlotCC, a clean, document-level, 2TB general-domain corpus derived from CommonCrawl, covering more than 1000 languages. We make GlotCC and the system used to generate it, including the pipeline, language identification model, and filters, available to the research community. Corpus v. 1.0: https://huggingface.co/datasets/cis-lmu/GlotCC-v1, Pipeline v. 3.0: https://github.com/cisnlp/GlotCC.
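For quick inspection of the released corpus, here is a minimal sketch using the Hugging Face `datasets` library. The per-language configuration name `"eng-Latn"` is an assumed example of ISO 639-3 code plus script; check the dataset card for the exact list of available subsets.

```python
# Minimal sketch: stream one language subset of GlotCC-v1 from the
# Hugging Face Hub. The config name "eng-Latn" is an assumed example
# (ISO 639-3 + script); see the dataset card for available subsets.
from datasets import load_dataset

ds = load_dataset(
    "cis-lmu/GlotCC-v1",
    "eng-Latn",      # assumed per-language config name
    split="train",
    streaming=True,  # iterate without downloading the full subset
)

for doc in ds:
    print(doc)  # one document-level record
    break
```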
Community
GlotCC is here! 💥 (Accepted at NeurIPS 2024!)
How can we scale NLP research to 1,000 languages? We built an open-source corpus and pipeline, including a LangID model, to mine data from the web (see the LangID sketch after the links below).
Paper: https://arxiv.org/abs/2410.23825
Corpus: https://huggingface.co./datasets/cis-lmu/GlotCC-V1
Our pipeline and homepage for GlotCC: https://github.com/cisnlp/GlotCC
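A minimal sketch of running the LangID step, assuming the model referred to above is the fastText-based GlotLID model published by the same group (`cis-lmu/glotlid` on the Hub); the label format shown in the comment is also an assumption, so verify both against the model card.

```python
# Minimal sketch: language identification with fastText.
# Assumptions: the LangID model is GlotLID (cis-lmu/glotlid), and its
# labels look like "__label__<iso639-3>_<script>". Verify on the Hub.
import fasttext
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(repo_id="cis-lmu/glotlid", filename="model.bin")
model = fasttext.load_model(model_path)

# Predict the top-1 language for a single line of text
# (fastText expects input without newline characters).
labels, scores = model.predict("This is an English sentence.", k=1)
print(labels[0], float(scores[0]))  # e.g. __label__eng_Latn 0.99
```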