---
license: apache-2.0
language:
- que
datasets:
- allenai/nllb
- cis-lmu/Glot500
- sil-ai/bloom-lm
- statmt/cc100
- Llamacha/monolingual-quechua-iic
- legacy-datasets/wikipedia
- allenai/MADLAD-400
- oscar-corpus/OSCAR-2109
library_name: transformers
pipeline_tag: text-generation
tags:
- goldfish
- arxiv:2408.10441
---

# que_latn_full

Goldfish is a suite of monolingual language models trained for 350 languages. This model is the Quechua (Latin script) model, trained on 139MB of data (all of our data in the language) after accounting for an estimated byte premium of 1.21: content-matched text in Quechua takes on average 1.21x as many UTF-8 bytes to encode as English.

The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
Note: que_latn is a macrolanguage code. Individual language codes quz_latn (Cusco Quechua) and quy_latn (Ayacucho Quechua) are included in Goldfish, although with less data.
All training and hyperparameter details are in our paper, Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024).
Training code and sample usage: https://github.com/tylerachang/goldfish
Sample usage also in this Google Colab: link
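A minimal usage sketch with the `transformers` library is below. The model id `goldfish-models/que_latn_full` and the prompt are assumptions for illustration; see the repository and Colab above for the canonical sample code. As noted under Model details below, the [CLS] token should be prepended to the input.

```python
# Minimal generation sketch (not the official sample code).
# Assumptions: the Hugging Face model id is "goldfish-models/que_latn_full" and the
# tokenizer exposes its [CLS] token as tokenizer.cls_token.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "goldfish-models/que_latn_full"  # assumption: adjust to the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prepend [CLS] manually and disable automatic special tokens so it is not added twice.
prompt = tokenizer.cls_token + "Runasimi"  # placeholder prompt; replace with your own Quechua text
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```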
## Model details
To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json. All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences. For best results, make sure that [CLS] is prepended to your input sequence (see sample usage linked above)! Details for this model specifically:
- Architecture: gpt2
- Parameters: 124770816
- Maximum sequence length: 512 tokens
- Training text data (raw): 169.32MB
- Training text data (byte premium scaled): 139.385MB
- Training tokens: 40595968 (x10 epochs)
- Vocabulary size: 50000
- Compute cost: 2.07152584261632e+17 FLOPs or ~19.6 NVIDIA A6000 GPU hours
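The byte-premium scaling above can be sanity-checked with a quick calculation. This is a sketch assuming the scaled size is simply the raw size divided by the byte premium; the quoted 1.21 is rounded, which explains the small gap from 139.385MB.

```python
# Relationship between raw and byte-premium-scaled training data sizes
# (assumed: scaled_mb = raw_mb / byte_premium).
raw_mb = 169.32
scaled_mb = 139.385

print(raw_mb / 1.21)       # ~139.9MB using the rounded premium of 1.21
print(raw_mb / scaled_mb)  # implied unrounded byte premium: ~1.215
```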
Training datasets (percentages prior to deduplication):
- 66.54127%: NLLB (CommonCrawl and ParaCrawl)
- 16.31458%: AmericasNLP (excluding AmericasNLI)
- 7.98999%: Glot500, including BLOOM, CC100, Earthlings, OSCAR, Quechua-IIC, Tatoeba, W2C, Wikipedia Hugging Face
- 5.28328%: MADLAD-400 (CommonCrawl)
- 3.76909%: Wikipedia 2023/08
- 0.09735%: OSCAR 2021/09
- 0.00445%: Tatoeba
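To read the machine-readable details file mentioned under Model details above from a script, something like the sketch below works. The raw-content URL is inferred from the GitHub blob path, and the JSON layout is not documented in this card, so the snippet only inspects the result.

```python
# Fetch the Goldfish model_details.json referenced above (sketch; layout not documented here).
import json
import urllib.request

url = "https://raw.githubusercontent.com/tylerachang/goldfish/main/model_details.json"
with urllib.request.urlopen(url) as resp:
    details = json.load(resp)

# If entries are keyed by model name (an assumption), this shows this model's entry;
# otherwise it just shows the top-level structure.
if isinstance(details, dict):
    print(details.get("que_latn_full", list(details)[:5]))
else:
    print(details[:1])
```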
## Citation

If you use this model, please cite:

```
@article{chang-etal-2024-goldfish,
  title={Goldfish: Monolingual Language Models for 350 Languages},
  author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
  journal={Preprint},
  year={2024},
  url={https://www.arxiv.org/abs/2408.10441},
}
```