arxiv:2407.05975

LLaMAX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages

Published on Jul 8
· Submitted by FeYuan on Jul 9
#2 Paper of the day
Abstract

Large Language Models (LLMs) demonstrate remarkable translation capabilities in high-resource language tasks, yet their performance in low-resource languages is hindered by insufficient multilingual data during pre-training. To address this, we dedicate 35,000 A100-SXM4-80GB GPU hours to conducting extensive multilingual continual pre-training on the LLaMA series models, enabling translation support across more than 100 languages. Through a comprehensive analysis of training strategies, such as vocabulary expansion and data augmentation, we develop LLaMAX. Remarkably, without sacrificing its generalization ability, LLaMAX achieves significantly higher translation performance than existing open-source LLMs (by more than 10 spBLEU points) and performs on par with the specialized translation model M2M-100-12B on the Flores-101 benchmark. Extensive experiments indicate that LLaMAX can serve as a robust multilingual foundation model. The code (https://github.com/CONE-MT/LLaMAX/) and models (https://huggingface.co./LLaMAX/) are publicly available.
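The abstract reports gains of more than 10 spBLEU points on Flores-101. As a quick illustration of how spBLEU is typically computed, here is a minimal sketch using the sacrebleu library with its SentencePiece-based `flores101` tokenizer; the hypothesis and reference strings are placeholders, and the paper's exact evaluation pipeline may differ.

```python
# Minimal sketch of an spBLEU computation with sacrebleu (assumes sacrebleu >= 2.0).
# The sentences below are placeholders; the paper's evaluation setup may differ.
import sacrebleu

hypotheses = ["The cat sits on the mat."]          # system translations, one per segment
references = [["The cat is sitting on the mat."]]  # one reference stream, aligned to hypotheses

# tokenize="flores101" selects the SentencePiece tokenizer used for spBLEU.
score = sacrebleu.corpus_bleu(hypotheses, references, tokenize="flores101")
print(f"spBLEU: {score.score:.2f}")
```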

Community

Comment from the paper author and submitter:

LLaMAX is a powerful language model created specifically for multilingual scenarios. Built upon Meta's LLaMA series models, LLaMAX undergoes extensive training across more than 100 languages. Remarkably, it enhances its multilingual capabilities without compromising its generalization ability, surpassing existing LLMs.

Highlights:

  • LLaMAX delivers enhanced translation performance across all 101 languages covered by Flores-101.

  • LLaMAX also benefits unseen long-tail low-resource languages, as demonstrated by its performance on Flores-200.

  • LLaMAX provides a better starting point for multilingual tasks, as demonstrated by >5% accuracy improvements after fine-tuning with task-specific data.

  • The paper also provides extensive analysis of the multilingual continual pre-training strategies.
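For readers who want to try the released checkpoints (https://huggingface.co./LLaMAX/), below is a minimal, hedged sketch of prompting an instruction-tuned LLaMAX model for translation with the Hugging Face transformers library. The model ID `LLaMAX/LLaMAX2-7B-Alpaca` and the prompt format are assumptions; check the model cards for the exact identifiers and recommended prompts.

```python
# Hedged sketch: translating with a LLaMAX checkpoint via transformers.
# The model ID and prompt template below are assumptions; consult the
# model cards at https://huggingface.co./LLaMAX/ for the exact usage.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLaMAX/LLaMAX2-7B-Alpaca"  # hypothetical checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Translate the following sentence from English to Swahili.\n"
    "English: The weather is nice today.\n"
    "Swahili:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```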

More Details: code at https://github.com/CONE-MT/LLaMAX/ and models at https://huggingface.co./LLaMAX/.



Models citing this paper 12


Datasets citing this paper 0


Spaces citing this paper 3

Collections including this paper 5