Sailor: A New Multilingual Open LLM for South-East Asia
Last month we released a new family of multilingual language models called **Sailor**, ranging from 0.5B to 7B parameters and continually pre-trained from the Qwen1.5 models. Based on our extensive benchmarking, the Sailor models demonstrate exceptional performance on South-East Asian languages, taking us one step closer to multilingual LLMs that can serve the diverse needs of the region and beyond.
Today, we're more than excited to share the key technical details behind the Sailor models!
**Key highlights**:
Data curation: Merging short examples, document-level code-switching, and aggressive data cleaning and deduplication (a minimal packing sketch follows after this list).
Tokenization Robustness: We find that BPE dropout is very effective at dealing with prompt variations (see the BPE-dropout sketch after this list).
Optimizing Data Mixture: We propose a new approach to automatically balance capabilities across different languages!
Recipe for Continual Pre-training: We discover a powerful metric that can help predict how well the Sailor models will perform on the original domain (e.g., English) after continual pre-training.
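As a rough illustration of what "merging short examples" can look like, here is a minimal sketch (not the actual Sailor pipeline): short documents are packed together until a target token budget is reached, so the model sees fewer truncated fragments. The 4096-token budget, the separator, and the whitespace-based token count are illustrative assumptions.

```python
# Minimal sketch (not the Sailor pipeline): pack short documents together
# until a target token budget is reached. The 4096-token budget and the
# whitespace-based "token" count are illustrative assumptions.
def merge_short_examples(docs, max_tokens=4096, sep="\n\n"):
    merged, buffer, length = [], [], 0
    for doc in docs:
        n = len(doc.split())  # crude proxy for a real tokenizer count
        if buffer and length + n > max_tokens:
            merged.append(sep.join(buffer))
            buffer, length = [], 0
        buffer.append(doc)
        length += n
    if buffer:
        merged.append(sep.join(buffer))
    return merged

docs = ["Short doc one.", "Short doc two.", "A slightly longer third doc."]
print(merge_short_examples(docs, max_tokens=8))
```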
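For the tokenization-robustness point, here is a minimal sketch of BPE dropout using SentencePiece's subword-regularization API; this is an assumption for illustration only and is not the Qwen/Sailor tokenizer. With sampling enabled, merges are randomly skipped, so the same prompt can be segmented in several different ways during training, which makes the model less brittle to small prompt variations. The corpus path, vocabulary size, and dropout rate are hypothetical.

```python
# Minimal sketch: BPE dropout via SentencePiece subword regularization.
# The corpus file, vocab size, and alpha (dropout probability) are
# hypothetical; this is not the Sailor/Qwen tokenizer.
import sentencepiece as spm

# Train a toy BPE tokenizer on a small corpus (hypothetical file path).
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="toy_bpe",
    vocab_size=1000, model_type="bpe",
)
sp = spm.SentencePieceProcessor(model_file="toy_bpe.model")

text = "Selamat pagi, apa khabar?"
# Deterministic segmentation (no dropout):
print(sp.encode(text, out_type=str))
# With BPE dropout, each merge is skipped with probability alpha, so
# repeated calls can produce different segmentations of the same text.
print(sp.encode(text, out_type=str, enable_sampling=True, alpha=0.1))
print(sp.encode(text, out_type=str, enable_sampling=True, alpha=0.1))
```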
We are thrilled to share these technical details with the community and invite you to explore the Sailor models. We hope the Sailor models take us one step closer to multilingual LLMs for the whole world!
To learn more, please access our research paper or reach out to our team.
Paper: Sailor: Open Language Models for South-East Asia (arXiv:2404.03608)
Model: sail/sailor-language-models-65e19a749f978976f1959825
Code: https://github.com/sail-sg/sailor-llm