---
license: llama2
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/cai-conversation-harmless
language:
- hu
- en
---
# SambaLingo-Hungarian-Chat-70B
SambaLingo-Hungarian-Chat-70B is a human-aligned chat model trained in Hungarian and English. It is trained using direct preference optimization on top of the base model [SambaLingo-Hungarian-Base](https://huggingface.co./sambanovasystems/SambaLingo-Hungarian-Base). The base model adapts [Llama-2-70b](https://huggingface.co./meta-llama/Llama-2-70b-hf) to Hungarian by training on 19 billion tokens from the Hungarian split of the [Cultura-X](https://huggingface.co./datasets/uonlp/CulturaX) dataset. Try this model at [SambaLingo-chat-space](https://huggingface.co./spaces/sambanovasystems/SambaLingo-chat-space).
## Model Description
- **Developed by:** [SambaNova Systems](https://sambanova.ai/)
- **Model type:** Language Model
- **Language(s):** Hungarian, English
- **Finetuned from model:** [Llama-2-70b](https://huggingface.co./meta-llama/Llama-2-70b-hf)
- **Paper:** [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
- **Blog Post:** [sambalingo-open-source-language-experts](https://sambanova.ai/blog/sambalingo-open-source-language-experts)
## Getting Started
### Loading Model With Hugging Face
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("sambanovasystems/SambaLingo-Hungarian-Chat-70B", use_fast=False)  # slow tokenizer, as noted above
model = AutoModelForCausalLM.from_pretrained("sambanovasystems/SambaLingo-Hungarian-Chat-70B", device_map="auto", torch_dtype="auto")  # shard across available GPUs, using the checkpoint's native dtype
```
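As a quick usage sketch (not part of the original card), the loaded model and tokenizer can be used together with the chat template described in the Prompting Guidelines section below; the question and generation settings are only illustrative:
```python
# Minimal generation example; sampling values follow the suggested inference parameters listed later in this card.
messages = [{"role": "user", "content": "Mesélj röviden Budapestről!"}]  # illustrative question: "Tell me briefly about Budapest!"
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.9)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))  # decode only the newly generated tokens
```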
### Interacting With Model Pipeline
Please make sure to set `use_fast=False` when loading the tokenizer.
```python
from transformers import pipeline
pipe = pipeline("text-generation", model="sambanovasystems/SambaLingo-Hungarian-Chat-70B", device_map="auto", use_fast=False)
messages = [
{"role": "user", "content": {YOUR_QUESTION}},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt)[0]
outputs = outputs["generated_text"]
```
### Suggested Inference Parameters
- Temperature: 0.8
- Repetition penalty: 1.0
- Top-p: 0.9
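As an illustration (not from the original card), these values map directly onto the standard `generate()` arguments and can be passed to the pipeline built above; `max_new_tokens` is an arbitrary choice:
```python
# Reusing `pipe` and `prompt` from the pipeline example above.
outputs = pipe(
    prompt,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.0,
    top_p=0.9,
    max_new_tokens=512,  # illustrative value, not specified in this card
)
print(outputs[0]["generated_text"])
```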
### Prompting Guidelines
To prompt this model, please use the following chat template:
```
<|user|>\n{question}\n<|assistant|>\n
```
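For backends that do not apply the chat template automatically, the prompt can also be built by hand. This is just a sketch of the same format, with a made-up example question:
```python
question = "Mi Magyarország fővárosa?"  # "What is the capital of Hungary?"
prompt = f"<|user|>\n{question}\n<|assistant|>\n"
```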
### Example Prompts and Generations
```
<|user|>
Írj át minden nevet <Anonim>-ra a következő bekezdésben:\nNagy Róbert felkelt, megitta reggeli kávéját, elköszönt feleségétől, Évától és beült autójába. Késésben volt, ezért gyorsabban vezzetett a megengedettnél, aminek következményeképp Pesti Károly ezredes megállította. Végül egy bírsággal megúszta, de a munkából már bőven elkésett.
<|assistant|>
Anonim felkelt, megitta reggeli kávéját, elköszönt feleségétől, Anonimtól, és beült autójába. Késésben volt, ezért gyorsabban vezzett a megengedettnél, aminek következtében Anonim ezredes megállította. Végül egy bírsággal megúszta, de a munkából már bőven elkésett.
```
## Training Details
The alignment phase follows the recipe for [Zephyr-7B](https://huggingface.co./HuggingFaceH4/zephyr-7b-beta), and comprises two stages: supervised fine-tuning (SFT) and Direct Preference Optimization (DPO).
The SFT phase was done on the [ultrachat_200k](https://huggingface.co./datasets/HuggingFaceH4/ultrachat_200k) dataset mixed with the Google translated version of the ultrachat_200k dataset. It was trained for one epoch with global batch size 512 and max sequence length 2048 tokens. We used a linear decay learning rate of 2e-5 and 10% warmup.
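The exact training code is not released with this card; the following is only a rough sketch of how the SFT hyperparameters above could be expressed with TRL's `SFTTrainer` (argument names may differ across TRL versions, dataset mixing with the translated split and the distributed setup are omitted, and the output path and per-device batch size are placeholders):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")

sft_config = SFTConfig(
    output_dir="sambalingo-hu-sft",     # hypothetical output path
    num_train_epochs=1,                 # one epoch
    learning_rate=2e-5,
    lr_scheduler_type="linear",         # linear decay
    warmup_ratio=0.1,                   # 10% warmup
    max_seq_length=2048,                # max sequence length of 2048 tokens
    per_device_train_batch_size=8,      # scale with GPU count to reach a global batch size of 512
)

trainer = SFTTrainer(
    model="sambanovasystems/SambaLingo-Hungarian-Base",
    args=sft_config,
    train_dataset=train_dataset,
)
trainer.train()
```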
The DPO phase was done on the [ultrafeedback](https://huggingface.co./datasets/HuggingFaceH4/ultrafeedback_binarized) dataset and [cai-conversation-harmless](https://huggingface.co./datasets/HuggingFaceH4/cai-conversation-harmless) dataset, mixed with 10% of the data Google translated. It was trained with global batch size 32 and for three epochs. We used a linear decay learning rate of 5e-7, 10% warmup and β=0.1 as the regularization factor for DPO.
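Similarly, a hedged sketch of the DPO stage using TRL's `DPOTrainer` (again not the released training code; the SFT checkpoint path and per-device batch size are placeholders, the translated data mix is omitted, and argument names may differ across TRL versions):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_checkpoint = "path/to/hungarian-sft-checkpoint"  # hypothetical: the model produced by the SFT stage
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint, use_fast=False)

train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

dpo_config = DPOConfig(
    output_dir="sambalingo-hu-dpo",     # hypothetical output path
    num_train_epochs=3,                 # three epochs
    learning_rate=5e-7,
    lr_scheduler_type="linear",         # linear decay
    warmup_ratio=0.1,                   # 10% warmup
    beta=0.1,                           # DPO regularization factor
    per_device_train_batch_size=1,      # scale with GPU count to reach a global batch size of 32
)

trainer = DPOTrainer(
    model=model,
    args=dpo_config,
    train_dataset=train_dataset,
    processing_class=tokenizer,         # `tokenizer=` in older TRL releases
)
trainer.train()
```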
## Tokenizer Details
We extended the vocabulary of the base Llama model from 32,000 tokens to 57,000 tokens by adding up to 25,000 non-overlapping tokens from the new language.
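As a quick sanity check (not from the original card), the extended vocabulary is visible through the released tokenizer:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "sambanovasystems/SambaLingo-Hungarian-Chat-70B", use_fast=False
)
print(len(tokenizer))  # expected to be roughly 57,000
```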
## Evaluation
For evaluation results see our paper: [SambaLingo: Teaching Large Language Models New Languages](https://arxiv.org/abs/2404.05829)
## Uses
### Direct Use
Use of this model is governed by Meta's [Llama 2 Community License Agreement](https://ai.meta.com/llama/license/). Please review and accept the license before downloading the model weights.
### Out-of-Scope Use
SambaLingo should NOT be used for:
- Mission-critical applications
- Applications that involve the safety of others
- Making highly important decisions
## Bias, Risks, and Limitations
Like all LLMs, SambaLingo has certain limitations:
- Hallucination: The model may sometimes generate responses that contain plausible-sounding but factually incorrect or irrelevant information.
- Code Switching: The model might unintentionally switch between languages or dialects within a single response, affecting the coherence and understandability of the output.
- Repetition: The model may produce repetitive phrases or sentences, leading to less engaging and informative responses.
- Coding and Math: The model's performance in generating accurate code or solving complex mathematical problems may be limited.
- Toxicity: The model could inadvertently generate responses containing inappropriate or harmful content.
## Acknowledgments
We extend our heartfelt gratitude to the open-source AI community; this endeavor would not have been possible without open source. SambaNova embraces the open-source community and aspires to actively contribute to this initiative.
We would like to give a special thanks to the following groups:
- Meta for open sourcing Llama 2 and the FLORES-200 dataset
- Nguyen et al. for open sourcing the CulturaX dataset
- CohereAI for releasing AYA-101 and open sourcing a multilingual instruction tuning dataset
- EleutherAI for their open source evaluation framework
- Hugging Face H4 team for open sourcing the Zephyr training recipe and the alignment handbook repo
## Cite SambaLingo
```
@misc{csaki2024sambalingo,
      title={SambaLingo: Teaching Large Language Models New Languages},
      author={Zoltan Csaki and Bo Li and Jonathan Li and Qiantong Xu and Pian Pawakapan and Leon Zhang and Yun Du and Hengyu Zhao and Changran Hu and Urmish Thakker},
      year={2024},
      eprint={2404.05829},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```