Tsunami Model

Tsunami-0.5x-7B-Instruct

TSUNAMI: Transformative Semantic Understanding and Natural Augmentation Model for Intelligence.

The full name TSUNAMI was generated by ChatGPT.


Information

Tsunami-0.5x-7B-Instruct is a Thai large language model (7.62B parameters, BF16 weights) fine-tuned from Qwen2.5-7B on a Thai dataset of around 100,000 rows.


Prompt Template

This model uses the ChatML prompt template:

<|im_start|>system
{System}<|im_end|>
<|im_start|>user
{User}<|im_end|>
<|im_start|>assistant
{Assistant}
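
For illustration, a minimal sketch of the prompt string this template produces for a simple exchange (the system and user contents here are placeholder examples; in practice tokenizer.apply_chat_template builds this string for you, as shown in the next section):

# ChatML prompt assembled by hand, ending with the assistant header
# so the model knows to begin generating its reply.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "สวัสดีครับ<|im_end|>\n"
    "<|im_start|>assistant\n"
)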

How to use


from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "Tsunami-th/Tsunami-0.5x-7B-Instruct"

# Load the model in its native dtype (BF16) and place it on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "สวัสดีครับ"}  # "Hello" in Thai
]

# Render the messages into the ChatML format shown above and append
# the assistant header so the model starts generating a reply.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt")
inputs = inputs.to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)
print(response)
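
To watch the reply as it is produced instead of decoding everything at the end, you can attach a TextStreamer from transformers to generate. A minimal sketch, reusing the model, tokenizer, and inputs from the snippet above:

from transformers import TextStreamer

# Prints decoded tokens to stdout as they are generated;
# skip_prompt=True leaves the input prompt out of the printed text.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
with torch.no_grad():
    model.generate(**inputs, max_new_tokens=512, streamer=streamer)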

Author


  • Tsunami-0.5x-7B-Instruct is version 0.5x, which was not trained on the whole dataset.
  • Tsunami-1.0-7B-Instruct is coming soon.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 29.80 |
| IFEval (0-shot)     | 70.99 |
| BBH (3-shot)        | 37.36 |
| MATH Lvl 5 (4-shot) |  4.83 |
| GPQA (0-shot)       |  8.61 |
| MuSR (0-shot)       | 18.57 |
| MMLU-PRO (5-shot)   | 38.42 |