---
library_name: transformers
license: apache-2.0
language:
  - ja
  - en
base_model:
  - Qwen/Qwen2.5-Math-7B-Instruct
pipeline_tag: text-generation
datasets:
  - openai/gsm8k
---

# Qwen2.5-Math-7B-Instruct-jp-EZO_OREO

🚨 Qwen2.5-Math-7B-Instruct-jp-EZO_OREO mainly supports solving Japanese, English, and Chinese math problems through chain-of-thought (CoT) and tool-integrated reasoning (TIR). We do not recommend using this series of models for other tasks.


## 🤗 Hugging Face Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "AXCXEPT/Qwen2.5-Math-7B-Instruct-jp-EZO_OREO"
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Find the value of $x$ that satisfies the equation $4x+5 = 6x+7$."

# CoT (use this to elicit step-by-step chain-of-thought reasoning)
messages = [
    {"role": "system", "content": "Please reason step by step, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]

# TIR (use this instead to have the model integrate tool use; TIR gives higher benchmark scores)
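# NOTE: use only one of the two `messages` definitions; this second
# assignment overwrites the CoT messages above.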
messages = [
    {"role": "system", "content": "Please integrate natural language reasoning with programs to solve the problem above, and put your final answer within \\boxed{}."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
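
Both system prompts ask the model to put its final answer inside `\boxed{}`, so the answer can be recovered from `response` with a little post-processing. The helper below is a minimal sketch (the function name and regex are illustrative, not part of this model card); note that in TIR mode the model also emits Python code, which is normally executed by an external interpreter to reproduce the reported benchmark behaviour.

```python
import re

def extract_boxed_answer(text: str) -> str | None:
    """Return the contents of the last \\boxed{...} in `text`, or None.

    Minimal sketch: handles one level of brace nesting, which covers
    typical answers such as \\boxed{-1} or \\boxed{\\frac{1}{2}}.
    """
    matches = re.findall(r"\\boxed\{((?:[^{}]|\{[^{}]*\})*)\}", text)
    return matches[-1] if matches else None

# For the example prompt above, 4x + 5 = 6x + 7 gives x = -1:
print(extract_boxed_answer(response))  # -> "-1"
```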