How to use

Fine-tuned from Qwen2.5-7B-Instruct.


```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="Minami-su/test-v2-7B-00")
pipe(messages)
```


```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Minami-su/test-v2-7B-00")
model = AutoModelForCausalLM.from_pretrained("Minami-su/test-v2-7B-00")
```
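Once the tokenizer and model are loaded, generation can be run by formatting the conversation with the tokenizer's chat template. The sketch below assumes the Qwen2.5-Instruct chat-template convention; the dtype, `device_map`, and decoding settings are illustrative assumptions, not taken from the source.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Minami-su/test-v2-7B-00")
# bfloat16 and automatic device placement are assumed settings, not from the card.
model = AutoModelForCausalLM.from_pretrained(
    "Minami-su/test-v2-7B-00", torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Who are you?"}]
# Build the prompt with the model's chat template, then generate.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The slice `outputs[0][inputs.shape[-1]:]` drops the echoed prompt so only the model's reply is printed.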
Model size: 7.62B params (BF16, Safetensors).