Good evening~

🧬UPDATE: Please see here for the v0.2 model.

🧬Rain-7B-v0.1

Rain-7B-v0.1 is an experimental model finetuned from Qwen1.5-7B-Chat on thousands of chain-of-thought conversations.

It works better with a "think step by step" prompt.
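One way to apply that cue is to append it to the last user message before the chat template is rendered. The helper below is purely illustrative (plain string handling, not part of the model's API), a minimal sketch of the idea:

```python
def add_step_by_step(messages):
    """Append a 'think step by step' cue to the last user message.

    Illustrative helper: Rain-7B-v0.1 was tuned on chain-of-thought
    conversations, so this cue tends to elicit step-by-step reasoning.
    """
    out = [dict(m) for m in messages]  # shallow copy so the input is untouched
    for m in reversed(out):
        if m["role"] == "user":
            m["content"] = m["content"].rstrip() + "\n\nLet's think step by step."
            break
    return out

messages = [{"role": "user", "content": "What is 17 * 24?"}]
print(add_step_by_step(messages)[0]["content"])
```

The modified messages can then be passed to `tokenizer.apply_chat_template` as in the usage example below.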

🧬Evaluation

| Model name       | MMLU |
|------------------|------|
| Qwen1.5-7B-Chat  | 55.8 |
| Rain-7B-v0.1     | 58.1 |

🧬Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "raincandy-u/Rain-7B-v0.1"
messages = [{"role": "user", "content": "What is chain of thought?"}]

# Render the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in half precision, spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
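For reference, `apply_chat_template` for Qwen1.5-style chat models renders a ChatML-formatted string. The sketch below builds that format by hand so it runs without downloading the tokenizer; it is an approximation (the real template may also insert a default system message, which this sketch omits), and `tokenizer.apply_chat_template` remains the source of truth:

```python
def chatml_prompt(messages, add_generation_prompt=True):
    # Hand-rolled ChatML rendering, for illustration only.
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open an assistant turn so generation continues as the assistant.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [{"role": "user", "content": "What is chain of thought?"}]
print(chatml_prompt(messages))
```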
Safetensors · Model size: 7.72B params · Tensor type: BF16

🧬Dataset used to train raincandy-u/Rain-7B-v0.1