# Hermes-Instruct-7B-217K
Mistral-7B-Instruct-v0.2 fine-tuned on 217K rows of teknium/openhermes, in Alpaca format. Why? Mistral-7B-Instruct-v0.2 ships with a native 32K context and a RoPE theta of 1M. It's not a base model, so I've applied the same recipe with different amounts of data to gauge the effects of further fine-tuning an already instruct-tuned model.
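If you want to confirm the long-context settings yourself, you can read them straight off the model config without downloading the weights. A minimal sketch, assuming the standard transformers `AutoConfig` API and that the Mistral config exposes `max_position_embeddings` and `rope_theta` (it does for v0.2):

```python
from transformers import AutoConfig

# Load only the config (no weights) to inspect the context settings.
config = AutoConfig.from_pretrained("lodrick-the-lafted/Hermes-Instruct-7B-217K")

print(config.max_position_embeddings)  # 32768 for Mistral-7B-Instruct-v0.2
print(config.rope_theta)               # 1000000.0
```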
## Prompt Format
Both the default Mistral-Instruct tags and the Alpaca format work, so use either:
```
<s>[INST] {sys_prompt} {instruction} [/INST]
```

or

```
{sys_prompt}

### Instruction:
{instruction}

### Response:
```
The tokenizer's default chat template is Alpaca this time around.
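For example, either prompt can be built by hand as a plain string. A minimal sketch; the system prompt and instruction here are placeholders:

```python
sys_prompt = "You are a helpful assistant."    # placeholder system prompt
instruction = "Summarize the plot of Hamlet."  # placeholder instruction

# Mistral-Instruct style
mistral_prompt = f"<s>[INST] {sys_prompt} {instruction} [/INST]"

# Alpaca style (the tokenizer's default chat template)
alpaca_prompt = f"{sys_prompt}\n\n### Instruction:\n{instruction}\n\n### Response:\n"
```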
## Usage
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "lodrick-the-lafted/Hermes-Instruct-7B-217K"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline in bfloat16, reusing the tokenizer loaded above.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.bfloat16},
)

# The tokenizer's chat template turns the message list into an Alpaca-style prompt.
messages = [{"role": "user", "content": "Give me a cooking recipe for an apple pie."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Sample a completion; generated_text contains the prompt followed by the model's reply.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```
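By default the pipeline echoes the prompt back as part of `generated_text`. If you only want the completion, the text-generation pipeline accepts `return_full_text=False`. A small sketch reusing the `pipeline` and `prompt` objects from above:

```python
# Drop the prompt from the output and keep only the newly generated tokens.
outputs = pipeline(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    return_full_text=False,
)
print(outputs[0]["generated_text"])
```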