# Qwen2 fine-tune

MaziyarPanahi/Qwen2-7B-Instruct-v0.8

This is a fine-tuned version of the Qwen/Qwen2-7B model, intended to improve on the base model across all benchmarks.

## ⚡ Quantized GGUF

All GGUF models are available here: MaziyarPanahi/Qwen2-7B-Instruct-v0.8-GGUF

πŸ† Open LLM Leaderboard Evaluation Results

coming soon!

## Prompt Template

This model uses the ChatML prompt template:

```text
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```
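As a rough illustration of the format, the template above can be assembled with plain string formatting. `build_chatml_prompt` below is a hypothetical helper written for this sketch, not part of this repository; in practice, `tokenizer.apply_chat_template(..., add_generation_prompt=True)` produces the equivalent string for you.

```python
# Minimal sketch: render messages into a ChatML prompt by hand.
# build_chatml_prompt is illustrative only; prefer the tokenizer's
# built-in chat template in real code.

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Leave the assistant turn open so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
])
print(prompt)
```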

## How to use


```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="MaziyarPanahi/Qwen2-7B-Instruct-v0.8")
pipe(messages)
```


```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.8")
model = AutoModelForCausalLM.from_pretrained("MaziyarPanahi/Qwen2-7B-Instruct-v0.8")

# Build a ChatML prompt with the tokenizer's chat template and generate a reply
messages = [{"role": "user", "content": "Who are you?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
