---
base_model: unsloth/gemma-2-2b-it-bnb-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - gemma2
  - trl
license: apache-2.0
language:
  - en
---

# Uploaded model

The model was fine-tuned on two datasets, loaded and formatted as follows:

```python
from datasets import load_dataset

# Load and map the first dataset (Alpaca-style roleplay prompts)
dataset = load_dataset("TokenBender/roleplay_alpaca", split="train")
dataset = dataset.map(formatting_prompts_func_alpaca, batched=True)

# Load and map the second dataset (chain-of-thought reasoning prompts)
dataset2 = load_dataset("Magpie-Align/Magpie-Reasoning-V1-150K-CoT-QwQ", split="train")
dataset2 = dataset2.map(formatting_prompts_func_magpie, batched=True)
```
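The `formatting_prompts_func_alpaca` and `formatting_prompts_func_magpie` helpers are not included in this card. Below is a minimal sketch of what an Alpaca-style formatting function could look like; the column names, prompt template, and EOS handling are assumptions, not the author's exact code:

```python
# Hypothetical sketch of an Alpaca-style formatter for dataset.map(..., batched=True).
# Column names ("instruction", "input", "output") and the template are assumptions.
alpaca_template = """### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = "<eos>"  # assumed; in practice, tokenizer.eos_token

def formatting_prompts_func_alpaca(examples):
    texts = []
    for instruction, inp, output in zip(
        examples["instruction"], examples["input"], examples["output"]
    ):
        # Join the fields into one training string and append EOS
        # so the model learns where a response ends.
        texts.append(alpaca_template.format(instruction, inp, output) + EOS_TOKEN)
    return {"text": texts}
```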
- **Developed by:** bunnycore
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-2-2b-it-bnb-4bit

This gemma2 model was trained 2x faster with Unsloth and Huggingface's TRL library.
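For context, a minimal sketch of how the two formatted datasets might feed an Unsloth + TRL fine-tuning run is shown below. The LoRA configuration, sequence length, hyperparameters, and the decision to concatenate the two datasets on their `text` column are all assumptions, not the author's training script, and depending on the installed `trl` version some arguments may need to move into `SFTConfig`:

```python
from datasets import concatenate_datasets
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model and tokenizer via Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-2b-it-bnb-4bit",
    max_seq_length=2048,   # assumed; not stated in the card
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are assumptions).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Combine the two mapped datasets from the snippet above, keeping only
# the formatted "text" column so their schemas match.
train_data = concatenate_datasets([
    dataset.select_columns(["text"]),
    dataset2.select_columns(["text"]),
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_data,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```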