# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

## Usage


```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "mrcuddle/Tiny-DarkLlama-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

# Build the chat-formatted prompt as token ids.
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)

# Move inputs to the model's device (works on CPU as well as GPU,
# unlike a hard-coded .to('cuda')) and cap the response length.
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
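
By default `generate` decodes greedily; for more varied chat output you can enable sampling. A minimal sketch, where the `temperature` and `top_p` values are illustrative assumptions rather than tuned settings for this model:

```python
# Sampled generation: the values below are illustrative, not tuned defaults.
output_ids = model.generate(
    input_ids.to(model.device),
    max_new_tokens=256,
    do_sample=True,    # stochastic sampling instead of greedy decoding
    temperature=0.7,   # soften the next-token distribution
    top_p=0.9,         # nucleus sampling: keep the smallest token set with 90% mass
)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
```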

## Datasets used in training

- ChaoticNeutrals/Synthetic-Dark-RP
- ChaoticNeutrals/Synthetic-RP
- ChaoticNeutrals/Luminous_Opus
- NobodyExistsOnTheInternet/ToxicQAFinal
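
Each of these can be pulled from the Hub by repo id with the 🤗 `datasets` library for inspection. A minimal sketch; the `"train"` split name is an assumption, so check each repo for its actual splits:

```python
from datasets import load_dataset

# Load one of the training datasets by its Hub repo id.
# The "train" split is an assumption; each repo defines its own splits.
ds = load_dataset("ChaoticNeutrals/Synthetic-Dark-RP", split="train")
print(ds)     # schema and row count
print(ds[0])  # first example
```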

## Eval

`huggingface (pretrained=mrcuddle/tiny-darkllama-chat), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 16`

| Tasks          | Version | Filter | n-shot | Metric       | Value  | Stderr   |
|----------------|---------|--------|--------|--------------|--------|----------|
| hellaswag      | 1       | none   | 0      | acc ↑        | 0.4659 | ± 0.0050 |
|                |         | none   | 0      | acc_norm ↑   | 0.6044 | ± 0.0049 |
| lambada_openai | 1       | none   | 0      | acc ↑        | 0.6101 | ± 0.0068 |
|                |         | none   | 0      | perplexity ↓ | 5.9720 | ± 0.1591 |
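
The header above is the output format of EleutherAI's lm-evaluation-harness, so a run along the following lines should reproduce these scores. A sketch assuming lm-eval v0.4+, whose `simple_evaluate` API is used here:

```python
import lm_eval

# Mirrors the settings in the header above:
# hf backend, batch_size=16, no few-shot examples, no limit.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mrcuddle/Tiny-DarkLlama-Chat",
    tasks=["hellaswag", "lambada_openai"],
    batch_size=16,
)
print(results["results"])
```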