Anacondia

Anacondia-70m is a Pythia-70m-deduped model fine-tuned with QLoRA on the timdettmers/openassistant-guanaco dataset.

Usage

Anacondia is not intended for any downstream use; it was trained for educational purposes. Please fine-tune it for downstream tasks, or consider more capable models for inference if that does not match your use case.

Training procedure

The following bitsandbytes quantization config was used during training:

  • load_in_8bit: False
  • load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: bfloat16
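
The settings above can be expressed in code. The following is an assumed reconstruction using transformers' BitsAndBytesConfig; the parameter names mirror the list above, but verify them against your installed transformers and bitsandbytes versions:

```python
# Assumed reconstruction of the quantization config listed above.
# Options left at their defaults (load_in_8bit=False, llm_int8_* settings)
# are omitted here for brevity.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16
)
```

A config like this is typically passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained` when loading the base model for QLoRA training.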

Framework versions

  • PEFT 0.4.0

Inference


# import the model and tokenizer classes
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "UncleanCode/anacondia-70m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# tokenize a prompt and generate a continuation
inputs = tokenizer("This is a sentence", return_tensors="pt")
outputs = model.generate(**inputs)

print(tokenizer.decode(outputs[0]))