See axolotl config

axolotl version: `0.5.2`

```yaml
base_model: huihui-ai/Llama-3.1-Tulu-3-8B-abliterated
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: FourOhFour/RP_Phase
    type: chat_template
    chat_template: llama3
    roles_to_train: ["gpt"]
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    train_on_eos: turn
shuffle_merged_datasets: true
default_system_message:
dataset_prepared_path:
val_set_size: 0.0125
output_dir: ./output/out
hub_model_id: jeiku/evil8b
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len:
wandb_project: evil
wandb_entity:
wandb_watch:
wandb_name: evil
wandb_log_model:
gradient_accumulation_steps: 16
micro_batch_size: 2
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 1e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 2
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|finetune_right_pad_id|>
  eos_token: <|eot_id|>
```
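The `datasets` block above uses axolotl's `chat_template` loader with the llama3 template, so each record is expected in ShareGPT-style form: a `conversations` list whose turns carry `from`/`value` fields, with loss computed only on `gpt` turns (`roles_to_train`) and on the EOS token closing each trained turn (`train_on_eos: turn`). A minimal sketch of one such record, with invented content:

```python
# Illustrative ShareGPT-style record matching the datasets block above;
# the conversation text is made up for the example.
record = {
    "conversations": [
        {"from": "system", "value": "You are a roleplay partner."},    # optional system turn
        {"from": "human",  "value": "The tavern door creaks open..."},
        {"from": "gpt",    "value": "A hooded figure steps inside."},  # only 'gpt' turns are trained
    ]
}
```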
# evil8b

This model is a fine-tuned version of huihui-ai/Llama-3.1-Tulu-3-8B-abliterated on the FourOhFour/RP_Phase dataset. It achieves the following results on the evaluation set:

- Loss: 1.0089
## Model description

More information needed
## Intended uses & limitations

More information needed
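In the absence of documented usage, here is a minimal inference sketch. It assumes the `jeiku/evil8b` repo id from the `hub_model_id` field in the config above, the standard `transformers` chat-template API, and a GPU with bfloat16 support:

```python
# Minimal inference sketch; repo id taken from hub_model_id in the axolotl config above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jeiku/evil8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```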
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments (see the sketch after this list)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
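As a rough sketch of how the optimizer and schedule above fit together, assuming bitsandbytes' `PagedAdamW8bit` and the `transformers` cosine schedule with warmup; the toy module and the ~262 optimizer steps per epoch (read off the results table below) are illustrative, not taken from the training code:

```python
# Sketch of the optimizer/scheduler pairing listed above.
# Requires a CUDA-enabled bitsandbytes install; the Linear module stands in for the model.
import torch
from bitsandbytes.optim import PagedAdamW8bit
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(16, 16)
optimizer = PagedAdamW8bit(
    model.parameters(), lr=1e-5, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.05
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=10, num_training_steps=2 * 262  # ~262 steps/epoch x 2 epochs
)

# Each optimizer step sees micro_batch_size (2) x gradient_accumulation_steps (16) = 32 examples.
```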
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.5229        | 0.5004 | 131  | 1.0768          |
| 2.103         | 1.0012 | 262  | 1.0223          |
| 1.3982        | 1.5016 | 393  | 1.0089          |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.3.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
## Model tree for FourOhFour/Tulu-Tree-Fiddy-8B

Base model: meta-llama/Llama-3.1-8B, fine-tuned successively as allenai/Llama-3.1-Tulu-3-8B-SFT, allenai/Llama-3.1-Tulu-3-8B-DPO, and allenai/Llama-3.1-Tulu-3-8B.