SmolLM2-360M-japanese_token-11403

This model is a fine-tuned version of halcyon-llm/SmolLM2-360M-japanese_patch-11000 on the kajuma/training_01-09_token dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1685

Model description

More information needed

Intended uses & limitations

More information needed
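
As a minimal usage sketch, assuming this checkpoint is a standard causal language model with a bundled tokenizer (the prompt and generation settings below are illustrative, not taken from this card):

```python
# Minimal sketch, assuming a standard causal LM checkpoint loadable with AutoModelForCausalLM;
# bfloat16 loading and the generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "halcyon-llm/SmolLM2-360M-japanese_token-11403"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("日本の首都は", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```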

Training and evaluation data

More information needed
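
The training and evaluation splits are not documented here. As a rough sketch, assuming kajuma/training_01-09_token is a standard Hugging Face dataset repository (the split name and column layout are assumptions), the data could be inspected with the datasets library:

```python
# Sketch only: split name and columns are not documented on this card.
from datasets import load_dataset

ds = load_dataset("kajuma/training_01-09_token", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # first example, to check the column layout
```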

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 0.003
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 64
  • total_train_batch_size: 256
  • optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9, 0.95), epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine_with_min_lr
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 1.0
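
Note that train_batch_size (4) × gradient_accumulation_steps (64) = 256, matching the listed total train batch size on a single device. As a hedged reconstruction, the settings above roughly correspond to the following TrainingArguments; values not reported here (e.g. the min-LR for the cosine_with_min_lr scheduler, weight decay) are left at their defaults:

```python
# Rough sketch of the listed hyperparameters as TrainingArguments.
# This is not the exact training script; output_dir and bf16 are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smollm2-360m-japanese_token",  # hypothetical output path
    learning_rate=3e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=64,            # 4 * 64 = 256 effective batch size
    seed=42,
    optim="adamw_bnb_8bit",                    # 8-bit AdamW via bitsandbytes
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_min_lr",
    warmup_steps=1000,
    num_train_epochs=1.0,
)
```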

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.3365        | 0.0438 | 500   | 1.3520          |
| 1.3108        | 0.0877 | 1000  | 1.3424          |
| 1.2777        | 0.1315 | 1500  | 1.3183          |
| 1.3173        | 0.1754 | 2000  | 1.2995          |
| 1.2953        | 0.2192 | 2500  | 1.2839          |
| 1.2651        | 0.2631 | 3000  | 1.2729          |
| 1.2416        | 0.3069 | 3500  | 1.2610          |
| 1.2501        | 0.3508 | 4000  | 1.2496          |
| 1.2258        | 0.3946 | 4500  | 1.2393          |
| 1.1961        | 0.4385 | 5000  | 1.2292          |
| 1.2401        | 0.4823 | 5500  | 1.2193          |
| 1.2089        | 0.5262 | 6000  | 1.2098          |
| 1.1854        | 0.5700 | 6500  | 1.2019          |
| 1.1716        | 0.6138 | 7000  | 1.1943          |
| 1.2056        | 0.6577 | 7500  | 1.1877          |
| 1.1998        | 0.7015 | 8000  | 1.1821          |
| 1.1582        | 0.7454 | 8500  | 1.1777          |
| 1.1667        | 0.7892 | 9000  | 1.1744          |
| 1.1042        | 0.8331 | 9500  | 1.1722          |
| 1.1436        | 0.8769 | 10000 | 1.1705          |
| 1.1224        | 0.9208 | 10500 | 1.1695          |
| 1.1215        | 0.9646 | 11000 | 1.1688          |
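
Assuming the reported losses are mean token-level cross-entropy in nats (the usual convention for causal LM training with the transformers Trainer), the final evaluation loss of 1.1685 corresponds to a perplexity of roughly exp(1.1685) ≈ 3.22:

```python
# Convert the reported cross-entropy loss (assumed nats per token) to perplexity.
import math

final_eval_loss = 1.1685                     # evaluation loss reported above
print(round(math.exp(final_eval_loss), 2))   # ≈ 3.22
```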

Framework versions

  • Transformers 4.48.0.dev0
  • PyTorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0
