3f1295f1-18df-4a54-ab83-e61c86ee523d

This model is a fine-tuned version of VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:

  • Loss: 1.5933
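
How to use

The following is a minimal loading sketch, not an official snippet from the author: it assumes the adapter is published under the repo id shown in the model tree below and that it applies directly on top of the stated base model. The PEFT adapter weights are loaded over the frozen base model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Repo ids taken from this card; adjust if the weights are hosted elsewhere.
base_id = "VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct"
adapter_id = "philip-hightech/3f1295f1-18df-4a54-ab83-e61c86ee523d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference; pick what your hardware supports
    device_map="auto",           # requires the `accelerate` package
)

# Attach the PEFT adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```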

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.3
  • PyTorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3
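
The sketch below is a quick way to confirm a local environment matches the versions listed above; it is not part of the original card. The training run used a torch build compiled against CUDA 12.4 (the +cu124 suffix), which the check deliberately ignores.

```python
# Environment check against the pinned versions from this card (a sketch).
import datasets
import peft
import tokenizers
import torch
import transformers

expected = {
    "peft": "0.13.2",
    "transformers": "4.46.3",
    "torch": "2.5.1",  # card lists 2.5.1+cu124; the build suffix is dropped below
    "datasets": "3.1.0",
    "tokenizers": "0.20.3",
}
actual = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__.split("+")[0],  # strip local build tag like +cu124
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}

for name, want in expected.items():
    have = actual[name]
    status = "OK" if have == want else f"MISMATCH (installed {have})"
    print(f"{name}=={want}: {status}")
```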

Model tree

This model is published as philip-hightech/3f1295f1-18df-4a54-ab83-e61c86ee523d, an adapter for VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct.