# Uploaded model
- Compute sponsored by: Arrow ECS Denmark and Nvidia through the Danish Data Science Community
- Developed by: ThatsGroes
- License: apache-2.0
- Finetuned from model: AI-Sweden-Models/Llama-3-8B-instruct
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
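A minimal sketch of how a run like this is typically set up with Unsloth and TRL. The dataset id, sequence length, LoRA rank, and batch settings below are illustrative assumptions, not values from this run; only the base model name and the epoch count come from this card:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model in 4-bit with Unsloth (max_seq_length is an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AI-Sweden-Models/Llama-3-8B-instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Hypothetical dataset id; the card does not state the exact dataset used.
dataset = load_dataset("username/danish-instruct-data", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=4,  # matches the epoch count reported below
        output_dir="outputs",
    ),
)
trainer.train()
```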
Final training statistics:

| Metric | Value |
| --- | --- |
| Train runtime | 215,438.65 s (≈ 59.8 h) |
| Train samples per second | 3.141 |
| Train steps per second | 0.393 |
| Final train loss | 1.0352 |
| Epochs | 4.0 |
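These throughput numbers also imply roughly how much data the run processed; a quick back-of-the-envelope check (pure arithmetic on the values above):

```python
runtime_s = 215438.6486
samples_per_s = 3.141
epochs = 4.0

total_samples = runtime_s * samples_per_s  # ≈ 676,700 samples processed in total
per_epoch = total_samples / epochs         # ≈ 169,200 samples per epoch
hours = runtime_s / 3600                   # ≈ 59.8 hours wall-clock
print(f"{total_samples:,.0f} total, {per_epoch:,.0f}/epoch, {hours:.1f} h")
```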
Energy consumption at the end of training, as logged by codecarbon:

```
[codecarbon INFO @ 07:52:43] Energy consumed for RAM : 11.292402 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 07:52:43] Energy consumed for all GPUs : 17.520012 kWh. Total GPU Power : 245.77458591976836 W
[codecarbon INFO @ 07:52:43] Energy consumed for all CPUs : 2.543341 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 07:52:43] 31.355754 kWh of electricity used since the beginning.
```
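For reference, a minimal sketch of how codecarbon typically produces logs like these: the tracker wraps the training call and periodically logs per-component (RAM/GPU/CPU) consumption. The tracker calls follow codecarbon's public API; the training call is a placeholder:

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
try:
    trainer.train()  # placeholder for the actual training call
finally:
    emissions = tracker.stop()  # returns estimated emissions in kg CO2-eq
```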
We ended up using 65.56 GB of GPU memory (82.84%), of which 49.83 GB (62.97%) was used for LoRA.
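Figures in this form are what the standard Unsloth notebook cells report via torch.cuda; a sketch of that measurement, assuming a single GPU at index 0:

```python
import torch

gpu_stats = torch.cuda.get_device_properties(0)
max_memory = gpu_stats.total_memory / 1024**3  # total VRAM in GB

# Snapshot reserved memory after model load, before training starts.
start_gpu_memory = torch.cuda.max_memory_reserved() / 1024**3

# ... training happens here ...

# Peak reserved memory after training; the difference is attributed to LoRA.
used_memory = torch.cuda.max_memory_reserved() / 1024**3
used_for_lora = used_memory - start_gpu_memory
used_percentage = used_memory / max_memory * 100
lora_percentage = used_for_lora / max_memory * 100
print(f"{used_memory:.2f} GB ({used_percentage:.2f}%), of which "
      f"{used_for_lora:.2f} GB ({lora_percentage:.2f}%) was used for LoRA.")
```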
## Model tree for ThatsGroes/Llama-3.1-8B-Instruct-SkoleGPT-DaSlimOrca-4e

- Base model: meta-llama/Meta-Llama-3-8B
  - Finetuned: AI-Sweden-Models/Llama-3-8B
    - Finetuned: AI-Sweden-Models/Llama-3-8B-instruct
      - Finetuned: this model
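For completeness, a minimal inference sketch with transformers. The model id is taken from this card; the chat-template call assumes the tokenizer ships the Llama-3 chat template, and the prompt and generation settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ThatsGroes/Llama-3.1-8B-Instruct-SkoleGPT-DaSlimOrca-4e"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hvad er fotosyntese?"}]  # Danish prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```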