---
base_model: AI-Sweden-Models/Llama-3-8B-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- kobprof/skolegpt-instruct
- Mabeck/Danish-SlimOrca
---

# Uploaded model

- **Compute sponsored by:** Arrow ECS Denmark and Nvidia through Danish Data Science Community
- **Developed by:** ThatsGroes
- **License:** apache-2.0
- **Finetuned from model:** AI-Sweden-Models/Llama-3-8B-instruct

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

Final training statistics:

```
{'train_runtime': 215438.6486, 'train_samples_per_second': 3.141, 'train_steps_per_second': 0.393, 'train_loss': 1.035243785800245, 'epoch': 4.0}
```

Energy consumption as reported by CodeCarbon:

```
[codecarbon INFO @ 07:52:43] Energy consumed for RAM : 11.292402 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 07:52:43] Energy consumed for all GPUs : 17.520012 kWh. Total GPU Power : 245.77458591976836 W
[codecarbon INFO @ 07:52:43] Energy consumed for all CPUs : 2.543341 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 07:52:43] 31.355754 kWh of electricity used since the beginning.
```

We ended up using 65.56 GB of GPU memory (82.84% of capacity), of which 49.83 GB (62.97%) was used for LoRA training.
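For reference, here is a sketch of the typical Unsloth + TRL SFT setup this card describes. Everything not stated on the card is an assumption: the LoRA rank, sequence length, quantization, batch size, learning rate, and the dataset column names (`question`/`response`); only the base model, the datasets, and the epoch count (4) come from the card.

```python
# Illustrative Unsloth + TRL SFT setup; values marked "assumption" are
# placeholders, not the settings used for the run reported on this card.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="AI-Sweden-Models/Llama-3-8B-instruct",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,    # assumption
)

# Attach LoRA adapters. Rank and target modules are common defaults (assumption).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

def format_example(example):
    # Render one instruction pair with the Llama-3 chat template. The column
    # names are assumptions; Mabeck/Danish-SlimOrca would be mapped the same
    # way and concatenated with this dataset.
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = load_dataset("kobprof/skolegpt-instruct", split="train").map(format_example)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # assumption
        gradient_accumulation_steps=4,   # assumption
        num_train_epochs=4,              # matches the reported epoch count
        learning_rate=2e-4,              # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```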
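The energy figures above are CodeCarbon's periodic log output. A minimal sketch of how such tracking wraps a training run, reusing the `trainer` from the sketch above:

```python
# Wrap the training run in a CodeCarbon EmissionsTracker; while tracking, it
# periodically logs RAM/GPU/CPU energy lines like the ones quoted above.
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
try:
    trainer.train()  # the workload being measured
finally:
    emissions_kg = tracker.stop()  # returns estimated emissions in kg CO2eq
    print(f"Estimated emissions: {emissions_kg:.3f} kg CO2eq")
```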
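To query the fine-tuned model, a minimal inference sketch with plain `transformers`; the repo id below is a placeholder for wherever this model is published:

```python
# Minimal chat inference with transformers. "ThatsGroes/model-id" is a
# placeholder, not the actual repo id of this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ThatsGroes/model-id"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Danish prompt, matching the Danish instruction datasets listed above.
messages = [{"role": "user", "content": "Hvad er hovedstaden i Danmark?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```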