Trained on data covering all six of the base model's languages, so it should hopefully be useful for each of them, though the quality of the datasets likely varies a lot between languages.

Uses the ChatML prompt format, as usual.
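A minimal sketch of what a ChatML prompt looks like for this model. The turn markers are the standard ChatML ones; the helper function and role names are illustrative, not part of the card:

```python
def chatml_prompt(messages):
    """Format a list of {"role", "content"} dicts into a ChatML prompt string.

    Each turn is wrapped in <|im_start|>role ... <|im_end|> markers, and the
    prompt ends with an open assistant turn for the model to complete.
    """
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    prompt += "<|im_start|>assistant\n"
    return prompt

# Example: a single user turn.
prompt = chatml_prompt([{"role": "user", "content": "Hei!"}])
print(prompt)
# <|im_start|>user
# Hei!<|im_end|>
# <|im_start|>assistant
```

If the tokenizer ships with a chat template, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` would produce the same structure without hand-rolling the string.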

Merged version: mpasila/Viking-SlimInstruct-V1-7B

Uses the following datasets:

  • saillab/alpaca-icelandic-cleaned
  • kobprof/skolegpt-instruct
  • tollefj/nor-instruct-cleaned
  • skvarre/sv-instruct-v1
  • Gryphe/Sonnet3.5-SlimOrcaDedupCleaned-20k
  • LumiOpen/instruction-collection-fin
  • neph1/Alpaca-Lora-GPT4-Swedish-Refined

Uploaded Viking-SlimInstruct-LoRA-V1-7B model

  • Developed by: mpasila
  • License: apache-2.0
  • Finetuned from model: LumiOpen/Viking-7B

This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.
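Since this is a LoRA adapter rather than a full model, one way to use it is to load it on top of the base model. A hedged sketch using PEFT (the helper function and lazy imports are my own; only the two repository IDs come from the card, and `transformers` plus `peft` are assumed to be installed):

```python
# Model IDs taken from this card.
BASE_MODEL = "LumiOpen/Viking-7B"
ADAPTER = "mpasila/Viking-SlimInstruct-LoRA-V1-7B"

def load_model():
    """Load the base model and attach the LoRA adapter with PEFT.

    Imports are done lazily so the sketch can be read without the
    libraries installed; the actual call downloads the weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    model = PeftModel.from_pretrained(base, ADAPTER)
    return tokenizer, model
```

Alternatively, the merged repository mentioned above can be loaded directly with `AutoModelForCausalLM.from_pretrained` and no PEFT step.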
