
Merged with the following mergekit configuration:


```yaml
slices:
  - sources:
      - model: athirdpath/Llama-3.1-Instruct_NSFW-pretrained_e1-plus_reddit
        layer_range: [0, 23]
  - sources:
      - model: athirdpath/Llama-3.1-Techne-RP-8b-v1
        layer_range: [9, 31]
merge_method: passthrough
dtype: float16
tokenizer_source: athirdpath/Llama-3.1-Techne-RP-8b-v1
```


The merged model was then pretrained for 1 epoch on the Iambe dataset as an 11B model.
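The passthrough merge stacks 23 layers from the first model and 22 layers from the second, which is where the ~11B size comes from. A rough sanity check of the parameter count, assuming standard Llama-3.1-8B dimensions (hidden size 4096, MLP intermediate 14336, 128256-token vocab, GQA with 8 KV heads of dim 128) — these dimensions are not stated on this card, so treat this as a sketch:

```python
# Approximate parameter count for the 45-layer passthrough merge,
# assuming standard Llama-3.1-8B dimensions (not taken from this card).
hidden = 4096     # hidden size
inter = 14336     # MLP intermediate size
vocab = 128256    # vocabulary size
kv_dim = 1024     # 8 KV heads x 128 head dim (GQA)

# Per-layer parameters: attention (q/o full-width, k/v GQA-narrowed),
# gated MLP (gate + up + down), and two RMSNorm weights.
attn = 2 * hidden * hidden + 2 * hidden * kv_dim
mlp = 3 * hidden * inter
per_layer = attn + mlp + 2 * hidden

# mergekit layer_range is half-open: [0, 23] -> 23 layers, [9, 31] -> 22.
n_layers = (23 - 0) + (31 - 9)  # 45 layers after the merge

# Plus input embeddings, untied LM head, and the final norm.
total = n_layers * per_layer + 2 * vocab * hidden + hidden
print(f"{total / 1e9:.1f}B")  # -> 10.9B, matching the reported model size
```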

Safetensors · Model size: 10.9B params · Tensor type: BF16

Model tree for athirdpath/Llama-3.1-11b-pretrained

Quantizations: 2 models