---
base_model:
- gradientai/Llama-3-8B-Instruct-262k
- Nitral-AI/Echidna-7b-128k
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:
* [gradientai/Llama-3-8B-Instruct-262k](https://huggingface.co./gradientai/Llama-3-8B-Instruct-262k)
* [Nitral-AI/Echidna-7b-128k](https://huggingface.co./Nitral-AI/Echidna-7b-128k)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Nitral-AI/Echidna-7b-128k
    parameters:
      weight: 1.0
  - model: gradientai/Llama-3-8B-Instruct-262k
    parameters:
      weight: 0.1
merge_method: linear
dtype: float16
name: echidna-linear
```
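For context, the linear merge method amounts to a weighted average of corresponding parameter tensors across the listed models. Below is a minimal sketch of that idea in PyTorch under the weights from the config above; it is an illustration, not mergekit's actual implementation, and the state-dict variables are hypothetical.

```python
# Minimal sketch of a linear (weighted-average) merge.
# Assumes the models share an architecture and parameter names;
# illustrative only, not mergekit's implementation.
import torch

def linear_merge(state_dicts, weights, normalize=True):
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        acc = torch.zeros_like(state_dicts[0][name], dtype=torch.float32)
        for sd, w in zip(state_dicts, weights):
            acc += w * sd[name].to(torch.float32)
        if normalize:
            acc /= total
        merged[name] = acc.to(torch.float16)  # dtype: float16, as in the config
    return merged

# Usage with the weights from the YAML above (hypothetical state dicts):
# merged = linear_merge([echidna_sd, llama_262k_sd], weights=[1.0, 0.1])
```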
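To reproduce the merge, save the YAML above to a file (for example `echidna-linear.yml`, a name chosen here for illustration) and run mergekit's `mergekit-yaml` command on it. The resulting checkpoint can then be loaded with transformers; the sketch below assumes the merged weights were written to a local `./echidna-linear` directory.

```python
# Loading the merged model with transformers; paths are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_path = "./echidna-linear"  # assumed mergekit output directory
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```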