Do you use transformers or mergekit to merge models?


If you use mergekit, can you tell me how to use it?

I use mergekit! If you'd like some examples, you can refer to their GitHub, or I could also share some merge templates that I use :)
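To actually run a merge: mergekit takes a YAML config describing the merge. Here's a minimal sketch using its Python API (the config path and output directory are placeholders, and option names may vary a bit between versions):

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse a merge template like the ones shared below (path is a placeholder)
with open("merge-config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write a standard HF checkpoint to ./merged-model
run_merge(
    config,
    "./merged-model",
    options=MergeOptions(
        cuda=False,           # set True to do the merge on GPU
        copy_tokenizer=True,  # copy tokenizer files into the output
    ),
)

The mergekit-yaml command-line entry point does the same thing in one call.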

I hope you can share some of the merge templates you're using. I'm working with Qwen 7B.

Here are a few that I've used or seen people use:

models:
  - model: qingy2024/NaturalLM3-8B-Instruct-v0.1
    parameters:
      weight: 1
      density: 1
  - model: NousResearch/Hermes-3-Llama-3.1-8B
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: meta-llama/Meta-Llama-3.1-8B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
tokenizer_source: qingy2024/NaturalLM3-8B-Instruct-v0.1
dtype: bfloat16
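For intuition, ties builds a task vector (fine-tune minus base) for each model, trims each vector to its largest-magnitude entries according to density, elects a sign per parameter, and averages only the deltas that agree with that sign. A rough per-tensor sketch, assuming equal weights as in the config above (this is not mergekit's actual implementation):

import torch

def ties_merge(base, tuned, density=1.0):
    # Task vectors: how far each fine-tune moved from the base weights
    deltas = [t - base for t in tuned]

    # Trim: keep only the top `density` fraction of entries by magnitude
    if density < 1.0:
        trimmed = []
        for d in deltas:
            k = max(int(d.numel() * density), 1)
            thresh = d.abs().flatten().kthvalue(d.numel() - k + 1).values
            trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
        deltas = trimmed

    # Elect a sign per parameter, then average only the deltas that agree
    stacked = torch.stack(deltas)
    sign = stacked.sum(dim=0).sign()
    agree = (stacked.sign() == sign).float()
    return base + (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1.0)

With weight: 1 and density: 1 as above, nothing gets trimmed and the merge reduces to sign-consensus averaging of the full task vectors. Here's a single-model ties merge onto the Qwen2.5-14B base: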

models:
  - model: arcee-ai/Virtuoso-Small
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-14B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
dtype: float16

models:
  - model: Qwen/Qwen2.5-Math-7B-Instruct
    parameters:
      weight: 1
      density: 1
  - model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-7B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
tokenizer_source: Qwen/Qwen2.5-7B-Instruct
dtype: bfloat16
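The merged output is a normal Hugging Face checkpoint, so you can sanity-check it with transformers (the path is whatever output directory you passed to mergekit):

from transformers import AutoModelForCausalLM, AutoTokenizer

# "./merged-model" is a placeholder for your mergekit output directory
tokenizer = AutoTokenizer.from_pretrained("./merged-model")
model = AutoModelForCausalLM.from_pretrained("./merged-model", torch_dtype="auto")

inputs = tokenizer("What is 17 * 23?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

And here's a more elaborate dare_ties template with per-model weights and densities: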

models:
  - model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.22        # Boosted slightly to improve general task performance
      density: 0.62       # Prioritize generalist adaptability
  - model: allknowingroger/QwenSlerp6-14B
    parameters:
      weight: 0.18
      density: 0.59       # Slight increase to enhance contextual reasoning (tinyHellaswag)
  - model: CultriX/Qwen2.5-14B-Wernickev3
    parameters:
      weight: 0.16
      density: 0.56       # Minor increase to stabilize GPQA and MUSR performance
  - model: CultriX/Qwen2.5-14B-Emergedv3
    parameters:
      weight: 0.15        # Increase weight for domain-specific expertise
      density: 0.55
  - model: VAGOsolutions/SauerkrautLM-v2-14b-DPO
    parameters:
      weight: 0.12
      density: 0.56       # Enhance factual reasoning and IFEval contributions
  - model: CultriX/Qwen2.5-14B-Unity
    parameters:
      weight: 0.10
      density: 0.53
  - model: qingy2019/Qwen2.5-Math-14B-Instruct
    parameters:
      weight: 0.10
      density: 0.51       # Retain focus on MATH and advanced reasoning tasks

merge_method: dare_ties
base_model: CultriX/SeQwence-14Bv1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
tokenizer_source: Qwen/Qwen2.5-14B-Instruct

adaptive_merge_parameters:
  task_weights:
    IFEval: 1.5           # Strengthened for better instruction-following
    BBH: 1.3
    MATH: 1.6             # Emphasize advanced reasoning and problem-solving
    GPQA: 1.4             # Improve factual recall and logical QA tasks
    MUSR: 1.5             # Strengthened multi-step reasoning capabilities
    MMLU-PRO: 1.3         # Slight boost for domain-specific multitask knowledge
  smoothing_factor: 0.19   # Refined for smoother blending of task strengths
gradient_clipping: 0.88    # Tightened slightly for precise parameter contribution
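dare_ties runs a DARE step on each task vector before the ties sign election: it randomly drops a 1 - density fraction of the delta entries and rescales the survivors by 1/density so the expected update is unchanged, which is why the densities above sit around 0.5-0.6 rather than 1. A toy sketch of that pruning step (again, not mergekit's actual code):

import torch

def dare_prune(delta, density):
    # Randomly keep a `density` fraction of the task vector and rescale
    # the survivors so the delta is unchanged in expectation
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

One caveat: as far as I know, adaptive_merge_parameters, task_weights, smoothing_factor, and gradient_clipping aren't part of mergekit's config schema, so they may be ignored or rejected; the merge itself is governed by the models, weights, and densities.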

Thank you so much!
