Fine-Tuning 'retrieval.query' and 'retrieval.passage' LoRA Adapters Together?

#31
by miweru - opened

Hi everyone,

I am trying to fine-tune the LoRA adapters (`retrieval.query` and `retrieval.passage`) together on a new task. The model apparently integrates these adapters as additional parameter layers inside the model weights (e.g., `lora_A` and `lora_B` under the `parametrizations` module, indexed per adapter).

The LoRA weights are embedded in the following way:

  • Each layer has `lora_A` and `lora_B` parameters injected into the model's named parameters (e.g., `roberta.encoder.layers.X.mlp.fc1.parametrizations.weight.0.lora_A`); a short inspection snippet is below.
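
For reference, this is roughly how I'm locating the LoRA parameters. A minimal sketch; the model id is an assumption on my part, so adjust it to the checkpoint you're actually using:

```python
from transformers import AutoModel

MODEL_ID = "jinaai/jina-embeddings-v3"  # assumption: replace with the actual checkpoint

# Load with the custom modeling code so the LoRA parametrizations are attached.
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)

# Every LoRA tensor shows up as a regular named parameter; the base weights
# keep their usual names without the "lora_" suffix.
for name, param in model.named_parameters():
    if "lora_A" in name or "lora_B" in name:
        print(name, tuple(param.shape))
```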

What I'm trying to achieve:

  • Fine-tune both the `retrieval.query` (query embeddings) and `retrieval.passage` (passage embeddings) adapters simultaneously.
  • Set up training so that only the task-specific LoRA weights (`lora_A`, `lora_B`) are updated, not the base weights, or alternatively merge the adapters into the base weights and unload them (see the sketch after this list).
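
For the freezing part, this is what I currently have in mind. A minimal sketch that assumes the LoRA tensors can be identified purely by name; the commented-out alternative uses `torch.nn.utils.parametrize` to fold whatever the parametrization currently applies into the base weights:

```python
import torch.nn.utils.parametrize as parametrize

# Freeze everything, then re-enable gradients only for the LoRA A/B matrices.
for name, param in model.named_parameters():
    param.requires_grad = ("lora_A" in name) or ("lora_B" in name)

print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable params")

# Alternative ("merge and unload"): fold the parametrization into the stored
# weight and remove the LoRA machinery entirely. Note that this bakes in only
# the adapter currently selected by the custom modeling code, so it is not a
# way to keep the two adapters separate.
# for module in model.modules():
#     if parametrize.is_parametrized(module, "weight"):
#         parametrize.remove_parametrizations(module, "weight", leave_parametrized=True)
```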

Challenges I'm facing:

  • Isolating the correct LoRA weights for fine-tuning without affecting the base model.
  • Ensuring proper task-specific fine-tuning for both adapters in a single training process (see the training-loop sketch below).
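
On the second point, this is the rough shape of the joint training step I'm imagining, continuing from the snippets above. `embed_queries`, `embed_passages`, and `train_loader` are hypothetical placeholders I'd still need to wire up to the model's adapter-selection mechanism, and in-batch InfoNCE is just one possible loss:

```python
import torch
import torch.nn.functional as F

def info_nce(q, p, temperature=0.05):
    """In-batch contrastive loss: query i should score highest with passage i."""
    q = F.normalize(q, dim=-1)
    p = F.normalize(p, dim=-1)
    logits = q @ p.T / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

# Only the LoRA tensors were left with requires_grad=True above, so the
# optimizer only ever updates the adapters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

model.train()
for queries, passages in train_loader:  # each a list of strings per batch
    # Placeholders: a normal differentiable forward pass with the
    # 'retrieval.query' resp. 'retrieval.passage' adapter selected,
    # mean-pooled to one embedding per text.
    q_emb = embed_queries(model, queries)
    p_emb = embed_passages(model, passages)

    loss = info_nce(q_emb, p_emb)
    loss.backward()      # gradients flow only into the unfrozen LoRA A/B tensors
    optimizer.step()
    optimizer.zero_grad()
```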

Has anyone successfully fine-tuned these adapters together? If so, I would appreciate any advice, code snippets, or best practices to help set up the fine-tuning process.

Sorry if my question is a bit naive; I haven't worked with fine-tuning BERT-style models with LoRA before.

Thanks for your help!
