Fine-Tuning 'retrieval.query' and 'retrieval.passage' LoRA Adapters Together?
Hi everyone,
I am trying to fine-tune the LoRA adapters (`retrieval.query` and `retrieval.passage`) together on a new task. The model apparently integrates these adapters as additional parameter layers inside the model weights (e.g., `lora_A` and `lora_B` under the `parametrizations` module, indexed per adapter).
The LoRA weights are embedded in the following way:
- Each layer has `lora_A` and `lora_B` parameters injected into the model's named parameters (e.g., `roberta.encoder.layers.X.mlp.fc1.parametrizations.weight.0.lora_A`) (see the inspection snippet below).
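For reference, this is how I am currently isolating the LoRA tensors and freezing everything else. This is just a minimal sketch; the `"lora_"` substring filter is my own heuristic based on the parameter names above, not an official API:

```python
import torch
from transformers import AutoModel

# Load the model with its custom (remote) code, which injects the LoRA
# parametrizations for the different task adapters.
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v3", trust_remote_code=True)

# List the injected LoRA parameters, e.g.
# roberta.encoder.layers.X.mlp.fc1.parametrizations.weight.0.lora_A
lora_param_names = [n for n, _ in model.named_parameters() if "lora_" in n]
print(f"{len(lora_param_names)} LoRA parameter tensors found")

# Make sure only the LoRA tensors receive gradients; freeze the base weights.
for name, param in model.named_parameters():
    param.requires_grad = "lora_" in name

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable} / {total}")
```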
What I'm trying to achieve:
- Fine-tune both `retrieval.query` (query embeddings) and `retrieval.passage` (passage embeddings) adapters simultaneously.
- Set up training such that only the task-specific LoRA weights (`lora_A`, `lora_B`) are updated, not the base weights, or merge and unload them afterwards (a rough sketch of what I have in mind follows the challenges below).
Challenges I'm facing:
- Isolating the correct LoRA weights for fine-tuning without affecting the base model.
- Ensuring proper task-specific fine-tuning for both adapters in a single training process.
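Here is the rough training-step sketch I have in mind. To be clear about my assumptions: the `adapter_mask` keyword and the indexing of the forward output come from my reading of the remote modeling code and may be wrong, `config.lora_adaptations` is where I found the task names, and the in-batch-negatives loss and temperature are just my own choices:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "jinaai/jina-embeddings-v3"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

# Train only the injected LoRA tensors.
for name, param in model.named_parameters():
    param.requires_grad = "lora_" in name
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)

# Task indices come from the order of the adapters in the config
# (config.lora_adaptations lists the task names).
query_task = model.config.lora_adaptations.index("retrieval.query")
passage_task = model.config.lora_adaptations.index("retrieval.passage")


def embed(texts, task_id):
    """Tokenize and run a forward pass with one specific LoRA adapter.

    ASSUMPTION: the remote modeling code accepts a per-sample `adapter_mask`
    of task indices, the same way its own `encode()` builds one. If that kwarg
    is named differently in the modeling file, this call needs adjusting.
    """
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    adapter_mask = torch.full((len(texts),), task_id, dtype=torch.int32)
    out = model(**batch, adapter_mask=adapter_mask)
    token_embs = out[0]  # last hidden states (assumed output layout)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (token_embs * mask).sum(1) / mask.sum(1)  # mean pooling


# One toy training step with an in-batch-negatives contrastive loss.
queries = ["what is lora?", "how to pool embeddings?"]
passages = ["LoRA adds low-rank adapters.", "Mean pooling averages token states."]

q = F.normalize(embed(queries, query_task), dim=-1)
p = F.normalize(embed(passages, passage_task), dim=-1)
logits = q @ p.T / 0.05  # temperature 0.05 is my own choice
loss = F.cross_entropy(logits, torch.arange(len(queries)))
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Does this look like a reasonable way to drive both adapters in one training process?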
Has anyone successfully fine-tuned these adapters together? If so, I would appreciate any advice, code snippets, or best practices to help set up the fine-tuning process.
Sorry if my question is a bit naive; I have not yet worked with fine-tuning BERT models with LoRA.
Thanks for your help!
There is a config parameter that controls whether the main (non-LoRA) parameters are trainable, so this should be easy to handle:
https://huggingface.co./jinaai/jina-embeddings-v3/blob/main/config.json#L27
By default only the LoRA adapters are trained, and if you use 2 adapters, gradients are only calculated for those two. So I think it is relatively straightforward. However, we use our own custom training code, so I haven't tested how well it works with the fine-tuning code provided by SentenceTransformers etc., especially when you plan to use multiple different task attributes (LoRA adapters) during training.
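For example, something along these lines (a quick, untested sketch; I am assuming the flag at that line is called `lora_main_params_trainable` and that the remote code freezes the main weights when it is false, so double-check both against the config and the modeling file):

```python
from transformers import AutoConfig, AutoModel

model_name = "jinaai/jina-embeddings-v3"

# Inspect the LoRA-related settings shipped with the model.
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
print(config.lora_adaptations)            # available task adapters (retrieval.query, retrieval.passage, ...)
print(config.lora_main_params_trainable)  # whether the main (non-LoRA) weights are trainable

# Keep the base weights frozen (the default) and load the model with this config.
config.lora_main_params_trainable = False
model = AutoModel.from_pretrained(model_name, config=config, trust_remote_code=True)

# Sanity check: with the default setting, only LoRA tensors should require gradients.
non_lora_trainable = [
    name for name, p in model.named_parameters()
    if p.requires_grad and "lora_" not in name
]
print("non-LoRA trainable tensors:", non_lora_trainable)  # expect an empty list
```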
@miweru
Hello, I'm also interested in fine-tuning the query and passage embeddings at the same time with my own dataset. Could I ask whether you have managed to do so, and if yes, how you did it?
Thanks in advance !
Is there any example of how to fine-tune the 'retrieval.query' and 'retrieval.passage' LoRA adapters together?