Adaptive Parameter-Efficient Federated Fine-Tuning on Heterogeneous Devices
Abstract
Federated fine-tuning (FedFT) has been proposed to fine-tune pre-trained language models in a distributed manner. However, two critical challenges hinder efficient FedFT in practical applications: resource constraints and system heterogeneity. Existing works rely on parameter-efficient fine-tuning methods, e.g., low-rank adaptation (LoRA), but suffer from major limitations. Based on the inherent characteristics of FedFT, we observe that placing LoRA layers with higher ranks close to the output reduces resource consumption while achieving comparable fine-tuning performance. We then propose LEGEND, a novel LoRA-based FedFT framework that addresses the difficulty of determining the number of LoRA layers (called LoRA depth) and the rank of each LoRA layer (called rank distribution). We analyze the coupled relationship between LoRA depth and rank distribution, and design an efficient LoRA configuration algorithm for heterogeneous devices, thereby improving fine-tuning efficiency. Extensive experiments are conducted on a physical platform with 80 commercial devices. The results show that LEGEND achieves a speedup of 1.5-2.8× and reduces communication costs by about 42.3% when reaching the target accuracy, compared with state-of-the-art solutions.
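The following is a minimal sketch of the configuration idea described in the abstract, not the authors' LEGEND implementation: LoRA adapters are attached only to the last few transformer blocks (the LoRA depth), and each adapted block receives its own rank (the rank distribution), with larger ranks placed closer to the output. The toy model, module names, and rank values below are illustrative assumptions.

```python
# Hedged sketch: per-layer LoRA configuration with higher ranks near the output.
# All names (LoRALinear, apply_lora_config, Block) are hypothetical, not from the paper.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # the pre-trained weight stays frozen
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


def apply_lora_config(blocks: nn.ModuleList, rank_distribution: list[int]):
    """Attach LoRA to the last len(rank_distribution) blocks (the LoRA depth),
    assigning per-layer ranks so that blocks closer to the output get larger ranks."""
    depth = len(rank_distribution)
    for block, rank in zip(list(blocks)[-depth:], rank_distribution):
        block.proj = LoRALinear(block.proj, rank)


class Block(nn.Module):
    """Toy transformer-like block standing in for a pre-trained backbone layer (assumption)."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        return torch.relu(self.proj(x))


blocks = nn.ModuleList([Block() for _ in range(6)])
# LoRA depth = 3, with higher ranks near the output, e.g. ranks [4, 8, 16].
apply_lora_config(blocks, rank_distribution=[4, 8, 16])

trainable = sum(p.numel() for b in blocks for p in b.parameters() if p.requires_grad)
print(f"trainable LoRA parameters: {trainable}")
```

In a federated setting, only the trainable LoRA parameters of the adapted blocks would be exchanged with the server, which is why a smaller LoRA depth with well-chosen ranks can cut both computation and communication on resource-constrained devices.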
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Sequential Compression Layers for Efficient Federated Learning in Foundational Models (2024)
- Federated Sketching LoRA: On-Device Collaborative Fine-Tuning of Large Language Models (2025)
- Adaptive Rank Allocation for Federated Parameter-Efficient Fine-Tuning of Language Models (2025)
- Efficient Deployment of Large Language Models on Resource-constrained Devices (2025)
- CLoQ: Enhancing Fine-Tuning of Quantized LLMs via Calibrated LoRA Initialization (2025)
- ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers (2024)
- SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation (2025)