Preprint
Article

This version is not peer-reviewed.

Adaptive-PEFT: Dynamic Rank Adjustment for Efficient and Enhanced Large Language Model Fine-Tuning

Submitted: 12 March 2026

Posted: 16 March 2026


Abstract
The substantial computational and memory demands of Large Language Models (LLMs) during fine-tuning are partially addressed by Parameter-Efficient Fine-Tuning (PEFT) methods like LoRA. However, their static low-rank configurations overlook heterogeneous learning sensitivity across layers, leading to suboptimal capacity allocation. We propose Adaptive-PEFT (AP-PEFT), a novel dynamic PEFT framework that introduces a real-time, layer-specific rank adjustment mechanism. This is accomplished via a lightweight module that assesses layer contributions using gradient information, combined with a dynamic rank strategy involving growth and shrink thresholds and a smooth transition for stability. Comprehensive experiments on diverse LLMs (from 3B to 8B parameters) and datasets show AP-PEFT achieves superior task performance and enhanced resource efficiency. AP-PEFT consistently demonstrates competitive or improved metrics in memory usage, compute utilization, latency, throughput, and energy consumption compared to state-of-the-art PEFT baselines and full fine-tuning. This work underscores the importance of dynamic parameter allocation for achieving an optimal balance between performance and efficiency in LLM fine-tuning.
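The abstract does not give implementation details, but the described mechanism, a per-layer rank that grows or shrinks based on a gradient-derived contribution score, with a smooth blend during transitions, can be sketched as follows. All function names, threshold values, and step sizes here are assumptions for illustration, not the authors' actual method.

```python
def adjust_rank(rank, grad_score, grow_thresh=0.7, shrink_thresh=0.2,
                r_min=2, r_max=64, step=2):
    """Return a new LoRA rank for one layer, given its gradient-based
    contribution score normalized to [0, 1] (thresholds are assumed)."""
    if grad_score > grow_thresh:
        # Layer shows high learning sensitivity: allocate more capacity.
        rank = min(rank + step, r_max)
    elif grad_score < shrink_thresh:
        # Layer contributes little: reclaim capacity.
        rank = max(rank - step, r_min)
    return rank


def smooth_transition(old_val, new_val, alpha):
    """Linearly blend old and new adapter parameters during a rank change
    for training stability; alpha ramps from 0 to 1 over a few steps."""
    return (1.0 - alpha) * old_val + alpha * new_val
```

In this sketch, layers whose score stays between the two thresholds keep their current rank, so capacity migrates only toward layers that demonstrably need it.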
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.

Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.
