Document Details


Clip: A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA
Damjan Kalajdzievski (Tenyx)

Abstract: As large language models (LLMs) have become increasingly compute and memory intensive, parameter-efficient fine-tuning (PEFT) methods are now a common strategy to fine-tune LLMs. A popular PEFT method is Low-Rank Adapters (LoRA), which adds trainable low-rank “adapters” to selected layers. Each adapter consists of a low-rank matrix product, multiplicatively scaled by a rank-dependent factor. This scaling factor, which divides adapters by a factor of the rank, results in slowed learning and stunted performance for LoRA with higher-rank adapters. Consequently, the use of LoRA in practice has generally been limited to very low ranks. In this work, we study the impact of the scaling factor on the learning process and prove that LoRA adapters should instead be divided by a factor of the square root of the rank. Modifying LoRA with the appropriate scaling factor, which we call the rank-stabilized LoRA (rsLoRA) method, easily provides for a fine-tuning compute/performance trade-off, where larger ranks can be used to trade off increased computational resources during training for better fine-tuning performance, with no change in inference computing cost.
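The abstract's core change is a single formula: standard LoRA scales each adapter by alpha / r, while rsLoRA scales it by alpha / sqrt(r), so the adapter's contribution does not shrink as the rank r grows. The following is a minimal PyTorch-style sketch of that difference, not the paper's implementation; the class name LoRALinear, the default alpha, and the initialization scheme are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank adapter (sketch).

    Output: base(x) + scale * (x @ A^T @ B^T), where
      scale = alpha / r        for standard LoRA
      scale = alpha / sqrt(r)  for rsLoRA (the paper's proposal)
    """

    def __init__(self, base: nn.Linear, r: int, alpha: float = 16.0,
                 use_rslora: bool = True):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # fine-tune only the adapter weights

        # Low-rank factors; B starts at zero so the adapter initially
        # contributes nothing (init choices here are illustrative).
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.02)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

        # rsLoRA replaces the rank-dependent scale alpha/r with alpha/sqrt(r),
        # which keeps the adapter's output magnitude stable at higher ranks.
        self.scale = alpha / math.sqrt(r) if use_rslora else alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Example: wrap a linear layer with a rank-64 rsLoRA adapter.
layer = LoRALinear(nn.Linear(1024, 1024), r=64, use_rslora=True)
out = layer(torch.randn(2, 1024))
```

Note that only the constant `scale` differs between the two methods, which is why rsLoRA changes training dynamics without adding any inference cost.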
Filename: 2312.03732
Filetype: application/pdf
Size: 876626 bytes
Uploaded On: 2024-06-05
Abstract:
Summary:
Tags:
Notes:
Visible: 1
Status: Parsed
Author: Damjan Kalajdzievski
CreationDate: 2023-12-08T02:01:24+00:00
Creator: LaTeX with hyperref
Keywords:
ModDate: 2023-12-08T02:01:24+00:00
PTEX.Fullbanner: This is pdfTeX, Version 3.141592653-2.6-1.40.25 (TeX Live 2023) kpathsea version 6.3.5
Producer: pdfTeX-1.40.25
Subject:
Title: A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA
Trapped: False
Pages: 12
