What is LoRA
LoRA, or Low-Rank Adaptation, is a technique designed to make fine-tuning Large Language Models (LLMs) much more efficient. In traditional fine-tuning, you have to update all the parameters of an LLM, which can be slow and resource-intensive. With LoRA, instead of changing the entire model, you add a few small, trainable components to the base LLM. These components are trained for your specific task, allowing the model to adapt quickly without retraining everything from scratch.

When to use LoRA
Because LoRA only updates a small part of the model, it's much faster than standard fine-tuning. As a rough rule of thumb, if regular fine-tuning of an LLM on a large dataset would take around 30 minutes, LoRA can often get the job done in 10 minutes or less. This makes it a great choice when you want to adapt a model quickly and efficiently, without needing deep expertise in machine learning. Below is a table comparing when to use LoRA versus full fine-tuning:

Scenario | LoRA | Full Fine-tuning | Real-world Example |
---|---|---|---|
Quick adaptation | ✅ | ❌ | Customizing a chatbot for your company's tone and style |
Limited computational resources | ✅ | ❌ | Small startup fine-tuning on a laptop or basic GPU |
Small to medium datasets | ✅ | ❌ | Training on 1,000-10,000 customer support tickets |
Fast experimentation | ✅ | ❌ | Testing different prompt styles for marketing content |
Domain-specific tasks | ✅ | ⚠️ | Adapting a model for legal document analysis |
Massive datasets | ❌ | ✅ | Training on millions of medical research papers |
Fundamental behavior change | ❌ | ✅ | Teaching a general model to code in a new programming language |
Maximum performance | ⚠️ | ✅ | Building a state-of-the-art translation system |
Budget constraints | ✅ | ❌ | Bootstrapped companies with limited cloud computing budget |
Time-sensitive projects | ✅ | ❌ | Launching a customer service bot in a week |
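To make the efficiency argument concrete, here is a minimal NumPy sketch of the low-rank update LoRA learns. This is an illustration of the general technique only, not Prem Studio's internal implementation; the dimensions and rank are arbitrary example values.

```python
import numpy as np

# Illustrative sketch of the low-rank update behind LoRA.
# The pretrained weight matrix W is frozen; only the two small matrices
# A and B (rank r << d) are trained, and the adapted layer computes
# y = W @ x + B @ (A @ x).
rng = np.random.default_rng(0)
d_out, d_in, r = 512, 512, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
                                            # so training starts from the base model

x = rng.standard_normal(d_in)
y = W @ x + B @ (A @ x)                     # identical to W @ x until B is updated

full_params = W.size           # weights updated by full fine-tuning
lora_params = A.size + B.size  # trainable weights with LoRA
print(f"full fine-tuning: {full_params:,} params, LoRA: {lora_params:,} params")
```

In this example LoRA trains 8,192 parameters instead of 262,144 (about 3%), which is why it is so much faster and cheaper than updating the whole model.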
Using LoRA in Prem Studio
1. Create a New Fine-Tuning Job

Select the fine-tuning type Non Reasoning. Note that LoRA is not supported for Reasoning Fine-Tuning. Once you've configured these settings, click the Create Fine-Tuning Job button.

2. Configure Your LoRA Settings

- Choose the model you want to fine-tune and toggle on the LoRA option.
- Click the Start Experiments button.
- A confirmation dialog will appear asking you to confirm before the experiments start.
Gemma models (specifically Gemma 3 1B and Gemma 3 4B) are not available for LoRA fine-tuning.
For complex tasks, you may need to increase the number of epochs to achieve the best results.