Varga, Kristóf and Hatvani, Péter and Yang, Zijian Győző (2025) Full-parameter fine-tuning vs. LoRA fine-tuning on PULI models. In: Proceedings of the International Conference on Formal Methods and Foundations of Artificial Intelligence. Eszterházy Károly Katolikus Egyetem Líceum Kiadó, Eger, pp. 226-232. ISBN 9789634963035
Full text: fmfai2025_pp226-232.pdf (Published Version, 584 kB)
Abstract
In this study, we compare full-parameter fine-tuning and parameter-efficient LoRA on various Hungarian PULI large language models, evaluating their performance across six Hungarian language understanding benchmarks. While full-parameter fine-tuning updates all model weights and requires substantial computational resources, LoRA adapts a smaller subset of parameters, enabling more efficient training. Our experiments on the monolingual PULI 3SX and the multilingual LlumiX and LlumiX-Llama-3.1 models reveal that LoRA consistently matches or surpasses full fine-tuning on most tasks, particularly when applied to larger models. Notably, LlumiX-Llama-3.1 with LoRA achieves state-of-the-art results on five out of six benchmarks while significantly reducing resource demands. These findings highlight LoRA’s potential as a scalable and effective fine-tuning method for Hungarian large language models.
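To illustrate the LoRA setup described in the abstract, the sketch below shows how a PULI checkpoint might be wrapped with low-rank adapters using the Hugging Face PEFT library. The paper does not publish its exact configuration, so the model identifier, rank, scaling factor, and target modules here are assumptions for illustration only, not the authors' settings.

```python
# Minimal LoRA sketch (assumed hyperparameters, not the paper's exact setup).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "NYTK/PULI-LlumiX-32K"  # assumed PULI checkpoint; substitute as needed
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA freezes the base weights W and learns low-rank matrices A and B so that
# the effective weight becomes W + (alpha / r) * B @ A; only A and B are trained.
lora_cfg = LoraConfig(
    r=16,                                  # rank of the low-rank update (assumed)
    lora_alpha=32,                         # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

With a configuration like this, only the adapter matrices receive gradients, which is what lets LoRA approach or exceed full fine-tuning quality at a fraction of the memory and compute cost reported in the paper.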
| Item Type: | Book Section |
|---|---|
| Additional Information: | International Conference on Formal Methods and Foundations of Artificial Intelligence, Eger, June 5–7, 2025 |
| Uncontrolled Keywords: | LoRA, PULI models, HuLU benchmarks, fine-tuning, parameter-efficient adaptation |
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Depositing User: | Tibor Gál |
| Date Deposited: | 30 Oct 2025 13:18 |
| Last Modified: | 30 Oct 2025 14:44 |
| URI: | https://real.mtak.hu/id/eprint/227760 |