ZO2: Scalable Zeroth-Order Fine-Tuning for Extremely Large Language Models with Limited GPU Memory
Published in NeurIPS workshop, 2024; arXiv preprint, 2025
This paper presents ZO2, a framework that combines zeroth-order optimization with CPU offloading to fine-tune extremely large language models on hardware with limited GPU memory. Because zeroth-order methods estimate gradients from forward passes alone, no activation or gradient state needs to be kept on the GPU, and model parameters can be streamed between CPU and GPU block by block.
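For context, below is a minimal sketch of the two-point zeroth-order (SPSA-style) estimator that this line of work builds on, using the seed-replay trick popularized by MeZO so the random perturbation direction never has to be stored. The function and parameter names (`zo_step`, `loss_fn`, `eps`, `lr`) are illustrative, not the paper's API, and the sketch omits ZO2's CPU-GPU offloading scheduling.

```python
import torch

def zo_step(model, loss_fn, eps=1e-3, lr=1e-6):
    """One zeroth-order update: estimate a directional gradient from two
    forward passes, with no backpropagation state kept in memory."""
    seed = torch.randint(0, 2**31 - 1, (1,)).item()

    def perturb(scale):
        # Regenerate the same random direction z from the seed instead of
        # storing it, so the memory overhead is effectively zero.
        torch.manual_seed(seed)
        for p in model.parameters():
            z = torch.randn_like(p)
            p.data.add_(scale * eps * z)

    perturb(+1)                       # evaluate at theta + eps * z
    loss_plus = loss_fn(model).item()
    perturb(-2)                       # evaluate at theta - eps * z
    loss_minus = loss_fn(model).item()
    perturb(+1)                       # restore theta

    proj_grad = (loss_plus - loss_minus) / (2 * eps)

    # SGD step along the replayed direction z.
    torch.manual_seed(seed)
    for p in model.parameters():
        z = torch.randn_like(p)
        p.data.add_(-lr * proj_grad * z)
    return proj_grad
```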
Recommended citation: Liangyu Wang, Jie Ren, Hang Xu, Junxiao Wang, Huanyi Xie, David E. Keyes, and Di Wang. (2025). "ZO2: Scalable Zeroth-Order Fine-Tuning for Extremely Large Language Models with Limited GPU Memory." arXiv preprint arXiv:2503.12668.
Download Paper