Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Fine-tuning 175B parameter LLMs with only 18GB GPU memory
WiP: Towards Light Adaptation of Large Language Models For Personal Hardware
Published in MobiSys workshop 2024, 2024
This paper explores efficient adaptation approaches for running large language models on personal hardware with limited resources.
Recommended citation: Liangyu Wang, Junxiao Wang, and Di Wang. (2024). "WiP: Towards Light Adaptation of Large Language Models For Personal Hardware." MobiSys workshop 2024.
Download Paper
FlashDP: Memory-Efficient and High-Throughput DP-SGD Training for Large Language Models
Published in NeurIPS workshop 2024, 2024
This paper presents a memory-efficient and high-throughput approach to DP-SGD training, which provides differential privacy guarantees for large language models (a minimal sketch of the underlying DP-SGD step follows this entry).
Recommended citation: Liangyu Wang, Junxiao Wang, Jie Ren, Zihang Xiang, David E. Keyes, and Di Wang. (2024). "FlashDP: Memory-Efficient and High-Throughput DP-SGD Training for Large Language Models." NeurIPS workshop 2024.
Download Paper
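DP-SGD, the algorithm targeted by FlashDP, clips each example's gradient to a norm bound and adds calibrated Gaussian noise before the parameter update. The sketch below illustrates only that core step in plain PyTorch; the function name `dp_sgd_step`, the naive per-example loop, and the default hyperparameters are assumptions for the example, not the FlashDP implementation.

```python
# Illustrative DP-SGD step (not the FlashDP implementation): per-example
# gradients are clipped to norm C, summed, and Gaussian noise scaled by
# noise_multiplier * C is added before the averaged update.
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=1e-3, clip_norm=1.0, noise_multiplier=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):  # naive per-example gradient loop (for clarity, not speed)
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-6), max=1.0)  # clip to norm <= C
        for acc, g in zip(summed, grads):
            acc.add_(g * scale)

    with torch.no_grad():
        for p, acc in zip(params, summed):
            noise = noise_multiplier * clip_norm * torch.randn_like(acc)
            p.add_(-lr * (acc + noise) / len(xs))  # noisy, averaged gradient step
```

Materializing one gradient per example is what makes naive DP-SGD memory-hungry for large models; avoiding that cost is exactly the kind of bottleneck a memory-efficient, high-throughput DP-SGD system has to address.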
ZO2: Scalable Zeroth-Order Fine-Tuning for Extremely Large Language Models with Limited GPU Memory
Published in NeurIPS workshop, 2024; arXiv preprint, 2025
This paper presents a novel framework for efficient zeroth-order fine-tuning of extremely large language models with limited GPU memory (a minimal sketch of zeroth-order gradient estimation follows this entry).
Recommended citation: Liangyu Wang, Jie Ren, Hang Xu, Junxiao Wang, Huanyi Xie, David E. Keyes, and Di Wang. (2025). "ZO2: Scalable Zeroth-Order Fine-Tuning for Extremely Large Language Models with Limited GPU Memory." arXiv preprint arXiv:2503.12668.
Download Paper
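Zeroth-order fine-tuning estimates gradients from forward passes alone, so no backward pass and no stored activations are needed. Below is a minimal MeZO-style SPSA sketch of that general idea; the `zo_step` function, the `loss_fn(model, batch)` signature, and the re-seeding trick are assumptions for illustration (and assume all trainable parameters sit on one device), not ZO2's actual scheme.

```python
# Illustrative SPSA-style zeroth-order step (not the ZO2 implementation):
# two forward passes with antithetic perturbations give a scalar gradient
# estimate; re-seeding regenerates the random direction so it is never stored.
import torch

def zo_step(model, loss_fn, batch, lr=1e-6, eps=1e-3, seed=0):
    params = [p for p in model.parameters() if p.requires_grad]

    def perturb(scale):
        # Regenerate the same direction z from the seed and shift weights in place.
        gen = torch.Generator(device=params[0].device).manual_seed(seed)
        for p in params:
            z = torch.randn(p.shape, generator=gen, device=p.device, dtype=p.dtype)
            p.data.add_(scale * eps * z)

    with torch.no_grad():
        perturb(+1); loss_plus = loss_fn(model, batch)    # L(theta + eps*z)
        perturb(-2); loss_minus = loss_fn(model, batch)   # L(theta - eps*z)
        perturb(+1)                                       # restore original weights
        grad_est = (loss_plus - loss_minus) / (2 * eps)   # scalar directional estimate

        # Apply the update along the same regenerated direction z.
        gen = torch.Generator(device=params[0].device).manual_seed(seed)
        for p in params:
            z = torch.randn(p.shape, generator=gen, device=p.device, dtype=p.dtype)
            p.data.add_(-lr * grad_est * z)
```

Because only forward passes and the model weights are needed, this style of optimizer is what makes fine-tuning models far larger than GPU memory plausible when combined with offloading.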
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in its type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.