Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Future Blog Post
Published:
This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
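A minimal sketch of that setting, assuming the standard Jekyll configuration format:

    # _config.yml (site root)
    future: false   # when false, posts dated in the future are not built or published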
Blog Post number 4
Published:
This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Blog Post number 3
Published:
This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Blog Post number 2
Published:
This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Blog Post number 1
Published:
This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum and don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.
Portfolio
Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
Projects
ZO2: Zeroth-Order Offloading
Fine-tuning 175B-parameter LLMs with only 18 GB of GPU memory
Publications
WiP: Towards Light Adaptation of Large Language Models For Personal Hardware
Published in Mobisys workshop, 2024
This paper explores efficient adaptation approaches for running large language models on personal hardware with limited resources.
Recommended citation: Liangyu Wang, Junxiao Wang, and Di Wang. (2024). "WiP: Towards Light Adaptation of Large Language Models For Personal Hardware." Mobisys workshop 2024.
Download Paper
FlashDP: Memory-Efficient and High-Throughput DP-SGD Training for Large Language Models
Published in NeurIPS workshop, 2024
This paper presents a memory-efficient and high-throughput approach for training large language models with differential privacy guarantees.
Recommended citation: Liangyu Wang, Junxiao Wang, Jie Ren, Zihang Xiang, David E. Keyes, and Di Wang. (2024). "FlashDP: Memory-Efficient and High-Throughput DP-SGD Training for Large Language Models." NeurIPS workshop 2024.
Download Paper
ZO2: Scalable Zeroth-Order Fine-Tuning for Extremely Large Language Models with Limited GPU Memory
Published in NeurIPS workshop, 2024; arXiv preprint, 2025
This paper presents a novel framework for efficient zeroth-order fine-tuning of extremely large language models with limited GPU memory.
Recommended citation: Liangyu Wang, Jie Ren, Hang Xu, Junxiao Wang, Huanyi Xie, David E. Keyes, and Di Wang. (2025). "ZO2: Scalable Zeroth-Order Fine-Tuning for Extremely Large Language Models with Limited GPU Memory." arXiv preprint arXiv:2503.12668.
Download Paper
Talks
Talk 1 on Relevant Topic in Your Field
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
Published:
This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.
Teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.