Weijun Wang (王蔚峻)

Research Fellow
Institute for AI Industry Research
Tsinghua University
Beijing, China
Email: wangweijun AT mail DOT tsinghua DOT edu DOT cn

Find me on LinkedIn and GitHub.

The photo on the left was taken near Taksim Square in Istanbul.

Research Interests

My current research focuses on building systems to accelerate the serving of large models, and on empowering traditional systems with large models.

Current Projects

Recent Publications (full list)

News

  • 11/2024. My proposal on “Research on Efficient Inference of Vision-Large-Model” was approved and funded by the China Postdoctoral Science Foundation. Feel free to contact me if you are interested.

  • 08/2024. My proposal on “Key Technologies for Large-Model-powered Edge Video Analysis” was approved and funded by NSFC. Feel free to contact me if you are interested! We are recruiting both undergraduate and graduate research interns.

  • 07/2024. RegenHance was accepted by NSDI'25 (Spring round). Many thanks to Prof. Yunxin Liu for his insightful comments!

  • 05/2024. SwapMoE was accepted by ACL'24. Congratulations to Rui!

  • 02/2024. The AccDecoder journal version was accepted by ACM/IEEE TON. Congratulations to Liang Mi on his first top-journal paper!

  • 02/2024. The DUET journal version was accepted by IEEE TMC. Congratulations to Lihao on his first top-journal paper!

  • 01/2024. Our survey on Personal LLM Agents has been published and was covered by 机器之心, one of the most popular WeChat Official Accounts! Check it out, and see the Repo for more details.

  • 12/2023. BiSwift was accepted to IEEE INFOCOM'24. Congratulations to Lin on her first top-conference paper!

  • 11/2023. The DARPA journal version was accepted by IEEE TMC. Thanks to all collaborators!

  • 07/2023. I was selected for the Postdoctoral Talent Introduction Program of the Ministry of Education, China (国家博士后海外引才专项) and the Shuimu Tsinghua Scholar Program of Tsinghua University (水木学者). Many thanks to my postdoctoral advisor, Prof. Yunxin Liu.

  • 04/2023. The VSiM journal version was accepted by TON. Many thanks to our collaborators Yali, Yuhan, Sripriya, Kai, and Prof. Xiaoming Fu!