Jiawei Liu

PhD candidate in CS at UIUC; 3rd year, started Fall 2021


My research goal is to simplify the making of great software with and for machine learning and its systems. Currently, I work on Programming Languages, Formal Methods, and Software Engineering with Lingming Zhang at UIUC.

🛡️ How to detect and mitigate errors in emerging ML systems?

🤖 How to build and evaluate code LLMs to assist software engineering?

🚀 How to simplify the making of emerging ML systems?

🤗 Feel free to drop me an email if you are interested in my research.

Papers

  1. Pre-print
    XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts
    Yifeng Ding, Jiawei Liu, Yuxiang Wei, Terry Yue Zhuo, and Lingming Zhang
    arXiv preprint arXiv:2404.15247. 2024
  2. Pre-print
    Emerging Platforms Meet Emerging LLMs: A Year-Long Journey of Top-Down Development
    Siyuan Feng, Jiawei Liu, Ruihang Lai, Charlie F. Ruan, Yong Yu, Lingming Zhang, and Tianqi Chen
    arXiv preprint arXiv:2404.09151. 2024
  3. Pre-print
    Magicoder: Source Code Is All You Need
    Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang
    arXiv preprint arXiv:2312.02120. 2023
  4. NeurIPS’23
    Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation
    Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang
    Thirty-seventh Conference on Neural Information Processing Systems. 2023
  5. ESEC/FSE’23
    Artifact Available, Artifact Reusable
    NeuRI: Diversifying DNN Generation via Inductive Rule Inference
    Jiawei Liu, Jinjun Peng, Yuyao Wang, and Lingming Zhang
    Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering. 2023
    🏆  ACM SIGSOFT Distinguished Paper Award
  6. ASPLOS’23
    Artifact Available, Artifact Functional, Results Reproduced
    NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers
    Jiawei Liu, Jinkun Lin, Fabian Ruffy, Cheng Tan, Jinyang Li, Aurojit Panda, and Lingming Zhang
    Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2. 2023
    🏆  Distinguished Artifact Award
  7. OOPSLA’22
    Artifact Available, Artifact Reusable
    Coverage-guided tensor compiler fuzzing with joint IR-pass mutation
    Jiawei Liu, Yuxiang Wei, Sen Yang, Yinlin Deng, and Lingming Zhang
    Proceedings of the ACM on Programming Languages 6 (OOPSLA1). Apr 2022
*PL/SE conferences like OOPSLA and ESEC/FSE do not award a reproducibility badge at artifact evaluation, since it would require third-party re-implementation. Nonetheless, we received all the badges we could get. :D

Service

Organizing: LLM4Code@ICSE'24

Reviewer: TSE, TOSEM, DCAA@AAAI'23, R2FM@ICLR'24

Artifact Evaluation Committee: PLDI'23, OSDI'22, ATC'22

Invited Talks

ARiSE Lab, Columbia University: Simplify the Making of Great Software in the ML Era April 2024

Kwai Inc: EvalPlus, Magicoder, and StarCoder2 Mar 2024

Snowflake GenAI: Rigorous Evaluation of LLMs for Code (Slides) Feb 2024

AST Lab, ETH Zürich: Generating Test-Cases for ML Compilers (Slides) Jan 2024

GAI4SE, NC State University: LLMs for Software Testing (Guest Lecture) Nov 2023

Apache TVM Conference: Automating DL Compiler Bug Finding with NNSmith Mar 2023

SAMPL, University of Washington: Coverage-Guided Tensor Compiler Fuzzing (Slides) May 2022

Experience

UIUC, 2021~TBD   CS PhD @ PL/FM/SE

Google TPU, Smr+Fall. 23   ML SDC

OctoML, Smr. 22   Pattern Language

Tongji University, 20{17~21}   B.Eng. in CS

Alibaba DAMO, Smr. 21   GNN4Assembly

NYU Systems Group, Smr. 20   Video Analytics

ByteDance AI Lab, Spr. 20   Model Serving