I am a research scientist at Meta. Previously, I was at the University of Wisconsin–Madison, where I was advised by Somesh Jha and, prior to that, by Ben Liblit. I was affiliated with the fantastic madPL group and had the privilege of being mentored by Thomas Reps, Aws Albarghouthi, and Yudong Chen.

My research lies at the intersection of programming languages and deep learning, guided by a broad interest in computational and linguistic problems and a tendency to view challenges through these lenses. I am particularly dedicated to formalizing computational phenomena in deep learning, a process known as abstracting in computer science and modeling in physics. My research philosophy draws significant inspiration from Immanuel Kant and the classical Göttingen school of mathematicians. Currently, I employ meta-programming techniques to investigate and harness emergent linguistic phenomena and novel computational paradigms within language models. Previously, my work focused on analyzing the variational properties of neural networks.

I studied mathematics and philosophy at the University of Illinois Urbana-Champaign as an undergraduate. Before moving to Madison, I completed a master's degree in computer science at the Courant Institute, New York University, where my research advisor was Richard Cole.

The best way to reach me is by email: “my initials (two letters) at cs.wisc.edu” (for example, pl@cs.wisc.edu).

Publications and manuscripts

Zi Wang, Shiwei Weng, Mohannad Alhanahnah, Somesh Jha and Thomas Reps. PEA: Enhancing LLM Performance on Computational-Reasoning Tasks. Under review

Zi Wang, Divyam Anshumaan, Ashish Hooda, Yudong Chen and Somesh Jha. Functional Homotopy: Smoothing Discrete Optimization via Continuous Parameters for LLM Jailbreak Attacks. ICLR 2025

Zi Wang, Bin Hu, Aaron Havens, Alexandre Araujo, Yang Zheng, Yudong Chen and Somesh Jha. On the Scalability and Memory Efficiency of Semidefinite Programs for Lipschitz Constant Estimation of Neural Networks. ICLR 2024

Zi Wang, Gautam Prakriya and Somesh Jha. A Quantitative Geometric Approach to Neural Network Smoothness. NeurIPS 2022

Zi Wang, Aws Albarghouthi, Gautam Prakriya and Somesh Jha. Interval Universal Approximation for Neural Networks. POPL 2022

Jordan Henkel, Goutham Ramakrishnan, Zi Wang, Aws Albarghouthi, Somesh Jha and Thomas Reps. Semantic Robustness of Models of Source Code. SANER 2022

Thomas K. Panum, Zi Wang, Pengyu Kan, Earlence Fernandes and Somesh Jha. Exploring Adversarial Robustness of Deep Metric Learning. arXiv preprint arXiv:2102.07265

Zi Wang, Ben Liblit and Thomas Reps. TOFU: Target-Oriented FUzzer. arXiv preprint arXiv:2004.14375

Zi Wang, Jihye Choi, Ke Wang and Somesh Jha. Rethinking Diversity in Deep Neural Network Testing. arXiv preprint arXiv:2305.15698

Zi Wang. A New Strongly Polynomial Algorithm for Computing Fisher Market Equilibria with Spending Constraint Utilities. Master's thesis

Miscellaneous

My recent favorite album is RENAISSANCE. Great ideas will come back again and again 🫡.

My all-time favorite movie is Life of Pi. Life is full of metaphors, and the archetypes underneath the metaphors are the innate abstractions of the world.