Papers

Topics

Fast Algorithms

A Simple and Fast Reduction from Gomory-Hu Trees to Polylog Maxflows
with Maximilian Probst Gutenberg, Weixuan Yuan, and Wuwei Yuan.
To appear in SODA 2026.
Deterministic Almost-Linear-Time Gomory-Hu Trees
with Amir Abboud, Jason Li, Debmalya Panigrahi, Maximilian Probst Gutenberg, Thatchaphol Saranurak, Wuwei Yuan, and Weixuan Yuan.
In FOCS 2025.
Random-Shift Revisited: Tight Approximations for Tree Embeddings and ℓ1-Oblivious Routings
with Maximilian Probst Gutenberg and Tim Rieder.
In FOCS 2025.
Bootstrapping Dynamic APSP via Sparsification
with Simon Meierhans and Gernot Zöcklein.
In ESA 2025.
Acceleration Meets Inverse Maintenance: Faster \(\ell_{\infty}\)-Regression
with Deeksha Adil and Shunhua Jiang.
In ICALP 2025.
A Simple Dynamic Spanner via APSP
with Simon Meierhans and Gernot Zöcklein.
In ICALP 2025.
Almost-Linear Time Algorithms for Decremental Graphs: Min-Cost Flow and More via Duality
with Jan van den Brand, Li Chen, Yang P. Liu, Simon Meierhans, Maximilian Probst Gutenberg, and Sushant Sachdeva.
In FOCS 2024.
Optimal Electrical Oblivious Routing on Expanders
with Cella Florescu, Maximilian Probst Gutenberg, and Sushant Sachdeva.
In ICALP 2024.
A Framework for Parallelizing Approximate Gaussian Elimination
with Yves Baumann.
In SPAA 2024.
Almost-Linear Time Algorithms for Incremental Graphs: Cycle Detection, SCCs, s-t Shortest Path, and Minimum-Cost Flow
with Li Chen, Yang P. Liu, Simon Meierhans, and Maximilian Probst Gutenberg.
In STOC 2024.
A Dynamic Shortest Paths Toolbox: Low-Congestion Vertex Sparsifiers and their Applications
with Simon Meierhans and Maximilian Probst Gutenberg.
In STOC 2024.
Incremental Approximate Maximum Flow on Undirected Graphs in Subpolynomial Update Time
with Jan van den Brand, Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva, and Aaron Sidford.
In SODA 2024.
A Deterministic Almost-Linear Time Algorithm for Minimum-Cost Flow
with Jan van den Brand, Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva, and Aaron Sidford.
In FOCS 2023.
Maintaining Expander Decompositions via Sparse Cuts
with Yiding Hua, Maximilian Probst Gutenberg, and Zihang Wu.
In SODA 2023.
arXiv
A Simple Framework for Finding Balanced Sparse Cuts via APSP
with Li Chen, Maximilian Probst Gutenberg, and Sushant Sachdeva.
In SOSA 2023.
arXiv
Maximum Flow and Minimum-Cost Flow in Almost-Linear Time
with Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva.
In FOCS 2022. Won the FOCS Best Paper Award.
Quanta coverage
Derandomizing Random Walks in Almost-Linear Time
with Simon Meierhans and Maximilian Probst Gutenberg.
In FOCS 2022.
arXiv
Incremental SSSP for Sparse Digraphs Beyond the Hopset Barrier
with Simon Meierhans and Maximilian Probst Gutenberg.
In SODA 2022.
arXiv
Faster Sparse Matrix Inversion and Rank Computation in Finite Fields
with Sílvia Casacuberta.
In ITCS 2022.
arXiv
Almost-linear-time Weighted \(\ell_p\)-norm Solvers in Slightly Dense Graphs via Sparsification
with Deeksha Adil, Brian Bullins, and Sushant Sachdeva.
In ICALP 2021.
arXiv
Flows in Almost Linear Time via Adaptive Preconditioning
with Richard Peng, Sushant Sachdeva, and Di Wang.
In STOC 2019.
Iterative Refinement for \(\ell_p\)-norm Regression
with Deeksha Adil, Richard Peng, and Sushant Sachdeva.
In SODA 2019.
Solving Directed Laplacians in Nearly Linear Time through Sparse LU Factorizations
with Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, and Aaron Sidford.
In FOCS 2018.
Incomplete Nested Dissection
with Richard Peng, Robert Schwieterman, and Peng Zhang.
In STOC 2018.
Approximate Gaussian Elimination
Rasmus Kyng.
My PhD dissertation, 2017.
Sampling Random Spanning Trees Faster than Matrix Multiplication
with David Durfee, John Peebles, Anup B. Rao, and Sushant Sachdeva.
In STOC 2017.
A Framework for Analyzing Resparsification Algorithms
with Jakub Pachocki, Richard Peng, and Sushant Sachdeva.
In SODA 2017.
Approximate Gaussian Elimination for Laplacians: Fast, Sparse, and Simple
with Sushant Sachdeva.
In FOCS 2016.
Sparsified Cholesky and Multigrid Solvers for Connection Laplacians
with Yin Tat Lee, Richard Peng, Sushant Sachdeva, and Daniel A. Spielman.
In STOC 2016.
Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)-norms
with Anup B. Rao and Sushant Sachdeva.
In NIPS 2015.
Our implementation of the least-squares isotonic regression algorithm is available in the Isotonic GitHub repository.
Algorithms for Lipschitz Learning on Graphs
with Anup B. Rao, Daniel A. Spielman, and Sushant Sachdeva.
In COLT 2015.
Our implementations of the lex-minimization algorithms are available in the YINSlex GitHub repository.
Solving SDD Linear Systems in Nearly \( m\log^{1/2}n \) Time
with Michael B. Cohen, Gary L. Miller, Jakub W. Pachocki, Richard Peng, Anup B. Rao, and Shen Chen Xu.
In STOC 2014.
This paper was a merged submission of the following two papers:
Preconditioning in Expectation
with Michael B. Cohen, Jakub W. Pachocki, Richard Peng, and Anup B. Rao.
Stretching Stretch
Michael B. Cohen, Gary L. Miller, Jakub W. Pachocki, Richard Peng, and Shen Chen Xu.

Experiments, Applications, Machine Learning, and Scientific Computing

AC(K): Robust Solution of Laplacian Equations by Randomized Approximate Cholesky Factorization
with Yuan Gao and Daniel A. Spielman.
To appear in the SIAM Journal on Scientific Computing (SISC) 2025.
This paper describes ApproxChol, the Laplacian solver in Laplacians.jl.
Assessing GPT Performance in a Proof-Based University-Level Course Under Blind Grading
with Ming Ding, Federico Soldà, and Weixuan Yuan.
In the EATCS Bulletin, Vol. 146, 2025.
Robust and Practical Solution of Laplacian Equations by Approximate Elimination
with Yuan Gao and Daniel A. Spielman.
Manuscript.
arXiv
This paper describes ApproxChol, the Laplacian solver in Laplacians.jl.
Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)-norms
with Anup B. Rao and Sushant Sachdeva.
In NIPS 2015.
Our implementation of the least-squares isotonic regression algorithm is available in the Isotonic GitHub repository.
Algorithms for Lipschitz Learning on Graphs
with Anup B. Rao, Daniel A. Spielman, and Sushant Sachdeva.
In COLT 2015.
Our implementations of the lex-minimization algorithms are available in the YINSlex GitHub repository.

Fine-grained Complexity Theory

Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets
with Ming Ding, Maximilian Probst Gutenberg, and Peng Zhang.
In ICALP 2022.
arXiv
Two-Commodity Flow is as Hard as Linear Programming
with Ming Ding and Peng Zhang.
In ICALP 2022.
arXiv
On the Oracle Complexity of Higher-Order Smooth Non-Convex Finite-Sum Optimization
with Nicolas Emmenegger and Ahad N. Zehmakan.
In AISTATS 2022 and in the ICML 2021 workshop 'Beyond first-order methods in ML systems'.
arXiv
Packing LPs are Hard to Solve Accurately, Assuming Linear Equations are Hard
with Di Wang and Peng Zhang.
In SODA 2020.
Conference
Hardness Results for Structured Linear Systems
with Peng Zhang.
In FOCS 2017. Won the Machtey Award for Best Student Paper.

Discrepancy Theory and Random Matrix Theory

Scalar and Matrix Chernoff Bounds from \(\ell_{\infty}\)-Independence
with Tali Kaufman and Federico Soldà.
In SODA 2022.
arXiv
Four Deviations Suffice for Rank 1 Matrices
with Kyle Luh and Zhao Song.
In Advances in Mathematics, Volume 375, 2 December 2020.
A Matrix Chernoff Bound for Strongly Rayleigh Distributions and Spectral Sparsifiers from a few Random Spanning Trees
with Zhao Song.
In FOCS 2018.