
About Me

I am an assistant professor in the Department of Computer Science at ETH Zurich, which I joined in the fall of 2019. My research focuses on fast algorithms, along with other topics in theoretical computer science and applications in machine learning.

You can write to me to apply for a PhD position, or apply to our Direct Doctorate Program (a joint Master's and PhD program). Candidates should have a strong background in theoretical computer science or mathematics.

My research is supported in part by grant no. 200021 204787 of the Swiss National Science Foundation.

I spent 2018 and the spring of 2019 as a postdoc in the Theory of Computation Group at Harvard, working with Jelani Nelson. Before that, I spent the fall semester of 2017 as a research fellow at the Simons Institute at UC Berkeley. I finished my PhD at the Department of Computer Science at Yale in the summer of 2017, where I was advised by Daniel A. Spielman. Before attending Yale, I received a B.A. in Computer Science from the University of Cambridge in 2011.

You can read about my work on almost-linear-time algorithms for maximum flow and minimum-cost flow in the Quanta article "Researchers Achieve ‘Absurdly Fast’ Algorithm for Network Flow" (June 8th, 2022). This is based on joint work with Li Chen, Yang Liu, Maximilian Probst Gutenberg, Richard Peng, and Sushant Sachdeva. See also the publications below.

Group

Teaching

Thesis and project supervision

My group supervises bachelor's and master's theses for students in ETHZ D-INFK, and we supervise student projects through the courses 'Research in Computer Science' and 'Praktische Arbeit'. In some cases, we also supervise master's theses and projects for students in other ETHZ departments.

Learn more about our thesis and project supervision here.

Papers

Maintaining Expander Decompositions via Sparse Cuts
with Yiding Hua, Maximilian Probst Gutenberg, and Zihang Wu.
In SODA 2023.
arXiv
A Simple Framework for Finding Balanced Sparse Cuts via APSP
with Li Chen, Maximilian Probst Gutenberg, and Sushant Sachdeva.
In SOSA 2023.
arXiv
Maximum Flow and Minimum-Cost Flow in Almost-Linear Time
with Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva.
In FOCS 2022. Won the FOCS Best Paper Award.
Quanta coverage
Derandomizing Random Walks in Almost-Linear Time
with Simon Meierhans and Maximilian Probst Gutenberg.
In FOCS 2022.
arXiv
Scalar and Matrix Chernoff Bounds from \(\ell_{\infty}\)-Independence
with Tali Kaufman and Federico Soldà.
In SODA 2022.
arXiv
Incremental SSSP for Sparse Digraphs Beyond the Hopset Barrier
with Simon Meierhans and Maximilian Probst Gutenberg.
In SODA 2022.
arXiv
Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets
with Ming Ding, Maximilian Probst Gutenberg, and Peng Zhang.
In ICALP 2022.
arXiv
Two-Commodity Flow is as Hard as Linear Programming
with Ming Ding and Peng Zhang.
In ICALP 2022.
arXiv
Faster Sparse Matrix Inversion and Rank Computation in Finite Fields
with Sílvia Casacuberta.
In ITCS 2022.
arXiv
On the Oracle Complexity of Higher-Order Smooth Non-Convex Finite-Sum Optimization
with Nicolas Emmenegger and Ahad N. Zehmakan.
In AISTATS 2022 and in the ICML 2021 workshop 'Beyond first-order methods in ML systems'.
arXiv
Almost-linear-time Weighted \(\ell_p\)-norm Solvers in Slightly Dense Graphs via Sparsification
with Deeksha Adil, Brian Bullins, and Sushant Sachdeva.
In ICALP 2021.
arXiv
Four Deviations Suffice for Rank 1 Matrices
with Kyle Luh and Zhao Song.
In Advances in Mathematics, Volume 375, 2 December 2020.
Packing LPs are Hard to Solve Accurately, Assuming Linear Equations are Hard
with Di Wang and Peng Zhang.
In SODA 2020.
Conference
Flows in Almost Linear Time via Adaptive Preconditioning
with Richard Peng, Sushant Sachdeva, and Di Wang.
In STOC 2019.
Iterative Refinement for \(\ell_p\)-norm Regression
with Deeksha Adil, Richard Peng, and Sushant Sachdeva.
In SODA 2019.
A Matrix Chernoff Bound for Strongly Rayleigh Distributions and Spectral Sparsifiers from a few Random Spanning Trees
with Zhao Song.
In FOCS 2018.
Solving Directed Laplacians in Nearly Linear Time through Sparse LU Factorizations
with Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, and Aaron Sidford.
In FOCS 2018.
Incomplete Nested Dissection
with Richard Peng, Robert Schwieterman, and Peng Zhang.
In STOC 2018.
Approximate Gaussian Elimination
Rasmus Kyng.
My PhD dissertation, 2017.
Hardness Results for Structured Linear Systems
with Peng Zhang.
In FOCS 2017. Won the Machtey Award for Best Student Paper.
In the SIAM Journal on Computing, Special Section FOCS 2017.
Sampling Random Spanning Trees Faster than Matrix Multiplication
with David Durfee, John Peebles, Anup B. Rao, and Sushant Sachdeva.
In STOC 2017.
A Framework for Analyzing Resparsification Algorithms
with Jakub Pachocki, Richard Peng, and Sushant Sachdeva.
In SODA 2017.
Approximate Gaussian Elimination for Laplacians: Fast, Sparse, and Simple
with Sushant Sachdeva.
In FOCS 2016.
Sparsified Cholesky and Multigrid Solvers for Connection Laplacians
with Yin Tat Lee, Richard Peng, Sushant Sachdeva, and Daniel A. Spielman.
In STOC 2016.
Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)-norms
with Anup B. Rao and Sushant Sachdeva.
In NIPS 2015.
Our implementation of the least-squares isotonic regression algorithm is available in the Isotonic Github repository.
Algorithms for Lipschitz Learning on Graphs
with Anup B. Rao, Daniel A. Spielman, and Sushant Sachdeva.
In COLT 2015.
Our implementations of the lex-minimization algorithms are available on the YINSlex Github repository.
Solving SDD Linear Systems in Nearly \( m\log^{1/2}n \) Time
with Michael B. Cohen, Gary L. Miller, Jakub W. Pachocki, Richard Peng, Anup B. Rao and Shen Chen Xu.
In STOC 2014.
The paper Solving SDD Linear Systems in Nearly \( m\log^{1/2}n \) Time was a merged submission of the following two papers:
Preconditioning in Expectation
with Michael B. Cohen, Jakub W. Pachocki, Richard Peng, and Anup B. Rao.
Stretching Stretch
Michael B. Cohen, Gary L. Miller, Jakub W. Pachocki, Richard Peng, and Shen Chen Xu.

Recent and Upcoming Talks

More Talks

  • Stanford Theory Seminar; October 2017. Approximate Gaussian Elimination.
  • FOCS, Berkeley; October 2017. Hardness Results for Structured Linear Systems.
  • UC Berkeley; September 2017. Hardness Results for Structured Linear Systems.
  • Google Research Mountain View; August 2017. Hardness Results for Structured Linear Systems.
  • YPNG Seminar, Yale Dept. of Statistics and Data Science; July 2017. Approximate Gaussian Elimination.
  • MSR Redmond; January 2017. Regression, Elimination, and Sampling on Graphs.
  • University of Copenhagen; January 2017. Approximate Gaussian Elimination.
  • CMU; December 2016. Approximate Gaussian Elimination.
  • Georgia Tech; November 2016. Approximate Gaussian Elimination.
  • Princeton; October 2016. Approximate Gaussian Elimination.
  • UC Berkeley; October 2016. Approximate Gaussian Elimination.
  • Google Research NYC; October 2016. Approximate Gaussian Elimination.
  • FOCS, New Brunswick; October 2016. Approximate Gaussian Elimination.
  • MIT A&C Seminar; September 2016. Approximate Gaussian Elimination.
  • Aarhus University; September 2016. Lipschitz Learning on Graphs.
  • China Theory Week, Hong Kong; August 2016. Approximate Gaussian Elimination.
  • SIAM Annual Meeting, Boston; July 2016. Approximate Cholesky Factorization.
  • STOC, Boston; June 2016. Sparsified Cholesky and Multigrid Solvers for Connection Laplacians.
  • IT University of Copenhagen; December 2015. Lipschitz Learning and Isotonic Regression on Graphs.

Code

Laplacians.jl
Laplacians.jl is a Julia package containing graph algorithms, especially ones related to spectral and algebraic graph theory. It was started by Dan Spielman, and has contributions from Dan, Serban Stan, myself, and several others.
YINSlex Github Repository
Our implementations of the lex-minimization algorithms from the paper Algorithms for Lipschitz Learning on Graphs. The code was written by Anup Rao, Sushant Sachdeva, Dan Spielman, and myself.
Isotonic Github Repository
An implementation of the least-squares isotonic regression algorithm from the paper Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)-norms. The code was written by Anup Rao, Sushant Sachdeva, and myself.

Contact

I can be reached by email at

[last name] @ inf.ethz.ch

[last name] = kyng