Portrait of me

About Me

I am an assistant professor in the Department of Computer Science at ETH Zurich, which I joined in the fall of 2019. My research focuses on fast algorithms for graph problems and convex optimization, probability and discrepancy theory, fine-grained complexity theory, and applications in machine learning.

My research is supported in part by project grant no. 200021 204787 and starting grant no. TMSGI2 218022 of the Swiss National Science Foundation.

I spent 2018 and the spring of 2019 as a postdoc in the Theory of Computation Group at Harvard, working with Jelani Nelson. Before that, I spent the fall semester of 2017 as a research fellow at the Simons Institute at UC Berkeley. I finished my PhD at the Department of Computer Science at Yale in the summer of 2017, where I was advised by Daniel A. Spielman. Before attending Yale, I received a B.A. in Computer Science from the University of Cambridge in 2011.

You can read about my work on almost-linear time algorithms for maximum and minimum cost flow in the Quanta article "Researchers Achieve ‘Absurdly Fast’ Algorithm for Network Flow" (Jun 8th, 2022). Nikhil Srivastava wrote another very informative discussion of our minimum-cost flow algorithm and our recent work on almost-linear time algorithms for many problems in 'incremental graphs' in the Simons Institute Newsletter (Jan 22nd, 2024). These introductions cover joint works with Li Chen, Yang Liu, Simon Meierhans, Maximilian Probst Gutenberg, Richard Peng, and Sushant Sachdeva. See also the publications below.

My CV.

PhD and Postdoc Applications

PhD openings

If you're interested in applying to do a PhD in our group, you can send an email to applications-kyng@lists.inf.ethz.ch (as a precaution against spam filters, cc me as well). Please include your transcript, a CV, and any other relevant information. We may request letters of recommendation later in our interview process, but prefer to receive these directly from recommenders.

Our standard PhD program requires you to hold a master's degree when you start. If you would like to start directly after receiving a bachelor's degree, you should apply to our Direct Doctorate Program (joint Master's and PhD), and also send an email to the above address to let me know you are interested in joining my group. Candidates should have a strong background in theoretical computer science or mathematics.

Students in our group are usually jointly mentored by Max Probst Gutenberg and myself.

Postdoc openings

We have an opening for a two-year postdoc in our group. Candidates should have a strong publication record in theoretical computer science and research interests that overlap with our group's. The starting date is flexible, and we offer a competitive salary and generous funding for work travel.

To apply, send an email to applications-kyng@lists.inf.ethz.ch (as a precaution against spam filters, cc me as well). Please include your CV and any other relevant information. We may request letters of recommendation later in our interview process, but prefer to receive these directly from recommenders. Beyond the postdoc positions in our group, theory researchers interested in doing a postdoc at ETH are also highly encouraged to apply to the ITS junior fellowship.

Group

Teaching

Thesis and project supervision

Our group supervises bachelor's and master's theses for students in ETHZ D-INFK, and we supervise student projects through the courses 'Research in Computer Science' and 'Praktische Arbeit'. In some cases, we also supervise master's theses and projects for students in other ETHZ departments.

Learn more about our thesis and project supervision here.

Papers

Optimal Electrical Oblivious Routing on Expanders
with Cella Florescu, Maximilian Probst Gutenberg, and Sushant Sachdeva.
In ICALP 2024.
A Framework for Parallelizing Approximate Gaussian Elimination
with Yves Baumann.
In SPAA 2024.
Almost-Linear Time Algorithms for Incremental Graphs: Cycle Detection, SCCs, s-t Shortest Path, and Minimum-Cost Flow
with Li Chen, Yang P. Liu, Simon Meierhans, and Maximilian Probst Gutenberg.
To appear in STOC 2024.
A Dynamic Shortest Paths Toolbox: Low-Congestion Vertex Sparsifiers and their Applications
with Simon Meierhans and Maximilian Probst Gutenberg.
To appear in STOC 2024.
Incremental Approximate Maximum Flow on Undirected Graphs in Subpolynomial Update Time
with Jan van den Brand, Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva, and Aaron Sidford.
In SODA 2024.
A Deterministic Almost-Linear Time Algorithm for Minimum-Cost Flow
with Jan van den Brand, Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva, and Aaron Sidford.
In FOCS 2023.
Robust and Practical Solution of Laplacian Equations by Approximate Elimination
with Yuan Gao and Daniel A. Spielman.
Manuscript.
Arxiv
This paper describes ApproxChol, the Laplacian solver in Laplacians.jl.
Maintaining Expander Decompositions via Sparse Cuts
with Yiding Hua, Maximilian Probst Gutenberg, and Zihang Wu.
In SODA 2023.
Arxiv
A Simple Framework for Finding Balanced Sparse Cuts via APSP
with Li Chen, Maximilian Probst Gutenberg, and Sushant Sachdeva.
In SOSA 2023.
Arxiv
Maximum Flow and Minimum-Cost Flow in Almost-Linear Time
with Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva.
In FOCS 2022. Won the FOCS Best Paper Award.
Won an inaugural ICBS Frontiers of Science Award in Theoretical Computer Science.
Derandomizing Random Walks in Almost-Linear Time
with Simon Meierhans and Maximilian Probst Gutenberg.
In FOCS 2022.
Arxiv
Scalar and Matrix Chernoff Bounds from \(\ell_{\infty}\)-Independence
with Tali Kaufman and Federico Solda.
In SODA 2022.
Arxiv
Incremental SSSP for Sparse Digraphs Beyond the Hopset Barrier
with Simon Meierhans and Maximilian Probst Gutenberg.
In SODA 2022.
Arxiv
Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets
with Ming Ding, Maximilian Probst Gutenberg, and Peng Zhang.
In ICALP 2022.
Arxiv
Two-Commodity Flow is as Hard as Linear Programming
with Ming Ding and Peng Zhang.
In ICALP 2022.
Arxiv
Faster Sparse Matrix Inversion and Rank Computation in Finite Fields
with Sílvia Casacuberta.
In ITCS 2022.
Arxiv
On the Oracle Complexity of Higher-Order Smooth Non-Convex Finite-Sum Optimization
with Nicolas Emmenegger and Ahad N. Zehmakan.
In AISTATS 2022 and in the ICML 2021 workshop 'Beyond first-order methods in ML systems'.
Arxiv
Almost-linear-time Weighted \(\ell_p\)-norm Solvers in Slightly Dense Graphs via Sparsification
with Deeksha Adil, Brian Bullins, and Sushant Sachdeva.
In ICALP 2021.
Arxiv
Four Deviations Suffice for Rank 1 Matrices
with Kyle Luh and Zhao Song.
In Advances in Mathematics, Volume 375, 2 December 2020.
Packing LPs are Hard to Solve Accurately, Assuming Linear Equations are Hard
with Di Wang and Peng Zhang.
In SODA 2020.
Conference
Flows in Almost Linear Time via Adaptive Preconditioning
with Richard Peng, Sushant Sachdeva, and Di Wang.
In STOC 2019.
Iterative Refinement for \(\ell_p\)-norm Regression
with Deeksha Adil, Richard Peng, and Sushant Sachdeva.
In SODA 2019.
A Matrix Chernoff Bound for Strongly Rayleigh Distributions and Spectral Sparsifiers from a few Random Spanning Trees
with Zhao Song.
In FOCS 2018.
Solving Directed Laplacians in Nearly Linear Time through Sparse LU Factorizations
with Michael B. Cohen, Jonathan Kelner, John Peebles, Richard Peng, Anup B. Rao, and Aaron Sidford.
In FOCS 2018.
Incomplete Nested Dissection
with Richard Peng, Robert Schwieterman, and Peng Zhang.
In STOC 2018.
Approximate Gaussian Elimination
Rasmus Kyng.
My PhD dissertation, 2017.
Hardness Results for Structured Linear Systems
with Peng Zhang.
In FOCS 2017. Won the Machtey Award for Best Student Paper.
In the SIAM Journal on Computing, Special Section FOCS 2017.
Sampling Random Spanning Trees Faster than Matrix Multiplication
with David Durfee, John Peebles, Anup B. Rao, and Sushant Sachdeva.
In STOC 2017.
A Framework for Analyzing Resparsification Algorithms
with Jakub Pachocki, Richard Peng, and Sushant Sachdeva.
In SODA 2017.
Approximate Gaussian Elimination for Laplacians: Fast, Sparse, and Simple
with Sushant Sachdeva.
In FOCS 2016.
Sparsified Cholesky and Multigrid Solvers for Connection Laplacians
with Yin Tat Lee, Richard Peng, Sushant Sachdeva, and Daniel A. Spielman.
In STOC 2016.
Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)-norms
with Anup B. Rao and Sushant Sachdeva.
In NIPS 2015.
Our implementation of the least-squares isotonic regression algorithm is available in the Isotonic Github repository.
Algorithms for Lipschitz Learning on Graphs
with Anup B. Rao, Daniel A. Spielman, and Sushant Sachdeva.
In COLT 2015.
Our implementations of the lex-minimization algorithms are available on the YINSlex Github repository.
Solving SDD Linear Systems in Nearly \( m\log^{1/2}n \) Time
with Michael B. Cohen, Gary L. Miller, Jakub W. Pachocki, Richard Peng, Anup B. Rao, and Shen Chen Xu.
In STOC 2014.
The paper Solving SDD Linear Systems in Nearly \( m\log^{1/2}n \) Time was a merged submission of the following two papers:
Preconditioning in Expectation
with Michael B. Cohen, Jakub W. Pachocki, Richard Peng, and Anup B. Rao.
Stretching Stretch
Michael B. Cohen, Gary L. Miller, Jakub W. Pachocki, Richard Peng, and Shen Chen Xu.

Recent and Upcoming Talks

More Talks

  • Stanford Theory Seminar; October 2017. Approximate Gaussian Elimination.
  • FOCS, Berkeley; October 2017. Hardness Results for Structured Linear Systems.
  • UC Berkeley; September 2017. Hardness Results for Structured Linear Systems.
  • Google Research Mountain View; August 2017. Hardness Results for Structured Linear Systems.
  • YPNG Seminar, Yale Dept. of Statistics and Data Science; July 2017. Approximate Gaussian Elimination.
  • MSR Redmond; January 2017. Regression, Elimination, and Sampling on Graphs.
  • University of Copenhagen; January 2017. Approximate Gaussian Elimination.
  • CMU; December 2016. Approximate Gaussian Elimination.
  • Georgia Tech; November 2016. Approximate Gaussian Elimination.
  • Princeton; October 2016. Approximate Gaussian Elimination.
  • UC Berkeley; October 2016. Approximate Gaussian Elimination.
  • Google Research NYC; October 2016. Approximate Gaussian Elimination.
  • FOCS, New Brunswick; October 2016. Approximate Gaussian Elimination.
  • MIT A&C Seminar; September 2016. Approximate Gaussian Elimination.
  • Aarhus University; September 2016. Lipschitz Learning on Graphs.
  • China Theory Week, Hong Kong; August 2016. Approximate Gaussian Elimination.
  • SIAM Annual Meeting, Boston; July 2016. Approximate Cholesky Factorization.
  • STOC, Boston; June 2016. Sparsified Cholesky and Multigrid Solvers for Connection Laplacians.
  • IT University of Copenhagen; December 2015. Lipschitz Learning and Isotonic Regression on Graphs.

Code

Laplacians.jl
Laplacians.jl is a Julia package of graph algorithms, especially ones related to spectral and algebraic graph theory, maintained by Dan Spielman. The repo contains ApproxChol, our Laplacian solver from the paper Robust and Practical Solution of Laplacian Equations by Approximate Elimination.
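For readers unfamiliar with the problem these solvers address: a Laplacian solver solves linear systems \(Lx = b\), where \(L\) is the Laplacian matrix of a graph. The following is a minimal Python sketch on a made-up four-vertex path graph, using a dense pseudoinverse purely for illustration; it is not the ApproxChol algorithm, whose point is to solve such systems approximately in near-linear time on large sparse graphs.

```python
import numpy as np

# Build the Laplacian L = D - A of a small path graph (toy example).
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1.0
    L[v, v] += 1.0
    L[u, v] -= 1.0
    L[v, u] -= 1.0

# Lx = b is solvable exactly when b is orthogonal to the all-ones
# vector, the null space of a connected graph's Laplacian.
b = np.array([1.0, 0.0, 0.0, -1.0])

# Dense pseudoinverse solve, fine for a toy instance; fast solvers
# replace this step with an approximate sparse factorization.
x = np.linalg.pinv(L) @ b
assert np.allclose(L @ x, b)
```

Electrically, this instance computes the voltages induced by pushing one unit of current from vertex 0 to vertex 3 when every edge is a unit resistor.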
YINSlex Github Repository
Our implementations of the lex-minimization algorithms from the paper Algorithms for Lipschitz Learning on Graphs. The code was written by Anup Rao, Sushant Sachdeva, Dan Spielman, and myself.
Isotonic Github Repository
An implementation of the least-squares isotonic regression algorithm from the paper Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)-norms. The code was written by Anup Rao, Sushant Sachdeva, and myself.
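As a point of comparison for the line-ordered special case: least-squares isotonic regression on a path can be computed by the classic pool-adjacent-violators (PAV) method. The Python sketch below shows that standard textbook algorithm; it is not the paper's algorithm, which handles all \(\ell_p\)-norms and partial orders given by general DAGs.

```python
def isotonic_l2(y):
    """Least-squares isotonic fit on a line via pool-adjacent-violators.

    Finds the nondecreasing x minimizing sum((x_i - y_i)^2).
    """
    # Maintain blocks as [total, count]; a block's fitted value is its mean.
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        # Merge while the previous block's mean exceeds the newest block's,
        # i.e. while the monotonicity constraint is violated.
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    # Expand each block back to one fitted value per input point.
    out = []
    for total, count in blocks:
        out.extend([total / count] * count)
    return out

print(isotonic_l2([3.0, 1.0, 2.0, 4.0]))  # → [2.0, 2.0, 2.0, 4.0]
```

Here the first three points pool to their mean 2.0 because 3, 1, 2 violate monotonicity, while the final point 4.0 is already in order and stays untouched.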

Contact

I can be reached by email at

[last name] @ inf.ethz.ch

[last name] = kyng