About Me
I am an assistant professor at
the Department of Computer Science, ETH
Zurich,
which I joined in the fall of 2019.
My research focuses on fast algorithms for graph problems and convex optimization,
probability and discrepancy theory, fine-grained complexity theory, and applications in
machine learning.
My research is supported in part by project grant no. 200021 204787
and starting grant no. TMSGI2 218022 of the Swiss National Science
Foundation.
I spent 2018 and the spring of 2019 as a postdoc in the Theory of
Computation Group at
Harvard working with Jelani
Nelson.
Before that, I spent the fall semester of 2017 as a
research fellow at the
Simons Institute at UC Berkeley.
I finished my PhD at the Department
of Computer Science at Yale
in the summer of 2017, where I was advised by Daniel A.
Spielman.
Before attending Yale, I received a B.A. in Computer Science from the University of Cambridge in
2011.
You can read about my work on almost-linear time algorithms for maximum
and minimum cost flow in this
Quanta article "Researchers Achieve ‘Absurdly Fast’ Algorithm for Network
Flow" (Jun 8th, 2022).
Shang-Hua Teng wrote an excellent technical perspective on this work for
Communications of the ACM (Dec 13th, 2023).
Nikhil Srivastava wrote another very informative discussion
of our minimum-cost flow algorithm and our recent work on almost-linear time algorithms
for many problems in 'incremental graphs' in the Simons Institute Newsletter (Jan 22nd, 2024).
ETH Zurich News has also written about my flow algorithms and algorithms for 'partially dynamic' graphs
here (Jun 28th, 2024).
See also publications below.
My CV.
PhD and Postdoc Applications
PhD openings
If you're interested in applying to do a PhD in our group,
you can send an email to applications-kyng@lists.inf.ethz.ch (as a precaution against spam
filters, cc me as well).
Please include your transcript, a CV, and any other relevant information.
We may request letters of recommendation later in our interview process, but prefer to receive these directly from recommenders.
Our standard PhD program requires you to hold a master's degree when you start.
If you would like to start directly after receiving a bachelor's degree, you should apply to our
Direct Doctorate
Program (joint Master's and PhD),
and also send an email to the above address to let me know you are interested in joining my
group.
Candidates should have a strong background in theoretical computer science or mathematics.
Students in our group are usually jointly mentored by Max Probst Gutenberg and myself.
Postdoc openings
We have an opening for a two-year postdoc in our group.
Candidates should have a strong publication record in theoretical computer science
and research interests that overlap with our group.
The starting date is flexible, and we offer a competitive salary and generous funding for work travel.
To apply, send an email to applications-kyng@lists.inf.ethz.ch (as a precaution against spam
filters, cc me as well).
Please include your CV and any other relevant information.
We may request letters of recommendation later in our interview process, but prefer to receive these directly from recommenders.
Beyond the postdoc positions in our group, theory researchers interested in doing a postdoc at ETH are also highly encouraged to apply to
the
ITS junior fellowship.
Group
-
Maximilian
Probst Gutenberg, established researcher, Fall 2020-present
-
Ming Ding, PhD candidate, Fall 2020-present
-
Federico Soldà, PhD candidate, Fall 2020-present
-
Simon Meierhans, PhD candidate,
Fall 2021-present
-
Aurelio Sulser, PhD candidate, Fall 2022-present
Teaching
-
ETH Zurich; Advanced Graph Algorithms and Optimization, Spring
2023,
2022,
2021,
2020.
-
ETH Zurich; Algorithms, Probability, and Computing, Fall
2022,
2021,
2020.
-
Harvard; AM 221: Advanced Optimization, Spring 2018.
Thesis and project supervision
Our group supervises bachelor's and master's theses of students in ETHZ D-INFK, and we supervise
student projects through the courses 'Research in Computer Science' and 'Praktische Arbeit'.
In some cases, we also supervise master's theses and projects for students in other ETHZ
departments.
Learn more about our thesis and project
supervision here.
Papers
-
Almost-Linear Time Algorithms for Decremental Graphs:
Min-Cost Flow and More via Duality
-
with Jan van den Brand, Li Chen, Yang P. Liu, Simon Meierhans, Maximilian Probst Gutenberg, and
Sushant Sachdeva.
-
To appear in FOCS 2024.
-
We give the first almost-linear total time algorithm for deciding if a flow of cost at most \(F\) still exists in a directed graph, with edge costs and capacities, undergoing decremental updates, i.e., edge deletions, capacity decreases, or cost increases.
This implies almost-linear time algorithms for approximating the minimum-cost flow value and \(s\)-\(t\) distance on such decremental graphs.
Our framework additionally allows us to maintain decremental strongly connected components in almost-linear time deterministically.
These algorithms also improve over the current best known runtimes for statically computing minimum-cost flow, in both the randomized and deterministic settings.
We obtain our algorithms by taking the dual perspective, which yields cut-based algorithms.
More precisely, our algorithm computes the flow via a sequence of \(m^{1+o(1)}\) dynamic min-ratio cut problems, the dual analog of the dynamic min-ratio cycle problem that underlies recent fast algorithms for minimum-cost flow. Our main technical contribution is a new data structure that returns an approximately optimal min-ratio cut in amortized \(m^{o(1)}\) time by maintaining a tree-cut sparsifier. This is achieved by devising a new algorithm to maintain the dynamic expander hierarchy of [Goranci-Räcke-Saranurak-Tan, SODA 2021] that also works in capacitated graphs. All our algorithms are deterministic, though they can be sped up further using randomized techniques while still working against an adaptive adversary.
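For intuition, the min-ratio cycle problem mentioned above can be written as follows; this is a schematic formulation in our own notation (gradient vector \(g\), lengths \(\ell\)), not the paper's exact setup:
\[
\min_{\Delta \neq 0,\; \mathbf{B}^{\top}\Delta = 0} \; \frac{g^{\top}\Delta}{\lVert \mathbf{L}\Delta \rVert_{1}},
\]
where \(\mathbf{B}\) is the edge-vertex incidence matrix (so the constraint restricts \(\Delta\) to circulations) and \(\mathbf{L} = \mathrm{diag}(\ell)\). The min-ratio cut problem is the analogous minimization with circulations replaced by cut vectors.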
-
A Simple Dynamic Spanner via APSP
-
with Simon Meierhans and Gernot Zöcklein.
-
Manuscript.
-
We give a simple algorithm for maintaining an \(n^{o(1)}\)-approximate spanner \(H\) of a graph \(G\) with \(n\) vertices as \(G\) receives edge updates by reduction to the dynamic All-Pairs Shortest Paths (APSP) problem. Given an initially empty graph \(G\), our algorithm processes \(m\) insertions and \(n\) deletions in total time \(m^{1 + o(1)}\) and maintains an initially empty spanner \(H\) with total recourse \(n^{1 + o(1)}\). When the number of insertions is much larger than the number of deletions, this notably yields recourse sub-linear in the total number of updates.
Our algorithm only has a single \(O(\log n)\) factor overhead in runtime and approximation compared to the underlying APSP data structure. Therefore, future improvements for APSP will directly yield an improved dynamic spanner.
-
Bootstrapping Dynamic APSP via Sparsification
-
with Simon Meierhans and Gernot Zöcklein.
-
Manuscript.
-
We give a simple algorithm for the dynamic approximate All-Pairs Shortest Paths (APSP) problem. Given a graph \(G = (V, E, l)\) with polynomially bounded edge lengths, our data structure processes \(|E|\) edge insertions and deletions in total time \(|E|^{1 + o(1)}\) and provides query access to \(|E|^{o(1)}\)-approximate distances in time \(\tilde{O}(1)\) per query.
We produce a data structure that mimics Thorup-Zwick distance oracles [TZ05], but is dynamic and deterministic. Our algorithm selects a small number of pivot vertices. Then, for every other vertex, it reduces distance computation to maintaining distances to a small neighborhood around that vertex and to the nearest pivot. We maintain distances between pivots efficiently by representing them in a smaller graph and recursing. We construct these smaller graphs by (a) reducing vertex count using the dynamic distance-preserving core graphs of Kyng-Meierhans-Probst Gutenberg [KMP24] in a black-box manner and (b) reducing edge-count using a dynamic spanner akin to Chen-Kyng-Liu-Meierhans-Probst Gutenberg [CKL+24]. Our dynamic spanner internally uses an APSP data structure. Choosing a large enough size reduction factor in the first step allows us to simultaneously bootstrap our spanner and a dynamic APSP data structure.
Notably, our approach does not need expander graphs, an otherwise ubiquitous tool in derandomization.
-
Optimal Electrical Oblivious Routing on Expanders
-
with Cella Florescu, Maximilian Probst Gutenberg, and Sushant Sachdeva.
-
In ICALP 2024.
-
In this paper, we investigate the question of whether the electrical flow routing is a good oblivious routing scheme on an \(m\)-edge graph \(G = (V, E)\) that is a \(\Phi\)-expander, i.e. where \(\lvert \partial S \rvert \geq \Phi \cdot \mathrm{vol}(S)\) for every \(S \subseteq V\) with \(\mathrm{vol}(S) \leq \mathrm{vol}(V)/2\). Beyond its simplicity and structural importance, this question is well-motivated by the current state-of-the-art of fast algorithms for \(\ell_{\infty}\) oblivious routings that reduce to the expander case, which is in turn solved by electrical flow routing.
Our main result proves that the electrical routing is an \(O(\Phi^{-1} \log m)\)-competitive oblivious routing in the \(\ell_1\)- and \(\ell_\infty\)-norms. We further observe that the oblivious routing is \(O(\log^2 m)\)-competitive in the \(\ell_2\)-norm and, in fact, \(O(\log m)\)-competitive if \(\ell_2\)-localization is \(O(\log m)\), which is widely believed.
Using these three upper bounds, we can smoothly interpolate to obtain upper bounds for every \(p \in [2, \infty]\) and \(q\) given by \(1/p + 1/q = 1\). Assuming \(\ell_2\)-localization in \(O(\log m)\), we obtain that in \(\ell_p\) and \(\ell_q\), the electrical oblivious routing is \(O(\Phi^{-(1-2/p)}\log m)\) competitive. Using the currently known result for \(\ell_2\)-localization, this ratio deteriorates by at most a sublogarithmic factor for every \(p, q \neq 2\).
We complement our upper bounds with lower bounds that show that the electrical routing for any such \(p\) and \(q\) is \(\Omega(\Phi^{-(1-2/p)}\log m)\)-competitive. This renders our results in \(\ell_1\) and \(\ell_{\infty}\) unconditionally tight up to constants, and the result in any \(\ell_p\)- and \(\ell_q\)-norm to be tight in case of \(\ell_2\)-localization in \(O(\log m)\).
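To illustrate the routing scheme itself (as opposed to the competitive analysis), the electrical flow for a single demand can be computed directly from the Laplacian pseudoinverse; a minimal numpy sketch on made-up example data:

```python
import numpy as np

# Toy graph with unit conductances (made-up example data).
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
w = np.ones(len(edges))  # conductances

# Edge-vertex incidence matrix B: row e has +1 at the tail, -1 at the head.
B = np.zeros((len(edges), n))
for e, (u, v) in enumerate(edges):
    B[e, u], B[e, v] = 1.0, -1.0

L = B.T @ np.diag(w) @ B             # graph Laplacian L = B^T W B
b = np.array([1.0, 0.0, 0.0, -1.0])  # demand: one unit from vertex 0 to 3

phi = np.linalg.pinv(L) @ b          # vertex potentials phi = L^+ b
f = w * (B @ phi)                    # electrical flow f = W B L^+ b

# The map b -> f is linear and independent of any other demands,
# which is exactly what makes this routing oblivious.
assert np.allclose(B.T @ f, b)       # flow conservation: B^T f = b
```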
-
A Framework for Parallelizing Approximate Gaussian Elimination
-
with Yves Baumann.
-
In SPAA 2024.
-
In a breakthrough result, Spielman and Teng (2004) developed a nearly-linear time solver for Laplacian linear equations, i.e. equations where the coefficient matrix is symmetric with non-positive off-diagonal entries and zero row sums. Since the development of the Spielman-Teng solver, there has been substantial progress, simplifying and improving their result, but obtaining a fast, practical, parallel Laplacian solver remains an open problem.
We present a framework for obtaining extremely simple, parallel Laplacian linear equation solvers with nearly-linear work and sublinear depth. Our framework allows us to parallelize any Laplacian solver based on repeated single-vertex approximate Gaussian elimination. We demonstrate this by parallelizing both the algorithm of Kyng and Sachdeva (2016) and the practical variant by Gao, Kyng, and Spielman (2023). Our framework is work-efficient in the sense of matching the sequential work of these algorithms.
Our parallelization framework is very simple: We sample a subset of the current low-degree vertices (sparse columns), and in parallel we eliminate all vertices that are isolated in the resulting induced subgraph. This approach can be combined with any parallelizable approximate single-vertex elimination subroutine with sparse output. Given the simplicity of the approach, we believe that using it to parallelize the solver of Gao, Kyng, and Spielman (2023) is the most promising direction for obtaining practical parallel Laplacian solvers.
If we additionally use a parallel spectral sparsification routine, our approach can be modified to work in polylogarithmic depth and nearly-linear work.
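The sampling step described above fits in a few lines; a hedged Python sketch of one selection round, where the degree bound and sampling probability are illustrative choices (not the paper's parameters) and the single-vertex elimination subroutine is abstracted away:

```python
import random

def select_elimination_set(adj, degree_bound=8, sample_prob=0.5):
    """One round of the parallelization framework (sketch).

    adj maps each vertex to the set of its neighbours. We sample a subset
    of the current low-degree vertices, then keep only those that are
    isolated in the subgraph induced by the sample: their eliminations
    touch disjoint parts of the matrix, so they can run in parallel.
    """
    low_degree = [v for v in adj if len(adj[v]) <= degree_bound]
    sampled = {v for v in low_degree if random.random() < sample_prob}
    return {v for v in sampled if not (adj[v] & sampled)}
```

Each returned vertex can then be handed to any parallelizable approximate single-vertex elimination subroutine, and the round is repeated on the reduced graph.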
-
Almost-Linear Time Algorithms for Incremental Graphs:
Cycle Detection, SCCs, s-t Shortest Path, and Minimum-Cost Flow
-
with Li Chen, Yang P. Liu, Simon Meierhans, and Maximilian Probst Gutenberg.
-
In STOC 2024.
-
We give the first almost-linear time algorithms for several problems in incremental
graphs
including cycle detection, strongly connected component maintenance, \(s\)-\(t\)
shortest path, maximum flow, and minimum-cost flow.
To solve these problems, we give a deterministic data structure that returns an
\(m^{o(1)}\)-approximate minimum-ratio cycle in fully dynamic graphs in amortized
\(m^{o(1)}\) time per update.
Combining this with the interior point method framework of Brand-Liu-Sidford (STOC
2023) gives the first almost-linear time algorithm for deciding the first update in
an incremental graph after which the cost of the minimum-cost flow attains value at
most some given threshold \(F\).
By rather direct reductions to minimum-cost flow, we are then able to solve the
problems in incremental graphs mentioned above.
Our new data structure also leads to a modular and deterministic almost-linear time
algorithm for minimum-cost flow by removing the need for complicated modeling of a
restricted adversary, in contrast to recent randomized and deterministic algorithms
for minimum-cost flow in
Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva (FOCS 2022) &
Brand-Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva-Sidford (FOCS 2023).
At a high level, our algorithm dynamizes the \(\ell_1\) oblivious routing of
Rozhoň-Grunau-Haeupler-Zuzic-Li (STOC 2022), and develops a method to extract an
approximate minimum ratio cycle from the structure of the oblivious routing. To
maintain the oblivious routing, we use tools from concurrent work of
Kyng-Meierhans-Probst Gutenberg which designed vertex sparsifiers for shortest
paths, in order to maintain a sparse neighborhood cover in fully dynamic graphs.
To find a cycle, we first show that an approximate minimum ratio cycle can be
represented as a fundamental cycle on a small set of trees resulting from the
oblivious routing. Then, we find a cycle whose quality is comparable to the best
tree cycle.
This final cycle query step involves vertex and edge sparsification procedures
reminiscent of the techniques
introduced in Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva (FOCS 2022), but
crucially requires a more powerful dynamic spanner, which can handle far more edge
insertions than prior work.
We build such a spanner via a construction that hearkens back to the classic greedy
spanner algorithm of
Althöfer-Das-Dobkin-Joseph-Soares
(Discrete & Computational Geometry 1993).
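For context, the classic greedy spanner referenced in the last sentence is short enough to state exactly; a minimal static Python version (the dynamic spanner in the paper is substantially more involved):

```python
import heapq

def greedy_spanner(n, edges, stretch):
    """Althöfer et al.: scan edges by increasing weight and keep (u, v, w)
    iff the current spanner distance from u to v exceeds stretch * w."""
    adj = [[] for _ in range(n)]  # spanner adjacency: (neighbour, weight)
    spanner = []
    for w, u, v in sorted((w, u, v) for u, v, w in edges):
        if spanner_dist(adj, u, v, cutoff=stretch * w) > stretch * w:
            spanner.append((u, v, w))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner

def spanner_dist(adj, src, dst, cutoff):
    """Dijkstra distance from src to dst, pruned once paths exceed cutoff."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, x = heapq.heappop(heap)
        if x == dst:
            return d
        if d > dist.get(x, float("inf")) or d > cutoff:
            continue
        for y, w in adj[x]:
            if d + w < dist.get(y, float("inf")):
                dist[y] = d + w
                heapq.heappush(heap, (d + w, y))
    return float("inf")
```

With stretch \(2k-1\), the resulting spanner has \(O(n^{1+1/k})\) edges.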
-
A Dynamic Shortest Paths Toolbox: Low-Congestion Vertex Sparsifiers and their
Applications
-
with Simon Meierhans and Maximilian Probst Gutenberg.
-
In STOC 2024.
-
We present a general toolbox, based on new vertex sparsifiers, for designing data
structures to maintain shortest paths in dynamic graphs.
In an \(m\)-edge graph undergoing edge insertions and deletions, our data
structures give the first algorithms for maintaining (a) \(m^{o(1)}\)-approximate
all-pairs shortest paths (APSP) with worst-case update time \(m^{o(1)}\) and
query time \(\tilde{O}(1)\), and (b) a tree \(T\) that has diameter no larger than a
subpolynomial factor times the diameter of the underlying graph, where each update
is handled in amortized subpolynomial time.
In graphs undergoing only edge deletions, we develop a simpler and more efficient data structure to maintain a \((1+\epsilon)\)-approximate single-source shortest paths (SSSP) tree \(T\) in amortized time \(m^{o(1)}\) per update.
Our data structures are deterministic. The trees we can maintain are not
subgraphs of \(G\), but embed with small edge congestion into \(G\). This is in
stark
contrast to previous approaches and is useful for algorithms that internally use
trees to route flow.
To illustrate the power of our new toolbox, we show that our SSSP data structure
gives simple deterministic implementations of flow-routing MWU methods in several
contexts, where previously only randomized methods had been known.
To obtain our toolbox, we give the first algorithm that, given a graph \(G\)
undergoing edge insertions and deletions and a dynamic terminal set \(A\), maintains
a
vertex sparsifier \(H\) that approximately preserves distances between terminals in
\(A\), consists of at most \(|A|m^{o(1)}\) vertices and edges, and can be updated in
worst-case time \(m^{o(1)}\).
Crucially, our vertex sparsifier construction allows us to maintain a low
edge-congestion embedding of \(H\) into \(G\), which is needed for our applications.
-
Incremental Approximate Maximum Flow on Undirected Graphs in Subpolynomial Update
Time
-
with Jan van den Brand, Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg,
Sushant Sachdeva, and Aaron Sidford.
-
In SODA 2024.
-
We provide an algorithm which, with high probability, maintains a
\((1-\epsilon)\)-approximate maximum flow on an undirected graph undergoing
\(m\) edge additions in amortized \(m^{o(1)} \epsilon^{-3}\) time per update. To
obtain this result, we provide a more general algorithm that solves what we call the
incremental, thresholded \(p\)-norm flow problem that asks to determine the first
edge-insertion in an undirected graph that causes the minimum \(\ell_p\)-norm flow
to decrease below a given threshold in value. Since we solve this thresholded
problem, our data structure succeeds against an adaptive adversary that can only see
the data structure's output. Furthermore, since our algorithm holds for \(p =
2\), we obtain improved algorithms for dynamically maintaining the effective
resistance between a pair of vertices in an undirected graph undergoing edge
insertions. Our algorithm builds upon previous dynamic algorithms for approximately
solving the minimum-ratio cycle problem that underlie previous advances on the
maximum flow problem [Chen-Kyng-Liu-Peng-Probst Gutenberg-Sachdeva, FOCS '22] as
well as recent dynamic maximum flow algorithms [v.d.Brand-Liu-Sidford, STOC
'23]. Instead of using interior point methods, which were a key component of
these recent advances, our algorithm uses an optimization method based on
\(\ell_p\)-norm iterative refinement and the multiplicative weight update method.
This ensures a monotonicity property in the minimum-ratio cycle subproblems that
allows us to apply known data structures and bypass issues arising from adaptive
queries.
-
A Deterministic Almost-Linear Time Algorithm for Minimum-Cost Flow
-
with Jan van den Brand, Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg,
Sushant Sachdeva, and Aaron Sidford.
-
In FOCS 2023.
-
We give a deterministic algorithm that computes exact maximum flows and minimum-cost
flows on directed graphs with \(m\) edges and polynomially bounded integral demands,
costs, and capacities in \(m^{1+o(1)}\) time. As a consequence, we obtain the first
improvement in the running time of deterministic algorithms for computing
maximum-flow in graphs with polynomially bounded capacities since the work of
Goldberg-Rao [J.ACM '98].
Our algorithm is based on the flow framework of
Chen-Kyng-Liu-Peng-Gutenberg-Sachdeva [FOCS '22] that computes the optimal flow
through a sequence of \(m^{1+o(1)}\) approximate undirected minimum-ratio cycles. We
obtain a deterministic dynamic graph data-structure to process such a sequence of
minimum-ratio cycles in an amortized \(m^{o(1)}\) time.
Our two key technical contributions are 1) a deterministic analog of the vertex
sparsification component of the data-structure from Chen et al. that was
based on sampling random low-stretch trees, and 2) a deterministic edge
sparsification component based on a new construction of dynamic spanners
with explicit embeddings. Our methods can also be applied to give a deterministic
data structure that maintains a fully dynamic low-stretch spanning tree on graphs
with polynomially large lengths, with subpolynomial average stretch and requiring
only subpolynomial amortized update time.
-
Robust and Practical Solution of Laplacian Equations by Approximate Elimination
-
with Yuan Gao and Daniel A. Spielman.
-
Manuscript.
-
We introduce a new algorithm and software for solving linear equations in symmetric
diagonally dominant matrices with non-positive off-diagonal entries (SDDM matrices),
including Laplacian matrices. We use pre-conditioned conjugate gradient (PCG) to
solve the system of linear equations. Our preconditioner is a variant of the
Approximate Cholesky factorization of Kyng and Sachdeva (FOCS 2016). Our
factorization approach is simple: we eliminate matrix rows/columns one at a time and
update the remaining matrix using sampling to approximate the outcome of complete
Cholesky factorization. Unlike earlier approaches, our sampling always maintains connectivity in the remaining non-zero structure. Our algorithm comes with a tuning
parameter that upper bounds the number of samples made per original entry.
We implement our algorithm in Julia, providing two versions, AC and AC2, that
respectively use 1 and 2 samples per original entry. We compare their
single-threaded performance to that of current state-of-the-art solvers
Combinatorial Multigrid (CMG), BoomerAMG-preconditioned Krylov solvers from HyPre
and PETSc, Lean Algebraic Multigrid (LAMG), and MATLAB’s PCG with Incomplete
Cholesky Factorization (ICC). Our evaluation uses a broad class of problems,
including all large SDDM matrices from the SuiteSparse collection and diverse
programmatically generated instances. Our experiments suggest that our algorithm
attains a level of robustness and reliability not seen before in SDDM solvers, while
retaining good performance across all instances.
Our code and data are public, and we provide a tutorial on how to replicate our
tests. We hope that others will adopt this suite of tests as a benchmark, which we
refer to as SDDM2023. Our solver code is available on
the
Laplacians.jl Github Repo.
Our benchmarking data and tutorial are available in our
SDDM2023 Benchmark.
Arxiv
-
This paper describes ApproxChol, the Laplacian solver in
Laplacians.jl.
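Since the solver's outer loop is standard preconditioned conjugate gradient, a generic sketch may be helpful; here `precond` is an abstract stand-in for the approximate-Cholesky solve (this is our illustration, not the library's API; the real implementation is the Julia code linked above):

```python
import numpy as np

def pcg(A, b, precond, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric PSD matrix A.

    precond(r) should approximately solve A z = r, e.g. via forward and
    backward substitution with an approximate Cholesky factorization.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```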
-
Maintaining Expander Decompositions via Sparse Cuts
-
with Yiding Hua, Maximilian Probst Gutenberg, and Zihang Wu.
-
In SODA 2023.
-
In this article, we show that the natural algorithm for maintaining expander decompositions in graphs undergoing edge deletions, namely repeatedly removing sparse cuts, can be made efficient. Formally, for an \(m\)-edge undirected graph
\(G\), we say a cut \((S, \bar{S})\)
is \(\phi\)-sparse if \(|E_G(S, \bar{S})| < \phi \cdot \min\{vol_G(S),
vol_G(\bar{S})\}\). A \(\phi\)-expander decomposition of \(G\) is a partition of \(V\) into sets \(X_1, X_2, \ldots, X_k\), with \(\tilde{O}(\phi m)\) edges crossing between clusters, such that each cluster \(G[X_i]\) contains no \(\phi\)-sparse cut (meaning it is a \(\phi\)-expander). A natural way to compute a \(\phi\)-expander
decomposition is to decompose clusters by \(\phi\)-sparse cuts until no such cut
is contained in any cluster. We show that even in graphs undergoing edge
deletions, a slight relaxation of this meta-algorithm can be implemented
efficiently with amortized update time \(m^{o(1)}/\phi^2\). Our approach naturally
extends to maintaining directed \(\phi\)-expander
decompositions and \(\phi\)-expander hierarchies and thus gives a unifying
framework while having simpler proofs than previous state-of-the-art work. In
all settings, our algorithm matches the run-times of previous algorithms up to
subpolynomial factors. Moreover, our algorithm provides stronger guarantees for
\(\phi\)-expander decompositions, for example, for graphs undergoing edge
deletions, our approach achieves the first sublinear \(\phi m^{o(1)}\) recourse
bounds on the number of edges to become crossing between clusters. Our techniques
also give by far the simplest, deterministic algorithms for
maintaining Strongly-Connected Components (SCCs) in directed graphs undergoing
edge deletions, and for maintaining connectivity in undirected fully-dynamic
graphs, both matching the current state-of-the-art run-times up to
subpolynomial factors.
Arxiv
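The static version of the meta-algorithm above is easy to state; a brute-force Python sketch (the exhaustive cut search is exponential, so this is purely illustrative on tiny graphs; making a relaxation of this efficient, and dynamic, is the paper's contribution):

```python
from itertools import combinations

def sparse_cut(vertices, edges, phi):
    """Find a phi-sparse cut in the cluster, or return None.
    Brute force over all subsets; only viable for tiny clusters."""
    vol = {v: 0 for v in vertices}
    for u, v in edges:
        vol[u] += 1
        vol[v] += 1
    for k in range(1, len(vertices)):
        for S in combinations(sorted(vertices), k):
            S = set(S)
            crossing = sum(1 for u, v in edges if (u in S) != (v in S))
            if crossing < phi * min(sum(vol[v] for v in S),
                                    sum(vol[v] for v in vertices - S)):
                return S
    return None

def expander_decomposition(vertices, edges, phi):
    """Split clusters along phi-sparse cuts until none remain."""
    clusters, done = [set(vertices)], []
    while clusters:
        C = clusters.pop()
        E_C = [(u, v) for u, v in edges if u in C and v in C]
        S = sparse_cut(C, E_C, phi)
        if S is None:
            done.append(C)
        else:
            clusters += [S, C - S]
    return done
```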
-
A Simple Framework for Finding Balanced Sparse Cuts via APSP
-
with Li Chen, Maximilian Probst Gutenberg, and Sushant Sachdeva.
-
In SOSA 2023.
-
We present a very simple and intuitive algorithm to find balanced sparse cuts
in a graph via shortest-paths. Our algorithm combines a new
multiplicative-weights framework for solving unit-weight multi-commodity flows
with standard ball growing arguments. Using Dijkstra's algorithm for computing
the shortest paths afresh every time gives a very simple algorithm that runs in
time \(\widetilde{O}(m^2/\phi)\) and finds an \(\widetilde{O}(\phi)\)-sparse
balanced cut, when the given graph has a \(\phi\)-sparse balanced cut. Combining
our algorithm with known deterministic data-structures for answering
approximate All Pairs Shortest Paths (APSP) queries under increasing edge
weights (decremental setting), we obtain a simple deterministic algorithm that
finds \(m^{o(1)}\phi\)-sparse balanced cuts in \(m^{1+o(1)}/\phi\) time. Our
deterministic almost-linear time algorithm matches the state-of-the-art in
randomized and deterministic settings up to subpolynomial factors, while being
significantly simpler to understand and analyze, especially compared to the
only almost-linear time deterministic algorithm, a recent breakthrough by
Chuzhoy-Gao-Li-Nanongkai-Peng-Saranurak (FOCS 2020).
Arxiv
-
Maximum Flow and Minimum-Cost Flow in Almost-Linear Time
-
with Li Chen, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva.
-
In FOCS 2022.
Won the FOCS Best Paper Award.
-
Won an inaugural ICBS Frontiers of Science Award in Theoretical Computer Science.
-
We give an algorithm that computes exact maximum flows and minimum-cost flows
on directed graphs with \(m\) edges and polynomially bounded integral demands,
costs, and capacities in \(m^{1+o(1)}\) time. Our algorithm builds the flow
through a sequence of \(m^{1+o(1)}\) approximate undirected minimum-ratio cycles,
each of which is computed and processed in amortized \(m^{o(1)}\) time using a
dynamic data structure.
Our framework extends to an algorithm running in \(m^{1+o(1)}\) time for
computing flows that minimize general edge-separable convex functions to high
accuracy. This gives an almost-linear time algorithm for several problems
including entropy-regularized optimal transport, matrix scaling, \(p\)-norm
flows, and isotonic regression.
-
Derandomizing Random Walks in Almost-Linear Time
-
with Simon Meierhans and Maximilian Probst Gutenberg.
-
In FOCS 2022.
-
In this article, we present the first deterministic solver for directed Laplacian linear systems \(L\) that runs in time almost-linear in the number of non-zero entries of \(L\). Previous reductions imply the first deterministic almost-linear
time algorithms for computing various fundamental quantities on directed graphs
including stationary distributions, personalized PageRank, hitting times and
escape probabilities. We obtain these results by introducing partial symmetrization,
a new
technique that makes the Laplacian of an Eulerian directed graph 'less directed' in a useful sense, which may be of independent interest. The
usefulness of this technique comes from two key observations: Firstly, the
partially symmetrized Laplacian preconditions the original Eulerian Laplacian
well in Richardson iteration, enabling us to construct a solver for the
original matrix from a solver for the partially symmetrized one. Secondly, the
undirected structure in the partially symmetrized Laplacian makes it possible
to sparsify the matrix very crudely, i.e. with large spectral error, and still
show that Richardson iteration converges when using the sparsified matrix as
a preconditioner. This allows us to develop deterministic sparsification tools
for the partially symmetrized Laplacian. Together with previous reductions from
directed Laplacians to Eulerian
Laplacians, our technique results in the first deterministic almost-linear time
algorithm for solving linear equations in directed Laplacians. To emphasize the
generality of our new technique, we show that two prominent existing
(randomized) frameworks for solving linear equations in Eulerian Laplacians can
be derandomized in this way: the squaring-based framework of Cohen, Kelner,
Peebles, Peng, Rao, Sidford and Vladu (STOC 2017) and the sparsified
Cholesky-based framework of Peng and Song (STOC 2022).
Arxiv
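The first key observation relies on preconditioned Richardson iteration, which is simple enough to sketch generically; `precond_solve` below abstracts the solver for the partially symmetrized Laplacian, whose construction is not reproduced here:

```python
import numpy as np

def richardson(A, b, precond_solve, iters=100):
    """Preconditioned Richardson iteration: x <- x + P^{-1}(b - A x).

    This converges when the preconditioner P approximates A well enough
    that the error map I - P^{-1} A is a contraction in a suitable norm.
    """
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + precond_solve(b - A @ x)
    return x
```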
-
Scalar and Matrix Chernoff Bounds from \(\ell_{\infty}\)-Independence
-
with Tali Kaufman and Federico Soldà.
-
In SODA 2022.
-
We present new scalar and matrix Chernoff-style concentration bounds for a
broad class of probability distributions over the binary hypercube \(\{0,1\}^n\).
Motivated by recent tools developed for the study of mixing times of Markov
chains on discrete distributions, we say that a distribution is
\(\ell_\infty\)-independent when the infinity norm of its influence matrix
\(\mathcal{I}\) is bounded by a constant. We show that any distribution which is
\(\ell_\infty\)-independent satisfies a matrix Chernoff bound that
matches the matrix Chernoff bound for independent random variables due to
Tropp. Our matrix Chernoff bound is a broad generalization and strengthening of
the matrix Chernoff bound of Kyng and Song (FOCS'18). Using our bound, we can
conclude as a corollary that a union of \(O(\log|V|)\) random spanning trees
gives a spectral graph sparsifier of a graph with \(|V|\) vertices with high
probability, matching results for independent edge sampling, and matching lower
bounds from Kyng and Song.
Arxiv
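For reference, the independent-case benchmark matched here is Tropp's matrix Chernoff bound, which in one standard form (quoted from memory, not from the paper) states that for independent random \(d \times d\) PSD matrices \(X_i\) with \(\lambda_{\max}(X_i) \leq R\) and \(\mu_{\max} = \lambda_{\max}(\mathbb{E} \sum_i X_i)\),
\[
\Pr\Big[\lambda_{\max}\Big(\textstyle\sum_i X_i\Big) \geq (1+\delta)\,\mu_{\max}\Big]
\;\leq\; d \cdot \Big(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\Big)^{\mu_{\max}/R}
\qquad \text{for } \delta \geq 0.
\]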
-
Incremental SSSP for Sparse Digraphs Beyond the Hopset Barrier
-
with Simon Meierhans and Maximilian Probst Gutenberg.
-
In SODA 2022.
-
In the incremental single-source shortest paths (SSSP) problem, a directed graph undergoes edge insertions, and we must maintain approximate distances from a fixed source. Recently, Gutenberg, Williams and Wein [STOC'20] introduced a deterministic \(\tilde{O}(n^2)\) algorithm for this problem, achieving near-linear time for
very dense graphs. For sparse graphs, Chechik and Zhang [SODA'21] recently
presented a deterministic \(\tilde{O}(m^{5/3})\) algorithm, and an adaptive
randomized algorithm with run-time \(\tilde{O}(m\sqrt{n} + m^{7/5})\). This
algorithm is remarkable for two reasons: 1) in very sparse graphs it reaches the directed hopset barrier of \(\tilde{\Omega}(n^{3/2})\) that applies to all previous approaches for partially-dynamic SSSP [STOC'14, SODA'20, FOCS'20], and 2) it does not resort to a directed hopset technique itself.
In this article, we introduce 'propagation synchronization', a new technique for controlling the error build-up on paths throughout batches of insertions. This leads us to a significant improvement of the approach in [SODA'21], yielding a deterministic \(\tilde{O}(m^{3/2})\) algorithm for
the problem. By a very careful combination of our new technique with the
sampling approach from [SODA'21], we further obtain an adaptive randomized
algorithm with total update time \(\tilde{O}(m^{4/3})\). This is the first
partially-dynamic SSSP algorithm in sparse graphs to bypass the notorious
directed hopset barrier which is often seen as the fundamental challenge
towards achieving truly near-linear time algorithms.
Arxiv
-
Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation
Complete Gadgets
-
with Ming Ding, Maximilian Probst Gutenberg, and Peng Zhang.
-
In ICALP 2022.
-
We study linear equations in combinatorial Laplacians of \(k\)-dimensional simplicial complexes (\(k\)-complexes), a natural generalization of graph Laplacians. Combinatorial
Laplacians play a crucial role in homology and are a central tool in topology.
Beyond this, they have various applications in data analysis and physical modeling
problems. It is known that nearly-linear time solvers exist for graph Laplacians.
However, nearly-linear time solvers for combinatorial Laplacians are only known for
restricted classes of complexes.
This paper shows that linear equations in combinatorial Laplacians of 2-complexes
are as hard to solve as general linear equations. More precisely, for any constant
\(c \geq 1\), if we can solve linear equations in combinatorial Laplacians of 2-complexes to high accuracy in time \(\tilde{O}((\text{number of nonzero coefficients})^c)\), then we can solve general linear equations with polynomially bounded integer coefficients and condition numbers to high accuracy in time \(\tilde{O}((\text{number of nonzero coefficients})^c)\). We prove this by a nearly-linear time reduction from general
linear equations to combinatorial Laplacians of 2-complexes. Our reduction preserves
the sparsity of the problem instances up to poly-logarithmic factors.
Arxiv
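For readers who have not met them, the combinatorial (Hodge) Laplacians of a complex are built from its boundary maps \(\partial_k\); in standard notation (ours, not the paper's):
\[
L_k \;=\; \partial_{k}^{*}\,\partial_{k} \;+\; \partial_{k+1}\,\partial_{k+1}^{*},
\]
so that for a 1-complex (a graph), \(L_0 = \partial_1 \partial_1^{*}\) is the usual graph Laplacian.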
-
Two-Commodity Flow is as Hard as Linear Programming
-
with Ming Ding and Peng Zhang.
-
In ICALP 2022.
-
We give a nearly-linear time reduction that encodes any linear
program as a 2-commodity flow problem with only a polylogarithmic
blow-up in size.
Our reduction applies to high-accuracy approximation algorithms and
exact algorithms.
Given an approximate solution to the 2-commodity flow problem, we
can extract a solution to the linear program in linear time with
only a polynomial factor increase in the error.
This implies that
any algorithm that solves the 2-commodity flow problem can solve linear
programs in essentially the same time.
Given a directed graph with edge capacities and two source-sink pairs, the goal of
the 2-commodity flow problem is to maximize the sum of the flows routed between the
two source-sink pairs subject to edge capacities and flow conservation.
A 2-commodity flow problem can be formulated as a linear program,
which can be solved to high accuracy in almost the current matrix
multiplication time (Cohen-Lee-Song JACM'21). In this paper, we show
that linear programs can be approximately solved, to high accuracy,
using 2-commodity flow as well. As a corollary, if a 2-commodity flow
problem can be approximately solved in time \(O(|E|^c
\operatorname{polylog}(U|E|\epsilon^{-1}))\), where \(E\) is the
graph edge set, \(U\) is the ratio of maximum to minimum edge capacity, \(\epsilon\)
is
the multiplicative error parameter, and \(c\) is a constant greater than
or equal to 1, then a linear program with integer coefficients and
feasible set radius \(r\) can be approximately solved in time \(O(N^c
\operatorname{polylog}((r+1)X\epsilon^{-1}))\), where \(N\) is the number
of nonzeros and \(X\) is the largest magnitude of the coefficients.
Thus a solver for 2-commodity flow with running time exponent \(c < \omega\),
where
\(\omega \leq 2.37286\) is the matrix multiplication constant, would improve the
running time for solving sparse linear programs. Our proof follows the outline
of Itai’s polynomial-time reduction of a linear program to a 2-commodity flow
problem (JACM’78). Itai's reduction shows that exactly solving 2-commodity flow
and exactly solving linear programming are polynomial-time equivalent. We
improve Itai’s reduction to preserve the sparsity of all the intermediate steps.
In addition, we establish an error bound for approximately solving each
intermediate problem in the reduction, and show that the accumulated error is
polynomially bounded. We remark that our reduction does not run in strongly
polynomial time and that it is open whether 2-commodity flow and linear
programming are equivalent in strongly polynomial time.
Arxiv
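For concreteness, the easy direction, writing 2-commodity flow as an LP, can be displayed in one line; a standard formulation in our notation (the paper's contribution is the hard converse direction):
\[
\max_{f^1,\, f^2 \,\geq\, 0} \;\; \mathrm{val}(f^1) + \mathrm{val}(f^2)
\quad \text{s.t.} \quad f^1_e + f^2_e \leq u_e \;\; \text{for all } e \in E,
\]
where each \(f^i\) satisfies flow conservation at every vertex other than \(s_i, t_i\), and \(\mathrm{val}(f^i)\) denotes the net flow leaving \(s_i\).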
-
Faster Sparse Matrix Inversion and Rank Computation in Finite Fields
-
with Sílvia Casacuberta.
-
In ITCS 2022.
-
We improve the current best running time for inverting sparse matrices over
finite fields, lowering it to an expected \(O\big(n^{2.2131}\big)\) time for the
current values of fast rectangular matrix multiplication. We achieve the same
running time for the computation of the rank and nullspace of a sparse matrix
over a finite field. This improvement relies on two key techniques. First, we
adopt the decomposition of an arbitrary matrix into block Krylov and Hankel
matrices from Eberly et al. (ISSAC 2007). Second, we show how to recover the
explicit inverse of a block Hankel matrix using low displacement rank
techniques for structured matrices and fast rectangular matrix multiplication
algorithms. We generalize our inversion method to block structured matrices
with other displacement operators and strengthen the best known upper bounds
for explicit inversion of block Toeplitz-like and block Hankel-like matrices,
as well as for explicit inversion of block Vandermonde-like matrices with
structured blocks. As a further application, we improve the complexity of
several algorithms in topological data analysis and in finite group theory.
Arxiv
-
On the Oracle Complexity of Higher-Order Smooth Non-Convex Finite-Sum Optimization
-
with Nicolas Emmenegger and Ahad N. Zehmakan.
-
In AISTATS 2022 and in
the ICML 2021 workshop 'Beyond first-order methods in ML systems'.
-
We prove lower bounds for higher-order methods in smooth non-convex finite-sum
optimization.
Our contribution is threefold: We first show that a deterministic algorithm cannot
profit
from the finite-sum structure of the objective, and that simulating a \(p\)th-order
regularized
method on the whole function by constructing exact gradient information is optimal
up to constant factors.
We further show lower bounds for randomized algorithms and compare them with the
best known upper bounds.
To address some gaps between the bounds, we propose a new second-order smoothness
assumption that can be
seen as an analogue of the first-order mean-squared smoothness assumption. We prove
that it is sufficient
to ensure state-of-the-art convergence guarantees, while allowing for a sharper
lower bound.
Arxiv
-
Almost-linear-time Weighted \(\ell_p\)-norm Solvers in Slightly Dense Graphs via
Sparsification
-
with Deeksha Adil, Brian Bullins, and Sushant Sachdeva.
-
In ICALP 2021.
-
We give almost-linear-time algorithms for constructing sparsifiers
with \(n \text{poly}(\log n)\) edges that approximately preserve weighted
\((\ell^{2}_2 + \ell^{p}_p)\) flow or voltage objectives on graphs. For
flow objectives, this is the first sparsifier construction for such
mixed objectives beyond unit \(\ell_p\) weights, and is based on
expander decompositions. For voltage objectives, we give the first
sparsifier construction for these objectives, which we build using
graph spanners and leverage score sampling. Together with the
iterative refinement framework of [Adil et al, SODA 2019], and a new
multiplicative-weights based constant-approximation algorithm for
mixed-objective flows or voltages, we show how to find
\((1+2^{-\text{poly}(\log n)})\) approximations for weighted
\(\ell_p\)-norm minimizing flows or voltages in
\(p(m^{1+o(1)} + n^{4/3 + o(1)})\) time for \(p=\omega(1),\) which is
almost-linear for graphs that are slightly dense
(\(m \ge n^{4/3 + o(1)}\)).
Arxiv
-
Four Deviations Suffice for Rank 1 Matrices
-
with Kyle Luh and Zhao Song.
-
In Advances in Mathematics, Volume 375, 2 December 2020.
-
We prove a matrix discrepancy bound that strengthens the famous
Kadison-Singer result of Marcus, Spielman, and Srivastava. Consider any
independent scalar random variables \(ξ_1, \ldots, ξ_n\) with finite support,
e.g.
\(\{ \pm 1 \}\) or \(\{ 0,1 \}\)-valued random variables, or some combination
thereof. Let \(u_1, \dots, u_n \in \mathbb{C}^m\) and \( σ^2 = \left\|
\sum_{i=1}^n \text{Var}[ ξ_i ] (u_i u_i^{*})^2 \right\|.\) Then there exists
a choice of outcomes \(\varepsilon_1,\ldots,\varepsilon_n\) in the support of
\(ξ_1, \ldots, ξ_n\) s.t. \( \left \|\sum_{i=1}^n \mathbb{E} [ ξ_i] u_i
u_i^* - \sum_{i=1}^n \varepsilon_i u_i u_i^* \right \| \leq 4 σ.\) A
simple consequence of our result is an improvement of a Lyapunov-type theorem
of Akemann and Weaver.
-
Packing LPs are Hard to Solve Accurately, Assuming Linear Equations are Hard
-
with Di Wang and Peng Zhang.
-
In SODA 2020.
-
We study the complexity of approximately solving packing linear programs. In the Real RAM model, it is known how to solve packing LPs with \(N\) non-zeros in time \(\tilde{O}(N/\epsilon)\). We investigate whether the \(\epsilon\) dependence in the running time can be improved.
Our first main result relates the difficulty of this problem to hardness assumptions for solving dense linear equations. We show that, in the Real RAM model, unless linear equations in \(n \times n\) matrices with condition number \(O(n^{10})\) can be solved to \(\epsilon\) accuracy faster than \(\tilde{O}(n^{2.01} \log(1/\epsilon))\), no algorithm \((1-\epsilon)\)-approximately solves \(O(n) \times O(n)\) packing LPs (where \(N = O(n^2)\)) in time \(\tilde{O}(n^2 \epsilon^{-0.0003})\). It would be surprising to solve linear equations in the Real RAM model this fast, as we currently cannot solve them faster than \(\tilde{O}(n^{\omega})\), where \(\omega\) denotes the exponent in the running time for matrix multiplication in the Real RAM model (and equivalently matrix inversion). The current best bound on this exponent is roughly \(\omega \leq 2.372\). Note, however, that a fast solver for linear equations does not directly imply faster matrix multiplication. But, our reduction shows that if fast and accurate packing LP solvers exist, then either linear equations can be solved much faster than matrix multiplication or the matrix multiplication constant is very close to 2.
Instantiating the same reduction with different parameters, we show that unless linear equations in matrices with condition number \(O(n^{1.5})\) can be solved to \(\epsilon\) accuracy faster than \(\tilde{O}(n^{2.372} \log(1/\epsilon))\), no algorithm \((1-\epsilon)\)-approximately solves packing LPs in time \(\tilde{O}(n^2 \epsilon^{-0.067})\). Thus smaller improvements in the exponent for \(\epsilon\) in the running time of packing LP solvers also imply improvements in the current state-of-the-art for solving linear equations.
Our second main result relates the difficulty of approximately solving packing linear programs to hardness assumptions for solving sparse linear equations: In the Real RAM model, unless well-conditioned sparse systems of linear equations can be solved faster than \(\tilde{O}(\text{number of non-zeros of the matrix})\), no algorithm \((1-\epsilon)\)-approximately solves packing LPs with \(N\) non-zeros in time \(\tilde{O}(N \epsilon^{-0.165})\). This running time of \(\tilde{O}(\text{number of non-zeros of the matrix})\) is obtained by the classical Conjugate Gradient algorithm by a standard analysis. Our reduction implies that if sufficiently good packing LP solvers exist, then this long-standing best-known bound on the running time for solving well-conditioned systems of linear equations is sub-optimal. While we prove results in the Real RAM model, our condition number assumptions ensure that our results can be translated to fixed point arithmetic with \((\log n)^{O(1)}\) bits per number.
Conference
-
Flows in Almost Linear Time via Adaptive Preconditioning
-
with Richard Peng, Sushant
Sachdeva, and Di Wang.
-
In STOC 2019.
-
We present algorithms for solving a large class of flow and regression
problems on unit weighted graphs to \((1 + 1 / poly(n))\) accuracy in
almost-linear time. These problems include \(\ell_p\)-norm minimizing flow for
\(p\) large (\(p \in [ω(1), o(\log^{2/3} n) ]\)), and their duals,
\(\ell_p\)-norm semi-supervised learning for \(p\) close to \(1\).
As \(p\) tends to infinity, \(\ell_p\)-norm flow and its dual tend to max-flow
and min-cut respectively. Using this connection and our algorithms, we give an
alternate approach for approximating undirected max-flow, and the first
almost-linear time approximations of discretizations of total variation
minimization objectives.
This algorithm demonstrates that many tools previously viewed as limited to linear systems are in fact applicable to a much wider range of convex objectives. It is based on the routing-based solver for Laplacian linear systems by Spielman and Teng (STOC '04, SIMAX '14), but requires several new
tools: adaptive non-linear preconditioning, tree-routing based
ultra-sparsification for mixed \(\ell_2\) and \(\ell_p\) norm objectives, and
decomposing graphs into uniform expanders.
-
Iterative Refinement for \(\ell_p\)-norm Regression
-
with Deeksha Adil, Richard Peng, and Sushant Sachdeva.
-
In SODA 2019.
-
We give improved algorithms for the \(\ell_{p}\)-regression problem, \(\min_{x}
\|x\|_{p}\) such that \(A x=b,\) for all \(p \in (1,2) \cup (2,\infty).\) Our
algorithms obtain a high accuracy solution in \(\tilde{O}_{p}(m^{\frac{|p-2|}{2p
+ |p-2|}}) \le \tilde{O}_{p}(m^{\frac{1}{3}})\) iterations, where each iteration
requires solving an \(m \times m\) linear system, \(m\) being the dimension of the
ambient space.
By maintaining an approximate inverse of the linear systems that we solve in
each iteration, we give algorithms for solving \(\ell_{p}\)-regression to \(1 /
\text{poly}(n)\) accuracy that run in time \(\tilde{O}_p(m^{\max\{ω,
7/3\}}),\) where \(ω\) is the matrix multiplication constant. For the current
best value of \(ω> 2.37\), we can thus solve \(\ell_{p}\) regression as fast
as \(\ell_{2}\) regression, for all constant \(p\) bounded away from \(1.\)
Our algorithms can be combined with fast graph Laplacian linear equation
solvers to give minimum \(\ell_{p}\)-norm flow / voltage solutions to \(1 /
\text{poly}(n)\) accuracy on an undirected graph with \(m\) edges in
\( \tilde{O}_{p}(m^{1 + \frac{|p-2|}{2p + |p-2|}}) \le
\tilde{O}_{p}(m^{\frac{4}{3}}) \) time.
For sparse graphs and for matrices with similar dimensions, our iteration
counts and running times improve on the \(p\)-norm regression algorithm by
[Bubeck-Cohen-Lee-Li STOC'18] and general-purpose convex optimization
algorithms. At the core of our algorithms is an iterative refinement scheme for
\(\ell_{p}\)-norms, using the smoothed \(\ell_{p}\)-norms introduced in the work of
Bubeck et al. Given an initial solution, we construct a problem that seeks to
minimize a quadratically-smoothed \(\ell_{p}\) norm over a subspace, such that a
crude solution to this problem allows us to improve the initial solution by a
constant factor, leading to algorithms with fast convergence.
-
A Matrix Chernoff Bound for Strongly Rayleigh Distributions and Spectral Sparsifiers
from a few Random Spanning Trees
-
with Zhao Song.
-
In FOCS 2018.
-
Strongly Rayleigh distributions are a class of negatively dependent
distributions of binary-valued random variables [Borcea, Branden, Liggett JAMS
09]. Recently, these distributions have played a crucial role in the analysis
of algorithms for fundamental graph problems, e.g. Traveling Salesman Problem
[Gharan, Saberi, Singh FOCS 11]. We prove a new matrix Chernoff bound for
Strongly Rayleigh distributions.
As an immediate application, we show that adding together the Laplacians of
\(ε^{-2} \log^2 n\) random spanning trees gives a \((1\pm ε)\) spectral sparsifier of the graph Laplacian with high probability. Thus, we
positively answer an open question posed in [Batson, Spielman, Srivastava, Teng
JACM 13]. Our number of spanning trees for a spectral sparsifier matches the
number of spanning trees required to obtain a cut sparsifier in [Fung,
Hariharan, Harvey, Panigrahi STOC 11]. The previous best result was by naively
applying a classical matrix Chernoff bound which requires \(ε^{-2} n \log
n\) spanning trees. For the tree averaging procedure to agree with the original
graph Laplacian in expectation, each edge of the tree should be reweighted by
the inverse of the edge leverage score in the original graph. We also show that
when using this reweighting of the edges, the Laplacian of a single random tree
is bounded above in the PSD order by the original graph Laplacian times a
factor \(\log n\) with high probability, i.e. \(L_T \preceq O(\log n) L_G\).
We show a lower bound that almost matches our last result, namely that in
some graphs, with high probability, the random spanning tree is \(\it{not}\)
bounded above in the spectral order by \(\frac{\log n}{\log\log n}\) times the
original graph Laplacian. We also show a lower bound stating that \(ε^{-2} \log n\) spanning trees are necessary to get a \((1\pm ε)\) spectral sparsifier.
-
Solving Directed Laplacians in Nearly Linear Time
through Sparse LU Factorizations
-
with Michael B. Cohen, Jonathan Kelner, John Peebles,
Richard Peng, Anup B. Rao, and Aaron Sidford.
-
In FOCS 2018.
-
We show how to solve directed Laplacian systems in nearly-linear time. Given
a linear system in an \(n \times n\) Eulerian directed Laplacian with \(m\) nonzero
entries, we show how to compute an \(ε\)-approximate solution in time \(O(m
\log^{O(1)} (n) \log (1/ε))\). Through reductions from [Cohen et al.
FOCS'16], this gives the first nearly-linear time algorithms for computing
\(ε\)-approximate solutions to row or column diagonally dominant linear
systems (including arbitrary directed Laplacians) and computing
\(ε\)-approximations to various properties of random walks on directed
graphs, including stationary distributions, personalized PageRank vectors,
hitting times, and escape probabilities. These bounds improve upon the recent
almost-linear algorithms of [Cohen et al. STOC'17], which gave an algorithm to
solve Eulerian Laplacian systems in time \(O((m+n \cdot 2^{O(\sqrt{\log n \log \log n})})\log^{O(1)}(n ε^{-1}))\).
To achieve our results, we provide a structural result that we believe is of
independent interest. We show that Laplacians of all strongly connected
directed graphs have sparse approximate LU-factorizations. That is, for every
such directed Laplacian \({\mathbf{L}}\), there is a lower triangular matrix
\(\boldsymbol{\mathit{\mathfrak{L}}}\) and an upper triangular matrix
\(\boldsymbol{\mathit{\mathfrak{U}}}\), each with at most \(\tilde{O}(n)\)
nonzero entries, such that their product \(\boldsymbol{\mathit{\mathfrak{L}}}
\boldsymbol{\mathit{\mathfrak{U}}}\) spectrally approximates \({\mathbf{L}}\)
in an appropriate norm. This claim can be viewed as an analogue of recent work
on sparse Cholesky factorizations of Laplacians of undirected graphs. We show
how to construct such factorizations in nearly-linear time and prove that, once
constructed, they yield nearly-linear time algorithms for solving directed
Laplacian systems.
-
Incomplete Nested Dissection
-
with Richard Peng, Robert
Schwieterman, and Peng Zhang.
-
In STOC 2018.
-
We present an asymptotically faster algorithm for solving linear systems in
well-structured 3-dimensional truss stiffness matrices. These linear systems
arise from linear elasticity problems, and can be viewed as extensions of graph
Laplacians into higher dimensions. Faster solvers for the 2-D variants of such
systems have been studied using generalizations of tools for solving graph
Laplacians [Daitch-Spielman CSC'07, Shklarski-Toledo SIMAX'08].
Given a 3-dimensional truss over \(n\) vertices which is formed from a union of
\(k\) convex structures (tetrahedral meshes) with bounded aspect ratios, whose
individual tetrahedrons are also in some sense well-conditioned, our algorithm
solves a linear system in the associated stiffness matrix up to accuracy
\(ε\) in time \(O(k^{1/3} n^{5/3} \log (1 / ε))\). This
asymptotically improves on the \(O(n^2)\) running time of Nested Dissection for all
\(k \ll n\).
We also give a result that improves on Nested Dissection even when we allow
any aspect ratio for each of the \(k\) convex structures (but we still require
well-conditioned individual tetrahedrons). In this regime, we improve on Nested
Dissection for \(k \ll n^{1/44}\).
The key idea of our algorithm is to combine nested dissection and support
theory. Both of these techniques for solving linear systems are well studied,
but usually separately. Our algorithm decomposes a 3-dimensional truss into
separate and balanced regions with small boundaries. We then bound the spectrum
of each such region separately, and utilize such bounds to obtain improved
algorithms by preconditioning with partial states of separator-based Gaussian
elimination.
-
Approximate Gaussian Elimination
-
Rasmus Kyng.
-
My PhD dissertation, 2017.
-
Hardness Results for Structured Linear Systems
-
with Peng Zhang.
-
In FOCS 2017. Won the Machtey Award for Best Student
Paper.
-
In the SIAM Journal on Computing, Special Section FOCS 2017.
-
We show that if the nearly-linear time solvers for Laplacian matrices and
their generalizations can be extended to solve just slightly larger families of
linear systems, then they can be used to quickly solve all systems of linear
equations over the reals. This result can be viewed either positively or
negatively: either we will develop nearly-linear time algorithms for solving
all systems of linear equations over the reals, or progress on the families we
can solve in nearly-linear time will soon halt.
-
Sampling Random Spanning Trees Faster than Matrix Multiplication
-
with David Durfee, John Peebles, Anup B. Rao, and Sushant Sachdeva.
-
In STOC 2017.
-
We present an algorithm that, with high probability, generates a random
spanning tree from an edge-weighted undirected graph in
\(\tilde{O}(n^{4/3}m^{1/2}+n^{2})\) time (the \(\tilde{O}(\cdot)\) notation hides
\(\operatorname{polylog}(n)\) factors). The tree is sampled from a distribution
where the probability of each tree is proportional to the product of its edge
weights. This improves upon the previous best algorithm due to Colbourn et al.
that runs in matrix multiplication time, \(O(n^ω)\). For the special case of
unweighted graphs, this improves upon the best previously known running time of
\(\tilde{O}(\min\{n^ω,m\sqrt{n},m^{4/3}\})\) for \(m \gg n^{5/3}\) (Colbourn
et al. '96, Kelner-Madry '09, Madry et al. '15).
The effective resistance metric is essential to our algorithm, as in the work
of Madry et al., but we eschew determinant-based and random walk-based
techniques used by previous algorithms. Instead, our algorithm is based on
Gaussian elimination, and the fact that effective resistance is preserved in
the graph resulting from eliminating a subset of vertices (called a Schur
complement). As part of our algorithm, we show how to compute
\(ε\)-approximate effective resistances for a set \(S\) of vertex pairs via
approximate Schur complements in \(\tilde{O}(m+(n + |S|)ε^{-2})\) time,
without using the Johnson-Lindenstrauss lemma which requires \(\tilde{O}(
\min\{(m + |S|)ε^{-2}, m+nε^{-4} +|S|ε^{-2}\})\) time. We
combine this approximation procedure with an error correction procedure for
handling edges where our estimate isn't sufficiently accurate.
-
A Framework for Analyzing Resparsification Algorithms
-
with Jakub Pachocki, Richard Peng, and Sushant Sachdeva.
-
In SODA 2017.
-
A spectral sparsifier of a graph \(G\) is a sparser graph \(H\) that
approximately preserves the quadratic form of \(G\), i.e. for all vectors \(x\),
\(x^T L_G x \approx x^T L_H x\), where \(L_G\) and \(L_H\) denote the respective
graph Laplacians. Spectral sparsifiers generalize cut sparsifiers, and have
found many applications in designing graph algorithms. In recent years, there
has been interest in computing spectral sparsifiers in semi-streaming and
dynamic settings. Natural algorithms in these settings often involve repeated
sparsification of a graph, and accumulation of errors across these steps. We
present a framework for analyzing algorithms that perform repeated
sparsifications that only incur error corresponding to a single sparsification
step, leading to better results for many resparsification-based algorithms. As
an application, we show how to maintain a spectral sparsifier in the
semi-streaming setting: We present a simple algorithm that, for a graph \(G\) on
\(n\) vertices and \(m\) edges, computes a spectral sparsifier of \(G\) with \(O(n
\log n)\) edges in a single pass over \(G\), using only \(O(n \log n)\) space, and
\(O(m \log^2 n)\) total time. This improves on previous best semi-streaming
algorithms for both spectral and cut sparsifiers by a factor of \(\log{n}\) in
both space and runtime. The algorithm extends to semi-streaming row sampling
for general PSD matrices. We also use our framework to combine a spectral
sparsification algorithm by Koutis with improved spanner constructions to give
a parallel algorithm for constructing \(O(n\log^2{n}\log\log{n})\) sized spectral
sparsifiers in \(O(m\log^2{n}\log\log{n})\) time. This is the best known
combinatorial graph sparsification algorithm. The size of the sparsifiers is
only a factor \(\log{n}\log\log{n}\) more than ones produced by numerical
routines.
-
Approximate Gaussian Elimination for Laplacians: Fast, Sparse, and Simple
-
with
Sushant Sachdeva.
-
In FOCS 2016.
-
We show how to perform sparse approximate Gaussian elimination for Laplacian
matrices. We present a simple, nearly linear time algorithm that approximates a
Laplacian by a matrix with a sparse Cholesky factorization, the version of
Gaussian elimination for symmetric matrices. This is the first nearly linear
time solver for Laplacian systems that is based purely on random sampling, and
does not use any graph theoretic constructions such as low-stretch trees,
sparsifiers, or expanders. The crux of our analysis is a novel concentration
bound for matrix martingales where the differences are sums of conditionally
independent variables.
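-
The elimination step itself is simple to state: eliminating a vertex with
neighbors \(u_1,\ldots,u_d\), edge weights \(w_1,\ldots,w_d\), and total weight
\(W\) creates a clique in which edge \((u_i,u_j)\) gets weight \(w_i w_j / W\),
and the algorithm replaces that dense clique by roughly \(d\) sampled edges
that match it in expectation. Below is a toy unbiased sampler in Julia; the
names are ours, and the sampler in the paper is chosen more carefully so that
the matrix-martingale analysis can control the variance.

# Replace the elimination clique of a vertex by d sampled edges.
# nbrs[i] is the i-th neighbor; w[i] is the weight of the edge to it.
function sample_clique(nbrs::Vector{Int}, w::Vector{Float64})
    d, W = length(w), sum(w)
    cs = cumsum(w) ./ W
    pick() = min(searchsortedfirst(cs, rand()), d)  # index i with prob w[i]/W
    edges = Tuple{Int,Int,Float64}[]
    for _ in 1:d
        i, j = pick(), pick()
        i == j && continue      # self-pairs contribute nothing
        # The unordered pair {i,j} is drawn with probability 2 w[i]w[j]/W^2,
        # so assigning weight W/(2d) gives expected total weight w[i]w[j]/W
        # on edge (u_i, u_j), matching the exact clique.
        push!(edges, (nbrs[i], nbrs[j], W / (2d)))
    end
    edges
end

sample_clique([2, 3, 4], [1.0, 2.0, 3.0])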
-
Sparsified Cholesky and Multigrid Solvers for Connection Laplacians
-
with
Yin Tat Lee,
Richard Peng,
Sushant Sachdeva, and
Daniel A. Spielman.
-
In STOC 2016.
-
We introduce the sparsified Cholesky and sparsified multigrid algorithms for
solving systems of linear equations. These algorithms accelerate Gaussian
elimination by sparsifying the nonzero matrix entries created by the
elimination process. We use these new algorithms to derive the first nearly
linear time algorithms for solving systems of equations in connection
Laplacians, a generalization of Laplacian matrices that arise in many problems
in image and signal processing. We also prove that every connection Laplacian
has a linear sized approximate inverse. This is an LU factorization with a
linear number of nonzero entries that is a strong approximation of the original
matrix. Using such a factorization one can solve systems of equations in a
connection Laplacian in linear time. Such a factorization was unknown even for
ordinary graph Laplacians.
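-
Concretely, a \(d\)-dimensional connection Laplacian attaches an orthogonal
matrix \(O_{uv}\) to each edge and replaces scalar entries by \(d \times d\)
blocks; taking every \(O_{uv} = I\) recovers (stacked copies of) the ordinary
Laplacian. A minimal Julia sketch of the object itself, not of our solver
(names are illustrative):

using LinearAlgebra

rot(θ) = [cos(θ) -sin(θ); sin(θ) cos(θ)]

# Connection Laplacian: block (u,u) gains w*I, block (v,v) gains w*I,
# and the off-diagonal blocks are -w*O_uv and -w*O_uv'.
function connection_laplacian(n, edges; d = 2)
    L = zeros(n * d, n * d)
    blk(i) = (d * (i - 1) + 1):(d * i)
    for (u, v, w, O) in edges
        L[blk(u), blk(u)] += w * I(d)
        L[blk(v), blk(v)] += w * I(d)
        L[blk(u), blk(v)] -= w * O
        L[blk(v), blk(u)] -= w * O'
    end
    L
end

L = connection_laplacian(3, [(1, 2, 1.0, rot(0.3)), (2, 3, 2.0, rot(-1.1))])
@assert minimum(eigvals(Symmetric(L))) > -1e-9   # PSD, like a graph Laplacian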
-
Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)-norms
-
with
Anup B. Rao and
Sushant Sachdeva.
-
In NIPS 2015.
-
Given a directed acyclic graph \(G\) and a set of values \(y\) on the vertices,
the Isotonic Regression of \(y\) is a vector \(x\) that respects the partial order
described by \(G\) and minimizes \(\|x-y\|\) for a specified norm. This paper
gives improved algorithms for computing the Isotonic Regression for all
weighted \(\ell_{p}\)-norms with rigorous performance guarantees. Our algorithms
are quite practical, and their variants can be implemented to run fast in
practice.
-
Our implementation of the
least-squares Isotonic regression algorithm is available on
the Isotonic Github repository.
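-
On the simplest DAG, a directed path, the \(\ell_2\) case is solved by the
classical pool-adjacent-violators algorithm; the short Julia version below is
for orientation only. The algorithms in the paper handle general DAGs and all
\(\ell_p\)-norms.

# Pool Adjacent Violators for least-squares isotonic regression on a path:
# find a nondecreasing x minimizing ||x - y||_2. Blocks of coordinates that
# violate monotonicity are merged and replaced by their (weighted) mean.
function pav(y::Vector{Float64})
    vals = Float64[]; counts = Int[]
    for yi in y
        push!(vals, yi); push!(counts, 1)
        while length(vals) > 1 && vals[end-1] > vals[end]
            v = (vals[end-1] * counts[end-1] + vals[end] * counts[end]) /
                (counts[end-1] + counts[end])
            counts[end-1] += counts[end]
            vals[end-1] = v
            pop!(vals); pop!(counts)
        end
    end
    reduce(vcat, [fill(v, c) for (v, c) in zip(vals, counts)])
end

pav([1.0, 3.0, 2.0, 4.0])   # -> [1.0, 2.5, 2.5, 4.0]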
-
Algorithms for Lipschitz Learning on Graphs
-
with
Anup B. Rao,
Daniel A. Spielman, and
Sushant Sachdeva.
-
In COLT 2015.
-
We develop fast algorithms for solving regression problems on graphs where
one is given the value of a function at some vertices, and must find its
smoothest possible extension to all vertices. The extension we compute is the
absolutely minimal Lipschitz extension, and is the limit for large \(p\) of
\(p\)-Laplacian regularization. We present an algorithm that computes a minimal
Lipschitz extension in expected linear time, and an algorithm that computes an
absolutely minimal Lipschitz extension in expected time \(\widetilde{O} (m n)\).
The latter algorithm has variants that seem to run much faster in practice.
These extensions are particularly amenable to regularization: we can perform
\(\ell_{0}\)-regularization on the given values in polynomial time and
\(\ell_{1}\)-regularization on the initial function values and on graph edge weights
in time \(\widetilde{O} (m^{3/2})\).
-
Our implementations of the lex-minimization algorithms are available on
the YINSlex Github repository.
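-
For intuition: on an unweighted graph the absolutely minimal Lipschitz
extension is the infinity-harmonic extension, and on small examples it can be
computed by the fixed-point iteration below, in the style of Oberman's scheme,
which repeatedly resets each free vertex to the average of its largest and
smallest neighboring values. This is a toy sketch with names of our choosing,
not one of the fast algorithms from the paper.

# Fixed-point iteration for the absolutely minimal Lipschitz extension
# on an unweighted graph. adj[v] lists the neighbors of v; `boundary`
# maps pinned vertices to their given values.
function amle(adj::Vector{Vector{Int}}, boundary::Dict{Int,Float64};
              iters = 10_000)
    x = zeros(length(adj))
    for (v, val) in boundary
        x[v] = val
    end
    for _ in 1:iters
        for v in 1:length(adj)
            haskey(boundary, v) && continue
            nb = x[adj[v]]
            x[v] = (maximum(nb) + minimum(nb)) / 2
        end
    end
    x
end

# Path 1-2-3-4 with endpoints pinned to 0 and 1: the extension is linear.
amle([[2], [1,3], [2,4], [3]], Dict(1 => 0.0, 4 => 1.0))  # ≈ [0, 1/3, 2/3, 1]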
-
Solving SDD Linear Systems in Nearly \( m\log^{1/2}n \) Time
-
with
Michael B. Cohen,
Gary L. Miller,
Jakub W. Pachocki,
Richard Peng,
Anup B. Rao
and Shen Chen Xu.
-
In STOC 2014.
-
We show an algorithm for solving symmetric diagonally dominant (SDD) linear systems
with \(m\) non-zero entries to a relative error of \(\epsilon\) in
\(O(m \log^{1/2} n \,\log^{c}\log n \,\log(1/\epsilon))\) time. Our approach follows the recursive preconditioning
framework, which aims to reduce graphs to trees using iterative methods. We improve
two key components of this framework: random sampling and tree embeddings. Both of
these components are used in a variety of other algorithms, and our approach also
extends to the dual problem of computing electrical flows.
We show that preconditioners constructed by random sampling can perform well without
meeting the standard requirements of iterative methods. In the graph setting, this
leads to ultra-sparsifiers that have optimal behavior in expectation.
The improved running time makes previous low-stretch embedding algorithms the running
time bottleneck in this framework. In our analysis, we relax the requirement of
these embeddings to snowflake spaces. We then obtain a two-pass algorithm
for constructing optimal embeddings in snowflake spaces that runs in \(O(m \log\log n)\)
time. This algorithm is also readily parallelizable.
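-
One level of this framework, stripped of the recursion and the
ultra-sparsifiers, is just a preconditioned iterative method whose
preconditioner is a tree. The toy Julia sketch below runs textbook
preconditioned conjugate gradient on a grounded cycle Laplacian with its
spanning path as the preconditioner; all names are illustrative.

using LinearAlgebra

function laplacian(n, edges)
    L = zeros(n, n)
    for (u, v, w) in edges
        L[u,u] += w; L[v,v] += w; L[u,v] -= w; L[v,u] -= w
    end
    L
end

# Textbook preconditioned conjugate gradient; Msolve applies the
# preconditioner, here a direct solve with the tree Laplacian.
function pcg(A, b, Msolve; tol = 1e-10, maxit = 1000)
    x = zero(b); r = b - A * x
    z = Msolve(r); p = copy(z); rz = dot(r, z)
    for _ in 1:maxit
        Ap = A * p
        α = rz / dot(p, Ap)
        x += α * p; r -= α * Ap
        norm(r) < tol && break
        z = Msolve(r)
        rz, rzold = dot(r, z), rz
        p = z + (rz / rzold) * p
    end
    x
end

cyc = [(1,2,1.0), (2,3,1.0), (3,4,1.0), (4,5,1.0), (5,1,1.0)]
A = laplacian(5, cyc)[2:end, 2:end]                  # ground vertex 1: nonsingular
T = factorize(laplacian(5, cyc[1:4])[2:end, 2:end])  # spanning-path preconditioner
b = randn(4)
x = pcg(A, b, r -> T \ r)
@assert norm(A * x - b) < 1e-8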
The paper Solving SDD Linear Systems in Nearly \( m\log^{1/2}n \) Time was a merged submission of
the following two papers:
-
Preconditioning in Expectation
-
with
Michael B. Cohen,
Jakub W. Pachocki,
Richard Peng,
and Anup B. Rao.
-
We show that preconditioners constructed by random sampling can perform well
without meeting the standard requirements of iterative methods. When applied to
graph Laplacians, this leads to ultra-sparsifiers that in expectation behave as
the nearly-optimal ones given by [Kolla-Makarychev-Saberi-Teng STOC '10].
Combining this with the recursive preconditioning framework by [Spielman-Teng
STOC '04] and improved embedding algorithms, this leads to algorithms that solve
symmetric diagonally dominant linear systems and electrical flow problems in
expected time close to \(m\log^{1/2}n\).
-
Stretching Stretch
-
Michael B. Cohen,
Gary L. Miller,
Jakub W. Pachocki,
Richard Peng,
and Shen Chen Xu.
-
We give a generalized definition of stretch that simplifies the efficient
construction of low-stretch embeddings suitable for graph algorithms. The
generalization, based on discounting highly stretched edges by taking their
\(p\)-th power for some \(0 < p < 1\), is directly related to the performance of
existing algorithms. This discounting of high-stretch edges allows us to treat
many classes of edges with coarser granularity. It leads to a two-pass approach
that combines bottom-up clustering and top-down decompositions to construct
these embeddings in \(\mathcal{O}(m\log\log{n})\) time. Our algorithm
parallelizes readily and can also produce generalizations of low-stretch
subgraphs.
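-
The generalized quantity is easy to compute: with edge lengths taken as
reciprocals of weights, the stretch of an edge \((u,v)\) of weight \(w\) over a
tree \(T\) is \(w \cdot \mathrm{dist}_T(u,v)\), and the discounted variant
raises it to the \(p\)-th power. A small Julia sketch for a tree given by
parent pointers (names are illustrative):

# p-stretch of an edge over a rooted spanning tree. parent[v] is the
# parent of v (0 at the root); len[v] is the length of the edge from v
# to its parent, where length = 1/weight.
function stretch(parent, len, u, v, w; p = 1.0)
    # Record tree distances from u to each of its ancestors ...
    dist_u = Dict(u => 0.0)
    d, a = 0.0, u
    while parent[a] != 0
        d += len[a]; a = parent[a]
        dist_u[a] = d
    end
    # ... then walk v upwards until the first common ancestor.
    d, a = 0.0, v
    while !haskey(dist_u, a)
        d += len[a]; a = parent[a]
    end
    (w * (d + dist_u[a]))^p
end

# Path tree 1-2-3-4 with unit weights; the non-tree edge (1,4) has
# stretch 3, discounted to 3^p.
parent = Dict(1 => 0, 2 => 1, 3 => 2, 4 => 3)
len    = Dict(2 => 1.0, 3 => 1.0, 4 => 1.0)
stretch(parent, len, 1, 4, 1.0; p = 0.5)   # -> sqrt(3)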
Recent and Upcoming Talks
-
Informal Blackboard Talks;
Simons Institute, Berkeley,
November
2023.
-
Almost-Linear Time Algorithms for Incremental Graphs:
Cycle Detection, SCCs, s-t Shortest Path, and Minimum-Cost Flow.
-
-
Georgia Tech College of Computing Seminar;
Atlanta,
September 2023.
-
Robust and Practical Solution of Laplacian Equations by Approximate Elimination.
-
-
ICALP;
Paderborn,
July, 2023.
-
Invited Plenary Talk, An Almost-Linear Time Algorithm for Maximum Flow and More.
-
-
DIMACS Workshop on Modern Techniques in Graph Algorithms;
New Brunswick,
June, 2023.
-
Tutorial: Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
-
-
Highlights of Algorithms;
Prague,
June, 2023.
-
Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
-
-
Seminar on Scalable Data Structures;
Dagstuhl,
June, 2023.
-
Dynamic spanners.
-
-
Perspectives on Matrix Computations: Theoretical Computer Science Meets Numerical
Analysis, BIRS Workshop;
Banff, January, 2023.
-
Robust and Practical Solution of Laplacian Equations by Approximate Elimination
(Video).
(Slides).
-
-
Swiss Winter School on Theoretical Computer Science;
EPFL and ETHZ,
January, 2023.
-
Lecture series: Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
-
Notes and problem sets.
-
-
Department of Computer Science Colloquium;
Yale University,
November, 2022.
-
Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
-
-
Columbia Theory Seminar;
Columbia University,
November, 2022.
-
Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
-
-
Bernoulli Center Workshop: Modern Trends in Combinatorial Optimization;
EPFL,
July, 2022.
-
Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
-
-
Milan Theory Workshop: Spectral and Convex Optimization Techniques in Graph
Algorithms;
Bocconi University,
June, 2022.
-
Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
-
-
Algorithms and Foundations for Data Science Workshop;
Nanyang Technological University and Carnegie Mellon University,
June, 2022.
-
Scalar and Matrix Chernoff Bounds from \(\ell_{\infty}\)-Independence.
-
-
European Meeting on Algorithmic Challenges of Big Data;
Warsaw,
May, 2022.
-
Almost-Linear Time Algorithms for Maximum Flow and More.
-
-
TCS+ Talk;
April, 2022.
-
Almost-Linear Time Algorithms for Maximum Flow and More.
(Video).
-
-
Hausdorff Program on Discrete Optimization, Workshop: Approximation and Relaxation;
November, 2021.
-
Two-Commodity Flow is as Hard as Linear Programming.
-
-
INFORMS 2021 Session on Bridging Discrete and Continuous Optimization;
October, 2021.
-
A Numerical Analysis Approach to Convex Optimization.
-
-
Hausdorff Program on Discrete Optimization, Workshop: Continuous Approaches to Discrete
Optimization;
October, 2021.
-
A Numerical Analysis Approach to Convex Optimization.
-
-
CMC Seminar; Panel Discussion,
September, 2021.
-
Laplacian Solvers.
-
-
WALD(O);
Virtual Conference,
August,
2021.
-
Hardness Results for Structured Linear Equations and Programs.
-
-
ADFOCS;
Virtual summer school,
July-August,
2021.
-
Graphs, Sampling, and Iterative methods (three lectures).
-
-
SIAM Annual Meeting;
Virtual conference,
July,
2021.
-
Two-Commodity Flow is as Hard as Linear Programming.
-
-
Georgetown University Computer Science Colloquium;
Georgetown,
April
2021.
-
A Numerical Analysis Approach to Convex Optimization
(Video).
-
-
Hebrew University Theory Seminar;
Jerusalem,
January
2021.
-
A Numerical Analysis Approach to Convex Optimization
(Slides).
-
-
EPFL Theory Seminar;
Lausanne,
March
2020.
-
A Numerical Analysis Approach to Convex Optimization
(Slides).
-
-
ICCOPT;
Berlin,
August
2019.
-
Optimization on Graphs
(Slides).
-
-
UT Austin Theory Seminar;
May
2019.
-
A Numerical Analysis Approach to Convex Optimization.
-
-
Harvard Theory of Computation Seminar;
February
2019.
-
A Numerical Analysis Approach to Convex Optimization.
-
-
Beyond Randomized Rounding and the Probabilistic
Method Workshop;
Simons Institute, Berkeley,
February
2019.
-
A Matrix Chernoff Bound for Strongly Rayleigh
Distributions and Spectral Sparsifiers from a few Random Spanning Trees
(Video),
(Slides).
-
-
SODA 2019;
San Diego,
January
2019.
-
Iterative Refinement for \(\ell_p\)-norm Regression.
-
-
Bridging Continuous and Discrete Optimization Reunion Workshop;
Simons Institute, Berkeley,
December
2018.
-
Iterative Refinement for \(\ell_p\)-norm Regression.
-
-
Caltech Theory Seminar;
November
2018.
-
Approximate Gaussian Elimination
(Slides).
-
-
Northwestern Quarterly Theory Workshop;
Northwestern,
November
2018.
-
Analysis Using Matrix Martingales
(Slides).
-
-
FOCS;
Paris,
October
2018.
-
A Matrix Chernoff Bound for Strongly Rayleigh
Distributions and Spectral Sparsifiers from a few Random Spanning Trees.
-
-
FOCS;
Paris,
October
2018.
-
Solving Directed Laplacians in Nearly Linear Time through Sparse LU Factorizations.
-
-
Laplacian 2.0 Workshop;
FOCS, Paris,
October
2018.
-
Analysis Using Matrix Martingales
(Slides).
-
-
RNLA Workshop;
Simons Institute, Berkeley,
September
2018.
-
Matrix Martingales in Randomized Numerical
Linear Algebra
(Video).
-
-
High-Performance Graph Algorithms Seminar;
Dagstuhl,
June 2018.
-
Optimization on Graphs.
-
-
Discrepancy & IP Workshop;
CWI Amsterdam, June 2018.
-
Matrix
Approximation by Row Sampling
(Notes).
-
-
GraphXD Workshop;
UC Berkeley,
March 2018.
-
Optimization on Graphs
(Video),
(Slides).
-
-
Michael Cohen Memorial Symposium;
Simons Institute, Berkeley,
November 2017.
-
Michael Cohen and Directed Laplacians
(Video).
-
More Talks
-
Stanford Theory Seminar; October 2017.
Approximate Gaussian Elimination.
-
FOCS, Berkeley; October 2017.
Hardness Results for Structured Linear Systems.
-
UC Berkeley; September 2017.
Hardness Results for Structured Linear Systems.
-
Google Research Mountain View; August 2017.
Hardness Results for Structured Linear Systems.
-
YPNG Seminar, Yale Dept. of Statistics and Data
Science; July 2017.
Approximate Gaussian Elimination.
-
MSR Redmond; January 2017.
Regression, Elimination, and Sampling on Graphs.
-
University of Copenhagen; January 2017.
Approximate Gaussian Elimination.
-
CMU; December 2016.
Approximate Gaussian Elimination.
-
Georgia Tech; November 2016.
Approximate Gaussian Elimination.
-
Princeton; October 2016.
Approximate Gaussian Elimination.
-
UC Berkeley; October 2016.
Approximate Gaussian Elimination.
-
Google Research NYC; October 2016.
Approximate Gaussian Elimination.
-
FOCS, New Brunswick; October 2016.
Approximate Gaussian Elimination.
-
MIT A&C Seminar; September 2016.
Approximate Gaussian Elimination.
-
Aarhus University; September 2016.
Lipschitz Learning on Graphs.
-
China Theory Week, Hong Kong; August 2016.
Approximate Gaussian Elimination.
-
SIAM Annual Meeting, Boston; July 2016.
Approximate Cholesky Factorization.
-
STOC, Boston; June 2016.
Sparsified Cholesky and Multigrid Solvers for Connection Laplacians.
-
IT University of Copenhagen; December 2015.
Lipschitz Learning and Isotonic Regression on Graphs.
Code
-
Laplacians.jl
-
Laplacians.jl is a Julia package, maintained by Dan Spielman, containing graph
algorithms, especially ones related to spectral and algebraic graph theory.
The repo contains ApproxChol, our Laplacian solver from the paper
Robust and Practical Solution of Laplacian Equations by Approximate Elimination.
-
YINSlex Github Repository
-
Our implementations of the lex-minimization
algorithms from the paper
Algorithms for Lipschitz Learning on Graphs.
The code was written by Anup Rao, Sushant Sachdeva,
Dan Spielman, and myself.
-
Isotonic Github Repository
-
An implementation of the
least-squares Isotonic regression algorithm
from the paper
Fast, Provable Algorithms for Isotonic Regression in all \(\ell_p\)-norms.
The code was written by Anup Rao, Sushant Sachdeva, and myself.
Contact
I can be reached by email at
[last name] @ inf.ethz.ch
[last name] = kyng