I was born in Hong Kong before 1997, a time before the abyss.
I am a theoretical computer scientist working in the areas of
Algorithmic Game Theory, Dynamical Systems Aspects of Machine Learning, and Discrete Mathematics. More specifically, I am currently focusing on the following topics:
Natural Algorithms in Economics (including Learning Algorithms/Dynamics in Games and Market Dynamics),
Asynchronous Optimization and Iterative Systems and
Graph Sparsification, Partitioning and Clustering.
This page contains rolling updates on my research activities. When a paper is accepted, I write a short paragraph describing the motivation, the background, and our main contributions.
13 January 2021: Our paper with Yixin Tao, "Chaos of Learning Beyond Zero-sum and Coordination via Game Decompositions", is accepted to ICLR 2021. In earlier work with Georgios (COLT 2019, NeurIPS 2020), we showed via volume analysis that Multiplicative Weights Update (MWU) is Lyapunov chaotic in two-person zero-sum games and exhibits bad behaviors. Analogous results extend to other algorithms (FTRL, Optimistic MWU) and other games (coordination games, generalized Rock-Paper-Scissors). But really, when we talk about "learning in games", we want to understand the behaviors of learning algorithms in general games. In the ICLR paper, we extend the volume-analysis argument to two-person general games via a canonical decomposition of a game into zero-sum and coordination components. This elegantly decouples the volume-changing behavior into an opposing competition between the two components. We propose measures that compare the strengths of the two components and characterize games in which MWU is Lyapunov chaotic (almost) everywhere. We also share some insights into general games with three or more players.
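For two-person games, one canonical way to realize such a decomposition has a simple closed form: any bimatrix game (A, B) splits as (A, B) = ((A−B)/2, −(A−B)/2) + ((A+B)/2, (A+B)/2), a zero-sum part plus a coordination part. A minimal NumPy sketch (the function name `decompose` is mine, for illustration only):

```python
import numpy as np

def decompose(A, B):
    """Split a two-person game (A, B) into a zero-sum component and a
    coordination (identical-payoff) component that sum back to (A, B)."""
    Z = (A - B) / 2   # zero-sum part: payoffs (Z, -Z)
    C = (A + B) / 2   # coordination part: payoffs (C, C)
    return (Z, -Z), (C, C)

# Matching Pennies is purely zero-sum, so its coordination part vanishes.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A
(Z, Zneg), (C, C2) = decompose(A, B)
assert np.allclose(Z + C, A) and np.allclose(Zneg + C2, B)
assert np.allclose(C, 0.0)
```

The relative magnitudes of the two parts are what the measures in the paper compare.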
18 November 2020: Our paper with Richard Cole and Yixin Tao, "Parallel Stochastic Asynchronous Coordinate Descent: Tight Bounds on the Possible Parallelism", is accepted to the SIAM Journal on Optimization (SIOPT). This is a companion work to an earlier paper accepted to Mathematical Programming Series A (MPA). In the SIOPT paper, we use an explicit construction to show that when the number of cores used for an asynchronous implementation of stochastic coordinate descent exceeds a certain limit, the process can diverge quickly from the minimizer. The limit we show asymptotically matches the lower bound presented in the MPA paper.
15 November 2020: To generate high-quality animations of volume changes in learning dynamical systems, we need to simulate the system from many starting points. This took a long time on a CPU. Recently, I learnt that a GPU can accelerate this process by hundreds of times. Via GPU programming in Python, I generated the prototype video below, in which both players in the Matching Pennies game (left) and a tiny perturbation of it (right) learn using Multiplicative Weights Update (MWU). You will see that the volume expands in both cases, but the shapes in the later stages are very different, even though the underlying games are very similar. The video is 4 minutes long. The first 2 minutes are a bit boring; be patient (or skip them...).
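A minimal CPU sketch of this kind of simulation, assuming the standard exponential form of MWU: vectorized updates over many random starting points in Matching Pennies. (On a GPU, much of this runs unchanged by substituting CuPy, whose API mirrors NumPy; all names here are illustrative, not my production code.)

```python
import numpy as np

def mwu_trajectories(A, B, eps=0.1, steps=200, n_points=1000, seed=0):
    """Run MWU from many random starting points at once (vectorized).

    A, B are the 2x2 payoff matrices of the row and column player.
    Returns arrays of shape (steps + 1, n_points) holding the probability
    each player places on their first strategy.
    """
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.05, 0.95, n_points)   # row player's P(strategy 0)
    y = rng.uniform(0.05, 0.95, n_points)   # column player's P(strategy 0)
    xs, ys = [x], [y]
    for _ in range(steps):
        ux = A @ np.vstack([y, 1 - y])      # expected payoff of each pure strategy
        uy = B.T @ np.vstack([x, 1 - x])
        wx = np.vstack([x, 1 - x]) * np.exp(eps * ux)   # multiplicative weights
        wy = np.vstack([y, 1 - y]) * np.exp(eps * uy)
        x, y = wx[0] / wx.sum(axis=0), wy[0] / wy.sum(axis=0)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

A = np.array([[1.0, -1.0], [-1.0, 1.0]])    # Matching Pennies, row player
xs, ys = mwu_trajectories(A, -A)
```

Plotting the point cloud (xs[t], ys[t]) frame by frame gives exactly the kind of animation described above.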
25 September 2020: Our paper with Georgios Piliouras, "Chaos, Extremism and Optimism: Volume Analysis of Learning in Games", is accepted to NeurIPS 2020. We prove via volume analysis that Multiplicative Weights Update (MWU) in two-person zero-sum games is chaotic and exhibits bad behaviors (which we call extremism). Briefly, the volume of a learning system quantifies its possibilities. By proving that the volume expands quickly, we show that the learning system becomes more unpredictable in the long run, which can be formally captured as Lyapunov chaos. See a short video for volume expansion in 3D, of MWU in a cyclic Matching Pennies game.
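To see the volume expansion concretely, here is a small numerical toy of my own (not the paper's code): in the dual coordinates a = log(x/(1−x)), b = log(y/(1−y)), one MWU step in Matching Pennies becomes a simple map, and the area of a tiny square of starting conditions grows under iteration, since each step multiplies area by 1 + 16·eps²·s'(a)·s'(b) > 1 where s is the sigmoid.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mwu_dual_step(a, b, eps=0.1):
    """One MWU step in Matching Pennies, in dual coordinates: x = sigmoid(a)
    and y = sigmoid(b) are the probabilities of each player's first strategy."""
    x, y = sigmoid(a), sigmoid(b)
    return a + eps * 2 * (2 * y - 1), b - eps * 2 * (2 * x - 1)

def polygon_area(pts):
    """Shoelace formula for a simple polygon given as an (n, 2) vertex array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# A tiny square of starting conditions next to the interior equilibrium.
h = 1e-3
square = np.array([[0, 0], [h, 0], [h, h], [0, h]], dtype=float)
areas = [polygon_area(square)]
for _ in range(500):
    square = np.array([mwu_dual_step(a, b) for a, b in square])
    areas.append(polygon_area(square))
# areas is strictly increasing: the set of possibilities keeps spreading out.
```

This is the two-dimensional analogue of the 3D video mentioned above.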
1 August 2020: Our paper with Richard Cole and Yixin Tao, "Fully Asynchronous Stochastic Coordinate Descent: a Tight Lower Bound on the Parallelism Achieving Linear Speedup", is accepted to Mathematical Programming Series A. In numerical optimization, it is natural to consider parallel and asynchronous implementations of first-order methods, e.g. gradient descent. Ideally, we hope to achieve a speedup proportional to the number of cores (processors); this is called linear speedup. But linear speedup can only be guaranteed when the number of cores is below a limit that depends on the convexity parameters of the function being minimized. We present an asymptotically tight lower bound on this limit.
Such implementations without any synchronization overhead can cause subtle asynchrony effects, which make a rigorous analysis difficult. Prior works bypassed some of these difficulties via assumptions. Our analysis makes essentially no assumptions, except that a thread can only overlap with a bounded number of other threads.
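As a rough illustration of what stale reads look like (a sequential toy model of mine, not the papers' parallel setting or their divergence construction), consider coordinate descent on a quadratic where each update sees a delayed snapshot of the other coordinates:

```python
import numpy as np

def stale_coordinate_descent(Q, b, steps=5000, delay=5, seed=0):
    """Minimize f(x) = 0.5 x^T Q x - b^T x by stochastic coordinate descent,
    with a toy model of asynchrony: each update reads the OTHER coordinates
    from a snapshot that is `delay` iterations old."""
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    history = [x.copy()]
    for t in range(steps):
        j = rng.integers(n)
        stale = history[max(0, t - delay)]        # delayed snapshot
        off = Q[j] @ stale - Q[j, j] * stale[j]   # sum_{k != j} Q[j,k] * stale[k]
        g = Q[j, j] * x[j] + off - b[j]           # partial derivative with stale reads
        x[j] -= g / Q[j, j]                       # step of size 1/L_j
        history.append(x.copy())
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive definite, well-conditioned
b = np.array([1.0, 1.0])
x = stale_coordinate_descent(Q, b)       # still converges to the minimizer here
```

On this well-conditioned instance the stale reads are harmless; the papers pin down quantitatively when, with many cores, such effects start to destroy convergence.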
24 July 2020: I gave a talk at SUTD on "Online Optimization in Games via Control Theory", a joint work with Georgios Piliouras. The slides are available here.
Publications (in reverse chronological order)
Manuscript / Ongoing Work
[M1] Allocation of Mining Resources in the Blockchain Economy
with Stefanos Leonardos, Shyam Sridhar and Georgios Piliouras. Working Paper, appeared in WINE 2020 Workshop on Blockchain.
Chaos of Learning Beyond Zero-sum and Coordination via Game Decompositions
with Yixin Tao. To appear in ICLR 2021.
Fully Asynchronous Stochastic Coordinate Descent: A Tight Lower Bound on the Parallelism Achieving Linear Speedup
with Richard Cole and Yixin Tao. Accepted to Mathematical Programming (Series A) in September 2020.
Parallel Stochastic Asynchronous Coordinate Descent: Tight Bounds on the Possible Parallelism
with Richard Cole and Yixin Tao. In SIAM Journal on Optimization (SIOPT) Volume 31 Issue 1, February 2021. [official][arxiv]
Chaos, Extremism and Optimism: Volume Analysis of Learning in Games
with Georgios Piliouras. NeurIPS 2020.
Tatonnement Beyond Gross Substitutes? Gradient Descent to the Rescue
with Richard Cole and Nikhil R. Devanur. STOC 2013.
[official for STOC]
Full journal version in Games and Economic Behavior Volume 123, September 2020 (Special Issue for STOC/FOCS/SODA 2013).
[official for GEB]
Vortices Instead of Equilibria in MinMax Optimization: Chaos and Butterfly Effects of Online Learning in Zero-Sum Games
with Georgios Piliouras. COLT 2019.
Tracing Equilibrium in Dynamic Markets via Distributed Adaptation
with Martin Hoefer and Paresh Nakhe. AAMAS 2019.
Steiner Point Removal -- Distant Terminals Don't (Really) Bother
SODA 2018.
Dynamics of Distributed Updating in Fisher Markets
with Richard Cole and Yixin Tao. ACM EC 2018.
Spanning Tree Congestion and Computation of Generalized Győri-Lovász Partition
with L. Sunil Chandran and Davis Issac. ICALP 2018.
Multiplicative Weights Updates with Constant Step-Size in Graphical Constant-Sum Games
NeurIPS 2018.
Amortized Analysis of Asynchronous Price Dynamics
with Richard Cole. ESA 2018.
On Fair Division of Indivisible Items
with Bhaskar Ray Chaudhury, Jugal Garg, Naveen Garg, Martin Hoefer and Kurt Mehlhorn. FSTTCS 2018.
Graph Minors for Preserving Terminal Distances Approximately -- Lower and Upper Bounds
with Gramoz Goranci and Monika Henzinger. ICALP 2016.
Better Strategyproof Mechanisms without Payments or Prior -- An Analytic Approach
Combinatorial Auctions with Conflict-Based Externalities
with Monika Henzinger, Martin Hoefer and Martin Starnberger. WINE 2015.
Tatonnement in Ongoing Markets of Complementary Goods
with Richard Cole and Ashish Rastogi. ACM EC 2012.