NYCAC 2021 - Abstracts

Friday, December 17, 2021
NYCAC 2021 will take place virtually, on Webex


The New York Colloquium on Algorithms and Complexity


Pathwise-random trees and models of second-order arithmetic
George Barmpalias and Wei Wang

A tree is pathwise-random if all of its paths are algorithmically random, with a fixed upper bound on their randomness deficiency. We show that: (a) an algorithmically random real computes a pathwise-random tree with infinitely many paths if and only if it computes the halting problem; it follows that the class of pathwise-random trees with unbounded width is null, with respect to any computable measure; (b) there exists a positive-measure pathwise-random tree which does not compute any complete extension of Peano arithmetic; and (c) there exists a perfect pathwise-random tree which does not compute any positive pathwise-random tree. We use these results to obtain models of second-order arithmetic that separate compactness principles below weak König's lemma, answering questions by Chong et al. (2019). We also discuss connections with Levin's quantitative version of Gödel's incompleteness theorem.

Classification Program of Counting Problems
Jin-Yi Cai

I will give a (partial) overview of our ongoing classification program of the complexity of counting problems. The classification is for classes of sum-of-product computation and is addressed in three successively more general frameworks: Graph homomorphisms (or vertex models), Counting constraint satisfaction problems (#CSP), and Holant problems. I will describe some complexity dichotomy theorems that classify every problem in a class as either polynomial-time computable or #P-hard, depending on the specific constraint functions that define the problem. There are also complexity trichotomy theorems that carve out exactly the class of problems that are #P-hard in general but solvable in polynomial time on planar instances. These include Valiant's holographic reductions to Kasteleyn's algorithm for planar perfect matchings, and more. Many open problems remain.
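To make the sum-of-product form concrete, here is the standard graph homomorphism partition function (a textbook formulation, not taken verbatim from the talk): for a fixed symmetric $k \times k$ matrix $A$ of edge weights and an undirected graph $G = (V, E)$,
\[
  Z_A(G) \;=\; \sum_{\sigma : V \to [k]} \;\prod_{(u,v) \in E} A_{\sigma(u), \sigma(v)},
\]
and a dichotomy theorem classifies, for each fixed $A$, whether computing $Z_A(\cdot)$ is polynomial-time computable or #P-hard.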

Characterizations and approximability of hard counting classes below #P
Aggeliki Chalki

An important objective of research in counting complexity is to understand which counting problems are approximable. In this quest, the complexity class TotP, a hard subclass of #P, is of key importance, as it contains self-reducible counting problems with easy decision version, which makes them eligible for approximation. Indeed, most problems known so far to admit an fpras fall into this class. An open question raised recently by the descriptive complexity community is to find a logical characterization of TotP and of robust subclasses of TotP. In this work we define two subclasses of TotP in terms of descriptive complexity, both of which are robust in the sense that they have natural complete problems, defined in terms of satisfiability of Boolean formulae. We then explore the relationship between the class of approximable counting problems and TotP. We prove that TotP is not a subclass of FPRAS if and only if NP ≠ RP, and that FPRAS is not a subclass of TotP unless RP = P. To this end we introduce two ancillary classes that can both be seen as counting versions of RP. We further show that FPRAS lies between one of these classes and a counting version of BPP. Finally, we provide a complete picture of the inclusions among all the classes defined or discussed in this paper with respect to different conjectures about the NP vs. RP vs. P questions. This is joint work with Eleni Bakali, Aris Pagourtzis, and Stathis Zachos.

Hierarchical Clustering: Recent Progress and Open Questions
Evangelos Chatziafratis

Hierarchical Clustering is an important tool for unsupervised learning whose goal is to construct a hierarchical decomposition of a given dataset describing relationships at all levels of granularity simultaneously. Despite its long history, Hierarchical Clustering was underdeveloped from a theoretical perspective, partly because of a lack of suitable objectives and algorithms with guarantees. In this talk, I want to tell you about the recent progress in the area with an emphasis on approximation algorithms and hardness results, and also highlight some interesting open problems.

Deterministic Budget-Feasible Clock Auctions
Vasilis Gkatzelis

We revisit the well-studied problem of budget-feasible procurement, where a buyer with a strict budget constraint seeks to acquire services from a group of strategic providers (the sellers). During the last decade, several strategyproof budget-feasible procurement auctions have been proposed, aiming to maximize the value of the buyer, while eliciting each seller’s true cost for providing their service. These solutions predominantly take the form of randomized sealed bid auctions: they ask the sellers to report their private costs and then use randomization to determine which subset of services will be procured and how much each of the chosen providers will be paid, ensuring that the total payment does not exceed the buyer’s budget. Our main result in this paper is a novel method for designing budget-feasible auctions, leading to solutions that outperform the previously proposed auctions in multiple ways. First, our solutions take the form of descending clock auctions, and thus satisfy a list of very appealing properties, such as obvious strategyproofness, group strategyproofness, transparency, and unconditional winner privacy; this makes these auctions much more likely to be used in practice. Second, in contrast to previous results that heavily depend on randomization, our auctions are deterministic. As a result, we provide an affirmative answer to one of the main open questions in this literature, asking whether a deterministic strategyproof auction can achieve a constant approximation when the buyer’s valuation function is submodular over the set of services. In addition to this, we also provide the first deterministic budget-feasible auction that matches the approximation bound of the best-known randomized auction for the class of subadditive valuations. Finally, using our method, we improve the best-known approximation factor for monotone submodular valuations, which has been the focus of most of the prior work. This is joint work with Eric Balkanski, Pranav Garimidi, Daniel Schoepflin, and Xizhi Tan.
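As a minimal illustration of the descending clock interaction pattern (a generic sketch under simplifying assumptions, e.g., a single uniform clock price for all sellers; this is not the mechanism designed in the paper), each active seller is repeatedly offered a weakly decreasing price and only ever decides whether to accept it or exit:

# Generic descending clock auction skeleton for procurement under a budget.
# NOT the paper's mechanism; it only illustrates the clock interaction pattern.
def descending_clock_auction(costs, budget, start_price, decrement):
    """costs: seller id -> private cost, used here only to simulate the
    accept/exit decision of a truthful seller."""
    price = start_price
    active = set(costs)
    # Lower the clock until paying every remaining seller fits within the budget.
    while active and price * len(active) > budget:
        price -= decrement
        # A truthful seller exits as soon as the offered price drops below her cost.
        active = {s for s in active if costs[s] <= price}
    # Winners are the sellers still active; each is paid the final clock price.
    return active, {s: price for s in active}

if __name__ == "__main__":
    costs = {"a": 3.0, "b": 5.0, "c": 9.0}
    winners, payments = descending_clock_auction(costs, budget=10.0,
                                                 start_price=12.0, decrement=0.5)
    print(winners, payments)   # {'a', 'b'} paid 5.0 each, total within the budget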

Efficient Algorithms for Learning from Coarse Labels
Alkis Kalavasis

For many learning problems one may not have access to fine-grained label information; e.g., an image can be labeled as husky, dog, or even animal depending on the expertise of the annotator. In this work, we formalize these settings and study the problem of learning from such coarse data. Instead of observing the actual labels from a set Z, we observe coarse labels corresponding to a partition of Z (or a mixture of partitions). Our main algorithmic result is that essentially any problem learnable from fine-grained labels can also be learned efficiently when the coarse data are sufficiently informative. We obtain our result through a generic reduction for answering Statistical Queries (SQ) over fine-grained labels given only coarse labels. The number of coarse labels required depends polynomially on the information distortion due to coarsening and the number of fine labels |Z|. We also investigate the case of (infinitely many) real-valued labels, focusing on a central problem in censored and truncated statistics: Gaussian mean estimation from coarse data. We provide an efficient algorithm when the sets in the partition are convex and establish that the problem is NP-hard even for very simple non-convex sets.
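As a toy illustration of the convex (interval) case of Gaussian mean estimation from coarse data, the following sketch runs an EM-style update when each sample from a one-dimensional Gaussian with known variance is reported only as the cell of a fixed interval partition into which it fell. This is an illustrative simplification, not the algorithm from the paper:

# Illustrative sketch (not the paper's algorithm): estimate the mean of a 1-D
# Gaussian with known variance when each sample is observed only as the interval
# (cell of a fixed partition of the real line) into which it fell.
import numpy as np
from scipy.stats import norm

def em_mean_from_intervals(observed, sigma=1.0, iters=100, mu0=0.0):
    """observed: list of (a, b) intervals, one per sample; a may be -inf, b may be +inf."""
    mu = mu0
    for _ in range(iters):
        expectations = []
        for a, b in observed:
            alpha, beta = (a - mu) / sigma, (b - mu) / sigma
            z = norm.cdf(beta) - norm.cdf(alpha)
            if z <= 0:          # numerically empty cell under the current estimate
                continue
            # E[X | a < X < b] for X ~ N(mu, sigma^2): the truncated-normal mean.
            expectations.append(mu + sigma * (norm.pdf(alpha) - norm.pdf(beta)) / z)
        mu = float(np.mean(expectations))
    return mu

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cuts = [-np.inf, -1.0, 0.0, 1.0, 2.0, np.inf]   # interval partition of the line
    samples = rng.normal(loc=0.7, scale=1.0, size=2000)
    coarse = [(cuts[np.searchsorted(cuts, x) - 1], cuts[np.searchsorted(cuts, x)])
              for x in samples]                     # the cell each sample fell in
    print(em_mean_from_intervals(coarse))           # should be close to 0.7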

A Simple Algorithm for Multiple-Source Shortest Paths in Planar Digraphs
Evangelos Kipouridis

Given an n-vertex planar embedded digraph G with non-negative edge weights and a face f of G, Klein presented a data structure with O(n log n) space and preprocessing time which can answer any query (u,v) for the shortest path distance in G from u to v or from v to u in O(log n) time, provided u is on f. This data structure is a key tool in a number of state-of-the-art algorithms and data structures for planar graphs. Klein's data structure relies on dynamic trees and the persistence technique, as well as a highly non-trivial interaction between primal shortest path trees and their duals. We give an alternative data structure with the same functionality whose construction follows a completely different and, in our opinion, very simple divide-and-conquer approach that solely relies on Single-Source Shortest Path computations and contractions in the primal graph. Our space and preprocessing time bound is O(n log |f|) and our query time is O(log |f|), which is an improvement over Klein's data structure when f has small size. Based on joint work with Debarati Das, Maximilian Probst Gutenberg, and Christian Wulff-Nilsen [paper to appear in SOSA 2022].
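The sketch below illustrates only the divide-and-conquer recursion over the sources on f, with each level handled by a plain Dijkstra SSSP computation; it deliberately omits the contraction step, so it does not achieve the stated bounds and is not the construction from the paper:

# Toy sketch: recurse over the boundary sources, handling the median source with
# an ordinary Dijkstra SSSP computation and then recursing on both halves.
# Without the contractions used in the actual construction, this simply stores
# one full distance array per source on f.
import heapq

def dijkstra(graph, s):
    """graph: {u: [(v, w), ...]} with non-negative weights; returns distances from s."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def build(graph, face_vertices, table=None):
    """Divide and conquer over the sources on the face f."""
    if table is None:
        table = {}
    if not face_vertices:
        return table
    m = len(face_vertices) // 2
    table[face_vertices[m]] = dijkstra(graph, face_vertices[m])   # SSSP from the median source
    build(graph, face_vertices[:m], table)                        # recurse on the left half
    build(graph, face_vertices[m + 1:], table)                    # recurse on the right half
    return table

def query(table, u, v):
    """Shortest-path distance from u to v, where u lies on the face f."""
    return table[u].get(v, float("inf"))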

Worst-Case Efficient Dynamic Geometric Independent Set
Grigorios Koumoutsos

We consider the problem of maintaining an approximate maximum independent set of geometric objects under insertions and deletions. We present data structures that maintain a constant-factor approximate maximum independent set for broad classes of fat objects in d dimensions, where d is assumed to be a constant, in sublinear worst-case update time. This gives the first results for dynamic independent set in a wide variety of geometric settings, such as disks, fat polygons, and their high-dimensional equivalents. Our result is obtained via a two-level approach. First, we develop a dynamic data structure which stores all objects and provides an approximate independent set when queried, with output-sensitive running time. We show that via standard methods such a structure can be used to obtain a dynamic algorithm with amortized update time bounds. Then, to obtain worst-case update time algorithms, we develop a generic deamortization scheme that with each insertion/deletion keeps (i) the update time bounded and (ii) the number of changes in the independent set constant. We show that such a scheme is applicable to fat objects by proving an appropriate generalization of a separator theorem. Interestingly, we show that our deamortization scheme is also necessary in order to obtain worst-case update bounds: if for a class of objects our scheme is not applicable, then no constant-factor approximation with sublinear worst-case update time is possible. We show that such a lower bound applies even for seemingly simple classes of geometric objects, including axis-aligned rectangles in the plane.

Prompt Truthful Mechanisms
Alkiviadis Mertzios

A natural property that one desires from an online scheduling algorithm is to know when a job will begin and complete its execution. If an algorithm decides the completion time of a job upon its arrival, it is called prompt. Promptness was first introduced by Eden et al., and it leads to fixed patterns of execution, which are independent of the input. As such, these algorithms can be easily extended to truthful mechanisms, since declarations by the jobs are inconsequential to the choices of the mechanism. We apply this idea to Parallel Machine Scheduling and to the Travelling Repairman Problem to obtain prompt truthful mechanisms.

One-way Functions and a Conditional Variant of MKTP
Dimitrios Myrisiotis

One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is equivalent to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some natural NP-complete problem. In this paper, we make progress on this question by studying a conditional variant of the Minimum KT-complexity Problem (MKTP), which we call McKTP, as follows. 1. First, we prove that if McKTP is average-case hard on a polynomial fraction of its instances, then there exist OWFs. 2. Then, we observe that McKTP is NP-complete under polynomial-time randomized reductions. 3. Finally, we prove that the existence of OWFs implies the nontrivial average-case hardness of McKTP. Thus the existence of OWFs is inextricably linked to the average-case hardness of this NP-complete problem. In fact, building on recent results of Ren and Santhanam (CCC 2021), we show that McKTP is hard-on-average if and only if there are logspace-computable OWFs. Joint work with Eric Allender, Mahdi Cheraghchi, Harsha Tirumala and Ilya Volkovich (FSTTCS 2021).

Global Convergence of Multi-Agent Policy Gradient in Markov Potential Games
Ioannis Panageas

Potential games are arguably one of the most important and widely studied classes of normal-form games. They define the archetypal setting of multi-agent coordination, as all agent utilities are perfectly aligned with each other via a common potential function. Can this intuitive framework be transplanted to the setting of Markov Games? What are the similarities and differences between multi-agent coordination with and without state dependence? We present a novel definition of Markov Potential Games (MPGs) that generalizes prior attempts at capturing complex stateful multi-agent coordination. Counter-intuitively, insights from normal-form potential games do not carry over, as MPGs can consist of settings where state-games can be zero-sum games. In the opposite direction, Markov games where every state-game is a potential game are not necessarily MPGs. Nevertheless, MPGs showcase standard desirable properties such as the existence of deterministic Nash policies. In our main technical result, we prove fast convergence of independent policy gradient (and its stochastic variant) to Nash policies by adapting recent gradient dominance property arguments developed for single-agent MDPs to multi-agent learning settings.
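For orientation, one common way to formalize the Markov potential property (a paraphrase of the standard definition, not necessarily the exact statement from the paper) is that there exists a potential function $\Phi$ over joint policies such that, for every agent $i$, every pair of policies $\pi_i, \pi_i'$, every profile $\pi_{-i}$ of the other agents, and every initial state distribution $\rho$,
\[
  V_i^{(\pi_i,\pi_{-i})}(\rho) - V_i^{(\pi_i',\pi_{-i})}(\rho)
  \;=\;
  \Phi^{(\pi_i,\pi_{-i})}(\rho) - \Phi^{(\pi_i',\pi_{-i})}(\rho),
\]
where $V_i$ denotes agent $i$'s value function.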

Strategyproof Facility Location in Perturbation Stable Instances
Panagiotis Patsilinakos

We consider $k$-Facility Location games, where $n$ strategic agents report their locations on the real line, and a mechanism maps them to $k\ge 2$ facilities. Each agent seeks to minimize her distance to the nearest facility. We are interested in (deterministic or randomized) strategyproof mechanisms without payments that achieve a reasonable approximation ratio to the optimal social cost of the agents. To circumvent the inapproximability of $k$-Facility Location by deterministic strategyproof mechanisms, we restrict our attention to perturbation stable instances. An instance of $k$-Facility Location on the line is $\gamma$-perturbation stable (or simply, $\gamma$-stable), for some $\gamma\ge 1$, if the optimal agent clustering is not affected by moving any subset of consecutive agent locations closer to each other by a factor at most $\gamma$. We show that the optimal solution is strategyproof in $(2+\sqrt{3})$-stable instances whose optimal solution does not include any singleton clusters, and that allocating the facility to the agent next to the rightmost one in each optimal cluster (or to the unique agent, for singleton clusters) is strategyproof and $(n-2)/2$-approximate for $5$-stable instances (even if their optimal solution includes singleton clusters). On the negative side, we show that for any $k\ge 3$ and any $\delta > 0$, there is no deterministic anonymous mechanism that achieves a bounded approximation ratio and is strategyproof in $(\sqrt{2}-\delta)$-stable instances. We also prove that allocating the facility to a random agent of each optimal cluster is strategyproof and $2$-approximate in $5$-stable instances. To the best of our knowledge, this is the first time that the existence of deterministic (resp. randomized) strategyproof mechanisms with a bounded (resp. constant) approximation ratio is shown for a large and natural class of $k$-Facility Location instances.

On the Complexity of Consensus-Halving and Necklace Splitting
Aris Filos-Ratsikas

The Necklace Splitting problem and its continuous counterpart, the Consensus-Halving problem, are classic problems in combinatorics, with applications to fair division and social choice theory. A distinctive characteristic of these problems is that they are total, i.e., they always have a solution, but it was not known until recently whether such a solution can be found efficiently. In this talk, I will present results from a series of recent papers (from 2018 to 2021) related to the computational complexity of these problems. The talk will also provide a gentle introduction to the computational subclasses of TFNP, the class of total search problems for which a solution can be verified in polynomial time.

Opening Pandora’s Box: the Correlated Case
Christos Tzamos

The Pandora's Box problem and its extensions capture optimization problems with stochastic input where the algorithm can obtain the exact instantiations of input random variables at some cost. All previous work on this class of problems makes the, somewhat unrealistic, assumption that different random variables in the input are distributed independently. Removing this assumption, we present the first approximation algorithms for Pandora's Box-type problems with correlations. Algorithms for these problems must determine an order in which to probe random variables, as well as when to stop and return the best solution found so far. In general, an optimal algorithm may make both decisions adaptively based on instantiations observed previously. Such "fully adaptive" (FA) strategies cannot be efficiently approximated to within any sub-linear factor with sample access. We initially focus on the simpler objective of approximating "partially adaptive" (PA) strategies that probe random variables in a fixed predetermined order but decide when to stop based on the instantiations observed. We consider a number of different feasibility constraints and provide simple PA strategies that are approximately optimal with respect to the best PA strategy for each case. Shifting our attention back to FA strategies, with explicitly given distributions, we connect Pandora's Box to the well-studied Optimal Decision Tree (ODT) problem. Specifically, we show, via a reduction to a simpler version of Pandora's Box, that the problem is equivalent (up to constant factors) to the Uniform Decision Tree problem, making it strictly easier than ODT.
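As a toy illustration of the "partially adaptive" strategy format described above (a simulator of the fixed-order/adaptive-stopping structure under one common cost convention, namely probing costs plus the value of the alternative finally kept; not the paper's approximation algorithm):

# Toy simulator of a partially adaptive (PA) strategy: boxes are probed in a
# fixed, predetermined order, but the decision of when to stop may depend on
# the values observed so far.
def run_pa_strategy(order, probe_costs, values, stop_rule):
    """order: fixed probing order; stop_rule(observed) -> True means stop now."""
    total_probe_cost = 0.0
    observed = {}
    for box in order:                      # the fixed order: the non-adaptive part
        total_probe_cost += probe_costs[box]
        observed[box] = values[box]        # instantiation revealed at a cost
        if stop_rule(observed):            # adaptive stopping: the adaptive part
            break
    # Pay the probing costs plus the value of the best (cheapest) box found.
    return total_probe_cost + min(observed.values())

if __name__ == "__main__":
    costs = {"a": 1.0, "b": 1.0, "c": 1.0}
    values = {"a": 7.0, "b": 2.0, "c": 5.0}              # one (possibly correlated) scenario
    stop = lambda obs: min(obs.values()) <= 3.0          # stop once a cheap enough box appears
    print(run_pa_strategy(["a", "b", "c"], costs, values, stop))   # 1 + 1 + 2 = 4.0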

Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games
Manolis Vlatakis

In this talk, motivated by recent advances in both theoretical and applied aspects of multiplayer games, spanning from e-sports to multi-agent generative adversarial networks, we focus on min-max optimization in two-team zero-sum games. In this class of games, players are split into two teams with payoffs equal within the same team and of opposite sign across the opponent team. We discuss the position of these games inside the TFNP taxonomy. From an optimization perspective, we present our negative results about a vast family of first-order methods. Leveraging techniques from control theory, we complement these negative results by designing a modified Gradient Descent-Ascent (GDA) method that converges locally to Nash equilibria. Finally, we discuss connections of our framework with AI architectures with team competition structures, such as multi-agent generative adversarial networks.
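Schematically (a paraphrase of the setting, not a statement from the talk), if team A's members control blocks $x_1,\dots,x_n$ and team B's members control blocks $y_1,\dots,y_m$, the interaction is the min-max problem
\[
  \min_{x_1 \in \mathcal{X}_1,\dots,x_n \in \mathcal{X}_n}\;
  \max_{y_1 \in \mathcal{Y}_1,\dots,y_m \in \mathcal{Y}_m}\;
  f(x_1,\dots,x_n,\,y_1,\dots,y_m),
\]
where each player independently controls only her own block.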