NYCAC 2016 - Abstracts

Friday, November 18, 2016, New York City
The Graduate Center, CUNY
Room 9207 (9th floor)


The New York Colloquium on Algorithms and Complexity


New Insights on the (Non)-Hardness of Circuit Minimization and Related Problems
Eric Allender

The Minimum Circuit Size Problem (MCSP) and a related problem (MKTP) that deals with time-bounded Kolmogorov complexity are prominent candidates for NP-intermediate status. We prove new unconditional lower bounds for MKTP, by showing that MKTP is hard for the complexity class DET (a class that captures the complexity of computing the determinant; DET contains NL) under non-uniform NC^0 reductions.
Under a suitable derandomization hypothesis, we obtain uniform reductions, and we are even able to show that circuit size lower bounds are equivalent to hardness results for certain problems related to MKTP, and provide the first results showing that hardness of MCSP problems implies hardness for MKTP problems. We also present new results relating the complexity of various promise problems related to MCSP and MKTP. We show that, under modest cryptographic assumptions, some of these promise problems must have intermediate complexity: they have no solution in BPP and are not NP-hard under BPP reductions.
Joint work with Shuichi Hirahara.
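
To make the underlying decision problem concrete, here is a minimal brute-force sketch, purely illustrative: given the full truth table of a Boolean function, MCSP asks whether it has a circuit of at most a given size. The NAND gate basis, function name, and exhaustive search below are assumptions made for illustration only; MKTP asks the analogous question for time-bounded Kolmogorov complexity.

def has_small_circuit(truth_table, max_gates):
    """
    Brute-force check: does the Boolean function given by truth_table
    (entry i is f(i), with i read as an n-bit input) have a circuit of
    at most max_gates two-input NAND gates?  Exponential time; this only
    illustrates the decision problem that MCSP asks about.
    """
    n = (len(truth_table) - 1).bit_length()
    assert len(truth_table) == 2 ** n
    target = int("".join(map(str, truth_table)), 2)
    full = (1 << 2 ** n) - 1  # bitmask with one position per input

    # Bitmask truth tables of the n input variables.
    inputs = []
    for var in range(n):
        mask = 0
        for x in range(2 ** n):
            mask = (mask << 1) | ((x >> (n - 1 - var)) & 1)
        inputs.append(mask)

    def search(wires, gates_left):
        if target in wires:
            return True
        if gates_left == 0:
            return False
        for i in range(len(wires)):          # feed a new NAND gate with
            for j in range(i, len(wires)):   # any two existing wires
                new = full & ~(wires[i] & wires[j])
                if new not in wires and search(wires + [new], gates_left - 1):
                    return True
        return False

    return search(inputs, max_gates)

# XOR on two inputs: impossible with 1 NAND gate, possible with 4.
print(has_small_circuit((0, 1, 1, 0), 1))  # False
print(has_small_circuit((0, 1, 1, 0), 4))  # True
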


Alexandr Andoni

We survey recent advances in the approximate nearest neighbor search problem in high-dimensional Euclidean/Hamming spaces, which go beyond the classic Locality Sensitive Hashing technique for the problem. The culmination of these advances is a new optimal hashing algorithm that achieves the full trade-off between space and query time. For example, we obtain the first algorithm with near-linear space and sub-linear query time for any approximation factor greater than 1, which is perhaps the most important regime in practice.
Our algorithm also unifies, simplifies, and improves upon the previous data structures for the problem, combining elements of data-dependent hashing and Locality Sensitive Filtering.
Joint work with Thijs Laarhoven (IBM Research Zurich), Ilya Razenshteyn (MIT), and Erik Waingarten (Columbia). Preprint at http://arxiv.org/pdf/1608.03580v1.pdf
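
As a rough point of reference for the classic technique mentioned above, here is a minimal bit-sampling LSH sketch for Hamming space (function names and parameters are illustrative assumptions; this is the baseline approach, not the data-dependent hashing or Locality Sensitive Filtering schemes from the paper):

import random

def bit_sampling_lsh(points, dim, num_tables=10, bits_per_table=8, seed=0):
    """
    Classic bit-sampling LSH for Hamming space: each hash table keys points
    by the values of a few randomly chosen coordinates, so nearby points
    collide more often than far-away ones.
    """
    rng = random.Random(seed)
    tables = []
    for _ in range(num_tables):
        coords = rng.sample(range(dim), bits_per_table)
        table = {}
        for idx, p in enumerate(points):
            key = tuple(p[c] for c in coords)
            table.setdefault(key, []).append(idx)
        tables.append((coords, table))
    return tables

def query(tables, points, q):
    """Return the colliding candidate closest to q in Hamming distance."""
    candidates = set()
    for coords, table in tables:
        candidates.update(table.get(tuple(q[c] for c in coords), []))
    if not candidates:
        return None
    return min(candidates, key=lambda i: sum(a != b for a, b in zip(points[i], q)))

# Example on random 64-bit points.
random.seed(1)
pts = [[random.randint(0, 1) for _ in range(64)] for _ in range(1000)]
q = pts[42][:]
for i in random.sample(range(64), 5):   # flip 5 bits to create a near-neighbor query
    q[i] ^= 1
print(query(bit_sampling_lsh(pts, 64), pts, q))  # very likely prints 42
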

On the Query Complexity of Boolean Monotonicity Testing
Xi Chen

Monotonicity testing has been a touchstone problem in property testing for more than fifteen years, with many exciting recent developments in just the past few years. When specialized to Boolean-valued functions over {0,1}^n, we are interested in the number of queries required to distinguish whether an unknown function is monotone or far from every monotone function. In this talk we discuss recent results on Boolean monotonicity testing and some related problems, focusing on the lower bound side.
Based on joint works with Anindya De, Rocco Servedio, Li-Yang Tan, Erik Waingarten, and Jinyu Xie.
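
For orientation on the query model, here is a sketch of the classic one-sided edge tester (the function names and query budget are illustrative assumptions; the talk concerns how many queries any tester must make, not this particular algorithm):

import random

def edge_tester(f, n, num_queries):
    """
    Classic edge tester for monotonicity of f: {0,1}^n -> {0,1}.
    Repeatedly pick a random hypercube edge (a point and the same point with
    one coordinate flipped from 0 to 1) and reject if the edge violates
    monotonicity, i.e. f(lower) = 1 and f(upper) = 0.
    Monotone functions are always accepted.
    """
    for _ in range(num_queries // 2):   # two queries per sampled edge
        x = [random.randint(0, 1) for _ in range(n)]
        i = random.randrange(n)
        lower, upper = x[:], x[:]
        lower[i], upper[i] = 0, 1
        if f(tuple(lower)) == 1 and f(tuple(upper)) == 0:
            return "reject"
    return "accept"

# Example: the "NOT x_1" function is far from every monotone function.
f = lambda x: 1 - x[0]
print(edge_tester(f, n=10, num_queries=1000))  # almost surely "reject"
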

The Muffin Problem
William Gasarch

Work by: Guangqi Cui, Naveen Durvasula, William Gasarch, Naveen Raman, Sung Hyun Yoo
Consider the following problem:
You have 5 muffins and 3 students. You want to divide the muffins evenly so that everyone gets 5/3 of a muffin. You can clearly divide each muffin into 3 equal pieces and give each student 5 of the resulting pieces, for a total of 5/3. NOTE: the smallest piece is then of size 1/3.
Is there a procedure where the smallest piece is BIGGER than 1/3?
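
One such procedure, checked with exact arithmetic below, cuts 4 muffins into pieces of sizes 5/12 and 7/12 and the remaining muffin into two halves; every piece then has size at least 5/12 > 1/3. (Whether this is best possible is the kind of question the talk addresses; the code only sanity-checks this particular division.)

from fractions import Fraction as F

# Cut 4 muffins into (5/12, 7/12) and 1 muffin into (6/12, 6/12),
# then hand out the pieces as follows.
pieces = {
    "student 1": [F(7, 12), F(7, 12), F(6, 12)],
    "student 2": [F(7, 12), F(7, 12), F(6, 12)],
    "student 3": [F(5, 12)] * 4,
}

all_pieces = [p for ps in pieces.values() for p in ps]
assert sum(all_pieces) == 5                                # exactly 5 muffins' worth
assert all(sum(ps) == F(5, 3) for ps in pieces.values())   # everyone gets 5/3
print("smallest piece:", min(all_pieces), ">", F(1, 3))    # 5/12 > 1/3
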
More generally: what is the best you can do with m muffins and s students? We have proven many results about this problem that involve rather complex arguments.
In this talk I will discuss the cases for THREE students, FOUR students and FIVE students.
The arguments are combinatorial, interesting, and beautiful.

Approximating the Nash Social Welfare with Indivisible Items
Vasilis Gkatzelis

We study the problem of allocating a set of indivisible items among agents with additive valuations with the goal of maximizing the geometric mean of the agents' valuations, i.e., the Nash social welfare. This problem is known to be APX-hard, and our main result is the first efficient constant-factor approximation algorithm for this objective. We first observe that the integrality gap of the natural fractional relaxation is unbounded, so we propose a different fractional allocation which implies a tighter upper bound and, after appropriate rounding, yields a good integral allocation. The intuition that leads to this new algorithm leverages the close connection between the Nash social welfare objective and the competitive equilibria in Fisher markets.
Joint work with Richard Cole; appeared in STOC 2015
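
For a concrete sense of the objective, the sketch below evaluates the Nash social welfare of an allocation and brute-forces a toy instance (illustrative only; the exhaustive search is exponential, whereas the talk's contribution is a polynomial-time constant-factor approximation):

from itertools import product
from math import prod

def nash_social_welfare(valuations, allocation):
    """
    Geometric mean of the agents' additive valuations under an allocation.
    valuations[i][j] = value of item j to agent i
    allocation[j]    = index of the agent who receives item j
    """
    n = len(valuations)
    totals = [0.0] * n
    for j, agent in enumerate(allocation):
        totals[agent] += valuations[agent][j]
    return prod(totals) ** (1.0 / n)

def best_allocation_brute_force(valuations, num_items):
    """Exact maximizer by enumerating all n^m allocations -- tiny instances only."""
    n = len(valuations)
    return max(product(range(n), repeat=num_items),
               key=lambda a: nash_social_welfare(valuations, a))

# Two agents, three items.
vals = [[4, 1, 1],
        [1, 3, 3]]
alloc = best_allocation_brute_force(vals, 3)
print(alloc, nash_social_welfare(vals, alloc))
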

True Randomness from Big Data
Periklis A. Papakonstantinou

Big Data is a fashionable buzzword that carries a considerable amount of research activity. So far, work on Big Data has focused on how to make use of it, i.e., on discovering useful structure. Our take is completely different. We view Big Data as a source of low-quality randomness. Our goal is to extract provably "purely" uniformly distributed random bits.
The main technical obstacle is that each sample from a Big Data Source is big...
Formally, the processing has to happen in a streaming fashion. Prior work showed that randomness extraction using a single stream is provably impossible. We instead develop a theory of randomness extraction on a data streaming machine with two streams. More importantly, we perform extensive experiments on 12 types of big data sources commonly found on the Internet. Our experimental results overwhelmingly indicate that this first-ever streaming randomness extractor outperforms every previous practical extraction method, including those based on quantum randomness expanders.
Joint work with David Woodruff and Guang Yang
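
Purely as a classical point of comparison, here is the textbook single-stream von Neumann extractor for independent biased bits (this is not the two-stream streaming extractor described above, which handles far weaker sources):

import random

def von_neumann_extract(bit_stream):
    """
    Classic von Neumann extractor: read the (possibly biased but independent)
    input bits in pairs; output the first bit of each 01 or 10 pair and
    drop the 00 and 11 pairs, yielding unbiased output bits.
    """
    it = iter(bit_stream)
    for a, b in zip(it, it):
        if a != b:
            yield a

# Example: a source of independent bits that are 1 with probability 0.9.
biased = (1 if random.random() < 0.9 else 0 for _ in range(100000))
out = list(von_neumann_extract(biased))
print(len(out), sum(out) / len(out))  # roughly 9000 bits, empirical mean near 0.5
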

Learning Sums of Independent Commonly Supported Integer Random Variables
Rocco Servedio

This talk is about learning Sums of Independent Commonly Supported Integer Random Variables, or SICSIRVs. For S a finite set of natural numbers, an "S-supported SICSIRV" is a random variable which is the sum of N independent random variables each of which is supported on S. So, for example, if S = {0,1}, then an S-supported SICSIRV is a sum of N independent but not necessarily identically distributed Bernoulli random variables (a.k.a. a Poisson Binomial random variable). Daskalakis et al. (STOC 2012, FOCS 2013) have shown that for S={0,1,...,k-1}, there is an algorithm for learning an unknown S-SICSIRV using poly(k,1/\epsilon) draws from the distribution and running in poly(k,1/\epsilon) time, independent of N.
In this work we investigate the problem of learning an unknown S-SICSIRV where S={a_1,...,a_k} may be any finite set. We show that the complexity of learning S-SICSIRVs undergoes a sharp transition when the size of S goes from three to four.
Joint work with Anindya De and Phil Long.
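
To illustrate the distribution family being learned, the sketch below draws samples from an S-supported SICSIRV (the support, weights, and seed are arbitrary choices; this shows only the definition, not the learning algorithm):

import random

def sample_sicsirv(support, weight_lists):
    """
    Draw one sample from an S-supported SICSIRV: a sum of N independent
    (not necessarily identically distributed) random variables, each
    supported on `support`.  weight_lists[i] gives the probability of each
    support element for the i-th summand.
    """
    return sum(random.choices(support, weights=w)[0] for w in weight_lists)

# Example: S = {0, 1} and N = 100 gives a Poisson Binomial random variable.
random.seed(0)
weights = [[1 - p, p] for p in [random.random() for _ in range(100)]]
samples = [sample_sicsirv([0, 1], weights) for _ in range(10000)]
print(sum(samples) / len(samples))  # close to the sum of the p_i
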

Computationally Efficient Learning in Auctions
Vasilis Syrgkanis

A line of recent work provides welfare guarantees of simple combinatorial auction formats, such as selling m items via simultaneous second price auctions. These guarantees hold even when the auctions are repeatedly executed and players use no-regret learning algorithms. Unfortunately, off-the-shelf no-regret algorithms for these auctions are computationally inefficient as the number of actions is exponential. We show that this obstacle is insurmountable: there are no polynomial-time no-regret algorithms for simultaneous second-price auctions, even when the bidders are unit-demand. Our lower bound raises the question of how good outcomes polynomially-bounded bidders may discover in such auctions. To answer this question, we propose a novel concept of learning in auctions, termed "no-envy learning." This notion is founded upon Walrasian equilibrium, and we show that it is both efficiently implementable and results in approximately optimal welfare, even when the bidders have fractionally subadditive (XOS) valuations (assuming demand oracles) or coverage valuations (without demand oracles). Our results for XOS valuations are enabled by a novel Follow-The-Perturbed-Leader algorithm for settings where the number of experts is infinite, and the payoff function of the learner is non-linear. Our result for coverage valuations is based on a novel use of convex rounding schemes and a reduction to online convex optimization.
Joint work with Constantinos Daskalakis
Link to paper: https://arxiv.org/abs/1511.01411
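
For orientation, here is a sketch of the textbook Follow-The-Perturbed-Leader rule for a finite set of experts, using exponentially distributed perturbations (parameters and names are illustrative assumptions; the paper's contribution is extending this style of algorithm to infinitely many experts with non-linear payoffs):

import random

def ftpl(payoffs, eta=1.0):
    """
    Follow-The-Perturbed-Leader for a finite set of experts (actions).
    payoffs[t][a] = payoff of action a at round t, revealed after playing.
    At each round, play the action maximizing cumulative past payoff plus
    a fresh exponentially distributed perturbation; return the total payoff.
    """
    num_actions = len(payoffs[0])
    cumulative = [0.0] * num_actions
    total = 0.0
    for round_payoffs in payoffs:
        noise = [random.expovariate(eta) for _ in range(num_actions)]
        action = max(range(num_actions), key=lambda a: cumulative[a] + noise[a])
        total += round_payoffs[action]
        for a in range(num_actions):
            cumulative[a] += round_payoffs[a]
    return total

# Example: 3 actions over 1000 rounds; compare FTPL to the best fixed action.
random.seed(1)
payoffs = [[random.random() * w for w in (0.2, 0.5, 0.9)] for _ in range(1000)]
print(ftpl(payoffs), max(sum(p[a] for p in payoffs) for a in range(3)))
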