Friday, November 18, 2016,
New York City

The Graduate Center, CUNY

Room 9207 (9th floor)

Eric Allender

Under a suitable derandomization hypothesis, we obtain uniform reductions, and we are even able to show that circuit size lower bounds are equivalent to hardness results for certain problems related to MKTP, and provide the first results showing that hardness of MCSP problems implies hardness for MKTP problems. We also present new results relating the complexity of various promise problems related to MCSP and MKTP. We show that, under modest cryptographic assumptions, some of these promise problems must have intermediate complexity: they have no solution in BPP and are not NP-hard under BPP reductions.

Joint work with Shuichi Hirahara.

Alexandr Andoni

Our algorithm also unifies, simplifies, and improves upon the previous data structures for the problem, combining elements of data-dependent hashing and Locality Sensitive Filtering.

Joint work with Thijs Laarhoven (IBM Research Zurich), Ilya Razenshteyn (MIT), and Erik Waingarten (Columbia). Preprint at http://arxiv.org/pdf/1608.03580v1.pdf

Xi Chen

Based on joint works with Anindya De, Rocco Servedio, Li-Yang Tan, Erik Waingarten, and Jinyu Xie.

William Gasarch

Consider the following problem:

You have 5 muffins and 3 students. You want to divide the muffins evenly so that everyone gets 5/3 of a muffin. You can clearly divide each muffin into 3 pieces and give each student 5 of them. NOTE: the smallest piece has size 1/3.

Is there a procedure where the smallest piece is BIGGER than 1/3?

More generally: what is the best you can do with m muffins and s students? We have proven many results about this problem, involving rather complex arguments.
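For the 5-muffin, 3-student case above, the answer is yes: there is a division whose smallest piece is 5/12 > 1/3. The specific cuts below are an illustrative construction (not taken from the talk); the sketch verifies it with exact rational arithmetic.

```python
from fractions import Fraction as F

# Illustrative division: cut four muffins into pieces {7/12, 5/12}
# and one muffin into two halves.
muffins = [[F(7, 12), F(5, 12)]] * 4 + [[F(1, 2), F(1, 2)]]

# Students A and B each take two 7/12 pieces and one half;
# student C takes the four 5/12 pieces.
students = [
    [F(7, 12), F(7, 12), F(1, 2)],
    [F(7, 12), F(7, 12), F(1, 2)],
    [F(5, 12), F(5, 12), F(5, 12), F(5, 12)],
]

assert all(sum(m) == 1 for m in muffins)         # every muffin fully used
assert all(sum(s) == F(5, 3) for s in students)  # everyone gets 5/3
smallest = min(piece for s in students for piece in s)
print(smallest)  # 5/12, beating the naive 1/3
```

Note that the multiset of pieces handed to students exactly matches the multiset of pieces cut from muffins, so this is a valid procedure.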

In this talk I will discuss the cases of THREE students, FOUR students, and FIVE students.

The arguments are combinatorial, interesting, and beautiful.

Vasilis Gkatzelis

Joint work with Richard Cole; appeared in STOC 2015.

Periklis A. Papakonstantinou

The main technical obstacle is that each sample from a Big Data Source is big...

Formally, the processing has to happen in a streaming fashion. Prior work had shown that randomness extraction using a single stream is provably impossible. We instead develop a theory of randomness extraction on a data streaming machine with two streams. More importantly, we run extensive experiments on 12 types of big-data sources commonly found on the Internet. Our experimental results overwhelmingly indicate that this first-ever Streaming Randomness Extractor outperforms every previous practical extraction method, including those based on quantum randomness expanders.

Joint work with David Woodruff and Guang Yang.

Rocco Servedio

In this work we investigate the problem of learning an unknown S-SICSIRV where S={a_1,...,a_k} may be any finite set. We show that

- when |S|=3, it is possible to learn any S-SICSIRV with poly(1/\epsilon) samples, independent of the values a_1,a_2,a_3;
- when |S| >= 4, it is possible to learn any S-SICSIRV with poly(1/\epsilon, \log \log a_k) samples; and
- already when |S|=4, at least \log \log a_k samples may be required for learning.

Joint work with Anindya De and Phil Long.

Vasilis Syrgkanis

Joint work with Constantinos Daskalakis.

Link to paper: https://arxiv.org/abs/1511.01411