Friday, November 15, 2013,
New York City

The Graduate Center, CUNY

Room 4102 (Science Center)

__ Revisiting some Questions about the Structure of Complete Sets __

This lecture will re-examine three approaches that have been explored in order to arrive at an understanding of the "natural" NP-complete sets:

- isomorphism,
- creativity, and
- universality.

__
Random Walks and Log Space Complexity
__

Reingold has shown that L = SL, that s-t connectivity in a poly-mixing digraph is complete for promise-RL, and that s-t connectivity for a poly-mixing out-regular digraph with known stationary distribution is in L. The relationship between random walks on digraphs and various log space bounded complexity classes such as L, RL, BPL, NL, FewL, UL, and ReachUL has been well established. Some preliminary work has been done on identifying properties of digraphs that affect cover and mixing times, such as the spectral expansion and the Cheeger constant (or digraph conductance).

Since it is known that random walks on balanced digraphs are well behaved, we study random walk behavior as a function of how unbalanced a digraph is. We examine the complexity of random walks on a basic parameterized family of unbalanced digraphs called Strong Chains (which model weakly symmetric computation), and on a special family of Strong Chains called Harps. We identify a basic imbalance property, the number of asymmetric vertices; we show that the worst-case hitting times of Strong Chain families vary smoothly with this number, and we identify the necessary condition for non-polynomial cover time. This analysis also yields bounds on the cover times of general digraphs.

We survey and compare how graph structure affects the cover time for undirected and directed graphs. Our goal is to use these structural properties to develop space-efficient digraph modifications for randomized search and to develop derandomized search strategies for digraph families. Additionally, we relate random walks on graphs to solving large-domain problems by Monte Carlo methods using small space.
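The gap between balanced and unbalanced digraphs can be seen in a classic toy example (not the Strong Chains or Harps of the talk): a directed chain in which every vertex also has an edge back to the start, so the walk repeatedly collapses to vertex 0. A short simulation, with the chain length as an assumed parameter, exhibits hitting times growing exponentially rather than polynomially:

```python
import random

def hitting_time(n, rng):
    """Steps for a random walk to first reach vertex n on the directed
    chain 0 -> 1 -> ... -> n in which every vertex also has an edge back
    to vertex 0 (a maximally unbalanced digraph).  Each step moves
    forward with probability 1/2 and falls back to 0 otherwise."""
    pos, steps = 0, 0
    while pos != n:
        pos = pos + 1 if rng.random() < 0.5 else 0
        steps += 1
    return steps

def avg_hitting_time(n, trials=2000, seed=0):
    """Monte Carlo estimate of the expected hitting time, which for this
    family is 2^(n+1) - 2, i.e. exponential in the chain length."""
    rng = random.Random(seed)
    return sum(hitting_time(n, rng) for _ in range(trials)) / trials
```

On an undirected chain of the same length the expected hitting time is only quadratic; the back-edges to vertex 0 are what destroy polynomial cover time.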

__ Reductions from Mechanism to Algorithm Design __

Algorithmic mechanism design centers on the following question: how much harder is it to optimize an objective over inputs furnished by rational agents than over inputs that are known? We present computationally efficient reductions from mechanism design (i.e., optimizing over rational inputs) to algorithm design (i.e., optimizing over known inputs) in general Bayesian settings. We also explore whether structural properties of optimal mechanisms can be inferred from these reductions. As an application, we present extensions of Myerson's celebrated single-item auction to multi-item settings.
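As background for the single-item case: for i.i.d. bidders with values drawn uniformly from [0, 1], Myerson's optimal auction reduces to a second-price auction with reserve price 1/2, the zero of the virtual value phi(v) = v - (1 - F(v))/f(v) = 2v - 1. A minimal sketch under that uniform-prior assumption:

```python
def virtual_value(v):
    """Myerson virtual value for a U[0,1] bidder:
    phi(v) = v - (1 - F(v)) / f(v) = 2v - 1."""
    return 2 * v - 1

def myerson_single_item(bids):
    """Revenue-optimal single-item auction for i.i.d. U[0,1] bidders:
    a second-price auction with reserve phi^{-1}(0) = 1/2.
    Returns (winner index or None, payment)."""
    eligible = [(b, i) for i, b in enumerate(bids) if virtual_value(b) >= 0]
    if not eligible:
        return None, 0.0  # no bid clears the reserve; item unsold
    eligible.sort(reverse=True)
    winner = eligible[0][1]
    runner_up = eligible[1][0] if len(eligible) > 1 else 0.5
    return winner, max(0.5, runner_up)  # pay max(reserve, 2nd price)
```

The multi-item extensions discussed in the talk are precisely where this clean closed form breaks down and the reductions to algorithm design come in.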

__ A Survey of Verifiable Delegation of Computations __

In this talk I will give an overview of past and recent research in the
area of Verifiable Delegation of Computation. The goal is to enable a
computationally weak client to "outsource" the computation of a function
F on various inputs x_1,...,x_k to one or more powerful servers. The
server must return the result of each function evaluation, i.e.,
y_i = F(x_i), together with a proof that the computation of F was carried
out correctly on the given value x_i. A crucial requirement is that
verifying the proof should require substantially less computational
effort than computing F(x_i) from scratch.
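A classic small-scale illustration of this requirement (not one of the protocols surveyed in the talk) is Freivalds' algorithm: a client can check a claimed matrix product C = AB in O(n^2) time per round, versus O(n^3) to recompute it from scratch:

```python
import random

def matmul(A, B):
    """Naive O(n^3) matrix product, standing in for the outsourced work."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def freivalds_verify(A, B, C, rounds=20, seed=0):
    """Freivalds' check that A @ B == C: pick a random 0/1 vector r and
    test A(Br) == Cr, which costs only O(n^2) per round.  A wrong C is
    accepted with probability at most 2^-rounds."""
    rng = random.Random(seed)
    n = len(A)
    for _ in range(rounds):
        r = [rng.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # caught the server cheating
    return True
```

Here the "proof" is just the answer itself; the general-purpose protocols below achieve the same asymmetry for arbitrary F.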

For the "general purpose" case (protocols that work for any function F),
I will discuss the different ways this problem has been approached theoretically,
particularly the line of research linking Interactive Proofs to Probabilistically
Checkable Proofs to Succinct Non-Interactive Arguments. I will also survey recent
exciting experimental results showing that these techniques are on the verge of
becoming practical.

I will also talk about "ad hoc" protocols that aim to verify specific computations
of particular importance in practice.

__ Trading Mutual Information for Efficient Computation __

In recent years, as improvements in computational speed have slowed
and approached their limits, there have been remarkable advances in
the accessible speed and size of computer memory. We propose a system
that takes advantage of this large, fast memory to efficiently store
and recall relevant computational histories, automatically exploiting
patterns to accelerate certain computations.

Specifically, our project is developing a system that:

- uses information compression methods to efficiently cache full computation states,
- leverages and develops machine learning techniques to learn and analyze patterns in these computations, and finally
- speculatively collects and organizes resultant states and uses them to accelerate the original computation in many instances.
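A toy sketch of the caching idea above (hypothetical code, not the project's system): key a hashed snapshot of the full computation state to the state reached a fixed number of steps later, and fast-forward whenever a snapshot recurs:

```python
import hashlib

def step(state):
    """Toy deterministic computation: one step of a simple machine
    over a 3-register state (a, b, c)."""
    a, b, c = state
    return ((a + b) % 1000, (b + 1) % 1000, c ^ a)

def run(state, n):
    """Run the computation for n steps with no caching."""
    for _ in range(n):
        state = step(state)
    return state

class StateCache:
    """Cache of computation histories: maps a (hashed) full-state
    snapshot to the state reached after `stride` further steps, so a
    revisited snapshot lets the computation skip ahead."""
    def __init__(self, stride=10):
        self.stride = stride
        self.table = {}

    def key(self, state):
        # Hash stands in for the compressed state representation.
        return hashlib.sha256(repr(state).encode()).hexdigest()

    def run(self, state, n):
        while n >= self.stride:
            k = self.key(state)
            if k in self.table:
                state = self.table[k]           # cache hit: fast-forward
            else:
                nxt = run(state, self.stride)   # compute and remember
                self.table[k] = nxt
                state = nxt
            n -= self.stride
        return run(state, n)                    # finish the remainder
```

The cached run is guaranteed to agree with the uncached one because the transition function is deterministic; the speculative and learned components of the actual system go well beyond this sketch.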

Joint work with Jonathan Appavoo, Sam Epstein, Amos Waterland and Katherine Zhao.

__ The Computational Complexity of the Game SET and its Theoretical Applications __

The game of SET is a popular card game in which the objective is to form Sets using cards from a special deck. In this talk we analyze single- and multi-round variations of this game from the computational complexity point of view and establish interesting connections with other classical computational problems, such as Perfect Multi-Dimensional Matching, 3-Set Packing and Independent Edge Dominating Set. Finally we present a 2-player version of the game, for which we show a close connection to the game Arc Kayles.
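Concretely, a SET card has four attributes with three values each, and three cards form a Set iff every attribute is all-same or all-different across the three cards; encoding the values as 0, 1, 2, this is exactly the condition that each attribute sums to 0 mod 3. A minimal checker:

```python
from itertools import combinations, product

def is_set(cards):
    """Three cards form a Set iff each of the four attributes is
    all-same or all-different; over Z_3 this is equivalent to every
    attribute summing to 0 mod 3."""
    return all(sum(attr) % 3 == 0 for attr in zip(*cards))

# The full 81-card deck: every combination of 4 attributes x 3 values.
deck = list(product(range(3), repeat=4))
```

Since any pair of cards determines a unique third card completing a Set, the deck contains 81 * 80 / 6 = 1080 Sets, which brute-force enumeration confirms.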

__ Trustworthy Computing with Untrusted Resources __

In the age of big data, cloud computing plays a major role in processing
and analyzing massive amounts of information. Services like Amazon S3 and
EC2 offer easily accessible outsourced storage and computation, gradually
replacing our local hard drives and desktop machines. Nevertheless, many
security concerns have arisen in this new paradigm. In an untrusted cloud
setting, users’ data and computations can potentially be tampered with,
and sensitive data can be leaked to unauthorized parties.

In this talk, I will present my work that tackles the above-mentioned
problems through protocols and systems that offer verifiability and privacy
assurances for data and computations in the cloud (or, more generally, in
untrusted environments). First, I will review some of my work on securing storage
applications efficiently and present a fast and scalable system for
verifying dynamic data objects stored at untrusted cloud servers. Second, I
will describe applied cryptographic techniques for verifying more
expressive queries than simple storage queries. Such queries include
conjunctive and disjunctive keyword search, SQL, range search and
statistical queries, usually appearing in information retrieval and data
streaming applications. I will show that my protocols are efficient both in
theory and in practice. I will conclude by highlighting some of my recent
work on cloud privacy concerning efficient and parallel searching of
dynamic encrypted data and by summarizing future research directions.
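A standard building block for verifying outsourced storage (shown here as an illustrative sketch, not the specific system from the talk) is a Merkle hash tree: the client keeps only the root hash, and the server returns each requested block with a logarithmic-size authentication path. Assuming a power-of-two number of blocks:

```python
import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Merkle hash tree over the data blocks (count assumed a power of
    two); returns all levels, leaf hashes first, root last."""
    level = [H(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, idx):
    """Authentication path for leaf idx: the sibling hash at each level."""
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])  # sibling of the current node
        idx //= 2
    return path

def verify(root, block, idx, path):
    """Client-side check: recompute the root from the block and its
    path; the client stores only `root`."""
    h = H(block)
    for sib in path:
        h = H(h + sib) if idx % 2 == 0 else H(sib + h)
        idx //= 2
    return h == root
```

An update to one block changes only the hashes on its root path, which is what makes schemes built on this idea efficient for dynamic data.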

__ The Complexity of Non-Monotone Markets __

We introduce the notion of non-monotone utilities, which covers a wide variety of utility functions in economic theory. We show that it is PPAD-hard to compute an approximate Arrow-Debreu market equilibrium in markets with linear and non-monotone utilities. Building on this result, we settle the long-standing open problem regarding the complexity of computing an approximate Arrow-Debreu market equilibrium in markets with Constant Elasticity of Substitution (CES) utilities.
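For reference, the CES family mentioned above has the form u(x) = (sum_j alpha_j x_j^rho)^(1/rho) with rho <= 1, rho != 0: rho = 1 gives linear utilities, rho -> 0 approaches Cobb-Douglas, and rho < 0 treats the goods as complements. A small evaluation sketch:

```python
def ces_utility(alpha, x, rho):
    """Constant Elasticity of Substitution utility
    u(x) = (sum_j alpha_j * x_j^rho)^(1/rho), for rho <= 1, rho != 0."""
    return sum(a * xi ** rho for a, xi in zip(alpha, x)) ** (1.0 / rho)
```

For example, with alpha = (1, 1) and bundle (2, 3), rho = 1 gives the linear value 5, while rho = -1 gives the harmonic-style value 6/5.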

Joint work with Xi Chen and Mihalis Yannakakis.

__ Analyzing Quantum Circuits Via Polynomials __

This talk expands the simulation of quantum circuits by systems of polynomial equations, originally given by Dawson et al. (2004) for systems over Z_2, to other fields and rings. As an example, this enables Gottesman and Knill's polynomial-time simulation of stabilizer circuits to follow from the general classification theorem of Cai, Chen, Lipton, and Lu (2010) in the case of Z_4. In general, this facilitates simulating quantum circuits via computer algebra packages. Theorems about counting solutions to polynomials over Z_k, where k = 2^r, may imply limitations on efficient amplification of success probabilities by purely quantum means. We propose that the geometric-degree condition used in Baur-Strassen's algebraic lower bound technique may yield an effective measure of multi-partite entanglement, which may allow such lower bounds to be transferred.
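A minimal illustration of the counting side of this correspondence (the circuit-to-polynomial translation itself is in Dawson et al.): output amplitudes reduce to gaps between solution counts of a polynomial over Z_2, and such counts can be brute-forced for small instances:

```python
from itertools import product

def count_solutions(poly, n):
    """Count assignments in {0,1}^n on which `poly` evaluates to 0
    over Z_2; the amplitude-relevant quantity is the gap between the
    number of zeros and the number of ones."""
    return sum(1 for x in product((0, 1), repeat=n) if poly(x) % 2 == 0)

# Illustrative instance: p(x1, x2, x3) = x1*x2 + x3 over Z_2.
p = lambda x: x[0] * x[1] + x[2]
```

For p above, x3 is forced to equal x1*x2, so exactly 4 of the 8 assignments are solutions and the gap 4 - 4 is zero; the talk's contribution is moving this machinery from Z_2 to other fields and rings.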

Part of this ongoing work is joint with my student Robert Surowka.