Friday, November 18, 2011
New York City
The Graduate Center, CUNY
Room 4102
Kolmogorov Complexity, Circuits, and the Strength of Logical Theories of Arithmetic
Can complexity classes be characterized in terms of efficient reducibility to the (undecidable) set of Kolmogorov-random strings? Although this might seem improbable, a series of papers has recently provided evidence that this may be the case. In particular, it is known that there is
a class of problems C defined in terms of polynomial-time truth-table reducibility to R_K (the set of Kolmogorov-random strings) that lies between BPP and PSPACE. In this talk, we investigate improving this upper bound from PSPACE to PSPACE intersect P/poly. (P/poly is the class of problems with polynomial-size circuits.)
More precisely, we present a collection of true statements in the language of
arithmetic (each provable in ZF), and show that if these statements can be proved in certain extensions of Peano arithmetic, then
BPP ⊆ C ⊆ PSPACE ∩ P/poly.
We conjecture that C is equal to P, and discuss the possibility that this might be an avenue for trying to prove the equality of BPP and P.
Joint work with George Davie, Luke Friedman, Sam Hopkins, and Iddo Tzameret.
slides
Approximating k-Set Covers
(joint work with Huiwen Yu)
The k-set cover problem asks us to cover a set U with a minimum number of subsets of U from a given collection C of such subsets, all of which have size at most k. The Greedy algorithm obtains an approximation ratio of H(k) ≤ ln k + 1, where H(k) is the k-th harmonic number. Trevisan (2001) showed a hardness bound of ln k - O(log log k), based on Feige's (1998) hardness result for the general set cover problem.
We conjecture that the best ratio achievable in polynomial time is H(k) - O(1). Indeed, there have been many approximation algorithms improving the O(1) term. Much motivation comes from the trivial observation that for k=2 the problem is in P (it reduces to maximum matching), a gain of 1/2 in the approximation ratio from H(2). To retain this benefit of 1/2 for larger k, one has to avoid the occurrence of unnecessary singleton sets. Here it is assumed that the collection C is closed under subsets. Thus it is sufficient to consider disjoint covers.
For k ≥ 4, packing based algorithms are better than Greedy. It turns out to be quite challenging to determine the performance of sophisticated packing algorithms that avoid the occurrence of singleton sets.
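For reference, a minimal sketch of the classic Greedy algorithm the abstract measures against (the specific names here are illustrative, not from the talk):

```python
def greedy_set_cover(universe, collection):
    """Greedy for set cover: repeatedly pick the subset that covers
    the most still-uncovered elements. When every subset has size
    at most k, this achieves approximation ratio H(k)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(collection, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("collection does not cover the universe")
        cover.append(set(best))
        uncovered -= set(best)
    return cover
```

Note that plain Greedy happily takes singleton sets at the end of its run; the packing-based algorithms discussed below improve on it precisely by avoiding such unnecessary singletons.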
Using Ramsey's Theorem to Prove Programs Terminate
Given a program, we want to know that no matter what crazy thing
the user does, it will terminate. And we want to prove this.
This looks like HALT but actually it is much harder.
We will look at some programs where you can actually PROVE that
no matter what the user does they will terminate. We will do this in
two ways:
a) Traditional Method due to Floyd: Find some parameter that is
decreasing and such that when it hits bottom the program must
have terminated. This usually involves reasoning about single steps
(which is easy) but may involve a complicated ordering. (Note that
I said `decreasing', but the decrease may be in some complicated ordering.)
b) Ramsey Method due to Lee, Jones, Ben-Amram, Cook, Podelski, Rybalchenko
(and probably others). This involves reasoning about several steps (which is harder),
applying Ramsey's Theorem (which I think is COOL!), and using rather simple orderings.
We will discuss these two approaches and some open problems that arise.
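To make method (a) concrete, here is a small illustrative example (not from the talk): subtraction-based gcd, with the Floyd-style variant spelled out in the comments.

```python
def gcd(a, b):
    """Euclid's algorithm by repeated subtraction.
    Floyd-style termination proof: the parameter a + b is a
    positive integer that strictly decreases on every iteration,
    so the loop cannot run forever."""
    assert a > 0 and b > 0
    while a != b:
        # Whichever branch fires, a + b drops by min(a, b) > 0.
        if a > b:
            a = a - b
        else:
            b = b - a
    return a
```

Here the ordering is just the usual one on the natural numbers; the point of the talk is that for some programs no such single-step decreasing parameter is easy to find, and the Ramsey method trades harder multi-step reasoning for simpler orderings.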
slides
Using reputation and misinformation to influence users in selfish routing
Can we use deception to manipulate the selfish users of a network towards a desirable
equilibrium? In this talk, we will examine reputation as an instigator of beneficial user behavior
in selfish routing, when the network users rely on the network operator for information on
the network parameters. Instead of the use of tolls or artificial delays, the network operator takes
advantage of the users' insufficient data, in order to manipulate them through the information he
provides. The issue that arises then is what the operator's gain can be without compromising (by too
much) the trust users put in the information provided, i.e., while maintaining a reputation for (at least
some) trustworthiness. We will see an example of modeling such a system as a repeated game of incomplete
information in the case of single-commodity general networks with affine latency functions. This allows
us to apply known folk-like theorems to get bounds on the price of anarchy that are better (if that is
possible at all) than the well-known $4/3$ bound without any data fudging.
slides
The Landscape of Structural Graph Parameters
In traditional computational complexity we measure algorithm running
times as functions of one variable, the size of the input. Though in
this setting our goal is usually to design polynomial-time algorithms,
most interesting graph problems are unfortunately believed to require
exponential time to solve exactly. Parameterized complexity theory
refines this by introducing a second variable, called the parameter,
which is supposed to quantify the “hardness” of each specific
instance. The goal now becomes to confine the combinatorial explosion
to the parameter, by designing an algorithm that runs in time
polynomial in the size of the input, though inevitably exponential in
the parameter. This will allow us to tackle instances where the
parameter value is much more modest than the input size, which will
happen often if the parameter is chosen well.
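As a concrete (illustrative) instance of confining the explosion to the parameter, here is the textbook bounded-search-tree algorithm for Vertex Cover parameterized by the solution size k, running in O(2^k · m) time:

```python
def has_vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover
    of size at most k. Pick any uncovered edge (u, v); at least one
    endpoint must be in the cover, so branch on both choices.
    The recursion depth is at most k, so the 2^k blow-up depends
    only on the parameter, not on the input size."""
    if not edges:
        return True       # nothing left to cover
    if k == 0:
        return False      # edges remain but budget is exhausted
    u, v = next(iter(edges))
    for w in (u, v):
        remaining = [e for e in edges if w not in e]
        if has_vertex_cover(remaining, k - 1):
            return True
    return False
```

Vertex Cover number is itself one of the "other graph invariants" the talk considers as a structural parameter, alongside treewidth and max-leaf number.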
Of course, this setting is rather general and there are countless ways
in which one may attempt to measure the hardness of specific
instances, depending on the problem. The parameterized graph
algorithms literature has been dominated for a long time by a very
successful notion called treewidth, which measures graph complexity by
quantifying a graph's similarity to a tree. However, more recently the
study of alternative graph parameters and widths has become more
popular. In this (self-contained) talk we will attempt to explore the
algorithmic properties of treewidth, its related graph width
parameters, as well as other graph invariants which may serve as
measurements of a graph's complexity such as Vertex Cover, or Max-Leaf
number. We will focus especially on general results which prove
tractability for whole classes of problems characterized by
expressibility in certain logics, results often referred to as
"algorithmic meta-theorems".
slides
Hardness Preserving Constructions of Pseudorandom Functions
We show a hardness-preserving
construction of a PRF from any length-doubling
PRG, which improves upon known constructions
whenever we can put a non-trivial
upper bound $q$ on the number of queries to the PRF.
Our construction requires only $O(\log q)$ invocations of the
underlying PRG per query.
In comparison, the number of invocations by the best previous
hardness-preserving construction (GGM using Levin's
trick) is logarithmic in the \emph{hardness} of the PRG.
For example, starting from an exponentially secure PRG
$\{0,1\}^n \mapsto \{0,1\}^{2n}$, we get a PRF which is exponentially secure
if queried at most $q=\exp(\sqrt n)$ times and where each invocation
of the PRF requires $\Theta(\sqrt n)$ queries to the underlying
PRG.
This is much less than the $\Theta(n)$ required by known constructions.
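For orientation, the baseline being improved is the GGM tree: one PRG invocation per input bit. A sketch, using SHA-256 as a stand-in for the length-doubling PRG (that instantiation is my illustrative assumption; a real construction starts from a genuine PRG):

```python
import hashlib

def prg(seed: bytes) -> bytes:
    """Stand-in length-doubling 'PRG' G: {0,1}^n -> {0,1}^{2n},
    built from SHA-256 purely to illustrate the interface."""
    n = len(seed)
    out = hashlib.sha256(b"L" + seed).digest() + hashlib.sha256(b"R" + seed).digest()
    return out[: 2 * n]

def ggm_prf(key: bytes, x: str) -> bytes:
    """GGM construction: walk a binary tree, one PRG call per input
    bit, keeping the left or right half of the output. A query on
    an m-bit input costs m PRG invocations; the talk's construction
    brings this down to O(log q) when at most q queries are made."""
    n = len(key)
    s = key
    for bit in x:
        out = prg(s)
        s = out[n:] if bit == "1" else out[:n]
    return s
```

With n-bit inputs this is the $\Theta(n)$ invocation count the abstract contrasts against.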
Winner-Imposing Strategyproof Mechanisms for Multiple Facility Location Games
We study Facility Location games, where a number of facilities are
placed in a metric space based on locations reported by strategic
agents. A mechanism maps the agents’ locations to a set of facilities.
The agents seek to minimize
their connection cost, namely the distance of their true location to
the nearest facility, and may misreport their location. We are
interested in mechanisms that are
strategyproof, i.e., ensure that no agent can benefit from misreporting
her location, do not resort to monetary transfers, and approximate the
optimal social cost.
We focus on the closely related problems of k-Facility Location and
Facility Location with a uniform facility opening cost, and mostly
study winner-imposing
mechanisms, which allocate facilities to the agents and require that each agent
allocated a facility should connect to it. We show that the
winner-imposing version of the Proportional Mechanism (Lu et al., EC
’10) is strategyproof and 4k-approximate for the k-Facility Location
game. For the Facility Location game, we
show that the winner-imposing version of the randomized algorithm of
(Meyerson, FOCS ’01), which has an approximation ratio of 8, is
strategyproof. Furthermore, we present a deterministic non-imposing
group strategyproof O(log n)-approximate
mechanism for the Facility Location game on the line.
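A sketch of the core of the Proportional Mechanism on the line (my own illustrative rendering of Lu et al.'s idea, not code from the paper; the winner-imposing variant additionally forces each agent whose reported location is selected to connect to that facility):

```python
import random

def proportional_mechanism(locations, k, rng=random):
    """The first facility is placed at a uniformly random reported
    location; each subsequent facility is placed at an agent's
    location drawn with probability proportional to that agent's
    distance to her nearest already-opened facility."""
    facilities = [rng.choice(locations)]
    for _ in range(k - 1):
        dists = [min(abs(x - f) for f in facilities) for x in locations]
        total = sum(dists)
        if total == 0:
            break  # every agent already has a facility at distance 0
        r = rng.uniform(0, total)
        acc = 0.0
        for x, d in zip(locations, dists):
            acc += d
            if r <= acc:
                facilities.append(x)
                break
    return facilities
```

The random proportional choice is what makes truthful reporting safe: misreporting can only shift probability mass toward locations far from you.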
Player-Centric Byzantine Agreement
Most of the existing feasibility results for Byzantine Agreement (BA) are of an all-or-nothing
nature: in Broadcast, they address the question of whether or not there
exists a protocol which allows any player to broadcast his input.
Similarly, in Consensus the question is whether or not consensus can be
reached which respects pre-agreement on the inputs of all correct
players. In this work, we introduce the natural notion of player-centric BA which is a class of BA
primitives, denoted as PCBA={PCBA_C}_C, parametrized
by subsets C of the player set P. For each primitive PCBA_C the validity is defined on the input(s) of the players in C.
Broadcast (with sender p) and Consensus are special (extreme) cases of PCBA primitives for C={p} and C=P, respectively.
We study feasibility of PCBA in the presence of a general (aka non-threshold) mixed (active/passive)
adversary, and give a complete characterization for perfect, statistical, and computational security.
Our results expose an asymmetry of Broadcast which has, so far, been neglected in the literature: there exist
non-trivial adversaries which can be tolerated for Broadcast with sender some player p_i but not for every player p_j being the
sender. Finally, we extend the definition of PCBA by adding fail corruption to the adversary's capabilities,
and give exact feasibility bounds for computationally secure PCBA_P (aka Consensus) in this setting. This answers an open problem from
ASIACRYPT~2008 concerning feasibility of computationally secure multi-party computation in this model.