__ Symmetries in Games, and the Complexity of Nash Equilibria __

How long does it take a market or, more broadly, a game to reach an equilibrium state? I will review recent work with Paul Goldberg and Christos Papadimitriou showing that convergence to a Nash equilibrium may take prohibitively long. Since a Nash equilibrium is guaranteed to exist in every game---by Nash's seminal result---the theory of NP-completeness does not seem appropriate for characterizing its complexity. Our result is that computing a Nash equilibrium is, in a precise technical sense, as hard as the Brouwer fixed-point computation problem. The existence of Nash equilibria was established via Brouwer's fixed-point theorem; hence our result is the computational converse of Nash's theorem.

To alleviate the negative implications of this hardness result, I will consider special classes of games with potentially better computational features. One such class is the class of symmetric games, in which all players are identical: they have the same payoff function, and this function is symmetric with respect to the other players' strategies. Symmetric games were already considered in Nash's original 1951 paper, where it was shown that a symmetric equilibrium, assigning the same mixed strategy to every player, always exists. As a consequence of this result, an equilibrium for these games can be computed in polynomial time, as long as the number of strategies available to the players is small.

Motivated by this positive result, I will consider a broader class of games, called anonymous games, in which a player's utility depends only on the aggregate behavior of the other players, as in symmetric games, but each player may now have a different utility function. This class of games is arguably more attractive and relevant to applications than symmetric games; examples arise in traffic, social interactions, and certain auction settings. I will present several approaches for approximating multiplayer anonymous games with a small number of strategies, culminating in a polynomial-time approximation scheme whose dependence on the approximation guarantee is quasi-polynomial. All approximation schemes are based on characterizing the symmetries of these games via finitary Central Limit Theorems.
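To illustrate the anonymity condition with a toy sketch (my own, not from the talk; the payoff table u is hypothetical): a player's utility depends on her own strategy and only on how many of the other players choose each strategy, not on which players do.

```python
def anonymous_utility(u, own, others):
    """Utility in a two-strategy anonymous game: u[own][k] is the
    payoff for playing `own` when exactly k of the OTHER players
    play strategy 1. Permuting `others` cannot change the result."""
    k = sum(others)          # aggregate behavior: how many others play 1
    return u[own][k]

# Hypothetical congestion-style game with 3 players and strategies
# 0 = "stay home", 1 = "drive"; driving pays less as more others drive.
u = {0: [1, 1, 1],          # staying home always pays 1
     1: [4, 2, 0]}          # index: number of OTHER players driving
print(anonymous_utility(u, 1, [0, 0]))   # 4: driving alone
print(anonymous_utility(u, 1, [1, 0]))   # 2: same for [0, 1] -- anonymity
```

In a symmetric game every player would share the same table u; in an anonymous game each player may carry her own.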

__ The power of bounded independence against threshold functions __

TBA

__ The Combinatorial Approach to the Graph Isomorphism Problem __

The question of whether the graph isomorphism problem has a polynomial-time solution is still wide open. No efficient algorithm is in sight, and it seems almost certain that graph isomorphism is not NP-complete. There are polynomial-time algorithms for the classes of graphs with bounded genus, bounded degree, and bounded eigenvalue multiplicity. For the second and third classes, group-theoretic methods are essential, and for the bounded eigenvalue multiplicity case, methods from linear algebra and numerical approximation are also used. Given the combinatorial flavor of the problem, it is natural to ask whether combinatorial methods could also provide efficient solutions. It turns out that the most natural classes of combinatorial algorithms have some definite shortcomings.

__ Rectangle Free Coloring of Grids __

The following is known.

THEOREM:

For every c \ge 1 there exists a natural number G=G(c)
such that, for every c-coloring of the GxG grid,
there exists a square all of whose corners are the same
color.

END OF THEOREM

There are several proofs of this theorem; however, in all of
them the function G(c) is bounded by a very fast-growing function.

What if, instead of a square, we only ask for
a rectangle? What are the bounds then?
That is the subject of this talk.

We call a grid c-colorable if it can be c-colored
without any monochromatic rectangle.
In this talk we completely characterize which grids are 2-colorable
and which grids are 3-colorable.
We also make some progress on 4-colorability.
The proofs are a nice blend of combinatorics and computer science.
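The central definition is easy to test by brute force. Below is a small checker (my own sketch, not the authors' code) that decides whether a colored grid contains a monochromatic axis-aligned rectangle:

```python
from itertools import combinations

def has_mono_rectangle(grid):
    """True if some axis-aligned rectangle in the colored grid has all
    four corners the same color; grid[r][c] is the color of cell (r, c)."""
    rows, cols = len(grid), len(grid[0])
    for r1, r2 in combinations(range(rows), 2):
        for c1, c2 in combinations(range(cols), 2):
            if grid[r1][c1] == grid[r1][c2] == grid[r2][c1] == grid[r2][c2]:
                return True
    return False

# A rectangle-free 2-coloring of the 3x3 grid:
ok = [[0, 0, 1],
      [0, 1, 0],
      [1, 0, 0]]
print(has_mono_rectangle(ok))                # False
print(has_mono_rectangle([[0, 0], [0, 0]]))  # True: the 2x2 all-0 grid
```

A grid is c-colorable in the talk's sense exactly when some assignment of c colors makes this checker return False; exhaustively searching such assignments is how small cases can be settled by computer.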

__Efficient signature schemes supporting redaction, pseudonymization, and
data deidentification__

We present a new signature algorithm that allows for controlled
changes to the signed data. The change operations we study are
removal of subdocuments (redaction), pseudonymization, and gradual
deidentification of hierarchically structured data. These
operations are applicable in a number of practically relevant
application scenarios, including the release of previously
classified government documents, privacy-aware management of
audit-log data, and the release of tables of health records. When
applied directly to redaction, our algorithm improves on previous
work by reducing significantly the overhead of cryptographic
information that has to be stored with the original data.
(This is joint work with Yasuo Hatano, Yoshinori Honda, Bill Horne,
Kunihiko Miyazaki, Tomas Sander, Satoru Tezuka and Danfeng Yao.)

__The Shield that Never Was: Societies with Single-Peaked Preferences
Are More Open to Manipulation and Control__

Much work has been devoted, during the past twenty years, to using
complexity to protect elections from manipulation and control. Many
results have been obtained showing NP-hardness shields, and recently
there has been much focus on whether such worst-case hardness
protections can be bypassed by frequently correct heuristics or by
approximations. Our work takes a very different approach: We argue
that when electorates follow the canonical political science model of
societal preferences the complexity shield never existed in the first
place. In particular, we show that for electorates having
single-peaked preferences, many existing NP-hardness results on
manipulation and control evaporate. This is joint work with Piotr
Faliszewski, Edith Hemaspaandra, and Joerg Rothe.

__Random Inner Product Graphs as Models for Complex Networks__

Random Inner Product Graphs are strict generalizations
of Erdos-Renyi random graphs that endow the classical random graph model
with semantics on nodes and links, while keeping the analysis mathematically tractable.
In particular, nodes are sampled from a distribution in d-dimensional space,
and links between two nodes are added with probability proportional to
the inner product of the vectors corresponding to the nodes.
In this talk we give results concerning fundamental structural properties
of random inner product graphs, including degree distribution, diameter
and clustering (in the global sense of conductance or spectral gap).
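As a concrete illustration of the model, here is a minimal sampler sketch under one assumed node distribution (the talk does not prescribe this choice): coordinates are drawn uniformly from [0, 1/sqrt(d)], so every pairwise inner product lies in [0, 1] and can serve directly as an edge probability.

```python
import random

def sample_rig(n, d, seed=0):
    """Sample a random inner product graph: n nodes with uniform vectors
    in [0, 1/sqrt(d)]^d, an edge {i, j} with probability equal to the
    inner product of the two node vectors."""
    rng = random.Random(seed)
    scale = d ** -0.5
    vecs = [[rng.random() * scale for _ in range(d)] for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            p = sum(a * b for a, b in zip(vecs[i], vecs[j]))  # in [0, 1]
            if rng.random() < p:
                edges.add((i, j))
    return vecs, edges
```

Note the Erdos-Renyi special case: if every node received the same vector v with squared norm p, each edge would appear independently with probability exactly p, recovering the classical model.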

__ Matching problems in blue-red graphs __

TBA

__How to Guard the Guards Themselves__

In its first 20 years, modern cryptography was considered to be the guard of our information and systems. In recent years, as cryptographic schemes have been embedded in real systems, it has turned out that implementations are open to various attacks due to the inherent weaknesses of the hosting systems.
To cope with these issues, methods for designing cryptosystems that mitigate real-life exposures have been developed. We will review such methods, techniques and systems.
