Algorithmic Problems Related to the Internet

Christos H. Papadimitriou

University of California, Berkeley, USA


The Internet has arguably superseded the computer as the most complex

cohesive artifact (if you can call it that), and is, quite predictably, the

source of a new generation of foundational problems for Theoretical Computer

Science. These new theoretical challenges emanate from several novel aspects of the

Internet: (a) Its unprecedented size, diversity, and availability as an information

repository; (b) its novel nature as a computer system that intertwines a

multitude of economic interests in varying degrees of competition; and (c) its

history as a shared resource architecture that emerged in a remarkably {\em ad hoc} yet

gloriously successful manner. In this talk I will survey some recent research

(done in collaboration with Joan Feigenbaum, Dick Karp, Elias Koutsoupias,

and Scott Shenker) on problems of the latter two kinds.

To a large extent, the Internet owes its remarkable ascent

to the surprising empirical success of a congestion control protocol.

A flow between two points strives to discover the correct transmission

rate by probing higher and higher rates, increasing linearly over time.

As links shared by several flows reach their capacity, one or more packets may be lost,

and, in response, the corresponding flow {\em halves} its transmission

rate. The fantastic empirical success of this crude mechanism raises important, and

at times theoretically interesting, questions related to novel search algorithms (the process

clearly contains a novel variant of binary search), on-line algorithms

(when the true capacity that a flow can use changes in an unknown and, in the

worst case, adversarial fashion), and algorithmic game theory (why is it that

flows seem to be unwilling, or unmotivated, to move away from TCP?).
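
The rule just described is the classic additive-increase/multiplicative-decrease
(AIMD) scheme. A minimal simulation makes the resulting behavior concrete; the
capacity value and the loss model below (a packet is lost exactly when the rate
exceeds capacity) are simplifying assumptions for illustration, not a faithful
model of TCP.

\begin{verbatim}
# Minimal AIMD sketch: one flow probing an unknown link capacity.
# CAPACITY and the loss model are illustrative assumptions.

CAPACITY = 100.0   # unknown to the flow; fixed here for the demo
ALPHA = 1.0        # additive increase per round
BETA = 0.5         # multiplicative decrease on loss (TCP halves)

def aimd(rounds, rate=1.0):
    """Simulate additive increase / multiplicative decrease."""
    history = []
    for _ in range(rounds):
        if rate > CAPACITY:   # congestion: a packet is lost
            rate *= BETA      # halve the transmission rate
        else:
            rate += ALPHA     # no loss: increase linearly
        history.append(rate)
    return history

trace = aimd(300)
print("final rate: %.1f (capacity %.1f)" % (trace[-1], CAPACITY))
\end{verbatim}

Run for a few hundred rounds, the rate traces the familiar sawtooth just below
capacity; each halving brackets the unknown capacity from above, which is where
the binary-search flavor mentioned above comes from.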

In a situation in which the same piece of information

is multicast to millions of individuals, it is of interest to determine the price that each

recipient must pay in return. The information in question is of different

value to each individual, and this value is known only to that individual.

The participants must engage in a {\em mechanism},

that is to say, a distributed protocol whereby they determine the individuals who will

actually receive the information, and the price each will pay. Economists over

the past forty years have studied such situations arising in auctions, welfare

economics, and pricing of public goods, and have come up with clever mechanisms

---solutions that satisfy various sets of desiderata (``axioms'')--- or ingenious

proofs that satisfying other sets of desiderata is impossible. In the context of the

Internet, however, an important computational desideratum emerges: {\em The protocol

should be efficient}, with respect to the total number of messages sent. It turns

out that this novel condition changes the situation dramatically, and carves out a

new set of pricing mechanisms that are appropriate and feasible.
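
To make the message-efficiency desideratum concrete, here is a deliberately
simple posted-price mechanism, offered purely as a toy sketch and not as one of
the mechanisms discussed in the talk: each individual whose private value meets
a fixed price receives the information and pays that price. Truthful
participation is a dominant strategy, and each participant needs to send only a
single accept/decline message. All values below are hypothetical.

\begin{verbatim}
# Toy posted-price mechanism (illustrative only): at a fixed price p,
# exactly the individuals who value the information at p or more
# receive it, and each pays p. One accept/decline message per player.

def posted_price(values, price):
    """values: private valuations (one list here only to simulate)."""
    recipients = [i for i, v in enumerate(values) if v >= price]
    payments = {i: price for i in recipients}
    return recipients, payments

values = [3.0, 12.5, 7.0, 0.5]   # hypothetical private valuations
winners, pay = posted_price(values, price=5.0)
print(winners)   # [1, 2]
print(pay)       # {1: 5.0, 2: 5.0}
\end{verbatim}

A fixed price, of course, trades away other desiderata (for instance, the
revenue it collects need not match the actual cost of the multicast); the point
of the sketch is only that a mechanism can be judged by how many messages it
requires.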