AGT 2012 SAMOS 1
Braess’s Paradox
A. Kaporis, Dept. of Information & Commun. Systems Eng., U. Aegean, Samos, Greece
& Research Academic Comp. Tech. Inst., U. Patras Campus, Greece
Joint work with:
D. Fotakis, National Technical University of Athens, School of Electrical and Computer Engineering
P. Spirakis, Dept. of Computer Eng. and Informatics, U. Patras, Greece
& Research Academic Comp. Tech. Inst., U. Patras Campus, Greece
Overview
Selfish routing on a network: an infinite number of infinitesimally small users, each wanting to minimize her travel time along the path she routes.
Selfishness yields a routing that no user wants to reroute (all used paths have the minimum latency), aka a Wardrop equilibrium.
But such a selfish equilibrium may increase society’s cost (the sum of all users’ travel times) arbitrarily.
Central question:
Is there a cheap (wrt the network designer) & fair (wrt the users) way to decrease society’s cost, while all users still route freely & selfishly?
Yes! Exploit Braess’s Paradox.
That is,
no matter how new & luxurious your network is, you can make all users travel faster by (cruelly) destroying a part of it.
A network toy example
A flow-dependent latency function per edge.
NE flow = the red “Z” (Zorro) subgraph.
NE path latency = 1 + 0 + 1 = 2.
OPT flow = the blue “O” subgraph.
OPT path latency = ½ + 1 = 3/2 < 2 = NE path latency.
A lazy designer just deletes edge uv & achieves optimum routing… so it is cheap (wrt the designer)!
No user is sacrificed through slower paths… so it is fair (wrt users)!
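The numbers above can be checked directly. A minimal sketch, assuming the classic Braess instance (the slides’ figure is not in the transcript): edges s→u and v→t with latency l(x) = x, edges s→v and u→t with latency l(x) = 1, shortcut u→v with latency 0, and total demand r = 1.

```python
# Numeric check of the toy example, under the assumed classic Braess
# latencies: s->u, v->t: l(x) = x; s->v, u->t: l(x) = 1; u->v: l(x) = 0.

def path_latencies(f_sut, f_svt, f_suvt):
    """Latency of each s-t path given the three path flows."""
    x_su = f_sut + f_suvt       # flow on edge s->u
    x_vt = f_svt + f_suvt       # flow on edge v->t
    return (x_su + 1.0,         # path s->u->t
            1.0 + x_vt,         # path s->v->t
            x_su + 0.0 + x_vt)  # path s->u->v->t (via the shortcut)

# With the shortcut, everyone takes s->u->v->t: a Wardrop equilibrium,
# since the used path is no longer than the unused ones.
ne = path_latencies(0.0, 0.0, 1.0)
assert ne[2] <= ne[0] and ne[2] <= ne[1]
print(ne[2])   # 2.0

# Delete u->v: the even split is now both optimal and an equilibrium.
opt = path_latencies(0.5, 0.5, 0.0)
assert abs(opt[0] - opt[1]) < 1e-9
print(opt[0])  # 1.5
```

Every user’s latency drops from 2 to 3/2 after the deletion, matching the slides.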
Opposed to Stackelberg strategies, for example:
a Leader has to control selfish routing on a pair of links as follows.
The Leader is highly motivated by the optimum routing.
But users are reluctant to adopt the Leader’s view… since all are mad about the speedy link.
So, the Leader sacrifices ½ of the flow through the slower link (up).
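As an illustration of the Stackelberg idea: the slides’ link latencies are in a figure, so the standard Pigou pair is assumed here, a speedy link with l1(x) = x and a slower constant link with l2(x) = 1, total demand r = 1.

```python
# Two-link Stackelberg sketch on the assumed Pigou pair
# l1(x) = x (speedy, congestible) and l2(x) = 1 (slow, constant).

def total_cost(x1, x2):
    """Social cost = sum of travel times = x1*l1(x1) + x2*l2(x2)."""
    return x1 * x1 + x2 * 1.0

# Selfish equilibrium: everyone piles on the speedy link (l1(1) = 1 = l2).
print(total_cost(1.0, 0.0))   # 1.0

# Optimum: split evenly; cost 1/4 + 1/2 = 3/4.
print(total_cost(0.5, 0.5))   # 0.75

# Stackelberg: the Leader routes half the flow through the slow link;
# the remaining selfish half prefers the speedy link (l1(1/2) = 1/2 < 1),
# and the induced flow is exactly the optimum.
leader, followers = 0.5, 0.5
print(total_cost(followers, leader))  # 0.75
```

The Leader attains the optimum cost, but only by sacrificing the half of the flow she controls to the slower link, which is exactly the unfairness the edge-removal approach avoids.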
How about Taxes?
• In a large class of networks (including all networks with linear latency functions), marginal-cost taxes do not improve the cost of the Nash equilibrium [Cole, Dodis, Roughgarden].
• The disutility per user = path latency + taxes paid, and it is increased.
• It is more expensive & complex (wrt a designer) to build (& operate) tax stations per road than to close (once & for all) some roads.
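Marginal-cost taxes charge each user the congestion externality she imposes, τ_e(x) = x · l_e′(x). A sketch on the same assumed Pigou pair (l1(x) = x, l2(x) = 1, demand r = 1) shows why the disutility does not drop even though latencies do:

```python
# Marginal-cost taxes tau_e(x) = x * l_e'(x) on the assumed Pigou pair.

def disutility_link1(x):
    """Latency plus marginal-cost tax on the congestible link l1(x) = x."""
    latency = x       # l1(x) = x
    tax = x * 1.0     # tau(x) = x * l1'(x) = x
    return latency + tax

def disutility_link2(x):
    """Constant-latency link: zero marginal-cost tax."""
    return 1.0 + x * 0.0

# Taxed equilibrium: the optimal even split, since 1/2 + 1/2 = 1 = l2.
print(disutility_link1(0.5))  # 1.0  (latency 0.5 + tax 0.5)
print(disutility_link2(0.5))  # 1.0

# The untaxed equilibrium latency was also 1.0: latency per user fell
# (1.0 -> 0.5 on the speedy link), but disutility did not, because the
# user pays the difference back in taxes.
```

The taxes do steer the flow to the optimum, yet each user’s disutility stays at the old equilibrium latency, in line with the slide’s point.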
But, is Braess’s paradox happening only in a designer’s summer-night dream?
• It might be just a “creature” of mathematical imagination, restricted to “breathe” only in an optimization laboratory.
No!
It appears almost always in a very broad class of random graphs [Valiant, Roughgarden, EC ’06].
It has long been observed in many large cities, such as NY [Kolata, New York Times ’90].
“It is just as likely to occur as not” [Steinberg, ’83].
Formally, a routing instance G consists of:
• a directed graph G with source node s and destination t.
• a flow r > 0 of an infinite number of infinitesimally small users that wish to route through paths of G from node s to t.
• edges endowed with flow-dependent latency functions (continuous, increasing, etc.).
G is paradox-ridden:
if on a subgraph H ⊆ G, the selfish routing coincides with the optimum routing.
We focus on real-world routing instances, with latencies as in:
F. Kelly, “The Mathematics of Traffic in Networks”. The Princeton Companion to Mathematics, Princeton University Press.
Designer’s problems. Given an instance G:
• Paradox-Ridden Recognition (ParRid): decide if G is paradox-ridden.
• Best Subnetwork Equilibrium Latency (BSubEL): find the best subnetwork HB of G and its equilibrium latency L(HB).
Designer’s problems. Given an instance G:
• Minimum Latency Modification (MinLatMod): with each edge e endowed with a polynomial latency of degree d on the flow x,
l_e(x) = Σ_{i=0}^{d} a_{e,i} x^i,  with a_{e,i} ≥ 0,
modify the latency to
l̃_e(x) = Σ_{i=0}^{d} ã_{e,i} x^i,  with ã_{e,i} ≥ 0,
so that:
(i) the Euclidean distance of the coefficient vectors is minimum, &
(ii) the induced common latency gives the cost of the optimum flow on G.
Previous work on Braess’s paradox
• First observed by D. Braess in ’68; it has inspired a vast number of papers.
• A nice history of it in: [Roughgarden, Selfish Routing and the Price of Anarchy, MIT ’05].
• Recent work shows that the paradox is quite likely to occur [Valiant, Roughgarden, EC ’06].
• For general latencies, ParRid is NP-hard [Roughgarden ’05]. Also, it is hard even to approximate BSubEL beyond a critical value (4/3 for linear latencies & n/2 for polynomial ones) [Roughgarden ’05].
• MinLatMod is one of the most important problems [Magnanti, Wong: Network design and transportation planning: Models and algorithms, Transp. Sci. ’84].
Our results
• Recognizing paradox-ridden instances (ParRid): this problem is NP-complete for arbitrary linear latencies [29]. We show that it becomes polynomially solvable for the important case of strictly increasing linear latencies.
Furthermore,
• We reduce ParRid with arbitrary linear latencies to the problem of generating all optimal basic feasible solutions of a Linear Program that describes the optimal traffic allocations to the constant-latency edges.
Our results
• Best Subnetwork Equilibrium Latency (BSubEL):
For linear latencies, it is hard even to approximate BSubEL beyond the ratio 4/3.
For instances with polynomially many paths, each of polylogarithmic length, and arbitrary linear latencies, we present a subexponential-time approximation scheme.
That is,
for any ε > 0, the algorithm computes a subnetwork with an ε-NE. The common latency is within an additive term of ε/2 of the optimum common latency. The running time is exponential in poly(log m)/ε², where m is the number of edges.
Also,
• For instances with strictly increasing linear latencies that are not paradox-ridden, we show that there is an instance-dependent δ > 0 such that the equilibrium latency is within a factor of 4/3 − δ of the equilibrium latency on the best subnetwork.
Our results
• Minimum Latency Modification (MinLatMod):
If the instance is not paradox-ridden, however, it is not possible to turn the optimal flow into a Nash flow just by removing edges.
Enforcing the optimal flow is possible if, in addition to removing edges, the administrator can modify the latency functions.
We present a polynomial-time algorithm that minimally modifies the latency functions of the edges used by the optimal flow, so that the optimal flow is enforced as a Nash flow (with the modified latencies) on the subnetwork it uses.
Best Subnetwork Equilibrium Latency (BSubEL): the idea
• For instances with polynomially many paths, each of polylogarithmic length, and arbitrary linear latencies, we present a subexponential-time approximation scheme for the subgraph of minimum common latency.
• We have to search amongst exponentially many subgraphs to find the best one, HB, of minimum latency L(HB).
• Can we approximate HB (best) with H* (almost best, that is, L(H*) ≤ L(HB) + ε), which is “small” and, thus, “easy” to find?
How “small”?
• “Small” means H* has only polylogarithmically many paths that receive flow.
• This implies subexponentially many candidates for H*.
Central question:
Is it possible for such a “David” H* to perform almost as well as a “Goliath” HB?
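The exhaustive search over subnetworks can be sketched on a hypothetical mini-instance (the classic Braess toy with assumed latencies x, x, 1, 1 and 0 is used here, since the benchmark instances are not in the transcript): enumerate candidate subnetworks, approximate the equilibrium latency of each by a grid search, keep the best.

```python
# Brute-force BSubEL on a tiny assumed Braess instance: subsets of
# s-t paths stand in for subnetworks.
from itertools import combinations

# Edge latencies l(x) = a*x + b, stored as (a, b).
EDGES = {"su": (1, 0), "ut": (0, 1), "sv": (0, 1), "vt": (1, 0), "uv": (0, 0)}
PATHS = {"sut": ["su", "ut"], "svt": ["sv", "vt"], "suvt": ["su", "uv", "vt"]}

def latencies(flows):
    """Path latencies induced by the given path flows."""
    load = {e: 0.0 for e in EDGES}
    for p, f in flows.items():
        for e in PATHS[p]:
            load[e] += f
    return {p: sum(EDGES[e][0] * load[e] + EDGES[e][1] for e in PATHS[p])
            for p in flows}

def splits(k, steps):
    """All ways to write `steps` as an ordered sum of k nonnegative ints."""
    if k == 1:
        yield (steps,)
        return
    for i in range(steps + 1):
        for rest in splits(k - 1, steps - i):
            yield (i,) + rest

def eq_latency(paths, steps=100):
    """Grid-search an (approximate) Wardrop equilibrium on a path set."""
    best_lat, best_viol = None, float("inf")
    for split in splits(len(paths), steps):
        flows = dict(zip(paths, (s / steps for s in split)))
        lat = latencies(flows)
        lmin = min(lat.values())
        viol = max(lat[p] - lmin for p in paths if flows[p] > 0)
        if viol < best_viol:
            best_viol = viol
            best_lat = max(lat[p] for p in paths if flows[p] > 0)
    return best_lat

# BSubEL by brute force: try every nonempty subset of paths.
best = min((eq_latency(S), S)
           for k in range(1, len(PATHS) + 1)
           for S in combinations(PATHS, k))
print(best)  # (1.5, ('sut', 'svt')): dropping the shortcut path is best
```

Already here the search visits every one of the 2³ − 1 path subsets; on a real network the candidates are exponential in m, which is exactly why restricting attention to “small” H* with polylogarithmically many flow-carrying paths matters.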
Is it possible for such a “small” H* to perform almost as well as a “big” HB?
• If one had to bet, she wouldn’t put her penny on “David”…
But
our hopes come from a seemingly unrelated topic:
Once upon a time, there was a bimatrix game with only 2 rivals.
But
each rival was armed with a “Goliath” set of strategies.
So
it was a headache to compute their best subsets of strategies to play.
Until
a wise old man showed them that only a “David” subset of strategies could make them almost as happy as the best subset of strategies.
How does each player find small & almost optimal subsets of strategies? By the Probabilistic Method…
• Each rival selects sequentially & at random (according to an optimal but unknown distribution) a row (column) from her payoff matrix.
• Strong tail bounds show that, with probability at least a positive constant, after polylogarithmically many random row (column) selections, her expected gain (from her random “David” subset) will be close to the optimal expected gain (from her “Goliath” set).
• Thus, by the Probabilistic Method, there exist “small” subsets of the rivals’ strategies whose expected gain is close to the optimal.
• So, each rival only has to search exhaustively, in subexponential time, for such “small” subsets.
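Why is the exhaustive search only subexponential? There are at most C(n, k) ≤ n^k = n^{O(log n / ε²)} candidate supports of size k, i.e. quasi-polynomially many. A quick sanity check (the instance sizes and ε below are arbitrary choices for illustration):

```python
import math

def david_search_space(n, eps):
    """Support size k = ceil(log n / eps^2) and the number of size-k supports."""
    k = math.ceil(math.log(n) / eps**2)
    return k, math.comb(n, k)

for n in (50, 100, 200):
    k, supports = david_search_space(n, eps=0.5)
    # C(n, k) <= n^k = n^{O(log n / eps^2)}: quasi-polynomial, far below the
    # 2^n supports that a fully exhaustive search would have to visit
    print(n, k, supports < 2**n)
```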
AGT 2012 SAMOS 93
From a bimatrix game to an instance G (a graph!)
• A flow = an infinite number of players (but a bimatrix game has only 2 players)…
View the flow on paths as the mixed strategy of a single player (the FLOW-player)!
• An instance G has no payoff matrices (but a bimatrix game has 2 such matrices)…
View the edge-path adjacency matrix as the payoff matrix of the FLOW-player.
• There are congestion effects on G (but a bimatrix game has no congestion)…
Show that congestion affects the random experiment only smoothly.
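As a toy illustration of the FLOW-player view (the network, paths, and linear latencies below are assumptions for illustration, not the instances of the talk): the edge-path adjacency matrix turns a path-flow, read as a mixed strategy, into edge congestions and path latencies by plain matrix products.

```python
import numpy as np

# A toy s-t network: 3 edges, 2 paths.
# Path 0 uses edges {0, 2}; path 1 uses edges {1, 2}.
M = np.array([[1, 0],    # edge-path adjacency matrix: M[e, p] = 1
              [0, 1],    # iff path p traverses edge e
              [1, 1]])

f = np.array([0.6, 0.4])        # path-flow = "mixed strategy" of the FLOW-player
edge_load = M @ f               # congestion on each edge
latency = edge_load             # assumed linear latencies l_e(x) = x
path_latency = M.T @ latency    # latency experienced along each path
print(path_latency)             # path 0: 0.6 + 1.0, path 1: 0.4 + 1.0
```

Congestion enters only through `edge_load`, and small changes in the flow `f` change it linearly here, which is the kind of smooth dependence the random experiment needs.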
AGT 2012 SAMOS 101
m “short” paths from node s to t
AGT 2012 SAMOS 102
Let f be the path-flow of the subgraph with the minimum common latency
AGT 2012 SAMOS 103
What if we searched exhaustively for this precious f?
AGT 2012 SAMOS 104
Select k = Θ(log m / ε²) paths at random according to the unknown f
AGT 2012 SAMOS 105
With positive probability, these k = Θ(log m / ε²) paths induce the minimum common latency (achieved by f) + ε
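A Monte Carlo sanity check of this sampling step, on an assumed toy instance (m parallel links with linear latencies, and a Dirichlet flow standing in for the unknown f; the theorem bounds the common path latency, while this sketch only compares the flow-weighted average latency):

```python
import numpy as np

rng = np.random.default_rng(1)

m = 200                          # m "short" s-t paths (toy parallel-links network)
f = rng.dirichlet(np.ones(m))    # stand-in for the precious, unknown path-flow f
a = 1.0 + rng.random(m)          # assumed linear path latencies l_p(x) = a_p * x

eps = 0.1
k = int(np.ceil(np.log(m) / eps**2))   # k = Theta(log m / eps^2) sampled paths

sample = rng.choice(m, size=k, p=f)            # draw k paths according to f
f_hat = np.bincount(sample, minlength=m) / k   # reroute all flow on the sample

def cost(g):
    """Flow-weighted average latency of the routing g."""
    return float(g @ (a * g))

print(abs(cost(f) - cost(f_hat)) < eps)   # the sampled routing stays eps-close
```

The sampled routing uses at most k of the m paths, yet its latency stays within ε of that of f, mirroring the positive-probability statement above.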