
REGULAR POTENTIAL GAMES

BRIAN SWENSON†, RYAN MURRAY‡, AND SOUMMYA KAR†

Abstract. A fundamental problem with the Nash equilibrium concept is the existence of certain "structurally deficient" equilibria that (i) lack fundamental robustness properties, and (ii) are difficult to analyze. The notion of a "regular" Nash equilibrium was introduced by Harsanyi. Such equilibria are highly robust and relatively simple to analyze. A game is said to be regular if all equilibria in the game are regular. In this paper it is shown that almost all potential games are regular. That is, except for a closed subset of potential games with Lebesgue measure zero, all potential games are regular. As an immediate consequence of this, the paper also proves an oddness result for potential games: In almost all potential games, the number of Nash equilibrium strategies is finite and odd.

Key words. Game theory, Regular equilibrium, Potential games, Multi-agent systems

AMS subject classifications. 91A06, 91B52, 91A26

1. Introduction. While the notion of Nash equilibrium is a universally accepted solution concept for games, several shortcomings have been noted over the years. A principal criticism (in addition to non-uniqueness) is that some Nash equilibrium strategies may be undesirable or unreasonable due to a lack of basic robustness properties. As a consequence, many equilibrium refinement concepts have been proposed [13, 16, 22, 24, 26, 33, 34], each attempting to single out subsets of Nash equilibrium strategies that satisfy some desirable criteria.

One of the most stringent refinement concepts, originally proposed by Harsanyi [13], is that a NE strategy be "regular". In the words of van Damme [33], "regular Nash equilibria possess all the robustness properties that one can reasonably expect equilibria to possess." Such equilibria are quasi-strict [13, 33], perfect [26], proper [22], strongly stable [16], essential [34], and isolated [33].1 If all equilibria of a game are regular, then the number of NE strategies in the game has been shown to be finite and, curiously, odd [13, 35]. Regular equilibria have also been studied in the context of games of incomplete information, where, as part of Harsanyi's celebrated purification theorem [10, 12], they have been shown to be approachable.

A game is said to be regular if all equilibria in the game are regular. Harsanyi [13] showed that almost all2 games are regular, and hence, in almost all games, all equilibria possess all the robustness properties we might reasonably hope for.

While this result is powerful when targeted at general N-player games, there are many important classes of games that have Lebesgue measure zero within the space of all games [1]. Harsanyi's result tells us nothing about equilibrium properties within such special classes of games. This is the case, for example, in the important class of multi-agent games known as potential games [21].

A game is said to be a potential game if there exists some underlying function (generally referred to as the potential function) that all players implicitly seek to optimize. Potential games have many applications in economics and engineering [20, 21], and are particularly useful in the study of multi-agent systems, e.g., [4, 6, 9, 17–19, 23, 25, 27, 36, 37].

† Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA ([email protected], [email protected]).
‡ Department of Mathematics, Pennsylvania State University, State College, PA, USA ([email protected]).

1 See [33] for an in-depth discussion of each of these concepts and their interrelationships.
2 When we say almost all games satisfy some condition, we mean the set of games where the condition does not hold is a closed set with Lebesgue measure zero. See Section 3.2 for more details.


Considered within the space of all games, the set of potential games constitutes a low-dimensional subspace which has Lebesgue measure zero. Harsanyi's regularity result provides no information on the abundance (or dearth) of regular equilibria within this class of games. Hence, when restricting attention to potential games, as is often done in the study of multi-agent systems, we are deprived of any generic results on the regularity, robustness, or finiteness of the equilibrium set. The main result of this paper is the following theorem.

Theorem 1. Almost all potential games are regular.

We note that this implies that in almost all potential games, all equilibria are quasi-strict, perfect, proper, strongly stable, essential, and isolated. Using Harsanyi's oddness theorem (see [13], Theorem 1), we see that in any regular potential game, the number of NE strategies is finite and odd. Hence, the following result is an immediate consequence of Theorem 1.

Theorem 2. In almost all potential games, the number of NE strategies is finite and odd.

Regularity may be seen as serving two purposes. First, it ensures that the equilibrium set possesses the desirable structural properties noted above (i.e., equilibria are isolated, robust, and finite in number). Second, it simplifies the analysis of the game near equilibrium points: the important features of players' utility functions near an equilibrium can be understood by looking only at first and second-order terms in the associated Taylor series expansion. In this sense, the role of regular equilibria in games is analogous to the role that non-degenerate critical points play in the study of real-valued functions.3 This amenable analytic structure can greatly facilitate the study of (for example) game-theoretic learning processes [30] or approachability in games with incomplete information [10].

Our primary motivation for studying regular equilibria in potential games, as discussed above, is to characterize the fundamental regularity, robustness, and finiteness properties of the equilibrium set in potential games. Our secondary motivation comes from the perspective of learning theory. Fictitious play (FP) learning dynamics [2] may be seen as the natural learning dynamics associated with the NE concept. FP is known to converge to the set of NE in a variety of games, including potential games. However, the convergence result for FP in potential games is less than satisfying: FP may converge to a mixed-strategy (saddle point) NE, solutions of FP may be non-unique, and convergence rate estimates for FP can be impossible to establish [11]. In a companion paper [30] we show that these difficulties can be overcome by restricting attention to regular potential games. In particular, it is shown that in any regular potential game, (continuous-time) FP converges generically to a pure-strategy NE, solutions of FP are generically unique, and the rate of convergence of FP is generically exponential.4 Combined with the result of this paper, this allows us to show that the FP learning dynamics are "well behaved" in almost all potential games.

3 A critical point $x^*$ of a function $f : \mathbb{R}^n \to \mathbb{R}$ is said to be non-degenerate if the Hessian of $f$ at $x^*$ is non-singular. When a critical point is non-degenerate, one can understand the important local properties of $f$ using only the gradient and Hessian of $f$. If a critical point is degenerate then heavy algebraic machinery may be required to understand the local properties of $f$. With regard to games, if $x^*$ is an interior equilibrium point of a potential game with potential function $U$, then $x^*$ is regular if and only if $x^*$ is a non-degenerate critical point of $U$. For non-interior equilibrium points the story is more involved, but the main idea is the same.

4 See [28], [29] for a discussion of the rate of convergence of FP in regular potential games.


The remainder of the paper is organized as follows. In Section 1.1 we outline our strategy for proving Theorem 1. Section 2 sets up notation. Section 3 defines the notion of a regular equilibrium and presents two simple conditions that will allow us to verify if an equilibrium of a potential game is regular. Section 4 proves Theorem 1.

1.1. Proof Strategy. An equilibrium in a potential game can be shown to be regular if and only if the derivatives of the potential function satisfy two simple non-degeneracy conditions (see Section 3 and Lemma 12). The first condition deals with the gradient of the potential function; we refer to it as the first-order non-degeneracy condition. The second condition deals with the Hessian of the potential function; we refer to it as the second-order non-degeneracy condition. We say a potential game is first-order (second-order) non-degenerate if all equilibria in the game satisfy the first-order (second-order) non-degeneracy condition.

We will prove Theorem 1 by showing that almost all potential games are both first and second-order non-degenerate. We proceed in two steps. First, we will prove that almost all potential games are second-order non-degenerate (Proposition 17). We follow roughly the approach of [5, 13], setting up an appropriate mapping into the space of N-player potential games of a fixed size, and proving that all second-order degenerate games are contained in the set of critical values of the map. The result then follows from Sard's Theorem [14]. We note, however, that the mapping used by Harsanyi [13] for general games fails to give any useful information when restricted to potential games (see Remark 22). Consequently, our construction differs substantially from Harsanyi's in terms of the mapping used and some of the fundamental technical tools used. Most significantly, we require a strong characterization of the rank of the linear mapping that relates equilibrium points of a game and the game utility structure (see (31) and Proposition 18). This characterization relies on (relatively) recent results on the sign-solvability of matrices [15], not available to Harsanyi. The case of potential games also differs from the general case in that it is not possible to construct a single mapping à la Harsanyi [13] whose critical values set contains all degenerate games. We are required to decompose the strategy space into a countable family of subsets and construct an appropriate mapping into the space of potential games from each subset.

As our second step in proving Theorem 1, we show that almost all potential games are first-order non-degenerate (Proposition 23). In order to show this, we use our aforementioned characterization of the rank of the linear mapping (see (31) and discussion above) to construct a Lipschitz mapping from a set with low Hausdorff dimension into the space of potential games, such that the range of the map contains all first-order degenerate games. The result then follows from the fact that the graph of a Lipschitz mapping cannot have a higher Hausdorff dimension than its domain ([7], Section 2.4.2).

2. Preliminaries.

2.1. Notation. We begin by defining the notion of a game that will be studied throughout the paper.

Definition 3. A game (in normal form) is given by a tuple
$$\Gamma := (N, (Y_i, u_i)_{i=1,\dots,N}),$$
where $N \in \{2, 3, \dots\}$ denotes the number of players, $Y_i := \{y_i^1, \dots, y_i^{K_i}\}$ denotes the set of pure strategies (or actions) available to player $i$, with cardinality $K_i := |Y_i|$, and $u_i : \prod_{j=1}^N Y_j \to \mathbb{R}$ denotes the utility function of player $i$.

Given some game $\Gamma$, let $Y := \prod_{i=1}^N Y_i$ denote the set of joint pure strategies available to players, and let $K := \prod_{i=1}^N K_i$ denote the number of joint pure strategies.

For a finite set $S$, let $\triangle(S)$ denote the set of probability distributions over $S$. For $i = 1, \dots, N$, let $\Delta_i := \triangle(Y_i)$ denote the set of mixed strategies available to player $i$. Let $\Delta := \prod_{i=1}^N \Delta_i$ denote the set of joint mixed strategies. Note that it is implicitly assumed that players' mixed strategies are independent; i.e., players do not coordinate. Let $\Delta_{-i} := \prod_{j \in \{1,\dots,N\}\setminus\{i\}} \Delta_j$. When convenient, given a mixed strategy $\sigma = (\sigma_1, \dots, \sigma_N) \in \Delta$, we use the notation $\sigma_{-i}$ to denote the tuple $(\sigma_j)_{j \neq i}$.

Given a mixed strategy $\sigma \in \Delta$, the expected utility of player $i$ is given by
$$U_i(\sigma_1, \dots, \sigma_N) = \sum_{y \in Y} u_i(y)\, \sigma_1(y_1) \cdots \sigma_N(y_N).$$

For $\sigma_{-i} \in \Delta_{-i}$, the best response of player $i$ is given by the set-valued function $BR_i : \Delta_{-i} \rightrightarrows \Delta_i$,
$$BR_i(\sigma_{-i}) := \arg\max_{\sigma_i \in \Delta_i} U_i(\sigma_i, \sigma_{-i}),$$
and for $\sigma \in \Delta$ the joint best response is given by the set-valued function $BR : \Delta \rightrightarrows \Delta$,
$$BR(\sigma) := BR_1(\sigma_{-1}) \times \cdots \times BR_N(\sigma_{-N}).$$
A strategy $\sigma \in \Delta$ is said to be a Nash equilibrium (NE) if $\sigma \in BR(\sigma)$. For convenience, we sometimes refer to a Nash equilibrium simply as an equilibrium.

We say that $\Gamma$ is a potential game [21] if there exists a function $u : Y \to \mathbb{R}$ such that $u_i(y_i', y_{-i}) - u_i(y_i'', y_{-i}) = u(y_i', y_{-i}) - u(y_i'', y_{-i})$ for all $y_{-i} \in Y_{-i}$ and $y_i', y_i'' \in Y_i$, for all $i = 1, \dots, N$. We note that the potential function associated with a given potential game is unique up to an additive constant [21].

Let $U : \Delta \to \mathbb{R}$ be the multilinear extension of $u$ defined by
$$U(\sigma_1, \dots, \sigma_N) = \sum_{y \in Y} u(y)\, \sigma_1(y_1) \cdots \sigma_N(y_N). \tag{1}$$
The function $U$ may be seen as giving the expected value of $u$ under the mixed strategy $\sigma$. We refer to $U$ as the potential function and to $u$ as the pure form of the potential function.
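For concreteness, the following minimal Python sketch (ours, not the paper's; the small $2 \times 2$ game and all names are purely illustrative) evaluates the multilinear extension in (1) from a pure potential $u$ and a profile of independent mixed strategies.

```python
import itertools
import numpy as np

def multilinear_extension(u, sigma):
    """Expected potential U(sigma) = sum_y u(y) * prod_i sigma_i(y_i).

    u     : dict mapping pure-strategy tuples y = (y_1, ..., y_N) to u(y)
    sigma : list of 1-D arrays; sigma[i][k] is the probability player i plays action k
    """
    total = 0.0
    for y in itertools.product(*(range(len(s)) for s in sigma)):
        total += u[y] * np.prod([sigma[i][y_i] for i, y_i in enumerate(y)])
    return total

# Illustrative 2x2 identical-interests game: u(y) is the common payoff.
u = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}
sigma = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
print(multilinear_extension(u, sigma))  # 0.5: expected value of u under sigma
```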

Using the definitions of $U_i$ and $U$ it is straightforward to verify that
$$BR_i(\sigma_{-i}) := \arg\max_{\sigma_i \in \Delta_i} U_i(\sigma_i, \sigma_{-i}) = \arg\max_{\sigma_i \in \Delta_i} U(\sigma_i, \sigma_{-i}).$$
Thus, in order to compute the best response set we only require knowledge of the potential function $U$, not necessarily the individual utility functions $(U_i)_{i=1,\dots,N}$.

By way of notation, given a pure strategy $y_i \in Y_i$ and a mixed strategy $\sigma_{-i} \in \Delta_{-i}$, we will write $U(y_i, \sigma_{-i})$ to indicate the value of $U$ when player $i$ uses a mixed strategy placing all weight on $y_i$ and the remaining players use the strategy $\sigma_{-i} \in \Delta_{-i}$.

Given a $\sigma_i \in \Delta_i$, let $\sigma_i^k$ denote the value of the $k$-th entry in $\sigma_i$, so that $\sigma_i = (\sigma_i^k)_{k=1}^{K_i}$. Since the potential function is linear in each $\sigma_i$, if we fix any $i = 1, \dots, N$ we may express it as
$$U(\sigma) = \sum_{k=1}^{K_i} \sigma_i^k\, U(y_i^k, \sigma_{-i}). \tag{2}$$


In order to study learning dynamics without being (directly) encumbered by the hyperplane constraint inherent in $\Delta_i$ we define
$$X_i := \Big\{x_i \in \mathbb{R}^{K_i-1} : 0 \le x_i^k \le 1 \text{ for } k = 1, \dots, K_i - 1, \text{ and } \sum_{k=1}^{K_i-1} x_i^k \le 1\Big\},$$
where we use the convention that $x_i^k$ denotes the $k$-th entry in $x_i$, so that $x_i = (x_i^k)_{k=1}^{K_i-1}$.

Given $x_i \in X_i$, define the bijective mapping $T_i : X_i \to \Delta_i$ as $T_i(x_i) = \sigma_i$ for the unique $\sigma_i \in \Delta_i$ such that $\sigma_i^k = x_i^{k-1}$ for $k = 2, \dots, K_i$ and $\sigma_i^1 = 1 - \sum_{k=1}^{K_i-1} x_i^k$. For $k = 1, \dots, K_i$ let $T_i^k$ be the $k$-th component map of $T_i$, so that $T_i = (T_i^k)_{k=1}^{K_i}$.

Let $X := X_1 \times \cdots \times X_N$ and let $T : X \to \Delta$ be the bijection given by $T = T_1 \times \cdots \times T_N$. In an abuse of terminology, we sometimes refer to $X$ as the mixed-strategy space of $\Gamma$. When convenient, given an $x \in X$ we use the notation $x_{-i}$ to denote the tuple $(x_j)_{j \neq i}$. Letting $X_{-i} := \prod_{j \neq i} X_j$, we define $T_{-i} : X_{-i} \to \Delta_{-i}$ as $T_{-i} := (T_j)_{j \neq i}$.

Throughout the paper we often find it convenient to work in $X$ rather than $\Delta$. In order to keep the notation as simple as possible we overload the definitions of some symbols when the meaning can be clearly derived from the context. In particular, let $BR_i : X_{-i} \rightrightarrows X_i$ be defined by $BR_i(x_{-i}) := \{x_i \in X_i : \sigma_i \in BR_i(\sigma_{-i}),\ \sigma_i = T_i(x_i),\ \sigma_{-i} = T_{-i}(x_{-i})\}$. Similarly, given an $x \in X$ we abuse notation and write $U(x)$ instead of $U(T(x))$.

Given a pure strategy $y_i \in Y_i$, we will write $U(y_i, x_{-i})$ to indicate the value of $U$ when player $i$ uses a mixed strategy placing all weight on $y_i$ and the remaining players use the strategy $x_{-i} \in X_{-i}$. Similarly, we will say $y_i^k \in BR_i(x_{-i})$ if there exists an $x_i \in BR_i(x_{-i})$ such that $T_i(x_i)$ places weight one on $y_i^k$.
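As a quick illustration of this reparametrization, here is a minimal Python sketch (helper names are ours, not from the paper) of the map $T_i$ and its inverse for a single player with $K_i = 3$.

```python
import numpy as np

def T_i(x_i):
    """Map x_i in X_i (length K_i - 1) to a mixed strategy sigma_i in Delta_i.

    sigma_i^1 = 1 - sum_k x_i^k, and sigma_i^k = x_i^{k-1} for k = 2, ..., K_i.
    """
    x_i = np.asarray(x_i, dtype=float)
    return np.concatenate(([1.0 - x_i.sum()], x_i))

def T_i_inverse(sigma_i):
    """Recover x_i from sigma_i by dropping the first coordinate."""
    return np.asarray(sigma_i, dtype=float)[1:]

x_i = np.array([0.2, 0.3])          # K_i = 3, so x_i has two free components
sigma_i = T_i(x_i)                  # [0.5, 0.2, 0.3], a valid distribution
assert np.allclose(T_i_inverse(sigma_i), x_i)
```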

Applying the definition of $T_i$ to (2) we see that $U(x)$ may also be expressed as
$$U(x) = \sum_{k=1}^{K_i-1} x_i^k\, U(y_i^{k+1}, x_{-i}) + \Big(1 - \sum_{k=1}^{K_i-1} x_i^k\Big) U(y_i^1, x_{-i}) \tag{3}$$
for any $i = 1, \dots, N$.

We use the following nomenclature to refer to strategies in $X$.

Definition 4. (i) A strategy $x \in X$ is said to be pure if $T(x)$ places all its mass on a single action tuple $y \in Y$.
(ii) A strategy $x \in X$ is said to be completely mixed if $x$ is in the interior of $X$.
(iii) In all other cases, a strategy $x \in X$ is said to be incompletely mixed.

Other notation as used throughout the paper is as follows.
• $\mathbb{N} := \{1, 2, \dots\}$.
• The mapping $\operatorname{sgn} : \mathbb{R}^{n \times m} \to \mathbb{R}^{n \times m}$ is given by
$$(\operatorname{sgn}(A))_{i,j} := \begin{cases} 1 & \text{if } a_{i,j} > 0 \\ -1 & \text{if } a_{i,j} < 0 \\ 0 & \text{if } a_{i,j} = 0. \end{cases}$$
• Given two matrices $A$ and $B$ of the same dimension, $A \circ B$ denotes the Hadamard product (i.e., the entrywise product) of $A$ and $B$.
• Suppose $m, n, p \in \mathbb{N}$ and $F_i : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}$ for $i = 1, \dots, p$. Suppose further that $F : \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^p$ is given by $F(w, z) = (F_i(w, z))_{i=1,\dots,p}$. Then the operator $D_w$ gives the Jacobian of $F$ with respect to the components of $w = (w_k)_{k=1,\dots,m}$; that is,
$$D_w F(w, z) = \begin{pmatrix} \frac{\partial F_1(w,z)}{\partial w_1} & \cdots & \frac{\partial F_1(w,z)}{\partial w_m} \\ \vdots & \ddots & \vdots \\ \frac{\partial F_p(w,z)}{\partial w_1} & \cdots & \frac{\partial F_p(w,z)}{\partial w_m} \end{pmatrix}.$$
• $A^c$ denotes the complement of a set $A$, $\operatorname{int} A$ denotes the interior of $A$, and $\operatorname{cl} A$ denotes the closure of $A$.
• The support of a function $f : \Omega \to \mathbb{R}$ is given by $\operatorname{spt}(f) := \{x \in \Omega : f(x) \neq 0\}$.
• Given a function $f$, $D(f)$ refers to the domain of $f$ and $R(f)$ to the range of $f$.
• $\mathcal{L}^n$, $n \in \{1, 2, \dots\}$, refers to the $n$-dimensional Lebesgue measure.

3. Regular Equilibria. The notion of a regular equilibrium was originally introduced by Harsanyi [13]. Subsequently, these equilibria were studied by van Damme [32, 33], who introduced a slightly modified definition of a regular equilibrium that is generally standard today. Informally, an equilibrium is said to be regular if the Jacobian of a certain differentiable mapping is non-singular. We will see that in potential games, the notion of regularity takes on a more intuitive meaning. Roughly speaking, an equilibrium $x^*$ of a potential game is regular if $x^*$ is a non-degenerate critical point of the potential function (i.e., the Hessian of the potential function is non-singular at $x^*$).5

We will now formally define the notion of a regular equilibrium as given in [32, 33]. We begin by defining the carrier set of an element $x \in X$, a natural modification of a support set to the present context. For $x_i \in X_i$ let
$$\operatorname{carr}_i(x_i) := \operatorname{spt}(T_i(x_i)) \subseteq Y_i;$$
i.e., $\operatorname{carr}_i(x_i)$ is the set of pure strategies in $Y_i$ that receive positive weight under the (conventional) mixed strategy $T_i(x_i) \in \Delta_i$. For $x = (x_1, \dots, x_N) \in X$ let $\operatorname{carr}(x) := \operatorname{carr}_1(x_1) \cup \cdots \cup \operatorname{carr}_N(x_N)$.

Let $C = C_1 \cup \cdots \cup C_N$, where for each $i = 1, \dots, N$, $C_i$ is a nonempty subset of $Y_i$. We say that $C$ is the carrier for $x = (x_1, \dots, x_N) \in X$ if $C_i = \operatorname{carr}_i(x_i)$ for $i = 1, \dots, N$ (or equivalently, if $C = \operatorname{carr}(x)$).

For each pure strategy $y^\tau \in Y$, $\tau = 1, \dots, K$, let $u^\tau$ denote the pure-strategy potential associated with playing $y^\tau$; that is, $u^\tau := u(y^\tau)$, where $u$ is the pure form of the potential function defined in Section 2. A vector of potential coefficients $u = (u^\tau)_{\tau=1}^K$ is an element of $\mathbb{R}^K$ that uniquely defines the multilinear potential function $U$.

Given a strategy $x \in X$ and vector of potential coefficients $u \in \mathbb{R}^K$, let
$$F_i^k(x, u) := T_i^1(x_i)\, x_i^k\, \big[U_i(y_i^{k+1}, x_{-i}) - U_i(y_i^1, x_{-i})\big] \tag{4}$$
for $i = 1, \dots, N$, $k = 1, \dots, K_i - 1$, and let
$$F(x, u) := \big(F_i^k(x, u)\big)_{i=1,\dots,N,\ k=1,\dots,K_i-1}. \tag{5}$$

5 If $x^*$ is an interior equilibrium point, then this holds exactly, and the invertibility of the Hessian alone is sufficient to determine the regularity of $x^*$. If $x^*$ touches a boundary of $X$, then the notion of regularity is slightly more delicate and requires us to also look at the gradient of $U$ at $x^*$.


Definition 5 (Regular Equilibrium). Let $x^* \in X$ be an equilibrium of a game with potential coefficient vector $u$. Assume the action set $Y_i$ of each player is reordered so that $y_i^1 \in \operatorname{carr}_i(x_i^*)$. The equilibrium $x^*$ is said to be regular if the Jacobian of $F(x^*, u)$, given by $D_x F(x^*, u)$, is non-singular.

Remark 6. We note that if $x^*$ is regular, then the Jacobian of $F(x^*, u)$ can be shown to be nonsingular under any reordering of $Y_i$ in which the reference action satisfies $y_i^1 \in \operatorname{carr}_i(x_i^*)$ for all $i = 1, \dots, N$ (see [32], Theorem 3.8). This justifies the use of an arbitrary reference action $y_i^1 \in \operatorname{carr}_i(x_i^*)$ in the above definition.

Remark 7. The notion of a regular equilibrium is traditionally defined by considering strategies in the space $\Delta$ rather than $X$ [33]. Using the definition of $F_i^k$ and the properties of the determinant of a matrix, it is straightforward to confirm that the definition of regularity given in Definition 5 coincides with the traditional definition in [33]. See the extended version of this paper [31] for more details.
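The following Python sketch illustrates Definition 5 on a small example of our own choosing (a $2 \times 2$ identical-interests game; all function names are assumptions, not the paper's). It assembles $F$ from (4), approximates $D_x F(x^*, u)$ by finite differences at the completely mixed equilibrium, and checks that it is non-singular; for a potential game the utility differences in (4) may be computed from the potential $U$, as noted in Section 2.

```python
import numpy as np

# Illustrative 2x2 identical-interests game; u[a1, a2] is the common payoff,
# which is also the pure form of the potential.
u = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def sigma(x):
    """T(x): x = (x1, x2) in X = [0,1]^2, where x_i is the weight on action 2."""
    return np.array([1.0 - x[0], x[0]]), np.array([1.0 - x[1], x[1]])

def F(x):
    """F from (4)-(5): F_i^1(x, u) = T_i^1(x_i) * x_i^1 * [U(y_i^2, x_-i) - U(y_i^1, x_-i)]."""
    s1, s2 = sigma(x)
    d1 = u[1] @ s2 - u[0] @ s2        # U(y_1^2, x_2) - U(y_1^1, x_2)
    d2 = s1 @ u[:, 1] - s1 @ u[:, 0]  # U(y_2^2, x_1) - U(y_2^1, x_1)
    return np.array([(1 - x[0]) * x[0] * d1,
                     (1 - x[1]) * x[1] * d2])

def jacobian(f, x, h=1e-6):
    """Central finite-difference Jacobian of f at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x_star = np.array([0.5, 0.5])         # completely mixed equilibrium of this game
J = jacobian(F, x_star)
print(np.linalg.det(J))               # nonzero => x_star is a regular equilibrium
```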

We note that the notion of a regular equilibrium is traditionally defined by considering strategies in the space $\Delta$ rather than $X$. In particular, in an abuse of notation, given a strategy $\sigma \in \Delta$ and a vector $u \in \mathbb{R}^K$, let
$$F_i^k(\sigma, u) := F_i^k(T^{-1}(\sigma), u) \tag{6}$$
for $i = 1, \dots, N$ and $k = 1, \dots, K_i - 1$, where $F_i^k(x, u)$ is as defined in (4). Furthermore, let
$$F_i^0(\sigma, u) := \Big(\sum_{k=1}^{K_i} \sigma_i^k\Big) - 1 \tag{7}$$
for $i = 1, \dots, N$, and let
$$F(\sigma, u) := \big(F_i^k(\sigma, u)\big)_{i=1,\dots,N,\ k=0,\dots,K_i-1}. \tag{8}$$
An equilibrium $\sigma^* \in \Delta$ is traditionally said to be regular if the strategy set $Y_i$ of each player $i = 1, \dots, N$ is ordered so that $y_i^1 \in \operatorname{spt}(\sigma_i)$ and the Jacobian $D_\sigma F(\sigma^*, u)$ is non-singular [33].

To see that the definition of a regular equilibrium given in Definition 5 coincides with the traditional definition given in [33], suppose that $x^* \in X$ is an equilibrium of some potential game with potential coefficient vector $u$, and let $\sigma^* = T(x^*)$. The Jacobian $D_x F(x^*, u)$ may be formed from $D_\sigma F(\sigma^*, u)$ by removing the rows of $D_\sigma F(\sigma^*, u)$ corresponding to the coordinate maps $F_i^0$, $i = 1, \dots, N$, and removing the columns of $D_\sigma F(\sigma^*, u)$ in which the partial derivative is taken with respect to $\sigma_i^1$, $i = 1, \dots, N$. Using (6)–(8), (4) may be equivalently expressed in terms of strategies $\sigma \in \Delta$ as
$$F_i^k(\sigma, u) := \sigma_i^1\, \sigma_i^{k+1}\, \big[U_i(y_i^{k+1}, \sigma_{-i}) - U_i(y_i^1, \sigma_{-i})\big] \tag{9}$$

for $i = 1, \dots, N$, $k = 1, \dots, K_i - 1$.

Note that for $i = 1, \dots, N$ and $k = 1, \dots, K_i - 1$, we have
$$\frac{\partial F_i^k(\sigma^*, u)}{\partial \sigma_i^1} = 0.$$
This follows from (9) and the fact that for $k = 1, \dots, K_i - 1$, either
$$\frac{\partial U(\sigma)}{\partial \sigma_i^{k+1}} = U(y_i^{k+1}, \sigma_{-i}) - U(y_i^1, \sigma_{-i}) = 0$$
or $\sigma_i^{k+1} = 0$. Note also that by (7) we have $\frac{\partial F_i^0(\sigma^*, u)}{\partial \sigma_i^1} = 1$ for $i = 1, \dots, N$ and $\frac{\partial F_i^0(\sigma^*, u)}{\partial \sigma_j^1} = 0$ for $i, j = 1, \dots, N$, $j \neq i$. This means that for each $i = 1, \dots, N$, the column of $D_\sigma F(\sigma^*, u)$ in which the partial derivative is taken with respect to $\sigma_i^1$ is composed of all zeros except for a one in the row corresponding to $F_i^0$. Thus, by the definition of the determinant, removing the above-mentioned rows and columns from $D_\sigma F(\sigma^*, u)$ does not change the value of the determinant of the resulting matrix. Consequently, $D_x F(x^*, u)$ is invertible if and only if $D_\sigma F(\sigma^*, u)$ is invertible, and the equilibrium $x^* \in X$ is regular as defined in Definition 5 if and only if $\sigma^*$ is regular as defined in [13].

3.1. Regular Equilibria in Potential Games. In potential games, the notion of a regular equilibrium has a natural and intuitive interpretation in terms of the derivatives of the potential function. In this section we discuss a pair of conditions applicable within potential games that are equivalent to regularity. The first condition (referred to as the first-order condition) concerns the gradient of the potential function; the second condition (referred to as the second-order condition) concerns the Hessian of the potential function.

In Sections 3.1.1–3.1.2 we formally define the notions of first and second-order degeneracy. In Section 3.1.3 we show that, in a potential game, an equilibrium satisfies these non-degeneracy conditions if and only if it is regular.

3.1.1. First-Order Degeneracy. Let $\Gamma$ be a potential game. Let $C = C_1 \cup \cdots \cup C_N$, $C_i \subset Y_i$ for all $i = 1, \dots, N$, be some carrier set. Let $\gamma_i := |C_i|$ and assume that the strategy set $Y_i$ is reordered so that $\{y_i^1, \dots, y_i^{\gamma_i}\} = C_i$. Under this ordering, the first $\gamma_i - 1$ components of any strategy $x_i$ with $\operatorname{carr}_i(x_i) = C_i$ are free (not constrained to zero by $C_i$) and the remaining components of $x_i$ are constrained to zero. That is, $(x_i^k)_{k=1}^{\gamma_i-1}$ is free under $C_i$ and $(x_i^k)_{k=\gamma_i}^{K_i-1} = 0$. The set of strategies $\{x \in X : \operatorname{carr}(x) = C\}$ is precisely the interior of the face of $X$ given by
$$\Omega_C := \{x \in X : x_i^k = 0,\ k = \gamma_i, \dots, K_i - 1,\ i = 1, \dots, N\}. \tag{10}$$

Definition 8 (First-Order Degenerate Equilibrium). Let $x^* \in X$ be an equilibrium with carrier $C$. We say that $x^*$ is first-order degenerate if there exists a pair $(i, k)$, $i = 1, \dots, N$, $k = \gamma_i, \dots, K_i - 1$, such that $\frac{\partial U(x^*)}{\partial x_i^k} = 0$, and we say $x^*$ is first-order non-degenerate otherwise.

Example. The $2 \times 2$ identical interests game with payoff matrix
$$M = \begin{pmatrix} 0 & 0 \\ 1 & -1 \end{pmatrix}$$
has a first-order degenerate equilibrium at the strategy in which the row player plays his second action with probability 1 and the column player mixes between both his actions with equal probability.
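To see this concretely: against the column player's uniform mix, both of the row player's pure actions yield expected payoff $0$ (the second action gives $\tfrac{1}{2}(1) + \tfrac{1}{2}(-1) = 0$), so the unused first action is also a pure best response even though it lies outside the carrier. Equivalently, the partial derivative of $U$ in the direction of that unused action vanishes at the equilibrium, which is exactly the condition in Definition 8.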

Remark 9. Using the multi-linearity of $U$, it is straightforward to verify that an equilibrium $x^*$ with carrier $C$ is first-order non-degenerate if and only if, for every player $i$, the set of pure-strategy best responses to $x_{-i}^*$ coincides with $C_i$. We note that, using this latter definition, Harsanyi [13] referred to first-order non-degenerate equilibria as quasi-strong equilibria. We prefer to use the term first-order non-degenerate in order to emphasize that we are concerned with the gradient of the potential function and to keep the nomenclature consistent with the notion of second-order non-degeneracy, introduced next.


3.1.2. Second-Order Degeneracy. Let $C$ be some carrier set. Let $\bar N := |\{i = 1, \dots, N : \gamma_i \ge 2\}|$, and assume that the player set is ordered so that $\gamma_i \ge 2$ for $i = 1, \dots, \bar N$. Under this ordering, for strategies with $\operatorname{carr}(x) = C$, the first $\bar N$ players use mixed strategies and the remaining players use pure strategies. Assume that $\bar N \ge 1$, so that any $x$ with carrier $C$ is a mixed (not pure) strategy.

Let the Hessian of $U$ taken with respect to $C$ be given by
$$H(x) := \left(\frac{\partial^2 U(x)}{\partial x_i^k\, \partial x_j^\ell}\right)_{i,j=1,\dots,N,\ k=1,\dots,\gamma_i-1,\ \ell=1,\dots,\gamma_j-1}. \tag{11}$$
Note that this definition of the Hessian restricts attention to the components of $x$ that are free (i.e., unconstrained) under $C$. That is, $H(x)$ taken with respect to $C$ is the Hessian of $U|_{\Omega_C}$ at $x$.

Definition 10 (Second-Order Degenerate Equilibrium). We say an equilibrium $x^* \in X$ is second-order degenerate if the Hessian $H(x^*)$ taken with respect to $\operatorname{carr}(x^*)$ is singular, and we say $x^*$ is second-order non-degenerate otherwise.

Remark 11. Note that both forms of degeneracy are concerned with the interaction of the potential function and the "face" of the strategy space containing the equilibrium $x^*$. If $x^*$ touches one or more constraints, then first-order non-degeneracy ensures that the gradient of the potential function is nonzero normal to the face $\Omega_{\operatorname{carr}(x^*)}$, defined in (10). Second-order non-degeneracy ensures that, restricting $U$ to the face $\Omega_{\operatorname{carr}(x^*)}$, the Hessian of $U|_{\Omega_{\operatorname{carr}(x^*)}}$ is non-singular. If $x^*$ is contained within the interior of $X$, then the first-order condition becomes moot and the second-order condition reduces to the standard definition of a non-degenerate critical point.
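Continuing the same illustrative $2 \times 2$ identical-interests game used in the earlier sketch (our example, not the paper's), the following Python sketch approximates the Hessian of $U$ restricted to the free components at the interior mixed equilibrium and checks that it is non-singular. Since the equilibrium is interior, the first-order condition is moot and this is exactly second-order non-degeneracy in the sense of Definition 10.

```python
import numpy as np

u = np.array([[1.0, 0.0],
              [0.0, 1.0]])            # common payoff / pure potential

def U(x):
    """Potential at x = (x1, x2), with x_i the weight on each player's action 2."""
    s1 = np.array([1.0 - x[0], x[0]])
    s2 = np.array([1.0 - x[1], x[1]])
    return s1 @ u @ s2

def hessian(f, x, h=1e-5):
    """Finite-difference Hessian of f at x (here: U restricted to the free components)."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

x_star = np.array([0.5, 0.5])          # interior (completely mixed) equilibrium
H = hessian(U, x_star)                 # approximately [[0, 2], [2, 0]]
print(np.linalg.det(H))                # nonzero => second-order non-degenerate
```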

3.1.3. Regular Equilibria and Potential Function Degeneracy. The following lemma allows us to study the regularity of an equilibrium via the first and second-order degeneracy conditions.

Lemma 12. Let $\Gamma$ be a potential game. Then:
(i) If an equilibrium $x^*$ is first-order non-degenerate, then it is second-order non-degenerate if and only if it is regular.
(ii) If an equilibrium $x^*$ is regular, then it is first-order non-degenerate.

In particular, an equilibrium $x^*$ is regular if and only if it is both first and second-order non-degenerate.

The proof of this lemma follows readily from the definitions and can be found in the extended version of this paper [31].

The following definitions will be useful in the proof of the lemma. Given a carrier set $C = C_1 \cup \cdots \cup C_N$, let
$$\bar F_i^k(x, u) := U(y_i^{k+1}, x_{-i}) - U(y_i^1, x_{-i}) \tag{12}$$
for $i = 1, \dots, N$, $k = 1, \dots, \gamma_i - 1$, where $\gamma_i = |C_i|$. Let
$$\bar F(x, u) := \big(\bar F_i^k(x, u)\big)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1}. \tag{13}$$
In particular, note that for $i = 1, \dots, N$, $k = 1, \dots, \gamma_i - 1$ we have the relationship
$$F_i^k(x, u) = T_i^1(x_i)\, x_i^k\, \bar F_i^k(x, u), \tag{14}$$
where $F_i^k$ is as defined in (4).


Proof. In order to simplify notation and minimize the overuse of superscripts, throughout the proof we will use the symbol $x$, rather than the usual $x^*$, when referring to an equilibrium. Without loss of generality, given an equilibrium $x \in X$, assume that each player's action set $Y_i$ is reordered so that $\operatorname{carr}_i(x_i) = \{y_i^1, \dots, y_i^{\gamma_i}\}$. Note that this comports with the ordering assumption implicit in the definitions of both first and second-order degeneracy (see Sections 3.1.1–3.1.2).

We begin by proving (ii). As noted in Remark 9, an equilibrium is first-order non-degenerate if and only if it is quasi-strong, as introduced by Harsanyi [13]. It was shown in [33] that any regular equilibrium is quasi-strong. Hence, (ii) holds.

We now prove (i). Assume henceforth that $x$ is a first-order non-degenerate equilibrium. Our goal is to show that
$$D_x F(x, u) \text{ is invertible} \iff H(x) \text{ is invertible}. \tag{15}$$
Given an $x \in X$, it is useful to consider the decomposition $x = (x_p, x_m)$, where $x_m = (x_i^k)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1}$ and $x_p$ contains the remaining components of $x$. (The subscript of $x_m$ is indicative of the "mixed strategy components" of $x$ and $x_p$ is indicative of the "pure strategy components" of $x$.) Noting that
$$D_{x_m} \bar F(x_p, x_m, u) = H(x),$$
where $\bar F$ is defined as in (13) with respect to the carrier $\operatorname{carr}(x)$, we see that (15) is equivalent to showing
$$D_x F(x, u) \text{ is invertible} \iff D_{x_m} \bar F(x, u) \text{ is invertible}.$$

With this end in mind, we will now consider the behavior of the component maps of $F$ and $\bar F$ in two important cases.

Case 1: Suppose $(i, k)$ is such that $k \in \{1, \dots, \gamma_i - 1\}$. Note that in this case we have $x_i^k > 0$. Differentiating (14) with respect to $x_j^\ell$, $(j, \ell) \neq (i, k)$, we get
$$\frac{\partial F_i^k(x, u)}{\partial x_j^\ell} = \left(\frac{\partial T_i^1(x_i)}{\partial x_j^\ell}\right) x_i^k\, \bar F_i^k(x, u) + T_i^1(x_i)\, x_i^k\, \frac{\partial \bar F_i^k(x, u)}{\partial x_j^\ell}. \tag{16}$$
Since $k \in \{1, \dots, \gamma_i - 1\}$ (so that $y_i^{k+1}$ lies in the carrier of the equilibrium $x$), we have $\bar F_i^k(x, u) = 0$ and hence
$$\frac{\partial F_i^k(x, u)}{\partial x_j^\ell} = T_i^1(x_i)\, x_i^k\, \frac{\partial \bar F_i^k(x, u)}{\partial x_j^\ell} \tag{17}$$
for $(j, \ell) \neq (i, k)$.

Differentiating (14) with respect to $x_i^k$ we get
$$\frac{\partial F_i^k(x, u)}{\partial x_i^k} = -x_i^k\, \bar F_i^k(x, u) + T_i^1(x_i)\, \bar F_i^k(x, u) + x_i^k\, T_i^1(x_i)\, \frac{\partial \bar F_i^k(x, u)}{\partial x_i^k}. \tag{18}$$
By our choice of $(i, k)$ we have $\bar F_i^k(x, u) = 0$. Also note that $\bar F_i^k(x, u)$ does not depend on $x_i^k$ (see (12)), and hence $\frac{\partial \bar F_i^k(x, u)}{\partial x_i^k} = 0$. By (18), this implies that $\frac{\partial F_i^k(x, u)}{\partial x_i^k} = 0$. But we just showed that $\frac{\partial \bar F_i^k(x, u)}{\partial x_i^k} = 0$, hence
$$\frac{\partial F_i^k(x, u)}{\partial x_i^k} = \frac{\partial \bar F_i^k(x, u)}{\partial x_i^k} = 0. \tag{19}$$


Together, (17) and (19) imply that for each $(i, k)$ such that $k = 1, \dots, \gamma_i - 1$ we have $D_{x_m} F_i^k(x, u) = T_i^1(x_i)\, x_i^k\, D_{x_m} \bar F_i^k(x, u)$, where $x_i^k > 0$ and $T_i^1(x_i) > 0$. This implies that
$$D_{x_m}\big(F_i^k(x, u)\big)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1} \text{ is invertible} \iff D_{x_m} \bar F(x, u) \text{ is invertible}. \tag{20}$$

Case 2: Suppose $(i, k)$ is such that $k \in \{\gamma_i, \dots, K_i - 1\}$. Note that in this case we have $x_i^k = 0$. Differentiating (4) with respect to $x_j^\ell$, $(j, \ell) \neq (i, k)$, we get
$$\frac{\partial F_i^k(x, u)}{\partial x_j^\ell} = T_i^1(x_i)\, x_i^k\, \frac{\partial}{\partial x_j^\ell}\big[U(y_i^{k+1}, x_{-i}) - U(y_i^1, x_{-i})\big] = 0, \tag{21}$$
where the equality to zero holds since $x_i^k = 0$. Note in particular that this implies that $D_{x_m} F_i^k(x, u) = 0$.

If we differentiate (4) with respect to $x_i^k$ and use the fact that $x_i^k = 0$ we get
$$\frac{\partial F_i^k(x, u)}{\partial x_i^k} = T_i^1(x_i)\, \big[U(y_i^{k+1}, x_{-i}) - U(y_i^1, x_{-i})\big].$$
Since $x$ is a first-order non-degenerate equilibrium, $U(y_i^1, x_{-i}) > U(y_i^{\ell+1}, x_{-i})$ for all $\ell = \gamma_i, \dots, K_i - 1$. Also, by our ordering of $Y_i$ we have $T_i^1(x_i) > 0$. Hence, $T_i^1(x_i)\,[U(y_i^{k+1}, x_{-i}) - U(y_i^1, x_{-i})] < 0$. This, along with (21), implies that
$$D_{x_p}\big(F_i^k(x, u)\big)_{i=1,\dots,N,\ k=\gamma_i,\dots,K_i-1}$$
is a diagonal matrix with non-zero diagonal.

We now consider the Jacobian $D_x F(x, u)$. This may be expressed as

$$D_x F(x, u) = \begin{pmatrix} D_{x_m}\big(F_i^k(x, u)\big)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1} & D_{x_p}\big(F_i^k(x, u)\big)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1} \\[6pt] D_{x_m}\big(F_i^k(x, u)\big)_{i=1,\dots,N,\ k=\gamma_i,\dots,K_i-1} & D_{x_p}\big(F_i^k(x, u)\big)_{i=1,\dots,N,\ k=\gamma_i,\dots,K_i-1} \end{pmatrix}.$$
By the above arguments we see that this matrix has the form
$$D_x F(x, u) = \begin{pmatrix} D_{x_m}\big(F_i^k(x, u)\big)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1} & M \\ 0 & D \end{pmatrix},$$
where $M \in \mathbb{R}^{\gamma \times (\kappa - \gamma)}$ and $D \in \mathbb{R}^{(\kappa-\gamma) \times (\kappa-\gamma)}$ is an invertible diagonal matrix (here $\kappa := \sum_{i=1}^N (K_i - 1)$ is the dimension of $X$, so that $\kappa - \gamma$ is the number of constrained components of $x$). Given this block form we see that $D_x F(x, u)$ is invertible if and only if
$$D_{x_m}\big(F_i^k(x, u)\big)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1}$$
is invertible. By (20) we then see that $D_x F(x, u)$ is invertible if and only if $D_{x_m} \bar F(x, u)$ is invertible, which proves the desired result.


3.2. Regularity in Almost All Potential Games. In Definitions 8 and 10 and Lemma 12 above, we see that the degeneracy (or regularity) of an equilibrium of a potential game is uniquely determined by the potential function. Consequently, we find it convenient to use the following nomenclature, which deals directly with the potential function rather than a particular potential game.

Definition 13. (i) We say that $x^* \in X$ is a first-order (second-order) degenerate equilibrium of a potential function $U$ if $x^*$ is a first-order (second-order) degenerate equilibrium for any (and consequently, for every) potential game having potential function $U$.
(ii) We say that $x^* \in X$ is a degenerate equilibrium of a potential function $U$ if $x^*$ is a first or second-order degenerate equilibrium of $U$.
(iii) We say that a potential function $U$ is first-order (second-order) degenerate if it possesses any first-order (second-order) degenerate equilibria $x^* \in X$.
(iv) We say that a potential function $U$ is degenerate if it is first or second-order degenerate.

Assuming players' action spaces satisfy $|Y_i| = K_i$ for $i = 1, \dots, N$, and $K = \prod_{i=1}^N K_i$, a potential function $U$ is uniquely determined by a vector in $\mathbb{R}^K$ designating the potential assigned to each action tuple $y \in Y$. Viewing each potential function as such a vector, the space of potential functions is given by $\mathbb{R}^K$.

Lemma 12 implies that a potential game is regular if and only if the associated potential function is non-degenerate. In light of this, we define the notion of regularity in almost all potential games as follows.

Definition 14. We say that almost all potential games are regular if almost all potential functions are both first and second-order non-degenerate. More precisely, we say that almost all potential games are regular if for arbitrary $K_i \in \{2, 3, \dots\}$, $|Y_i| = K_i$, $i = 1, \dots, N$, $K = \prod_{i=1}^N K_i$, the set of associated potential functions that are first and/or second-order degenerate is a closed set with $\mathcal{L}^K$-measure zero.

Remark 15. We note that the notion of "almost all potential games" given above can be equivalently stated directly in terms of the set of potential games rather than the set of potential functions. By stacking players' utility functions into a single vector, a game may be seen as a point in $\mathbb{R}^{NK}$. One may show that the set of potential games is a linear subspace of $\mathbb{R}^{NK}$ with dimension $K_p := \big(\sum_{i=1}^N \prod_{j \neq i} K_j\big) + K - 1$. Let $P \subset \mathbb{R}^{NK}$ denote the set of all potential games. When we say almost all potential games are regular, as in Definition 14, this may be equivalently stated as: the set of irregular potential games is a closed subset of $P$ with $\mathcal{L}^{K_p}$-measure zero. However, to simplify the exposition, we find it convenient to work directly with Definition 14.

Remark 16. It was shown by Harsanyi that the set of irregular N-player games is a closed subset in the space of N-player games [13, 33]. Since the set of potential games is a subspace of the space of N-player games, it immediately follows that the set of irregular potential games is a closed set with respect to the space of potential games. Hence, in order to prove that almost all potential games are regular we need only prove that the set of potential functions that are first and/or second-order degenerate has $\mathcal{L}^K$-measure zero.

4. Proof of Main Result. Given Definition 14, we will prove Theorem 1 in two steps. We begin by showing that almost all potential functions are second-order non-degenerate (Section 4.1 and Proposition 17). Subsequently we show that almost all potential functions are first-order non-degenerate (Section 4.2 and Proposition 23).


Propositions 17 and 23 together prove Theorem 1.

4.1. Second-Order Degenerate Games. The goal of this subsection is to prove the following proposition.

Proposition 17. The set of potential functions which are second-order degenerate has $\mathcal{L}^K$-measure zero.

We will prove the proposition using Sard's theorem. Our construction roughly follows that of [13]. We begin by introducing some pertinent notation and preliminary results.

Note that the set of joint pure strategies $Y$ may be expressed as an ordered set $Y = \{y^1, \dots, y^K\}$, where each $y^\tau \in Y$ is an $N$-tuple of strategies, $\tau \in \{1, \dots, K\}$. We will assume a particular ordering for this set after Proposition 18.

If we consider the vector of potential function coefficients $u \in \mathbb{R}^K$ as a variable, then by (1), $U$ is linear in $u$.6 At this point we will express $U$ in a more convenient form.

Let $\tau = 1, \dots, K$, $i = 1, \dots, N$ and $x_i \in X_i$. We define $q_i^\tau : X_i \to [0, 1]$ by
$$q_i^\tau(x_i) := T_i^k(x_i), \tag{22}$$
where $k$ corresponds to the action played by player $i$ in the tuple $y^\tau$, i.e., $(y^\tau)_i = y_i^k$, and where $T_i^k(x_i)$ is the $k$-th component of $T_i(x_i)$. In an abuse of notation, given a pure strategy $y_i^k \in Y_i$, we let $q_i^\tau(y_i^k) = 1$ if $(y^\tau)_i = y_i^k$ and $q_i^\tau(y_i^k) = 0$ otherwise.

Given a fixed vector of potential coefficients $u \in \mathbb{R}^K$, the potential function $U : X \to \mathbb{R}$ may be expressed as (see (1) and (22))
$$U(x) = \sum_{\tau=1}^K u^\tau \left[\prod_{i=1}^N q_i^\tau(x_i)\right]. \tag{23}$$
Note that this form makes it clear that $U$ is linear in $u$.

Now, let $C = C_1 \cup \cdots \cup C_N$ be some carrier set. The analysis through the remainder of the section will rely on this carrier set being fixed, and many of the subsequent terms are implicitly dependent on the choice of $C$. In keeping with our prior convention we let $\gamma_i := |C_i|$, and let $\bar N := |\{i \in \{1, \dots, N\} : \gamma_i \ge 2\}|$.

Any $x_i$ with carrier $C_i$ has precisely $\gamma_i - 1$ free components (i.e., not constrained to zero by $C_i$). The joint strategy $x = (x_1, \dots, x_N)$ is a vector with
$$\gamma := \sum_{i=1}^N (\gamma_i - 1)$$
free components.

By (3) we have that

$$\frac{\partial U(x)}{\partial x_i^k} = U(y_i^{k+1}, x_{-i}) - U(y_i^1, x_{-i}) =: \bar F_i^k(x, u) \tag{24}$$
for $i = 1, \dots, N$, $k = 1, \dots, \gamma_i - 1$. Let
$$\bar F(x, u) := \big(\bar F_i^k(x, u)\big)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1}. \tag{25}$$

6 The potential function $U$ is, of course, a function of both $x$ and $u$. However, since we will only exploit the dependence on $u$ in this section, we generally stick to the standard game-theoretic convention of writing $U$ as a function of $x$ only [8].


Given an $x \in X$, it is at times useful to decompose it as $x = (x_p, x_m)$, where $x_m = (x_i^k)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1}$ and $x_p$ contains the remaining components of $x$. (The subscript of $x_m$ is indicative of the "mixed strategy components" of $x$ and $x_p$ is indicative of the "pure strategy components" of $x$.) In this decomposition, $x_m$ is a $\gamma$-dimensional vector containing the free components of $x$. Taking the Jacobian of $\bar F$ in terms of the components of $x_m$ we find that
$$D_{x_m} \bar F(x_p, x_m, u) = H(x), \tag{26}$$
where $D_{x_m} \bar F(x_p, x_m, u) = \big(\frac{\partial \bar F}{\partial x_i^\ell}\big)_{i=1,\dots,N,\ \ell=1,\dots,\gamma_i-1}$.

Let $x^*$ be a mixed equilibrium with carrier $C$. Differentiating (3) we see that at the equilibrium $x^*$ we have $\frac{\partial U(x^*)}{\partial x_i^k} = 0$ for $i = 1, \dots, N$, $k = 1, \dots, \gamma_i - 1$ (see Lemma 25 in the appendix), or equivalently,
$$\bar F_i^k(x^*, u) = U(y_i^{k+1}, x_{-i}^*) - U(y_i^1, x_{-i}^*) = 0 \tag{27}$$
for $i = 1, \dots, N$, $k = 1, \dots, \gamma_i - 1$. Using (23) in the above we get
$$\bar F_i^k(x^*, u) = \sum_{\tau=1}^{K} \big(q_i^\tau(y_i^{k+1}) - q_i^\tau(y_i^1)\big) \Big[\prod_{j \neq i} q_j^\tau(x_j^*)\Big]\, u^\tau = 0. \tag{28}$$

It will be convenient to be able to relate the ordering of $Y$ with the ordering of $(\bar F_i^k)_{i=1,\dots,N,\ k=1,\dots,\gamma_i-1}$. For this purpose, given $i = 1, \dots, N$, $k = 1, \dots, \gamma_i - 1$, let
$$s^*(i, k) := \begin{cases} k & \text{for } i = 1, \\ \sum_{j=1}^{i-1} (\gamma_j - 1) + k & \text{for } i \ge 2. \end{cases}$$
Define $i^* : \{1, \dots, \gamma\} \to \{1, \dots, N\}$ and $k^* : \{1, \dots, \gamma\} \to \{1, \dots, \max_i \gamma_i - 1\}$ to be the inverse of $s^*$; that is,
$$s^*(i^*(s), k^*(s)) = s \tag{29}$$
for all $s = 1, \dots, \gamma$.

Given an $x \in X$, let $A(x) = (a_{s,\tau}(x))_{s=1,\dots,\gamma,\ \tau=1,\dots,K} \in \mathbb{R}^{\gamma \times K}$ be defined as the matrix with entries
$$a_{s,\tau}(x) := \big(q_{i^*(s)}^\tau(y_{i^*(s)}^{k^*(s)+1}) - q_{i^*(s)}^\tau(y_{i^*(s)}^1)\big) \prod_{j \neq i^*(s)} q_j^\tau(x_j). \tag{30}$$
Using this notation, (28) is equivalently expressed as
$$A(x^*)\, u = 0. \tag{31}$$
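To make (30)–(31) concrete, the following Python sketch (our construction; all names are assumptions) builds $A(x)$ for the same illustrative $2$-player, $2 \times 2$ potential game used earlier, with the full carrier, and verifies both the equilibrium condition $A(x^*)u = 0$ and the full row rank asserted in Proposition 18 below.

```python
import itertools
import numpy as np

# Illustrative 2-player game, both carriers full (C_i = Y_i), gamma_i = K_i = 2.
u = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}   # potential coefficients
Y = list(itertools.product(range(2), range(2)))             # ordering of Y

def sigma(x):
    """T(x) for two players with two actions each; x_i is the weight on action index 1."""
    return [np.array([1.0 - xi, xi]) for xi in x]

def A_matrix(x):
    """A(x) from (30): rows indexed by (i, k), columns by joint pure strategies."""
    s = sigma(x)
    rows = []
    for i in range(2):                  # player i, single free component k = 1
        row = []
        for y in Y:
            r = (1.0 if y[i] == 1 else 0.0) - (1.0 if y[i] == 0 else 0.0)  # q_i(y_i^2) - q_i(y_i^1)
            p = np.prod([s[j][y[j]] for j in range(2) if j != i])          # prod_{j != i} q_j(x_j)
            row.append(r * p)
        rows.append(row)
    return np.array(rows)

x_star = [0.5, 0.5]                     # completely mixed equilibrium of this game
A = A_matrix(x_star)
u_vec = np.array([u[y] for y in Y])
print(A @ u_vec)                        # ~[0, 0]: the equilibrium condition (31)
print(np.linalg.matrix_rank(A))         # 2: full row rank, as in Proposition 18
```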

The following proposition establishes the solvability of (31). We note that, in order to prove Proposition 17 (the main result of this section), it is sufficient to consider only $x$ such that $\operatorname{carr}(x) = C$ in the following proposition. However, later, when studying first-order degenerate games in Section 4.2, we will consider the notion of an "extended carrier set" (see (42)), and we will need to characterize the rank of $A(x)$ for $x$ with $\operatorname{carr}(x) \subset C$.


Proposition 18. For any $x$ such that $\operatorname{carr}(x) \subseteq C$, the matrix $A(x)$ has full row rank.

Before proving this proposition we introduce some notation that will permit us to study the structure of $A(x)$.

We begin by establishing a particular ordering of elements in $Y$. Let
$$\bar K := \prod_{i=1}^N \gamma_i.$$
For $\tau = 1, \dots, \bar K$, let $\alpha_\tau = (\alpha_\tau^1, \dots, \alpha_\tau^{\bar N})$ be a multi-index associated with the $\tau$-th action tuple in $Y$, meaning that
$$y^\tau = (y_1^{\alpha_\tau^1}, \dots, y_{\bar N}^{\alpha_\tau^{\bar N}}, y_{\bar N+1}^1, \dots, y_N^1),$$
where $y_i^1$, for $i = \bar N + 1, \dots, N$, is, by construction, the pure strategy used by player $i$ at any strategy $x$ with carrier $C$. Let $Y$ be reordered so that for every $\tau = 1, \dots, \bar K$, $i = 1, \dots, \bar N$ we have $1 \le \alpha_\tau^i \le \gamma_i$. This ensures that the first $\bar K$ strategies in $Y$ contain all strategy combinations of the elements of $C_1, \dots, C_N$, and that for any multi-index $\alpha = (\alpha^1, \dots, \alpha^{\bar N})$, $1 \le \alpha^i \le \gamma_i$, there exists a unique $1 \le \tau \le \bar K$ such that $y^\tau = (y_1^{\alpha^1}, \dots, y_{\bar N}^{\alpha^{\bar N}}, y_{\bar N+1}^1, \dots, y_N^1)$.

By definition (22) and the ordering we assumed for $Y$ we have that
$$q_i^\tau(y_i^k) = \begin{cases} 1 & \text{if } k = \alpha_\tau^i \\ 0 & \text{otherwise.} \end{cases} \tag{32}$$
For $1 \le s \le \gamma$ and $1 \le \tau \le K$, let
$$r_{s,\tau} := q_i^\tau(y_i^{k+1}) - q_i^\tau(y_i^1) \quad \text{and} \quad p_{s,\tau}(x) := \prod_{j \neq i} q_j^\tau(x_j), \tag{33}$$
where $i = i^*(s)$ and $k = k^*(s)$ (see (29)), and note that $a_{s,\tau}(x) = r_{s,\tau}\, p_{s,\tau}(x)$ (see (30)). We may write $A(x) = R \circ P(x)$, where $\circ$ is the Hadamard product, and $R$ and $P(x)$ have entries $r_{s,\tau}$ and $p_{s,\tau}(x)$ respectively.

Partition $A(x)$, $R$ and $P(x)$ as $A(x) = [A_1(x)\ A_2(x)]$, $R = [R_1\ R_2]$, and $P(x) = [P_1(x)\ P_2(x)]$, where $A_1(x), R_1, P_1(x) \in \mathbb{R}^{\gamma \times \bar K}$ and $A_2(x), R_2, P_2(x) \in \mathbb{R}^{\gamma \times (K - \bar K)}$, so we may write
$$A(x) = [A_1(x)\ A_2(x)], \quad \text{with} \quad A_1(x) = R_1 \circ P_1(x) \ \text{ and } \ A_2(x) = R_2 \circ P_2(x).$$

In order to show that $A(x)$ has full row rank, it is sufficient to prove that $A_1(x)$ has full row rank; this is the approach we will take in proving the proposition.

We address this by studying the sign pattern of $A_1(x)$. Properties of sign pattern matrices (i.e., matrices with entries in $\{-1, 0, 1\}$) have been well studied [3, 15]. We recall the following definition from [3].

Definition 19. A sign pattern matrix $L \in \mathbb{R}^{m \times n}$ with $n \ge m$ is said to be an L-matrix if for every matrix $M$ with $\operatorname{sgn}(M) = \operatorname{sgn}(L)$, the matrix $M$ has full row rank.

The following lemma characterizes L-matrices [3, 15].


Lemma 20. Let $L \in \mathbb{R}^{m \times n}$ be a sign pattern matrix with $n \ge m$. Then $L$ is an L-matrix if and only if for every diagonal sign pattern matrix $D \in \mathbb{R}^{m \times m}$, $D \neq 0$, there is a nonzero column of $DL$ in which each nonzero entry has the same sign.
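For small sign patterns the characterization in Lemma 20 can be checked by brute force. The following Python sketch (ours; purely illustrative) enumerates all nonzero diagonal sign matrices and applies the characterization; it confirms, for instance, that the sign pattern arising from the $2 \times 2$ example after (31) is an L-matrix.

```python
import itertools
import numpy as np

def is_L_matrix(L):
    """Brute-force check of Lemma 20: L (m x n, entries in {-1, 0, 1}, n >= m) is an
    L-matrix iff for every nonzero diagonal sign matrix D some column of D @ L is
    nonzero with all of its nonzero entries of the same sign."""
    L = np.asarray(L)
    m = L.shape[0]
    for diag in itertools.product([-1, 0, 1], repeat=m):
        if all(d == 0 for d in diag):
            continue
        DL = np.diag(diag) @ L
        ok = any(col.any() and (np.all(col >= 0) or np.all(col <= 0))
                 for col in DL.T)
        if not ok:
            return False
    return True

# Sign pattern of A_1(x) in the 2x2 example above (rows: players, columns: joint actions).
print(is_L_matrix([[-1, -1, 1, 1],
                   [-1, 1, -1, 1]]))   # True: any matrix with this sign pattern has full row rank
```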

In light of Definition 19, Proposition 18 is equivalent to the following lemma.

Lemma 21. For any $x$ such that $\operatorname{carr}(x) \subseteq C$, the matrix $\operatorname{sgn}(A_1(x)) = R_1 \circ \operatorname{sgn}(P_1(x))$ is an L-matrix.

The proof of this lemma relies on showing that $R_1 \circ \operatorname{sgn}(P_1(x))$ satisfies the L-matrix characterization given in Lemma 20. Before proving this lemma we introduce some definitions that will be useful in the proof.

Given a diagonal matrix $D \in \mathbb{R}^{\ell \times \ell}$, $\ell \in \mathbb{N}$, let $\operatorname{diag}(D)$ be the vector in $\mathbb{R}^\ell$ containing the diagonal elements of $D$.

Given a diagonal sign pattern matrix $D \in \mathbb{R}^{\ell \times \ell}$, $\ell \in \mathbb{N}$, define $\operatorname{idx}(D)$ as follows. If $\operatorname{diag}(D)$ does not contain any ones, then let $\operatorname{idx}(D) = 1$. Otherwise, let $\operatorname{idx}(D)$ be one more than the first index in $\operatorname{diag}(D)$ containing a $1$.7,8

Given a diagonal matrix $D \in \mathbb{R}^{\gamma \times \gamma}$, let $D_i \in \mathbb{R}^{(\gamma_i-1) \times (\gamma_i-1)}$, $i = 1, \dots, N$, be the (unique) diagonal matrices satisfying $\operatorname{diag}(D) = (\operatorname{diag}(D_1), \dots, \operatorname{diag}(D_N))$.

Proof. Let $x$ be a strategy satisfying $\operatorname{carr}(x) \subseteq C$. In order to show that $R_1 \circ \operatorname{sgn}(P_1(x))$ is an L-matrix, it is sufficient (by Lemma 20) to show that for any diagonal sign pattern matrix $D \neq 0$, there exists a column of $D(R_1 \circ \operatorname{sgn}(P_1(x)))$ which is nonzero and in which every nonzero entry has the same sign. With this in mind, we begin by giving a characterization of the columns of $R_1$.

Suppose that $i \in \{1, \dots, N\}$ and $k \in \{1, \dots, \gamma_i - 1\}$ are fixed. Note the following:
(i) Suppose $\tau \in \{1, \dots, \bar K\}$ is such that $\alpha_\tau^i = k + 1$, where $\alpha_\tau^i$ is the $i$-th index of the multi-index $\alpha_\tau$. Since $\alpha_\tau$ is used to define the ordering of actions in $Y$, we have $q_i^\tau(y_i^{k+1}) = 1$ and $q_i^\tau(y_i^1) = 0$ (see (32) and preceding discussion). Hence, $q_i^\tau(y_i^{k+1}) - q_i^\tau(y_i^1) = 1$.
(ii) Suppose $\tau \in \{1, \dots, \bar K\}$ is such that $\alpha_\tau^i = 1$. Then $q_i^\tau(y_i^{k+1}) = 0$ and $q_i^\tau(y_i^1) = 1$. Hence, $q_i^\tau(y_i^{k+1}) - q_i^\tau(y_i^1) = -1$.
(iii) For all other $\tau \in \{1, \dots, \bar K\}$ we have $q_i^\tau(y_i^{k+1}) = 0$ and $q_i^\tau(y_i^1) = 0$, and hence $q_i^\tau(y_i^{k+1}) - q_i^\tau(y_i^1) = 0$.

For $1 \le \tau \le \bar K$, let $r_\tau \in \mathbb{R}^\gamma$ be the $\tau$-th column of $R_1$. Partition this column as
$$r_\tau = \begin{pmatrix} r_\tau^1 \\ \vdots \\ r_\tau^N \end{pmatrix},$$
where $r_\tau^i \in \mathbb{R}^{\gamma_i - 1}$. From (33) we see that
$$r_\tau^i = \begin{pmatrix} q_i^\tau(y_i^2) - q_i^\tau(y_i^1) \\ \vdots \\ q_i^\tau(y_i^{\gamma_i}) - q_i^\tau(y_i^1) \end{pmatrix}.$$

7 Assume indexing starts with one, not zero. For example, if the first time a $1$ appears in $\operatorname{diag}(D)$ is at index 2, then $\operatorname{idx}(D) = 3$.
8 The awkward offset in this definition is needed in order to handle the indexing offset inherent in the mapping $T_i : X_i \to \Delta_i$, $i = 1, \dots, N$.


Given the observations (i)–(iii) above we see that for each $i$ we have
$$r_\tau^i = \begin{cases} -\mathbf{1} & \text{if } \alpha_\tau^i = 1 \\ e_{\alpha_\tau^i - 1} & \text{if } 2 \le \alpha_\tau^i \le \gamma_i, \end{cases} \tag{34}$$
where the symbol $e_{\alpha_\tau^i - 1}$ refers to the $(\alpha_\tau^i - 1)$-th canonical vector in $\mathbb{R}^{\gamma_i - 1}$ and $\mathbf{1} \in \mathbb{R}^{\gamma_i - 1}$ is the vector of all ones.

We now characterize the columns of $\operatorname{sgn}(P_1(x))$. For $i = 1, \dots, N$ we define
$$I_i(x) := \{k \in \{1, \dots, \gamma_i\} : T_i^k(x_i) > 0\}.$$
Since $\operatorname{carr}(x) \subseteq C$, the ordering we assumed for $Y_i$ implies that $T_i^k(x_i) = 0$ for $k \ge \gamma_i + 1$. By the definition of $T_i$, it is not possible to have $T_i^k(x_i) = 0$ for all $k = 1, \dots, K_i$, and hence $I_i(x) \neq \emptyset$.

Let $p_\tau$ be the $\tau$-th column of $P_1(x)$ and let
$$\bar p_\tau := \operatorname{sgn}(p_\tau)$$
be the $\tau$-th column of $\operatorname{sgn}(P_1(x))$.9 Suppose that $\tau \in \{1, \dots, \bar K\}$ is such that for the multi-index $\alpha_\tau$ we have $\alpha_\tau^i \in I_i(x)$ for all $i = 1, \dots, \bar N$. Then for each $s = 1, \dots, \gamma$ the $(s, \tau)$-th entry of $P_1(x)$ is strictly positive (see (22) and (33)), and hence $p_\tau$ is positive and $\bar p_\tau = \mathbf{1}$.

Partition the columns $p_\tau$ and $\bar p_\tau$ as
$$p_\tau = \begin{pmatrix} p_\tau^1 \\ \vdots \\ p_\tau^N \end{pmatrix} \quad \text{and} \quad \bar p_\tau = \begin{pmatrix} \bar p_\tau^1 \\ \vdots \\ \bar p_\tau^N \end{pmatrix}, \tag{35}$$
where $p_\tau^i, \bar p_\tau^i \in \mathbb{R}^{\gamma_i - 1}$. Suppose that $\tau$ is such that for the multi-index $\alpha_\tau$ we have $\alpha_\tau^i \notin I_i(x)$ for exactly one subindex $i \in \{1, \dots, \bar N\}$. Then $p_\tau^i$ is positive (see (33)) and $p_\tau^j$ is zero for any $j \neq i$. Hence, $\bar p_\tau^i = \mathbf{1}$ and $\bar p_\tau^j = 0$ for any $j \neq i$.

Now, let $D \in \mathbb{R}^{\gamma \times \gamma}$ be a non-zero diagonal sign pattern matrix. We will show that there is a nonzero column of $D(R_1 \circ \operatorname{sgn}(P_1(x)))$ in which each nonzero entry is a $1$.

We now consider two possible cases for the structure of $D$ and show that in each case there is a nonzero column of $D(R_1 \circ \operatorname{sgn}(P_1(x)))$ in which every nonzero entry is $1$.

Case 1: Suppose that for all $i \in \{1, \dots, \bar N\}$ such that $\operatorname{idx}(D_i) \notin I_i(x)$ we have $\operatorname{diag}(D_i) = 0$. Choose $\tau$ such that
$$\alpha_\tau^i = \operatorname{idx}(D_i) \ \text{ if } \operatorname{idx}(D_i) \in I_i(x), \qquad \alpha_\tau^i \in I_i(x) \ \text{ if } \operatorname{idx}(D_i) \notin I_i(x).$$
Note that $\alpha_\tau^i \in I_i(x)$ for all $i = 1, \dots, \bar N$, and hence $\bar p_\tau = \mathbf{1}$ (see discussion preceding (35)). The $\tau$-th column of $D(R_1 \circ \operatorname{sgn}(P_1(x)))$ is given by
$$D(r_\tau \circ \bar p_\tau) = \operatorname{diag}(D) \circ r_\tau = \big(\operatorname{diag}(D_i) \circ r_\tau^i\big)_{i=1}^N. \tag{36}$$

9 For convenience in notation, we suppress the argument $x$ when writing the columns of these matrices.


For any $i$ such that $\operatorname{idx}(D_i) \notin I_i(x)$ we have, by assumption, $\operatorname{diag}(D_i) = 0$ and hence $\operatorname{diag}(D_i) \circ r_\tau^i = 0$. Moreover, note that in this case we have $\operatorname{idx}(D_i) = 1$ since, by the definition of $\operatorname{idx}(\cdot)$, $\operatorname{diag}(D_i) = 0$ implies $\operatorname{idx}(D_i) = 1$.

Suppose now that $i$ is such that $\operatorname{idx}(D_i) \in I_i(x)$. For $i = 1, \dots, \bar N$, if $\alpha_\tau^i = 1$ then $r_\tau^i = -\mathbf{1}$ (by (34)) and $\operatorname{diag}(D_i)$ contains no ones (this is the definition of $\operatorname{idx}(D_i) = 1$). In fact, $\operatorname{diag}(D_i)$ contains only entries with value $0$ or $-1$. Hence, $\operatorname{diag}(D_i) \circ r_\tau^i = -\operatorname{diag}(D_i)$, which is a nonnegative vector.

If $2 \le \alpha_\tau^i \le \gamma_i$, then $r_\tau^i = e_{\alpha_\tau^i - 1}$ (by (34)). Recalling the definition of $\operatorname{idx}(\cdot)$, by our choice of $\alpha_\tau^i = \operatorname{idx}(D_i)$, the $(\alpha_\tau^i - 1)$-th entry of $\operatorname{diag}(D_i)$ is $1$. Hence, $\operatorname{diag}(D_i) \circ r_\tau^i = e_{\alpha_\tau^i - 1}$. In particular, this implies that if $\alpha_\tau^i \neq 1$ then $\operatorname{diag}(D_i) \circ r_\tau^i$ is not identically zero and every nonzero entry of $\operatorname{diag}(D_i) \circ r_\tau^i$ is $1$.

In summary, for $i = 1, \dots, N$, we have $\operatorname{diag}(D_i) \circ r_\tau^i \ge 0$, with equality only when $\operatorname{idx}(D_i) = 1$ and $D_i = 0$. Hence, by (36), the $\tau$-th column of $D(R_1 \circ \operatorname{sgn}(P_1(x)))$ satisfies $D(r_\tau \circ \bar p_\tau) \ge 0$, with equality only when $\operatorname{idx}(D_i) = 1$ and $D_i = 0$ for all $i$. But, by assumption, $D \neq 0$, so $D(r_\tau \circ \bar p_\tau) \neq 0$.

Case 2: Suppose that for some $i \in \{1, \dots, N\}$ we have $\operatorname{idx}(D_i) \notin I_i(x)$ and $\operatorname{diag}(D_i) \neq 0$. Let $\tau \in \{1, \dots, K\}$ be chosen such that $\alpha_\tau^i = \operatorname{idx}(D_i)$ for exactly one such $i \in \{1, \dots, N\}$ and for all other $j \neq i$ we have $\alpha_\tau^j \in I_j(x)$ (this is always possible since $I_j(x) \neq \emptyset$). Then we have
$$\bar p_\tau^i = 1, \quad\text{and}\quad \bar p_\tau^j = 0 \ \text{ for all } j \neq i$$
(see the discussion following (35)).

As shown in Case 1, if $\alpha_\tau^i = 1$, then $D_i \leq 0$ and $r_\tau^i = -1$, which implies that $\operatorname{diag}(D_i)\, r_\tau^i \geq 0$. Moreover, since $\bar p_\tau^i = 1$ and since, by assumption, $\operatorname{diag}(D_i) \neq 0$, we have $\operatorname{diag}(D_i)\, r_\tau^i\, \bar p_\tau^i \neq 0$ and every nonzero entry is $1$.

If $2 \leq \alpha_\tau^i \leq \gamma_i$, then, again using the same reasoning as in Case 1, we see that $\operatorname{diag}(D_i)\, r_\tau^i = e_{\alpha_\tau^i - 1}$. Since $\bar p_\tau^i = 1$, we get that $\operatorname{diag}(D_i)\, r_\tau^i\, \bar p_\tau^i = e_{\alpha_\tau^i - 1}$.

For $j \neq i$ we have $\bar p_\tau^j = 0$, which implies $\operatorname{diag}(D_j)\, r_\tau^j\, \bar p_\tau^j = 0$. All together, this implies that the $\tau$-th column of $D(R_1 \operatorname{sgn}(P_1(x)))$, given by $(\operatorname{diag}(D_j)\, r_\tau^j\, \bar p_\tau^j)_{j=1}^N$, is nonzero and every nonzero entry is equal to $1$.

Since this holds for an arbitrary nonzero diagonal sign matrix $D$, Lemma 20 implies that $R_1 \operatorname{sgn}(P_1(x))$ is an L-matrix. Since this holds for any $x$ satisfying $\operatorname{carr}(x) \subseteq C$, the desired result holds.
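To make the appeal to Lemma 20 concrete, here is a minimal brute-force sketch (Python; the helper name is_L_matrix is ours, not the paper's) of the signing criterion used above: a sign pattern passes when every nonzero diagonal sign matrix $D$ produces at least one nonzero, unisigned column of $D\,\operatorname{sgn}(M)$ (cf. [3, 15]). The argument above verifies the stronger fact that for $M = R_1 \operatorname{sgn}(P_1(x))$ such a column can always be taken with nonzero entries equal to $1$.

```python
import itertools
import numpy as np

def is_L_matrix(M):
    """Brute-force check of the signing criterion: for every nonzero
    diagonal sign matrix D (diagonal entries in {-1, 0, +1}), some column
    of D @ sgn(M) must be nonzero with all nonzero entries of one sign.
    Exponential in the number of rows, so only suitable for small examples."""
    S = np.sign(M)
    m = S.shape[0]
    for signs in itertools.product((-1, 0, 1), repeat=m):
        if not any(signs):
            continue                      # skip D = 0
        DS = np.diag(signs) @ S
        unisigned = False
        for col in DS.T:
            nz = col[col != 0]
            if nz.size and (np.all(nz > 0) or np.all(nz < 0)):
                unisigned = True
                break
        if not unisigned:
            return False                  # this signing admits no unisigned column
    return True
```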

Given the carrier $C$ there are $\binom{K}{\gamma}$ possible combinations (of length $\gamma$) of the columns of $A(x)$. For each $r = 1, \dots, \binom{K}{\gamma}$, let $A_r(x) \in \mathbb{R}^{\gamma \times \gamma}$ denote a square matrix formed by taking one unique combination of the columns of $A(x)$. For $r = 1, \dots, \binom{K}{\gamma}$, let
$$S_r := \{x \in X : \operatorname{carr}(x) = C,\ \det A_r(x) \neq 0\}.$$
By Proposition 18, no strategy $x \in X$ with $\operatorname{carr}(x) = C$ may simultaneously lie in all of the complements $S_r^c$. Note also that each $S_r$ is open relative to the set $\{x \in X : \operatorname{carr}(x) = C\} = \Omega_C$ (see (10)). Thus, we may construct a countable family of open balls $(B_\ell)_{\ell \geq 1}$, $B_\ell \subset \Omega_C$, that satisfy:
(i) $\bigcup_{\ell \geq 1} B_\ell = \{x \in X : \operatorname{carr}(x) = C\}$;
(ii) for each $\ell \in \mathbb{N}$ there exists an $r_\ell \in \{1, \dots, \binom{K}{\gamma}\}$ such that $B_\ell \subseteq S_{r_\ell}$ (i.e., $A_{r_\ell}(x)$ is invertible for all $x \in B_\ell$).

Fix $\ell \in \{1, 2, \dots\}$. After reordering, $A(x)$ may be partitioned as $A(x) = [\bar A_{r_\ell}(x)\ \ A_{r_\ell}(x)]$, where $\bar A_{r_\ell}(x)$ is the matrix formed by the columns of $A(x)$ not used to form $A_{r_\ell}(x)$.


Let the strategy set $Y$ be reordered in the same way as the columns of $A(x)$.¹⁰ Given a vector of potential coefficients $u \in \mathbb{R}^K$, let it be partitioned as $u = (u_1, u_2)$, where $u_1 \in \mathbb{R}^{K-\gamma}$ consists of the first $K - \gamma$ entries of $u$ and $u_2 \in \mathbb{R}^{\gamma}$ of the last $\gamma$ entries. Define $\rho_\ell : B_\ell \times \mathbb{R}^{K-\gamma} \to \mathbb{R}^{\gamma}$ by
$$(37)\qquad \rho_\ell(x, u_1) := -A_{r_\ell}(x)^{-1} \bar A_{r_\ell}(x)\, u_1.$$
If $x^* \in B_\ell$ is an equilibrium for some potential game with potential coefficient vector $u$, then by (31) we have $A(x^*)u = 0$. Since $A_{r_\ell}(x^*)$ is invertible, this is equivalent to $u_2 = -A_{r_\ell}(x^*)^{-1} \bar A_{r_\ell}(x^*)\, u_1$. Hence, if $x^* \in B_\ell$ is an equilibrium of some potential game with potential coefficient vector $u = (u_1, u_2)$, the function $\rho_\ell$ permits us to recover $u_2$ given $u_1$ and $x^*$.
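As a concrete (hypothetical) illustration of (37): given a routine A_of_x returning the $\gamma \times K$ matrix $A(x)$ and the column indices cols of the invertible block $A_{r_\ell}(x)$ (both names are ours, assumed supplied by the user), $u_2$ can be recovered from $u_1$ and an equilibrium candidate by solving one $\gamma \times \gamma$ linear system rather than forming the inverse explicitly.

```python
import numpy as np

def rho_ell(A_of_x, cols, x, u1):
    """Sketch of (37): recover u2 from u1 and an equilibrium candidate x.
    A_of_x : user-supplied routine returning the (gamma x K) matrix A(x)
    cols   : indices of the gamma columns forming the invertible block A_{r_ell}(x)"""
    A = A_of_x(x)                                   # gamma x K
    comp = [j for j in range(A.shape[1]) if j not in cols]
    A_r = A[:, cols]                                # invertible gamma x gamma block
    A_bar = A[:, comp]                              # remaining gamma x (K - gamma) columns
    # u2 solves A_r u2 = -A_bar u1, i.e. A(x) u = 0 with u = (u1, u2)
    return -np.linalg.solve(A_r, A_bar @ u1)
```

Stacking u1 with the returned u2 (in the reordered coordinates) yields a coefficient vector satisfying $A(x)u = 0$; in practice cols would be chosen, as in the construction of $S_{r_\ell}$ above, so that $\det A_{r_\ell}(x) \neq 0$ on the ball of interest.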

Conversely, suppose $x \in B_\ell$ and $u_1 \in \mathbb{R}^{K-\gamma}$ are arbitrary. If $u = (u_1, u_2)$ with $u_2 = \rho_\ell(x, u_1)$, then by the definition of $\rho_\ell$ we see that $A(x)u = 0$. By the definition of $A(x)$ (see (25)–(30)) this implies
$$F(x, u_1, \rho_\ell(x, u_1)) = 0, \quad \text{for all } x \in B_\ell.$$
Thus, taking a partial derivative with respect to $x_i^k$ we get
$$(38)\qquad \frac{\partial}{\partial x_i^k} F(x, u_1, \rho_\ell(x, u_1)) = 0, \qquad i = 1, \dots, N,\ k = 1, \dots, \gamma_i - 1.$$
Consider again the decomposition $x = (x_p, x_m)$. Using compact notation, (38) is restated as
$$(39)\qquad D_{x_m} F(x_p, x_m, u_1, \rho_\ell(x_p, x_m, u_1)) = 0.$$

Suppose $x^* \in B_\ell$ is an equilibrium of a potential game with potential coefficients $u$ and $\operatorname{carr}(x^*) = C$. Applying the chain rule in (39), using (26), and using the fact that $F(x, u) = A(x)u = \bar A_{r_\ell}(x)u_1 + A_{r_\ell}(x)u_2$, we find that at $x^*$ there holds¹¹
$$(40)\qquad H(x^*) = -A_{r_\ell}(x^*)\, J_{\rho_\ell}(x^*, u),$$
where $J_{\rho_\ell}(x, u) := D_{x_m}\, \rho_\ell(x_p, x_m, u_1)$ is the Jacobian of $\rho_\ell$ taken with respect to $x_m$.

Since $A_{r_\ell}(x)$ is invertible for all $x \in B_\ell$, this means that given any equilibrium $x^* \in B_\ell$, the Hessian $H(x^*)$ is nonsingular if and only if the Jacobian $J_{\rho_\ell}(x^*, u)$ is nonsingular.

For each $\ell \in \mathbb{N}$, define the function $\bar\rho_\ell : B_\ell \times \mathbb{R}^{K-\gamma} \to \mathbb{R}^K$ by $\bar\rho_\ell(x, u_1) := (u_1, \rho_\ell(x, u_1))$. The function $\bar\rho_\ell$ is a trivial extension of $\rho_\ell$ that recovers the full vector of potential coefficients $u \in \mathbb{R}^K$ given $u_1 \in \mathbb{R}^{K-\gamma}$ and an equilibrium $x^* \in B_\ell$.

Remark 22. In the case of general $N$-player games, as considered in [13, 33], Harsanyi (and van Damme) construct a mapping into $\mathbb{R}^{NK}$ (denoted by the symbol $\rho^{**}$ in [13]) which recovers the individual utility coefficients for each player, given a subset of the utility coefficients and an equilibrium strategy. In that case, the equality governing the relationship between strategies and utility coefficients is given by an equation analogous to (31) (see (49) in [13]) in which the matrix corresponding to $A(x)$ in (31) has dimension $\gamma \times NK$. Effectively, Harsanyi's mapping $\rho^{**}$ is constructed by choosing $\gamma$ columns of the associated matrix $A(x)$ that are linearly independent for all $x \in X$, and inverting this square submatrix as in (37). In the case of potential games, it is not possible to choose $\gamma$ columns of $A(x)$ that are linearly independent for all $x \in X$. (In particular, Harsanyi's mapping $\rho^{**}$ loses uniqueness when applied to potential games.) Instead, one must construct a collection of (well-defined) mappings $(\rho_\ell)_\ell$, each of which recovers the full vector of potential coefficients given a subset of potential coefficients and an equilibrium strategy $x^*$ in some appropriate subset of $X$.

¹⁰Note that we previously assumed a specific ordering for $Y$. However, this was for the purpose of proving Proposition 18, which is unaffected by a reordering of $Y$ at this point.

¹¹We note that $H(x^*)$ is also dependent on $u$. However, we suppress the argument $u$ since it is generally held constant in the context of the Hessian $H$.

Note that the Jacobian of $\bar\rho_\ell$ takes the form
$$J_{\bar\rho_\ell}(x, u) = \begin{pmatrix} 0 & I \\ J_{\rho_\ell} & M \end{pmatrix},$$
for some matrix $M$. Clearly, $\det J_{\bar\rho_\ell} = 0$ if and only if $\det J_{\rho_\ell} = 0$. Thus, by (40) we see that if $x^* \in B_\ell$ is an equilibrium of a game with potential coefficient vector $u$, then
$$(41)\qquad \det J_{\bar\rho_\ell}(x^*, u) = 0 \iff \det H(x^*) = 0.$$
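For completeness, the determinant claim follows from the block structure alone (the upper-left zero block is $(K-\gamma)\times\gamma$, $I$ is the $(K-\gamma)\times(K-\gamma)$ identity, and $J_{\rho_\ell}$ is $\gamma\times\gamma$): swapping the two groups of block columns changes the determinant only by a fixed sign,
$$\det J_{\bar\rho_\ell} = \det\begin{pmatrix} 0 & I \\ J_{\rho_\ell} & M \end{pmatrix} = (-1)^{\gamma(K-\gamma)} \det\begin{pmatrix} I & 0 \\ M & J_{\rho_\ell} \end{pmatrix} = (-1)^{\gamma(K-\gamma)} \det J_{\rho_\ell},$$
so the two determinants vanish simultaneously.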

We now prove Proposition 17.

Proof. Let $C$ be a carrier set. Let $U(C) \subseteq \mathbb{R}^K$ be the set of potential functions having at least one degenerate equilibrium with carrier set $C$; that is,
$$U(C) := \left\{ u \in \mathbb{R}^K : \exists \text{ a degenerate equilibrium } x^* \in \{x \in X : \operatorname{carr}(x) = C\} \right\}.$$
For $\ell \in \mathbb{N}$, let $U(C, \ell) \subseteq \mathbb{R}^K$ be the subset of potential functions having at least one degenerate equilibrium $x^* \in B_\ell$, where $B_\ell$ is defined with respect to $C$; that is,
$$U(C, \ell) := \left\{ u \in \mathbb{R}^K : \exists \text{ a degenerate equilibrium } x^* \in B_\ell \right\}.$$
By construction we have $\bigcup_{\ell \geq 1} B_\ell = \{x \in X : \operatorname{carr}(x) = C\}$, and hence $U(C) = \bigcup_{\ell \geq 1} U(C, \ell)$.

We showed above that for any $(x, u) \in B_\ell \times \mathbb{R}^K$ such that $x$ is an equilibrium of the potential game with potential coefficients $u$, the Hessian $H(x)$ (taken with respect to $C$) is invertible if and only if the Jacobian of $\bar\rho_\ell$ at $(x, u)$ is invertible. Thus, the set $U(C, \ell)$ is contained in the set of critical values of $\bar\rho_\ell$. By Sard's theorem, $U(C, \ell)$ is a set with $\mathcal{L}^K$-measure zero. Since $U(C)$ is a countable union of sets of $\mathcal{L}^K$-measure zero, it is itself a set with $\mathcal{L}^K$-measure zero.

Let $U \subset \mathbb{R}^K$ denote the set of potential functions with at least one degenerate equilibrium. The set $U$ may be expressed as the union $U = \bigcup_C U(C)$ taken over all possible support sets $C$. Since there are only finitely many support sets $C$, the set $U$ has $\mathcal{L}^K$-measure zero.

4.2. First-Order Degenerate Games. The following proposition shows that first-order degenerate games form a null set.

Proposition 23. The set of potential functions which are first-order degenerate has $\mathcal{L}^K$-measure zero.

Proof. Fix some set $\bar C = \bar C_1 \cup \cdots \cup \bar C_N$ where each $\bar C_i$ is a nonempty subset of $Y_i$. Let $C$ be any strict subset of $\bar C$. In the context of this proof let $\gamma_i := |\bar C_i|$, let $\gamma := \sum_{i=1}^N (\gamma_i - 1)$, let $\bar N := |\{i = 1, \dots, N : \gamma_i \geq 2\}|$, and assume $Y_i$ is reordered so that $\{y_i^1, \dots, y_i^{\gamma_i}\} = \bar C_i$. Note that this ordering implies that for any $x$ with $\operatorname{carr}(x) = C$ we have $y_i^1 \in \operatorname{carr}(x)$, $i = 1, \dots, N$.

Given an equilibrium $x^*$, let the extended carrier of $x^*$ be defined as
$$(42)\qquad \operatorname{ext\,carr}(x^*) := \operatorname{carr}(x^*) \cup \left( \bigcup_{i=1}^N \left\{ y_i^k \in Y_i : k = 2, \dots, K_i,\ \frac{\partial U(x^*)}{\partial x_i^{k-1}} = 0 \right\} \right).$$
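As a small illustration of (42), the sketch below (hypothetical helper names; the carrier, the partials $\partial U(x^*)/\partial x_i^{k-1}$, and the sizes $K_i$ are assumed to be supplied by the caller) assembles the extended carrier as the carrier together with every pure strategy $y_i^k$, $k \geq 2$, whose associated partial derivative of the potential vanishes:

```python
def extended_carrier(carrier, dU, K, tol=1e-12):
    """Sketch of (42).
    carrier : dict, player i -> set of indices k with y_i^k in carr(x*)
    dU      : dict, (i, k-1) -> dU(x*)/dx_i^{k-1} evaluated at x*
    K       : dict, player i -> number of pure strategies K_i
    Returns the extended carrier as a dict of index sets."""
    ext = {i: set(ks) for i, ks in carrier.items()}
    for i in carrier:
        for k in range(2, K[i] + 1):
            if abs(dU[(i, k - 1)]) <= tol:      # partial derivative vanishes
                ext[i].add(k)
    return ext
```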

Suppose that $x^*$ is an equilibrium with extended carrier $\bar C$. By Lemma 25 (see the appendix) and the ordering we assumed for $Y_i$, we have
$$(43)\qquad F_i^k(x^*, u) = \frac{\partial U(x^*)}{\partial x_i^{k-1}} = 0, \qquad i = 1, \dots, N,\ k = 2, \dots, \gamma_i,$$
where $F_i^k$ is as defined in (24). Thus, if $x^*$ is an equilibrium for some game with potential coefficient vector $u \in \mathbb{R}^K$ and $\operatorname{ext\,carr}(x^*) = \bar C$, then by the definition of $A(x)$ (see (25)–(30)), (43) implies that $A(x^*)u = 0$, or equivalently,
$$u \in \ker A(x^*),$$
where the matrix $A(x) \in \mathbb{R}^{\gamma \times K}$ is defined with respect to $\bar C$, as in (30).

Let $U(C, \bar C) \subseteq \mathbb{R}^K$ be the set of potential functions for which there exists an equilibrium $x^*$ with $\operatorname{carr}(x^*) = C$ and $\operatorname{ext\,carr}(x^*) = \bar C$. Let
$$\bar X := \{x \in X : \operatorname{carr}(x) = C\}.$$
By the above we see that
$$(44)\qquad U(C, \bar C) \subseteq \bigcup_{x \in \bar X} \ker A(x).$$

For each $x \in \bar X$, let $\operatorname{range} A(x)^T$ denote the range space of $A(x)^T$. Each entry of $A(x)^T$ is a polynomial function of $x$ and hence is Lipschitz continuous over the bounded set $\bar X$. By Proposition 18 we have $\operatorname{rank} A(x)^T = \gamma$ for all $x \in \bar X$. Thus, we may choose a set of $\gamma$ basis vectors $\{b_1(x), \dots, b_\gamma(x)\}$ spanning $\operatorname{range} A(x)^T$ such that each $b_k(x) \in \mathbb{R}^K$, $k = 1, \dots, \gamma$, is a Lipschitz continuous function of $x$. Moreover, we may choose a complementary set of $(K - \gamma)$ linearly independent vectors $\{b_{\gamma+1}(x), \dots, b_K(x)\}$ forming a basis for the orthogonal complement $(\operatorname{range} A(x)^T)^\perp$, with each $b_k(x) \in \mathbb{R}^K$, $k = \gamma + 1, \dots, K$, being a Lipschitz continuous function of $x$. Let $B(x) := (b_{\gamma+1}(x) \cdots b_K(x)) \in \mathbb{R}^{K \times (K-\gamma)}$. Let $f : \bar X \times \mathbb{R}^{K-\gamma} \to \mathbb{R}^K$ be given by $f(x, v) := B(x)v$. Since $B(x)$ is Lipschitz continuous in $x$ and $\bar X$ is bounded, $f$ is Lipschitz continuous. By the fundamental theorem of linear algebra, for each $x \in \bar X$, $\ker A(x) = (\operatorname{range} A(x)^T)^\perp = \operatorname{range} B(x)$. Hence,
$$(45)\qquad \bigcup_{x \in \bar X} \ker A(x) = f(\bar X \times \mathbb{R}^{K-\gamma}).$$
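For intuition, the pointwise object $B(x)$ is just an orthonormal basis of $\ker A(x)$ and can be computed numerically as below (a minimal sketch: A_of_x is a user-supplied routine, and the Lipschitz-continuous selection of the basis across different $x$, which the measure-zero argument actually requires, is not addressed here):

```python
import numpy as np
from scipy.linalg import null_space

def kernel_parameterization(A_of_x, x, v):
    """Evaluate f(x, v) = B(x) v, where the columns of B(x) form an
    orthonormal basis of ker A(x) = (range A(x)^T)^perp.
    A_of_x : user-supplied routine returning the (gamma x K) matrix A(x)."""
    A = A_of_x(x)                 # gamma x K, full row rank gamma
    B = null_space(A)             # K x (K - gamma), orthonormal columns
    return B @ np.asarray(v)
```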

Since $C \subsetneq \bar C$, the Hausdorff dimension of $\bar X$ is at most $(\gamma - 1)$, and the Hausdorff dimension of $\bar X \times \mathbb{R}^{K-\gamma}$ is at most $K - 1$. Since $f$ is Lipschitz continuous, this implies (see [7], Section 2.4) that the Hausdorff dimension of $f(\bar X \times \mathbb{R}^{K-\gamma})$ is at most $K - 1$, and in particular, that $f(\bar X \times \mathbb{R}^{K-\gamma})$ has $\mathcal{L}^K$-measure zero. By (44) and (45), this implies that $U(C, \bar C)$ has $\mathcal{L}^K$-measure zero.


Let $U \subseteq \mathbb{R}^K$ denote the set of all potential functions containing a first-order degenerate equilibrium. Since we may represent this set as a finite union of $\mathcal{L}^K$-measure-zero sets,
$$U = \bigcup_{\substack{\emptyset \neq \bar C_i \subseteq Y_i,\ i = 1, \dots, N \\ \bar C = \bar C_1 \cup \cdots \cup \bar C_N \\ C \subsetneq \bar C}} U(C, \bar C),$$
the set $U$ also has $\mathcal{L}^K$-measure zero.

Appendix.

Lemma 24. Let $x \in X$ and $i = 1, \dots, N$. Assume $Y_i$ is ordered so that $y_i^1 \in \mathrm{BR}_i(x_{-i})$. Then: (i) for $k = 1, \dots, K_i - 1$ we have $\frac{\partial U(x)}{\partial x_i^k} \leq 0$; and (ii) for $k = 1, \dots, K_i - 1$, we have $y_i^{k+1} \in \mathrm{BR}_i(x_{-i})$ if and only if $\frac{\partial U(x)}{\partial x_i^k} = 0$. In particular, combined with (i) this implies that
$$y_i^{k+1} \notin \mathrm{BR}_i(x_{-i}) \iff \frac{\partial U(x)}{\partial x_i^k} < 0.$$

Proof. (i) Differentiating (3) we find that $\frac{\partial U(x)}{\partial x_i^k} = U(y_i^{k+1}, x_{-i}) - U(y_i^1, x_{-i})$. Since $y_i^1$ is a best response, we must have $U(y_i^1, x_{-i}) \geq U(y_i^{k+1}, x_{-i})$ for any $k = 1, \dots, K_i - 1$. Hence $\frac{\partial U(x)}{\partial x_i^k} \leq 0$.

(ii) Follows readily from (3).
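As a quick numerical sanity check of the derivative formula used in part (i) (a sketch only, assuming the mixed-strategy parameterization suggested by that formula: $x_i^k$ is the mass player $i$ places on $y_i^{k+1}$, with $y_i^1$ receiving the remaining mass; the payoff table below is made up):

```python
import numpy as np

rng = np.random.default_rng(0)
Phi = rng.normal(size=(3, 4))            # potential of a hypothetical 3x4 two-player game

def mixed_potential(x1, x2):
    p1 = np.concatenate(([1 - x1.sum()], x1))   # full mixed strategy of player 1
    p2 = np.concatenate(([1 - x2.sum()], x2))   # full mixed strategy of player 2
    return p1 @ Phi @ p2

x1 = np.array([0.2, 0.3])                # interior point for player 1
x2 = np.array([0.1, 0.2, 0.4])           # interior point for player 2
eps, k = 1e-6, 0                         # code index k = 0 corresponds to paper index k = 1
e = np.zeros_like(x1); e[k] = eps
fd = (mixed_potential(x1 + e, x2) - mixed_potential(x1 - e, x2)) / (2 * eps)
p2 = np.concatenate(([1 - x2.sum()], x2))
formula = Phi[k + 1] @ p2 - Phi[0] @ p2  # U(y_1^{k+1}, x_{-1}) - U(y_1^1, x_{-1})
print(fd, formula)                       # the two values agree up to O(eps^2)
```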

Lemma 25. Suppose $x^*$ is an equilibrium and $y_i^k \in \operatorname{carr}(x^*)$, $k \geq 2$. Then $\frac{\partial U(x^*)}{\partial x_i^{k-1}} = 0$.

Proof. Since $U$ is multilinear, $y_i^k$ must be a pure-strategy best response to $x^*_{-i}$. The result then follows from Lemma 24.

REFERENCES

[1] F. Brandt and F. Fischer, On the hardness and existence of quasi-strict equilibria, in International Symposium on Algorithmic Game Theory, Springer, 2008, pp. 291–302.
[2] G. W. Brown, Iterative solutions of games by fictitious play, in Activity Analysis of Production and Allocation, Wiley, New York, 1951.
[3] R. A. Brualdi, K. L. Chavey, and B. L. Shader, Rectangular L-matrices, Linear Algebra Appl., 196 (1994), pp. 37–61.
[4] X. Chu and H. Sethu, Cooperative topology control with adaptation for improved lifetime in wireless ad hoc networks, in Proceedings of IEEE International Conference on Computer Communications, IEEE, 2012, pp. 262–270.
[5] G. Debreu, Economies with a finite set of equilibria, Econometrica: Journal of the Econometric Society, (1970), pp. 387–392.
[6] C. Ding, B. Song, A. Morye, J. A. Farrell, and A. K. Roy-Chowdhury, Collaborative sensing in a distributed PTZ camera network, IEEE Transactions on Image Processing, 21 (2012), pp. 3282–3295.
[7] L. C. Evans and R. F. Gariepy, Measure theory and fine properties of functions, Textbooks in Mathematics, CRC Press, Boca Raton, FL, revised ed., 2015.
[8] D. Fudenberg and J. Tirole, Game Theory, vol. 393, 1991.
[9] A. Garcia, D. Reaume, and R. L. Smith, Fictitious play for finding system optimal routings in dynamic traffic networks, Transportation Research Part B: Methodological, 34 (2000), pp. 147–156.
[10] S. Govindan, P. J. Reny, and A. J. Robson, A short proof of Harsanyi's purification theorem, Games and Economic Behavior, 45 (2003), pp. 369–374.
[11] C. Harris, On the rate of convergence of continuous-time fictitious play, Games and Economic Behavior, 22 (1998), pp. 238–259.
[12] J. C. Harsanyi, Games with randomly disturbed payoffs: A new rationale for mixed-strategy equilibrium points, International Journal of Game Theory, 2 (1973), pp. 1–23.
[13] J. C. Harsanyi, Oddness of the number of equilibrium points: a new proof, International Journal of Game Theory, 2 (1973), pp. 235–250.
[14] M. W. Hirsch, Differential topology, Springer, 1976.


[15] V. Klee, R. Ladner, and R. Manber, Signsolvability revisited, Linear Algebra Appl., 59 (1984), pp. 131–157.
[16] M. Kojima, A. Okada, and S. Shindoh, Strongly stable equilibrium points of n-person non-cooperative games, Mathematics of Operations Research, 10 (1985), pp. 650–663.
[17] T. J. Lambert, M. A. Epelman, and R. L. Smith, A fictitious play approach to large-scale optimization, Operations Research, 53 (2005), pp. 477–489.
[18] N. Li and J. R. Marden, Designing games for distributed optimization, IEEE Journal of Selected Topics in Signal Processing, 7 (2013), pp. 230–242.
[19] J. R. Marden, G. Arslan, and J. S. Shamma, Connections between cooperative control and potential games, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39 (2009), pp. 1393–1407.
[20] J. R. Marden, G. Arslan, and J. S. Shamma, Cooperative control and potential games, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 39 (2009), pp. 1393–1407.
[21] D. Monderer and L. Shapley, Potential games, Games and Econ. Behav., 14 (1996), pp. 124–143.
[22] R. B. Myerson, Refinements of the Nash equilibrium concept, International Journal of Game Theory, 7 (1978), pp. 73–80.
[23] N. Nie and C. Comaniciu, Adaptive channel allocation spectrum etiquette for cognitive radio networks, Mobile Networks and Applications, 11 (2006), pp. 779–797.
[24] A. Okada, On stability of perfect equilibrium points, International Journal of Game Theory, 10 (1981), pp. 67–73.
[25] G. Scutari, S. Barbarossa, and D. P. Palomar, Potential games: A framework for vector power control problems with coupled constraints, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 4, IEEE, 2006, pp. 241–244.
[26] R. Selten, Reexamination of the perfectness concept for equilibrium points in extensive games, International Journal of Game Theory, 4 (1975), pp. 25–55.
[27] V. Srivastava, J. O. Neel, A. B. MacKenzie, R. Menon, L. A. DaSilva, J. E. Hicks, J. H. Reed, and R. P. Gilles, Using game theory to analyze wireless ad hoc networks, IEEE Communications Surveys and Tutorials, 7 (2005), pp. 46–56.
[28] B. Swenson, Myopic Best-Response Learning in Large-Scale Games, PhD thesis, Carnegie Mellon University, 2017.

[29] B. Swenson and S. Kar, On the exponential rate of convergence of fictitious play in potential games. Submitted for conference publication, online: https://users.ece.cmu.edu/~brianswe/pdfs/Allerton2017.pdf, 2017.
[30] B. Swenson, R. Murray, and S. Kar, Fictitious play in potential games. Submitted for journal publication, online: https://users.ece.cmu.edu/~brianswe/pdfs/FPinPotGames.pdf, 2017.
[31] B. Swenson, R. Murray, and S. Kar, Regular potential games. Submitted for journal publication, online: https://users.ece.cmu.edu/~brianswe/pdfs/regularPG.pdf, 2017.

[32] E. Van Damme, Regular equilibrium points of n-person games in normal form, (1981).
[33] E. Van Damme, Stability and perfection of Nash equilibria, vol. 339, Springer, 1991.
[34] W. Wen-Tsun and J. Jia-He, Essential equilibrium points of n-person non-cooperative games, Scientia Sinica, 11 (1962), pp. 1307–1322.
[35] R. Wilson, Computing equilibria of n-person games, SIAM Journal on Applied Mathematics, 21 (1971), pp. 80–87.
[36] Y. Xu, A. Anpalagan, Q. Wu, L. Shen, Z. Gao, and J. Wang, Decision-theoretic distributed channel selection for opportunistic spectrum access: Strategies, challenges and solutions, IEEE Communications Surveys & Tutorials, 15 (2013), pp. 1689–1713.

[37] M. Zhu and S. Martínez, Distributed coverage games for energy-aware mobile sensor networks, SIAM Journal on Control and Optimization, 51 (2013), pp. 1–27.