Bargaining Under Strategic Uncertainty
Amanda Friedenberg∗
December 25, 2013
Extremely Preliminary
Abstract
This paper provides a novel understanding of delays in reaching agreements based on the idea
of strategic uncertainty—i.e., the idea that a Bargainer may face uncertainty about her op-
ponent’s play, even if there is no uncertainty about the structure of the game. It considers a
particular restriction on strategic uncertainty, called on path strategic certainty. This is the
assumption that strategic uncertainty can only arise after surprise moves in the negotiation process.
The paper shows that Bargainers who engage in forward induction reasoning can face strate-
gic uncertainty after surprise moves. Moreover, rational Bargainers who engage in forward
induction reasoning and satisfy on path strategic certainty may experience delays in reach-
ing agreements. The paper goes on to characterize the behavioral implications of rationality,
forward induction reasoning, and on path strategic certainty.
Bargaining is an important feature of many economic and political phenomena. Understand-
ing how negotiators form agreements is instrumental to understanding employment contracts,
legislative outcomes, sovereign debt, etc. In each of these applications, we observe an important
behavioral feature: at times, parties fail to reach an immediate agreement. Such failures lead to
strikes, holdouts, legislative stalemates, delays in renegotiating debt contracts, and so on. Each
of these situations has important (and sometimes long-term) economic consequences.
What is the source of such negotiation failures? Many answers have been put forward in the
literature. One natural answer is that such failures reflect incomplete information, i.e., reflect
the fact that bargainers face uncertainty about some structural aspect of the game. This can
be uncertainty about how bargainers value the object (e.g., Admati and Perry, 1987; Sobel and
Takahashi, 1983; Cramton, 1984; Fudenberg, Levine and Tirole, 1985; Grossman and Perry, 1986;
Feinberg and Skrzypacz, 2005), uncertainty about the bargainers’ cost of waiting (e.g., Rubinstein,
∗Arizona State University, [email protected]. I am indebted to Pierpaolo Battigalli, Ethan Bueno de Mesquita, Martin Cripps, Eddie Dekel, H. Jerome Keisler, Max Stinchcombe, Gus Stuart, Asher Wolinsky and, especially, Marciano Siniscalchi, for helpful conversations. I also thank audiences at the Paris Game Theory Seminar, Northwestern University, University of Texas, University of British Columbia, Caltech, Carnegie Mellon, the Game Theory Society World Congress, the Belief Change in Social Context Workshop, the Games Interactive Reasoning and Learning Workshop, the Tsinghua Economic Theory Conference, and the Society for the Advancement of Economic Theory. Much of this paper was written while visiting University College London; I am indebted to UCL for their continued hospitality.
1985; Watson, 1998), uncertainty about the bargainers’ posture (e.g., Abreu and Gul, 2000; Abreu
and Pearce, 2007; Abreu, Pearce and Stacchetti, 2012; Wolitzky, 2012), uncertainty about the
ability to make future offers (e.g., Yildiz, 2004, 2011; Ortner, 2013), etc.
To understand why incomplete information is a natural source of delay, refer to Rubinstein’s
(1982) canonical alternating offers bargaining model. Suppose there is delay in reaching an agree-
ment. Under a standard equilibrium analysis, Bargainers have correct beliefs about the bargaining
process. When there is delay, the responder rejects an offer because she correctly anticipates that
she will fare better in the future. This rejection is anticipated by the proposer. So, the proposer
has an incentive to, instead, offer a proposal that is better from the perspective of the responder.
If there is a failure to reach immediate agreement, it must be that the proposer does not under-
stand that the responder will necessarily reject the offer. Under an equilibrium analysis, this is
obtained by introducing uncertainty about the structure of the game, e.g., uncertainty about how
the other bargainer values the object.
The premise of this paper is that incomplete information cannot provide a full picture of
why Bargainers experience delays in reaching agreements. In particular, incomplete information
does not appear to be a significant source of delay in a wide variety of applications. Take three
examples. First, delay in the negotiation of athletic contracts does not appear to be driven by
differences in information. Often both the athletic product (i.e., the athlete’s statistics, traits,
outside options, etc.) and the team’s traits are commonly understood by the parties. Second, wars
are often seen as a failure by nations to reach agreements. But, understanding war as an artifact of
incomplete information leads to some peculiar conclusions—it leads to the conclusion that, after
prolonged battle, parties retain significant informational asymmetries. (See Fearon (2004, page
290) and Powell (2006, page 172) for a discussion of this point.) Third, high-profile legislative
stalemates often involve situations where the Legislators’ and Executive’s preferences are well-
understood. Moreover, on issues of domestic politics—particularly, budgetary stalemates—it is
difficult to argue that the Legislators and Executive have differential information about data.
This paper provides a novel understanding of delay in reaching agreements based on the idea of
strategic uncertainty: Even in the extreme case where the game is transparent to the Bargainers,
they may nonetheless face residual uncertainty about how others negotiate (or play the game).
The fact that strategic uncertainty can lead to bargaining impasses has a long history in the
negotiations literature, going back at least to Walton and McKersie (1965, page 37): A Bargainer
may offer a split of the pie with the expectation that the offer will be accepted; under strategic
uncertainty, this expectation need not be correct and the offer may be rejected. This simple
observation might suggest that there is nothing interesting to be gained from an exercise based
on strategic uncertainty—it might suggest an ‘anything goes’ result. Instead, the analysis here
imposes two important restrictions. These restrictions explicitly rule out trivial forms of strategic
uncertainty and, so, trivial forms of delay. The output is not an ‘anything goes’ result.
Preview of Approach The starting point is that each Bargainer faces a direct form of strategic
uncertainty. That is, each Bargainer faces uncertainty about how the other plays the game and this
uncertainty is not an artifact of other features of the environment (e.g., incomplete information or
player randomization). Epistemic game theory provides a formalism in which to analyze strategic
interaction, when Bargainers face such a direct form of strategic uncertainty.
The approach formally changes the definition of a game to include Bargainers’ hierarchies
of beliefs about the play of the game. Within this expanded definition of a game, we can for-
mally describe Bargainers’ reasoning about the other’s play; this is done by imposing (so-called)
epistemic conditions. We now preview the key epistemic conditions underlying the analysis.
A background requirement is that each Bargainer is rational. Formally, this means that each
Bargainer maximizes her conditional subjective expected utility, i.e., at each information set, she
maximizes her expected utility given her belief about how the other Bargainer plays the game.
Two key epistemic conditions restrict the Bargainer’s beliefs:
(a) Forward Induction Reasoning, and
(b) On Path Strategic Certainty.
Forward induction reasoning is the idea that Bargainers rationalize past behavior, when possible.
(The idea goes back to Kohlberg, 1981.) On path strategic certainty limits the nature of the
strategic uncertainty. It requires that, along the path of play, Bargainers correctly anticipate
how the bargaining process will unfold. Strategic uncertainty only arises in the event of surprise
offers or surprise rejections. (The idea is similar in spirit to self-confirming equilibrium, i.e., as in
Fudenberg and Levine, 1993; Dekel, Fudenberg and Levine, 1999.) We now discuss the importance
of these two criteria.
Begin with forward induction reasoning. It will be useful to draw an analogy to the standard
equilibrium analysis. We pointed to the fact that, if a proposer correctly anticipates that her
offer will be rejected, she has an incentive to offer a proposal that is better from the perspective
of the responder. The implicit presumption was that the proposer correctly anticipates that the
responder would accept such a ‘mutually beneficial’ offer; indeed, a threat to reject such an offer
would not be credible. A standard Nash equilibrium analysis does not rule out such an incredible
threat and, indeed, allows for delays in reaching agreements. The implicit argument was one based
on subgame perfection—where no Bargainer uses a strategy that involves an incredible threat
and, as a consequence, no player ‘thinks’ the other player uses such a strategy. Forward induction
reasoning provides an analogue for the case of strategic uncertainty. One natural conjecture is
that, if the Bargainers’ beliefs are consistent with forward induction reasoning, then we will return
to the conclusion of immediate agreement.
Turn to on path strategic certainty. Here, both Bargainers begin the bargaining process with
an understanding that they will, say, reach an agreement on a 50 : 50 split of the pie in period 10;
both bargainers understand that the other understands this, etc. And, indeed, this is the ‘correct’
outcome. The strategic uncertainty arises when one Bargainer, say Bargainer 1 (B1), deviates
from the believed path of play, e.g., by making a, say, mutually beneficial offer. At that point,
Bargainer 2 (B2) can face uncertainty about which future offers B1 will accept/reject. This type
of uncertainty is implicitly ruled out by an equilibrium analysis: Under an equilibrium analysis,
it is presumed that each Bargainer would play the equilibrium continuation strategy, even after
a deviation from the path of play; it follows that, under an equilibrium analysis, each Bargainer
continues to have correct beliefs about the path of play, after a surprise move in the negotiation
process.1
In Section 7 we will see that there is an important conceptual reason to require on path
strategic certainty. On path strategic certainty rules out a trivial form of delay, i.e., delay based
on uncertainty about how others resolve their indifferences.
The Main Theorem characterizes the set of outcomes consistent with forward induction rea-
soning under on path strategic certainty. As will become clear, there are outcomes ruled out by
these assumptions—that is, the result is not one of ‘anything goes.’ But, delay is consistent with
these assumptions.
Let us preview the mechanism for delay: The two Bargainers begin the game understanding
that the outcome will be an x : 1 − x split of the pie in period n; they each understand that
they each understand this, etc. This outcome involves inefficient delay. So, there is a mutually
beneficial offer to be made upfront. But, no Bargainer makes such a mutually beneficial offer.
The reason is that each Bargainer faces uncertainty about how the other Bargainer will react to
the unexpected. In particular, each Bargainer fears that, by making a ‘better than expected’ offer
upfront, the other Bargainer will become more optimistic about her future prospects and this will
cause the other Bargainer to reject the mutually beneficial offer—specifically, holding out for an
‘even better’ offer.
It is evident that this mechanism satisfies on path strategic certainty. The difficulty is in
understanding why it is consistent with forward induction reasoning. See Section 3.
Optimism and Delay The mechanism described above draws a connection between optimism
and delay: Bargainers do not make mutually beneficial offers because they fear that the act of
making such an offer will cause the other Bargainer to become more optimistic about his/her
future prospects. In Section TK, we will see that such optimism is intrinsically tied
This is not the first paper to draw a connection between optimism and delay. See, e.g., Farber
and Katz (1979); Shavell (1982); Yildiz (2004, 2011); Ortner (2013). Here, optimism reflects a
belief about the future offers the other Bargainer will accept/propose. By contrast, in the previous
literature, optimism reflects beliefs about other aspects of the bargaining problem, e.g., optimism
about outside options or optimism about the likelihood of making future offers.2
1 The idea that strategic uncertainty can arise after surprise moves in the negotiation process also appears in Stuart (2004).
2 Also note that, here, the driving force appears to be a notion of second-order optimism, i.e., Bargainers not deviating out of fear that it will cause the other Bargainer to become more optimistic. However, it should be emphasized that, as of now, there is no ‘proof’ that the driving force is such a second-order optimism.
Implications for Improving Efficiency Delays in reaching agreements are a source of eco-
nomic inefficiency. Strikes and holdouts are detrimental to both workers and firms. Legislative
stalemates have caused government shutdowns. Delays in renegotiating debt contracts can have
negative macroeconomic consequences. And so on. A first step to moving past these inefficiencies
is a better understanding of their cause.
Consider the case where the source of the inefficiency is uncertainty about surprise moves
in the negotiation process. This can be thought of as a case where Bargainers are trapped in
a situation with a ‘bad’ set of beliefs (about play of the game). There are mutually beneficial
outcomes. Indeed, if the Bargainers had a different set of beliefs, they would be able to obtain
such mutually beneficial outcomes. But, with their actual beliefs, they each fear making such
a mutually beneficial offer, uncertain how the other will react to the unexpected. One might
conjecture that a mediator can be particularly effective in such a situation—helping the parties
to overcome fears based on strategic uncertainty.
This is not to suggest that mediation is only helpful when the source of bargaining impasse is
strategic uncertainty. Rather, the claim is that identifying the source of impasse is an important
step toward implementing effective mediation. In particular, the source may influence the type
of mediation that would be most effective. For instance, if the primary source of inefficiency
is private information about a Bargainer’s valuation, the mediator would need to find ways to
overcome the informational asymmetry. On the other hand, if the primary source of inefficiency
is strategic uncertainty (in the sense put forward here), the mediator would need to find ways to
help the parties overcome fears of putting ‘good’ offers on the table.
The paper proceeds as follows. Section 1 describes the strategic situation, i.e., the bargaining game
and the Bargainers’ type structure. Section 2 formalizes the key epistemic conditions. Section 3
provides the main theorem, a characterization of the set of outcomes consistent with rationality,
forward induction reasoning and on path strategic certainty. The result is further explored in
Section 4. Section 5 explores implications for delay and Section 6 discusses comparative statics.
Section 7 revisits the assumption of on path strategic certainty.
1 The Description
The epistemic game describes the strategic situation. It consists of a Bargaining game, denoted
B, and a type structure, denoted T . The type structure describes the Bargainers’ strategic
uncertainty—it gives an implicit description of Bargainers’ hierarchies of conditional beliefs about
how the other bargainer plays the game. We now describe these two components.
Bargaining Game
The game is the canonical alternating offers Bargaining model of Stahl (1977); Rubinstein (1982):
Two bargainers, viz. B1 and B2, negotiate on how to split a pie [0, 1]. We will refer to B1 as ‘she’
and B2 as ‘he.’ Write i for a particular bargainer and −i for the other bargainer.
In each bargaining phase, some Bargainer i (henceforth, Bi) takes on the role of the proposer
and the other Bargainer (henceforth, B(−i)) takes on the role of the responder. In particular:
• Bi makes a proposal x ∈ [0, 1].
• B(−i) chooses to Accept (A) or Reject (R) the proposal.
– If B(−i) chooses A, the game is over: Bi gets x and B(−i) gets 1− x.
– If B(−i) chooses R, then a new bargaining phase begins. In the new bargaining phase
B(−i) is in the proposer’s role.
Period 1 begins with B1 in the proposer role; if the game does not conclude, Period 2 continues
with B2 in the proposer role, etc. The game can last for at most N ∈ N ∪ {∞} periods with
N ≥ 2. The game has a deadline if and only if N is finite.
An outcome of the game is some (x1, x2, n), where xi denotes Bi’s share of the pie and
n denotes the period in which x1 : x2 are determined. There are two types of outcomes: An
agreement outcome (x1, x2, n) = (y, 1 − y, n) is associated with a division of the pie, viz.
(x1, x2) = (y, 1 − y), and a period at which the Bargainers agree to the division, viz. n. A
disagreement outcome, viz. (x1, x2, N) = (0, 0, N), results if all offers are perpetually rejected.
At times we will refer to (x1, x2, n) as an n-period outcome.
Each Bargainer discounts the future; the discount factor is δ ∈ (0, 1). Thus, the utility function
for Bargainer i is given by Πi(x1, x2, n) = δ^(n−1) xi if n is finite and Πi(x1, x2, n) = 0 if n is infinite.
A history, viz. h, is a sequence of moves; a history can be identified with a node of the game
and so with an information set. We will say that h is an n-period history if it occurs in an nth bargaining phase. The Bargainer who moves at h either takes on the proposer’s role (and chooses an element of [0, 1]) or takes on the responder’s role (and chooses an element of {A, R}). Write H^P_i for the set of histories at which Bargainer i takes on the proposer’s role and H^R_i for the set of histories at which Bargainer i takes on the responder’s role. Then, Hi = H^P_i ∪ H^R_i is the set of histories at which i moves and H = H1 ∪ H2 is the set of non-terminal histories. Write Z for the set of terminal histories.
A strategy si maps each information set into a choice available at that information set, i.e., si : Hi → [0, 1] ∪ {A, R} with si(h) ∈ [0, 1] for h ∈ H^P_i and si(h) ∈ {A, R} for h ∈ H^R_i. Write Si for the set of strategies of Bi. Say a strategy si allows history h ∈ H ∪ Z if there is some strategy s−i so that the path induced by (si, s−i) passes through h. Write Si(h) for the set of strategies of Bi that allow information set h ∈ H ∪ Z.
Each strategy profile (s1, s2) ∈ S1×S2 induces a terminal history z ∈ Z. Write ζ : S1×S2 → Z
for the mapping from strategy profiles to terminal histories. Each terminal history z ∈ Z induces
an outcome; write ξ for the mapping from terminal histories to outcomes. Then, the strategic-form payoff function for Bargainer i is πi = Πi ∘ ξ ∘ ζ.
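As an illustration, the composition πi = Πi ∘ ξ ∘ ζ can be sketched in code. This is a toy finite version only: the deadline N = 3, the discount factor δ = 0.9, and the threshold strategies are all hypothetical, not part of the model above.

```python
# Toy sketch of ζ (strategy profile → terminal history), ξ (terminal
# history → outcome), and Π (outcome → discounted payoff). Histories are
# tuples of (proposal, response) rounds; while responding, the pending
# proposal sits at the end of the history as a bare number.

DELTA = 0.9   # discount factor δ (hypothetical value)
N = 3         # deadline (hypothetical value)

def zeta(s1, s2, N=N):
    """Play (s1, s2) forward; return the terminal history."""
    history = []
    for n in range(1, N + 1):
        proposer, responder = (s1, s2) if n % 2 == 1 else (s2, s1)
        x = proposer(tuple(history))          # proposer's own share x
        r = responder(tuple(history) + (x,))  # 'A' or 'R'
        history.append((x, r))
        if r == 'A':
            break
    return history

def xi(history):
    """Map a terminal history to an outcome (x1, x2, n)."""
    n = len(history)
    x, r = history[-1]
    if r != 'A':
        return (0.0, 0.0, n)                  # disagreement outcome
    return (x, 1 - x, n) if n % 2 == 1 else (1 - x, x, n)

def Pi(i, outcome):
    """Π_i(x1, x2, n) = δ^(n-1) · x_i."""
    x1, x2, n = outcome
    return DELTA ** (n - 1) * (x1 if i == 1 else x2)

def responding(h):
    """True if the mover at h is in the responder's role."""
    return bool(h) and not isinstance(h[-1], tuple)

# Hypothetical strategies: each Bargainer demands a fixed share when
# proposing and accepts any offer leaving her at least her threshold.
def s1(h):
    return ('A' if 1 - h[-1] >= 0.30 else 'R') if responding(h) else 0.6

def s2(h):
    return ('A' if 1 - h[-1] >= 0.35 else 'R') if responding(h) else 0.5
```

Here ζ plays the profile forward, ξ reads off the outcome (x1, x2, n), and Π applies the discount δ^(n−1); composing the three gives the strategic-form payoff.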
Note, there are two parameters of the Bargaining game B: the horizon N and the discount
factor δ. At times, we will want to emphasize that we are looking at a Bargaining game with
particular parameters N and δ. In that case, we will write B[N, δ].
Type Structure
The premise is that each Bargainer faces uncertainty about how the other plays the game. Thus,
we will want to specify Bi’s belief about S−i, i.e., about the strategy that B(−i) chooses. But,
note, Bi may be forced to revise her beliefs during the course of play. For instance, B2 may begin
the game assigning probability one to B1 offering x = 1/4 upfront, only to find her, instead, offering
a larger share x = 1/2. At that point, B2 will need to form a new assessment about B1’s future
play, e.g., about which future offers she will accept or reject. Thus, we will describe Bargainers as having
beliefs at each information set; this system should satisfy the rules of conditional probability when
possible.
We have just described a so-called first-order system of conditional beliefs, i.e., a system
of conditional beliefs about the play of the game. Whether a strategy is rational vs. irrational
may depend on the Bargainer’s first-order system of conditional beliefs. For instance, it may be
rational for B1 to offer a 1/2 : 1/2 split of the pie upfront, if she has certain beliefs about B2’s (current
and future) play; but it may be irrational to do so if she has other beliefs.
We will not only want to capture the idea that each Bargainer is rational, but the idea that
each Bargainer ‘thinks’ the other Bargainer is rational. To do so, we cannot simply specify a
belief about the other Bargainer’s play of the game—after all, whether a strategy is rational vs.
irrational for B2 may depend on what B2 believes about B1’s play of the game. Thus, we will need
to specify B1’s so-called second-order system of conditional beliefs, i.e., a system of conditional
beliefs about both B2’s play and B2’s belief about B1’s play. Continuing further along these lines,
we will want to specify B1’s hierarchy of conditional beliefs about B2’s play of the game.
We will implicitly describe the hierarchies of conditional beliefs by a type structure. The
type structure will follow Harsanyi (1967), modified to give beliefs at each information set. (The
modification was introduced in Ben-Porath, 1997; Battigalli and Siniscalchi, 1999.) We now
proceed to give the formal definition.
Some mathematical preliminaries will be of use: Fix a metric space Ω and the Borel sigma-
algebra thereof. Endow the product of metric spaces with the product topology, unless otherwise
explicitly stated. Write P(Ω) for the set of Borel probability measures on Ω. Endow P(Ω) with
the weak topology, so that it is again a metric space.
A conditional probability space is some (Ω, E), where each element of E is a Borel subset
of Ω. The collection E is a set of conditioning events. An array of conditional measures
on (Ω, E) is some µ = (µ(·|E) : E ∈ E) where, for each conditioning event E ∈ E , µ(·|E) ∈ P(Ω)
with µ(E|E) = 1. Thus, an array of conditional measures specifies a belief for each conditioning
event.
Write A(Ω, E) for the set of arrays of conditional measures on (Ω, E), or simply A(Ω) when E is clear from the context. Note, A(Ω) = ∏_{E∈E} {µ ∈ P(Ω) : µ(E) = 1} is a product of Borel sets. (See Aliprantis and Border, 2007, Lemma 15.16.) Endow A(Ω) with the product sigma-algebra.
Definition 1.1. A countable conditional probability system (CPS) on (Ω, E) is some
µ ∈ A(Ω, E) satisfying the following properties:
(i) For each E ∈ E, µ(·|E) ∈ P(Ω) has countable support;
(ii) for any Borel E and any F, G ∈ E with E ⊆ F ⊆ G, µ(E|G) = µ(E|F) µ(F|G).
An array of conditional measures provides one belief for each conditioning event. But, it need
not satisfy the rules of conditional probability. A countable CPS is sufficient to ensure that the
array of beliefs satisfies the rules of conditional probability when possible. Of course, an array may
satisfy the rules of conditional probability, but need not be a countable CPS. Write C(Ω, E) for
the set of countable conditional probability systems on (Ω, E). When the set E is clear from the
context, simply write C(Ω).
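To see the chain rule of Definition 1.1(ii) in action, here is a small numeric check on a three-point Ω; the measures below are hypothetical, chosen only so that the identities can be verified exactly.

```python
from fractions import Fraction as Fr

# A finite space Ω = {a, b, c} with two nested conditioning events
# F ⊆ G. The array mu below is a (finite-support) CPS: each µ(·|E)
# concentrates on E, and µ(·|F) is µ(·|G) renormalized on F.
G = frozenset({'a', 'b', 'c'})
F = frozenset({'a', 'b'})

mu = {
    G: {'a': Fr(1, 2), 'b': Fr(1, 4), 'c': Fr(1, 4)},
    F: {'a': Fr(2, 3), 'b': Fr(1, 3), 'c': Fr(0)},
}

def prob(cond, event):
    """µ(event | cond), computed from the array above."""
    return sum(mu[cond][w] for w in event)

# µ(E|E) = 1 for each conditioning event:
assert prob(G, G) == 1 and prob(F, F) == 1

# Chain rule µ(E|G) = µ(E|F) · µ(F|G), for E = {a} ⊆ F ⊆ G:
assert prob(G, {'a'}) == prob(F, {'a'}) * prob(G, F)
```

The exact-rational arithmetic makes the chain-rule identity hold with equality rather than up to rounding.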
We can now formally describe the Bargainers’ strategic uncertainty. Bi is uncertain about
which strategy B(−i) will choose. Thus, she will have a belief about S−i at the beginning of the
game and at each history at which she moves. Her belief at history h assigns probability one to
the history h being reached. Write
Si = {S−i(h) : h ∈ Hi ∪ {φ}},
where φ represents the initial history.3 The set Si will, in a sense, correspond to the set of Bi’s
conditioning events. So, at the beginning of the game, each Bi will have an initial hypothesis
corresponding to S−i(φ) = S−i. But, each Bi will also have a hypothesis at each subsequent
history hi at which she moves. Endow S−i with the uniform metric, so that each element of Si is
Borel. (See Lemma A.1.)
Definition 1.2. A B-based type structure is some T = (B;T1, T2;S1,S2;β1, β2) so that:
(i) Ti is a metrizable type set for Bi;
(ii) Si ⊗ T−i = {S−i(h) × T−i : h ∈ Hi ∪ {φ}} is the set of conditioning events for Bi;
(iii) βi : Ti → A(S−i × T−i;Si ⊗ T−i) is a measurable belief map for Bi.
A B-based conditional type structure is a B-based type structure T = (B; T1, T2; S1, S2; β1, β2) where, for each i = 1, 2, βi(Ti) ⊆ C(S−i × T−i; Si ⊗ T−i).
We will abuse notation and write βi,h(ti) for the measure βi(ti)(·|S−i(h) × T−i).

3 Of course, φ ∈ H1 but φ ∉ H2.
Note, a type structure induces hierarchies of conditional beliefs about the strategies played:
Each type ti has a system of conditional beliefs on the strategies and types of the other Bargainers,
viz. βi(ti). By marginalizing onto S−i, each type ti then has a system of first-order beliefs on the
strategies of the other Bargainer. Since each type t−i has such a system of first-order beliefs, this
induces each type ti’s system of second-order beliefs, i.e., on the strategies and first-order beliefs
of the other Bargainer. And so on.4
The object we are interested in lies in between a B-based type structure and a B-based condi-
tional type structure: On the one hand, a B-based type structure is too permissive—it may have
types that violate the rules of conditional probability. On the other hand, a B-based conditional
type structure is too restrictive—it needlessly imposes a requirement that arrays have countable
support. In Sections 3-4 we will see that, taken together, these two concepts allow us to capture
the behavioral implications of the object we are interested in.
Remark 1.1. A technical remark: Fix a B-based type structure T , where T may not be a conditional type structure. Consider histories (h, x) ∈ H^R_1 and (h, x, R) ∈ H^P_1, i.e., histories that differ only in that B1 rejected the proposal on the table. Note S2(h, x) × T2 = S2(h, x, R) × T2, i.e., there is a single conditioning event that corresponds to both histories (h, x) and (h, x, R). So, each type of B1 is constrained to have the same belief at each of these histories. This fact will be used in the analysis. □
Epistemic Game
An epistemic game is a pair (B, T ), where T is a B-based type structure. A conditional
epistemic game is an epistemic game (B, T ) where T is a B-based conditional type structure.
An epistemic game induces a set of states, viz. S1×T1×S2×T2. A state (s1, t1, s2, t2) describes
the Bargainers’ play, viz. s1 and s2, and the Bargainers’ beliefs, viz. β1(t1) and β2(t2).
2 Epistemic Conditions
Throughout this section, we will fix an epistemic game (B, T ). We will impose restrictions on the
set of states, which correspond to forward induction reasoning and on path strategic certainty.
We now review the basic ingredients.
Rationality The basic starting point will be to consider the requirement that each Bargainer
is rational. Recall, a strategy of Bi may be rational given some belief about B(−i)’s play and
irrational given some other belief. Since types specify beliefs (via the belief map), rationality is
a property of a strategy-type pair. The idea will be that (si, ti) is rational if si maximizes Bi’s
expected payoff under ti, at each history allowed by si.
4 This is, of course, an informal argument. A related formal argument appears in Battigalli and Siniscalchi (1999). However, that argument does not apply, as here the set of conditioning events is uncountable. We do not attempt to formalize this argument within the context of this paper.
Definition 2.1. Fix a strategy si and an array of conditional measures µi ∈ A(S−i;Si). Say si is
sequentially optimal under µi if, for each information set h ∈ Hi with si ∈ Si(h), the following
hold:
(i) πi(si, ·) : S−i → R is µi(·|S−i(h))-integrable and
(ii) if πi(ri, ·) : S−i → R is µi(·|S−i(h))-integrable for ri ∈ Si(h), then

∫_{S−i} πi(si, ·) dµi(·|S−i(h)) ≥ ∫_{S−i} πi(ri, ·) dµi(·|S−i(h)).
Fix a strategy si and a history h allowed by si. Condition (i) says that, at h, Bi must be able to evaluate her expected payoffs from si under µi. This requirement is automatically met if the support of µi(·|S−i(h)) is countable. (Example A.1 in the Appendix illustrates that the requirement may fail. See also Battigalli, 2003.) Condition (ii) says that, at h, si must maximize Bi’s expected payoffs under µi. This maximization is done relative to all strategies ri that allow h and whose expected payoffs Bi can evaluate under µi. In the case where the support of µi(·|S−i(h)) is countable, this is equivalent to maximizing expected payoffs relative to all strategies ri that allow h.
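Condition (ii) is, at bottom, an expected-payoff comparison at each history. A minimal numeric sketch follows; the offer, the belief, and the discount factor are all hypothetical, chosen only to make the comparison concrete.

```python
DELTA = 0.9   # hypothetical discount factor

# B2 has been offered a share of 0.4 in period 1. Her belief at this
# history (hypothetical, with countable support): rejecting leads, with
# probability 0.5, to an agreement giving her 0.7 in period 2, and, with
# probability 0.5, to perpetual disagreement (payoff 0).
belief_if_reject = [(0.5, DELTA * 0.7), (0.5, 0.0)]  # (prob, discounted payoff)

accept_payoff = 0.4                                   # δ^0 · 0.4, taken now
reject_payoff = sum(p * u for p, u in belief_if_reject)

# Sequential optimality at this history picks the better response:
best_response = 'A' if accept_payoff >= reject_payoff else 'R'
```

With these numbers, rejecting is worth 0.315 < 0.4, so Accept is the sequentially optimal response; a sufficiently more optimistic belief about future offers would reverse the comparison.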
Fix a pair (B, T ). For each µi ∈ A(S−i × T−i; Si ⊗ T−i), write marg_{S−i} µi for the marginal array of measures. This is the array of measures νi ∈ A(S−i; Si) with νi(·|S−i(h)) = marg_{S−i} µi(·|S−i(h) × T−i).
Definition 2.2. A strategy-type pair (si, ti) ∈ Si × Ti is rational if si is sequentially optimal
under the marginal array marg_{S−i} βi(ti).
Write Ri for the set of rational strategy-type pairs for i and R = R1 ×R2 for the set of states at
which each Bargainer is rational.
Rationality is a requirement about maximizing subjective conditional expected utility given
the Bargainer’s beliefs. It does not impose any requirements on what the Bargainer’s belief is.
The next step imposes restrictions on the Bargainer’s beliefs. There are two types of restrictions:
one that will arise from forward induction reasoning and the second that will arise from on path
strategic certainty.
Forward Induction Reasoning Forward induction goes back to Kohlberg (1981). It is the
idea that Bargainers rationalize past behavior whenever possible: If a type t1 of B1 rationalizes
B2’s past behavior when possible, the type should assign probability one to the event “B2 is
rational,” whenever she reaches a history consistent with this event. The key ingredient is the
concept of strong belief.
Definition 2.3 (Battigalli and Siniscalchi, 2002). A type ti strongly believes E−i ⊆ S−i × T−i if E−i is an event satisfying the following requirement: For each S−i(h) ∈ Si with E−i ∩ [S−i(h) × T−i] ≠ ∅, βi,h(ti)(E−i) = 1.
Write SB_i(E−i) = Si × {ti : ti strongly believes E−i}.5
Forward induction reasoning is then captured by the requirements of: strong belief of “rationality,” strong belief of “rationality and strong belief of ‘rationality,’ ” etc. More formally, set R^1_i = Ri. Inductively define sets R^m_i, for each m ∈ N+, so that R^{m+1}_i = R^m_i ∩ SB_i(R^m_{−i}). Set R^∞_i = ⋂_{m∈N+} R^m_i. The set R^m = R^m_1 × R^m_2 is the set of states at which there is rationality and (m−1)th-order strong belief of rationality. The set R^∞ = R^∞_1 × R^∞_2 is the set of states at which there is rationality and common strong belief of rationality (RCSBR). The set of states at which there is RCSBR corresponds to rationality plus forward induction reasoning.
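The iteration R^{m+1}_i = R^m_i ∩ SB_i(R^m_{−i}) can be run mechanically once strong belief is implemented. The sketch below does this on a tiny hand-built structure: strategies and types are labels, a type’s belief system maps each conditioning event (a set of opponent strategies) to a distribution over opponent (strategy, type) pairs, and the rationality sets Ri are supplied as data. Every label, belief, and rationality set here is hypothetical, chosen only to exhibit the shrinking fixed point.

```python
def strongly_believes(belief_system, event):
    """Finite version of Definition 2.3: every conditional belief whose
    conditioning event is consistent with `event` assigns it probability one."""
    for cond, dist in belief_system.items():
        if any(s in cond for (s, t) in event):   # E ∩ [S(h) × T] ≠ ∅
            mass = sum(p for pair, p in dist.items() if pair in event)
            if abs(mass - 1.0) > 1e-9:
                return False
    return True

def iterate_rcsbr(R1, R2, beliefs1, beliefs2):
    """Shrink (R1, R2) via R^{m+1}_i = R^m_i ∩ SB_i(R^m_{-i}) until a
    fixed point is reached."""
    while True:
        new1 = {(s, t) for (s, t) in R1 if strongly_believes(beliefs1[t], R2)}
        new2 = {(s, t) for (s, t) in R2 if strongly_believes(beliefs2[t], R1)}
        if (new1, new2) == (R1, R2):
            return R1, R2
        R1, R2 = new1, new2

# Hypothetical data. B2's type 'w' rationalizes the off-path strategy 'b'
# with the pair ('b', 'u'), which is not a rational pair, so 'w' fails
# strong belief of R1 and is eliminated in the first round.
R1 = {('a', 'u'), ('b', 'u2')}
R2 = {('c', 'v'), ('c', 'w')}

beliefs1 = {
    'u':  {frozenset({'c', 'd'}): {('c', 'v'): 1.0}},
    'u2': {frozenset({'c', 'd'}): {('c', 'v'): 1.0}},
}
beliefs2 = {
    'v': {frozenset({'a', 'b'}): {('a', 'u'): 1.0},
          frozenset({'b'}):      {('b', 'u2'): 1.0}},
    'w': {frozenset({'a', 'b'}): {('a', 'u'): 1.0},
          frozenset({'b'}):      {('b', 'u'): 1.0}},
}
```

Running `iterate_rcsbr` on this data eliminates ('c', 'w') and then stabilizes; the surviving sets are the finite analogue of R^∞.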
On Path Strategic Certainty On path strategic certainty is the idea that, along the path of
play, no Bargainer faces uncertainty about the terminal node that will be reached—even though,
along the path of play, the Bargainers may very well face uncertainty about the actual strategy
the other Bargainer employs.
The path of play is determined by a state. Specifically, a given state (s1, t1, s2, t2) induces a
path through the tree to a particular terminal node z. At (s1, t1, s2, t2), there is on path strategic
certainty if, at each history h along the path induced by that state, type t1 (resp. t2) assigns
probability one to the event that the terminal node z will be reached.
To formalize this idea, write Z−i[s1, s2] for the set {r−i : ζ(si, r−i) = ζ(si, s−i)} × T−i. The first factor is the set of strategies of B(−i) that induce the same terminal node as (si, s−i), viz. z = ζ(si, s−i), when Bi plays si. This set is closed. (See Corollary A.1.) At the state (s1, t1, s2, t2) there is on path strategic certainty if each ti strongly believes the event

Z−i[s1, s2] = {r−i : ζ(si, r−i) = ζ(si, s−i)} × T−i ⊆ S−i × T−i.
Write

C = ⋃_{(s1,s2)∈S1×S2} [({s1} × T1) ∩ SB1(Z2[s1, s2])] × [({s2} × T2) ∩ SB2(Z1[s1, s2])].

Then, C is the set of states at which there is on path strategic certainty.
If there is on path strategic certainty at (s1, t1, s2, t2), then at each history allowed by (s1, s2),
t1 and t2 have correct beliefs about the terminal node that will obtain. It is important to note
that, at histories precluded by (s1, s2), types t1 and t2 cannot have correct beliefs about the path
of play. (This is ‘by definition.’) And, in fact, they may very well have different beliefs about the
path.
At times, we will want to study the assumption that, at a state, Bargainers “reason” about
on path strategic certainty. For instance, at (s1, t1, s2, t2) ∈ C, t1 may strongly believe
{(s2, t2) : there is some (r1, u1, s2, t2) ∈ C},

5The fact that E−i is an event reflects the fact that it is Borel. If E−i ⊆ S−i × T−i is not Borel, then SBi(E−i) = ∅.
i.e., at each information set consistent with some state at which there is on path strategic certainty,
t1 assigns probability one to the event that B2 plays a strategy-type pair that is consistent with
on path strategic certainty.6 With this in mind, set C^1 = C and inductively define C^m so that

C^{m+1} = C^m ∩ [SB1(proj_{S2×T2} C^m) × SB2(proj_{S1×T1} C^m)].

Then the set C^∞ = ⋂_{m=1}^∞ C^m is the set of states at which there is on path strategic certainty and common strong belief of on path strategic certainty.
3 Characterization Theorem
Fix a Bargaining game, viz. B[N, δ]. This section characterizes the set of outcomes consistent with
rationality, forward induction reasoning, and on path strategic certainty, i.e., across all epistemic
games based on B[N, δ].
Theorem 3.1. Fix a Bargaining Game B[N, δ]. For each finite n ≤ N, there exists an interval [x_n, x̄_n] so that the following are equivalent:

(i) There is an epistemic game (B[N, δ], T ) and a state thereof (s∗1, t∗1, s∗2, t∗2) that induces the outcome (x∗1, x∗2, n) so that, at (s∗1, t∗1, s∗2, t∗2), there is rationality, common strong belief of rationality, and on path strategic certainty.

(ii) x∗1 ∈ [x_n, x̄_n].
In fact, we will show a stronger result than that presented above. There will be two halves of
the proof. The first is to show necessity, i.e., that part (i) implies that the outcome (x∗1, x∗2, n)
necessarily has x∗1 in the set specified by part (ii). For that, we will be able to weaken the premise
in part (i). We will assume that there is a state at which there is rationality, strong belief of
rationality and on path strategic certainty. We will show that then the outcome will necessarily
satisfy the requirement in part (ii). The second is to show sufficiency, i.e., that part (ii) implies
part (i). For that we will be able to strengthen the conclusion in part (i). Specifically, we will fix
some x∗1 ∈ [xn, xn] and show that we can construct a conditional epistemic game and an associated
state at which there is rationality, common strong belief of rationality, on path strategic certainty,
and common strong belief of on path strategic certainty.
It is important to note that there is no presumption that the sets [x_n, x̄_n] are necessarily non-empty. We will be able to calculate these sets based on parameters of the Bargaining game (that is, the deadline and discount factor); whether the set is empty versus non-empty will depend on
6This is one way to formalize reasoning about on path strategic certainty. As will become clear, the particular choice will not be important for our purposes.
these parameters plus the particular period n in question. Specifically, for each finite n ≤ N , take
x_n = max{(1 − δ)/δ^{n−1}, δ^{N−n}} if N < ∞ is odd, and x_n = (1 − δ)/δ^{n−1} otherwise;

and

x̄_n = min{1 − δ(1 − δ)/δ^{n−1}, 1 − δ^{N−n}} if N < ∞ is even, and x̄_n = 1 − δ(1 − δ)/δ^{n−1} otherwise.

Take x_∞ = 1 and x̄_∞ = 0.
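Since the endpoints above have closed forms, the intervals are easy to compute. The sketch below (Python chosen for illustration; the helper names `bounds` and `nonempty` are ours, not the paper's) implements the displayed formulas, with `math.inf` standing in for the no-deadline case:

```python
import math

def bounds(n, N, delta):
    """Endpoints (x_n, xbar_n) of the interval for B[N, delta].

    The deadline term delta**(N - n) enters the lower bound when N is finite
    and odd (B1 proposes last) and the upper bound when N is finite and even
    (B2 proposes last); otherwise only the upfront terms remain.
    """
    lo = (1 - delta) / delta ** (n - 1)               # B1's upfront constraint
    hi = 1 - delta * (1 - delta) / delta ** (n - 1)   # B2's upfront constraint
    if N != math.inf and N % 2 == 1:
        lo = max(lo, delta ** (N - n))                # B1's deadline constraint
    if N != math.inf and N % 2 == 0:
        hi = min(hi, 1 - delta ** (N - n))            # B2's deadline constraint
    return lo, hi

def nonempty(n, N, delta):
    lo, hi = bounds(n, N, delta)
    return lo <= hi
```

For instance, `bounds(2, 3, 0.7)` collapses to the single point δ = 0.7, while `nonempty(2, 2, delta)` is False for every δ ∈ (0, 1), matching the two-period discussion.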
Take the case of a two-period deadline, i.e., N = 2. If there were delay in this case, i.e., an outcome (x∗1, x∗2, 2), then x∗1 would be required to be in [x_2, x̄_2] = [(1 − δ)/δ, 0], an impossibility. Thus, in the case of a two-period deadline, there cannot be delay (under the assumptions of Theorem 3.1). Let us understand why:
Two-Period Deadline: No Delay Fix an epistemic Bargaining game and a state (s∗1, t∗1, s∗2, t∗2) at which there is rationality, strong belief of rationality, and on path strategic certainty. Suppose, at the state, there is delay in reaching agreement, i.e., in the second period, B1 and B2 get shares x∗1 and x∗2 of the pie. We will show that this cannot be the case.
Consider the path of play induced by (s∗1, s∗2). Along the path, there is an information set h
at which B2 proposes. At that information set, B2 can continue to maintain the hypothesis that
B1 is rational. (The state allows h and is consistent with the event ‘B1 is rational.’) Thus, when
B2 proposes at h, t∗2 must assign probability one to the event ‘B1 accepts any offer y < 1.’ Since,
at this state, B2 is rational, it follows that s∗2 must offer y = 1 at h. Thus, irrespective of whether
s∗1 accepts or rejects the offer, B1 gets 0 at this state and, so, by on path strategic certainty, t∗1 must expect to get 0 at the initial node. But, t∗1 also strongly believes the event ‘B2 is rational.’
At the initial node, t∗1 can maintain a hypothesis that ‘B2 is rational’ and so, at the initial node,
t∗1 must assign probability one to the event ‘B2 accepts any offer x < 1− δ.’ (Such an offer would
give B2 a share 1− x > δ, which is better than any possible future agreement.) By offering some
x ∈ (0, 1− δ), t∗1 can improve her expected payoff over s∗1.
Thus, we conclude that: When bargaining with a two-period deadline, the Bargainers must agree
immediately, under the assumptions of rationality, strong belief of rationality and on path strategic
certainty. Now we turn to bargaining with a three-period deadline.
Three-Period Deadline: Limits on Delay Fix an epistemic Bargaining game and a state (s∗1, t∗1, s∗2, t∗2) at which there is rationality, strong belief of rationality, and on path strategic certainty.
Suppose, at the state, there is delay in reaching agreement. Repeating the argument for the
two-period deadline, we can conclude that, at this state, the players must agree in the second
period, i.e., the outcome is (x∗1, x∗2, 2). We will show that there are limits on this sort of delay; in
particular, x∗1 = δ.
Note, along the path of play induced by (s∗1, s∗2), there is an information set h at which B2
proposes x∗2 and subsequently the offer is accepted. By on path strategic certainty, when B2
proposes x∗2, the type t∗2 believes B1 will accept the offer, i.e., expects his payoffs to be δx∗2.
Moreover, since t∗2 strongly believes B1 is rational and h is consistent with the event that she is
rational, t∗2 must assign probability one to the event ‘B1 accepts any offer y < 1 − δ.’ (Such an
offer would give B1 a share 1−y > δ, which is larger than the discounted third-period pie.) Thus,
by rationality, δx∗2 ≥ δy for all y < 1 − δ. From this it follows that δ ≥ x∗1.

Now turn to B1. At h, B2 makes the offer of x∗2. At that point, B1 can continue to maintain
the hypothesis that B2 is rational. So, using the fact that t∗1 strongly believes that B2 is rational,
t∗1 must believe that B2 will accept any third period offer z < 1. Since, at the state, B1 is rational,
δx∗1 ≥ δ²z for all z < 1, i.e., x∗1 ≥ δ.

In sum, we have two requirements:
• B2 does not have an incentive to make a particular offer upfront, i.e., the first time he can
make an offer.
• B1 does not have an incentive to wait for the deadline.
Taken together, these two requirements say that x∗1 = δ.
There are two things to take note of in the above argument. First, we used on path strategic certainty to conclude that B2 must expect his offer x∗2 to be accepted by B1. We could alternatively obtain this conclusion by assuming that B2 is rational and strongly believes “B1 is rational and strongly believes I am rational.” We will discuss this in Section 7. (See Proposition 7.1.)
Second, the above requirements only provide part of the picture—they are necessary requirements. Notice, for instance, they are silent about requiring that B1 not have an incentive to make an alternate offer upfront. And, indeed, she might: By on path strategic certainty, at the start of the game, t∗1 anticipates her payoffs will be δ². At that point, she also continues to maintain the hypothesis that B2 is rational and, so, she anticipates that, if she makes an offer of x < 1 − δ upfront, B2 will accept the offer. (In that case, B2 gets 1 − x > δ, which is more than tomorrow’s discounted pie.) If t∗1 is to wait for an agreement in the second period, δ² ≥ 1 − δ.

In light of the above, there is a third requirement:
• B1 does not have an incentive to make a particular offer upfront, i.e., the first time she can
make an offer.
This is a requirement that the discount factor must be sufficiently high to make waiting profitable.
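Under the stated assumptions, the three requirements for the three-period game can be collected in a few lines (a sketch; the function name is ours):

```python
def three_period_delay(delta):
    """N = 3: a delayed agreement (x1, x2, 2) must have x1 = delta, since B2's
    upfront requirement (x1 <= delta) meets B1's deadline requirement
    (x1 >= delta); it exists only if B1 prefers waiting to an acceptable
    upfront deal, i.e. delta**2 >= 1 - delta."""
    if delta ** 2 >= 1 - delta:
        return (delta, 1 - delta, 2)
    return None

# The boundary case solves delta**2 = 1 - delta:
cutoff = (5 ** 0.5 - 1) / 2   # roughly 0.618
```

So delay to period 2 with a three-period deadline pins the split at δ : 1 − δ and requires a discount factor of at least roughly 0.618.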
Three-Period Deadline: Possibilities for Delay Suppose that δ² ≥ 1 − δ. We will focus on a particular strategy profile (s∗1, s∗2) that results in agreeing on a δ : 1 − δ split in the second period. We will informally argue that we can construct a type structure so that there is a state
at which there is rationality, forward induction reasoning, on path strategic certainty, and (s∗1, s∗2)
is played. The strategy profile (s∗1, s∗2) has the following features:
B1’s Strategy, s∗1: At the initial node, B1 offers to take the full pie for herself, i.e., x = 1.
If this offer is rejected and, subsequently, B2 offers to take y for himself, B1 Accepts if and
only if y ≤ 1− δ. In the third period, B1 offers to take the full pie for herself, i.e., z = 1,
irrespective of history.
B2’s Strategy, s∗2: If, at the initial node, B1 offers to take x for herself, B2 Accepts if and only if x < 1 − δ. If B1’s initial offer was x = 1, B2 Rejects and subsequently makes an offer to take y = 1 − δ for himself. If B1’s initial offer was x ∈ [1 − δ, 1), B2 Rejects and subsequently makes an offer to take the full pie for himself, i.e., y = 1. At each third-period information set, B2 Accepts an offer of z if and only if z ≠ 1, irrespective of history.
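To check that this profile indeed produces the δ : 1 − δ split in period 2, the play can be traced mechanically. The sketch below hard-codes the two strategies for the three-period game (our own encoding; a demand names the proposer's own share):

```python
DELTA = 0.7  # any delta with DELTA**2 >= 1 - DELTA works

def s1_demand(period):
    return 1.0  # B1 always demands the full pie when proposing

def s1_accepts(y):
    # Facing B2's period-2 demand of y for himself, B1 Accepts iff y <= 1 - DELTA.
    return y <= 1 - DELTA

def s2_accepts(x):
    # Facing B1's period-1 demand of x, B2 Accepts iff x < 1 - DELTA.
    return x < 1 - DELTA

def s2_demand(rejected_x):
    # B2 demands 1 - DELTA after the initial demand x = 1, and the full pie otherwise.
    return 1 - DELTA if rejected_x == 1.0 else 1.0

def play():
    x = s1_demand(1)                  # period 1
    if s2_accepts(x):
        return (x, 1 - x, 1)
    y = s2_demand(x)                  # period 2
    if s1_accepts(y):
        return (1 - y, y, 2)          # B1 gets 1 - y = DELTA
    return None                       # period 3: not reached under (s1*, s2*)
```

Running `play()` yields agreement in period 2, with B1 receiving DELTA.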
The idea will be to construct a type structure with one type for each Bargainer, viz. T1 = {t∗1} and T2 = {t∗2}. The belief of type t∗1 (resp. t∗2) will assign probability one to (s∗2, t∗2) (resp. (s∗1, t∗1)), at any information set allowed by s∗2 (resp. s∗1). Thus, there will be on path strategic certainty at the state (s∗1, t∗1, s∗2, t∗2).
At an information set h inconsistent with s∗2 (resp. s∗1), t∗1 (resp. t∗2) will assign probability
one to a so-called “accommodating strategy” of B2 (resp. B1). B2’s accommodating strategy is
one that allows h and subsequently accommodates B1 by accepting any third period offer. B1’s accommodating strategy is one that allows h and subsequently accommodates B2 by accepting any offer (if allowed by h) and proposing to take a zero share of the pie (i.e., z = 0).
Notice, at (s∗1, t∗1, s∗2, t∗2), each Bargainer is rational. This may seem peculiar, at first: For instance, take δ = .7. At the start of the game, t∗1 and t∗2 expect payoffs of .49 and .21, respectively. Thus, they would both strictly prefer to accept a .6 : .4 split in the first period. But offering
x = .6 upfront would give t∗1 a strictly lower expected payoff. Type t∗1 expects that if she were to
make this offer, then B2 would respond by rejecting the offer and proposing to take the full pie.
And, indeed, type t∗1 would be correct about this presumption; at (s∗1, t∗1, s∗2, t∗2), B2 would reject
such a mutually beneficial offer.
While B2 rejects such a mutually beneficial offer at this state, B2 is nonetheless rational at this
state. In particular, when such a beneficial offer is made, t∗2 is forced to update his belief; now t∗2 expects B1 to accept an offer that gives B2 the full pie. More loosely, when such a beneficial offer
is made, t∗2 becomes more optimistic about his future prospects and so accepting what appears
to be a ‘better offer’ is not a best response for t∗2.
How can t∗2’s belief be consistent with forward induction reasoning—after all, conditional upon
a mutually beneficial offer being made, he thinks that B1 will accept a zero-share of the pie? The
key is that (under the construction) when B1 offers, say, x = .6 upfront, B2 must maintain a
hypothesis that B1 is irrational: At the initial node, every type of B1 believes that B2 rejects
such a mutually beneficial offer and responds by offering a zero-share of the pie (to B1). Thus,
conditional upon B1 making such an offer, B2 must believe that B1 has not maximized her
expected payoffs. At this point, he may reason that B1 will again fail to maximize her expected
payoffs in the future.7
4 Revisiting the Characterization Theorem
The Characterization Theorem can be seen as a consequence of two Propositions: one showing
necessity and the second showing sufficiency. Each of these Propositions will provide an important
strengthening of one half of the Characterization Theorem.
4.1 Necessity
Here we establish that the behavioral predictions of rationality, strong belief of rationality, and on path strategic certainty must be contained in some set [x_n, x̄_n].

Proposition 4.1. Fix an epistemic game (B[N, δ], T ) and a state

(s∗1, t∗1, s∗2, t∗2) ∈ R ∩ [SB1(R2) × SB2(R1)] ∩ C.

The outcome induced by (s∗1, s∗2) is (x∗1, x∗2, n), where x∗1 ∈ [x_n, x̄_n].
We now provide the intuition for the result. To do so, fix a state at which there is rationality,
strong belief of rationality and on path strategic certainty. Suppose, at this state, the Bargainers
agree in period n on an x∗1 : x∗2 split of the pie. Under our epistemic assumptions, there are two
constraints: an upfront constraint and a deadline constraint.
Upfront Constraint At the start of the game, the Bargainers anticipate that they will be
able to reach agreement on a x∗1 : x∗2 split in period n. (This is by on path strategic certainty.)
Upfront, they also each believe that the other Bargainer is rational. So, upfront, they each
anticipate the other Bargainer will accept any upfront offer that gives the other Bargainer more
than the discounted total pie, i.e., more than δ. Thus, each Bargainer must prefer an x∗1 : x∗2 split
in period n to making an offer that gives the other Bargainer a δ share of the pie upfront.
Note, the idea of an “upfront offer” is implemented differently for the two Bargainers. For B1,
an upfront offer involves an offer in the first bargaining phase. For B2, an upfront offer involves
an offer in the second bargaining phase. (In the case of immediate agreement, B2 is certain of the
offer she accepts and, at the same time—by strong belief of rationality—maintains a hypothesis
7Of course, there exists a different epistemic game where, conditional upon B1 making such a mutually beneficial offer upfront, B2 updates his belief in a way that allows B2 to conclude that B1 will maximize her expected payoffs in the future. That structure will have different implications for forward induction reasoning.
that she can induce B1 to accept a sufficiently good offer in period 2.) As such, the upfront constraint requires x∗1 ≥ (1 − δ)/δ^{n−1} and x∗2 ≥ δ(1 − δ)/δ^{n−1}.
Deadline Constraint Take the case of a deadline N < ∞ and suppose Bi is the proposer in
the last bargaining phase N. There is an n-period history at which either (i) Bi accepts a x∗1 : x∗2 split, or (ii) Bi proposes a x∗1 : x∗2 split. In either case, at the given n-period history, he expects
the outcome to be a x∗1 : x∗2 split. (This is by on path strategic certainty.) Note, at that point,
he continues to maintain the hypothesis that the other Bargainer is rational. Thus, he maintains
the hypothesis that, if the final bargaining phase is reached, the other Bargainer will accept any
strictly positive share of the pie. Thus, Bi must prefer a x∗1 : x∗2 split in period n to waiting for
(essentially) the full pie in period N , i.e., x∗i ≥ δN−n. This is Bi’s deadline constraint.
Of course, for any given bargaining game, there is at most one Bargainer for which the deadline
constraint is active, i.e., the Bargainer who proposes in the final bargaining phase. If Bi is the
proposer in the final bargaining phase, we will say that Bi has deadline bargaining power.
Note, carefully, the deadline bargaining power of i does not arise because, under our epistemic
assumptions, Bi will get the full pie in the final period. (If our epistemic assumptions are met,
the final period will never be reached. If they fail, Bi need not get the full pie in the final period.)
Instead, it arises because, under our epistemic assumptions, at the point of reaching an agreement,
Bi anticipates that he would be able to obtain the full pie in the final period. Of course, if an
agreement is not reached, Bi may very well rethink such an assessment.
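Both constraints are simple inequalities, so a candidate agreement can be screened directly (a sketch with our own naming; `math.inf` drops the deadline constraint):

```python
import math

def satisfies_constraints(x1, n, N, delta):
    """Screen an agreement (x1, 1 - x1, n) in B[N, delta] against the upfront
    and deadline constraints. The deadline constraint binds only the final
    proposer: B1 if N is odd, B2 if N is even."""
    x2 = 1 - x1
    disc = delta ** (n - 1)                # period-n pie in period-1 terms
    if disc * x1 < 1 - delta:              # B1 could offer B2 delta upfront
        return False
    if disc * x2 < delta * (1 - delta):    # B2 could do so in the second phase
        return False
    if N != math.inf:
        if N % 2 == 1 and disc * x1 < delta ** (N - 1):   # B1 could wait for the pie
            return False
        if N % 2 == 0 and disc * x2 < delta ** (N - 1):   # B2 could wait for the pie
            return False
    return True
```

With N = 3, n = 2, and δ = 0.7, only x1 = δ survives: a smaller share violates B1's deadline constraint, a larger one violates B2's upfront constraint.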
4.2 Sufficiency
Here we establish that x∗ ∈ [x_n, x̄_n] is sufficient to guarantee that (x∗, 1 − x∗, n) is an outcome
consistent with rationality, common strong belief of rationality, on path strategic certainty and
common strong belief of on path strategic certainty, in a conditional epistemic game.
Proposition 4.2. Fix some n ≤ N and some x∗ ∈ [x_n, x̄_n]. There is a conditional epistemic game, viz. (B, T ), and a state (s∗1, t∗1, s∗2, t∗2) thereof, so that

(i) (s∗1, t∗1, s∗2, t∗2) induces the outcome (x∗1, x∗2, n) = (x∗, 1 − x∗, n), and

(ii) (s∗1, t∗1, s∗2, t∗2) ∈ R∞ ∩ C∞.
Throughout the exposition, we fix some finite time period n ≤ N and some x∗ ∈ [x_n, x̄_n]. We begin by constructing particular strategies s∗1 and s∗2, so that (s∗1, s∗2) induces the outcome (x∗1, x∗2, n) = (x∗, 1 − x∗, n). To do so, it will be convenient to fix a particular history h∗ ∈ H^P_1 ∪ H^P_2. If n = 1, h∗ = φ. If n ≥ 2, h∗ = (1, R, . . . , 1, R), i.e., there are (n − 1) offers of 1 followed by (n − 1) rejections.
The strategy s∗i satisfies the following properties. For any history h ∈ H^P_i, set
• s∗i(h) = x∗i if h = h∗, and
• s∗i(h) = 1 if h ≠ h∗.
For any history h ∈ H^P_−i, set s∗i(h, x) = A if and only if one of the following hold:
• x < 1 − δ,
• (h, x) = (h∗, x∗−i), or
• h is an N-period history and x < 1.
Note, the strategy profile (s∗1, s∗2) induces each Bargainer to propose 1, reject, propose 1, reject,
etc., up until the nth-bargaining phase. In the nth-bargaining phase, the Proposer makes an offer
which is accepted. This offer is x∗1 = x∗ if B1 is the Proposer and x∗2 = 1−x∗ if B2 is the Proposer.
In either case, in the nth-bargaining phase, the Bargainers come to an agreement, with B1 getting
x∗ and B2 getting 1 − x∗.

The construction of the epistemic game will be analogous to the example with a three-period deadline (pages 15-16): The idea will be that Bi begins the game with a hypothesis that B(−i) plays the strategy s∗−i. If Bi observes a deviation from this behavior, she/he updates this belief and subsequently believes B(−i) will act in an accommodating manner. Thus, it will be useful to
have the concept of the accommodating strategy. The accommodating strategy for Bargainer i, written ai, is a strategy so that ai(h) = 0 for all h ∈ H^P_i and ai(h, x) = A for all h ∈ H^P_−i. The h-accommodating strategy for Bargainer i, written a^h_i, is a strategy that allows h but otherwise agrees with the accommodating strategy.
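The constructed profile can be simulated for any target (x∗, n). The sketch below encodes the three acceptance clauses and the target history h∗ (our own history encoding and floating-point tolerance):

```python
def play_construction(n, x_star, delta, N):
    """Trace (s1*, s2*): demands of 1 are rejected until phase n, where the
    proposer demands her target share and the responder accepts via the
    (h, x) = (h*, x*_{-i}) clause. B1 proposes in odd phases."""
    def target(i):
        return x_star if i == 1 else 1 - x_star
    history = []                       # rejected demands so far
    on_target_path = True              # h = h* iff all past demands were 1
    for period in range(1, N + 1):
        proposer = 1 if period % 2 == 1 else 2
        at_hstar = on_target_path and len(history) == n - 1
        x = target(proposer) if at_hstar else 1.0        # s_i*(h)
        accepts = (x < 1 - delta                          # always-accept clause
                   or (at_hstar and abs(x - target(proposer)) < 1e-12)
                   or (period == N and x < 1))            # deadline clause
        if accepts:
            share1 = x if proposer == 1 else 1 - x
            return (share1, 1 - share1, period)
        on_target_path = on_target_path and x == 1.0
        history.append(x)
    return None                        # perpetual disagreement (not reached here)
```

For any n ≤ N, `play_construction(n, x_star, delta, N)` returns the outcome (x∗, 1 − x∗, n), as the construction requires.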
Now, construct

T = (B; T1, T2; S1, S2; β1, β2),

where, for each Bi, Ti = {t∗i} and
• βi,h(t∗i)(s∗−i, t∗−i) = 1, if s∗−i ∈ S−i(h), and
• βi,h(t∗i)(a^h_−i, t∗−i) = 1, if s∗−i ∉ S−i(h).
Begin with some observations:
Observation 4.1.
(i) The epistemic game (B, T ) is a conditional epistemic game.
(ii) At the state (s∗1, t∗1, s∗2, t∗2), there is on path strategic certainty and common strong belief of on path strategic certainty.
Part (i) follows from the following fact: If S−i(h′) × T−i ⊆ S−i(h) × T−i and a^h_−i ∈ S−i(h′), then the h′-accommodating strategy a^{h′}_−i is the h-accommodating strategy a^h_−i. Part (ii) follows immediately from the construction.
To show Proposition 4.2 it suffices to show that, at (s∗1, t∗1, s∗2, t∗2), there is RCSBR. The key is
the following Lemma:
Lemma 4.1.
(i) For each i, (s∗i , t∗i ) is rational.
(ii) For each i and h ∈ H1 ∪ H2, Ri ∩ [Si(h) × Ti] ≠ ∅ implies s∗i ∈ Si(h).
The proof of Lemma 4.1 can be found in the Appendix.
Proof of Proposition 4.2. By Lemma 4.1(i), (s∗1, t∗1, s∗2, t∗2) ∈ R1 × R2. Fix an information set h ∈ Hi ∪ {φ} with R−i ∩ [S−i(h) × T−i] ≠ ∅. By Lemma 4.1(ii), s∗−i ∈ S−i(h). So, by construction, t∗i strongly believes the event R^1_−i. This delivers that R^2_i = R^1_i. Proceeding inductively, R^m_i = R^1_i for all m, and so (s∗1, t∗1, s∗2, t∗2) ∈ R^m_1 × R^m_2 for each m.
Remark 4.1. The construction displays the following no indifference property: Fix an information set h along the path of play. If ri ∈ Si(h) and ri is a best response at h under marg_{S−i} βi,h(t∗i), then ri(h) = s∗i(h). More loosely, along the path of play, no Bargainer is indifferent between any two actions.

Lemma C.1 shows that a stronger no indifference property holds: If t∗i does not have a uniquely optimal action at some h ∈ Hi allowed by s∗i, then h is a history at which Bi is in the receiver role and has received an offer of 1 − δ. (This cannot happen along the path of play induced by (s∗1, s∗2).) We constructed s∗i to reject such an offer. This choice was made to simplify the construction; the choice was not instrumental to the conclusion.8 □
5 Implications for Delay
Section 3 fixed a bargaining game and provided a Characterization Theorem: a characterization of
the set of outcomes consistent with rationality, forward induction reasoning, and on path strategic
certainty (across all associated epistemic games). This section uses the Characterization Theorem
to point to both the possibilities for and bounds on delay.
We begin by pointing out that there are limits to delay. A preliminary observation:
Observation 5.1. Fix a Bargaining game B[N, δ]. There exists some finite n(δ) with N ≥ n(δ) ≥ 2 so that [x_n, x̄_n] = ∅ if and only if n ≥ n(δ).
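The cutoff n(δ) can be computed by scanning n against the closed-form endpoints (a sketch; the helper names are ours, and the no-deadline scan is capped arbitrarily):

```python
import math

def bounds(n, N, delta):
    # Closed-form endpoints of [x_n, xbar_n] for B[N, delta].
    lo = (1 - delta) / delta ** (n - 1)
    hi = 1 - delta * (1 - delta) / delta ** (n - 1)
    if N != math.inf and N % 2 == 1:
        lo = max(lo, delta ** (N - n))
    if N != math.inf and N % 2 == 0:
        hi = min(hi, 1 - delta ** (N - n))
    return lo, hi

def n_cutoff(N, delta, cap=500):
    """Smallest n with [x_n, xbar_n] empty."""
    last = N if N != math.inf else cap
    for n in range(1, last + 1):
        lo, hi = bounds(n, N, delta)
        if lo > hi:
            return n
    return None
```

For example, with δ = 0.9 the two-period game has cutoff 2 (no delay at all), while the no-deadline game permits delay up to period 16.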
Fix a Bargaining game. If there is rationality, strong belief of rationality, and on path strategic
certainty, there are bounds on delay that are determined by the given deadline (if there is one)
and the discount factor. The next proposition establishes that, in the case of a deadline, there
are bounds on delay that exist independent of the discount factor.
Proposition 5.1. Fix a Bargaining game B[N, δ], with a deadline N .
8Put differently: We could have chosen s∗i to accept such an offer. But, then, conditional upon the offer being rejected, B(−i) would be able to maintain a hypothesis that Bi is rational. Thus, constructing a state at which there is RCSBR would have involved a significantly more delicate argument. A proof is available upon request.
(i) [x_N, x̄_N] = ∅.

(ii) If [x_{N−1}, x̄_{N−1}] ≠ ∅, then N ≤ 3.
To understand part (i), refer to the case of a two-period deadline. If there is rationality, strong
belief of rationality, and on path strategic certainty, then the Bargainers reach immediate agree-
ment. Taken together, Propositions 4.1 and 5.1(i) provide an analogue for the case of an N -period
bargaining game. At any state at which there is rationality, strong belief of rationality, and on
path strategic certainty, the Bargainers agree prior to the deadline.
To understand part (ii), refer to the case of a three-period deadline. Fix a state at which
there is rationality, strong belief of rationality, and on path strategic certainty. Suppose, at that
state, there is delay until the penultimate period, viz. n = N − 1 = 2. B1’s deadline constraint
implies that, at that state, she must get at least a δ share of the pie; when the Bargainers agree
on (x∗1, x∗2, 2), B1 maintains the hypothesis that B2 is rational, and so she thinks she can get the
full share of the pie in the final period. B2’s upfront constraint implies that, at the state, he must
get at least a 1− δ share of the pie (and so B1’s share can be at most δ); when he makes a second
period proposal of x∗2, he maintains the hypothesis that B1 is rational and so B1 will accept any
offer of x ∈ [0, 1− δ). Put together, this constrains x∗1 = δ.
Now suppose the deadline is N ≥ 4 and Bi is the proposer in the last period. Fix a state at
which there is rationality, strong belief of rationality, and on path strategic certainty. Suppose,
at that state, there is delay until the penultimate period, viz. n = N − 1 ≥ 2. Bi’s deadline
constraint continues to imply that, at the state, she must get at least a δ share of the pie in
n = N − 1. But, we will argue, B(−i)’s upfront constraint implies that he must get strictly more than 1 − δ. Thus, the two Bargainers cannot come to an agreement in period n = N − 1.
To see that B(−i)’s upfront constraint implies that he must get strictly more than 1−δ: Since
N ≥ 4, there must be some earlier period, viz. k ≤ N − 3, at which B(−i) is in the proposer role.
In the kth bargaining phase that occurs along the path of play, B(−i) can continue to maintain
the hypothesis that Bi is rational. So, there, B(−i) thinks that Bi will accept any share of the pie
that gives her, i.e., Bi, strictly more than δ. Thus, B(−i) can secure 1− δ upfront. This is worth
strictly more than 1 − δ in period n = N − 1. By on path strategic certainty, B(−i) correctly
anticipates his n = N − 1 period share of the pie. Since he is rational and willing to wait, it must
be the case that, in period n = N − 1, he gets strictly more than a 1− δ share of the pie.
Nonetheless, for each n ≤ N −2, there is the possibility for delay, provided the discount factor
is sufficiently large. More precisely:
Proposition 5.2. Fix n with N − 2 ≥ n ≥ 2. There exists δ[N, n] ∈ (1/2, 1) so that [x_n, x̄_n] ≠ ∅ if and only if δ ≥ δ[N, n].
Taken together, Propositions 4.2 and 5.2 give the following: Fix some N-period deadline and some period n with N − 2 ≥ n ≥ 2. There exists a conditional epistemic game (B[N, δ], T )
and a state at which there is rationality, common strong belief of rationality, on path strategic
certainty, common strong belief of on path strategic certainty, and agreement in the nth period.
Implicit in this last statement is a requirement that the discount factor δ be sufficiently high—specifically, ‘sufficiently high’ requires that the discount factor is greater than 1/2. It is important to note that this still allows for delay when the discount factor is ‘far’ from 1. For instance, in the case of delay until the second period (n = 2), it suffices for the discount factor to be at least √5/2 − 1/2 (irrespective of the deadline).
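The n = 2 threshold is the positive root of δ² + δ − 1, namely √5/2 − 1/2 ≈ 0.618. A quick check (our own naming; with no deadline the binding comparison is (1 − δ)/δ ≤ δ):

```python
THRESHOLD = 5 ** 0.5 / 2 - 0.5   # ~0.618, solves (1 - d) / d = d

def delay_to_period_2_possible(delta):
    """x_2 <= xbar_2 with no deadline reduces to (1 - delta)/delta <= delta."""
    return (1 - delta) / delta <= delta
```

Discount factors just above 0.618 already permit delay to the second period; 0.6, for instance, does not.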
It is of interest to note that, for a given deadline N , we can choose a sufficiently high discount
factor δ so that the above holds for all n ≤ N − 2. Specifically:
Remark 5.1. Fix some N. There exists δ[N ] ∈ (1/2, 1) so that, for each δ ≥ δ[N ] and n ≤ N − 2, [x_n, x̄_n] ≠ ∅.
6 Comparative Statics
How does changing the exogenous parameters of the bargaining game change the resulting be-
havioral predictions? When the description of the strategic situation is given by an epistemic
game, there are (at least) two ways to ask this question. We will refer to the first as a within
comparative static analysis and refer to the second as an across comparative static analysis.
Within Comparative Statics
The idea here is to analyze comparative statics within an epistemic game. It begins with an
epistemic game and changes an exogenous parameter holding all other features of the epistemic
game fixed—including the type structure. It applies the epistemic conditions to the new and old
epistemic game and asks how the behavioral predictions have changed.
It is important to note that this sort of comparative statics exercise may not be meaningful.
Consider increasing the deadline N . This necessitates changing the type structure, since the
information sets of the game change. There is no unique or obvious way to map the original type
structure to the new type structure. That is, by changing the deadline, it is impossible to identify
a single description of the strategic situation.
By contrast, this sort of comparative statics exercise is meaningful when we vary the dis-
count factor—varying the discount factor changes the payoff function and does not influence the
type structure. However—under the assumptions of rationality, forward induction reasoning and
on path strategic certainty—the analysis does not yield qualitatively interesting answers. The
following example makes this point.
Example 6.1. Consider the case of a three-period deadline. Fix discount factors δ∗∗∗ > δ∗∗ > δ∗, so that 1 − δ∗ − (δ∗)² > 0 > 1 − δ∗∗ − (δ∗∗)². By Proposition 4.2, there exists a conditional epistemic game (B[N, δ∗∗], T ) and a state thereof, viz. (s1, t1, s2, t2), at which there is RCSBR, on path strategic certainty, and the outcome is (δ∗∗, 1 − δ∗∗, 2). But, referring to the analysis on pages 15-16, in each of the epistemic games (B[N, δ∗], T ) and (B[N, δ∗∗∗], T ), there is no state at which there is RCSBR and on path strategic certainty. Thus, when we increase the discount factor from δ∗ to δ∗∗ we strictly increase the set of predictions, and when we increase the discount factor from δ∗∗ to δ∗∗∗ we strictly decrease the set of predictions. □
Across Comparative Statics
The idea here is to analyze comparative statics across epistemic games. It begins with a game
and changes an exogenous parameter holding all other features of the game fixed. It then applies
the epistemic conditions to the class of all epistemic games associated with the new and old games
and asks how the behavioral predictions have changed. Note, now, the behavioral predictions are
not defined within a given epistemic game, but across a class of epistemic games.
To perform this exercise, we will make use of the Characterization Theorem—it provides the
behavioral predictions of RCSBR and on path strategic certainty, across all epistemic (or conditional epistemic) games. The predictions, given by the sets [x_n, x̄_n], depend on the parameters of the bargaining game, viz. N and δ. Thus, it will be convenient to write [x_(n,N,δ), x̄_(n,N,δ)] to emphasize that we are computing the n-period interval in the Bargaining game B[N, δ].
Increasing the Deadline Recall, under the assumptions of rationality, strong belief of ratio-
nality, and on path strategic certainty, the Bargainer who proposes in the last period has deadline
bargaining power. Increasing the deadline has two potential effects. First, it can transfer the
deadline bargaining power from one Bargainer to the next. Second, it can diminish the deadline
bargaining power (i.e., without transferring the deadline bargaining power from one Bargainer to
the next).
First consider the case where increasing the deadline shifts the deadline bargaining power from
one Bargainer to the next. Here, the analysis does not appear to yield qualitatively interesting
answers. This is illustrated by the following example.
Example 6.2. Fix a discount factor δ with δ > 1 − δ². In B[3, δ], B1 has the deadline bargaining power and, in B[4, δ], B2 has the deadline bargaining power. Consider outcomes associated with agreement in period 2. Note, [x_(2,3,δ), x̄_(2,3,δ)] = {δ} and [x_(2,4,δ), x̄_(2,4,δ)] = [(1 − δ)/δ, 1 − δ²]. Using the fact that δ > 1 − δ², [x_(2,3,δ), x̄_(2,3,δ)] ∩ [x_(2,4,δ), x̄_(2,4,δ)] = ∅. □
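The two intervals in Example 6.2 can be checked numerically (a sketch; δ = 0.7 satisfies δ > 1 − δ²):

```python
delta = 0.7                                     # 0.7 > 1 - 0.49 = 0.51

i_three = (delta, delta)                        # N = 3: the interval is the point delta
i_four = ((1 - delta) / delta, 1 - delta ** 2)  # N = 4: roughly (0.4286, 0.51)

# Disjoint because the N = 4 interval lies entirely below delta:
disjoint = i_four[1] < i_three[0]
```

So shifting the deadline from 3 to 4 (and hence the deadline bargaining power from B1 to B2) moves the period-2 predictions to a disjoint set.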
Let us review Example 6.2. We began by increasing the deadline from N = 3 to N = 4, thereby
shifting the deadline bargaining power from B1 to B2. We studied the behavioral implications
of rationality, forward induction reasoning and on path strategic certainty. By Proposition 5.1,
there cannot be delay until period n = 3 (or, in the case of N = 4, until n = 4). So, focus on the
case of delay until the second period. This is possible in both cases, provided the discount factor
is large (i.e., 0 > 1 − δ − δ²). But, in this case, the behavioral implications are distinct.
Now consider increasing the deadline, without shifting deadline bargaining power. That is, Bi proposes in the last period, both before and after an increase in the deadline. This increase
in the deadline diminishes Bi’s deadline bargaining power and, in so doing, relaxes Bi’s deadline
bargaining constraint. This leads to a (weakly) larger set of outcomes consistent with RCSBR
(across all type structures).
Proposition 6.1. For N∗∗ ≥ N∗:
(i) [x̲(n,2N∗,δ), x̄(n,2N∗,δ)] ⊆ [x̲(n,2N∗∗,δ), x̄(n,2N∗∗,δ)] ⊆ [x̲(n,∞,δ), x̄(n,∞,δ)], and
(ii) [x̲(n,2N∗+1,δ), x̄(n,2N∗+1,δ)] ⊆ [x̲(n,2N∗∗+1,δ), x̄(n,2N∗∗+1,δ)] ⊆ [x̲(n,∞,δ), x̄(n,∞,δ)].
Increasing the Discount Factor Increasing the discount factor makes waiting for the future
more profitable. This serves to diminish each Bargainer’s upfront constraint. However, because
it makes waiting more profitable, it also tightens the deadline constraint.
[Figure 6.1: the constraints U1, D1, and U2 on the unit interval; n = 2 and N ≥ 5 odd]
Refer to Figure 6.1 which depicts the case where B1 has deadline bargaining power, n = 2 and
N − 2 ≥ 2 is odd. B2’s upfront constraint, viz. U2, is strictly increasing in the discount factor.
B1’s upfront constraint, viz. U1, is strictly decreasing in the discount factor. If the Bargainers
agree on an x∗1 : x∗2 split in period n = 2, x∗1 must lie between U1 and U2. So, indeed, the
upfront constraints are relaxed, as stated. Also notice that B1’s deadline constraint, viz. D1, is
strictly increasing in the discount factor. Since x∗1 must lie above D1, this depicts the claim that
increasing the discount factor tightens the deadline constraint. Overall, increasing the discount
factor either (i) relaxes both the lower and upper bounds of the interval [x̲2, x̄2], or (ii) shifts up
both the lower and upper bounds of [x̲2, x̄2].
This is true more generally:
Proposition 6.2. Take δ∗∗ > δ∗.
(i) If N is either infinite or odd, x̄(n,N,δ∗∗) > x̄(n,N,δ∗).
(ii) If N is either infinite or even, x̲(n,N,δ∗∗) > x̲(n,N,δ∗).
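These comparative statics can be illustrated with the explicit n = 2 constraints. A minimal sketch, assuming the bounds implied by Propositions 7.1-7.3 (upfront: x1 ≥ (1−δ)/δ and x1 ≤ δ) and Proposition 7.2 (deadline: x1 ≥ δ^(N−2) when B1 proposes last); the function name and parameter values are ours.

```python
from fractions import Fraction

# n = 2 constraints on B1's share x1 when B1 has deadline bargaining power
# (N odd).  U1 and D1 are lower bounds on x1; U2 is an upper bound.
def constraints(N, delta):
    U1 = (1 - delta) / delta      # B1's upfront constraint
    U2 = delta                    # B2's upfront constraint (upper bound on x1)
    D1 = delta ** (N - 2)         # B1's deadline constraint
    return U1, U2, D1

N = 5
low, high = Fraction(7, 10), Fraction(9, 10)
U1_l, U2_l, D1_l = constraints(N, low)
U1_h, U2_h, D1_h = constraints(N, high)

assert U1_h < U1_l    # B1's upfront constraint is relaxed as delta rises
assert U2_h > U2_l    # B2's upfront constraint is relaxed as delta rises
assert D1_h > D1_l    # ...but the deadline constraint tightens
```

This reproduces the picture in Figure 6.1: both upfront constraints relax while the deadline constraint moves up.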
Remark 6.1. In the above analysis, we spoke about relaxing Bi’s upfront constraint or tightening
Bi’s deadline constraint. There was no mention of Bi’s reasoning process. Because this analysis
studies behavioral implications across a class of epistemic games, it is difficult to provide a careful
interpretation of these effects in terms of Bi's reasoning process. □
7 On Path Strategic Uncertainty
In this Section we return to understand the extent to which the assumption of ‘on path strategic
certainty’ does vs. does not limit the behavioral predictions of forward induction reasoning.
7.1 Examples
Begin with the case of a two-period deadline.
Example 7.1 (Two-Period Deadline). Revisit the case of the two-period deadline. There [x̲1, x̄1] =
{1 − δ} and [x̲2, x̄2] = ∅. Thus, any state at which there is rationality, strong belief of rationality,
and on path strategic certainty induces the outcome (1 − δ, δ, 1).
Next, consider a state (s∗1, t∗1, s∗2, t∗2) at which there is rationality and strong belief of rationality
(but not necessarily on path strategic certainty). First, suppose the state induces immediate
agreement, i.e., an outcome (x∗1, x∗2, 1). Here, we must still have that x∗1 = 1 − δ and x∗2 = δ: Since
t∗1 strongly believes the event "B2 is rational," t∗1 must expect B2 to accept any offer of x ∈ [0, 1 − δ);
for any such offer x ∈ [0, 1 − δ), t∗1 would be strictly better off by offering some y ∈ (x, 1 − δ).
Thus, x∗1 ≥ 1 − δ. Now note that, after B1 proposes x∗1, t∗2 can and must continue to maintain
the hypothesis that B1 is rational, and so that B1 will accept any second-period offer x ∈ [0, 1).
So, if x∗2 < δ, it is a best response for t∗2 to reject. That is, the fact that (s∗2, t∗2) ∈ R2 ∩ SB2(R1)
implies that x∗2 ≥ δ or, equivalently, that x∗1 ≤ 1 − δ.
Suppose instead the state (s∗1, t∗1, s∗2, t∗2) induces delay, i.e., an outcome (x∗1, x∗2, 2). Then, there
is a 2-period history h∗ ∈ H^P_2 along the path of play induced by that state. At h∗, t∗2 can and
does continue to maintain the hypothesis that B1 is rational. Thus, he expects B1 to accept any
x ∈ [0, 1). Since (s∗2, t∗2) is rational, it follows that s∗2(h∗) = 1 and so x∗1 = 0. We will later see
that this outcome is indeed consistent with RCSBR. □
The example is suggestive of three principles: Fix a state at which there is rationality and
strong belief of rationality; look at the induced path of play. First, along the induced path of
play, no Bargainer will ever make a proposal x ∈ [0, 1 − δ). Second, if the path of play reaches
the final bargaining phase N , no proposal x ∈ [0, 1) will be made. Third, along the induced path
of play, no Responder will ever accept an offer less than δ(1− δ) in some period n ≤ N − 1; the
Responder would prefer to wait for the following period. These take-aways are summarized by
the following result.
Proposition 7.1. Fix an epistemic game (B, T ) and some (s∗1, t∗1, s∗2, t∗2) ∈ R ∩ [SB1(R2) ×
SB2(R1)]. Let (x∗1, x∗2, n) be the outcome induced by (s∗1, s∗2). If Bi is the proposer in period
n, then
(i) x∗i ≥ 1 − δ,
(ii) x∗−i ≥ δ(1 − δ), provided n ≤ N − 1, and
(iii) x∗−i = 0, provided n = N.
Example 7.2 (Three-Period Deadline). Revisit the case of the three-period deadline. There
[x̲1, x̄1] = [max{1 − δ, δ²}, 1 − δ + δ²], [x̲2, x̄2] ⊆ {δ}, and [x̲3, x̄3] = ∅. Thus, at any state at which
there is rationality, strong belief of rationality, on path strategic certainty, and delay, the outcome
is (δ, 1 − δ, 2).
Again, consider a state (s∗1, t∗1, s∗2, t∗2) at which there is rationality and strong belief of rationality
(but not necessarily on path strategic certainty). Suppose the state induces the outcome
(x∗1, x∗2, n). What can be said of this outcome?
Begin with the case of delay until the final period, i.e., n = 3. Proposition 7.1(iii) says that
the outcome is either (0, 0, 3) or (1, 0, 3).
Consider the case of delay until the second period, i.e., an outcome (x∗1, x∗2, 2). Using Proposition
7.1(i), x∗2 ≥ 1 − δ and so x∗1 ≤ δ. But we can push the implications further—there is also a
deadline constraint on x∗1: At the point where B1 accepts the offer of x∗2, t∗1 can and must continue
to maintain the hypothesis that B2 is rational. So, t∗1 must maintain the hypothesis that B2 will
accept any third-period offer that gives him (B2) a strictly positive share of the pie. If x∗1 ∈ [0, δ),
then t∗1 would strictly prefer to wait until the third period. This says that x∗1 ≥ δ. So, if there
is delay until the second period, then the outcome is (δ, 1 − δ, 2); of course, this corresponds to
[x̲2, x̄2] when δ is sufficiently large.
Now, suppose that there is no delay, i.e., the state induces an outcome (x∗1, x∗2, 1). Proposition
7.1(i)-(ii) gives that x∗1 ∈ [1 − δ, 1 − δ + δ²]. Notice that x∗1 may not be contained in [x̲1, x̄1]. This
can happen if and only if δ² > 1 − δ. To understand why, notice that there is a strategy that
ensures type t∗1 expected payoffs of at least δ²x, i.e., by making an offer of 1 upfront, accepting
a second-period offer of y if and only if y ∈ [0, 1 − δ), and offering x ∈ [0, 1) in the final period.
Under on path strategic certainty, t∗1 expects x∗1 to be accepted and so it must be the case that
x∗1 ≥ δ²x for each x ∈ [0, 1) or, equivalently, x∗1 ≥ δ². But, absent an assumption of on path
strategic certainty, t∗1 need not expect x∗1 to be accepted and so δ² may be strictly greater than
x∗1 = 1 − δ.
Suppose now that (s∗1, t∗1, s∗2, t∗2) ∈ R^4_1 × R^3_2. When B1 makes the offer of x∗1, type t∗1 assigns
probability one to "B2 is rational and he strongly believes 'I am rational and I strongly believe
B2 is rational.'" (In fact, formally, t∗1 assigns probability one to a subset of this event.) Thus,
t∗1 assigns probability one to the event that "when B2 makes an offer (following a rejection of
x∗1), he will expect to get a zero share of the pie if the final Bargaining phase is reached." Thus,
t∗1 expects that B2 would make an offer of y = 1 − δ. Now, since t∗1 begins with the hypothesis
that "B2 is rational in his rejection of the offer x∗1," it follows that x∗2 = 1 − x∗1 ≤ δ(1 − δ) or
x∗1 ≥ 1 − δ + δ² > δ². □
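A numeric companion to Example 7.2, in exact rational arithmetic; the choice δ = 4/5 is ours and satisfies δ² > 1 − δ.

```python
from fractions import Fraction

delta = Fraction(4, 5)
assert delta**2 > 1 - delta          # the deadline constraint has bite

# Under rationality and strong belief of rationality alone, a no-delay
# offer satisfies x1 in [1 - delta, 1 - delta + delta**2] ...
lo_weak, hi = 1 - delta, 1 - delta + delta**2

# ... while the further rounds of mutual strong belief (R^4_1 x R^3_2)
# push the lower bound up to 1 - delta*(1 - delta) = 1 - delta + delta**2.
lo_strong = 1 - delta * (1 - delta)
assert lo_strong == hi
assert lo_strong > delta**2 > lo_weak
```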
7.2 Forward Induction Reasoning and the Deadline Constraint
Focus on the case of a three-period deadline. Suppose the outcome is (x∗1, x∗2, 1), i.e., there is no
delay. Under on path strategic certainty, when B1 makes an initial offer of x∗1, she expects B2
will accept the offer. Thus, under rationality and strong belief of rationality, B1 would not make
an offer less than δ², else it would pay to wait for the final period.
At first glance, absent on path strategic certainty, B1 may expect the initial offer of x∗1 to be
rejected. But the key is that this cannot happen if B1 engages in forward induction reasoning, in
the sense of RCSBR: If B1 engages in forward induction reasoning, at the initial node, she must
assign probability one to the event that B2 engages in forward induction reasoning. If an initial
offer of x∗1 is consistent with forward induction reasoning and B1 expects that such an offer may
be rejected, then she must expect that, when B2 receives the offer, he (i.e., B2) will have the
following expectation of future behavior:
“if B1 gets to make a final period offer, she will offer to take the full final-period
surplus (i.e., the full pie).”
Thus, from B1’s perspective, there is an expected penultimate-period surplus of 1 − δ and she
hypothesizes that B2 will attempt to take the full penultimate-period surplus. If she, i.e., B1,
nonetheless thinks that B2 will reject the initial offer of x∗1, it must be because that initial offer
gives B2 a low share relative to the discounted full penultimate-period surplus. This can only be
the case if, initially, B1 offered to take a share of the pie that was greater than the discounted full
final-period surplus.
This reasoning applies more generally.
Proposition 7.2. Fix an epistemic game (B, T ) with a deadline N and suppose Bi proposes in
the last period. Let (x∗1, x∗2, n) be the outcome induced by some (s∗1, s∗2) and suppose one of the
following holds:
(i) For some k ≥ 1, n = N − 2k + 1 and (s∗i, t∗i, s∗−i, t∗−i) ∈ R^2_i × R^1_−i.
(ii) For some k ≥ 1, n = N − 2k and (s∗i, t∗i, s∗−i, t∗−i) ∈ R^{2k+2}_i × R^{2k+1}_−i.
Then x∗i ≥ δ^{N−n}.
Corollary 7.1. Fix an epistemic game (B, T ) with a deadline N and suppose Bi proposes in the
last period. If (s∗1, t∗1, s∗2, t∗2) ∈ R∞ with ξ(ζ(s∗1, s∗2)) = (x∗1, x∗2, n), then x∗i ≥ δ^{N−n}.
Thus, under rationality and forward induction reasoning, an x∗1 : x∗2 split of the pie in period
n must satisfy the deadline constraint. To understand the difference between parts (i)-(ii) of
Proposition 7.2, consider a state (s∗i, t∗i, s∗−i, t∗−i) that induces the outcome (x∗1, x∗2, n). If N is odd
and n is even, then B1 is the Proposer in the Nth period but the Receiver in the nth period. In
this case, B1 agrees to an x∗1 : x∗2 split in period n knowing that she is making such an agreement.
If, however, both N and n are odd, then B1 is the Proposer in the Nth and nth periods. In this
case, when B1 proposes an x∗1 : x∗2 split in period n, she does not know that she is making such an
agreement—she may very well think that the offer will be rejected. But, as we have seen, forward
induction reasoning nonetheless implies that the deadline constraint is satisfied.
Restrict attention to the case of no delay. A consequence of Propositions 7.1-7.2 is that,
in this no-delay case, the behavioral implications of rationality and forward induction reasoning
correspond exactly to the behavioral implications of rationality, strong belief of rationality, and
on path strategic certainty.
Corollary 7.2. Fix an epistemic game (B, T ) with a deadline N and suppose Bi proposes in the
last period. Let ξ(ζ(s∗1, s∗2)) = (x∗1, x∗2, 1) and suppose one of the following holds:
(i) N is even and (s∗1, t∗1, s∗2, t∗2) ∈ R^1_1 × R^2_2.
(ii) N is odd and (s∗1, t∗1, s∗2, t∗2) ∈ R^{N+1}_1 × R^N_2.
Then x∗1 ∈ [x̲1, x̄1].
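A sketch for the no-delay case with a three-period deadline: combining Proposition 7.1(i)-(ii) with the deadline bound x1 ≥ δ² of Proposition 7.2 recovers the interval [max{1 − δ, δ²}, 1 − δ + δ²] from Example 7.2. The function name and parameter choices are ours.

```python
from fractions import Fraction

# No-delay interval of B1-shares for the three-period deadline.
def no_delay_interval(delta):
    lower = max(1 - delta, delta**2)    # upfront vs deadline constraint
    upper = 1 - delta + delta**2        # B2's upfront constraint
    return lower, upper

lo, hi = no_delay_interval(Fraction(9, 10))
assert lo == Fraction(9, 10)**2     # deadline constraint binds for large delta
lo, hi = no_delay_interval(Fraction(3, 10))
assert lo == 1 - Fraction(3, 10)    # upfront constraint binds for small delta
```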
7.3 Forward Induction Reasoning and the Upfront Constraint
We now return to the case of a two-period deadline. We first show that, under rationality and
forward induction reasoning, the upfront constraint need not be satisfied.
Example 7.3 (Two-Period Deadline Revisited). We revisit the case of a two-period deadline.
Now, we construct a conditional epistemic game (B, T ) and states (s∗1, t∗1, r∗2, t∗2) and (r∗1, t∗1, r∗2, t∗2)
at which there is RCSBR, so that ξ(ζ(s∗1, r∗2)) = (0, 1, 2) and ξ(ζ(r∗1, r∗2)) = (0, 0, 2). Thus, delay
is indeed consistent with rationality and forward induction reasoning.
Begin by specifying the strategies. Take s∗1(φ) = 1 − δ and s∗1(φ, 1 − δ, R, x) = A
for all x ∈ [0, 1]. Take r∗1(φ) = 1 − δ and r∗1(φ, 1 − δ, R, x) = A if and only if x ∈ [0, 1). Take
s∗2(φ, x) = A if and only if x ∈ [0, 1 − δ] and s∗2(φ, x, R) = 1 for all x ∈ [0, 1]. Take r∗2(φ, x) = A if
and only if x ∈ [0, 1 − δ) and r∗2(φ, x, R) = 1 for all x ∈ [0, 1].
Now construct T as follows: The type sets will be singletons, viz. T1 = {t∗1} and T2 = {t∗2}.
The belief maps are given as follows: For each h ∈ H1,
• β1,h(t∗1)(s∗2, t∗2) = 1, if s∗2 ∈ S2(h),
• β1,h(t∗1)(r∗2, t∗2) = 1, if s∗2 ∉ S2(h) and r∗2 ∈ S2(h), and
• β1,h(t∗1)(S2(h) × T2) = 1, otherwise.
For each h ∈ H2, set β2,h(t∗2)(s^{h,∗}_1, t∗1) = 1.
Clearly,
{s∗1, r∗1} × T1 × {s∗2, r∗2} × T2 ⊆ R^1_1 × R^1_2.
Moreover, if R^1_i ∩ [Si(h) × Ti] ≠ ∅, then {s∗i, r∗i} ∩ Si(h) ≠ ∅ for each Bi. From this, t∗1 strongly
believes R^1_2 and t∗2 strongly believes R^1_1. By induction, R^1_1 × R^1_2 = R^m_1 × R^m_2 for each m ≥ 1. □
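The play induced by these strategies can be simulated directly. A sketch: the encoding of the two-period protocol and the function names are ours, with exact rationals to avoid rounding.

```python
from fractions import Fraction

delta = Fraction(3, 5)

# Responses from Example 7.3; x is B1's proposed share, y is B2's.
def s1_resp(x, y): return "A"                       # s1*: accept any period-2 offer
def r1_resp(x, y): return "A" if y < 1 else "R"     # r1*: reject only y = 1
def s2_resp(x):    return "A" if x <= 1 - delta else "R"
def r2_resp(x):    return "A" if x < 1 - delta else "R"

def play(b1_resp, b2_resp):
    x = 1 - delta                 # both s1* and r1* open with the offer 1 - delta
    if b2_resp(x) == "A":
        return (x, 1 - x, 1)
    y = 1                         # both s2* and r2* then demand the whole pie
    if b1_resp(x, y) == "A":
        return (1 - y, y, 2)
    return (0, 0, 2)              # deadline reached with no agreement

assert play(s1_resp, r2_resp) == (0, 1, 2)
assert play(r1_resp, r2_resp) == (0, 0, 2)
# s2* breaks the indifference at 1 - delta the other way and accepts:
assert play(s1_resp, s2_resp) == (1 - delta, delta, 1)
```

The last assertion shows why B1's uncertainty matters: the same opening offer ends in immediate agreement against s∗2 but in delay against r∗2.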
Here we have an example where there is rationality and forward induction reasoning, but the
outcome does not satisfy the upfront constraint. The reason is that, when B1 makes the initial
offer, she is uncertain about what the outcome will be under forward induction reasoning. If
forward induction reasoning did pin down a specific outcome, the outcome would have to satisfy
the upfront constraint:
Definition 7.1. Call Q1 ×Q2 ⊆ S1 × S2 a constant set if, for any (s1, s2), (r1, r2) ∈ Q1 ×Q2,
π1(s1, s2) = π1(r1, r2) and π2(s1, s2) = π2(r1, r2).
Proposition 7.3. Fix an epistemic game (B, T ). Let (x∗1, x∗2, n) be the outcome induced by some
(s∗1, s∗2). Suppose proj_S R∞ is a constant set and (s∗1, t∗1, s∗2, t∗2) ∈ R∞. Then
(i) x∗1 ≥ (1 − δ)/δ^{n−1}, and
(ii) x∗2 ≥ δ(1 − δ)/δ^{n−1}.
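One immediate consequence worth noting: since x∗1 + x∗2 = 1 on agreement, the two bounds in Proposition 7.3 are jointly feasible only if 1 − δ² ≤ δ^(n−1), which for n = 2 is essentially the condition 1 − δ − δ² < 0 from Example 6.2. A minimal feasibility check; the function name is ours.

```python
from fractions import Fraction

# Proposition 7.3 requires x1 >= (1 - delta)/delta**(n-1) and
# x2 >= delta*(1 - delta)/delta**(n-1); with x1 + x2 = 1 this is
# feasible only if 1 - delta**2 <= delta**(n-1).
def upfront_feasible(n, delta):
    return 1 - delta**2 <= delta**(n - 1)

assert upfront_feasible(2, Fraction(4, 5))        # large delta: delay possible
assert not upfront_feasible(2, Fraction(1, 2))    # small delta: infeasible
```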
7.4 Forward Induction Reasoning and Indifferences
In Example 7.3, we saw that—even in the case of a two-period deadline—delay is consistent
with forward induction reasoning. There, B1 made an offer which left B2 indifferent between
acceptance and rejection. B1 expected that B2 would accept the offer. But, B1 had an incorrect
belief about how B2 breaks his indifference. That is, delay was an artifact of uncertainty about
how B2 breaks his indifference.
Under rationality and forward induction reasoning alone, there may be delay in reaching agree-
ments because of uncertainty about how the other Bargainer breaks his indifference. Arguably,
this source of delay is of lesser interest.
In Section 4.2, we constructed a state at which there was rationality, forward induction rea-
soning, on path strategic certainty and delay. There, delay was not an artifact of uncertainty
about how the other Bargainer breaks his indifference: The construction satisfied a no indiffer-
ence property. (Refer to Remark 4.1.) In particular, at each information set h ∈ Hi along the
path of play, the unique type of Bi had a uniquely optimal action. Thus, no Bargainer faced
uncertainty about how the other Bargainer breaks indifferences.
The assumption of on path strategic certainty serves to rule out delay that is an artifact of
uncertainty about how the other Bargainer breaks indifferences. This can easily be understood
in the example of a two-period deadline: There, we saw that, if there is a state at which there
is RCSBR and the upfront constraint is violated, then the outcome is either (0, 0, 2) or (0, 1, 2).
For these outcomes to be consistent with RCSBR, it must be the case that B1 is uncertain
about how B2 breaks his indifference. Here is the intuition: If B2 is rational and there is delay,
then B1 must have made an offer x ∈ [1 − δ, 1]. Since B1 strongly believes B2 is rational, B1
expects that B2 would accept any offer in [0, 1 − δ). Using the fact that B1 is rational, when
she makes an offer x ∈ [1 − δ, 1], she must assign positive probability to B2 accepting the offer.
Since B1 strongly believes "B2 is rational and strongly believes I am rational," this implies that
x = 1 − δ. In that case, B2 is indifferent between acceptance and rejection. B1 incorrectly assigns
positive probability to acceptance of x = 1 − δ.
This idea is true more generally: A violation of the upfront constraint is indicative of the fact
that some Bargainer is uncertain about how the other Bargainer breaks his indifference. We now
turn to formalize this statement. It will be useful to introduce terminology.
Definition 7.2. Fix some si ∈ Si and some µi ∈ P(S−i × T−i). Say (si, µi) has a distinguished
outcome if there exists some event E−i ⊆ S−i × T−i with µi(E−i) > 0 so that ξ(ζ({si} × proj_{S−i} E−i))
is a singleton.
Suppose (si, βi,h(ti)) does not have a distinguished outcome, i.e., for any event E−i assigned
positive probability under βi,h(ti), we can find strategies s−i, r−i ∈ proj_{S−i} E−i so that s−i and r−i
induce distinct outcomes when Bi plays si. Then, at h, ti faces an extreme form of uncertainty
about the outcome that will obtain—if ti can reason about an event where only the outcome (x1, x2, n)
obtains, then ti assigns zero probability to that event at h.⁹
Definition 7.3. Say (x∗1, x∗2, n∗) and (x∗∗1, x∗∗2, n∗∗) are Bi-equivalent if Πi(x∗1, x∗2, n∗) = Πi(x∗∗1, x∗∗2, n∗∗).
Write R∞_i(h) = R∞_i ∩ Si(h) and R∞(h) = R∞_1(h) × R∞_2(h).
Definition 7.4. Fix an epistemic Bargaining game (B, T ) and a state (s1, t1, s2, t2) at which there
is RCSBR. Say, at (s1, t1, s2, t2), Bi is uncertain about how B−i breaks indifferences if
there is a history h ∈ Hi with (s1, s2) ∈ S1(h) × S2(h) so that the following hold:
(i) There are distinct B−i-equivalent outcomes in ξ(ζ(proj_S R∞(h))).
(ii) If (si, βi,h(ti)) has a distinguished outcome, then there exists some event E−i contained in
{r−i ∈ S−i(h) : ξ(ζ(si, r−i)) ≠ ξ(ζ(si, s−i)) and Π−i(ξ(ζ(si, r−i))) = Π−i(ξ(ζ(si, s−i)))} × T−i
so that βi,h(ti)(E−i) > 0.
Suppose, at (s1, t1, s2, t2), B1 is uncertain about how B2 breaks indifferences. Then (s1, s2) allows
a history h ∈ H1 at which t1 faces uncertainty about distinct outcomes that are payoff-equivalent
for B2. The nature of this uncertainty can take one of two forms. One possibility is that, at h,
t1 assigns positive probability to an event where 'the wrong B2-equivalent outcome' obtains. A
second possibility is that, at h, t1 cannot reason about the event that the correct outcome will
obtain. In this latter case, any event E2 that gets positive probability and contains some (r2, u2)
with ξ(ζ(s1, s2)) = ξ(ζ(s1, r2)) also contains some (q2, v2) where (i) ξ(ζ(s1, s2)) ≠ ξ(ζ(s1, q2)) but
(ii) ξ(ζ(s1, s2)) and ξ(ζ(s1, q2)) are B2-equivalent.
⁹We interpret the sigma-algebra as reflecting the sets that a Bargainer can reason about.
Proposition 7.4. Fix an epistemic game (B, T ) with a deadline. Suppose the upfront constraint
is violated at some state at which there is RCSBR. Then there is a Bi and a state (s∗1, t∗1, s∗2, t∗2)
at which there is RCSBR so that, at (s∗1, t∗1, s∗2, t∗2), Bi is uncertain about how B−i breaks
indifferences.
8 Discussion
Uncertainty about Posture
Optimism and Delay In Section TK, we drew a connection between delay and second-order
optimism. In particular, we constructed a type structure and a state at which there was RCSBR,
on path strategic certainty and delay in reaching agreements. Under the construction, there was a
mutually beneficial offer to be made upfront. But, B1 did not make this offer as she hypothesized
that doing so would cause B2 to become more optimistic about his future prospects.
This connection between second-order optimism and delay was not coincidental. To see this,
it will be convenient to introduce notation: Given a history h ∈ H2, write Eπ2[h] : S2(h) × T2 → ℝ for
Eπ2[h](s2, t2) = ∫_{S1(h)} π2(s1, s2) d marg_{S1} β2,h(t2).
When Eπ2[h] is β1,φ(t1)-integrable, write
E1[Eπ2[h]|t1, φ] = ∫_{S2(h)×T2} Eπ2[h] dβ1,φ(t1)
for t1's initial expectation of 'B2's expected payoffs at h.'
Definition 8.1. Say t1 initially believes that B2 is more optimistic at h than at h′ if
E1[Eπ2[h]|t1, φ] > E1[Eπ2[h′]|t1, φ].
Fix a state (s1, t1, s2, t2) ∈ R ∩ S, i.e., at which there is rationality and on path strategic
certainty. Suppose, at this state, β1,φ(t1) assigns probability one to SB2(Z[s1, s2]), i.e., to the
event that B2 strongly believes that the terminal node ζ(s1, s2) will be reached, provided B2
plays s2. Then, for any outcome (y1, y2, 1) that Pareto dominates ξ(ζ(s1, s2)), t1 must initially
believe that B2 is more optimistic at (φ, y1) than at (φ, s1(φ)). That is, in this case, t1 initially
reasons that the act of making a mutually beneficial offer necessarily will cause B2 to become
more optimistic about his future prospects. (See TK.)
Extensive-Form Rationalizability Fix a f
Equilibrium Dominance Criterion The conditions of rationality, strong belief of rationality,
and on path strategic certainty are related to—but distinct from—the equilibrium dominance
criterion discussed in Battigalli and Siniscalchi (2002, Section 6.1).
The equilibrium dominance criterion looks at states at which there is rationality and strong
belief of “rationality and the correct path of play.” Here we look at states at which there is
rationality, strong belief of “rationality,” and strong belief of “the correct path of play.” This
implies the equilibrium dominance criterion. However, the converse does not hold. In particular,
strong belief of “rationality and the correct path of play” allows a player to give up on the other’s
rationality, once the other has deviated from the given path of play. Here, if a player can continue
to believe the other is rational, she does so—even if the other has deviated from the proposed
path of play.
Appendix A Technical Preliminaries
Endow [0, 1] with the Euclidean metric and A,R with the discrete metric. Then, take ρ to be
the induced metric on [0, 1] ∪ A,R. The uniform metric on Si, viz. di, is then given by
di(si, ri) = suph ∈ Hi : ρ(si(h), ri(h)).
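A toy instance of di can be computed directly. A sketch: the finite history set and its labels are ours, while ρ follows the construction above (Euclidean on [0, 1], discrete on {A, R}).

```python
# rho: Euclidean on [0,1], discrete on {"A", "R"}.
def rho(a, b):
    if isinstance(a, str) or isinstance(b, str):
        return 0.0 if a == b else 1.0
    return abs(a - b)

# Uniform metric: sup over histories of rho(s_i(h), r_i(h)); strategies
# are represented as dicts from (hypothetical) history labels to actions.
def d(si, ri):
    return max(rho(si[h], ri[h]) for h in si)

si = {"h0": 0.5, "h1": "A", "h2": 0.9}
ri = {"h0": 0.7, "h1": "R", "h2": 0.9}
assert d(si, si) == 0.0
assert d(si, ri) == 1.0     # the discrete coordinate dominates
```

In the paper the supremum runs over an infinite history set; the finite dict here is purely illustrative.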
Lemma A.1.
(i) (Si, di) is a metric space.
(ii) For each h ∈ H ∪ Z, Si(h) is closed.
(iii) For each h ∈ H^P_1 ∪ H^P_2 and each interval [y̲, ȳ] ⊆ [0, 1], {si ∈ Si : si(h) ∈ [y̲, ȳ]} is closed.
Proof.
Part (i): Using the fact that ρ is a metric, it follows that, for each si, ri ∈ Si, di(si, ri) = di(ri, si) ≥ 0.
Certainly, by construction, for each si ∈ Si, di(si, si) = 0. Finally, fix si, ri, qi ∈ Si and note
that, for each h ∈ Hi,
ρ(si(h), qi(h)) ≤ ρ(si(h), ri(h)) + ρ(ri(h), qi(h)).
It follows that di(si, qi) ≤ di(si, ri) + di(ri, qi).
Part (ii): Fix some h ∈ H ∪ Z. Let (s^k_i)_{k∈N} be a sequence with (i) s^k_i ∈ Si(h) for each
k ∈ N, and (ii) lim_{k→∞} di(s^k_i, s∗i) = 0. It follows that, for each h′ ∈ Hi that strictly precedes
h, lim_{k→∞} ρ(s^k_i(h′), s∗i(h′)) = 0. Moreover, for each h′ ∈ Hi that strictly precedes h,
ρ(s^k_i(h′), s^l_i(h′)) = 0 for each k, l ∈ N. It follows that, for each h′ that strictly precedes h and
each s^k_i, ρ(s^k_i(h′), s∗i(h′)) = 0 and so s∗i(h′) = s^k_i(h′). Thus, s∗i ∈ Si(h).
Part (iii) is immediate from the definition of di.
Lemma A.1(ii) is analogous to Battigalli's (2003) Lemma 2.1. We include a specific proof here to
draw out several implications that we make implicit use of:
Corollary A.1. For each z ∈ Z and each si ∈ Si, the set {s−i : ζ(si, s−i) = z} is closed.
Proof. Fix z ∈ Z and si ∈ Si. By Lemma A.1(ii), it suffices to show that {s−i : ζ(si, s−i) = z} =
S−i(z). By definition, {s−i : ζ(si, s−i) = z} ⊆ S−i(z). For the converse, fix s−i ∈ S−i(z), i.e.,
there exists ri ∈ Si with ζ(ri, s−i) = z. It follows that, along the path from the root to the terminal
node z, ri and si must specify the same choices and so ζ(si, s−i) = ζ(ri, s−i), as required.
Corollary A.2.
(i) For each h ∈ H^P_1 ∪ H^P_2 and each interval [y̲, ȳ] ⊆ [0, 1], the set {si ∈ Si(h) : si(h) ∈ [y̲, ȳ)} is Borel.
(ii) For each h ∈ H, the set
{si ∈ Si(h) : for each N-period history that follows h, viz. h′ ∈ H^P_i, si(h′) ∈ [0, 1)}
is Borel.
Proof.
Part (i): Note that
{si ∈ Si(h) : si(h) ∈ [y̲, ȳ)} = Si(h) ∩ ({si ∈ Si : si(h) ∈ [y̲, ȳ]}\Si(h, ȳ)).
So, using Lemma A.1(ii)-(iii), this set is Borel.
Part (ii): Let H̄ be the set of N-period histories h′ ∈ H^P_i that follow h. Then, by Lemma A.1(iii),
{si ∈ Si : si(h′) = 1 for some h′ ∈ H̄} = ⋃_{h′∈H̄} {si ∈ Si : si(h′) ∈ [1, 1]}
is closed. Now, the set
{si ∈ Si(h) : for each N-period history that follows h, viz. h′ ∈ H^P_i, si(h′) ∈ [0, 1)}
can be written as
Si(h)\{si : si(h′) = 1 for some h′ ∈ H̄}.
By Lemma A.1(ii), this set is Borel.
Example A.1. The purpose of this example is to demonstrate that there may be a strategy si
and a CPS µi ∈ C(S−i; Si) so that, for some history h, πi(si, ·) : S−i → ℝ is not µi(·|S−i(h))-
integrable. To do so, fix E ⊆ [0, 1] so that E is not Lebesgue measurable and 1 is not contained in
E.¹⁰ Write λ for the Lebesgue measure on [0, 1].
Consider the case of a two-period deadline. Let s1 be a strategy with s1(φ) = 1 and
s1(φ, 1, R, x) = A if x ∉ E, and s1(φ, 1, R, x) = R if x ∈ E.
Let S̄2 be the set of all strategies s2 so that
• s2(φ, x) = R, for all x ∈ [0, 1], and
• s2(φ, x, R) = 1, for all x ∈ [0, 1).
Thus, strategies in S̄2 differ only after the history (φ, 1, R). Note, there is a bijective continuous
mapping f : S̄2 → [0, 1]. Let ν be the image of λ under f. Consider a CPS µ1 with µ1(·|S2) = ν.
Then, at h = (φ), π1(s1, ·) : S2 → ℝ is not µ1(·|S2)-integrable. □
Appendix B Proof of Necessity
Remark B.1. Fix a state (si, ti) ∈ R^1_i and some (h, x) ∈ H^R_i allowed by si.
(i) If x ∈ [0, 1 − δ), then si(h, x) = A.
(ii) If (h, x) is an N-period history and x ∈ [0, 1), then si(h, x) = A.
Fix an n-period history (h, x) ∈ H^R_i. Let si, ri ∈ Si(h) with si(h, x) = A and ri(h, x) = R. At
(h, x) with x ∈ [0, 1 − δ), any type's expected payoff (at (h, x)) from choosing si is δ^{n−1}(1 − x) > δ^n
and any type's expected payoff (at (h, x)) from choosing ri is less than or equal to δ^n. If n = N
with x ∈ [0, 1), any type's expected payoff (at (h, x)) from choosing si is strictly greater than 0
whereas any type's expected payoff (at (h, x)) from choosing ri is 0.
It will be convenient to introduce the following notation: Write Eπi[si|ti, h] for type ti's expected
payoff, at h ∈ Hi ∪ {φ}, from choosing si ∈ Si(h), i.e.,
Eπi[si|ti, h] = ∫_{S−i} πi(si, ·) d marg_{S−i} βi,h(ti),
if πi(si, ·) is marg_{S−i} βi,h(ti)-integrable. The following terminology will also be useful.
¹⁰The choice of E to not contain 1 is without loss of generality, since the Lebesgue measurable sets form a sigma-algebra.
Definition B.1. Say that type ti ∈ Ti can secure payoffs of q at history h ∈ Hi ∪ {φ} if there
exists some strategy si ∈ Si(h) so that Eπi[si|ti, h] ≥ q.
We begin with a preliminary observation.
Lemma B.1. Fix some (s∗1, t∗1, s∗2, t∗2) ∈ R ∩ [SB1(R2) × SB2(R1)]. If ξ(ζ(s∗1, s∗2)) = (x∗1, x∗2, n)
and Bi is the proposer in period n, then
(i) x∗i ≥ 1 − δ,
(ii) x∗−i ≥ δ(1 − δ) if n ≤ N − 1, and
(iii) x∗−i = 0 if n = N.
Proof. Note, by assumption, there is an n-period history h∗ ∈ H^P_i along the path induced by
(s∗1, s∗2), so that s∗i(h∗) = x∗i and s∗−i(h∗, x∗i) = A. Of course, Eπ−i[s∗−i|t∗−i, (h∗, x∗i)] = δ^{n−1}x∗−i.
For part (i): Since t∗i strongly believes R−i and, by assumption, R−i ∩ [S−i(h∗) × T−i] ≠ ∅,
βi,h∗(t∗i) assigns probability one to
{r−i ∈ S−i(h∗) : r−i(h∗, x) = A, for all x ∈ [0, 1 − δ)} × T−i.
(By Corollary A.2(i) this set is Borel; then apply Remark B.1.) Thus, for each x ∈ [0, 1 − δ), t∗i
can secure δ^{n−1}x at h∗. Since (s∗i, t∗i) is rational, it follows that x∗i ≥ 1 − δ.
For part (ii): Take n ≤ N − 1. We will show that, for any x ∈ [0, 1 − δ), t∗−i can secure δ^n x
at (h∗, x∗i). Since (s∗−i, t∗−i) is rational, it then follows that δ^{n−1}x∗−i ≥ δ^n(1 − δ) or x∗−i ≥ δ(1 − δ).
Fix some x ∈ [0, 1 − δ). Since n ≤ N − 1, we can construct a strategy r−i so that r−i(h∗, x∗i) = R,
r−i(h∗, x∗i, R) = x, and r−i(h) = s∗−i(h) for all h ∈ H−i\{(h∗, x∗i), (h∗, x∗i, R)}. Since t∗−i strongly
believes Ri and, by assumption, Ri ∩ [Si(h∗, x∗i) × Ti] ≠ ∅, it follows that β−i,(h∗,x∗i)(t∗−i) assigns
probability one to
{ri ∈ Si(h∗, x∗i) : ri(h∗, x∗i, R, x) = A, for all x ∈ [0, 1 − δ)} × Ti.
(By Corollary A.2(i) this set is Borel; then apply Remark B.1.) So, Eπ−i[r−i|t∗−i, (h∗, x∗i)] = δ^n x,
as desired.
For part (iii): Since t∗i strongly believes R−i, R−i ∩ [S−i(h∗) × T−i] ≠ ∅, and n = N, βi,h∗(t∗i)
assigns probability one to
{r−i ∈ S−i(h∗) : r−i(h∗, x) = A, for all x ∈ [0, 1)} × T−i.
(The fact that this set is Borel follows from Corollary A.2(i).) Thus, for each x ∈ [0, 1), t∗i can
secure δ^{n−1}x at h∗. Since (s∗i, t∗i) is rational, it follows that s∗i(h∗) = 1 and so x∗−i = 0.
The next step is to establish the upfront constraint. In fact, we will establish this under a
somewhat weaker assumption. (We will later make use of this stronger result.)
Lemma B.2. Fix some (s∗1, t∗1, s∗2, t∗2) ∈ R ∩ [SB1(R2) × SB2(R1)] with ξ(ζ(s∗1, s∗2)) = (x∗1, x∗2, n).
If β1,φ(t∗1) assigns probability one to an event E2 ⊆ S2 × T2 with
E2 ⊆ {r2 : ξ(ζ(s∗1, r2)) = (x∗1, x∗2, n)} × T2,
then x∗1 ≥ (1 − δ)/δ^{n−1}.
Proof. By assumption, β1,φ(t∗1) assigns probability one to an event E2 with E2 ⊆ {r2 : ξ(ζ(s∗1, r2)) =
(x∗1, x∗2, n)} × T2. It follows that Eπ1[s∗1|t∗1, φ] = δ^{n−1}x∗1.
We also have that (s∗1, t∗1) ∈ SB1(R2). Thus, β1,φ(t∗1) assigns probability one to {r2 : r2(φ, x) =
A for all x ∈ [0, 1 − δ)} × T2, i.e., the event that B2 will accept any initial offer x ∈ [0, 1 − δ). (By
Corollary A.2(i) this set is Borel; then apply Remark B.1.) Thus, for each r1 with r1(φ) ∈ [0, 1 − δ),
Eπ1[r1|t∗1, φ] = r1(φ).
Put these two facts together: Since (s∗1, t∗1) ∈ R1, it follows that δ^{n−1}x∗1 ≥ x for all x ∈ [0, 1 − δ).
Equivalently, δ^{n−1}x∗1 ≥ 1 − δ. This establishes that x∗1 ≥ (1 − δ)/δ^{n−1}.
Lemma B.3. Fix some (s∗1, t∗1, s∗2, t∗2) ∈ R ∩ [SB1(R2) × SB2(R1)] with ξ(ζ(s∗1, s∗2)) = (x∗1, x∗2, n).
If either n = 1 or β2,(φ,s∗1(φ),R)(t∗2) assigns probability one to an event E1 ⊆ S1 × T1 with
E1 ⊆ {r1 : ξ(ζ(r1, s∗2)) = (x∗1, x∗2, n)} × T1,
then x∗2 ≥ δ(1 − δ)/δ^{n−1}.
Proof. The case of n = 1 follows from Lemma B.1(ii). So, take n ≥ 2.
Along the path of play, there is a two-period history h∗ ∈ H^P_2 so that β2,h∗(t∗2) assigns
probability one to some event E1 with E1 ⊆ {r1 : ξ(ζ(r1, s∗2)) = (x∗1, x∗2, n)} × T1. Thus,
Eπ2[s∗2|t∗2, h∗] = δ^{n−1}x∗2, i.e., at h∗, t∗2's expected payoff from playing s∗2 is δ^{n−1}x∗2.
We also have that (s∗2, t∗2) ∈ SB2(R1). Since (s∗1, t∗1) ∈ R1 and s∗1 allows h∗, β2,h∗(t∗2) assigns
probability one to {r1 ∈ S1(h∗) : r1(h∗, x) = A for all x ∈ [0, 1 − δ)} × T1. (By Corollary A.2(i) this
set is Borel; then apply Remark B.1.) Thus, for any strategy r2 ∈ S2(h∗) with r2(h∗) ∈ [0, 1 − δ),
Eπ2[r2|t∗2, h∗] = δ r2(h∗), i.e., at h∗, t∗2's expected payoff from proposing x ∈ [0, 1 − δ) at h∗ is
δx.
Put these two facts together: Since (s∗2, t∗2) ∈ R2, it follows that δ^{n−1}x∗2 ≥ δx for all x ∈
[0, 1 − δ). Equivalently, δ^{n−1}x∗2 ≥ δ(1 − δ). This establishes that x∗2 ≥ δ(1 − δ)/δ^{n−1}.
The upfront constraint is now (almost) a corollary of the previous results.
Lemma B.4. Fix some (s∗1, t∗1, s∗2, t∗2) ∈ R ∩ [SB1(R2) × SB2(R1)] ∩ C. If ξ(ζ(s∗1, s∗2)) = (x∗1, x∗2, n),
then
(i) x∗1 ≥ (1 − δ)/δ^{n−1}, and
(ii) x∗2 ≥ δ(1 − δ)/δ^{n−1}.
Proof. Begin with part (i). Since (s∗1, t∗1) ∈ SB1(Z2[s∗1, s∗2]), it follows that β1,φ(t∗1) assigns
probability one to
{r2 : ζ(s∗1, r2) = ζ(s∗1, s∗2)} × T2 ⊆ {r2 : ξ(ζ(s∗1, r2)) = (x∗1, x∗2, n)} × T2.
Thus, the claim follows from Lemma B.2.
Now turn to part (ii). The case of n = 1 follows from Lemma B.1(ii). So, take n ≥ 2.
Along the path of play induced by (s∗1, s∗2), there is some 2-period history h∗ ∈ H^P_2. Since
(s∗2, t∗2) ∈ SB2(Z1[s∗1, s∗2]), β2,h∗(t∗2) assigns probability one to
{r1 : ζ(r1, s∗2) = ζ(s∗1, s∗2)} × T1 ⊆ {r1 : ξ(ζ(r1, s∗2)) = (x∗1, 1 − x∗1, n)} × T1.
Thus, the claim follows from Lemma B.3.
We now turn to the deadline constraint. Begin with a preliminary result.
Lemma B.5. Fix a game with a deadline $N < \infty$ and suppose $B_i$ proposes in the last period. If $t_i$ strongly believes $R_{-i}$, then $t_i$ can secure $\delta^{N-1}x$ at $h^*$, provided $x \in [0,1)$ and $h^* \in H_i$ with $R_{-i} \cap (S_{-i}(h^*) \times T_{-i}) \neq \emptyset$.
Proof. Fix some $x \in [0,1)$ and some $h^* \in H_i$ with $R_{-i} \cap (S_{-i}(h^*) \times T_{-i}) \neq \emptyset$. Construct $s_i \in S_i(h^*)$ that satisfies the following properties: set $s_i(h) = x$ for each $N$-period history $h \in H^P_i$ that (weakly) follows $h^*$; set $s_i(h) = 1$ for all other histories $h \in H^P_i$ that (weakly) follow $h^*$; and set $s_i(h, y) = \mathrm{A}$ if and only if $y \in [0, 1-\delta)$, for each $(h, y) \in H^R_i$ that (weakly) follows $h^*$.

Since $t_i$ strongly believes $R_{-i}$ and $R_{-i} \cap (S_{-i}(h^*) \times T_{-i}) \neq \emptyset$, it follows that $\beta_{i,h^*}(t_i)$ assigns probability one to
$$\{s_{-i} \in S_{-i}(h^*) : s_{-i}(h, x) = \mathrm{A} \text{ for each $N$-period history } (h, x) \in H^P_i \times [0,1) \text{ that follows } h^*\} \times T_{-i}.$$
(The fact that this set is Borel follows from Corollary A.2(ii).) From this, it follows that $\mathbb{E}\pi_i[s_i \mid t_i, h^*] \ge \delta^{N-1}x$.
Lemma B.6. Fix a game with a deadline $N < \infty$ and suppose $B_i$ proposes in the last period. Write $\xi(\zeta(s^*_1, t^*_1, s^*_2, t^*_2)) = (x^*_1, x^*_2, n)$.

(i) Suppose $i$ is the Receiver in the $n$th period. If $(s^*_1, t^*_1, s^*_2, t^*_2) \in R$ and $t^*_i$ strongly believes $R_{-i}$, then $x^*_i \ge \delta^{N-n}$.

(ii) Suppose $i$ is the Proposer in the $n$th period. If $(s^*_1, t^*_1, s^*_2, t^*_2) \in R \cap C$ and $t^*_i$ strongly believes $R_{-i}$, then $x^*_i \ge \delta^{N-n}$.
Proof. Consider the path of play induced by $(s^*_1, s^*_2)$. Along the path of play, there is some $n$-period history $h^* \in H^P_i \cup H^P_{-i}$. Write $h_i = h^*$ if $h^* \in H^P_i$, and $h_i = (h^*, x^*_{-i})$ if $h^* \in H^P_{-i}$. In either case, $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h_i] = \delta^{n-1}x^*_i$. This is immediate if $h_i = (h^*, x^*_{-i}) \in H^R_i$, as then $s^*_i(h^*, x^*_{-i}) = \mathrm{A}$. If $h_i = h^* \in H^P_i$, then $s^*_i(h^*) = x^*_i$ and $s^*_{-i}(h^*, x^*_i) = \mathrm{A}$. As $(s^*_1, t^*_1, s^*_2, t^*_2) \in R \cap C$, it follows that $\mathrm{marg}_{S_{-i}}\, \beta_{i,h^*}(t^*_i)$ assigns probability one to $\{r_{-i} \in S_{-i}(h^*) : r_{-i}(h^*, x^*_i) = \mathrm{A}\}$. (See Corollary A.1(ii).) Thus, $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h_i] = \delta^{n-1}x^*_i$, as claimed.

Now use the fact that $t^*_i$ strongly believes $R_{-i}$ and $R_{-i} \cap (S_{-i}(h_i) \times T_{-i}) \neq \emptyset$ (since $(s^*_{-i}, t^*_{-i}) \in R_{-i} \cap (S_{-i}(h_i) \times T_{-i})$). It follows from Lemma B.5 that, for each $x \in [0,1)$, $t^*_i$ can secure $\delta^{N-1}x$ at $h_i$. Since $(s^*_i, t^*_i)$ is rational, it follows that $\delta^{n-1}x^*_i \ge \delta^{N-1}x$ for each $x \in [0,1)$. This implies that $\delta^{n-1}x^*_i \ge \delta^{N-1}$, or $x^*_i \ge \delta^{N-n}$.
Proof of Proposition 4.1. Immediate from Lemmata B.4–B.6, plus the fact that $\delta \in (0,1)$ (i.e., so that $x^*_1 = 1 - x^*_2$).
Appendix C Proof of Sufficiency
To complete the proof, we must show Lemma 4.1. That Lemma will be a consequence of the following claim:

Lemma C.1. Fix $h \in H_i$. If $s^*_i, r_i \in S_i(h)$, then

(i) $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] \ge \mathbb{E}\pi_i[r_i \mid t^*_i, h]$, and

(ii) $\mathbb{E}\pi_i[r_i \mid t^*_i, h] = \mathbb{E}\pi_i[s^*_i \mid t^*_i, h]$ implies that either

• $r_i(h) = s^*_i(h)$, or

• $h = (\cdot, 1-\delta) \in H^R_i$ and $r_i(h) = \mathrm{A}$.

Part (i) says that, at every history $h \in H_i$ allowed by $s^*_i$, $s^*_i$ maximizes $t^*_i$'s expected payoffs. Part (ii) says that, if $r_i$ also maximizes $t^*_i$'s expected payoffs at a history $h$ allowed by $s^*_i$, then either $r_i$ specifies the same choice as $s^*_i$ at $h$, or $r_i$ ends the game by accepting an offer of $1-\delta$. Thus, for any history $h \in H_1 \cup H_2$, if a rational strategy-type pair allows that history, then $s^*_i$ must also allow that history. This establishes Lemma 4.1.
The remainder of this subsection is devoted to the proof of Lemma C.1. Throughout, we fix a $k$-period history $h \in H_i$ with $s^*_i \in S_i(h)$. We also fix a strategy $r_i \in S_i(h)$. We first suppose $h$ is allowed by $s^*_{-i}$ (in which case $k \le n$) and then turn to the case in which $h$ is precluded by $s^*_{-i}$. We make implicit use of the fact that $[\underline{x}_n, \overline{x}_n] \neq \emptyset$, from which $n \le N-1$ and $x^*_i = 1 - x^*_{-i}$.
History $h \in H_i$ Allowed by $s^*_{-i}$: At $h$, $t^*_i$'s expected payoff from $s^*_i$ is $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] = \delta^{n-1}x^*_i$. Using the fact that $x^*_1 \in [\underline{x}_n, \overline{x}_n]$,
$$\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] \ge \begin{cases} 1-\delta & \text{if } B_i = B_1 \\ \delta(1-\delta) & \text{if } B_i = B_2 \end{cases}
\quad\text{and}\quad
\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] \ge \begin{cases} \delta^{N-1} & \text{if } B_i \text{ is the Proposer in period } N \\ 0 & \text{if } B_i \text{ is the Receiver in period } N. \end{cases}$$
We make use of these facts below.
Consider the set of histories $h' \in H_i$ on the path from $h$ to $\zeta(s^*_i, s^*_{-i})$. If $r_i$ agrees with $s^*_i$ on this set (i.e., $r_i(h') = s^*_i(h')$ for each such history $h'$), then certainly $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] = \mathbb{E}\pi_i[r_i \mid t^*_i, h]$. So, we suppose that there exists some $h'$ on the path from $h$ to $\zeta(s^*_i, s^*_{-i})$ so that

(i) $r_i(h') \neq s^*_i(h')$ and

(ii) $r_i$ agrees with $s^*_i$ at each history that strictly precedes $h'$.

There are four cases.
Case A: First, suppose that $h' \in H^P_i$ and $r_i(h') \in [0, 1-\delta)$. Then, at $h'$, $t^*_i$ expects the offer $r_i(h')$ to be accepted, and so $\mathbb{E}\pi_i[r_i \mid t^*_i, h] \le \delta^{k-1}r_i(h') < \delta^{k-1}(1-\delta)$. This gives that
$$\mathbb{E}\pi_i[r_i \mid t^*_i, h] < \begin{cases} 1-\delta & \text{if } B_i = B_1 \\ \delta(1-\delta) & \text{if } B_i = B_2, \end{cases}$$
where the case of $B_i = B_2$ follows from the fact that, if $B_2$ is the proposer, then $k \ge 2$. This establishes $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] > \mathbb{E}\pi_i[r_i \mid t^*_i, h]$.
Case B: Second, suppose that $h' \in H^P_i$ and $r_i(h') \in [1-\delta, 1]$. Then, at $h'$, $t^*_i$ expects the offer $r_i(h')$ to be rejected. Since $\beta_{i,h'}(t^*_i)$ assigns probability one to $(s^*_{-i}, t^*_{-i})$, it follows that, at $h'$, $t^*_i$ believes that $B_{-i}$ will

• only make future offers where $B_i$ gets zero share of the pie, and

• accept a future $l$-period offer of $x$ if and only if either (i) $x \in [0, 1-\delta)$ and $l \le N-1$ or (ii) $x \in [0, 1)$ and $l = N$.

(Use Lemma A.1(i) and Corollary A.2 to get that this set is Borel.) It follows that
$$\mathbb{E}\pi_i[r_i \mid t^*_i, h] < \begin{cases} \max\{\delta^{N-1}, \delta^{k+1}(1-\delta)\} & \text{if } B_i \text{ is the Proposer in } N \\ \delta^{k+1}(1-\delta) & \text{if } B_i \text{ is the Receiver in } N. \end{cases}$$
This establishes that $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] > \mathbb{E}\pi_i[r_i \mid t^*_i, h]$, as desired.
Case C: Third, suppose that $h' = (\cdot, x) \in H^R_i$ with $h' \neq (h^*, x^*_{-i})$. Then $h' = (\cdot, 1)$ and $s^*_i(h') = \mathrm{R} \neq \mathrm{A} = r_i(h')$. It follows that $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] > 0 = \mathbb{E}\pi_i[r_i \mid t^*_i, h]$.
Case D: Fourth, suppose that $h' = (h^*, x^*_{-i})$. Repeat the argument in Case B to get that
$$\mathbb{E}\pi_i[r_i \mid t^*_i, h] < \begin{cases} \delta^{k+1}(1-\delta) & \text{if } B_i \text{ is the Receiver in } N \\ \max\{\delta^{N-1}, \delta^{k+1}(1-\delta)\} & \text{if } B_i \text{ is the Proposer in } N. \end{cases}$$
This establishes that $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] > \mathbb{E}\pi_i[r_i \mid t^*_i, h]$, as desired.
Thus, we have established that, at each $h \in H_i$ with $(s^*_i, s^*_{-i}) \in S_i(h) \times S_{-i}(h)$, either $r_i$ agrees with $s^*_i$ on the path from $h$ to $\zeta(s^*_i, s^*_{-i})$, or $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] > \mathbb{E}\pi_i[r_i \mid t^*_i, h]$.
History $h \in H_i$ Precluded by $s^*_{-i}$: There are three cases.

Case A: First, suppose that $h \in H^P_i$. Since $s^*_{-i} \notin S_{-i}(h)$, $\beta_{i,h}(t^*_i)$ assigns probability one to $B_{-i}$ accepting any offer proposed at $h$. Thus, $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] = \delta^{k-1}$. If $r_i(h) = s^*_i(h) = 1$, then certainly $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] = \mathbb{E}\pi_i[r_i \mid t^*_i, h]$. If $r_i(h) \neq s^*_i(h)$, then $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] = \delta^{k-1} > \mathbb{E}\pi_i[r_i \mid t^*_i, h]$.

Case B: Second, suppose that $h = (\cdot, x) \in H^P_{-i} \times [0, 1-\delta)$. Then $s^*_i(h) = \mathrm{A}$ and $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] = \delta^{k-1}(1-x) > \delta^k$. If $r_i(h) = s^*_i(h) = \mathrm{A}$, certainly $\mathbb{E}\pi_i[r_i \mid t^*_i, h] = \mathbb{E}\pi_i[s^*_i \mid t^*_i, h]$. If $r_i(h) = \mathrm{R} \neq \mathrm{A} = s^*_i(h)$, then $\mathbb{E}\pi_i[r_i \mid t^*_i, h] \le \delta^k$, and so $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] > \mathbb{E}\pi_i[r_i \mid t^*_i, h]$.

Case C: Third, suppose that $h = (\cdot, x) \in H^P_{-i} \times [1-\delta, 1]$. Then $\beta_{i,h}(t^*_i)$ assigns probability one to $B_{-i}$ accepting any subsequent offer. Thus, $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] = \delta^k$. If $r_i(h) = s^*_i(h) = \mathrm{R}$, then $\mathbb{E}\pi_i[r_i \mid t^*_i, h] = \delta^k r_i(h, \mathrm{R}) \le \delta^k$. If $r_i(h) = \mathrm{A}$, then $\mathbb{E}\pi_i[r_i \mid t^*_i, h] = \delta^{k-1}(1-x) \le \delta^k$. Thus, $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] \ge \mathbb{E}\pi_i[r_i \mid t^*_i, h]$, and equality holds if and only if one of the following holds: (i) $r_i(h) = s^*_i(h) = \mathrm{R}$ and $r_i(h, \mathrm{R}) = s^*_i(h, \mathrm{R}) = 1$, or (ii) $h = (\cdot, 1-\delta)$ and $r_i(h) = \mathrm{A} \neq s^*_i(h)$.

Thus, we have established that, at each $h \in H_i$ with $s^*_i \in S_i(h)$ and $s^*_{-i} \notin S_{-i}(h)$, either (i) $r_i$ agrees with $s^*_i$ at $h$ (and, in Case C, also at $(h, \mathrm{R})$), (ii) $h = (\cdot, 1-\delta)$ and $r_i(h) = \mathrm{A}$, or (iii) $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h] > \mathbb{E}\pi_i[r_i \mid t^*_i, h]$.
Appendix D Implications for Delay
Proof of Observation 5.1. Fix some $N$ and $\delta$. Let:

• $n^U(\delta)$ be the smallest $n$ so that $\delta^{n-1} < 1-\delta^2$,

• $n^{D_1}(\delta)$ be the smallest $n$ so that $\delta^{n-1} < \delta(1-\delta) + \delta^{N-1}$, and

• $n^{D_2}(\delta)$ be the smallest $n$ so that $\delta^{n-1} < (1-\delta) + \delta^{N-1}$.

Since $N \ge 2$, $n^U(\delta), n^{D_1}(\delta), n^{D_2}(\delta) \ge 2$. Moreover, $n^U(\delta)$ is finite.

If there is no deadline, take $n(\delta) = n^U(\delta)$. If there is a deadline where $B_1$ (resp. $B_2$) proposes in $N$, take $n(\delta) = \min\{n^U(\delta), n^{D_1}(\delta)\}$ (resp. $n(\delta) = \min\{n^U(\delta), n^{D_2}(\delta)\}$). Then $[\underline{x}_n, \overline{x}_n] = \emptyset$ if and only if $n \ge n(\delta)$. The fact that $n(\delta) \le N$ follows from Proposition 5.1(ii).
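The thresholds in this proof are straightforward to compute directly; the sketch below (helper names `n_U`, `n_D1`, `n_D2`, and `n_bar` are ours) mirrors the definitions above:

```python
def smallest_n(pred):
    # smallest n >= 1 satisfying pred(n); the text guarantees one exists
    n = 1
    while not pred(n):
        n += 1
    return n

def n_U(delta):
    return smallest_n(lambda n: delta ** (n - 1) < 1 - delta ** 2)

def n_D1(delta, N):
    return smallest_n(lambda n: delta ** (n - 1) < delta * (1 - delta) + delta ** (N - 1))

def n_D2(delta, N):
    return smallest_n(lambda n: delta ** (n - 1) < (1 - delta) + delta ** (N - 1))

def n_bar(delta, N=None, proposer_in_N=None):
    # n(delta): the period at and beyond which [x_n, x_bar_n] is empty;
    # N=None encodes "no deadline"
    if N is None:
        return n_U(delta)
    if proposer_in_N == 1:
        return min(n_U(delta), n_D1(delta, N))
    return min(n_U(delta), n_D2(delta, N))

print(n_bar(0.9), n_bar(0.5), n_bar(0.9, N=5, proposer_in_N=1))
```

For instance, with $\delta = 0.9$ and no deadline, $n(\delta) = 17$: the interval $[\underline{x}_n, \overline{x}_n]$ is nonempty for every $n \le 16$, so patient Bargainers can rationalize long delays.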
Proof of Proposition 5.1. Fix a state $(s^*_1, t^*_1, s^*_2, t^*_2)$ at which there is rationality, strong belief of rationality, and on path strategic certainty. By Proposition 4.1, $\xi(\zeta(s^*_1, t^*_1, s^*_2, t^*_2)) = (x^*, 1-x^*, n)$, where $x^* \in [\underline{x}_n, \overline{x}_n]$.

Begin with part (i): If $n = N$ and $N$ is odd, then $\underline{x}_N = \delta^{N-N} = 1$ and $\overline{x}_N = 1 - \frac{\delta(1-\delta)}{\delta^{N-1}} < 1$. If $n = N$ and $N$ is even, then $\underline{x}_N \ge \frac{1-\delta}{\delta^{N-1}} > 0$ and $\overline{x}_N = 1 - \delta^{N-N} = 0$. In either case, $[\underline{x}_N, \overline{x}_N] = \emptyset$, and so $n \le N-1$.

Turn to part (ii): Assume $n = N-1$; we will show that $N \le 3$. First take $N$ odd. In this case $\underline{x}_{N-1} \ge \delta^{N-N+1} = \delta$ and $\overline{x}_{N-1} = 1 - \frac{\delta(1-\delta)}{\delta^{N-2}}$. Since $[\underline{x}_{N-1}, \overline{x}_{N-1}] \neq \emptyset$,
$$\delta^{N-2}(1-\delta) \ge \delta(1-\delta).$$
This holds if and only if $N-2 \le 1$ or, equivalently, if and only if $N \le 3$. Next, take $N$ even. Then $\underline{x}_{N-1} = \frac{1-\delta}{\delta^{N-2}}$ and $\overline{x}_{N-1} \le 1-\delta$. So $[\underline{x}_{N-1}, \overline{x}_{N-1}] \neq \emptyset$ implies
$$\delta^{N-2}(1-\delta) \ge 1-\delta.$$
This can hold only if $N = 2$.
It will be convenient to define functions corresponding to the upfront constraints and the deadline constraints. Specifically, define $U_i : (0,1) \times \mathbb{N}^+ \to \mathbb{R}$ and $D_i : (0,1) \times \mathbb{N}^+ \times \mathbb{N}^+ \to \mathbb{R}$ so that

• $U_1(\delta, n) = \frac{1-\delta}{\delta^{n-1}}$,

• $U_2(\delta, n) = 1 - \frac{\delta(1-\delta)}{\delta^{n-1}}$,

• $D_1(\delta, N, n) = \delta^{N-n}$, and

• $D_2(\delta, N, n) = 1 - \delta^{N-n}$.

For given parameters $N$ and $\delta$, $\underline{x}_n = \max\{U_1(\delta, n), D_1(\delta, N, n)\}$ if $N < \infty$ is odd, and $\underline{x}_n = U_1(\delta, n)$ otherwise. Likewise, for given parameters $N$ and $\delta$, $\overline{x}_n = \min\{U_2(\delta, n), D_2(\delta, N, n)\}$ if $N < \infty$ is even, and $\overline{x}_n = U_2(\delta, n)$ otherwise. We will show the following two Lemmata:
Lemma D.1. Fix $n \ge 2$. There exists $\delta[n] \in (\frac{1}{2}, 1)$ so that $U_2(\delta, n) \ge U_1(\delta, n)$ if and only if $\delta \ge \delta[n]$.

Lemma D.2. Fix some $n \le N-2$.

(i) There exists $\overline{\delta}[N,n] \in (0,1)$ so that $U_2(\delta, n) \ge D_1(\delta, N, n)$ if and only if $\delta \ge \overline{\delta}[N,n]$.

(ii) There exists $\underline{\delta}[N,n] \in (0,1)$ so that $D_2(\delta, N, n) \ge U_1(\delta, n)$ if and only if $\delta \ge \underline{\delta}[N,n]$.
Proof of Proposition 5.2. Immediate from Lemmata D.1-D.2.
We begin with the proof of Lemma D.1. For this, it will be useful to observe the following:

Remark D.1. Fix $n \ge 2$.

(i) $U_1(\cdot, n)$ is a strictly decreasing continuous function.

(ii) $U_2(\cdot, n)$ is a strictly increasing continuous function.
Proof of Lemma D.1. Note that the following are equivalent:

(i) $U_2(\delta, n) \ge U_1(\delta, n)$,

(ii) $\delta^{n-1} - \delta(1-\delta) \ge 1-\delta$, and

(iii) $0 \ge 1 - \delta^2 - \delta^{n-1}$.

For any given $n$, the function $f(\delta, n) := 1 - \delta^2 - \delta^{n-1}$ is strictly decreasing and continuous in $\delta$. Moreover, for any given $n$, $\lim_{\delta \to 0} f(\delta, n) = 1$ and $\lim_{\delta \to 1} f(\delta, n) = -1$. Thus, for any given $n$, there exists $\delta[n] \in (0,1)$ so that $f(\delta[n], n) = 0$. It follows that $U_2(\delta, n) \ge U_1(\delta, n)$ if and only if $\delta \ge \delta[n]$.

Now turn to showing that $\delta[n] > \frac{1}{2}$. We first show that $\delta[2] > \frac{1}{2}$. Then we show that $\delta[n]$ is increasing in $n$. The claim then follows.

To see that $\delta[2] > \frac{1}{2}$: Note, $U_1(\frac{1}{2}, 2) = 1 > \frac{1}{2} = U_2(\frac{1}{2}, 2)$. Since $U_1(\cdot, 2)$ is a strictly decreasing continuous function and $U_2(\cdot, 2)$ is a strictly increasing continuous function, it follows that $U_1(\delta[2], 2) = U_2(\delta[2], 2)$ implies $\delta[2] > \frac{1}{2}$.

To see that $\delta[n]$ is strictly increasing in $n$: For any given $\delta$, the function $f(\delta, \cdot)$ is strictly increasing in $n$. Thus, if $f(\delta[n], n) = 0$, then $f(\delta[n], n+1) > 0$. Since $f(\cdot, n+1)$ is strictly decreasing in $\delta$, it follows that $\delta[n+1] > \delta[n]$.
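Because $f(\cdot, n)$ is strictly decreasing with a single sign change on $(0,1)$, the cutoff $\delta[n]$ can be computed by bisection. A sketch (for $n = 2$, $f(\delta, 2) = 1 - \delta^2 - \delta$, so $\delta[2] = (\sqrt{5}-1)/2 \approx 0.618$, confirming $\delta[2] > \frac{1}{2}$):

```python
def f(delta, n):
    # f(delta, n) = 1 - delta**2 - delta**(n-1); strictly decreasing in delta
    return 1 - delta ** 2 - delta ** (n - 1)

def delta_cut(n, tol=1e-12):
    # bisection for the unique root of f(., n) in (0, 1)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid, n) > 0:
            lo = mid  # root lies to the right
        else:
            hi = mid
    return (lo + hi) / 2

roots = [delta_cut(n) for n in range(2, 7)]
print(roots)
```

The computed roots are strictly increasing in $n$, in line with the monotonicity argument in the proof: later agreement periods require more patient Bargainers.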
Now we will turn to the proof of Lemma D.2. It will be convenient to define functions $g : [0,1] \times \mathbb{N}^+ \times \mathbb{N}^+ \to \mathbb{R}$ and $h : [0,1] \times \mathbb{N}^+ \times \mathbb{N}^+ \to \mathbb{R}$, so that
$$g(\delta, N, n) = (1-\delta) - \delta^{n-2}(1 - \delta^{N-n}) \quad\text{and}\quad h(\delta, N, n) = (1-\delta) - \delta^{n-1}(1 - \delta^{N-n}).$$
Lemma D.3. Fix $n \ge 2$.

(i) $0 \ge g(\delta, N, n)$ if and only if $U_2(\delta, n) \ge D_1(\delta, N, n)$.

(ii) $0 \ge h(\delta, N, n)$ if and only if $D_2(\delta, N, n) \ge U_1(\delta, n)$.

Proof. Note that the following are equivalent:

(i) $U_2(\delta, n) \ge D_1(\delta, N, n)$,

(ii) $\delta^{n-1} - \delta(1-\delta) \ge \delta^{N-n} \cdot \delta^{n-1}$,

(iii) $0 \ge \delta(1-\delta) - \delta^{n-1}(1 - \delta^{N-n})$, and

(iv) $0 \ge (1-\delta) - \delta^{n-2}(1 - \delta^{N-n})$,

as desired. Next note that the following are equivalent:

(i) $D_2(\delta, N, n) \ge U_1(\delta, n)$, and

(ii) $0 \ge (1-\delta) - \delta^{n-1}(1 - \delta^{N-n})$,

as desired.
Lemma D.4. Fix $n = N-2$.

(i) There exists $\overline{\delta}[N, N-2] \in (0,1)$ so that $g(\delta, N, N-2) \le 0$ if and only if $\delta \ge \overline{\delta}[N, N-2]$.

(ii) There exists $\underline{\delta}[N, N-2] \in (0,1)$ so that $h(\delta, N, N-2) \le 0$ if and only if $\delta \ge \underline{\delta}[N, N-2]$.

Proof. First note that
$$g(\delta, N, N-2) = g(\delta, n+2, n) = (1-\delta) - \delta^{n-2}(1-\delta^2) = (1-\delta) - \delta^{n-2}(1-\delta)(1+\delta).$$
Then, $g(\delta, N, N-2) \le 0$ if and only if $1 - \delta^{n-2}(1+\delta) \le 0$. Note, the function $k(\delta, n) = 1 - \delta^{n-2}(1+\delta)$ is strictly decreasing and continuous in $\delta$, with $\lim_{\delta\to 0} k(\delta, n) = 1$ and $\lim_{\delta\to 1} k(\delta, n) = -1$. From this, we can find $\overline{\delta}[n+2, n] \in (0,1)$ so that $1 - \delta^{n-2}(1+\delta) \le 0$ if and only if $\delta \ge \overline{\delta}[n+2, n]$.

Next note that
$$h(\delta, N, N-2) = h(\delta, n+2, n) = (1-\delta) - \delta^{n-1}(1-\delta^2) = (1-\delta) - \delta^{n-1}(1-\delta)(1+\delta).$$
Then, $h(\delta, N, N-2) \le 0$ if and only if $1 - \delta^{n-1}(1+\delta) \le 0$. Note, the function $\tilde{k}(\delta, n) = 1 - \delta^{n-1}(1+\delta)$ is strictly decreasing and continuous in $\delta$, with $\lim_{\delta\to 0} \tilde{k}(\delta, n) = 1$ and $\lim_{\delta\to 1} \tilde{k}(\delta, n) = -1$. From this, we can find $\underline{\delta}[n+2, n] \in (0,1)$ so that $1 - \delta^{n-1}(1+\delta) \le 0$ if and only if $\delta \ge \underline{\delta}[n+2, n]$.
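Both cutoffs in Lemma D.4 are roots of strictly decreasing functions of the form $1 - \delta^{m}(1+\delta)$ (with $m = n-2$ for part (i) and $m = n-1$ for part (ii)), so they too can be found by bisection. A sketch for $n = 3$ (helper names are ours; note that for part (i) with $n = 3$ the cutoff is again $(\sqrt{5}-1)/2$):

```python
def k1(delta, n):
    # part (i): g(delta, n+2, n) <= 0  iff  1 - delta**(n-2) * (1 + delta) <= 0
    return 1 - delta ** (n - 2) * (1 + delta)

def k2(delta, n):
    # part (ii): h(delta, n+2, n) <= 0  iff  1 - delta**(n-1) * (1 + delta) <= 0
    return 1 - delta ** (n - 1) * (1 + delta)

def cutoff(fn, n, tol=1e-12):
    # bisection for the unique root of fn(., n) in (0, 1);
    # fn(., n) is strictly decreasing for n >= 3
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if fn(mid, n) > 0 else (lo, mid)
    return (lo + hi) / 2

print(cutoff(k1, 3), cutoff(k2, 3))
```

For the same $n$, the part-(ii) cutoff exceeds the part-(i) cutoff, since $\delta^{n-1} \le \delta^{n-2}$ on $(0,1)$.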
Corollary D.1. Take $n \le N-2$. There exist $\overline{\delta}[n], \underline{\delta}[n] \in (0,1)$ so that the following hold:

(i) For all $\delta \ge \overline{\delta}[n]$, $g(\delta, N, n) \le 0$.

(ii) For all $\delta \ge \underline{\delta}[n]$, $h(\delta, N, n) \le 0$.

To see this, apply Lemma D.4, taking $\overline{\delta}[n] = \overline{\delta}[N, N-2]$ and $\underline{\delta}[n] = \underline{\delta}[N, N-2]$. The claim follows since $g(\delta, N, \cdot)$ and $h(\delta, N, \cdot)$ are increasing in $n$.
Lemma D.5. Fix $n \le N-2$.

(i) There exists $\overline{\delta}[N,n] \in (0,1)$ so that $g(\delta, N, n) \le 0$ if and only if $\delta \ge \overline{\delta}[N,n]$.

(ii) There exists $\underline{\delta}[N,n] \in (0,1)$ so that $h(\delta, N, n) \le 0$ if and only if $\delta \ge \underline{\delta}[N,n]$.
Proof. Begin with part (i) and note that $g(\cdot, N, n) : [0,1] \to \mathbb{R}$ is a continuous function with $\lim_{\delta\to 0} g(\delta, N, n) = 1$ and $\lim_{\delta\to 1} g(\delta, N, n) = 0$. Moreover, by Corollary D.1(i), there is some $\overline{\delta}[n] \in (0,1)$ so that $g(\delta, N, n) \le 0$ if $\delta \ge \overline{\delta}[n]$. Thus, to show the claim, it suffices to show that the function $g(\cdot, N, n)$ does not achieve a local maximum in $(0,1)$.

To show that the function $g(\cdot, N, n)$ does not achieve a local maximum in $(0,1)$, note:
$$\frac{dg(\cdot, N, n)}{d\delta} = -1 - (n-2)\delta^{n-3} + (N-2)\delta^{N-3}.$$
So, if $\delta_* \in (0,1)$ is a local minimum or local maximum, then
$$(N-2)\delta_*^{N-3} = 1 + (n-2)\delta_*^{n-3}. \qquad (1)$$
Moreover,
$$\frac{d^2 g(\cdot, N, n)}{d\delta^2} = -(n-2)(n-3)\delta^{n-4} + (N-2)(N-3)\delta^{N-4}.$$
We show that if $\delta_* \in (0,1)$ satisfies Equation 1, then $\frac{d^2 g(\cdot, N, n)}{d\delta^2}$ is strictly positive at $\delta_*$. This implies that there is no local maximum in $(0,1)$.

Notice that the sign of $\frac{d^2 g(\cdot, N, n)}{d\delta^2}$ is the same as the sign of
$$-(n-2)(n-3)\delta^{n-3} + (N-2)(N-3)\delta^{N-3}.$$
Thus, if $\delta_*$ satisfies Equation 1, then the sign of $\frac{d^2 g(\cdot, N, n)}{d\delta^2}$ at $\delta_*$ is the same as the sign of
$$-(n-2)(n-3)\delta_*^{n-3} + (N-3)[1 + (n-2)\delta_*^{n-3}] = \delta_*^{n-3}(n-2)[N-n] + (N-3).$$
The fact that $\delta_*^{n-3}(n-2)[N-n] + (N-3) > 0$ follows from the fact that $N-2 \ge n \ge 2$.
Turn to part (ii) and note that $h(\cdot, N, n) : [0,1] \to \mathbb{R}$ is a continuous function with $\lim_{\delta\to 0} h(\delta, N, n) = 1$ and $\lim_{\delta\to 1} h(\delta, N, n) = 0$. Moreover, by Corollary D.1(ii), there is some $\underline{\delta}[n] \in (0,1)$ so that $h(\delta, N, n) \le 0$ if $\delta \ge \underline{\delta}[n]$. Thus, to show the claim, it suffices to show that the function $h(\cdot, N, n)$ does not achieve a local maximum in $(0,1)$.

To show that the function $h(\cdot, N, n)$ does not achieve a local maximum in $(0,1)$, note:
$$\frac{dh(\cdot, N, n)}{d\delta} = -1 - (n-1)\delta^{n-2} + (N-1)\delta^{N-2}.$$
So, if $\delta_* \in (0,1)$ is a local minimum or local maximum, then
$$(N-1)\delta_*^{N-2} = 1 + (n-1)\delta_*^{n-2}. \qquad (2)$$
Moreover,
$$\frac{d^2 h(\cdot, N, n)}{d\delta^2} = -(n-1)(n-2)\delta^{n-3} + (N-1)(N-2)\delta^{N-3}.$$
We show that if $\delta_* \in (0,1)$ satisfies Equation 2, then $\frac{d^2 h(\cdot, N, n)}{d\delta^2}$ is strictly positive at $\delta_*$. This implies that there is no local maximum in $(0,1)$.

Notice that the sign of $\frac{d^2 h(\cdot, N, n)}{d\delta^2}$ is the same as the sign of
$$-(n-1)(n-2)\delta^{n-2} + (N-1)(N-2)\delta^{N-2}.$$
Thus, if $\delta_*$ satisfies Equation 2, then the sign of $\frac{d^2 h(\cdot, N, n)}{d\delta^2}$ at $\delta_*$ is the same as the sign of
$$-(n-1)(n-2)\delta_*^{n-2} + (N-2)[1 + (n-1)\delta_*^{n-2}] = \delta_*^{n-2}(n-1)[N-n] + (N-2).$$
The fact that $\delta_*^{n-2}(n-1)[N-n] + (N-2) > 0$ follows from the fact that $N-2 \ge n \ge 2$.
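Since $g(\cdot, N, n)$ and $h(\cdot, N, n)$ start near 1, end at 0, and admit no interior local maximum, each crosses zero at most once on $(0,1)$. This single-crossing property is easy to spot-check on a grid (a numerical sanity check only, not part of the proof):

```python
def g(delta, N, n):
    return (1 - delta) - delta ** (n - 2) * (1 - delta ** (N - n))

def h(delta, N, n):
    return (1 - delta) - delta ** (n - 1) * (1 - delta ** (N - n))

def sign_changes(fn, N, n, steps=10_000):
    # count sign flips of fn(., N, n) on a grid over (0, 1)
    vals = [fn(i / steps, N, n) for i in range(1, steps)]
    return sum((a > 0) != (b > 0) for a, b in zip(vals, vals[1:]))

pairs = [(N, n) for N in range(4, 10) for n in range(2, N - 1)]
assert all(sign_changes(g, N, n) <= 1 for N, n in pairs)
assert all(sign_changes(h, N, n) <= 1 for N, n in pairs)
print("single-crossing holds on all tested (N, n) pairs")
```

The grid check confirms that, for each tested deadline $N$ and period $n \le N-2$, each constraint function changes sign at most once, which is exactly what the cutoff characterization requires.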
Proof of Lemma D.2. Immediate from Lemma D.3 and Lemma D.5.
Appendix E Comparative Statics
Proof of Proposition 6.1. Fix $N^{**} > N^*$. When $N^{**} = \infty$, the claim is immediate from the definitions. If $N^{**}$ and $N^*$ are even, then $\underline{x}(n, N^*, \delta) = \underline{x}(n, N^{**}, \delta)$ and
$$\overline{x}(n, N^*, \delta) = \min\Big\{1 - \frac{\delta(1-\delta)}{\delta^{n-1}},\ 1 - \delta^{N^*-n}\Big\} \le \min\Big\{1 - \frac{\delta(1-\delta)}{\delta^{n-1}},\ 1 - \delta^{N^{**}-n}\Big\} = \overline{x}(n, N^{**}, \delta).$$
If $N^{**}$ and $N^*$ are odd, then
$$\underline{x}(n, N^*, \delta) = \max\Big\{\frac{1-\delta}{\delta^{n-1}},\ \delta^{N^*-n}\Big\} \ge \max\Big\{\frac{1-\delta}{\delta^{n-1}},\ \delta^{N^{**}-n}\Big\} = \underline{x}(n, N^{**}, \delta)$$
and $\overline{x}(n, N^{**}, \delta) = \overline{x}(n, N^*, \delta)$.
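A small numerical check of this monotonicity (helper names are ours; `N=None` encodes no deadline):

```python
def x_low(delta, n, N=None):
    # lower bound on B1's period-n share
    U1 = (1 - delta) / delta ** (n - 1)
    if N is not None and N % 2 == 1:
        # deadline constraint enters the lower bound when N is odd
        return max(U1, delta ** (N - n))
    return U1

def x_high(delta, n, N=None):
    # upper bound on B1's period-n share
    U2 = 1 - delta * (1 - delta) / delta ** (n - 1)
    if N is not None and N % 2 == 0:
        # deadline constraint enters the upper bound when N is even
        return min(U2, 1 - delta ** (N - n))
    return U2

delta, n = 0.8, 2
# Lengthening an even deadline weakly raises the upper bound ...
print(x_high(delta, n, 4) <= x_high(delta, n, 6) <= x_high(delta, n))
# ... and lengthening an odd deadline weakly lowers the lower bound.
print(x_low(delta, n, 5) >= x_low(delta, n, 7) >= x_low(delta, n))
```

So extending the deadline only widens $[\underline{x}_n, \overline{x}_n]$, with the no-deadline bounds as the limiting case.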
Proof of Proposition 6.2. Immediate from Remark D.1.
Appendix F On Path Strategic Uncertainty
Lemma F.1. Fix a game with a deadline $N < \infty$ and suppose $B_i$ proposes in the last period. Let $\xi(\zeta(s^*_1, t^*_1, s^*_2, t^*_2)) = (x^*_1, x^*_2, N-2k)$ for some $\frac{N-1}{2} \ge k \ge 1$. If $(s^*_i, t^*_i, s^*_{-i}, t^*_{-i}) \in R^{2k+2}_i \times R^{2k+1}_{-i}$, then $x^*_i \ge \delta^{N-(N-2k)}$.
Proof. We will suppose the result is true for all $j$ with $k > j \ge 1$ and show that it is also true for $k$. (Note, this will cover the base case of $k = 1$, since there is no such $j$, i.e., the supposition trivially holds.) Throughout, we fix a state $(s^*_i, t^*_i, s^*_{-i}, t^*_{-i}) \in R^{2k+2}_i \times R^{2k+1}_{-i}$. We will show that $\xi(\zeta(s^*_1, t^*_1, s^*_2, t^*_2)) = (x^*_1, x^*_2, N-2k)$ implies $x^*_i \ge \delta^{N-(N-2k)} = \delta^{2k}$. Note, along the path induced by $(s^*_1, s^*_2)$, there is an $(N-2k)$-period history $h^* \in H^P_i$ with $s^*_i(h^*) = x^*_i$ and $s^*_{-i}(h^*, x^*_i) = \mathrm{A}$.
Case A: First, suppose $\beta_{i,h^*}(t^*_i)$ assigns probability one to
$$A_{-i}[h^*, x^*_i] := \{r_{-i} \in S_{-i}(h^*) : r_{-i}(h^*, x^*_i) = \mathrm{A}\} \times T_{-i}.$$
Then, $\mathbb{E}\pi_i[s^*_i \mid t^*_i, h^*] = \delta^{N-2k-1}x^*_i$.

Note next that $t^*_i$ strongly believes $R_{-i}$ and $R_{-i} \cap [S_{-i}(h^*) \times T_{-i}] \neq \emptyset$ (in particular, $(s^*_{-i}, t^*_{-i}) \in R_{-i} \cap [S_{-i}(h^*) \times T_{-i}]$). It follows that, for each $x \in [0,1)$, $t^*_i$ can secure $\delta^{N-1}x$ at $h^*$. (See Lemma B.5.) Since $(s^*_i, t^*_i)$ is rational, it follows that, for each $x \in [0,1)$, $\delta^{N-2k-1}x^*_i \ge \delta^{N-1}x$, or $x^*_i \ge \delta^{2k}x$. From this, $x^*_i \ge \delta^{2k}$.
Case B: Next, suppose $\beta_{i,h^*}(t^*_i)$ assigns strictly positive probability to
$$R_{-i}[h^*, x^*_i] := \{r_{-i} \in S_{-i}(h^*) : r_{-i}(h^*, x^*_i) = \mathrm{R}\} \times T_{-i}.$$
Since $t^*_i$ strongly believes $R^{2k+1}_{-i}$ and $R^{2k+1}_{-i} \cap [S_{-i}(h^*) \times T_{-i}] \neq \emptyset$ (in particular, $(s^*_{-i}, t^*_{-i}) \in R^{2k+1}_{-i} \cap [S_{-i}(h^*) \times T_{-i}]$), it follows that $\beta_{i,h^*}(t^*_i)$ assigns strictly positive probability to $R_{-i}[h^*, x^*_i] \cap R^{2k+1}_{-i}$.

Fix some $(r_{-i}, u_{-i}) \in R_{-i}[h^*, x^*_i] \cap R^{2k+1}_{-i}$. We will show that
$$\mathbb{E}\pi_{-i}[r_{-i} \mid u_{-i}, (h^*, x^*_i, \mathrm{R})] \le \delta^{N-2k}(1 - \delta^{2k-1}).$$
From this the claim follows: Using Remark 1.1 and the fact that $(r_{-i}, u_{-i})$ is rational, we have that
$$\mathbb{E}\pi_{-i}[r_{-i} \mid u_{-i}, (h^*, x^*_i, \mathrm{R})] = \mathbb{E}\pi_{-i}[r_{-i} \mid u_{-i}, (h^*, x^*_i)] \ge \mathbb{E}\pi_{-i}[q_{-i} \mid u_{-i}, (h^*, x^*_i)]$$
for $q_{-i} \in S_{-i}(h^*, x^*_i)$ with $q_{-i}(h^*, x^*_i) = \mathrm{A}$. Thus,
$$\delta^{N-2k}(1 - \delta^{2k-1}) \ge \mathbb{E}\pi_{-i}[r_{-i} \mid u_{-i}, (h^*, x^*_i, \mathrm{R})] \ge \delta^{N-2k-1}(1 - x^*_i),$$
or $x^*_i \ge 1 - \delta(1 - \delta^{2k-1}) > \delta^{2k}$, as desired.
The remainder of the proof is devoted to showing $\mathbb{E}\pi_{-i}[r_{-i} \mid u_{-i}, (h^*, x^*_i, \mathrm{R})] \le \delta^{N-2k}(1-\delta^{2k-1})$. For this, note that $u_{-i}$ strongly believes $R^{2k}_i$ and $R^{2k}_i \cap [S_i(h^*, x^*_i, \mathrm{R}) \times T_i] \neq \emptyset$. (Here, we used the fact that $(s^*_i, t^*_i) \in R^{2k}_i \cap [S_i(h^*, x^*_i, \mathrm{R}) \times T_i]$.) With this, $\beta_{-i,(h^*, x^*_i, \mathrm{R})}(u_{-i})$ assigns probability one to $R^{2k}_i$. As such, to show that $\mathbb{E}\pi_{-i}[r_{-i} \mid u_{-i}, (h^*, x^*_i, \mathrm{R})] \le \delta^{N-2k}(1-\delta^{2k-1})$, it suffices to show the following:

Claim: If $(r_i, u_i) \in R^{2k}_i \cap [S_i(h^*, x^*_i, \mathrm{R}) \times T_i]$ with $\xi(\zeta(r_i, r_{-i})) = (x_1, x_2, n)$, then $x_{-i} \le 1 - \delta^{2k-1}$ and $n \ge N - 2k + 1$.
We now turn to show this claim. Fix $(r_i, u_i) \in R^{2k}_i \cap [S_i(h^*, x^*_i, \mathrm{R}) \times T_i]$ with $\xi(\zeta(r_i, r_{-i})) = (x_1, x_2, n)$. Certainly, $n \ge N-2k+1$. So, we focus on showing that $x_{-i} \le 1 - \delta^{2k-1}$. Write $h[n]$ for the $n$-period history in $H^P_1 \cup H^P_2$ along the path induced by $(r_1, r_2)$. There will be three subcases, based on whether $h[n] \in H^P_i$ or $h[n] \in H^P_{-i}$.
Subcase 1. Suppose n = N . Note (ri, ui) ∈ R2ki ⊆ R2
i and (r−i, u−i) ∈ R−i ∩ (S−i(h[n]) × T−i).So, βi,h[n](ui) assigns probability one to
s−i ∈ S−i(h[n]) : s−i(h[n], x) = A, for all x ∈ [0, 1) × T−i.
(Use Corollary A.2(i) to get that this set is Borel.) Since (ri, ui) is rational, ri(h[n]) = 1 and so
x−i = 0.
Subcase 2. Suppose that $n = N-2j+1$ for some $j$ with $k \ge j \ge 1$, so that $h[n] \in H^P_{-i}$. Since $(r_i, u_i, r_{-i}, u_{-i}) \in R^{2k} \subseteq R^2$, it follows from Lemma B.6(i) that $x_i \ge \delta^{N-(N-2j+1)} \ge \delta^{2k-1}$. Thus, $x_{-i} \le 1 - \delta^{2k-1}$, as desired.
Subcase 3. Suppose that $n = N-2j$ for some $j$ with $k > j \ge 1$, so that $h[n] \in H^P_i$. Note, $(r_i, u_i, r_{-i}, u_{-i}) \in R^{2k} \subseteq R^{2(j+1)} \subseteq R^{2j+2}_i \times R^{2j+1}_{-i}$. It follows from the assumption that the claim holds for all $j < k$ that $x_i \ge \delta^{2j} \ge \delta^{2k-1}$. Thus, $x_{-i} \le 1 - \delta^{2k-1}$, as desired.
Proof of Proposition 7.2. Immediate from Lemmata B.6(i) and F.1.
Remark F.1. If $\pi_1(s_1, s_2) = \pi_1(r_1, r_2)$ and $\pi_2(s_1, s_2) = \pi_2(r_1, r_2)$, then $\xi(\zeta(s_1, s_2)) = \xi(\zeta(r_1, r_2))$.
Proof of Proposition 7.3. Throughout, fix some $(s^*_1, t^*_1, s^*_2, t^*_2) \in R^\infty$ with $\xi(\zeta(s^*_1, s^*_2)) = (x^*_1, x^*_2, n)$.

Since $t^*_1$ strongly believes each $R^m_2$, $\beta_{1,\phi}(t^*_1)(R^m_2) = 1$ for each $m \ge 1$. From this, $\beta_{1,\phi}(t^*_1)(R^\infty_2) = 1$. Since $\mathrm{proj}_S R^\infty$ is a constant set and $(s^*_1, s^*_2) \in \mathrm{proj}_S R^\infty$, it follows from Remark F.1 that
$$R^\infty_2 \subseteq \{r_2 : \xi(\zeta(s^*_1, r_2)) = \xi(\zeta(s^*_1, s^*_2))\} \times T_2.$$
Thus, by Lemma B.2, $x^*_1 \ge \frac{1-\delta}{\delta^{n-1}}$.

If $n = 1$, then it follows from Lemma B.1(ii) that $x^*_2 \ge \frac{\delta(1-\delta)}{\delta^{n-1}}$. So, we focus on the case of $n \ge 2$. Note, along the path of play, there is a two-period history $h^* \in H^P_2$. Since $t^*_2$ strongly believes each $R^m_1$ and $(s^*_1, t^*_1) \in R^m_1 \cap [S_1(h^*) \times T_1] \neq \emptyset$, $\beta_{2,h^*}(t^*_2)(R^m_1) = 1$ for each $m \ge 1$. From this, $\beta_{2,h^*}(t^*_2)(R^\infty_1) = 1$. Since $\mathrm{proj}_S R^\infty$ is a constant set and $(s^*_1, s^*_2) \in \mathrm{proj}_S R^\infty$, it follows from Remark F.1 that
$$R^\infty_1 \subseteq \{r_1 : \xi(\zeta(r_1, s^*_2)) = \xi(\zeta(s^*_1, s^*_2))\} \times T_1.$$
Thus, by Lemma B.3, $x^*_2 \ge \frac{\delta(1-\delta)}{\delta^{n-1}}$.
Proof of Proposition 7.4. Suppose the upfront constraint is violated. Then, $\mathrm{proj}_{S_1} R^\infty_1 \times \mathrm{proj}_{S_2} R^\infty_2$ is not a constant set. Since the game has a deadline, this implies that we can find some Bargainer $B_{-i}$ and some history $h_{-i} \in H_{-i}$, so that the following holds:

(a) $\xi(\zeta(\mathrm{proj}_S R^\infty(h_{-i})))$ contains at least two outcomes, but

(b) for any history $h' \in H$ that strictly follows $h_{-i}$, $\xi(\zeta(\mathrm{proj}_S R^\infty(h')))$ contains at most one outcome.

Write $h_i \in H_i$ for the last history in $H_i$ that precedes $h_{-i}$. So, if $h_{-i} \in H^R_{-i}$, then $h_{-i} = (h_i, x)$ for some $x \in [0,1]$. If $h_{-i} \in H^P_{-i}$, then $h_{-i} = (h_i, x, \mathrm{R})$ for some $x \in [0,1]$.

We will show that, for any $(s_i, t_i) \in R^\infty_i(h_{-i})$, there is some $(s_{-i}, t_{-i})$ so that, at $(s_i, t_i, s_{-i}, t_{-i})$, $B_i$ faces uncertainty about how $B_{-i}$ breaks indifferences.
Step A: This step shows that any two outcomes in $\xi(\zeta(\mathrm{proj}_S R^\infty(h_{-i})))$ are $B_{-i}$-equivalent. Fix $(s_i, t_i, s_{-i}, t_{-i}), (r_i, u_i, r_{-i}, u_{-i}) \in R^\infty(h_{-i})$. Then, $t_{-i}$ and $u_{-i}$ strongly believe $R^1_i, R^2_i, \ldots$. It follows from the conjunction property of strong belief that $t_{-i}$ and $u_{-i}$ strongly believe $R^\infty_i$. Using the fact that $(s_{-i}, t_{-i})$ and $(r_{-i}, u_{-i})$ are rational, plus condition (b), it follows that

(i) $\mathbb{E}[s_{-i} \mid t_{-i}, h_{-i}] = \Pi_{-i}(\xi(\zeta(s_i, s_{-i}))) \ge \Pi_{-i}(\xi(\zeta(r_i, r_{-i}))) = \mathbb{E}[r_{-i} \mid t_{-i}, h_{-i}]$ and

(ii) $\mathbb{E}[r_{-i} \mid u_{-i}, h_{-i}] = \Pi_{-i}(\xi(\zeta(r_i, r_{-i}))) \ge \Pi_{-i}(\xi(\zeta(s_i, s_{-i}))) = \mathbb{E}[s_{-i} \mid u_{-i}, h_{-i}]$.

Thus, $\Pi_{-i}(\xi(\zeta(s_i, s_{-i}))) = \Pi_{-i}(\xi(\zeta(r_i, r_{-i})))$, as required.
Step B: First, suppose that $h_{-i} = (h_i, x) \in H^R_{-i}$. Fix some $(s_i, t_i) \in R^\infty_i(h_{-i})$. Note that the sets

• $A_{-i}[h_{-i}] := \{q_{-i} \in S_{-i}(h_{-i}) : q_{-i}(h_{-i}) = \mathrm{A}\} \times T_{-i}$, and

• $R_{-i}[h_{-i}] := \{q_{-i} \in S_{-i}(h_{-i}) : q_{-i}(h_{-i}) = \mathrm{R}\} \times T_{-i}$

are both Borel. (Lemma A.1(ii).) Notice, by construction of $h_{-i}$, there exist $(s_{-i}, t_{-i}), (r_{-i}, u_{-i}) \in R^\infty_{-i}(h_{-i})$ with $s_{-i}(h_{-i}) = \mathrm{A} \neq \mathrm{R} = r_{-i}(h_{-i})$. States in $\{(s_i, t_i)\} \times A_{-i}[h_{-i}]$ (resp. $\{(s_i, t_i)\} \times R_{-i}[h_{-i}]$) necessarily induce distinct outcomes from $(s_i, t_i, r_{-i}, u_{-i})$ (resp. $(s_i, t_i, s_{-i}, t_{-i})$), since they result in the Bargaining game concluding in different Bargaining phases. It follows that, if $\beta_{i,h_i}(t_i)(A_{-i}[h_{-i}]) > 0$ (resp. $\beta_{i,h_i}(t_i)(R_{-i}[h_{-i}]) > 0$), then at $(s_i, t_i, r_{-i}, u_{-i})$ (resp. $(s_i, t_i, s_{-i}, t_{-i})$), $B_i$ faces uncertainty about how $B_{-i}$ breaks indifferences.
Step C: Finally, suppose that $h_{-i} = (h_i, x, \mathrm{R}) \in H^P_{-i}$. Fix some $(s_i, t_i) \in R^\infty_i(h_{-i})$. If $(s_i, \beta_{i,h_{-i}}(t_i))$ does not have a distinguished outcome, then at any state in $\{(s_i, t_i)\} \times R^\infty_{-i}(h_{-i})$, $B_i$ is uncertain about how $B_{-i}$ breaks indifferences. So, we will suppose that $(s_i, \beta_{i,h_{-i}}(t_i))$ has a distinguished outcome, i.e., there exists some event $E_{-i} \subseteq S_{-i} \times T_{-i}$ with $\beta_{i,h_{-i}}(t_i)(E_{-i}) > 0$ and $\xi(\zeta(\{s_i\} \times \mathrm{proj}_{S_{-i}} E_{-i})) = \{(x^*_1, x^*_2, n)\}$. Note, by (a), there exists some $(s_{-i}, t_{-i}) \in R^\infty_{-i}(h_{-i})$ with $\xi(\zeta(s_i, s_{-i})) \neq (x^*_1, x^*_2, n)$. Then, at $(s_i, t_i, s_{-i}, t_{-i})$, $B_i$ faces uncertainty about how $B_{-i}$ breaks indifferences.
References
Abreu, Dilip and David Pearce. 2007. “Bargaining, reputation, and equilibrium selection in
repeated games with contracts.” Econometrica 75(3):653–710.
Abreu, Dilip, David Pearce and Ennio Stacchetti. 2012. “One-Sided Uncertainty and Delay in
Reputational Bargaining.” Economic Theory Center Working Paper (45-2012).
Abreu, Dilip and Faruk Gul. 2000. “Bargaining and Reputation.” Econometrica 68(1):85–117.
Admati, A.R. and M. Perry. 1987. “Strategic Delay in Bargaining.” The Review of Economic
Studies 54(3):345–364.
Aliprantis, C.D. and K.C. Border. 2007. Infinite dimensional analysis: a hitchhiker’s guide.
Springer Verlag.
Battigalli, P. and M. Siniscalchi. 1999. “Hierarchies of Conditional Beliefs and Interactive Epis-
temology in Dynamic Games.” Journal of Economic Theory 88(1):188–230.
Battigalli, P. and M. Siniscalchi. 2002. “Strong Belief and Forward Induction Reasoning.” Journal
of Economic Theory 106(2):356–391.
Battigalli, Pierpaolo. 2003. “Rationalizability in infinite, dynamic games with incomplete infor-
mation.” Research in Economics 57(1):1–38.
Ben-Porath, E. 1997. “Rationality, Nash Equilibrium and Backwards Induction in Perfect-
Information Games.” The Review of Economic Studies 64:23–46.
Cramton, P.C. 1984. “Bargaining with incomplete information: An infinite-horizon model with
two-sided uncertainty.” The Review of Economic Studies 51(4):579–593.
Dekel, Eddie, Drew Fudenberg and David K Levine. 1999. “Payoff information and self-confirming
equilibrium.” Journal of Economic Theory 89(2):165–185.
Farber, H.S. and H.C. Katz. 1979. “Interest arbitration, outcomes, and the incentive to bargain.”
Industrial and Labor Relations Review 33(1):55–63.
Fearon, J.D. 2004. “Why do some Civil Wars Last so Much Longer than Others?” Journal of
Peace Research 41(3):275–301.
Feinberg, Y. and A. Skrzypacz. 2005. “Uncertainty about Uncertainty and Delay in Bargaining.”
Econometrica 73(1):69–91.
Fudenberg, D., D. Levine and J. Tirole. 1985. “Infinite-Horizon Models of Bargaining with One-
Sided Incomplete Information.” Game Theoretic Models of Bargaining pp. 73–98.
Fudenberg, D. and D.K. Levine. 1993. “Self-confirming equilibrium.” Econometrica: Journal of
the Econometric Society pp. 523–545.
Grossman, S.J. and M. Perry. 1986. “Sequential Bargaining under Asymmetric Information.”
Journal of Economic Theory 39(1):120–154.
Harsanyi, J.C. 1967. “Games with Incomplete Information Played by “Bayesian” Players, I-III.
Part I. The Basic model.” Management Science pp. 159–182.
Kohlberg, E. 1981. “Some Problems with the Concept of Perfect Equilibrium.” Rapp. Rep. NBER
Conf. Theory Gen. Econ. Equilibr. K. Dunz N. Singh, Univ. Calif. Berkeley .
Ortner, Juan. 2013. "Optimism, Delay and (In)Efficiency in a Stochastic Model of Bargaining."
Games and Economic Behavior 77(1):352–366.
Powell, R. 2006. “War as a Commitment Problem.” International Organization 60(01):169–203.
Rubinstein, A. 1982. “Perfect Equilibrium in a Bargaining Model.” Econometrica 50(1):97–109.
Rubinstein, A. 1985. “A Bargaining Model with Incomplete Information about Time Preferences.”
Econometrica 53(5):1151–1172.
Shavell, S. 1982. “Suit, Settlement, And Trial: A Theoretical Analysis Under Alternative Methods
for the Allocation of Legal Costs.” Journal of Legal Studies 11.
Sobel, J. and I. Takahashi. 1983. “A multistage model of bargaining.” The Review of Economic
Studies 50(3):411–426.
Stahl, Ingolf. 1977. An n-person bargaining game in the extensive form. In Mathematical economics
and game theory. Springer pp. 156–172.
Stuart, Harborne W. 2004. “Surprise moves in negotiation.” Negotiation Journal 20(2):239–251.
Walton, Richard and Robert McKersie. 1965. A behavioral theory of labor negotiations: An
analysis of a social interaction system. McGraw-Hill.
Watson, J. 1998. “Alternating-Offer Bargaining with Two-Sided Incomplete Information.” Review
of Economic Studies 65(3):573–594.
Wolitzky, Alexander. 2012. “Reputational Bargaining with Minimal Knowledge of Rationality.”
Econometrica 80(5):2047–2087.
Yildiz, M. 2004. “Waiting to persuade.” The Quarterly Journal of Economics 119(1):223–248.
Yildiz, M. 2011. “Bargaining with Optimism.” Annual Review of Economics 3.