Communication with Language Barriers*
Francesco Giovannoni† Siyang Xiong‡
February 23, 2017
Abstract
We consider a general communication model with language barriers, and study
whether language barriers harm welfare in communication. Contrary to the negative
result in Blume and Board (2013), we provide two positive results. First, the negative
effect of any language barriers can be completely eliminated, if we introduce a new
communication protocol called N-dimensional communication. Second, even if we
stick to the classical 1-dimensional communication (as in Crawford and Sobel (1982)),
for any payoff primitive, there exist some language barriers whose maximal equilibrium welfare dominates any cheap-talk equilibrium under no language barriers.
*We thank.... †Department of Economics, University of Bristol, [email protected]. ‡Department of Economics, University of Bristol, [email protected].
Government officer: “Why don’t they just speak English?”
Dr. Eleanor Arroway: “Maybe because 70% of the planet speaks other languages? Mathematics is the only truly universal language. It’s no coincidence that they are using primes.”
— Contact (the movie), 1997
1 Introduction
Communication is about the transmission of information, so a natural question to ask is “what” information is actually transmitted, and this has been the focus of the literature on strategic communication, or “cheap talk.” This literature, however, has typically ignored
the issue of “how” information is transmitted. Yet, everyday experience suggests the
intuition that how information is transmitted may hinder or help communication. For in-
stance, it is notoriously hard to convey humor or any other emotion in modern electronic
communication but once electronic communication became sufficiently important, emo-
tions were developed to deal with this very issue (Curran and Casey (2006)). Similarly,
there is a debate on the appropriateness of releasing medical records to patients where
one of the concerns is that patients may not be able to understand medical jargon and so
the common suggestion is to avoid such jargon when likely to cause misunderstandings
(see Ross and Lin (2003) for a survey). In general, the importance of being able to com-
municate effectively is amply recognized in many fields. For example, good rhetorical
skills are considered crucial for modern politicians to the point that there is now concern
that comprehensive access to the media has made the rhetorical component of political communication much more important than its substantive content (see Spence (1973) and McNair (2011)). But the ability to communicate effectively is also obviously very important in many other fields such as marketing and sales, law and, of course, academia.1
So, in order to fully understand the process of communication and to determine to what extent it can be successful, it is very important not just to determine what the parties involved want to say, but also how they say it. In this paper, we take the “how” issue
1A large part of Thomson (2001) is dedicated to discussing issues that ultimately boil down to how to
communicate research findings in Economics.
seriously and study “language barriers”, as introduced in Blume and Board (2013), in
one-shot communication games. These language barriers allow us to model the possibil-
ity that in situations of strategic communication individuals may not be able to send or
understand certain messages.
To get a more precise intuition for our results, consider a standard sender-receiver model, where a sender (S) privately observes the payoff-relevant state t ∈ T, and then sends a message m ∈ M to the receiver (R), where M denotes the set of all possible messages. The receiver cannot observe t, but she has to take a payoff-relevant action a ∈ A upon receiving m. However, each player i may understand only a subset of the messages, denoted by λi ⊆ M. Following Crawford and Sobel (1982), almost all previous papers implicitly assume common knowledge of λi = M for every i: we say “language barriers” do not exist if this holds, and exist otherwise. When language barriers exist, a subset λi ⊆ M denotes a language type of player i, while ΛS and ΛR represent the sets of all language types of the sender and the receiver, respectively. Then, a common prior on T × ΛS × ΛR defines a standard Bayesian game, which is the “language barriers” model proposed in Blume and Board (2013). This provides a parsimonious way to study a fundamental question: do language barriers improve or harm the welfare of communication, or equivalently, is equilibrium welfare under language barriers bigger or smaller than that under no language barriers? A first answer is provided in Blume and Board (2013), who
show that in the presence of language barriers with language types being private infor-
mation, there will necessarily be indeterminacies of meaning in common-interest games.2 A
direct consequence of their result is that though an efficient equilibrium always exists in
common-interest games without language barriers, efficiency is impossible in the pres-
ence of private information over language types. Facing this negative result, we pursue
this fundamental question in two directions, a normative one and a positive one. We first
ask: is there a natural communication protocol that can eliminate the negative welfare
effect of any language barriers? Our second question is: for any payoff primitive, can
we find some language barriers that (weakly) improve equilibrium welfare? We provide
positive answers to both questions.
2Indeterminacies of meaning arise when, in the presence of language barriers, players’ equilibrium strategies are such that they would want to deviate if they knew their opponent’s language type.
Our first main result is inspired by a phenomenon we observe in real-life commu-
nication, which is that messages are formed by combining basic units to make complex
structures that convey meaning; such structures can always achieve ever higher levels of
complexity, depending on how complex the meaning is. So, the English language can
form a relatively simple sentence structure to convey a simple message such as “close the
door”, but can build much more complex structures if communication requires it.3 Thus,
it seems restrictive in modeling communication to assume a fixed number of messages,
each with a predetermined level of complexity rather than assuming that such messages
can always be used as building blocks capable of forming more complex structures. The
communication protocol in Crawford and Sobel (1982) or Blume and Board (2013) which
is 1-dimensional, implicitly forbids forming such more complex structures, so we relax
this assumption in the simplest way possible by assuming that the set of available mes-
sages extends to a multi-dimensional set MN (for some finite integer N). This simple
change in assumptions allows us to describe to some extent this self-generating property
of real-life communication (aside from languages, think of binary codes in computer sci-
ence and Morse code) and yet, to the best of our knowledge, ours is the first paper to formalize this in the literature on strategic communication.
To be more precise, in 1-dimensional communication, as in Crawford and Sobel (1982) or Blume and Board (2013), for a given M, the sender is allowed to send a message m ∈ M. Under our N-dimensional communication protocol, the sender is allowed to send a message m ∈ M^N.4 In addition, N-dimensional communication must respect language barriers, if they exist. In particular, type λS (⊆ M) of the sender can send only messages in (λS)^N, and type λR (⊆ M) of the receiver can only understand messages in (λR)^N. Our first main result is that any (finite) equilibrium which would obtain in a
game with one-dimensional communication and no language barriers can be mimicked
by an equilibrium of the same game if we added any language barriers but allowed for
N-dimensional communication (for sufficiently large N). In this sense, the negative ef-
3Chrystal (2006) discusses how one of the fundamental characteristics of any language is the hierarchical structure of its syntax.
4We consider only one-shot messages, and as a result, N-dimensional communication is not a conversation (i.e., N-round communication). Furthermore, it is different from multi-dimensional cheap talk as in Battaglini (2002) and Levy and Razin (2007), where the multiple dimensions refer to the dimensions of the payoff types.
fect of language barriers can be completely eliminated, if we allow for multi-dimensional
communication.
Technically, there are three obstacles for effective communication in the presence of
language barriers: (1) the sender may not know the receiver’s language type, and hence
may not know what messages to send; (2) the receiver may not know the sender’s lan-
guage type, and hence may not know how to interpret a received message; (3) there may
not be enough commonly known messages to transmit all the information. In Section 4.2.1, we show that all three obstacles can be overcome with multi-dimensional communication, but there is one important point worth emphasizing. It is obvious that
multi-dimensional communication enlarges the set of possible messages available to the
sender, which may lead one to wonder whether this is all that matters. In fact, this point
resolves only the third technical obstacle mentioned above, but does not eliminate asym-
metric information regarding the sender’s and the receiver’s language types. In section
4.2.1, we show through a couple of examples that it is the (multiple) dimensionality of the
communication that overcomes asymmetric information about language types.
In the second part of the paper, we tackle the second question in the context of
1-dimensional communication. In particular, we follow Goltsman, Horner, Pavlov, and
Squintani (2009) and Blume and Board (2010) in comparing welfare across several proto-
cols for cheap-talk communication: arbitration, mediation, language barriers and noisy
talk. Our main result is a linear ranking of the maximal welfare achieved in these different
protocols:
ΦLB ≥ ΦM ≥ ΦILB ≥ ΦN,
where ΦLB, ΦM, ΦILB, ΦN are the maximal equilibrium welfare achieved in a generic
sender-receiver game under language barriers, mediation, language barriers with the re-
striction that language types are distributed independently of payoff states (we refer to
these as independent language barriers from now on), and noisy talk, respectively. One
immediate implication is that, for any payoff primitive, there exist some language barri-
ers whose maximal equilibrium welfare (weakly) dominates any equilibrium in the cor-
responding game without language barriers.
While both Goltsman, Horner, Pavlov, and Squintani (2009) and Blume and Board
(2010) ask a very similar question, methodologically our approach is quite different. Golts-
man, Horner, Pavlov, and Squintani (2009) and Blume and Board (2010) consider the case
of quadratic preferences and uniform payoff distribution; they first argue that mediation provides an upper bound to the welfare achievable under language barriers with independence and under noisy talk, and then construct a specific equilibrium under such language barriers and under noisy talk which achieves the welfare upper bound, i.e., they establish an equivalence result on (maximal) welfare for the three protocols. Instead, we go to the
roots of the incentives underneath each protocol, and show that equilibria with language
barriers, mediation, independent language barriers and noisy talk correspond to a series
of increasingly restrictive incentive compatibility conditions in that order, which gener-
ates the welfare order described above. Because of this approach, our results go beyond
the environment with quadratic preferences and uniform payoff distribution and indeed
hold for any general preference and distributional assumptions. We provide two further
results. First, we consider two possible notions of arbitration, which are simply forms of mediation where one of the two incentive constraints - the sender’s incentive to reveal the truth to the mediator and the receiver’s incentive to follow the mediator’s suggested actions - is relaxed. The first notion of arbitration corresponds to the one defined in
Goltsman, Horner, Pavlov, and Squintani (2009), where it is assumed that the receiver
must play the strategies recommended by the arbitrator whereas the sender must still be
incentivised to reveal the payoff state. In keeping with Myerson’s (1991) terminology, we call this arbitration with adverse selection. The second notion of arbitration is absent in
the previous literature but is a modification of mediation where it is assumed the sender
must truthfully report the payoff state to the arbitrator whereas the receiver must still
be incentivised to follow the arbitrator’s recommended action. We call this arbitration
with moral hazard. Given these definitions, we show that the maximal equilibrium wel-
fare achieved with language barriers dominates that under arbitration with moral hazard
whereas no such ranking can be established with regard to arbitration with adverse se-
lection. This immediately establishes that both arbitration and language barriers welfare-
dominate mediation, but a general ranking between arbitration and language barriers is
not possible.
Our final result shows, through an example, that the welfare equivalence between
mediation, independent language barriers and noisy talk established by Goltsman, Horner,
Pavlov, and Squintani (2009) and Blume and Board (2010) is not robust if we relax the
uniform-distribution assumption on payoff states.
The remainder of the paper proceeds as follows: we discuss the literature in Section
2; we describe the model in Section 3; Section 4 shows how N-dimensional communica-
tion can always replicate equilibria obtained without language barriers, no matter what
these are; Section 5 shows how some language barriers can improve welfare even under
1-dimensional communication and compares such language barriers to other noisy communication protocols; Section 6 concludes, and the Appendix contains all the proofs not in the main part of the paper.
2 Literature Review
The literature on communication in games of asymmetric information is very large. Craw-
ford and Sobel (1982) introduced the canonical “cheap talk” setting, with an informed
“sender” who sends (costless) messages to an uninformed “receiver” who, in turn, takes
an action which affects them both. Since then, a vast literature has developed, which ex-
tended the analysis in many different directions. For example, beginning with Milgrom (1981), there is a significant amount of work that considers communication when messages are (costless) evidence so that lying is not allowed, including Kartik (2009) where ly-
ing is arbitrarily costly. Other important areas of research are those where the analysis has been extended to multiple senders or multi-dimensional payoff state spaces (e.g. Battaglini (2002), Chakraborty and Harbaugh (2007) and Levy and Razin (2007)) or to is-
sues of commitment amongst the parties: Dessein (2002) and Krishna and Morgan (2008)
focus on various types of commitment on the part of the receiver while Kamenica and
Gentzkow (2011) assume the sender commits ex-ante to an informational mechanism.
Finally, there are important extensions which consider the dynamics of interactions be-
tween senders and receivers when their preferences differ (e.g. Sobel (1985) and Morris
(2001)) or when there is uncertainty about the quality of the sender’s information (e.g.
Scharfstein and Stein (1990) and Ottaviani and Sorensen (2006)).
In all of this literature, one assumption is that language is never an issue. A signifi-
cant exception is Farrell (1993), where the issue of how exactly information is transmitted
is taken seriously. There is a “rich language assumption”, which excludes language barri-
ers, and the crucial restriction is that messages come with some intrinsic meaning. Thus,
for Farrell (1993), the restriction is not that players cannot use or understand some mes-
sages but rather that, whenever credible, messages should be taken literally.
Still, a few authors have argued that language is necessarily too coarse for com-
munication in certain environments. For example, Arrow (1975) discusses the reasons for organizational codes, and both Cremer, Garicano, and Prat (2007) and Sobel (2015) model
such codes by using a setting where messages are too few to avoid ambiguity. While
our results suggest that N-dimensional communication can overcome all such issues, in
those environments there may be reasons, such as the complexity or the time needed to
develop and understand such messages, that pose substantial limits on how much can be
done with them. In other contexts, on the other hand, it is likely that successful commu-
nication is so important that such complex messaging strategies are worth pursuing. For
example, in the Arecibo Message Project a message was broadcast from Earth to potential
intelligent alien civilizations. This message contains information about our DNA and our solar system and is encoded using a binary system, in a way not dissimilar to our equilibrium construction. The science fiction novel Contact (by Sagan (1985)) also addresses the is-
sue of one-shot communication in the presence of language barriers and it too provides
a solution where a common language is established before the content of the message is
delivered.5
As already discussed, the closest work to ours is Blume and Board (2013) who in-
troduce the notion of language types and use it to describe language barriers. The focus
in Blume and Board (2013) is on describing how even in common interest games, several
inefficiencies do arise as a result of language barriers. We adopt the same framework but
consider any communication game (not just common-interest) and introduce the notion
of N-dimensional communication. We show that such a communication protocol can replicate any equilibrium of the corresponding game without language barriers. Blume (2015)
looks again at the issues raised by language barriers in a sender-receiver context where
5Sagan also participated in the design of the various messages attached to the two Pioneer and two Voyager probes. For a scientific discussion of communication with extra-terrestrials, see D.A. Vakoch (2011).
the sender still has private information about her language type but there is no common
prior on it. We do not focus on higher-order uncertainty but due to the ex-post nature of
our results, these would be robust in such settings.
In our paper, we also look at whether particular language barriers can improve upon communication in non-common-interest settings. A few papers have particular relevance
to our work here. Krishna and Morgan (2004) show that more (Pareto) efficient equilibria
may be obtained by allowing the informed sender and uninformed receiver to exchange messages at a first stage and then allowing the sender to send a second message.
The N-dimensional messages in our setting should not be interpreted as a conversation
as all communication takes place in a single stage. Blume, Board, and Kawamura (2007)
show that the exogenously given possibility of an error in communication actually im-
proves communication in equilibrium, while in our setting it is exogenous language bar-
riers that provide such results. In fact, Goltsman, Horner, Pavlov, and Squintani (2009)
provide an upper bound on ex-ante efficiency if mediation is introduced in the model and
show that both Krishna and Morgan (2004) and Blume, Board, and Kawamura (2007) at
best can reach, but not surpass that bound.6 Blume and Board (2010) study language bar-
riers under the assumption of independence between the priors on language types and
payoff types and argue that the efficiency bound can be reached by language barriers. We
extend those results to a class of much more general communication games and provide
a linear ranking amongst all these communication protocols. In particular, we show that
under the independence assumption but in this general setting, optimal language barri-
ers will always do no worse than optimal noisy talk and provide an example where they
do strictly better. This implies that, in general and in contrast with the conclusions drawn
in Goltsman, Horner, Pavlov, and Squintani (2009) and Blume and Board (2010), noisy
communication cannot always achieve the efficiency bound obtained through mediated
communication. Finally, we go beyond the independence assumption between payoff
and language types and show that the optimal such language barriers can do better than
mediation, whereas we show with an example that a comparison with arbitration cannot
6Ganguly and Ray (2011) argue that any noisy communication protocol requires a larger set of messages
than those used in the standard Crawford and Sobel (1982) setting. They show that simple mediation,
where no more messages can be used than in the corresponding Crawford and Sobel (1982) setting, does
not improve on such setting.
be made without specifying the welfare function.
3 Model
Let I denote a finite set of agents, and for every agent i ∈ I, we use Ai and Ti to denote the sets of actions and payoff states of agent i, respectively. Throughout the paper, we utilize the notational convention that a subscript i refers to agent i whereas no subscript refers to all agents. Thus, A ≡ ∏i∈I Ai and T ≡ ∏i∈I Ti. Agent i has the utility function ui : T × A → R.

Let M denote the set of all possible messages. For every agent i ∈ I, we use a non-empty Λi ⊆ 2^M \ {∅} to denote the set of language types of agent i. Each language type λi ∈ Λi is defined as the set of messages that agent i understands. There is a common prior π on T × Λ, and let πT and πΛ denote the marginal distributions on T and Λ, respectively. We will sometimes impose the following assumption, and we will state it explicitly, if we do.

Assumption 1 T and Λ are independently distributed.

We use |X| to denote the cardinality of a set X. Throughout the paper, we assume |M| > 1 and |Λ| < ∞. As usual, −i represents I \ {i}, and x−i represents (xj)j∈I\{i}.
For any positive integer N, we define an N-dimensional communication game. Before the game starts, nature chooses a state-type profile (t, λ) according to π. Then, upon privately observing (ti, λi), every agent i ∈ I sends an N-dimensional message mi ≡ (mi^1, ..., mi^N) ∈ (λi)^N. Finally, upon observing [ti, λi, (mj)j∈I], every agent i ∈ I takes an action ai ∈ Ai.7
7This setting allows for everyone to be both a “sender” and a “receiver” but can easily be adapted to allow for the cases where only some players are senders and/or only some players are receivers. For the
former, it suffices to impose that some players (the non-senders) have a singleton payoff state space and for
the latter, it suffices to impose that some players (the non-receivers) have a singleton action space.
Thus, a game is defined by a tuple ⟨I, M, T, Λ, π, A, (ui : T × A → R)i∈I, N⟩, and players’ strategies in the game are8

(σi : Ti × Λi → M^N, ρi : Ti × Λi × (M^N)^I → Ai), ∀i ∈ I,

such that σi and ρi are measurable with respect to Λi. More precisely, the measurability of σi means σi(ti, λi) ∈ (λi)^N for every (i, t, λ) ∈ I × T × Λ. The interpretation is that a language type λi understands only the messages with which he is endowed, and hence this type can send a string of messages only in λi, i.e., the restriction σi(ti, λi) ∈ (λi)^N defines “language barriers” for agents when sending messages.

We define the measurability of ρi as follows. Define

x ∼λi y if and only if x = y ∈ λi or {x, y} ∩ λi = ∅,

i.e., type λi can distinguish two messages with which he is endowed, but treats all the other messages as a single and distinct “nonsense” message. Then, for any positive integer K, define

(x^1, ..., x^K) ∼λi (y^1, ..., y^K) if and only if x^k ∼λi y^k, ∀k ∈ {1, ..., K}.

The measurability of ρi means

(mj)j∈I ∼λi (m′j)j∈I ⟹ ρi[ti, λi, (mj)j∈I] = ρi[ti, λi, (m′j)j∈I],
∀(i, t, λ, m, m′) ∈ I × T × Λ × (M^N)^I × (M^N)^I.

We use “m ≁λi m′” to denote that “m ∼λi m′” is false. This measurability requirement captures “language barriers” for agents when receiving messages.
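To fix ideas, the relation ∼λi and its coordinate-wise extension can be sketched in a few lines of code; the function names and the sample language type below are illustrative, not part of the model.

```python
# A minimal sketch of the equivalence relation ~_{lambda_i}: a language type
# distinguishes the messages it understands, and pools all other messages
# into a single "nonsense" message. Names here are illustrative.

def equivalent(x, y, lam):
    """x ~_{lam} y iff x = y and both lie in lam, or neither lies in lam."""
    if x in lam and y in lam:
        return x == y
    return x not in lam and y not in lam

def equivalent_profile(xs, ys, lam):
    """Extend ~_{lam} coordinate-by-coordinate to K-dimensional messages."""
    return len(xs) == len(ys) and all(equivalent(x, y, lam) for x, y in zip(xs, ys))

lam = {"a", "b"}                       # a language type understanding {a, b}
assert equivalent("a", "a", lam)       # understood and identical
assert not equivalent("a", "b", lam)   # understood and distinct
assert equivalent("c", "d", lam)       # both nonsense, hence indistinguishable
assert equivalent_profile(("a", "c"), ("a", "d"), lam)
```

Measurability of ρi then says precisely that a receiver's action cannot vary across message profiles that are equivalent in this sense.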
Given a strategy profile (σ, ρ) ≡ (σi, ρi)i∈I, the final utility of agent i given a state-type profile (t, λ), denoted by Ui (σ, ρ | t, λ), is defined as follows:

Ui (σ, ρ | t, λ) = ui (t, [ρj (tj, λj, (σl (tl, λl))l∈I)]j∈I).   (1)

8For notational ease, we focus on pure strategies. The analysis can be extended to mixed strategies in a straightforward way, but requires much messier notation.
Define

Ui (σ, ρ) = ∫T×Λ Ui (σ, ρ | t, λ) π (dt, dλ), ∀i ∈ I,

i.e., Ui (σ, ρ) is player i’s expected payoff given the strategy profile (σ, ρ).

Instead of considering (language-)interim equilibria as in Blume and Board (2013), we adopt the stronger solution concept of (language-)ex-post equilibrium.9 Given any (ti, λ) ∈ Ti × Λ, let π (· | ti, λ) denote the distribution of t−i conditional on (ti, λ).

Definition 1 (σ, ρ) is an equilibrium if

∫T−i [Ui (σ, ρ | ti, t−i, λ) − Ui ((σ′i, σ−i), (ρ′i, ρ−i) | ti, t−i, λ)] π (dt−i | ti, λ) ≥ 0,   (2)

∀i ∈ I, ∀ (ti, λ) ∈ Ti × Λ, ∀ (σ′i, ρ′i).
Note that equation (2) describes the (language-)ex-post incentive compatibility of
agent i in the equilibrium: knowing (ti, λ), agent i chooses the best strategy. In Blume and
Board (2013), (language-)interim equilibria are instead defined as: knowing (ti, λi), agent
i chooses the best strategy. For them, “indeterminacies of meaning” arise when there is
a (language-)interim equilibrium that is not a (language-)ex-post equilibrium. Therefore,
“determinacy of meaning” is embedded in our equilibrium notion.
Throughout the paper, we impose the following necessary assumptions for informative communication:

∀i ∈ I, |λi| ≥ 2, ∀λi ∈ Λi,   (3)

π [{(t, λ) ∈ T × Λ : λi ∩ λj ≠ ∅, ∀i, j ∈ I with i ≠ j}] = 1.   (4)

(3) says that every player is able to transmit non-trivial information (i.e., |λi| ≥ 2). (4) says that any two language types of two distinct agents must have non-empty intersection, because communication is not informative otherwise.10

9The “ex-post” is defined with respect to the realization of language types (rather than payoff states). That is, our equilibrium provides best replies for all agents, even if all the agents’ language types were truthfully revealed. Clearly, this is much stronger than the corresponding “interim” and “ex-ante” equilibria.
10Blume and Board (2013) assume the existence of a common message for all language types of all players, which, clearly, is stronger than (4).
4 Main Results: N-dimensional Communication
In this section, we show that any equilibrium in a communication game with no language
barriers can be replicated by an equilibrium of the corresponding game if we introduce
language barriers. In Section 4.1, we first define what this means formally; in Section 4.2,
we prove our main result for this section and illustrate some of its implications.
4.1 Similar games and outcome-equivalent equilibria
We will compare equilibria between communication games which differ only in whether language barriers exist or not. To make the comparison between two such games meaningful, they must be “similar”, i.e., they must share the same primitives in terms of agents, actions, payoffs, etc. Rigorously, we apply the following definition.
Definition 2 Two games Ĝ and G̃,

Ĝ = ⟨Î, M̂, T̂, Λ̂, π̂, Â, (ûi : T̂ × Â → R)i∈Î, N̂⟩,
G̃ = ⟨Ĩ, M̃, T̃, Λ̃, π̃, Ã, (ũi : T̃ × Ã → R)i∈Ĩ, Ñ⟩,

are similar if

⟨Î, M̂, T̂, Â, (ûi)i∈Î⟩ = ⟨Ĩ, M̃, T̃, Ã, (ũi)i∈Ĩ⟩,

and π̂T̂ = π̃T̃.
That is, two similar games may differ only in language types and the dimension of
messages they send. We now define outcome-equivalent equilibria in two similar games.
Definition 3 Given two similar games, Ĝ and G̃, an equilibrium (σ̂, ρ̂) in Ĝ is outcome-equivalent to an equilibrium (σ̃, ρ̃) in G̃ if

ρ̂i (ti, λ̂i, (σ̂j (tj, λ̂j))j∈I) = ρ̃i (ti, λ̃i, (σ̃j (tj, λ̃j))j∈I), ∀ (i, t, λ̂, λ̃) ∈ I × T × Λ̂ × Λ̃.

Outcome-equivalent equilibria in similar games induce the same action profile for any given profile of payoff types, regardless of language types. As a result,

Ui (σ̂, ρ̂) = Ui (σ̃, ρ̃), ∀i ∈ I,

i.e., they induce the same expected utility for every player.
4.2 Outcome-equivalence for similar games
For any game G = ⟨I, M, T, Λ, π, A, (ui)i∈I, N⟩, define

N* ≡ 1;  λ*i ≡ M;  Λ* ≡ ∏i∈I [Λ*i ≡ {λ*i}];  π* [E × Λ*] = πT (E), ∀E ⊆ T,

i.e., G* = ⟨I, M, T, Λ*, π*, A, (ui)i∈I, N* = 1⟩ is the standard communication game with 1-dimensional messages and no language barriers, which is similar to G. Our first main result says that for any equilibrium of G*, there exists an outcome-equivalent equilibrium of G, if N is sufficiently large. In this sense, language barriers bring no harm to welfare.

Let (σ*, ρ*) denote an equilibrium of G*. Let Ei^(σ*,ρ*) denote the set of messages agent i sends in the equilibrium, i.e.,

Ei^(σ*,ρ*) ≡ {σ*i (ti, λ*i) : ti ∈ Ti}.

(σ*, ρ*) is called a finite-message equilibrium if |∏i∈I Ei^(σ*,ρ*)| < ∞ and an infinite-message equilibrium otherwise. For notational ease, we focus on finite-message equilibria but the analysis can be easily extended to infinite-message equilibria.
Theorem 1 Suppose Assumption 1 holds. Then, for any finite-message equilibrium (σ*, ρ*) in any game without language barriers G* = ⟨I, M, T, Λ*, π*, A, (ui)i∈I, N* = 1⟩, a positive integer N̂ exists, such that in any similar game G = ⟨I, M, T, Λ, π, A, (ui)i∈I, N⟩ with N ≥ N̂, there exists an equilibrium (σ, ρ) of G that is outcome-equivalent to (σ*, ρ*).
Recall that (σ*, ρ*) and (σ, ρ) being outcome-equivalent means that the two equilibria induce the same equilibrium action profile for every t ∈ T, regardless of λ ∈ Λ. In this
sense, we say (σ, ρ) replicates (σ*, ρ*). In Sections 4.2.1 and 4.2.2 we prove this result, focusing first on the role of N-dimensionality in overcoming language barriers, absent the issue of incentive compatibility, and then showing how incentive compatibility is assured.
4.2.1 The role of N-dimensionality in Theorem 1
In this section, we leave incentive compatibility aside, and show that multiple dimensions
of messages suffice for effective communication. We return to incentive compatibility in
the next section. To prove Theorem 1, we need to overcome three technical obstacles: the
first is that senders may not know receivers’ language types; the second is that receivers
may not know senders’ language types; finally, we need to show how players transmit
information using their endowed messages given that they know each others’ language
types. We show that all of these can be achieved by utilizing messages with multiple
dimensions.
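The first of these devices, developed below, splits an N-dimensional message into one string per receiver language type, so that each type reads only its designated block. A minimal sketch, with illustrative helper names and block sizes:

```python
# Sketch: an N-dimensional message split into one block (string) per receiver
# language type, so each type reads only its designated block. The helper
# names and block sizes are illustrative, not from the paper.

def pack(strings):
    """Concatenate the per-type strings, in a fixed and commonly known
    order of receiver language types, into one N-dimensional message."""
    return [symbol for s in strings for symbol in s]

def unpack(message, type_index, block_len):
    """A receiver whose language type sits at position type_index reads
    only its own block of length N' = block_len."""
    start = type_index * block_len
    return message[start:start + block_len]

# Two receiver language types, blocks of length N' = 3, so N = 2 * 3 = 6.
m = pack([[1, 2, 3], [-1, -2, -3]])
assert m == [1, 2, 3, -1, -2, -3]
assert unpack(m, 0, 3) == [1, 2, 3]      # block intended for the first type
assert unpack(m, 1, 3) == [-1, -2, -3]   # block intended for the second type
```

Since the ordering of language types is common knowledge, each receiver type can locate its block without knowing anything else about the sender's strategy.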
We first tackle the problem that senders do not know the receivers’ language types. With multiple dimensions, this type of asymmetric information is easily eliminated, because we can break an N-dimensional message into |∪i∈I Λi| strings, with each string intended for a receiver’s language type. Specifically, suppose N = N′ × |∪i∈I Λi| for some integer N′, and a sender’s N-dimensional message is m = (m^λi)i∈I, λi∈Λi ∈ [M^N′]^|∪i∈I Λi|, where m^λi ∈ M^N′ is the intended message from the sender to λi. Upon receiving m, the language type λi just goes to his designated string to retrieve his intended message m^λi, so that the asymmetric information regarding senders not knowing receivers’ language types is eliminated. This is analogous to what happens in many tourist attractions, where information is written in different languages, and tourists from different countries just jump to the bit written in a language they understand to retrieve the information. Thus, messages with multiple dimensions do more than just increase the size of the message space. To see this, consider the following example, with one sender and one receiver who
share a common interest. Suppose that
$$T = A = \{\alpha, \beta, \gamma\},$$
$$u_S(t, a) = u_R(t, a) = \begin{cases} 1 & \text{if } a = t \\ 0 & \text{if } a \neq t \end{cases},$$
$$M = \mathbb{Z}; \quad \Lambda_S = \{\lambda_S\}; \quad \Lambda_R = \left\{ \lambda_R^-, \lambda_R^+ \right\},$$
$$\lambda_S = \{-100, -99, \ldots, 0, \ldots, 99, 100\}; \quad \lambda_R^+ = \{1, 2, \ldots\}; \quad \lambda_R^- = \{-1, -2, \ldots\}.$$
Assume that each $t \in T$ has positive probability but that the true realization is the sender's private information, and that both $\lambda_R^-$ and $\lambda_R^+$ have positive probability, with their realization being the receiver's private information. Since sender and receiver have identical preferences, the only issue is how the sender can communicate her information to the receiver. Clearly, without language barriers, the efficient outcome (i.e., perfect communication) is an equilibrium. However, it is not an equilibrium for 1-dimensional communication under these language barriers: to achieve efficiency, $\lambda_R^+$ must be able to distinguish between $m(\alpha)$, $m(\beta)$ and $m(\gamma)$, the three equilibrium messages from the sender in the three states. Hence, at least two of the three messages must be in $\lambda_R^+$, and as a result, $\lambda_R^-$ cannot distinguish these two messages, since $\lambda_R^+ \cap \lambda_R^- = \emptyset$; i.e., efficiency cannot be achieved in an equilibrium. It is also easy to see that even if we increased the number of messages available to the sender up to the point where $\lambda_S = M$, the same difficulty would remain. On the other hand, we could achieve full communication even if we restricted $\lambda_S$ to the set $\{-2, -1, 0, 1, 2\}$, but allowed 2-dimensional messages. This would allow the sender to produce a message $(m^-(t), m^+(t))$, where $m^-$ and $m^+$ are the strings that describe the payoff-relevant information for each possible type of receiver.11 Note that in this example the number of messages is not the issue: when we set $\lambda_S = \{-2, -1, 0, 1, 2\}$, even with 2-dimensional messages we have actually reduced the number of messages available to the sender compared to the case $\lambda_S = \{-100, -99, \ldots, 0, \ldots, 99, 100\}$. Yet, efficient communication can now be guaranteed.
11 For instance, there is an equilibrium where the strings $m^-$ and $m^+$ are given by
$$m^+(\alpha) = 0, \quad m^+(\beta) = 1, \quad m^+(\gamma) = 2,$$
$$m^-(\alpha) = 0, \quad m^-(\beta) = -1, \quad m^-(\gamma) = -2,$$
i.e., type $\lambda_R^+$ can distinguish $m^+(\alpha)$, $m^+(\beta)$ and $m^+(\gamma)$, and type $\lambda_R^-$ can distinguish $m^-(\alpha)$, $m^-(\beta)$ and $m^-(\gamma)$.
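The workings of this 2-dimensional equilibrium can be sketched in code. This is an illustrative check only: the infinite language types are truncated, and a receiver's inability to distinguish messages outside his language type is modeled by collapsing them into a single symbol "?".

```python
# Illustrative check of the 2-dimensional equilibrium in footnote 11.
STATES = ["alpha", "beta", "gamma"]
m_plus = {"alpha": 0, "beta": 1, "gamma": 2}     # string intended for lambda_R^+
m_minus = {"alpha": 0, "beta": -1, "gamma": -2}  # string intended for lambda_R^-

lam_plus = set(range(1, 101))    # lambda_R^+ = {1, 2, ...}, truncated
lam_minus = set(range(-100, 0))  # lambda_R^- = {-1, -2, ...}, truncated

def perceive(msg, lam):
    """All messages outside the language type lam look alike to the receiver."""
    return msg if msg in lam else "?"

# Each receiver type reads only his own string of the 2-dimensional message.
views_plus = {t: perceive(m_plus[t], lam_plus) for t in STATES}
views_minus = {t: perceive(m_minus[t], lam_minus) for t in STATES}

# Perceptions are pairwise distinct, so each type recovers the state exactly,
# even though the sender only ever uses messages in {-2, -1, 0, 1, 2}.
assert len(set(views_plus.values())) == len(STATES)
assert len(set(views_minus.values())) == len(STATES)
```

Running the block confirms that both receiver types separate all three states, although no single 1-dimensional string would be intelligible to both.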
The second problem is that, without knowing the senders' language types, the receivers do not know how to interpret the senders' messages. Hence, to achieve effective communication, the senders must reveal their language types. Fix any $\bar{N}$ such that
$$\bar{N} \geq 3 \left| \bigcup_{i \in I} \Lambda_i \right|. \quad (5)$$
Ignoring incentive compatibility, the following lemma shows that there is a procedure
such that every sender is able to reveal his language type. The proof of Lemma 1 can be
found in Appendix A.1.
Lemma 1 For every $(i, j, \lambda) \in I \times I \times \Lambda$ with $i \neq j$, there exists a function $\Upsilon_{(i,\lambda_j)} : \Lambda_i \to M^{\bar{N}}$ such that
$$\Upsilon_{(i,\lambda_j)}[\lambda_i] \in (\lambda_i)^{\bar{N}}, \; \forall \lambda_i \in \Lambda_i, \quad \text{and} \quad \lambda_i \neq \lambda_i' \Longrightarrow \Upsilon_{(i,\lambda_j)}[\lambda_i] \not\sim_{\lambda_j} \Upsilon_{(i,\lambda_j)}[\lambda_i']. \quad (6)$$
Given $i \neq j$, suppose agent $i$ follows $\Upsilon_{(i,\lambda_j)}$ to reveal his language type to agent $j$, whose type is $\lambda_j$: if agent $i$ is of type $\lambda_i$, he sends $\Upsilon_{(i,\lambda_j)}[\lambda_i] \in (\lambda_i)^{\bar{N}}$ to type $\lambda_j$. For any two distinct language types of $i$, $\lambda_i'$ and $\lambda_i''$, because of (6), $\lambda_j$ can distinguish the message sent by $\lambda_i'$ (i.e., $\Upsilon_{(i,\lambda_j)}[\lambda_i']$) from the message sent by $\lambda_i''$ (i.e., $\Upsilon_{(i,\lambda_j)}[\lambda_i'']$). Thus, the asymmetric information due to receivers not knowing the senders' language types is eliminated.
Once again, the N-dimensional nature of messages is crucial. Consider the following
sender-receiver common-interest example where now it is the sender that has language
barriers:
$$T = A = \{\alpha, \beta\},$$
$$u_S(t, a) = u_R(t, a) = \begin{cases} 1 & \text{if } a = t \\ 0 & \text{if } a \neq t \end{cases},$$
$$M = \mathbb{Z}; \quad \Lambda_R = \{\lambda_R\}; \quad \Lambda_S = \left\{ \lambda_S', \lambda_S'', \lambda_S''' \right\},$$
$$\lambda_R = \{1, 2\}; \quad \lambda_S' = \{1, 2\}; \quad \lambda_S'' = \{1, 3\}; \quad \lambda_S''' = \{2, 3\}.$$
Suppose all language types and all payoff states have positive probability. Clearly, without language barriers, the efficient outcome (i.e., perfect communication) is an equilibrium. However, it is not an equilibrium for 1-dimensional communication under these particular language barriers. To see this, suppose otherwise. Then, to achieve efficiency for $\lambda_S'$, states $\alpha$ and $\beta$ must be truthfully revealed by messages 1 and 2. Without loss of generality, $\lambda_R$ plays $\alpha$ and $\beta$ upon receiving messages 1 and 2, respectively. As a result, if $\lambda_R$ plays $\alpha$ upon receiving message 3, then efficiency is not achieved for $\lambda_S''$; if $\lambda_R$ plays $\beta$ upon receiving message 3, then efficiency is not achieved for $\lambda_S'''$. Either way, we get a contradiction.
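The impossibility argument can also be confirmed by brute force, enumerating every pure sender strategy and every receiver decoding rule for this example (a sketch; the receiver's language barrier is modeled, as before, by collapsing message 3 into an uninterpretable symbol):

```python
from itertools import product

STATES = ["alpha", "beta"]
SENDER_TYPES = {"lamS1": [1, 2], "lamS2": [1, 3], "lamS3": [2, 3]}
RECEIVER_UNDERSTANDS = {1, 2}  # lambda_R = {1, 2}

def perceive(m):
    # The receiver lumps together every message outside his language type.
    return m if m in RECEIVER_UNDERSTANDS else "?"

def exists_efficient_equilibrium():
    # Try every receiver rule mapping perceived messages to actions, and ask
    # whether each sender type has SOME strategy the rule decodes correctly.
    perceived = ["?", 1, 2]
    for rule in product(STATES, repeat=len(perceived)):
        r = dict(zip(perceived, rule))
        if all(
            any(
                all(r[perceive(s[i])] == t for i, t in enumerate(STATES))
                for s in product(msgs, repeat=len(STATES))
            )
            for msgs in SENDER_TYPES.values()
        ):
            return True
    return False

# No 1-dimensional decoding rule serves all three sender types at once.
no_efficient_1dim = not exists_efficient_equilibrium()
assert no_efficient_1dim
```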
Nevertheless, because of Lemma 1, an equilibrium with multi-dimensional messages $(m_\lambda, m_t(\lambda))$ which guarantees full communication exists. In such an equilibrium, the first component, $m_\lambda$, identifies the sender's language type, while the second component identifies the payoff type. As in the previous example, giving arbitrary additional messages to each sender type would not work, because the receiver would not be able to understand such messages.
The last obstacle is technical and amounts to making sure that, once asymmetries of information about language types are resolved, there are still enough dimensions to convey the payoff-relevant information. For a given $\langle \Lambda, \pi \rangle$, some sender's language type $\lambda_i$ may have fewer messages than needed to replicate $(\sigma^*, \rho^*)$, i.e., $|\lambda_i| < \left| E_i^{(\sigma^*,\rho^*)} \right|$, where $E_i^{(\sigma^*,\rho^*)}$ denotes the set of messages player $i$ sends under $(\sigma^*, \rho^*)$. We show that this too can be overcome by multiple dimensions, via the following lemma, whose proof can be found in Appendix A.2. Fix any positive integer $\hat{N}$ such that
$$\hat{N} \geq \max \left\{ \left| E_i^{(\sigma^*,\rho^*)} \right| : i \in I \right\}. \quad (7)$$
Lemma 2 For every $(i, j, \lambda) \in I \times I \times \Lambda$ with $i \neq j$, there exists a function $\Gamma_{(\lambda_i,\lambda_j)} : E_i^{(\sigma^*,\rho^*)} \to (\lambda_i)^{\hat{N}}$ such that for any $m, m' \in E_i^{(\sigma^*,\rho^*)}$,
$$m \neq m' \Longrightarrow \Gamma_{(\lambda_i,\lambda_j)}(m) \not\sim_{\lambda_j} \Gamma_{(\lambda_i,\lambda_j)}(m'). \quad (8)$$
By the previous two steps, both senders' and receivers' language types can be truthfully revealed. Given this and $i \neq j$, suppose sender $\lambda_i$ follows $\Gamma_{(\lambda_i,\lambda_j)}$ to send messages to receiver $\lambda_j$, where $\Gamma_{(\lambda_i,\lambda_j)}$ translates equilibrium messages in $E_i^{(\sigma^*,\rho^*)}$ into $i$'s endowed messages in $(\lambda_i)^{\hat{N}}$. Then, for any two distinct messages $m', m''$ in $E_i^{(\sigma^*,\rho^*)}$, because of (8), receiver $\lambda_j$ can distinguish the two translated messages, $\Gamma_{(\lambda_i,\lambda_j)}(m')$ and $\Gamma_{(\lambda_i,\lambda_j)}(m'')$; i.e., equilibrium messages are effectively transmitted. In this case, N-dimensional messages have the role of increasing the number of messages at the sender's disposal, and they achieve this using components that the receiver can understand.
We now proceed to integrate these observations in the more general framework where incentive compatibility matters, thus proving Theorem 1.
4.2.2 Proof of Theorem 1
Fix any game $G^*$ without language barriers and any finite-message equilibrium $(\sigma^*, \rho^*)$ in $G^*$, which are listed as follows:
$$G^* = \left\langle I, M, T, \Lambda^*, \pi^*, A, (u_i)_{i \in I}, N^* = 1 \right\rangle;$$
$$(\sigma^*, \rho^*) = \left[ \left( \sigma_i^* : T_i \to M \right)_{i \in I}, \; \left( \rho_i^* : T_i \times M^I \to A_i \right)_{i \in I} \right].$$
Consider $N = \left( \bar{N} + \hat{N} \right) \times \left| \bigcup_{i \in I} \Lambda_i \right|$, where $\bar{N}$ and $\hat{N}$ are defined in (5) and (7), respectively. For notational convenience, for every $i \in I$ and every $(\lambda_i, \lambda_i') \in \Lambda_i \times \Lambda_i$, fix any two functions:
$$\Upsilon_{(i,\lambda_i')} : \Lambda_i \to M^{\bar{N}};$$
$$\Gamma_{(\lambda_i,\lambda_i')} : E_i^{(\sigma^*,\rho^*)} \to (\lambda_i)^{\hat{N}}.$$
These two functions will not play any essential role in our equilibrium (see footnote 12).
Senders’ strategies: let m(i,λj)denote the message intended from sender i to receiver
j of language type λj. For every player i 2 I and every (ti, λi) 2 Ti �Λi, define
σi (ti, λi) =�
m(i,λj)
�j2I, λj2Λj
=�
Υ(i,λj) (λi) , Γ(λi,λj) [
σ�i (ti)]�
j2I, λj2Λj2 MN. (9)
I.e., given j 6= i, sender λi tells receiver λj about i’s true language type via Υ(i,λj) (λi) as
19
described in Lemma 1 and the equilibrium message σ�i (ti) under (σ�, ρ�) via Γ(λi,λj)�σ�i (ti)
�as described in Lemma 2.12
Fix any $\tilde{t}_i \in T_i$.
Receivers' strategies: upon receiving the intended message $m_{(i,\lambda_j)}$ from sender $i$ ($\neq j$), receiver $\lambda_j$ translates it back into an equilibrium message under $(\sigma^*, \rho^*)$ via the following function:
$$\Sigma_{(\lambda_j,i)}\left[ m_{(i,\lambda_j)} \right] = \begin{cases} \sigma_i^*(t_i), & \text{if there exists } (t_i, \lambda_i) \in T_i \times \Lambda_i \text{ such that } m_{(i,\lambda_j)} = \left( \Upsilon_{(i,\lambda_j)}(\lambda_i), \Gamma_{(\lambda_i,\lambda_j)}\left[ \sigma_i^*(t_i) \right] \right); \\ \sigma_i^*(\tilde{t}_i), & \text{otherwise.} \end{cases} \quad (10)$$
Note that, by Lemmas 1 and 2, if there exist multiple $(t_i, \lambda_i) \in T_i \times \Lambda_i$ such that $m_{(i,\lambda_j)} = \left( \Upsilon_{(i,\lambda_j)}(\lambda_i), \Gamma_{(\lambda_i,\lambda_j)}\left[ \sigma_i^*(t_i) \right] \right)$, then $\lambda_i$ must be unique, and all such $(t_i, \lambda_i)$ induce the same equilibrium message $\sigma_i^*(t_i)$; hence, $\Sigma_{(\lambda_j,i)}\left[ m_{(i,\lambda_j)} \right]$ is well-defined. We are ready to define $\rho_j$ for every $j \in I$ as follows.
$$\rho_j\left( t_j, \lambda_j, \left( m_{(i,\lambda_j)} \right)_{i \in I} \right) = \rho_j^*\left( t_j, \left( \sigma_j^*(t_j), \left( \Sigma_{(\lambda_j,i)}\left[ m_{(i,\lambda_j)} \right] \right)_{i \in I \setminus \{j\}} \right) \right).$$
That is, under $(\sigma, \rho)$ and any given $(t, \lambda) \in T \times \Lambda$, each sender type $\lambda_i$ follows $\sigma_i^*(t_i)$ by sending two pieces of information, $\left( \Upsilon_{(i,\lambda_j)}(\lambda_i), \Gamma_{(\lambda_i,\lambda_j)}\left[ \sigma_i^*(t_i) \right] \right)$, to each receiver type $\lambda_j$, where the former truthfully reveals $\lambda_i$, and the latter is $\sigma_i^*(t_i)$ coded using the endowed messages of $\lambda_i$. Upon receiving the message, each receiver $\lambda_j$ decodes it back to $\sigma_i^*(t_i)$ and plays the action $\rho_j^*\left[ t_j, \left( \sigma_i^*(t_i) \right)_{i \in I} \right]$. As a result,
$$\rho_i^*\left( t_i, \left( \sigma_j^*(t_j) \right)_{j \in I} \right) = \rho_i\left[ t_i, \lambda_i, \left( \sigma_j(t_j, \lambda_j) \right)_{j \in I} \right], \; \forall (i, t, \lambda) \in I \times T \times \Lambda,$$
i.e., $(\sigma, \rho)$ and $(\sigma^*, \rho^*)$ are outcome-equivalent.
12 For $i = j$, the message $\left( \Upsilon_{(i,\lambda_i')}(\lambda_i), \Gamma_{(\lambda_i,\lambda_i')}\left[ \sigma_i^*(t_i) \right] \right)$ is never used in our equilibrium: it is the message intended from $\lambda_i$ to $\lambda_i'$, but $\lambda_i$ knows that her own language type is $\lambda_i$, not $\lambda_i'$; as a result, such messages are redundant. We include these redundant messages purely for notational convenience.
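The encode-decode construction in (9) and (10) can be illustrated on a toy instance. All sets and names below are hypothetical, chosen only for illustration, and the receiver is assumed to understand every sender type's endowed messages, so the distinguishability requirements of Lemmas 1 and 2 hold trivially:

```python
# Toy instance of the construction in (9)-(10): prefix a code revealing the
# sender's language type (Lemma 1), then encode the intended equilibrium
# message using only that type's endowed messages (Lemma 2).
EQUILIBRIUM_MSG = {"t0": "m0", "t1": "m1"}  # sigma*_i under no barriers
LANG_TYPES = {
    "lamA": ["x", "y"],  # endowed messages of language type lamA
    "lamB": ["u", "v"],  # endowed messages of language type lamB
}

def upsilon(lam):
    # Lemma 1 analogue: distinct types send distinct, recognizable strings
    # built from their own endowed messages.
    return tuple(LANG_TYPES[lam][0] for _ in range(2))

def gamma(lam, eq_msg):
    # Lemma 2 analogue: an injection from equilibrium messages into the
    # type's endowed messages.
    code = {"m0": LANG_TYPES[lam][0], "m1": LANG_TYPES[lam][1]}
    return (code[eq_msg],)

def sigma(t, lam):
    # (9): the N-dimensional message = (type-revealing part, coded payoff part).
    return upsilon(lam) + gamma(lam, EQUILIBRIUM_MSG[t])

def decode(msg):
    # (10): the receiver inverts by table lookup over all (t, lam) pairs.
    for t in EQUILIBRIUM_MSG:
        for lam in LANG_TYPES:
            if sigma(t, lam) == msg:
                return EQUILIBRIUM_MSG[t]
    return EQUILIBRIUM_MSG["t0"]  # the fixed default, as in (10)

# Outcome equivalence: every (state, language type) pair decodes correctly.
assert all(decode(sigma(t, lam)) == EQUILIBRIUM_MSG[t]
           for t in EQUILIBRIUM_MSG for lam in LANG_TYPES)
```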
Finally, we show the incentive compatibility of both senders and receivers. First, for any receiver $j$ under $(\sigma^*, \rho^*)$, he forms a posterior belief on $t_{-j}$ upon receiving the messages $\left( \sigma_i^*(t_i) \right)_{i \in I}$, and chooses the best strategy $\rho_j^*\left[ t_j, \left( \sigma_i^*(t_i) \right)_{i \in I} \right]$. Under $(\sigma, \rho)$, receiver $j$ receives two pieces of information, i.e., $\lambda$ in addition to $\left( \sigma_i^*(t_i) \right)_{i \in I}$. Since $T$ and $\Lambda$ are independent by Assumption 1, receiver $j$ forms the same posterior belief on $t_{-j}$ as under $(\sigma^*, \rho^*)$; hence, the same strategy $\rho_j^*\left[ t_j, \left( \sigma_i^*(t_i) \right)_{i \in I} \right]$ remains a best reply for $j$. Second, for any sender $i$ under $(\sigma^*, \rho^*)$, sending $\sigma_i^*(t_i)$ is a best strategy given the true payoff state $t_i$, i.e., sending $\sigma_i^*(t_i)$ is weakly better than sending $\sigma_i^*(t_i')$ for any $t_i' \in T_i$. Note that under $(\sigma, \rho)$, the equilibrium message of sender $i$ with true payoff state $t_i$ is interpreted as $\sigma_i^*(t_i)$ by the receivers, and any message from sender $i$ is interpreted as $\sigma_i^*(t_i')$ for some $t_i' \in T_i$ (see (10)). Since sending $\sigma_i^*(t_i)$ is weakly better than sending $\sigma_i^*(t_i')$ for any $t_i' \in T_i$, it is a best strategy for sender $i$ to send the equilibrium message under $(\sigma, \rho)$. $\blacksquare$
4.2.3 Implications of Theorem 1
One immediate implication of Theorem 1 is that any language barriers in the canonical Crawford and Sobel (1982) cheap-talk model can be overcome. In that model, there exists a maximally-revealing equilibrium, in which finitely many messages are transmitted. Hence, all equilibria in the model are finite-message equilibria, and Theorem 1 immediately implies that all of them can be replicated, whatever the language barriers, if multi-dimensional communication is allowed.
A second, less immediate, implication focuses on the setting studied by Blume and
Board (2013), which is that of a common-interest sender-receiver game. Specifically, we
assume the following:
Assumption 2 (common-interest sender-receiver game)
$$I = \{1, 2\}, \; |A_1| = |T_2| = 1, \; |T_1| > 1, \; |A_2| > 1, \; |M| < \infty,$$
$u_1 \equiv u_2 \equiv u$ (i.e., common interest) is continuous, and $T_1$ and $A_2$ are compact metric spaces.
That is, player 1 is the sender and player 2 is the receiver; $A_1$ and $T_2$ are degenerate; we
use $u$ to denote the common utility function for both players. In this setting, Blume and Board (2013) prove that indeterminacies of meaning are inevitably induced by language barriers under 1-dimensional communication. As previously discussed, this means that there will be no efficient equilibria.13
However, given no language barriers, Proposition 1 below shows that approximate efficiency can always be achieved if there are sufficiently many, albeit finitely many, messages. Its proof can be found in Appendix A.3.
Proposition 1 For any $\varepsilon > 0$ and any game with 1-dimensional communication and no language barriers $G^* = \left\langle I, M, T, \Lambda^*, \pi^*, A, (u_i)_{i \in I}, N^* = 1 \right\rangle$ which satisfies Assumption 2, there exists a positive integer $K$ such that
$$|M| \geq K \Longrightarrow \left| \sup_{(\sigma,\rho) \in \Sigma_{G^*}} U(\sigma, \rho) - \int_{t \in T} \left[ \max_{a \in A} u(t, a) \right] \pi_T(dt) \right| \leq \varepsilon,$$
where $\Sigma_{G^*}$ denotes the set of equilibria of $G^*$.
Note that $\int_{t \in T} \left[ \max_{a \in A} u(t, a) \right] \pi_T(dt)$ is the maximal utility that players can possibly get. We say an equilibrium $(\sigma, \rho)$ achieves $\varepsilon$-efficiency if and only if
$$\left| U(\sigma, \rho) - \int_{t \in T} \left[ \max_{a \in A} u(t, a) \right] \pi_T(dt) \right| \leq \varepsilon.$$
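The logic behind Proposition 1 can be illustrated on one standard common-interest specification (this particular payoff and prior are our illustration, not the paper's general setting): with $K$ messages, the sender reports which cell of a uniform $K$-cell partition of $[0,1]$ contains $t$, the receiver plays the cell's conditional mean, and the welfare loss is $1/(12K^2)$, which falls below any $\varepsilon$ once $K$ is large enough.

```python
# Numerical sketch: u(t, a) = -(t - a)^2, t uniform on [0, 1]. With K equal
# cells the receiver's best reply to each cell report is the cell midpoint,
# and the loss relative to the first best (which is 0) is about 1/(12 K^2).

def welfare_loss(K, grid=100_000):
    loss = 0.0
    for i in range(grid):
        t = (i + 0.5) / grid
        cell = min(int(t * K), K - 1)
        a = (cell + 0.5) / K           # posterior mean within the cell
        loss += (t - a) ** 2 / grid    # first-best utility is 0 at a = t
    return loss

eps = 1e-3
K = 10  # 1/(12 * 10^2) is roughly 8.3e-4, below eps
assert welfare_loss(K) <= eps
assert welfare_loss(2 * K) < welfare_loss(K)  # loss shrinks as K grows
```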
Then, Theorem 1 and Proposition 1 together immediately imply the following corollary:
Corollary 1 For any $\varepsilon > 0$ and any game satisfying Assumptions 1 and 2, $\varepsilon$-efficiency can be achieved in equilibria of similar games $G = \left\langle I, M, T, \Lambda, \pi, A, (u_i)_{i \in I}, N \right\rangle$ for sufficiently large $N$.
That is, multi-dimensional communication not only eliminates the indeterminacies of meaning caused by language barriers, but also achieves almost-efficiency.14
13 Indeterminacies of meaning imply inefficiency; equivalently, efficiency implies determinacy of meaning.
14 Theorem 1 assumes that language types and payoff states are independently distributed. For common-interest games, however, in previous versions of this paper we showed that even in the absence of independence, with an N-dimensional protocol there exist $\varepsilon$-equilibria that achieve almost-efficiency.

5 Main Results: 1-dimensional Communication

In the previous section, we showed that for any language barriers, if multi-dimensional communication is allowed, we can always replicate the outcomes that would obtain in the absence of such language barriers. In this sense, in the presence of language barriers, multi-dimensional communication allows us to do no worse than if such language barriers did not exist. In this section, we change quantifiers and focus on one-dimensional communication, to study whether there exist language barriers that allow us to do "better" than what we can achieve without them.

In particular, we follow the Goltsman, Horner, Pavlov, and Squintani (2009) strategy of studying several modified versions of cheap-talk communication games, although the games studied here generalize theirs in two dimensions: we consider arbitrary distributions and utility functions, while Goltsman, Horner, Pavlov, and Squintani (2009) focus on the uniform distribution and the quadratic utility function.15 In Section 5.2, we define arbitration and mediation equilibria (Goltsman, Horner, Pavlov, and Squintani (2009)), noisy-talk equilibria (Blume, Board, and Kawamura (2007)), and language-barrier equilibria, all of which may Pareto dominate cheap-talk equilibria. In Section 5.3, we provide a linear ranking of the maximal welfare induced by all these equilibria except for arbitration equilibria. In Sections 5.4 and 5.5, we further clarify the relationship between arbitration equilibria and language-barrier equilibria on the one hand, and language-barrier equilibria and noisy-talk equilibria on the other.

We begin in Section 5.1 by showing, through an example, that language barriers can strictly (ex-ante) Pareto improve outcomes.

15 (cite BB WP version) also undertook an exercise similar to ours, by adding language-barrier equilibria to the Goltsman, Horner, Pavlov, and Squintani (2009) analysis. However, they focused on the Goltsman, Horner, Pavlov, and Squintani (2009) class of games, so that our results, which consider more general settings, differ.
5.1 An example of language barriers strictly improving over cheap talk

Example 1 Consider a canonical Crawford and Sobel (1982) one-sender one-receiver game, i.e., $I = \{S, R\}$; $|A_S| = |T_R| = 1$,
$$T_S = M = \{0, 1\} \text{ and } A_R = (-\infty, \infty),$$
$$u_S = -\left( a_R - t_S - \frac{5}{8} \right)^2 \text{ and } u_R = -(a_R - t_S)^2, \; \forall (a_R, t_S) \in A_R \times T_S.$$
Consider two scenarios:
1. no language barriers:
$$\Lambda_S = \Lambda_R = \{M\},$$
and the prior on $T \times \Lambda_S \times \Lambda_R$ is the uniform distribution;
2. language barriers for the sender:
$$\hat{\Lambda}_S = \left\{ \lambda_S = \{0\}, \; \hat{\lambda}_S = \{0, 1\} \right\}, \quad \Lambda_R = \{M\},$$
and the prior on $T \times \hat{\Lambda}_S \times \Lambda_R$ is the uniform distribution.
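The payoff comparisons between the two scenarios can be verified with exact rational arithmetic. This sketch presumes the equilibria described in the text that follows (pooling in scenario 1; in scenario 2, $\lambda_S$ always sends message 0, $\hat{\lambda}_S$ separates, and the receiver plays the posterior mean):

```python
from fractions import Fraction as F

bias = F(5, 8)
u_S = lambda a, t: -(a - t - bias) ** 2
u_R = lambda a, t: -(a - t) ** 2

# Scenario 1 (pooling): the receiver plays E[t] = 1/2 in both states.
EuS_pool = F(1, 2) * (u_S(F(1, 2), 0) + u_S(F(1, 2), 1))
EuR_pool = F(1, 2) * (u_R(F(1, 2), 0) + u_R(F(1, 2), 1))

# Scenario 2: the four (language type, state) pairs are equally likely;
# the receiver plays E[t | m]: 1/3 after m = 0 and 1 after m = 1.
outcomes = [  # (probability, receiver action, state)
    (F(1, 4), F(1, 3), 0),  # lam_S,     t = 0 -> m = 0
    (F(1, 4), F(1, 3), 1),  # lam_S,     t = 1 -> m = 0
    (F(1, 4), F(1, 3), 0),  # lam_S_hat, t = 0 -> m = 0
    (F(1, 4), 1,       1),  # lam_S_hat, t = 1 -> m = 1
]
EuS_lb = sum(p * u_S(a, t) for p, a, t in outcomes)
EuR_lb = sum(p * u_R(a, t) for p, a, t in outcomes)

assert EuS_pool == F(-41, 64) and EuR_pool == F(-1, 4)
assert EuS_lb == F(-321, 576) and EuR_lb == F(-1, 6)
assert EuS_lb > EuS_pool and EuR_lb > EuR_pool  # ex-ante Pareto improvement
```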
Without language barriers, with one-dimensional communication, only the pooling equilibrium exists, because the bias between the sender and the receiver is too large (i.e., $\frac{5}{8} > \frac{1}{2}$). In the pooling equilibrium, the receiver takes the action $\frac{1}{2}$, and the ex-ante expected utilities of the two agents are $Eu_S = -\frac{41}{64}$ and $Eu_R = -\frac{1}{4}$. However, with the language barriers for the sender specified above, it is easy to check that the following strategy profile is an equilibrium: type $\lambda_S$ always sends message 0; type $\hat{\lambda}_S$ sends message 1 if $t = 1$, and message 0 if $t = 0$. Finally, the receiver plays action 1 if he gets message 1, and action $\frac{1}{3}$ if he gets message 0.16 The agents' ex-ante expected utilities in this equilibrium are:
$$Eu_S = \frac{1}{2} \cdot Eu_{\lambda_S} + \frac{1}{2} \cdot Eu_{\hat{\lambda}_S} = -\frac{321}{576} > -\frac{41}{64},$$
$$Eu_R = -\frac{1}{2} \cdot \left( \frac{1}{3} - 0 \right)^2 - \frac{1}{4} \cdot \left( \frac{1}{3} - 1 \right)^2 - \frac{1}{4} \cdot (1 - 1)^2 = -\frac{1}{6} > -\frac{1}{4}.$$
Therefore, the equilibrium with these language barriers (ex-ante) Pareto dominates the equilibrium with no language barriers.

16 It is straightforward to see that the receiver and $\lambda_S$ are choosing best replies in this equilibrium. To check the incentive compatibility of $\hat{\lambda}_S$, note that the receiver plays only two actions: 1 and $\frac{1}{3}$. Furthermore, the ideal point for the sender is $\frac{5}{8}$ when $t = 0$, and
$$\left| \frac{1}{3} - \frac{5}{8} \right| = \frac{7}{24} < \frac{9}{24} = \left| 1 - \frac{5}{8} \right|.$$
As a result, it is a best reply for $\hat{\lambda}_S$ to send message 0 when $t = 0$. Similarly, the ideal point for the sender is $\frac{13}{8}$ when $t = 1$, and
$$\left| \frac{1}{3} - \frac{13}{8} \right| = \frac{31}{24} > \frac{15}{24} = \left| 1 - \frac{13}{8} \right|.$$
As a result, it is a best reply for $\hat{\lambda}_S$ to send message 1 when $t = 1$.

5.2 Cheap talk communication devices

The welfare improvement in Example 1 is not entirely surprising, since the literature has already pointed out that while faulty communication may reduce message precision, it may also weaken the sender's incentive compatibility constraints in such a way as to more than compensate for the reduced precision. In particular, Blume, Board, and Kawamura (2007) show this for "noisy talk" in a sender-receiver game, where talk is "noisy" in the sense that there is an exogenous probability that the receiver will not hear the intended message, but one randomly chosen by nature. Furthermore, Goltsman, Horner, Pavlov, and Squintani (2009) show that if an unbiased mediator is introduced in the standard Crawford and Sobel (1982) model, this mediator may improve communication by sending noisy messages to the receiver.

Example 1 shows that language barriers can also achieve welfare improvements, so the question we wish to answer is: what is the relationship, in welfare terms, between language barriers and cheap talk, or generalized versions of cheap talk such as the noisy talk and mediated communication mentioned above? To answer this question, we follow the
Goltsman, Horner, Pavlov, and Squintani (2009) strategy in studying optimal equilibria
under language barriers, noisy talk, mediated communication, and arbitrated communi-
cation, and compare their welfare properties.17
Recall that a communication game is defined by a tuple $\left\langle I, M, T, \Lambda, \pi, A, (u_i : T \times A \to \mathbb{R})_{i \in I}, N \right\rangle$. From now on, we fix any primitive (excluding language barriers),
$$\left\langle I, M, T, \pi_T, A, (u_i : T \times A \to \mathbb{R})_{i \in I}, N = 1 \right\rangle,$$
so as to make comparisons meaningful. In particular, we fix $\pi_T$ but allow $\Lambda$ and the marginal distribution on it to change. For simplicity, we assume
$$I = \{1, 2\}, \; |A_1| = |T_2| = 1,$$
i.e., we focus on the standard one-sender one-receiver game, where 1 and 2 are the sender and the receiver, respectively.
Note that $|A_1| = |T_2| = 1$ means that the sender (i.e., player 1) takes a degenerate action, and the receiver (i.e., player 2) observes a degenerate payoff type; this is common knowledge. Hence, it is without loss of generality for us to omit $A_1$ and $T_2$ and to use $A$ and $T$ to denote $A_2$ and $T_1$, respectively. However, we still consider $\Lambda = \Lambda_1 \times \Lambda_2$, i.e., we allow for the possibility that both the sender and the receiver have language barriers. Finally, for simplicity, we assume
$$M = A = \mathbb{R}.$$
5.2.1 Mediation and arbitration equilibria
First, we define arbitration and mediation equilibria.
17 Of course, this means comparing the best available equilibrium under the different "faulty" devices. For instance, we say language barriers strictly improve welfare over noisy talk if and only if the optimal equilibrium under some language barriers achieves strictly larger welfare than the optimal equilibrium under any noisy talk.
Definition 4 $[p : T \to \Delta(A)]$ is an arbitration equilibrium with adverse selection if
$$\int_{a \in A} u_1[t, a] \, p(t)(da) \geq \int_{a \in A} u_1[t, a] \, p(t')(da), \; \forall t, t' \in T. \quad (11)$$
$[p : T \to \Delta(A)]$ is an arbitration equilibrium with moral hazard if, for every $\iota : A \to A$,
$$\int_T \left[ \int_{a \in A} u_2[t, a] \, p(t)(da) \right] \pi_T[dt] \geq \int_T \left[ \int_{a \in A} u_2[t, \iota(a)] \, p(t)(da) \right] \pi_T[dt]. \quad (12)$$
$[p : T \to \Delta(A)]$ is a mediation equilibrium if both (11) and (12) are satisfied.
We say $[p : T \to \Delta(A)]$ is an arbitration equilibrium if it is either an arbitration equilibrium with adverse selection or an arbitration equilibrium with moral hazard. Clearly, a mediation equilibrium is an arbitration equilibrium. In particular, we share the same definition of mediation equilibrium with Goltsman, Horner, Pavlov, and Squintani (2009). Our notion of arbitration equilibrium with adverse selection is the same as the original "arbitration equilibrium" defined in Goltsman, Horner, Pavlov, and Squintani (2009), while "arbitration equilibrium with moral hazard" is a new notion.18
Our terminology is inspired by Myerson (1991)'s notions of moral hazard and adverse selection in communication games. Suppose there is a non-strategic middleman (i.e., an arbitrator or a mediator) besides the players. The sender reports his private payoff type to the middleman; upon receiving $t$, the middleman commits to drawing from a lottery on $A$ with distribution $p(t)$; given every realized value $a$ of the lottery, the receiver plays $a$. In the case of mediation, the sender is not committed to reporting the true payoff type, and condition (11) requires that truthful reporting be optimal for the sender in equilibrium. At the same time, the receiver is not committed to following the action suggested by the middleman either, and condition (12) is the incentive compatibility condition for the receiver, where the function $\iota : A \to A$ in (12) represents the receiver's (possible) deviation from the recommended actions, i.e., when the mediator recommends $a$, the receiver may deviate to play $\iota(a)$. The two forms of arbitration follow when we impose just one of the two conditions. In arbitration with adverse selection, the receiver must follow the arbitrator's recommended action, but the adverse selection problem (i.e., the sender still needs to be incentivized to report her true payoff state) remains. In arbitration with moral hazard, the sender must report her true payoff state, but the moral hazard problem (i.e., the receiver still needs to be incentivized to follow the arbitrator's recommended action) remains.

18 Presumably, "arbitration equilibrium with adverse selection" is the more practically relevant of the two notions. Our sole purpose in introducing the other notion is conceptual clarification: more precisely, it helps us compare language-barrier equilibria and arbitration equilibria (see Section 5.4).
5.2.2 Noisy-talk equilibria
A noisy-talk game is defined by a tuple $(\varepsilon, \xi) \in [0, 1] \times \Delta(M)$; i.e., with probability $\varepsilon$, the sender's message is replaced by an exogenous and independent noise draw from the distribution $\xi$. A potential candidate for a noisy-talk equilibrium is a strategy profile
$$\left( [s : T \to \Delta(M)], \; [r : M \to \Delta(A)] \right).$$
Given $[(\varepsilon, \xi), s, r]$, type $t$ of the sender follows $s(t) \in \Delta(M)$ to send a random message; for any realized message $m$ from the sender, with probability $(1 - \varepsilon)$ the receiver observes $m$, and with probability $\varepsilon$ the receiver observes a random message generated by the distribution $\xi$; finally, upon receiving a (possibly distorted) message $m'$, the receiver takes a random action $r(m') \in \Delta(A)$. We aggregate this process as follows:
$$p_{[(\varepsilon,\xi), s, r]} : T \to \Delta(A), \quad (13)$$
$$p_{[(\varepsilon,\xi), s, r]}(t)[E] = \int_M \left[ (1 - \varepsilon) \cdot r(m)[E] + \varepsilon \cdot \int_M r(\tilde{m})[E] \, \xi[d\tilde{m}] \right] s(t)[dm], \; \forall E \subseteq A,$$
i.e., $p_{[(\varepsilon,\xi), s, r]}(t)$ is the ex-post action distribution induced by the profile, given $t$. We now define noisy-talk equilibria.
Definition 5 $\left( [s : T \to \Delta(M)], [r : M \to \Delta(A)] \right)$ is a noisy-talk equilibrium if there exists $(\varepsilon, \xi) \in [0, 1] \times \Delta(M)$ such that
$$\int_{a \in A} u_1(t, a) \, p_{[(\varepsilon,\xi), s, r]}(t)(da) \geq \int_{a \in A} u_1(t, a) \, p_{[(\varepsilon,\xi), s', r]}(t)(da), \; \forall t \in T, \; \forall s' : T \to \Delta(M), \quad (14)$$
and
$$\int_T \left[ \int_{a \in A} u_2(t, a) \, p_{[(\varepsilon,\xi), s, r]}(t)(da) \right] \pi_T[dt] \geq \int_T \left[ \int_{a \in A} u_2(t, a) \, p_{[(\varepsilon,\xi), s, r']}(t)(da) \right] \pi_T[dt], \; \forall r' : M \to \Delta(A). \quad (15)$$
(14) and (15) in Definition 5 describe the incentive compatibility conditions for the
sender and the receiver, respectively.
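For a finite message space, the aggregation map (13) can be implemented directly; a minimal sketch (the particular $\varepsilon$, $\xi$, $s$ and $r$ below are arbitrary illustrations, with distributions represented as dictionaries from outcomes to probabilities):

```python
# Finite-support implementation of (13): with probability 1 - eps the receiver
# sees the sender's message, with probability eps an independent draw from xi.

def ex_post_action_dist(eps, xi, s_t, r):
    """p_[(eps, xi), s, r](t): distribution over actions, given s(t) = s_t."""
    dist = {}
    for m, pm in s_t.items():
        # the sender's message goes through
        for a, pa in r[m].items():
            dist[a] = dist.get(a, 0.0) + (1 - eps) * pm * pa
        # the message is replaced by noise drawn from xi
        for m2, pn in xi.items():
            for a, pa in r[m2].items():
                dist[a] = dist.get(a, 0.0) + eps * pm * pn * pa
    return dist

eps = 0.25
xi = {"m0": 0.5, "m1": 0.5}               # noise distribution on M
s_t = {"m1": 1.0}                         # sender's (pure) message at some t
r = {"m0": {0.0: 1.0}, "m1": {1.0: 1.0}}  # receiver's pure action rule

p = ex_post_action_dist(eps, xi, s_t, r)
# action 1 with probability (1 - eps) + eps/2, action 0 with probability eps/2
assert abs(p[1.0] - 0.875) < 1e-12 and abs(p[0.0] - 0.125) < 1e-12
```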
5.2.3 Language-barriers equilibria
A valid language-barriers game is defined by a tuple $[\Lambda = \Lambda_1 \times \Lambda_2, \; \pi \in \Delta(T \times \Lambda)]$ such that the marginal distribution of $\pi$ on $T$ matches the fixed $\pi_T$ and assumptions (3) and (4) are satisfied. In the game $[\Lambda, \pi]$, a potential candidate for a language-barriers equilibrium is a strategy profile
$$[\sigma : T \times \Lambda_1 \to \Delta(M), \; \rho : \Lambda_2 \times M \to \Delta(A)].$$
We say $[\sigma, \rho]$ is a valid strategy profile if and only if $\sigma$ and $\rho$ are measurable with respect to $\Lambda_1$ and $\Lambda_2$, respectively, where measurability is as defined in Section 3.
Given $(t, \lambda_1, \lambda_2)$, the sender follows $\sigma(t, \lambda_1) \in \Delta(M)$ to send a random message; upon receiving a realized message $m$, the receiver follows $\rho(\lambda_2, m) \in \Delta(A)$ to play a random action. We use the function $p_{(\sigma, \rho)}$ defined below to aggregate this process:
$$p_{(\sigma, \rho)} : T \times \Lambda_1 \times \Lambda_2 \to \Delta(A), \quad (16)$$
$$p_{(\sigma, \rho)}(t, \lambda_1, \lambda_2)[E] = \int_M \left[ \rho(\lambda_2, m)[E] \right] \sigma(t, \lambda_1)(dm), \; \forall E \subseteq A.$$
Definition 6 For any valid language-barriers game $(\Lambda, \pi)$, we say a valid strategy profile
$$[\sigma : T \times \Lambda_1 \to \Delta(M), \; \rho : \Lambda_2 \times M \to \Delta(A)]$$
is a language-barriers equilibrium if
$$\int_{\Lambda_2} \left( \int_{a \in A} u_1(t, a) \, p_{(\sigma, \rho)}(t, \lambda_1, \lambda_2)[da] - \int_{a \in A} u_1(t, a) \, p_{(\sigma', \rho)}(t, \lambda_1, \lambda_2)[da] \right) \pi[d\lambda_2 \mid t, \lambda_1] \geq 0, \; \forall (t, \lambda_1) \in T \times \Lambda_1, \; \forall \sigma' : T \times \Lambda_1 \to \Delta(M), \quad (17)$$
and
$$\int_{T \times \Lambda_1} \left( \int_{a \in A} u_2(t, a) \, p_{(\sigma, \rho)}(t, \lambda_1, \lambda_2)[da] - \int_{a \in A} u_2(t, a) \, p_{(\sigma, \rho')}(t, \lambda_1, \lambda_2)[da] \right) \pi[(dt, d\lambda_1) \mid \lambda_2] \geq 0, \; \forall \lambda_2 \in \Lambda_2, \; \forall \rho' : \Lambda_2 \times M \to \Delta(A). \quad (18)$$
Furthermore, we say it is an independent-language-barriers equilibrium if $T$ and $\Lambda$ are independent according to $\pi$.
Finally, to compare these different notions of equilibria, we define a notion of outcome equivalence, as in Definition 3.
Definition 7 Consider any arbitration equilibrium $[p : T \to \Delta(A)]$, any noisy-talk equilibrium $(s, r)$ under noise $(\varepsilon, \xi)$, and any language-barriers equilibrium $[\sigma, \rho]$, which, given $t$, induce ex-post action distributions $p(t)$, $p_{[(\varepsilon,\xi), s, r]}(t)$ as defined in (13), and $p_{(\sigma, \rho)}(t)$ as defined in (16), respectively. Any two of these equilibria are outcome-equivalent if they induce the same ex-post action distribution for every $t \in T$.
5.3 Welfare comparison
In this section, we compare the welfare induced by different equilibria. Goltsman, Horner, Pavlov, and Squintani (2009) consider the canonical Crawford and Sobel (1982) model with quadratic utility, where, in any mediation equilibrium, the sender's expected utility differs from the receiver's expected utility by a constant determined by the "bias." In that setting, it is without loss of generality to compare only the sender's (or the receiver's) expected utility across different equilibria. However, in the general communication model studied here, this nice property no longer holds. We thus use a weakly-increasing social welfare function $\Phi$ to aggregate players' utilities, i.e.,
$$\Phi : \mathbb{R}^I \to \mathbb{R} \text{ such that } x_i \geq x_i', \; \forall i \in I \Longrightarrow \Phi\left[ (x_i)_{i \in I} \right] \geq \Phi\left[ (x_i')_{i \in I} \right].$$
That is, if under an equilibrium every player $i \in I$ gets expected utility $x_i$, we say this equilibrium achieves social welfare $\Phi\left[ (x_i)_{i \in I} \right]$. Then, given a social welfare function $\Phi$, let $\Phi^{A\text{-}MH}$, $\Phi^{A\text{-}AS}$, $\Phi^{M}$, $\Phi^{N}$, $\Phi^{LB}$, $\Phi^{ILB}$ denote the suprema of the social welfare achieved by equilibria in each of our possible protocols (arbitration with moral hazard, arbitration with adverse selection, mediation, noisy talk, language barriers, and independent language barriers, respectively). We now present the main result of this section.
Theorem 2 For any weakly increasing social welfare function $\Phi$, we have
$$\Phi^{LB} \geq \Phi^{A\text{-}MH} \geq \Phi^{M} \geq \Phi^{ILB} \geq \Phi^{N}.$$
It is straightforward to see that $\Phi^{A\text{-}MH} \geq \Phi^{M}$, because every mediation equilibrium is an arbitration equilibrium with moral hazard. Given this, Theorem 2 is immediately implied by the following three lemmas. The idea of the proofs is to show that equilibria with language barriers, arbitration with moral hazard, mediation, independent language barriers, and noisy talk correspond to a series of increasingly restrictive incentive compatibility conditions, in that order. The proofs of Lemmas 3, 4 and 5 are relegated to the Appendix.
Lemma 3 For any noisy-talk equilibrium, there exists an outcome-equivalent independent-language-barriers equilibrium.
Lemma 4 For any independent-language-barriers equilibrium, there exists an outcome-equivalent mediation equilibrium.
Lemma 5 For any arbitration equilibrium with moral hazard, there exists an outcome-equivalent language-barriers equilibrium.
It is straightforward to see that $\Phi^{A\text{-}AS} \geq \Phi^{M}$, because every mediation equilibrium is an arbitration equilibrium with adverse selection. One remaining question is how to compare $\Phi^{A\text{-}AS}$ with $\Phi^{LB}$ (and $\Phi^{A\text{-}MH}$); this is discussed in Section 5.4.
5.4 Arbitration equilibria and language-barrier equilibria

It is difficult to directly compare the maximal welfare of language-barrier equilibria and arbitration equilibria with adverse selection (i.e., the original "arbitration equilibrium" defined in Goltsman, Horner, Pavlov, and Squintani (2009)). However, it is easy to compare the two forms of arbitration equilibrium, which is the reason we introduced the new arbitration notion. Furthermore, the comparison helps us clarify the relationship between language-barrier equilibria and arbitration equilibria with adverse selection. The following example shows that neither $\Phi^{A\text{-}AS} \geq \Phi^{A\text{-}MH}$ nor $\Phi^{A\text{-}MH} \geq \Phi^{A\text{-}AS}$ holds generally.
Example 2 Consider the standard cheap-talk model with quadratic utility such that
$$u_1(a, t) = -\left( a - t - \frac{3}{4} \right)^2; \quad u_2(a, t) = -(a - t)^2,$$
and $\mu_T \in \Delta(T)$ is defined as
$$\mu_T(\{0\}) = \mu_T(\{1\}) = \frac{1}{2}.$$
Consider $[\hat{p} : T \to A]$ and $[\tilde{p} : T \to A]$ such that
$$\hat{p}(0) = \frac{3}{4} \text{ and } \hat{p}(1) = \frac{7}{4}; \quad \tilde{p}(0) = 0 \text{ and } \tilde{p}(1) = 1.$$
That is, the sender and the receiver achieve their ideal actions (at both payoff states) in $\hat{p}$ and $\tilde{p}$, respectively. As a result, $\hat{p}$ and $\tilde{p}$ are an arbitration equilibrium with adverse selection and an arbitration equilibrium with moral hazard, respectively. Consider $\hat{\Phi} : \mathbb{R}^2 \to \mathbb{R}$ and $\tilde{\Phi} : \mathbb{R}^2 \to \mathbb{R}$ defined as
$$\hat{\Phi}[u_1, u_2] \equiv u_1 \text{ and } \tilde{\Phi}[u_1, u_2] \equiv u_2.$$
Hence,
$$\hat{\Phi}^{A\text{-}AS} = \tilde{\Phi}^{A\text{-}MH} = 0. \quad (19)$$
It is easy to show that
$$\hat{\Phi}^{LB} \leq -\frac{9}{16}, \quad (20)$$
$$\tilde{\Phi}^{A\text{-}AS} \leq -\frac{1}{128}. \quad (21)$$
Then, (19) and (20) imply $\hat{\Phi}^{A\text{-}AS} > \hat{\Phi}^{LB}$. Furthermore, (19), (21) and Lemma 5 imply $\tilde{\Phi}^{LB} > \tilde{\Phi}^{A\text{-}AS}$. Therefore, neither $\Phi^{LB} \geq \Phi^{A\text{-}AS}$ nor $\Phi^{A\text{-}AS} \geq \Phi^{LB}$ holds generally. The detailed analysis can be found in Appendix A.8.
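The incentive-compatibility claims about $\hat{p}$ and $\tilde{p}$ can be spot-checked with exact arithmetic. Note that (12) quantifies over all deviation rules $\iota$; the rules tested below are an illustrative sample, and the general claim follows because $a = t$ is pointwise optimal for the receiver:

```python
from fractions import Fraction as F

b = F(3, 4)
u1 = lambda a, t: -(a - t - b) ** 2   # sender
u2 = lambda a, t: -(a - t) ** 2       # receiver
T = [0, 1]

p_hat = {0: F(3, 4), 1: F(7, 4)}   # the sender's ideal actions
p_tilde = {0: F(0), 1: F(1)}       # the receiver's ideal actions

# (11) adverse selection: each type prefers its own recommendation under p_hat.
assert all(u1(p_hat[t], t) >= u1(p_hat[s], t) for t in T for s in T)

# p_tilde violates (11): type 0 strictly prefers reporting 1.
assert u1(p_tilde[1], 0) > u1(p_tilde[0], 0)

# (12) moral hazard: under p_tilde, obeying a = t is the receiver's optimum;
# a sample of shifted deviation rules cannot improve his expected payoff.
devs = [lambda a, c=c: a + c for c in (F(-1, 2), F(-1, 4), F(1, 4), F(1, 2))]
base = sum(u2(p_tilde[t], t) for t in T)
assert all(sum(u2(i(p_tilde[t]), t) for t in T) <= base for i in devs)

# With welfare equal to the sender's (resp. receiver's) utility, p_hat
# (resp. p_tilde) attains the maximal possible welfare of 0, as in (19).
assert sum(u1(p_hat[t], t) for t in T) == 0
assert sum(u2(p_tilde[t], t) for t in T) == 0
```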
5.5 Independent-language-barriers equilibria and noisy-talk equilibria

Example 3 Consider the sender-receiver game with $I = \{S, R\}$; $|A_S| = |T_R| = 1$,
$$T_S = M = [0, 1] \text{ and } A_R = (-\infty, \infty),$$
$$u_S = -\left( a_R - t_S - \frac{1}{4} \right)^2 \text{ and } u_R = -(a_R - t_S)^2, \; \forall (a_R, t_S) \in A_R \times T_S,$$
with the common prior $\mu(\{0\}) = \mu\left( \left\{ \frac{35}{72} \right\} \right) = \mu(\{1\}) = \frac{1}{3}$.
Then, there exists an independent-language-barriers equilibrium that strictly Pareto dominates any noisy-talk equilibrium.
The proof is quite tedious and is relegated to Appendix A.9, but here we provide some intuition. Given the prior $\mu$ described above, a mediated-communication equilibrium can be constructed in which the mediator proposes action 0 when the type is 0, action $\frac{1}{2}$ when the type is $\frac{35}{72}$, and mixes when the type is 1, proposing action 1 with probability $\frac{35}{36}$ and action $\frac{1}{2}$ with complementary probability. The same outcome can be obtained in an independent-language-barriers setting in which one language type has three messages (regardless of payoff type, this language type occurs with probability $\frac{35}{36}$) and the other has only two of the three messages available. Then, there is an equilibrium where one common message is used by both language types when the payoff state is 0, to communicate that the action that should be taken is 0; the second common message is used by both language types when the payoff state is $\frac{35}{72}$, to indicate that the action that should be taken is $\frac{1}{2}$; and finally, if the payoff state is 1, the remaining message is used to communicate that the action that should be taken is 1 by the language type that has that message available, while the other language type, who has only two messages, uses the second common message.
These equilibria cannot be replicated by a noisy-talk equilibrium. In mediated com-
munication it is the mediator that injects noise and can do so depending on the payoff
state: in this particular case, the mediator can make the receiver unsure of the sender’s
payoff state when she proposes action $\frac{1}{2}$, but when she proposes actions zero and one,
there is no uncertainty about the underlying payoff state. In the independent-language-
barriers case, this can be replicated because upon observing the second common message
the receiver is again uncertain about the sender’s payoff state, whereas there is no uncer-
tainty with the other two messages. In noisy-talk, this cannot be replicated because the
same noise distribution must apply for each payoff state.
6 Conclusions
At an intuitive level, “language barriers” create obstacles to communication. However,
in this paper we show that they need not do so if a communication protocol different from
that in Crawford and Sobel (1982) is allowed. In particular, with N-dimensional communication, (almost) efficiency can always be achieved in common-interest games, and any
equilibrium in the canonical cheap-talk game can be mimicked by an equilibrium under
any “language barriers.” As a result, players cannot be worse off under “language barri-
ers.” Of course, plenty of examples of miscommunication exist in the real world, so our
results imply that miscommunication must arise from something outside of this setting.
A simple extension would be to incorporate in the model the cognitive cost of sending
and comprehending more complex messages thus reconciling our results with those of
Blume and Board (2013).19 More generally, miscommunication might also arise from the
fact that real-world messages have a semantic meaning and different agents might not
have the same vocabulary. In our model the meaning of messages is emergent in equi-
librium. We would argue that this makes the notion of equilibrium itself unsuitable for
studying this aspect of language, whereas a more promising approach would be based on
learning.
The second part of our paper shows that even if the original (1-dimensional) com-
munication in Crawford and Sobel (1982) is imposed, some language barriers can im-
prove upon equilibria obtainable in their absence. In particular, we show the optimal
independent-language-barrier equilibrium always weakly dominates, and sometimes
strictly dominates, any generalized noisy-talk equilibrium; this class includes the equilibria
in the canonical cheap-talk model of Crawford and Sobel (1982) (without noise) as special
cases.20
We also believe that the Blume and Board (2013) framework with language types
utilized here is rich enough to accommodate both the standard cheap-talk and the “persuasion games” literatures (e.g., Milgrom (1981)) as special cases. In cheap talk, the sender
can send any possible message, whereas in persuasion games, the privately informed
parties cannot lie about the payoff states. This no-lie assumption is equivalent to $M = 2^T \setminus \{\emptyset\}$, with each payoff state $t$ corresponding to a language type endowed with the
messages (subsets of $T$) containing $t$. I.e., at $t$, the sender can send only a message $E$ such
that $t \in E$, meaning that only states in $E$ are possibly true. Much of the literature on persuasion games has focused on the conditions on preferences necessary to guarantee full
communication of the payoff type.21 It is easy to guarantee full communication with
arbitrary language barriers, but future research should focus on determining
the minimal conditions on language types necessary to guarantee full communication for
a given preference profile.
19 See Garicano and Prat (2013) for a discussion of cognitive costs in communication in organisations.
20 One open question remains. We show that the optimal mediated communication is always weakly better than communication under the optimal independent “language barriers.” Is the converse true?
21 See Seidmann and Winter (1997), Giovannoni and Seidmann (2007) and Hagenbach and Perez-Richet (2014) for details.
A Proofs
A.1 The proof of Lemma 1
Fix any $(i, j, \lambda) \in I \times I \times \Lambda$ with $i \neq j$. Recall $N \geq 3 \cdot \left|\cup_{l \in I} \Lambda_l\right|$, and hence $|\Lambda_i| \leq N$.
Label the elements in $\Lambda_i$ as $\lambda_i^{(1)}, \lambda_i^{(2)}, \ldots, \lambda_i^{(K)}$, where $K = |\Lambda_i| \leq N$.
For each $\lambda_i^{(k)} \in \Lambda_i$ with $k \leq K$, we have $\lambda_j \cap \lambda_i^{(k)} \neq \emptyset$ and $\left|\lambda_i^{(k)}\right| \geq 2$, due to (3) and
(4). Thus, we fix some $m^{(k)} \in \lambda_j \cap \lambda_i^{(k)}$, and some $\widetilde{m}^{(k)} \in \lambda_i^{(k)} \setminus \{m^{(k)}\}$, i.e., $m^{(k)} \neq \widetilde{m}^{(k)}$.
Note that $\lambda_j$ can distinguish $m^{(k)}$ from $\widetilde{m}^{(k)}$, i.e.,
$$m^{(k)} \nsim_{\lambda_j} \widetilde{m}^{(k)}, \quad (22)$$
whether $\widetilde{m}^{(k)} \in \lambda_j$ or $\widetilde{m}^{(k)} \notin \lambda_j$.
Then, define $\Upsilon^{(i, \lambda_j)} : \Lambda_i \longrightarrow M^N$ as follows. For each $k \in \{1, 2, \ldots, K\}$,
$$\Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k)}\right] = [m_l]_{l=1}^N \in M^N \text{ such that } m_l = \begin{cases} m^{(k)}, & \text{if } l = k; \\ \widetilde{m}^{(k)}, & \text{otherwise.} \end{cases}$$
That is, type $\lambda_i^{(k)}$ uses $m^{(k)}$ to denote ”yes” and $\widetilde{m}^{(k)}$ for ”no.” Furthermore, player $i$ associates each of the first $K$ dimensions of the message $\Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k)}\right]$ to one element in $\Lambda_i$,
and player $i$ reveals whether he is that element in the associated dimension. Precisely, $\lambda_i^{(k)}$
says ”yes” (i.e., $m^{(k)}$) in the $k$-th dimension, and ”no” (i.e., $\widetilde{m}^{(k)}$) in all other dimensions.
For $k \neq k'$, we show $\Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k)}\right] \nsim_{\lambda_j} \Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k')}\right]$, as needed in (6). By the definition of $\Upsilon^{(i, \lambda_j)}$, we have
$$\Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k)}\right] = [m_l]_{l=1}^N = \left[ m_k = m^{(k)}, \; \left(m_l = \widetilde{m}^{(k)}\right)_{l \neq k} \right];$$
$$\Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k')}\right] = [\widehat{m}_l]_{l=1}^N = \left[ \widehat{m}_{k'} = m^{(k')}, \; \left(\widehat{m}_l = \widetilde{m}^{(k')}\right)_{l \neq k'} \right].$$
Consider two cases: (1) $m^{(k)} \neq \widetilde{m}^{(k')}$ and (2) $m^{(k)} = \widetilde{m}^{(k')}$. In case (1), $m^{(k)} \neq \widetilde{m}^{(k')}$ and
$m^{(k)} \in \lambda_j$ imply
$$m_k = m^{(k)} \nsim_{\lambda_j} \widetilde{m}^{(k')} = \widehat{m}_k,$$
i.e., in the $k$-th dimension, $m_k \nsim_{\lambda_j} \widehat{m}_k$, which further implies $\Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k)}\right] \nsim_{\lambda_j} \Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k')}\right]$.
In case (2), recall $N \geq 3$ by (5). Pick any $k'' \in \{1, \ldots, N\} \setminus \{k, k'\}$. Then, (22) implies
$$m_{k''} = \widetilde{m}^{(k)} \nsim_{\lambda_j} m^{(k)} = \widetilde{m}^{(k')} = \widehat{m}_{k''},$$
i.e., in the $k''$-th dimension, $m_{k''} \nsim_{\lambda_j} \widehat{m}_{k''}$, which further implies $\Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k)}\right] \nsim_{\lambda_j} \Upsilon^{(i, \lambda_j)}\left[\lambda_i^{(k')}\right]$. $\blacksquare$
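The yes/no encoding behind $\Upsilon^{(i, \lambda_j)}$ can be illustrated in a few lines of Python. This is our own toy sketch: the message sets, the helper names, and the crude distinguishability test below are hypothetical stand-ins for the paper's primitives, not its formal definitions.

```python
# Type k says "yes" (a message lambda_j understands) in dimension k and "no"
# in every other dimension, so any two types' N-dimensional messages differ
# in some dimension that lambda_j can read.

def upsilon(k, yes, no, N):
    """N-dimensional message of the k-th language type (0-indexed)."""
    return tuple(yes if l == k else no for l in range(N))

receiver_lang = {"a", "b", "c"}                      # messages lambda_j understands
sender_types = [{"a", "x"}, {"b", "a"}, {"c", "x"}]  # toy language types of player i

N = 3 * len(sender_types)  # mirrors N >= 3 * |union of language types|
msgs = []
for k, lam in enumerate(sender_types):
    yes = min(lam & receiver_lang)  # m^(k): a message shared with lambda_j
    no = min(lam - {yes})           # m~^(k): any other message of the type
    msgs.append(upsilon(k, yes, no, N))

# Crude stand-in for "distinguishable under lambda_j": two vectors differ in
# a dimension where at least one entry is readable by the receiver.
def distinguishable(m1, m2):
    return any(x != y and (x in receiver_lang or y in receiver_lang)
               for x, y in zip(m1, m2))

print(all(distinguishable(msgs[i], msgs[j])
          for i in range(len(msgs)) for j in range(len(msgs)) if i != j))  # True
```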
A.2 The proof of Lemma 2
Fix any $(i, j, \lambda) \in I \times I \times \Lambda$ with $i \neq j$. Recall $\lambda_i \cap \lambda_j \neq \emptyset$ and $|\lambda_i| \geq 2$. Thus, we fix some
$m \in \lambda_i \cap \lambda_j$, and some $\widetilde{m} \in \lambda_i \setminus \{m\}$, i.e., $m \neq \widetilde{m}$. Note that $\lambda_j$ can distinguish $m$ from $\widetilde{m}$, i.e.,
$$m \nsim_{\lambda_j} \widetilde{m}, \quad (23)$$
whether $\widetilde{m} \in \lambda_j$ or $\widetilde{m} \notin \lambda_j$.
Recall $\widehat{N} \geq \left|\mathcal{E}_i^{(\sigma^*, \rho^*)}\right|$ by (7). Label the elements in $\mathcal{E}_i^{(\sigma^*, \rho^*)}$ as $m^{(1)}, m^{(2)}, \ldots, m^{(K)}$,
where $K = \left|\mathcal{E}_i^{(\sigma^*, \rho^*)}\right| \leq \widehat{N}$. Then, define $\Gamma^{(\lambda_i, \lambda_j)} : \mathcal{E}_i^{(\sigma^*, \rho^*)} \longrightarrow (\lambda_i)^{\widehat{N}}$ as follows. For each
$k \in \{1, 2, \ldots, K\}$,
$$\Gamma^{(\lambda_i, \lambda_j)}\left[m^{(k)}\right] = [m_l]_{l=1}^{\widehat{N}} \in M^{\widehat{N}} \text{ such that } m_l = \begin{cases} m, & \text{if } l = k; \\ \widetilde{m}, & \text{otherwise.} \end{cases}$$
That is, type $\lambda_i$ uses $m$ to denote ”yes” and $\widetilde{m}$ for ”no.” Furthermore, $\lambda_i$ associates each
of the first $K$ dimensions of the message $\Gamma^{(\lambda_i, \lambda_j)}\left[m^{(k)}\right]$ to one element in $\mathcal{E}_i^{(\sigma^*, \rho^*)}$, and $\lambda_i$
reveals whether he intends to send that element in the associated dimension. Precisely,
to send the message $m^{(k)} \in \mathcal{E}_i^{(\sigma^*, \rho^*)}$, $\lambda_i$ says ”yes” (i.e., $m$) in the $k$-th dimension, and ”no”
(i.e., $\widetilde{m}$) in all other dimensions.
For $k \neq k'$, we show $\Gamma^{(\lambda_i, \lambda_j)}\left[m^{(k)}\right] \nsim_{\lambda_j} \Gamma^{(\lambda_i, \lambda_j)}\left[m^{(k')}\right]$, as needed in (8). By the
definition of $\Gamma^{(\lambda_i, \lambda_j)}$, we have
$$\Gamma^{(\lambda_i, \lambda_j)}\left[m^{(k)}\right] = [m_l]_{l=1}^{\widehat{N}} = \left[ m_k = m, \; (m_l = \widetilde{m})_{l \neq k} \right];$$
$$\Gamma^{(\lambda_i, \lambda_j)}\left[m^{(k')}\right] = [\widehat{m}_l]_{l=1}^{\widehat{N}} = \left[ \widehat{m}_{k'} = m, \; (\widehat{m}_l = \widetilde{m})_{l \neq k'} \right].$$
Since $k \neq k'$, (23) implies
$$m_k = m \nsim_{\lambda_j} \widetilde{m} = \widehat{m}_k,$$
i.e., in the $k$-th dimension, $m_k \nsim_{\lambda_j} \widehat{m}_k$, which further implies $\Gamma^{(\lambda_i, \lambda_j)}\left[m^{(k)}\right] \nsim_{\lambda_j} \Gamma^{(\lambda_i, \lambda_j)}\left[m^{(k')}\right]$. $\blacksquare$
A.3 Proof of Proposition 1
We use the following two lemmas to prove Proposition 1, and the proofs can be found in
Appendix A.3.1 and A.3.2.
Lemma 6 Suppose Assumption 2 holds. For any $\varepsilon > 0$, there exists $\delta > 0$ such that
$$\forall t, t' \in T, \; d(t, t') < \delta \Longrightarrow \left| \max_{a \in A} u(t, a) - u(t, a^*) \right| < \varepsilon, \quad \forall a^* \in \arg\max_{a \in A} u(t', a). \quad (24)$$
Lemma 7 For any game satisfying Assumption 2, there exists an optimal equilibrium $(\sigma^*, \rho^*)$ in
$G^* = \left\langle I, M, T, \Lambda^*, \pi^*, A, (u_i)_{i \in I}, N^* \right\rangle$ such that $U(\sigma^*, \rho^*) \geq U(\sigma, \rho)$ for any strategy profile
$(\sigma, \rho)$ in $G^*$.
Proof of Proposition 1: Fix any game satisfying Assumption 2 and any $\varepsilon > 0$. By
Lemma 6, there exists $\delta > 0$ such that
$$\forall t, t' \in T, \; d(t, t') < \delta \Longrightarrow \left| \max_{a \in A} u(t, a) - u(t, a^*) \right| < \varepsilon, \quad \forall a^* \in \arg\max_{a \in A} u(t', a). \quad (25)$$
Since $T$ is compact, it is totally bounded. Hence, there exists a positive integer $K$ such
that $T$ can be partitioned into $\{E_1, \ldots, E_K\}$ with
$$t, t' \in E_k \Longrightarrow d(t, t') < \delta, \quad \forall k \in \{1, \ldots, K\}. \quad (26)$$
For each $k \in \{1, \ldots, K\}$, fix some $t_k \in E_k$ and some $a_k \in \arg\max_{a \in A} u(t_k, a)$. Then,
$$\sum_{k \in \{1, \ldots, K\}} \int_{t \in E_k} u(t, a_k) \, \pi_T(dt) \geq \sum_{k \in \{1, \ldots, K\}} \int_{t \in E_k} \left[ \max_{a \in A} u(t, a) - \varepsilon \right] \pi_T(dt) = \int_{t \in T} \left[ \max_{a \in A} u(t, a) \right] \pi_T(dt) - \varepsilon, \quad (27)$$
where the inequality follows from (25) and (26).
Suppose $|M| \geq K$. Then, the expected utility $\sum_{k \in \{1, \ldots, K\}} \int_{t \in E_k} u(t, a_k) \, \pi_T(dt)$ can be
achieved by a strategy profile: fix $K$ messages, $m_1, \ldots, m_K$; the sender sends $m_k$ if and
only if $t \in E_k$; and the receiver plays $a_k$ if and only if he receives $m_k$. By Lemma 7, an optimal
equilibrium exists; denote it by $(\sigma^*, \rho^*)$, and hence
$$U(\sigma^*, \rho^*) \geq \sum_{k \in \{1, \ldots, K\}} \int_{t \in E_k} u(t, a_k) \, \pi_T(dt). \quad (28)$$
Furthermore,
$$\int_{t \in T} \left[ \max_{a \in A} u(t, a) \right] \pi_T(dt) \geq U(\sigma^*, \rho^*). \quad (29)$$
Thus, (27), (28) and (29) imply
$$\left| U(\sigma^*, \rho^*) - \int_{t \in T} \left[ \max_{a \in A} u(t, a) \right] \pi_T(dt) \right| \leq \varepsilon,$$
which completes the proof of Proposition 1. $\blacksquare$
A.3.1 Proof of Lemma 6
Since $u$ is continuous and $T$, $A$ are compact, $u$ is uniformly continuous. Then, by Berge's
Maximum Theorem, $\phi(t) \equiv \max_{a \in A} u(t, a)$ is continuous on $T$. Since $T$ is compact,
$\phi$ is uniformly continuous, and hence,
$$\forall \varepsilon > 0, \exists \delta > 0 \text{ such that } d(t, t') < \delta \Longrightarrow \left| \max_{a \in A} u(t, a) - \max_{a \in A} u(t', a) \right| < \frac{\varepsilon}{2}. \quad (30)$$
The uniform continuity of $u$ implies
$$\forall \varepsilon > 0, \exists \delta > 0 \text{ such that } d(t, t') < \delta \Longrightarrow \left| u(t, a^*) - \max_{a \in A} u(t', a) \right| = \left| u(t, a^*) - u(t', a^*) \right| < \frac{\varepsilon}{2}, \quad \forall a^* \in \arg\max_{a \in A} u(t', a). \quad (31)$$
Then, (30) and (31) imply
$$\forall \varepsilon > 0, \exists \delta > 0 \text{ such that } d(t, t') < \delta \Longrightarrow \left| \max_{a \in A} u(t, a) - u(t, a^*) \right| < \varepsilon, \quad \forall a^* \in \arg\max_{a \in A} u(t', a).$$
This completes the proof of Lemma 6. $\blacksquare$
A.3.2 Proof of Lemma 7
Suppose $|M| = n$. Define a function $\psi : A^n \longrightarrow \mathbb{R}$ as follows:
$$\psi(a_1, \ldots, a_n) = \int_{t \in T} \left[ \max_{a \in \{a_1, \ldots, a_n\}} u(t, a) \right] \pi_T(dt).$$
First, we show that $\psi$ is uniformly continuous, i.e.,
$$\forall \varepsilon > 0, \exists \delta > 0 \text{ such that } |\widehat{a}_k - \widetilde{a}_k| < \delta, \forall k \in \{1, \ldots, n\} \Longrightarrow |\psi(\widehat{a}_1, \ldots, \widehat{a}_n) - \psi(\widetilde{a}_1, \ldots, \widetilde{a}_n)| < \varepsilon. \quad (32)$$
Consider any $(\widehat{a}_1, \ldots, \widehat{a}_n)$ and $(\widetilde{a}_1, \ldots, \widetilde{a}_n)$ such that $\max_{k \in \{1, \ldots, n\}} |\widehat{a}_k - \widetilde{a}_k| < \delta$. For each
$t \in T$, fix any $k(t) \in \arg\max_{k \in \{1, \ldots, n\}} u(t, \widehat{a}_k)$. We thus have
$$\psi(\widehat{a}_1, \ldots, \widehat{a}_n) = \int_{t \in T} \left[ u\left(t, \widehat{a}_{k(t)}\right) \right] \pi_T(dt). \quad (33)$$
By uniform continuity of $u$,
$$\forall \varepsilon > 0, \exists \delta > 0 \text{ such that } |\widehat{a}_k - \widetilde{a}_k| < \delta, \forall k \in \{1, \ldots, n\} \Longrightarrow \left| \int_{t \in T} \left[ u\left(t, \widehat{a}_{k(t)}\right) \right] \pi_T(dt) - \int_{t \in T} \left[ u\left(t, \widetilde{a}_{k(t)}\right) \right] \pi_T(dt) \right| < \varepsilon. \quad (34)$$
Furthermore, by the definition of $\psi(\widetilde{a}_1, \ldots, \widetilde{a}_n)$, we have
$$\psi(\widetilde{a}_1, \ldots, \widetilde{a}_n) \geq \int_{t \in T} \left[ u\left(t, \widetilde{a}_{k(t)}\right) \right] \pi_T(dt). \quad (35)$$
Then, (33), (34) and (35) imply
$$\forall \varepsilon > 0, \exists \delta > 0 \text{ such that } |\widehat{a}_k - \widetilde{a}_k| < \delta, \forall k \in \{1, \ldots, n\} \Longrightarrow \psi(\widetilde{a}_1, \ldots, \widetilde{a}_n) \geq \psi(\widehat{a}_1, \ldots, \widehat{a}_n) - \varepsilon. \quad (36)$$
If we exchange the roles of $(\widetilde{a}_1, \ldots, \widetilde{a}_n)$ and $(\widehat{a}_1, \ldots, \widehat{a}_n)$ and repeat the analysis, we get
$$\forall \varepsilon > 0, \exists \delta > 0 \text{ such that } |\widehat{a}_k - \widetilde{a}_k| < \delta, \forall k \in \{1, \ldots, n\} \Longrightarrow \psi(\widehat{a}_1, \ldots, \widehat{a}_n) \geq \psi(\widetilde{a}_1, \ldots, \widetilde{a}_n) - \varepsilon. \quad (37)$$
Therefore, (36) and (37) imply (32), i.e., $\psi$ is uniformly continuous.
Second, there exists
$$(a_1^*, \ldots, a_n^*) \in \arg\max_{(a_1, \ldots, a_n) \in A^n} \psi(a_1, \ldots, a_n), \quad (38)$$
due to compactness of $A$ and continuity of $\psi$, i.e.,
$$\int_{t \in T} \left[ \max_{a \in \{a_1^*, \ldots, a_n^*\}} u(t, a) \right] \pi_T(dt) \geq \int_{t \in T} \left[ \max_{a \in \{a_1, \ldots, a_n\}} u(t, a) \right] \pi_T(dt), \quad \forall (a_1, \ldots, a_n) \in A^n.$$
Third, recall that there are at most $|M| = n$ messages; label the elements in $M$ as
$m_1, \ldots, m_n$. For any fixed strategy profile $(\sigma, \rho)$, let $a_k \in A$ denote the action taken by the
receiver upon getting $m_k$ under $(\sigma, \rho)$. Then, the expected utility of the players under $(\sigma, \rho)$
is at most $\int_{t \in T} \left[ \max_{a \in \{a_1, \ldots, a_n\}} u(t, a) \right] \pi_T(dt)$.
Finally, $(a_1^*, \ldots, a_n^*)$ as defined in (38) corresponds to an equilibrium, denoted by
$(\sigma^*, \rho^*)$, under which the players' expected utility is $\int_{t \in T} \left[ \max_{a \in \{a_1^*, \ldots, a_n^*\}} u(t, a) \right] \pi_T(dt)$. To
see this, define
$$E_k = \left\{ t \in T : a_k^* \in \arg\max_{a \in \{a_1^*, \ldots, a_n^*\}} u(t, a) \right\}, \quad \forall k \in \{1, 2, \ldots, n\}.$$
Then, define
$$\overline{E}_1 = E_1 \text{ and } \overline{E}_k = E_k \setminus \left[ \cup_{l=1}^{k-1} E_l \right], \quad \forall k \in \{2, \ldots, n\}.$$
As a result, $\left\{ \overline{E}_1, \ldots, \overline{E}_n \right\}$ is a partition of $T$, and each $a_k^*$ is an optimal action for every
$t \in \overline{E}_k$. Thus, the following strategy profile is an equilibrium:
sender's strategy: send $m_k$ if and only if $t \in \overline{E}_k$, $\forall k \in \{1, 2, \ldots, n\}$;
receiver's strategy: play $a_k^*$ if and only if he receives $m_k$, $\forall k \in \{1, 2, \ldots, n\}$.
The incentive compatibility of the sender is implied by the definition of $\overline{E}_k$, and the incentive compatibility of the receiver is implied by $(a_1^*, \ldots, a_n^*) \in \arg\max_{(a_1, \ldots, a_n) \in A^n} \psi(a_1, \ldots, a_n)$.
To sum up, the last two points show the existence of an equilibrium $(\sigma^*, \rho^*)$ such that
$U(\sigma^*, \rho^*) \geq U(\sigma, \rho)$ for any strategy profile $(\sigma, \rho)$. $\blacksquare$
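The construction in the proof of Lemma 7 amounts to a greedy assignment: each type is matched to the best action among $a_1^*, \ldots, a_n^*$, with ties broken by the lowest index, exactly as in the definition of $\overline{E}_k$. Here is a small numerical sketch (our own toy instance, with a made-up five-point type space and quadratic common-interest payoffs):

```python
from fractions import Fraction as F

def build_partition(types, actions, u):
    """Assign each t to the lowest-index action maximizing u(t, a)."""
    cells = {k: set() for k in range(len(actions))}
    for t in types:
        top = max(u(t, a) for a in actions)
        k = min(k for k, a in enumerate(actions) if u(t, a) == top)  # tie-break
        cells[k].add(t)
    return cells

# Toy instance: u(t, a) = -(t - a)^2; t = 3/4 is exactly midway between the
# actions 3/5 and 9/10, so the tie goes to the lower index, as in E-bar_k.
types = [F(0), F(1, 4), F(1, 2), F(3, 4), F(1)]
actions = [F(1, 10), F(3, 5), F(9, 10)]
cells = build_partition(types, actions, lambda t, a: -(t - a) ** 2)
print(sorted(cells[1]))  # [Fraction(1, 2), Fraction(3, 4)]
```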
A.4 Weak-language-barrier equilibria
We introduce a notion of weak-language-barrier equilibria (resp. weak-independent-
language-barrier equilibria), which differ from language-barrier equilibria (resp. independent-
language-barrier equilibria) only in one assumption:
$$|\lambda_i| \geq 2, \quad \forall (i, \lambda) \in I \times \Lambda. \quad (39)$$
That is, every language type must have at least two messages in any language-barrier
equilibrium, but language types in a weak-language-barrier equilibrium may be endowed
with just one single message.
Clearly, a language-barrier equilibrium is a weak-language-barrier equilibrium. Con-
versely, for any weak-language-barrier equilibrium, there is an outcome-equivalent language-
barrier equilibrium, which is summarized in the following lemma. Because of this, it is
without loss of generality to focus on weak-language-barrier equilibria.
Lemma 8 For any weak-language-barrier equilibrium, there exists an outcome-equivalent language-
barrier equilibrium. Furthermore, for any weak-independent-language-barrier equilibrium, there
exists an outcome-equivalent independent-language-barrier equilibrium.
Proof: Fix any valid language-barrier game $(\Lambda, \pi)$ and any weak-language-barrier
equilibrium
$$[\sigma : T \times \Lambda_1 \rightarrow \triangle(M), \; \rho : \Lambda_2 \times M \rightarrow \triangle(A)].$$
Recall $M = \mathbb{R}$. Pick any disjoint $M^* (\subset M)$ and $M^{**} (\subset M)$ which are both homeomorphic to $M$, e.g., $M^* = \left(0, \frac{1}{3}\right)$ and $M^{**} = \left(\frac{2}{3}, 1\right)$. Let
$$\gamma^* : M \longrightarrow M^* \text{ and } \gamma^{**} : M \longrightarrow M^{**}$$
denote the homeomorphisms, and let $(\gamma^*)^{-1}$ and $(\gamma^{**})^{-1}$ denote the inverse functions.
Define a new valid language-barrier game $(\widetilde{\Lambda}, \widetilde{\pi})$:
$$\widetilde{\Lambda}_1 = \{\gamma^*(\lambda_1) \cup \gamma^{**}(\lambda_1) : \lambda_1 \in \Lambda_1\}; \qquad \widetilde{\Lambda}_2 = \{\gamma^*(\lambda_2) \cup \gamma^{**}(\lambda_2) : \lambda_2 \in \Lambda_2\};$$
$$\widetilde{\pi}(E) = \pi\left(\{[t, \lambda_1, \lambda_2] : [t, \gamma^*(\lambda_1) \cup \gamma^{**}(\lambda_1), \gamma^*(\lambda_2) \cup \gamma^{**}(\lambda_2)] \in E\}\right), \quad \forall E \subset T \times 2^M \times 2^M,$$
i.e., each of the sender's language types $\lambda_1$ is transformed into a new type containing two
copies of the original type, with the first copy transformed from $\lambda_1$ via $\gamma^*$ and the second
copy via $\gamma^{**}$; a similar construction applies to the receiver's language types; the
new prior $\widetilde{\pi}$ inherits the distribution from the original prior $\pi$.
For any $\mu \in \triangle(M)$, define $\gamma^*(\mu) \in \triangle(M^*)$ as
$$\gamma^*(\mu)[E] = \mu\left((\gamma^*)^{-1}[E]\right), \quad \forall E \subset M^*,$$
i.e., any random message generated by $\mu$ is transformed into a message in $M^*$ via $\gamma^*$,
and $\gamma^*(\mu)$ is the distribution of the transformed message.
For each $\lambda_2 \in \Lambda_2$ such that $\lambda_2 \subsetneq M$, fix any $m_{\lambda_2} \in M \setminus \lambda_2$. Furthermore, if $\lambda_2 = M$,
fix any $m_{\lambda_2} \in M$. The sole purpose of the construction of $m_{\lambda_2}$ is the measurability (with
respect to $\lambda_2$) of $\widetilde{\rho}$ defined below.
We now define the outcome-equivalent language-barrier equilibrium
$$\left[\widetilde{\sigma} : T \times \widetilde{\Lambda}_1 \rightarrow \triangle(M), \; \widetilde{\rho} : \widetilde{\Lambda}_2 \times M \rightarrow \triangle(A)\right],$$
$$\widetilde{\sigma}[t, \gamma^*(\lambda_1) \cup \gamma^{**}(\lambda_1)] = \gamma^*(\sigma[t, \lambda_1]),$$
$$\widetilde{\rho}[\gamma^*(\lambda_2) \cup \gamma^{**}(\lambda_2), m] = \begin{cases} \rho\left(\lambda_2, (\gamma^*)^{-1}(m)\right), & \text{if } m \in M^*; \\ \rho\left(\lambda_2, (\gamma^{**})^{-1}(m)\right), & \text{if } m \in M^{**}; \\ \rho\left(\lambda_2, m_{\lambda_2}\right), & \text{otherwise.} \end{cases}$$
That is, a new sender type $\gamma^*(\lambda_1) \cup \gamma^{**}(\lambda_1)$ follows the strategy $\sigma[t, \lambda_1]$ of the old
type $\lambda_1$, but transforms the (random) message into a message in $M^*$ via $\gamma^*$; a new
receiver type $\gamma^*(\lambda_2) \cup \gamma^{**}(\lambda_2)$ first decodes the messages in $M^*$ and $M^{**}$ via $(\gamma^*)^{-1}$ and
$(\gamma^{**})^{-1}$, respectively, and then follows the strategies $\rho\left(\lambda_2, (\gamma^*)^{-1}(m)\right)$ and $\rho\left(\lambda_2, (\gamma^{**})^{-1}(m)\right)$
of the old type $\lambda_2$.
First, with probability 1, the sender sends messages in $M^*$ and $M^{**}$. Second, the
receiver treats $M^*$ and $M^{**}$ as transformed copies of the same set $M$ (via $\gamma^*$ and
$\gamma^{**}$, respectively). Hence, it is without loss of generality for the sender to send messages
only in $M^*$.22 Given this, $[\widetilde{\sigma}, \widetilde{\rho}]$ simply replicates $[\sigma, \rho]$, and $[\widetilde{\sigma}, \widetilde{\rho}]$ inherits the incentive compatibility of the players from $[\sigma, \rho]$. Therefore, $[\widetilde{\sigma}, \widetilde{\rho}]$ is an outcome-equivalent
language-barrier equilibrium.
A similar argument applies to weak-independent-language-barrier equilibria. $\blacksquare$
A.5 Proof of Lemma 3
In light of Lemma 8, it is without loss of generality for us to focus on weak-independent-language-barrier equilibria. Fix any noisy-talk game $(\varepsilon, \xi) \in [0, 1] \times \triangle(M)$, and any
noisy-talk equilibrium $([s : T \longrightarrow \triangle(M)], [r : M \longrightarrow \triangle(A)])$ in the game. Define a language-barrier game $(\Lambda, \pi)$, such that $T$ and $\Lambda$ are independent under $\pi$, and
$$\Lambda_1 = \{M\} \cup \{\{m\} : m \in M\}; \qquad \Lambda_2 = \{M\};$$
$$\pi_\Lambda[\{M\} \times \{M\}] = 1 - \varepsilon; \qquad \pi_\Lambda[E \times \{M\}] = \varepsilon \cdot \xi[\{m : \{m\} \in E\}], \quad \forall E \subset 2^M \setminus \{M\}.$$
That is, the receiver understands all messages in $M$; with probability $1 - \varepsilon$, the sender
understands all messages in $M$, and with probability $\varepsilon$, the sender is endowed with a
single message; conditional on the probability-$\varepsilon$ event, the distribution follows $\xi$, with
$\{m\}$ replacing $m$.
Then, we define a weak-independent-language-barrier equilibrium
$$[\sigma : T \times \Lambda_1 \rightarrow \triangle(M), \; \rho : \Lambda_2 \times M \rightarrow \triangle(A)]$$
such that for every $(t, m) \in T \times M$,
$$\sigma(t, \lambda_1 = M) = s(t), \qquad \sigma(t, \lambda_1 = \{m\}) = \delta_m, \qquad \rho(\lambda_2 = M, m) = r(m),$$
where $\delta_m$ denotes the Dirac measure on $m$. Clearly, incentive compatibility for every $\lambda_1 = \{m\}$ is satisfied. The incentive compatibility of the sender's language type $\lambda_1 = M$
and of the receiver's language type $\lambda_2 = M$ in $[\sigma, \rho]$ is inherited from the incentive compatibility
of the sender and the receiver in the noisy-talk equilibrium $(s, r)$, respectively. I.e., $[\sigma, \rho]$
is an outcome-equivalent weak-independent-language-barrier equilibrium. Finally, by
Lemma 8, an outcome-equivalent independent-language-barrier equilibrium exists. $\blacksquare$
22 Any message in $M^{**}$ has a corresponding message in $M^*$ which plays the same role.
A.6 Proof of Lemma 4
Fix any valid language-barrier game $(\Lambda, \pi)$, and any independent-language-barrier equilibrium
$$[\sigma : T \times \Lambda_1 \rightarrow \triangle(M), \; \rho : \Lambda_2 \times M \rightarrow \triangle(A)].$$
Recall $p^{(\sigma, \rho)} : T \times \Lambda_1 \times \Lambda_2 \rightarrow \triangle(A)$ defined in (16):
$$p^{(\sigma, \rho)}(t, \lambda_1, \lambda_2)[E] = \int_M [\rho(\lambda_2, m)[E]] \, \sigma(t, \lambda_1)(dm), \quad \forall E \subset A,$$
i.e., $p^{(\sigma, \rho)}(t, \lambda_1, \lambda_2)$ is the ex-post action distribution induced by $[\sigma, \rho]$, given $(t, \lambda_1, \lambda_2)$.
Then, define $\mathcal{P}^{(\sigma, \rho)} : T \rightarrow \triangle(A)$ by
$$\mathcal{P}^{(\sigma, \rho)}(t)[E] = \int_\Lambda \left[ p^{(\sigma, \rho)}(t, \lambda_1, \lambda_2)[E] \right] \pi_\Lambda[d\lambda_1, d\lambda_2], \quad \forall E \subset A, \quad (40)$$
i.e., $\mathcal{P}^{(\sigma, \rho)}(t)$ is the ex-post action distribution induced by $[\sigma, \rho]$, given $t$. We now show
that $\mathcal{P}^{(\sigma, \rho)} : T \rightarrow \triangle(A)$ defined above is a mediation equilibrium.
First, since $[\sigma, \rho]$ is a language-barrier equilibrium, (17) in Definition 6 implies that, for
all $(t, \lambda_1) \in T \times \Lambda_1$ and all $\sigma' : T \times \Lambda_1 \longrightarrow \triangle(M)$,
$$\int_{\Lambda_2} \left( \int_{a \in A} u_1(t, a) \, p^{(\sigma, \rho)}(t, \lambda_1, \lambda_2)[da] - \int_{a \in A} u_1(t, a) \, p^{(\sigma', \rho)}(t, \lambda_1, \lambda_2)[da] \right) \pi[d\lambda_2 \mid t, \lambda_1] \geq 0. \quad (41)$$
Recall that $T$ and $\Lambda$ are independent under $\pi$, and hence (41) reduces to
$$\int_{\Lambda_2} \left( \int_{a \in A} u_1(t, a) \, p^{(\sigma, \rho)}(t, \lambda_1, \lambda_2)[da] - \int_{a \in A} u_1(t, a) \, p^{(\sigma', \rho)}(t, \lambda_1, \lambda_2)[da] \right) \pi[d\lambda_2 \mid \lambda_1] \geq 0. \quad (42)$$
Given the definition of $\mathcal{P}^{(\sigma, \rho)}$ in (40), integrating (42) over $\Lambda_1$ yields
$$\int_{a \in A} u_1[t, a] \, \mathcal{P}^{(\sigma, \rho)}(t)[da] \geq \int_{a \in A} u_1[t, a] \, \mathcal{P}^{(\sigma', \rho)}(t)[da], \quad \forall t, \sigma'. \quad (43)$$
Finally, for every $t' \in T$, consider $\sigma'(t) \equiv \sigma(t')$, and (43) becomes
$$\int_{a \in A} u_1[t, a] \, \mathcal{P}^{(\sigma, \rho)}(t)[da] \geq \int_{a \in A} u_1[t, a] \, \mathcal{P}^{(\sigma, \rho)}(t')[da], \quad \forall t, t' \in T.$$
Second, since $[\sigma, \rho]$ is a language-barrier equilibrium, (18) in Definition 6 implies that,
for all $\lambda_2 \in \Lambda_2$ and all $\rho' : \Lambda_2 \times M \rightarrow \triangle(A)$,
$$\int_{T \times \Lambda_1} \left( \int_{a \in A} u_2(t, a) \, p^{(\sigma, \rho)}(t, \lambda_1, \lambda_2)[da] - \int_{a \in A} u_2(t, a) \, p^{(\sigma, \rho')}(t, \lambda_1, \lambda_2)[da] \right) \pi[(dt, d\lambda_1) \mid \lambda_2] \geq 0. \quad (44)$$
Given the definition of $\mathcal{P}^{(\sigma, \rho)}$ in (40), integrating (44) over $\Lambda_2$ yields
$$\int_T \left[ \int_{a \in A} u_2[t, a] \, \mathcal{P}^{(\sigma, \rho)}(t)(da) \right] \pi_T[dt] \geq \int_T \left[ \int_{a \in A} u_2(t, a) \, \mathcal{P}^{(\sigma, \rho')}(t)(da) \right] \pi_T[dt], \quad \forall \rho',$$
which further implies that, for all $\iota : A \longrightarrow A$,
$$\int_T \left[ \int_{a \in A} u_2[t, a] \, \mathcal{P}^{(\sigma, \rho)}(t)(da) \right] \pi_T[dt] \geq \int_T \left[ \int_{a \in A} u_2[t, \iota(a)] \, \mathcal{P}^{(\sigma, \rho)}(t)(da) \right] \pi_T[dt].$$
Therefore, $\mathcal{P}^{(\sigma, \rho)} : T \rightarrow \triangle(A)$ defined above is a mediation equilibrium. $\blacksquare$
A.7 Proof of Lemma 5
Fix any arbitration equilibrium with moral hazard $[p : T \longrightarrow \triangle(A)]$, i.e., for all $\iota : A \longrightarrow A$,
$$\int_T \left[ \int_{a \in A} u_2[t, a] \, p(t)(da) \right] \pi_T[dt] \geq \int_T \left[ \int_{a \in A} u_2[t, \iota(a)] \, p(t)(da) \right] \pi_T[dt]. \quad (45)$$
In light of Lemma 8, it is without loss of generality for us to focus on weak-language-barrier equilibria. Recall $M = A = \mathbb{R}$. Define a language-barrier game $(\Lambda, \pi)$, such
that
$$\Lambda_1 = \{\{a\} : a \in A = M\}, \qquad \Lambda_2 = \{M\},$$
$$\pi[E] = \int_T p(t)\left[\{a : (t, \{a\}, M) \in E\}\right] \pi_T[dt], \quad \forall E \subset T \times 2^M \times 2^M,$$
i.e., the receiver has a unique language type $M$, who understands all messages; the sender's
language types have the form $\{a\}$ for $a \in A = M$; conditional on payoff type $t$, $\pi[\lambda_1 = \{a\}, \lambda_2 = M \mid t]$
inherits the distribution from $p(t)$, with $\lambda_1 = \{a\}$ replacing $a$.
Define $[\sigma : T \times \Lambda_1 \rightarrow \triangle(M), \; \rho : \Lambda_2 \times M \rightarrow \triangle(A)]$ as follows:
$$\sigma[t, \lambda_1 = \{a\}] = \delta_a, \quad \forall a \in A = M,$$
$$\rho[\lambda_2 = M, m = a] = \delta_a, \quad \forall a \in A = M,$$
where $\delta_a$ is the Dirac measure on $a$. Clearly, incentive compatibility of each sender
language type $\{a\}$ is satisfied. The incentive compatibility of the receiver follows from
(45). More specifically, $p^{(\sigma, \rho)} : T \times \Lambda_1 \times \Lambda_2 \rightarrow \triangle(A)$ defined in (16) takes the value
$$p^{(\sigma, \rho)}[t, \lambda_1 = \{a\}, \lambda_2 = M] = \delta_a,$$
and hence (45) implies that, for all $\lambda_2 \in \Lambda_2$ and all $\rho' : \Lambda_2 \times M \rightarrow \triangle(A)$,
$$\int_{T \times \Lambda_1} \left( \int_{a \in A} u_2(t, a) \, p^{(\sigma, \rho)}(t, \lambda_1, \lambda_2)[da] - \int_{a \in A} u_2(t, a) \, p^{(\sigma, \rho')}(t, \lambda_1, \lambda_2)[da] \right) \pi[(dt, d\lambda_1) \mid \lambda_2] \geq 0,$$
i.e., incentive compatibility of the receiver is satisfied, and $[\sigma, \rho]$ is an outcome-equivalent
weak-language-barrier equilibrium. Finally, by Lemma 8, an outcome-equivalent
language-barrier equilibrium exists. $\blacksquare$
A.8 Analysis of Example 2
Recall
$$u_1(a, t) = -\left(a - t - \frac{3}{4}\right)^2; \qquad u_2(a, t) = -(a - t)^2; \qquad \mu_T(\{0\}) = \mu_T(\{1\}) = \frac{1}{2}.$$
First, we consider $\widehat{\Phi}[u_1, u_2] \equiv u_1$, and show $\widehat{\Phi}^{LB} \leq -\frac{9}{16}$. Fix any language-barrier
equilibrium
$$[\sigma : T \times \Lambda_1 \rightarrow \triangle(M), \; \rho : \Lambda_2 \times M \rightarrow \triangle(A)].$$
Since the receiver has the strictly quadratic utility $u_2(a, t) = -(a - t)^2$, his best reply is
to take the pure action $a = \mathbb{E}t$, where the expectation is taken over his posterior belief on $t$.
Hence,
$$\rho(\lambda_2, \sigma(t, \lambda_1)) = \mathbb{E}[t \mid \lambda_2, \sigma(t, \lambda_1)].$$
By the law of iterated expectations, we have
$$\mathbb{E}_{(t, \lambda) \sim \pi}[\rho(\lambda_2, \sigma(t, \lambda_1))] = \mathbb{E}_{t \sim \pi_T}[t],$$
or equivalently,
$$\mathbb{E}[a \mid (\sigma, \rho)] = \mathbb{E}_{t \sim \pi_T}[t], \quad (46)$$
where $\mathbb{E}[a \mid (\sigma, \rho)]$ denotes the expected value of the equilibrium actions. Furthermore,
let $\mathbb{E}[u_1(a, t) \mid (\sigma, \rho)]$ and $\mathbb{E}[u_2(a, t) \mid (\sigma, \rho)]$ denote the expected utilities of the two players. We thus have
$$\mathbb{E}[u_1(a, t) \mid (\sigma, \rho)] = \mathbb{E}\left[ -\left( (a - t) - \frac{3}{4} \right)^2 \Big| (\sigma, \rho) \right] = \mathbb{E}\left[ -(a - t)^2 \mid (\sigma, \rho) \right] - \frac{9}{16} = \mathbb{E}[u_2(a, t) \mid (\sigma, \rho)] - \frac{9}{16},$$
where the second equality follows from (46), since the cross term $\frac{3}{2} \cdot \mathbb{E}[a - t \mid (\sigma, \rho)]$ vanishes. Then $\mathbb{E}[u_2(a, t) \mid (\sigma, \rho)] \leq 0$ implies
$$\mathbb{E}[u_1(a, t) \mid (\sigma, \rho)] \leq -\frac{9}{16}.$$
Since $(\sigma, \rho)$ is arbitrary, we therefore conclude $\widehat{\Phi}^{LB} \leq -\frac{9}{16}$.
Second, we consider $\widetilde{\Phi}[u_1, u_2] \equiv u_2$, and prove $\widetilde{\Phi}^{A\text{-}AS} \leq -\frac{1}{128}$ by contradiction.
Suppose otherwise, i.e., there exists an arbitration equilibrium with adverse selection
$[p : T \longrightarrow \triangle(A)]$ such that
$$\frac{1}{2} \cdot \mathbb{E}_{a \sim p(0)}\left[ -(a - 0)^2 \right] + \frac{1}{2} \cdot \mathbb{E}_{a \sim p(1)}\left[ -(a - 1)^2 \right] > -\frac{1}{128},$$
which implies
$$\mathbb{E}_{a \sim p(0)}\left[ (a - 0)^2 \right] < \frac{1}{64} \quad (47)$$
and
$$\mathbb{E}_{a \sim p(1)}\left[ (a - 1)^2 \right] < \frac{1}{64}. \quad (48)$$
Note that
$$\mathbb{E}_{a \sim p(0)}\left[ (a - 0)^2 \right] = \mathbb{E}_{a \sim p(0)}\left[ \left( \left( a - \mathbb{E}_{a \sim p(0)}[a] \right) + \left( \mathbb{E}_{a \sim p(0)}[a] - 0 \right) \right)^2 \right] = \mathbb{E}_{a \sim p(0)}\left[ \left( a - \mathbb{E}_{a \sim p(0)}[a] \right)^2 \right] + \left( \mathbb{E}_{a \sim p(0)}[a] - 0 \right)^2. \quad (49)$$
Then, (47) and (49) imply
$$\left| \mathbb{E}_{a \sim p(0)}[a] - 0 \right| \leq \sqrt{\frac{1}{64}}. \quad (50)$$
A similar argument shows
$$\left| \mathbb{E}_{a \sim p(1)}[a] - 1 \right| \leq \sqrt{\frac{1}{64}}. \quad (51)$$
Now, consider payoff state $t = 0$. If the sender sends message 0, the receiver follows $p(0)$.
As a result, the sender's expected utility is
$$\mathbb{E}_{a \sim p(0)}\left[ -\left( a - 0 - \frac{3}{4} \right)^2 \right] = -\mathbb{E}_{a \sim p(0)}\left[ (a - 0)^2 \right] - \frac{9}{16} + \frac{3}{2} \cdot \mathbb{E}_{a \sim p(0)}[a] \leq -\frac{9}{16} + \frac{3}{2} \cdot \mathbb{E}_{a \sim p(0)}[a] \leq -\frac{9}{16} + \frac{3}{2} \cdot \sqrt{\frac{1}{64}} = -\frac{3}{8}, \quad (52)$$
where the second inequality follows from (50). At payoff state $t = 0$, if the sender sends
message 1, the receiver follows $p(1)$. As a result, the sender's expected utility is
$$\mathbb{E}_{a \sim p(1)}\left[ -\left( a - 0 - \frac{3}{4} \right)^2 \right] = \mathbb{E}_{a \sim p(1)}\left[ -\left( (a - 1) + \frac{1}{4} \right)^2 \right] = -\mathbb{E}_{a \sim p(1)}\left[ (a - 1)^2 \right] - \frac{1}{16} - \frac{1}{2} \cdot \mathbb{E}_{a \sim p(1)}[a - 1] \geq -\frac{1}{64} - \frac{1}{16} - \frac{1}{2} \cdot \mathbb{E}_{a \sim p(1)}[a - 1] \geq -\frac{1}{16} - \frac{1}{16} - \frac{1}{2} \cdot \sqrt{\frac{1}{64}} = -\frac{3}{16}, \quad (53)$$
where the first inequality follows from (48) and the second inequality follows from (51).
Since $-\frac{3}{16} > -\frac{3}{8}$, (52) and (53) imply that the sender prefers message 1 to message 0 at $t = 0$,
contradicting $[p : T \longrightarrow \triangle(A)]$ being an arbitration equilibrium with adverse selection.
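The bound chain in (52) and (53) is elementary arithmetic; the following check (ours) confirms the two cutoffs $-\frac{3}{8}$ and $-\frac{3}{16}$ and the resulting deviation:

```python
from fractions import Fraction as F

# With |E[a] - 0| <= 1/8 under p(0) and |E[a] - 1| <= 1/8 under p(1), as in
# (50) and (51), the type-0 sender gets at most -3/8 from message 0 (see (52))
# and at least -3/16 from message 1 (see (53)).
upper_msg0 = -F(9, 16) + F(3, 2) * F(1, 8)             # bound in (52)
lower_msg1 = -F(1, 16) - F(1, 16) - F(1, 2) * F(1, 8)  # bound in (53)
print(upper_msg0, lower_msg1)   # -3/8 -3/16
print(lower_msg1 > upper_msg0)  # True: message 1 is strictly better
```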
A.9 Analysis of Example 3
Recall
$$u_S = -\left( a_R - t_S - \frac{1}{4} \right)^2 \text{ and } u_R = -(a_R - t_S)^2,$$
$$\mu(\{0\}) = \mu\left( \left\{ \frac{35}{72} \right\} \right) = \mu(\{1\}) = \frac{1}{3}.$$
In what follows, we sometimes write $z = \frac{35}{72}$ to economize on notation. Consider language
barriers as described below, where $T$ and $\Lambda$ are independently distributed:
$$\Lambda = \left\{ \lambda^1 = \left\{ 0, \frac{1}{2}, 1 \right\}, \; \lambda^2 = \left\{ 0, \frac{1}{2} \right\} \right\}; \qquad \pi_\Lambda(\lambda^1) = \frac{35}{36} \text{ and } \pi_\Lambda(\lambda^2) = \frac{1}{36}.$$
Consider the pure-strategy independent-language-barrier equilibrium $(h, g)$ defined as
follows:
$$\left[ [h(t, \lambda) \in \lambda]_{(t, \lambda) \in T \times \Lambda}, \; [g(m) \in A]_{m \in M} \right],$$
$$h(0, \lambda^1) = 0; \quad h(z, \lambda^1) = \frac{1}{2}; \quad h(1, \lambda^1) = 1;$$
$$h(0, \lambda^2) = 0; \quad h(z, \lambda^2) = \frac{1}{2}; \quad h(1, \lambda^2) = \frac{1}{2};$$
$$g(m) = m \text{ for } m \in M.$$
It is easy to check that this is indeed an equilibrium according to Definition 6.23 Clearly,
in this equilibrium, payoff types are almost fully revealed and the expected utility of the
receiver is
$$-\frac{1}{3}\left( \frac{1}{2} - \frac{35}{72} \right)^2 - \frac{1}{36} \cdot \frac{1}{3}\left( 1 - \frac{1}{2} \right)^2 = -\frac{37}{3 \cdot 72 \cdot 72} \simeq -0.002379. \quad (54)$$
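The value in (54) can be verified with exact rational arithmetic (our own check):

```python
from fractions import Fraction as F

# Receiver's expected loss under (h, g): type z is pooled at action 1/2, and
# type 1 is pooled there only when the two-message type (prob. 1/36) occurs.
z = F(35, 72)
loss = F(1, 3) * (F(1, 2) - z) ** 2 + F(1, 36) * F(1, 3) * (1 - F(1, 2)) ** 2
print(-loss == -F(37, 3 * 72 * 72))  # True
print(float(-loss))                  # about -0.002379
```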
Given quadratic utility, it is easy to see that, in any language-barrier equilibrium, the
sender's expected utility differs from the receiver's expected utility by a constant determined by the “bias” (see the discussion in Section A.8). Furthermore, any noisy-talk
equilibrium can be transformed into an outcome-equivalent language-barrier equilibrium
(see Lemma 3). Therefore, it is without loss of generality for us to compare only the
receiver's expected utility. In particular, we show that the expected utility of the receiver in
any noisy-talk equilibrium is less than that of $(h, g)$ constructed above.
Now, consider any noisy-talk equilibrium associated with $(\varepsilon, \xi) \in [0, 1] \times \triangle(M)$, i.e., with probability $\varepsilon$ the receiver, instead of getting the message from the sender, gets
an exogenously chosen noise message generated, independently of the sender's message,
according to the distribution $\xi$. Because of the independence, conditional on noise, payoff
types are uniformly distributed, with mean $\frac{z + 1}{3} = \frac{107}{3 \cdot 72} < \frac{1}{2}$. I.e., conditional on noise,
the best strategy for the receiver is to take the action $\frac{z + 1}{3}$, and the welfare loss induced by
the noise is at least
$$\frac{1}{3}\left( \frac{z + 1}{3} - 0 \right)^2 + \frac{1}{3}\left( \frac{z + 1}{3} - z \right)^2 + \frac{1}{3}\left( \frac{z + 1}{3} - 1 \right)^2 = \frac{2}{9}\left[ \left( z - \frac{1}{2} \right)^2 + \frac{3}{4} \right] > \frac{1}{6}.$$
Hence, the total welfare loss induced by the noise is larger than $\frac{1}{6} \cdot \varepsilon$, which, together
23 The ideal points of $t = 0, \frac{35}{72}, 1$ are $\frac{1}{4}$, $\frac{35}{72} + \frac{1}{4}$ and $\frac{5}{4}$ respectively, and it is easy to check that $h(t, \lambda)$ is consistent with these preferences. Furthermore, $E(t \mid m = 0) = 0$, $E(t \mid m = 1) = 1$, and
$$E\left( t \;\Big|\; m = \frac{1}{2} \right) = \frac{\frac{1}{3} \cdot \frac{35}{72} + \frac{1}{3} \cdot \frac{1}{36} \cdot 1}{\frac{1}{3} + \frac{1}{3} \cdot \frac{1}{36}} = \frac{1}{2}.$$
with (54), implies that $(h, g)$ dominates the noisy-talk equilibrium if
$$-\frac{37}{3 \cdot 72 \cdot 72} \geq -\frac{1}{6} \cdot \varepsilon \iff \varepsilon \geq \frac{37}{72 \cdot 36}.$$
Hence, for the noisy-talk equilibrium not to be dominated, we must have
$$\varepsilon < \frac{37}{72 \cdot 36} < \frac{1}{40}. \quad (55)$$
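A quick check (ours) of the threshold in (55):

```python
from fractions import Fraction as F

# If eps >= 37/(72*36), the noise alone costs more than (h, g)'s total loss
# 37/(3*72*72), since the conditional-on-noise loss exceeds 1/6.
threshold = F(37, 72 * 36)
print(F(1, 6) * threshold == F(37, 3 * 72 * 72))  # True
print(threshold < F(1, 40))                       # True, giving (55)
```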
By the revelation principle, a noisy-talk equilibrium can always be transformed into
one in which the sender uses a (mixed) strategy of recommending actions and the receiver follows the recommended actions; furthermore, it is a best reply for the sender
to recommend the designated actions and, conditional on receiving a recommended action, it is a best reply for the receiver to follow it. We now prove our result by contradiction in 7 steps, i.e., we assume there is such an equilibrium, in which the receiver's
expected utility is larger than $-\frac{37}{3 \cdot 72 \cdot 72}$ (as calculated in (54)).
Given the quadratic utility function, the receiver's best reply in an equilibrium is the
expectation of his posterior belief on $t$ upon receiving a message. Given the sender's
quadratic utility function, each payoff type $t$ has at most two best actions to recommend
in an equilibrium (i.e., one smaller than his ideal point, and the other larger than it).
Step 1: We show that, with positive probability, a sender of type $t \in \{0, z, 1\}$ recommends an action in the interval $\left( t - \frac{1}{11}, t + \frac{1}{11} \right)$. Suppose otherwise. Then, given
$\varepsilon < \frac{1}{40}$, the total welfare loss for the receiver (at payoff state $t$) is at least
$$\frac{1 - \varepsilon}{3}\left( \frac{1}{11} \right)^2 \geq \frac{1 - \frac{1}{40}}{3} \cdot \frac{1}{121} = \frac{13}{4840} \simeq 0.0026859,$$
which is larger than $\frac{37}{3 \cdot 72 \cdot 72} \simeq 0.002379$, i.e., the receiver's welfare loss in $(h, g)$ (see
(54)), a contradiction.
Furthermore, since the ideal point of the sender of type $t$ is $t + \frac{1}{4}$ and $t + \frac{1}{11} < t + \frac{1}{4}$,
type $t$ must have a unique action in $\left( t - \frac{1}{8}, t + \frac{1}{8} \right)$ to recommend in the equilibrium. Let
$a^t \in \left( t - \frac{1}{11}, t + \frac{1}{11} \right)$ denote the action recommended by type $t \in \{0, z, 1\}$.
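The Step 1 bound can be confirmed exactly (our own check):

```python
from fractions import Fraction as F

# If type t never recommends within 1/11 of t, the loss at state t is at
# least ((1 - 1/40)/3) * (1/11)^2 = 13/4840, exceeding (h, g)'s loss 37/15552.
step1 = (1 - F(1, 40)) / 3 * F(1, 11) ** 2
print(step1 == F(13, 4840))        # True
print(step1 > F(37, 3 * 72 * 72))  # True: the claimed contradiction
```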
Step 2: A sender of type $t$ must recommend $a^t$ with probability larger than $\frac{19}{20}$.
Suppose otherwise, i.e., type $t$ recommends another action, denoted by $\widehat{a}^t$, with probability at
least $\frac{1}{20}$. To make both actions best replies, they must be at the same distance from the
ideal point $t + \frac{1}{4}$. Furthermore, since $a^t < t + \frac{1}{11}$, we conclude that $\widehat{a}^t > 2\left( t + \frac{1}{4} \right) - \left( t + \frac{1}{11} \right) = t + \left( \frac{1}{2} - \frac{1}{11} \right)$. Then the total welfare loss for the receiver due to type $t$ recommending $\widehat{a}^t$ is at least
$$\frac{1}{20} \cdot \frac{1 - \varepsilon}{3}\left( \frac{1}{2} - \frac{1}{11} \right)^2 \geq \frac{1}{20} \cdot \frac{1 - \frac{1}{40}}{3} \cdot \frac{81}{484} = \frac{1053}{387200} \simeq 0.0027195,$$
which is larger than $\frac{37}{3 \cdot 72 \cdot 72} \simeq 0.002379$, i.e., the receiver's welfare loss in $(h, g)$ (see
(54)), a contradiction.
Step 3: The sender of type $t = 1$ has ideal point $\frac{5}{4} > 1$. As a result, she has a
unique best recommendation in the equilibrium, which is the largest action recommended
in the equilibrium; this is $a^1$. Furthermore, let $\widehat{a}$ denote the second largest action recommended in the equilibrium, i.e., $\widehat{a} < a^1$. Since type $t = 1$ never recommends $\widehat{a}$, only type
$t = z < \frac{1}{2}$, type $t = 0$ and the noise (with mean $\frac{z + 1}{3} < \frac{1}{2}$) may recommend $\widehat{a}$. As a result,
$$\widehat{a} = E(t \mid m = \widehat{a}) < \frac{1}{2} < z + \frac{1}{4}.$$
I.e., $a^z \leq \widehat{a} < z + \frac{1}{4}$, where $z + \frac{1}{4}$ is the ideal point for $t = z$. Therefore, $\widehat{a} = a^z$.
Step 4: We calculate an upper bound for $a^z = E(t \mid m = a^z)$. Note that $0 < z < \frac{z + 1}{3}$
and that only the noise (with mean $\frac{z + 1}{3}$) and types $t = 0$, $t = z$ may recommend $a^z$. We
would increase the posterior expectation of $t$ if type $t = 0$ were not allowed to recommend $a^z$.
Moreover, to further increase the expectation, we should reduce the probability of type
$t = z$ and increase the probability of noise, due to $z < \frac{z + 1}{3}$. To sum up, we have
$$a^z = E(t \mid m = a^z) \leq \frac{\frac{19}{20} \cdot \frac{1 - \varepsilon}{3} \cdot z + \varepsilon \cdot \frac{z + 1}{3}}{\frac{19}{20} \cdot \frac{1 - \varepsilon}{3} + \varepsilon} \leq \frac{\frac{19}{20} \cdot \frac{1 - \frac{1}{40}}{3} \cdot \frac{35}{72} + \frac{1}{40} \cdot \frac{\frac{35}{72} + 1}{3}}{\frac{19}{20} \cdot \frac{1 - \frac{1}{40}}{3} + \frac{1}{40}} = \frac{28075}{57672} < 0.4869, \quad (56)$$
where the second inequality follows from $z = \frac{35}{72}$ and $\varepsilon < \frac{1}{40}$ (see (55)).
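The exact value of the bound in (56) can be confirmed as follows (our own check):

```python
from fractions import Fraction as F

# Upper bound on a^z = E(t | m = a^z) after plugging z = 35/72 and eps = 1/40.
z, eps = F(35, 72), F(1, 40)
num = F(19, 20) * (1 - eps) / 3 * z + eps * (z + 1) / 3
den = F(19, 20) * (1 - eps) / 3 + eps
print(num / den == F(28075, 57672))  # True
print(num / den < F(4869, 10000))    # True: a^z < 0.4869
```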
Step 5: The sender of type $t = 0$ has ideal point $\frac{1}{4}$, and we have shown $a^0 < \frac{1}{4} < a^z$. Hence, to make $a^0$ a best recommendation for type $t = 0$, we must have
$$\frac{1}{4} - a^0 \leq a^z - \frac{1}{4}, \quad (57)$$
i.e., $a^0$ is at least as close to the ideal point $\frac{1}{4}$ as $a^z$ is. Then, (56) and (57) imply
$$a^0 \geq \frac{1}{2} - a^z > 0.0131. \quad (58)$$
Step 6: Let $\gamma$ denote the ex-ante probability that the noise generates the recommendation
$a^0$; we show $\gamma < \frac{1}{130}$. Suppose otherwise. Recall $a^0 < \frac{1}{11}$. Then, the total welfare loss
due to the noise recommending $a^0$ is at least
$$\gamma \cdot \left[ \frac{1}{3}\left( \frac{1}{11} - z \right)^2 + \frac{1}{3}\left( \frac{1}{11} - 1 \right)^2 \right] \geq \frac{1}{130} \cdot \left[ \frac{1}{3}\left( \frac{1}{11} - \frac{35}{72} \right)^2 + \frac{1}{3}\left( \frac{1}{11} - 1 \right)^2 \right] = \frac{616369}{244632960} \simeq 0.002519,$$
which is larger than $\frac{37}{3 \cdot 72 \cdot 72} \simeq 0.002379$, i.e., the receiver's welfare loss in $(h, g)$ (see
(54)), a contradiction.
Step 7: we have $a_0 < a_z < z + \frac{1}{4} < a_1 < \frac{5}{4}$, where $z + \frac{1}{4}$ and $\frac{5}{4}$ are the ideal points of types $t = z$ and $t = 1$, respectively. As a result, types $t = z$ and $t = 1$ do not recommend $a_0$, i.e., only type $t = 0$ and the noise may recommend $a_0$. By Step 2 above, the sender of type $t = 0$ recommends $a_0$ with probability larger than $\frac{19}{20}$ (which corresponds to an ex-ante probability of $\frac{19}{20} \times \frac{1-\varepsilon}{3}$). Hence, we have
$$a_0 = E\left(t \mid m = a_0\right) \leq \frac{\frac{19}{20} \times \frac{1-\varepsilon}{3} \times 0 + \gamma \times \frac{z+1}{3}}{\frac{19}{20} \times \frac{1-\varepsilon}{3} + \gamma} \leq \frac{\frac{1}{130} \times \frac{\frac{35}{72}+1}{3}}{\frac{19}{20} \times \frac{1-\frac{1}{40}}{3} + \frac{1}{130}} = \frac{8560}{710856} \simeq 0.01204, \tag{59}$$
where the second inequality follows from $z = \frac{35}{72}$, $\varepsilon < \frac{1}{40}$, and $\gamma < \frac{1}{130}$. In particular, (58) contradicts (59). $\blacksquare$
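For completeness, the last equality in (59) amounts to the following computation (with $z = \frac{35}{72}$, $\varepsilon = \frac{1}{40}$ and $\gamma = \frac{1}{130}$ substituted in):

```latex
\frac{\frac{1}{130}\times\frac{107/72}{3}}
     {\frac{19}{20}\times\frac{39/40}{3} + \frac{1}{130}}
  = \frac{107/28080}{\frac{9633}{31200} + \frac{240}{31200}}
  = \frac{107/28080}{9873/31200}
  = \frac{107\times 31200}{28080\times 9873}
  = \frac{8560}{710856}
  \simeq 0.01204.
```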
References
ARROW, K. J. (1975): The Limits of Organization. New York, NY: Norton.
BATTAGLINI, M. (2002): “Multiple Referrals and Multidimensional Cheap Talk,” Econo-
metrica, 70, 1379–1401.
BLUME, A. (2015): “Failure of Common Knowledge of Language in Common-Interest
Communication Games,” Mimeo.
BLUME, A., AND O. BOARD (2010): “Language Barriers,” Mimeo.
(2013): “Language Barriers,” Econometrica, 81, 781–812.
BLUME, A., O. BOARD, AND K. KAWAMURA (2007): “Noisy Talk,” Theoretical Economics,
2, 395–440.
CHAKRABORTY, A., AND R. HARBAUGH (2007): “Comparative Cheap Talk,” Journal of Economic Theory, 132, 70–94.
CRYSTAL, D. (2006): How Language Works. Penguin.
CRAWFORD, V., AND J. SOBEL (1982): “Strategic Information Transmission,” Econometrica,
50, 1431–1451.
CREMER, J., L. GARICANO, AND A. PRAT (2007): “Language and the theory of the firm,”
Quarterly Journal of Economics, 122, 373–407.
CURRAN, K., AND M. CASEY (2006): “Expressing emotion in electronic mail,” Kybernetes,
35, 616–631.
VAKOCH, D. A. (ed.) (2011): Communication with Extraterrestrial Intelligence. Stony Brook, NY: SUNY Press.
DESSEIN, W. (2002): “Authority and Communication in Organizations,” Review of Eco-
nomic Studies, 69, 811–838.
FARRELL, J. (1993): “Meaning and Credibility in Cheap-Talk Games,” Games and Economic Behavior, 5, 514–531.
GANGULY, C., AND I. RAY (2011): “Simple Mediation in a Cheap-Talk Game,” University
of Birmingham, Department of Economics Discussion Paper 05-08RR.
GARICANO, L., AND A. PRAT (2013): Organizational Economics with Cognitive Costs. Econometric Society Monographs, Cambridge University Press, pp. 342–388.
GIOVANNONI, F., AND D. SEIDMANN (2007): “Secrecy, two-sided bias and the value of
evidence,” Games and Economic Behavior, 59, 296–315.
GOLTSMAN, M., J. HORNER, G. PAVLOV, AND F. SQUINTANI (2009): “Mediation, arbitra-
tion and negotiation,” Journal of Economic Theory, 144, 1397–1420.
HAGENBACH, J., F. KOESSLER, AND E. PEREZ-RICHET (2014): “Certifiable Pre-Play Communication: Full Disclosure,” Econometrica, 82, 1093–1131.
KAMENICA, E., AND M. GENTZKOW (2011): “Bayesian Persuasion,” American Economic
Review, 101, 2590–2615.
KARTIK, N. (2009): “Strategic Communication with Lying Costs,” Review of Economic
Studies, 76, 1359–1395.
KRISHNA, V., AND J. MORGAN (2004): “The art of conversation: eliciting information from experts through multi-stage communication,” Journal of Economic Theory, 117, 147–179.
(2008): “Contracting for information under imperfect commitment,” The Rand Journal of Economics, 39, 905–925.
LEVY, G., AND R. RAZIN (2007): “On the Limits of Communication in Multidimensional
Cheap Talk: A Comment,” Econometrica, 75, 885–893.
MCNAIR, B. (2011): An Introduction to Political Communication. Taylor and Francis.
MILGROM, P. R. (1981): “Good News and Bad News: Representation Theorems and Ap-
plications,” The Bell Journal of Economics, 12, 380–391.
MORRIS, S. (2001): “Political Correctness,” Journal of Political Economy, 109, 231–265.
MYERSON, R. (1991): Game Theory: Analysis of Conflict. Harvard University Press.
OTTAVIANI, M., AND P. N. SORENSEN (2006): “Professional Advice,” Journal of Economic
Theory, 126, 120–142.
ROSS, S. E., AND C.-T. LIN (2003): “The Effects of Promoting Patient Access to Medical
Records: A Review,” Journal of the American Medical Informatics Association, 10, 129–138.
SAGAN, C. (1985): Contact. Simon & Schuster.
SCHARFSTEIN, D., AND J. STEIN (1990): “Herd Behavior and Investment,” American Eco-
nomic Review, 80, 465–479.
SEIDMANN, D., AND E. WINTER (1997): “Strategic Information Transmission with Verifi-
able Messages,” Econometrica, 65, 163–169.
SOBEL, J. (1985): “A Theory of Credibility,” Review of Economic Studies, 52, 557–573.
(2015): “Broad Terms and Organizational Codes,” Mimeo.
SPENCE, M. (1973): “Job Market Signaling,” The Quarterly Journal of Economics, 87, 355–
374.
THOMSON, W. (2001): A Guide for the Young Economist. MIT Press.