P. Dai Pra, F. Fontini, E. Sartori, and M. Tolotti
Endogenous equilibria in liquid markets with frictions and boundedly rational agents
Working Paper n. 7/2011, August 2011
ISSN: 2239-2734
This Working Paper is published under the auspices of the Department of Management at Università Ca’ Foscari Venezia. Opinions expressed herein are those of the authors and not those of the Department or the University. The Working Paper series is designed to disseminate preliminary or incomplete work, circulated to encourage discussion and comments. Citation of this paper should take into account its provisional nature.
Endogenous equilibria in liquid markets
with frictions and boundedly rational agents
Paolo Dai Pra Fulvio Fontini
Dept. of Pure and Applied Mathematics M. Fanno Dept. of Economics and Management
Università di Padova Università di Padova
Elena Sartori Marco Tolotti
Dept. of Management Dept. of Management
Università Ca’ Foscari Venezia Università Ca’ Foscari Venezia
(August 2011)
Abstract. In this paper we propose a simple binary mean field game, where N agents may
decide whether or not to trade a share of a risky asset in a liquid market. The asset’s returns
are endogenously determined taking into account demand and transaction costs. Agents’ utility
depends on the aggregate demand, which is determined by all agents’ observed and forecasted
actions. Agents are boundedly rational in the sense that they can go wrong in choosing their
optimal strategy. The explicit dependence on past actions generates endogenous dynamics of the
system. We first study, under a rather general setting (risk attitudes, pricing rules and noises),
the aggregate demand for the asset, the emerging returns and the structure of the equilibria of
the asymptotic game. It is shown that multiple Nash equilibria may arise. Stability conditions
are characterized; in particular, boom and crash cycles are detected. We then analyze in detail
the properties of equilibria in significant examples, performing comparative statics exercises and
showing the stabilizing property of exogenous transaction costs.
Keywords: Endogenous dynamics, Nash equilibria, bounded rationality, transaction costs,
mean field games, random utility.
JEL Classification Numbers: D81 - C62 - C72
Correspondence to:
Marco Tolotti: Department of Management, Università Ca’ Foscari Venezia
Cannaregio, 873 - 30121 Venice, Italy
phone: [+39] 041-234-6928; e-mail: [email protected]
1 Introduction
The emergence of the financial crisis of recent years, posing endogenous risks to the world
economic recovery, seriously challenges the standard optimality properties of textbook
risk-allocation models based on frictionless, timeless market equilibria with homogeneous,
rational and perfectly foresighted agents. Researchers try to explain market behavior by relaxing
one or more of the restrictive hypotheses on which such models are based. Indeed, agents have
limited information and cognitive ability, form their expectations and behavior following habits
and trends, over- or under-react to external signals, and adjust progressively according to the time
they have and the cost of their choices, to name some of the most common extensions of
basic market equilibrium analysis. Several financial models, in particular, try to explain the rise
of market cycles that seem not to be related to the oscillation of the fundamentals, overtaking
the agents’ perfect rationality assumption or focusing on external driving forces of individual
behavior. The former approach points to the vast (and perhaps inhomogeneous) body of
literature that can be labelled behavioral finance1; the latter explains the dynamics of market
values as a result of the rational decisions of agents, who are led by some external random
variable uncorrelated with fundamentals, i.e., a so-called sunspot2. Both fields of research provide
interesting, rich insights on market dynamics, yet relying on strong and perhaps unnecessary
assumptions that could be relaxed. Behavioral finance interestingly focuses on agents’ bounded
rationality, but it rests on external ad-hoc distinctions between agents’ types (informed and
noise-traders) to give rise to market dynamics. Sunspot equilibria models explicitly consider
rational players’ interactions, but rely on suitably introduced exogenous signals as the driving
forces that lead agents’ behavior. In our work, we try to encompass the most interesting features
of both approaches in an innovative framework that seriously takes into account agents’ bounded
rationality and, at the same time, explicitly models assets’ returns dynamics and the rise of
booms and crashes in financial markets as the result of market features and agents’ interplay.
The bounded rationality assumption is included in the model by considering the social interactions
that arise in repeated interplay among people, as well as the individual possibility of making mistakes.
1 The literature on behavioral finance is too vast to be reviewed here. We refer to books that categorize and
review the most common approaches, such as Schleifer (2000) and Thaler (2005), or recent textbooks such as Forbes (2009).
2 Again, the literature on sunspot equilibria is too vast to be reviewed. For an introduction, models and references
see for instance Guesnerie (2001), Citanna et al. (2004) and references therein.
Indeed, tracing back to the seminal contributions of Föllmer and Schelling3, a recent stream of
literature has focused on the explicit dependence of agents’ behavior on other people’s actions, i.e.,
on their social interaction, as a driving force of individual binary choices. Nadal et al. (2005),
for instance, consider N agents facing the problem of choosing whether or not to buy a single
share of a risky asset, whose price P is set by a monopolistic (exogenous) market maker.
The utility function of each agent depends on a (random) idiosyncratic willingness to pay, on
a social component motivated by an imitation argument and on the exogenous (static) price
P. Bouchaud and Cont (2000) and Cont et al. (2005) follow Föllmer and Schweizer (1993) in
modeling a linear dependence of the price change of the risky asset as a function of the aggregate
demand, emerging from the underlying N dimensional market. The demand for the good is not
driven by an optimization process, but rather by a statistical mechanism. In both papers the
price (or the return) is extrapolated a posteriori as a function of the aggregate demand for the
asset.
We adopt here the same binary choice model framework, assuming that people’s utility depends
on both their own choice and other people’s choices. However, differently from the previously cited
literature, we let social interaction affect the individual behavior through the Nash equilibria of
a game representing the market. In other words, we define an endogenous interaction between
market’s dynamics and agents’ expectations. In order to focus on the relationship between social
interaction, bounded rationality and asset dynamics, we do not consider change in values (and
returns) that depend on external, supply-side aspects (such as production costs, firms’ profits,
market structure, and similar), but we assume that market prices and returns are driven by the
demand-side only. This is tantamount to defining a liquid market, one in which the value of the
asset does not depend on the probability of closing a trading relationship (as it does, for instance,
in the search-and-matching models)4, but rather on demand shifts. As for the bounded
rationality assumption, we allow agents to go wrong in the process of making their optimal
choice. This is encompassed in the model assuming that players have a utility function that
has the features of the random utility models for binary choices5. In words, we assume that,
3 See Föllmer (1974) and Schelling (1978).
4 See Rocheteau and Weill (2011) for a review of applications of search-and-matching models to financial settings.
5 Binary random utility models have been inspired by Schelling (1978), Manski (1988) and McFadden (1974)
and recently formalized in Brock and Durlauf (2001). A dynamic counterpart of this modeling framework can be
found in Blume and Durlauf (2003).
at discrete times, N agents may decide whether or not to trade a share of a risky asset. Their
utility function depends on the effective return of the asset, and a random variable with some
(possibly) positive probability makes them follow the choice opposite to the one that yields the
highest utility, i.e., make the wrong one. We study market dynamics and individual expectations
that mutually reinforce each other: if agents believe that the market value will increase, they raise
their demand; this pushes up the value, which further raises expectations, and so on. However,
at each point in time the value depends on the strategic interactions of agents, which are functions
of their expectations of other agents’ behavior. The explicit dependence on past actions
endogenously generates the dynamics of the system that are studied in the paper. These dynamics
are highly nonlinear and hard to evaluate for a fixed, large N. Therefore, we let N go to infinity
and focus on the asymptotic model, namely, the one with infinitely many agents.
We first focus on the game played by each agent at each trading date, computing the optimal
share q for the asset (i.e., the proportion of agents that own the asset at the equilibrium), and
the related return R, where the value of the demand q̄ at the previous trading date enters as a
parameter. This gives rise to an implicit dynamic equation connecting q (and thus R) to q̄, of
the form

G(q̄, q) = 0. (1)
This equation does not necessarily produce a well defined dynamical system: given q̄, more
than one solution q may exist. This reflects the fact that multiplicity of Nash equilibria for the
N -dimensional static game may persist in the N → +∞ limit. We focus on the steady states
for (1). In particular, we identify all equilibria for (1), i.e., those q for which G(q, q) = 0; we also
give conditions under which the dynamics given by (1) are well defined in a neighborhood of
fixed points, and discuss their local stability. Moreover, we show that, under suitable conditions
on the parameters of the model, cycles of period 2 may arise, and that they may coexist with
stable fixed points.
We are also interested in evaluating to what extent our endogenous dynamics are influenced by
exogenous factors, such as transaction costs. We are motivated by the factual consideration that
in real-life settings transaction costs exist and provide a rationale for bid-ask spreads; we are
also inspired by the debate that has recently arisen on the possible stabilizing properties of a
transaction tax such as the Tobin tax, which might lower the speed of adjustment in markets that
seem to react abruptly to changes in fundamentals, reducing the possibility of generating or
propagating a financial contagion. We evaluate to what extent the same effect might be produced
by transaction costs in the dynamics of our model.
Summarizing, we believe that our paper provides significant contributions in at least three
different directions.
• From a microeconomic perspective, we enrich the framework of random utility models à la
Brock and Durlauf (2001), letting the agents update their opinion in a parallel way, i.e.,
their action is the consequence of a game whose payoffs depend on expectations about
the behavior of the population.
• From a game theoretical perspective, we provide an existence result of pure strategy Nash
equilibria for a rather large class of binary N-player non-cooperative games. Indeed, it is
a well-known result in the literature that games in which players have binary strategies do not
necessarily have pure strategy Nash equilibria (e.g., the matching pennies game). On the
other hand, in Cheng et al. (2004) it is shown that binary games admit pure strategy Nash
equilibria as long as agents’ payoff functions are symmetric. We extend and generalize
such a result showing the existence of pure strategy Nash equilibria in a less restrictive
setting, defining a class of monotone binary games (see Definition 3.1).
• From a financial economic perspective, we build a model where the return R of the risky
asset is a function of both today’s (known) and tomorrow’s (unknown) aggregate demand.
On the other hand, the aggregate demand q depends on R. The dynamics for the aggregate
demand and returns are, thus, coupled. We also study the impact of transaction costs on
the equilibria characteristics and the efficiency of the market.
The paper is organized as follows. The market model and the structure of the one period
game played by the N agents are described in Section 2. In Section 3 we solve the static
non-cooperative game and we derive the implicit evolution equation (1). Section 4 contains a
detailed discussion of fixed points for (1), the proof of the existence of cycles of period 2 and the
study of the stability both of the fixed points and of the 2-cycles. We also provide results on
the volume of trades at the equilibrium. In Section 5 we propose some examples and perform
comparative static exercises on them. Concluding remarks and the bibliography follow.
2 The model
In this section we describe both the market structure and the rules of the non-cooperative game
played by the agents, starting with the former.
2.1 The market structure
We consider N agents acting on a market for a risky asset open at discrete dates (tk)k>0. At
each trading date the agents have to choose whether or not to trade one share of the asset.
We denote by ωi(tk) ∈ {0, 1}, for i = 1, . . . , N, the choice of the i-th agent at time tk. In this
context, ωi(tk) = 1 means that at time tk agent i does own the share, while ωi(tk) = 0 means that
agent i does not.
For each trading date tk, the agents face a static non-cooperative game subject to
their information at time tk−1. Let us denote tk−1 = t̄ and tk = t. We assume that all the
agents know the past price of the share, denoted by P̄ = P(t̄), but they can only forecast the
price P = P(t) prevailing at the end of the trading day on the market. They clearly know
their own choice at time t̄, denoted by ω̄ := ω(t̄), and the aggregate number of shares owned by
the participants at time t̄, denoted by Q̄ = ∑Nj=1 ω̄j. When deciding their action, agents take
into account the unknown return R of the asset that arises at the end of the trading period.
This implies that agents are myopic, in the sense that they only look at one period returns,
neglecting future tk+1, tk+2, ... values. In other words, it is as if agents are assumed to hold a
specific version of variable rate of time preferences6, namely, they exhibit extreme impatience,
in the sense that they dislike so much future values that attach to them an infinitesimal weight
in their utility function, which can be neglected. This implies that the game, being a sequence
of static, repeated interactions, cannot be solved by means of backward induction. Such an
approach, assuming agents that are not perfectly rational in a game theoretic sense, i.e., perfectly
foresighted expected utility maximizers, is coherent with our framework of boundedly rational
players.
As already said, a peculiarity of this model is that R is a function of Q = ∑Nj=1 ωj(t), the number
of shares owned by participants as resulting from the game at time t. We normalize quantities,
6 On the implausibility of the standard constant discount rate of time preferences assumption and its consequences
for the time-consistency of choices, see, for instance, Thaler (1981) and Loewenstein and Prelec (1992).
defining q = (1/N)∑Nj=1 ωj and, analogously, q̄ = (1/N)∑Nj=1 ω̄j, and assuming that

R = g(q̄, q), (2)

where g : [0, 1] × [0, 1] → R is a continuous map, increasing in q and decreasing in q̄. In
particular, this specification reflects the stylized fact that prices are positively influenced by an
increasing demand7. Equation (2) is extremely stylized. It does not take into account exogenous
supply-side factors, which could be introduced through an exogenous random shock ξ influencing
returns, so that R = g(q̄, q) + ξ. Coherently with the scope of the paper, we focus on the
relationship between returns and demand and thus leave such an extension to future research.
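For concreteness, one admissible specification of (2), purely an illustrative assumption of ours and not the functional form adopted in the paper, is a linear impact rule g(q̄, q) = β(q − q̄) with β > 0:

```python
# Illustrative pricing rule (our assumption, not the paper's): returns rise
# when current demand q exceeds past demand q_bar, g(q_bar, q) = beta*(q - q_bar).
def g(q_bar: float, q: float, beta: float = 2.0) -> float:
    """Return R as a function of past demand q_bar and current demand q."""
    return beta * (q - q_bar)

# Sanity checks: g is increasing in q, decreasing in q_bar, and g(x, x) = 0.
assert g(0.3, 0.7) > g(0.3, 0.5) > g(0.3, 0.3) == 0.0
assert g(0.2, 0.5) > g(0.4, 0.5)
```

Any continuous g with these monotonicity properties fits the setting; the examples actually analyzed are those of Section 5.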
We adopt the following important assumptions about market design.
A.1 All the transactions are closed with a Market Maker (MM), who guarantees that all the
buy/sell orders are fulfilled. As a consequence, we are assuming an infinite supply. Such an
assumption is justified by our intent to focus on the relationship between individual behavior
and collective behavior, i.e., market demand, leaving aside exogenous quantity constraints
to market supply which would affect the asset’s price. As said, we propose a different
framework compared, for instance, to the search-and-matching one. In that context it
is assumed that supply is fixed, the result of market interaction is the amount of trades
that occurs and prices derive from it8. We follow the opposite rationale, namely, supply is
infinite, the outputs of the model are prices, and trade volumes follow. Thus, our stylized
model can be useful to explain asset dynamics in all those markets where liquidity is large
enough (or individuals are infinitesimal with respect to the market), so that aggregate
demand is the driving force of the market. Notice, moreover, that we are considering a
market with N participants, where each agent may hold at most one asset. This means
that the infinite supply assumption can be relaxed by assuming that there are N shares of
the risky asset offered to the market by an offering entity (a firm, a sovereign or a private
investment bank). The shares not bought in the first round remain with the offering entity,
which consequently acts as a market maker for this good in the next periods.
A.2 On the market there are frictions, in particular proportional transaction costs. The market
maker offers a [bid, ask] spread such that Pb ≤ P ≤ Pa, where we denote by Pb the price at
7 As in Föllmer et al. (1994), for instance.
8 See Rocheteau and Weill (2011).
which the share is sold on the market, and by Pa the price at which the share is bought on
it. In our simplified market, where only one share of the asset can be traded, a proportional
transaction cost results in a constant proportion µ ≥ 0 subtracted to the realized return
of trading. In particular we can define the effective return Reff made on trade as
Reff = R − µ , when buying from the MM ;
Reff = −µ , when selling to the MM ;
Reff = R , when keeping the asset.
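The three cases (plus the trivial no-position case, where nothing happens and the effective return is zero) can be collected in a single helper; a minimal sketch, where the function name and the 0/1 encoding of holdings are ours:

```python
def effective_return(omega_prev: int, omega_new: int, R: float, mu: float) -> float:
    """Effective return R_eff of one agent, given holdings before (omega_prev)
    and after (omega_new) the trading date, both in {0, 1}."""
    if omega_prev == 0 and omega_new == 1:
        return R - mu       # buying from the MM: pay the transaction cost
    if omega_prev == 1 and omega_new == 0:
        return -mu          # selling to the MM: only the cost is realized
    if omega_prev == 1 and omega_new == 1:
        return R            # keeping the asset: full return, no cost
    return 0.0              # staying out of the market: nothing happens

assert effective_return(0, 1, 0.05, 0.01) == 0.04
assert effective_return(1, 0, 0.05, 0.01) == -0.01
assert effective_return(1, 1, 0.05, 0.01) == 0.05
assert effective_return(0, 0, 0.05, 0.01) == 0.0
```

The 0 → 0 case yields Reff = 0, which is consistent with the normalization f(0) = 0 imposed on the utility in the next subsection.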
2.2 The one period non-cooperative game
Let us focus now on the one period non-cooperative game. At any trading date, each agent
has to decide whether to trade or not. Given their information, the N agents simultaneously
choose their actions, according to their utility function. Following the random utility models
approach9, we assume that the utility has two components. The first (the rational part) is based
on the forecasted profit, the second is an error term representing the bounded rationality of the
agents: they may fail to choose their action correctly. We define, for i = 1, . . . , N,
Ui(ωi) = f(Reff) + ǫi ωi, (3)
where f : R → R is a continuous, strictly increasing map with f(0) = 0; it represents the
agents’ risk attitude. The terms (ǫi)i=1,...,N are i.i.d. random variables with common distribution
η.
It is easy to see that, in our setting, the choice is made according to the scheme described
in Table 1.
ω̄ → ω event Ui
0 → 1 if f(R − µ) + ǫi > 0 f(R − µ) + ǫi
0 → 0 if f(R − µ) + ǫi ≤ 0 0
1 → 1 if f(R) + ǫi > f(−µ) f(R) + ǫi
1 → 0 if f(R) + ǫi ≤ f(−µ) f(−µ)
Table 1: Scheme of the four possible scenarios (actions) and related payoffs.
9 See Brock and Durlauf (2001) for a detailed discussion on Random Utility Models.
It follows that the utility function Ui for each agent can be written as
Ui(ωi) = ωi(1 − ω̄i)f(R − µ) + ωiω̄if(R) + ω̄i(1 − ωi)f(−µ) + ǫi ωi
       = ωi [(1 − ω̄i)f(R − µ) + ω̄if(R) − ω̄if(−µ) + ǫi] + ω̄if(−µ) . (4)
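As a quick numerical check that the compact form (4) reproduces the four payoffs of Table 1, one can enumerate the cases (taking f as the identity here, purely for illustration):

```python
def U(omega_new, omega_prev, R, mu, eps, f=lambda x: x):
    """Utility (4): U = w[(1-wb)f(R-mu) + wb f(R) - wb f(-mu) + eps] + wb f(-mu),
    where w is the new action and wb the previous one."""
    w, wb = omega_new, omega_prev
    return w * ((1 - wb) * f(R - mu) + wb * f(R) - wb * f(-mu) + eps) + wb * f(-mu)

R, mu, eps = 0.05, 0.01, 0.002
assert abs(U(1, 0, R, mu, eps) - ((R - mu) + eps)) < 1e-12   # 0 -> 1: f(R-mu)+eps
assert U(0, 0, R, mu, eps) == 0.0                            # 0 -> 0: 0
assert abs(U(1, 1, R, mu, eps) - (R + eps)) < 1e-12          # 1 -> 1: f(R)+eps
assert abs(U(0, 1, R, mu, eps) - (-mu)) < 1e-12              # 1 -> 0: f(-mu)
```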
Since, by equation (2), R = g(q̄, q), the utility Ui explicitly depends on the N-dimensional vector
ω. Therefore, with a slight abuse of notation, we often write Ui(ωi, ω−i), where ω−i denotes
the vector of actions ω deprived of ωi. Note that the dependence on ω−i is only through
the normalized aggregate demand qN = (1/N)∑i ωi. Therefore, our N-dimensional game can be
considered as a finite dimensional version of a mean field game in the sense of Lasry and Lions10.
In fact, we have a non-cooperative game with a very large number of similar agents, each of
them subject to some noise ǫi, and coupled with each other through their utility function, which
depends on an aggregate quantity, qN, of the state variables.
3 Solution to the static non-cooperative game
In this section we turn our attention to the solution of the static non-cooperative game: we
study existence and characterization of Nash equilibria. In particular, we first show existence
of a pure strategy Nash equilibrium in the game played by the N agents. Then we describe the
expected behavior of the population when the number of players goes to infinity. This leads to
a characterization of the steady states of a sequence of repeated static games.
We look for an action ω = (ω1, . . . , ωN ) = (ωi, ω−i) such that
Ui(ω̃i, ω−i) ≤ Ui(ωi, ω−i) , i = 1, . . . , N, (5)

where ω̃i = 1 − ωi denotes a switch in agent i’s action.
We first prove the existence of pure strategy Nash equilibria in a rather general class of N-player
non-cooperative games that encompasses the problem we are facing. To this aim we
define the class of binary monotone mean field games as follows.
Definition 3.1 (Binary monotone mean field game). An N-player non-cooperative game is called
binary monotone mean field when:
• the N players face binary actions: ωi ∈ Ω = {0, 1}, for i = 1, . . . , N;
10See Lasry and Lions (2007) and Cardaliaguet (2010).
• their payoffs are of the kind

Ui(ω1, . . . , ωN ) = ωi [ G(1)i ( ∑Nj=1 ωj ) + ξi ] + G(2)i (ω−i), (6)

where G(1)i is monotone, for i = 1, . . . , N.
In the next lemma, we prove the existence of pure strategy Nash equilibria for binary mono-
tone mean field games. As already said, the game is mean field in the sense that the payoff
depends on ω only through an aggregate variable. The monotonicity is related to the functions
G(1)i (·). This result extends and generalizes previous works on existence of pure strategy Nash
equilibria in the context of binary decisions (see, for instance, Cheng et al. (2004)) to games
that are not necessarily symmetric.
Lemma 3.2. A binary monotone mean field game as defined in Definition 3.1 admits pure
strategy Nash equilibria.
Proof. We give the proof for the increasing case. The proof for the decreasing one follows
the same lines.
For ω = (ω1, . . . , ωN ) ∈ {0, 1}N, set A(ω) := {i : ωi = 1}. Thus vectors of strategies can be
identified with subsets of {1, 2, . . . , N}. Note that the following statements are equivalent:
N1. ω is a Nash equilibrium;
N2. Setting A := A(ω),
1. for every i ∈ A, we have G(1)i (|A|) + ξi ≥ 0;
2. for every i ∉ A, we have G(1)i (|A| + 1) + ξi ≤ 0.
Thus, all we need is to find A ⊆ {1, 2, . . . , N} for which N2 holds. For A ⊆ {1, 2, . . . , N}, define
the transformation

T A := {i ∈ A : ξi ≥ −G(1)i (|A|)}.

Now set A0 := {1, 2, . . . , N}. If T A0 = A0, then N2 holds for A0; in other words (1, 1, . . . , 1) is
a Nash equilibrium. Otherwise, set A1 := T A0 ⊊ A0. Note that, if i ∉ A1, then

ξi < −G(1)i (|A0|) ≤ −G(1)i (|A1| + 1).
So, if T A1 = A1, then N2 holds for A1. Otherwise we iterate. By induction on k we show that,
whenever Ak := T Ak−1 ⊊ Ak−1,

i ∉ Ak ⇒ ξi ≤ −G(1)i (|Ak| + 1).

We have shown it for k = 1. Suppose it is true for k − 1. If i ∈ Ak−1 \ Ak, by definition of T we
have that

ξi < −G(1)i (|Ak−1|) ≤ −G(1)i (|Ak| + 1).

On the other hand, if i ∉ Ak−1, then, by the inductive assumption,

ξi ≤ −G(1)i (|Ak−1| + 1) ≤ −G(1)i (|Ak| + 1).

Now, as soon as we have T Ak = Ak, then N2 holds for Ak. But, since T maps a finite set to a
subset of itself, it must eventually reach a fixed point, which is possibly the empty set.
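The proof is constructive and translates directly into an algorithm: start from A0 = {1, . . . , N} and iterate T until it stabilizes. A sketch for the increasing case, where the specific G(1)i and the Gaussian noise are our illustrative assumptions:

```python
import random

def pure_nash(G1, xi):
    """Pure-strategy Nash equilibrium of a binary monotone mean field game
    with increasing G^(1)_i, found by iterating the map T of Lemma 3.2.

    G1(i, k): value of G^(1)_i at aggregate k; xi: list of noise terms xi_i.
    Returns the equilibrium set A = {i : omega_i = 1}.
    """
    N = len(xi)
    A = set(range(N))                                    # A_0 = {1, ..., N}
    while True:
        TA = {i for i in A if xi[i] >= -G1(i, len(A))}   # the map T
        if TA == A:
            return A                                     # fixed point: N2 holds
        A = TA                                           # strict subset; iterate

# Illustrative game (our assumption): identical increasing G^(1)_i, Gaussian noise.
random.seed(0)
N = 200
xi = [random.gauss(0.0, 1.0) for _ in range(N)]
G1 = lambda i, k: 2.0 * k / N - 1.0

A = pure_nash(G1, xi)
# Verify condition N2 of the proof directly.
assert all(xi[i] >= -G1(i, len(A)) for i in A)
assert all(xi[i] <= -G1(i, len(A) + 1) for i in range(N) if i not in A)
```

Because T is applied to a strictly shrinking finite set, the loop terminates after at most N iterations, possibly at the empty set.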
In the following theorem we apply the existence result shown in Lemma 3.2 to the market
setting defined in the previous section:
Theorem 3.3. There exists a pure strategy Nash equilibrium ω∗ for the non-cooperative game
played at time t by the N agents with utilities as in equation (4). Moreover, ω∗ is such that
ω∗i = H( (1 − ω̄i) f( g( q̄N , q∗N + (1 − ω∗i)/N ) − µ ) + ω̄i f( g( q̄N , q∗N + (1 − ω∗i)/N ) ) − ω̄i f(−µ) + ǫi ), (7)

where H(u) = I[0,+∞)(u) and where q̄N = (1/N)∑Ni=1 ω̄i and q∗N = (1/N)∑Ni=1 ω∗i.
Proof. Existence follows directly from Lemma 3.2. It is enough to put

G(1)i (x) = (1 − ω̄i)f(g(q̄N , x/N) − µ) + ω̄i f(g(q̄N , x/N)),
G(2)i (y) = ω̄i f(−µ),
ξi = −ω̄i f(−µ) + ǫi.
We are left to show that any pure strategy Nash equilibrium satisfies equation (7).
First of all, notice that the term ω̄if(−µ) in equation (4) does not play any role in the
maximization, so that we can ignore it. Fix now a pure strategy ω∗; suppose, firstly, that ω∗i = 0.
This means that Ui(0, ω∗−i) ≥ Ui(1, ω∗−i). Ui(0, ω∗−i) = 0, whereas it is not difficult to see that

Ui(1, ω∗−i) = (1 − ω̄i) f( g( q̄N , q∗N + 1/N ) − µ ) + ω̄i f( g( q̄N , q∗N + 1/N ) ) + ǫi.

Thus ω∗i = 0 implies

(1 − ω̄i) f( g( q̄N , q∗N + 1/N ) − µ ) + ω̄i f( g( q̄N , q∗N + 1/N ) ) + ǫi ≤ 0.
Consider now ω∗i = 1. We have Ui(1, ω∗−i) ≥ Ui(0, ω∗−i). Note that in this case

Ui(1, ω∗−i) = (1 − ω̄i) f( g( q̄N , q∗N ) − µ ) + ω̄i f( g( q̄N , q∗N ) ) + ǫi ≥ 0.

Summarizing, ω∗i is characterized by equation (7).
Equation (7) characterizes the pure strategies of the game. In order to study the evolution of
returns in the market, an important statistic is q∗N , the optimal proportion of agents choosing
ω = 1 in the game with N players. Equation (7) can be rewritten in terms of q∗N via the so-called
auto-consistency equation, obtained by summing over i and dividing by N both sides of equation
(7). We also allow the return RN to depend on the number of agents: these dependencies are
functional to the N → +∞ limit that will be taken later.

q∗N = (1/N) ∑Ni=1 H( f( g( q̄N , q∗N + (1 − ω∗i)/N ) − µ ) − ω̄i [ f( g( q̄N , q∗N + (1 − ω∗i)/N ) − µ ) − f( g( q̄N , q∗N + (1 − ω∗i)/N ) ) + f(−µ) ] + ǫi ),
RN = g(q̄N , q∗N ). (8)
Note that Theorem 3.3 guarantees that, for q̄N given, at least one solution (q∗N , RN ) to (8)
exists. On the other hand, uniqueness may fail; in other words, the dynamical system described
by (8) can be ill defined. Nevertheless, one can, at a heuristic level, consider the limit of (8)
as N → +∞, at least along convergent subsequences. We now prove the main theorem of this
paper.
Theorem 3.4. Assume that

limN→+∞ q̄N = q̄,

and, for each N, let qN be a solution of equation (8). Then (qN )N≥1 is a tight sequence of
random variables. Each weak limit point q is, with probability one, a solution of the limiting
equation

q = q̄ η( f(g(q̄, q)) − f(−µ) ) + (1 − q̄) η( f(g(q̄, q) − µ) ). (9)
Proof. For notational simplicity, we set f = id; no change is needed in the proof for the general
case.
To begin with, tightness of (qN ) is obvious, since qN is [0, 1]-valued. We still denote by (qN ) a
subsequence converging to q. Without loss of generality, possibly enlarging the probability space
where the ǫi’s are defined (see the Skorohod Representation Theorem in Billingsley (1999),
Theorem 1.6.7), we can assume qN → q almost surely. Now, let ρN be the empirical measure

ρN := (1/N) ∑Ni=1 δ(ω̄i, ǫi).
Its weak limit comes from the assumptions: for h bounded and continuous,

limN→+∞ (1/N) ∑Ni=1 h(ω̄i, ǫi) = q̄ ∫ h(1, ǫ)η(dǫ) + (1 − q̄) ∫ h(0, ǫ)η(dǫ) (10)

almost surely.
Now, let Hδ, with δ > 0, be a family of Lipschitz functions with infδ Hδ = H. For δ fixed,
by (8), (10) and the fact that

g( q̄N , qN + (1 − ω̄i)/N ) → g(q̄, q)

almost surely, we have

q = limN→+∞ (1/N) ∑Ni=1 H( g( q̄N , qN + (1 − ω̄i)/N ) + µ(2ω̄i − 1) + ǫi )
  ≤ lim supN→+∞ (1/N) ∑Ni=1 Hδ( g( q̄N , qN + (1 − ω̄i)/N ) + µ(2ω̄i − 1) + ǫi )
  = q̄ ∫ Hδ(g(q̄, q) + µ + ǫ)η(dǫ) + (1 − q̄) ∫ Hδ(g(q̄, q) − µ + ǫ)η(dǫ).
Taking the limit over a sequence δn ↓ 0, by Dominated Convergence we obtain

q ≤ q̄ ∫ H(g(q̄, q) + µ + ǫ)η(dǫ) + (1 − q̄) ∫ H(g(q̄, q) − µ + ǫ)η(dǫ)
  = q̄ η(g(q̄, q) + µ) + (1 − q̄) η(g(q̄, q) − µ).
The corresponding lower bound is obtained similarly: one takes a Lipschitz lower bound Hδ for
H, such that Hδ ↑ H− as δ ↓ 0, where H− differs from H only for H−(0) = 0 < H(0) = 1. We
obtain
q ≥ q̄ ∫ H−(g(q̄, q) + µ + ǫ)η(dǫ) + (1 − q̄) ∫ H−(g(q̄, q) − µ + ǫ)η(dǫ).
Since, by assumption, η is a continuous distribution, we can replace H− with H in this last
formula, and the conclusion follows.
Whenever the limit equation (9), for a given q̄, has a unique solution q, Theorem 3.4 gives
a law of large numbers with a deterministic limit. In most examples, multiplicity of solutions
is actually exceptional, occurring only for a few values of q̄.
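To see what (9) looks like in practice, one can fix an illustrative specification (all choices below are assumptions of this sketch: f = id, g(q̄, q) = β(q − q̄), and η a logistic distribution function with scale s) and scan for its solutions q at a given q̄:

```python
import math

def eta(x, s=0.1):
    """Logistic distribution function: absolutely continuous, symmetric density."""
    return 1.0 / (1.0 + math.exp(-x / s))

def F(q, q_bar, beta=2.0, mu=0.01):
    """Right-hand side of (9) with f = id and g(q_bar, q) = beta*(q - q_bar)."""
    R = beta * (q - q_bar)
    return q_bar * eta(R + mu) + (1.0 - q_bar) * eta(R - mu)

def solutions(q_bar, grid=100_000):
    """Approximate all solutions of q = F(q, q_bar): sign changes of q - F
    on a fine grid of q in [0, 1]."""
    roots = []
    prev = 0.0 - F(0.0, q_bar)
    for k in range(1, grid + 1):
        q = k / grid
        cur = q - F(q, q_bar)
        if prev == 0.0 or prev * cur < 0.0:
            roots.append(q)
        prev = cur
    return roots

print(solutions(0.5))
```

With a demand impact β large relative to the noise scale s, the scan typically finds more than one solution q for the same q̄, illustrating the multiplicity discussed above.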
Remark 3.5. The static game has been solved for given ǫ1, ǫ2, . . . , ǫN , as a game with complete
information. One could argue that a more realistic model is obtained by assuming that each player
i can only observe his own noise ǫi, while only the distribution of the other players’ noise is
known to him. In rigorous terms this means:
• strategies are measurable functions from the noise space R to {0, 1}: in other words,
ωi = ωi(ǫi);
• the utility of the i-th player, which is of the form Ui(ωi, ω−i, ǫi), has to be replaced by

Ūi := Eǫ−i [Ui(ωi, ω−i, ǫi)],

where Eǫ−i denotes the average with respect to the noise variables ǫj , j ≠ i.
The equation for the (pure) Nash equilibrium becomes

ωi = H( (1 − ω̄i) Eǫ−i [ f( g( q̄N , qN + (1 − ωi)/N ) − µ ) ] + ω̄i Eǫ−i [ f( g( q̄N , qN + (1 − ωi)/N ) ) ] − ω̄i f(−µ) + ǫi ). (11)
The existence of a Nash equilibrium now becomes more problematic, since strategies belong to a
larger space. However, ignoring this problem, and assuming further that a law of large numbers
holds, i.e., that q̄N converges to a constant, in the limit as N → +∞ the qN obtained from (11)
converge to a solution of (9), as in the complete information case. In other words, we expect
the complete and the partial information models to yield the same macroscopic behavior.
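The expected agreement between the finite-N game and the limit equation can be checked by simulation. A sketch under an illustrative specification of ours (f = id, g(q̄, q) = β(q − q̄), logistic noise), computing the equilibrium with the decreasing iteration of Lemma 3.2 and neglecting the O(1/N) self-contribution:

```python
import math
import random

def eta(x, s=0.1):
    """Logistic distribution function (continuous, symmetric density)."""
    return 1.0 / (1.0 + math.exp(-x / s))

def finite_n_equilibrium(omega_prev, eps, beta=2.0, mu=0.01):
    """Largest pure Nash equilibrium share q*_N of the N-agent game,
    found by iterating the map T of Lemma 3.2 from A_0 = {1, ..., N}."""
    N = len(eps)
    q_bar = sum(omega_prev) / N
    A = set(range(N))
    while True:
        # omega_i = 1 iff beta*(|A|/N - q_bar) + mu*(2*omega_prev_i - 1) + eps_i >= 0
        TA = {i for i in A
              if beta * (len(A) / N - q_bar) + mu * (2 * omega_prev[i] - 1) + eps[i] >= 0}
        if TA == A:
            return len(A) / N
        A = TA

def limit_rhs(q, q_bar, beta=2.0, mu=0.01):
    """Right-hand side of the limit equation (9) under the same specification."""
    R = beta * (q - q_bar)
    return q_bar * eta(R + mu) + (1.0 - q_bar) * eta(R - mu)

random.seed(1)
N = 50_000
omega_prev = [1 if random.random() < 0.8 else 0 for _ in range(N)]
qb = sum(omega_prev) / N
# Logistic noise sampled by inverting eta (inverse-CDF method).
eps = [0.1 * math.log(u / (1.0 - u)) for u in (random.random() for _ in range(N))]

q_star = finite_n_equilibrium(omega_prev, eps)
# For large N, q_star approximately solves q = limit_rhs(q, qb), as in (9).
print(q_star, limit_rhs(q_star, qb))
```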
4 Long time behavior of the limit evolution equation
Equation (9) describes the behavior of the system with utility function (4) in the infinite volume
limit. We are interested in detecting the t-stationary solution(s) of this equation and in studying
its (their) stability properties. We refer to (9) as the limit evolution equation of our system.
Notice that at some points it will be more convenient to rewrite the system (9) as the implicit
equation G(q̄, q) = 0, where

G(q̄, q) := q − q̄ η( f(g(q̄, q)) − f(−µ) ) − (1 − q̄) η( f(g(q̄, q) − µ) ). (12)
Nonlinear equations of the type of (9) have already appeared in the literature; they have been
analyzed, for instance, in the context of heterogeneous agent-based models (see Brock and
Hommes (1998) or Hommes (2006) for a survey). A recent application of agent-based models to
a stylized financial market can be found in Chang (2007). In those papers, nonlinearity emerges
as a consequence of ex ante assumptions on agents’ heterogeneities or nonlinear price dynamics.
What makes our approach different is the derivation of (9) as the limit of an N-player game, where
agents are not heterogeneous in their expectations and where pricing rules are not forced to be
nonlinear.
The main results of this section are Theorem 4.3 and Theorem 4.4, in which we show,
respectively, existence and local (linear) stability of attractors for the limit evolution equation.
In Theorem 4.3 we consider two specific kinds of attractors for the dynamics induced by (12):
fixed points q such that G(q, q) = 0 and cycles of period 2 (briefly called 2-cycles). We define a
2-cycle as a pair (q, q̂) ∈ (0, 1)2, q ≠ q̂, such that G(q, q̂) = G(q̂, q) = 0.
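The induced dynamics q̄ ↦ q can be explored numerically by solving G(q̄, ·) = 0 at each step. A sketch under an illustrative specification of ours (f = id, g(q̄, q) = β(q − q̄), logistic η), with parameters deliberately chosen weak enough that each step has a unique solution:

```python
import math

def eta(x, s=0.3):
    """Logistic distribution function (symmetric, strictly positive density)."""
    return 1.0 / (1.0 + math.exp(-x / s))

def G(q_bar, q, beta=0.5, mu=0.05):
    """G(q_bar, q) from (12) with f = id and g(q_bar, q) = beta*(q - q_bar)."""
    R = beta * (q - q_bar)
    return q - q_bar * eta(R + mu) - (1.0 - q_bar) * eta(R - mu)

def step(q_bar, tol=1e-12):
    """One step of the induced dynamics: the q in [0, 1] with G(q_bar, q) = 0,
    found by bisection (unique here since beta is small relative to the noise)."""
    lo, hi = 0.0, 1.0            # G(q_bar, 0) <= 0 <= G(q_bar, 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G(q_bar, mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Iterate q -> step(q) from an initial demand of 0.9.
q = 0.9
traj = [q]
for _ in range(200):
    q = step(q)
    traj.append(q)

assert abs(traj[-1] - traj[-2]) < 1e-9     # the trajectory has settled
assert abs(G(traj[-1], traj[-1])) < 1e-6   # on a fixed point: G(q, q) = 0
```

For these parameters the trajectory settles on the fixed point q = 1/2; stronger interaction or other specifications can instead make successive iterates alternate between two values, which is how a 2-cycle would show up in such an iteration.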
Before stating the main theorems, we give some basic assumptions needed for the analysis of
the model:
Assumption 4.1. The model we are considering is such that
A.1 g : [0, 1] × [0, 1] → R, (x, y) ↦ r, introduced in (2), is of class C^{1,1}, decreasing in x,
increasing in y, and such that g(x, x) ≡ 0.
A.2 There exists a C1 map
i : Im(g) → Im(g),
where Im(g) is the image of g(·, ·), which is strictly decreasing, and
g(y, x) = i(g(x, y))
for all x, y ∈ (0, 1).
A.3 The distribution function η of random terms, as introduced in (3), is absolutely continuous,
with continuous strictly positive and symmetric density.
Assumption A.1 states that returns are increasing in demand. Concerning Assumption
A.2, the map i(·) describes how returns are transformed under time reversal and depends
on the specific model; some examples are presented in Section 5. Its decreasingness guarantees
consistency with Assumption A.1. Finally, A.3 is a technical assumption that could possibly be
relaxed. We first prove a technical lemma:
Lemma 4.2. Define
A(R) := η(f(R) − f(−µ)) − η(f(R − µ)) and B(R) := η(f(R − µ)).
Under Assumption 4.1, the map τ defined as
τ(R) := [A(R)B(i(R)) + B(R)] / [1 − A(i(R))A(R)] , (13)
is well defined and Im(τ) ⊆ (0, 1).
Proof. Since η has, by assumption, a strictly positive density, η(x) < 1 for every x ∈ R.
In particular, 0 < B(R) < 1, |A(R)| < 1 and A(R) + B(R) = η(f(R) − f(−µ)) ∈ (0, 1). It follows
that A(i(R))A(R) < 1, so that τ is well defined. We now show that τ(R) ∈ (0, 1).
• τ(R) > 0 amounts to A(R)B(i(R)) + B(R) > 0. This is clearly true whenever A(R) ≥ 0.
If A(R) < 0, then
A(R)B(i(R)) + B(R) > A(R) + B(R) > 0.
• τ(R) < 1 amounts to
A(R)[A(i(R)) + B(i(R))] + B(R) < 1.
This is clearly true when A(R) ≤ 0. If A(R) > 0, then
A(R)[A(i(R)) + B(i(R))] + B(R) < A(R) + B(R) < 1.
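These bounds are easy to check numerically. The following is a minimal Python sketch of our own (not part of the paper), with logistic η, identity f, a hypothetical transaction cost µ = 0.1 and the linear time-reversal map i(r) = −r used later in Section 5:

```python
import math

# logistic eta and identity f (risk-neutral case); mu is a transaction cost
beta, mu = 1.0, 0.1
eta = lambda x: 1.0 / (1.0 + math.exp(-beta * x))
f = lambda x: x

A = lambda R: eta(f(R) - f(-mu)) - eta(f(R - mu))   # A(R) of Lemma 4.2
B = lambda R: eta(f(R - mu))                        # B(R) of Lemma 4.2
i = lambda r: -r                                    # time-reversal map, linear case


def tau(R):
    # tau(R) = [A(R) B(i(R)) + B(R)] / [1 - A(i(R)) A(R)], eq. (13)
    return (A(R) * B(i(R)) + B(R)) / (1.0 - A(i(R)) * A(R))


for R in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert 0.0 < tau(R) < 1.0       # Im(tau) is contained in (0, 1)
assert abs(tau(0.0) - 0.5) < 1e-12  # tau(0) = 1/2, used in Theorem 4.3
```

The last assertion anticipates the proof of Theorem 4.3, where τ(0) = 1/2 identifies the unique fixed point.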
The next theorem shows that fixed points and 2-cycles can be described, in terms of returns, as
fixed points of a suitable map.
Theorem 4.3. Define the map
φ(R) := g(τ(i(R)), τ(R)), (14)
where τ(·) and i(·) are as in Lemma 4.2. Then, under Assumption 4.1, the set of fixed points
and 2-cycles for (12) is characterized as the set of fixed points of the map φ. In particular,
i) q ∈ (0, 1) is a fixed point for G if and only if q = 1/2. Moreover, φ(R)|_{R=0} = 0.
ii) G admits a 2-cycle if and only if the map φ has a nonzero fixed point.
Proof. A fixed point for (9) is such that
q = qA(R) + B(R) ,  R = g(q, q).
By assumption g(q, q) = 0, so that R = 0 and q = B(0)/(1 − A(0)) = 1/2. Note, moreover, that
i(0) = 0 and τ(0) = 1/2, so that φ(0) = 0. This proves that q = 1/2 is the only fixed point for
(9) and, since R = 0, it also corresponds to the zero fixed point of the map φ.
We are now left with point (ii). To this aim, note that the system (9) has a 2-cycle when there
exists a pair (q, q̄) ∈ (0, 1)², with q ≠ q̄, such that
q̄ = qA(R) + B(R) ,  R = g(q, q̄) (15)
and
q = q̄A(R̄) + B(R̄) ,  R̄ = g(q̄, q) (16)
both hold. Suppose (15) and (16) hold. Note that R̄ = i(R). Plugging into (15) the expression
for q given by (16), we get q̄ = τ(R) and q = τ(i(R)). This, inserted in the second equation of
(15), gives R = φ(R).
Conversely, if R ≠ 0 is such that R = φ(R), then q = τ(i(R)) and q̄ = τ(R) solve (15) and (16).
This concludes the proof of point (ii).
Theorem 4.3 shows that the only steady-state equilibrium corresponds to half of the agents
owning the asset at each point in time. Moreover, it provides conditions for the existence of a
2-cycle. Although Theorem 4.3 does not rule out more complex attractors, such as longer periodic
orbits or chaotic behavior, none of these has been detected in the simulations corresponding to
the examples of Section 5.
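To illustrate point (ii) of Theorem 4.3, one can locate a nonzero fixed point of φ numerically. The following Python sketch uses assumptions of our own choosing (the linear risk-neutral specification of Section 5 with µ = 0, where φ reduces to φ(R) = k(2η(R) − 1), and hypothetical parameters k = 4, β = 1):

```python
import math

beta, k = 1.0, 4.0                       # hypothetical parameters, k > 2/beta
eta = lambda x: 1.0 / (1.0 + math.exp(-beta * x))


def phi(R):
    # in the linear risk-neutral case with mu = 0 (case 1A of Section 5)
    # the map of Theorem 4.3 reduces to phi(R) = k (2 eta(R) - 1)
    return k * (2.0 * eta(R) - 1.0)


# phi'(0) = k*beta/2 = 2 > 1, so a nonzero fixed point exists; the slope at
# the crossing is < 1, hence plain fixed-point iteration converges to it
R = 1.0
for _ in range(200):
    R = phi(R)

assert R > 0 and abs(phi(R) - R) < 1e-10   # R* ≈ 3.83: the 2-cycle's return
```

By Theorem 4.3 (ii), the nonzero R* found here is exactly the return level visited by the 2-cycle.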
In the next result we discuss the local (linear) stability of the fixed point and of the 2-cycle.
Theorem 4.4. Let R∗ be any fixed point (possibly R∗ = 0) of φ, R̄∗ := i(R∗) and (q∗, q̄∗) :=
(τ(i(R∗)), τ(R∗)) the associated 2-cycle, possibly the fixed point (1/2, 1/2). Then, under
Assumption 4.1, the following statements hold true.
1. Well-definiteness. Assume
q∗A′(R∗)∂₂g(q∗, q̄∗) + B′(R∗)∂₂g(q∗, q̄∗) ≠ 1 (17)
and
q̄∗A′(i(R∗))∂₂g(q̄∗, q∗) + B′(i(R∗))∂₂g(q̄∗, q∗) ≠ 1. (18)
Then the implicit map (9) is well defined both in a neighborhood of (q∗, q̄∗) and in a neighborhood
of (q̄∗, q∗).
2. Local stability. Set
ρ := | [A(R∗) + q∗A′(R∗)∂₁g(q∗, q̄∗) + B′(R∗)∂₁g(q∗, q̄∗)] / [1 − q∗A′(R∗)∂₂g(q∗, q̄∗) − B′(R∗)∂₂g(q∗, q̄∗)] |
  × | [A(i(R∗)) + q̄∗A′(i(R∗))∂₁g(q̄∗, q∗) + B′(i(R∗))∂₁g(q̄∗, q∗)] / [1 − q̄∗A′(i(R∗))∂₂g(q̄∗, q∗) − B′(i(R∗))∂₂g(q̄∗, q∗)] | . (19)
Then the 2-cycle (q∗, q̄∗) is linearly stable for ρ < 1 and linearly unstable for ρ > 1.
Proof. We only give a sketch of this rather straightforward proof. Note that we are consid-
ering the fixed point (1/2, 1/2) as a particular case of (degenerate) 2-cycle.
Statement 1 follows from a standard application of the Implicit Function Theorem to the im-
plicit equation (9). Concerning stability, we apply the map (9) twice to a small perturbation
of q∗:
q̄ε := (q∗ + ε)A(g(q∗ + ε, q̄ε)) + B(g(q∗ + ε, q̄ε)) ,
qε := q̄εA(g(q̄ε, qε)) + B(g(q̄ε, qε)) .
We then compute, by implicit differentiation, first q̄′ := (d/dε) q̄ε|ε=0 and then q′ := (d/dε) qε|ε=0.
We obtain |q′| = ρ, from which our statement on linear stability follows.
All conditions given in Theorem 4.4 simplify for the fixed point (1/2, 1/2), since equations (17)
and (18) coincide, as do the two factors on the right-hand side of (19).
The rather awkward stability condition we have obtained can, in principle, be checked nu-
merically in specific examples; we perform such an analysis in the next section. Before moving
forward, however, let us point out that a simpler geometric condition for the stability of the
2-cycle, when µ = 0 (i.e., no transaction costs), is given in the next result.
Corollary 4.5. Assume there are no transaction costs. Then, under Assumption 4.1,
1. the fixed point is linearly stable if and only if φ′(0) < 1,
2. a 2-cycle (q∗, q̄∗) := (τ(i(R∗)), τ(R∗)) is linearly stable if φ′(R∗) < 1.
Proof. Note that, when µ = 0, we have A(R) ≡ 0. So, by Theorem 4.4, we get
ρ = | B′(R∗)∂₁g(q∗, q̄∗) / (1 − B′(R∗)∂₂g(q∗, q̄∗)) | · | B′(i(R∗))∂₁g(q̄∗, q∗) / (1 − B′(i(R∗))∂₂g(q̄∗, q∗)) | .
Moreover,
φ(R) = g(B(i(R)), B(R)),
so that
φ′(R) = ∂1g(B(i(R)), B(R))B′(i(R))i′(R) + ∂2g(B(i(R)), B(R))B′(R).
In what follows we use the fact that, since g(x, y) = i(g(y, x)), we have
∂1g(x, y) = i′(g(y, x))∂2g(y, x) , ∂2g(x, y) = i′(g(y, x))∂1g(y, x). (20)
Assume that φ′(R∗) < 1, i.e.,
∂₁g(q∗, q̄∗)B′(i(R∗))i′(R∗) + ∂₂g(q∗, q̄∗)B′(R∗) < 1. (21)
Note that, by (20), we can rewrite (21) in the alternative form
∂₂g(q̄∗, q∗)B′(i(R∗)) + ∂₁g(q̄∗, q∗)i′(i(R∗))B′(R∗) < 1. (22)
Note that, since B(R) = η(f(R)), we have B′ > 0. Moreover, ∂₁g < 0, ∂₂g > 0 and i′ < 0.
Finally, i′(i(R∗)) = (i⁻¹)′(i(R∗)) = 1/i′(R∗). By (21) we obtain
1 − ∂₂g(q∗, q̄∗)B′(R∗) > ∂₁g(q∗, q̄∗)B′(i(R∗))i′(R∗) > 0
or, equivalently,
| ∂₁g(q∗, q̄∗)B′(i(R∗))i′(R∗) / (1 − ∂₂g(q∗, q̄∗)B′(R∗)) | < 1. (23)
Similarly, by (22),
1 − ∂₂g(q̄∗, q∗)B′(i(R∗)) > ∂₁g(q̄∗, q∗)i′(i(R∗))B′(R∗) > 0
or, equivalently,
| ∂₁g(q̄∗, q∗)i′(i(R∗))B′(R∗) / (1 − ∂₂g(q̄∗, q∗)B′(i(R∗))) | < 1. (24)
By multiplying (23) and (24) we get the stability condition ρ < 1.
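The geometric condition of Corollary 4.5 is easy to verify numerically. The following is a Python sketch of our own for the linear risk-neutral case with µ = 0 (so φ(R) = k(2η(R) − 1)) and hypothetical parameters β = 1, k = 4 > 2/β:

```python
import math

beta, k = 1.0, 4.0                       # hypothetical parameters (case 1A)
eta = lambda x: 1.0 / (1.0 + math.exp(-beta * x))
etap = lambda x: beta * math.exp(-beta * x) / (1.0 + math.exp(-beta * x))**2

phi = lambda R: k * (2.0 * eta(R) - 1.0)   # phi(R) for mu = 0, linear returns
dphi = lambda R: 2.0 * k * etap(R)         # phi'(R)

# locate the nonzero fixed point of phi by bisection on phi(R) - R
lo, hi = 1e-6, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if phi(mid) - mid > 0.0:
        lo = mid
    else:
        hi = mid
R_star = 0.5 * (lo + hi)

assert dphi(0.0) > 1.0      # fixed point unstable here: phi'(0) = k*beta/2 = 2
assert dphi(R_star) < 1.0   # slope < 1 at the crossing: the 2-cycle is stable
```

The two assertions reproduce, for this example, exactly the dichotomy of Corollary 4.5: an unstable fixed point coexisting with a linearly stable 2-cycle.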
In the next section we characterize the volume of trades, i.e., the number of shares exchanged
on the market at each trading date. We will see that, even at the equilibrium where q = 1/2,
some trade may still take place. The volume of trades depends on the parameters of the model.
4.1 Volume of trades
The total percentage volume of trades in the market with N players is given by the sum of the
trades of players who do not hold the asset and want to acquire it and of those who own it and
want to sell it:
SN = (1/N) Σ_{i=1}^N ω̄i(1 − ωi) + (1/N) Σ_{i=1}^N ωi(1 − ω̄i)
   = (1/N) Σ_{i=1}^N ω̄i − (2/N) Σ_{i=1}^N ωiω̄i + (1/N) Σ_{i=1}^N ωi
   = q̄N − 2sN + qN , (25)
where sN = (1/N) Σ_{i=1}^N ωiω̄i.
Proposition 4.6. Assume that lim_{N→+∞} ρN = ρ. Then lim_{N→+∞} SN = S, where S is the
limiting percentage trading volume in the game with infinitely many agents. Moreover, S solves
the equation
S = q̄ − 2s + q , where s = qη(f(R) − f(−µ)) . (26)
Proof.
sN = (1/N) Σ_{i=1}^N ωiω̄i = ρN(1, 1) , (27)
where ρN(ω, ω̄) denotes the joint distribution of each couple (ωi, ω̄i). To find the N → +∞ limit
of equation (27), we first need to prove that lim_{N→+∞} ρN = ρ exists; since (ρN) is a sequence
on a compact set, it admits limit points, so we can consider the limit along convergent
subsequences and, since the limiting dynamics are well defined, there is only one limit point.
Thus a law of large numbers applies and, in particular, it yields
sN → s = ρ(1, 1) .
Now
ρ(1, 1) = P(ω = 1, ω̄ = 1)
= P(ω = 1, H((1 − ω)f(R − µ) + ω(f(R) − f(−µ)) + ǫ) = 1)
= P(ω = 1) · P(H((1 − ω)f(R − µ) + ω(f(R) − f(−µ)) + ǫ) = 1 | ω = 1)
= q · P(f(R) − f(−µ) + ǫ > 0) = qη(f(R) − f(−µ)) ,
where H(x) = I_{[0,+∞)}(x).
Recall that, when q = q̄ = 1/2, R = 0. In this case S = S(µ) = 1 − η(−f(−µ)) = η(f(−µ)),
which is a decreasing function of µ. This confirms the intuition that a transaction tax lowers the
volume of trades. Moreover, S ∈ [0, 1/2], since S(0) = 1/2 and lim_{µ→+∞} S(µ) = 0.
Note that S(0) = 1/2 means that, with no transaction costs, half of the agents trade the
asset at the equilibrium, even though returns are zero. The reason is that, on average,
half of the population believes in a decreasing market and half believes in the opposite (due to
the idiosyncratic error terms ǫ). As µ increases, the cost makes trading not worthwhile
for a larger fraction of agents. In this sense, the transaction tax µ has a regularizing effect, since
it mitigates the volume of transactions driven by boundedly rational traders.
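The behavior of S(µ) can be checked directly. A small Python sketch of our own, with a hypothetical β = 2 and identity f (risk-neutral case):

```python
import math

beta = 2.0                              # hypothetical noise parameter
eta = lambda x: 1.0 / (1.0 + math.exp(-beta * x))
f = lambda x: x                         # risk-neutral case

# equilibrium trading volume S(mu) = eta(f(-mu)) when q = qbar = 1/2, R = 0
S = lambda mu: eta(f(-mu))

assert abs(S(0.0) - 0.5) < 1e-12        # no costs: half the agents trade
vols = [S(mu) for mu in (0.0, 0.5, 1.0, 2.0, 5.0)]
assert all(a > b for a, b in zip(vols, vols[1:]))   # S decreases with mu
assert S(50.0) < 1e-6                   # S -> 0 as mu -> +infinity
```

The assertions reproduce the three claims above: S(0) = 1/2, monotone decrease in µ, and vanishing volume for large taxes.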
5 Examples and comparative statics
All results derived so far have been given in a rather general form. In this section we discuss
in details the meaning and implication of our results, constraining the analysis to examples, in
which we adopt specific functional forms for the utility function f , for the distribution η of the
error parameter introduced in (3) and for the relationship between asset’s demand and its return
given by the function g defined in (2).
5.1 Examples
Demands and returns.
From now on, we rely on a common definition of returns: R = (P̄ − P)/P, where P is the current
price and P̄ the next-period price.
Case 1. Linear evolution of prices. We consider a linear relationship between demand
changes and the asset's returns:
R = k · (q̄ − q) , k > 0. (28)
This implies the following price dynamics:
P̄ − P = P · k(q̄ − q). (29)
This specification is inspired by Bouchaud and Cont (2000) and Nadal et al. (2005). The
parameter k is sometimes referred to as the market depth: it measures how strongly the price
reacts to the excess demand on the market. In this very stylized context it can be considered a
simplified form of the elasticity of the price with respect to demand.
Case 2. Loglinear evolution of prices. Simple as it might be, a linear relationship between
demand and returns can hide some interesting behavior of the system. In order to test this
working hypothesis we allow for a nonlinear relationship between q and R. A simple way to
introduce it is to assume that returns follow an exponential shape:
R = (q̄/q)^k − 1 , k > 0, (30)
which yields a loglinear price dynamics:
ln P̄ − ln P = k · (ln q̄ − ln q). (31)
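For this specification one can check Assumption A.2 directly: the map i(r) = −r/(1 + r) (which appears later in the proof of Theorem 5.1) is the decreasing involution associated with the loglinear rule. A short Python sketch of our own, with a hypothetical k = 2:

```python
k = 2.0                                  # hypothetical market-depth parameter
g = lambda x, y: (y / x)**k - 1.0        # loglinear rule (30)
i = lambda r: -r / (1.0 + r)             # candidate time-reversal map

for x, y in [(0.2, 0.7), (0.5, 0.5), (0.9, 0.1)]:
    r = g(x, y)
    assert abs(g(y, x) - i(r)) < 1e-9    # g(y, x) = i(g(x, y)), Assumption A.2
    assert abs(i(i(r)) - r) < 1e-9       # i is its own inverse (an involution)
```

The same check with i(r) = −r verifies A.2 for the linear rule (28).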
Risk attitude.
We constrain the agents to be either risk-neutral or risk-averse, two common features of portfolio
selection models.
Case A. Risk-neutral agents: f is linear (e.g., the identity function). In this case the agents'
utility function becomes:
Ui(ω̄) = ω̄i [R + µ(2ωi − 1) + ǫi] − ωiµ . (32)
Case B. Risk-averse agents: let f be a CARA utility function (f(x) = 1 − e^{−Ax}). In this case
we have:
Ui(ω̄) = ω̄i [ (1 − ωi)(1 − e^{−A(R−µ)}) + ωi(1 − e^{−AR}) − ωi(1 − e^{Aµ}) + ǫi ] + ωi(1 − e^{Aµ}) , (33)
where A > 0 is the Arrow–Pratt coefficient of absolute risk aversion.
Distribution of errors.
Distribution of errors.
We only consider logistically distributed error terms:
η(x) = P(ǫ ≤ x) = 1/(1 + e^{−βx}) , β > 0. (34)
The choice of logistic error terms is rather common in models of evolving
social systems11. Note that the logistic distribution is absolutely continuous and symmetric, as
needed in Sections 3 and 4. The parameter β measures the impact of the random component
in the decision process: when β is high, the deterministic part plays the major role in the
maximization; when β is close to zero, the error term dominates and the choice between
ω̄ = 1 and ω̄ = 0 approximates a coin toss.
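The role of β can be made concrete with a small Python sketch of our own (the utility gap U = 0.3 is a hypothetical value), which also checks the sampling of logistic errors by inversion:

```python
import math
import random


def eta(x, beta):
    # logistic distribution function of the noise term eps, eq. (34)
    return 1.0 / (1.0 + math.exp(-beta * x))


# an agent chooses 1 iff U + eps > 0, which happens with probability
# P(eps > -U) = eta(U), by symmetry of the logistic law
U = 0.3                                   # hypothetical deterministic utility gap
assert abs(eta(U, 1e-9) - 0.5) < 1e-6     # beta -> 0: the choice is a coin toss
assert eta(U, 50.0) > 0.999               # large beta: nearly deterministic

# Monte Carlo check, sampling eps by inversion of the logistic CDF
random.seed(0)
beta, n = 2.0, 200_000
hits = 0
for _ in range(n):
    u = random.random()
    eps = (1.0 / beta) * math.log(u / (1.0 - u))   # eps ~ logistic with scale 1/beta
    hits += (U + eps > 0)
p_hat = hits / n
assert abs(p_hat - eta(U, beta)) < 0.01
```

The two limiting assertions mirror the text: pure randomization for small β, near-deterministic maximization for large β.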
The general results on existence and (local) stability of the fixed point and the 2-cycle in
Theorems 4.3 and 4.4 can be rewritten in terms of the parameters of the model, case by case,
as follows.
Theorem 5.1.
Case 1A (linear price evolution; risk-neutral agents).
There exists a critical value kc(β, µ) for k, where kc(β, µ) := (1/β)(1 + e^{βµ}), such that:
• for k < kc(β, µ), q∗ = 1/2 is locally stable; moreover, there are no locally stable 2-cycles;
• for k > kc(β, µ) and k ≠ (1/β)(2 + e^{βµ} + e^{−βµ}), q∗ = 1/2 is unstable; moreover, there exists
a unique 2-cycle (q, q̄), which is locally stable.
Case 2A (loglinear price evolution; risk-neutral agents).
There exist two critical values kc_l(β, µ) ≤ kc_u(β, µ) for k, where kc_u(β, µ) := (1/(2β))(1 + e^{βµ}),
such that:
• for k < kc_l(β, µ), q∗ = 1/2 is locally stable; moreover, there are no locally stable 2-cycles;
• there exists a set H ⊆ (kc_l(β, µ), kc_u(β, µ)) of nonzero measure such that, for k ∈ H, the
locally stable fixed point q∗ = 1/2 and a locally stable 2-cycle coexist;
• for k > kc_u(β, µ) and k ≠ (1/(2β))(2 + e^{βµ} + e^{−βµ}), q∗ = 1/2 is unstable; moreover, there
exists a unique locally stable 2-cycle.
Case 1B (linear price evolution; risk-averse agents).
There exists a critical value kc(β, µ, A) for k, where kc(β, µ, A) := [2/(Aβ(e^{Aµ} + 1))] (1 + exp(β(e^{Aµ} − 1))),
such that:
• for k < kc(β, µ, A), q∗ = 1/2 is locally stable; moreover, there are no locally stable 2-cycles;
• for k > kc(β, µ, A) and k ≠ [2/(Aβ(e^{Aµ} + 1))] (exp(β(e^{Aµ} − 1)) + exp(−β(e^{Aµ} − 1)) + 2),
q∗ = 1/2 is unstable; moreover, there exists a unique 2-cycle (q, q̄), which is locally stable.
Case 2B (loglinear price evolution; risk-averse agents).
There exist two critical values kc_l(β, µ, A) ≤ kc_u(β, µ, A) for k,
where kc_u(β, µ, A) := [1/(Aβ(e^{Aµ} + 1))] (1 + exp(β(e^{Aµ} − 1))), such that:
• for k < kc_l(β, µ, A), q∗ = 1/2 is locally stable; moreover, there are no locally stable 2-cycles;
• there exists a set H ⊆ (kc_l(β, µ, A), kc_u(β, µ, A)) of nonzero measure such that, for k ∈ H,
the locally stable fixed point q∗ = 1/2 and a locally stable 2-cycle coexist;
• for k > kc_u(β, µ, A) and k ≠ [1/(Aβ(e^{Aµ} + 1))] (exp(β(e^{Aµ} − 1)) + exp(−β(e^{Aµ} − 1)) + 2),
q∗ = 1/2 is unstable; moreover, there exists a unique locally stable 2-cycle.
The lower thresholds kc_l(β, µ) and kc_l(β, µ, A) in cases 2A and 2B cannot be determined explicitly.
11 See, for instance, Brock and Durlauf (2001).
Remark 5.2. We conjecture that the statement of this theorem can be strengthened. In partic-
ular, we believe that H = (kc_l, kc_u), meaning that coexistence occurs for all values of k in the
interval. Up to now, we are only able to provide specific examples where coexistence certainly
holds.
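The explicit thresholds of cases 1A and 2A are straightforward to evaluate. A Python sketch of our own (with a hypothetical β = 0.3):

```python
import math


def kc_1A(beta, mu):
    # critical market depth in case 1A: kc = (1/beta)(1 + e^{beta mu})
    return (1.0 / beta) * (1.0 + math.exp(beta * mu))


def kcu_2A(beta, mu):
    # upper threshold in case 2A: kc_u = (1/(2 beta))(1 + e^{beta mu})
    return (1.0 / (2.0 * beta)) * (1.0 + math.exp(beta * mu))


beta = 0.3
assert abs(kc_1A(beta, 0.0) - 2.0 / beta) < 1e-12    # kc(beta, 0) = 2/beta
assert abs(kcu_2A(beta, 0.0) - 1.0 / beta) < 1e-12   # ≈ 3.33 for beta = 0.3
assert kc_1A(beta, 0.5) > kc_1A(beta, 0.0)  # costs enlarge the stability region
```

Note that kc_u of the loglinear case is exactly half of the linear-case threshold kc, for every β and µ.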
Proof.
We concentrate only on cases 1A and 2A; the corresponding risk-averse cases (1B and
2B) can be treated in a similar way.
Case 1A. The value kc(β, µ) = (1/β)(1 + e^{βµ}) is determined by computing ρ, as defined in
equation (19). To this aim, note that for g(x, y) = k(y − x) we have i(r) = −r (where i(·) was
defined in Assumption 4.1). Concerning the fixed point (1/2, 1/2), after some manipulations
one finds that
ρ|_{(1/2,1/2)} = ( (1 − 2η(−µ) − kη′(µ)) / (1 − kη′(µ)) )² ,
where ρ|_{(1/2,1/2)} denotes the value of ρ computed at (q∗, q̄∗) = (1/2, 1/2). The stability
condition follows from solving ρ|_{(1/2,1/2)} < 1, and it is easy to show that it holds true if and
only if k < η(µ)/η′(µ). Substituting the logistic expression for the distribution η yields
k < (1/β)(1 + e^{βµ}) .
Concerning the 2-cycle, let us first assume µ = 0 (so that A(R) ≡ 0). A 2-cycle corresponds to
a nonzero fixed point of the map φ, which, in case 1A, is given by
φ(R) = k [B(R) − B(−R)] = k(2η(R) − 1). (35)
It is easy to see that φ is an increasing map, concave for R ≥ 0, with φ(0) = 0. Then a 2-cycle
exists if and only if the equation φ(R) = R has a positive solution, i.e., if and only if φ′(0) > 1.
This, in turn, implies that the graph of φ intersects the identity at the point R∗ ≠ 0
corresponding to the 2-cycle with slope less than 1, which also gives the stability condition (see
Corollary 4.5). Now, φ′(0) > 1, in this case, reads 2kη′(0) > 1, i.e., k > 2/β.
If µ ≠ 0, the map φ can be written as
φ(R) = k · [η(R + µ) + η(R − µ) − 1] / [η(R + µ) − η(R − µ) + 1] ,
which is an increasing function, since
φ′(R) = 2k · [η′(R + µ)(1 − η(R − µ)) + η(R + µ)η′(R − µ)] / [η(R + µ) − η(R − µ) + 1]² > 0.
Moreover, it is concave for R ≥ 0, since
φ″(R) = {2k / [η(R + µ) − η(R − µ) + 1]³} ·
{ [η″(R + µ)(1 − η(R − µ)) + η(R + µ)η″(R − µ)] [η(R + µ) − η(R − µ) + 1]
− 2 [η′(R + µ)(1 − η(R − µ)) + η(R + µ)η′(R − µ)] [η′(R + µ) − η′(R − µ)] } < 0.
Then, as before, the 2-cycle exists if and only if φ′(0) > 1, which, in this case, gives
kη′(µ)/η(µ) > 1, i.e., with the logistic distribution, k > (1/β)(1 + e^{βµ}). Hence the 2-cycle
exists if and only if the fixed point is unstable.
Concerning the condition k ≠ (1/β)(2 + e^{βµ} + e^{−βµ}), it follows from condition 1 of Theorem 4.4,
just by computing the left-hand sides of equations (17) and (18) (the two conditions coincide in
the case of linear returns).
Case 2A. The value kc_u(β, µ) = (1/(2β))(1 + e^{βµ}) follows by arguing as before; in this
case, since g(x, y) = (y/x)^k − 1, we have i(r) = −r/(1 + r). Thus it is not difficult to see that
ρ|_{(1/2,1/2)} = ( (1 − 2η(−µ) − 2kη′(µ)) / (1 − 2kη′(µ)) )² .
Hence ρ|_{(1/2,1/2)} < 1 if and only if k < (1/(2β))(1 + e^{βµ}).
This is enough to discuss the (local) stability of the fixed point. Concerning the condition
k ≠ (1/(2β))(2 + e^{βµ} + e^{−βµ}), it follows from condition 1 of Theorem 4.4, arguing as before.
As far as the 2-cycle is concerned, we are not able to find explicitly the threshold kc_l(β, µ) for
k above which a stable 2-cycle starts to exist. Nevertheless, at least for µ = 0, we can
rely on Corollary 4.5 in order to characterize the threshold level for k.
Indeed, after some algebra, one finds that
φ′(R) = kβ · ( (1 + e^{βR/(1+R)}) / (1 + e^{−βR}) )^k · [ e^{βR/(1+R)} / ((1 + R)²(1 + e^{βR/(1+R)})) + e^{−βR} / (1 + e^{−βR}) ] . (36)
Recall that the threshold for k is determined by the condition φ′(R) = 1. In order to discuss
existence and stability of the 2-cycle, we start from the case µ = 0 and rely on numerical tools.
Indeed, plotting the function φ(R) for different values of the parameters, one sees that, for k > 0
small, φ is such that φ′(0) < 1 and φ(R) < R for all R > 0; thus, there are no (locally) stable
2-cycles (see the left panel of Figure 1 for an example). For k large enough, in particular for
k > kc_u(β, 0), a locally stable 2-cycle exists, since φ′(0) > 1 and φ(R) < R for R large enough.
The last fact is obvious because, for R → ∞, φ(R) has the finite limit (1 + e^β)^k − 1; thus φ
must cross the bisector at some positive R∗, where φ′(R∗) < 1.
The last point to be discussed is the coexistence of attractors. We show, in particular, that there
are values of β, k and µ such that φ′(0) < 1 and φ′(R∗) < 1 for some R∗ ≠ 0. To this aim, it is
sufficient to show that, for specific values of the parameters, φ′(0) < 1, φ(R′) > R′ and
φ(R″) < R″ for some R″ > R′ > 0. Let us take k = 3, β = 0.3 and µ = 0. We have φ′(0) = 0.9 < 1.
Moreover, φ(4) > 4 and φ(10) < 10. Thus, necessarily, k = 3 belongs to the set
H ⊆ (kc_l(0.3, 0), kc_u(0.3, 0)). We plot φ(R) with k = 3, β = 0.3 and µ = 0 in the right panel
of Figure 1; in this case R∗ = 9.35 is the point marked with (∗). Therefore, there are situations
in which kc_l(β, µ) < kc_u(β, µ) and coexistence takes place. Note, finally, that, for certain values
of the parameters, we can also have kc_l(β, µ) = kc_u(β, µ).
All these arguments can be extended to the case µ > 0 by means of numerical tools.
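The numerical checks used in this proof for k = 3, β = 0.3, µ = 0 can be reproduced in a few lines of Python (a sketch of our own of the map φ of case 2A):

```python
import math

beta, k = 0.3, 3.0                      # the example used above (mu = 0)


def phi(R):
    # phi(R) = ((1 + e^{beta R/(1+R)}) / (1 + e^{-beta R}))^k - 1, case 2A
    return ((1.0 + math.exp(beta * R / (1.0 + R))) /
            (1.0 + math.exp(-beta * R)))**k - 1.0


assert abs(k * beta - 0.9) < 1e-12      # phi'(0) = k*beta = 0.9 < 1
assert phi(4.0) > 4.0 and phi(10.0) < 10.0   # so phi crosses the bisector

R = 9.0                                 # iterate towards the stable crossing
for _ in range(500):
    R = phi(R)
assert abs(phi(R) - R) < 1e-9
assert 9.2 < R < 9.5                    # R* ≈ 9.35: both attractors coexist
```

The iteration converges because the slope of φ at the crossing is below one, which is precisely the stability condition of Corollary 4.5.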
We now discuss the main implications of this theorem in more detail, concentrating on the
coexistence of the fixed point and the 2-cycle.
Look, first, at the case of linear evolution of prices, g(x, y) = k(y − x) (cases 1A and 1B).
To ease notation, we put µ = 0. Under these assumptions, it is not difficult to show that
φ(x) = g(1 − η(x), η(x)) = k[2η(x) − 1], which is concave, since φ′(x) = 2kη′(x) is decreasing
Figure 1: Two plots of the function φ(R) in case 2A (loglinear; β = 0.3, µ = 0; left panel:
k = 2.5, right panel: k = 3). In the left panel the only fixed point is R = 0. In the right panel
there is a strictly positive fixed point of φ(R), where φ′(R) < 1 (marked with (∗)).
for x ≥ 0. The equation x = φ(x) admits a positive solution if and only if φ′(0) > 1, i.e., if and
only if kη′(0) > 1/2. In this case the 2-cycle is stable, since the graph of φ intersects the identity
with slope less than 1. As a result, there can be no coexistence of a stable fixed point and a
stable 2-cycle.
On the other hand, under loglinear evolution of prices (i.e., in cases 2A and 2B), coexistence
is possible, as shown in the right panel of Figure 1. Note that, looking at that figure, we also
have an unstable fixed point (denoted by (o) in the graph).
Coexistence has important consequences at the level of the finite-dimensional system. We
perform some agent-based simulations in order to capture this aspect. In more detail, we
simulate a large but finite population of N agents. At any time step, we let the N agents
play (sequentially) their best response to the (fixed) actions of the other agents, and we let
the algorithm run until a fixed profile ω̄ is reached. In doing this, we are numerically
identifying a Nash equilibrium as a strategy profile ω̄ that is a fixed point of the best-response
map. In the case of multiple Nash equilibria, the algorithm always identifies the one nearest to
the equilibrium found at the previous time step (where nearest means the one in which the
fewest agents have switched their choice).
In the case of coexistence, it can be shown that the finite-dimensional system oscillates between
the two different attractors: it remains for rather long time windows near the fixed point
and for rather long time windows near the limiting 2-cycle. See Figure 2, where we use the same
parameters as in the right panel of Figure 1. This situation, rephrased in terms of returns,
mimics a time series with periods of relatively stable returns and periods of very volatile returns.
Therefore, we can say that our simple model is able to endogenously generate market volatility
regime switching.
Figure 2: Asymptotic regime (2-cycle) for the demand (dotted line) and finite-dimensional
simulation with N = 1000 agents (continuous line); loglinear case, k = 3, β = 0.3, µ = 0,
q(0) = 0.7.
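The settling onto a 2-cycle can also be reproduced from the limit dynamics directly. The following Python sketch of our own uses the linear risk-neutral case 1A with µ = 0 and hypothetical parameters β = 1, k = 4 > kc (not the loglinear parameters of Figure 2), iterating the implicit update in which the new demand q̄ solves q̄ = η(k(q̄ − q)):

```python
import math

beta, k = 1.0, 4.0                   # hypothetical parameters, k > kc = 2/beta
eta = lambda x: 1.0 / (1.0 + math.exp(-beta * x))


def step(q):
    # one step of the limit dynamics: solve qbar = eta(k (qbar - q)) by
    # bisection (the update is implicit because R depends on qbar itself)
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid - eta(k * (mid - q)) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


q = 0.7                              # initial demand q(0) = 0.7
for _ in range(100):
    q = step(q)
qbar = step(q)

# the orbit settles on the 2-cycle: q + qbar = 1, and the amplitude
# Delta q solves Delta q = 2 eta(k Delta q) - 1, cf. equation (37) below
dq = abs(qbar - q)
assert abs(q + qbar - 1.0) < 1e-6
assert abs(dq - (2.0 * eta(k * dq) - 1.0)) < 1e-6
```

The two assertions verify the symmetry q + q̄ = 1 and the amplitude equation derived in the comparative-statics section.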
5.2 Comparative statics
In this section we analyze how the picture of the stationary regime changes with the parameters
of the model. In particular, depending on the parameters k, β and µ, we discuss how the
domains of attraction of the fixed point and of the 2-cycle vary, and how the size of ∆q (and
consequently the jump in R) changes in the case of a limiting stable 2-cycle. For the sake of
simplicity we restrict the comparative statics to the risk-neutral cases 1A and 2A. In what
follows we consider, without loss of generality, the case q < q̄, i.e., ∆q > 0.
Domains of attractions
Concerning the linear case 1A, we analyze how the threshold kc(β, µ), which separates the
regions of local stability of the two attractors (fixed point and 2-cycle), varies as a function
first of the transaction cost µ and then of the error parameter β. Recall that
kc(β, µ) := (1/β)(1 + e^{βµ}).
It is clear that kc is an increasing function of µ, with kc ≥ 2/β. Then, when µ increases, the region
of local stability of the fixed point enlarges, meaning that the transaction cost stabilizes the
market. Concerning the dependence of kc on β, we distinguish the case µ = 0 from the case µ ≠ 0.
If µ = 0, kc = 2/β, which is a decreasing function of β. When β → 0⁺, i.e., the error dominates,
only the region of the fixed point survives. This is due to the fact that, when β → 0⁺, the agents
completely randomize their choice. Hence, in the asymptotic market with infinitely many agents,
half of them choose ω̄ = 1 and half ω̄ = 0, so the demand is always equal to 1/2. Note, moreover,
that since the vector of past actions ω is also completely randomized, half of the agents owning
the asset are willing to sell it, and vice versa. This is the reason why, at the equilibrium with
µ = 0, the volume of trades is S = 1/2 (see Proposition 4.6).
Conversely, when β → +∞, i.e., the error vanishes, it seems that only the region of the 2-cycle
survives; indeed, when µ = 0 the amplitude of the 2-cycle is given, as we prove later, by the
implicit equation
q̄ − q =: ∆q = 2η(k∆q) − 1 = (e^{βk∆q} − 1)/(e^{βk∆q} + 1) , (37)
which admits the solution ∆q = 0 and, possibly, a solution ∆q > 0. The solution ∆q = 0 is a
trivial 2-cycle that collapses into the fixed point q = 1/2; the solution ∆q > 0, instead, is such
that ∆q → 1 as β → +∞. The two extreme degenerate situations are explained by the fact that,
with zero costs and perfectly rational agents, either nobody trades or everybody trades.
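Equation (37) can be solved numerically by simple iteration. A Python sketch of our own (the parameter values are hypothetical):

```python
import math


def eta(x, beta):
    return 1.0 / (1.0 + math.exp(-beta * x))


def amplitude(beta, k, n_iter=500):
    # iterate Delta q -> 2 eta(k Delta q, beta) - 1 from Delta q = 0.99; this
    # converges to the nonzero solution of (37) when it exists, else to 0
    dq = 0.99
    for _ in range(n_iter):
        dq = 2.0 * eta(k * dq, beta) - 1.0
    return dq


assert amplitude(0.25, 4.0) < 1e-3        # k < kc = 2/beta: only Delta q = 0
assert 0.95 < amplitude(1.0, 4.0) < 0.96  # k > kc: finite positive amplitude
assert amplitude(20.0, 4.0) > 0.999       # error vanishes: Delta q -> 1
```

The three assertions illustrate, respectively, the subcritical regime, a supercritical 2-cycle, and the degenerate β → +∞ limit discussed above.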
If µ ≠ 0, kc → +∞ both when β → 0⁺ and when β → +∞; moreover, there exists a value
β̄ ∈ (1.27/µ, 1.28/µ) such that kc decreases for β ≤ β̄ and increases for β > β̄. When the
error dominates, or when it vanishes, only the region of the fixed point survives: a positive
taxation excludes the previous situation in which everyone trades the share, again confirming
the stabilizing property of transaction costs. In intermediate situations, however, there is a sort
of balancing effect between β and µ: there are values of k for which the market is highly volatile
and values for which the market converges towards the fixed point. The value β̄ is the value of
β at which kc reaches its minimum, i.e., at which the domain of attraction of the 2-cycle is
largest.
Concerning the loglinear case 2A, we have to discuss the values of kc_l(β, µ) and kc_u(β, µ).
The shape of kc_u(β, µ) is very close to that of the kc(β, µ) discussed in the previous case. As
already mentioned in Theorem 5.1, coexistence of the two attractors is possible under the
loglinear scenario. Nevertheless, it appears quite hard to perform comparative statics on
kc_l(β, µ). Looking at numerical simulations, it seems that the region of coexistence is large for
very small values of β, but it shrinks very fast as β increases. To see this, look at Table 2, where
we collect some values of kc_l(β, µ) and kc_u(β, µ) for varying β, with µ = 0.

β      kc_l(β, 0)   kc_u(β, 0)
0.02     7.66         50
0.1      4.91         10
0.3      2.89         3.33
0.5      1.96         2
0.7      1.42         1.42

Table 2: How kc_l(β, µ) and kc_u(β, µ) vary with β. In this simulation µ = 0.
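The right column of Table 2 can be checked against the closed form kc_u(β, 0) = (1/(2β))(1 + e⁰) = 1/β (the left column, by contrast, has no closed form). A quick Python check of our own:

```python
# right column of Table 2 versus the closed form kc_u(beta, 0) = 1/beta
table = {0.02: 50.0, 0.1: 10.0, 0.3: 3.33, 0.5: 2.0, 0.7: 1.42}
for beta, kcu in table.items():
    assert abs(1.0 / beta - kcu) < 0.01   # matches up to the table's rounding
```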
Size of ∆q
Case 1A. Before analyzing the variation of demand and of R in the region of the 2-cycle, we
have to find a relation describing the amplitude ∆q of the 2-cycle. An explicit equation linking
∆q to the parameters of the model can be derived only in case 1A. To do so, let us rewrite (15)
and (16) in this case:
q̄ = qη(k∆q + µ) + (1 − q)η(k∆q − µ) (38)
and
q = q̄η(−k∆q + µ) + (1 − q̄)η(−k∆q − µ) . (39)
Now, summing and subtracting (38) and (39) yields
q + q̄ = 1 (40)
and
q̄ − q = ∆q = [η(k∆q + µ) + η(k∆q − µ) − 1] / [η(k∆q + µ) − η(k∆q − µ) + 1]
            = (e^{2βk∆q} − 1) / (e^{2βk∆q} + 2e^{β(k∆q+µ)} + 1) , (41)
Figure 3: Different solutions of equation (41) for varying k (Panel A: k = 1, 1.5, 2; marked
solutions ∆q = 0.551 and ∆q = 0.858) and varying µ (Panel B: µ = 0, 0.2, 0.5; marked solutions
∆q = 0.781 and ∆q = 0.858), where h(∆q) denotes the r.h.s. of (41). Note that ∆q increases
with k and decreases with µ.
where the last equality follows, after some manipulations, from substituting the logistic
expression for η.
The fixed points of equation (41) characterize the attractors of our model. Note that ∆q = 0 is
always a solution of (41), corresponding to the fixed point q = 1/2. Moreover, since the r.h.s.
of (41) is increasing and concave in ∆q, there may be situations in which another solution
∆q > 0 exists; this solution gives the size of the limiting 2-cycle. Figure 3 reports the shape of
equation (41), highlighting its fixed points. Note that, for low values of k and/or β, the solution
∆q = 0 is unique; increasing k and/or β, a positive solution ∆q appears, whose value increases
with k and/or β. Eventually, as k and/or β approach infinity, ∆q approaches 1.
In particular, in Panel A of Figure 3 we plot the r.h.s. of (41) and find its (possible) nonzero
solution for increasing values of k; in Panel B we perform the same exercise varying µ.
Interestingly, Panel B of Figure 3 confirms that transaction costs regularize the market, in the
sense that the jumps in demand (and hence in returns) shrink as taxation on trades increases.
In order to make explicit how the variation of demand at the equilibrium depends on the
parameters, we perform numerical simulations for both cases 1A and 2A, reported in Figure 4.
In Panels A and B we plot the size of the 2-cycle in the linear case 1A: in Panel A we keep k
fixed and let β vary in the interval (0, 2], for three different levels of µ (0, 0.06 and 0.1); in
Panel B we keep β fixed and let k vary in (0, 2] (same levels of µ as in Panel A). In Panels C
and D we perform the same analysis for case 2A. The graphs confirm that the size of the 2-cycle
decreases with
Figure 4: Different levels of ∆q (i.e., the size of the 2-cycle at the equilibrium) for varying
parameters in the linear case (Panels A and B) and in the loglinear case (Panels C and D). In
Panels A and C we keep k = 1 fixed; in Panels B and D we keep β = 1 fixed.
µ and increases with β and k. Note that, for small values of β and k, the size of the 2-cycle
is zero, meaning that the dynamics of the demand converge towards the fixed point q = 1/2. A
second important remark is that the size of the 2-cycle does not depend on the specific model
adopted: indeed, Panels A and C, as well as Panels B and D, coincide exactly. This is an
interesting result: the size of the 2-cycle is model-independent, even though the domains of
attraction are not (as shown in Theorem 5.1).
6 Concluding remarks.
We have built a simple model for a liquid market of a risky asset, where a large number of
similar interacting agents can trade a share of the asset. We have also taken transaction costs
into account.
31
We have studied the dynamics of the demand function and of the related returns emerging
from the trading mechanism. Compared to the related literature, we have suggested a framework
in which a rather general form of dependence between the aggregate demand for the asset and
the asset's returns is in place. In particular, the dynamics of these two fundamental state
variables cannot be studied separately. Their joint evolution gives rise to interesting stylized
facts, such as multiplicity of equilibria, nonlinear dynamics of returns, and boom-and-crash
cycles, all of which have been discussed in detail.
Agents' interactions have been modeled through a static (one-period) non-cooperative game.
Agents face social interactions and bounded rationality, the latter introduced following a well-
known strand of research: the random utility models inspired by Brock and Durlauf (2001). From
this point of view, we have enriched this framework by letting the agents update their opinions
in parallel, i.e., their action is the outcome of a game whose payoffs depend on the expectations
about the behavior of the population.
We have identified the mitigating effects of transaction costs, which may help regularize
the market. Moreover, we have observed that, for some values of the parameters, booms and
crashes can arise as the result of agents' interplay and of market features.
Finally, notice that, owing to the assumption of agents' simultaneous updating, at equi-
librium the limiting dynamics converge either to a fixed point or to a 2-cycle. This is a novelty
in probabilistic models that describe social interactions and, possibly, contagion; see, for instance,
Blume and Durlauf (2003) or Dai Pra et al. (2009), in which agents update their actions
sequentially and the stable attractors can only be fixed points.
References
Billingsley, P. (1999) Convergence of Probability Measures, J. Wiley & Sons.
Blume, L. and Durlauf, S. (2003) Equilibrium concepts for social interaction models, Interna-
tional Game Theory Review, 5(3): 193-209.
Bouchaud, J.P. and Cont, R. (2000) Herd behavior and aggregate fluctuations in financial mar-
kets, Macroeconomic Dynamics, 4: 170-196.
Brock, W. and Durlauf, S. (2001) Discrete choice with social interactions, Review of Economic
Studies, 68: 235-260.
Brock, W. and Hommes, C. (1998) Heterogeneous beliefs and routes to chaos in a simple asset
pricing model, Journal of Economic Dynamics and Control, 22: 1235-1274.
Cardaliaguet, P. (2010) Notes on Mean Field Games, CEREMADE, UMR CNRS 7534,
Université Paris-Dauphine, Working Paper, November 2010. Available online at
http://www.ceremade.dauphine.fr/~cardalia/MFG100629.pdf.
Chang, S.K. (2007) A simple asset pricing model with social interactions and heterogeneous
beliefs, Journal of Economic Dynamics and Control, 31: 1300-1325.
Cheng, S., Reeves, D.M., Vorobeychik, Y. and Wellman, M.P. (2004) Notes on Equilibria in
Symmetric Games, International Joint Conference on Autonomous Agents & Multi Agent
Systems, 6th Workshop On Game Theoretic And Decision Theoretic Agents, New York City,
NY, August 2004.
Citanna, A., Donaldson, J., Polemarchakis, H., Siconolfi, P. and Spear, S. (2004) General equi-
librium, incomplete markets and sunspots: A symposium in honor of David Cass: Guest
editors’ introduction, Economic Theory, 24(3): 465-468.
Cont, R., Ghoulmie, F. and Nadal, J.P. (2005) Heterogeneity and feedback in an agent-based
market model, Journal of Physics: Condensed Matter, 17: 1259-1268.
Dai Pra, P., Runggaldier, W.J., Sartori, E. and Tolotti, M. (2009) Large portfolio losses: A
dynamic contagion model, The Annals of Applied Probability, 19(1): 347-394.
Föllmer, H. (1974) Random Economies with many Interacting Agents, Journal of Mathematical
Economics, 1: 51-62.
Föllmer, H., Cheung, W. and Dempster, M.A.H. (1994) Stock Price Fluctuation as a Diffusion
in a Random Environment [and Discussion], Philosophical Transactions of the Royal Society
of London. Series A, 347(1684): 471-483.
Föllmer, H. and Schweizer, M. (1993) A Microeconomic Approach to Diffusion Models for Stock
Price, Mathematical Finance, 3(1): 1-23.
Forbes, W. (2009) Behavioural Finance, John Wiley & Sons.
Gordon, M.B., Nadal, J.P., Phan, D. and Semeshenko, V. (2009) Discrete Choices under Social
Influence: Generic Properties, Mathematical Models and Methods in Applied Sciences, 19(1):
1441-1481.
Guesnerie, R. (2001) Assessing rational expectations: sunspot multiplicity and economic fluctu-
ations, MIT Press.
Hommes, C.H. (2006) Heterogeneous agent models in economics and finance, in Tesfatsion, L.
and Judd, K.L. (eds.), Handbook of Computational Economics, Vol. 2: Agent-Based Compu-
tational Economics (Elsevier Sciences B.V.).
Lasry, J.M. and Lions, P.L. (2007) Mean field games, Japanese Journal of Mathematics, 2:
229-260.
Loewenstein, G. and Prelec, D. (1992) Anomalies in Intertemporal Choice: Evidence and an
Interpretation, Quarterly Journal of Economics, 107(2): 573-597.
Manski, C. F. (1988) Identification of Binary Response Models, Journal of the American Sta-
tistical Association, 83(403): 729-738.
McFadden, D. (1974) Conditional Logit Analysis of Qualitative Choice Behavior, in P. Zarembka
(ed.), Frontiers in Econometrics (New York: Academic Press).
Nadal, J.P., Phan, D., Gordon, M.B. and Vannimenus, J. (2005) Multiple equilibria in a monopoly
market with heterogeneous agents and externalities, Quantitative Finance, 5(6): 557-568.
Rocheteau, G. and Weill, P.O. (2011) Liquidity in Frictional Asset Markets, Federal Reserve
Bank of Cleveland, Working Paper : 11-05.
Schelling, T.C. (1978) Micromotives and Macrobehavior, W.W. Norton and Co., New York.
Shleifer, A. (2000) Inefficient markets: An introduction to behavioral finance, Oxford University
Press.
Thaler, R.H. (1981) Some empirical evidence on dynamic inconsistency, Economics Letters, 8(3):
201-207.
Thaler, R.H. (2005) Advances in Behavioral Finance, Volume II. Princeton University Press.