Introduction to Formal Modeling

John Aldrich & Arthur Lupia

For EITM 2007

Outline

The purpose of modeling
Structure
Relevance
Controversy
Elements of Logic

Your research design problem

Where are they?
Who is your target audience? What factual premises/truth claims will they accept?

Where do they want to be?
Which alternate conclusion will benefit them? What burden of proof and standard of evidence do they impose?

Definitions

Theory
Formal Theory
Rational Choice Theory
Rationality
Game Theory: Cooperative, Non-Cooperative*

Arguments

The currency of scientific communication.

The components of an argument are: The Conclusion and The Premises.

Value comes from explaining as much as possible with as little as possible.

Introduction to Logic

Premise
Conclusion
Logical Validity: Deductive, Inductive, Invalid

Soundness

Introduction to Logic: Logical Validity

Deductive: If all of the premises are true, then the conclusion must be true. The logical connection between premises and conclusion is one of necessity.

Inductive: If all of the premises are true, then the conclusion may be true. The logical connection between premises and conclusion is one of possibility.

Invalid: If all of the premises are true, then the conclusion must be false. The logical connection between premises and conclusion is one of impossibility.

Examples

George W. Bush is a man.
George W. Bush is over 5' 11'' tall.
All men who are over 5' 11'' tall are the president.
Therefore, George W. Bush is the president.

Examples

George W. Bush is a man.
George W. Bush is over 5' 11'' tall.
Some men who are over 5' 11'' tall are the president.
Therefore, George W. Bush is the president.

Examples

George W. Bush is a man.
George W. Bush is over 5' 11'' tall.
If a man is over 5' 11'' tall, then he is not the president.
Therefore, George W. Bush is the president.

Examples

Almost any random premise 1.
Almost any random premise 2.
Therefore, George W. Bush is the president.

The value of logic in debate

How to cast doubt on the reliability of a conclusion when an argument has the following logical properties:

Invalid: Reveal the logical relationship.

Inductively valid: Show that even if the premises are true the conclusion can be false, or demonstrate that one or more of the premises is untrue.

Deductively valid: Demonstrate that one or more of the premises is untrue.

Standards for another time

Soundness: Waller (p. 20), the argument "must be [deductively] valid and all of its premises must actually be true."

Reliability: Waller (p. 21), "[A]n inductive argument with all true premises, and whose premises strongly support its conclusion, will be a reliable inductive argument."

These standards are more subjective.

Logical Fallacy: Denying the Antecedent

If it's raining, then the streets are wet. It isn't raining. Therefore, the streets aren't wet.

Logical Fallacy: Affirming the Consequent

If it's raining then the streets are wet. The streets are wet. Therefore, it's raining.

Logical Fallacy: Commutation of Conditionals

If James was a bachelor, then he was unmarried.

Therefore, if James was unmarried, then he was a bachelor.

The two faces of "or"

Most logic texts claim that "or" has two meanings:

Inclusive (or "weak") disjunction: One or both of the disjuncts is true, which is what is meant by the "and/or" of legalese.

Exclusive (or "strong") disjunction: Exactly one of the disjuncts is true.

Example

Today is Saturday or Sunday.
Today is Saturday.
Therefore, today is not Sunday.

Suppressed premise: Saturday is not Sunday.

Logical Fallacy: Denying a Conjunct

It isn't both sunny and overcast. It isn't sunny. Therefore, it's overcast.

Not both p and q. Not p. Therefore, q.
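
A minimal brute-force check (illustrative Python, not from the slides): for each fallacy above, some assignment of truth values makes every premise true while the conclusion is false, which is exactly what makes the pattern invalid.

from itertools import product

def implies(p, q):
    return (not p) or q

# premises and conclusion for each fallacy, as functions of the atomic claims p, q
fallacies = {
    "Denying the antecedent":      (lambda p, q: [implies(p, q), not p], lambda p, q: not q),
    "Affirming the consequent":    (lambda p, q: [implies(p, q), q],     lambda p, q: p),
    "Commutation of conditionals": (lambda p, q: [implies(p, q)],        lambda p, q: implies(q, p)),
    "Denying a conjunct":          (lambda p, q: [not (p and q), not p], lambda p, q: q),
}

for name, (premises, conclusion) in fallacies.items():
    bad = [(p, q) for p, q in product([True, False], repeat=2)
           if all(premises(p, q)) and not conclusion(p, q)]
    print(name, "is invalid; counterexample (p, q) =", bad[0])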

Ockham’s Razor (14th c.)

Arguments are most helpful to an audience to the extent that they actually bring clarity to the phenomena you're studying.

Lex parsimoniae: entities should not be multiplied beyond necessity.

In many cases, less is more.

What will you choose?

All political scientists make assumptions about:
Players, Actions, Strategies, Information, Beliefs, Outcomes, Payoffs, and Method of inference (e.g., "I know it when I see it," path dependence, Nash Equilibrium, logit plus LLN).

Some state their assumptions more precisely than others.

Conclusions depend on assumptions.

Questions to Ask Before You Begin

Is the process static or dynamic?

Do the actors have complete or incomplete information?

Are there many actors or just a few?

How have others modeled the political phenomena you're studying?

Starting to write down a formal argument about politics

Make it as simple as possible to start:
Use discrete instead of continuous parameter spaces.
Use two actors instead of a zillion.
Use complete information rather than incomplete information.

Deriving Conclusions: uniqueness versus existence

Interpreting Your Findings

Cooperative Game Theory

Originally dominant, possibly the future?

Three Forms of Games

Extensive Form
Normal Form
Characteristic Function Form

These differ in level of detail and generality (from most detail to most general).

Before the "noncooperative revolution," game theory was based on modeling preferences only (not on probabilities a la Harsanyi or on information).

Extensive Form

Has the most detail about moves.

Many of the central results were developed in the 1940s and 1950s (including what came to be called subgame perfection and the folk theorem).

It was thought to be practically impossible to use for any realistic setting: it could accommodate only a few moves and a few players.

Normal (Matrix) Form

Became the most used form early on (in both cooperative and non-cooperative theory).

Abstracted the details of moves (and information sets) into strategies, which seemed like a small loss of detail.

A few results established one-to-one mappings between normal-form and extensive-form equilibria.

Served as the locus for the "debate" between Nash and von Neumann types.

Coalitions and Characteristic Function Form

The perceived weakness of both the extensive and the normal form is that each was unmanageable with any realistic number of moves and players.

Both were also thought to ignore one of the most important aspects of settings amenable to game theory: coalitions.

Political Science: Riker and minimal winning coalitions generated a huge literature.

Economics: Shubik used a cooperative solution concept, the core, to demonstrate that the general equilibrium of Arrow-Debreu is also the core of the game stated in economic terms (see below).

The first formal-type publication in Political Science applied the Shapley-Shubik "value" to bodies such as Congress and the UN.

N.B.: Shapley and Shubik were graduate students of Morgenstern and of Nash.

Payoff Configurations, Imputations, and Characteristic Functions

Payoff configuration: a vector of utilities for any given outcome.

Imputations: payoff configurations that are (a) individually rational, i.e., give everyone their security level (what they can get playing rationally and alone), and (b) Pareto optimal.

Characteristic function: a form that assigns a real-valued payoff to each coalition.

Coalitions and the Core, Value, Bargaining Set

Core: one definition is that the core of the game is the set of imputations that give each coalition their security level.

The core is general equilibrium in economics; R. Wilson: Arrow's theorem is the "observation that the core of a voting game is generally empty."

The core is the equilibrium concept in use in most spatial modeling (of the Davis-Hinich-Ordeshook-McKelvey sort).

Shapley Value

Shapley-Shubik Power Index: measures "value" based on the incidence with which a member or coalition is "pivotal," that is, by its inclusion changes a coalition from losing to winning.

The Shapley Value is the more general version, looking at the marginal increase in payoff to a coalition.
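
An illustrative sketch (the voting weights and the quota below are invented for illustration, not taken from the slides): the Shapley-Shubik index computed by brute force. A member is pivotal in an ordering if adding its votes turns the coalition of members listed before it from losing to winning; the index is the share of orderings in which the member is pivotal.

from itertools import permutations
from fractions import Fraction

weights = {"A": 4, "B": 3, "C": 2, "D": 1}   # hypothetical voting weights
quota = 6                                     # votes needed to win

def shapley_shubik(weights, quota):
    players = list(weights)
    pivots = {p: 0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        running = 0
        for p in order:
            if running < quota <= running + weights[p]:
                pivots[p] += 1               # p turns this coalition from losing to winning
                break
            running += weights[p]
    return {p: Fraction(pivots[p], len(orderings)) for p in players}

print(shapley_shubik(weights, quota))   # -> A: 5/12, B: 1/4, C: 1/4, D: 1/12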

Bargaining Set and Competitive Solution

A large class of games for which a "solution" to the game means a coalition structure together with a payoff configuration.

Win Set

Perhaps the most important of the cooperative-style solution concepts still in use (social choice theory, in general, has been approached as a cooperative game setting, with the win set one concept used).

The win set is defined (for a majority rule game) as follows: W(x) consists of the set of points that defeat x by majority rule. If there exists a point, say y, for which the win set is empty, it is a preference-induced equilibrium, according to Shepsle, and is a point in the core.

Non-Cooperative Game Theory

The Dominant Form in Recent Decades

NC Game Theory Fundamentals

Player goals are represented by utility functions with utility defined over outcomes.

Actions and Strategies

A strategy is a plan of action. In games that can be modeled as if they are simultaneous, actions and strategies are equivalent. In other games, strategies and actions are quite different, with strategies being the primary choice of interest.

The combination of actions by all players determines a payoff for each player.

Normal Form Games

A normal form game: the Graduate School game, with Student A as the row player and Student B as the column player.

                          Graduate School Student B
                          Study         Loaf
Student A    Study        100, 100      50, 0
             Loaf         0, 50         -10, -10

By convention, the payoff to the so-called row player is the first payoff given, followed by the payoff to the column player.

Practical Description

The normal form representation of a game specifies:
The players in the game.
The strategies available to each player.
The payoff received by each player for each combination of strategies that could be chosen by the players.

Actions are modeled as if they are chosen simultaneously. The players need not really choose simultaneously; it is sufficient that they act without knowing each other's choices.

Components of a Normal Form Game

Players: a small number.
Actions: define columns and rows.
Strategies: define columns and rows.
Information: complete.
Outcomes: represented by vectors in cells.
Payoffs: elements of the vectors.
Equilibrium concept: Nash.

Technical Definition

1 to n: the players in an n-player game.
Si: player i's strategy set.
si: an arbitrary element of Si.
ui: player i's payoff function.

Definition: The normal-form representation of an n-player game specifies the players' strategy spaces S1,…,Sn and their payoff functions u1,…,un.

We denote the game by G = {S1,…,Sn; u1,…,un}.
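
A minimal sketch of this representation in Python, using the graduate-school game above (the payoff numbers come from that table; the variable names are ours):

# G = {S_A, S_B; u_A, u_B} for the graduate-school game
strategies = {
    "A": ["Study", "Loaf"],    # S_A
    "B": ["Study", "Loaf"],    # S_B
}
# payoffs maps a strategy profile (s_A, s_B) to the payoff vector (u_A, u_B)
payoffs = {
    ("Study", "Study"): (100, 100),
    ("Study", "Loaf"):  (50, 0),
    ("Loaf",  "Study"): (0, 50),
    ("Loaf",  "Loaf"):  (-10, -10),
}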

Nash Equilibrium

For an equilibrium prediction to be correct, it is necessary that each player be willing to choose the strategy described in the equilibrium.

Equilibrium represents the outcome of mutual and joint adaptation to shared circumstances.

If the theory offers strategies that are not a Nash equilibrium, then at least one player will have an incentive to deviate from the theory’s prediction, so the theory will be falsified by the actual play of the game.

Technical Definition

In the n-player normal-form game G = {S1,…,Sn; u1,…,un}, the strategies (s1*,…,sn*) are a Nash equilibrium if, for each player i, si* is (at least tied for) player i's best response to the strategies specified for the n-1 other players, (s1*,…,si-1*, si+1*,…,sn*):

ui(s1*,…,si-1*, si*, si+1*,…,sn*) ≥ ui(s1*,…,si-1*, si, si+1*,…,sn*)

for every feasible strategy si in Si; that is, si* solves max over si in Si of ui(s1*,…,si-1*, si, si+1*,…,sn*).

If the situation is modeled accurately, NE represent social outcomes that are self-enforcing.

Any outcome that is not a NE can be accomplished only by application of an external mechanism.
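
A minimal sketch (ours, not from the slides) that checks the definition directly for the graduate-school game: a profile is a Nash equilibrium if no player can gain from a unilateral deviation.

from itertools import product

strategies = {"A": ["Study", "Loaf"], "B": ["Study", "Loaf"]}
payoffs = {("Study", "Study"): (100, 100), ("Study", "Loaf"): (50, 0),
           ("Loaf", "Study"): (0, 50), ("Loaf", "Loaf"): (-10, -10)}

def pure_nash_equilibria(strategies, payoffs):
    players = list(strategies)
    equilibria = []
    for profile in product(*strategies.values()):
        # Nash condition: each player's strategy is a best response to the others'.
        stable = all(
            payoffs[profile][i] >= payoffs[profile[:i] + (dev,) + profile[i + 1:]][i]
            for i, player in enumerate(players)
            for dev in strategies[player])
        if stable:
            equilibria.append(profile)
    return equilibria

print(pure_nash_equilibria(strategies, payoffs))   # -> [('Study', 'Study')]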

Elimination of dominated strategies

         Left    Middle    Right
Up       1, 0    1, 2      0, 1
Down     0, 3    0, 1      2, 0

Figure 1.1.1. Iterated domination produces a solution.

         Left    Middle    Right
Top      0, 4    4, 0      5, 3
Middle   4, 0    0, 4      5, 3
Bottom   3, 5    3, 5      6, 6

Figure 1.1.4. Iterated elimination produces no solution.

Requirements for Iterated Domination

To apply the process for an arbitrary number of steps, we must assume that it is common knowledge that the players are rational.

We need to assume not only that all the players are rational, but also that all the players know that all the players are rational, and that all the players know that all the players know that all the players are rational, and so on, ad infinitum.

In the many cases where there are few or no strictly dominated strategies, the process produces very imprecise predictions.

Example 1: A game with a dominated strategy.

          Left      Right
Top       8, 10     -100, 9
Bottom    7, 6      6, 5

Example 2: A more complicated game with dominated strategies.

          Left     Middle    Right
Top       4, 3     5, 1      6, 2
Middle    2, 1     8, 4      3, 6
Bottom    3, 0     9, 6      2, 8
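
A rough sketch (ours) of iterated elimination of strictly dominated pure strategies, run on Example 2 above. It removes the Middle column (dominated by Right), then the Middle and Bottom rows (dominated by Top), then the Right column, leaving (Top, Left).

row_payoff = {("Top", "Left"): 4, ("Top", "Middle"): 5, ("Top", "Right"): 6,
              ("Middle", "Left"): 2, ("Middle", "Middle"): 8, ("Middle", "Right"): 3,
              ("Bottom", "Left"): 3, ("Bottom", "Middle"): 9, ("Bottom", "Right"): 2}
col_payoff = {("Top", "Left"): 3, ("Top", "Middle"): 1, ("Top", "Right"): 2,
              ("Middle", "Left"): 1, ("Middle", "Middle"): 4, ("Middle", "Right"): 6,
              ("Bottom", "Left"): 0, ("Bottom", "Middle"): 6, ("Bottom", "Right"): 8}
rows = ["Top", "Middle", "Bottom"]
cols = ["Left", "Middle", "Right"]

def strictly_dominated(own, others, payoff, is_row):
    # s is strictly dominated if some other strategy t does strictly better against every opposing strategy
    cell = (lambda s, o: (s, o)) if is_row else (lambda s, o: (o, s))
    return {s for s in own
            if any(all(payoff[cell(t, o)] > payoff[cell(s, o)] for o in others)
                   for t in own if t != s)}

changed = True
while changed:
    dead_rows = strictly_dominated(rows, cols, row_payoff, True)
    dead_cols = strictly_dominated(cols, rows, col_payoff, False)
    changed = bool(dead_rows or dead_cols)
    rows = [r for r in rows if r not in dead_rows]
    cols = [c for c in cols if c not in dead_cols]

print(rows, cols)   # -> ['Top'] ['Left']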

NE: Fun facts

If iterated elimination of dominated strategies eliminates all but one strategy for each player, then these strategies are the unique NE.

There can be strategies that survive iterated elimination of strictly dominated strategies but are not part of any Nash equilibrium.

A game can have multiple Nash equilibria. The precision of its predictive power at such moments lessens. Existence versus uniqueness

Solving for MS-NE

         Left     Right
Top      4, -4    1, -1
Bottom   2, -2    3, -3

Row chooses "Top" with probability p and "Bottom" with probability 1-p. Column chooses "Left" with probability q and "Right" with probability 1-q.

Players choose strategies to make the other indifferent:
4q + 1(1-q) = 2q + 3(1-q)
-4p - 2(1-p) = -1p - 3(1-p)

The MS-NE is: p = .25, q = .5. The expected value of either Row strategy is 2.5 and of either Column strategy is -2.5.
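
A small check of the indifference conditions above (the payoffs come from the table; Column's payoffs are the negatives of Row's in this zero-sum game):

from fractions import Fraction as F

a, b = F(4), F(1)    # Row's payoff to Top against Left, Right
c, d = F(2), F(3)    # Row's payoff to Bottom against Left, Right

# Column mixes Left with probability q so Row is indifferent: a*q + b*(1-q) = c*q + d*(1-q)
q = (d - b) / ((a - b) - (c - d))
# Row mixes Top with probability p so Column is indifferent: -a*p - c*(1-p) = -b*p - d*(1-p)
p = (d - c) / ((a - c) + (d - b))

print(p, q)                    # -> 1/4 1/2
print(a * q + b * (1 - q))     # Row's expected payoff: 5/2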

Mixed strategy NE

A mixed strategy Nash Equilibrium does not rely on a player flipping coins, rolling dice, or otherwise choosing a strategy at random.

Rather, we interpret player j's mixed strategy as a statement of player i's uncertainty about player j's choice of a pure strategy.

Individual components of mixed strategy equilibria are chosen to make the other player indifferent between all of their mixed strategies. To do otherwise is to give others the ability to benefit at your expense. Information provided to another player that makes them better off makes you worse off.

The last word.

Theorem (Nash 1950): In the n-player normal-form game G = {S1,…,Sn; u1,…,un}, if n is finite and Si is finite for every i, then there exists at least one Nash Equilibrium, possibly involving mixed strategies.

Extensive Form Games

Extensive Form Games

Allows dynamic games – player moves can be treated as sequential as well as simultaneous.

Complete information – games in which all aspects of the structure of the game, including player payoff functions, are common knowledge.

Perfect information – at each move in the game the player with the move knows the full history of the play of the game thus far.

Imperfect information – at some move the player with the move does not know the history of the game.

The structure of a simple game of complete and perfect information.

1. Player 1 chooses an action a1 from the feasible set A1.
2. Player 2 observes a1 and then chooses a2 from the feasible set A2.
3. Payoffs are u1(a1, a2) and u2(a1, a2).

Moves occur in sequence, all previous moves are observed, and player payoffs from each move combination are common knowledge.

We solve such games by backwards induction.

The central issue is credibility.

Example 1

3 legislators.
Choices: Yes, No.
Outcomes: Pass, Not.

Conceptual Advantage

The central issue in all dynamic games is credibility. Backwards induction outcomes. Subgame perfect outcomes.

Repeated games – the main theme: credible threats and promises about future behavior can influence current behavior.

Structure of an EF Game

The structure of a simple game of complete and perfect information:

Player 1 chooses an action a1 from the feasible set A1.

Player 2 observes a1 and then chooses a2 from the feasible set A2.

Payoffs are u1(a1, a2) and u2(a1, a2).

Moves occur in sequence, all previous moves are observed, and player payoffs from each move combination are common knowledge.

We solve such games by backwards induction.

Backwards Induction

At the second stage of the game, player 2 faces the following problem, given the previously chosen action a1: max over a2 in A2 of u2(a1, a2).

Assume that for each a1 in A1, player 2's optimization problem has a unique solution, denoted R2(a1).

Since player 1 can solve player 2's problem as well as 2 can, player 1 should anticipate player 2's reaction to each action a1 that 1 might take, so 1's problem at the first stage amounts to: max over a1 in A1 of u1(a1, R2(a1)).

(a1*, R2(a1*)) is the backwards-induction outcome of this game. Implies sophisticated rather than sincere behavior. The sequence of action can affect equilibrium strategies.
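
A minimal sketch of the procedure (the action labels and payoff numbers below are invented for illustration; only the two-stage structure comes from the slide):

A1 = ["Left", "Right"]     # player 1's feasible actions (hypothetical)
A2 = ["Up", "Down"]        # player 2's feasible actions (hypothetical)
u = {("Left", "Up"): (2, 1), ("Left", "Down"): (0, 0),
     ("Right", "Up"): (3, 0), ("Right", "Down"): (1, 2)}    # (u1, u2), invented numbers

# Stage 2: player 2's best reply R2(a1) to each a1.
R2 = {a1: max(A2, key=lambda a2: u[(a1, a2)][1]) for a1 in A1}
# Stage 1: player 1 anticipates R2 and maximizes u1(a1, R2(a1)).
a1_star = max(A1, key=lambda a1: u[(a1, R2[a1])][0])

print((a1_star, R2[a1_star]), u[(a1_star, R2[a1_star])])    # -> ('Left', 'Up') (2, 1)

In this invented example, player 1 passes up the (3, 0) cell: reaching it would require player 2 to play Up after Right, which is not credible given 2's payoffs. That is the credibility issue flagged above.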

Requirements for BI

The prediction depends on players knowing and reacting to what would happen if the game were not played as the equilibrium describes.

o We must assume that decision makers are interested in and capable of counterfactual reasoning.
o In some cases, the amount of counterfactual reasoning required is quite substantial.
o If people reason "as if" they undertake such calculations, then the theory's validity is not imperiled.

When can we assume that people are, or act as if they are, capable of thinking through counterfactuals?

Subgame Perfect NE

A NE is subgame perfect if players’ strategies constitute a Nash Equilibrium in every subgame.

Player 1 chooses action a1 from feasible set A1.
Player 2 observes a1 and then chooses action a2 from feasible set A2.
Player 3 observes a1 and a2 and then chooses action a3 from feasible set A3.
Payoffs are ui(a1, a2, a3) for i = 1,…,3.

(a1*, a2*(a1*), a3*(a1*, a2*)) is the subgame-perfect outcome of this game.

Example 2

S-PNE on a Voting Tree (Agenda: a b c d e)

Type 1: D A B C E
Type 2: A B C E D
Type 3: C B E D A
Type 4: E D A C B

Repeated Games

Repetitive Play Over Time &

The Folk Theorem

A general result.

Definition: Given a stage game G, let G(T) denote the finitely repeated game in which G is played T times, with the outcomes of all preceding plays observed before the next play begins. The payoffs for G(T) are simply the sum of the payoffs from the T stage games.

Proposition: If the stage game G has a unique NE then, for any finite T, the repeated game G(T) has a unique subgame-perfect outcome: the NE of G is played in every stage.

Cooperation from Repetition?

Proposition: If G = {A1,…,An; u1,…,un} is a static game of complete information with multiple NE, then there may be subgame-perfect outcomes of the repeated game G(T) in which, for any t < T, the outcome in stage t is not a Nash equilibrium of G.

            Defect    Cooperate    Right
Defect       1, 1       5, 0       0, 0
Cooperate    0, 5       4, 4       0, 0
Bottom       0, 0       0, 0       3, 3

The prisoners' dilemma with one action added for each player.

o Suppose players anticipate that (Bottom, Right) will be the second-stage outcome if the first-stage outcome is (Cooperate, Cooperate), but that (Defect, Defect) will be the second-stage outcome otherwise.

o The players' first-stage interaction then amounts to the following one-shot game:

The stage game:

            Defect    Cooperate    Right
Defect       1, 1       5, 0       0, 0
Cooperate    0, 5       4, 4       0, 0
Bottom       0, 0       0, 0       3, 3

The effective first-stage game, with the anticipated second-stage payoffs added:

            Defect    Cooperate    Right
Defect       2, 2       6, 1       1, 1
Cooperate    1, 6       7, 7       1, 1
Bottom       1, 1       1, 1       4, 4
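
A sketch of the construction just described (strategy labels and payoffs come from the tables above; the helper code and variable names are ours): add the anticipated second-stage payoff to each first-stage cell and then check which profiles are Nash equilibria of the resulting one-shot game.

S_row = ["Defect", "Cooperate", "Bottom"]    # row strategies
S_col = ["Defect", "Cooperate", "Right"]     # column strategies
stage = {("Defect", "Defect"): (1, 1), ("Defect", "Cooperate"): (5, 0), ("Defect", "Right"): (0, 0),
         ("Cooperate", "Defect"): (0, 5), ("Cooperate", "Cooperate"): (4, 4), ("Cooperate", "Right"): (0, 0),
         ("Bottom", "Defect"): (0, 0), ("Bottom", "Cooperate"): (0, 0), ("Bottom", "Right"): (3, 3)}

def continuation(profile):
    # anticipated second-stage payoff: (Bottom, Right) after (Cooperate, Cooperate), (Defect, Defect) otherwise
    return (3, 3) if profile == ("Cooperate", "Cooperate") else (1, 1)

effective = {prof: (stage[prof][0] + continuation(prof)[0],
                    stage[prof][1] + continuation(prof)[1]) for prof in stage}

def is_nash(prof):
    r, c = prof
    best_row = all(effective[(r, c)][0] >= effective[(alt, c)][0] for alt in S_row)
    best_col = all(effective[(r, c)][1] >= effective[(r, alt)][1] for alt in S_col)
    return best_row and best_col

print([p for p in stage if is_nash(p)])
# -> [('Defect', 'Defect'), ('Cooperate', 'Cooperate'), ('Bottom', 'Right')]

(Cooperate, Cooperate) is a Nash equilibrium of this effective first-stage game even though it is not a Nash equilibrium of the stage game itself, which is the point of the proposition above.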

Implications

Insights from one-shot games do not automatically transfer to repeated interactions.

Repeated games require special assumptions about time.

Credible threats or promises about future behavior can influence current behavior.

For some situations, subgame perfection may not embody a strong enough definition of credibility.

The Folk Theorem

Let G be a finite, static game of complete information. Let (e1,…,en) denote the payoffs from a NE of G, and let (x1,…,xn) denote any other feasible payoffs from G. If xi > ei for every player i and if the discount factor δ is sufficiently close to one, then there exists a subgame-perfect NE of the infinitely repeated game G(∞, δ) that achieves (x1,…,xn) as the average payoff.
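
A hedged worked check (the grim-trigger strategy and the specific discount factors are our illustration, not from the slides): in the Defect/Cooperate portion of the stage game above, mutual cooperation pays 4 per period, the best one-period deviation pays 5, and permanent reversion to the stage NE pays 1 per period, so cooperation is sustainable whenever 4/(1-δ) ≥ 5 + δ·1/(1-δ), i.e., δ ≥ 1/4.

for delta in (0.1, 0.3, 0.5):
    cooperate_forever = 4 / (1 - delta)           # per-period 4 from mutual cooperation
    deviate_once = 5 + delta * 1 / (1 - delta)    # 5 today, then the stage NE payoff of 1 forever
    print(delta, cooperate_forever >= deviate_once)
# -> 0.1 False, 0.3 True, 0.5 True (the threshold is delta = 1/4)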

EF Games of Incomplete Information

Reconciling Actions and Uncertain Beliefs

“Hey, baby,… what’s your type?”

Key concepts

In a game of incomplete information, at least one player is uncertain about another's payoff function.

i's payoff function is ui(a1,…,an; ti), where ti is called player i's type and belongs to a set of possible types Ti.

Each type ti corresponds to a different payoff function that i might have.

t-i denotes the others' types, and p(t-i|ti) denotes i's belief about them given ti.

Strategy

In the game G = {A1,…,An; T1,…,Tn; p1,…,pn; u1,…,un}, a strategy for i is a function si(ti), where for each type ti in Ti, si(ti) specifies the action from the feasible set Ai that type ti would choose if drawn by nature.

Separating strategy: each type ti in Ti chooses a different action ai in Ai.

Pooling strategy: all types choose the same action.

When deciding what to do, player i will have to think about what he or she would have done if each of the other types in Ti had been drawn.

Standard Assumptions

It is common knowledge that nature draws a type vector t = (t1,…,tn) according to the prior probability distribution p(t).

Each player's type is the result of an independent draw.

Players are capable of Bayesian updating.

Bayes’ Theorem

A: state of the world. B: event.

The conditional probability p(B|A) is the likelihood of B given A.

We use Bayes’ Theorem to deduce the conditional probabilities of A given B.

Know: the prior belief is p(A); the posterior belief is p(A|B).

Bayes' Theorem: If (Ai), i = 1,…,n, is the set of states of the world and B is an event, then

p(Ai|B) = p(Ai) p(B|Ai) / Σk p(Ak) p(B|Ak), where the sum runs over k = 1,…,n.
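
A minimal sketch of the updating rule just stated (the prior and likelihood numbers are invented for illustration):

def bayes_update(prior, likelihood):
    """prior[i] = p(Ai); likelihood[i] = p(B|Ai). Returns the posteriors p(Ai|B)."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    total = sum(joint)                  # p(B)
    return [j / total for j in joint]

prior = [0.5, 0.5]          # p(A1), p(A2): e.g., two possible types of an opponent
likelihood = [0.8, 0.2]     # p(B|A1), p(B|A2): how likely the observed event is under each state
print(bayes_update(prior, likelihood))   # -> [0.8, 0.2]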

Sequential Rationality

A pair of beliefs and strategies is sequentially rational iff, from each information set, the moving player's strategy maximizes its expected utility for the remainder of the game given its beliefs and all players' strategies.

Sequential rationality allows a process akin to backwards induction in games of incomplete information.

Perfect Bayesian Equilibrium

A perfect Bayesian equilibrium is a belief-strategy pairing such that the strategies are sequentially rational given the beliefs, and the beliefs are calculated from the equilibrium strategies by Bayes' Theorem whenever possible.

A defection from the equilibrium path does not increase the chance that others will play “irrationally.”

Every finite n-person game has at least one perfect Bayesian equilibrium in mixed strategies.

Draw a simple signaling game

Given the receiver's response, is the signal utility maximizing for type 1?
Given the receiver's response, is the signal utility maximizing for type 2?
Given the sender's strategy, does the response to L maximize expected utility?
Given the sender's strategy, does the response to R maximize expected utility?
If a signal is off the equilibrium path, do there exist off-the-path beliefs that can sustain the equilibrium?

t1   t2   AR|L   AR|R   PBE?        t1   t2   AR|L   AR|R   PBE?
L    L    u      u                  R    L    u      u
L    L    u      d                  R    L    u      d
L    L    d      u                  R    L    d      u
L    L    d      d                  R    L    d      d
R    R    u      u                  L    R    u      u
R    R    u      d                  L    R    u      d
R    R    d      u                  L    R    d      u
R    R    d      d                  L    R    d      d

Requirements for PBE in Extensive-Form Games

An information set is on the equilibrium path if it will be reached with positive probability when the game is played according to the equilibrium strategies.

On the equilibrium path, Bayes’ Rule and equilibrium strategies determine beliefs.

Off the equilibrium path, Bayes’ Rule and equilibrium strategies determine beliefs where possible.

Implications

In a PBE, players cannot threaten to play strategies that are strictly dominated beginning at any information set off the equilibrium path.

A single pass working backwards through the tree (typically) will not suffice to compute a PBE.

Morrow, Table 7.1

Concept             Replies judged             Key comparison                          Beliefs used?   Beliefs off the equ. path?
Nash                Along the equ. path        Complete strategies                     No              Irrelevant
Subgame Perfect     In proper subgames         Strategies within proper subgames       No              Irrelevant
Perfect Bayesian    At all information sets    Seq. rationality at all info sets       Yes             Can be chosen
Perfect             At all information sets    Against trembles; no weakly             No              Irrelevant
                                               dominated strategies