22 Repeated Games and Reputation

This chapter opens with comments about the importance of reputation in ongoing relationships. The concept of a repeated game is defined and a two-period repeated game is analyzed in detail. The two-period game demonstrates that any sequence of stage Nash profiles can be supported as a subgame perfect equilibrium outcome (a result that is stated for general repeated games). The example also shows how a non-stage Nash profile can be played in equilibrium if subsequent play is conditioned so that players would be punished for deviating. The chapter then turns to the analysis of infinitely repeated games, beginning with a review of discounting. The presentation includes derivation of the standard conditions under which cooperation can be sustained in the infinitely repeated prisoners’ dilemma. In the following section, a more complicated, asymmetric equilibrium is constructed to demonstrate that different forms of cooperation, favoring one or the other player, can also be supported. A Nash-punishment folk theorem is stated at the end of the chapter.

Lecture Notes

A lecture may be organized according to the following outline.

• Intuition: reputation and ongoing relationships. Examples: partnerships, collusion, etc.

• Key idea: behavior is conditioned on the history of the relationship, so that misdeeds are punished.

• Definition of a repeated game. Stage game {A, u} (call A_i actions), played T times with observed actions.

• Example of a two-period (non-discounted) repeated game.

• Diagram of the feasible repeated game payoffs and feasible stage game payoffs.

• Note how many subgames there are. Note what each player’s strategy specifies.

• The proper subgames have the same strategic features, since the payoff matrices for these are equal, up to a constant. Thus, the equilibria of the subgames are the same as those of the stage game.

• Characterization of subgame perfect equilibria featuring only stage Nash profiles (action profiles that are equilibria of the stage game).

• A reputation equilibrium where a non-stage Nash action profile is played in the first period. Note the payoff vector.

• Review of discounting.

• The infinitely repeated prisoners’ dilemma game.

• Trigger strategies. Grim trigger.

• Conditions under which the grim trigger is a subgame perfect equilibrium (a worked numerical version of this condition appears after this list).

• Example of another “cooperative” equilibrium. The folk theorem.
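
A worked version of the grim trigger condition, with hypothetical prisoners’ dilemma payoffs (the textbook’s numbers may differ), can be put on the board. Suppose mutual cooperation yields 2 to each player, mutual defection (the stage Nash profile) yields 1, and a player who defects while the other cooperates gets 3 in that period. Under the grim trigger, cooperating forever yields

    2 + 2δ + 2δ² + ... = 2/(1 − δ),

whereas the best one-shot deviation yields 3 today followed by mutual defection forever,

    3 + δ · 1/(1 − δ).

Cooperation is sustainable when 2/(1 − δ) ≥ 3 + δ/(1 − δ), that is, when δ ≥ 1/2. More generally, with cooperation payoff c, one-period deviation payoff d > c, and stage Nash payoff p < c, the cutoff is δ ≥ (d − c)/(d − p). Remind students that subgame perfection also requires checking the punishment phase, where repeating the stage Nash profile is an equilibrium of every subgame.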

Examples and Experiments

1. Two-period example. It is probably best to start a lecture with the simplest possible example, such as the one with a 3 × 2 stage game that is presented at the beginning of this chapter. You can also run a classroom experiment based on such a game. Have the students communicate in advance (either in pairs or as a group) to agree on how they will play the game. That is, have the students make a self-enforced contract. This will hopefully get them thinking about history-dependent strategies. Plus, it will reinforce the interpretation of equilibrium as a self-enforced contract, which you may want to discuss near the end of a lecture on reputation and repeated games. A numerical sketch of such a two-period construction appears after this list.

2. The Princess Bride reputation example. At the beginning of your lecture on reputation, you can play the scene from The Princess Bride in which Wesley is reunited with the princess. Just before he reveals his identity to her, he makes interesting comments about how a pirate maintains his reputation.
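
If a concrete payoff matrix is useful for the lecture or the experiment handout, the following sketch checks the two-period reputation construction numerically. The 3 × 2 stage game is a hypothetical stand-in (not necessarily the textbook’s example); it has two pure stage Nash profiles, (B, X) and (C, Y), and the script verifies that playing the non-Nash profile (A, X) in period 1, followed by (B, X) if no one deviated and (C, Y) otherwise, deters all one-shot deviations.

    # Hypothetical 3x2 stage game (payoffs are (u1, u2)); not the textbook's example.
    U = {
        ('A', 'X'): (2, 4), ('A', 'Y'): (0, 5),
        ('B', 'X'): (3, 3), ('B', 'Y'): (0, 0),
        ('C', 'X'): (0, 0), ('C', 'Y'): (1, 1),
    }
    ROWS, COLS = ['A', 'B', 'C'], ['X', 'Y']

    def is_stage_nash(r, c):
        # Neither player gains from a unilateral stage-game deviation.
        return (U[(r, c)][0] >= max(U[(r2, c)][0] for r2 in ROWS) and
                U[(r, c)][1] >= max(U[(r, c2)][1] for c2 in COLS))

    print([p for p in U if is_stage_nash(*p)])     # [('B', 'X'), ('C', 'Y')]

    # Candidate SPE of the two-period game (no discounting):
    # period 1: (A, X); period 2: (B, X) if period 1 was (A, X), else (C, Y).
    target, reward, punish = ('A', 'X'), ('B', 'X'), ('C', 'Y')
    on_path_1 = U[target][0] + U[reward][0]
    on_path_2 = U[target][1] + U[reward][1]
    deviate_1 = max(U[(r, 'X')][0] for r in ROWS if r != 'A') + U[punish][0]
    deviate_2 = max(U[('A', c)][1] for c in COLS if c != 'X') + U[punish][1]
    print(on_path_1 >= deviate_1, on_path_2 >= deviate_2)   # True True

Both continuation prescriptions are stage Nash profiles, so play is optimal in every period-2 subgame, and neither player can profit from a period-1 deviation. The on-path payoff vector is (5, 7), which can be compared with (6, 6) from playing (B, X) in both periods.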

23 Collusion, Trade Agreements, and Goodwill

This chapter presents three applications of repeated game theory: collusion between firms over time, the enforcement of international trade agreements, and goodwill. The first application involves a straightforward calculation of whether collusion can be sustained using grim trigger strategies in a repeated Cournot model. This example reinforces the basic analytical exercise from Chapter 22. The section on international trade is a short verbal discussion of how reputation functions as the mechanism for self-enforcement of a long-term contract. On goodwill, a two-period game with a sequence of players 2 (one in the first period and another in the second period) is analyzed. The first player 2 can, by cooperating in the first period, establish a valuable reputation that he can then sell to the second player 2.

Lecture Notes

Any or all of the applications can be discussed in class, depending on time constraints and the students’ background and interest. Other applications can also be presented, in addition to these or substituting for these. For each application, it may be helpful to organize the lecture as follows.

• Description of the real-world setting.

• Explanation of how some key strategic elements can be distilled in a game theory model.

• (If applicable) Description of the game to be analyzed.

• Determination of conditions under which an interesting (cooperative) equilibrium exists.

• Discussion of intuition.

• Notes on how the model could be extended.

Examples and Experiments

1. The Princess Bride second reputation example. Before lecturing on goodwill, you can play the scene from The Princess Bride where Wesley and Buttercup are in the fire swamp. While in the swamp, Wesley explains how a reputation can be associated with a name, even if the name changes hands over time.

2. Goodwill in an infinitely repeated game. If you want to be ambitious, you can present a model of an infinitely repeated game with a sequence of players 2 who buy and sell the “player 2 reputation” between periods. This can follow the Princess Bride scene and be based on Exercise 4 of this chapter (which, depending on your students’ backgrounds, may be too difficult for them to do on their own).

3. Repeated Cournot oligopoly experiment. Let three students interact in a repeated Cournot oligopoly. This may be set as an oil (or some other commodity) production game. It may be useful to have the game end probabilistically. This may be easy to do if it is done by e-mail, but may require a set time frame if done in class. The interaction can be done in two scenarios. In the first, players may not communicate, and only the total output is announced at the end of each round. In the second scenario, players are allowed to communicate and each player’s output is announced at the end of each round. A collusion benchmark calculation appears after this list.
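
For debriefing the oligopoly experiment (and for the collusion application itself), a numerical benchmark can help. The sketch below assumes a linear-demand duopoly with inverse demand P = a − Q and constant marginal cost c (an illustrative parameterization, not necessarily the textbook’s), and computes the discount-factor cutoff above which the monopoly split is sustainable with grim trigger strategies.

    from fractions import Fraction as F

    # Assumed linear Cournot duopoly: inverse demand P = a - Q, marginal cost c.
    a, c = F(12), F(0)                          # illustrative numbers

    pi_cournot = ((a - c) / 3) ** 2             # per-firm profit in the stage Nash
    pi_collude = (a - c) ** 2 / 8               # half of the monopoly profit
    q_other = (a - c) / 4                       # rival's output under collusion
    q_dev = (a - c - q_other) / 2               # best response to the collusive output
    pi_dev = q_dev * (a - q_dev - q_other - c)  # one-period profit from cheating

    # Grim trigger: collude forever vs. cheat once, then Cournot forever:
    # pi_collude/(1 - d) >= pi_dev + d*pi_cournot/(1 - d)  <=>  d >= delta_star.
    delta_star = (pi_dev - pi_collude) / (pi_dev - pi_cournot)
    print(pi_cournot, pi_collude, pi_dev, delta_star)   # 16 18 81/4 9/17

With these numbers the cutoff is 9/17, so collusion survives for any discount factor of at least about 0.53; students can check whether behavior in the experiment is consistent with this logic once communication is allowed.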

24 Random Events and Incomplete Information

This chapter explains how to incorporate exogenous random events in the specification of a game. Moves of Nature (also called the nonstrategic “player 0”) are made at chance nodes according to a fixed probability distribution. As an illustration, the gift game is depicted in the extensive form and then converted into the Bayesian normal form (where payoffs are the expected values over Nature’s moves). Another abstract example follows.

Lecture Notes

A lecture may be organized according to the following outline.

• Discussion of settings in which players have private information about strategic aspects beyond their physical actions. Private information about preferences: auctions, negotiation, etc.

• Modeling such a setting using moves of Nature that players privately observe. (For example, the buyer knows his own valuation of the good, which the seller does not observe.)

• Extensive form representation of the example. Nature moves at chance nodes, which are represented as open circles. Nature’s probability distribution is noted in the tree.

• The notion of a type, referring to the information that a player privately observes. If a player privately observes some aspect of Nature’s choices, then the game is said to be of incomplete information.

• Many real settings might be described in terms of players already knowing their own types. However, because of incomplete information, one type of player will have to consider how he would have behaved were he a different type (because the other players consider this).

• Bayesian normal form representation of the example. Note that payoff vectors are averaged with respect to Nature’s fixed probability distribution (a small numerical illustration follows this list).

• Other examples.
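
A tiny numerical case makes the averaging step concrete (the numbers are illustrative, not the textbook’s gift game): if a given strategy profile yields payoffs (4, 1) when Nature draws player 1’s type H (probability 2/3) and (1, 4) when Nature draws type L (probability 1/3), then the corresponding cell of the Bayesian normal form is

    (2/3)(4, 1) + (1/3)(1, 4) = (3, 2).

Each cell is filled in the same way, with player 1’s strategies being type-contingent action plans.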

Examples and Experiments

1. The Let’s Make a Deal game revisited. You can illustrate incomplete information by describing a variation of the Let’s Make a Deal game that is described in the material for Chapter 2. In the incomplete-information version, Nature picks with equal probabilities the door behind which the prize is concealed and Monty randomizes equally between alternatives when he has to open one of the doors.

2. Three-card poker. This game also makes a good example (see Exercise 4 in Chapter 24 of the textbook).

3. Ultimatum-offer bargaining with incomplete information. You might present, or run as a classroom experiment, an ultimatum bargaining game in which the responder’s value of the good being traded is private information (say, $5 with probability 1/2 and $8 with probability 1/2). For an experiment, describe the good as a soon-expiring check made out to player 2. You show player 2 the amount of the check, but you seal the check in an envelope before giving it to player 1 (who bargains over the terms of trading it to player 2).

4. Signaling games. It may be worthwhile to describe a signaling game that you plan to analyze later in class.

5. The Price is Right. The bidding game from this popular television game show forms the basis for a good bonus question. (See also Exercise 5 in Chapter 25 for a simpler, but still challenging, version.) In the game, four contestants must guess the price of an item. Suppose none of them knows the price of the item initially, but they all know that the price is an integer between 1 and 1,000. In fact, when they have to make their guesses, the contestants all believe that the price is equally likely to be any number between 1 and 1,000. That is, the price will be 1 with probability 1/1000, the price will be 2 with probability 1/1000, and so on.

The players make their guesses sequentially. First, player 1 declares his/her guess of the price, by picking a number between 1 and 1,000. The other players observe player 1’s choice and then player 2 makes her guess. Player 3 next chooses a number, followed by player 4. When a player selects a number, he/she is not allowed to pick a number that one of the other players has already selected.

After the players make their guesses, the actual price is revealed. Then the player whose guess is closest to the actual price without going over wins $100. The other players get 0. For example, if player 1 chose 150, player 2 chose 300, player 3 selected 410, and player 4 chose 490, and if the actual price were 480, then player 3 wins $100 and the others get nothing.

This game is not exactly the one played on The Price is Right, but it is close. The bonus question is: Assuming that a subgame perfect equilibrium is played, what is player 1’s guess? How would the answer change if, instead of the winner getting $100, the winner gets the value of the item (that is, the actual price)?
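
Rather than giving away the answer to the bonus question, you can give students (or use yourself) a quick way to evaluate candidate guess profiles. The sketch below computes each contestant’s probability of winning for any list of distinct guesses, given the uniform prior on {1, ..., 1000}; the guesses in the example call are the manual’s illustrative numbers, not equilibrium play.

    def win_probs(guesses, top=1000):
        # Price uniform on 1..top; closest guess without going over wins;
        # nobody wins if every guess exceeds the price.
        wins = [0] * len(guesses)
        for price in range(1, top + 1):
            eligible = [(price - g, i) for i, g in enumerate(guesses) if g <= price]
            if eligible:
                wins[min(eligible)[1]] += 1
        return [w / top for w in wins]

    print(win_probs([150, 300, 410, 490]))   # [0.15, 0.11, 0.08, 0.511]

Students can use this to test how later movers should respond to earlier guesses, which points toward the backward-induction logic without stating the answer.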

25 Risk and Incentives in Contracting

This chapter presents the analysis of the classic principal-agent problem under moral hazard, where the agent is risk-averse. There is a move of Nature (a random productive outcome). Because Nature moves last, the game has complete information. Thus, it can be analyzed using subgame perfect equilibrium. This is why the principal-agent model is the first, and most straightforward, application covered in Part IV of the book.

At the beginning of the chapter, the reader will find a thorough presentation of how payoff numbers represent preferences over risk. An example helps explain the notions of risk aversion and risk premia. The Arrow-Pratt measure of relative risk aversion is defined. Then a streamlined principal-agent model is developed and fully analyzed. The relation between the agent’s risk attitude and the optimal bonus contract is determined.

Lecture Notes

Analysis of the principal-agent problem is fairly complicated. Instructors will not likely want to develop in class a more general and complicated model than the one in the textbook. A lecture based on the textbook’s model can proceed as follows.

• Example of a lottery experiment/questionnaire that is designed to determine the risk preferences of an individual.

• Representing the example as a simple game with Nature.

• Note that people usually are risk averse in the sense that they prefer the expected value of a lottery over the lottery itself.

• Observe the difference between an expected monetary award and expected utility (payoff).

• Risk preferences and the shape of the utility function on money. Concavity, linearity, etc.

• Arrow-Pratt measure of relative risk aversion.

• Intuition: contracting for effort incentives under risk.

• The principal-agent model. Risk neutral principal.

• Incentive compatibility and participation constraints. They both will bind at the principal’s optimal contract offer.

• Calculation of the equilibrium. Note how the contract and the agent’s behavior depend on the agent’s risk preferences.

• Discussion of real implications.

Examples and Experiments

You can illustrate risk aversion by offering choices over real lotteries to the students in class. Discuss risk aversion and risk premia.
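
One way to anchor the discussion is to compute the certainty equivalent and risk premium for a simple lottery under an assumed utility function; the square-root utility and the dollar amounts below are illustrative only.

    import math

    # Illustrative lottery: $0 or $100 with equal probability; assumed u(x) = sqrt(x).
    outcomes, probs = [0.0, 100.0], [0.5, 0.5]
    u = math.sqrt

    expected_value = sum(p * x for p, x in zip(probs, outcomes))       # 50.0
    expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))  # 5.0
    certainty_equivalent = expected_utility ** 2   # invert u: u^{-1}(y) = y^2
    risk_premium = expected_value - certainty_equivalent

    # For u(x) = sqrt(x), the Arrow-Pratt coefficient of relative risk aversion,
    # -x u''(x)/u'(x), equals 1/2 at every x.
    print(expected_value, certainty_equivalent, risk_premium)          # 50.0 25.0 25.0

Comparing the classroom lottery choices with the computed certainty equivalent leads naturally into the Arrow-Pratt measure and the shape of the utility function.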

26 Bayesian Nash Equilibrium and Rationalizability

This chapter shows how to analyze Bayesian normal form games using rationalizability and equilibrium theory. Two methods are presented. The first method is simply to apply the standard definitions of rationalizability and Nash equilibrium to Bayesian normal forms. The second method is to apply the concepts by treating different types of a player as separate players. The two methods are equivalent whenever all types are realized with positive probability (an innocuous assumption for static settings). Computations for some finite games exemplify the first method. The second method is shown to be useful when there are continuous strategy spaces, as illustrated using the Cournot duopoly with incomplete information.

Lecture Notes

A lecture may be organized according to the following outline.

• Examples of applying standard rationalizability and equilibrium analysis to Bayesian normal form games.

• Another method that is useful for more complicated games (such as those with continuous strategy spaces): treat different types as different players. One can use this method without having to calculate expected payoffs over Nature’s moves for all players.

• Example of the second method: Cournot duopoly with incomplete information or a different game.
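
A numerical version of the Cournot illustration can be obtained by treating firm 2’s two cost types as separate players and iterating the three best-response conditions; the demand and cost numbers below are assumptions, not necessarily those in the textbook.

    # Assumed linear Cournot duopoly with one-sided incomplete information:
    # inverse demand P = a - q1 - q2; firm 1's cost c1 is common knowledge;
    # firm 2's cost is c_low with probability theta and c_high otherwise
    # (firm 2 knows its own cost).
    a, c1, c_low, c_high, theta = 12.0, 0.0, 0.0, 3.0, 0.5

    # Treat firm 2's types as separate players and iterate best responses.
    q1 = q2_low = q2_high = 1.0
    for _ in range(200):
        q1 = (a - c1 - (theta * q2_low + (1 - theta) * q2_high)) / 2  # vs. expected rival output
        q2_low = (a - c_low - q1) / 2                                 # low-cost type
        q2_high = (a - c_high - q1) / 2                               # high-cost type

    print(round(q1, 3), round(q2_low, 3), round(q2_high, 3))          # 4.5 3.75 2.25

The output shows firm 1 producing between what it would choose against a known low-cost rival and against a known high-cost rival, which is the point to emphasize when discussing the method.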

Examples and Experiments

You can run a common- or private-value auction experiment or a lemons experiment in class as a transition to the material in Chapter 27. You might also consider simple examples to illustrate the method of calculating best responses for individual player-types.

27 Lemons, Auctions, and Information Aggregation

This chapter focuses on three important settings of incomplete information: price-taking market interaction, auctions, and information aggregation through voting. These settings are studied using static models, in the Bayesian normal form, and the games are analyzed using the techniques discussed in the preceding chapter. The “markets and lemons” game demonstrates Akerlof’s major contribution to information economics. Regarding auctions, the chapter presents the analysis of both first-price and second-price formats. In the process, weak dominance is defined and the revenue equivalence result is mentioned. The example of voting and information aggregation gives a hint of standard mechanism-design/social-choice analysis and illustrates Bayes’ rule.

Lecture Notes

Any or all of these applications can be discussed in class, depending on time constraints and the students’ background and interest. The lemons model is quite simple; a lemons model that is more general than the one in the textbook can easily be covered in class. The auction analysis, on the other hand, is more complicated. However, the simplified auction models are not beyond the reach of most advanced undergraduates. The major sticking points are (a) explaining the method of assuming a parameterized form of the equilibrium strategies and then calculating best responses to verify the form and determine the parameter, (b) the calculus required to calculate best responses, and (c) double integration to establish revenue equivalence. One can skip (c) with no problem. The information aggregation example requires students to work through Bayes’ rule calculations.
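
For sticking point (a), it can help to show concretely what “posit a parameterized form and verify it” means. The sketch below takes a standard two-bidder first-price auction with values independently uniform on [0, 1] (an illustrative case, not necessarily the textbook’s exact model), posits that the rival bids half her value, and confirms by grid search that bidding half one’s own value is a best response.

    # Two bidders, values independent and uniform on [0, 1].
    # Posit that the rival bids b(v) = v/2; then a bid b <= 1/2 wins with probability 2b.
    def expected_payoff(b, v):
        return (v - b) * min(2 * b, 1.0)

    grid = [i / 1000 for i in range(1001)]
    for v in [0.2, 0.5, 0.8, 1.0]:
        best_bid = max(grid, key=lambda b: expected_payoff(b, v))
        print(v, best_bid)   # the best bid is v/2 in every case

    # Analytically: d/db [(v - b) * 2b] = 2v - 4b = 0  =>  b = v/2,
    # so the posited linear form is verified and the parameter is 1/2.

The same confirm-the-posited-form logic is what the calculus in the chapter carries out in general.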

For each application, it may be helpful to organize the lecture as follows.

• Description of the real-world setting.

• Explanation of how some key strategic elements can be distilled in a game theory model.

• Description of the game to be analyzed.

• Calculations of best responses and equilibrium. Note whether the equilibrium is unique.

• Discussion of intuition.

• Notes on how the model could be extended.

Examples and Experiments

1. Lemons experiment. Let one student be the seller of a car and another be the potential buyer. Prepare some cards with values written on them. Show the cards to both of the students and then, after shuffling the cards, draw one at random and give it to student 1 (so that student 1 sees the value but student 2 does not). Let the students engage in unstructured negotiation over the terms of trading the card from student 1 to student 2, or allow them to declare whether they will trade at a prespecified price. Tell them that whoever has the card in the end will get paid. If student 1 has the card, then she gets the amount written on it. If student 2 has the card, then he gets the amount plus a constant ($2 perhaps). A quick numerical version of the lemons logic appears after this list.

2. Stock trade and auction experiments. You can run an experiment in which randomly-selected students play a trading game like that of Exercise 8 in this chapter. Have the students specify on paper the set of prices at which they are willing to trade. You can also organize the interaction as a common-value auction, or run any other type of auction in class. You can discuss the importance of expected payoffs contingent on winning or trading.
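
If a quick numbers version of the lemons logic is useful for the debriefing (the values are hypothetical, not the textbook’s), suppose quality q is uniformly distributed on [0, 2000], the seller values the car at q, and the buyer values it at 1.5q. At any price p, only sellers with q ≤ p are willing to trade, so the expected quality of an offered car is p/2 and the buyer’s expected value is

    1.5 · (p/2) = 0.75p < p.

No price supports trade, even though every individual trade would create surplus; this is the unraveling that the experiment is meant to surface.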

28 Perfect Bayesian Equilibrium

This chapter develops the concept of perfect Bayesian equilibrium for analyzing behavior in dynamic games of incomplete information. The gift game is utilized throughout the chapter to illustrate the key ideas. First, the example is used to demonstrate that subgame perfection does not adequately represent sequential rationality. Then comes the notion of conditional belief, which is presented as the belief of a player at an information set where he has observed the action, but not the type, of another player. Sequential rationality is defined as action choices that are optimal in response to the conditional beliefs (for each information set). The chapter then covers the notion of consistent beliefs and Bayes’ rule. Finally, perfect Bayesian equilibrium is defined and put to work on the gift game.

Lecture Notes

A lecture may be organized according to the following outline.

• Example to show that subgame perfection does not adequately capture sequential rationality. (A simple signaling game will do.)

• Sequential rationality requires evaluating behavior at every information set.

• Conditional belief at an information set (regardless of whether players originally thought the information set would be reached). Initial belief about types; updated (posterior) belief.

• Sequential rationality: optimal actions given beliefs (like best response, but with actions at a particular information set rather than full strategies).

• Consistency: updating should be consistent with strategies and the basic definition of conditional probability. Bayes’ rule. Note that conditional beliefs are unconstrained at zero-probability information sets (a short Bayes’ rule calculation appears after this list).

• Perfect Bayesian equilibrium: strategies, beliefs at all information sets, such that (1) each player’s strategy prescribes optimal actions at all of his information sets, given his beliefs and the strategies of the other players, and (2) the beliefs are consistent with Bayes’ rule wherever possible.

• Definition of pooling and separating equilibria.

• Algorithm for finding perfect Bayesian equilibria in a signaling game: (a) posit a strategy for player 1 (either pooling or separating), (b) calculate restrictions on conditional beliefs, (c) calculate optimal actions for player 2 given his beliefs, and (d) check whether player 1’s strategy is a best response to player 2’s strategy.

• Calculations for the example.
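
A one-line Bayes’ rule calculation is worth writing on the board alongside the consistency bullet; the prior and strategies here are illustrative, not the textbook’s exact numbers. Suppose Nature makes player 1 the “friend” type with prior probability 2/3, and the posited strategy has the friend type give a gift for sure and the “enemy” type give a gift with probability 1/2. Then at player 2’s gift information set, the consistent belief is

    P(friend | gift) = (2/3 · 1) / (2/3 · 1 + 1/3 · 1/2) = (2/3) / (5/6) = 4/5.

If instead the posited strategy has neither type giving a gift, the gift information set is reached with probability zero and the belief there is unconstrained, exactly as the bullet above notes.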

Examples and Experiments

1. Conditional probability demonstration. Students can be given cards with different colors written on them, say “red” and “blue.” The colors should be given in different proportions to males and females (for example, males could be given proportionately more cards saying red and females could be given proportionately more cards saying blue). A student could be asked to guess the color of another student’s card. This could be done several times, and the color revealed following the guess. Then a male and female student could be selected, and a student could be asked to guess who has, for example, the red card.

2. Signaling game experiment. It may be instructive to play in class a signaling game in which one of the player-types has a dominated strategy. The variant of the gift game discussed at the beginning of Chapter 28 is such a game.

3. The Princess Bride signaling example. A scene near the end of The Princess Bride movie is a good example of a signaling game. The scene begins with Wesley lying in a bed. The prince enters the room. The prince does not know whether Wesley is strong or weak. Wesley can choose whether or not to stand. Finally, the prince decides whether to fight or surrender. This game can be diagrammed and discussed in class. After specifying payoffs, you can calculate the perfect Bayesian equilibria and discuss whether they accurately describe events in the movie. Exercise 6 in this chapter sketches one model of this strategic setting.

29 Job-Market Signaling and Reputation

This chapter presents two applications of perfect Bayesian equilibrium: job-market signaling and reputation with incomplete information. The signaling model demonstrates Michael Spence’s major contribution to information economics. The reputation model illustrates how incomplete information causes a player of one type to pretend to be another type, which has interesting implications. This offers a glimpse of the reputation literature initiated by David Kreps, Paul Milgrom, John Roberts, and Robert Wilson.

Lecture Notes

Either or both of these applications can be discussed in class, depending on time constraints and the students’ background and interest. The extensive form tree of the job-market signaling model is in the standard signaling-game format, so this model can be easily presented in class. The reputation model may be slightly more difficult to present, however, because its extensive form representation is a bit different and the analysis does not follow the algorithm outlined in Chapter 28.

For each application, it may be helpful to organize the lecture as follows.

• Description of the real-world setting.

• Explanation of how some key strategic elements can be distilled in a game theory model.

• Description of the game to be analyzed.

• Calculating the perfect Bayesian equilibria (using the circular algorithm from Chapter 28, if appropriate). A bare-bones separating-equilibrium calculation appears after this list.

• Discussion of intuition.

• Notes on how the model could be extended.
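
The key separating-equilibrium inequalities can be previewed with a bare-bones parameterization (an assumption for illustration, not necessarily the textbook’s): worker productivity is 1 (low type) or 2 (high type), education e is unproductive and costs e for the low type and e/2 for the high type, and competitive firms pay a wage equal to expected productivity given e. In a separating equilibrium the low type chooses e = 0 and earns 1, while the high type chooses some level e* and earns 2 − e*/2. The low type must not want to mimic,

    1 ≥ 2 − e*,    i.e.  e* ≥ 1,

and the high type must prefer signaling to being treated as a low type,

    2 − e*/2 ≥ 1,    i.e.  e* ≤ 2.

Any e* in [1, 2] works, supported by beliefs that education below e* comes from the low type; this range is the punchline to connect back to Spence.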

Examples and Experiments

In addition to, or in place of, the applications presented in this chapter, you might lecture on the problem of contracting with adverse selection. Exercise 9 of Chapter 29 would be suitable as the basis for such a lecture. This is a principal-agent game, where the principal offers a menu of contracts to screen between two types of the agent. You can briefly discuss the program of mechanism design theory as well.
