
Review: Decision Theory
• Structure of Problem
– Choices
– Random events with probability
– Payoffs
• Diagram it
– Box for choice, circle for random, lines
– Probabilities and payoffs
• Solve it, right to left
– Evaluate circles
– At square, lop off inferior choices
– Continue until left with only one best choice
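The right-to-left procedure can be written out mechanically. Below is a minimal sketch in Python; the tree, node names, and payoffs are invented for illustration, not taken from the slides. Chance nodes (circles) get probability-weighted averages, and choice nodes (squares) keep only the best branch.

```python
# Minimal sketch of the "solve it, right to left" procedure on a toy tree.
# Node names and payoffs are made up for illustration, not from the slides.

def expected_value(node):
    """Evaluate a decision tree by backward induction."""
    kind = node["kind"]
    if kind == "payoff":                      # leaf: just return the payoff
        return node["value"]
    if kind == "chance":                      # circle: probability-weighted average
        return sum(p * expected_value(child)
                   for p, child in node["branches"])
    if kind == "choice":                      # square: lop off inferior choices
        return max(expected_value(child) for child in node["options"])
    raise ValueError(f"unknown node kind: {kind}")

# Example: choose between a sure $0 and a gamble (50% of $100, 50% of -$40).
tree = {
    "kind": "choice",
    "options": [
        {"kind": "payoff", "value": 0},
        {"kind": "chance", "branches": [
            (0.5, {"kind": "payoff", "value": 100}),
            (0.5, {"kind": "payoff", "value": -40}),
        ]},
    ],
}
print(expected_value(tree))   # 30.0: the gamble beats the sure thing
```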


Game theory: The Problem
• Strategic behavior
– Outcome depends on choices by both (or all) players
– So what I should do depends on what you do
– What you should do depends on what I do
• This describes, among other things
– Much of economics
– Politics
– Diplomacy
– …
• If we had a general solution, it might be useful
• The Theory of Games and Economic Behavior


Strategic Behavior
• A lot of what we do involves optimizing against nature
– Should I take an umbrella?
– What crops should I plant?
– How do we treat this disease or injury?
– How do I fix this car?
• We sometimes imagine it as a game against a malevolent opponent
– Finagle's Law: If Something Can Go Wrong, It Will
– "The perversity of inanimate objects"
– Yet we know it isn't
• But consider a two person zero sum game, where what I win you lose.
– From my standpoint, your perversity is a fact, not an illusion
– Because you are acting to maximize your winnings, hence minimize mine


Consider a non-fixed sum game
• My apple is worth

– Nothing to me (I'm allergic)

– One dollar to you (the only customer)

• If I sell it to you, the sum of our gains is … ?

• If bargaining breaks down and I don't sell it, the sum of our gains is … ?

• So we have both cooperation--to get a deal--and conflict over the terms.

• Giving us the paradox that
– If I will not accept less than $.90, you should pay that, but …

– If you will not offer more than $.10, I should accept that.

• Bringing in the possibility of bluffs, commitment strategies, and the like.


Consider a Many Player Game
• Many players

– Each choosing moves

– Trying to maximize his returns

• This adds a new element: Coalitions
– Even if the game is fixed sum for all of us put together

– It can be positive sum for a group of players

– At the cost of those outside the group

• So we really have three games at once
– Forming coalitions

– The coalition playing against others

– The coalition dividing its gains among its members


To “Solve” a game we need

• A way of describing a game
– Better, a way of describing all games

• A definition of a solution

• A way of finding it

• Some possible descriptions:
– Decision tree
– Strategy matrix


Decision Tree
• A sequence of choices, except that now some are made by player 1, some by player 2 (and perhaps 3, 4, …)
– May still be some random elements as well
– Can rapidly become unmanageably complicated, but …
– Useful for one purpose: Subgame Perfect Equilibrium
– A solution concept
• Back to our basketball player--this time a two person game. The two players are the team and the player.
• To solve (see the sketch below)
– Upper blue box: Doesn't play
– Lower: Plays
– Repeat for other four
– Back to decision theory
• Again we are going right to left
– At each last choice, take the alternative with higher payoff
– Given that A knows B's last choice
– What is now A's best last choice?
– Continue until only one sequence remains
• This is called "Subgame Perfect Equilibrium"

[Decision tree diagram for the basketball example: the team's choice and the player's "plays"/"doesn't play" branches, chance nodes with .25 probabilities, and payoffs of -$10 million, +$20 million, and $0.]
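Backward induction itself is easy to mechanize. Here is a minimal sketch of subgame perfect equilibrium on a two-player tree; the structure and payoff numbers are invented for illustration, not the basketball figures from the diagram above.

```python
# Minimal sketch of backward induction (subgame perfect equilibrium) on a
# two-player game tree. The structure and payoffs are illustrative only,
# not the basketball example from the slide.

def solve(node):
    """Return the (player1_payoff, player2_payoff) reached under SPE."""
    if "payoffs" in node:                 # leaf
        return node["payoffs"]
    mover = node["mover"]                 # 0 = player 1, 1 = player 2
    outcomes = [solve(child) for child in node["children"]]
    # The moving player keeps the branch that is best for him.
    return max(outcomes, key=lambda payoffs: payoffs[mover])

# Player 1 moves first; player 2 responds at each of player 1's choices.
game = {"mover": 0, "children": [
    {"mover": 1, "children": [{"payoffs": (3, 1)}, {"payoffs": (0, 2)}]},
    {"mover": 1, "children": [{"payoffs": (2, 2)}, {"payoffs": (4, 0)}]},
]}
print(solve(game))   # (2, 2): anticipating player 2's replies, player 1 goes right
```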


But … Tantrum/No Tantrum game

• Child ignores the logic of subgame perfect equilibrium
– Because he knows it is a repeated game

– And if he is going to ignore it, parent should let him stay up

• So Subgame Perfect works only if commitment strategies are not available

[Game tree: the child responds with Tantrum or Not Tantrum to the parent's To Bed / Not To Bed decision; the payoffs shown range from -15 to +10.]


Strategy Matrix

• Scissors/Paper/Stone
• The rules
– Scissors cut paper
– Stone breaks scissors
– Paper covers stone
• Payoff
– Loser pays winner a dollar
– No payment in case of tie

• Description: Matrix of strategies and outcomes


Strategy Matrix

                         Player 2
             Scissors     Paper      Stone
Player 1
  Scissors      0        (1,-1)     (-1,1)
  Paper      (-1,1)        0        (1,-1)
  Stone      (1,-1)     (-1,1)        0

•Each player chooses a strategy

•Player 1 picks a row

•Player 2 picks a column

•The intersection shows the payoffs to the two players


Any Two Player Game Can be Described This Way
• My strategy is a complete description
– Of what I am going to do, including
– How it depends on observed random events
– And on what I see the other player doing
• Your strategy similarly
• Given my strategy and yours
– We can see what happens when they are played out together
– With the outcome possibly probabilistic depending on random elements in the game
• So any game can be represented as a matrix
– A list of all possible strategies for player 1
– A list for player 2
– At the intersection, the outcome if that pair is chosen
• The book never explains this, which is why
– It claims some games can't be represented on a matrix
– And that "don't sue/go to court" is a Nash Equilibrium
– Both of which are wrong if we think in terms of strategies, not acts
• What about die rolls, card shuffles, and the like?
• What does "zero sum" mean in this description?


Prisoner’s Dilemma

• There is a dominant pair of strategies--confess/confess
– Meaning that whatever Player 1 does, Player 2 is better off confessing, and
– Whatever Player 2 does, Player 1 is better off confessing
– Even though both would be better off if neither confessed

• One possible sense of “solution” to a game.

                       Baxter
               Confess        Deny
Chester
  Confess     -10,-10        0,-15
  Deny        -15,0          -1,-1
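A quick way to see the dominance claim is to check it directly. This is a minimal sketch using the payoffs from the matrix above, written as (Chester, Baxter) pairs.

```python
# Quick check, using the matrix above, that Confess dominates Deny for each
# player. Payoffs are written (Chester, Baxter); the numbers come from the slide.

payoffs = {                       # (Chester's move, Baxter's move)
    ("Confess", "Confess"): (-10, -10),
    ("Confess", "Deny"):    (0, -15),
    ("Deny",    "Confess"): (-15, 0),
    ("Deny",    "Deny"):    (-1, -1),
}

# Whatever Baxter does, Chester does better confessing ...
for baxter in ("Confess", "Deny"):
    assert payoffs[("Confess", baxter)][0] > payoffs[("Deny", baxter)][0]

# ... and whatever Chester does, Baxter does better confessing.
for chester in ("Confess", "Deny"):
    assert payoffs[(chester, "Confess")][1] > payoffs[(chester, "Deny")][1]

# Yet both prefer Deny/Deny (-1 each) to Confess/Confess (-10 each).
print("Confess strictly dominates Deny for both players.")
```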


Dealing with Prisoner’s Dilemma
• Simple example of a general problem

– Where individual rational behavior by two or more

– Makes them worse off than irrational (don’t betray)

– Other examples?

• If you are in a PD, how might you get out?

• Enforceable contract
– I won't confess if you won't

– In that case, using nonlegal mechanisms to enforce

– Commitment strategy--you peach on me and when I get out …

• Repeat plays?
– Double cross me this time

– I’ll double cross you next

– Doesn’t work for a fixed number of plays!

– Might for an unknown number

– Which is the game all of us are in


Examples?

• Athletes taking steroids. Is it a PD?

• Countries engaging in an arms race

• Students studying in order to get better grades?


Defining a Solution
• What does “solution of a game” mean?
– “What will happen”
– “How to play?”
– What will happen if everyone plays perfectly?
• Dominant strategy (as in prisoner’s dilemma)
– Whatever he does, I should do X
– Whatever I do, he should do Y
– Reasonable definition--if it exists. But …
– Consider scissors/paper/stone
• Von Neumann solution for 2 person zero sum
• Nash equilibrium for many person


Von Neumann Solution
• Von Neumann proved that for any 2 player zero sum game
– There was a pair of strategies, one for player A, one for B,
– And a payoff P for A (-P for B)
– Such that if A played his strategy, he would (on average) get at least P whatever B did.
– And if B played his, A would get at most P whatever he did
• How can this be true for scissors/paper/stone?
– You can choose a “mixed strategy”
– Roll a die where the other player can’t see it
• 1,2: scissors
• 3,4: paper
• 5,6: stone
– What is my payoff to this, whatever you do? Yours, whatever I do?
• Including mixed strategies, there is always a VN solution
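As a numerical check of the die-roll idea, here is a small sketch showing that the uniform 1/3-1/3-1/3 mix earns an expected payoff of exactly zero against each pure strategy, which is the game's Von Neumann value.

```python
# Sketch: the uniform mixed strategy in scissors/paper/stone yields an expected
# payoff of 0 against every pure strategy, illustrating the Von Neumann value.

from fractions import Fraction

moves = ("scissors", "paper", "stone")
beats = {"scissors": "paper", "paper": "stone", "stone": "scissors"}

def payoff(mine, yours):
    """Payoff to me: +1 if I win, -1 if I lose, 0 on a tie."""
    if mine == yours:
        return 0
    return 1 if beats[mine] == yours else -1

mix = {m: Fraction(1, 3) for m in moves}   # the die-roll strategy: 1/3 each

for yours in moves:
    expected = sum(p * payoff(mine, yours) for mine, p in mix.items())
    print(f"vs pure {yours}: expected payoff {expected}")   # 0 every time
```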


Nash Equilibrium
• Called that because it was invented by Cournot, in accordance with Stigler's Law
– Which holds that scientific laws are never named after their real inventors
– Cournot invented the two player version long before Nash
– Puzzle: Who invented Stigler's Law?
• Consider a many player game.
– Each player chooses a strategy
– Given the choices of the other players, my strategy is best for me
– And similarly for each other player
– Nash Equilibrium
• Driving on the right side of the road is a Nash Equilibrium
– If everyone else drives on the right, I would be wise to do the same
– Similarly if everyone else drives on the left
– So multiple equilibria are possible
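The driving example can be checked in a few lines. This sketch uses a made-up payoff matrix for the which-side-of-the-road coordination game (the payoff numbers are assumptions); the equilibrium test is the one on the slide, that no player gains by switching alone.

```python
# Sketch of a pure-strategy Nash equilibrium check, using a made-up payoff
# matrix for the "which side of the road" coordination game from the slide.

# payoffs[(my_side, your_side)] = (my payoff, your payoff)
payoffs = {
    ("right", "right"): (1, 1),
    ("right", "left"):  (-10, -10),
    ("left",  "right"): (-10, -10),
    ("left",  "left"):  (1, 1),
}
sides = ("right", "left")

def is_nash(a, b):
    """Neither driver gains by unilaterally switching sides."""
    best_a = all(payoffs[(a, b)][0] >= payoffs[(alt, b)][0] for alt in sides)
    best_b = all(payoffs[(a, b)][1] >= payoffs[(a, alt)][1] for alt in sides)
    return best_a and best_b

equilibria = [(a, b) for a in sides for b in sides if is_nash(a, b)]
print(equilibria)   # [('right', 'right'), ('left', 'left')]: multiple equilibria
```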


Other Problems with Nash Equilibrium
• One problem: It assumes no coordinated changes
– A crowd of prisoners are escaping from Death Row
– Faced by a guard with one bullet in his gun
– Guard will shoot the first one to charge him
– Standing still until they are captured is a Nash Equilibrium
• If everyone else does it, I had better do it too.
• Are there any others?
– But if I and my buddy jointly charge him, we are both better off.
– One reason why it doesn't work well for two player games
• Second problem: Definition of Strategy is ambiguous.
– Consider an oligopoly--an industry with (say) 6 firms
– Each chooses how much to produce, what price to ask
• But that’s really a single choice, since
• The price I charge determines how much I can sell
– Nash equilibrium: I choose the best strategy given what others choose
• But am I choosing the best price, given their price
• Or the best quantity, given their quantity?
• It turns out that the solutions in the two cases are entirely different


Solution Concepts

• Subgame Perfect equilibrium--if it exists and no commitment is possible

• Strict dominance--"whatever he does …" Prisoner's Dilemma

• Von Neumann solution to 2 player game

• Nash Equilibrium

• And there are more


Von Neumann Solution to Many Player Games

• Outcome--how much each player ends up with
• Dominance: Outcome A dominates B if there is some group of players, all of whom do better under A (end up with more) and who, by working together, can get A for themselves

• A solution is a set of outcomes none of which dominates another, such that every outcome not in the solution is dominated by one in the solution

• Consider, for example, …


Three Player Majority Vote
• A dollar is to be divided among Ann, Bill and Charles by majority vote.
– Ann and Bill propose (.5,.5,0)--they split the dollar, leaving Charles with nothing

– Charles proposes (.6,0,.4). Ann and Charles both prefer it, so it beats the first proposal, but …

– Bill proposes (0, .5, .5), which beats that …

– And so around we go.

• One Von Neumann solution is the set: (.5,.5,0), (0, .5, .5), (.5,0,.5) (check)

• There are others--lots of others.
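A sketch of the "check" invited above, under the assumption that any two players form a majority that can enforce an outcome for themselves: no member of the proposed set dominates another, while an outside proposal such as (.6, 0, .4) is dominated by a member of the set.

```python
# Sketch checking the "no outcome in the set dominates another" half of the
# Von Neumann solution for the three-player majority-vote game. Here a
# coalition of any two players is a majority and can enforce an outcome.

from fractions import Fraction as F

solution = [(F(1, 2), F(1, 2), F(0)),
            (F(0),    F(1, 2), F(1, 2)),
            (F(1, 2), F(0),    F(1, 2))]

def dominates(a, b):
    """A dominates B if some majority (any 2 players) all do strictly better under A."""
    better = [i for i in range(3) if a[i] > b[i]]
    return len(better) >= 2

# Internal stability: no member of the set dominates another.
assert not any(dominates(a, b) for a in solution for b in solution if a != b)

# But outcomes outside the set can be dominated by some member, e.g. (.6, 0, .4),
# which (0, .5, .5) beats for both Bill and Charles.
outsider = (F(6, 10), F(0), F(4, 10))
print(any(dominates(s, outsider) for s in solution))   # True
```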


Applied Schelling ("Focal") Points• In a bargaining situation, people may end up with a solution because it is perceived as

unique, hence better than continued (costly) bargaining– We can go on forever as to whether I am entitled to 61% of the loot or 62%– Whether to split 50/50 or keep bargaining is a simpler decision.

• But what solution is unique is a function of how people think about the problem– The bank robbery was done by your family (you and your son) and mine (me and my wife and

daughter)– Is the Schelling point 50/50 between the families, or 20% to each person?– Obviously the latter (obvious to me--not to you).

• It was only a two person job--but I was the one who bribed a clerk to get inside information

– Should we split the loot 50/50 or
– The profit 50/50--after paying me back for the bribe?

• In bargaining with a union, when everyone gets tired, the obvious suggestion is to "split the difference."

– But what the difference is depends on each party's previous offers– Which gives each an incentive to make offers unrealistically favorable to itself.

• What is the strategic implication?
– If you are in a situation where the outcome is likely to be agreement on a Schelling point
– How might you improve the outcome for your side?


Experimental Game Theory
• Computers work cheap
• So Axelrod set up a tournament
– Humans submit programs defining a strategy for many times repeated prisoner's dilemma

– Programs are randomly paired with each other to play (say) 100 times

– When it is over, which program wins?

• In the first experiment, the winner was "tit for tat"
– Cooperate in the first round

– If the other player betrays on any round, betray him the next round (punish), but cooperate thereafter if he does (forgive)

• In fancier versions, you have evolution
– Strategies that are more successful have more copies of themselves in the next round

– Which matters, since whether a strategy works depends in part on what everyone else is doing.

– Some more complicated strategies have succeeded in later versions of the tournament, but tit for tat does quite well

• His book is The Evolution of Cooperation
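A toy version of such a match is easy to write. The sketch below uses the standard prisoner's dilemma payoffs (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for a successful betrayal), which are assumptions here, not numbers from the slide.

```python
# Minimal sketch of a repeated prisoner's dilemma match in the spirit of
# Axelrod's tournament. Payoff numbers are the standard ones, not from the slide.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first; then repeat whatever the other player did last round."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play_match(strategy_a, strategy_b, rounds=100):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play_match(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play_match(tit_for_tat, always_defect))    # (99, 104): punished after round 1
```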


Threats, bluffs, commitment strategies
• A nuisance suit.
– Plaintiff's cost is $100,000, as is defendant's cost
– 1% chance that plaintiff wins and is awarded $5,000,000
– What happens?
– How might each side try to improve the outcome
• Airline hijacking, with hostages
– The hijackers want to be flown to Cuba (say)
– Clearly that costs less than any serious risk of having the plane wrecked and/or passengers killed
– Should the airline give in?
• When is a commitment strategy believable?
– Suppose a criminal tries to commit to never plea bargaining?
– On the theory that that makes convicting him more costly than convicting other criminals
– So he will be let go, or not arrested


Game Theory: Review
• Problem: Strategic behavior
– What I should do depends on what you do
– And vice versa
• Abstract representations of games
– Decision tree for sequential games
– Strategy matrix for all games (2D for 2 player)
• Solution concepts
– Subgame perfect equilibrium
– Dominance
– Von Neumann solution to 2 player game
– Nash equilibrium
– Von Neumann solution to many player game


Subgame perfect equilibrium
• Treat the final choice (subgame) as a decision theory problem

– The solution to which is obvious

– So plug it in

– Continue right to left on the decision tree

• Assumes no way of committing and

• No coalition formation
– In the real world, A might pay B not to take what would otherwise be his ideal choice--because that will change what C does in a way that benefits A.

– One criminal bribing another to keep his mouth shut, for instance

• But it does provide a simple way of extending the decision theory approach
– To give an unambiguous answer

– In at least some situations

– Consider our basketball player problem


Dominant Strategy
• Each player has a best choice, whatever the other does
• Might not exist in two senses

– If I know you are doing X, I do Y—and if you know I am doing Y, you do X. Nash equilibrium. Driving on the right. The outcome may not be unique, but it is stable.

– If I know you are doing X, I do Y—and if you know I am doing Y, you don't do X. Unstable. Scissors/paper/stone.


Nash Equilibrium
• By freezing all the other players while you decide, we reduce it to decision theory for each player--given what the rest are doing
• We then look for a collection of choices that are consistent with each other
– Meaning that each person is doing the best he can for himself

– Given what everyone else is doing

• This assumes away all coalitions
– It doesn't allow for two or more people simultaneously shifting their strategy in a way that benefits both

– Like my two escaping prisoners

• It ignores the problem of defining “freezing other players”
– Their alternatives partly depend on what you are doing
– So “freezing” really means “adjusting in a particular way”
– For instance, freezing price, varying quantity, or vice versa

• It also ignores the problem of how to get to that solution
– One could imagine a real world situation where
• A adjusts to B and C
• Which changes B's best strategy, so he adjusts
• Which changes C and A's best strategies …
• Forever …

– A lot of economics is like this--find the equilibrium, ignore the dynamics that get you there


Von Neumann Solution
• aka minimax aka saddlepoint aka ….?
• It tells each player how to figure out what to do
– A value V
– A strategy for one player that guarantees winning at least V
– And for the other that guarantees losing at most V

• Describes the outcome if each follows those instructions

• But it applies only to two person fixed sum games.


VN Solution to Many Player Game
• Outcome--how much each player ends up with
• Dominance: Outcome A dominates B if there is some group of players, all of whom do better under A (end up with more) and who, by working together, can get A for themselves
• A solution is a set of outcomes none of which dominates another, such that every outcome not in the solution is dominated by one in the solution
• Consider, to get some flavor of this, three player majority vote
• A dollar is to be divided among Ann, Bill and Charles by majority vote.
– Ann and Bill propose (.5,.5,0)--they split the dollar, leaving Charles with nothing
– Charles proposes (.6,0,.4). Ann and Charles both prefer it, so it beats the first proposal, but …
– Bill proposes (0, .5, .5), which beats that …
– And so around we go.
– One solution is the set: (.5,.5,0), (0, .5, .5), (.5,0,.5)


Schelling aka Focal Point
• Two people have to coordinate without communicating
– Either can’t communicate (students to meet)
– Or can’t believe what each says (bargaining)
• They look for a unique outcome
– Because the alternative is choosing among many outcomes
– Which is worse than that
– While the bank robbers are haggling the cops may show up
• But “unique outcome” is a fact
– Not about nature but
– About how people think
• Which implies that
– You might improve the outcome by how you frame the decision
– Coordination may break down on cultural boundaries
• Because people frame decisions differently
• Hence each thinks the other is unreasonably refusing the obvious compromise.


Conclusion
• Game theory is helpful as a way of thinking
– “Since I know what his final choice will be, I should …”

– Commitment strategies

– Prisoner’s dilemmas and how to avoid them

– Mixed strategies where you don’t want the opponent to know what you will do

– Nash equilibrium

– Schelling points

• But it doesn’t provide a rigorous answer
– To either the general question of what you should do

– In the context of strategic behavior

– Or of what people will do


Insurance
• Risk Aversion
– I prefer a certainty of paying $1000 to a .001 chance of $1,000,000
– Why?
• Moral Hazard
– Since my factory has fire insurance for 100%
– Why should I waste money on a sprinkler system?
• Adverse selection
– Someone comes running into your office
• “I want a million dollars in life insurance, right now”
• Do you sell it to him?
– Doesn’t the same problem exist for everyone who wants to buy?
• Buying signals that you think you are likely to collect
• I.e. a bad risk
• So price insurance assuming you are selling to bad risks
• Which means it’s a lousy deal for good risks
– So good risks don’t get insured
• Even if they would be willing to pay a price
• At which it is worth insuring them


Risk Aversion
• The standard explanation for insurance
– By pooling risks, we reduce risk
• I have a .001 chance of my $1,000,000 house burning down
• A million of us will have just about 1000 houses burn down each year
• For an average cost of $1000/person/year
– Why do I prefer to reduce the risk?
• The more money I have, the less each additional dollar is worth to me
– With $20,000/year, I buy very important things
– With $200,000/year, the last dollar goes for something much less important
• So I want to transfer money
– To futures where I have little, because my house burned down
– From futures where I have lots--house didn’t burn
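A minimal numerical sketch of this argument: with a concave utility function (log utility is assumed here purely for illustration; the slide only says extra dollars are worth less), paying a $1000 premium for certain gives higher expected utility than bearing the .001 chance of losing the house.

```python
# Sketch of why risk aversion makes insurance attractive: with a concave
# (diminishing-marginal-value) utility function, a sure $1,000 premium beats
# a 0.001 chance of losing the $1,000,000 house. Log utility and the starting
# wealth figure are assumptions for illustration, not numbers from the slide.

import math

wealth = 1_100_000          # hypothetical starting wealth, house included
loss, p = 1_000_000, 0.001  # possible loss and its probability
premium = 1_000             # actuarially fair premium: p * loss

u = math.log                # any concave utility function would do

eu_uninsured = (1 - p) * u(wealth) + p * u(wealth - loss)
eu_insured = u(wealth - premium)

print(eu_insured > eu_uninsured)   # True: insuring gives higher expected utility
```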


“Risk Aversion” a Misleading Term
• Additional dollars are probably worth less to me the more I have
• It doesn’t follow that (say) additional years of life are
– Without the risky operation I live fifteen years
– If it succeeds I live thirty, but …
– Half the time the operation kills the patient
– And I always wanted to have children
• So really “risk aversion in money”
• Aka “declining marginal utility of income.”


Moral hazard
• I have a ten million dollar factory and am worried about fire
– If I can take a ten thousand dollar precaution that reduces the risk by 1% this year, I will (.01 x $10,000,000 = $100,000 > $10,000)
– But if the precaution costs a million, I won't.
• Suppose I insure my factory for $9,000,000
– It is still worth taking a precaution that reduces the chance of fire by 1%

– But only if it costs less than …?

• Of course, the price of the insurance will take account of the fact that I can be expected to take fewer precautions:
– Before I was insured, the chance of the factory burning down was 5%

– So insurance should have cost me about $450,000/year, but …

– Insurance company knows that if insured I will be less careful

– Raising the probability to (say) 10%, and the price to $900,000

• There is a net loss here—precautions worth taking that are not getting taken, because I pay for them but the gain goes mostly to the insurance company.
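The exposure arithmetic behind this slide can be worked out from its own numbers: once $9,000,000 of the loss is shifted to the insurer, the owner bears only $1,000,000 of it, so a 1% risk reduction is worth far less to him than it was uninsured. A quick sketch:

```python
# Worked arithmetic for the moral-hazard example: once the factory is insured
# for $9,000,000, the owner only bears $1,000,000 of the loss, so a precaution
# that cuts the fire risk by 1% is worth at most 1% of that remaining exposure.

factory_value = 10_000_000
insurance_cover = 9_000_000
risk_reduction = 0.01

uninsured_exposure = factory_value
insured_exposure = factory_value - insurance_cover

print(risk_reduction * uninsured_exposure)  # 100000.0: uninsured, any precaution under $100,000 pays
print(risk_reduction * insured_exposure)    # 10000.0: insured, only precautions under $10,000 pay
```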


Solutions?
• Require precautions (signs in car repair shops—no customers allowed in, mandated sprinkler systems)
– The insurance company gives you a lower rate if you take the precautions

– Only works for observable precautions

• Make insurance only cover fires not due to your failure to take precautions (again, if observable)

• Only insure for (say) 50% of value
– There are still precautions you should take and don’t

– But you take the ones that matter most

– I.e. the ones where benefit is at least twice cost

– So moral hazard remains, but its cost is reduced

– Of course, you also now have more risk to bear


A Puzzle
• The value of a dollar changes a lot between $20,000/year and $200,000/year
• But very little between $200,000 and $200,100
• So why do people insure against small losses?
– Service contract on a washing machine
– Even a toaster!
– Medical insurance that covers routine checkups
– Filling cavities, and the like


Is Moral Hazard a Bug or a Feature?
• Big company, many factories, they insure
– Why? They shouldn't be risk averse
– Since they can spread the loss across their factories.
• Consider the employee running one factory without insurance
– He can spend nothing, have 3% chance of a fire
– Or spend $100,000, have 1%--and make $100,000 less/year for the company
– Which is it in his interest to do?
• Insure the factory to transfer cost to insurance company
– Which then insists on a sprinkler system
– Makes other rules
– Is more competent than the company to prevent the fire!


Put Incentive Where It Does the Most Good
• Insurance transfers loss from owner to the insurance company
• Sometimes the owner is the one best able to prevent the loss
– In which case moral hazard is a cost of insurance
– To be weighed against risk spreading gain
• Sometimes the insurance company is best able
– In which case “moral hazard” is the objective
– Sears knows more about getting their washing machines fixed than I do
– So I buy a service contract to transfer the decision to them
• Sometimes each party has precautions it is best at
– So coinsurance--say 50%--gives each an incentive to take
– Those precautions that have a high payoff


Health Insurance
• If intended as risk spreading
– Should be a large deductible
– So I pay for all minor things
• Giving me an incentive to keep costs down
• Since I am paying them
– But cover virtually 100% of rare high ticket items
• If my life is at stake, I want it
• But I don’t want to risk paying even 10% of a million dollar procedure
• But maybe it’s intended
– To transfer to the insurance company
– The incentive to find me a good doctor
– Negotiate a good price
• Robin Hanson’s version
– I decide how much my life is worth
– I buy that much life insurance, from a company that also
– Makes and pays for my medical decisions
– And now has the right incentive to keep me alive


Adverse Selection
• The problem: The market for lemons
– Assumptions
• Used car in good condition worth $10,000 to buyer, $8000 to seller

• Lemon worth $5,000, $4,000

• Half the cars are creampuffs, half lemons

– First try:
• Buyers figure average used car is worth $7,500 to them, $6,000 to seller, so offer something in between

• What happens?

– What is the final result?

• How might you avoid this problem—due to asymmetric information
– Make the information symmetric—inspect the car. Or …

– Transfer the risk to the party with the information—seller insures the car

• What problems does the latter solution raise?
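A minimal sketch of the unraveling, using the numbers from this slide (half creampuffs, half lemons, buyers offering what the average car on the market is worth to them). The loop below is an illustrative simplification of the bargaining process, not a claim about how the slide intends it to proceed.

```python
# Sketch of the "market for lemons" unraveling, with the values from the slide:
# creampuffs worth $10,000 to buyers / $8,000 to sellers, lemons $5,000 / $4,000,
# half of the cars of each kind. Buyers can't tell them apart; sellers can.

buyer_value = {"creampuff": 10_000, "lemon": 5_000}
seller_value = {"creampuff": 8_000, "lemon": 4_000}
on_market = {"creampuff", "lemon"}

while True:
    # Buyers offer what the average car still on the market is worth to them.
    offer = sum(buyer_value[c] for c in on_market) / len(on_market)
    # Sellers only sell cars worth less to them than the offer.
    still_selling = {c for c in on_market if seller_value[c] < offer}
    if still_selling == on_market:
        break
    on_market = still_selling

print(offer, on_market)   # 5000.0 {'lemon'}: only lemons end up trading
```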


Plea Bargaining
• A student raised the following question:

• Suppose we include adverse selection in our analysis of plea bargaining

• What does the D.A. signal by offering a deal?

• What does the defendant signal by accepting?

• Which subset of defendants end up going to trial?


Why insurance matters?
• Most of you won’t be working for insurance companies
• Or even negotiating contracts with them
• But the analysis of insurance will be important
– Almost any contract is in part insurance
– Do you pay salesmen by the month or by the sale?
– Is your house built for a fixed price, or cost+?
– Do you take the case for a fixed price, contingency fee, or hourly charge?