Ratifiability and Causal Decision Theory: Comments on Eells and Seidenfeld
Author: William Harper
Source: PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1984, Volume Two: Symposia and Invited Papers (1984), pp. 213-228
Published by: The University of Chicago Press on behalf of the Philosophy of Science Association
Stable URL: http://www.jstor.org/stable/192506


Ratifiability and Causal Decision Theory: Comments on Eells and Seidenfeld

William Harper

University of Western Ontario

1. Comments on Eells: Causal Utility provides the proper framework for explicating Ratifiability and needs no Metatickles to do it

I want to begin by saying that, though I shall be taking issue with Eells on a number of points, I am very impressed with the clarity and integrity of thought he has displayed in this paper.

In his earlier work (1980, 1982) Eells defended evidential decision theory from such apparent counter-examples as Newcomb's problem and the Prisoners' Dilemma by arguing that a rational agent's knowledge of his own beliefs and desires will act as a metatickle to screen off the evidential relevance of his acts to outcomes they do not causally influence. Where K is an outcome determining condition (say, the million being in the closed box in Newcomb's problem) believed by the agent to be causally independent of his choice between act A1 (taking one box) or A2 (taking both boxes) and R specifies the agent's beliefs and desires, R will count as a screening-off metatickle just in case P(K | R & A1) = P(K | R & A2). Since the ideally rational agent is presumed to know R, he will not treat his choice between A1 and A2 as evidence about K. With the evidential relevance of the acts thus screened off, evidential decision theory will lead him to the same sure-thing recommendation as causal decision theory (A2, taking both boxes). It has seemed to defenders of causal decision theory that self-knowledge sufficient to produce such a screening-off metatickle is too strict a requirement to place on rational decision makers. Eells himself points out that this is especially so for an agent at an early stage of deliberation who does not yet know which act is rational in light of the beliefs and desires described by R.

In this paper Eells weakens his account of metatickles to include any information about one's own deliberation: "A metatickle can be thought of, basically, as a piece of information about one's deliberation." (Eells 1985, p. 178). He uses Jeffrey's idea of ratifiability, reformulated to operate in a context of deliberation dynamics, to produce an apparently less restrictive metatickle defence of evidential decision theory. Finally, he produces a decision problem designed to show that causal decision theory cannot avoid the metatickle intricacies he uses to rescue evidential decision theory from its apparent difficulties in Newcomb-type problems.

PSA 1984, Volume 2, pp. 213-228. Copyright © 1985 by the Philosophy of Science Association.

Jeffrey's ratification defence of evidential decision theory and Eells' newer modification of it both depend, in the end, on getting the agent into a state where screening-off metatickles apply. This, I shall argue, is still an objectionable requirement. More importantly, I shall argue that causal decision theory provides a much more congenial framework for formulating the ratifiability idea and that, so formulated, ratifiability is a natural and correct requirement on rational choice. Finally, I shall show that such a ratifiability requirement yields a correct solution to Eells' problem within causal decision theory, without in any way introducing screening-off metatickles.

1.1. Jeffrey's Ratifiability

According to Richard Jeffrey (1983, p. 18) an act is ratifiable just in case no alternative has a higher expected utility relative to the assumption that I choose the act in question. He attempted to formulate this idea within the context of evidential decision theory. In order to assess the evidential utility of other acts relative to the assumption that A1 is chosen, the hypothetical posterior P_A1 representing this assumed post-choice point of view must allow conditional probabilities on other acts to be non-trivially specified, so that the definition

V_A1(A2) = Σ_i P_A1(Bi | A2) · Des(A2, Bi)

will not collapse into incoherence. Jeffrey attempted to do this by defining P_A1 not as conditional probability on doing A1 (which would have P_A1(A2) = 0), but as conditional probability on the assumption that A1 is chosen, where it is assumed that there is some probability of a slip between choice and act so that

P(I choose A1 but do A2) > 0

This makes room for the definition

P_A1(Bi | A2) = P(Bi | I choose A1 but do A2)

without involving any machinery for conditional probabilities on conditions having zero probability.

Jeffrey's account of ratification was developed as part of his attempt to make evidential decision theory yield up recommendations that would agree with those of causal decision theory in the Prisoners' Dilemma. It can do this only if the choices involved act as screening-off metatickles so that in, e.g., the hypothetical posterior corresponding to A1, not confessing (taking one box only in Newcomb's problem), we have

P_A1(Bi | A1) = P_A1(Bi | A2)


where A2 is confessing (taking both boxes in Newcomb's problem), and Bi is either B1, partner's not confessing (predicting you take one box), or B2, confessing (predicting you take both boxes). If these end-point screening-off metatickles apply then the only ratifiable option will be A2, the sure-thing alternative recommended by causal decision theory.

One attraction these end-point metatickles have for Eells' position is that the hypothetical post-choice point of view removes the circumstance he finds most damaging to his original suggestion that an ideally rational agent will use his belief and desire state R to screen off the evidential relevance of his choices. Presumably, from this post-choice point of view, the rational agent is no longer in doubt about which choice is rational given R. The problem, however, is that moving to this post-choice point of view also removes any grounds for saying the agent is irrational if he does not use R (or his choice) to screen off the evidential relevance of the acts themselves. Even at best, the Eellsian rationality considerations only motivate screening off the evidential relevance of one's choices. Any hypothetical slips between choice and act are surely not covered. Indeed, I don't see much motivation of any sort for the demand that

P(Bi | I choose A1 but do A2) = P(Bi | I choose A1)

though I do see some for

P(Bi | I choose A1 and do A1) = P(Bi | I choose A1)

Jeffrey was convinced to give up his ratification defence of evidential decision theory (private communication; see also Jeffrey 1983, p. 20) when van Fraassen pointed out that the defence would not work in a Prisoners' Dilemma situation where you believed your slips would be correlated with those of your partner.

According to Eells, the van Fraassen point may not be a serious problem because "it seems that to the extent to which the agent believes that his act is thus out of his control, he shouldn't think it rationally worthwhile to deliberate anyway." (p. 184). He claims that the problem with Jeffrey's formulation is how to intelligibly construe the sort of hypothetical reasoning required to represent the assumed post choice point of view. I don't agree that there could not be situations where it would be rational for an agent to take the possibility of slips seriously and still have something left to deliberate about (the van Fraassen example seems to me to be just such a one). I also don't see that there need be anything intrinsically unintelligible about the sort of hypothetical reasoning Jeffrey's account demands. I, nevertheless, have sympathy with Eells' reservations.

I also think that the chief problem with Jeffrey's formulation of ratifiability is the way it makes consideration of possible slips between choice and act play a central role in assessing ratifiability. According to Jeffrey:


"The notion of ratifiability is applicable only where, during deliberation, the agent finds it conceivable that he will not manage to perform the act he finally decides to perform, but will find himself performing one of the other available acts instead." (1983, p. 19).

This, it seems to me, is a restriction imposed only by the special difficulties of trying to use the procrustean framework of evidential decision theory to evaluate alternative acts from the evidential point of view corresponding to the assumption that a given act is performed. As I see it the problem is not that hypothetical reasoning about possible slips between choice and act is unintelligible, but rather that such reasoning seems to be irrelevant to the intuitive idea of ratifiability.

1.2. Unstable Choice and Ratifiability Properly Explicated

It is illuminating to formulate the Death in Damascus problem (Gibbard-Harper 1978, p. 136) as a zero-sum game. Your pure strategies are Al (to stay in Damascus) or A2 (to go to Aleppo). Your opponent is Death. His are B1 (to seek you in Damascus) or B2 (to seek you in Aleppo). If you are in the place where he seeks you, you die. If not, you get a reprieve. Let -100 be assigned as your utility for meeting Death and 0 as your utility for not meeting him.

          B1      B2
A1      -100       0
A2         0    -100

You believe Death is very good at predicting your choice. You assign the following conditional probabilities:

P(B1 | A1) ≈ 1 ≈ P(B2 | A2)

You also believe Death doesn't cheat. His choice is causally independent of yours, perhaps already made on the basis of his very accurate reading of your character and past circumstances. The independence assumptions of normal form games are met.

Scenario 1: You decide to stay, A1. This gives you good evidence that Death will seek you here, since P(B1 | A1) ≈ 1. Now, from this new evidential perspective, going, A2, is evaluated as better.

Scenario 2: You decide to go, A2. This gives you good evidence that Death will seek you there, since P(B2 | A2) ≈ 1. Now, from this new evidential perspective, staying is evaluated as better.

Each pure strategy is unstable, because from the hypothetical point of view corresponding to your assumption that you choose it another


alternative is evaluated as better. This instability results from unratifiability, in exactly the sense of Jeffrey's intuitive idea, but it apparently does not depend on any possible slips between choice and act.

In causal decision theory we can represent the evaluation of A2 relative to the assumption that A1 is chosen quite straightforwardly:

U_A1(A2) = Σ_i P(A2 □→ Bi | A1) · Des(A2, Bi).

The agent's belief that the independence assumptions built into normal form games hold is represented by his assigning

P(A □→ Bi | A') = P(Bi | A')

for all A, A' and Bi. This makes the causal evaluation of A2 from the evidential assumption that A1 is chosen reduce to

U_A1(A2) = Σ_i P(Bi | A1) · Des(A2, Bi).

The instability arises because

U_A1(A2) > U_A1(A1)

We have U_A1(A1) ≈ -100 while U_A1(A2) ≈ 0, since P(B1 | A1) ≈ 1 ≈ P(B2 | A2).

I take it to be a great merit of causal utility theory that it provides a much more congenial framework for formulating Jeffrey's intuitive idea. An act is ratifiable just in case, relative to the assumption that it is chosen, it has causal utility at least as high as any alternative.1

A is ratifiable iff U_A(A) ≥ U_A(A') for all A'.

I think that possible slips between choice and act can largely be ignored, so I use the present conditional probability on the act to represent the relevant hypothetical post choice point of view.

When the independence assumptions built into normal form games are met, the evidential relevance of the hypothetical post choice point of view requires nothing beyond the epistemic conditional probabilities of the Bi's on the A's. Such epistemic conditional probabilities on acts are required by evidential decision theory anyway.
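The instability of the pure strategies can be checked mechanically. The following is a minimal Python sketch (with illustrative numbers: a 0.99-accurate predictor and the payoffs from the matrix above) of the ratifiability test U_A(A) ≥ U_A(A'), using the independence assumption to reduce U_A(A') to Σ_i P(Bi | A) · Des(A', Bi):

```python
# Ratifiability check for Death in Damascus under causal decision theory.
# Assumes P(A' []-> Bi | A) = P(Bi | A), so U_A(A') = sum_i P(Bi|A) * Des(A', Bi).

# Desirability matrix: Des[(act, state)]
DES = {
    ("A1", "B1"): -100, ("A1", "B2"): 0,    # stay in Damascus
    ("A2", "B1"): 0,    ("A2", "B2"): -100, # go to Aleppo
}

# Near-perfect predictor (0.99 is an illustrative stand-in for "~ 1"): P(Bi | Aj)
P = {
    ("B1", "A1"): 0.99, ("B2", "A1"): 0.01,
    ("B1", "A2"): 0.01, ("B2", "A2"): 0.99,
}

ACTS, STATES = ["A1", "A2"], ["B1", "B2"]

def u_given_choice(chosen, act):
    """Causal utility of `act` from the point of view on which `chosen` is chosen."""
    return sum(P[(b, chosen)] * DES[(act, b)] for b in STATES)

def ratifiable(act):
    """True iff no alternative beats `act` on the assumption `act` is chosen."""
    return all(u_given_choice(act, act) >= u_given_choice(act, a) for a in ACTS)

for a in ACTS:
    print(a, [round(u_given_choice(a, x), 1) for x in ACTS], ratifiable(a))
```

Each pure strategy, once assumed chosen, makes the other act look better by roughly 98 utiles, so neither A1 nor A2 is ratifiable, exactly as the text argues.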

Jeffrey claims that the choiceworthy acts are the ratifiable ones and suggests: "Pathology is to be expected when the number of ratifiable options is not exactly 1: in such cases you do well to reassess your beliefs and desires." (1983, p. 19) There are lots of apparently non-pathological examples where more than one act is ratifiable.2 Therefore, I claim that ratifiability of your choice is a necessary but not sufficient condition for non-pathological


applications of causal decision theory. The basic recommendation is:

Choose from among your ratifiable alternatives (if there are any) one which maximizes unconditional causal utility.3

There are also problems where no act is ratifiable. For example, suppose you are prevented from using mixed strategies in Death in Damascus. I regard such cases as genuinely pathological and have no qualms about allowing that causal utility theory makes no recommendation in them.4

1.3. Deliberation Dynamics

Imagine you are faced with the Death in Damascus problem. You begin to dither. You tentatively incline toward staying, shifting your belief in the proposition that you will stay to, say, .6. Suppose the shift from your old P0(A1) to this new P1(A1) = .6 summarizes the total new epistemic input provided by this stage of your deliberation; then this new epistemic input ought to generate a corresponding change in your epistemic probability of B1 by Jeffrey's rule, so

P1(B1) = Σ_A P0(B1 | A) · P1(A)

where the A's range over the partition of alternative acts under consideration. If you are only considering A1 and A2 then the assumption that P0(B1 | A1) ≈ 1 ≈ P0(B2 | A2) makes P1(B1) ≈ .6. You now recalculate the expected utility of A1 and A2, using this new epistemic probability. The result is that U1(A1) ≈ -60 while U1(A2) ≈ -40, so that the alternative of going looks better. Now, you have reason to incline toward it. How much? Brian Skyrms (1982, 1984) has provided a number of schemes for answering this question that lead to models of deliberation dynamics where the agent's dithering will eventually settle down on the deliberation fixed point P*(A1) = 1/2 in this problem. Any agent who reasons by Skyrmsian deliberation dynamics on the pure strategies with a best response prior (i.e., P(Bi | Aj) = 0 unless Bi is a best response to Aj) in any zero-sum game must end up in a deliberation fixed point that corresponds to a mixed strategy which is a game-theoretic solution. This, it seems to me, is a virtue of these models of deliberation dynamics.
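The dithering described here can be simulated. The sketch below is a toy adjustment rule, not any of Skyrms' actual schemes: at each stage it updates P(B1) by Jeffrey's rule from the current P(A1), then nudges P(A1) toward whichever act currently has the higher expected utility. The step size and starting inclination are illustrative assumptions.

```python
# Toy deliberation dynamics for Death in Damascus with a perfect predictor,
# P(B1|A1) = 1 and P(B1|A2) = 0.  (Not Skyrms' exact scheme; the 0.01 step
# size and 0.6 starting point are arbitrary assumptions.)

def utilities(p_b1):
    u_a1 = p_b1 * -100 + (1 - p_b1) * 0      # stay in Damascus
    u_a2 = p_b1 * 0 + (1 - p_b1) * -100      # go to Aleppo
    return u_a1, u_a2

p_a1 = 0.6                                   # initial inclination to stay
for _ in range(1000):
    # Jeffrey's rule: P1(B1) = sum_A P0(B1 | A) * P1(A)
    p_b1 = 1.0 * p_a1 + 0.0 * (1 - p_a1)
    u_a1, u_a2 = utilities(p_b1)
    # Nudge P(A1) toward the currently better-looking act.
    step = 0.01 * (u_a2 - u_a1) / 100
    p_a1 = min(1.0, max(0.0, p_a1 - step))

print(round(p_a1, 3))   # prints 0.5, the mixed-strategy fixed point
```

The update contracts toward P(A1) = 1/2 geometrically, so the agent's inclinations settle on the mixed strategy that solves the zero-sum game, matching the behaviour of Skyrms' fixed-point models.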

Eells formulates his version of ratifiability as a requirement on the way a decision rule ought to operate in a context of deliberation dynamics. It seems to come to the requirement that if at some stage n in the deliberation V^n_A1(A2) > V^n_A1(A1), and at no stage m in the deliberation is V^m_A2(A1) > V^m_A2(A2), then the deliberation should end up ranking A2 higher than A1. The evaluation V^n_A1(A2) is, presumably, to be the evidential utility of A2 evaluated from the hypothetical point of view corresponding to the assumption that A1 is chosen at the time of stage n. As far as I can see, this rule requires the same sort of hypothetical reasoning that Eells found objectionable about Jeffrey's original formulation. Certainly, it must face at each stage n the same general problem of providing an evidential evaluation of A2 from a point of view corresponding to the assumption that A1 is chosen. Jeffrey's expedient of considering possible slips between choice and act seems no more palatable here, in the special context of


stage n, than it did in Jeffrey's original formulation, but Eells has suggested no alternative. If, on the other hand, the evaluation is carried out by causal utility then these problems are avoided and there is some initial plausibility for Eells' principle; though, even under this constraint, it seems needlessly complex and needlessly wedded to the idea of deliberation dynamics.

Eells' ratification requirement is not enough to make evidential decision theory deliver the correct sure-thing recommendations generated by causal decision theory in Newcomb-type problems. Eells proposes special conditions on deliberation dynamics that are not met by the schemes Skyrms provided. One is the condition (Assumption 1) that our old friend the screening-off metatickle is to apply at the end point of deliberation. I see no good reason for thinking that an agent would be irrational if he violated this assumption, but there is an indirect difficulty with it as well. If the deliberation dynamics is to use Jeffrey's probability kinematics on inputs consisting of the new probabilities over the acts, then the conditional probabilities of the Ki's on the acts will not change. Thus, any mechanism which is to produce screening-off metatickles at the end point when they are not already there at the beginning will have to provide some alternative to Jeffrey's rule to specify its probability kinematics. Eells has suggested no such alternative.

The other additional assumption Eells makes is a constraint designed to insure that the agent ends up committing himself to one of the basic alternatives. In the Death in Damascus problem Skyrms' scheme will automatically end up at a deliberation fixed point corresponding to the correct mixed strategy, but Eells' scheme will force the agent to opt for one of the irrational pure strategies, unless the mixed strategy alternatives are built in from the start. Moreover, it is not obvious that there is a reasonable way to reformulate this constraint (Eells' Assumption 3) to accommodate the continuum of possible mixed strategies. I think this is another advantage for Skyrms' schemes for deliberation dynamics.

The upshot of all this is that Eells has not succeeded in his attempt to provide a plausible metatickle defence of evidential decision theory. He seems to admit as much.

I admit, of course, that there will be some agents for whom the assumptions of this defense of the evidential theory will not hold true, in particular, the proviso that the agent recalculates expected utility sufficiently many times. But it is arguable that this is not the fault of the theory; indeed, perhaps all we can conclude is that evidential decision theory is simply not adequate for such agents, be that the fault of the agent or the theory. In the next section, we'll see that causal decision theory stands in the same relation to similar agents. (1985, p. 191).

1.4. Solving Eells' Problem

Eells offers the following decision problem as an attempt to show that causal decision theory is no less dependent on metatickles than evidential decision theory. Where F0 is the proposition that a


predictor (Eells' predictor III) predicts you do A0, K1 is the proposition that a Newcomb predictor (Eells' predictor I) predicts that you do A1, and K2 is the proposition that this Newcomb predictor predicts you do A2, the relevant desirability matrix is as follows:

            F0     ¬F0 & K1     ¬F0 & K2
A0          999         999          999
A1    1,000,000   1,000,000            0
A2        1,000   1,001,000        1,000

You believe these predictions are all causally independent of your acts, and you assign P(F0 | A0) ≈ 1 ≈ P(¬F0 & K1 | A1) ≈ P(¬F0 & K2 | A2). According to Eells A2 is the rational act here, but causal utility theory will recommend A1 unless your initial epistemic probability P(F0) < 1/1000. He suggests that causal decision theory can get to the right recommendation only if it employs the sort of metatickle analysis he provided for evidential decision theory. I see no need for this. The basic recommendation to choose from among your ratifiable acts provides everything needed. The conditional probabilities Eells provides in his description of the problem suffice for the calculation of ratifiability as explicated in causal decision theory: U_A1(A1) ≈ 1,000,000, since P(¬F0 & K1 | A1) ≈ 1. Similarly, U_A1(A2) ≈ 1,001,000, so A1 is not ratifiable. A0 is also not ratifiable. A2, however, is ratifiable, since U_A2(A2) ≈ 1,000 > 999 ≈ U_A2(A0) > 0 ≈ U_A2(A1). Therefore, A2 is the only ratifiable act here and it is the recommendation of causal decision theory with its corresponding proper ratification requirement. There is no need to introduce Eells' objectionable screening-off metatickles. There is not even any need to explicitly work out any scheme for probability dynamics.
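The ratifiability calculation for Eells' problem can be carried out directly. This Python sketch takes the approximate conditional probabilities as sharp (each "≈ 1" treated as 1), which is an idealizing assumption:

```python
# Ratifiability check for Eells' problem.  States: F0, ~F0 & K1 ("K1"),
# ~F0 & K2 ("K2").  The sharp probabilities below idealize the text's "~ 1".

DES = {
    ("A0", "F0"): 999,       ("A0", "K1"): 999,       ("A0", "K2"): 999,
    ("A1", "F0"): 1_000_000, ("A1", "K1"): 1_000_000, ("A1", "K2"): 0,
    ("A2", "F0"): 1_000,     ("A2", "K1"): 1_001_000, ("A2", "K2"): 1_000,
}

# P(state | act chosen): each act makes its own predictor state certain.
P = {"A0": {"F0": 1.0, "K1": 0.0, "K2": 0.0},
     "A1": {"F0": 0.0, "K1": 1.0, "K2": 0.0},
     "A2": {"F0": 0.0, "K1": 0.0, "K2": 1.0}}

ACTS = ["A0", "A1", "A2"]

def u(chosen, act):
    """Causal utility of `act` from the point of view on which `chosen` is chosen."""
    return sum(P[chosen][s] * DES[(act, s)] for s in P[chosen])

ratifiable = [a for a in ACTS if all(u(a, a) >= u(a, b) for b in ACTS)]
print(ratifiable)   # prints ['A2']
```

A0 loses to A1 once A0 is assumed chosen (999 vs 1,000,000), A1 loses to A2 once A1 is assumed chosen (1,000,000 vs 1,001,000), and only A2 survives its own assumption, confirming the calculation in the text.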

Eells suggests that building in a ratification clause which would allow the agent to change his choice if it is not ratifiable might lead to trouble and might not get the agent to the desired outcome. The ratification requirement I have proposed doesn't work that way. The procedure is first to check the available alternatives for ratifiability, using your present epistemic conditional probabilities to do this. This is to be followed by using unconditional causal utility to select from among the remaining options. The ratifiability analysis is to be carried out before starting the ordinary causal utility calculation.

Eells also worries that the ratifiability requirement might be viciously circular. I don't see this worry, and I think the application to his own problem shows there is nothing to worry about. I used the conditional probabilities on the acts that Eells specified in the formulation of the problem to carry out the ratifiability calculation. This involves no more circularity than the basic account of evidential decision theory, which uses these same conditional probabilities on acts to calculate their evidential utilities.


1.5. Game Theory

I represented the independence assumption built into the characterization of normal form games as requiring that

P(A' □→ Bi | A) = P(Bi | A)

for each of your opponent's pure strategy options Bi and any of your strategies A, A'. Additional motivation for this explication of game-theoretic independence as the Gibbard-Harper counterfactual representation of causal independence is provided by considering the objective conditional chances corresponding to your opponent's mixed strategies. Where q is the appropriate probability vector, let K_q be the chance hypothesis corresponding to the assumption that the other player plays mixed strategy (q1 B1, ..., qn Bn). For each of the other player's pure strategies Bi and all of your strategies A, K_q specifies objective conditional chances equal to the unconditional chances specified in the corresponding mixed strategy, that is

C_q(Bi | A) = q_i = C_q(Bi)

where C_q is the chance distribution specified in chance hypothesis K_q. The independence assumption built into normal form games is, thus, represented as the requirement that the Bi's be stochastically independent of any choice of strategy you might make in each of the objective chance distributions corresponding to your opponent's mixed strategies.

The appropriate evaluation of P(A' □→ Bi | A) is as your epistemic expectation conditional on A of the objective conditional chance of Bi given A', that is:

P(A' □→ Bi | A) = Σ_q P(K_q | A) · C_q(Bi | A'),

where this Σ is an integral defined relative to a probability density representing your epistemic conditional probability distribution over the K_q's. Your epistemic conditional belief P(Bi | A) is the epistemic conditional expectation on A of the unconditional chance of Bi, that is

P(Bi | A) = Σ_q P(K_q | A) · C_q(Bi)

Therefore, the independence assumption that makes C_q(Bi | A') = C_q(Bi) for each of the K_q's requires

P(A' □→ Bi | A) = P(Bi | A)

which is the Gibbard-Harper representation of causal independence of Bi from A' holding in your epistemic conditional probabilities on A.5
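This reduction can be illustrated with a discrete toy version of the integral, replacing the continuum of chance hypotheses K_q with just two hypothetical mixed strategies (all numbers below are arbitrary assumptions for illustration):

```python
# Discrete sketch: P(A' []-> Bi | A) as the expectation over chance
# hypotheses K_q of C_q(Bi | A'), where game-theoretic independence makes
# C_q(Bi | A') = C_q(Bi) = q_i, so the counterfactual probability collapses
# to P(Bi | A).  Two hypothetical opponent mixed strategies over (B1, B2):
CHANCES = {"Kp": (0.9, 0.1), "Kq": (0.2, 0.8)}

# Your epistemic probability of each hypothesis conditional on your act
# (non-trivial dependence on A models strategic reasoning):
P_K_GIVEN_A = {"A1": {"Kp": 0.7, "Kq": 0.3},
               "A2": {"Kp": 0.1, "Kq": 0.9}}

def p_counterfactual(a, a_prime, i):
    """P(A' []-> Bi | A).  The act a_prime drops out precisely because each
    hypothesis makes Bi stochastically independent of your strategy."""
    return sum(P_K_GIVEN_A[a][k] * CHANCES[k][i] for k in CHANCES)

def p_conditional(a, i):
    """P(Bi | A): expectation on A of the unconditional chance of Bi."""
    return sum(P_K_GIVEN_A[a][k] * CHANCES[k][i] for k in CHANCES)

# Independence makes the two coincide for every A, A', Bi, even though
# P(Bi | A1) and P(Bi | A2) differ (strategic dependence survives):
for a in ("A1", "A2"):
    for ap in ("A1", "A2"):
        for i in (0, 1):
            assert p_counterfactual(a, ap, i) == p_conditional(a, i)
print("P(A' []-> Bi | A) = P(Bi | A) for all A, A', Bi")
```

Note that P(B1 | A1) and P(B1 | A2) still differ in this sketch, which is exactly the point of the next paragraph: causal independence is compatible with non-trivial strategic conditional probabilities on your own acts.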

One way to represent strategic reasoning in game-theoretic situations is by assigning non-trivial conditional probabilities to your opponent's choices on your choices, even though you accept the causal independence assumption built into the specification of the game. This is exactly what you are doing when you assign P(B1 | A1) ≈ 1 in Death in


Damascus. One very interesting type of strategic reasoning for game theory can be investigated by examining best response priors.

P is a best response prior iff P(B | A) = 0 unless B is some best response to A.

One nice result of such an investigation is that in zero-sum games only Nash equilibrium strategies are ratifiable by a best response prior.6

One serious drawback of Eells' screening-off metatickle restriction is that it would rule out as irrational any non-trivial strategic reasoning in any normal form game. Even at the end point of deliberation one might want to be able to ratify his choice of strategies by best response reasoning. I certainly do not see anything irrational about continuing to assign best response conditional probabilities on my other strategies even after I have decided on some particular one of them. This would seem to be essential if I wanted to be able to use best response reasoning to ratify the choice I have made. Eells' screening-off metatickle requirement would not allow any such application of strategic reasoning. This seems to me to be far too high a price to pay.

2. Comments on Seidenfeld: Neither perfect predictions nor third person cases prove that I ought to take one box

I want to say something about two considerations Seidenfeld raises against causal decision theory. First he agrees with Levi (1975) that the probability one case in Newcomb's problem is a clear choice between a million (if you take one box) and a thousand (if you take two boxes). He suggests that this is a serious challenge to causal decision theory and that it has not been answered in the literature. Second, he considers third person cases. If your friend is faced with the Newcomb choice then the price you would pay for his prospect (after his choice but before the box is opened) would be much higher if he takes one box than if he takes two boxes. Seidenfeld also proposes a partition between correct and incorrect prediction as the basis for an appropriate Savage-style matrix in those Newcomb problems where the epistemic conditional probabilities are so balanced that the correctness of the prediction is independent of your choice. I think it is clear that such a partition would be appropriate only if each of the relevant objective chance hypotheses made the objective chance of correct prediction independent of your choice. This, however, would require making the chance that one box was predicted depend on your choice if you have a choice. (See note 8.)

2.1. The Probability One Case

I first want to consider the corresponding epistemic utility argument for cooperation in the Prisoners' Dilemma. Let A1 be your cooperative strategy and B1 your partner's, while A2 and B2 are your respective strategies to defect. Suppose the utilities are given by the following matrix:


           B1        B2
A1      (3,3)    (0,10)
A2     (10,0)     (1,1)

If P(B1 | A1) and P(B2 | A2) are sufficiently high the evidential utility of A1 will be higher than that of A2. Though many deplore the difficulties of motivating cooperation in the all too frequent social situations where Prisoners' Dilemma-like utility assignments seem to apply, few would endorse this spurious way out when the corresponding conditional probabilities are less than one.
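A quick calculation shows where this evidential argument kicks in. In the sketch below the matching probability p = P(B1 | A1) = P(B2 | A2) is swept over two illustrative values; with the matrix above, cooperation overtakes defection in evidential utility once p exceeds 5/6:

```python
# Evidential utilities in the Prisoners' Dilemma above, assuming a single
# symmetric matching probability p = P(B1|A1) = P(B2|A2) (an illustrative
# simplification).  Your payoffs only: Des[(your act, partner's act)].
DES = {("A1", "B1"): 3, ("A1", "B2"): 0,
       ("A2", "B1"): 10, ("A2", "B2"): 1}

def v(act, p_match):
    """Evidential utility of `act` given matching probability p_match."""
    p_b1 = p_match if act == "A1" else 1 - p_match
    return p_b1 * DES[(act, "B1")] + (1 - p_b1) * DES[(act, "B2")]

for p in (0.5, 0.9):
    print(p, v("A1", p), v("A2", p))
# Defection has the higher evidential utility at p = 0.5; cooperation wins
# at p = 0.9.  The crossover is at 3p = 10 - 9p, i.e. p = 5/6.
```

This makes vivid why the argument is "spurious" by the author's lights: the evidential ranking flips with p even though the causal dominance of defection is untouched.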

According to the reasoning Seidenfeld proposes, your matrix reduces to

A1      3
A2      1

when P(B1 | A1) = 1 = P(B2 | A2), because you are certain that the (A1,B2) and (A2,B1) outcomes will not arise. The preceding discussion of game theory makes it clear that such certainty alone is not sufficient grounds to reduce the matrix. Even such extreme epistemic dependence is compatible with having independence in the relevant objective chance hypotheses, and the matrix cannot be reduced unless you hold that the objective conditional chances of B1 on A1 and B2 on A2 are both 1. Your epistemic conditional probabilities vary from your unconditional epistemic probabilities (and from one another) by representing hypothetical redistributions of your epistemic probability over the chance hypotheses, but these redistributions cannot change the conditional chances built into the hypotheses themselves.

The independence assumption built into normal form games makes the objective chance of B1 independent of the choices available to you in each of the chance hypotheses corresponding to your partner's available choices of strategy. The matrix reduction Seidenfeld argues for is inconsistent with the assumption that you are in a normal form game, but the conditional certainties Seidenfeld appeals to are consistent with this assumption; therefore his argument is not valid.7 One needs to consider the relevant chance hypotheses before concluding that the matrix can be reduced in a probability one case.

This shows that there can be perfect predictor cases of Newcomb's problem that don't reduce to a choice between a thousand and a million. The two box choice will still be rational in a perfect predictor case, provided each of the relevant chance hypotheses makes the chance that the million is in the closed box independent of your choice.8
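A sketch of the causal-utility bookkeeping behind this point (the dollar amounts follow the standard Newcomb setup; the hypothesis labels are mine, and each chance hypothesis is assumed to make the box's content independent of your choice, as the text requires):

```latex
\text{Chance hypotheses: } K_M \text{ (million in the closed box)}, \quad
K_\emptyset \text{ (closed box empty)} \\
U(\text{two boxes}) = P(K_M)\,(1{,}000{,}000 + 1{,}000) + P(K_\emptyset)\,(1{,}000) \\
U(\text{one box}) = P(K_M)\,(1{,}000{,}000) + P(K_\emptyset)\,(0) \\
U(\text{two boxes}) - U(\text{one box}) = 1{,}000 \quad \text{for every value of } P(K_M)
```

The thousand-dollar advantage holds under any distribution over the chance hypotheses, which is why the two box choice remains rational even when the epistemic conditional probabilities of correct prediction equal one.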

2.2. The third person case

If you made conditional bets on your friend's getting a million in Newcomb's problem you would be willing to accept higher odds conditional on the one box choice, wouldn't you? What would you regard as the fair market price for your friend's prospect (after the choice but before the box is opened)? Wouldn't this market value be much higher if the choice were one box than if it were two boxes? The answers to these questions are clearly yes. Seidenfeld sees no difference between third and first person cases. He suggests that this should erode your confidence in the two box choice for your own case. I don't agree.
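To see why the answers are clearly yes, here is one illustrative pricing calculation (the conditional probabilities 0.9 and 0.1 are my own assumed values, not the author's; the payoffs are the standard Newcomb amounts):

```latex
\text{Assume } P(\text{million} \mid \text{one box}) = 0.9, \quad
P(\text{million} \mid \text{two boxes}) = 0.1. \\
\text{Fair price of a one-boxer's prospect: } 0.9 \times 1{,}000{,}000 = 900{,}000 \\
\text{Fair price of a two-boxer's prospect: } 0.1 \times 1{,}000{,}000 + 1{,}000 = 101{,}000
```

The market value of the prospect is far higher conditional on the one box choice, even though (as the text goes on to argue) this has no force in the first person case.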

Consider the case where you have two friends, a one boxer and a two boxer. They have both made their choices and neither of their closed boxes has been opened. You are now offered a choice between money prizes to be equal to the winnings of whichever friend you choose. You will, of course, pick the one boxer. This choice gives you an opportunity to manipulate the objective chance that you get rich. By choosing the one boxer you make your prospect for getting the million depend on the chance set up that obtained for that friend (i.e., on what the predictor predicted about that one boxer). Had you chosen the two boxer you would have made your prospects depend on a different chance set up (on what the predictor predicted about that two boxer). If you refrain from taking either opportunity you would make your chance at getting rich depend on yet another chance set up - the status quo in your own case.

The case with one friend who has made his choice is similar to that of two friends. Your choosing to buy your friend's prospect is a way to make your chance of getting a million depend on what the predictor has predicted about that friend. This is a way of manipulating your chance of getting rich by changing the relevant chance set up from your own status quo alone to the result of adding in your friend's chance at a million. If your friend took one box you may have evidence that you were buying into a very favorable chance at a million. So, you might be willing to offer quite a lot for his prospect. On the other hand, if your friend took two boxes you would have evidence that the chance set up you would be buying into is much less favorable to getting the million. In either case you are getting the opportunity to manipulate your objective chance of getting rich by buying into your friend's prospect.

The relevant difference between these third person cases and your own case is that when you are in your own Newcomb problem you will have no opportunity to use your choice to manipulate the objective chance that you get the big bucks by foregoing the small bucks. The same chance set up (what the predictor predicted about you) applies whether you take one box or two boxes.

Notes

1. I first presented this formulation of ratifiability in comments on Reed Richter at the Eastern A.P.A. in December 1983. I am grateful to Richter for showing me that the problem of unstable choice could not be ignored.


2. Lots of normal form coordination games provide examples. The following is one such

          B1       B2

A1      (1,1)    (0,0)

A2      (0,0)    (1,1)

where P(B1|A1) ≈ 1 ≈ P(B2|A2). Rabinowicz (1983) calls this the pleasant coordination problem while Skyrms (1982) calls it the nice demon case.

3. Howard Sobel (1983) apparently suggests that an act ought both to be ratifiable and to have optimal unconditional causal utility relative to all options, not just the ratifiable ones. This would lead to no recommendation in cases where the act with highest unconditional utility was unratifiable. I think this would be wrong, and Eells' example (section 1.4) shows it. Skyrms' (1984) shell game and an example by Rabinowicz (1983) are additional cases, like Eells', where the act with best unconditional utility is unratifiable. Eells' discussion of his example and Richter's (1983) and Weirich's (1985) discussions of instability convince me that ratifiability ought to be required of any option to be taken seriously. If mixed strategies are allowed this requirement is much less restrictive than it might otherwise seem. For example, in Skyrms' shell game (see Eells' note 9 for a specification of the game) the ratifiable options include the pure strategy A1 (picking the first shell) and the mixed strategy (2/3 A2, 1/3 A3). This mixed strategy will be the one picked by the recommendation I endorse, unless the prior probability is overwhelmingly concentrated on the proposition that the predictor has predicted A1.

4. In this I seem to be agreeing with Paul Weirich (1985).

5. Terence Horgan (1981) has suggested that back-tracking conditionals, where P(A □→ Bi) might differ from P(Bi), are appropriate for decisions even in cases where you regard the B's as causally independent of your choice among the A's. The independence assumption built into normal form games provides for a large and interesting class of cases where the chance hypotheses corresponding to your opponent's mixed strategies specify precisely defined evaluations of decision making conditionals that require the Gibbard-Harper non-backtracking representation. I think these examples can be generalized so that P(A □→ Bi) = P(Bi) whenever the relevant alternative chance hypotheses all make Cq(Bi|A) = Cq(Bi). This rules out back-tracking conditionals as appropriate for decision making unless some bizarre hypothesis allowing backward causation is entertained.

6. This result answers an argument by McClennen (1978) designed to show that game theory is inconsistent with its own decision theoretic foundation. It also provides a new rationale for imperfect Nash equilibria. I shall be expanding on these developments in another paper.


7. This undercuts Horgan's claim (1981) that arguments like Seidenfeld's provide independent grounds for the choice recommended by epistemic utility theory.

8. As far as I can see there are no perfect prediction cases where evidential decision theory makes better recommendations than causal decision theory. If your choice influences the content of the prediction (as in cheating or bizarre causal set ups) then both theories will recommend one box. This way of resolving the ambiguities in the problem makes the correctness of the prediction causally independent of your choice, but not its content nor the contents of the box. What we cannot have is a case where the content of the prediction cannot be influenced by your choice, where the correctness of the prediction also cannot be influenced by your choice, and where it is, nevertheless, in your power to choose either option. These three conditions form an inconsistent triad. See Ross and Hubin (1985) for a useful discussion of this point.

9. Eells made this crystal clear for the third person probability one case with an elegant blackboard display at the symposium.


References

Eells, Ellery. (1980). Newcomb's Paradox and the Principle of Maximizing Conditional Expected Utility. Unpublished Ph.D. Dissertation, University of California, Berkeley. University Microfilms Publication Number ADG80-29383.

-------------. (1982). Rational Decision and Causality. Cambridge, England and New York: Cambridge University Press.

-------------. (1985). "Causal Decision Theory." In PSA 1984, Volume 2. Edited by P.D. Asquith and P. Kitcher. East Lansing: Philosophy of Science Association. Pages 177-200.

Gibbard, Alan and Harper, William. (1978). "Counterfactuals and Two Kinds of Expected Utility." In Hooker et al. (1978), Volume 1. Pages 125-162.

Hooker, Clifford A.; Leach, James J.; and McClennen, Edward F. (eds.). (1978). Foundations and Applications of Decision Theory. (The University of Western Ontario Series in Philosophy of Science, Volume 13.) Dordrecht, Holland: D. Reidel Publishing Co.

Horgan, Terence. (1981). "Counterfactuals and Newcomb's Problem." Journal of Philosophy 78: 331-356.

Jeffrey, R.C. (1983). The Logic of Decision. 2nd ed. Chicago and London: University of Chicago Press.

Levi, Isaac. (1975). "Newcomb's Many Problems." Theory and Decision 6: 161-175. (Reprinted in Hooker et al. (1978), Volume 1. Pages 369-383.)

McClennen, Edward F. (1978). "The Minimax Theory and Expected Utility Reasoning." In Hooker et al. (1978), Volume 1. Pages 337-359.

Rabinowicz, Wlodzimierz. (1983). "On Ratificationism: A Critique of Jeffrey's Logic of Decision." Unpublished manuscript.

Richter, Reed. (1983). "Rationality Revisited." A symposium address at the Eastern Division A.P.A. meeting, December 1983.

Ross, Glenn and Hubin, Don. (1985). "Newcomb's Perfect Predictor." Nous 19: 49-86.

Seidenfeld, Teddy. (1985). "Comments on Causal Decision Theory." In PSA 1984, Volume 2. Edited by P.D. Asquith and P. Kitcher. East Lansing: Philosophy of Science Association. Pages 201-212.

Skyrms, Brian. (1982). "Causal Decision Theory." Journal of Philosophy 79: 695-711.

-------------. (1984). Pragmatics and Empiricism. New Haven, Connecticut: Yale University Press.


Sobel, Jordan Howard. (1983). "Expected Utilities and Rational Actions in Choices." Theoria 49: 157-183.

Weirich, Paul. (1985). "Decision Instability." Australasian Journal of Philosophy 63: 465-472.
