Agents and Causes: Reconciling Competing Theories of Causal Reasoning. Michael R. Waldmann
Mind reading aliens: Causal forces and the Markov Assumption
Agents and Causes: Reconciling Competing Theories of Causal Reasoning
Michael R. Waldmann
Cognitive and Decision Sciences, Department of Psychology, University of Göttingen
With: Ralf Mayrhofer

Overview
1. Causal Reasoning: Two Frameworks
   - Causal Bayes Nets as Psychological Models
     - Overview of empirical evidence
     - Markov violations in causal reasoning
   - Dispositional Theories
     - Force dynamics
     - Agents and patients
2. How Dispositional Intuitions Guide the Structuring of Causal Bayes Nets
   - Experiments: Markov violation
   - Error attribution in an extended causal Bayes net
Causal Reasoning: Two Frameworks

Causal Models: Psychological Evidence
- People are sensitive to the directionality of the causal arrow (Waldmann & Holyoak, 1992; Waldmann, 2000, 2001)
- People estimate causal power based on covariation information, and control for co-factors (Waldmann & Hagmayer, 2001)
- Causal Bayes nets as models of causal learning (Waldmann & Martignon, 1998)
- People (and rats) differentiate between observational and interventional predictions (Waldmann & Hagmayer, 2005; Blaisdell, Sawa, Leising, & Waldmann, 2006)
- Counterfactual causal reasoning (Meder, Hagmayer, & Waldmann, 2008, 2009)
- Categories and concepts: the neglected direction (Waldmann & Hagmayer, 2006)
- A computational Bayesian model of diagnostic reasoning (Meder, Mayrhofer, & Waldmann, 2009)
- Abstract knowledge about mechanisms influences the parameterization of causal models (Waldmann, 2007)

"The Bayesian Probabilistic Causal Networks framework has stimulated a productive research program on human inferences on causal networks. Such inferences have clear analogues in everyday judgments about social attributions, medical diagnosis and treatment, legal reasoning, and in many other domains involving causal cognition. So far, research suggests two persistent deviations from the normative model. People's inferences about one event are often inappropriately influenced by other events that are normatively irrelevant; they are unconditionally independent or are screened off by intervening nodes. At the same time, people's inferences tend to be weaker than are warranted by the normative framework."
Rottman, B., & Hastie, R. (2013). Reasoning about causal relationships: Inferences on causal networks. Psychological Bulletin.
Causal Bayes Net Research: Summary

Markov Violations in Causal Reasoning

The Causal Markov Condition
Definition: Conditional on its parents (direct causes), each variable X is independent of all other variables that are not causal descendants of X (i.e., a cause screens off each of its effects from the rest of the network).
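The screening-off property can be checked by exact enumeration in a minimal common-cause network. The sketch below is illustrative only: the noisy-OR parameterization and the parameter values are assumptions, not values from the experiments. It shows that, given C, conditioning on a sibling effect E2 leaves P(E1 | C) unchanged.

```python
# Minimal common-cause net C -> E1, C -> E2 with noisy-OR links.
# Illustrative parameters; not taken from the experiments reported here.
P_C = 0.7   # prior P(C = 1)
W   = 0.8   # causal strength of each link C -> Ei
B   = 0.1   # background rate of each effect

def p_effect(e, c):
    """P(Ei = e | C = c) under a noisy-OR parameterization."""
    p1 = 1 - (1 - B) * (1 - W) ** c
    return p1 if e == 1 else 1 - p1

def joint(c, e1, e2):
    """Joint probability; E1 and E2 are independent given C."""
    pc = P_C if c == 1 else 1 - P_C
    return pc * p_effect(e1, c) * p_effect(e2, c)

# P(E1=1 | C=1): sibling effect E2 marginalized out
p_e1_given_c = p_effect(1, 1)

# P(E1=1 | C=1, E2=0): sibling effect observed to be absent
p_e1_given_c_e2 = joint(1, 1, 0) / (joint(1, 1, 0) + joint(1, 0, 0))

# The causal Markov condition: C screens E1 off from E2.
assert abs(p_e1_given_c - p_e1_given_c_e2) < 1e-9
```

The experiments discussed below show that human judgments deviate from this equality: observing a sibling effect to be absent lowers the judged probability of the target effect.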
Recent research shows that human reasoners do consider the states of other effects of a target effect's cause when inferring from the cause to a single effect (see Rehder & Burnett, 2005; Walsh & Sloman, 2007).

[Figure: common-cause network C → E1, E2, E3, contrasting test patterns in which the sibling effects are present vs. absent]

An Augmented Causal Bayes Net?
The Causal Markov Condition: Psychological Evidence (Rehder & Burnett, 2005)
- Subjects typically translate causal model instructions into representations that on the surface violate the Markov condition.
- Humans seem to add assumptions about hidden mechanisms that lead to violations of screening-off, even when the cover stories are abstract.
- It is unclear where the assumptions about hidden structure come from. People typically have only sparse knowledge about mechanisms (Rozenblit & Keil, 2002).
Dispositional Theories
Abstract Dispositions, Force Dynamics, and the Distinction between Agents and Patients
- Causation as the product of an interaction between causal participants (agents, patients) which are endowed with dispositions, powers, or capacities.
  E.g., Aspirin has the capacity to relieve headaches. Brains have the capacity to be influenced by Aspirin.
- Agents (who don't have to be humans) are the active entities emitting forces. Patients are the entities acted upon by the agents. Patients more or less resist the influence of the agents.
- Intuitions about abstract properties of agents and patients may guide causal reasoning in the absence of further mechanism knowledge.

Wolff's Theory of Force Dynamics (Wolff, 2007)

                 Patient tendency   Affector (i.e., agent)-   Endstate
                 for endstate       patient concordance       approached
Cause            No                 No                        Yes
Allow (enable)   Yes                Yes                       Yes
Prevent          Yes                No                        No

Examples:
- Winds caused the boat to heel (cause)
- Vitamin B allowed the body to digest (allow)
- Winds prevented the boat from reaching the harbor (prevent)
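Wolff's three-dimensional scheme amounts to a lookup table. The sketch below encodes it directly; the function name and boolean encoding are mine, while the feature combinations come from the table above (other combinations fall outside the three concepts covered here).

```python
def force_concept(patient_tendency, concordance, endstate_approached):
    """Map Wolff's (2007) three dimensions onto CAUSE / ALLOW / PREVENT.
    Booleans encode the Yes/No cells of the force-dynamics table."""
    table = {
        (False, False, True):  "CAUSE",    # winds caused the boat to heel
        (True,  True,  True):  "ALLOW",    # vitamin B allowed the body to digest
        (True,  False, False): "PREVENT",  # winds prevented the boat from reaching the harbor
    }
    # Combinations not in the table are not covered by these three concepts.
    return table.get((patient_tendency, concordance, endstate_approached))

assert force_concept(False, False, True) == "CAUSE"
assert force_concept(True, False, False) == "PREVENT"
```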
Problems
- Where does the knowledge about tendencies come from if covariation information is excluded?
- How can predictive and diagnostic inferences within complex causal models be explained?
- How do we know whether a causal participant plays the role of an agent or a patient?

How Dispositional Intuitions Guide the Structuring of Causal Bayes Nets

Hypotheses
- Both agents and patients are represented as capacity placeholders for hidden internal mechanisms.
- There is a tendency to blame the agent to a large extent for both successful and unsuccessful causal transmissions.
- These intuitions can be represented by elaborating or re-parameterizing the causal Bayes net.

Experiments: Markov Violation
An Unfamiliar Domain: Mind-Reading Aliens (see also Steyvers et al., 2003)
[Figure: aliens whose thought bubbles read POR]
POR = "food" (in the alien language)
Dissociating Causes and Agents
I.  Cause → Effect corresponds to Agent → Patient
II. Cause → Effect corresponds to Patient → Agent

Manipulating the Agent Role

Sender Condition (Cause Object as Agent)
"Gonz is capable of sending out his thoughts, and hence transmitting them into the heads of Murks, Brrrx, and Zoohng."
Reader Condition (Effect Objects as Agents)
"Murks, Brrrx, and Zoohng are capable of reading the thoughts of Gonz."
[Figure: Gonz (top) connected to Murks, Brrrx, and Zoohng (bottom)]

Experiment 1a: Which Alien is the Cause? (Intervention Question)
Imagine POR was implanted in the head of the cause/effect alien. How probable is it that the other alien thinks of POR?

Experiment 1b: Blame Attributions
Who is more responsible if the cause is present and the effect absent: the cause alien or the effect alien?
Markov Violations: Experiment 2
Instruction: Four aliens either think of POR or not; the thoughts of the pink top alien (cause) covary with the thoughts of the bottom aliens (effects); aliens think of POR 70% of the time.
Test Question: Imagine 10 situations with this configuration. In how many instances does the right alien think of POR?

Predictions
Sender Condition: The pattern seems to indicate that something is wrong with Gonz's capacity to send. Hence, the probability of Murks having Gonz's thought should be low (i.e., strong Markov violation).
Reader Condition: The pattern seems to indicate that something is wrong with Brrrx's and Zoohng's capacity to read. Hence, the probability of Murks having Gonz's thought should remain relatively intact (i.e., weak Markov violation).

Results: Experiment 2
Error Attribution in an Extended Causal Bayes Net

Standard Model: a cause C produces an effect E with causal strength wC.

Distinguishing between Two Types of Error Sources: differentiating between cause-based (FC) and effect-based (FE) preventers.

Simplified Version: Error Attribution in a Common-Cause Model
- C influences E1, E2, ..., En, each link with strength wC.
- FC is an unobserved common preventer, and must be inferred from the states of C and its effects.
- When C involves the agent, the strength of FC is high (i.e., error is mainly attributed to C); when E involves the agent, the strength of FC is low (hence errors are primarily attributed to the individual effects, i.e., to FE, that is, to wC).

Model Predictions
[Figure: common-cause network C → E1, E2, E3 with common preventer FC]
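The qualitative prediction of the common-preventer model can be reproduced by exact enumeration over the unobserved FC. This is a simplified sketch under stated assumptions, not the fitted model from the talk: each link C → Ei transmits with probability wC unless FC is present, FC blocks all links at once, and the parameter values are arbitrary.

```python
def p_e1_given_pattern(w_c, p_fc, n_absent=2):
    """P(E1=1 | C=1, E2=0, ..., E_{n_absent+1}=0) when an unobserved common
    preventer FC (prior p_fc) blocks C's influence on every effect, and each
    link otherwise transmits with probability w_c. Illustrative sketch only."""
    num = den = 0.0
    for fc in (0, 1):                        # enumerate the hidden preventer
        prior = p_fc if fc else 1.0 - p_fc
        link = 0.0 if fc else w_c            # per-effect transmission probability
        p_absent = (1.0 - link) ** n_absent  # the observed sibling effects are off
        den += prior * p_absent
        num += prior * p_absent * link       # ... and E1 comes on anyway
    return num / den

weak_fc   = p_e1_given_pattern(w_c=0.8, p_fc=0.1)  # reader condition: blame the effects
strong_fc = p_e1_given_pattern(w_c=0.8, p_fc=0.5)  # sender condition: blame the cause

# A stronger common preventer yields a larger Markov violation:
assert strong_fc < weak_fc
```

With no siblings observed (n_absent=0) the expression reduces to (1 - p_fc) * w_c, so the drop relative to that baseline measures the size of the violation: absent siblings raise the posterior probability that FC is present, and the more so the stronger FC is.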
The strength of FC (red, green, blue) influences the size of the Markov violation (i.e., the slope over wFC).

Further Predictions (1): A/B Case
- In the basic experiments an asymmetry between the two states of the cause was found.
- In the absent case the cause is not active; thus mechanism assumptions cannot have an influence.
Prediction: When both states of the cause are described as active, the differential assumptions about error attribution should matter in both cases.

Markov Experiment A/B: Results
N = 56

Further Predictions (2): Causal Chains
If each C comes with its own FC, the difference between the reading and sending conditions should completely disappear in a causal chain situation.
Chain Experiment: Results
Summary
- Causal model instructions are typically augmented with hidden structure.
- In the absence of specific mechanism knowledge, intuitions about abstract dispositional properties of causal participants guide the structuring of the models.