The Empirical Untenability of Sentient Artificial Intelligence


Transcript of The Empirical Untenability of Sentient Artificial Intelligence

Page 1: The Empirical Untenability of Sentient Artificial Intelligence

Dickinson College Dickinson Scholar

Student Honors Theses By Year Student Honors Theses

5-22-2011

The Empirical Untenability of Sentient Artificial Intelligence
Andrew Joseph Barron
Dickinson College

Follow this and additional works at: http://scholar.dickinson.edu/student_honors

Part of the Philosophy Commons

This Honors Thesis is brought to you for free and open access by Dickinson Scholar. It has been accepted for inclusion by an authorized administrator. For more information, please contact [email protected].

Recommended Citation
Barron, Andrew Joseph, "The Empirical Untenability of Sentient Artificial Intelligence" (2011). Dickinson College Honors Theses. Paper 122.

Page 2: The Empirical Untenability of Sentient Artificial Intelligence

"If you gotta ask, you ain't never gonna know"

THE EMPIRICAL UNTENABILITY OF SENTIENT ARTIFICIAL INTELLIGENCE

By:

Andrew Barron

Submitted in partial fulfillment of Honors Requirements for the Department of Philosophy

Professor Jessica Wahman, Supervisor
Professor Crispin Sartwell, Reader
Professor Susan Feldman, Reader
Professor Chauncey Maher, Reader

May 22, 2011

Page 3: The Empirical Untenability of Sentient Artificial Intelligence

"All sentience is mere appearance - even sentience capable of passing the Turing test."

Tuvok, Star Trek: Voyager

"The key distinction here is between duplication and simulation. And no simulation by itself

ever constitutes duplication."

John Searle, Minds, Brains, and Science

Page 4: The Empirical Untenability of Sentient Artificial Intelligence

TABLE OF CONTENTS

INTRODUCTION 1

CHAPTER ONE: MINDS AND/OR COMPUTERS

WHAT IS COMPUTATION? 4

IS A MIND A COMPUTER? 9

IS A COMPUTER A MIND? 13

CHAPTER TWO: THE ENIGMA OF FAMILIARITY

THE HARD PROBLEM 23

ARGUMENTS FROM INEFFABILITY 28

CHAPTER THREE: DO ANDROIDS DREAM OF ELECTRIC SHEEP? WE'LL NEVER KNOW FOR SURE

THE EXPLANATORY GAP 39

MCGINN'S THEORY OF COGNITIVE CLOSURE 43

CRITICISM AND RESPONSE 52

AI, CONSCIOUSNESS, AND BLADE RUNNER: TYING EVERYTHING TOGETHER 56

CONCLUSION 62

WORKS CITED 63

Page 5: The Empirical Untenability of Sentient Artificial Intelligence

INTRODUCTION

The ultimate goal of Artificial Intelligence (AI) is to model the human mind and

ascribe it to a computer.1 Since the 1950s, astounding progress has been made in the field,

leading some to defend "strong AI," John Searle's term for the theory that it is possible to

write a computer program equivalent to a mind. As a result, a common trope in science

fiction from Isaac Asimov to James Cameron is the idea of sapient and sentient robots living

amongst humans, sometimes peacefully, but more commonly not. In Ridley Scott's Blade

Runner, androids indistinguishably humanlike in appearance and behavior live as outlaws

among human beings, hiding in plain sight. The idea of the strong AI makes for good

entertainment, but is it actually possible? Is it within the realm of human capability to

synthesize consciousness?

To many scholars and researchers, the answer is a resounding "yes!" Cognitive

science, the interdisciplinary amalgamation of neuroscience, computer science, philosophy,

linguistics, and psychology, has churned out increasingly advanced instances of AI for more

than half a century. This paper is an attempt to restrain the mounting excitement. There is

no doubt that AI is an incredible idea with far-reaching implications already in effect today.

The marketplace is already saturated with "smart" cars, calculators, wristwatches, and

dishwashers, but the average consumer generally avoids thinking about what that really

means. Does the luxury car that parallel parks itself know that it is parallel parking? Simply

1 'Mind' is a controversial term. Intelligence does not presuppose mentality, as we shall

1

Page 6: The Empirical Untenability of Sentient Artificial Intelligence

because a device is touted by advertisers as "intelligent" does not entail the existence of a

conscious mind.

The purpose of my argument is not to prove the impossibility of a conscious

computer, but to prove the empirical untenability of ever knowing whether or not we have

succeeded in producing one. Consciousness is not something we can detect by observing

behavior, including brain behavior, alone; just because something seems sentient does

not necessarily mean that it is. There is an explanatory gap between brain behavior and the

phenomenon of conscious experience that cannot be bridged using any extant philosophical

or scientific paradigms. Hypothetically, if we were to ever fully grasp the nature of what

consciousness is and how it arises and express it in the form of a coherent theory, it might

be possible to ascribe such a theory to an artifact. Even so, no conceivable test can

transcend the explanatory gap and definitively prove the existence of a sentient mind.

My thesis is a three-pronged argument against the possibility of ever knowing for

sure whether or not we succeed in building a self-aware, sentient, conscious computer. The

first tier is an explanatory discussion of computation and the Turing test. Here I address

various arguments for computer minds and the theoretical underpinnings of strong

artificial intelligence. I also discuss the counterarguments that cognitive science

continuously fails to rebut. I place special emphasis on Searle's Chinese Room thought

experiment and its implications for the possibility of machine sentience. The second tier is a

discussion of what it means to be a conscious agent. I reject reductionism in any form as

an acceptable solution to the mind-body problem because of the explanatory gap fatally

separating first-person introspective accounts of consciousness and third-person observable

2

Page 7: The Empirical Untenability of Sentient Artificial Intelligence

neuro-behavioral correlates. In the final section I support Colin McGinn's cognitive closure

hypothesis by defending his view that the mind-body problem is inherently insoluble

because a full understanding of consciousness is beyond our epistemic limits. Using the film

Blade Runner and its antecedent novel Do Androids Dream of Electric Sheep?, I aim to prove

the impossibility of ever differentiating between a conscious, "strong" AI and one that

behaves as if it were conscious, but has no inner life whatsoever, also known as "weak" AI.

3

Page 8: The Empirical Untenability of Sentient Artificial Intelligence

CHAPTER ONE

MINDS AND/OR COMPUTERS

WHAT IS COMPUTATION?

Throughout the course of my research, I have encountered a bevy of disparate

definitions of the word 'computer'. The Oxford Dictionary of Philosophy defines it as "any

device capable of carrying out a sequence of operations in a defined manner" (Blackburn

2008). Computers permeate every aspect of modern society, from the microchips in

hearing aids to massive parallel-processing supercomputers. Recently, IBM built one such

supercomputer called Watson that competed on Jeopardy! against the two most successful

contestants in the show's history. Watson seemed to have no problem comprehending the

complex linguistic puzzles posed by Trebek, answering them in record time. As impressive as

IBM's creation may be, does it function at all like a human brain? Are these electronic

operations equivalent to whatever phenomena are responsible for human thought?

Numerous cognitive scientists, philosophers, computer scientists, and neuroscientists would

say yes (Carter 2007). Computationalism is a cognitive theory that posits that the mind is

the functional representation of the external world through the manipulation of digital

symbols; the mind is software within brain hardware.

4

Page 9: The Empirical Untenability of Sentient Artificial Intelligence

Before I discuss computationalism in greater detail, it is important to take a closer

look at what a computer is. John Haugeland defines a computer as "an interpreted

automatic formal system" (Haugeland 1989, 48). To understand this definition, we must

first decipher what each of its component terms signifies. A formal system is comprised of

tokens to be manipulated according to a set of predetermined rules, not unlike a game.

Take chess, for instance. Before the game starts and regardless of who is playing, it is

decided that a pawn can only move one (or two) spaces forward, and one space diagonally

when attacking. Unless different rules are decided on before the game starts, these rules

are set in stone. Instead of physical pieces, computer tokens are electronic and invisible to

the eye.
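To make token manipulation concrete, consider the following minimal sketch (purely illustrative; the function and its names are my own and not drawn from any source discussed here). It encodes a single chess-like rule as a formal system: the tokens are just symbols, and legality is decided entirely by a predetermined rule, with no reference to anything outside the system.

```python
# Purely illustrative sketch of a formal system: tokens plus a
# predetermined rule, self-contained and indifferent to the "outside
# world." The function and example are invented for illustration.

def legal_pawn_advances(square: str) -> set[str]:
    """Squares a lone white pawn may advance to from `square` (captures ignored)."""
    file_, rank = square[0], int(square[1])
    moves = set()
    if rank < 8:
        moves.add(f"{file_}{rank + 1}")      # rule: one square forward
    if rank == 2:
        moves.add(f"{file_}{rank + 2}")      # rule: two squares from the starting rank
    return moves

print(legal_pawn_advances("e2"))             # {'e3', 'e4'}
print(legal_pawn_advances("e7"))             # {'e8'}
```

Nothing in the rule says what the pawn is made of; only the tokens and the rule matter.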

Formal systems like chess and checkers are necessarily digital. 'Digital' means

discrete and precise while its opposite, 'analogue' means variable or nebulous. The

alphabet is digital - A, B, C, D ... are static and discrete with no middle ground between A

and B. The station preset buttons on car radios are digital, while an "old fashioned" dial is

analogue. If button '1' is set to 98.5 MHz, pushing the button will reliably set that exact

frequency. But when the driver of a 1983 Ford Bronco turns the tuner knob, it is effectively

impossible to tune the exact same frequency every time. To my knowledge, all existing

digital computers are binary, using strings of 1s and 0s as tokens.

Formal systems are completely self-contained, meaning that the rules only apply to

tokens within the system itself; 'black knight' seldom means 'movement restricted to two

spaces by one space' outside the realm of a chess match. As a result, the "outside world"

is irrelevant. Chess can be played indoors, outdoors, on the moon, or underwater with

5

Page 10: The Empirical Untenability of Sentient Artificial Intelligence

pieces made from plastic, gold, or elephant meat; the medium is irrelevant. All that

matters is that the symbols pertain to the same system of rules, or syntax. As we shall see

later, the idea of medium independence is extremely relevant to the field of artificial

intelligence.2

So far we have learned that computers are digital, self-contained, and are syntactic.

A formal system is automatic if it works or runs devoid of any external influence. In his

discussion of automatic systems, Haugeland imagines a fanciful example: "a set of chess

pieces that hop around the board, abiding by the rules, all by themselves" or "a magical

pencil that writes out formally correct mathematical derivations without the guidance of

any mathematicians" (Haugeland 1989, 76). A computer becomes automated when its legal

moves are predetermined and carried through algorithmically. An algorithm works like a

flowchart, a "step-by step recipe for obtaining a prespecified result" (Haugeland 1989, 65).

Algorithms are designed to be applied indefinitely, with each application terminating in finite time. For example, a programmer can

design a procedure that alphabetizes a set of data. The algorithm used for this program can

be used reliably with new sets of data ad infinitum.
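A minimal sketch of such a recipe (mine, for illustration only; none of it comes from the sources cited here): the same fixed sequence of steps alphabetizes whatever data it is handed and halts in finite time, no matter how many fresh data sets it is reused on.

```python
# Illustrative sketch: one alphabetizing "recipe" reused, unchanged, on
# arbitrarily many new data sets. Each run follows the same fixed steps
# and halts in finite time.

def alphabetize(words):
    """Selection sort: repeatedly move the alphabetically smallest word forward."""
    items = list(words)                          # work on a copy
    for i in range(len(items)):
        smallest = min(range(i, len(items)), key=lambda j: items[j])
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(alphabetize(["pawn", "knight", "bishop"]))    # ['bishop', 'knight', 'pawn']
print(alphabetize(["Turing", "Church", "Searle"]))  # ['Church', 'Searle', 'Turing']
```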

In the 1930s, computer pioneer Alan Turing theorized that a device could be built

that manipulates symbols written on spools of tape into different states according to an

algorithm. The Turing machine was not designed to be practical but to lay out the

theoretical framework behind computation, "a mathematically idealized computer"

(Penrose 1994, 65). Turing and mathematician Alonzo Church realized that, in principle, any

algorithm could be iterated on a properly programmed Turing machine. Turing later

2 This concept will arise again in my later discussion of functionalism and multiple realizability.

6

Page 11: The Empirical Untenability of Sentient Artificial Intelligence

conceived of the Universal Turing machine (hereafter referred to as UTM), which can

simulate any arbitrary Turing machine. He wrote:

The special property of digital computers, that they can mimic any discrete state machine, is described by saying that they are universal machines. The existence of machines with this property has the important consequence that, considerations of speed apart, it is unnecessary to design various new machines to do various computing processes. They can all be done with one digital computer, suitably programmed for each case. It will be seen that as a consequence of this all digital computers are in a sense equivalent. [Turing, quoted in Dreyfus 1994, 72]

A UTM "can implement any algorithm whatever," which means that any modern computer

is a UTM (Searle 2008, 88). The laptop in front of me is designed to implement software,

cohesive sets of algorithms written by programmers. It can implement any software

plugged into it.3
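The sketch below (an illustration of my own, not drawn from Turing or from any source cited here) shows how little machinery the idea requires: a tape, a read/write head, a current state, and a finite table of rules. Any particular rule table is a Turing machine; a routine that accepts an arbitrary table and runs it, as this one does, is playing the role of a universal machine in miniature.

```python
# Minimal illustrative sketch of a Turing machine: a tape, a head, a
# current state, and a finite rule table. Because run() accepts *any*
# rule table, it acts as a tiny universal machine.

def run(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))                 # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# One rule table among endlessly many: append a '1' to a block of 1s.
increment = {
    ("start", "1"): ("1", "R", "start"),          # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),           # write one more 1, then halt
}

print(run(increment, "111"))                      # -> '1111'
```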

As we all know, UTMs are greatly useful and extremely versatile. In 1997, IBM's

Deep Blue, a chess playing supercomputer, competed against world champion Garry

Kasparov. In a thrilling upset, the advanced programming machine defeated Kasparov,

showing the world how far computer intelligence had come. The two competitors played

very different chess, however. Deep Blue used brute force processing - multiple computers

running in parallel in order to power through the 10^50 possible moves on the chessboard.

Kasparov is a human, and as Hubert Dreyfus explained to me in a personal correspondence

"[A chess program] needs to look at millions and millions of connections per second to do

something that human beings do in an obviously entirely different way, that's true of the

chess thing already[ ... ] Grandmasters look at two or three hundred at most" (Dreyfus

3 My MacBook Pro cannot run any program (i.e., .exe files), but it can surely implement any program in that it can be inputted and cause some sort of output.

7

Page 12: The Empirical Untenability of Sentient Artificial Intelligence

2011). Granted, a grandmaster's chess abilities far surpass the average human, but it is

extremely unlikely that he "computes" such an astronomical number of possibilities every

turn.
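The contrast can be caricatured in a few lines (an illustration of mine; the figure of roughly thirty-five legal moves per position is a standard rough estimate, not a claim from Dreyfus). An exhaustive search that looks d moves ahead must visit on the order of 35^d positions, which is the sense in which Deep Blue "powers through" lines of play rather than weighing a few hundred of them.

```python
# Caricature of brute-force search (illustrative only). With roughly 35
# legal moves per chess position, an exhaustive look-ahead of d plies
# visits on the order of 35**d positions; a grandmaster reportedly
# considers a few hundred at most.

def positions_examined(branching_factor: int, depth: int) -> int:
    """Nodes an exhaustive search visits when expanding every line to `depth` plies."""
    if depth == 0:
        return 1
    return 1 + branching_factor * positions_examined(branching_factor, depth - 1)

for plies in (2, 4, 8):
    print(plies, "plies:", positions_examined(35, plies))
```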

Is Kasparov's mind nothing more than a slow computer? Many theorists believe this

to be so, as we shall soon see. But a crucial barrier separating Kasparov's mind from a UTM

is consciousness, the poorly understood phenomenon that makes us sentient and sapient.

Humans are conscious, no doubt, but there is no definitive proof that computers can or will

ever achieve sentience. For starters, Kasparov has perceptual faculties: sight, touch, taste,

etc., while Deep Blue does not (the mainframes weren't even in the room). However,

apparatuses like electronic eyes, acoustic receivers, and pressure-sensitive tactile devices

have existed for years. Hypothetically, the series of parallel processing computers

comprising Deep Blue could be significantly shrunk and placed inside a humanoid robot,

along with high-tech perceptual devices (call it DB2). It seems that the only thing separating

Kasparov and DB2 is the latter's one-track mind, so to speak; its only function is playing

chess. On the other hand, as Stevan Harnad argues in Minds and Machines, "It is unlikely

that our chess-playing capacity constitutes an autonomous functional module, independent

of our capacity to see, move, manipulate, reason, and perhaps even to speak. The [Turing

test] itself is based on the pre-emptive assumption that our linguistic communication

capacity is functionally isolable" (Harnad 1992). DB2 might not even know that it's playing.

Kasparov probably plays because he enjoys chess and has the desire to win, while DB2's

8

Page 13: The Empirical Untenability of Sentient Artificial Intelligence

actions are dictated and bounded by its pre-programmed software. We cannot assume that

the robot, despite its technological excellence, does anything because it wants to.4

Moreover, even though DB2 has perceptual capabilities, it does not necessarily have

any phenomenal experience of anything it senses, since the input is nothing more than

formal data. In short, is it right to say DB2 is conscious? If a mind is nothing more than

computation, then there is no reason to believe DB2 doesn't have the potential to be. The

next section explores the concept and plausibility of the computational nature of the mind.

IS A MIND A COMPUTER?

John Searle famously lectured:

Because we do not understand the brain very well we are constantly tempted to use the latest technology as a model for trying to understand it. In my childhood we were always assured that the brain was a telephone switchboard. ("What else could it be?") I was amused to see that Sherrington, the great British neuroscientist, thought that the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electromagnetic systems. Leibniz compared it to a mill, and I am told some of the ancient Greeks thought the brain functions like a catapult. At present, obviously, the metaphor is the digital computer. [Quoted in Kurzweil 2005]

The way we understand the brain is correlated with the high-tech paradigm of the day, but

the idea of the computational mind is nothing new. Thomas Hobbes contended that the

mind is the brain's function as a calculating machine: "REASON ... is nothing but Reckoning

(that is, Adding and Subtracting) of the Consequences of general names agreed upon, for

the marking and signifying of our thoughts" (Hobbes 1904/1651). Our mind is a mechanical

thinking machine, an infinitely complex flowchart of inputs and outputs. In mid-20th-century

4 This is a crucial theme in Blade Runner, which I will discuss later.

9

Page 14: The Empirical Untenability of Sentient Artificial Intelligence

Britain, Alan Turing set out to show that Hobbes was on to something. He designed "the

imitation game," an original thought experiment that attempted to solve the puzzle. The

game is simple. Three participants, a man, a woman, and an irrelevantly gendered

interrogator (A, B, and C, respectively) are placed in three rooms. The identity of each

participant is unknown. They can communicate with each other freely through Teletype,

keeping revealing factors like tone of voice or handwriting a mystery. The object of the

game is for C to determine the sexes of A and B by asking each of them questions. The

catch is that A attempts to mislead C by providing false information. At the same time, B

attempts to convince C of the truth. If A successfully fools C, then A "passes" the test. Next,

replace A with a computer. B and C remain human and don't know that A is a machine. If A,

a computer, manages to convince C, a human, that it is in fact a human female, then A has

met the criteria for intelligence (Turing 1950).
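Reduced to a sketch, the test's structure looks like this (the respondents and their canned replies are invented for illustration and are not part of Turing's paper): the interrogator receives nothing but text over the channel, so the verdict can rest on nothing but behavior.

```python
# Illustrative sketch of the imitation game's structure; the respondents
# and their replies are invented. The interrogator sees only text, so the
# judgment rests on behavior alone.

import random

def respondent_a(question: str) -> str:          # secretly a machine
    return "I'd rather not say."

def respondent_b(question: str) -> str:          # secretly a human
    return "I'd rather not say."

def interrogate(questions):
    transcript = {"A": [respondent_a(q) for q in questions],
                  "B": [respondent_b(q) for q in questions]}
    # Indistinguishable replies leave the interrogator nothing but a guess.
    return transcript, random.choice(["A", "B"])

transcript, guess = interrogate(["What is time?", "Do you dream?"])
print("The interrogator names", guess, "as the machine.")
```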

Turing held that if a human cannot distinguish a computer

from a person, then that computer can think and thus has a mind. His

conclusion helped give rise to computationalism, or the

computational theory of the mind (henceforth CTM), the idea that

the brain is a computer, and mental states and consciousness

somehow arise from the formal processing.

Figure 1: From xkcd.com ("Turing Test Extra Credit: Convince the examiner that he's a computer.")

According to Turing and Church, "anything that can be given a precise enough characterization

as a set of steps can be simulated on a digital computer," so if a mind can be transcribed

algorithmically, a computer can implement it, thus giving rise to machine consciousness

(Searle 2008, 87). If this is possible, the brain is just a biological UTM, or "wetware."

10

Page 15: The Empirical Untenability of Sentient Artificial Intelligence

The computational theory of mind is built on a theoretical framework called

functionalism, the idea that a mental state is defined by its function. As an analogy, a

carburetor is a device whose primary function is to mix fuel and air in order to cause

combustion. A carburetor can be made of any materials, just as long as it fulfills that

function; it is solely defined by its function.5 Mental states would be no different according

to this theory. The functionalist argues that a mental state is nothing more than "its

function in mediating relations between inputs, outputs and other mental states" (Carter

2007, 45). For instance, a pain is defined by the role it plays. Some would argue that pain is

nothing more than tissue damage and C-fiber stimulation.6 To the functionalist, a mind is a

Turing machine. Accordingly, a pain is an input (tissue damage), a mental state (discomfort,

anxiety, etc.), and an output (behavior). The functionalist holds that any system that can

iterate these conditions is in pain, regardless of composition. The token example is the

biologically and chemically different Martian who is in a pain state when it meets the

requisite pain conditions. Pain "mediates relations between characteristic pain-inducing

inputs, pain-alleviating reasoning and behaviour" (Carter 2007, 45). Anything that serves

this functional role is a pain ex vi termini.
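Spelled out, a functional-role description is just a transition table. In the toy sketch below (entirely illustrative; the states, stimuli, and outputs are invented), "pain" is nothing over and above its place in the table of relations among inputs, other states, and outputs, so any system that implements the table, whatever it is made of, counts as being in pain by the functionalist's lights.

```python
# Toy illustration of a functional-role definition; the states, stimuli,
# and outputs are invented. "Pain" here is exhausted by its place in this
# table of (state, input) -> (next state, output) relations, so anything
# implementing the table realizes the same state, whatever its makeup.

TRANSITIONS = {
    ("calm", "tissue damage"): ("pain", "wince and withdraw"),
    ("pain", "tissue damage"): ("pain", "groan"),
    ("pain", "aspirin"):       ("calm", "relax"),
    ("calm", "aspirin"):       ("calm", "do nothing"),
}

def step(state: str, stimulus: str) -> tuple[str, str]:
    """Return the next mental state and the behavioral output."""
    return TRANSITIONS[(state, stimulus)]

state = "calm"
for stimulus in ["tissue damage", "tissue damage", "aspirin"]:
    state, behavior = step(state, stimulus)
    print(stimulus, "->", state, "/", behavior)
```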

Computationalism's aim is "fleshing out these mediating relations - the relations in

question are held to be computations" (Carter 2007, 95). This is not to say, however, that

the mere operation of formal systems is sufficient for consciousness. If this were the case, a

simple tape-based Turing machine would itself be a thinking thing, an unsettling thought indeed.

5 The carburetor example is an adaptation of an argument used by Jerry Fodor (Psychological Explanation, Random House, 1968). 6 Also known as type-physicalism, as we shall see.

11

Page 16: The Empirical Untenability of Sentient Artificial Intelligence

(Although some argue that there is "something it is like" to be an inert artifact, as will be

discussed later, for now we shall assume the falsity of this claim). If CTM is true, however,

sentience arises from the interplay between our computational brains and our sensory

organs; the mind is the result of the operation of software while the brain and body are the

various pieces of hardware. Granted, CTM proponents realize that there might be more to

cognition than just syntactic manipulation. Just as long as "mental states are at least

computational states," CTM holds (Searle 2008, 87, my emphasis).

Functionalism7 and CTM turn on multiple realizability, the understanding that "a

single mental kind (property, state, event) can be realized by many distinct physical kinds"

(Bickle 2008). If every minute detail of the brain, down to the very last neuron, was

reenacted with, say, beer cans, would a mind emerge? Douglas Hofstadter (2000) wrote "A

Conversation With Einstein's Brain," a thought experiment that explores this idea. Let's say

that one microsecond before his death, every detail of Albert Einstein's brain was copied

exactly as it was into a book. Each of the hundreds of billions of pages corresponds to a

single neuron and information about its connections (synapses). When the real life Einstein

heard someone ask a question, he perceived the vocal utterances with his ears, which then

affected the auditory neuron structure in the brain. Theoretically, one could ask the

disembodied Einstein a question by correlating relevant information about how his brain

registers each particular tone and tracing the consequent chain reaction of synapses, which

eventually leads to Einstein uttering a response to the question. Perhaps a machine could

act as a middleman, speeding up the page turning. Is the brain-book a mind? "I'm losing a

7 Here I refer to certain variations of functionalism, but not all. See Lewis 1991.

12

Page 17: The Empirical Untenability of Sentient Artificial Intelligence

clear sight of who 'I' is. Is 'I' a person? A process? A structure in my brain? Or is 'I' some

uncapturable essence that feels what goes on in my brain?" (Hofstadter 2000, 445)

Hofstadter's point is that the 'I' all of us are intimately familiar with is perfectly conceivable

as a set of algorithms, realizable in any implementing medium, provided that the

unfathomably complex programming is good enough.
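A drastically scaled-down rendering of the brain-book (all of the "pages" and thresholds below are invented, and real neural dynamics are nothing this simple) helps make the claim concrete: if a mind is exhausted by a pattern of connections and update rules, the same pattern can in principle be stepped through page by page in any medium that stores it.

```python
# Drastically scaled-down, invented rendering of the brain-book idea.
# The "book" is just a table of connections and firing thresholds; turning
# its pages is the same pattern whatever medium holds the pages.

BOOK = {
    # page (neuron): (incoming pages, firing threshold)
    "hear_question": ([], 0),
    "recall_fact":   (["hear_question"], 1),
    "utter_answer":  (["recall_fact"], 1),
}

def turn_pages(initially_active, steps=3):
    """Propagate activity through the book for a fixed number of page-turning passes."""
    active = set(initially_active)
    for _ in range(steps):
        active |= {page for page, (inputs, threshold) in BOOK.items()
                   if inputs and sum(i in active for i in inputs) >= threshold}
    return active

print(turn_pages({"hear_question"}))   # the 'utter_answer' page eventually activates
```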

If functionalism is correct, the mind-body problem is solvable with enough time and

energy spent on computational models of the brain. The ultimate goal of artificial

intelligence is to fully comprehend the human mind and create a unified theory of

consciousness. If such a theory is ascribed to an artifact, consciousness could presumably

arise. I take no position on whether or not consciousness is multiply realizable. My point is an

agnostic one; there is no test conceivable that can definitively prove whether consciousness

has been achieved. The Turing test, "though essential for machine modeling the mind, can

really only yield an explanation of the body" (Harnad 1992). Behavior alone is not sufficient

for mentality.

IS A COMPUTER A MIND?

If CTM is true, any UTM can implement a syntactic mind. Turing believed that the

mind is syntactic, and that the imitation game was an indicator of the presence of mind.

But what if CTM is false? If the human mind is not computational, is it still possible for a

computer to be conscious? Before we delve into the million-dollar question, I must first

clarify some key concepts and assumptions. Various writers across the gamut of philosophy

and science have different definitions of words like 'mind', 'consciousness', 'thinking', and

13

Page 18: The Empirical Untenability of Sentient Artificial Intelligence

'intelligence'. Let me clarify my own usage. I hold that the possibility for consciousness is a

necessary condition for mentality. For example, in a deep sleep one is decidedly unconscious, but

dream states are a form of consciousness since there is still some sort of phenomenal

experience taking place. But one cannot dream or think in the first place if there was never

any conscious experience to begin with. Thinking, or introspecting, is the active process of

exploring the contents of one's mind. Dreaming8 is not thinking because it occurs passively.

Intelligence, on the other hand, does not presuppose mindedness. Cognitive

scientist Steven Pinker defines intelligence as "the ability to attain goals in the face of

obstacles by means of decisions based on rational (truth-obeying) rules" (Pinker 1997, 62,

my emphasis). The term 'Artificial Intelligence' then denotes, according to his definition, a

synthetic entity capable of following rules in order to attain goals. When I attempt to

control my car over an icy stretch, the anti-lock brakes automatically engage, helping me

avoid danger. In this case, the ice is an obstacle and the braking system follows a set of

rules in order to decide when to engage, accomplishing the ultimate goal of car and driver

protection. By Pinker's lights, the brakes on my car are intelligent. To say that my brakes

are conscious or have a mind, on the other hand, is a much stronger, and tenuous, assertion.
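The brakes meet Pinker's definition in an almost embarrassingly mechanical way, as this deliberately crude sketch suggests (the slip threshold and sensor readings are invented numbers, not a real ABS specification): a rule fires when wheel slip is detected, and the goal of keeping the car controllable is attained without anything it is like to be the controller.

```python
# Deliberately crude, invented sketch of rule-following "intelligence" in
# Pinker's sense; the threshold and readings are not a real ABS spec.
# The controller attains a goal by applying a rule to an obstacle, yet
# nothing here suggests it knows, feels, or wants anything.

def abs_controller(wheel_speed: float, vehicle_speed: float) -> str:
    """Release brake pressure when the wheel is slipping badly enough to lock."""
    slip = (vehicle_speed - wheel_speed) / vehicle_speed if vehicle_speed else 0.0
    return "release pressure" if slip > 0.2 else "hold pressure"

for wheel_speed in (30.0, 18.0, 5.0):            # car travelling at 30 m/s on ice
    print(wheel_speed, "->", abs_controller(wheel_speed, 30.0))
```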

So what kind of artifact, if any, can think?

In 1956, computer scientist Allen Newell, economist Herbert Simon, and systems

programmer J.C. Shaw shocked the scientific community with what is considered the first

artificial intelligence program. "Logic Theorist," as it was called, managed to prove 38 of the

first 52 theorems from Whitehead and Russell's Principia Mathematica (Copeland 1993, 7).

8 Barring lucid dreaming, in which the dreamer is in active control of his or her situation.

14

Page 19: The Empirical Untenability of Sentient Artificial Intelligence

This may not sound as impressive as "I am C-3PO, human cyborg relations," but it was proof

enough for Al researchers that a thinking machine might be possible. Two years later,

Simon claimed that within ten years, computers would defeat the world champion chess

player (he was off by about thirty years), discover and prove a new mathematical theorem,

and, most importantly, that "most theories in psychology will take the form of computer

programs," advocating CTM {Simon and Newell 1958, 6). By 1961, Newell and Simon were

ecstatically optimistic, writing:

It can be seen that this approach makes no assumption that the 'hardware' of computers and brains are similar, beyond the assumptions that both are general-purpose symbol-manipulating devices, and that the computer can be programed to execute elementary information processes functionally quite like those executed by the brain. [Newell and Simon 1961, 9]

Al innovator Marvin Minsky concurred with Newell and Simon not long after, arguing that

brains are "meat machines" capable of duplication (Dreyfus 1994, 252). The search for the

synthetic mind was well underway.

Fast-forward to present day. There are no sentient robots roaming the streets or

running businesses. Critics of CTM and functionalism argue that Al research has been

barking up the wrong tree. John Searle leads the attack on computational AI with his

arguments against "strong AI," the concept that an "appropriately programmed computer

really is a mind, in the sense that computers given the right programs can be literally said to

understand and have other cognitive states" (Searle 1980). If strong AI holds, mental states

are invoked at least in part through syntactic computation.

Strong AI assumes the truth of CTM, but CTM does not necessarily assume strong AI.

"Weak AI" proponents hold that the brain is not necessarily computational, but the study of

15

Page 20: The Empirical Untenability of Sentient Artificial Intelligence

computationalism as a psychological paradigm is scientifically useful. The big difference

between strong and weak AI proponents is that while the former is convinced that the right

series of computations invokes an actual mind, the latter holds that minds can be simulated,

but not duplicated. This distinction is crucial. If strong AI is correct, it is theoretically

possible to synthesize a mind. Any UTM could potentially instantiate the unimaginably

complex array of naturally occurring algorithms resulting in a sentient artifact. But is syntax

enough for a mind? Searle famously argued in the negative with the Chinese Room thought

experiment:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese. [Cole 2009]

The Chinese speaker outside the room (call her Mei) assumes that the person inside (call

him John) is fluent, and for good reason. John is so adept at using the manual and shuffling

through the boxes that he could quickly respond to any of Mei's input, regardless of

complexity. For all intents and purposes, Mei is communicating with a fellow fluent Chinese

speaker. But John is not fluent; "Chinese writing is just so many meaningless squiggles"

(Searle 1980). All he does is manipulate those squiggles according to a set of

predetermined rules, or a "script."
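The room's script can be pictured as a bare lookup procedure, as in the sketch below (the entries are invented stand-ins, written in pinyin rather than Chinese characters). The program pairs input strings with output strings and does nothing else, which is exactly why producing the right answers implies nothing about understanding them.

```python
# Invented sketch of the Chinese Room's "script": entries are stand-ins
# written in pinyin rather than Chinese characters. The procedure pairs
# input squiggles with output squiggles and nothing more; correct replies
# require no grasp of what any of them mean.

RULE_BOOK = {
    "ni hao ma?": "wo hen hao, xiexie.",                      # "How are you?" -> "Very well, thanks."
    "ni xihuan xiao mao ma?": "xihuan, xiao mao hen ke ai.",  # "Do you like kittens?" -> "Yes, kittens are adorable."
}

def chinese_room(symbols_passed_in: str) -> str:
    """Match the incoming symbols against the rule book and pass back the listed reply."""
    return RULE_BOOK.get(symbols_passed_in, "qing zai shuo yi bian.")   # "Please say that again."

print(chinese_room("ni hao ma?"))
print(chinese_room("ni xihuan xiao mao ma?"))
```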

16

Page 21: The Empirical Untenability of Sentient Artificial Intelligence

It is contradictory to say John understands Chinese when he doesn't actually

understand a word of it. Searle comes to a profound conclusion: mental states have

semantic (meaningful) content, and syntax alone is not sufficient for semantics, so mental

states cannot be syntactical. The purely syntactical computer can emulate the mind, but not

duplicate it. He writes:

You can simulate the cognitive processes of the human mind as you can simulate rain storms, five alarm fires, digestion, or anything else that you can describe precisely. But it is just as ridiculous to think that a system that had a simulation of consciousness and other mental processes thereby had the mental processes as it would be to think that the simulation of digestion on a computer could thereby actually digest beer and pizza. [Searle 2008, 68]

Searle finds it absurd to think a mind can be caused in this sense. Simulation is not

duplication. It might be possible in practice to simulate human intelligence on a computer,

"just as scientists routinely simulate everything from hurricanes and protein synthesis to

traffic jams and the black market of Albania," but the result is only an illusion of the original

(Haugeland 1985, 112).

Searle's point, that syntax is not sufficient for semantics, has become a mantra

amongst opponents of strong AI and CTM. The man in the room passes the Turing test,9 but

he lacks any semblance of semantic understanding. 'Understanding' is contingent upon

meaningfulness. When I understand a concept ('democracy', for example), the idea

becomes an object of my conscious experience. I know what democracy means once I

9 Some might argue that the man in the Chinese Room cannot actually pass the Turing test if asked questions like "What is time?" (see Ben-Yami 1993). I don't see the value in such claims. There is no good reason to presume that the Chinese-English manual can't be programmed to provide a conversationally acceptable answer like "the unextended dimension of existence that determines the sequential nature of events."

17

Page 22: The Empirical Untenability of Sentient Artificial Intelligence

attach to it some semantic content. To understand something requires mental

representation thereof. This power of representation is generally known as intentionality.

John has no intentional states about the actual meanings of the characters. It is impossible

for him to think about the semantic content of the "conversation" with the Chinese

speaker. Of course, he can think, "well, maybe I'm talking about kittens," but such a

thought is a complete shot in the dark with no basis in understanding. The Chinese Room is

a Turing machine with the manual as its program. The argument shows that the Turing test

does not test for a mind, but simply for the existence of computational processes.

The Chinese Room represents the syntactic processes by which computers "think."

Computers as we know them are programmed to do exactly what John does (albeit at

lightning fast speeds), devoid of intentionality, understanding, and semantic content. A

computer may be able to simulate a mind, but not duplicate it. DB2, our perceptually

equipped, chess-playing robot does not have mental content at all, only brute-force data.

As might be expected, Searle's argument has garnered an enormous amount of

criticism. Here I will address a few of the strongest instances. The response typically known

as the "Systems Reply" admits that the person in the room (John) in fact does not

understand Chinese, but that the entire system of inputs and outputs does understand.

John acts like the central processing unit of a computer; only one part of a larger whole.

The room itself, so to speak, is the fluent Chinese speaker with whom Mei communicates.

Searle rebuts by claiming that there is no reason John can't just memorize the

entire manual and learn to write and recognize every symbol, all while working outside.

18

Page 23: The Empirical Untenability of Sentient Artificial Intelligence

The whole system is now completely internalized, but John remains completely ignorant of

any semantic understanding.

The "Robot Reply" admits the validity of the Chinese Room but suggests an

alternative thought experiment. Instead of manipulating formal inputs and outputs

according to a script, we place a computer inside a mobile robot who, like our DB2, is

equipped with perceptual apparatuses like microphones, cameras, touch sensors, etc. The

robot uses its senses to learn the world around it, just like a child.10 The robot is free to

learn Chinese independently by forming a causal connection with the world. Searle writes

in response:

Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving 'information' from the robot's 'perceptual' apparatus and I am giving out 'instructions' to its motor apparatus without knowing either of these facts. [Searle 1980]

Even though the external world is invoked in this situation, no semantics can arise because

the totality of the data is still formal. It only adds complexity to the man-in-the-room's task.

Although the Chinese Room argument has profound implications across the

spectrum, ultimately Searle begs his own question about the nature of the mind. He

10 This method of information processing is known as bottom-up AI, in contrast with top-down cognition, in which "it has been constructed according to some well-defined and clearly understood fixed computational procedure ... where this procedure specifically provides a clear-cut solution to some problem at hand" (Penrose 1994, 18).

19

Page 24: The Empirical Untenability of Sentient Artificial Intelligence

contends that syntax is not sufficient for semantics and that the best computers can do is

simulate consciousness (weak AI). However, he fails to address exactly what makes our

minds so special that they have semantics while computers cannot. He assumes that the

mind is fundamentally different than a symbol processing machine but never elaborates on

the nature of that difference. His argument is based on three premises:

P1: Programs (software) are syntactic and are thus self-contained
P2: Minds have semantics and are thus not self-contained
P3: Syntax is not a sufficient condition for semantics

What's missing is the second half of P2. It isn't enough to simply say that

our thoughts have meaning and thus cannot be syntactical. What gives our thoughts

meaning in the first place? How does visible light detected by the eye or vibrations

of air in the ear canal mean anything to a mind? Perhaps meaning and intentionality

are just illusions caused by our brains but are really semantically empty (see Dennett

1991a). Or perhaps we aren't conscious at all, as some persistent eliminativists

contend. Searle's argument is poignant but inconclusive.

The ultimate aim of this paper is to show that Searle's semantics will never be

fully understood. The missing ingredient, why our mental states are meaningful, is beyond

the scope of human comprehension for reasons I will address in the next two chapters.

Before we proceed, it is important to address one final criticism. The Other Minds response

asks how it is possible to know if anybody understands anything besides judging by their

behavior. If an android can fool anyone into thinking it understands, who's to say it

doesn't? Searle's response goes as follows:

20

Page 25: The Empirical Untenability of Sentient Artificial Intelligence

The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In 'cognitive sciences' one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects. [Searle 1980]

As sentient beings, it is impossible to deny the fact that we directly experience our own

consciousness (although the eliminativists and others have tried). We also know that

computation can exist without cognition. Using our own consciousness as a reference, we

assume that other people have minds and phenomenal experience of the world but have no

reason to assume that they are computers. This approach to the problem of other minds is

known as the argument from analogy.

However, the argument from analogy falls short. Just because something acts like a

sentient being by no means necessitates consciousness. We

anthropomorphize unconscious objects and nonhuman animals all the

time, sometimes attributing complex thoughts and emotions to entities

that are obviously incapable of them. We think cats are happy when

they grin because that's what humans do (figure 2).

Figure 2: Who are we to say this cat is happy?

In reality, the cat might be miserable or scared, but the tendency to attribute humanity or sentience by analogy leads

us astray. My point is that analogy alone is not enough for the presence of other minds.

This plays a major role in the advancement of artificial intelligence theory and development.

A major barrier standing in its way is the inability to ever know, with full certainty, whether

21

Page 26: The Empirical Untenability of Sentient Artificial Intelligence

or not a nonhuman entity is sentient. In the next section, I will discuss this uncertainty

problem and its implications in philosophy, cognitive science, and Al research.

22

Page 27: The Empirical Untenability of Sentient Artificial Intelligence

CHAPTER 2

THE ENIGMA OF FAMILIARITY

THE HARD PROBLEM

At this moment, I am sitting in a cafe typing on my laptop. I hear the distinctive

sound of fingers striking a keyboard over chatter of other customers and ambient piano

music. I shiver when the door opens, sending a chill through my body. The pastry in front of

me smells of butter and cinnamon, and I can feel myself salivating as I write about it. I feel

cheerful and jovial because yesterday I received an offer for a wonderful job for next year.

I have these experiences because I am sentient and aware, two defining characteristics of

consciousness, a barely understood but ubiquitous psychological phenomenon immediately

familiar to all of us. As far as I know, humans have always been conscious and always will

be. Death and coma notwithstanding, consciousness is a necessary part of human

existence.

For this reason, consciousness as a concept is ironic. It's been around for at least as

long as we have and without it, scientific study and epistemic progression could never

happen. But in spite of countless hours spent in laboratories and classrooms, a cohesive

theory of consciousness is yet to be discovered. In fact, so little is actually known about it

that The International Dictionary of Psychology defines consciousness as such:

23

Page 28: The Empirical Untenability of Sentient Artificial Intelligence

Consciousness. The having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness with self-consciousness - to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it. [Sutherland 1996, 95, emphasis added]

It is rare to find such candid language in a reference book. But how could it be true?

Bertrand Russell agrees: "The sciences have developed in an order the reverse of what

might have been expected. What was most remote from ourselves was first brought under

the domain of law, and then, gradually, what was nearer: first the heavens, next the earth,

then animal and vegetable life, then the human body, and last of all (as of yet imperfectly)

the human mind" (Russell 1961). The boundaries of human knowledge are expanding at an

exponential rate, but the nature of what is most familiar to all of us remains vague and

misunderstood.

That is not to say that volumes worth of literature have not been written about it.

Cognitive science is devoted to the subject, synthesizing psychology, neuroscience,

philosophy, linguistics, anthropology, and computer science into one contiguous,

interdisciplinary area of study. Theories of consciousness abound from every field, and

great progress has certainly been made. In particular, the systematic mapping of brain

function and behavioral correlation has allowed us to know more about the brain than ever

before. But the mind still eludes us. Although science can explain how the senses function,

why this cinnamon bun smells so damn good is a complete mystery. The inner experience

of the mind is private and subjective, knowable only by the knower and impossible to

describe in purely objective terms. I can describe how I'm feeling to the best of my ability,

24

Page 29: The Empirical Untenability of Sentient Artificial Intelligence

and you can try to relate using your own experience, but the two sets of data are

irreconcilable; we can't know if they are the same. Individual conscious experience is so

distinct and accessible to us, yet so unbelievably opaque and foreign to another.

This is a crucial aspect of the mind-body problem in philosophy. Descartes famously

contended cogito ergo sum, I think therefore I am, demonstrating the perspicuous,

undeniable existence of his own mind. He held that the physical world and mental world

were dualistic, comprised of two discrete substances, physical matter (res extensa) and the

non-extended, immaterial soul or mind (res cogitans). Despite the currency of his words, the

cogito argument has become the butt of countless philosophy jokes, offered to freshmen as

an exercise in argument analysis and refutation. Philosophy and science have become

inextricable, and there are no reputable empirical theories of the soul. Theology aside,

modern thought favors a naturalistic materialism, the idea that all that exists is matter and

all phenomena, including consciousness, can be reduced to physics.11 Materialism has been

successful in explaining the natural world, but inevitably fails to adequately account for

subjective conscious experience. A vast amount has been written both in defense of and in

opposition to materialistic consciousness, but the problem itself remains largely

unsolved.

Typically, the problems of consciousness are split into two issues.12 The Easy

Problem is well within the grasp of empirical science. It considers topics involving brain

behavior, the integration of information, and stimulus response, all within the realm of

11 Of course this is a very narrow and underdeveloped view of materialism. My own position is in favor of a materialism in which consciousness is irreducible. 12 This is David Chalmers' idea, but it is widely accepted as a useful way to characterize consciousness theory.

25

Page 30: The Empirical Untenability of Sentient Artificial Intelligence

empirical study. The hard problem, as it is known, is much more of a challenge. David

Chalmers describes it as

the problem of experience. Human beings have subjective experience: there is something it is like to be them. We can say that a being is conscious in this sense - or is phenomenally conscious, as it is sometimes put - when there is something it is like to be that being. A mental state is conscious when there is something it is like to be in that state. [Chalmers 2002a]

What gives 'red' the quality of redness? After all, 'red' is nothing more than

electromagnetic radiation of a particular wavelength and frequency, detected by the eye

and perceived by the brain. If everything in the universe is reducible to physics, then the

experience of 'red' is nothing more than that. But this response is terribly unsatisfying. Our

senses have a phenomenal quality experienced by the mysterious "inner life" that frustrates

scientists to no end. Some choose to ignore the problem altogether, dismissing

consciousness as a trivial intuition:

The "intuition" at work here is the very raison d'etre of the problem of consciousness. The only consistent way to get around the intuitions is to deny the problem and the phenomenon altogether. One can always, at least when speaking "philosophically," deny the intuitions altogether, and deny that there is anything (apart from the performance of various functions) that needs explaining. [Chalmers 1996, 110]

This approach is a mistake, of course, because every normally functioning human has an

inner life and disregarding it will lead us nowhere. This mysterious quality is at the core of

the hard problem of consciousness, and it is the modern day expression of the mind-body

problem.

Those who take a purely scientific approach may claim that consciousness is

reducible to its constituent physical properties. Emergentism, championed by Samuel

26

Page 31: The Empirical Untenability of Sentient Artificial Intelligence

Alexander13 and C. Lloyd Morgan14 in the early 20th century, is the idea that consciousness

emerges from fundamentally simple brain processes. To illustrate, take water and break it

down to its constituent parts, oxygen and hydrogen. Without these two elements, water

cannot be. However, water is not an inherent property of oxygen or hydrogen. If there was

a world where oxygen and hydrogen exist as discrete elements but can never interact, that

world is necessarily devoid of H2O. So oxygen and hydrogen are inherent to water, but not

vice versa. Similarly, consciousness emerges from neural processes even though

consciousness is not an inherent property thereof.

To apply emergentism on a universal scale is to accept physicalism. Type

physicalism (aka identity theory of mind) is a materialistic position that holds that all mental

states and processes are equivalent to brain behavior and, as a result, consciousness is

reducible to physics. U.T. Place argued that consciousness might be a pattern of brain

activity that could be correlated with certain brain processes and that the elusiveness of

introspective observations could just be a "phenomenological fallacy." Introspection is just

brain behavior, just like lightning is just a high-voltage, short-duration electric charge (Place

2002). Herbert Feigl, another leading identity theorist, concurred, claiming "the states of

direct experience which conscious human beings 'live through,' and those which we

confidently ascribe to some of the higher animals, are identical with certain (presumably

configurational) aspects of the neural processes in those organisms" (Feigl 2002, 69).

13 See Alexander's Space, Time, and Deity (London: Macmillan, 1927) 14 See Morgan's Introduction to Comparative Psychology, 2nd ed., rev. (London: Walter Scott, 1903)

27

Page 32: The Empirical Untenability of Sentient Artificial Intelligence

Mental states are contingent upon specific brain states. In other words, "there is no mental

difference without a physical difference" (Nagel 1998).

Is consciousness reducible? Many materialists of different stripes argue in the

affirmative. After all, if mental states aren't physical, what could they possibly be? Dualistic

philosophy allows for a mental "stuff" separate from physical substance, but dualism has

fallen out of fashion in the face of scientific development. Most contemporary philosophers

of mind rely on physics and neuroscience for empirically proven evidence to support

argumentation. As a result, the study of consciousness has become inextricably linked to

science. For my part, I reject dualism but also deny the validity of physicalism about consciousness.

Subjective conscious experience is knowable only to the subject, and reduction to objective,

public data is impossible. The best we can do is correlate between brain states and

behavior. The next section analyzes some major arguments against physicalism in order to

show its inadequacy as a coherent theory of consciousness.

ARGUMENTS FROM INEFFABILITY

In 1866, Thomas Huxley wrote: "What consciousness is, we know not; and how it is

that anything so remarkable as a state of consciousness comes about as a result of irritating

nervous tissue is just as unaccountable as the appearance of the Djinn when Aladdin rubbed

his lamp" (Huxley 1866, 193). Despite 150 more years of rigorous scientific and

philosophical investigation, we still have no idea. Even Einstein is reputed to have admitted,

"Science could not give us the taste of soup (Dennett 2002, 230). At the forefront of the

28

Page 33: The Empirical Untenability of Sentient Artificial Intelligence

hard problem is the concept of qualia, 15 the technical term for the phenomenal, felt

qualities we experience through the senses. Qualia like the taste of an apple, the roughness

of sandpaper, the rumbling of a bass tone from a subwoofer, and the smell of a musty attic

can be experienced by anybody with normal perceptive abilities, but the mystery lies in

their subjective character. A piece by Stravinsky might sound like heaven to Jones and

cacophony to Smith. Both have normally functioning ears and a deep appreciation for

classical music. There is the matter of personal taste, of course, but it also must be

considered that Jones and Smith have different experiences of the same tones. Middle C is

always 262 Hz, but only I know what middle C sounds like to me.16

The problem is that our explanatory capacity is limited by the boundaries of natural

language.

Figure 3: Calvin and Hobbes strip, borrowed from Chalmers 1996, in which Hobbes describes the smell of a fire with invented words like "snorky" and "brambish."

15 As Chalmers 1996 notes, qualia, experience, phenomenology, phenomenal, what it is like, subjective experience all refer to the phenomena privy to the conscious mind. Only grammatical differences set them apart. 16 The inherent differences between Smith and Jones return later in my discussion of inverted qualia.

29

Page 34: The Empirical Untenability of Sentient Artificial Intelligence

In figure 3, Hobbes the tiger describes his olfactory experience of fire with words like

snorky, brambish, and brunky. To his friend Calvin, such terms are as meaningless as the

squiggles and squoggles in the Chinese Room. Only Hobbes knows precisely what they

mean because only he has direct access to the contents of his inner experience. Even

another verbal, sentient tiger with the same sensory faculties cannot know if its experience

of "snorky" is the same as Hobbes' because verbal explanation is inadequate. We learn

concepts like 'green' through ostensive definition, the repeated pointing to green objects by

others and labeling them such. I can identify green objects because 'green' has been

instilled in me through ostensive definition. Hobbes thinks fire smells snorky because he

taught himself to identify a particular odor with that term. No one, not even his best friend

Calvin, can ever know exactly what Hobbes means when he calls something 'snorky' or

'green' or anything for that matter because of the private, enclosed, and ineffable nature of

conscious experience.

Or, to use a literary example, take the following dialogue from Brideshead Revisited

between the narrator Charles Ryder and his friend Sebastian Flyte as they drunkenly

describe the taste of a wine.

"It is a little, shy wine like a gazelle." "Like a leprechaun." "Dappled, in a tapestry meadow." "Like a flute by still water." " A wise old wine." " A prophet in a cave" " A necklace of pearls on a white neck." "Like a swan." "Like the last Unicorn." [Quoted in Lanchester 2008]


Boozy shenanigans aside, Ryder and Flyte certainly experience something that compels

them to describe the wine in such comically distinctive ways. However, even if I do know

what pearls on a white neck actually tastes like (assuredly, I do not), I will never know what

it is like for Ryder to experience such a sensation.

Much of Wittgenstein's philosophy is couched in the limitations of natural language.

In §293 of Philosophical Investigations he writes:

Suppose everyone had a box with something in it: we call it a 'beetle'. No one can look into anyone else's box, and everyone says he knows what a beetle is by looking at his beetle. Here it would be quite possible for everyone to have something different in his box. One might even imagine such a thing constantly changing. But suppose the word 'beetle' had a use in these people's language? If so, it would not be used as the name of a thing. The thing in the box has no place in the language-game at all; not even as a something: for the box might even be empty. No one can 'divide through' by the thing in the box; it cancels out, whatever it is. [Wittgenstein 1974]

"Wittgenstein's Beetle," as this thought experiment is commonly referred, shows that our

explanatory capacity is limited by natural language. The beetle represents the picture we

form in our minds when we think about or perceive something. Everybody's beetle might

be completely different, or they might all be the same. The point is that no one will ever

know for sure. Simply describing what it is to someone else is insufficient for intelligibility

because the only reference point available is my own beetle. I have a beetle and you have a

beetle, but mere linguistic congruence does not give rise to semantic equivalence.17

Another treatment of this issue is known as the inverted spectrum argument, dating

back to John Locke's18 17th century empiricism. If two people with normally functioning

17 It should be noted that Wittgenstein's philosophy of mind is not in line with my own. However, his Beetle example is still pertinent to my argument. 18 See Locke 1689/1996, 169.


visual faculties observe a fresh strawberry, both inevitably describe it as red. However,

there is the distinct possibility that the qualia of the two observers are completely different, or

in this case, inverted. Who is to say Smith's red is not Jones' green? If Jones spends his

entire life perceiving red qualia as green qualia, for him, green is red. In theory, remapping

whatever neurological apparatus pertains to color perception could simulate this situation,

but for the sake of argument, the logical possibility of inverted qualia is sufficient. The

question is whether it is empirically possible to prove the veracity of the situation. I argue later

that it is not.

In the same vein as the inverted spectrum, Frank Jackson and Thomas Nagel argue for

the necessity of qualia and dispute physicalism with what is known as the knowledge

argument. In his famous thought experiment, Jackson supposes that Mary, the world's

foremost expert on neuroscience, spends her entire life in a monochrome room and has

never been exposed to any color other than black, white, and shades of gray. She has

always worn opaque, black stockings, a white lab coat, and black gloves. Even though she

knows more than anyone else about the physical processes behind color perception, there

is nothing Mary can do (short of taking drugs) to experience what color is like. 'Blue' means

nothing more than a 475nm wavelength, and will always mean that unless she ever leaves

the room. The physical facts behind a particular aspect of subjective experience are not the

same as the experience itself. When Mary is released and actually sees the blue sky for the

first time in her life, she learns what subjective color experience is. Even though she knew

every physical fact there is to know about color and human perception, her knowledge was

incomplete. Mary learns something an objective physical fact could not teach her. Jackson


shows that "physicalism leaves something out" something unaccounted for by objective

data (Jackson 1982).

A real-life take on Jackson's conclusions comes from a 2008 article in The New

Yorker. Here are two molecules, identical in every way except that they are inverted on the

y-axis:

[Figure 4 and Figure 5: the two mirror-image molecules]

Any competent chemist can fully understand the molecular structure of both examples, and

modern science can tell us everything else we could possibly know about them. Well,

everything except for the fact that Figure 4 smells like spearmint, while Figure 5 smells like

caraway, two completely disparate and distinct odors. When it comes to phenomenal

experience, we are baffled: "When scientists create new molecules in the laboratory, they

may know every detail of a molecule's structure yet have no clue about what it will smell

like." Physical ism falls flat when it comes to the hard problem; it "risks missing the

fundamental truth of all smells and tastes, which that they are, by definition, experiences"

(Lanchester 2008, 121).

Coming to the same conclusion, Thomas Nagel's take on the knowledge argument

asks what it is like to be a bat. As we know, the sensory faculties of bats and


humans are much different. Bats rely on echolocation or sonar, emitting high-frequency

shrieks and perceiving the world around them through the reflection of the sound.

Echolocation is incomparable to any human faculties and is thus unimaginable; "though

clearly a form of perception, it is not similar in its operation to any sense that we possess,

and there is no reason to suppose it is subjectively like anything we can experience or

imagine" (Nagel 1979). I suppose I could blindfold myself and try to find my way by yelling

and carefully listening for an echo, but it still wouldn't be the same since bats don't hear like

we do. Furthermore, to imagine having webbed arms, eating insects, having poor vision, or

spending the daytime sleeping upside down is to imagine bat behavior from the point of

view of a human, not bat experience, and these concepts are surely not the same. In short,

it is impossible to conceive of what the inner life of a bat might be like.

Nagel chose bats because they are mammals and are thus relatively close to humans

phylogenetically. He assumes, controversially, 19 that, like humans, bats are

on some level conscious and aware of their own being.20 However, while it

isn't difficult to imagine what it might be like to be another person,21 bats

are strange enough to be utterly unfathomable. I know what it feels like to

hear Bohemian Rhapsody, so it is within my realm of imagination to think

about what it might be like for someone else to hear the same song. Humans are

19 See Janzen 2006; Hacker 2002 20 Nagel does not mean to say that bats fly around soliloquizing, of course. He merely means that there must be something it is like to be a bat in a comparable sense that there is something it is like to be a person. 21 Of course such a feat is impossible to perform accurately; subjective conscious experience is private and inaccessible. But considering how all (normally functioning) humans share the same perceptual and cognitive capacities, imagining what it might be like to be someone else is possible.


cognitively closed to the possibility of imagining sonar perception. Thus, we are inextricably

tied to our own experiences and utterly incapable of conceiving of anything beyond our

perceptual horizon. Unless the Batboy story in the Weekly World News (Figure 6) is true, no

human can imagine the experience of bat qualia.

The bat brain, on the other hand, is perfectly understandable, "a domain of

objective facts par excellence - the kind that can be observed and understood" within the

limitations of our own perceptual horizon (Nagel 1979, 172). Science informs us about our

own brains and bat brains in the same ways. We can analyze brain states by correlating

brain activity with patterns of observable behavior; such methodology is the backbone of

neuroscience. However, neural correlation ignores qualia, the hallmark of conscious

experience. Exactly what it is like to be a bat is an ineffable property knowable only by a

bat, and no amount of raw data about brains and behavior can change that. Nagel

contends:

Without consciousness the mind-body problem would be much less interesting. With consciousness it seems hopeless. The most important and characteristic feature of conscious mental phenomena is very poorly understood. Most reductionist theories do not even try to explain it. And careful examination will show that no currently available concept of reduction is applicable to it. Perhaps a new theoretical form can be devised for the purpose, but such a solution, if it exists, lies in the distant intellectual future. [Nagel 1979, 166]

In short, the subjective character of consciousness is an unavoidable obstacle for any

discipline involving the study of the mind. He challenges physicalism to stop skirting the

qualia issue, either by conceding ignorance or by giving up entirely.

Nagel's point is that the peculiar and ineffable nature of first-person experience is

incompatible with any sort of objective standpoint. Nagel blames these disharmonious


standpoints for causing "many of the basic problems of philosophy: we are torn between

these two ways of seeing the world and cannot satisfactorily integrate them into a coherent

conception of things," and current cognitive-philosophical theories are inadequate means of

reconciliation (McGinn 1997b, 89).

Another way to think about how to explain qualia and consciousness is by imagining

a being completely devoid of phenomenal experience. David Chalmers does so by talking

about zombies. Not Hollywood zombies, per se, but philosophical zombies (p-zombies) who

have no appetite for brains and look just like regular people. P-zombies are exact physical

copies of people, down to every last atom, neuron, and synapse. My p-zombie twin is

indistinguishable from me not only in appearance but also in behavior. My own mother

wouldn't be able to tell us apart. The key difference between us is that my p-zombie twin is

completely devoid of conscious experience. Even though he (it?) displays what seems to be

conscious behavior, there is an utter lack of phenomenology. When my p-zombie twin

enjoys a freshly baked chocolate chip cookie (my favorite), all the same physical processes

occur as when I eat the same cookie. But there is no actual taste experience occurring, only

overt behavior. There is nothing it is like to be a p-zombie.

This is a difficult position to defend, however, and there are compelling

counterarguments standing in the way. For starters, any molecule-for-molecule replica of a

conscious person would surely be conscious. Unless some sort of panpsychism22 or

Cartesian dualism is correct, it is undeniable that the configuration and quantity of

molecules comprising a person must have some determining effect on the existence of

22See Nagel 1979.


conscious experience. If consciousness is contingent on the physical, then the x number of

molecules in orientation y must cause awareness in my p-zombie twin because it does so

for me. Chalmers skirts this issue by simply positing a p-zombie world parallel to ours; "the

question is not whether it is plausible that p-zombies could exist in our world, or even

whether the idea of a p-zombie replica is a natural one; the question is whether the notion

of a p-zombie is conceptually coherent" (Chalmers 1996, 98). Of course it is extremely

unlikely that I actually have a phenomenologically absent p-zombie twin, but

Chalmers argues that its existence is logically possible. If p-zombies are indeed logically

possible, then consciousness cannot be broached from a third-person viewpoint, thus

strengthening the knowledge argument and frustrating cognitive scientists even further.23

The logical possibility of zombies illustrates the point I aim to advance. There is no

way for me (or my mother) to know that my zombie twin is actually a zombie. Because he is

physically, functionally, and behaviorally identical to me, no test could definitively tell us

apart. Now let's assume that computationalism is false and that consciousness cannot arise

from formal processing. Android Andrew has a computer brain and looks, functions, and

behaves just like me. In short, Android Andrew is my zombie twin. If we both took the

Turing test, both of us would perform equivalently, because we are, for all intents and

purposes, identical. As we shall see, consciousness falls outside the realm of human

explanatory capacity and as a result, no test devised by humans could be comprehensive

enough to verify the existence of a sentient mind.

23 See Chalmers 2002b.


On a related and more lighthearted note, when I was very young I climbed out of

bed, made my way down the hall to the master bedroom, and awakened my mother.

"What's wrong?" she asked, to which I responded, "I think I have a finger infection." Since I

was still at the age when curiosity and ignorance often lead to personal injury, my

concerned mother turned on the light and examined my finger. "Nothing seems wrong,

dear. Are you sure it's your finger?" "Maybe it's my hand, or maybe my ... " Before I could

finish my sentence I proceeded to vomit all over my parents' bed. Torn between laughter

and frustration, my mother informed me, "The word is 'nauseous'. When you feel that way

you are nauseous, honey." But even though I didn't yet have a word for it, I was still

nauseous. There is a disconnection between verbal description and phenomenal

experience. As we shall see later, this division is known as the explanatory gap and is a

crucial aspect of consciousness theories across various fields.


CHAPTER THREE

DO ANDROIDS DREAM OF ELECTRIC SHEEP? WE'LL NEVER KNOW FOR SURE

THE EXPLANATORY GAP

In quantum physics, the Heisenberg uncertainty principle states that a particle's

position and momentum cannot both be known simultaneously with arbitrary precision. We

can know one or the other, but run into problems when we try to put them together.
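For readers who want the formal statement behind this analogy, the principle is usually written as the following inequality (a standard textbook formulation, included here only for reference; it plays no further role in the argument):

```latex
% Heisenberg uncertainty relation: the uncertainties in position (\Delta x) and
% momentum (\Delta p) cannot both be made arbitrarily small, since their product
% is bounded below by a constant.
\[
  \Delta x \,\Delta p \;\geq\; \frac{\hbar}{2}
\]
```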

Consciousness theory falls victim to a similar problem. Introspection gives us access to the

brilliant world of qualia and phenomenal experience but tells us nothing about our internal

processes underlying our inner life. Conversely, external observation has provided us a

wealth of information about how the brain works and interacts with the body. But it tells us

nothing about what it is like to be the test subject.

This fundamental divide is known as the explanatory gap. Joseph Levine coined this

term in support of Nagel and the knowledge argument. Take three propositions: (1) 'Pain is

the firing of C-fibers', (2) 'Heat is the motion of molecules', and (3) 'To be in pain is to be in

state F'. (2) is true by necessity; there is "no possible world in which [it is] false" (Levine

2002, 354). Heat is, by definition, the motion of molecules, so any other definition simply

wouldn't be heat. However, there is a "felt contingency" about statements (1) and (3); it is

possible to imagine pain without C-fiber firing and pain without being in a particular


functional state. The sensation of pain is the pain, so we can imagine a world with no C-

fibers but a phenomenon equivalent to what we know as pain. Unlike in the heat example, no

distinction between the appearance of a phenomenon and the phenomenon itself can be

drawn. The reason why heat and pain cannot be explained in the same way comes down to

identity.

If what it's particularly like to have one's C-fibers fire is not explained, or made intelligible, by understanding the physical or functional properties of C-fiber firings - it immediately becomes imaginable that there be C-fiber firings without the feeling of pain, and vice versa. We don't have the corresponding intuition in the case of heat and the motion of molecules - once we get clear about the right way to characterize what we imagine24 - because whatever there is to explain about heat is explained by its being the motion of molecules. So, how could it be anything else? [Levine 2002, 358, footnote added]

The proposition "heat is the motion of molecules" and its contingent corollary express a

particular identity that is fully explainable, leaving out nothing of importance. Our scientific

knowledge can perspicuously explain how molecular motion causes the phenomenon

known as heat. The pain statement, on the other hand, leaves an explanatory gap between

the causal role of C-fiber firing and the way pain feels.

Both physicalism and functionalism fail to account for this gap. And since the gap is

left unexplained, consciousness is not intelligible in the same sense that 'heat is the motion

of molecules' is. For a proposition to be intelligible, "demand for further intelligibility is

24 One might ask, "What about how heat feels? Isn't that just as unexplainable as pain sensation?" The answer to this question is yes, but the ineffability of heat experience is not what Levine was driving at. His point is that the science behind the transfer of heat is fully understood and is taken as a "primitive, brute fact about the universe," much like the value of the gravitational constant G is fully understood to be 6.67428 x 10^-11 N (m/kg)^2. In both cases, "there is nothing more we need to understand" (Levine 2002, 356).


inappropriate" in that there is nothing left to be explained about it (Levine 2002, 357). 'Pain

is the firing of C-fibers' is not intelligible because of the necessity of further explanation.

As shown in the previous chapter, physicalism only accounts for third-person

observable behavior. We can induce pain in test subjects and observe the physiological,

neurological, and behavioral responses. But such experiments tell us nothing about the

first-person experience of consciousness: "Even hi-tech instruments like PET scans only give

us the physical basis of consciousness, not consciousness as it exists for the person whose

consciousness it is" (McGinn 1999, 48). Heat caused by molecular motion is fully

explainable in physical terms alone; there is no explanatory gap dividing the proposition

'Heat is the motion of molecules'. As for pain, the best we can do is correlation between C­

fiber firing and pain behavior, while the intermediate phenomenological experience is left

unexplained.

Functionalism, the basis for the computational theory of mind, fails to explain

consciousness as well. Instead of identifying consciousness as physical processes,

functionalists identify it by its causal roles. Pain, explained by functionalism, is "a higher­

order property of physical states which consists in having a certain pattern of causes and

effects, as it might be mediating bodily injury and avoidance behaviour" (McGinn 1991,

209). By these lights, the brain is just a UTM, so pain is the brain state mediating relevant

inputs and outputs. Pain is thus realizable in any medium insofar as it is in the correct

configuration. A robot that implements the causal properties of pain is in pain according to

the functionalist model.


Functionalist definitions of mental states are not necessarily false, but they are

insufficient for a comprehensive explanation of consciousness. The causal role pain plays is

vital to a full understanding, but the phenomenological aspect is left wide open. Think

back to Chalmers' p-zombies. They are functionally equivalent to humans; when they stub a

toe they cringe just like the rest of us. But in the end, all that occurs is pain behavior. Since

there is nothing it is like to be a zombie, under no circumstances does the zombie feel

anything. We know what pain is because we experience it intimately within our inner lives,

but "because the qualitative character itself is left unexplained by the physicalist or

functionalist theory that it remains conceivable that a creature should occupy the relevant

physical or functional state and yet not experience qualitative character" (Levine 2002,

359). Accordingly, pain is more than just the functioning of input and output mechanisms.

As Irving Krakow argues, functionalists "talk about pain, but they ignore the reality that the

felt quality of pain is conceptually independent of the empirical possibility that the

neurological correlate of pain qualia might play the 'causal role' they want to attribute to

those qualia" (Krakow 2002, 97). The unmistakable feeling of pain is what mediates input

(tissue damage) and output (pain behavior), and that phenomenological aspect is

completely ignored.

Krakow quotes Wesley Salmon's explanation of causal interaction:

Let P1 and P2 be two processes that intersect with one another at the space-time point S, which belongs to the history of both. Let Q be a characteristic that P1 would exhibit throughout an interval (which includes subintervals on both sides of S in the history of P1) if the intersection with P2 did not occur; let R be a characteristic that process P2 would exhibit throughout an interval (which includes subintervals on both sides of S in the history of P2) if the intersection with P1 did not occur. Then the intersection of P1 and P2 at S constitutes a causal interaction if: (1) P1 exhibits the characteristic Q before S, but it exhibits a modified characteristic Q' throughout an interval immediately following S; and (2) P2 exhibits a modified characteristic R' throughout an interval immediately following S. [Krakow 2002, 76]
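Salmon's two conditions can be restated compactly with the same symbols (my own paraphrase of the quoted definition, not Salmon's notation):

```latex
% P1 and P2 are processes that intersect at the space-time point S. The
% intersection is a causal interaction when each process changes its
% characteristic at S: Q becomes Q' in P1, and R becomes R' in P2.
\[
  \text{Interaction}(P_1, P_2, S) \;\Longleftrightarrow\;
  \bigl(P_1\!: Q \text{ before } S,\; Q' \text{ after } S\bigr)
  \;\wedge\;
  \bigl(P_2\!: R \text{ before } S,\; R' \text{ after } S\bigr)
\]
```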

In the case of pain caused by a stubbed toe, P1 is my brain as it functions normally, say,

while walking through a doorway, and P2 is the act of my bare toe unexpectedly striking the

doorframe. P1 and P2 intersect at time S, and their intersection causes Q (characteristics of

the brain while not in pain) to switch to Q' (characteristics of the brain while in pain), as

well as R (my toe not striking the doorframe) to switch to R' (my toe striking the

doorframe). This functionalist explanation of pain accounts for all observable behavior

involved in pain function, but still fails to account for first-person experience. The pain itself

(as opposed to its causal or behavioral correlates) cannot be treated as P1 or P2 because it

cannot be pinpointed spatially. It is an unobservable, non-spatial property that exists solely

within the mind. As we shall see in the following section, in addition to the problems of

irreducibility and ineffability, the issue of spatiality is a major barrier to closing the

explanatory gap because human knowledge is bounded by space-time.

MCGINN'S THEORY OF COGNITIVE CLOSURE

According to legend, when Louis Armstrong was asked to explain what jazz is, he

responded, "If you gotta ask, you ain't never gonna know." A musicologist could give a

textbook definition of jazz and Mr. Armstrong himself could do his best to describe it. But in

the end, jazz is intelligible only after it is experienced first-hand. Much to the chagrin of

cognitive scientists, the same principle applies to consciousness. Levine argued for the

existence of an explanatory gap dividing first-person subjective experience and objective


scientific facts about cognition, and Nagel pointed out that such a schism will forever

alienate proponents of both sides: "Absurdity comes with the territory, and what we need is

the will to put up with it" (Nagel 1986, 11). Only recently have some maverick philosophers

come out and flatly admitted that the mind-body problem is unsolvable.

Arguments from every imaginable angle abound, but we are no closer to solving the hard

problem than we have ever been.

Such a viewpoint, typically called New Mysterianism (henceforth NM), can easily be

dismissed as mere defeatism, on the assumption that the problem will eventually be solved with

enough time and energy devoted to its solution. Steven Pinker, a recently converted mysterian,

explains the core tenets:

And then there is the theory put forward by philosopher Colin McGinn that our vertigo when pondering the hard problem is itself a quirk of our brains. The brain is a product of evolution, and just as animal brains have their limitations, we have ours. Our brains can't hold a hundred numbers in memory, can't visualize seven-dimensional space and perhaps can't intuitively grasp why neural information processing observed from the outside should give rise to subjective experience on the inside. This is where I place my bet, though I admit that the theory could be demolished when an unborn genius--a Darwin or Einstein of consciousness--comes up with a flabbergasting new idea that suddenly makes it all clear to us. [Pinker 2007]

NM is potentially the most realistic prognosis around; materialism is an eminently agreeable

philosophy except for its treatment of consciousness, and functionalism is a non-starter.

Colin McGinn is at the forefront of the NM movement, and he defends his view as non-

defeatist. His goal is to figure out precisely why it is that we can't understand

consciousness, not give up on it entirely:

Consciousness depends upon an unknowable natural property of the brain. What this means is that I am not going to try to reduce consciousness to those mundane known properties of neurons that materialists hope to get by with. But neither am I going to conceive of consciousness as something apart from the brain, something with no further analysis or explanation. Consciousness is rooted in the brain via some natural property of brain tissue, but it is not explicable in terms of electrochemical processes of the familiar kind. [McGinn 1999, 43]

We can't deny the possibility that we are cognitively incapable of fully understanding

consciousness. Perhaps the mind seeking the solution is part of the problem; the hard problem

might be beyond the realm of what is humanly possible.

I do not reject materialism, but simply ignoring or attempting to eliminate qualia is

insufficient. Qualia are an undeniable aspect of conscious experience and are crucial to any

discussion about the hard problem. Chalmers calls my position "Don't-have-a-clue

materialism" since I believe qualia are caused by physical processes, but cannot be

explained as such. The mind-body problem is unsolvable, qualia will never fully be

understood, and we will never know if an artificial intelligence ever achieves sentience.

In short, there is no conceivable bridge across the explanatory gap. The best we can do

is correlation. PET scans can measure brain activity, allowing scientists to match particular

brain states with exhibited behavior. For instance, a 2001 study measured the brain

function of twelve members of the Free Evangelical Fundamentalist Community, a religious

group in Germany. Each subject reported having had transcendent experiences

during religious recitation. Brain activity was measured during an intense prayer session,

and the results showed that religious experience "activated a frontal-parietal circuit,

composed of the dorsolateral prefrontal, dorsomedial frontal and medial parietal cortex"

(Azari, et al. 2001). The neural data is then correlated with the subjects' verbal descriptions

of their own experiences in order to understand exactly what happens in the brain during


prayer activity. There is no doubt that this sort of experiment helps explain certain

properties of brain function. To physicalists and functionalists, correlation is sufficient for a

full understanding of consciousness. I disagree because, as discussed earlier, the limitations

of natural language restrict us from comprehensively explaining first-person experience.

Correlating neural data with verbal descriptions and observable behavior is not comprehensive; the

results of such studies fail to explain why we have phenomenal experience. The

explanatory gap remains between objective data and first-person experience.

But what exactly is the limiting factor that restricts us from a materialistic

understanding of the mind? McGinn argues that human understanding is cognitively closed

to certain aspects of the universe, consciousness being the prime example:

The materialists are right to think that it is some property of the brain that is responsible for consciousness, but they are wrong in the kind of brain property they select. The dualists are right to doubt that the brain as currently conceived can explain the mind, but they are wrong to infer that no brain property can do the job. Both views overestimate our knowledge of mind, and brain, presupposing that our current conceptions are rich enough to capture the essence of the mind-brain link. I maintain that we need a qualitative leap in our understanding of mind and brain, but I also hold that this is not a leap our intellectual legs can take. [McGinn 1993, 28-29]

Our "intellectual legs" can only stretch so far. The notion that humans possess the capacity

to know everything there is about the universe is a wild assumption that should not be

accepted out of blind anthropocentric arrogance. Because technology has progressed at such an

alarming rate over the last century, this assumption is widespread. After all, we put a man

on the moon, high-tech communication computers (read: mobile phones) in our pockets,

and artificial organs in our bodies. What can possibly limit our epistemic growth?


In 1965, Intel co-founder Gordon Moore charted the growth in the number of

components on integrated circuits over time, leading him to conclude, "The number of transistors

incorporated in a chip will approximately double every 24 months" (Intel.com). Recent

studies show that Moore's predicted rate of progression is actually slower than reality;

according to University of Washington professor Edward Lazowska, "The ingenuity that

computer scientists have put into algorithms have yielded performance improvements that

make even the exponential gains of Moore's Law look trivial" (Lohr 2011). Arguably the

most optimistic technologist around is AI innovator Ray Kurzweil. According to his recent

book The Singularity is Near, by 2023 one thousand dollars will buy the processing

capacity25 of a human brain. By 2059, one cent will buy the computational capacity of the

entire human race (Kurzweil 2005). He contends that every aspect of consciousness will be

well understood by science and completely explainable in objective terms, all because of

advances in computational technology. We shouldn't worry about the hard problem for

much longer; computers will eventually close the explanatory gap.
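For concreteness, the doubling rate Moore described can be written as a simple exponential (an illustrative formula only; the baseline count N_0 and year t_0 are arbitrary placeholders, not figures from Moore or Kurzweil):

```latex
% Transistor count t years after a baseline year t_0, assuming one doubling
% every 24 months (two years).
\[
  N(t) \;=\; N_{0} \cdot 2^{\,(t - t_{0})/2}
\]
% For example, ten years of doubling every two years multiplies the starting
% count by 2^{5} = 32.
```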

What Kurzweil and his supporters must realize is that human cognition is not

unlimited.26 There are certain concepts we might never grasp simply because of an

epistemic horizon. To illustrate, look at the graph of the mathematical function f(x) = 1/x

(Figure 7). The function is asymptotic, meaning its curve draws ever closer to but never crosses

25 The key word is 'processing'. His claims only have merit insofar as CTM holds and the brain is a computational entity. 26 Actually, Kurzweil would not disagree with this point. He claims that computers and Al will eventually augment human knowledge. Without the aid of technology, " ... the architecture of the human brain is ... profoundly limited. For example, there is only room for about one hundred trillion interneuronal connections in each of our skulls ... Machines will be able to reformulate their own designs and augment their capacities without limit" (Kurzweil 2005, Amazon Kindle Location 692)


either axis. The progression of human knowledge is asymptotic about

this horizon, in that there are naturally imposed limits on our cognitive

capacity that cannot be breached. "No finite mind could encompass all of

space and time," McGinn writes; accordingly so, a solution to the mind- Figure 7

body problem lies out of reach beyond the axis (McGinn 1999, 33).
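To unpack the metaphor, the relevant behaviour of f(x) = 1/x is an elementary fact of calculus (stated here only to make the analogy explicit):

```latex
% For x > 0 the curve of f(x) = 1/x approaches both axes but never reaches
% either: it stays strictly positive and strictly finite.
\[
  \lim_{x \to \infty} \frac{1}{x} = 0, \qquad
  \lim_{x \to 0^{+}} \frac{1}{x} = \infty, \qquad
  \frac{1}{x} > 0 \ \text{ for all } x > 0 .
\]
```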

Human cognitive understanding is limited to concepts that are explainable in terms

of space and time. Our universe is multidimensional; space has three dimensions and time

has one. Conceiving of a dimension other than space-time is only possible through

theoretical physics, but introspectively imagining such a world is impossible. This is because

the entirety of our experience is spent interacting within these dimensional boundaries,

barring certain accounts of psychotropic drug experiences, of course. Perception

occurs when our sensory faculties register stimuli in the world around us, and transmit

them as electrical impulses to the brain, thereby giving rise to a particular

phenomenological sensation. What is perceived is immediately apprehensible through

introspection, or as it is commonly called, the mind's eye. When I perceive a chair sitting

five feet in front of me, the image of the chair and whatever else is immediately perceivable

are the objects of my consciousness. The same applies when I am imagining a chair even

when there isn't one present, or when I am dreaming, or hallucinating. Regardless of

circumstance, the locus of attention is the object of consciousness.

The object of consciousness is non-spatial; the chair I am looking at is extended in

space but my mental projection is not. We can say that the concurrent brain activity occurs

in a physical space (i.e. in the visual cortex, an inch from the back of the head, in three


dimensions, etc.), but the felt experience itself has no measurable volume, location, mass,

or shape; "It falls

under temporal predicates and it can obviously be described in other ways - by specifying

its owner, its intentional content, its phenomenal character," but there is no spatial

dimension to consciousness (McGinn 1997a, 98).

The unembodied nature of consciousness is what causes problems for empirical

science. There is a disconnect between the non-spatiality of consciousness and the spatial

nature of the external world. The property distinction between the extended world and

the unextended mind has been at the root of the mind-body problem since Descartes.

McGinn seeks to revitalize and update the Cartesian dilemma: "While consciousness is a

nonspatial phenomenon, human thought is fundamentally governed by spatial modes of

representing the world, so that our ways of thinking tend to force consciousness onto a

Procrustean bed of broadly Euclidean design" (McGinn 1997b, 108). The objects of

consciousness are perceptually inaccessible; observing the brain says nothing about

experience. Consciousness is thus perceptually closed, meaning we cannot expound it in terms of

the physical.

It is safe to say that the object of consciousness is perceptually closed and is only

accessible through introspection. However, perceptual closure does not entail cognitive

closure. Electrons exist, surely, even though it is impossible to measure them because the

very act of observation alters the electron's path. We can't observe genes, quarks, or atoms

either, but we can postulate them because the explicability of other observable phenomena

depends on their existence. For example, trait inheritance can be observed but cannot be


explained without genes. There is enough observable evidence (like DNA molecules) to

confirm the legitimacy of genetics without ever actually perceiving a gene; "We can infer

hidden structure to explain what overtly appears" (McGinn 1999, 141).

But consciousness is a different story. Inference is not a sufficient condition for a

comprehensive explanation of first-person experience. Define P as the set of empirically

evident brain properties that clarifies the hard problem and fully explains consciousness.

McGinn contends that P cannot be inferred like genes and electrons because of the

homogeneity that constrains sets of data:

Inference to the best explanation of purely physical data will never take us outside the realm of the physical, forcing us to introduce concepts of consciousness. Everything physical has a purely physical explanation. So the property of consciousness is cognitively closed with respect to the introduction of concepts by means of inference to the best explanation of perceptual data about the brain. [McGinn 1991, 13]

Attempting to explain phenomenal consciousness with observable data is like

comparing apples and oranges; the two sets of information are inherently

incompatible. Physical concepts cannot describe psychological concepts and vice

versa. Neural correlation experiments show us nothing but patterns of behavior and

concurrent brain states.

As Levine argued, observable data is fully explainable in physical terms, constrained

homogeneously. It is a brute, empirically proven fact that the gravitational constant G is

6.67428 x 10^-11 N (m/kg)^2. However, attempting to explain G in phenomenological terms is

scientifically meaningless. Yes, I can feel the effects of gravity upon my body and the

objects I interact with, and sure, I can do my best to verbally describe how it feels to not be


floating above the ground." but the explanation is incomplete without physical proof.

Consciousness is homogeneously constrained in the same way: "Our modes of concept

formation, which operate from a base in perception and introspection, cannot bridge the

chasm that separates the mind from the brain: They are tied to the mental and physical

terms of the relation, not to the relation itself" (McGinn 1997b, 106).

Since we can't explain conscious phenomena in terms of the physical, and

what we access introspectively cannot be fully articulated because of the limits of natural language, it follows

that humans are cognitively closed to the complete apprehension of consciousness.

Consciousness is closed off by the same epistemic boundaries that forbid us from ever

knowing what bat echolocation feels like and that bar blind people from the concept of

sight: "our concepts of consciousness just are inherently constrained by our own form of

consciousness, so that any theory the understanding of which required us to transcend

these constraints would ipso facto be inaccessible to us" (McGinn 1991, 9). Consciousness is

only accessible through introspection, but introspection can tell us nothing about brain

function. Perceptive analysis, on the other hand, can access brain function but not

consciousness. Since there is no intermediate method to speak of, P on the whole is

unexplainable.

That is not to say P does not exist, however. NM is not substance dualism;

materialism holds but it simply cannot account for consciousness. Cognitive closure does

not entail nonexistence. Perhaps "Martians or demigods might have better luck"

27 I'm sure such a description is possible. It would sound ridiculous, but it is surely possible. But in the end, verbal descriptions are understood fully only by the utterer (see figure 2 in the second chapter of this paper).


understanding P, as Dennett writes in his criticism (Dennett 1991b). Something explains the

phenomenon of consciousness, but we can't grasp it. There is still much to learn from the

study of consciousness, and neuroscience might bring us closer to solving the mind-body

problem than ever before, but the hopes for a unified theory of consciousness are slim to

none.

CRITICISM AND RESPONSE

Criticism of McGinn's work is far-reaching. Opponents tend to boil down cognitive

closure to a simple syllogism:

P1. Introspection alone cannot fully explain P.
P2. External perception of the brain cannot fully explain P.
P3. Appeals to inference do not hold because of homogeneity constraints.
C1: Therefore, P is humanly unexplainable by any extant means of apprehension.
C2: P can only be apprehended from a "God's-eye point of view" (Flanagan 1992).

P1 is uncontroversially true. My own first-person experience tells me nothing about

neurons or synapses or anything else relevant to a unified theory of consciousness. If I

never learned that there is lumpy gray matter inside my head, I would have no reason to

ever assume that there is!28 P2 is the problem. McGinn is frequently condemned as

defeatist or even Luddite, accused of discounting the rapid progression of human

technology and knowledge. On this view, a solution simply has not been found yet: "the

solution may be just around the corner" (McGinn 1997b, 108). Continued research in

cognitive science and related fields will eventually solve the mind-body problem and

28 The point about not knowing about the brain is borrowed from Flanagan 1992.


provide us with a unified theory of consciousness, allowing us to create sentient robots that

may or may not enslave humanity. In response, such criticism clearly misses the point NM

attempts to make. McGinn's contention is that the mind-body problem will never be solved

for sure because there is no definitive way of knowing if the solution is in fact correct. There

is no test comprehensive enough to confirm true sentience; we can only observe behavior.

Furthermore, some argue that the truth or falsity of P2 depends on interpretation.

By McGinn's lights, P2 is unequivocally true because direct observation of a brain indeed

tells us nothing about conscious experience. On the other hand, the premise is

controversial because the "'unobservability' of the link between P and consciousness

prevents us from inferring that Pis in fact where the link resides" (Flanagan 1992, 112). P3

is invoked in this point; Flanagan calls into question why inference from unobservable data

is insufficient for a theory of consciousness. McGinn invokes Nagel's claim that "it will

never be legitimate to infer, as a theoretical explanation of physical phenomena alone, a

property that includes or implies the consciousness of its subject" because consciousness

and observable brain states are intrinsically different properties that cannot be used to

explain each other (Nagel 1979, 183).

To Flanagan, however, neural correlation is sufficient for a complete explanation:

We are not looking for an explanation of 'physical phenomena alone', at least not physical phenomena narrowly understood. There is a prior commitment to the existence of consciousness. Thus both brain facts and facts about consciousness are on the table to be explained. We then infer that the constellation of a certain set of phenomenological reports of restricted range ('tastes sweet') correlate with certain sorts of brain activity (activation in the relevant pathways), and we infer, given an overall commitment to naturalism, that the latter explains the former. [Flanagan 1992, 113]


What makes consciousness so exceptional that the rules of inference used for electrons,

genes, etc. no longer apply? Electrons, for example, are never actually observed in

experiments that invoke them (i.e. cloud chambers), but we postulate their existence

because other observable data rides on their existence. Similarly, we don't see

consciousness during brain experiments but consciousness is postulated because the

evidence derived from observed brain activity and overt behavior is incoherent without

the existence of consciousness. Therefore, P is explainable through inference.

Flanagan fails to account for what I call the behavior barrier. The behavior barrier is

an updated appeal to the classical philosophical dilemma of other minds combined with

Chalmers' argument for the logical possibility of p-zombies. How can we be certain that

something that behaves as if it is conscious is actually in possession of any sort of

phenomenal experience? While arguments for NM and cognitive closure are admittedly

inductive, postulating the existence of an inner life is inductive as well. Neural correlation

studies show that the subject is probably in some conscious state, but the only real proof

the experimenters have is an analogy to their own experience. Researcher Brown knows he

is a conscious, sentient agent because he can introspect, the same way everybody else

knows that they are not mindless automata. Test subjects Black and Gray are also capable

of introspection and are thus conscious. Black and Gray are hooked up to a device

that provides a thorough analysis of all brain activity. The device is small and noninvasive, so as

not to cause nervousness or discomfort that would distort test results. Brown injects a dose of the

little-known chemical 11 that causes them both to scream in agony that the area of injection

feels like it's on fire. Brown correlates the neural scan data and the subjects' behavior in


order to postulate that chemical 11 causes a painful and localized burning feeling, and this

feeling is concurrent with certain brain patterns. Is this test conclusive?

As Chalmers 1996 and 2002b show, there is a logical possibility for the existence of

p-zombies who feel nothing at all, but are compositionally equivalent to humans who do

feel a burning sensation from chemical 11. Black and Gray show equivalent neural and

behavioral activity during the test, but while Black feels burning pain, Gray feels nothing at

all because he is a cybernetic organism (cyborg) functionally and behaviorally equivalent to

humans but entirely devoid of phenomenal experience. What is most interesting is

that Dr. Brown will never know this. Gray is so ingeniously designed that he is

capable of passing even the most trying Turing test questions because he is

programmed with a full life's worth of memories and emotional response behavior. Indeed,

Gray is unaware that he is a cyborg because he is not actually aware of anything at all. If

Brown is somehow compelled to ask whether Gray is a cyborg, Gray's behavioral software is

programmed to brush off such questions as ridiculous, just as any other human would do.

My point is that even if the world's foremost neuroscientists claim to find a unified

theory of consciousness P, there is no definitive way to test its veracity. When I asked

about the possibility of ascribing consciousness to an artifact, Dreyfus told me:

Once you've got to the point where you have a theory of consciousness and you're implementing it and it's behaving just like a human being, then whether it's actually conscious or not will have exactly the same structure as whether you are actually conscious or not when I'm dealing with you. If that can't be settled, then maybe it's a p-zombie. [Dreyfus 2011]

His point agrees with mine; the explanatory gap remains indefinitely unclosed because it is

impossible to "climb in" to someone else's consciousness and find out for sure. In the film


Being John Malkovich, Schwartz, the protagonist, discovers a secret portal that can

transport him into the mind of actor John Malkovich, allowing him to simultaneously

experience Malkovich's inner world as well as his own. Schwartz exclaims, "Do you know

what a metaphysical can of worms this portal is?" and rightly so; he transcended the very

idea of first-personality. But unless a real-life Schwartz really manages to discover such a

portal, the minds of others are indefinitely inaccessible and can never be confirmed for sure

because of the behavior barrier.

AI, CONSCIOUSNESS, AND BLADE RUNNER: TYING EVERYTHING TOGETHER

If McGinn is right, humanity is cognitively closed to the full understanding of

consciousness. If a unified theory of consciousness is ever proposed, its truth can only be

postulated but never known for sure. Strong AI proponents disagree. As I discussed earlier,

some theorists believe that the brain is wetware, a biological digital computer. The brain is

hardware and the mind is the software it implements, not unlike a video game running on a

personal computer. And also like a video game, conscious experience is an illusion caused

by the physical processes "behind the scenes." Computationalists (or functionalists, if you'd

like) claim that conscious states are just functional states of the brain and "feelings of

conscious awareness are evoked merely by the carrying out of appropriate computations"

(Penrose 1994). It follows that a complete understanding of how the brain physically works

will eventually give rise to an explanation of consciousness.

In this section I will demonstrate that like physicalism, a computational explanation

of consciousness fails to close the explanatory gap and solve the hard problem for sure


because of the behavior barrier. Through introspection we can access consciousness

directly, but only through physical experimentation can we understand the physical

processes. Unification is futile, and CTM can bring us no closer to an answer than anything

else.

In order to advance my argument, I will elaborate on themes from Philip K. Dick's

classic novel Do Androids Dream of Electric Sheep? and its film adaptation, Ridley Scott's

Blade Runner. Both texts have deep philosophical implications ranging from the meaning of

love to the ethics of animal treatment, but I will focus primarily on their treatment of

the metaphysical issue of personhood in order to strengthen my argument for the

untenability of solving the hard problem.

The book and the film have similar themes and the same backstory, but significant

variations in plot development.29 Both stories take place in the not-so-distant future in a

dystopian, pollution-riddled Los Angeles. World War Terminus decimated the human

population and the vast majority of animals are extinct. As a result, empathy towards

animals is revered to an almost religious level and owning a real animal (as opposed to the

more common synthetic replacements) is the highest symbol of status.30 The turmoil on

Earth caused the colonization of nearby planets, and the arduous job of construction and

development was assigned to anthropomorphic robots called replicants. Because they are

dangerous to humans, replicants are outlawed on Earth. Protagonist Rick Deckard is a blade

29 For the sake of clarity and continuity, I will use names and terminology from the film instead of the book. It should be noted that the lifelike Al characters in the film are called 'replicants' while the novel calls them 'androids'. The two terms refer to the same conceptual construction. 30 The theme of animal ownership and its social implications is a main theme in the novel but less so in the film. Needless to say, empathy towards animals is crucial to both texts.


runner, a bounty hunter who specializes in the "retirement" of replicants. The plot revolves

around Deckard's hunt for Pris, Zhora, Leon, and Roy, advanced Nexus-6 class replicants

who returned to Earth in order to pressure their creator Tyrell to extend their four-year

limited lifespan.

What gives blade runners the right to kill? Replicants seem to feel

pains and emotions just like their human creators. Among the most

disturbing scenes in the film is when Deckard shoots Zhora, causing her

to fall through multiple panes of glass while she screams and grimaces in

utter agony (Figure 8). Dick wrote in his essay "Man, Android, and

Machine" that an android is "a thing somehow generated to deceive us in

a cruel way, to cause us to think it to be one of ourselves" (Dick 1976, 202). Zhora seems

human in every way; accordingly, watching her die so brutally evokes empathy from the

audience. But does she actually feel anything at all, or just behave as if she does because of

her programming? Let's assume that replicant "minds" are syntactical computers running

advanced software that causes their overt and physiological behavior to function identically

to humans, "capable of selecting within a field of two trillion constituents, or ten million

separate neural pathways" (Dick 1968, 28}.31 Lets also assume that a computer mind is not

a mind at all, and no conscious experience arises from it. In this case, replicants are be p-

zombies, "identical...functiona//y ... reacting in a similar way to inputs ... with indistinguishable

behavior resulting" (Chalmers 1996, 95}. Moreover, they are not conscious, feel no qualia,

and have no experience in the same sense that you and I have experience of the world. As

31 This entails that Kurzweil's prediction of a comprehensive computational model of human brain function has come true in 2019 Los Angeles.


Daniel Robinson points out, "It is conceivable that a device could be made in such a way as

to change its appearance and make loud sounds when one or another component is

destroyed. [ ... ] Here, then, we have a device that replaces 'pain behavior' with 'pain

language.' We have, in a word, everything but pain!" Behavior, despite its apparent

authenticity, is not sufficient for confirmed consciousness. In this case, there is nothing it is

like to be a replicant; replicants are "chitinous reflex-machines who aren't really alive" (Dick 1968, 194).

Regardless, it is natural to feel bad about Zhora's painful and unceremonious

execution. Accordingly, a central theme in Blade Runner is uncertainty. The only reliable

litmus test for replicancy/humanity is the Voight-Kampff (V-K) test that measures

physiological responses to emotionally jarring questions. Most questions are about animal

mutilation, a topic that every human cares about deeply; although some replicants "had

been equipped with an intelligence greater than that of many human beings," they

"possessed no regard for animals ... [and] no ability to feel empathic joy for another life

form's success or grief at its own defeat" (Dick 1968, 32). Rachael is a replicant but, unlike

the others, she had real human memories implanted into her neural network. She

"believes" that she is a real human and acts convincingly so. Even Deckard's humanity (or

not) is up to interpretation.32 Most importantly, there is a patent uncertainty about

32 "An android," [Deckard] said, "doesn't care what happens to another android. That's one of the indications we look for." 'Then," Miss Luft [Zhora's novel analogue] said, "you must be an android." She then asks whether or not Deckard has taken the V-K test himself. "Yes." He nodded. "A long, long time ago; when I first started with the department." "Maybe that's a false memory. Don't androids sometime go around with false memories?" (Dick 1968, 101-102).


mistakenly retiring "authentic humans with underdeveloped empathic ability" who fail the

V-K test (Dick 1968, 54).

The V-K test is a reimagined variation on the Turing test. Remember that a

computer that passes Turing's imitation game by behaving as a human would is, on some

views, intelligent and conscious. Anti-computationalists like Searle and Dreyfus disagree:

"I'm not enough of a behaviorist to think that if it just behaves like people it shows that it's

intelligent" (Dreyfus 2011). I agree, to an extent. As Searle made clear in the Chinese Room

argument, simulation is not, in and of itself, proof of duplication. The mere fact that an artifact simulates conscious behavior is insufficient evidence for knowing whether or not it has an inner life at all.

Strangely enough, Dennett, a staunch opponent of Searle's claims, seems to agree.

A simulation of a hurricane, for instance, is hardly equivalent to a real one. A

comprehensive theory of hurricane behavior can be implemented through a computer

simulation. Certain measurable inputs (e.g., barometric and temperature data) yield appropriate outputs (e.g., 150 mph winds blowing westward accompanied by heavy rain),

but one should "not expect to get wet or windblown in its presence" (Dennett 1981, 191).

By the same lights, an AI can simulate consciousness but be just as lifeless as hurricane

software. Android S might respond to an anvil dropped on its foot as follows:

S's C-fibers are stimulated, ... a pain memory is laid down; S's attention is distracted; S's heart-rate increases ...; S jumps about on right foot, a tear in the eye, screaming. [Dennett 1981, 192]

S certainly passes the Turing test for pain. Zhora does too. In the novel, replicant Pris acts

like the timid and scared young woman she is meant to resemble: "Fear made her seem ill;


it distorted her body lines, made her appear as if someone had broken her and then, with

malice, patched her together badly" (Dick 1968, 62). Behavior may be convincing but it is

not definitive proof of the existence of mental states.

Roy and his Nexus-6 brethren are programmed with a bottom-up emotional

response mechanism. Like humans, they are "born" with no emotional attachments but

develop them naturally through experience.33 Dreyfus writes:

Generally, in acquiring a skill - in learning to drive, dance, or pronounce a foreign language, for example - at first we must slowly, awkwardly, and consciously follow the rules. But then there comes a moment when we finally can perform automatically. At this point, we do not seem to be simply dropping these same rigid rules into unconsciousness; rather we seem to have picked up the muscular gestalt which gives our behavior a new flexibility and smoothness. [Dreyfus 1994, 248-249]

Emotions are no different. We gradually learn through experience how to identify specific

emotions and behave appropriately. Unlike humans, however, the Nexus-6s only have four

years to learn a lifetime of emotions, and there is a clear discrepancy between their

emotional immaturity and superhuman strength and/or intelligence. Towards the end of

the film, Roy is distraught to find the lifeless and bloodied body of his lover Pris, killed by

the blade runner in an earlier scene. His underdeveloped emotions become evident when

he begins to act oddly, smearing her blood on his lips and howling like a wounded animal.

Earlier on he attempts to inform Pris about the deaths of their allies Leon and Zhora, but his

inability to process emotional depth leads him to convulse and stiffen robotically,

inhumanly. The imbalance of Roy's physical prowess and calculating rationality with his

33 Excluding Rachael, who has emotions from her implanted memories.


emotional handicaps stands in stark contrast with Tyrell Corporation's motto "More human

than human," reminding the audience that replicants are fundamentally different than us.

On the other hand, the penultimate scene adds to the complexity of this issue. After

a climactic cat-and-mouse chase between Deckard and the physically superior Roy, Deckard

finds himself facing his death, clinging to a roof beam for dear life. In a beautiful display of

what appears to be humanity, Roy chooses to save Deckard. Throughout the film, Roy is

obsessed with extending his limited lifespan but in the end, he finally seems to understand

the value and transience of life:

ROY: I've seen things you people wouldn't believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the darkness at Tannhäuser Gate. All those moments will be lost in time like tears in rain. [Fancher and Peoples 1982]

Deckard can only sit back in awe as his savior perishes. It certainly appears that Roy

feels something, judging by his actions. In his essay on postmodernism in Blade

Runner, Nick Lacey writes, "In [Roy's] final moments, when he saves Deckard, he

becomes human because of his behavior and his realisation that his life was worth living" (Lacey 2005, 190). Marilyn Gwaltney adds that movie audiences sympathize

with Roy because, "Our understanding of his cruelty changes as we come to

understand it as a very human reaction to his existential situation: the imminence of

his death and that of those he loves; the feeling of betrayal by the beings that

brought him into existence" (Gwaltney 1997, 33}. But can we safely assume that this

sort of demonstrated self-realization implies phenomenal first-person experience?

Gwaltney later writes, "Computers may be thought to act rationally, in the sense of

acting logically, but they do not act to any purpose of their own, only to the purposes of


others. To act with purpose requires consciousness of self" (Gwaltney 1997, 35). Roy

certainly appears conscious and self-aware, but in the end, he could very well still be a p-

zombie. If technology in the Blade Runner universe is advanced enough to create the

Nexus-6, software for this sort of humanlike behavior is surely plausible. It thus follows that

Roy's epiphany, despite its personal impact, might not feel like anything at all to him. This is

a hard concept to swallow, considering how dramatic an effect it appears to have. Indeed,

actor Rutger Hauer completely improvised the "tears in rain" speech, no doubt drawing on his

own felt experiences. But if the assumption that replicants are p-zombies is correct, Roy

feels nothing at all while his artificial brain implements the program

SELF_REALIZATION_EPIPHANY.EXE in the background, no different from my laptop not feeling the word processor currently running. As Rachael laments in the novel, "We are machines,

stamped out like bottle caps. It's an illusion that I - I personally - really exist; I'm just representative of a type ... I'm not alive" (Dick 1968, 189, 198).

But remember, none of this can be definitively proven by any test, whether Turing, V-K, or otherwise. Claims both for and against machine consciousness are based on reasoning that cannot be verified for certain. On the possibility of a conscious artifact of

human creation, McGinn contends, "I should say the matter is entirely an empirical one: it

concerns whether human beings ever in fact achieve enough in the way of scientific and

technological knowledge" to close the explanatory gap and physically observe the existence

of conscious experience. "It is like asking whether we shall ever travel to another galaxy" (McGinn 1991, 203). Since replicants are human creations, it is, as of now, empirically impossible to confirm whether there is anything it is like to be one. On the other hand,


the possibility of a conscious artifact in general is a matter of principle. If hyperintelligent

Martians (the philosophers' favorite species) were to abduct a human and reproduce it

molecule for molecule, there is no reason to say the clone isn't conscious. Of course, it may

very well be a p-zombie, devoid of any experience at all, but only Martians who understand

the explanatory gap can know for sure. Whether or not replicants are anything more than

lifeless "skinjobs," as they are disparagingly called, is ultimately beyond our epistemic

boundaries.

[Figure 9: Dilbert.com]


CONCLUSION

In this work I make the claim that sentient artificial intelligence may be possible, but

it is beyond the epistemic limits of human understanding to ever devise a test

comprehensive enough to prove it for sure. Neither functionalism and CTM nor physicalism accounts for the explanatory gap dividing first-person conscious experience from externally

observable neural and behavioral correlates. Mental states are exclusively accessible to an

individual mind and any attempts to describe them verbally are limited by ostensive

definition and the boundaries of natural language. There is no conceivable means of

knowing whether or not 'sadness' feels the same to me as it does to you; observable behavior can be simulated, but simulation never conclusively demonstrates duplication of the experience behind it. Accordingly, the Turing test can

only assess behavioral equivalence. As we see in Blade Runner, the fact that something looks and acts sentient does not entail that it is sentient. I contend that we should be

mysterians about consciousness. Neuroscience and related fields continue to make

remarkable progress in the study of both consciousness and AI and should certainly

continue to do so. However, it is beyond the cognitive capacity of human beings to ever fully explain consciousness in a testable and provable theory. If Terminator is an accurate prediction of the future, whether or not Skynet is sentient will remain forever unknown.


Works Cited

Azari, Nina P., et al. "Neural Correlates of Religious Experience." European Journal of Neuroscience 13, no. 8 (2001): 1649-1652.

Ben-Yami, Hanoch. "A Note on the Chinese Room." Synthese 95, no. 2 (May 1993): 169-172.

Bickle, John. "Multiple Realizability." Stanford Encyclopedia of Philosophy. Edited by Edward N. Zalta. Fall 2008. plato.stanford.edu/archives/fall2008/entries/multiple-realizability (accessed March 26, 2011).

Blackburn, Simon. The Oxford Dictionary of Philosophy. Oxford: Oxford University Press, 2008.

Carter, Matt. Minds and Computers. Edinburgh: Edinburgh University Press, 2007.

Chalmers, David. "Consciousness and Its Place in Nature." In Philosophy of Mind: Classical and Contemporary Readings, edited by David Chalmers. Oxford: Oxford University Press, 2002.

Chalmers, David. "Does Conceivability Entail Possibility?" In Conceivability and Possibility, edited by T. Gendler and J. Hawthorne, 145-200. Oxford: Oxford University Press, 2002.

-. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press, 1996.

Copeland, Jack. Artificial Intelligence. Oxford: Blackwell, 1993.

Dennett, Daniel C. "Why You Can't Make a Computer that Feels Pain." In Brainstorms, by Daniel Dennett, 190-229. Cambridge, MA: MIT Press, 1981.

-. Consciousness Explained. Boston: Little, Brown and Company, 1991a.

-. "Quining Qualia." In Philosophy of Mind: Classical and Contemporary Readings, edited by David Chalmers, 226-246. Oxford: Oxford University Press, 2002.


-. "The Unimagined Preposterousness of Zombies." Journal of Consciousness Studies 2, no. 4 (1995): 322-325.

-. "Review of McGinn, The Problem of Consciousness." The Times Literary Supplement, May 10, 199lb.

Dick, Philip K. Do Androids Dream of Electric Sheep? New York: Del Rey, 1968.

-. "Man, Android and Machine." In Science Fiction at Large, edited by Peter Nicholls. New York: Harper & Row, 1976.

Dreyfus, Hubert L. What Computers Still Can't Do. Cambridge, MA: MIT Press, 1994.

-. Interview by Andrew Barron. Interview with Hubert Dreyfus (June 11, 2011).

Fancher, Hampton, and David Peoples. Blade Runner. Directed by Ridley Scott. Performed by Rutger Hauer. 1982.

Feigl, Herbert. "The 'Mental' and the 'Physical'." In Philosophy of Mind: Classical and Contemporary Readings, edited by David Chalmers, 68-72. Oxford: Oxford University Press, 2002.

Flanagan, Owen. Consciousness Revisited. Cambridge, MA: MIT Press, 1992.

Gwaltney, Marilyn. "Androids as a Device for Reflection on Personhood." In Retrofitting Blade Runner, 32-40. Bowling Green, OH: Bowling Green State University Popular Press, 1997.

Grossman, Lev. "2045: The Year Man Becomes Immortal." Time, February 10, 2011.

Intel.com. Moore's Law and Intel Innovation. Intel. http://www.intel.com/about/companyinfo/museum/exhibits/moore.htm (accessed April 14, 2011).

Huxley, T.H. Lessons in Elementary Physiology. London: Macmillan, 1866.

Haugeland, John. Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press, 1989.

Hacker, P.M.S. "Is There Anything It Is Like to Be a Bat?" Philosophy 77, no. 300 (2002): 157-175.

Harnad, S. "Other Bodies, Other Minds: A Machine Incarnation of an Old Philosophical Problem." Minds and Machines (Kluwer Academic) 1 (1992): 43-54.

Hobbes, Thomas. Leviathan. Cambridge: Cambridge University Press, 1904/1651.


Hofstadter, Douglas R. "A Conversation With Einstein's Brain." In The Mind's I, edited by Daniel C. Dennett and Douglas R. Hofstadter. New York: Basic Books, 2000.

Jackson, Frank. "Epiphenomenal Qualia." Philosophical Quarterly 32 (1982): 127-136.

Janzen, Greg. "Phenomenal Character as Implicit Self-Awareness." Journal of Consciousness Studies (Imprint Academic) 13, no. 12 (2006): 44-73.

Kurzweil, Ray. The Singularity is Near. New York: Penguin, 2005.

Kaufman, Charlie. Being John Malkovich. Directed by Spike Jonze. Performed by John Cusack. 1999.

Krakow, Irving. Why The Mind-Body Problem Cannot Be Solved! Lanham, MD: University Press of America, 2002.

Lacey, Nick. "Postmodern Romance: The Impossibility of (De)Centering the Self." In The Blade Runner Experience, edited by Will Brooker, 190-199. London: Wallflower Press, 2005.

Lanchester, John. "Scents and Sensibility." The New Yorker, March 10, 2008: 120-122.

Lewis, David Kellog. "Mad Pain and Martian Pain." In The Nature of Mind, by David Kellog Lewis. New York: Oxford University Press, 1991.

Levine, Joseph. "Materialism and Qualia: The Explanatory Gap." In Philosophy of Mind: Classical and Contemporary Readings, edited by David Chalmers, 354-361. Oxford: Oxford University Press, 2002.

Locke, John. An Essay Concerning Human Understanding. Indianapolis: Hackett, 1689/1996.

Lohr, Steve. "Software Progress Beats Moore's Law." The New York Times, March 7, 2011.

Nagel, Thomas. "Conceiving the Impossible and the Mind-Body Problem." Philosophy 73, no. 285 (1998): 337-52.

-. Mortal Questions. Cambridge: Cambridge University Press, 1979.

Newell, Allen, and H.A. Simon. "Computer Simulation of Human Thinking." Science 134, no. 3495 (1961): 2011-2017.

McGinn, Colin. "Consciousness and Space." In Explaining Consciousness - The 'Hard Problem', edited by Jonathan Shear, 97-108. Cambridge, MA: MIT Press, 1997.

-. Minds and Bodies. New York: Oxford University Press, 1997.


-. The Mysterious Flame: Conscious Minds in a Material World. New York: Basic Books, 1999.

-. The Problem of Consciousness. Oxford: Basil Blackwell, 1991.

Penrose, Roger. Shadows of the Mind. Oxford: Oxford University Press, 1994.

Pinker, Steven. How the Mind Works. New York: Norton, 1997.

-. "The Mystery of Consciousness." Time, January 19, 2007.

Place, U.T. "Is Consciousness a Brain Process?" In Philosophy of Mind: Classical and Contemporary Readings, edited by David Chalmers, 55-60. Oxford: Oxford University Press, 2002.

Sutherland, Stuart. The International Dictionary of Psychology. Second. New York: Crossroad, 1996.

Searle, John. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3 {1980).

-. Philosophy in a New Century. Cambridge: Cambridge University Press, 2008.

Simon, Herbert A., and Allen Newell. "Heuristic Problem Solving: The Next Advance in Operations Research." Operations Research 6 (January-February 1958).

Russell, Bertrand. Religion and Science. New York: Oxford University Press, 1961.

Turing, Alan. "Computing Machinery and Intelligence." Mind 59 (1950): 433-460.

Wittgenstein, Ludwig. Philosophical Investigations. Oxford: Basil Blackwell, 1974.
