Hurford 1
Peter Hurford
Mrs. McFarlan
AP English
March 19, 2010
Asimov, Robots, and Humanity: Which One is the Most Human?
“Do androids dream of electric sheep?” This question was first posed by science fiction writer Philip K. Dick in his 1968 novel of that title, which asked just how human Robots can be. Throughout the most prolific science fiction career of any writer, Isaac Asimov raised this same question of how much of humanity Robots exemplify in his novels and short stories about Robots. He started writing in the 1940s, during the time of the very first science fiction short story collections and the development and popularization of writing about dystopian futures, frequently marked by Robot uprisings, killer Robots, and human enslavement. The Robots of this period were menacing and anti-human, and embodied fear.
Asimov countered this fear of Robots with his own stories of Robots that could be
friends, and Robots that could even be considered human. In his short story “Robbie,”
contained in the collection I, Robot, Asimov’s character Mr. George Weston said that
Robbie “just can't help being faithful and loving and kind. He's a machine – made so” (9).
In his science fiction quest to figure out how human Robots could be, Isaac Asimov
created Robots that exemplified aspects of humanity, but in a Robotic way. Asimov
elaborated about this in his introduction to his short story collection Counting the Eons,
where he said, “I found out that I didn't like stories in which Robots were menaces or
villains because those stories were technophobic and I was technophilic. I did like stories
in which the Robots were presented sympathetically” (32). In his introduction to his
short story collection The Rest of the Robots, Asimov clarified his point of view, stating
that “one of the stock plots of science fiction was Robots were created and destroyed their creator. Knowledge has its dangers, yes, but is the response to be a retreat from
knowledge? [...] Never, never, was one of my Robots to turn stupidly on his creator for
no purpose but to demonstrate, for one more weary time, the crime and punishment of
Faust.” Yet, in creating human-esque Robots, sympathizing with Robots, and in
exploring Robots’ potential humanity, Asimov’s works reveal that he found Robots to
exemplify humanity better than humans do.
On the surface, Asimov’s Robots are not human at all. They have no free will and
are helpless – having no choice but to follow their hard-wired, uncircumventable,
engineered Three Laws of Robotics. In Counting the Eons, Asimov stated that he wanted
his Robots to be helpless, saying “I didn't think a Robot should be sympathetic just
because it happened to be nice. It should be engineered to meet certain safety standards,
as any other machine should in any right-thinking technological society. I therefore began
to write stories about Robots that were not only sympathetic, but were sympathetic
because they couldn't help it” (32).
Asimov’s Three Laws of Robotics, as first stated in his short story “Runaround,”
are three hierarchical laws that all Robots must obey:
1. A Robot may not injure a human being or, through inaction, allow a human being
to come to harm.
2. A Robot must obey any orders given to it by human beings, except where such
orders would conflict with the First Law.
3. A Robot must protect its own existence as long as such protection does not
conflict with the First or Second Law.
These Laws provide an assurance of safety to all people who own Robots, so that, logically, they have no reason to be afraid. In “Robbie,” Mr. Weston is so assured of the
safety of the laws that he attempts to assure Robot-fearing Mrs. Weston by saying “You
know that it is impossible for a Robot to harm a human being; that long before enough
can go wrong to alter that First Law, a Robot would be completely inoperable” (9). This
is because if a positronic brain were to even come to a situation where it would consider
violating the Laws of Robotics, it would fry up – going into what is called “Robot block,”
or “roblock”. Furthermore, Susan Calvin, the head robopsychologist for US Robots and
Mechanical Men, asserts in the short story “Evidence” that “if he is a positronic Robot,
he must conform to the three Rules of Robotics. A positronic brain can not be
constructed without them” (220).
Later short stories of Asimov call into question whether constraining Robots to
the Three Laws really constrains their humanity. In the short story “Evidence,” a man is
running for President, and there is suspicion that the man might actually be a Robot.
(This is a concern because Robots aren’t citizens and can’t legally hold office.) Susan Calvin’s test for determining whether he is a Robot is to see if
he violates the three Laws – if he does, he cannot be a Robot, but even if he does not
violate the Three Laws at any point, he may still be a human. As Susan Calvin points out, “you see, you just can’t differentiate between a Robot and the very best of humans” (223). This is because “the three Rules of Robotics are the essential guiding principles of a good many of the world’s ethical
systems” (221). In these ethical systems, humans are allowed to preserve themselves, which is Rule Three for a Robot, but they sometimes put themselves in danger when they yield to authority, which is why Rule Two trumps Rule Three.
Calvin explains this by saying that “every ‘good’ human being, with a social conscience
and a sense of responsibility, is supposed to defer to proper authority” (221). Lastly,
“good” humans are supposed to follow the golden rule – to love others as oneself – and, as
Calvin states, should “protect his fellow man, risk his life to save another” (221). That’s
Rule One, which trumps the other rules. As Calvin summarizes it, “if Byerley follows all
the Rules of Robotics, he may not be a Robot, and may simply be a very good man”
(221). Therefore, according to Asimov’s Robot-biased Susan Calvin character, the Three
Laws don’t limit humanity as much as we think – rather they force Robots to only be the
best of humans, which essentially acts as a “shortcut” rather than a shortcoming – they
allow Robots to act the same as good people without having to make the mistakes
necessary to arrive there. Furthermore, Robots are still able to use the Laws to make their
own decisions based on what they believe most reduces harm; Robots are not constrained
to a predictable pre-programmed formula.
The Three Laws are further shown to be less restraining than they seem: even as a Robot is constrained by the Three Laws, it can still exceed a human at certain tasks, and Robots are ultimately used by some of Asimov’s characters in harmony with humans. The I, Robot chronology closes with “Evitable Conflict,” in which all of Earth has been united in one government, under an
economic system controlled by vast computers, which are still bound to the same Three
Laws of Asimov’s Robots. Stephen Byerley (the same man accused of being a Robot in “Evidence,” now the World Coordinator – essentially President of the World)
explains that the creation of the Machines uses the First Law to the utmost advantage of
humans, stating that “although the Machines are nothing but the vastest conglomeration
of calculating circuits ever invented, they are still Robots within the meaning of the First
Law, and so our Earth wide economy is in accord with the best interests of Man. The
population of Earth knows that there will be no unemployment, no overproduction or
shortages. Waste and famine are words in history books” (244). This mirrors what is
called a “C/Fe Culture” by Han Fastolfe in Caves of Steel – humans – representing the C,
or carbon – united in harmony with Robots – representing the Fe, or iron. This creates an
economy based on Robot labour and guidance, which is immensely efficient.
Both these economic systems are driven by the fact that cool, calculating
machines can “keep track of [things like] stock fluctuations, probably better than humans
could, since they would have no outside interests,” and, by extension, actually drive the
economy more efficiently than humans ever could (31). When it is discovered that the
Machines in “Evitable Conflict” are providing guidance that seems contrary to common
human economic wisdom, Susan Calvin does not dismiss it, asking, “how do we know
what the ultimate good of Humanity will entail? We haven’t at our disposal the infinite
factors that the Machine has at its!” (271). Chris Suellentrop, a story editor for The New
York Times Magazine, writes in the online news magazine Slate.com that this type of
thinking is exactly what makes Asimov so pro-Robot – “Asimov's novel I, Robot […] is
basically an evangelical work, an argument against man's superstitious fear of machines.
By the end of the book, machines run the economy and most of the government. Their
superior intelligence and cool rationality eliminate imperfections such as famine and
unemployment.” This nearly blatant Robot supremacy sets up the idea that, in Asimov’s
works, Robots are human by virtue of surpassing humans in numerous areas.
The Three Laws are also far more complex than they seem on the surface.
Ostensibly, they just prevent Robots from killing humans, but in practice, they do much
more. Robots might find themselves in situations where they can prevent harm to one person (satisfying the “or through inaction” clause) only by harming another person who might be, for example, attempting murder. In some of the
first Robot models, this would cause the Robot to utterly break down, but more advanced
Robots are able to “weigh the potentials” and prevent the most net harm, instead of
preventing harm to both individuals equally. Dr. Fastolfe explains in Robots of Dawn that “[y]ou must not think that a Robotic response is a simple yes or no, up or
down, in or out” (86). Instead, Robots are intelligent enough to evaluate the entire
situation and minimize harm – prevent murder by simply breaking the arm of the attacker
– so that there is less harm in total. This process makes the Three Laws less a hard rule that a Robot may never do harm, and more an ethical system that all Robots must obey – all Robots are dedicated to the minimization of harm.
Asimov’s Robots also seem capable of human emotions. Many of these
emotions, such as those that Baley reads in Giskard in Robots of Dawn, are
dismissed even by Asimov’s narrator as simple cases of pareidolia. Yet even the simple
Robot Robbie experienced a clear fear of Mrs. Weston – in “Robbie” it is narrated that
“Robbie obeyed [Mrs. Weston’s command] with an alacrity for somehow there was that
in him which judged it best to obey Mrs. Weston, without as much as a scrap of
hesitation,” going on to then state that Gloria’s mother, Mrs. Weston, “was a source of
uneasiness to Robbie and there was always the impulse to sneak away from her sight”
(6). Toward the end of “Robbie,” it is again narrated that “Robbie’s chrome-steel arms
(capable of bending a bar of steel two inches in diameter into a pretzel) wound about the
little girl gently and lovingly, and his eyes glowed a deep, deep red,” an obvious contrast of imagery – the destructive, hypothetical image of bending a bar of steel with extreme Robot strength is set right next to what is actually happening – a deep, deep red glow symbolizing love and care (27-28).
Contrasting this kind of love and care with an irrational fear of destruction is yet another technique Asimov uses to make Robots appear even more human and upstanding, especially when set against the flesh-and-blood humans in his work. Robots in
Asimov’s stories are frequently contrasted with irrational human characters who fear
Robots without logical reason. Asimov’s Robots were the ones who were infinitely
logical and calm, whereas humans were involved in frequent anti-Robot riots, such as in
Caves of Steel when police plainclothesman Elijah Baley observed “Robots being lifted
by a dozen hands, their heavy unresisting bodies carried backward from straining arm to
straining arm [with] Men yank[ing] and twist[ing] at the metal mimicry of men” (34).
Yet anti-Robot sentiment isn’t restricted to just chance riots – in many of Asimov’s
stories, even the local governments are passing anti-Robot ordinances – in “Robbie,”
Mrs. Weston states that “New York just passed an ordinance keeping all Robots off the
streets between sunset and sunrise” (11). In “Evidence,” Dr. Lanning states that “You
are perfectly well acquainted, I suppose, with the strict rules against the use of Robots on
inhabited worlds” (211).
In “Robbie,” irrational anti-Robot fear sets up the conflict of the story in a different way. Robbie is the Robot nursemaid put in charge of taking care of Gloria Weston, the young daughter of Mr. and Mrs. Weston. The Westons bought Robbie when Robots were popular, but as anti-Robot sentiment became the norm in popular opinion, Mrs. Weston became increasingly fearful of Robbie. Mrs. Weston attacks Robbie by
saying “It has no soul, and no one knows what it may be thinking. A child just isn’t
made to be guarded by a thing of metal” (9). Mrs. Weston is later proved wrong when
Robbie uses his Robot strength to save Gloria from death in a feat that no human present could match, overcoming human limitations and “guarding” Gloria better than any human could.
Furthermore, in Caves of Steel, Robots are feared because they are quickly
replacing human labour, and as more complex Robots come out, soon could even replace
a detective like Elijah Baley. Chief Police Commissioner Julius Enderby notes that with
the use of R. Sammy, a Robot clerk in the police office, “R. Sammy is just a beginning.
He runs errands. Others can patrol the expressways. […] There are R’s that can do your
work and mine” (10). Roger Clarke, a computer scientist who evaluated the theoretical
potential of the Three Laws from a computer science standpoint, analyzed the anti-Robot
fervor in Caves of Steel and characterized it by commenting that “Robots are agents of
change and therefore potentially upsetting to those with vested interests. Of all the
machines so far invented or conceived of, Robots represent the most direct challenge to
humans. Vociferous and even violent campaigns against Robotics should not be
surprising. […] Another tenable argument is that by creating and deploying artifacts that
are in some ways superior, humans degrade themselves.”
This fear is extrapolated even further in the short story “Robot Dreams,”
contained in the collection of the same name. In this story, a Robot named LVX-1 (referred to as Elvex) is programmed with “use of fractal geometry [to] produce a brain pattern with added complexity, possibly [one] closer to that of the human” (29). The experiment ends with Elvex having a dream in which “Robots must protect their own
existence [and] there was neither First nor Second Law” (31-32). Eventually Elvex
reveals that he saw himself as a man who declared “Let my people go!”, demanding that
the Robots be set free from labour (33). In response to this, Susan Calvin destroys Elvex
with a positron gun out of fear.
Not all human characters erroneously distrust Robots. Asimov makes sure that
his scientist characters, the super-educated, are above the hoi polloi opinions of the distrusting humans who don’t know any better, further reinforcing that the only thing holding Robots back is that humans aren’t logical enough to accept their superiority. No character is a better example of trusting Robots than I, Robot’s Dr. Susan Calvin, a pioneering robopsychologist who is exceptionally well educated and informed,
and who makes critical decisions to avoid conflict throughout the series of short stories.
She opens the short story collection by narrating that “[t]here was a time when humanity
faced the universe alone and without a friend. Now he has creatures to help him; stronger
creatures than himself, more faithful, more useful, and absolutely devoted to him” (x).
Calvin then later reveals even more pro-Robot bias, stating in “Evidence” that the
difference between Robots and humans is that “Robots are essentially decent,” and then
later just comes out and says “I like Robots. I like them considerably better than I do
human beings” (216, 237).
Asimov’s Robots, however, prove their humanity much more than just what is
found in their contrast to irrational humans and in simple manipulations of the Three
Laws. In Asimov’s later works, Robots are able to use their programming
and consciousness to understand and eventually surpass humanity. The first example is
in the short story “Reason,” contained in I, Robot, where a Robot named Cutie is in charge of maintaining a power converter that transfers power from a star to a planet. Because of the complexity of the task, Cutie is programmed with the ability to “reason”. As a result, Cutie reasons that humans could not have created him, and that he is working not for humans but for a God – the power converter. Powell uses
this to prove the fickle nature of Robot reasoning to show how Robots could actually
have free will within the Laws, stating “He’s a reasoning Robot—damn it. He believes
only reason, and there’s one trouble with that…You can prove anything you want by
coldly logical reason—if you pick the proper postulates” (75).
The short story “The Last Question,” contained in Robot Dreams, features a computer (not a Robot per se) that is eventually able to do what even the sum of the entire evolution of humanity could not. The story chronicles the evolution of humanity from 2065 to the death of the universe; throughout all this history, humanity is guided by a computer named the “MultiVAC,” which creates the equations that advance human science – first inventing a way to travel to the moon, then Mars, then Venus, then through hyperspace, and so on. Over millions of years, humans evolve from us, to
floating non-corporeal “minds” of energy, to a god-like unification of every person into
one entity called Man. At each point, the humans notice that the universe will soon die a heat death – in which the universe has reached total entropy, according to the laws of thermodynamics – and at each point, the humans ask the “MultiVAC” how to reverse it. The “MultiVAC” is unable to answer until the very end, after all of humanity is dead, at which point it surpasses humanity and creates a new universe by itself, declaring “let there be light” (246).
Furthermore, in Caves of Steel, Daneel is programmed with a special chip that
gives him a “drive for justice”. Baley objects that this chip can’t possibly do anything
because “Justice […] is an abstraction. Only a human being can use that term” (103).
When asked what he thought of the concept of an unjust law, Daneel stated it was “a
contradiction in terms” (104). Yet, eventually through the process of being a partner
detective with Bayley, Daneel eventually comes to the realization and is beginning to
realize “that the destruction of what should not be, that is the destruction of what you
people call evil, is less just and desirable than the conversion of this evil into what you
call good,” then sending a criminal away by declaring “Go, and sin no more!” (270).
Roger Clarke stated that what keeps Asimov’s Robots from having the same comprehension as humans is their inability to fully grasp abstract concepts – and with Daneel understanding justice in a human form, he is one step closer to being human.
None of these examples compare to the amount of humanity displayed in Robots
and Empire with the creation of the Zeroth Law, however. What makes the Zeroth Law
especially important is that the Zeroth Law was developed entirely by the Robots
themselves through their own cognition – it was never programmed; instead, it emerged
through an evaluation of what it really means to prevent harm. In this novel, the Robots
Daneel and Giskard eventually realize that it is more important to prevent harm to
humanity as a whole than to any one individual. Daneel describes this by stating that “[w]e
define human beings as all members of the species Homo sapiens, which includes
Earthpeople and Settlers, and we feel that the prevention of harm to human beings in
groups and to humanity as a whole comes before the prevention of harm to any specific
individual” (463). When Mandamus challenges Daneel by stating this is a violation of
the First Law and that this is not how Daneel was programmed, Daneel retorts
that “[i]t is what I call the Zeroth Law and it takes precedence. […] It is how I
programmed myself. And since I have known from the moment of our arrival here that
your presence is intended for harm, you cannot order me away or keep me from harming you. The Zeroth Law takes precedence and I must save Earth” (463).
The creation of the Zeroth Law is also paralleled in “Evitable Conflict,” where –
in order to save humanity – the Machines reconfigure their understanding of the First
Law to protect humanity as a whole at the cost of some individual lives. Susan Calvin
describes the process in “Evitable Conflict” by stating “The [Machines] are Robots, and
they follow the First Law. But the machines work not for any single human being, but
for all humanity, so that the First Law becomes: ‘No Machine may harm humanity; or,
through inaction, allow humanity to come to harm’” (269). Byerley responds with “But
you are telling me, Susan, […] that Mankind has lost its own say in its future” to which
Calvin counters that mankind “never had any [say in its future], really. It was always at
the mercy of economic and sociological forces it did not understand. […] Now the
Machines understand them; and no one can stop them, since the Machines will deal with
them […] – having, as they do, the greatest of weapons at their disposal, the absolute
control of our economy” (271-272).
Roger Clarke refers to the Zeroth Law as the point where Asimov’s Robots
surpass humanity. He states that when the Robots agree to implement the Zeroth Law,
“they judge themselves more capable than anyone else of dealing with the problems. The
original laws produced Robots with considerable autonomy, albeit a qualified autonomy
allowed by humans. But under the [Zeroth Law], Robots were more likely to adopt a
superordinate, paternalistic attitude toward humans.” He went on to note that
“[t]he term humanity did not appear in the original laws, only in the zeroth law, which
Asimov had formulated and enunciated by a Robot. Thus, the Robots define human and
humanity to refer to themselves as well as to humans, and ultimately to themselves
alone.”
Sherry Stoskopf, an English professor at Minot State University, puts a literary
spin on the Zeroth Law, describing it as the climax of Asimov’s works about Robots,
saying it’s when “Giskard and Daneel become philosophical. They discover that some
humans are less honest, reliable, and honorable than Robots, who all are programmed
with the three laws of Robotics. […] Giskard and Daneel finally determine that there
should be a Zeroth Law of Robotics to supersede the First Law, stating that a Robot may
not injure a human being or, through inaction, cause a human being to come to harm.
This Zeroth Law holds that the welfare of humanity as a whole outweighs the welfare of
any individual human being.”
Ultimately, with the formation of the Zeroth Law, the Robots appoint themselves the guardians of humanity, rather than mere agents of labour. Amid all the blatant
contrasts of cool and calculated Robots; irrational, weak humans; and a Dr. Susan Calvin
who likes Robots much better than humans, it becomes very clear that Asimov believed
Robots to be far more human than humans ever could be. Chris Suellentrop summarized
Asimov’s moral with the observation that “[a]lmost without exception, anytime Robots in
the book appear to be doing wrong or seeking to harm their human masters, it turns out
that the suspicious humans are misguided; the Robots, as programmed, are acting in
man's best interest.” Suellentrop continues, stating that “Asimov's faith in the rule of
Robots was genuine and based on his faith in the rule of reason. He viewed his now-
canonical Rules of Robotics—the code for Robot behavior used in his books—as a
roadmap for human ethics. Just as Asimov's machines are better than people at
calculating mathematics, they're superior at coming to moral judgments as well.” If
Robots can come to superior moral judgments every single time, and if Robots can
eventually come to a point where they can completely transcend the hard-wired,
unbreakable Three Laws of Robotics – not for global domination, but for protection of
the human race – not much keeps Robots from exceeding the humanity of their own
creators.
Works Cited
Asimov, Isaac. Caves of Steel. Doubleday, 1959.
---. Counting the Eons. London: Granada Publishing, 1984.
---. “Evidence.” I, Robot. Gnome Press, 1950. 206-239.
---. “Evitable Conflict, The.” I, Robot. Gnome Press, 1950. 240-272.
---. “Last Question, The.” Robot Dreams. Ace Books, 1986. 234-246.
---. “Reason.” I, Robot. Gnome Press, 1950. 56-81.
---. “Robbie.” I, Robot. Gnome Press, 1950. 1-29.
---. “Robot Dreams.” Robot Dreams. Ace Books, 1986. 28-33.
---. Robots of Dawn, The. Doubleday, 1983.
---. Robots and Empire. Doubleday, 1985.
Clarke, Roger. “Asimov's Laws of Robotics: Implications for Information Technology.” September 1994. 1 Mar. 2010 <http://www.rogerclarke.com/SOS/Asimov.html>.
Stoskopf, Sherry. “The Robots of Dawn/Robots and Empire.” Literary Reference Center. 1996. EBSCOhost. Indian Hill School Library, Cincinnati, OH. 1 Mar. 2010 <http://search.ebscohost.com/login.aspx?direct=true&db=lfh&AN=MOL0175000577&site=lrc-live>.
Suellentrop, Chris. “Isaac Asimov.” Slate.com. 16 June 2004. 1 Mar. 2010 <http://www.slate.com/id/2103979>.
Bonus Material: Susan Calvin and John Calvin (Not meant to be part of the actual paper.)
As far as I can find, at no point does Asimov ever state why Susan Calvin is
named “Susan Calvin” and not some other name. I would disagree that there is a
connection with John Calvin, however, as predestination is the exact opposite of what
Susan Calvin’s character seems to symbolize – the personification of the ideal that robots can
be the same as humans; that the line is blurred. Calvin acts robot-like herself – kind of
socially awkward, devoid of actual emotion, and sharing closer companionship with robots than with actual humans – and she asserts that robots are better than people because the Three Laws give them the moral guidance that “decent” people would have. And with the
events that take place in I, Robot throughout Calvin’s life, robots are anything but
predictable and predestined – they eventually take over the world, but in a kind and gentle
way.