X-risks prevention plan
Transcript of X-risks prevention plan
A Roadmap: Plan of action to prevent human extinction risks
Alexey Turchin
www.existrisks.org
X-Risks
Exponential growth of technologies
Unlimited possibilities
Including destruction of humanity
Main X-Risks
Large scale nuclear war
Runaway global warming
Nanotech - grey goo
AI
Synthetic biology
Nuclear doomsday weapons
Contest: more than 50 ideas were added
David Pearce and Satoshi Nakamoto contributed
Crowdsourcing
Plan A is complex:
Plan A1.1: International Control System
Plan A1.2: Decentralized monitoring of risks
Plan A2: Friendly AI
Plan A3: Rising Resilience
Plan A4: Space Colonization
We should do all of it simultaneously:
international control, AI, robustness, and space
Plans B, C and D have smaller chances of success:
Plan B: Survive the catastrophe
Plan C: Leave backups
Plan D: Improbable ideas
Bad plans
Plan A1.1: International Control System
Steps: Research, Social Support, International Cooperation, Risk Control, Worldwide risk prevention authority, Active Shields

Plan A1.1: International Control System
Step 1: Research (Planning)
• Long-term future model
• Comprehensive list of risks, with probability assessment and prevention roadmap
• Wiki and internet forum
• Integration of approaches, funding, education, translation
• Additional study areas: biases, law
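One bullet above calls for a comprehensive list of risks with probability assessment. A minimal sketch of such a risk register, assuming independent risks and purely illustrative probability figures (the numbers below are placeholders, not estimates from the roadmap):

```python
# Illustrative risk register: per-risk probability of catastrophe
# over some fixed horizon. All figures are hypothetical placeholders.
RISKS = {
    "large-scale nuclear war": 0.01,
    "runaway global warming": 0.005,
    "grey goo (nanotech)": 0.002,
    "unaligned AI": 0.05,
    "engineered pandemic": 0.02,
}

def survival_probability(risks):
    """P(no catastrophe) = product of (1 - p_i), assuming independent risks."""
    p = 1.0
    for prob in risks.values():
        p *= 1.0 - prob
    return p

if __name__ == "__main__":
    print(f"Combined survival probability: {survival_probability(RISKS):.3f}")
```

Even this toy model shows why a single aggregated roadmap is useful: the combined survival probability is driven by the largest individual risk, so prioritization falls out of the register directly.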
Plan A1.1: International Control System
Step 2: Social support (Preparation)
• Cooperative scientific community: peer-reviewed journal, conferences, intergovernmental panel, international institute
• Popularization (articles, books, forums, media)
• Public support, street action (anti-nuclear protests in the 1980s)
• Political support: lobbyism, parties
Plan A1.1: International Control System
Step 3: International Cooperation (First level of defence, on low-tech level)
• All states contribute to the UN to fight some global risks
• A group of supernations takes responsibility for x-risks prevention
• International law about x-risks
• International agencies dedicated to certain risks
• A smaller catastrophe could help unite humanity
Plan A1.1: International Control System
Step 3: Risk control (First level of defence, on low-tech level)
• International ban on dangerous technologies, or voluntary relinquishment
• Freezing potentially dangerous projects for 30 years
• Concentrate all bio, nano and AI research in several controlled centers
• Differential technological development: develop safety and control technologies first (Bostrom)
Plan A1.1: International Control System
Step 4: Worldwide risk prevention authority (Second level of defence, on high-tech level)
• Center for quick response to any emerging risk; x-risks police
• Worldwide video surveillance and control
• Peaceful unification of the planet based on a system of international treaties
• Narrow-AI-based expert system on x-risks; Oracle AI
• Control over dissemination of knowledge of mass destruction; create white noise in the field
Plan A1.1: International Control System
Step 4: Active shields (Second level of defence, on high-tech level)
• Geoengineering; anti-asteroid shield
• Nano-shield: distributed system of control of hazardous replicators
• Bio-shield: worldwide immune system
• Mind-shield: control of dangerous ideation by means of brain implants
• Worldwide monitoring, security and control system
Risks of plan A1.1:
• Planet unification war → global catastrophe
• Fatal mistake in the world control system → global catastrophe
Result: Singleton
«A world order in which there is a single decision-making agency at the highest level» (Bostrom)
Worldwide government system based on AI
Super AI which prevents all possible risks and provides immortality and happiness to humanity
Colonization of the Solar system, interstellar travel and Dyson spheres
Colonization of the Galaxy
Exploring the Universe

Plan A1.2: Decentralised risk monitoring
Steps: Values transformation, Improving human intelligence and morality, Cold war and WW3 prevention, Decentralized risks monitoring
Plan A1.2: Decentralised risk monitoring
Step 1: Values transformation
• The value of the indestructibility of civilization should become the first priority on all levels
• Reduction of radical religious (ISIS) or nationalistic values
• Popularity of transhumanism
• Movies, novels and other works of art that honestly depict x-risks and motivate their prevention
• Memorial and awareness days: Earth Day, Petrov Day, Asteroid Day
Plan A1.2: Decentralised risk monitoring
Step 2: Improving human intelligence and morality
• Higher IQ, new rationality, fighting cognitive biases
• High empathy for new geniuses, lower proportion of destructive beliefs
• Engineered enlightenment: use brain science
• Prevent the worst forms of capitalism
• Promote the best moral qualities
Plan A1.2: Decentralised risk monitoring
Step 3: Cold war and WW3 prevention (Dramatic social changes)
• International conflict management authority, like an international court
• Large project which could unite humanity
• Antiwar and antinuclear movements
• Cooperative decision theory in international politics
• Prevent brinkmanship
• Prevent nuclear proliferation
Plan A1.2: Decentralised risk monitoring
Step 4: Decentralized risks monitoring
• Transparent society: groups of vigilantes, "Anonymous"-style hacker groups
• Decentralized control: local police, mutual control, whistle-blowers
• Net-based safety: ring of x-risks prevention organizations
• Economic stimulus: prizes for any risk found and prevented
• Monitoring of smoke, not fire
Plan A2: Friendly AI
Steps: Study and Promotion, Solid Friendly AI theory, AI practical studies, Seed AI, Superintelligent AI
Plan A2: Friendly AI
Step 1: Study and Promotion
• Study of Friendly AI theory
• Promotion of Friendly AI (Bostrom and Yudkowsky)
• Fundraising (MIRI)
• Slowing other AI projects (recruiting scientists)
• Free FAI education, starter packages in programming
Plan A2: Friendly AI
Step 2: Solid Friendly AI theory
• Human values theory and decision theory
• Full list of possible ways to create FAI, and a sublist of the best ideas
• Proven safe, fail-safe, intrinsically safe AI
• Preservation of the value system during AI self-improvement
• A clear theory that is practical to implement
Plan A2: Friendly AI
Step 3: AI practical studies
• Narrow AI
• Human emulations
• Value loading
• Promotion of FAI theory to most AI teams; they agree to implement it and adapt it to their systems
• Tests of FAI theory on non-self-improving models
Plan A2: Friendly AI
Step 4: Seed AI
Creation of a small AI capable of recursive self-improvement, based on Friendly AI theory
Step 5: Superintelligent AI
• Seed AI quickly improves itself and undergoes a "hard takeoff"
• It becomes the dominant force on Earth
• AI eliminates suffering, involuntary death, and existential risks
• AI Nanny: one hypothetical variant of super AI that only acts to prevent existential risks (Ben Goertzel)
Result: Singleton
Risk: Unfriendly AI
Plan A3: Rising Resilience
Steps: Improving sustainability of civilization, Improving human intelligence and morality, High-speed tech development, Timely achievement of immortality, AI based on uploading of its creator
Plan A3: Rising Resilience
Step 1: Improving sustainability of civilization
• Intrinsically safe critical systems
• Growing diversity
• Universal methods of catastrophe prevention
• Building reserves (food stocks)
• Widely distributed civil defence
Plan A3: Rising Resilience
Step 2: Useful ideas to limit catastrophe scale
• Limit the impact of catastrophe: quarantine, rapid production of vaccines, growing stockpiles
• Increase the time available for preparation: support general risk research, connect disease surveillance systems
• Worldwide x-risk prevention exercises
• The ability to quickly adapt to new risks
Plan A3: Rising Resilience
Step 3: High-speed tech development (needed to quickly pass the risk window)
• Investment in super-technologies (nanotech, biotech)
• High-speed technical progress helps to overcome the slow process of resource depletion
• Invest more in defensive technologies than in offensive ones
Plan A3: Rising Resilience
Step 4: Timely achievement of immortality; miniaturization for survival and invincibility
• Nanotech-based immortal body
• Diversification of humanity into several successor species capable of living in space
• Mind uploading
• Integration with AI
• Earth crust colonization by miniaturized nanotech bodies
• Moving into a simulated world inside a small self-sustained computer
Plan A4: Space colonisation
Steps: Temporary asylums in space, Space colonies on large planets, Colonisation of the Solar system, Interstellar travel
Plan A4: Space colonisation
Step 1: Temporary asylums in space
• Space stations as temporary asylums (ISS)
• Cheap and safe launch systems
Plan A4: Space colonisation
Step 2: Space colonies on large planets
Creation of space colonies on the Moon and Mars (Elon Musk), with 100-1000 people each
Plan A4: Space colonisation
Step 3: Colonization of the Solar system
• Self-sustaining colonies on Mars and large asteroids
• Terraforming of planets and asteroids using self-replicating robots, and building space colonies there
• Millions of independent colonies inside asteroids and comet bodies in the Oort cloud
Plan A4: Space colonisation
Step 4: Interstellar travel
• "Orion"-style, nuclear-powered "generation ships" with colonists
• Starships which operate on new physical principles, with immortal people on board
• Von Neumann self-replicating probes with human embryos
Result: Interstellar distributed humanity
• Many unconnected human civilizations
• New types of space risks: space wars, planetary and stellar explosions, AI and nanoreplicators, ET civilizations
Plan B: Survive the catastrophe
Steps: Preparation, Building (high-tech bunkers), Readiness, Rebuilding civilisation after catastrophe
Plan B: Survive the catastrophe
Step 1: Preparation
• Fundraising and promotion
• Textbook to rebuild civilization (Dartnell's book «The Knowledge»)
• Hoards with knowledge, seeds and raw materials (Doomsday vault in Norway)
• Survivalist communities
Plan B: Survive the catastrophe
Step 2: Building
• Underground bunkers, space colonies
• Nuclear submarines
• Seasteading
Natural refuges:
• Uncontacted tribes
• Remote villages
• Remote islands
• Oceanic ships
• Research stations in Antarctica
Plan B: Survive the catastrophe
Step 3: Readiness
• Crew training
• Crews in bunkers
• Crew rotation
• Different types of asylums
• Frozen embryos
Plan B: Survive the catastrophe
Step 4: Miniaturization for survival and invincibility
• Earth crust colonization by miniaturized nanotech bodies
• Moving into a simulated world inside a small self-sustained computer
• Adaptive bunkers based on nanotech
Plan B: Survive the catastrophe
Step 5: Rebuilding civilisation after the catastrophe
• Rebuilding population
• Rebuilding science and technology
• Prevention of future catastrophes
Result: Reboot of civilization
• Several reboots may happen
• Finally, there will be either total collapse or a new supercivilization level
Plan C: Leave backups
Steps: Time capsules with information, Messages to ET civilizations, Preservation of earthly life, Robot-replicators in space
Step 1: Time capsules with information
• Underground storage with information and DNA for future non-human civilizations
• Eternal disks from the Long Now Foundation (or M-Discs)
Step 2: Messages to ET civilizations
• Interstellar radio messages with encoded human DNA
• Hoards on the Moon; frozen brains
• Voyager-style spacecraft with information about humanity
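The first bullet of this step mentions encoding human DNA into a radio message. A toy illustration of the idea, assuming a simple hypothetical 2-bits-per-base mapping (not any real transmission protocol):

```python
# Hypothetical 2-bit-per-base encoding of a DNA sequence, the kind of
# compact scheme an interstellar broadcast might use. The mapping is an
# assumption chosen for illustration only.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def encode(seq):
    """Encode a DNA string as a bit string, 2 bits per base."""
    return "".join(BASE_TO_BITS[b] for b in seq.upper())

def decode(bits):
    """Invert encode(): read the bit string back two bits at a time."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

message = encode("GATTACA")
assert decode(message) == "GATTACA"
```

At 2 bits per base, a full human genome (about 3 billion base pairs) would fit in roughly 750 MB before compression, which is why DNA is a plausible payload for such a message at all.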
Step 3: Preservation of earthly life
• Create conditions for the re-emergence of new intelligent life on Earth
• Directed panspermia (Mars, Europa, space dust)
• Preservation of biodiversity and highly developed animals (apes, habitats)
Step 4: Robot-replicators in space
• Mechanical life
• Preservation of information about humanity for billions of years
• Safe narrow AI
Result:
• Resurrection by another civilization
• Resurrection of concrete people
• Creation of a civilization which has many common values and traits with humans
Plan D: Improbable ideas
Ideas: Saved by non-human intelligence, Quantum immortality, Strange strategy to escape the Fermi paradox, Technological precognition, Manipulation of the extinction probability using the Doomsday argument, Control of the simulation (if we are in it)
Plan D: Improbable ideas
Idea 1: Saved by non-human intelligence
• Maybe extraterrestrials are looking out for us and will save us
• Send radio messages into space asking for help if a catastrophe is inevitable
• Maybe we live in a simulation and the simulators will save us
• The Second Coming, a miracle, or life after death
Plan D: Improbable ideas
Idea 2: Quantum immortality
• If the many-worlds interpretation of QM is true, an observer will survive any death, including any global catastrophe (Moravec, Tegmark)
• It may be possible to make an almost one-to-one correspondence between observer survival and survival of a group of people (e.g. if all are in a submarine)
• Other human civilizations must exist in the infinite Universe
Idea 3: Strange strategy to escape the Fermi paradox
A random strategy may help us to escape some dangers that killed all previous civilizations in space
Idea 4: Technological precognition
• Prediction of the future based on advanced quantum technology, and avoiding dangerous world-lines
• Search for potential terrorists using new scanning technologies
• Special AI to predict and prevent new x-risks
Plan D: Improbable ideas
Idea 5: Manipulation of the extinction probability using the Doomsday argument
• Decide to create more observers in case an unfavourable event X starts to happen, thus lowering its probability (the UN++ method by Bostrom)
• Lowering the birth density to get more time for the civilization
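The Doomsday argument invoked here has a standard quantitative form: Gott's "delta t" estimate, which assumes our present moment is a uniformly random point within humanity's total lifetime.

```latex
% If the past duration of humanity is $T_p$ and our position within its
% total lifetime is uniformly random, then with 95\% confidence the
% future duration $T_f$ satisfies
\[
  \frac{T_p}{39} \;\leq\; T_f \;\leq\; 39\, T_p .
\]
% The UN++ idea described above would commit to creating many additional
% observers if event X begins, making "X has begun" an atypical observer
% moment and thereby, on this anthropic reasoning, lowering its probability.
```

The factor 39 comes from cutting off the lowest and highest 2.5% of possible positions in the total lifetime: if we are past the first 2.5%, then $T_f \leq 39\,T_p$; if we are before the last 2.5%, then $T_f \geq T_p/39$.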
Plan D: Improbable ideas
Idea 6: Control of the simulation (if we are in it)
• Live an interesting life so our simulation isn't switched off
• Don't let the simulators know that we know we live in a simulation
• Hack the simulation and control it
• Negotiate with the simulators, or pray for help
Bad plans
Ideas: Prevent x-risk research because it only increases risk, Controlled regression, Depopulation, Unfriendly AI may be better than nothing, Attracting good outcome by positive thinking
Bad plans
Idea 1: Prevent x-risk research because it only increases risk
• Do not advertise the idea of man-made global catastrophe
• Don't try to control risks, as this would only give rise to them
• As we can't measure the probability of global catastrophe, it may be unreasonable to try to change the probability
• Do nothing
Bad plans
Idea 2: Controlled regression
• Use a small catastrophe to prevent a large one (Willard Wells)
• Luddism (Kaczynski): relinquishment of dangerous science
• Creation of an ecological civilization without technology ("World Made by Hand", anarcho-primitivism)
• Limitation of personal and collective intelligence to prevent dangerous science
• Antiglobalism and diversification into a multipolar world
Idea 3: Depopulation
• Could provide resource preservation and make control simpler
• Natural causes: pandemics, war, hunger (Malthus)
• Extreme birth control
• Deliberate small catastrophe (bio-weapons)
Idea 4: Unfriendly AI may be better than nothing
• Any super AI will have some memory about humanity
• It will use simulations of human civilization to study the probability of its own existence
• It may share some human values and distribute them through the Universe
Idea 5: Attracting good outcome by positive thinking
• Preventing negative thoughts about the end of the world and about violence
• Maximum positive attitude «to attract» a positive outcome
• Secret police which use mind control to find potential terrorists, and superpowers to stop them
• Start partying now
Dynamic roadmaps
• The next stage of the research will be the creation of collectively editable, wiki-style roadmaps
• They will cover all existing topics of transhumanism and future studies
• Create an AI system based on the roadmaps, or working on their improvement
You can read all roadmaps at:
www.immortality-roadmap.com
www.existrisks.org