Beep...Destroy All Humans!

Transcript of Beep...Destroy All Humans!

Page 1: Beep...Destroy All Humans!

Beep... Destroy All Humans
Steven C. Mitchell, Ph.D.
Componica, LLC

Page 2: Beep...Destroy All Humans!

The Singularity is Near!

RAY KURZWEIL AND KURZWEIL TECHNOLOGIES, INC. LICENSED UNDER CC BY 1.0 VIA WIKIMEDIA COMMONS

-Moore's Law is an observation that computational density increases at an exponential rate.
-If you believe Ray Kurzweil, it won't be very long before computers exceed human brains, reaching a great unknown dubbed the Singularity.
-At that point humans will transcend their meaty ways, become one with machines, etc.
-Frankly, I just want something to fold my laundry, nothing that grandiose.
-Now there's this huge asterisk that's stopping the metal ones from ruling the world...
-Sure, you can have a machine computationally as powerful as a brain, but you need software to make it think like a person.

Page 4: Beep...Destroy All Humans!

So what would Human Software look like?

IMAGE TAKEN FROM CYBER RODENT PROJECT

-So what would such a software / algorithm look like?
-Something that models the environment, taking inputs and making predictions and decisions from those inputs and the model.
-This is part of a discipline in Computer Science known as Machine Learning.

Page 5: Beep...Destroy All Humans!

Ok, So what is Machine Learning?

-Machine learning draws upon many disciplines, like statistics, signal processing, mathematical optimization, and so on, to create algorithms that learn from data.
-Basically it boils down to a large tool chest of techniques to process data and make predictions or decisions.
-Some examples are Support Vector Machines, Random Decision Forests, Logistic Regression, k-Nearest Neighbors, k-Means Clustering, Principal Component Analysis, and my favorite one to pronounce: Mahalanobis distance. (A sketch of that last one follows below.)
-Most of these methods recognize patterns and make predictions from a mathematical and statistical standpoint.
-The field is quite broad and we could spend multiple semesters covering these tools, but we're not.
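
Since I mentioned Mahalanobis distance, here's what it actually computes: how far a point sits from a dataset, scaled by the dataset's covariance. A minimal sketch in Python with NumPy; the toy data is made up purely for illustration:

    import numpy as np

    # Toy 2-D dataset with correlated features (made up for illustration).
    rng = np.random.default_rng(0)
    data = rng.normal(size=(200, 2)) @ np.array([[2.0, 0.5], [0.5, 1.0]])

    mean = data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))

    def mahalanobis(x):
        """Distance of point x from the data, scaled by the data's covariance."""
        d = x - mean
        return np.sqrt(d @ cov_inv @ d)

    print(mahalanobis(np.array([3.0, 1.0])))   # large value = unusual point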

Page 6: Beep...Destroy All Humans!

The Latest Trend...

-Instead, let's focus on the one technique that's been capturing the media's interest for the past several years, with big software companies like Facebook, IBM, Google, and Microsoft buying, hiring, and clawing their way into the field.
-And that one technique is...
-Why not just copy the brain?
-It may sound preposterous, but we've been doing it for years.

Page 12: Beep...Destroy All Humans!

Neurons, Real and Artificial

DIAGRAM OF AN ARTIFICIAL NEURON. CHRISLD UNDER GFDL - WIKIMEDIA COMMONS

NEURON HAND-TUNED. QUASAR JAROSZ UNDER GFDL - WIKIMEDIA COMMONS

-So let's start with the fundamental building block of the brain, a nerve cell or neuron.
-A neuron receives signals from other neurons via dendrites across synapses.
-These incoming signals may excite or inhibit the neuron with varying degrees of strength.
-Once a neuron reaches a threshold of excitement, it fires a signal along its axon to further neurons.
-Learning mostly happens through the growth and decline of dendrites and changes in synapses.
-We can approximate this with the model of an artificial neuron shown here.
-Incoming signals are received from other neurons and inputs, weighted positively or negatively, summed, and passed through a non-linear threshold function. (A code sketch follows below.)
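
To make the model concrete, here's a minimal sketch of a single artificial neuron in Python; the input values, weights, and choice of sigmoid are arbitrary illustrative numbers:

    import numpy as np

    def sigmoid(z):
        # A smooth non-linear "threshold": squashes any sum into (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def neuron(inputs, weights, bias):
        """One artificial neuron: weight the incoming signals, sum, threshold."""
        return sigmoid(np.dot(weights, inputs) + bias)

    x = np.array([0.5, -1.0, 2.0])   # signals arriving from other neurons / inputs
    w = np.array([0.8, -0.4, 0.3])   # synapse strengths: positive excites, negative inhibits
    print(neuron(x, w, bias=-0.5))   # output is strong only past its threshold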

Page 13: Beep...Destroy All Humans!

Layers of Artificial Neurons

-It seems neurons are structured in layers from inputs to outputs.
-For example, it's hypothesized there are 50-100 layers between seeing an apple and responding that you've seen it.
-So in our model, we can arrange neurons in layers like so.
-Mathematically we can represent this using linear algebra: a single layer is y = f(Mx), and stacking a second layer gives y = f(M2 f(M1 x)), and so on. (A code sketch follows below.)
-With this mathematical representation it's been shown that:
 -Learning is all about adjusting these matrices, the connections between neurons.
 -With enough layers, this configuration is a universal function approximator. That means it can effectively learn any relationship between input and output if provided enough neurons.
 -More than one layer of neurons is very important. The more layers, the better the generalization.
 -The non-linear threshold function, f, is critical; otherwise the whole system mathematically collapses into a single layer.
 -That same non-linear function is what makes the network difficult to train...
-And that's the main crux of the field of artificial neural networks: they're difficult to train.
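
As a sketch of that notation, here's a forward pass through a small two-layer network; the layer sizes and random weights are placeholders that a real network would learn:

    import numpy as np

    def f(z):
        return np.tanh(z)   # the non-linear threshold function

    def forward(x, matrices):
        """Compute y = f(M_n ... f(M2 f(M1 x)))."""
        for M in matrices:
            x = f(M @ x)
        return x

    rng = np.random.default_rng(1)
    M1 = rng.normal(size=(4, 3))   # layer 1: 3 inputs -> 4 hidden neurons
    M2 = rng.normal(size=(2, 4))   # layer 2: 4 hidden -> 2 outputs
    print(forward(np.array([0.1, -0.2, 0.7]), [M1, M2]))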

Page 14: Beep...Destroy All Humans!

The Perceptron

FRANK ROSENBLATT

-The idea of trying to model the brain bottom-up with artificial neurons started in the 40s and 50s using electronics.
-In the 1950s, Frank Rosenblatt created artificial neurons trained using an algorithm called the Perceptron.
-The model consisted of a single layer of neurons because, mathematically, that was simple to train.
-Literally, it was a machine with motors that adjusted its own potentiometers.
-It adjusted the weights that contributed to incorrect outputs so the machine wouldn't make the same mistakes. (A sketch of the rule follows below.)
-It was a learning machine, and that amazed people.
-He applied it to speech recognition and visual pattern classifiers.
-Many claims were made about its potential, some exaggerated, and in 1969 Marvin Minsky published a book proving the limitations of single-layer networks. Research in artificial intelligence effectively halted for more than a decade, a period known as the AI Winter.
-Unfortunately, Rosenblatt died in a boating accident soon after Minsky's publication.
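
The learning rule itself fits in a few lines. A minimal sketch in Python, here learning logical AND; the dataset, epoch count, and learning rate are illustrative choices:

    import numpy as np

    def train_perceptron(X, y, epochs=20, lr=1.0):
        """Rosenblatt's rule: only weights that contributed to a wrong answer move."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                pred = 1 if w @ xi + b > 0 else 0
                err = target - pred        # zero when correct, so no update
                w += lr * err * xi
                b += lr * err
        return w, b

    # Logical AND is linearly separable, so a single layer suffices.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])
    w, b = train_perceptron(X, y)
    print([1 if w @ xi + b > 0 else 0 for xi in X])   # -> [0, 0, 0, 1]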

Page 15: Beep...Destroy All Humans!

Backpropagation

HTTPS://THECLEVERMACHINE.WORDPRESS.COM/

-In the 80s and 90s there was a resurgence of neural networks with the invention of the back-propagation algorithm by Rumelhart, Hinton, Williams et al.
-This algorithm was able to train more than one layer, producing networks that could solve many useful problems.
-The output is compared to a desired output, creating an error or delta.
-This delta is propagated back up the weights, adjusting those that contributed to the error. (A sketch follows below.)
-Again claims were made about their successes, but interest died in the late 90s due to limitations of back-propagation.
-The problem is that the delta values get weaker and weaker as they move back up the layers, known as the vanishing gradient problem.
-The algorithm didn't work too well beyond 2 or 3 layers.
-Neurology also chimed in, saying it's biologically implausible: no one observed error signals flowing back up real neurons. The field of neural networks effectively froze again.
-Another AI Winter, AI Winter 2.0, resulted, lasting from the late 90s to the mid-2000s.
-The Terminator's statement came about right when the field started dying.
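
Here's a minimal sketch of backpropagation training a two-layer network on XOR, the classic problem a single layer can't solve; the hidden size and learning rate are illustrative:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # hidden layer
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output layer

    for _ in range(10000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: compute the output delta...
        d_out = (out - y) * out * (1 - out)
        # ...and propagate it back up the weights (note how it shrinks per layer)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Adjust the weights that contributed to the error
        W2 -= 0.5 * h.T @ d_out
        b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h
        b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))   # should end up close to [[0], [1], [1], [0]]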

Page 16: Beep...Destroy All Humans!

Deep Learning

GEOFF HINTON

-Around 2005, Geoff Hinton (the same guy that co-invented back-prop) showed that many layers could be trained one layer at a time.
-The idea is that each layer runs independently, trying to understand and predict the data flowing in and out of it, learning to associate inputs and outputs from the bottom up (Restricted Boltzmann Machines, Auto-Encoders). (A sketch of the auto-encoder flavor follows below.)
-This seems to jibe with neurology, where feedback connections between layers are observed in real brains. For example, there are 10x more neurons flowing to the cochlea in the ear than flowing from the cochlea to the brain.
-These networks can then be fine-tuned using backpropagation. Improvements were also made in that area with the use of rectified non-linear functions and better ways of initializing the networks.
-Also, computers have gotten really fast: 100x-10,000x (with GPUs) since the 90s.
-It's the start of a new era for artificial neurons, as many layers can be trained now.
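
As one illustration of a layer "predicting its own data flow", here's a minimal tied-weight auto-encoder. This is my sketch of the general idea, not Hinton's exact procedure; the data, sizes, and learning rate are made up:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    X = rng.random((100, 8))                 # toy data "flowing into" this layer

    W = rng.normal(scale=0.1, size=(8, 4))   # encoder weights; decoder reuses W.T
    b_h, b_o = np.zeros(4), np.zeros(8)
    n = len(X)

    for _ in range(5000):
        h = sigmoid(X @ W + b_h)             # encode: compress 8 features to 4
        recon = sigmoid(h @ W.T + b_o)       # decode: try to reproduce the input
        d_out = (recon - X) * recon * (1 - recon)
        d_h = (d_out @ W) * h * (1 - h)
        W -= 0.5 * (X.T @ d_h + (h.T @ d_out).T) / n
        b_o -= 0.5 * d_out.sum(axis=0) / n
        b_h -= 0.5 * d_h.sum(axis=0) / n

    # h is the learned representation that would feed (and pretrain) the next layer.
    print(np.abs(recon - X).mean())          # reconstruction error after training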

Page 17: Beep...Destroy All Humans!

Speech Recognition

LI DENG - A TUTORIAL SURVEY OF ARCHITECTURES, ALGORITHMS, AND APPLICATIONS FOR DEEP LEARNING

HINTON - DEEP NEURAL NETWORKS FOR ACOUSTIC MODELING IN SPEECH RECOGNITION

-Between 2009 and 2010, Hinton and Li Deng at Microsoft applied deep learning to speech recognition.
-It blew other state-of-the-art techniques right out of the water.
-Since then, all major commercial speech recognition systems use deep learning (Apple Siri, Google Now, Microsoft Cortana, Xbox, Baidu).
-Hinton has publicly bragged that his research is what makes modern speech recognition on your smartphone possible.
-In 2013, Google acquired Hinton and his research team from the University of Toronto.

Page 18: Beep...Destroy All Humans!

KUNIHIKO FUKUSHIMA

Convolutional Neural Networks

NEOCOGNITRON MOVIE - PART #1 - HTTPS://WWW.YOUTUBE.COM/WATCH?V=QIL4KMVM2SW

-Now let's look at computer vision.
-In the early eighties, Kunihiko Fukushima invented the Neocognitron, an algorithm for object recognition based on "disassembling" the visual cortex of cats... poor cats.
-So basically, if you simulate the neural structure of a visual cortex on a computer, it can recognize stuff... go figure.
-What's amazing about this video is that it was created in the early 80s, yet it demonstrates real, robust object recognition at least two decades ahead of its time.

Page 21: Beep...Destroy All Humans!

Convolutional Neural Networks

YANN LECUN

-In the late 80s and early nineties, Yann LeCun was inspired by the Neocognitron and created the Convolutional Neural Network based on it.
-It's a system based on the Neocognitron's S and C cells, but trained using backpropagation with shared weights. (A sketch of the shared-weight idea follows below.)
-It was one of the first deep neural networks. Hinton once said, "Yann LeCun can make backpropagation work for anything."
-Since the late 90s it was used to read handwritten zip codes and scripts on checks, but it too got caught up in AI Winter 2.0. A decade ahead of its time, yet it fell on deaf ears.
-Recently it's been rediscovered by the Googles and Facebooks of the mid-2010s, heralded as a new idea.
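
Here's the shared-weight idea in miniature: one small filter slid over a whole image, so the same handful of weights covers every location. The stripe image and edge filter are toy examples:

    import numpy as np

    def conv2d(image, kernel):
        """Slide one small filter across the image: the same few weights are
        shared at every location, instead of one weight per pixel pair."""
        H, W = image.shape
        kh, kw = kernel.shape
        out = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        return out

    image = np.zeros((8, 8))
    image[:, 4] = 1.0                               # a vertical stripe
    edge_kernel = np.array([[1.0, -1.0]])           # detects left-to-right change
    feature_map = np.maximum(conv2d(image, edge_kernel), 0)  # rectified response
    print(feature_map)                              # lights up along the stripe's edge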

Page 22: Beep...Destroy All Humans!

Convolutional Neural Networks

ANDREW NG

-For example, in 2012 a collaboration between Google and Stanford, headed by Andrew Ng, created a convolutional neural network that, trained on stills from 10 million YouTube videos, taught itself to recognize cats automatically. That got the attention of many researchers, and suddenly 15-year-old CNNs were a hot idea again.

-Facebook developed DeepFace, a CNN-based system to recognize faces in their user photos (because the general public wasn't creeped out enough by them).

-As of last year, it's better than humans in recognition accuracy. (What the hell are they using it for?)

-And if you're wondering, Yann LeCun became head of the AI department at Facebook. (Why?)

-And that Andrew Ng guy from Stanford joined Baidu last May as Chief Scientist working on Deep Learning.

-To my dismay, CNNs have dominated the Computer Vision field for the past few years. They are a tool, people, not a solution to everything! Stop! You can create images that will fool them but not fool humans.

Page 23: Beep...Destroy All Humans!

RatSLAM - Artificial Hippocampus

RATSLAM: A HIPPOCAMPAL MODEL FOR SIMULTANEOUS LOCALIZATION AND MAPPING. M. J. MILFORD, G. F. WYETH, D. PRASSER

-SLAM, or Simultaneous Localization and Mapping, is a technique that constructs a map of an unknown environment while simultaneously keeping track of an agent's location within it.
-Something you probably wish your Roomba had.
-The hippocampus of the brain is known to be the center of memory, as well as of mapping yourself in the world.
-In your hippocampus there are cells that only respond if you're in a particular location facing a particular direction. For example, I have a few neurons in my head that light up only if I stand under the Eiffel Tower looking up.
-Milford et al. modeled a rat's hippocampus in an algorithm, and after three attempts produced a system that learns its position in the world strictly using vision.

Page 25: Beep...Destroy All Humans!

Playing Atari with Deep Reinforcement Learning

-DeepMind, a British artificial intelligence company founded in 2011 and acquired by Google in 2014, last month published a paper in Nature demonstrating the use of CNNs applied to reinforcement learning.
-Using only the pixel information from the screen, the system automatically learned, through trial and error, how to play classic Atari video games.
-In a few games, like Space Invaders and Breakout, the system learned how to play more efficiently than any human ever could.
-Unfortunately, Reinforcement Learning is another large topic for another talk, but basically it's about how to train an algorithm to perform a sequence of actions, like playing a game, where the only inputs are the environment and rewards / punishments. (A tiny sketch follows below.)
-It's basically how we and other mammals learn. Good Dog/Human/Rat! Bad Dog/Human/Rat!
-Fun Fact: The field of Reinforcement Learning will probably lead to the great robot uprising.
<Due to lack of time, insert a half-assed explanation of reinforcement learning and how we've identified structures in the brain where Q-Learning and SARSA occur... wave your hands a lot too.>
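
Since the notes mention Q-Learning, here's the tabular version of the update that DeepMind's system approximates with a deep network; the five-state corridor world is my own illustrative stand-in:

    import numpy as np

    # Tabular Q-learning on a toy 5-state corridor: start at 0, reward at state 4.
    n_states, n_actions = 5, 2                 # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.9, 0.1
    rng = np.random.default_rng(0)

    for _ in range(500):                       # 500 episodes of trial and error
        s = 0
        while s != n_states - 1:
            # Occasionally explore a random action, otherwise act greedily.
            a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
            s_next = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the goal
            # Q-learning update: nudge toward reward + discounted future value.
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

    print(Q[:-1].argmax(axis=1))   # learned policy: [1 1 1 1], i.e. always go right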

Page 27: Beep...Destroy All Humans!

Long Short-Term Memory

JÜRGEN SCHMIDHUBER

-Lastly, there's the Long Short-Term Memory neuron, or LSTM for short.
-All the previous examples involved systems with no knowledge of time. Even the reinforcement learning example I just showed lacks a time component: basically it's a network that, if it sees a certain screen, moves the controls in a certain direction. You probably noticed in the video a certain jitter to its movement because of it.
-Jürgen Schmidhuber (You_again Shmidhoobuh) invented a neuron with memory in 1997, with Sepp Hochreiter, which allows networks to have a sense of time. (A sketch of the cell follows below.)
-Mostly forgotten, but as with the other techniques, the Googles and Facebooks have rediscovered it, and in 2009 it became the best technique for handwriting recognition and OCR.
-People are just starting to talk about it this year, and I'm currently implementing my own to see what can be created from it.
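
Here's a minimal sketch of one LSTM step in Python, following the standard gate equations; the sizes, random weights, and input sequence are illustrative:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x, h, c, W, b):
        """One LSTM step: gates decide what to write into, keep in, and read
        from the cell memory c, letting information persist across time steps."""
        n = len(h)
        z = W @ np.concatenate([x, h]) + b
        i = sigmoid(z[0 * n:1 * n])    # input gate: how much new content to write
        f = sigmoid(z[1 * n:2 * n])    # forget gate: how much old memory to keep
        o = sigmoid(z[2 * n:3 * n])    # output gate: how much memory to reveal
        g = np.tanh(z[3 * n:4 * n])    # candidate memory content
        c = f * c + i * g              # update the cell's long-term memory
        h = o * np.tanh(c)             # short-term output for this time step
        return h, c

    rng = np.random.default_rng(0)
    n_in, n_hid = 3, 4
    W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
    b = np.zeros(4 * n_hid)

    h, c = np.zeros(n_hid), np.zeros(n_hid)
    for x in rng.normal(size=(5, n_in)):    # feed a 5-step input sequence
        h, c = lstm_step(x, h, c, W, b)
    print(h)                                # final state depends on the whole sequence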

Page 29: Beep...Destroy All Humans!

In Conclusion

I just presented a small subset of a large topic.

These ideas have been bubbling for decades, but only now are starting to catch on.

Yann LeCun has stated he is concerned about an AI Winter 3.0 due to the sudden excitement over these techniques.

Time will tell...

Page 30: Beep...Destroy All Humans!

So how do I get into this field?

Be very good at programming, linear algebra, and statistics.

Both Andrew Ng and Geoff Hinton offer online courses on Machine Learning and Neural Networks on Coursera.

Read papers and implement the algorithms yourself.

Be aware of the players in this game, as it will give you good insight into what to follow next.

Look at software packages like Scikit-Learn, Theano, and Yann LeCun's Torch7. (A small Scikit-Learn example follows below.)
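
For a taste of how little code these packages need, here's a Scikit-Learn example: a k-Nearest Neighbors classifier on its built-in digits dataset (my choice of dataset and classifier, purely for illustration):

    # Train and score a classifier in a few lines with Scikit-Learn.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    digits = load_digits()                       # small built-in handwritten-digit set
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, random_state=0)

    clf = KNeighborsClassifier(n_neighbors=3)    # the k-Nearest Neighbors from earlier
    clf.fit(X_train, y_train)                    # "learning" is one method call
    print(clf.score(X_test, y_test))             # accuracy on held-out data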