AAS 17-329

A DEEP LEARNING APPROACH FOR OPTICAL AUTONOMOUS PLANETARY RELATIVE TERRAIN NAVIGATION

Tanner Campbell*, Roberto Furfaro†, Richard Linares‡, and David Gaylor§

* Graduate Student, Department of Aerospace and Mechanical Engineering, University of Arizona, USA.
† Associate Professor, Department of Systems and Industrial Engineering, Department of Aerospace and Mechanical Engineering, University of Arizona, USA.
‡ Assistant Professor, Department of Aerospace Engineering and Mechanics, University of Minnesota, Minnesota, USA.
§ Associate Professor, Department of Aerospace and Mechanical Engineering, University of Arizona, USA.

Autonomous relative terrain navigation is a problem at the forefront of many space missions involving close proximity operations, and one that has no definitive solution. There are many techniques to help cope with this issue using both passive and active sensors, but almost all require very sophisticated dynamical models. Convolutional Neural Networks (CNNs) trained with images rendered from a digital terrain map (DTM) can provide a way to side-step the issue of unknown or complex dynamics while still providing reliable autonomous navigation by directly mapping an image to position. The portability of trained CNNs allows offline training that can yield a matured network capable of being loaded onto a spacecraft for real-time position acquisition.

INTRODUCTION

Developing and implementing methods for autonomous relative terrain navigation on planetary bodies is a challenging problem. Generally, given an arbitrary section of terrain, it is not trivial for a spacecraft to autonomously determine its position in three dimensions relative to the terrain. Because this problem arises frequently, particularly during the course of deep space missions, a host of possible solutions has been proposed, each with its own merits and disadvantages. Mario and Debrunner (2015) discuss a method called Natural Feature Tracking (NFT) that is to be used on board the OSIRIS-REx (Origins Spectral Interpretation Resource Identification and Security-Regolith Explorer) spacecraft to autonomously perform the Touch-And-Go (TAG) maneuver that will collect a sample from the asteroid Bennu. This method uses a normalized cross-correlation algorithm to match images rendered of hand-selected “features” from a shape model to images taken in real time. This information can then be used as a state update in the dynamics propagator.1 Johnson, et al. (2007) propose a similar correlation method with persistent feature tracking for planetary descent. A more direct approach is to use an active sensor such as LIDAR, which was proposed by Johnson and Ivanov (2011) for precision lunar landings. So far, there have not been any methods that use a deep learning approach, and more specifically Convolutional Neural Networks (CNNs), to solve this problem.


The inspiration for the current work comes from the success seen by Furfaro and Law (2015), who used a Single Layer Feedforward Network (SLFN) known as an Extreme Learning Machine (ELM) to determine an observer’s X and Y position at a fixed altitude over terrain, given a single nadir-pointed image and slew strip. The results showed that a properly trained neural network can quickly and accurately provide an observer’s position from images. This was not without drawbacks, however: the method required a nadir “slew” image that was a complete north-south strip cut from the original image, and it relied on a single image cut into many pieces to train the network instead of using many realistic images. Moreover, the method used a combination of Gabor filters and Principal Component Analysis (PCA) to extract features that were fed to the SLFN. This combination represented an ad-hoc solution that cannot necessarily be applied in general.

In contrast to other Artificial Neural Networks, CNNs can exploit the two- or even three-dimensional shape of the input data, using the spatial distribution of learned features to their advantage. This is what has made them a logical choice for image processing. Using multiple convolutional layers interspersed with statistical pooling layers allows features to be learned while reducing the input data size, drastically improving computational performance. The stacked-layer approach also allows for relatively simple modeling of highly nonlinear relationships that would otherwise be impossible or require massive single-layer networks. As such, training a CNN to provide location given an observation, such as an image, is a very attractive prospect for autonomous spaceflight. Not only is the location acquisition process extremely quick after training (taking only a fraction of a second), but a CNN also sidesteps the need for a complicated dynamics model. The CNN maps directly from image to position, regardless of the underlying accelerations. This can be extremely useful when navigating near small bodies or new objects where the dynamics are unknown or very complex.

With the increasing popularity of and support for neural networks, they are being applied to many new and old problems with tremendous success. CNNs lend themselves very naturally to classification-type problems where each input has a “label” as its desired output. Ease of classification, coupled with the ability to greatly reduce the number of input parameters without significant loss of information, has made CNNs a powerful image processing tool. They have already been used for autonomous vehicles as a way of providing “hazard maps” of passable and impassable terrain. Maturana and Scherer (2015) use what they call a 3D CNN to create a safety map for autonomous landing zone detection for helicopters. Bagnell, et al. (2010), under the DARPA UPI program, train a CNN with LIDAR data to enable a military vehicle to autonomously navigate rough terrain, again by making a hazard map of obstacles that the vehicle cannot surpass. So far, there have not been any examples of a CNN being used to actually determine position from an image; examples in the literature are all of logistic regression for classification of objects in the terrain.

This paper shows that a CNN can be trained with a series of images rendered from a digital terrain map (DTM) of the lunar surface (Figure 1) to return the position of the observer relative to the DTM.

Figure 1. This is a rendered image of part of the Apollo 16 landing site. This image was rendered using the open source POV-Ray software and an LROC DTM of the lunar surface.


Since a trained CNN is portable, a pre-trained CNN can be used on board a spacecraft for autonomous navigation. The CNN implementation explored here uses the Google Brain Team’s recently released TensorFlow Python API. This open source software provides an extensive library of optimized C++ functions tailored for deep neural networks. TensorFlow allows a CNN to be set up with relative ease, where a “graph” of the computations to be completed is specified in Python and then initialized in C++.2 Once the CNN is trained, a query for position from an image is almost instantaneous, so this method is well adapted for use in autonomous navigation.
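As a brief, minimal sketch of this graph-then-execute workflow (the toy operation, tensor shape, and TensorFlow 1.x-style session usage shown here are illustrative assumptions, not the network described later in this paper):

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x-style graph/session API

# Build the computation "graph" in Python: a placeholder image and one simple operation.
image = tf.placeholder(tf.float32, shape=[128, 128], name="image")
mean_brightness = tf.reduce_mean(image)

# The graph is then executed by TensorFlow's optimized C++ backend inside a session.
with tf.Session() as sess:
    value = sess.run(mean_brightness, feed_dict={image: np.zeros((128, 128), np.float32)})
    print(value)  # prints 0.0 for the dummy all-zero image
```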

CNN STRUCTURE

Convolutional Neural Networks have a very basic structure that can be tailored to suit a wide variety of problems. At their core, CNNs are made of three main parts: multiple layers of weighted convolution, nonlinear activation functions, and some calculation of cost that can be optimized (minimized) generally by some type of stochastic gradient descent. Statistical pooling (such as max or average) was omitted from this list, because while it may play a useful role in the translation invariance for some networks, in this research it serves mainly to reduce the spatial dimension of features and improve computational efficiency.

Figure 2. This diagram shows the flow of a general convolutional neural network. The ellipses represent the possibility of replication of the different layers.

The general flow of a CNN is that it will take some input (in this case an image) and pass it through multiple layers of convolution and nonlinear activation. Once this is done, the output is fed to the cost function and new weights are calculated to start the process over again. The new weights are calculated via a stochastic gradient descent function by implementing

$W_{i,k+1} = W_{i,k} - \eta \dfrac{\partial \theta}{\partial W_{i,k}}$   (1)

In Equation (1), the weights $W_{i,k+1}$ for the next iteration are calculated by subtracting a small portion of the gradient of the cost function $\theta$ from the current weights. Here $i$ is the index of the current convolutional layer and $k$ is the current iteration. The value of $\eta$ is generally small (e.g., $\eta = 0.0001$) and is referred to as the learning rate. The learning rate is usually held constant, but it may be beneficial to lower it as training progresses to more precisely minimize the cost function.
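As a minimal numerical sketch of the update in Equation (1) (the toy quadratic cost and the variable names here are assumptions for illustration, not the network’s actual cost):

```python
import numpy as np

# Toy example: minimize theta(W) = ||W - W_target||^2 with the update of Equation (1).
eta = 0.0001                     # learning rate
W = np.random.randn(3, 3)        # current weights W_{i,k}
W_target = np.zeros((3, 3))      # minimizer of the toy cost

for k in range(10000):
    grad = 2.0 * (W - W_target)  # gradient of the cost with respect to W
    W = W - eta * grad           # W_{i,k+1} = W_{i,k} - eta * d(theta)/dW
```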

In this research, the Adaptive Moment Estimation (Adam) gradient descent algorithm is used to minimize the cost. For brevity, the gradient of the cost at a given iteration will be denoted $g_k$. This algorithm starts by calculating the first and second moments of the cost gradient.

$m_k = \beta_1 m_{k-1} + (1 - \beta_1)\, g_k$   (2)

$v_k = \beta_2 v_{k-1} + (1 - \beta_2)\, g_k^2$   (3)

For the first iteration, $m_k$ and $v_k$ are initialized as zero vectors. This can cause some short-term bias toward zero if the hyper-parameters ($\beta_1$, $\beta_2$) are close to one.3 This initial bias can be accounted for, however, with a simple correction.

$\hat{m}_k = \dfrac{m_k}{1 - \beta_1^k}$   (4)

$\hat{v}_k = \dfrac{v_k}{1 - \beta_2^k}$   (5)

These corrected moments can then finally be used with the update rule

$W_{i,k+1} = W_{i,k} - \dfrac{\eta}{\sqrt{\hat{v}_k} + \epsilon}\, \hat{m}_k$   (6)

Kingma and Ba (2015) give recommended default values for the hyper-parameters of $\beta_1 = 0.9$, $\beta_2 = 0.999$, and $\epsilon = 10^{-8}$.
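For concreteness, a minimal NumPy sketch of the Adam update in Equations (2)-(6) is given below. The toy quadratic cost and variable names are assumptions for illustration; in this work the update is handled internally by TensorFlow’s Adam optimizer.

```python
import numpy as np

# Toy cost: theta(W) = ||W||^2, so the gradient is g = 2*W.
eta, beta1, beta2, eps = 0.0001, 0.9, 0.999, 1e-8
W = np.random.randn(3, 3)
m = np.zeros_like(W)              # first-moment estimate
v = np.zeros_like(W)              # second-moment estimate

for k in range(1, 5001):
    g = 2.0 * W                                   # gradient of the cost
    m = beta1 * m + (1.0 - beta1) * g             # Eq. (2)
    v = beta2 * v + (1.0 - beta2) * g**2          # Eq. (3)
    m_hat = m / (1.0 - beta1**k)                  # bias correction, Eq. (4)
    v_hat = v / (1.0 - beta2**k)                  # bias correction, Eq. (5)
    W = W - eta * m_hat / (np.sqrt(v_hat) + eps)  # update rule, Eq. (6)
```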

The nonlinear activation function that is typically used with CNNs is the ReLU function, short for Rectified Linear Units. This very simple function is given by

$f(x) = \max(0, x)$   (7)

There are other common activation functions that are used in machine learning, and in theory any nonlinear mapping would suffice, but particular success has been seen using the ReLU function in CNNs, hence its overwhelming prevalence in the literature. The sigmoid function is another big contender in machine learning, but this function behaves poorly in CNNs because it causes the gradients to vanish quickly, which makes gradient descent learning very difficult.

The convolutional layers follow the standard two dimensional convolution algorithm from mathematics.

$(X * W)(i, j) = \sum_{m} \sum_{n} X_{m,n}\, W_{i-m,\, j-n}$   (8)

In Equation (8), 𝑊 is the convolution kernel corresponding to the randomly initialized weights, and 𝑋 is the image with indices (𝑚, 𝑛). It is customary in CNNs to also add a small bias after the convolution. This can help to bring out “edges” of features in an image that are formed when the convolution has a high magnitude. Once these edges are brought out, the ReLU layer that follows the convolution will serve to extract these features that are then used to initialize the next layer. By doing this iteratively, the network can learn similar features in different images and use them for classification.
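The following NumPy sketch illustrates the discrete two-dimensional convolution of Equation (8) together with the small added bias and ReLU activation described above (the array sizes and the “valid” output extent are illustrative assumptions):

```python
import numpy as np

def conv2d_relu(X, W, b):
    """'Valid' 2-D convolution of image X with kernel W, plus a small bias and ReLU."""
    M, N = W.shape
    H, K = X.shape
    out = np.zeros((H - M + 1, K - N + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Flip the kernel so this is a convolution (as in Eq. 8), not a correlation.
            out[i, j] = np.sum(X[i:i + M, j:j + N] * W[::-1, ::-1])
    return np.maximum(0.0, out + b)  # small bias, then ReLU activation

# Example: a random 8x8 "image" convolved with a random 3x3 kernel.
feature_map = conv2d_relu(np.random.rand(8, 8), np.random.randn(3, 3), b=0.1)
```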

DATA

The terrain base model used for this research is a subsection of the Apollo 16 lunar landing site. The lunar surface was chosen for its richness of features, the plethora of available data, and its likelihood as a potential host body for use of this navigation method.


The Lunar Reconnaissance Orbiter (LRO) mission has made many images and digital terrain maps (DTMs) of the lunar surface available to the public, including the one chosen for this project. From the 0.5 m/px Apollo 16 DTM, a small 1024 by 1024 pixel section was selected to provide the raw data for image generation.

Due to the computationally intensive nature of image processing, two main methods were used to generate data to train the CNN. The first method was to use the open source Persistence of Vision Raytracer (POV-Ray) software to generate a single, nadir-pointed, realistic image of the entire terrain selection. Smaller images can then easily be sampled from this image in various patterns to generate training and testing sets for a CNN. The benefit of this is that a very large training set can be generated very quickly, and the training images are smaller in size, which speeds up processing. This is the main method used to generate images during algorithm development, allowing the quickest turnaround of results. The second method is the one to be used for final results: POV-Ray is used to generate every image used to train the CNN, where each image is rendered as if it were taken from a nadir-pointed spacecraft above the surface at a different location.

APPROACH

Convolutional Neural Networks have had great success in solving classification-type logistic regression problems that are a natural extension of image processing.4,5,6 In order to take advantage of this success, it is possible to elegantly recast the regression problem at hand as a classification problem. To start algorithm development, the simplest form of the relative terrain navigation problem is considered, and motion is restricted to one dimension. This way, each pixel location along that one dimension can be treated as its own class, in this case giving 1024 classes. Each training image can then be labeled by where along this line its center lies. This same concept can just as easily be extended to two dimensions.

To start, images of 128 by 128 pixels were sampled from the single nadir base image (Figure 1) every 8 pixels horizontally across the center of the image. In order to increase the training sample size, this nadir image was generated eleven times over with the Sun in different positions, and the same sampling of images was taken from each of these eleven base images. All in, this still gave a relatively small training sample size of 1,243 images. Tests were also run with images sampled every 4 pixels from the eleven base images, giving a training set of 2,475 images; a comparison of the results can be found later in this paper. It should be noted that this is a somewhat unorthodox approach to classification training. Generally, it is customary to have hundreds if not thousands of images per class label to train the network, while in this work there is not even one image per class. Instead, there are 11 images for every eighth class, and the remaining classes rely on the overlapping data in each image and the generalization performance of the CNN for proper identification.
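A minimal sketch of this sampling and labeling scheme is shown below (the base-image file name, the use of the Pillow library for loading, and the variable names are placeholders assumed for illustration):

```python
import numpy as np
from PIL import Image

# Load one rendered 1024x1024 nadir base image (file name is a placeholder).
base = np.asarray(Image.open("apollo16_base_sun00.png").convert("L"), dtype=np.float32)

patch, stride, row = 128, 8, 512   # 128x128 patches, sampled every 8 px across the center row
half = patch // 2

images, labels = [], []
for cx in range(half, base.shape[1] - half + 1, stride):
    images.append(base[row - half:row + half, cx - half:cx + half])
    one_hot = np.zeros(1024, dtype=np.float32)  # one class per pixel column
    one_hot[cx] = 1.0                           # label is the patch's center column
    labels.append(one_hot)

images = np.stack(images)[..., np.newaxis]      # shape (N, 128, 128, 1)
labels = np.stack(labels)                       # shape (N, 1024)
```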

The test data set also used images that were 128 by 128 pixels, but these images were sampled from 30 random locations along the same one dimension that the training images came from. Since there is a 97% likelihood of duplicating at least one training image by sampling in this manner, the testing base image was rendered with a Sun location that was not included in the other 11 base images. This way, even if the random sampling happened to occur directly at one of the training locations, the image would still be different.

The CNN input is the full 128 by 128 pixel image labeled with a 1024 element one-hot vector corresponding to its center location. The image is then passed through three convolutional layers, each followed by a max pooling layer to reduce the dimensionality of the input and speed up computation. The convolutional layer is defined by

Page 6: AAS 17-329 A DEEP LEARNING APPROACH FOR OPTICAL …arclab.mit.edu/wp-content/uploads/2018/10/2017_06.pdf · Developing and implementing methods for autonomous relative terrain navigation

6

$v_k = \mathrm{ReLU}(W_k * X + b_k)$   (9)

In Equation (9), $k$ is the index corresponding to the convolution layer, $*$ is the two-dimensional convolution operation, and $\mathrm{ReLU}(\cdot)$ applies $\max(0, x_i)$ to each element $x_i$ of its argument. The learned parameters during training are the filter weights $W_k$ and biases $b_k$. After the convolutional layers there are two fully connected layers and a readout classification layer. The training is optimized by minimizing a softmax cross-entropy cost function. The softmax is given by

$\hat{y}(X)_i = \dfrac{e^{x_i}}{\sum_j e^{x_j}}$   (10)

This softmax is used to calculate the distribution of the CNN’s predictions that is then passed to the cross entropy function.

$\theta(y, \hat{y}) = -\sum_i y_i \log \hat{y}_i$   (11)

In Equation (11), $y$ denotes the true distribution (the one-hot vector of image location) and $\hat{y}$ is the distribution predicted by the network. The network can then be trained on the images iteratively, either in batches or one at a time. For this research, the network was trained with batches of 25 images for 2000 iterations. As can be seen in the Results section, this is far more iterations than needed to reach stability, but it was chosen so that stability could be ensured during testing and development. At the end of every iteration, a random sample of 10 of the thirty test images was checked for accuracy in order to monitor the progress of the CNN training.
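A sketch of how such a network could be assembled and trained with the TensorFlow 1.x-style API is shown below, reusing the images and labels arrays from the earlier sampling sketch. The filter counts, kernel sizes, and fully connected layer widths are illustrative assumptions, not the exact values used in this work.

```python
import numpy as np
import tensorflow as tf

def conv_pool(x, n_in, n_out):
    # One weighted convolution (Eq. 9) followed by 2x2 max pooling.
    W = tf.Variable(tf.truncated_normal([5, 5, n_in, n_out], stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[n_out]))
    h = tf.nn.relu(tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding="SAME") + b)
    return tf.nn.max_pool(h, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")

x = tf.placeholder(tf.float32, [None, 128, 128, 1])   # input image
y = tf.placeholder(tf.float32, [None, 1024])          # one-hot center location

h = conv_pool(conv_pool(conv_pool(x, 1, 16), 16, 32), 32, 64)   # 128 -> 64 -> 32 -> 16
flat = tf.reshape(h, [-1, 16 * 16 * 64])
fc1 = tf.layers.dense(flat, 512, activation=tf.nn.relu)         # fully connected layers
fc2 = tf.layers.dense(fc1, 512, activation=tf.nn.relu)
logits = tf.layers.dense(fc2, 1024)                             # readout classification layer

# Softmax cross-entropy cost (Eqs. 10-11) minimized with the Adam update (Eqs. 2-6).
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(2000):
        idx = np.random.choice(len(images), 25)      # batches of 25 images
        sess.run(train_step, feed_dict={x: images[idx], y: labels[idx]})
```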

RESULTS

While this research is still very much in progress, the results so far are very promising. The results from the test with training images taken every 8 pixels from the eleven base images will be presented first followed by the results of the test with training images every 4 pixels.

Page 7: AAS 17-329 A DEEP LEARNING APPROACH FOR OPTICAL …arclab.mit.edu/wp-content/uploads/2018/10/2017_06.pdf · Developing and implementing methods for autonomous relative terrain navigation

7

Figure 3. This is a plot of the training accuracy versus the testing accuracy for 2000 iterations, using images sampled every 8 pixels from eleven base images as a training set. The training accuracy very quickly stabilizes at 100%, while the testing accuracy oscillates between 0% and 70%. This oscillation is a result of the random draw used to pick a sub-sample of test images after every iteration.

Figure 3 illustrates the progression of both the training accuracy and the testing accuracy through the 2000 iterations of the test. The training accuracy very quickly climbs to 100% and achieves good stability after only 100-200 iterations. The testing accuracy is more deceptive: while the single-instance accuracy peaks at 70% and the value appears to oscillate throughout the duration of the test, this is actually a consequence of how the testing images are selected. Further analysis of the results shows a quite stable solution. At the time of this test, only a very limited diagnostic printout was given after every iteration. This, coupled with how the images are randomly tested, makes it impossible to say for certain how many of the testing images the network was able to predict locations for with zero error in this test. All that can be said is that at least six of the thirty images were classified with no error. If only six images were classified correctly, one would expect at least one of them (corresponding to a 10% accuracy) to show up in a given random test about 92% of the time. Figure 4 provides the distribution of testing accuracies over the 2000 iterations, and this seems consistent with six correct classifications.

Six out of thirty possible correct classifications is not an impressive result, but when looking at the magnitude of the predicted position error of the remaining test images, the results become more encouraging. It is standard to view classification as a binary system of either correct or incorrect because, classically, that would be the case. For instance, when considering the MNIST data set for classifying handwritten numbers, if the trained network were to classify a handwritten “1” as an “8”, this is an obvious failure because the relationship between the classes is unimportant: the written number one looks nothing like the number eight, and the justification that the answer is only 7 off from truth is illogical. In this application, however, the relationship between the classes is important, so the question of how wrong the prediction is matters.

Figure 4. This is a plot of the distribution of test accuracy values that result from the random draw used to test the images. It is not easily seen, but there were two instances when 70% accuracy occurred. This graph implies that at least 6 images were classified correctly.


After around 400-500 iterations, the errors in predicted location for each image all dropped to within 5 pixels of the true location. For a ground sample distance of 0.5 m, this means that all errors, once the network reached sufficient stability, were reduced to less than 2.5 m. While perhaps not perfect, this preliminary test shows the great potential of this method for relative terrain navigation.

The test using images sampled every 4 pixels was stopped after 450 iterations because the model had reached sufficient stability. Training accuracy reached 100% with reasonable stability in just 30 iterations, and the peak test accuracy was 80%, with 70% accuracy reached in just 40 iterations. In this test all errors dropped to within 3 pixels, or 1.5 m, which is a good improvement over the previous test’s results.

Due to the summary nature of this evaluation of the network, more in-depth results were desired. A new evaluation set of 180 images was created by sampling a 128 by 128 pixel area every 5 pixels across the training dimension. Again, this was done from an image with a unique illumination condition to ensure that no testing image was in the training set. This set was chosen to give a more comprehensive view of the accuracy of the trained networks. Both saved networks were loaded and evaluated against this new set of images. The results from this evaluation corroborated what was discussed previously, but introduced a few outlying points, which can be seen in Figure 6. These outliers are most likely a result of the network overfitting or of not calculating enough features at the convolution layers. Either of these two issues can cause images with very similar terrain to be classified drastically in the wrong place. Fortunately, both are easy to correct by adding a “dropout” probability during training and by increasing the number of features in each convolutional layer, respectively. Efforts to enact these corrections and re-evaluate the results are currently under way.

Figure 5. Plot (a) shows the progression of the training and testing accuracy after every iteration, and plot (b) shows a histogram of the distribution of testing accuracies, for the test with training images sampled every four pixels. These plots are analogous to Figures 3 and 4, which showed the results for the test with training images sampled every 8 pixels.


Figure 6. Plots (a) and (b) respectively show the position errors for the network trained with images every 8 pixels and the network trained with images every 4 pixels, when evaluated with an image taken every five pixels. The error bounds on both networks are very tight and fall in line with those reported in the Results section. The few outliers are a consequence of overfitting and of not calculating enough features in the convolution layers; both issues are easily remedied.

CONCLUSIONS AND FUTURE WORK

The structure of CNNs makes them a very natural extension of image processing. Since the features are learned by the network and not defined a priori, there is tremendous flexibility in application. As a result, Convolutional Neural Networks provide very positive results in almost every field to which they are applied. The results of these preliminary tests during algorithm development are no different. With a very simple Convolutional Neural Network and a very sparse training set, it is possible to get down to a 3-pixel position error in one dimension. This method can easily be extended to two dimensions, with similar results expected. Once this extension has been made, full-sized (1024 by 1024 pixel) rendered images of the selected lunar terrain will be used to demonstrate the applicability of this method to a more realistic environment. While it appears that the distribution of training locations is ultimately a limiting factor for the accuracy of the network using this method, it is still a very viable means of determining position from images.

REFERENCES

1. Mario, C.; Debrunner, C.; “Robustness and Performance Impacts of Optical-Based Feature Tracking to OSIRIS-REx Asteroid Sample Collection Mission,” AAS 16-087, 2015.
2. Abadi, M.; et al.; “TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems,” Google Research, 2015.
3. Kingma, D. P.; Ba, J.; “Adam: A Method for Stochastic Optimization,” arXiv:1412.6980 [cs.LG], December 2014.
4. Ciresan, D. C.; et al.; “Flexible, High Performance Convolutional Neural Networks for Image Classification,” IJCAI Proceedings-International Joint Conference on Artificial Intelligence, Vol. 22, No. 1, 2011.
5. Krizhevsky, A.; Sutskever, I.; Hinton, G. E.; “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems, 2012.
6. Lawrence, S.; et al.; “Face Recognition: A Convolutional Neural-Network Approach,” IEEE Transactions on Neural Networks, Vol. 8, No. 1, 1997, pp. 98-113.
7. Johnson, A. E.; Ansar, A.; Matthies, L. H.; Trawny, N.; Mourikis, A.; Roumeliotis, S.; “A General Approach to Terrain Relative Navigation for Planetary Landing,” Proc. AIAA Conference and Exhibit, No. AIAA 2007-2854, Rohnert Park, CA, May 2007.
8. Johnson, A. E.; Ivanov, T.; “Analysis and Testing of a LIDAR-Based Approach to Terrain Relative Navigation for Precise Lunar Landing,” Proc. AIAA Guidance, Navigation and Control Conference (AIAA-GNC 2011), 2011.
9. Furfaro, R.; Law, A.; “Relative Optical Navigation Around Small Bodies via Extreme Learning Machines,” AAS 15-712, Proceedings of the Annual AAS/AIAA Astrodynamics Specialists Conference, Aug. 9-13, 2015, Vail, CO.
10. Maturana, D.; Scherer, S.; “3D Convolutional Neural Networks for Landing Zone Detection from LiDAR,” International Conference on Robotics and Automation, March 2015.
11. Bagnell, J. A.; Bradley, D.; Silver, D.; Sofman, B.; “Learning Rough-Terrain Autonomous Navigation,” IEEE Robotics & Automation Magazine, Vol. 17, No. 2, pp. 74-84, June 2010.
12. Gaskell, R. W.; et al.; “Characterizing and Navigating Small Bodies with Imaging Data,” Meteoritics & Planetary Science, Vol. 43, No. 6, pp. 1049-1061, 2008.
13. Na, Y.; Jung, Y.; Bang, H.; “Flash-LIDAR Based Terrain Relative Navigation for Autonomous Precision Lunar Landing,” Proc. International Symposium on Spaceflight Dynamics, ISSFD, Munich, Germany, Oct. 2015.
14. Leines, M. T.; “Terrain Referenced Navigation Using SIFT Features in LIDAR Range-Based Data” (Master’s thesis), AFIT-ENG-MS-14-D-47, 2014.