
IAC-03-Q.4.04


VISION BASED NAVIGATION FOR PLANETARY EXPLORATION OPPORTUNITY FOR AURORA

B.Polle, B.Frapard, T.Voirin EADS Astrium

31, avenue des Cosmonautes, 31402 Toulouse Cedex 4, France [email protected]

J. Gil-Fernández, E. Milic, M. Graziano, R. Panzeca

GMV S.A., c/ Isaac Newton 11, 28760 Tres Cantos, Madrid, Spain [email protected]

J. Rebordão, B. Correia, M. Proença, J. Dinis, P. Motrena, P. Duarte

INETI, Estrada do Paço do Lumiar, 1649-038 Lisboa, Portugal

ABSTRACT

Much interest has been given in recent years to vision based navigation for planetary missions. A dedicated experiment has been conducted by NASA on Deep Space One, and further experiments are planned on the Mars Reconnaissance Orbiter spacecraft. Recently, the preparation of the European framework for a manned exploration of Mars, the so-called AURORA missions, has set the challenge. This paper presents the progress made in this frame, with results of the ESA Technological Research Program “Autonomous Navigation for Interplanetary Missions”*. In a first part, the navigation requirements of exploration missions are addressed. Navigation needs depend on the mission phase, such as cruise, approach or rendezvous, and also on the nature of the target, either small or large bodies. Parametric performances for various classes of cruise and encounters, with large and small bodies, are presented and compared to mission requirements. Both impulsive and continuous propulsion are considered. Vision based navigation appears as a natural supporting technology for continuous propulsion systems. A preliminary camera design is presented. The major challenges in the detection of faint objects and the imagery of extended bodies, with or without atmosphere and with low Sun-Object-Target angles, are introduced and discussed. The first tests on images acquired with ground telescopes, combined with simulations of the navigation prototype, demonstrate the maturity of the concept, preparing the flight demonstration of the full concept planned in the frame of the ESA SMART-1 project in 2004.

* ESTEC contract #15292/01/NL/EC

INTRODUCTION

The objective of this study is to take a major step towards a European capability for autonomous on-board navigation. The study involves the design, prototype implementation and test of the developed autonomous on-board navigation system (ObNav), covering all the phases of the mission: cruise, low-thrust deep space manoeuvres, and encounters with planets or small bodies (asteroids or comets). The achieved navigation accuracy is generally competitive with ground based radiometric navigation and in some cases much more accurate.

AN AURORA PERSPECTIVE

In the frame of the AURORA missions, optical navigation is now a valuable alternative providing autonomy and accuracy for cruise navigation, but also for aerocapture and rendez-vous. Low-thrust cruise navigation is certainly the domain where the interest of optical navigation is most obvious: it allows autonomy to be recovered in a phase where the introduction of the new propulsion technology actually decreases autonomy when a conventional ground based radiometric navigation system is used. Accurate rendez-vous and encounter is also a potential application of vision based navigation, as has been demonstrated on DS1. Application to aerocapture and aerobraking can also be considered, although the achievable accuracy in the presence of a planetary atmosphere has still to be demonstrated. This will be one of the main objectives of the SMART-1 navigation experiment planned in this study.

Vision based navigation can also be applied to relative navigation in a Martian orbit rendez-vous with a passive target. Although this type of application is not within the scope of this study, many of the techniques developed are fully applicable to such problems and can now be considered as candidate technologies for AURORA.

REFERENCE MISSIONS

Two European Space Agency (ESA) interplanetary space missions, SMART-1 and ROSETTA, selected because their planning coincides with that of this study, have been used as reference missions and to provide real images for testing.

SMART-1 is the first of the SMART (Small Missions for Advanced Research and Technology) missions. SMART-1 will test a range of new technologies for spacecraft and instruments and will be the first European spacecraft to travel to and orbit the Moon. It will also be the first ESA spacecraft making use of electric propulsion as its primary propulsion system.

Rosetta's primary mission goal is a rendezvous with comet 46P/Wirtanen. On its eight-year journey to the comet, the spacecraft will pass close to two asteroids (Otawara and Siwa). The trip requires three gravity assist maneuvers, one with Mars and two with the Earth (see Figure 2). The Rosetta launch has been postponed after the Ariane 5 launch failure in December 2002 and a new target comet has been selected (Churyumov-Gerasimenko).

Three scenarios have been selected for their interest in supporting future planetary missions:

• Rosetta second Earth gravity assist manoeuvre.

• Rosetta Siwa asteroid encounter

• SMART-1 Apogee Raising phase.

SMART-1 will also provide flight images to the study to evaluate achievable navigation accuracy using observation of Earth and Moon limbs. Ground images of asteroids complete the validation, providing cruise phase validation with real images.

OPTICAL NAVIGATION PRINCIPLE

Between two encounters, the space vehicle is either in a ballistic cruise phase or performing a low-thrust trajectory correction maneuver. During these phases, optical navigation relies on Line Of Sight (LOS) measurements of distant visible objects. These can be planets or asteroids, which are used as beacons. The measurement of the line of sight to a known object locates the vehicle in a cone starting from the beacon, with a half cone angle equal to the camera LOS measurement uncertainty. Measuring the LOS of two beacons enables positioning of the vehicle in 3D, as illustrated in Figure 1. A dynamical filter is then used to estimate the vehicle velocity from the observations of positions at (at least) two different times, and makes it possible to cope with non-simultaneous measurements.

Figure 1. Principle of 3D localization using two beacons' LOS measurements (vehicle position uncertainty resulting from the LOS measurement errors of beacons 1 and 2)

During the encounter phase, the target becomes closer than any other object, and target observation provides the highest positioning accuracy in the direction perpendicular to the LOS vector. The LOS vector being very close to the velocity vector, the encounter accuracy is mainly improved in the cross track directions. At large distances, along track (or range) position estimation is still based on the cruise accuracy (based on distant beacon observations). When the target becomes resolved, the along track position measurement can be based on the observed size of the object and its comparison to the a priori knowledge. Moons orbiting around the target can also be used for this purpose. The drawback of this approach is that it requires a good knowledge of the target size or the presence of moons, and thus lacks genericity. Observability analysis shows that, assuming the vehicle velocity modulus is known and the target is not aligned with the velocity, the along track error can be observed without any a priori knowledge or presence of moons. The accuracy is proportional to the distance between the velocity vector and the target direction. This concept is selected as the baseline of the ObNav along track position estimation during close encounters. For resolved objects, the generic observable retained for navigation is therefore the reconstructed LOS to the center of the body. This generic approach permits the derivation of achievable performance figures in all phases of a generic interplanetary mission.
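As an illustration of the two-beacon localization principle described above, the following sketch (not part of the original study; the function and variable names are illustrative) solves the least-squares intersection of the lines defined by the beacon positions and the measured LOS unit vectors, assuming the beacon ephemerides are available on board.

```python
import numpy as np

def locate_from_beacon_los(beacon_positions, los_units):
    """Least-squares 3D position from LOS unit vectors to known beacons.

    Each measured LOS u_i constrains the vehicle to the line passing through
    the beacon position p_i with direction -u_i.  Minimising the sum of squared
    perpendicular distances to these lines gives the linear system A r = b.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(beacon_positions, los_units):
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector perpendicular to the LOS
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Example: two beacons (positions in km) and noise-free LOS from r_true
r_true = np.array([1.0e8, -2.0e7, 5.0e6])
beacons = [np.array([2.3e8, 1.1e8, 1.0e7]), np.array([-1.5e8, 9.0e7, -3.0e6])]
los = [(p - r_true) / np.linalg.norm(p - r_true) for p in beacons]
print(locate_from_beacon_los(beacons, los))   # recovers ~r_true
```

With noisy measurements, the same normal equations simply weight each line by its LOS uncertainty, which is how the error cone picture above turns into an error ellipsoid.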

Page 3: Autonav-Polle-Frapard-Vision Based Navigation for Planetary Exploration Opportunity for Aurora-IAF2003v-IAC-03.Q.4.04

3

Navigation requirements

In a previously published paper [3], a review of the navigation requirements for the various phases of the reference missions has been presented; it is shown in Figure 2.

ASTEROID BASED NAVIGATION PERFORMANCE PREDICTION

Cruise navigation performance is a function of many parameters: the LOS accuracy, the asteroid spatial distribution, the position of the vehicle, etc. In order to specify the camera limit magnitude, a statistical analysis has been performed to evaluate the RSS error as a function of these parameters. Given the asteroid distribution, the key parameters are expected to be the distance of the vehicle from the Sun and the camera limit magnitude. For each distance and limit magnitude, the position of the vehicle and the date are randomly selected. For each case, the database is sorted by visual magnitude and only asteroids brighter than the limit magnitude are retained. The resulting list is then sorted by distance to retain the closest visible asteroids. Finally, the navigation error ellipsoid is computed by successively including asteroids in distance order. The error ellipsoid size is proportional to the distance and to the LOS error. Normalized to a LOS error of one, the result is homogeneous to a distance. The semi-major axis represents the worst-case navigation error. Figure 3 shows the result of this analysis assuming the observation of 20 asteroids.

To obtain the navigation accuracy for a given camera design, this value simply has to be multiplied by the camera LOS accuracy. It can be seen that the performance is between 0.02 and 0.05 AU in the range [2-3.5] AU, which corresponds to the main asteroid belt. Before and after, the performance degrades very quickly. From this analysis we can conclude that the minimum limit magnitude for asteroid based navigation is 12, and that a limit magnitude of 13 allows navigation up to 3 AU from the Sun. Above 3 AU, a much more sensitive camera is required to detect a less dense population of asteroids receiving less light from the Sun. Assuming an 8 µrad LOS accuracy (a conservative value for the selected navigation camera), the performance at 2.5 AU from the Sun is 0.03 AU × 8 µrad = 40 km, and the navigation performance will be dominated by asteroid ephemeris errors. At 1 AU from the Sun, the navigation error grows up to 800 km. This prediction has been confirmed by the results obtained using the ground observations presented at the end of this paper.
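The error-ellipsoid computation described above can be sketched as follows (an illustrative reconstruction, not the study's actual tool; beacon directions and distances would come from the asteroid database). Each beacon LOS constrains the two directions perpendicular to the beacon direction with a 1-sigma error equal to the LOS error times the beacon distance; combining the corresponding information matrices and taking the weakest eigenvalue gives the semi-major axis of the ellipsoid.

```python
import numpy as np

def worst_case_error(beacon_dirs, beacon_dists, sigma_los=1.0):
    """Semi-major axis of the position error ellipsoid from beacon LOS fixes.

    Each beacon i constrains the plane perpendicular to its unit direction u_i
    with standard deviation sigma_los * d_i, so it contributes the information
    matrix (I - u_i u_i^T) / (sigma_los * d_i)**2.
    """
    info = np.zeros((3, 3))
    for u, d in zip(beacon_dirs, beacon_dists):
        u = u / np.linalg.norm(u)
        info += (np.eye(3) - np.outer(u, u)) / (sigma_los * d) ** 2
    cov = np.linalg.inv(info)
    return np.sqrt(np.max(np.linalg.eigvalsh(cov)))  # worst-case 1-sigma axis

# Example with sigma_los = 1 ("normalized to a LOS error of one", so the
# result is homogeneous to a distance): three asteroids at 0.4, 0.7, 1.1 AU.
dirs = [np.array([1, 0, 0.1]), np.array([0.2, 1, 0]), np.array([-0.5, 0.4, 1])]
print(worst_case_error(dirs, [0.4, 0.7, 1.1]))
```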

NAVIGATION CAMERA DESIGN

The camera field of view is driven by the required accuracy and by the density of reference stars necessary for the asteroid position measurement in the camera reference frame. In order to keep the camera size to a minimum, advanced image processing algorithms will be used to detect faint asteroids. The basic principle is to perform post-integration of the low asteroid signal (Mag 13) using brighter guide stars (Mag 11). This implies that guide stars shall be detected unambiguously in every single exposure frame. The star density changes considerably with galactic latitude. Based on the available data on star density per square degree, a 0.5°x0.5° FOV was first selected in order to have a minimum of two reference stars even in the mean worst case (90° galactic latitude). In this case, fewer than 50 stars with magnitude lower than or equal to 13±0.5 are expected for the mean best case (0° galactic latitude).

Phase and requirement:

• Asteroid Encounter: 1 km cross track; 3% along track (up to encounter)

• Planet Encounter (Gravity Assist Maneuver): 10 km cross track; 3% along track (up to -3 hours)

• Cruise (before Asteroid Encounter): 500 km

• Cruise (before Planet Encounter): 10 000 km

• Low Thrust Deep Space Maneuver: 500-1000 km

Figure 2 Study Applicable Navigation Requirements

Figure 3 Asteroid Based Navigation Accuracy Prediction


During the ground-based image acquisition campaign, this FOV was found to be slightly too small, leading to the rejection of a significant number of the few detectable asteroids for lack of a sufficient number of background stars. A field of view of 0.8° has been found to be more appropriate and was finally selected.

A 512x512 CCD detector has been selected as the baseline, but the emerging CMOS detector technology (also called "APS" for "Active Pixel Sensor") is considered a promising alternative or successor. CMOS Imaging Systems (CIS) offer interesting features for the ObNav application: 1) extension of the domain where bright objects and stars can be imaged simultaneously; 2) reduction of smearing with bright objects. In addition, a CIS will allow the design of a less expensive camera, with reduced power and mass budgets compared to a camera using a CCD.

The refractive optic entrance pupil diameter is 70 mm, sufficient to detect the Mag 11 guide stars with a good signal to noise ratio even at end of life.

IMAGE PROCESSING

Spacecraft jitter and long exposures are key variables in this context. One possible sequence of operational image processing (IP) functions can be as follows.

When approaching a target, the IP function relies upon the a priori knowledge of its location as provided by the accumulated ephemeris. Targets are initially dim, although their brightness increases as the spacecraft comes closer. In most cases, targets are initially optically non-resolved but can later be fully resolved or even occupy most of the field of view (FOV) of the navigation camera. In addition, in the earlier phases where targets are dim and non-resolved, they can easily be misinterpreted, and sequences of images and corresponding validation criteria are needed to ensure their identification in the image. The problem is that jitter continuously changes the camera reference system, and any image sequence must be given spatial coherence and a common reference, implying the implementation of star mapping functions to ensure such commonality. The same difficulty arises when the target is close and very bright, precluding the simultaneous observation of reference stars. The IP design must cope with this complex set of considerations.

As explained, a star mapping function is needed to provide sequences of images with the same reference (in the ICRS). This function might seem relatively standard, although the potentially huge dimension of the supporting on-board star catalogue imposes different trade-offs still under consideration. The limiting situations are: 1) the attitude error provided by the AOCS system is large, and a significant number of stars must be considered to match the imaged pattern of points to the catalogue stars; 2) the attitude error is very small, the jitter is minimal, and the IP function knows that only stars within a small patch are candidates to match the image pattern. Paradoxically, the complexity of this function depends on the jitter itself, which is very welcome from an IP point of view, because it clearly simplifies the detection of stars on a noisy background. The cost is a more complex detection algorithm based on: 1) multiple time integration to increase the signal-to-noise ratio on different slice-images (which are later combined in the form of an accumulation image) and 2) multiple cross-correlation, in which every such pattern is sequentially used as a correlation template and the correlation maxima are combined in order to produce the most accurate set of star locations. In summary, spacecraft jitter enables a complex but effective star identification methodology by strongly reducing the effects of noise; a jitter-free spacecraft increases the number of false stars the star mapping function must cope with, although with a smaller set of candidate stars, reducing the probability of a successful match.
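A minimal sketch of the multiple-time-integration (shift-and-add) idea is given below. It is an illustrative reconstruction, not the study's IP prototype: it assumes integer-pixel registration on a single bright guide star, whereas the actual algorithm combines several guide stars and cross-correlation templates.

```python
import numpy as np

def accumulate_frames(frames):
    """Shift-and-add short exposures using the brightest pixel as guide star.

    frames: list of 2-D arrays (short exposures of the same sky patch).
    Each frame is shifted so that its brightest pixel (assumed to be the
    guide star) lands on the guide star position of the first frame, then
    all frames are summed into an accumulation image.
    """
    ref = np.unravel_index(np.argmax(frames[0]), frames[0].shape)
    acc = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        peak = np.unravel_index(np.argmax(f), f.shape)
        dy, dx = ref[0] - peak[0], ref[1] - peak[1]
        acc += np.roll(f, shift=(dy, dx), axis=(0, 1))  # integer-pixel registration
    return acc

# With N registered frames the faint-object signal grows as N while the
# background noise grows as sqrt(N), improving detectability (cf. Figure 5).
```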

When the target is dim, it should be observed simultaneously with stars. In normal conditions, the star mapping function should not have used the target itself in the star matching phase (a useful piece of information for the IP function at this stage), but the target cannot be identified without a sequence of at least three images which unambiguously demonstrates that it is not a noise effect and that it moves according to the known ephemeris and the laws of physics. After a successful identification, the IP enters a tracking mode ensuring that the target is located correctly and with increasing accuracy as the approach continues and the target eventually becomes resolved, its image occupying several pixels or even a large part of the image.

Figure 4 NAVCAM Preliminary Design (window, front baffle, aft baffle, 8-lens optical combination, detector)

The actual image parameters determine the image acquisition sequence: image referencing and target evaluation can be performed on the same image, or alternating sequences of higher-exposure images for referencing and lower-exposure images for target assessment must be programmed (the referencing of the latter being interpolated from the previous and subsequent images). As far as the target is concerned, the IP function will change from a pure raster processing of the target image, which only allows an estimation of the centre of brightness (COB), to a vectorial processing relying on its contour and striving to improve the estimation of the centre of mass (COM) from the COB with the support of geometric assumptions on the target, such as its ellipsoidal shape and the illumination and viewing directions. This approach is used both for objects completely within the FOV and for objects that, for any reason, are cut by the borders of the image. In all cases, the navigation system receives the best possible estimation of the COM and proceeds with its own tasks to accomplish the mission.
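The raster-mode centre-of-brightness estimate mentioned above amounts to an intensity-weighted centroid over the pixels attributed to the target; a minimal sketch is shown below (illustrative only, with an assumed 2-sigma background threshold). The contour-based COM correction would then be applied on top of this estimate using the geometric assumptions described in the text.

```python
import numpy as np

def centre_of_brightness(image, k_sigma=2.0):
    """Intensity-weighted centroid (COB) of the pixels above the background.

    image: 2-D array containing (mostly) the target.
    Pixels more than k_sigma standard deviations above the median background
    are attributed to the target; their intensity-weighted mean position is
    returned as (row, column) in pixel coordinates.
    """
    background = np.median(image)
    noise = np.std(image)
    mask = image > background + k_sigma * noise
    weights = (image - background) * mask
    total = weights.sum()
    rows, cols = np.indices(image.shape)
    return (np.sum(rows * weights) / total, np.sum(cols * weights) / total)
```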

The IP approach validation is being supported by an image generator which actually simulates images to be taken by the navigation camera. The main parameters of the image generator are the characteristics of the camera itself (optics, sensor and other system parameters), a star catalogue and a star photometric model (which computes the flux at the entrance pupil of the camera from magnitudes and the B-V colour indexes - used to compute the bolometric correction and the effective temperature of each star), the attitude profile of the spacecraft, and the information on the beacon to be retrieved. Images generated by the simulator and images acquired by on-ground telescopes are being used to test the complete IP approach.
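For the star photometric model, the conversion from catalogue magnitude to flux at the entrance pupil follows the usual logarithmic magnitude scale; the short sketch below is a simplified illustration (the zero-point irradiance F0 is an assumed placeholder, and the B-V-based bolometric correction of the actual generator is omitted).

```python
import math

def flux_at_pupil(magnitude, f0=3.0e-9, aperture_diameter_m=0.07):
    """Approximate in-band power collected by the camera aperture [W].

    magnitude : catalogue visual magnitude of the star
    f0        : assumed zero-point irradiance for magnitude 0 [W/m^2]
    The magnitude scale gives a factor 10**(-0.4) in flux per magnitude.
    """
    irradiance = f0 * 10.0 ** (-0.4 * magnitude)        # W/m^2 at the pupil
    area = math.pi * (aperture_diameter_m / 2.0) ** 2   # 70 mm entrance pupil
    return irradiance * area

# Example: the camera collects about 6.3 times more flux from a Mag 11 guide
# star than from a Mag 13 asteroid.
print(flux_at_pupil(11.0) / flux_at_pupil(13.0))
```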

NAVIGATION AND GUIDANCE

TRAJECTORY DETERMINATION

Multiple criteria have supported the navigation filter design such as generality, reactivity, numerical stability and adaptability to available measurements, to maneuvers and to camera performances. The preferred design solution is a sequential Kalman filter with UD decomposition of the covariance matrix.

The state vector is selected considering the overall filter design and also the previous experience in the field (mainly the NASA DS-1 mission). Accordingly, the augmented state vector includes the SC phase-space state, the solar radiation pressure coefficient (modeled as a bias) and a thrust scale factor. The latter accounts for the possible thrust magnitude measurement error and is modeled, for generality, as an exponentially correlated random variable (ECRV).
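The exponentially correlated random variable used for the thrust scale factor has a simple discrete-time form; the sketch below is illustrative (the correlation time tau and steady-state sigma are assumed tuning parameters) and shows how such a state would be propagated between filter updates.

```python
import numpy as np

def ecrv_step(x, dt, tau, sigma_ss, rng=None):
    """One discrete-time step of an exponentially correlated random variable.

    x        : current value (e.g. the thrust scale-factor error)
    tau      : assumed correlation time of the process [s]
    sigma_ss : steady-state standard deviation of the process
    The deterministic part decays as exp(-dt/tau); the driving-noise variance
    sigma_ss**2 * (1 - phi**2) keeps the steady-state variance constant.
    """
    if rng is None:
        rng = np.random.default_rng()
    phi = np.exp(-dt / tau)
    q = sigma_ss ** 2 * (1.0 - phi ** 2)
    return phi * x + rng.normal(0.0, np.sqrt(q))
```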

The filter includes the capability of reconfiguring the state and covariance after an impulsive maneuver or after change of the origin of the coordinate system (the target body towards which the SC flies is set as the origin in order to improve the navigation performances during encounters).

State Propagation

A simplified dynamic equation for the SC phase-space state is integrated with a fixed step-size RK-4 numerical scheme. This scheme provides enough accuracy for the time intervals between measurements while requiring a low number of acceleration function evaluations.

The simplified dynamics considers only the forces due to the spherical central-body gravitation, the most important third-body actions, the solar radiation pressure and the low thrust. The low-thrust segmentation required by the guidance algorithm imposes the same segmentation in the low-thrust arc propagation to cope with the discontinuities in the acceleration.

Figure 5 Multiple Time Integration image (bottom) compared with long integration image (top)
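A minimal sketch of the propagation step described above is given below (an illustrative reconstruction, not the ObNav code): a point-mass central body, a lumped solar-radiation-pressure acceleration scaled by the estimated coefficient, and a thrust acceleration scaled by the estimated scale factor, integrated with a classical fixed-step RK-4 scheme. Third-body terms are omitted for brevity.

```python
import numpy as np

MU_SUN = 1.32712440018e11  # km^3/s^2, heliocentric propagation assumed

def acceleration(r, srp_coeff, thrust_scale, thrust_acc):
    """Simplified dynamics: central gravity + radial SRP + scaled low thrust."""
    r_norm = np.linalg.norm(r)
    a_grav = -MU_SUN * r / r_norm ** 3
    a_srp = srp_coeff * r / r_norm ** 3      # ~1/r^2, along the Sun-SC line
    a_thr = thrust_scale * thrust_acc        # commanded thrust acceleration
    return a_grav + a_srp + a_thr

def rk4_step(r, v, dt, srp_coeff, thrust_scale, thrust_acc):
    """One fixed-step RK-4 integration step of the phase-space state (r, v)."""
    def f(state):
        rr, vv = state[:3], state[3:]
        return np.concatenate([vv, acceleration(rr, srp_coeff,
                                                 thrust_scale, thrust_acc)])
    y = np.concatenate([r, v])
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    y_next = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y_next[:3], y_next[3:]
```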

State Update (Measurement Equation)

The processed measurement is a linearised line of sight (LOS) from the SC to the center of mass (COM-LOS) or to the limb (limb-LOS) of the target. The measurement equations are different in each case. However, the same structure can be achieved under certain conditions, so that the interface between the IP and the navigation modules is simplified.

The filter contains an internal control aimed at rejecting bad measurements. A measurement is rejected when it differs from the predicted value by more than a configurable factor times the measurement covariance.
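This rejection logic is the usual innovation gating of a Kalman filter; a minimal sketch is shown below (illustrative: the threshold factor k and the names are assumptions, and the test is written against the innovation covariance via the Mahalanobis distance).

```python
import numpy as np

def accept_measurement(z, z_pred, H, P, R, k=3.0):
    """Innovation gating for a LOS measurement.

    z, z_pred : measured and predicted observation vectors
    H, P, R   : measurement Jacobian, state covariance, measurement covariance
    The innovation covariance is S = H P H^T + R; the measurement is accepted
    when the normalised innovation lies within k 'sigmas'.
    """
    nu = np.atleast_1d(z - z_pred)              # innovation
    S = H @ P @ H.T + R                         # innovation covariance
    d2 = float(nu @ np.linalg.solve(S, nu))     # squared Mahalanobis distance
    return d2 <= k ** 2
```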

The navigation phase (heliocentric or target-relative) is driven by the achieved performance. Switching between phases is not an autonomous task; the switching time is the result of on-ground analysis.

Simulation Results

Performance evaluation has been performed using both covariance and Monte Carlo methods, to assess the achievable performances and to validate the filter implementation.

Covariance analysis has been performed on the SMART-1 apogee-raising phase (ARP), which is later used to analyse the guidance algorithm. The imaging strategy reported for the DS-1 mission is employed, i.e. an image is taken before and after a thrust segment and the orbit determination is immediately executed.

The standard deviation of the acceleration process noise is assumed to be 5% of the thrust magnitude (140 mN); the measurement covariance is obtained assuming 0.5 pixel of error in each image with the following camera data: FOV = 0.5° and 512 pixels in the CCD matrix. Figure 6 shows a covariance simulation result for the Rosetta Earth fly-by. These simulations validate the filter behaviour but are not representative of the system-level performance, which will be dominated by the space/Earth transition uncertainty (atmosphere) and the S/C attitude determination error.

Figure 6 Filter performances processing Earth limb-LOS shifted ±30°

GUIDANCE

Two different types of maneuvers are envisaged:

• Electrical Deep Space Maneuvers (EDSM) are maneuvers that compensate deviations from the reference trajectory making use of the nominal low-thrust arc.

• Trajectory Corrective Maneuvers (TCM) are maneuvers, either chemical or electrical, added to the nominal DSM or low-thrust arcs.

Trajectory Corrective Maneuvers (TCM)

Applying linear theory, the required nominal correction to compensate the deviations between the estimated and nominal states is a linear function of the estimated state deviation. However, the non-linearities of the problem may induce a delivery error requiring a second iteration, considering the new initial state (with the addition of the computed maneuver).

No orbit determination of the target body is performed on-board, and only the deviation of the SC from the nominal trajectory can be considered in the guidance algorithm. Hence, only the deterministic-target case is considered.

Two types of TCM guidance are envisaged, depending on the type of encounter:

• Fixed-time guidance law, for swing-by (gravity assist). The objective is to find the nominal trim maneuver so that at a fixed final time the nominal position is reached.

• Variable-time guidance law, for fly-by (small bodies). The objective is to find the velocity correction to obtain a desired position w.r.t. the target body, disregarding the final time.

The new dispersion in the velocity has to be added to the covariance matrix at the TCM instant, so that the OD filter can update the covariance and the state during the propagation. Under generic assumptions on the thruster and neglecting terms of higher order, the mechanization error matrix can be assumed as a diagonal matrix in the SC frame.
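To make the fixed-time linear correction concrete, the sketch below computes the trim maneuver from the partitions of the state transition matrix (an illustrative formulation under linear theory; the transition matrix would in practice be obtained by numerically integrating the variational equations along the nominal trajectory).

```python
import numpy as np

def fixed_time_tcm(phi, dr, dv):
    """Trim maneuver nulling the final position deviation at a fixed time.

    phi : 6x6 state transition matrix from the maneuver epoch to the final time
    dr  : estimated position deviation from the nominal trajectory (3,)
    dv  : estimated velocity deviation from the nominal trajectory (3,)
    Linear theory gives  dr_f = Phi_rr dr + Phi_rv (dv + delta_v);
    imposing dr_f = 0 yields the required delta_v.
    """
    phi_rr, phi_rv = phi[:3, :3], phi[:3, 3:]
    return -np.linalg.solve(phi_rv, phi_rr @ dr) - dv
```

The variable-time (fly-by) law follows the same linear structure, with the targeted quantity replaced by the miss vector perpendicular to the approach asymptote.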

A Monte Carlo simulation (10 000 shots) is carried out, introducing an unbiased, uncorrelated Gaussian error in each component of the initial position and velocity. The considered value for the standard deviation of the position is σpos = 5000 km and for the velocity σvel = 5×10⁻³ km/s. A threshold of 300 km of delivery error is imposed to perform a new iteration. The considered scenario is the first TCM in the Rosetta 2nd Earth swing-by (30 days before the perigee passage). The final delivery error at the perigee passage, after application of the TCM, is presented in Figure 8.

The TCM application reduces the delivery error at the perigee by two orders of magnitude, from an initial error of O(10⁴) km to O(10²) km.

Figure 8 Rosetta 2nd swing-by perigee error after 'recursive' TCM application

Electrical Deep Space Maneuvers (EDSM)

For the EDSM several strategies arise:

• The implementation of the algorithm in autonomous on-board software seems to make re-optimization inadvisable as the nominal procedure.

• The generic applicability of the algorithm to very different missions prevents the use of dedicated guidance arcs.

• It seems that the inclusion of additional arcs before and after the nominal thrust arc is less fuel-efficient than the modification of the thrust profile and the switch-off time.

The DS1 mission successfully employed the latter guidance strategy to achieve its mission goals. Considering that, for the moment, DS1 is the only autonomous low-thrust mission tested in flight, it provides a solid basis for the study.

The requirements impose a maximum deviation in the thrust duration increase and in the thrust direction variation. In addition, the guidance execution time shall be limited in order to prevent the on-board GNC from exceeding the maximum assigned time.

With these specifications, the most suitable guidance strategy seems to be a weighted line search (WLSR) with a maximum number of guidance segments. The WLSR guarantees a new reference thrust law not moved away from the initial one and, in addition, gives a final state deviation closer to the given thresholds than the initial low-thrust law. The limited number of guidance segments assures that the computational time never exceeds a given maximum.

The algorithm computes a linear correction to the control vector (DS-1 algorithm) and then performs a line search in order to avoid divergence of the trajectory after the application of the correction. The process is iterative in order to smooth the non-linearities of the problem.

Figure 7 Right Ascension of corrected and nominal thrust profile (10° threshold)

After a converged guidance solution is found, the angular constraints are checked. If some of the modified segments violate the restriction (active constraints), the thrust direction is projected onto the boundary and a new optimization is performed, imposing that the active control variables always remain on the boundary (i.e. the angle between the corrected thrust direction and the nominal thrust orientation is the maximum allowed).

For the same SMART-1 ARP arc presented for the orbit determination, the effect of imposing the angular constraint on all the thrust segments is evaluated with a 10° threshold, introducing an initial unbiased Gaussian error with standard deviation σpos = 30 km for position and σvel = 1×10⁻⁴ km/s for velocity; the final error achieved is three orders of magnitude smaller.
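The projection of a corrected thrust direction back onto the angular constraint boundary can be sketched as follows (illustrative only; the names and the rotation construction are assumptions consistent with the description above).

```python
import numpy as np

def clamp_thrust_direction(corrected, nominal, max_angle_deg=10.0):
    """Limit the angle between the corrected and nominal thrust directions.

    If the corrected unit vector deviates from the nominal one by more than
    the allowed threshold, it is rotated back so that the deviation equals
    exactly the threshold (the 'active constraint' case).
    """
    c = corrected / np.linalg.norm(corrected)
    n = nominal / np.linalg.norm(nominal)
    max_angle = np.radians(max_angle_deg)
    angle = np.arccos(np.clip(np.dot(c, n), -1.0, 1.0))
    if angle <= max_angle:
        return c
    # Component of the corrected direction perpendicular to the nominal one
    perp = c - np.dot(c, n) * n
    perp /= np.linalg.norm(perp)
    return np.cos(max_angle) * n + np.sin(max_angle) * perp
```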

GROUND BASED IMAGE ACQUISITION AND PREPARATION

In order to perform a validation of ESA's Autonomous On-board Optical Navigation System for Interplanetary Missions, EADS Astrium carried out a ground-based observation campaign of faint objects at the Pic du Midi observatory (French Pyrenees), using a CCD camera and a 60 mm refractor. Ten different asteroids were observed, of various luminosities, at various distances, and in various sky areas: crowded areas as well as scattered ones.

For each asteroid, a sequence of 30 images was taken. On each image, optical defects and CCD-inherent artifacts were removed using an astronomical pre-processing technique. The stars and faint objects in each image were then extracted from the field of view and their magnitudes estimated. The catalog obtained in this way was cross-correlated with a reference star catalog, enabling the astrometric reduction of the images, that is to say the determination of the parameters defining the transformation from the pixel coordinates of the extracted stars to their absolute coordinates in the reference catalog. Finally, a localization experiment of the Pic du Midi, simulating a spacecraft, was carried out using a least-squares filter on the observations. The estimated accuracy, within the bounds specified for the interplanetary cruise phase, proved the reliability and the maturity of the concept being developed for the AutoNAV project.

IMAGING CAMPAIGN

The imaging campaign took place between May 13th and May 15th, 2003 at the Pic du Midi observatory, famous for the quality of its atmosphere, which better simulates deep space conditions. It is located in the French Pyrenees (alt: 2877 m, Lat: 42°56'12'' N, Long: 00°09'32'' E). We used a typical CCD camera similar to the one we will use on AutoNAV missions (see Figures 9 and 10).

The ten observed asteroids are represented in Figure 11 in their heliocentric configuration. The nearest, Cremona (UAI 486), is located at 0.99 AU from the Earth, whereas the furthest, Ino (UAI 173), is at 2.34 AU. Their mean distance is 1.49 AU from the geocenter. Their angular coordinates span an amplitude of 18 degrees in declination and 100 degrees in right ascension. To have enough lighted pixels, the exposure time of the camera is adapted to each asteroid and varies between 5 s and 20 s.

Name: KODAK KAF-0401E
Size (pixel²): 768 × 512
Pixel size (µm²): 9 × 9
FOV (degrees²): 0.56 × 0.37

Figure 10 Ground Camera characteristics

Figure 9 Refractor and CCD camera used for the observations

Figure 11 Heliocentric configurations of the observed asteroids

As we can see in Figure 12, the targeted asteroids are located in crowded sky zones (Arequipa) as well as in scattered ones (Harmonia). The raw images, like the ones shown, present both optical defects (mainly dust deposited on optical parts) and CCD artifacts (hot pixels, dark noise…). All of these defects must be corrected before any further manipulation.

Figure 12 Arequipa and Harmonia

PRE-PROCESSING

The ultimate objective of the images is to validate the on-board navigation processing on realistic images taken by the specified on-board navigation camera. To meet this objective, the following steps are required:

• Correction of the defects of the ground telescope and detector.

• Astrometric reduction, to provide the on-board algorithms with the a priori knowledge of the attitude that can be expected from a state-of-the-art star tracker.

• Introduction of camera noise and background as expected from the on-board camera.

To perform these corrections we apply a so-called astronomical pre-processing to the raw images. To characterize the optical defects we use a Flat image, that is to say an image of a uniformly lighted surface (typically a white surface) taken with the camera. To characterize the defects due to all the electronic components, we use an Offset image, obtained with a zero-second exposure time. This should be a completely dark image, but in fact it appears gray, because of the electronic components themselves. Finally, the use of a CCD detector leads to inherent CCD artifacts: "hot" pixels. These quantum effects increase with the temperature and with the exposure time. The Dark image, taken with an obturated refractor over a duration equal to the exposure time of the raw image, reveals these defects.

Such a pre-processing removes the dark and offset defects and normalizes the luminosity by dividing by the Flat image. Hence the relative luminosities between stars in the field of view are more likely to be realistic, and photometrically exploitable.
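This standard calibration chain (offset/bias subtraction, dark correction, flat-field normalization) can be sketched as follows; this is an illustrative implementation, not the campaign's actual tool, and it assumes master offset, dark and flat frames have already been built.

```python
import numpy as np

def calibrate(raw, offset, dark, flat):
    """Classical astronomical pre-processing of a raw CCD frame.

    raw    : raw science frame (same exposure time as the dark frame)
    offset : zero-exposure (bias) frame capturing the electronic pedestal
    dark   : dark frame taken with the refractor obturated, same exposure time
    flat   : image of a uniformly lighted surface (pixel response, vignetting)
    """
    dark_current = dark - offset                    # thermal signal only
    flat_signal = flat - offset
    flat_norm = flat_signal / np.mean(flat_signal)  # unit-mean flat field
    return (raw - offset - dark_current) / flat_norm
```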


DETECTION OF LIGHT SOURCES

Once this pre-processing is established, the exploitation of the images themselves becomes possible. It consists firstly in extracting the (x, y) positions, in pixels, of the light sources in the image. This extraction is based on four main concepts (a minimal sketch combining them follows the list):

• Only pixels whose luminosity is 2 sigma above the background level are considered as light sources

• Light sources must extend over an area of more than 5 pixels to be extracted, in order to reject remaining CCD artifacts (cosmic rays, for example).

• A Gaussian profile is assumed for the light-source luminosity and fitted to the observed profile. The center of the light source is taken at the maximum of the Gaussian profile. The coordinates of this center are considered as the point measurement of the light source location.

• Integrating the pixel values over the whole light-source extension, and removing the mean local background value, provides an estimate of the light source luminosity.
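The sketch below combines these four steps (an illustrative reconstruction rather than the campaign's code: it uses simple connected-component labelling and takes the brightest pixel of each source as a stand-in for the fitted Gaussian maximum).

```python
import numpy as np
from scipy import ndimage

def extract_sources(image, k_sigma=2.0, min_area=5):
    """Detect light sources: threshold, size filter, peak position, flux."""
    background = np.median(image)
    noise = np.std(image)
    mask = image > background + k_sigma * noise          # 2-sigma detection
    labels, n = ndimage.label(mask)                       # connected components
    sources = []
    for i in range(1, n + 1):
        pixels = np.argwhere(labels == i)
        if len(pixels) <= min_area:                       # reject artifacts
            continue
        values = image[labels == i]
        peak = pixels[np.argmax(values)]                  # stand-in for Gaussian max
        flux = np.sum(values - background)                # background-subtracted flux
        sources.append((peak[1], peak[0], flux))          # (x, y, flux)
    return sources
```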

Figure 13 illustrates an example of extraction on the Amalthea image (43 extracted sources). The number of extracted sources ranges from 28 in the Harmonia image to 530 in Arequipa.

For each image, a catalog was written, containing the (x,y) position in pixels and the estimated flux.

ASTROMETRICAL REDUCTION

To identify these sources and determine their absolute location in the geocentric equatorial ICRS frame (J2000), a cross-correlation was performed between these "extracted" catalogs and a reference one. We used a combined TYCHO-2 and USNO-B1.0 reference catalog, to cover the magnitude domain reached by the camera (around magnitude 13). This catalog was in fact pre-processed for the cross-correlation: for each asteroid, only a "box" of 0.6 square degrees around its expected location was considered in the reference catalog, and inside it, only stars with a magnitude below 14 were taken into account.

To perform the cross-correlation between an image catalog and the corresponding reference one, triangles of points were formed in each catalog and compared. The best-matching triangles are considered as similar, following the algorithm described by Valdes et al. [1]. The number of identified stars is between 19 (Victoria) and 393 (Arequipa).

Having identified the extracted stars, we model the relationship between the ICRS angular coordinates in the reference frame and the pixel coordinates by a rotation, a tangent projection and a camera intrinsic-parameter calibration. The camera calibration matrix in fact absorbs all the external parameters excluded from the model for simplicity, in particular atmospheric refraction, which is significant with respect to the target accuracy. In the ObNav validation experiment using these images, a specific camera calibration matrix will be associated with each observation, whereas in flight a single calibration matrix would be used for all the measurements.
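The rotation-plus-tangent-projection model amounts to the classical astrometric plate solution: catalogue positions are projected gnomonically around the tangent point and a linear (plate-constant) transformation to pixel coordinates is fitted by least squares. The sketch below is illustrative only; the study's actual calibration matrix additionally absorbs refraction and other external effects.

```python
import numpy as np

def gnomonic(ra, dec, ra0, dec0):
    """Project (ra, dec) [rad] onto standard coordinates about (ra0, dec0)."""
    cos_c = (np.sin(dec0) * np.sin(dec) +
             np.cos(dec0) * np.cos(dec) * np.cos(ra - ra0))
    xi = np.cos(dec) * np.sin(ra - ra0) / cos_c
    eta = (np.cos(dec0) * np.sin(dec) -
           np.sin(dec0) * np.cos(dec) * np.cos(ra - ra0)) / cos_c
    return xi, eta

def fit_plate_constants(pixels, radec, tangent_point):
    """Least-squares affine mapping from standard coordinates to pixels.

    pixels : (N, 2) measured star positions (x, y) in pixels
    radec  : (N, 2) matched catalogue positions (ra, dec) in radians
    Returns the 2x3 plate-constant matrix M such that [x, y] ~ M @ [xi, eta, 1].
    """
    xi, eta = gnomonic(radec[:, 0], radec[:, 1], *tangent_point)
    A = np.column_stack([xi, eta, np.ones(len(xi))])
    M, *_ = np.linalg.lstsq(A, pixels, rcond=None)
    return M.T   # rotation/scale/skew plus offset
```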

For the 300 images analysed, the accuracy of this model for the matched stars is between 0.14 pixel (Flora) and 0.28 pixel (Amalthea), with a mean value of 0.24 pixel.

Using the ASTORB database and the corresponding ephemerides [2], one can also determine the pixel accuracy for the localization of the 10 observed asteroids. The mean accuracy is 0.15 pixel, except for one case: Ino, for which the accuracy is 4 pixels. This inaccuracy for the asteroid Ino can be explained through a deeper analysis of this sequence of images. A zoom indeed reveals that Ino is superimposed on a star, which creates confusion during the extraction process. This case will therefore be excluded from the next analyses. For the other ones, we can convert the found angular accuracy into a distance by multiplying it by the known distance of the asteroids.

Figure 13 Example of extraction for Amalthea


The inherent accuracy of the ephemeris prediction must also be taken into account, since the maximal residual on the asteroid orbit determination varies between 0.02 and 0.1 arcsec. Results are shown in Figure 14. The estimated error varies between 115 km (Flora) and 1200 km (Hertha). The difference is mainly due to the Earth-to-asteroid distance. For the set of 9 asteroids, the mean error is 500 km for a mean distance of 1.49 AU.

PIC DU MIDI EXPERIMENT

In order to evaluate the navigation accuracy that can be achieved using these images, a least-squares filtering of the measurements has been performed to retrieve the Pic du Midi observatory position from the observations.

Taking into account the 9 remaining asteroids, and performing a least-squares filtering on the 9 measurements, one can estimate the typical error ellipsoid of the localization in the ICRS reference frame. We find a principal direction for the ellipsoid defined in our reference ICRS frame by:

dX = 91 km, dY = -120 km, dZ = 231 km

This vector, with a norm of 275 km (better than the mean value of 500 km previously estimated) and oriented along the direction n = (0.33, -0.435, 0.836), presents an angle of 33° with the inertial Z axis. The principal direction of the error is thus oriented, at first sight, in a direction orthogonal to the mean plane of the asteroids (approximately the ecliptic plane).

This evaluation has been performed with a better signal-to-noise ratio than the one expected on board, and can be considered as the target for the ObNav validation, which will use images with a degraded signal-to-noise ratio and the MTI image processing algorithms.

CONCLUSION

Starting from pictures taken with a CCD camera at the Pic du Midi, we were able to extract and match, under simplified hypotheses, the light sources in the images. The accuracy reached was 0.2 pixel. Preliminary tests on these observations have shown that an accuracy better than 300 km can be obtained on the localization of the observer. This work was conducted in preparation for the future validation campaign that will start in autumn 2003 on the same images, but with the on-board Image Processing and Navigation algorithms. The predicted performances of the autonomous navigation system presented in this paper have been confirmed and will be validated with the on-board software prototype before the end of 2004.

REFERENCES

[1] Valdes et al., "FOCAS Automatic Catalog Matching Algorithms", Publications of the Astronomical Society of the Pacific, vol. 107, p. 1119, 1995.

[2] ASTORB database, Lowell Observatory, ftp://ftp.lowell.edu/pub/elgb/astorb.html

[3] B. Polle, B. Frapard, O. Saint-Pé, V. Crombez, S. Mancuso, F. Ankersen, J. Fertig, "Autonomous Navigation for Interplanetary Missions", 26th Annual AAS Guidance and Control Conference.

[4] J. Berthier, "Définitions relatives aux éphémérides de position des corps célestes" (definitions relating to the position ephemerides of celestial bodies), Notes Scientifiques et Techniques du Bureau des Longitudes.

[5] J. E. Riedel, S. Bhaskaran, "Autonomous Optical Navigation DS1 Technology Validation Report", Jet Propulsion Laboratory.

[6] J. Maki, "Composite Image Subtraction: A Technique for Calibrating MICAS Images", JPL, Section 388, 21 December 1998, http://www-mipl.jpl.nasa.gov/~jnm/ds1/micas.htm

Figure 14 Localization error in km for Amalthea, Hertha, Harmonia, Belisana, Victoria, Cremona, Elisabetha, Flora, Arequipa