
Improving Robustness and Precision in Mobile Robot Localization by Using Laser Range Finding and Monocular Vision

Kai O. Arras, Nicola Tomatis

Autonomous System Lab, Swiss Federal Institute of Technology Lausanne (EPFL)

CH–1015 Lausanne, [email protected], [email protected]

Abstract

This paper discusses mobile robot localization by means of geometric features from a laser range finder and a CCD camera. The features are line segments from the laser scanner and vertical edges from the camera. Emphasis is put on sensor models with a strong physical basis. For both sensors, uncertainties in the calibration and measurement process are adequately modeled and propagated through the feature extractors. This yields observations with their first order covariance estimates which are passed to an extended Kalman filter for fusion and position estimation.

Experiments on a real platform show that, as opposed to the use of the laser range finder only, the multisensor setup allows the uncertainty to stay bounded in difficult localization situations like long corridors and contributes to an important reduction of uncertainty, particularly in the orientation. The experiments further demonstrate the applicability of such a multisensor localization system in real-time on a fully autonomous robot.

1. Introduction

Localization in a known, unmodified environment belongs to the basic skills of a mobile robot. In many potential service applications of mobile systems, the vehicle is operating in a structured or semi-structured surrounding. This property can be exploited by using these structures as frequently and reliably recognizable landmarks for navigation. Topological, metric or hybrid navigation schemes make use of different types of environment features on different levels of perceptual abstraction.

Raw data have the advantage of being as general as possible. But with most sensors they yield credible information only when processed in large amounts, and they are of low informative value when looking for concise scene descriptions.

Navigation based on geometric features allows for compact and precise environment models. Maps of this type are furthermore directly extensible with feature information from different sensors and are thus a good choice for multisensor navigation. This approach relies, however, on the existence of features, which represents a limitation of environment types.

This is viewed as a loss of robustness which can be diminished by simultaneously employing geometric features from different sensors with complementary properties. In this work we consider navigation by means of line segments extracted from 1D range data of a 360° laser scanner and vertical edges extracted from images of an on-board CCD camera.

Precise localization is important in service tasks where load stations might demand accurate docking maneuvers. Mail delivery is such an example [2]. When the task includes operation in crowded environments where a moving vehicle is supposed to suggest reliability and predictability, precise and thus repeatable navigation helps evoke this subjective impression.

The use of the Kalman filter for localization by means of line segments from range data is not new [9][11][2][7]. Vertical edges have been equally employed [8], and propositions for specific matching strategies are available in this context [12]. In [10], the same features were applied for approaching the relocation problem. The multisensor setup was used to validate observations of both sensors before accepting them for localization. In [13], a similar setup was used with a 3D laser sensor simultaneously delivering range and intensity images of the scene in front of the robot. Line segments and vertical edges were also employed in a recent work [14], where the localization precision of laser, monocular and trinocular vision has been separately examined and compared to ground truth measurements.


2. Sensor Modeling

It is attempted to derive uncertainty models of the sensors employed with a strong physical basis. Strictly speaking, it is necessary to trace each source of uncertainty in the measurement process and, with knowledge of the exact measurement principle, propagate it through the sensor electronics up to the raw measurement the operator will see. This allows for a rigorous statistical treatment with noise models of high fidelity, which is of great importance for all subsequent stages like feature extraction and matching.

2.1 Laser Range Finder

In all our experiments we used the commercially available Acuity AccuRange 4000 LR. The Acuity sensor is a compromise between building a laser range finder on one's own and devices like the scanners from SICK (e.g. PLS100, LMS200). The latter two deliver both range and angle information and come with standard interfaces. Besides the protocol driver, which has to be written, they can be used practically plug-and-play. The disadvantage is that this black-box character inhibits the above-mentioned analysis of noise sources. The AccuRange 4000 provides range, amplitude and sensor temperature information, where amplitude is the signal strength of the reflected laser beam. They are available as analogue signals.

Relationships for range and angle variance are sought. The accuracy limits of encoders are usually low with respect to the beam spot size. Angular variability is therefore neglected. For range accuracy there are several factors which influence the extent of noise:

• The amplitude of the returned signal, which is available as a measurement.
• Drift and fluctuations in the sensor circuitry. At the configured sampling frequency (1 kHz) this is predominant over thermal noise of the detection photodiode and resolution artifacts of the internal timers.
• Noise injected by the AD conversion electronics.

In [1] a phase shift measurement principle has been examined, yielding a range variance to amplitude relationship of the form σ_ρ² = a / V_r + b, where σ_ρ² is the range variance and V_r the measured amplitude. After identification, an inverse, slightly nonlinear function was found. For identification in our case, an experiment was performed with a stationary target at about 1 meter distance. The returned signal strength was varied systematically with a Kodak gray scale control patch where 10'000 readings were taken at each of the twenty gray levels.

Opposed to the model in [1], we can observe an abrupt rise of noise below a certain amplitude (fig. 1). This reduces our model for range variance to a constant value, independent of target distance and amplitude.

Although this analysis led to such a simple result, it permits rejection of false or very uncertain readings by means of the amplitude measurements. This is very important since in many practical cases the sensor exhibits a strong dependency on surface properties like color and roughness. Moreover, the Acuity sensor is often and reproducibly subject to outliers. When the laser beam hits no target at all, or at normally occurring range discontinuities, it returns an arbitrary range value, typically accompanied by a low signal strength.
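As an illustration, the resulting sensor model is little more than an amplitude gate plus a constant variance. The sketch below is not the original implementation; the cutoff and standard deviation are hypothetical placeholder values, not those identified in the experiment.

```python
import numpy as np

# Placeholder values; the actual constant variance and amplitude cutoff would
# be identified from the gray-scale-patch experiment described above.
SIGMA_RHO = 0.01       # constant range standard deviation [m] (hypothetical)
MIN_AMPLITUDE = -6.0   # amplitude cutoff [V] (hypothetical)

def filter_scan(ranges, amplitudes):
    """Reject low-amplitude (unreliable) readings and attach a constant
    range variance to the remaining ones."""
    ranges = np.asarray(ranges, dtype=float)
    amplitudes = np.asarray(amplitudes, dtype=float)
    valid = amplitudes > MIN_AMPLITUDE        # drop likely outliers
    variances = np.full(int(valid.sum()), SIGMA_RHO ** 2)
    return ranges[valid], variances

# Example: the third reading has a weak return and is discarded.
rho, var = filter_scan([1.02, 0.98, 7.31], [-3.2, -3.5, -8.7])
```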

2.2 CCD Camera

The vision system consists of a Pulnix TM-9701 full-frame, gray-scale, EIA (640 x 480) camera with an effective opening angle of 54°, which sends a standard RS-170 signal to a Bt848-based frame grabber. No dedicated DSPs are used in our setup; all image processing is done directly by the CPU of the VME card.

A vision task which is intended to extract accurate geometric information from a scene requires a calibrated vision system. For this, a variety of camera parameters including its position and orientation (extrinsic parameters), image center, scale factor, lens focal length (intrinsic parameters) and distortion parameters are to be determined. To calculate a coherent set of intrinsic, extrinsic and distortion parameters, the calibration technique [15] has been combined with a priori knowledge of features from a test field.

Figure 1: Range standard deviation (y-axis, in meters) against measured amplitude (x-axis, in Volts). 10'000 readings; measurements were done with a Kodak gray scale control patch.


The procedure is to extract and fit vertical and horizontal lines from the test field and to determine the distortion in the x- and y-direction. By knowing the 3D position of these lines in space, the intrinsic, extrinsic and distortion parameters can be determined simultaneously.

Due to the particularity of our application, some simplifications of the final calibration model can be made. The camera is mounted horizontally on the robot and the center of projection of the imaging device is at (0,0) in robot coordinates with orientation 0°. Since only vertical edges are extracted, calibration is needed only in the horizontal direction (x-axis). With that, only a few parameters remain to be modeled: the focal length C, the image center (c_x, c_y), and the distortion parameters k_1, k_2. This yields equation (1) for parameter definition

C · x_w / z_w = x' + x_c (k_1 r² + k_2 r⁴),    (1)

where x' refers to the distorted location, x_c = x' − c_x, y_c = y' − c_y, r² = x_c² + y_c², and x_w, z_w are measures of the test field in camera coordinates. The angle ϕ of a feature relative to the robot is finally calculated using equations (2) and (3).

x = x' + x_c (k_1 r² + k_2 r⁴)    (2)

ϕ = atan(x / C)    (3)

The uncertainties from the test field geometry and those caused by noise in the camera electronics and frame grabber AD conversion are propagated through the camera calibration procedure onto the level of camera parameters, yielding a 4x4 parameter covariance matrix.
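For illustration, the following sketch evaluates equations (2) and (3) for a raw pixel coordinate and propagates a pixel-position variance onto the bearing to first order (here with a numerical derivative). All parameter values are invented for the example, and the full 4x4 parameter covariance propagation is omitted.

```python
import numpy as np

def bearing_from_pixel(x_prime, y_prime, C, cx, cy, k1, k2):
    """Undistort the x-coordinate (eq. 2) and convert it to a bearing (eq. 3)."""
    xc, yc = x_prime - cx, y_prime - cy
    r2 = xc**2 + yc**2
    x = x_prime + xc * (k1 * r2 + k2 * r2**2)   # eq. (2)
    return np.arctan2(x, C)                     # eq. (3): phi = atan(x / C)

def bearing_variance(x_prime, y_prime, C, cx, cy, k1, k2, var_x=1.0, eps=1e-3):
    """First-order propagation of a pixel-position variance onto phi,
    using a numerical derivative for brevity."""
    f = lambda xp: bearing_from_pixel(xp, y_prime, C, cx, cy, k1, k2)
    dphi_dx = (f(x_prime + eps) - f(x_prime - eps)) / (2 * eps)
    return dphi_dx**2 * var_x

# Invented calibration and pixel values, for illustration only.
C, cx, cy, k1, k2 = 520.0, 320.0, 240.0, 1e-7, 1e-13
phi = bearing_from_pixel(400.0, 300.0, C, cx, cy, k1, k2)
var_phi = bearing_variance(400.0, 300.0, C, cx, cy, k1, k2, var_x=0.25)
```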

3. Feature Extraction

Geometric environment features can describe structured environments at least partially in a compact and exact way. Horizontal or vertical line segments are of high interest due to the frequent occurrence of line-like structures in man-made environments and the simplicity of their extraction. More specifically, the problem of fitting a model to raw data in the least squares sense has closed-form solutions if the model is a line. This is even the case when geometrically meaningful errors are minimized, e.g. the perpendicular distances from the points to the line. Already slightly more complex models like circles do not have this property anymore.

3.1 Laser Range Finder

The extraction method for line segments from 1D range data has been described in [3]. A short outline of the algorithm shall be given.

The method delivers lines and segments with their first order covariance estimate, using polar coordinates. The line model is

ρ cos(ϕ − α) − r = 0,    (4)

where (ρ, ϕ) is the raw measurement and (α, r) are the model parameters.
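A minimal weighted fit of this model can be written in closed form. The sketch below uses the standard total-least-squares solution for the line x cos α + y sin α = r, which is equivalent to (4) with x = ρ cos ϕ and y = ρ sin ϕ; it is a simplified stand-in for the exact regression equations and covariance computation derived in [3].

```python
import numpy as np

def fit_line_polar(rho, phi, weights=None):
    """Weighted line fit minimizing perpendicular point-to-line distances.
    Returns (alpha, r) of the model rho*cos(phi - alpha) - r = 0."""
    x, y = rho * np.cos(phi), rho * np.sin(phi)
    w = np.ones_like(x) if weights is None else np.asarray(weights, dtype=float)
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    sxx = np.sum(w * (x - xm) ** 2)
    syy = np.sum(w * (y - ym) ** 2)
    sxy = np.sum(w * (x - xm) * (y - ym))
    alpha = 0.5 * np.arctan2(-2.0 * sxy, syy - sxx)
    r = xm * np.cos(alpha) + ym * np.sin(alpha)
    if r < 0:                         # keep r non-negative, flip the normal
        r, alpha = -r, alpha + np.pi
    return alpha, r

# Example: noisy polar readings of a wall roughly 2 m in front of the sensor.
phi = np.linspace(-0.4, 0.4, 50)
rho = 2.0 / np.cos(phi) + np.random.normal(0.0, 0.01, phi.size)
alpha, r = fit_line_polar(rho, phi)   # expect alpha near 0 and r near 2.0
```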

Segmentation is done by a relative model fidelity measure of adjacent groups of points. If these groups consist of model inliers, they constitute a compact cluster in model space, since the parameter vectors of their previously fitted models are similar. Segments are found by comparing this criterion against a threshold. A hierarchical agglomerative clustering algorithm with a Mahalanobis distance matrix is then applied. It merges adjacent segments until their distance in model space is greater than a threshold.

The constraint of neighborhood is removed as soon as the threshold has been exceeded, so as to fuse non-adjacent segments as well. The clustering terminates when the same threshold is reached again. This allows for particularly precise re-estimations of spatially extended structures, like for example a long wall where objects, doors or other discontinuities lead to multiple small segments belonging to this wall.
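The merging criterion can be illustrated by a small helper that computes the Mahalanobis distance between two (α, r) estimates in model space and compares it against a threshold. This is only a sketch of the test, not the full agglomerative clustering of [3], and the threshold value is an arbitrary placeholder.

```python
import numpy as np

def mahalanobis_model_space(p1, C1, p2, C2):
    """Distance between two line estimates p = (alpha, r) in model space."""
    d = np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)
    d[0] = (d[0] + np.pi) % (2 * np.pi) - np.pi   # wrap the angle difference
    S = np.asarray(C1) + np.asarray(C2)
    return float(d @ np.linalg.solve(S, d))

def should_merge(p1, C1, p2, C2, threshold=6.0):
    """Fuse two segment estimates if their model-space distance is below a
    threshold (the value 6.0 is an arbitrary placeholder)."""
    return mahalanobis_model_space(p1, C1, p2, C2) < threshold

# Two nearly collinear wall segments with small parameter covariances.
C = np.diag([1e-4, 1e-4])
print(should_merge((0.02, 2.01), C, (0.01, 1.99), C))   # -> True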

For the line fitting, the nonlinear regression equations have been explicitly derived for polar coordinates, minimizing the weighted perpendicular error from the points to the line. Opposed to a Kalman filter, more general error scenarios of possibly correlated measurements can be taken into account.

Figure 2: CCD image of the corridor where the experiments have been carried out (step 5 of the trajectory). The image is compensated for radial distortion.


An advantage of this method is its generality. This follows from the fact that all decisions on segmentation and merging are taken in the model space, independently of the spatial appearance of the model. Only the fitting step is inherently model specific and has to provide the parameters' first and second moments.

In [7] the segmentation method first detects homogeneous regions which are basically marked by unexpected range discontinuities. As for the detection of edges in [1], this is done with a Kalman filter. Afterwards, segments are found by a widely used recursive algorithm which applies a distance criterion from the points to their current homogeneous region. If the maximal distance is above a threshold, the segment is split into two separate regions, restarting the process for them. The line parameter estimation is finally done with an information filter. As in [3], consecutive segments are merged and re-estimated if their moments exhibit sufficient similarity. In [16] range discontinuities are equally detected with a Kalman filter. They serve directly as segment end points.

3.2 CCD Camera

If the camera is mounted horizontally on the vehicle, vertical structures have the advantage of being view-invariant, as opposed to horizontal ones. Furthermore, they require simple image processing, which is important for a real-time implementation under conditions of moderate computing power.

For several reasons, only the lower half of the image is taken: better illumination conditions (less direct light from windows and lamps), the availability of feature information corresponding to that from the laser range finder, and the more frequent occurrence of reliable vertical edges.

The extraction steps can be summarized as follows:

• Vertical edge enhancement: Edge extraction using a specialized Sobel filter which approximates the image gradient in the x-direction.
• Non-maxima suppression with dynamic thresholding: The most relevant edge pixels (maximal gradient) are extracted and thinned by using a standard method.
• Edge image calibration: The camera model is applied to the edge image using the previously determined parameters. The image is calibrated only horizontally.
• Line fitting: Columns with a predefined number of edge pixels are labelled as vertical edges. Line fitting reduces to a 1D problem. Sub-pixel precision is achieved by calculating the weighted mean of the non-discrete edge pixel positions after image calibration.

This line extraction method is essentially a special case of the Hough transform, where the model is one-dimensional since the direction is kept constant.
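A compact sketch of such a column-based extractor is given below. It follows the listed steps in simplified form (a central-difference gradient instead of the Sobel mask, a fixed instead of a dynamic threshold, no calibration step), and all numeric thresholds are made-up placeholders.

```python
import numpy as np

def extract_vertical_edges(image, grad_thresh=50.0, min_edge_pixels=20):
    """Return approximate sub-pixel x-positions of vertical edges in a
    gray-scale image. Threshold values are illustrative placeholders."""
    img = np.asarray(image, dtype=float)
    # Horizontal gradient (central differences stand in for the Sobel mask).
    gx = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    strong = np.abs(gx) > grad_thresh              # crude fixed threshold
    counts = strong.sum(axis=0)                    # edge pixels per column
    weight = (np.abs(gx) * strong).sum(axis=0)     # summed gradient magnitude
    edges, col = [], 0
    while col < counts.size:
        if counts[col] >= min_edge_pixels:
            # Group adjacent candidate columns and take the weighted mean of
            # their indices as a simple sub-pixel estimate of the edge position.
            group = [col]
            while col + 1 < counts.size and counts[col + 1] >= min_edge_pixels:
                col += 1
                group.append(col)
            g = np.array(group)
            edges.append(float(np.average(g, weights=weight[g])))
        col += 1
    return edges
```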

In each abovementioned step, the uncertainty of the calibration parameters and a constant uncertainty in the x-position of the input image are propagated. The result is a set of vertical edges described by their first two moments (ϕ, σ_ϕ²).

4. EKF Localization

Under the assumption of independent errors, the estimation framework of a Kalman filter can be extended with information from additional sensors in a straightforward way.

Figure 3: Extraction of vertical edges. (a): uncalibrated half image, (b): edge pixels after edge enhancement, non-maxima suppression and calibration, (c): resulting vertical edges.


Since this paper does not depart from the usual use of extended Kalman filtering and error propagation based on first-order Taylor series expansion, most mathematical details are omitted. Only aspects which are particular are presented. Please refer to [2][3][5] or [11] for more profound treatments.

The prediction step uses an a priori map of the environment which has been measured by hand. Each feature p_i in the map (i.e. lines and vertical edges) received constant values for its positional uncertainty, held by the covariance matrix C_p^W[i], where the superscript W denotes the world frame. Together with the predicted robot pose uncertainty P(k+1|k), it is propagated into the according sensor frame for prediction. Additionally, the covariance matrix C_R^S accounts for uncertain robot-to-sensor frame transformations. The innovation covariance matrix of prediction z_i and observation z_j at time index k+1 is then

S_ij(k+1) = ∇h_x^[i] P(k+1|k) ∇h_x^[i]T + ∇h_p^[i] C_p^W[i] ∇h_p^[i]T + ∇h_s^[i] C_R^S ∇h_s^[i]T + R_j(k+1),    (5)

where R_j(k+1) is the first order covariance estimate of observation z_j, and ∇h_x^[i], ∇h_p^[i], ∇h_s^[i] are the Jacobians of the nonlinear measurement model h(.) with respect to the uncertain vectors x, p, s, relating the system state x to the measurements z_i for prediction.
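In code, equation (5) is a plain sum of projected covariances. The sketch below assembles it for one prediction/observation pair with Jacobians supplied by the caller; the numeric example values are invented and the feature-specific measurement model itself is not shown.

```python
import numpy as np

def innovation_covariance(H_x, P_pred, H_p, C_p_world, H_s, C_rs, R_obs):
    """Equation (5): S = Hx P Hx' + Hp Cp Hp' + Hs Crs Hs' + R."""
    return (H_x @ P_pred @ H_x.T
            + H_p @ C_p_world @ H_p.T
            + H_s @ C_rs @ H_s.T
            + R_obs)

# Example for a line feature observed as (alpha, r); all values are invented.
H_x = np.array([[0.0, 0.0, -1.0],      # Jacobian w.r.t. the robot pose (x, y, theta)
                [-1.0, 0.0, 0.0]])
P_pred = np.diag([0.02, 0.02, 0.005])  # predicted pose covariance
H_p, C_p_world = np.eye(2), np.diag([1e-4, 1e-4])   # map feature uncertainty
H_s, C_rs = np.eye(2), np.diag([7e-4, 0.0])         # robot-to-sensor frame uncertainty
R_obs = np.diag([2e-4, 1e-4])          # observation covariance from the extractor
S = innovation_covariance(H_x, P_pred, H_p, C_p_world, H_s, C_rs, R_obs)
```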

For matching, an iterative strategy has been implemented where each step includes (i) matching of the current best pairing, (ii) estimation, and (iii) re-prediction of features not associated so far. As in [13], line segments are integrated first since they are mutually more distinct, leading less often to ambiguous matching situations.

The current best match is always searched among pairs of predictions z_i and observations z_j which satisfy the validation test

(z_j − z_i) S_ij⁻¹ (z_j − z_i)^T ≤ χ²_α,n,    (6)

where χ²_α,n is a threshold reflecting a probability level α chosen from a chi-square distribution with n degrees of freedom. Matching observed segments to predicted lines from the map is done in the (α, r) model space, therefore n = 2. With vertical edges n = 1, since the ϕ model space is one-dimensional.
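The validation test (6) reduces to a chi-square gate on the Mahalanobis distance of the innovation. A minimal sketch, using scipy only to look up the threshold for a chosen probability level:

```python
import numpy as np
from scipy.stats import chi2

def gate(z_obs, z_pred, S, prob=0.95):
    """Accept the pairing if the innovation passes the chi-square test (6)."""
    nu = np.asarray(z_obs, dtype=float) - np.asarray(z_pred, dtype=float)
    d2 = float(nu @ np.linalg.solve(S, nu))
    n = nu.size                 # n = 2 for (alpha, r) lines, n = 1 for phi edges
    return d2 <= chi2.ppf(prob, df=n), d2

# Example with a 2x2 innovation covariance for a line pairing (values invented).
ok, d2 = gate(z_obs=[0.11, 2.03], z_pred=[0.10, 2.00], S=np.diag([0.02, 0.01]))
```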

The criterion of pairing quality is different for segments and vertical edges. 'Best' for line segments means the smallest observational uncertainty among the candidates satisfying (6) – not the smallest Mahalanobis distance. This renders the matching more robust against small spurious line segments which are occasionally extracted from the groups of outliers produced by our sensor (chapter 2).

For several reasons, vertical edges are much more difficult to associate [13][14]. For example, multiple, closely extracted vertical edges (see fig. 3) are often confronted with large validation regions around the predictions. Accordingly, 'best' for vertical edges means a unique match candidate with the smallest Mahalanobis distance. When there is no unique pairing anymore, candidates with multiple observations in the validation region are accepted and chosen according to the smallest Mahalanobis distance of their closest observation.

5. Implementation and Experiments

5.1 The Robot Pygmalion

Our experimental platform is the robot Pygmalion, which has recently been built in our lab (fig. 4). Its design principles are oriented towards an application as service or personal robot. Long-term autonomy, safety, extensibility, and friendly appearance were the main objectives for the design. With its dimensions of about 45x45x70 cm and its weight of 55 kg, it is of moderate size and, compared to many robots in its performance class, poses less danger. The system runs the deadline-driven, hard real-time operating system XOberon [6].

5.2 Experimental Results

Long corridors notoriously recur in mobile robotics. They are, however, an excellent benchmark for the localization system of a service robot, since they occur often in realistic application scenarios and contain the difficulty of staying localized in the direction of travel.


Figure 4: Pygmalion, the robot which was used in the experiments. It is a VME-based system with a six-axis robot controller. The processor board currently carries a PowerPC at 100 MHz. Besides wheel encoders and bumpers, the sensory system includes the laser range finder and the CCD camera discussed in the second chapter.


In order to demonstrate the performance of our multisensor localization system, we chose the corridor environment shown in figs. 2, 7 and 8 as benchmark. Two experiments have been conducted in which the robot drove an overall distance of more than 1 km. In both experiments the vehicle followed the trajectory depicted in figs. 7 and 8, with a length of about 78 m.

The robot moved autonomously in a stop-and-go mode, driven by a position controller for non-holonomic vehicles [4]. A graph with nodes at characteristic locations was built which allows for global planning in the map. No local navigation strategy accounting for unmodeled objects was active. Since the angle of the laser range finder has to be calibrated at each system startup by measuring the four vertical profiles (see fig. 4), there is an angular uncertainty in the robot-to-sensor frame transformation. This is modeled by setting C_R^Sl = σ_θs², with σ_θs reflecting 1.5° in equation (5). For the vision system, C_R^Sv is set to 0, since this uncertain transformation is already captured by the camera calibration and its uncertain parameters (chapter 2.2).

From the figures it can be seen that the corridor is actually well-conditioned in the sense that many structures exist which can be used for updates in the direction of travel. In the first experiment all these structures have been removed from the map. Only in the two labs and at the upper corridor end was the map left unchanged. In figs. 7 and 8, modeled structures are shown in black, unmodeled ones in gray. The intention is to demonstrate the need for additional environment information if, due to a difficult working area, information of one type is not sufficient for accomplishing the navigation task.

Figure 7 shows one of three runs which were performed with the laser range finder only. As was to be expected during navigation in the corridor, uncertainty grew without bound in the direction of travel. Inaccurate and uncertain position estimates regularly led to false matches and incorrect pose updates in front of the lower lab.

In the second run of the first experiment (fig. 8, one of three runs shown), the vision system was able to improve precision and reduce uncertainty to an uncritical extent. This is illustrated in figs. 5 and 6, where for one of the corridor situations (step 14), predictions and extraction results are displayed in the robot frame before estimation. The three matched vertical edges allow for a full update. The task could always be finished. The precision at the endpoint was typically in the sub-centimeter range, showing a slight temperature dependency of the Acuity sensor.

In the second experiment, the contribution of monocular vision to the reduction of estimation uncertainty has been examined. The same test path was travelled five times with the laser range finder only and five times with both sensors. The map was complete, i.e. no structures had been removed. The resulting averaged error bounds are shown in fig. 9. In fig. 10 the mean numbers of predictions, observations and matchings at the 47 steps of the test path are presented, and table 1 briefly quantifies the results.

The resulting error bounds in figure 9 and their overall means in table 1 show that, compared to the laser-only case, the multisensor system particularly contributes to the reduction of uncertainty in the vehicle orientation. This is the case although the number of matched vertical edges is relatively low due to their infrequent occurrence in our map.

Figure 5: Predictions and observations in the robot frame before estimation at step 14 of the test trajectory. Axes are in meters. All line segments and three vertical edges have been matched (to predictions 145, 175, 180).

Figure 6: Update resulting from the matches in fig. 5. The Kalman filter corrects the robot about 5 cm backwards, establishing good correspondence of observation and prediction of the three matched vertical edges in fig. 5. Thus they allow for an update also in the corridor direction; the state covariance matrix P(k+1|k+1) stays bounded.


Figure 7: Run with the laser range finder only. All lines which would allow for an update in the corridor direction have been removed from the map. Modeled environment features are in black. At each of the 47 trajectory steps, the robot is depicted with an ellipse reflecting the 99.9% probability region of the posterior state covariance matrix P(k+1|k+1) for x and y. The axis dimension is meters. The result is typical: false matches due to great position errors and extensive uncertainty at the lower corridor end yield incorrect estimates. The robot is lost.

Figure 8: Run with laser and vision. Vertical edges have been added to the map of fig. 7. The pose uncertainty stays bounded during navigation in the corridor. Only moderate corrections at the upper and lower corridor end are necessary; the robot accomplishes its task and returns safely to the start point. Filled objects in light gray (doors and cupboards on the left side) are barely visible to the laser range finder. They have a shiny, green surface and turned out to be almost perfect specular reflectors for the infrared laser.


The average execution times express overall durations and reflect the full CPU load, including sensor acquisition, low-level controllers and communication. No special code optimization has been done.

6. Discussion

Experience with earlier implementations [2] showed that on-the-fly localization during motion is not merely an option for good-looking demonstrations. It augments the robustness against many factors because it brings the update rate towards a better ratio to the dynamics of the Kalman filter variables.

Stop-and-go navigation with long steps (like at the lower corridor end in fig. 7) causes very uncertain predictions, making the matching problem difficult for any feature. On the other hand, on-the-fly localization with more updates per distance travelled produces smaller validation regions and thus diminishes the effect of a poorly calibrated odometry and an inadequate model for nonsystematic odometry errors.

Matching vertical edges is especially error-prone due to their frequent appearance in groups (fig. 3). With large validation regions caused by very uncertain pose predictions, false matches have occasionally been produced. But the effect of these incorrect assignments remains weak, since these groups are typically very compact. A possible remedy is to exploit the prior knowledge from the map and to try to 'correlate' patterns of edges with the observation. Instead of independently matching single observations to single predictions, the observation pattern would be forced to match a similar constellation of predicted edges [10][12].

Furthermore, for the vision system, the large validation regions are contrasted by observations whose uncertainties are very small in comparison. From the experiments we conclude that it would be more relevant to model an uncertain robot-to-sensor frame transformation to account for vibrations and uneven floors, i.e. similarly setting C_R^Sv ≠ 0 in equation (5).

7. Conclusions and Outlook

It could be demonstrated that the presented multisensor setup allows for more robust navigation in critical localization situations like long corridors. The permanent occurrence of features of a specific type becomes less important.

Figure 9: Averaged 2σ error bounds of the frontal (a), lateral (b) and angular (c) a posteriori uncertainty during the 47 steps of the test trajectory. Five runs have been made for each mode. Solid lines: laser range finder only; dashed lines: laser and vision.

Figure 10: Averaged number of predictions, observations and matches for line segments (a) and vertical edges (b). See also table 1.

Table 1: Overall mean values of the error bounds, the number of matched line segments n_l and matched vertical edges n_v, and the average localization cycle time t_exe.

              laser       laser and vision
2σ frontal    5.191 cm    4.258 cm
2σ lateral    1.341 cm    1.245 cm
2σ angular    1.625°      0.687°
n_l / n_v     3.53 / –    3.51 / 1.91
t_exe         377 ms      1596 ms


The experiments showed furthermore that, compared to the laser-only case, already a moderate number of matched vertical edges contributes to an important reduction of estimation uncertainty, especially in the vehicle orientation.

By reducing the problem of camera calibration and edge extraction to an appropriate degree of specialization, the computational complexity could be kept low, so as to have an applicable implementation on a fully autonomous system.

Further work will focus on more efficient implementations for on-the-fly localization during motion. Although ambiguous matching situations are expected to appear less often, more sophisticated strategies could be envisaged [5][12]. Finally, more complex fusion scenarios of range and vision features could be addressed as well. Detection of high-level features for concise scene descriptions [3] can be a starting point for topology-based navigation schemes.

References

[1] Adams M.D., Optical Range Analysis of Stable Target Pursuit in Mobile Robotics. PhD Thesis, University of Oxford, UK, Dec. 1992.

[2] Arras K.O., Vestli S.J., "Hybrid, High-Precision Localization for the Mail Distributing Mobile Robot System MoPS", Proc. of the 1998 IEEE Int. Conf. on Robotics and Automation, p. 3129-3134, Leuven, Belgium, 1998.

[3] Arras K.O., Siegwart R.Y., "Feature Extraction and Scene Interpretation for Map-Based Navigation and Map Building", Proc. of SPIE, Mobile Robotics XII, Vol. 3210, p. 42-53, 1997.

[4] Astolfi A., "Exponential Stabilization of a Mobile Robot", Proc. of the Third European Control Conference, p. 3092-7, Italy, 1995.

[5] Bar-Shalom Y., Fortmann T.E., Tracking and Data Association, Mathematics in Science and Engineering, Vol. 179, Academic Press Inc., 1988.

[6] Brega R., "A Real-Time Operating System Designed for Predictability and Run-Time Safety", The Fourth Int. Conference on Motion and Vibration Control (MOVIC'98), Zurich, Switzerland, 1998.

[7] Castellanos J.A., Tardós J.D., "Laser-Based Segmentation and Localization for a Mobile Robot", Proc. of the Sixth Int. Symposium on Robotics and Manufacturing (ISRAM'96), Montpellier, France, 1996.

[8] Chenavier F., Crowley J.L., "Position Estimation for a Mobile Robot Using Vision and Odometry", Proc. of the 1992 IEEE Int. Conference on Robotics and Automation, p. 2588-93, Nice, France, 1992.

[9] Crowley J.L., "World Modeling and Position Estimation for a Mobile Robot Using Ultrasonic Ranging", Proc. of the 1989 IEEE Int. Conference on Robotics and Automation, p. 674-80, Scottsdale, AZ, 1989.

[10] Delahoche L., Mouaddib E.M., Pégard C., Vasseur P., "A Mobile Robot Localization Based on a Multisensor Cooperation Approach", 22nd IEEE Int. Conference on Industrial Electronics, Control, and Instrumentation (IECON'96), New York, USA, 1996.

[11] Leonard J.J., Durrant-Whyte H.F., Directed Sonar Sensing for Mobile Robot Navigation, Kluwer Academic Publishers, 1992.

[12] Muñoz A.J., Gonzales J., "Two-Dimensional Landmark-based Position Estimation from a Single Image", Proc. of the 1998 IEEE Int. Conference on Robotics and Automation, p. 3709-14, Leuven, Belgium, 1998.

[13] Neira J., Tardós J.D., Horn J., Schmidt G., "Fusing Range and Intensity Images for Mobile Robot Localization", IEEE Transactions on Robotics and Automation, 15(1):76-84, 1999.

[14] Pérez J.A., Castellanos J.A., Montiel J.M.M., Neira J., Tardós J.D., "Continuous Mobile Robot Localization: Vision vs. Laser", Proc. of the 1999 IEEE Int. Conference on Robotics and Automation, Detroit, USA, 1999.

[15] Prescott B., McLean G.F., "Line-Based Correction of Radial Lens Distortion", Graphical Models and Image Processing, 59(1), 1997, p. 39-47.

[16] Taylor R.M., Probert P.J., "Range Finding and Feature Extraction by Segmentation of Images for Mobile Robot Navigation", Proc. of the 1996 IEEE Int. Conference on Robotics and Automation, Minneapolis, USA, 1996.