
Autonomous UAV Landing via eye in hand visual servoing

Nikolai Kummer∗, Hadi Firouzi†, Darryl Jacobs, Homayoun Najjaran
Okanagan School of Engineering, The University of British Columbia, Canada

October 4, 2011

Abstract

Catastrophic damage to a fixed-wing unmanned aerial vehicle (UAV) is most likely to occur during the landing stage. In this paper an autonomous landing method is proposed in which the UAV steers towards a large, multi-coloured, dome-shaped airbag via visual servoing. Images taken from a UAV-mounted camera were used to detect the dome with colour-based detection methods. The image analysis portion operated at 4 frames per second. A controller was created for the proposed landing system, and the method was validated in the FlightGear flight simulator. The paper includes simulation results and concludes that the vision-based dome-detection method is ready for implementation on a real UAV. Tuning of an autopilot controller based on the proposed visual servoing system is outside the scope of this paper. The integration of the proposed detection method into a UAV autopilot is underway.

1 Introduction

Landing is the most accident-prone stage of fixed-wing, small-scale UAV¹ operation. Commercial solutions exist for large aircraft and military UAVs, but they are prohibitively expensive. Currently, manual landing of a UAV via remote control is the cheapest solution, but it often leads to crash landings. The majority of crash landings are due to human error caused by the third-person view the operator has of the UAV. An inexpensive, autonomous UAV landing method would make UAV operation accessible to a wider market.

Current methods use GPS and INS sensors for position estimation and measure height above ground (HAG) using range-finders or barometric pressure measurements. The problem with these methods is that the UAV blindly descends towards the ground at a reduced velocity until touchdown occurs.

∗ [email protected]    † [email protected]    ¹ Unmanned Aerial Vehicle

Beard et al. [1] demonstrated an autonomous landing method for miniature air vehicles (MAVs) that employs GPS for localization, barometric pressure measurement, and optic flow to estimate the HAG. The vehicle is brought close to the ground and slides along the ground for landing. This may not be desirable for larger UAVs and may not be possible if the ground is cluttered with objects or the landing area is small. Also, the barometric pressure in the landing area needs to be calibrated, and the landing location accuracy is within a few meters.

In net-based landing systems [2, 3] the UAV is guided into a net, which has the advantage that landing can occur in a cluttered environment with too little space for a traditional landing. The disadvantage, however, is that landing can usually occur from only one or two directions, which can cause trouble if a cross-wind is present. The problem with using GPS or INS position estimates is that the cheaper sensors are not accurate enough to guarantee successful landings. More expensive sensors have the required accuracy, but a high-cost sensor is often not feasible for civilian applications.

With increasing computational capacity, computer vision has been increasingly utilized on UAVs. The advantage of camera vision is that it is lightweight and cheap and can extract a large amount of information. Vision sensors are versatile, and various methods can be combined to extract information. Beyeler et al. [4] created a vision-based collision avoidance method that uses optical flow sensors for obstacle avoidance. Yakimenko et al. [5] developed an algorithm to estimate aircraft attitude from vision data. Bourquardez and Chaumette [6] created methods to estimate runway location from image data. The significant disadvantage of machine vision is the required computational power, which is why many landing algorithms are either implemented on large aircraft, where large computers can be carried, or analyzed off-line. Masuko et al. [7] attempted to create an autonomous takeoff and landing system for a small-scale UAV using vision data. The method worked when non-vision sensors were employed, but the vision navigation did not operate properly due to the low frame rate caused by the on-board Armadillo-500 FX computer, which runs at approximately 500 MHz. The general problem with most vision-based strategies is the low frame rate and the time delay between image capture and the availability of extracted information.

Visual servoing is widely used in robotics to close a feedback loop with visual information [8, 9]. There are two main types of visual servoing: image-based visual servoing (IBVS) and position-based visual servoing (PBVS). PBVS estimates the absolute location of the target relative to a global coordinate frame. This method is not reliable for UAV landing for two reasons: the error in UAV position estimation from GPS or similar sensors, and the error in depth estimation from images. In IBVS, the target object is positioned at the desired location in the image, and changes in camera attitude are extracted directly from the image.

In 2008 Huh and Hyunchul [10] demonstrated an autonomous landing system for fixed-wing UAVs via visual servoing and a brightly coloured airbag. In the method, the UAV was brought to a halt by a controlled collision with a red airbag, which was detected by a camera mounted on the UAV. This eye-in-hand camera configuration removes the third-person-view problem that the human operator experiences. From the captured images the object was extracted based on colour, and shape-based filtering was used to detect the dome among all the extracted objects. The disadvantage of shape-based methods is that the shape must reach a minimum size in the image before it can be detected and reliably differentiated from other shapes. As a result, the dome can be visible in the image but not yet detected by the algorithm, which reduces the distance at which the dome can be reliably differentiated from other objects. Shape-based methods are also unable to detect an object if it lies partially outside the frame.

In this paper a more robust, colour-based landing detection method is proposed. The airbag consists of three distinct colours (red, green, blue), and it is tracked by colour-based object detection. A solid dome is detected more easily from larger distances than a net because of its larger visible surface area. Landing can also occur from any direction, which allows for a landing into the wind. Landing into the wind is desirable because the controlled collision occurs at lower speeds; a lower speed relative to the ground gives more time to maneuver the UAV into the air dome. The UAV is guided into the dome via IBVS. The dome-tracking procedure was validated using the FlightGear [11] flight simulator, and a controller was created. Following the introduction, the colour-based object detection is discussed in Section 2 and the visual servoing method in Section 3. The implementation of the method and the simulation results are discussed in Section 4, and the conclusions are presented in Section 5.

Figure 1: Image thresholding for the red colour. The final binary image occurs at the overlap of the a and b channel ranges

2 Colour-based Object Detection

The following colour-based object detection procedure was followed:

1. Colour Conversion

The image is converted from RGB to L*a*b* colour space. This is done because of the linear properties of the colour space [12]: an equal change in one channel produces a change of the same visual importance as an equal change in another channel. The L*a*b* space consists of three channels. The L channel (lightness) separates lighting from colour information, and the a and b channels contain the colour information.

2. Binary Image Generation

A colour range in the a and b channels is found for each of the red, green and blue colours of the dome. These thresholds are determined from sample images captured from the flight simulator; calibration of the thresholds will have to occur under varying environmental conditions. Filtering with these colour ranges produces three binary images, with a value of 1 denoting pixels within the specified colour range and a value of 0 denoting pixels outside the colour range (see Figure 1).
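As an illustration of steps 1 and 2, the sketch below uses OpenCV (the paper does not name an image-processing library, so this is an assumption) to convert a frame to L*a*b* and threshold the a and b channels into three binary masks. The numeric ranges are placeholders standing in for the calibrated thresholds described above.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Steps 1-2: convert a camera frame to L*a*b* and threshold the a and b
// channels into one binary mask per dome colour. The numeric ranges are
// illustrative placeholders (8-bit OpenCV Lab, a and b offset by 128) and
// would be replaced by calibrated values.
void makeColourMasks(const cv::Mat& bgrFrame,
                     cv::Mat& redMask, cv::Mat& greenMask, cv::Mat& blueMask)
{
    cv::Mat lab;
    cv::cvtColor(bgrFrame, lab, cv::COLOR_BGR2Lab);          // step 1

    // Step 2: L is left unconstrained so the masks are largely insensitive
    // to lighting; only the a and b ranges select the colour.
    cv::inRange(lab, cv::Scalar(0, 150, 130), cv::Scalar(255, 255, 200), redMask);
    cv::inRange(lab, cv::Scalar(0,   0, 130), cv::Scalar(255, 110, 200), greenMask);
    cv::inRange(lab, cv::Scalar(0, 110,   0), cv::Scalar(255, 160, 110), blueMask);
}
```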

3. Connected Component Extraction

For each binary image the connected components are extracted and assigned to a set of objects, so that each object belongs to the red, green or blue set. Objects that are too small, based on pixel count, are filtered out because they are considered noise.
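A possible sketch of step 3 with OpenCV connected components follows; the minimum pixel count used to reject noise blobs is an assumed value, and the diagonal size measure n defined in step 4 is precomputed here for convenience.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// One extracted colour blob: its centroid and the size measure n used in
// step 4 (the diagonal length of its bounding box).
struct Blob {
    cv::Point2d centroid;
    double      n;
};

// Step 3: extract connected components from one binary mask and discard
// components whose pixel count is below an assumed noise threshold.
std::vector<Blob> extractBlobs(const cv::Mat& mask, int minPixels = 50)
{
    cv::Mat labels, stats, centroids;
    const int count = cv::connectedComponentsWithStats(mask, labels, stats, centroids);

    std::vector<Blob> blobs;
    for (int i = 1; i < count; ++i) {                 // label 0 is the background
        if (stats.at<int>(i, cv::CC_STAT_AREA) < minPixels)
            continue;                                 // reject noise blobs
        const double w = stats.at<int>(i, cv::CC_STAT_WIDTH);
        const double h = stats.at<int>(i, cv::CC_STAT_HEIGHT);
        blobs.push_back({cv::Point2d(centroids.at<double>(i, 0),
                                     centroids.at<double>(i, 1)),
                         std::sqrt(w * w + h * h)});
    }
    return blobs;
}
```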


4. Outlier Rejection

For each connected component, a measure of the component size is computed. The size measure, denoted by n, is defined as the diagonal length of the component's bounding box. For every possible pair of two different colours the following condition is checked:

d < n_1 + n_2    (1)

Where d is the distance between the centroids of the two components, and n_1 and n_2 are the size measures of the components. The condition given by Eq. 1 holds true for the dome in any image, as the different colours are always adjacent to each other. If the condition does not hold for a particular pair, the pair is rejected and not considered as a possible colour pair in the next step.

5. Matching

The next step is to create all possible colour combinations that could be the dome. Since one of the three colours could be occluded, combinations of two or three colours are accepted. A possible dome consisting of three colours is made up of three different colour pairs; a possible dome of two colours consists of one pair only. The pairs used in the matching process are those remaining after the outlier rejection step. The matching algorithm creates all possible combinations of two or three colours and, for each matched set, calculates the similarity measure Sim. The similarity measure for a matched set consisting of two colours is given by:

Sim = \frac{1}{1 + d}    (2)

For three colours the similarity measure is the sum of the measures of each colour pair, given by:

Sim_{rgb} = \frac{1}{1 + d_{r,b}} + \frac{1}{1 + d_{g,b}} + \frac{1}{1 + d_{g,r}}    (3)

Where the subscripts r, g and b denote the colour blobs of the red, green and blue sets, respectively. From the above two equations it follows that the similarity measure for three colours will be larger than that for a set of two colours.

The matched set with the highest similarity measure is taken as the detected dome.
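A minimal sketch of the adjacency test of Eq. 1 and the similarity measures of Eqs. 2 and 3 is given below. It reuses the hypothetical Blob structure (and its includes) from the previous sketch and leaves the enumeration of two- and three-colour combinations to the caller.

```cpp
#include <cmath>

// Distance between the centroids of two blobs (Blob as defined in the
// connected-component sketch above).
double centroidDistance(const Blob& a, const Blob& b)
{
    return std::hypot(a.centroid.x - b.centroid.x, a.centroid.y - b.centroid.y);
}

// Step 4, Eq. 1: two colours can belong to the same dome only if their
// blobs are adjacent, i.e. closer than the sum of their size measures.
bool passesOutlierCheck(const Blob& a, const Blob& b)
{
    return centroidDistance(a, b) < a.n + b.n;
}

// Step 5, Eq. 2: similarity contribution of one colour pair. For a
// three-colour candidate, Eq. 3 is the sum of this value over the
// red-blue, green-blue and green-red pairs.
double pairSimilarity(const Blob& a, const Blob& b)
{
    return 1.0 / (1.0 + centroidDistance(a, b));
}
```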

3 Visual Servoing

IBVS is used over PBVS in this paper; it has the advantage that the error is corrected directly in the image and the exact aircraft dynamics need not be known.

There are two camera configurations: the fixed and the eye-in-hand configuration. The fixed configuration has a stationary camera collect data from a fixed location on the ground, which removes the on-board computing-power limitation. The drawback of this configuration is the third-person view of the landing, which is the same problem that the human operator experiences; in addition, the error in depth estimation from the image will result in an inaccurate estimate of the UAV position. The eye-in-hand configuration uses a UAV-mounted camera, which limits the available computing power but allows a landing approach from any direction of the dome and does not suffer from the third-person-view problem.

Figure 2: Error determination for visual servoing

Figure 2 shows the error in the image after the object has been detected. The error in the y-direction (denoted by Erry) was corrected by adjusting the pitch angle via the elevators towards the reference pitch angle θR. The error in the x-direction (denoted by Errx) was corrected by adjusting the bank angle of the UAV towards the reference roll angle φR. The reference pitch angle θR is given by:

\theta_R = \arctan\left(\frac{Err_y}{f}\right) + \theta_{offset}    (4)

Where θoffset is the angle offset due to the camera orientation and f is the focal length of the camera. The same approach as in Equation 4 could be used to adjust the yaw angle; in this paper, however, errors in the yaw angle were corrected by a banked turn, so a different approach was used to correct the error in the x-direction. A simple proportional controller was initially considered to reduce the error in the x-direction, but it was found that a small error in the x-direction would not be corrected in sufficient time in a stable manner. Therefore, an integral term was added to the controller. The reference bank angle φR is given by:

\phi_R = K_p \, Err_x + f_c K_i \sum_{t=0}^{n} Err_{x,t}    (5)

Where fc is the forgetting factor, a value defined on the interval [0, 1] that ensures that the input into the system is affected only by the most recent integral values. The term Ki is an integral gain for the image error.
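The following sketch implements Eqs. 4 and 5. The gains, focal length and camera offset are placeholder values, and the forgetting factor is applied as a recursively discounted sum, which is one way to realise the stated intent that only recent error values influence the integral term.

```cpp
#include <cmath>

// Reference-angle computation from the image error (Eqs. 4 and 5). All
// numeric values below are assumed placeholders, not values from the paper.
class ImageServo {
public:
    // Eq. 4: reference pitch angle from the vertical image error (pixels),
    // where f is the focal length in pixels and thetaOffset accounts for
    // the camera mounting angle.
    double pitchRef(double errY) const {
        return std::atan(errY / f_) + thetaOffset_;
    }

    // Eq. 5: reference roll angle from the horizontal image error (pixels).
    // The forgetting factor fc is applied as a recursive discount so that
    // old error samples fade out of the integral term.
    double rollRef(double errX) {
        integral_ = fc_ * integral_ + errX;
        return kp_ * errX + ki_ * integral_;
    }

private:
    double kp_ = 0.002;          // proportional gain on Err_x (assumed)
    double ki_ = 0.0005;         // integral gain on Err_x (assumed)
    double fc_ = 0.9;            // forgetting factor in [0, 1] (assumed)
    double f_  = 800.0;          // focal length in pixels (assumed)
    double thetaOffset_ = -0.1;  // camera offset angle, rad (assumed)
    double integral_ = 0.0;      // discounted sum of Err_x
};
```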


Figure 3: Controller Layout with Visual Analysis

4 Implementation and Simulation Results

4.1 Simulation Validation

FlightGear, an open-source flight simulator, was used to check the validity of the proposed method. The landing dome was modeled in the simulation environment with a diameter of approximately 8 meters. In order to simulate a hardware-in-the-loop system, the FlightGear output was saved to a log file and visual information was captured via screenshots. A C++ program then processed the information in real time and passed the calculated input back to the simulator via simulated button presses. For input into FlightGear, the keyboard was remapped to increase or decrease the aileron, rudder and elevator control surfaces. The control thread ran continuously in parallel with the dome-detection thread. The parallel setup was required because the control thread monitored the UAV attitude at a rate of 20 Hz while the image analysis occurred at about 3 Hz. The control thread adjusted the attitude of the plane to the most recent reference angles. Dome tracking was not implemented, although it would be desirable to estimate the next dome position between image analysis runs.
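The two-rate structure described above can be sketched as follows. The FlightGear input/output (screenshot capture, log-file parsing and simulated key presses) is only hinted at through comments, and the loop rates mirror the reported 20 Hz control loop and roughly 3 Hz image analysis.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<double> phiRef{0.0};    // latest reference roll angle from vision
std::atomic<double> thetaRef{0.0};  // latest reference pitch angle from vision
std::atomic<bool>   running{true};

// Dome-detection thread (about 3-4 Hz in the reported simulation): grab a
// screenshot, detect the dome and publish new reference angles. The image
// processing itself is omitted; only the publishing is shown.
void visionThread()
{
    while (running) {
        // double errX, errY = ... (from the dome detector)
        // phiRef = servo.rollRef(errX); thetaRef = servo.pitchRef(errY);
        std::this_thread::sleep_for(std::chrono::milliseconds(300));
    }
}

// Control thread (20 Hz): read the UAV attitude from the FlightGear log,
// evaluate the PID laws of Eqs. 6-8 against the most recent references and
// send the control-surface commands as simulated key presses (omitted).
void controlThread()
{
    while (running) {
        const double phiR   = phiRef;
        const double thetaR = thetaRef;
        (void)phiR; (void)thetaR;   // placeholders for the PID update
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
}

int main()
{
    std::thread vision(visionThread), control(controlThread);
    std::this_thread::sleep_for(std::chrono::seconds(60));   // one approach
    running = false;
    vision.join();
    control.join();
}
```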

4.2 Flight Controller

A flight controller was created for the method validation. The attitude of the aircraft was changed by adjusting the control surfaces. Given the error between the current and the reference attitude, a PID controller was used to correct the error. In the flight simulator the control surface positions were normalized between −1 and 1. Throttle was not adjusted with this controller; it was set to a safe position above the stall speed. Throttle control could be implemented to reduce the speed further and to ensure that maneuvering does not slow the UAV below the stall speed. The PID equations used for attitude adjustment were as follows (a sketch is given after the symbol definitions below):

\delta_A = K_{PA}(\phi_R - \phi) + K_{IA} \sum_{t=0}^{n} (\phi_{R,t} - \phi_t) + K_{DA} \dot{\phi}_e    (6)

\delta_E = K_{PE}(\theta_R - \theta) + K_{IE} \sum_{t=0}^{n} (\theta_{R,t} - \theta_t) + K_{DE} \dot{\theta}_e    (7)

\delta_R = K_{PR} \, \phi    (8)

where the symbols are defined as follows:

δA : the aileron angle input

δE : the elevator angle input

δR : the rudder angle input

φ : the current UAV roll angle

φR : the reference (or desired) UAV roll angle

φ̇e : the rate of change of the error in the roll angle

θ : the current UAV pitch angle

θR : the reference (or desired) UAV pitch angle

θ̇e : the rate of change of the error in the pitch angle

Errx : the image error in the x-direction

KPA : the proportional gain of the aileron input

KDA : the derivative gain of the aileron input

KIA : the integral gain of the aileron input

KPE : the proportional gain of the elevator input

KDE : the derivative gain of the elevator input

KIE : the integral gain of the elevator input

KPR : the proportional gain of the rudder input
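A minimal sketch of the attitude PID laws of Eqs. 6 to 8 follows; the gain values are assumptions for illustration only, since the paper does not report the tuned gains.

```cpp
// Attitude PID controller implementing Eqs. 6-8. Outputs are normalised
// control-surface deflections clamped to [-1, 1] as in the simulator; the
// gains are assumed placeholders, not values from the paper.
class AttitudeController {
public:
    void update(double phi, double theta,          // current roll and pitch
                double phiR, double thetaR,        // reference roll and pitch
                double phiDotErr, double thetaDotErr,
                double& aileron, double& elevator, double& rudder)
    {
        const double ePhi   = phiR - phi;
        const double eTheta = thetaR - theta;
        sumPhi_   += ePhi;                         // discrete integral terms
        sumTheta_ += eTheta;

        aileron  = clamp(KPA_ * ePhi   + KIA_ * sumPhi_   + KDA_ * phiDotErr);   // Eq. 6
        elevator = clamp(KPE_ * eTheta + KIE_ * sumTheta_ + KDE_ * thetaDotErr); // Eq. 7
        rudder   = clamp(KPR_ * phi);                                            // Eq. 8
    }

private:
    static double clamp(double u) { return u > 1.0 ? 1.0 : (u < -1.0 ? -1.0 : u); }

    double KPA_ = 0.5, KIA_ = 0.01, KDA_ = 0.05;   // aileron gains (assumed)
    double KPE_ = 0.5, KIE_ = 0.01, KDE_ = 0.05;   // elevator gains (assumed)
    double KPR_ = 0.1;                             // rudder gain (assumed)
    double sumPhi_ = 0.0, sumTheta_ = 0.0;         // accumulated errors
};
```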

4.3 Results

The UAV in the simulator was positioned in the air and pointed in the approximate direction of the dome. When no dome was detected for 3 frames, the UAV was put back into level flight. The UAV in the simulator was a full-sized Cessna 172P, as there were no small-scale UAVs in the simulator. The camera was positioned underneath the UAV.
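A possible realisation of this detection-loss fallback is sketched below; the level-flight pitch reference is an assumed trim value, not a number reported in the paper.

```cpp
// Detection-loss fallback: after three consecutive frames without a dome
// detection, the reference angles are reset to level flight.
class DetectionWatchdog {
public:
    void update(bool domeDetected, double& phiR, double& thetaR)
    {
        missed_ = domeDetected ? 0 : missed_ + 1;
        if (missed_ >= 3) {
            phiR   = 0.0;          // wings level
            thetaR = trimPitch_;   // hold an assumed trim pitch attitude
        }
    }

private:
    int    missed_    = 0;
    double trimPitch_ = 0.0;       // assumed level-flight pitch reference
};
```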

The flightpath of the simulation can be seen in Figure 5. The initial straight section corresponds to the period when the dome was too small to be detected. The dome detection in the simulation ran at approximately 3 to 4 frames per second; a better frame rate could have been obtained had image tracking been implemented to narrow the region of interest. The image errors in the X and Y directions can be seen in Figures 6 and 7, respectively. The error in the figures was set to 0 when no dome was detected. The image error in Figure 6 tends to 0 in order to align the nose of the UAV with the dome in the horizontal direction. The error in Figure 7 does not go to zero because of the θoffset term in Equation 4, which accounts for the camera mounting angle. Figure 4 shows captured and analyzed images from the test flight; the detected dome is surrounded by a magenta bounding box. In the last frame the dome is not fully within the frame, but the dome detection still successfully detects it. There is a sudden change in the y-error just before the collision occurs, which can be explained by the increasing dome size in the image and the resulting shift of the centroid location.

Figure 4: Screenshots of the FlightGear flight simulator with the camera mounted below the UAV

Figure 5: Landing flightpath

Figure 6: Image error in the X direction, based on an image width of 640 pixels

Figure 7: Image error in the Y direction, based on an image height of 480 pixels

5 Conclusions

In this paper an autonomous landing system for UAVs was proposed, using colour-based object detection to direct a UAV into a multi-coloured inflatable airbag. The experiments in the FlightGear flight simulator showed that the image analysis and dome detection worked on the captured images. Noise removal and image de-blurring will be necessary on real images. The dome detection was able to detect the dome when at least two colours were visible, and the dome was also successfully detected when it was partially outside the image. The colour-based dome detection method worked as expected and is ready to be tested on a real small-scale UAV. However, the integration of the proposed detection system into a real UAV autopilot requires further tuning, which is underway and will be reported in the near future.

6 Acknowledgment

The authors would like to acknowledge the technical support of Accuas Inc. and the financial support of the Natural Sciences and Engineering Research Council of Canada (NSERC) for this project under the Engage program.

References

[1] R. Beard, S. Griffiths, T. McLain, and D. Barber, "Autonomous Landing of Miniature Aerial Vehicles," Journal of Aerospace Computing, Information, and Communication, vol. 4, no. 5, pp. 770–784, May 2007. [Online]. Available: http://doi.aiaa.org/10.2514/1.26502

[2] S. Yoon, H. J. Kim, and Y. Kim, "Spiral landing guidance law design for unmanned aerial vehicle net-recovery," Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, vol. 224, no. 10, pp. 1081–1096, Jan. 2010. [Online]. Available: http://pig.sagepub.com/lookup/doi/10.1243/09544100JAERO744

[3] I. Kaminer, O. Yakimenko, V. Dobrokhodov, M. Lizarraga, and A. Pascoal, "Cooperative control of small UAVs for naval applications," in 2004 43rd IEEE Conference on Decision and Control (CDC) (IEEE Cat. No.04CH37601), vol. 1, pp. 626–631, 2004. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1428714

[4] A. Beyeler, J.-C. Zufferey, and D. Floreano, "Vision-based control of near-obstacle flight," Autonomous Robots, vol. 27, no. 3, pp. 201–219, Aug. 2009. [Online]. Available: http://www.springerlink.com/index/10.1007/s10514-009-9139-6

[5] O. Yakimenko, I. Kaminer, and W. Lentz, "A three point algorithm for attitude and range determination using vision," in Proceedings of the 2000 American Control Conference (ACC) (IEEE Cat. No.00CH36334), vol. 3, pp. 1705–1709, June 2000. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=879492

[6] O. Bourquardez and F. Chaumette, "Visual servoing of an airplane for auto-landing," in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, Oct. 2007, pp. 1314–1319. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4399216

[7] K. Masuko, I. Takahashi, S. Ogawa, M. Wu, A. Oosedo, T. Matsumoto, K. Go, F. Sugai, A. Konno, and M. Uchiyama, "Autonomous takeoff and landing of an unmanned aerial vehicle," in System Integration (SII), 2010 IEEE/SICE International Symposium on. IEEE, 2010, pp. 248–253. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5708333

[8] F. Chaumette, "Visual servo control. I. Basic approaches," Robotics & Automation, vol. 12, no. 5, pp. 651–670, 2006. [Online]. Available: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4015997

[9] F. Chaumette and S. Hutchinson, "Visual servo control. II. Advanced approaches [Tutorial]," IEEE Robotics & Automation Magazine, vol. 14, no. 1, pp. 109–118, Mar. 2007. [Online]. Available: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=4141039

[10] S. Huh and D. Hyunchul, "A vision-based landing system for small unmanned aerial vehicles using an airbag," Control Engineering Practice, vol. 18, no. 7, pp. 812–823, 2010. [Online]. Available: http://dx.doi.org/10.1016/j.conengprac.2010.05.003

[11] FlightGear, http://www.flightgear.org/, 2011.

[12] J. M. Kasson and W. Plouffe, "An analysis of selected computer interchange color spaces," ACM Transactions on Graphics, vol. 11, no. 4, pp. 373–405, Oct. 1992. [Online]. Available: http://portal.acm.org/citation.cfm?doid=146443.146479
