TRACKING AND STATE ESTIMATION OF AN UNMANNED GROUND VEHICLE SYSTEM USING AN UNMANNED AIR VEHICLE SYSTEM
By
DONALD KAWIKA MACARTHUR
A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
UNIVERSITY OF FLORIDA
2007
© 2007 Donald K. MacArthur
I proudly dedicate my life and this work to my wonderful wife Erica. We have both endured many trials through this process.
ACKNOWLEDGMENTS
I would like to thank my father Donald Sr., my mother Janey, and my brother Matthew for
their support through my many years of schooling.
TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION

2 BACKGROUND
   Position and Orientation Measurement Sensors
      Global Positioning Systems
      Inertial Measurement Units
      Magnetometers
      Accelerometer
      Rate Gyro
   Unmanned Rotorcraft Modeling
   Unmanned Rotorcraft Control

3 EXPERIMENTAL TESTING PLATFORMS
   Electronics and Sensor Payloads
      First Helicopter Electronics and Sensor Payload
      Second Helicopter Electronics and Sensor Payload
      Third Helicopter Electronics and Sensor Payload
      Micro Air Vehicle Embedded State Estimator and Control Payload
   Testing Aircraft
      UF Micro Air Vehicles
      ECO 8
      Miniature Aircraft Gas Xcell
      Bergen Industrial Twin
      Yamaha RMAX

4 GEO-POSITIONING OF STATIC OBJECTS USING MONOCULAR CAMERA TECHNIQUES
   Simplified Camera Model and Transformation
      Simple Camera Model
      Coordinate Transformation
   Improved Techniques for Geo-Positioning of Static Objects
   Camera Calibration
   Geo-Positioning Sensitivity Analysis

5 UNMANNED ROTORCRAFT MODELING

6 STATE ESTIMATION USING ONBOARD SENSORS
   Attitude Estimation Using Accelerometer Measurements
   Heading Estimation Using Magnetometer Measurements
   UGV State Estimation

7 RESULTS
   Geo-Positioning Sensitivity Analysis
   Comparison of Empirical Versus Simulated Geo-Positioning Errors
   Applied Work
      Unexploded Ordnance (UXO) Detection and Geo-Positioning Using a UAV
         Experimentation VTOL aircraft
         Sensor payload
         Maximum likelihood UXO detection algorithm
         Spatial statistics UXO detection algorithm
      Collaborative UAV/UGV Control
         Waypoint surveying
         Local map
      Citrus Yield Estimation
         Materials and methods
         Results
         Discussion

8 CONCLUSIONS

LIST OF REFERENCES
BIOGRAPHICAL SKETCH
LIST OF TABLES

7-1 Parameter standard deviations for the horizontal and vertical position
7-2 Parameter standard deviations for the roll, pitch, and yaw angles
7-3 Normalized pixel coordinate standard deviations used during sensitivity analysis
7-4 Parameter standard deviations used during sensitivity analysis
7-5 Comparison of Monte Carlo Method results
7-6 Production of Oranges (1000’s metric tons) (based on NASS, 2006)
7-7 Production of Grapefruit (1000’s metric tons) (based on NASS, 2006)
7-8 Irrigation Treatments
7-9 Results from Image Processing and Individual Tree Harvesting
LIST OF FIGURES

2-1 Commercially available GPS units
2-2 Commercially available GPS antennas
2-3 Commercially available IMU systems
2-4 MicroMag3 magnetometer sensor from PNI Corp.
2-5 HMC1053 tri-axial analog magnetometer from Honeywell
2-6 ADXL330 tri-axial SMT accelerometer from Analog Devices Inc.
2-7 ADXRS150 rate gyro from Analog Devices Inc.
4-1 Image coordinates to projection angle calculation
4-2 Diagram of coordinate transformation
4-3 Normalized focal and projective planes
4-4 Relation between a point in the camera and global reference frames
4-5 Calibration checkerboard pattern
4-6 Calibration images
4-7 Calibration images
5-1 Top view of the body fixed coordinate system
5-2 Side view of the body fixed coordinate system
5-3 Main rotor blade angle
5-4 Main rotor thrust vector
6-1 Fast Fourier Transform of raw accelerometer data
6-2 Fast Fourier Transform of raw accelerometer data after low-pass filter
6-3 Roll and Pitch measurement prior to applying low-pass filter
6-4 Roll and Pitch measurement after applying low-pass filter
6-5 Magnetic heading estimate
7-1 Roll and Pitch measurements used for defining error distribution
7-2 Heading measurements used for defining error distribution
7-3 Image of triangular placard used for geo-positioning experiments
7-4 Results of x and y pixel error calculations
7-5 Error Variance Histograms for the respective parameter errors
7-6 Experimental and simulation geo-position results
7-7 BLU97 Submunition
7-8 Miniature Aircraft Gas Xcell Helicopter
7-9 Yamaha RMAX Unmanned Helicopter
7-10 Sensor Payload System Schematic
7-11 Segmentation software
7-12 Pattern Recognition Process
7-13 Raw RGB and Saturation Images of UXO
7-14 Segmented Image
7-15 Raw Image with Highlighted UXO
7-16 TailGator and HeliGator Platforms
7-17 Aerial photograph of all simulated UXO
7-18 Local map generated with Novatel differential GPS
7-19 A comparison of the UGV’s path to the differential waypoints
7-20 UAV waypoints vs. UGV path
7-21 Individual Tree Yields as Affected by Irrigation Depletion Treatments
7-22 Individual Tree Yield as a Function of Orange Pixels in Image
7-23 Individual Tree Yield as a Function of Orange Pixels with Nonirrigated Removed
7-24 Image of Tree 2C Before and After Image Processing
7-25 Image of Tree 2F Before and After Image Processing
7-26 Image of Tree 6D Before and After Image Processing
7-27 Ground Images of Tree 6D and Tree 2E
8-1 Simulated error calculation versus elevation
8-2 Geo-Position error versus elevation
Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
TRACKING AND STATE ESTIMATION OF AN UNMANNED GROUND VEHICLE SYSTEM USING AN UNMANNED AERIAL VEHICLE SYSTEM
By
Donald Kawika MacArthur
May 2007
Chair: Carl Crane
Major: Mechanical Engineering
Unmanned Air Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs) have complementary strengths: UAVs offer extended perception, tracking, and mobility capabilities, while UGVs offer closer-range mobility and manipulation capabilities. This research investigates the collaboration of UAV and UGV systems and applies
the theory derived to a heterogeneous unmanned multiple vehicle system. This research will also
demonstrate the use of UAV perception and tracking abilities to extend the capabilities of a
multiple ground vehicle system. This research is unique in that it presents a comprehensive
system description and analysis from the sensor and hardware level to the system dynamics.
This work also couples the dynamics and kinematics of two agents to form a robust state estimate using completely passive sensor technology. A general sensitivity analysis of the
geo-positioning algorithm was performed. This analysis derives the sensitivity equations for
determining the passive positioning error of the target UGV. This research provides a
framework for analysis of passive target positioning and error contributions of each parameter
used in the positioning algorithms. This framework benefits the research and industrial
community by providing a method of quantifying positioning error due to errors from sensor
noise. This research presents a framework by which a given UAV payload configuration can be
evaluated using an empirically derived sensor noise model. Using this data the interaction
between sensor noise and positioning error can be compared. This allows the researcher to
selectively focus attention to sensors which have a greater effect on position error and quantify
expected positioning error.
CHAPTER 1
INTRODUCTION
The Center for Intelligent Machines and Robotics at the University of Florida has been
performing autonomous ground vehicle research for over 10 years. In that time, research has
been conducted in the areas of sensor fusion, precision navigation, precision positioning systems,
and obstacle avoidance. Researchers have used small unmanned helicopters for remote sensing in various applications. Recent experimentation with unmanned aerial vehicles has been conducted in collaboration with the Air Force Research Laboratory at Tyndall AFB, Florida.

In recent years, unmanned aerial vehicles (UAVs) have been used more extensively for military
and commercial operations. The improved perception abilities of UAVs compared with
unmanned ground vehicles (UGVs) make them more attractive for surveying and reconnaissance
applications. A combined UAV/UGV multiple vehicle system can provide aerial imagery,
perception, and target tracking along with ground target manipulation and inspection capabilities.
This research investigates collaborative UAV/UGV systems and also demonstrates the
application of a UAV/UGV system for various task-based operations.
The Air Force Research Laboratory at Tyndall Air Force Base has worked toward
improving EOD and range clearance operations by using unmanned ground vehicle systems.
This research incorporates the abilities of UAV/UGV systems to support these operations. The
research vision for the range clearance operations is to develop an autonomous multi-vehicle
system that can perform surveying, ordnance detection/geo-positioning, and disposal operations
with minimal user supervision and effort.
CHAPTER 2
BACKGROUND
Researchers have used small unmanned helicopters for remote sensing purposes for
various applications [1,2,3]. These applications range from agricultural crop yield estimation,
pesticide and fertilizer application, explosive reconnaissance and detection, and aerial
photography and mapping.
This research effort will strive to estimate the states of a UGV system using monocular
camera techniques and the extrinsic parameters of the camera sensor. The extrinsic parameters
can be reduced to the transformation from the camera coordinate system to the global coordinate
system.
Position and Orientation Measurement Sensors
Global Positioning Systems
The Global Positioning System (GPS) is fast becoming the positioning system of choice for autonomous vehicle navigation. This technology allows an agent to determine its location using signals broadcast from satellites overhead. The Navigation Signal Timing and Ranging Global Positioning System (NAVSTAR GPS) was established in 1978 and is maintained by the United States Department of Defense; it provides a positioning service for the U.S. military that is also freely available to the public. Since its creation, the service has been used for
commercial purposes such as nautical, aeronautical, and ground based navigation, and land
surveying. The current U.S. based GPS satellite constellation system consists of over 24
satellites. The number of satellites in operation for this system can vary due to satellites being
taken in and out of service. Other countries are leading efforts to develop their own satellite navigation systems. Russia has constructed the similar GLONASS system, and the GALILEO system is being developed by a European consortium. This system
is to be maintained by Europeans and will provide capabilities similar to that of the NAVSTAR
and GLONASS systems.
Each satellite maintains its own specific orbit and circles the Earth once every 12 hours. The orbit of each satellite is timed and coordinated so that five to eight satellites are above the horizon of any location on the Earth's surface at any time. A GPS receiver calculates
position by first receiving the microwave RF signals broadcast by each visible satellite. These are complex high-frequency signals carrying encoded binary information, chiefly the time at which the data was sent and the location of the satellite in its orbit. The GPS receiver processes this information to solve for its position and the current time.
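The solution the receiver computes is, at its core, an iterative least-squares problem: each pseudorange constrains the receiver to a sphere around one satellite, and the receiver clock bias adds a fourth unknown. The sketch below shows a Gauss-Newton solver under simplifying assumptions (satellite positions known exactly, no atmospheric error); the satellite coordinates in any example would be hypothetical, and this omits all the signal-processing detail of a real receiver.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_gps(sat_pos, pseudoranges, iters=20):
    """Gauss-Newton solution for receiver position (x, y, z) and clock
    bias from four or more satellite pseudoranges (all in meters).
    sat_pos is an (N, 3) array of satellite positions."""
    x = np.zeros(4)  # start at Earth's center with zero clock bias
    for _ in range(iters):
        d = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric ranges
        pred = d + x[3]                               # predicted pseudoranges
        # Jacobian: unit vectors from satellites toward the estimate,
        # plus a column of ones for the clock-bias distance term
        J = np.hstack([(x[:3] - sat_pos) / d[:, None], np.ones((len(d), 1))])
        x += np.linalg.lstsq(J, pseudoranges - pred, rcond=None)[0]
    return x[:3], x[3] / C   # position (m) and clock bias (s)
```

With perfect measurements the iteration converges to the true position in a handful of steps; real receivers additionally weight satellites by geometry and signal quality.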
GPS receivers typically provide position solutions at 1 Hz, although receivers are available that output position solutions at up to 20 Hz. The accuracy of a commercial GPS system without any augmentation is approximately 15 meters. Several types of commercially available
GPS units are shown in Figure 2-1. Some units include an integrated antenna and others do not. The Garmin GPS unit in Figure 2-1 contains both the antenna and receiver, whereas the other two units are receivers only. Several types of antennas are shown in Figure 2-2.
Figure 2-1. Commercially available GPS units
Figure 2-2. Commercially available GPS antennas
Differential GPS is an alternative method by which GPS signals from multiple receivers
can be used to obtain higher accuracy position solutions. Differential GPS operates by placing a
specialized GPS receiver in a known location and measuring the errors in the position solution
and the associated satellite data. The information is then broadcast in the form of correction data
so that other GPS receivers in the area can calculate a more accurate position solution. This
system is based on the fact that there are inherent delays as the satellite signals are transmitted
through the atmosphere. Localized atmospheric conditions cause the satellite signals within that
area to have the same delays. By calculating and broadcasting the correction values for each
visible satellite, the differential GPS system can attain accuracies from 1 mm to 1 cm [4].
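The correction principle can be sketched in a few lines. A real differential GPS system broadcasts per-satellite pseudorange corrections; the position-domain simplification below (with hypothetical coordinates) conveys the idea under the assumption that the base and rover experience the same localized atmospheric error.

```python
import numpy as np

def dgps_correct(base_known, base_measured, rover_measured):
    """Position-domain differential correction: the base station's known
    surveyed position reveals the shared error, which the rover then
    subtracts from its own solution. Assumes both receivers track the
    same satellites under the same local atmospheric conditions."""
    correction = base_measured - base_known   # shared error estimate
    return rover_measured - correction
```

Because the shared error cancels exactly in this idealized model, the rover recovers its true position; in practice the residual error grows with the baseline distance between base and rover.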
In 2002, a new type of GPS correction system was introduced so that a land-based correction signal is not required to improve position solutions. Satellite-based augmentation
systems (SBAS) transmit localized correction signals from orbiting satellites [5]. An SBAS implementation for North America is the Wide Area Augmentation System (WAAS). This
system has been used in this research and position solutions with errors of less than three meters
have been observed.
In 2005, the first in a series of new satellites was introduced into the NAVSTAR GPS
system. This system provides a new GPS signal referred to as L2C. This enhancement is
intended to improve the accuracy and reliability of the NAVSTAR GPS system for military and
public use.
Inertial Measurement Units
Inertial Measurement Unit (IMU) systems are used extensively in vehicles where accurate
orientation measurements are required. Typical IMU systems contain accelerometers and
angular gyroscopes. These sensors allow for the rigid body motion of the IMU to be measured
and state estimations to be made. These systems can vary greatly in cost and performance.
When coupled with a GPS system, the positioning and orientation of the system can be
accurately estimated. The coupled IMU/GPS combines the position and velocity measurements
based on satellite RF signals with inertial motion measurements. These systems complement
each other whereby the GPS is characterized by low frequency global position measurements
and the IMU provides higher frequency relative positioning/orientation measurements. Some of
the commercially available IMU systems are shown in Figure 2-3.
Figure 2-3. Commercially available IMU systems
Other sensors also allow for orientation measurement, such as fluidic tilt sensors, imaging sensors, light sensors, and thermal sensors. Each of these sensors has different advantages and
disadvantages for implementation. Fluidic tilt sensors provide high frequency noise rejection
and decent attitude estimation for low dynamic vehicles. In high G turns and extreme dynamics
these sensors fail to provide usable data. Imaging sensors have the advantage of not being
affected by vehicle dynamics. However, advanced image processing algorithms can require
significant computational overhead and these sensors are highly affected by lighting conditions.
Thermopile attitude sensors have been used for attitude estimation and are not affected by
vehicle dynamics. These sensors provide excellent attitude estimations but are affected by
reflective surfaces and changes in environment temperature.
Magnetometers
A magnetometer is a device that allows for the measurement of a local or distant magnetic
field. This device can be used to measure the strength and direction of a magnetic field. The
heading of an unmanned vehicle may be determined by detecting the magnetic field created by
the Earth’s magnetic poles. The “magnetic north” direction can aid in navigation and geo-spatial
mapping. For applications where the vehicle orientation is not restricted to planar motion, the
magnetometer is typically coupled with a tilt sensor to provide a horizontal north vector
independent of the vehicle orientation.
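The tilt compensation described above amounts to rotating the measured field back into the local horizontal plane before taking the heading angle. A sketch under assumed conventions (ZYX yaw-pitch-roll Euler angles, body axes x-forward/y-right/z-down, magnetic declination ignored):

```python
import math

def tilt_compensated_heading(mx, my, mz, roll, pitch):
    """Heading in radians (0 = magnetic north) from a body-frame
    magnetometer reading, de-rotated by roll and pitch (radians)."""
    # Project the measured field onto the local horizontal plane
    xh = (mx * math.cos(pitch)
          + my * math.sin(roll) * math.sin(pitch)
          + mz * math.cos(roll) * math.sin(pitch))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    return math.atan2(-yh, xh) % (2.0 * math.pi)
```

With roll and pitch zero this reduces to the familiar planar compass formula atan2(-my, mx); the roll and pitch inputs would come from a tilt sensor such as the accelerometer described below.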
There are several commercially available magnetometer sensors. The MicroMag3 from PNI Corp., shown in Figure 2-4, provides magnetic field measurements in three axes over a digital serial peripheral interface (SPI).
Figure 2-4. MicroMag3 magnetometer sensor from PNI Corp.
The Honeywell Corporation also manufactures a line of magnetic field detection sensors.
These products vary from analog linear/vector sensors to integrated digital compass devices.
The HMC1053 from Honeywell is a three axis magneto-resistive sensor for multi-axial magnetic
field detection and is shown in Figure 2-5.
Figure 2-5. HMC1053 tri-axial analog magnetometer from Honeywell
Accelerometer
An accelerometer measures the acceleration of the device along one or more measurement axes. MEMS-based accelerometers are accurate and inexpensive devices for measuring acceleration. The ADXL330 from Analog Devices Inc., shown in Figure 2-6, provides analog three-axis acceleration measurements in a small surface-mount package.
Figure 2-6. ADXL330 tri-axial SMT accelerometer from Analog Devices Inc.
A two- or three-axis accelerometer can be used as a tilt sensor. The off-horizontal angles can be determined by measuring the projection of the gravity vector onto the sensor axes. These measurements relate to the roll and pitch angles of the device and, when properly compensated to account for effects from vehicle dynamics, can provide accurate orientation information.
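That gravity projection can be written down directly. A sketch under assumed conventions (ZYX Euler angles, and a sensor whose z axis reads +g when the vehicle is level and at rest, i.e. the static unaccelerated case):

```python
import math

def tilt_from_accel(ax, ay, az):
    """Roll and pitch (radians) from a static 3-axis accelerometer
    reading; valid only when the sensor measures gravity alone."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch
```

The pitch formula uses the magnitude in the y-z plane rather than az alone so that it stays well conditioned as roll grows; both angles degrade near +/-90 degrees of pitch, where the gravity vector carries no roll information.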
Rate Gyro
A rate gyro is a device that measures the angular time rate of change about a single axis or
multiple axes. The ADXRS150 is a single axis MEMS rate gyro manufactured by Analog
Devices Inc. which provides analog measurements of the angular rate of the device and is shown
in Figure 2-7.
Figure 2-7. ADXRS150 rate gyro from Analog Devices Inc.
This device uses the Coriolis effect to measure the angular rate of the device. An
internally resonating frame in the device is coupled with capacitive pickoff elements. The
response of the pickoff elements changes with the angular rate. This signal is then conditioned
and amplified. When coupled with an accelerometer, these devices allow for enhanced
orientation solutions.
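One common way to fuse the two sensors is a complementary filter: the gyro is integrated for high-frequency response while the accelerometer-derived angle corrects low-frequency drift. A per-sample sketch, where the blend constant tau is an assumed tuning parameter rather than anything specified in this work:

```python
def complementary_update(angle, gyro_rate, accel_angle, dt, tau=0.5):
    """One complementary-filter step: trust the integrated gyro above
    the crossover frequency 1/tau and the accelerometer below it.
    angle and accel_angle in radians, gyro_rate in rad/s, dt in s."""
    alpha = tau / (tau + dt)  # high-pass weight for the gyro path
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Run in a loop at the sensor sample rate; a larger tau trusts the gyro longer before the accelerometer pulls the estimate back, which suits vehicles whose dynamics corrupt the accelerometer reading.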
Unmanned Rotorcraft Modeling
For this research, the aircraft operating region will mostly be in hover mode. The flight
characteristics of the full flight envelope are very complex and involve extensive dynamic,
aerodynamic, and fluid mechanics analysis. Previously, researchers have performed extensive
instrumentation of a Yamaha R-50 remote piloted helicopter [6]. These researchers outfitted a
Yamaha R-50 helicopter with a sensor suite for in-flight measurements of rotor blade motion and
loading. The system was equipped with sensors along the length of the main rotor blades,
measuring strain, acceleration, tilt, and position. This research was unique in its level of instrumentation detail, especially given the difficulty of instrumenting rotating components. This work
provided structural, modal, and load characteristics for this airframe and demonstrates the
extensive lengths required for obtaining in-flight aircraft properties. In addition, extensive work
has been conducted in system identification for the Yamaha R-50 and the Xcell 60 helicopters
[7,8,9]. These researchers performed extensive frequency-response-based system identification and flight testing, and compared their modeling results with the scaled dynamics of the UH-1H helicopter. Their analysis of small unmanned helicopter dynamic equations and system identification resulted in a complete dynamic model of a model-scale helicopter. These results showed great promise in that they demonstrated a close relation between the UH-1H helicopter dynamics and the tested aircraft.
This research also showed that the aircraft modeling technique used was valid and that the
system identification techniques used for larger rotorcraft were extensible to smaller rotorcraft.
Other researchers present a more systems level approach to the aircraft automation
discussion [10]. They present the instrumentation equipment and architecture, and present the
modeling and simulation derivations. They go on to present their work involving hardware-in-
the-loop simulation and image processing.
Unmanned Rotorcraft Control
Many researchers have become actively involved in the control and automation of
unmanned rotorcraft. The research has involved numerous controls topics including robust
controller design, fuzzy control, and full flight envelope control.
Robust H∞ controllers have been developed using Loop Shaping and Gain-Scheduling to
provide rapid and reliable high bandwidth controllers for the Yamaha R-50 UAV [11,12]. In this
research, the authors sought to incorporate the use of high-fidelity simulation modeling into the
control design to improve performance. Coupled with the use of multivariable control design
techniques, they also sought to develop a controller that would provide fast and robust controller
performance that could better utilize the full flight envelope of small unmanned helicopters.
Anyone who has watched experienced competition-level Radio Controlled (RC) helicopter pilots in flight has observed the remarkable capabilities of small RC helicopters during normal and inverted flight. It is these capabilities that draw researchers
towards using helicopters for their research. But with increased capability comes increased complexity in aircraft mechanics and dynamics. These researchers have attempted to
incorporate the synergic use of a high-fidelity aircraft model with robust multivariable control
strategies and have validated their findings by implementing and flight testing their control
algorithms on their testing aircraft. Also, H∞ controller design has been applied to highly
flexible aircraft [13]. As will be shown later, helicopter airframes are significantly prone to
failures caused by vibration modes, and disastrous consequences can occur if these modes
are not considered and compensated for. In this research, a highly flexible aircraft model is used for
control design and validation. The controller is specifically designed to compensate for the high
flexibility of the airframe. The authors present the aircraft model and uncertainties and discuss
the control law synthesis algorithm. These results demonstrate the meshing of the aircraft
structure modeling/analysis and the control design/stability. This concept is important not only
from a system performance perspective but also from a safety perspective. As UAVs become
more prevalent in domestic airspace, the public can benefit from the improved system safety
provided by more sophisticated modeling and analysis techniques.
Previous researchers have also conducted research on control optimization for small
unmanned helicopters [14]. In this research, the authors focus on the problem of attitude control
optimization for a small-scale unmanned helicopter. By using an identified model of the
helicopter system that incorporates the coupled rotor/stabilizer/fuselage dynamic effects, they
improve the overall model accuracy. This research is unique in that it incorporates the stabilizer
bar dynamic effects, which are commonly omitted in previous work. The system model is validated by
performing flight tests using a Yamaha RMax helicopter test-bed system. They go on to
compensate for the performance reduction induced by the stabilizer bar and optimize the
Proportional Derivative (PD) attitude controller using an established control design methodology
with a frequency response envelope specification.
CHAPTER 3 EXPERIMENTAL TESTING PLATFORMS
Electronics and Sensor Payloads
In order to perform testing and evaluation of the theory and concepts involved in this
research, several electronics and sensor payloads were developed. The purpose of these
payloads was to provide perception and aircraft state measurements and onboard processing
capabilities. These systems were developed to operate modularly and enable transfer of the
payload to different aircraft. The payloads were developed with varying capabilities and sizes.
The host aircraft for these payloads ranged from a 6” fixed wing micro-air vehicle to a 3.1 meter
rotor diameter agricultural mini-helicopter.
First Helicopter Electronics and Sensor Payload
The first helicopter electronics and sensor payload was constructed to provide an initial
testing platform to ensure proper operation of the electronics and aircraft during flight. The
system schematic is shown in Figure 3-1.
Figure 3-1. First helicopter payload system schematic
The system consisted of five subsystems:
1. Main processor
2. Imaging
3. Communication
4. Data storage
5. Power
The main processor provides the link between all of the sensors, the data storage device,
and the communication equipment. The imaging subsystem consists of a Videre stereovision
system linked via two FireWire connections. The data storage subsystem consists of a 40GB
laptop hard drive that hosts a Linux operating system and stores the sensor data.
The power subsystem consists of a 12V to 5V DC to DC converter, 12V power regulator and a
3Ah LiPo battery pack. The power regulators condition and supply power to all electronics. The
LiPo battery pack served as the main power source and was selected based on the low weight
and high power density of the LiPo battery chemistry. The first helicopter payload attached to
the aircraft is shown in Figure 3-2.
Figure 3-2. First payload mounted on helicopter
The first prototype system was equipped on the aircraft and tested during flight.
Although image data could be gathered in flight, the laptop hard drive could not withstand
the vibration of the aircraft. Figure 3-3 shows in-flight testing of the first prototype payload.
Figure 3-3. Helicopter testing with first payload
Second Helicopter Electronics and Sensor Payload
The payload design was refined to provide a more robust testing platform for this
research. Improving the design required vibration isolation of the payload from the aircraft,
as well as a data storage method that could withstand the harsh environment onboard
the aircraft. The system schematic for the second prototype payload is shown in Figure 3-4.
Figure 3-4. Second helicopter payload system schematic
The system consisted of six subsystems:
1. Main processor
2. Imaging
3. Pose sensors
4. Communication
5. Data storage
6. Power
The second prototype payload contained components similar to those of the first prototype,
but it replaced the laptop hard drive with two compact flash drives for storage and added two
pose sensors. The OEM Garmin GPS provided global position, velocity, and altitude data at
5 Hz. The digital compass provided heading, roll, and pitch angles at 30 Hz. The second
prototype payload is shown in Figure 3-5.
Figure 3-5. Second payload mounted to helicopter
Flight tests showed that the second payload could reliably collect image and pose data
during flight and maintain wireless communication at all times. Figure 3-6 shows the second
prototype payload equipped on the aircraft during flight testing.
Figure 3-6. Flight testing with second helicopter payload
Third Helicopter Electronics and Sensor Payload
The helicopter electronics and sensor payload was redesigned slightly to include a high-
accuracy differential GPS (Figure 3-7). This system has a vendor-stated positioning accuracy of
2 cm in differential mode, allowing precise helicopter positioning. It further improves overall
system performance and allows comparison of the standard and RT2 differential GPS systems.
Figure 3-7. Third helicopter payload system schematic
Micro Air Vehicle Embedded State Estimator and Control Payload
An embedded state estimator and control payload was developed to support the Micro Air
Vehicle research being performed at the University of Florida. This system provides control
stability and video data. The system schematic is shown in Figure 3-8.
Figure 3-8. Micro-Air Vehicle embedded state estimator and control system schematic
Testing Aircraft
UF Micro Air Vehicles
Several MAVs have been developed for reconnaissance and control applications. This
platform provides a payload capability of less than 30 grams with a wingspan of 6” (Figure 3-9).
The system is a fixed-wing aircraft with two to three control surface actuators and an electric
motor. System development for this platform requires small size, low weight, and low power consumption.
Figure 3-9. Six inch micro air vehicle
ECO 8
This aircraft was the first helicopter built in the UF laboratory. The aircraft is powered by
a brushed electric motor with an eight cell nickel cadmium battery pack. The aircraft is capable
of flying for approximately 10 minutes under normal flight conditions. This system has a
payload capacity of less than 60 grams with CCPM swashplate mixing as shown in Figure 3-10.
Figure 3-10. Eco 8 helicopter
Miniature Aircraft Gas Xcell
A Miniature Aircraft Gas Xcell was the first gas powered helicopter purchased for testing
and experimentation (Figure 3-11). This aircraft is equipped with a two stroke gasoline engine,
740 mm main rotor blades, and has an optimal rotor head speed of 1800 rpm. The payload
capacity is approximately 15 lbs with a runtime of 20 minutes.
Figure 3-11. Miniature Aircraft Gas Xcell
Bergen Industrial Twin
A Bergen Industrial Twin was purchased for testing with heavier payloads (Figure 3-12).
This aircraft is equipped with a dual cylinder two stroke gasoline engine, 810 mm main rotor
blades, and has an optimal rotor head speed of 1500 rpm. The payload capacity is approximately
25 lbs with a runtime of 30 minutes.
Figure 3-12. Bergen industrial twin helicopter
Yamaha RMAX
Several agricultural Yamaha RMAX helicopters were purchased by the AFRL robotics
research laboratory at Tyndall Air Force base in Panama City, Florida. The aircraft is shown in
Figure 3-13. This system has a two-stroke engine with internal power generation and a control
stabilization system. It has a 60 lb payload capability and is typically used
for small-area pesticide and fertilizer spraying.
These aircraft were used to conduct various experiments involving remote sensing, sensor
noise analysis, system identification, and various applied rotorcraft tasks. These experiments
and their results will be discussed in the subsequent chapters. Each aircraft has varying costs,
payload capabilities, and runtimes. As with the various sensors available for UAV research, the
aircraft should be selected to suit the needs of the particular project or task.
Figure 3-13. Yamaha RMAX helicopter
CHAPTER 4 GEO-POSITIONING OF STATIC OBJECTS USING MONOCULAR CAMERA
TECHNIQUES
Two derivations were performed which allowed for the global coordinates of an object in
an image to be found. Both derivations perform the transformation from a 2D coordinate system
referred to as the image coordinate system to the 3D global coordinate system. The first
derivation utilizes a simplified camera model and calculates the position of the static object using
the concept of the intersection of a line and a plane. The second derivation utilizes intrinsic and
extrinsic camera parameters and uses projective geometry and coordinate transformations.
Simplified Camera Model and Transformation
Simple Camera Model
The cameras were modeled by linearly scaling the horizontal and vertical projection angle
with the x and y position of the pixel respectively as illustrated in Figure 4-1. This allowed for
the relative angle of the static object to be calculated with respect to a coordinate system fixed in
the aircraft.
Figure 4-1. Image coordinates to projection angle calculation
Coordinate Transformation
A coordinate transformation is performed on the static object location from image
coordinates to global coordinates as shown in Figure 4-2. The image data provides the relative
angle of the static object with respect to the aircraft reference frame. In order to find the position
of the static object, a solution of the intersection of a line and a plane was used.
Figure 4-2. Diagram of coordinate transformation
The equation of a plane that is used for this problem is

Ax + By + Cz + D = 0    (4-1)

where x, y, and z are the coordinates of a point in the plane.

The equation of a line used in this problem is

\tilde{p} = \tilde{p}_1 + u(\tilde{p}_2 - \tilde{p}_1)    (4-2)

where \tilde{p}_1 and \tilde{p}_2 are points on the line.

Substituting (4-2) into (4-1) results in the solution

u = \frac{A x_1 + B y_1 + C z_1 + D}{A(x_1 - x_2) + B(y_1 - y_2) + C(z_1 - z_2)}    (4-3)

where x_1, y_1, and z_1 are the coordinates of point \tilde{p}_1, and x_2, y_2, and z_2 are the
coordinates of point \tilde{p}_2.
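The line–plane intersection of Equations 4-1 through 4-3 can be sketched in a few lines of NumPy. This is an illustrative implementation under the stated ground-plane convention, not the flight code used in the research:

```python
import numpy as np

def line_plane_intersection(p1, p2, A, B, C, D):
    """Intersect the line p = p1 + u*(p2 - p1) with the plane
    A*x + B*y + C*z + D = 0, per Equations 4-1 through 4-3."""
    n = np.array([A, B, C], dtype=float)
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    # u follows from substituting the line equation into the plane equation
    u = (n @ p1 + D) / (n @ (p1 - p2))
    return p1 + u * (p2 - p1)

# Example: ground plane at 10 m elevation (A=0, B=0, C=1, D=-10) and a ray
# from a camera at 100 m altitude angled slightly off nadir
point = line_plane_intersection([0.0, 0.0, 100.0], [0.1, 0.0, 99.0], 0, 0, 1, -10)
# point lies on the plane, i.e. point[2] == 10
```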
For this problem the ground plane is defined in the global reference frame by A=0, B=0,
C=1, and D=-ground elevation. The point p1 is the focal point of the camera and it is determined
in the global reference frame based on the sensed GPS data. The point p2 is calculated in the
global reference frame as equal to the coordinates of p1 plus a unit distance along the static
object projection ray. This is known from the static object image angle and the camera’s
orientation as measured by attitude and heading sensors. In other words, the direction of the
static object projection ray in the global reference frame was found by transforming the
projection vector from the aircraft to the static object, as measured in the aircraft frame, into
the global frame. This entailed taking a downward vector in the aircraft frame and rotating it
about the yaw, pitch, and roll axes by the pose and projection angles. The rotation matrices are
R_{21} = \begin{bmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{bmatrix}    (4-4)

R_{32} = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}    (4-5)

R_{43} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\psi & \sin\psi \\ 0 & -\sin\psi & \cos\psi \end{bmatrix}    (4-6)

where

\phi = yaw of the aircraft
\theta = pitch of the aircraft plus projection pitch angle
\psi = roll of the aircraft plus projection roll angle.
The downward vector \tilde{r} = (0\;\; 0\;\; {-1})^T was transformed using the compound rotation matrix

R_{41} = R_{21} R_{32} R_{43}.    (4-7)

The new projection vector was found as

\tilde{r}' = R_{41}\, \tilde{r}    (4-8)

where \tilde{r} is the projection ray measured in the aircraft reference frame and \tilde{r}' is the projection ray
as measured in the global reference frame. Using the solution found for the intersection of a line
and a plane, with the aircraft position as a point on the line, the position of the static object
in the global reference frame was found. Thus, for each object identified in an image, the
coordinates of \tilde{p}_1 and \tilde{p}_2 are determined in the global reference frame, and (4-2) and (4-3) are then
used to calculate the position of the object in the global reference frame.
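A minimal sketch of the projection-ray transformation (Equations 4-4 through 4-8), using the stated convention φ = yaw, θ = pitch, ψ = roll; the angle signs follow the rotation matrices as written above and the example values are illustrative:

```python
import numpy as np

def rotation_41(phi, theta, psi):
    """Compound rotation R41 = R21 @ R32 @ R43 (Equations 4-4 to 4-7);
    phi = yaw, theta = pitch, psi = roll."""
    c, s = np.cos, np.sin
    R21 = np.array([[ c(phi), s(phi), 0.0],
                    [-s(phi), c(phi), 0.0],
                    [ 0.0,    0.0,    1.0]])
    R32 = np.array([[c(theta), 0.0, -s(theta)],
                    [0.0,      1.0,  0.0     ],
                    [s(theta), 0.0,  c(theta)]])
    R43 = np.array([[1.0,  0.0,     0.0   ],
                    [0.0,  c(psi),  s(psi)],
                    [0.0, -s(psi),  c(psi)]])
    return R21 @ R32 @ R43

# Equation 4-8: transform the downward ray r = (0, 0, -1)^T into the global frame
r = np.array([0.0, 0.0, -1.0])
r_global = rotation_41(0.0, 0.0, 0.0) @ r   # zero angles leave the ray pointing down
```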
Improved Techniques for Geo-Positioning of Static Objects
A precise camera model and an image to global coordinate transformation were developed.
This involved finding the intrinsic and extrinsic camera parameters of the camera system
attached to the aerial vehicle. A relation between the normalized pixel coordinates and
coordinates in the projective coordinate plane was used:
\begin{Bmatrix} u_n \\ v_n \end{Bmatrix} = \begin{Bmatrix} X_C / Z_C \\ Y_C / Z_C \end{Bmatrix}    (4-9)

The normalized pixel coordinate vector \tilde{m} and the projective plane coordinate vector \tilde{M}
are related using Equation 4-9 and form the projection relationship between points in the image
plane and points in the camera reference frame as shown in Figure 4-3, where

\tilde{m} = \begin{Bmatrix} u_n \\ v_n \\ 1 \end{Bmatrix}    (4-10)

\tilde{M} = \begin{Bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{Bmatrix}.    (4-11)
Figure 4-3. Normalized focal and projective planes
The transformation from image coordinates to global coordinates was determined using the
normalized pixel coordinates, and the camera position and orientation with respect to the global
coordinate system (Figure 4-4). The transformation of a point M expressed in the camera
reference system C to a point expressed in the global system is shown in Equation 4-12.
{}^G P_M = {}^G T_C \, {}^C P_M    (4-12)

{}^G P_M = \begin{Bmatrix} X_G \\ Y_G \\ Z_G \\ 1 \end{Bmatrix} = {}^G T_C \, {}^C P_M = \begin{bmatrix} {}^G R_C & {}^G P_{Co} \\ 0_{3\times1}^T & 1 \end{bmatrix} \begin{Bmatrix} X_C \\ Y_C \\ Z_C \\ 1 \end{Bmatrix}    (4-13)

Dividing both sides of Equation 4-13 by Z_C and substituting Z_G = 0 (the camera elevation is
evaluated as height above ground level, and the target location is assumed to lie on the
Z_G = 0 global plane) results in Equation 4-14.
Figure 4-4. Relation between a point in the camera and global reference frames

\begin{Bmatrix} X_G / Z_C \\ Y_G / Z_C \\ 0 \\ 1 / Z_C \end{Bmatrix} = \begin{bmatrix} {}^G R_C & {}^G P_{Co} \\ 0_{3\times1}^T & 1 \end{bmatrix} \begin{Bmatrix} X_C / Z_C \\ Y_C / Z_C \\ 1 \\ 1 / Z_C \end{Bmatrix}    (4-14)
Substituting X_C / Z_C = u_n and Y_C / Z_C = v_n:

\begin{Bmatrix} X_G / Z_C \\ Y_G / Z_C \\ 0 \\ 1 / Z_C \end{Bmatrix} = \begin{bmatrix} {}^G R_C & {}^G P_{Co} \\ 0_{3\times1}^T & 1 \end{bmatrix} \begin{Bmatrix} u_n \\ v_n \\ 1 \\ 1 / Z_C \end{Bmatrix}    (4-15)
This leads to three equations in the three unknowns X_G, Y_G, and Z_C:

\frac{X_G}{Z_C} = R_{11} u_n + R_{12} v_n + R_{13} + \frac{{}^G P_{Co_x}}{Z_C}    (4-16)

\frac{Y_G}{Z_C} = R_{21} u_n + R_{22} v_n + R_{23} + \frac{{}^G P_{Co_y}}{Z_C}    (4-17)

0 = R_{31} u_n + R_{32} v_n + R_{33} + \frac{{}^G P_{Co_z}}{Z_C}    (4-18)

where the scalar R_{ij} represents the element in the i-th row and j-th column of the {}^G R_C matrix.

Using Equations 4-16, 4-17, and 4-18, Z_C, X_G, and Y_G can be determined explicitly:

Z_C = \frac{-{}^G P_{Co_z}}{R_{31} u_n + R_{32} v_n + R_{33}}    (4-19)

X_G = \left( \frac{-{}^G P_{Co_z}}{R_{31} u_n + R_{32} v_n + R_{33}} \right) \left( R_{11} u_n + R_{12} v_n + R_{13} \right) + {}^G P_{Co_x}    (4-20)

Y_G = \left( \frac{-{}^G P_{Co_z}}{R_{31} u_n + R_{32} v_n + R_{33}} \right) \left( R_{21} u_n + R_{22} v_n + R_{23} \right) + {}^G P_{Co_y}    (4-21)

Equations 4-20 and 4-21 provide the global coordinates of the static object.
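Equations 4-19 through 4-21 reduce to a few lines of code. The sketch below assumes R is the camera-to-global rotation {}^G R_C and P_co the camera position in the global frame; the nadir-pointing example orientation is an assumption for illustration:

```python
import numpy as np

def geo_position(u_n, v_n, R, P_co):
    """Global target coordinates from normalized pixel coordinates,
    per Equations 4-19 to 4-21; the target lies on the Z_G = 0 plane."""
    z_c = -P_co[2] / (R[2, 0] * u_n + R[2, 1] * v_n + R[2, 2])       # Eq. 4-19
    x_g = z_c * (R[0, 0] * u_n + R[0, 1] * v_n + R[0, 2]) + P_co[0]  # Eq. 4-20
    y_g = z_c * (R[1, 0] * u_n + R[1, 1] * v_n + R[1, 2]) + P_co[1]  # Eq. 4-21
    return x_g, y_g, z_c

# Camera 50 m above the origin, pointing straight down (assumed orientation)
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
x_g, y_g, z_c = geo_position(0.1, 0.0, R, np.array([0.0, 0.0, 50.0]))
# x_g == 5.0, y_g == 0.0, z_c == 50.0
```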
Camera Calibration
In order to calculate the normalized pixel coordinates using raw imaging sensor data, a
calibration procedure is performed using a camera calibration toolbox for MATLAB®[15]. The
calibration procedure determines the extrinsic and intrinsic parameters of the camera system.
During the calibration procedure, several images are used with checkerboard patterns of specific
size that allow for the different parameters to be estimated as shown in Figure 4-5.
The extrinsic parameters define the position and orientation characteristics of the camera
system. These parameters are affected by the mounting and positioning of the camera relative to
the body fixed coordinate system.
Figure 4-5. Calibration checkerboard pattern
The intrinsic parameters define the optic projection and perspective characteristics of the
camera system. These parameters are affected by the camera lens properties, imaging sensor
properties, and lens/sensor placement properties. The camera lens properties are generally
characterized by the focal length and prescribed imaging sensor size. The focal length is a
measure of how strongly the lens focuses the light energy. This in essence correlates to the zoom
of the lens given a fixed sensor size and distance. The imaging sensor properties are generally
characterized by the physical size, and horizontal/vertical resolution of the imaging sensor.
These properties help to define the dimensions and geometry of the image pixels. The
lens/sensor placement properties are generally characterized by the misalignment of the lens and
image sensor, and the lens to sensor planar distance. For our analysis we are mostly concerned
with determining the intrinsic parameters of the camera system. These parameters are used for
calculating the normalized pixel coordinates given the raw pixel coordinates.
The intrinsic parameters that are used for generating the normalized pixel coordinates are
the focal length, principal point, skew coefficient, and image distortion coefficients. The focal
length, as described earlier, characterizes the linear projection of points observed in space onto
the focal plane. The focal length has components in the x and y axes, and these values are not assumed to be
equal. The principal point estimates the center pixel position. All normalized pixel coordinates
are referenced to this point. The skew coefficient estimates the angle between the x and y axes
of each pixel. In some instances the pixel geometry is not square or even rectangular. This
coefficient describes how “off-square” the pixel x and y axes are and allows for compensation.
The image distortion coefficients estimate the radial and tangential distortions typically caused
by the camera lens. Radial distortion causes a changing magnification effect at varying radial
distances. These effects are apparent when a straight line appears to be curved through the
camera system. The tangential distortions are caused by ill-centering or defects of the lens
optics; these displace points perpendicular to the radial imaging field.
Figure 4-6. Calibration images
The camera calibration toolbox allows for all of the intrinsic parameters to be estimated
using several images of the predefined checkerboard pattern. Once the calibration procedure is
completed, the intrinsic parameters are used in the geo-positioning algorithm. Selections of
images were used that captured the checker pattern at different ranges and orientations as shown
in Figure 4-6.
The boundaries of the checker pattern were then selected manually for each image. The
calibration algorithm used the gradient of the pattern to then find all of the vertices of the
checkerboard as shown in Figure 4-7.
Figure 4-7. Calibration images
Once the boundaries for all of the images were selected, the algorithm calculated the
intrinsic camera parameter estimates using a gradient descent search. Using the selected images
the following parameters were calculated:
Focal length: fc = [1019.52796, 1022.12290] ± [20.11515, 20.62667]
Principal point: cc = [645.66333, 527.72943] ± [13.60462, 10.92129]
Skew: alpha_c = [0.00000] ± [0.00000] => angle of pixel axes = 90.00000 ± 0.00000 degrees
Distortion: kc = [-0.17892, 0.13875, -0.00128, 0.00560, 0.00000] ± [0.01419, 0.02983, 0.00158, 0.00203, 0.00000]
Pixel error: err = [0.22613, 0.14137]
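Given the calibrated focal length and principal point above (and neglecting the small distortion coefficients), raw pixel coordinates can be normalized as a sketch:

```python
import numpy as np

# Calibrated intrinsics reported by the toolbox above
fc = np.array([1019.52796, 1022.12290])   # focal length, pixels
cc = np.array([645.66333, 527.72943])     # principal point, pixels

def normalize_pixel(u, v):
    """Raw pixel -> normalized coordinates (u_n, v_n); zero skew is assumed
    and lens distortion is neglected in this sketch."""
    return (u - cc[0]) / fc[0], (v - cc[1]) / fc[1]

u_n, v_n = normalize_pixel(cc[0], cc[1])   # the principal point maps to (0, 0)
```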
Geo-Positioning Sensitivity Analysis
In this section, the sensitivity of the position solution is derived with respect to the
measurable parameters used in the geo-positioning algorithm. This analysis shows both the
general sensitivity of the positioning solution and the sensitivity at common
operating conditions.
Equation 4-15 is used to determine the global position of the target based on the global
position and orientation of the camera, and the normalized target pixel coordinates. Multiplying
Equation 4-15 through by Zc produces:
\begin{Bmatrix} x_G \\ y_G \\ 0 \\ 1 \end{Bmatrix} = \begin{bmatrix} {}^G R_C & {}^G P_{Co} \\ 0_{3\times1}^T & 1 \end{bmatrix} \begin{Bmatrix} u_n z_C \\ v_n z_C \\ z_C \\ 1 \end{Bmatrix}.    (4-22)
Since the geo-positioning coordinates are of primary concern for the sensitivity analysis,
Equation 4-22 is reduced to the form:
\begin{Bmatrix} x_G \\ y_G \end{Bmatrix} = z_C \begin{bmatrix} R_{11} & R_{12} & R_{13} & {}^G P_{Co_x} \\ R_{21} & R_{22} & R_{23} & {}^G P_{Co_y} \end{bmatrix} \begin{Bmatrix} u_n \\ v_n \\ 1 \\ 1 / z_C \end{Bmatrix}.    (4-23)
Equation 4-23 is rewritten in the form:
\begin{Bmatrix} x_G \\ y_G \end{Bmatrix} = z_C A \tilde{b}    (4-24)

where

A = \begin{bmatrix} R_{11} & R_{12} & R_{13} & {}^G P_{Co_x} \\ R_{21} & R_{22} & R_{23} & {}^G P_{Co_y} \end{bmatrix}    (4-25)

\tilde{b} = \begin{Bmatrix} u_n \\ v_n \\ 1 \\ 1 / z_C \end{Bmatrix}.    (4-26)
The geo-positioning process is modeled by assuming there are some errors in the
parameters used in the calculation. The parameter vector is defined below:
\{p\} = \begin{Bmatrix} {}^G P_{Co_x} \\ {}^G P_{Co_y} \\ {}^G P_{Co_z} \\ \phi \\ \theta \\ \psi \\ u_n \\ v_n \end{Bmatrix}    (4-27)
The modeled process is shown below:
\begin{Bmatrix} x_G \\ y_G \end{Bmatrix}_{actual} + \begin{Bmatrix} x_G \\ y_G \end{Bmatrix}_{error} = \hat{z}_C A \tilde{b}\,\Big|_{\{\hat{p}\}}    (4-28)

where

\{\hat{p}\} = \begin{Bmatrix} {}^G P_{Co_x} + \delta({}^G P_{Co_x}) \\ {}^G P_{Co_y} + \delta({}^G P_{Co_y}) \\ {}^G P_{Co_z} + \delta({}^G P_{Co_z}) \\ \phi + \delta\phi \\ \theta + \delta\theta \\ \psi + \delta\psi \\ u_n + \delta u_n \\ v_n + \delta v_n \end{Bmatrix}    (4-29)
The positioning error from Equation 4-28 reduces to:
\tilde{e} = \begin{Bmatrix} e_x \\ e_y \end{Bmatrix} = \hat{z}_C A \tilde{b}\,\Big|_{\{\hat{p}\}} - z_C A \tilde{b}\,\Big|_{\{p\}}.    (4-30)
In order to establish an error metric for the sensitivity analysis, the inner product of
Equation 4-30 is used:

\tilde{e}^T \tilde{e} = \left( \hat{z}_C A \tilde{b}\,\big|_{\{\hat{p}\}} - z_C A \tilde{b}\,\big|_{\{p\}} \right)^T \left( \hat{z}_C A \tilde{b}\,\big|_{\{\hat{p}\}} - z_C A \tilde{b}\,\big|_{\{p\}} \right)
= \hat{z}_C^2\, \tilde{b}^T A^T A \tilde{b}\,\big|_{\{\hat{p}\}} - 2 \left( \hat{z}_C A \tilde{b}\,\big|_{\{\hat{p}\}} \right)^T \left( z_C A \tilde{b}\,\big|_{\{p\}} \right) + z_C^2\, \tilde{b}^T A^T A \tilde{b}\,\big|_{\{p\}}    (4-31)

Upon using Equation 4-24 to substitute for z_C A \tilde{b}\,\big|_{\{p\}}, the generic form of the error
variance becomes:

\tilde{e}^T \tilde{e} = \hat{z}_C^2\, \tilde{b}^T A^T A \tilde{b}\,\Big|_{\{\hat{p}\}} - 2 \left( \hat{z}_C A \tilde{b}\,\big|_{\{\hat{p}\}} \right)^T \begin{Bmatrix} x_G \\ y_G \end{Bmatrix} + \begin{Bmatrix} x_G \\ y_G \end{Bmatrix}^T \begin{Bmatrix} x_G \\ y_G \end{Bmatrix}    (4-32)
The partial derivative of the generic error variance is shown for an arbitrary parameter ξ.
\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\xi)} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\xi)}\, \tilde{b}^T A^T A \tilde{b} + \hat{z}_C^2 \left( \frac{\partial \tilde{b}^T}{\partial(\delta\xi)} A^T A \tilde{b} + \tilde{b}^T \frac{\partial A^T}{\partial(\delta\xi)} A \tilde{b} + \tilde{b}^T A^T \frac{\partial A}{\partial(\delta\xi)} \tilde{b} + \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\xi)} \right)
\quad - 2 \left( \frac{\partial \hat{z}_C}{\partial(\delta\xi)}\, \tilde{b}^T A^T + \hat{z}_C \frac{\partial \tilde{b}^T}{\partial(\delta\xi)} A^T + \hat{z}_C\, \tilde{b}^T \frac{\partial A^T}{\partial(\delta\xi)} \right) \begin{Bmatrix} x_G \\ y_G \end{Bmatrix}    (4-33)
In order to reduce the complexity of the analysis and to provide a more concise
representation of the effects of the parameter errors on the error variance, and without loss of
generality, the target position is set as the origin of the global coordinate system. Equation 4-30
reduces to:
\tilde{e} = \hat{z}_C A \tilde{b}\,\Big|_{\{\hat{p}\}}.    (4-34)

\tilde{e}^T \tilde{e} = \left( \hat{z}_C A \tilde{b} \right)^T \left( \hat{z}_C A \tilde{b} \right) = \hat{z}_C^2\, \tilde{b}^T A^T A \tilde{b}.    (4-35)
This quantity equates to the error variance of the positioning solution given the system
configuration and error values. It is desirable to determine the effects of errors in each parameter
used in the geo-positioning solution. Hence the partial derivative of the inner product is
calculated with respect to each parameter error.
The partial derivative of the reduced error variance is shown for an arbitrary parameter ξ.
\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\xi)} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\xi)}\, \tilde{b}^T A^T A \tilde{b} + \hat{z}_C^2 \left( \frac{\partial \tilde{b}^T}{\partial(\delta\xi)} A^T A \tilde{b} + \tilde{b}^T \frac{\partial A^T}{\partial(\delta\xi)} A \tilde{b} + \tilde{b}^T A^T \frac{\partial A}{\partial(\delta\xi)} \tilde{b} + \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\xi)} \right).    (4-36)
Equation 4-25 is restated below along with its partial derivatives with respect to {}^G P_{Co_x},
{}^G P_{Co_y}, {}^G P_{Co_z}, \phi, \theta, \psi, u_n, and v_n, evaluated at the hatted (measured) parameter values:

A = \begin{bmatrix} \hat{R}_{11} & \hat{R}_{12} & \hat{R}_{13} & {}^G \hat{P}_{Co_x} \\ \hat{R}_{21} & \hat{R}_{22} & \hat{R}_{23} & {}^G \hat{P}_{Co_y} \end{bmatrix}    (4-25)

\frac{\partial A}{\partial(\delta\, {}^G P_{Co_x})} = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}    (4-37)

\frac{\partial A}{\partial(\delta\, {}^G P_{Co_y})} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}    (4-38)

\frac{\partial A}{\partial(\delta\, {}^G P_{Co_z})} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}    (4-39)

\frac{\partial A}{\partial(\delta\phi)} = \begin{bmatrix} \partial\hat{R}_{11}/\partial\phi & \partial\hat{R}_{12}/\partial\phi & \partial\hat{R}_{13}/\partial\phi & 0 \\ \partial\hat{R}_{21}/\partial\phi & \partial\hat{R}_{22}/\partial\phi & \partial\hat{R}_{23}/\partial\phi & 0 \end{bmatrix}    (4-40)

\frac{\partial A}{\partial(\delta\theta)} = \begin{bmatrix} \partial\hat{R}_{11}/\partial\theta & \partial\hat{R}_{12}/\partial\theta & \partial\hat{R}_{13}/\partial\theta & 0 \\ \partial\hat{R}_{21}/\partial\theta & \partial\hat{R}_{22}/\partial\theta & \partial\hat{R}_{23}/\partial\theta & 0 \end{bmatrix}    (4-41)

\frac{\partial A}{\partial(\delta\psi)} = \begin{bmatrix} \partial\hat{R}_{11}/\partial\psi & \partial\hat{R}_{12}/\partial\psi & \partial\hat{R}_{13}/\partial\psi & 0 \\ \partial\hat{R}_{21}/\partial\psi & \partial\hat{R}_{22}/\partial\psi & \partial\hat{R}_{23}/\partial\psi & 0 \end{bmatrix}    (4-42)

\frac{\partial A}{\partial(\delta u_n)} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}    (4-43)

\frac{\partial A}{\partial(\delta v_n)} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.    (4-44)

Equation 4-26 is restated below along with its partial derivatives with respect to the same
parameters:

\tilde{b} = \begin{Bmatrix} \hat{u}_n \\ \hat{v}_n \\ 1 \\ 1 / \hat{z}_c \end{Bmatrix}    (4-26)
\frac{1}{\hat{z}_c} = -\frac{\hat{R}_{31}\hat{u}_n + \hat{R}_{32}\hat{v}_n + \hat{R}_{33}}{{}^G \hat{P}_{Co_z}}    (4-45)

\frac{\partial \tilde{b}}{\partial(\delta u_n)} = \begin{Bmatrix} 1 \\ 0 \\ 0 \\ -\hat{R}_{31} / {}^G \hat{P}_{Co_z} \end{Bmatrix}    (4-46)

\frac{\partial \tilde{b}}{\partial(\delta v_n)} = \begin{Bmatrix} 0 \\ 1 \\ 0 \\ -\hat{R}_{32} / {}^G \hat{P}_{Co_z} \end{Bmatrix}    (4-47)

\frac{\partial \tilde{b}}{\partial(\delta\, {}^G P_{Co_x})} = \begin{Bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{Bmatrix}    (4-48)

\frac{\partial \tilde{b}}{\partial(\delta\, {}^G P_{Co_y})} = \begin{Bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{Bmatrix}    (4-49)

\frac{\partial \tilde{b}}{\partial(\delta\, {}^G P_{Co_z})} = \begin{Bmatrix} 0 \\ 0 \\ 0 \\ \left(\hat{R}_{31}\hat{u}_n + \hat{R}_{32}\hat{v}_n + \hat{R}_{33}\right) / \left({}^G \hat{P}_{Co_z}\right)^2 \end{Bmatrix}    (4-50)

\frac{\partial \tilde{b}}{\partial(\delta\phi)} = \begin{Bmatrix} 0 \\ 0 \\ 0 \\ -\left(\hat{u}_n\, \partial\hat{R}_{31}/\partial\phi + \hat{v}_n\, \partial\hat{R}_{32}/\partial\phi + \partial\hat{R}_{33}/\partial\phi\right) / {}^G \hat{P}_{Co_z} \end{Bmatrix}    (4-51)

\frac{\partial \tilde{b}}{\partial(\delta\theta)} = \begin{Bmatrix} 0 \\ 0 \\ 0 \\ -\left(\hat{u}_n\, \partial\hat{R}_{31}/\partial\theta + \hat{v}_n\, \partial\hat{R}_{32}/\partial\theta + \partial\hat{R}_{33}/\partial\theta\right) / {}^G \hat{P}_{Co_z} \end{Bmatrix}    (4-52)

\frac{\partial \tilde{b}}{\partial(\delta\psi)} = \begin{Bmatrix} 0 \\ 0 \\ 0 \\ -\left(\hat{u}_n\, \partial\hat{R}_{31}/\partial\psi + \hat{v}_n\, \partial\hat{R}_{32}/\partial\psi + \partial\hat{R}_{33}/\partial\psi\right) / {}^G \hat{P}_{Co_z} \end{Bmatrix}    (4-53)
Equation 4-19 is restated below along with its partial derivatives with respect to {}^G P_{Co_x},
{}^G P_{Co_y}, {}^G P_{Co_z}, \phi, \theta, \psi, u_n, and v_n:

\hat{z}_c = \frac{-{}^G \hat{P}_{Co_z}}{\hat{R}_{31}\hat{u}_n + \hat{R}_{32}\hat{v}_n + \hat{R}_{33}}    (4-54)

\frac{\partial z_c}{\partial(\delta\, {}^G P_{Co_x})} = 0    (4-55)

\frac{\partial z_c}{\partial(\delta\, {}^G P_{Co_y})} = 0    (4-56)

\frac{\partial z_c}{\partial(\delta\, {}^G P_{Co_z})} = -\frac{1}{\hat{R}_{31}\hat{u}_n + \hat{R}_{32}\hat{v}_n + \hat{R}_{33}}    (4-57)

\frac{\partial z_c}{\partial(\delta u_n)} = \frac{{}^G \hat{P}_{Co_z}\, \hat{R}_{31}}{\left(\hat{R}_{31}\hat{u}_n + \hat{R}_{32}\hat{v}_n + \hat{R}_{33}\right)^2}    (4-58)

\frac{\partial z_c}{\partial(\delta v_n)} = \frac{{}^G \hat{P}_{Co_z}\, \hat{R}_{32}}{\left(\hat{R}_{31}\hat{u}_n + \hat{R}_{32}\hat{v}_n + \hat{R}_{33}\right)^2}    (4-59)

\frac{\partial z_c}{\partial(\delta\phi)} = \frac{{}^G \hat{P}_{Co_z} \left(\hat{u}_n\, \partial\hat{R}_{31}/\partial\phi + \hat{v}_n\, \partial\hat{R}_{32}/\partial\phi + \partial\hat{R}_{33}/\partial\phi\right)}{\left(\hat{R}_{31}\hat{u}_n + \hat{R}_{32}\hat{v}_n + \hat{R}_{33}\right)^2}    (4-60)

\frac{\partial z_c}{\partial(\delta\theta)} = \frac{{}^G \hat{P}_{Co_z} \left(\hat{u}_n\, \partial\hat{R}_{31}/\partial\theta + \hat{v}_n\, \partial\hat{R}_{32}/\partial\theta + \partial\hat{R}_{33}/\partial\theta\right)}{\left(\hat{R}_{31}\hat{u}_n + \hat{R}_{32}\hat{v}_n + \hat{R}_{33}\right)^2}    (4-61)

\frac{\partial z_c}{\partial(\delta\psi)} = \frac{{}^G \hat{P}_{Co_z} \left(\hat{u}_n\, \partial\hat{R}_{31}/\partial\psi + \hat{v}_n\, \partial\hat{R}_{32}/\partial\psi + \partial\hat{R}_{33}/\partial\psi\right)}{\left(\hat{R}_{31}\hat{u}_n + \hat{R}_{32}\hat{v}_n + \hat{R}_{33}\right)^2}    (4-62)
With the partial derivatives for the components of the error inner product defined, the
sensitivity of the error can be quantified for each parameter. Hence the sensitivity of the error
variance can be derived with respect to each parameter error.
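The analytic sensitivities that follow can be spot-checked by central finite differences of the error variance. This sketch uses an assumed camera-to-global rotation convention and example parameter values; it is illustrative only:

```python
import numpy as np

def camera_rotation(phi, theta, psi):
    """Assumed camera-to-global rotation from yaw phi, pitch theta, roll psi,
    with a nadir-pointing camera at zero attitude (illustrative convention)."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(phi), -s(phi), 0.0], [s(phi), c(phi), 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[c(theta), 0.0, s(theta)], [0.0, 1.0, 0.0], [-s(theta), 0.0, c(theta)]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, c(psi), -s(psi)], [0.0, s(psi), c(psi)]])
    return Rz @ Ry @ Rx @ np.diag([1.0, -1.0, -1.0])  # camera z toward the ground

def error_variance(params):
    """e~^T e~ for the parameter vector (P_x, P_y, P_z, phi, theta, psi, u_n, v_n)."""
    px, py, pz, phi, theta, psi, u_n, v_n = params
    R = camera_rotation(phi, theta, psi)
    z_c = -pz / (R[2, 0] * u_n + R[2, 1] * v_n + R[2, 2])
    A = np.hstack([R[:2, :], np.array([[px], [py]])])
    b = np.array([u_n, v_n, 1.0, 1.0 / z_c])
    e = z_c * (A @ b)
    return e @ e

# Central-difference sensitivity of the error variance to a yaw error
p = np.array([1.0, 0.5, 50.0, 0.02, 0.01, -0.01, 0.05, 0.03])
h = 1e-6
dp = np.zeros(8); dp[3] = h
sensitivity = (error_variance(p + dp) - error_variance(p - dp)) / (2 * h)
```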
The error sensitivity with respect to {}^G P_{Co_x} is shown in Equations 4-63 and 4-64.

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\, {}^G P_{Co_x})} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\, {}^G P_{Co_x})}\, \tilde{b}^T A^T A \tilde{b} + \hat{z}_C^2 \left( \frac{\partial \tilde{b}^T}{\partial(\delta\, {}^G P_{Co_x})} A^T A \tilde{b} + \tilde{b}^T \frac{\partial A^T}{\partial(\delta\, {}^G P_{Co_x})} A \tilde{b} + \tilde{b}^T A^T \frac{\partial A}{\partial(\delta\, {}^G P_{Co_x})} \tilde{b} + \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\, {}^G P_{Co_x})} \right)    (4-63)

Since \partial \hat{z}_C / \partial(\delta\, {}^G P_{Co_x}) = 0 (Equation 4-55) and \partial \tilde{b} / \partial(\delta\, {}^G P_{Co_x}) = 0 (Equation 4-48),
only the terms in \partial A / \partial(\delta\, {}^G P_{Co_x}) (Equation 4-37) survive, and each scalar term equals its transpose:

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\, {}^G P_{Co_x})} = 2 \hat{z}_C^2\, \tilde{b}^T \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}^T A \tilde{b}    (4-64)
The error sensitivity with respect to {}^G P_{Co_y} is shown in Equations 4-65 and 4-66.

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\, {}^G P_{Co_y})} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\, {}^G P_{Co_y})}\, \tilde{b}^T A^T A \tilde{b} + \hat{z}_C^2 \left( \frac{\partial \tilde{b}^T}{\partial(\delta\, {}^G P_{Co_y})} A^T A \tilde{b} + \tilde{b}^T \frac{\partial A^T}{\partial(\delta\, {}^G P_{Co_y})} A \tilde{b} + \tilde{b}^T A^T \frac{\partial A}{\partial(\delta\, {}^G P_{Co_y})} \tilde{b} + \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\, {}^G P_{Co_y})} \right)    (4-65)

Since \partial \hat{z}_C / \partial(\delta\, {}^G P_{Co_y}) = 0 (Equation 4-56) and \partial \tilde{b} / \partial(\delta\, {}^G P_{Co_y}) = 0 (Equation 4-49),
only the terms in \partial A / \partial(\delta\, {}^G P_{Co_y}) (Equation 4-38) survive:

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\, {}^G P_{Co_y})} = 2 \hat{z}_C^2\, \tilde{b}^T \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}^T A \tilde{b}    (4-66)
The error sensitivity with respect to {}^G P_{Co_z} is shown in Equations 4-67 and 4-68.

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\, {}^G P_{Co_z})} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\, {}^G P_{Co_z})}\, \tilde{b}^T A^T A \tilde{b} + \hat{z}_C^2 \left( \frac{\partial \tilde{b}^T}{\partial(\delta\, {}^G P_{Co_z})} A^T A \tilde{b} + \tilde{b}^T \frac{\partial A^T}{\partial(\delta\, {}^G P_{Co_z})} A \tilde{b} + \tilde{b}^T A^T \frac{\partial A}{\partial(\delta\, {}^G P_{Co_z})} \tilde{b} + \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\, {}^G P_{Co_z})} \right)    (4-67)

Since \partial A / \partial(\delta\, {}^G P_{Co_z}) = 0 (Equation 4-39), the surviving terms use
\partial \hat{z}_C / \partial(\delta\, {}^G P_{Co_z}) from Equation 4-57 and \partial \tilde{b} / \partial(\delta\, {}^G P_{Co_z}) from Equation 4-50:

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\, {}^G P_{Co_z})} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\, {}^G P_{Co_z})}\, \tilde{b}^T A^T A \tilde{b} + 2 \hat{z}_C^2\, \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\, {}^G P_{Co_z})}    (4-68)
The error sensitivity with respect to \phi is shown in Equations 4-69 and 4-70.

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\phi)} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\phi)}\, \tilde{b}^T A^T A \tilde{b} + \hat{z}_C^2 \left( \frac{\partial \tilde{b}^T}{\partial(\delta\phi)} A^T A \tilde{b} + \tilde{b}^T \frac{\partial A^T}{\partial(\delta\phi)} A \tilde{b} + \tilde{b}^T A^T \frac{\partial A}{\partial(\delta\phi)} \tilde{b} + \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\phi)} \right)    (4-69)

Substituting the partial derivatives \partial A / \partial(\delta\phi) (Equation 4-40), \partial \tilde{b} / \partial(\delta\phi)
(Equation 4-51), and \partial \hat{z}_C / \partial(\delta\phi) (Equation 4-60), and noting that each scalar term equals its transpose:

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\phi)} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\phi)}\, \tilde{b}^T A^T A \tilde{b} + 2 \hat{z}_C^2 \left( \tilde{b}^T \frac{\partial A^T}{\partial(\delta\phi)} A \tilde{b} + \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\phi)} \right)    (4-70)
The error sensitivity with respect to \theta is shown in Equations 4-71 and 4-72.

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\theta)} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\theta)}\, \tilde{b}^T A^T A \tilde{b} + \hat{z}_C^2 \left( \frac{\partial \tilde{b}^T}{\partial(\delta\theta)} A^T A \tilde{b} + \tilde{b}^T \frac{\partial A^T}{\partial(\delta\theta)} A \tilde{b} + \tilde{b}^T A^T \frac{\partial A}{\partial(\delta\theta)} \tilde{b} + \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\theta)} \right)    (4-71)

Substituting the partial derivatives \partial A / \partial(\delta\theta) (Equation 4-41), \partial \tilde{b} / \partial(\delta\theta)
(Equation 4-52), and \partial \hat{z}_C / \partial(\delta\theta) (Equation 4-61):

\frac{\partial\left(\tilde{e}^T \tilde{e}\right)}{\partial(\delta\theta)} = 2 \hat{z}_C \frac{\partial \hat{z}_C}{\partial(\delta\theta)}\, \tilde{b}^T A^T A \tilde{b} + 2 \hat{z}_C^2 \left( \tilde{b}^T \frac{\partial A^T}{\partial(\delta\theta)} A \tilde{b} + \tilde{b}^T A^T A \frac{\partial \tilde{b}}{\partial(\delta\theta)} \right)    (4-72)
The error sensitivity with respect to ψ is shown in Equations 4-73 and 4-74.
\[
\frac{\partial\,\delta\!\left(\tilde{e}^{T}\tilde{e}\right)}{\partial\psi}
= \frac{\partial \left(z^{G}_{C}\right)^{2}}{\partial\psi}\tilde{b}^{T}A^{T}A\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\frac{\partial\tilde{b}^{T}}{\partial\psi}A^{T}A\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\tilde{b}^{T}\frac{\partial A^{T}}{\partial\psi}A\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\tilde{b}^{T}A^{T}\frac{\partial A}{\partial\psi}\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\tilde{b}^{T}A^{T}A\frac{\partial\tilde{b}}{\partial\psi}
\]  (4-73)
Equation 4-74 gives the expanded matrix form of Equation 4-73, obtained by substituting the partial derivatives of \(A\) and \(\tilde{b}\) with respect to \(\psi\) into each term. (4-74)
The error sensitivity with respect to \(u^{n}\) is shown in Equations 4-75 and 4-76.
\[
\frac{\partial\,\delta\!\left(\tilde{e}^{T}\tilde{e}\right)}{\partial u^{n}}
= \frac{\partial \left(z^{G}_{C}\right)^{2}}{\partial u^{n}}\tilde{b}^{T}A^{T}A\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\frac{\partial\tilde{b}^{T}}{\partial u^{n}}A^{T}A\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\tilde{b}^{T}\frac{\partial A^{T}}{\partial u^{n}}A\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\tilde{b}^{T}A^{T}\frac{\partial A}{\partial u^{n}}\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\tilde{b}^{T}A^{T}A\frac{\partial\tilde{b}}{\partial u^{n}}
\]  (4-75)
Equation 4-76 gives the expanded matrix form of Equation 4-75, obtained by substituting the partial derivatives of \(A\) and \(\tilde{b}\) with respect to \(u^{n}\) into each term. (4-76)
The error sensitivity with respect to \(v^{n}\) is shown in Equations 4-77 and 4-78.
\[
\frac{\partial\,\delta\!\left(\tilde{e}^{T}\tilde{e}\right)}{\partial v^{n}}
= \frac{\partial \left(z^{G}_{C}\right)^{2}}{\partial v^{n}}\tilde{b}^{T}A^{T}A\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\frac{\partial\tilde{b}^{T}}{\partial v^{n}}A^{T}A\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\tilde{b}^{T}\frac{\partial A^{T}}{\partial v^{n}}A\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\tilde{b}^{T}A^{T}\frac{\partial A}{\partial v^{n}}\tilde{b}
+ \left(z^{G}_{C}\right)^{2}\tilde{b}^{T}A^{T}A\frac{\partial\tilde{b}}{\partial v^{n}}
\]  (4-77)
Equation 4-78 gives the expanded matrix form of Equation 4-77, obtained by substituting the partial derivatives of \(A\) and \(\tilde{b}\) with respect to \(v^{n}\) into each term. (4-78)
This derivation provides the general sensitivity equations for target geo-positioning from a
UAV. These equations provide the basis for the sensitivity analysis conducted in the following
chapters. These results will be combined with empirically derived sensor data to determine the
parameter significance relative to the induced geo-positioning error.
CHAPTER 5 UNMANNED ROTORCRAFT MODELING
In order to derive the equations of motion of the aircraft and to perform further analysis, an
aircraft model was developed based on previous work [7,8,9,10,16,17]. For this research, the
scope of the rotorcraft mechanics was limited to Bell-Hiller mixing and a flapping rotor head
design. A simplified aircraft model was developed previously [16,17] for simulation and
controller development; a similar approach is used here for the derivations.
Mettler et al. [7,8,9] use a more complex analysis when deriving their dynamic equations,
including factors such as fly-bar paddle mixing, main blade drag/torque effects, and
fuselage/stabilizer aerodynamic effects.
The actuator inputs commonly used for control of RC rotorcraft are composed of:
• δlon: Longitudinal cyclic control
• δlat: Lateral cyclic control
• δcol: Collective pitch control
• δrud: Tail rudder pitch control
• δthr: Throttle control
A body fixed coordinate system was used in order to relate sensor and motion information
in the inertial and relative reference frames. Figures 5-1 and 5-2 show the body fixed coordinate
system.
A transformation matrix was derived which relates the position and orientation of the body
fixed frame to the inertial frame. The orientation of the body fixed frame is related to the inertial
frame using a 3-1-2 rotation sequence. The inertial frame is initially in the North-East-Down
orientation. The coordinate system undergoes a rotation ψ about the Z axis, then a rotation φ
about the X' axis, and then a rotation θ about the Y'' axis. The compound rotation is given
below in Equation 5-1, and the subsequent rotations are shown in Equations 5-2, 5-3, and 5-4.
Figure 5-1. Top view of the body fixed coordinate system
Figure 5-2. Side view of the body fixed coordinate system
\[
R_{41} = R_{43}R_{32}R_{21}
\]  (5-1)
\[
R_{21} = \begin{bmatrix} C_{\psi} & S_{\psi} & 0 \\ -S_{\psi} & C_{\psi} & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]  (5-2)
\[
R_{32} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & C_{\phi} & S_{\phi} \\ 0 & -S_{\phi} & C_{\phi} \end{bmatrix}
\]  (5-3)
\[
R_{43} = \begin{bmatrix} C_{\theta} & 0 & -S_{\theta} \\ 0 & 1 & 0 \\ S_{\theta} & 0 & C_{\theta} \end{bmatrix}
\]  (5-4)
The final compound rotation matrix is given below in Equation 5-4.
\[
R_{41} = \begin{bmatrix}
C_{\theta}C_{\psi} - S_{\theta}S_{\phi}S_{\psi} & C_{\theta}S_{\psi} + S_{\theta}S_{\phi}C_{\psi} & -S_{\theta}C_{\phi} \\
-C_{\phi}S_{\psi} & C_{\phi}C_{\psi} & S_{\phi} \\
S_{\theta}C_{\psi} + C_{\theta}S_{\phi}S_{\psi} & S_{\theta}S_{\psi} - C_{\theta}S_{\phi}C_{\psi} & C_{\theta}C_{\phi}
\end{bmatrix}
\]  (5-4)
where \(C_{i}\) and \(S_{i}\) denote the cosine and sine of the angle \(i\), respectively.
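As a concrete check of the rotation sequence above, the following sketch composes the three elementary rotations in the 3-1-2 order of Equation 5-1 and verifies that the result is a proper rotation matrix. The function names (`rot_z`, `r41`, etc.) are illustrative, not from the original work:

```python
import math

def rot_z(psi):
    """Rotation about the Z axis (yaw), as in Equation 5-2."""
    c, s = math.cos(psi), math.sin(psi)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(phi):
    """Rotation about the X' axis (roll), as in Equation 5-3."""
    c, s = math.cos(phi), math.sin(phi)
    return [[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]]

def rot_y(theta):
    """Rotation about the Y'' axis (pitch), as in Equation 5-4."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]]

def matmul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def r41(psi, phi, theta):
    """Compound 3-1-2 rotation R41 = R43 R32 R21 (Equation 5-1)."""
    return matmul(rot_y(theta), matmul(rot_x(phi), rot_z(psi)))
```

Because each factor is orthogonal, `r41` returns an orthogonal matrix for any angles, which is a useful sanity check when transcribing the compound matrix by hand.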
The transformation matrix which converts a point measured in the body fixed frame to the
point measured in the inertial fixed frame is shown in Equation 5-5.
\[
{}^{Inertial}T_{Body} = \begin{bmatrix} {}^{Inertial}R_{Body} & {}^{Inertial}P_{Body_o} \\ 0_{1\times3} & 1 \end{bmatrix}
\]  (5-5)
where \({}^{Inertial}P_{Body_o}\) represents the position of the body-fixed frame origin measured in the inertial
frame.
The lateral and longitudinal motion of the aircraft is primarily controlled by the lateral and
longitudinal cyclic control inputs. For a flapping rotor head, the motions of the main rotor blades
form a disk whose orientation with respect to the airframe is controlled by these inputs. The
orientation of the main rotor disk is illustrated in Figure 5-3:
In this analysis, a represents the lateral rotation of the main rotor blade disk and b
represents the longitudinal rotation of the main rotor blade disk. In a report by Heffley and
Mnich [17], motion of the main rotor disc is approximated by a first order system as shown
below:
Figure 5-3. Main rotor blade angle
\[
\begin{Bmatrix} \dot{a} \\ \dot{b} \end{Bmatrix}
= \begin{bmatrix} -\dfrac{1}{\tau_{lat}} & 0 \\ 0 & -\dfrac{1}{\tau_{lon}} \end{bmatrix}
\begin{Bmatrix} a \\ b \end{Bmatrix}
+ \begin{bmatrix} \dfrac{a_{max}}{\tau_{lat}} & 0 \\ 0 & \dfrac{b_{max}}{\tau_{lon}} \end{bmatrix}
\begin{Bmatrix} \delta_{lat} \\ \delta_{lon} \end{Bmatrix}
\]  (5-6)
where \(\tau_{lat}\) is the lateral cyclic damping coefficient and \(\tau_{lon}\) is the longitudinal cyclic damping
coefficient.
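The first-order disk dynamics can be stepped forward numerically. The sketch below is a minimal Euler integration of Equation 5-6; the time constants, maximum disk angles, and cyclic inputs are hypothetical values chosen only to illustrate the first-order lag behavior:

```python
def flap_step(a, b, d_lat, d_lon, tau_lat, tau_lon, a_max, b_max, dt):
    """One Euler step of the first-order flapping model (Equation 5-6):
    each disk angle decays toward a steady-state value set by its cyclic
    input, with time constant tau."""
    a_dot = (-a + a_max * d_lat) / tau_lat
    b_dot = (-b + b_max * d_lon) / tau_lon
    return a + a_dot * dt, b + b_dot * dt

# Hypothetical parameters: 0.1 s time constants, 10 deg maximum disk tilt,
# half lateral cyclic and quarter negative longitudinal cyclic held constant.
a = b = 0.0
for _ in range(1000):                      # 1 s of simulated time at dt = 1 ms
    a, b = flap_step(a, b, 0.5, -0.25, 0.1, 0.1, 10.0, 10.0, 0.001)
```

After roughly ten time constants the disk angles settle at the steady-state values implied by the inputs (here about 5 deg lateral and -2.5 deg longitudinal).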
The angular velocity as measured in the body fixed frame B can be translated into angular
velocity in the inertial frame by using Equation 5-4:
\[
{}^{G}\omega = {}^{G}R_{B}\,{}^{B}\omega, \qquad {}^{G}R_{B} = \left(R_{41}\right)^{T}
\]  (5-7)
Figure 5-4. Main rotor thrust vector
The main rotor induces a moment and linear force on the body of the aircraft. These
induce lateral and longitudinal motion, and roll and pitch rotations of the aircraft. The main rotor
thrust vector \(\tilde{T}_{MR}\) is illustrated in Figure 5-4:
The main rotor thrust vector as measured in the body fixed frame is:
\[
\tilde{T}_{MR} = T_{MR}\begin{Bmatrix} -\sin(b) \\ \sin(a) \\ -\sqrt{1-\sin^{2}(b)-\sin^{2}(a)} \end{Bmatrix}
\]  (5-7)
The equations of motion of the aircraft were derived in the inertial frame using the
following equations:
\[
{}^{G}R_{B}\sum\tilde{F}_{B} = m\tilde{a}, \qquad
{}^{G}R_{B}\sum\tilde{M}_{B} = \left({}^{G}R_{B}\,I\,{}^{G}R_{B}^{T}\right)\tilde{\alpha}
\]  (5-8)
This derivation has resulted in a simplified helicopter dynamic model. This model
provides a foundation for simulation of the aircraft in the absence of an experimental platform.
This derivation was performed to provide the reader with basic helicopter dynamic principles
and an introduction to helicopter control mechanics. Now that a background on helicopter
mechanics and dynamics has been presented, the next chapter will discuss the use of onboard
sensors and signal processing for aircraft state estimation.
CHAPTER 6 STATE ESTIMATION USING ONBOARD SENSORS
This research proposes to derive and demonstrate the estimation of UGV states using a
UAV. In order to estimate the UGV states, the estimates of the UAV states are required. In this
research, sensor measurements from the UAV will be used to perform the state estimation of the
UAV and UGV. This research is primarily concerned with developing a remote sensing system;
its novelty lies in how the UAV dynamics and state measurements are used to passively
determine the states of the UGV.
Attitude Estimation Using Accelerometer Measurements
As discussed earlier, a two- or three-axis accelerometer can be used for determining the
attitude of an aircraft. Simple equations for determining the roll and pitch angles of an aircraft
using the acceleration measurements in the x and y body-fixed axes are shown in Equations 6-1
and 6-2.
\[
roll = \sin^{-1}\!\left(\frac{a_{y}}{g}\right)
\]  (6-1)
\[
pitch = \sin^{-1}\!\left(\frac{a_{x}}{g}\right)
\]  (6-2)
where \(a_{x}\) and \(a_{y}\) are the measured accelerations along the body-fixed x and y axes.
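Equations 6-1 and 6-2 can be written directly in code. This sketch is only valid under the same assumption as the equations themselves, namely that the vehicle is not accelerating, so the accelerometer measures the gravity vector alone; the clamping of the ratio is an added safeguard against sensor noise pushing the argument outside the domain of the arcsine:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def roll_pitch_from_accel(ax, ay):
    """Roll and pitch from body-axis accelerations (Equations 6-1 and 6-2).
    Inputs are clamped so asin stays defined when noise makes |a| > g."""
    roll = math.asin(max(-1.0, min(1.0, ay / G)))
    pitch = math.asin(max(-1.0, min(1.0, ax / G)))
    return roll, pitch
```

For example, a lateral acceleration of \(g\sin(5^\circ)\) with no longitudinal component corresponds to a 5 degree roll and zero pitch.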
A major problem with using accelerometers for attitude estimation is the effects from high
frequency vibration inherent to rotary wing aircraft. There are several characteristic frequencies
in the rotorcraft system to consider when analyzing the accelerometer signals. The main
characteristic frequencies are the speed of the main rotor blades, tail rotor blades, and
engine/motor. The highest frequency vibration will come from the engine/motor. The main gear
of the power transmission reduces the frequency to the main rotor head by about a factor of 9.8
for the Gas Xcell helicopter. The frequency is then further reduced by the tail rotor transmission
to the tail rotor blades. Any imbalances in the motor/motor fan, transmission gears, and rotor
heads/blades can cause significant vibration. Also any bent or misaligned shafts can cause
vibration in the system.
Due to the speed and number of moving parts in a helicopter, these aircraft have significant
vibration at the engine, main and tail rotor frequencies and harmonics. Extreme care must be
taken to ensure balance and proper alignment of all elements of the drive train. Time spent
balancing and inspecting components can pay off in the long run in system performance. The
airframe and payload structure must be carefully considered. Due to the energy content at
specific frequencies, any structural element with a natural frequency at or around the engine, or
main/tail rotor frequencies or harmonics could produce disastrous effects. Rigid mounting of
payload is highly discouraged as there would be no element other than the aircraft structure to
dissipate the cyclic loading.
Prospective researchers are forewarned that small and large unmanned aircraft systems
should be treated like any other piece of heavy machinery. In this case the payload was rigidly
attached to the base of the aircraft frame. Upon spool-up of the engine the head speed
transitioned into the natural frequency of the airframe with the most flexible component of the
system being the side frames of the aircraft. In less than a second the aircraft entered a resonant
vibration mode, which resulted in a tail-boom strike by the main blades. The main shaft
shattered, projecting the upper main bearing block, which struck the pilot over thirty feet away.
Airframe resonance is particularly dangerous in all rotary aircraft, from small unmanned systems
to large heavy-lift commercial and military helicopters.
A Fast Fourier Transform (FFT) of the accelerometer measurements shows distinct
spikes on all axes at specific frequencies, as shown in Figure 6-1.
Figure 6-1. Fast Fourier Transform of raw accelerometer data
Figure 6-2. Fast Fourier Transform of raw accelerometer data after low-pass filter
Strategic filtering at the major vibration frequencies can improve the attitude estimates
while still allowing the aircraft dynamics to be measured. Also, by attenuating only specific
frequency bands, the noise can be reduced while still producing fast signal response. A discrete
low-pass Butterworth IIR filter was used, with a 5 Hz pass band and a 10 Hz stop band, which
filtered the high-frequency noise evident between 15 and 25 Hz. The raw accelerometer FFT
response using
the low-pass filter is shown in Figure 6-2.
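To illustrate the effect of low-pass filtering on vibration-contaminated accelerometer data, the sketch below uses a single-pole IIR filter as a simple stand-in for the Butterworth design described in the text (a true Butterworth with separate pass and stop bands would be designed with a tool such as `scipy.signal.butter`). The sample rate and signal amplitudes are hypothetical:

```python
import math

def lowpass(x, fc, fs):
    """Single-pole IIR low-pass filter: y[k] = y[k-1] + alpha*(x[k] - y[k-1]).
    fc is the cutoff frequency in Hz, fs the sample rate in Hz."""
    w = 2.0 * math.pi * fc / fs
    alpha = w / (w + 1.0)        # discretized RC filter coefficient
    y, out = 0.0, []
    for xk in x:
        y = y + alpha * (xk - y)
        out.append(y)
    return out

fs = 200.0                          # sample rate in Hz (hypothetical)
t = [k / fs for k in range(2000)]   # 10 s of samples
# 1 Hz "aircraft dynamics" plus a 0.5-amplitude 20 Hz "vibration" component,
# mimicking the rotor/engine noise discussed in the text
raw = [math.sin(2 * math.pi * 1.0 * tk) + 0.5 * math.sin(2 * math.pi * 20.0 * tk)
       for tk in t]
filt = lowpass(raw, 5.0, fs)
```

With a 5 Hz cutoff, the 20 Hz vibration component is strongly attenuated while the 1 Hz motion passes nearly unchanged, which is the behavior seen in Figures 6-1 and 6-2.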
The FFT of the raw accelerometer data shows that the high-frequency noise is attenuated
beyond 5 Hz, thereby eliminating the major effects caused by the power train and high-frequency
electrical interference. Before the low-pass filter was applied, the roll and pitch measurements
were almost unusable, as shown in Figure 6-3.
Figure 6-3. Roll and Pitch measurement prior to applying low-pass filter
After the low-pass filter was applied, the measurements produced much more viable results
as shown in Figure 6-4.
Figure 6-4. Roll and Pitch measurement after applying low-pass filter
These results indicate the importance of proper vehicle maintenance and assembly. More
rigorous balancing and tuning of the vehicle can produce much better system performance and
reduce the work required to compensate for vibration in sensor data.
Heading Estimation Using Magnetometer Measurements
The heading of unmanned ground and air vehicles is commonly estimated by measuring
the local magnetic field of the earth. The magnetic north or compass bearing has been used for
hundreds of years for navigation and mapping. By measuring the local magnetic field, an
estimate of the northern magnetic field vector can be obtained. The error between true north, as
measured relative to latitude and longitude, and magnetic north varies depending on the location
on the globe; these variations are known and can be compensated for. Alternative methods for
determining the
heading of unmanned systems exist, including the use of highly accurate rate gyros. By precisely
measuring the angular rate of a static vehicle, the angular rate induced from the rotation of the
earth can be used to estimate heading. This requires extremely high precision rate gyros which
are currently too expensive, large, and sensitive for small unmanned systems.
Normally all three axes of the magnetometer would be used for heading estimation, but
because the aircraft does not perform any radical roll or pitch maneuvers, only the lateral and
longitudinal magnetometer measurements are required, as shown in Figure 6-5.
Figure 6-5. Magnetic heading estimate
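A two-axis heading estimate of the kind shown in Figure 6-5 reduces to a single arctangent. The sketch below is a minimal illustration; the axis sign convention (a north-east-down body frame with x forward and y to the right) and the optional declination correction are assumptions for this example, not the sensor's documented convention:

```python
import math

def heading_from_mag(mx, my, declination=0.0):
    """Tilt-free magnetic heading from the lateral and longitudinal
    magnetometer axes, valid for near-level flight. declination is a
    known local true-north correction in radians (assumed available)."""
    psi = math.atan2(-my, mx) + declination
    # wrap the result back into (-pi, pi]
    return math.atan2(math.sin(psi), math.cos(psi))
```

With this convention a field aligned with the body x axis gives zero heading, and a field along the negative y axis gives a 90 degree heading.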
UGV State Estimation
The geo-positioning equations derived in the previous chapters are restated below:
\[
\begin{Bmatrix} x_{G} \\ y_{G} \end{Bmatrix} = z^{G}_{C}\,A\,\tilde{b}
\]  (6-3)
where
\[
A = \begin{bmatrix}
C_{\theta}C_{\psi} - S_{\theta}S_{\phi}S_{\psi} & C_{\theta}S_{\psi} + S_{\theta}S_{\phi}C_{\psi} & -S_{\theta}C_{\phi} & P^{G}_{C_{O_x}} \\
-C_{\phi}S_{\psi} & C_{\phi}C_{\psi} & S_{\phi} & P^{G}_{C_{O_y}}
\end{bmatrix}
\]  (6-4)
\[
\tilde{b} = \begin{Bmatrix} u^{n} \\ v^{n} \\ 1 \\ \dfrac{1}{z_{c}} \end{Bmatrix}
\]  (6-5)
\[
z^{G}_{C} = \frac{-P^{G}_{C_{O_z}}}{u^{n}R_{31} + v^{n}R_{32} + R_{33}}.
\]  (6-6)
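The geo-positioning chain of Equations 6-3 through 6-6 can be sketched as a single function. The rotation matrix below is the 3-1-2 compound from Chapter 5 written out term by term; the sign conventions for the camera and inertial axes are illustrative assumptions, not the exact conventions of the original implementation:

```python
import math

def geo_position(u_n, v_n, phi, theta, psi, P):
    """Ground-plane intersection of a camera ray (Equations 6-3 to 6-6).
    P = (Px, Py, Pz) is the camera position in the inertial frame and
    (u_n, v_n) are the normalized pixel coordinates of the target."""
    cps, sps = math.cos(psi), math.sin(psi)
    cph, sph = math.cos(phi), math.sin(phi)
    cth, sth = math.cos(theta), math.sin(theta)
    # R41 = R43(theta) R32(phi) R21(psi), written out term by term
    R = [[cth*cps - sth*sph*sps, cth*sps + sth*sph*cps, -sth*cph],
         [-cph*sps,              cph*cps,               sph],
         [sth*cps + cth*sph*sps, sth*sps - cth*sph*cps, cth*cph]]
    Px, Py, Pz = P
    # Equation 6-6: scale factor from the known camera altitude
    z_c = -Pz / (u_n * R[2][0] + v_n * R[2][1] + R[2][2])
    # Equation 6-3 with A and b-tilde expanded (Equations 6-4 and 6-5)
    x = z_c * (u_n * R[0][0] + v_n * R[0][1] + R[0][2]) + Px
    y = z_c * (u_n * R[1][0] + v_n * R[1][1] + R[1][2]) + Py
    return x, y
```

As a sanity check, a target at the image center (\(u^n = v^n = 0\)) with level attitude geo-positions directly below the camera, regardless of altitude.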
By identifying two unique points fixed to the UGV, the direction vector can be defined:
\[
\hat{h} = \begin{Bmatrix} \cos(\psi) \\ \sin(\psi) \end{Bmatrix}
= \frac{1}{\sqrt{\left(x_{G_2}-x_{G_1}\right)^{2}+\left(y_{G_2}-y_{G_1}\right)^{2}}}
\begin{Bmatrix} x_{G_2}-x_{G_1} \\ y_{G_2}-y_{G_1} \end{Bmatrix}
\]  (6-7)
where the ground coordinates of the two points are obtained by applying Equation 6-3 to the
corresponding normalized pixel coordinates \(\left(u^{n}_{1},v^{n}_{1}\right)\) and \(\left(u^{n}_{2},v^{n}_{2}\right)\).
The heading of the vehicle can be found using:
\[
\psi = \operatorname{atan2}\!\left(\sin(\psi), \cos(\psi)\right)
\]  (6-8)
The kinematic motion of the vehicle can be described by the linear and angular velocity
terms. In the 2D case, the UGV is constrained to move in the x-y plane with only a z component
in the angular velocity vector. Hence the state vector is shown below:
\[
\dot{\tilde{x}} = \begin{Bmatrix} \dot{x} \\ \dot{y} \\ \dot{\psi} \end{Bmatrix}
= \begin{Bmatrix} v\cos(\psi) \\ v\sin(\psi) \\ \omega \end{Bmatrix}
= \begin{bmatrix} \cos(\psi) & 0 \\ \sin(\psi) & 0 \\ 0 & 1 \end{bmatrix}
\begin{Bmatrix} v \\ \omega \end{Bmatrix}
\]  (6-9)
In [27] the researchers define the kinematic equations for an Ackermann-style UGV.
These equations are restated here in our notation:
\[
\begin{Bmatrix} \dot{x} \\ \dot{y} \end{Bmatrix}
= \begin{Bmatrix} v\cos(\psi) + \dfrac{L}{2}\omega\sin(\psi) \\ v\sin(\psi) - \dfrac{L}{2}\omega\cos(\psi) \end{Bmatrix}
= \begin{bmatrix} \cos(\psi) & \dfrac{L}{2}\sin(\psi) \\ \sin(\psi) & -\dfrac{L}{2}\cos(\psi) \end{bmatrix}
\begin{Bmatrix} v \\ \omega \end{Bmatrix}
\]  (6-10)
Equation 6-10 follows the structure outlined in [28] and is rewritten in the form:
\[
\tilde{z} = \begin{bmatrix} \cos(\psi) & \dfrac{L}{2}\sin(\psi) \\ \sin(\psi) & -\dfrac{L}{2}\cos(\psi) \end{bmatrix}\tilde{x} + \tilde{v}
= H\tilde{x} + \tilde{v}
\]  (6-11)
where \(\tilde{z}\) is the measurement vector, \(\tilde{x}\) is the state vector, and \(\tilde{v}\) is the additive measurement
error. The measurement error can be isolated and the squared error can be written in the form:
\[
\tilde{v} = \tilde{z} - H\tilde{x}
\]
\[
\tilde{v}^{T}\tilde{v} = \left(\tilde{z} - H\tilde{x}\right)^{T}\left(\tilde{z} - H\tilde{x}\right)
\]  (6-12)
The measurement estimate is written in the form:
\[
\hat{z} = H\hat{x}
\]  (6-13)
Hence the sum of the squares of the measurement variations \(\tilde{z} - \hat{z}\) is represented by:
\[
J = \left(\tilde{z} - H\hat{x}\right)^{T}\left(\tilde{z} - H\hat{x}\right)
\]  (6-14)
The sum of the squares of the measurement variations is minimized with respect to the state
estimate as shown:
\[
\frac{\partial J}{\partial \hat{x}} = \frac{\partial}{\partial \hat{x}}\left(\tilde{z} - H\hat{x}\right)^{T}\left(\tilde{z} - H\hat{x}\right)
\]
\[
0 = -2H^{T}\tilde{z} + 2H^{T}H\hat{x}
\]
\[
\hat{x} = \left(H^{T}H\right)^{-1}H^{T}\tilde{z}
\]  (6-15)
Therefore the state estimate can be expressed as:
\[
\hat{x} = \left(H^{T}H\right)^{-1}H^{T}\tilde{z}
= \begin{bmatrix} 1 & 0 \\ 0 & \dfrac{4}{L^{2}} \end{bmatrix}
\begin{bmatrix} \cos(\psi) & \sin(\psi) \\ \dfrac{L}{2}\sin(\psi) & -\dfrac{L}{2}\cos(\psi) \end{bmatrix}\tilde{z}
= \begin{bmatrix} \cos(\psi) & \sin(\psi) \\ \dfrac{2}{L}\sin(\psi) & -\dfrac{2}{L}\cos(\psi) \end{bmatrix}\tilde{z}
\]  (6-16)
Equation 6-16 can be rewritten in the form:
\[
\begin{Bmatrix} \hat{v} \\ \hat{\omega} \end{Bmatrix}
= \begin{bmatrix} \cos(\psi) & \sin(\psi) \\ \dfrac{2}{L}\sin(\psi) & -\dfrac{2}{L}\cos(\psi) \end{bmatrix}
\begin{Bmatrix} \dot{x} \\ \dot{y} \end{Bmatrix}
\]  (6-17)
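Because the least-squares solution in Equation 6-17 is closed form, the UGV speed and turn rate estimates reduce to a few multiplications. The sketch below assumes the same measurement model (measured point velocities at heading \(\psi\) for a vehicle of wheelbase \(L\)); names are illustrative:

```python
import math

def vehicle_rates(x_dot, y_dot, psi, L):
    """Closed-form least-squares estimate of forward speed v and turn
    rate omega from measured point velocities (Equation 6-17)."""
    v = math.cos(psi) * x_dot + math.sin(psi) * y_dot
    omega = (2.0 / L) * (math.sin(psi) * x_dot - math.cos(psi) * y_dot)
    return v, omega
```

A quick consistency check: generating point velocities from Equation 6-10 for a chosen \((v, \omega)\) and feeding them back through `vehicle_rates` recovers the original speed and turn rate exactly.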
This chapter has discussed the use of onboard sensors for aircraft and ground vehicle state
estimation. These techniques will be used in the following chapter to determine the sensor noise
models for the sensitivity analysis. They will also be used for the validation of the geo-
positioning algorithm and comparison with simulation results.
CHAPTER 7 RESULTS
This chapter presents the results derived from the experiments performed using the
experimental aircraft and payload systems. It also presents the results of applying this research
to several engineering problems.
Geo-Positioning Sensitivity Analysis
The sensitivity of the error variance to the errors of each parameter is highly coupled with
the other parameters in the positioning solution. The mapping of the error variance sensitivity to
the parameters is highly nonlinear. In order to achieve a concise qualitative representation of the
effects of each parameter’s error in the positioning solution, a localized sensitivity analysis is
performed. This entails using common operating parameters and experimentally observed values
for the parameter errors. When substituted into the error variance partial differential equations,
the respective sensitivity of each parameter is observed for common testing conditions.
The error models for the various geo-positioning parameters were obtained using
manufacturer specifications and empirically derived noise measurements. The main aircraft used
for empirical noise analysis was the Miniature Aircraft Gas Xcell platform. This aircraft was
equipped with the testing payload discussed previously, and sensor measurements were recorded
with the aircraft on the ground but with the engine on and the head speed just below takeoff
speed.
The sensor used for measuring the global position of the camera was a WAAS enabled
Garmin 16A model GPS. This GPS provides a 5 Hz positioning solution. The manufacturer
specifications for horizontal and vertical positioning accuracy are less than 3 meters. For the
sensitivity analysis the lateral and longitudinal error distribution was defined using a uniform
radial error distribution bounded by a three meter range. The error distribution parameters for
the horizontal and vertical positioning measurements are stated in Table 7-1.
Parameter Value
\(\sigma_{\hat{P}^{G}_{C_{O_x}}},\ \sigma_{\hat{P}^{G}_{C_{O_y}}}\)   3 m
\(\sigma_{\hat{P}^{G}_{C_{O_z}}}\)   1.5 m
Table 7-1. Parameter standard deviations for the horizontal and vertical position
The sensor used for measuring the orientation of the camera was a Microstrain 3DMG
orientation sensor. This sensor provides three-axis measurements of linear acceleration, angular
rate, and magnetic field. The sensor measurements used for determining the roll and pitch angles
of the camera were the lateral and longitudinal linear accelerations. The roll and pitch angles
were calculated using these measurements as described previously. The roll and pitch
measurements used for defining the error distribution are shown in Figure 7-1.
Figure 7-1. Roll and Pitch measurements used for defining error distribution
The Microstrain 3DMG contains a three axis magnetometer for estimating vehicle heading.
The measurements made to estimate the heading error distribution for the sensitivity analysis are
shown in Figure 7-2.
Figure 7-2. Heading measurements used for defining error distribution
Using this data set, the standard deviations for the roll, pitch, and yaw were calculated and are
shown in Table 7-2.
Parameter Value
\(\sigma_{\phi}\)   4.4°
\(\sigma_{\theta}\)   6.8°
\(\sigma_{\psi}\)   0.9°
Table 7-2. Parameter standard deviations for the roll, pitch, and yaw angles
The error distributions for the normalized pixel coordinates were calculated using a series
of images of a triangular placard taken from various elevations as shown in Figure 7-3.
Figure 7-3. Image of triangular placard used for geo-positioning experiments
Figure 7-4. Results of x and y pixel error calculations
It was difficult to quantify the expected error distribution for the normalized pixel
coordinates. The error distribution for the x and y components of the normalized pixel
coordinates were estimated by comparing the detected vertex points of the placard with the
calculated centroid of the volume. The resulting variation is shown in Figure 7-4. The pixel
errors were then converted to normalized pixel errors and are shown in Table 7-3.
Parameter Value
\(\sigma_{u^{n}}\)   0.0021
\(\sigma_{v^{n}}\)   0.0070
Table 7-3. Normalized pixel coordinate standard deviations used during sensitivity analysis
A summary of the parameters for the sensor error distributions used in the following
sensitivity analysis is shown in Table 7-4.
Parameter Value
\(\sigma_{\hat{P}^{G}_{C_{O_x}}}\)   3 m
\(\sigma_{\hat{P}^{G}_{C_{O_y}}}\)   3 m
\(\sigma_{\hat{P}^{G}_{C_{O_z}}}\)   1.5 m
\(\sigma_{\phi}\)   4.4°
\(\sigma_{\theta}\)   6.8°
\(\sigma_{\psi}\)   0.9°
\(\sigma_{u^{n}}\)   0.0021
\(\sigma_{v^{n}}\)   0.0070
Table 7-4. Parameter standard deviations used during sensitivity analysis
The Monte Carlo method was used to evaluate each sensitivity equation. In order to
demonstrate the significance of each parameter in the Monte Carlo analysis, each parameter is
perturbed by a uniform error distribution based on experimentally derived measurements. This
analysis seeks to show the difference between the positioning errors based on each varying
parameter. The key element of this analysis is that the error sensitivity for each parameter is
calculated including errors from other parameters. This allows for the nonlinear and coupled
relationship between the parameters to propagate through the sensitivity analysis. The results of
this analysis determine the rank of the dominance of each parameter in causing positioning error.
The error sensitivity is used in the subsequent analysis and is restated in Equation 7-1.
\[
S_{\xi} = \frac{\partial\,\delta\!\left(\tilde{e}^{T}\tilde{e}\right)}{\partial \xi}
\]  (7-1)
The error sensitivity is evaluated using the common parameter values perturbed by a
uniform error distribution. The range of the error distribution is defined using experimentally
derived data. A uniform distribution was chosen instead of a normal distribution for the Monte
Carlo simulation because the normal distribution took too long to converge during testing: its
larger search space, combined with the nonlinear coupling between the parameters, made the
processing times unmanageable. The uniform distribution places hard limits on the error
distribution and traverses the search space quickly, providing a fast yet informative analysis. In
order to quantify the errors in position attributable to each parameter, Equation 7-1 was modified
as shown in Equation 7-2.
\[
\Delta\!\left(\tilde{e}^{T}\tilde{e}\right)_{\xi} = \left(\left.S_{\xi}\right|_{\tilde{p}} - \left.S_{\xi}\right|_{p}\right)e_{\xi}
\]  (7-2)
where
\(\tilde{p}\): parameter vector with all elements perturbed by the associated uniform error distribution
\(p\): parameter vector with all elements but \(\xi\) perturbed by the associated uniform error
distribution
\(e_{\xi}\): parameter error.
This formulation allows for the Monte Carlo simulation to calculate the error variance
distribution associated with each parameter using all parameter error distributions. This allows
not only for the coupling between the different parameters to affect the positioning error but also
the various parameter error distributions to affect the results. As with many complex systems,
not only does the inherent relationship between the various parameters effect the observations
but also the measurement errors of the various parameters.
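The perturbation scheme described above can be sketched in a few lines. This is an illustrative toy version, not the dissertation's actual simulation: a scalar stand-in replaces the geo-positioning error function, and the parameter names (`Pz`, `phi`, `un`) and coefficients are hypothetical, chosen only so that the altitude term dominates:

```python
import random

def monte_carlo_rank(error_fn, nominal, sigmas, n=20000, seed=1):
    """For each parameter, compare the squared error with every parameter
    perturbed against the squared error with that one parameter held at its
    nominal value, accumulating the average difference. Perturbations are
    uniform and bounded by the listed sigmas, as in the text."""
    rng = random.Random(seed)
    contrib = {k: 0.0 for k in nominal}
    for _ in range(n):
        pert = {k: nominal[k] + rng.uniform(-s, s) for k, s in sigmas.items()}
        e_all = error_fn(pert) ** 2
        for k in nominal:
            held = dict(pert)
            held[k] = nominal[k]          # all parameters but k perturbed
            contrib[k] += abs(e_all - error_fn(held) ** 2) / n
    # parameters sorted from most to least significant
    return sorted(contrib, key=contrib.get, reverse=True)

# Toy error model (hypothetical): altitude error dominates, pixel error is tiny.
rank = monte_carlo_rank(
    error_fn=lambda p: 2.0 * p["Pz"] + 0.5 * p["phi"] + 0.01 * p["un"],
    nominal={"Pz": 0.0, "phi": 0.0, "un": 0.0},
    sigmas={"Pz": 1.5, "phi": 0.077, "un": 0.0021},
    n=2000,
)
```

Because each parameter's contribution is computed while all the others are also perturbed, the nonlinear coupling between parameters propagates into the ranking, mirroring the approach described in the text.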
The simulation uses the empirically derived error distribution and the target system
configuration. For this analysis, the target system is a hovering unmanned rotorcraft, operating
at a 10 meter elevation. The parameter values used for this analysis are shown in Equation 7-3.
\[
p = \begin{Bmatrix}
P^{G}_{C_{O_x}} = 0.0\ \mathrm{m} \\
P^{G}_{C_{O_y}} = 0.0\ \mathrm{m} \\
P^{G}_{C_{O_z}} = 10\ \mathrm{m} \\
\phi = 0.0^{\circ} \\
\theta = 0.0^{\circ} \\
\psi = 0.0^{\circ} \\
u^{n} = 0.0 \\
v^{n} = 0.0
\end{Bmatrix}
\]  (7-3)
The error distribution is defined as a uniform distribution with bounds at the standard deviations
defined previously for the geo-positioning algorithm parameters. The histograms of the error
variance relative to each parameter are shown in Figure 7-5.
Figure 7-5. Error Variance Histograms for the respective parameter errors
The bounds of the results with respect to the parameter standard deviations are shown in
Table 7-5. For the given parameter error distributions and system configuration, the results show
that the order of significance is as follows: \(\delta(P^{G}_{C_{O_z}})\), \(\delta(P^{G}_{C_{O_x}})\), \(\delta(P^{G}_{C_{O_y}})\), \(\delta(\phi)\), \(\delta(\theta)\), \(\delta(\psi)\),
\(\delta(v^{n})\), and \(\delta(u^{n})\). The most significant term, \(\delta(P^{G}_{C_{O_z}})\), demonstrates the importance of the
altitude data in the geo-positioning calculations. This simulation has shown the process by
which the geo-positioning parameter rank was calculated using empirically derived sensor noise
distributions and a specified system configuration. By simply adapting the sensor noise
distributions and system configuration values, this process can be applied to any given system to
provide insight into geo-positioning error source dominance.
Parameter Value
\(\max\!\left(S_{\hat{P}^{G}_{C_{O_x}}}\right)\sigma_{\hat{P}^{G}_{C_{O_x}}}\)   18.00 m²
\(\max\!\left(S_{\hat{P}^{G}_{C_{O_y}}}\right)\sigma_{\hat{P}^{G}_{C_{O_y}}}\)   18.00 m²
\(\max\!\left(S_{\hat{P}^{G}_{C_{O_z}}}\right)\sigma_{\hat{P}^{G}_{C_{O_z}}}\)   20.53 m²
\(\max\!\left(S_{\phi}\right)\sigma_{\phi}\)   1.594 m²
\(\max\!\left(S_{\theta}\right)\sigma_{\theta}\)   0.3130 m²
\(\max\!\left(S_{\psi}\right)\sigma_{\psi}\)   0.0001524 m²
\(\max\!\left(S_{u^{n}}\right)\sigma_{u^{n}}\)   0.001240 m²
\(\max\!\left(S_{v^{n}}\right)\sigma_{v^{n}}\)   0.01316 m²
Table 7-5. Comparison of Monte Carlo Method results
Comparison of Empirical Versus Simulated Geo-Positioning Errors
The experimental results obtained using the Gas Xcell Aircraft equipped with a downward facing
camera and the experimental payload discussed earlier were compared with simulation results
using the estimated error distributions used in the Monte Carlo analysis. The testing conditions
used for the simulation analysis are shown in Equation 7-4. The results show that the geo-
positioning errors from simulation closely match the geo-positioning results obtained using the
experimental vehicle/payload setup. The geo-positioning results are shown in Figure 7-6.
\[
p = \begin{Bmatrix}
P^{G}_{C_{O_x}} = 0.0\ \mathrm{m} \\
P^{G}_{C_{O_y}} = 0.0\ \mathrm{m} \\
P^{G}_{C_{O_z}} = 10\ \mathrm{m} \\
\phi = 0.0^{\circ} \\
\theta = 0.0^{\circ} \\
\psi = 0.0^{\circ} \\
u^{n} = 0.0 \\
v^{n} = 0.0
\end{Bmatrix}
\]  (7-4)
Figure 7-6. Experimental and simulation geo-position results
The use of a uniform error distribution for the simulation produces different results
compared with a normal distribution. While the simulation results vary slightly from the
experimental results, the uniform distribution provides more of an absolute bound for the error
distribution.
Applied Work
Unexploded Ordnance (UXO) Detection and Geo-Positioning Using a UAV
This research investigated the automatic detection and geo-positioning of unexploded
ordnance using VTOL UAVs. Personnel at the University of Florida in conjunction with those at
the Air Force Research Laboratory at Tyndall Air Force Base, Florida, have developed a sensor
payload capable of gathering image, attitude, and position information during flight. A software
suite has also been developed that processes the image data in order to identify unexploded
ordnance (UXO). These images are then geo-referenced so that the absolute positions of the
UXO can be determined in terms of the ground reference frame. This sensor payload was
outfitted on a Yamaha RMAX aircraft and several experiments were conducted in simulated and
live bomb testing ranges. This section discusses the object recognition and classification
techniques used to extract the UXO from the images and presents the results from the simulated
and live bombing range experiments.
Figure 7-7. BLU97 Submunition
Researchers have used aerial imagery obtained from small unmanned VTOL aircraft for
control, remote sensing and mapping experiments [1,2,3]. In these experiments, it was necessary
to detect a particular type of ordnance. The primary UXO of interest in these experiments was
the BLU97. After deployment, this ordnance has a yellow main body with a circular decelerator.
The BLU97 is shown in Figure 7-7.
Experimentation VTOL Aircraft
The UXO experiments were conducted using several aircraft in order to demonstrate the
modularity of the sensor payload and to determine the capabilities of each aircraft. The first
aircraft that was used for testing was a Miniature Aircraft Gas Xcell RC helicopter. The aircraft
was configured for heavy-lift applications and has a payload capacity of 10-15 lbs. The typical
flight time is 15 minutes, and the aircraft provided a smaller VTOL platform for experiments at
UF and the Air Force Research Laboratory. The Xcell helicopter is shown in Figure 7-8.
Figure 7-8. Miniature Aircraft Gas Xcell Helicopter
The second aircraft used for testing was a Yamaha RMAX unmanned helicopter. With a
payload capacity of 60 lbs and a runtime of 20 minutes, this platform provided a more robust and
capable testing platform for range clearance operations. The RMAX is shown in Figure 7-9.
Figure 7-9. Yamaha RMAX Unmanned Helicopter
Sensor Payload
Several sensor payloads were developed for various UAV experiments. Each payload was
constructed modularly so as to enable attachment to various aircraft. The system schematic for
the sensor payload is shown in Figure 7-10.
Figure 7-10. Sensor Payload System Schematic
The detection sensor used for these experiments consisted of dual digital cameras operating in
the visible spectrum. These cameras provided high-resolution imagery in low-weight packaging.
These experiments sought to also explore and quantify the effectiveness of this sensor for UXO
detection.
Maximum Likelihood UXO Detection Algorithm
A statistical color model was used to differentiate pixels in the image that compose the
UXO. The maximum likelihood (ML) UXO detection algorithm used a priori knowledge of the
color distribution of the surface of the BLU97s in order to detect ordnance in an image. The
color model was constructed using the RGB color space. The feature vector was defined as
\[
\tilde{x} = \begin{bmatrix} r \\ g \\ b \end{bmatrix}
\]  (7-5)
where r, g, and b are the eight-bit color values for each pixel.
82
Using the k-means segmentation algorithm [18], an image containing a UXO was segmented and the UXO region selected. A segmented image is shown in Figure 7-11. This implementation used a 5D feature vector for each pixel, which allowed clustering on both spatial and color parameters. Results varied depending on the relative scaling of the feature vector components.
Figure 7-11. Segmentation software
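The segmentation step above can be sketched with a minimal Lloyd's-iteration k-means over the 5D (row, col, r, g, b) feature vectors. The `spatial_scale` parameter is an assumption standing in for whatever relative scaling of the spatial and color components was actually used:

```python
import numpy as np

def pixel_features(img, spatial_scale=1.0):
    """Build the 5-D feature vector (scaled row, scaled col, r, g, b) per pixel."""
    h, w, _ = img.shape
    rows, cols = np.mgrid[0:h, 0:w]
    return np.column_stack([rows.ravel() * spatial_scale,
                            cols.ravel() * spatial_scale,
                            img.reshape(-1, 3)]).astype(float)

def kmeans(X, k, iters=20):
    """Plain Lloyd's iteration with deterministic farthest-point initialization."""
    centers = [X[0].astype(float)]
    for _ in range(1, k):
        d = np.min(((X[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1), axis=1)
        centers.append(X[d.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Raising `spatial_scale` trades color purity of the clusters for spatial compactness, which matches the observation above that results varied with the relative scaling of the feature vector components.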
The distribution of the UXO pixels was assumed to be Gaussian [18]; therefore, the maximum likelihood method was used to approximate the UXO color model. The region containing the UXO pixels was selected and the color model calculated. The mean color vector is calculated as
μ̃ = (1/n) Σ_{i=1}^{n} x̃_i (7-6)
where n is the number of pixels in the selected region.
The covariance matrix was then calculated as
Σ = (1/n) Σ_{i=1}^{n} (x̃_i − μ̃)(x̃_i − μ̃)^T . (7-7)
The mean and covariance of the UXO pixels were then used to develop a classification
model. This classification model described the location and the distribution of the training data
within the RGB color space. The equation used for the classification metric was
p = exp( −(x̃ − μ̃)^T Σ^{-1} (x̃ − μ̃) ) . (7-8)
The classification metric is similar to the likelihood probability except that it lacks the pre-scaling coefficient required by Gaussian pdfs. The pre-scaling coefficient was removed in order to improve the performance of the classification algorithm; this allows the classification metric to range from 0 to 1. The analysis was performed by selecting a threshold on the classification metric in order to classify UXO pixels. This allowed images to be screened for UXO detection and the pixel coordinates of the UXO to be identified in the image.
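Equations 7-6 through 7-8 can be sketched compactly as follows, assuming the training pixels come from hand-selected UXO regions and that the default threshold shown is only illustrative:

```python
import numpy as np

def fit_color_model(train_pixels):
    """Eq. 7-6 and 7-7: sample mean and covariance of the training RGB pixels."""
    mu = train_pixels.mean(axis=0)
    diff = train_pixels - mu
    sigma = diff.T @ diff / len(train_pixels)
    return mu, sigma

def classification_metric(pixels, mu, sigma):
    """Eq. 7-8: exp(-(x - mu)^T Sigma^-1 (x - mu)); equals 1 at the mean
    and decays toward 0, so a single threshold in (0, 1) separates UXO pixels."""
    inv = np.linalg.inv(sigma)
    diff = pixels - mu
    return np.exp(-np.einsum('ij,jk,ik->i', diff, inv, diff))

def detect_uxo_pixels(image, mu, sigma, threshold=0.1):
    """Return the (row, col) coordinates of pixels whose metric exceeds threshold."""
    flat = image.reshape(-1, 3).astype(float)
    mask = classification_metric(flat, mu, sigma) > threshold
    return np.argwhere(mask.reshape(image.shape[:2]))
```

Dropping the Gaussian normalizing constant leaves the exponential term alone, which is why the metric is exactly 1 at the model mean regardless of the covariance.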
Initial experimentation with simulated UXO showed that the ML UXO detection algorithm successfully classified UXO in images obtained from both aircraft. As expected, the performance of the algorithm deteriorated when there were variations in the color of the surface of the UXO or in the contrast between the UXO and the background. In the data, the ML UXO detection algorithm failed when either the actual UXO color distribution fell far from the modeled distribution in RGB space or the background distribution closely encompassed the actual UXO color distribution. In these cases, the variations caused both false positives and false negatives. The use of an expanded training data set and multiple Gaussian distributions for modeling was investigated; it slightly improved UXO detection rates but greatly increased false-positive readings from background pixels. The algorithm's performance was also extremely sensitive to the likelihood threshold, introducing another tunable parameter.
Spatial Statistics UXO Detection Algorithm
Previous experimental results showed that when the background of the image closely resembled the UXO color, ML UXO detection performance degraded. In order to perform more robust UXO detection, an algorithm was developed whose parameters were based solely on the
dimensions of the UXO and not a trained color model. A more sophisticated pattern recognition
approach was used as shown in Figure 7-12.
Figure 7-12. Pattern Recognition Process
The spatial statistics UXO detection algorithm was designed to segment like-colored/shaded objects and classify them based on their dimensions. This would allow for robust performance in varying lighting, color, and background conditions. The assumptions made for this algorithm were that the UXO was of continuous color/shading and that the UXO region would have the scaled spatial properties of an actual UXO. Based on the measured above-ground level of the aircraft and the projective properties of the imaging device, the algorithm parameters were auto-tuned to accommodate the scaling introduced by the imaging process.
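The auto-tuning described above can be sketched with a simple nadir-view pinhole model. The focal length in pixels and the nominal BLU-97 dimensions used here (roughly 0.17 m long, 0.06 m in diameter) are illustrative assumptions, not the values used in the experiments:

```python
def ground_sample_distance(agl_m, focal_px):
    """Metres of ground covered per pixel for a nadir-pointing pinhole camera."""
    return agl_m / focal_px

def expected_uxo_pixels(agl_m, focal_px, length_m=0.17, diameter_m=0.06):
    """Expected major/minor extent of the UXO in pixels at the given altitude."""
    gsd = ground_sample_distance(agl_m, focal_px)
    return length_m / gsd, diameter_m / gsd
```

At 20 m above ground with a 1000-pixel focal length, the UXO would span roughly 8.5 by 3 pixels, and halving the altitude doubles both extents; these expected dimensions become the classification parameters for the segmented regions.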
In order to reduce the dimensionality of the data set, the color space was first converted
from RGB to HSV. By inspection, it was found that the saturation channel provided the greatest
contrast between the background and the UXO. The raw RGB image and the saturation channel
images are shown in Figure 7-13.
Figure 7-13. Raw RGB and Saturation Images of UXO
The pre-filtering process consisted of histogram equalization of the saturation image. This increased the contrast between the UXO pixels and the background and improved segmentation.
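A minimal numpy sketch of this pre-filtering step (RGB to HSV saturation, then histogram equalization), assuming float RGB images in [0, 255]:

```python
import numpy as np

def saturation_channel(rgb):
    """HSV saturation per pixel: (max - min) / max, with 0 for black pixels."""
    rgb = rgb.astype(float)
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    return np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-12), 0.0)

def equalize(channel, bins=256):
    """Histogram equalization: map each value through the empirical CDF,
    spreading the channel over [0, 1] to increase contrast."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum() / channel.size
    return np.interp(channel.ravel(), edges[:-1], cdf).reshape(channel.shape)
```

Gray and near-gray background pixels have near-zero saturation while painted ordnance surfaces tend to be strongly saturated, which is consistent with the observation above that the saturation channel gave the greatest contrast.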
The pre-filtered image was then segmented using the k-means algorithm, as shown in Figure 7-14.
Figure 7-14. Segmented Image
Each region was analyzed and classified using the scaled spatial statistics of the UXO.
Properties such as the major/minor axis length for the region were used to classify the regions.
Regions whose spatial properties closely matched those of the UXO were classified as UXO and
highlighted in the final image as shown in Figure 7-15.
Figure 7-15. Raw Image with Highlighted UXO
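The region classification step can be sketched as follows, assuming each segmented region is given as its (row, col) pixel coordinates. The major/minor axis lengths are estimated from the eigenvalues of the coordinate covariance (for an ellipse, the axes are 4 times the square roots of the eigenvalues), and regions are accepted if both axes fall within a tolerance of the altitude-scaled UXO dimensions:

```python
import numpy as np

def region_axes(coords):
    """Major and minor axis lengths of a pixel region, estimated as
    4 * sqrt(eigenvalue) of the (row, col) coordinate covariance."""
    eig = np.sort(np.linalg.eigvalsh(np.cov(coords.T)))[::-1]
    return 4.0 * np.sqrt(eig)

def is_uxo_region(coords, expected_major, expected_minor, tol=0.3):
    """Accept the region if both axes are within a fractional tolerance
    of the expected (altitude-scaled) UXO dimensions."""
    major, minor = region_axes(coords)
    return (abs(major - expected_major) <= tol * expected_major and
            abs(minor - expected_minor) <= tol * expected_minor)
```

Because the test uses only geometry, it behaves the same under the lighting, color, and background variations that defeated the color-model approach.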
Collaborative UAV/UGV Control
Recently, unmanned aerial vehicles (UAVs) have been used more extensively in military
operations. The improved perception abilities of UAVs compared with unmanned ground
vehicles (UGVs) make them more attractive for surveying and reconnaissance applications. A
combined UAV/UGV multiple vehicle system can provide aerial imagery, perception, and target
tracking along with ground target manipulation and inspection capabilities. This experiment was
conducted to demonstrate the application of a UAV/UGV system for simulated mine disposal
operations.
The experiment was conducted by surveying the target area with the UAV and creating a
map of the area. The aerial map was transmitted to the base station and post-processed to extract
the locations of the targets and develop waypoints for the ground vehicle to navigate. The
ground vehicle then proceeded to each of the targets, simulating the validation and disposal of the ordnance. Results include the aerial map, processed images of the extracted ordnance, and the ground vehicle’s ability to navigate to the target points.
The platforms used for the collaborative control experiments are shown in Figure 7-16.
Figure 7-16. TailGator and HeliGator Platforms
Waypoint Surveying
In order to evaluate the performance of the UAV/UGV system, the waypoints were
surveyed using a Novatel RT-2 differential GPS. This system provided two-centimeter accuracy or better when supplied with a base station correction signal. Accurate surveying of the visited
waypoints provided a baseline for comparison of the results obtained from the helicopter and the
corresponding path the ground vehicle traversed.
The UXOs were simulated to resemble BLU-97 ordnance. Aerial photographs of the ordnance, as shown in Figure 7-17, were collected along with the camera position and orientation. Using the transformation described previously, the global coordinates of the UXOs were calculated and compared with the precision survey data.
Figure 7-17. Aerial photograph of all simulated UXO
Local Map
A local map of the operating region was generated using the precision survey data. This
local map as shown in Figure 7-18 provided a baseline for all of the position comparisons
throughout this task.
Figure 7-18. Local map generated with Novatel differential GPS (Easting (m) vs. Northing (m), showing the surveyed differential waypoints and the field boundaries)
The data collected compare the positioning ability of the UGV with the ability of the UAV sensor system to accurately calculate the UXO positions. While both the UGV and UAV use WAAS-enabled GPS, there is some inherent error due to vehicle motion and environmental effects. The UGV’s control feedback was based on waypoint-to-waypoint control rather than a path-following control algorithm.
Once a set of waypoints was provided by the UAV, the UGV was programmed to visit
every waypoint as if to simulate the automated recovery/disposal process of the UXOs. The
recovery/disposal process was optimized by ordering the waypoints in a manner that would
minimize the total distance traveled by the UGV. This problem was similar to the traveling
salesman optimization problem in which a set of cities must all be visited once while minimizing
the total distance traveled. An A* search algorithm was implemented in order to solve this
problem.
The A* search algorithm operates by creating a decision graph and traversing the graph
from node to node until the goal is reached. For the problem of waypoint order optimization, the
current path distance g, the estimated remaining distance ĥ, and the estimated total distance f were evaluated for each node as

g = sum of the lengths of the straight-line segments connecting all predecessor waypoints

ĥ = (minimum distance between any two of the current and remaining waypoints) × (number of remaining waypoints)

f = g + ĥ . (7-10)
The admissibility requirement that the A* algorithm places on the heuristic ĥ is fulfilled because there exists no path from the current node to a goal node with a distance less than ĥ. Therefore the heuristic provides the lower bound required by the A* algorithm and guarantees optimality should a path exist.
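The search above can be sketched as follows, assuming 2-D waypoints and Euclidean distances. The state is (current waypoint, set of remaining waypoints), g accumulates the path length, and ĥ is the admissible bound described above:

```python
import heapq
import itertools
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def order_waypoints(start, waypoints):
    """A* over visit orders. Returns (index order, total distance)."""
    all_remaining = frozenset(range(len(waypoints)))

    def h(cur, remaining):
        # min pairwise distance among current + remaining, times hops left
        if not remaining:
            return 0.0
        pts = [cur] + [waypoints[i] for i in remaining]
        dmin = min(dist(p, q) for p, q in itertools.combinations(pts, 2))
        return dmin * len(remaining)

    tie = itertools.count()  # tiebreaker so the heap never compares states
    frontier = [(h(start, all_remaining), 0.0, next(tie), start, all_remaining, [])]
    best = {}
    while frontier:
        f, g, _, cur, remaining, order = heapq.heappop(frontier)
        if not remaining:
            return order, g
        if best.get((cur, remaining), math.inf) <= g:
            continue
        best[(cur, remaining)] = g
        for i in remaining:
            g2 = g + dist(cur, waypoints[i])
            r2 = remaining - {i}
            heapq.heappush(frontier,
                           (g2 + h(waypoints[i], r2), g2, next(tie),
                            waypoints[i], r2, order + [i]))
    return [], 0.0
```

Since every remaining hop connects two points of the current-plus-remaining set, each hop costs at least the minimum pairwise distance, so ĥ never overestimates and the first goal state popped is an optimal ordering.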
The UGV was commanded to come within a specified threshold of a waypoint before
switching to the next waypoint, as shown in Figure 7-19. The UGV consistently traveled within three meters of each of the desired waypoints, which is within the error envelope of typical WAAS GPS accuracy.
Figure 7-19. A comparison of the UGV’s path to the differential waypoints (Easting (m) vs. Northing (m), showing the differential waypoints, field boundaries, and UGV path)
The UAV calculated the waypoints based on its sensors, and these points were compared with the surveyed waypoints. There is an offset in the UAV’s data due to the GPS being used and to error in the transformation from image coordinates to global coordinates, as shown in Figure 7-20.
The UGV was able to navigate within several meters of the waypoints; however, it was limited by the vehicle kinematics. Future work involves a waypoint sorting algorithm that accounts for the turning radius of the vehicle.
Figure 7-20. UAV waypoints vs. UGV path (Easting (m) vs. Northing (m), showing the UAV-calculated waypoints and field boundaries)
Citrus Yield Estimation
Within the USA, Florida is the dominant state for citrus production, producing over two-
thirds of the USA’s tonnage, even in the hurricane-damaged 2004-2005 crop year. The citrus
crops of most importance to Florida are oranges and grapefruit, with tangerines and other citrus
being of less importance.
With contemporary globalization, citrus production and marketing are highly internationalized, especially for frozen juice concentrates, so there is great competition among countries. Tables 7-6 and 7-7 show the five most important countries for production of
oranges and grapefruit in two crop years. Production can vary significantly from year-to-year
due to weather, especially due to hurricanes. Note the dominance of Brazil in oranges and the
rise of China in both crops.
Table 7-6. Production of Oranges (1000’s metric tons) (based on NASS, 2006)

Country           2000-2001 Crop Year   2004-2005 Crop Year
Brazil            14,729                16,606
USA               11,139                8,293
China             2,635                 4,200
Mexico            3,885                 4,120
Spain             2,688                 2,700
Other Countries   9,512                 9,515
World Total       44,588                45,434
Table 7-7. Production of Grapefruit (1000’s metric tons) (based on NASS, 2006)

Country           2000-2001 Crop Year   2004-2005 Crop Year
China             0                     1,724
USA               2,233                 914
Mexico            320                   310
South Africa      288                   270
Israel            286                   247
Other Countries   680*                  330
World Total       3,807                 3,795

*Cuba produced a very significant 310 (1000 metric tons) in 2000-2001
The costs of labor, land, and environmental compliance are generally less in most of these
countries than in the USA. Labor is the largest cost for citrus production in the USA, even
though many workers, especially harvesters, are migrants. In order for producers from the USA
to be competitive, they must have advantages in productivity, efficiency, or quality to counteract
the higher costs.
This need for productivity, efficiency, and quality translates into a need for better
management. One management advantage that USA producers can use to remain competitive is
to utilize advanced technologies. Precision agriculture is one such set of technologies, which can be used to improve profitability and sustainability. Precision agriculture technologies were researched and applied to citrus later than to some other crops, but there has been successful precision agriculture research [19,20,21] and some commercial adoption [22].
Yield maps have been a very important part of precision agriculture for over twenty years
[23]. They allow management to make appropriate decisions to maximize crop value
(production quantity and quality) while minimizing costs and environmental impacts [24].
However, citrus yield maps, like most yield maps, can currently only be generated after the fruit
is harvested because the production data is obtained during the harvesting process. It would be
advantageous if the yield map was available before harvest because this would allow better
management, including better harvest scheduling and crop marketing.
There has been a history of using machine vision to locate fruit on trees for robotic
harvesting [25]. More recent work at the University of Florida has attempted to use machine
vision techniques to do on-tree yield mapping. Machine vision has been used to count the
number of fruit on trees [26]. Other researchers not only counted the fruit, but used machine
vision and ultrasonic sensors to determine fruit size [27]. This research has been extended to
allow for counting earlier in the season when the fruit is still quite green [28].
However, these methods all require vehicles to travel down the alleys between the rows of
trees to take the machine vision images. Researchers have demonstrated that a small remotely-
piloted mini-helicopter with machine vision hardware and software could be built and operated
in citrus groves [29]. They also discuss some of the recent research on using mini-helicopters in
agriculture, primarily conducted at Hokkaido University and the University of Illinois.
The objective of this research was to determine if images taken from a mini-helicopter
would have the potential to be used to generate yield maps. If so, there might be a possibility of
rapidly and flexibly producing citrus yield maps before harvest.
Materials and Methods
The orange trees used to test this concept were located at Water Conserv II, jointly owned
by the City of Orlando and Orange County. The facility, located about 20 miles west of
Orlando, is the largest water reclamation project (over 100 million liters per day) of its type in
the world, one that combines agricultural irrigation and rapid infiltration basins (RIBs). A block
of ‘Hamlin’ orange trees, an early maturing variety (as opposed to the later maturing ‘Valencia’
variety), was chosen for study.
The spatial variability of citrus tree health and production can range from very small to
extremely great depending upon local conditions. This block had some natural variability,
probably due to its variable blight infestation and topography. Additional variability was
introduced by the trees being subjected to irrigation depletion experiments. However, mainly
due to substantial natural rainfall in the 2005-2006 growing season, the variation in the yield is
within the bounds of what might be expected in contemporary commercial orange production,
even with the depletion experiments.
The irrigation depletion treatment (percent of normal irrigation water NOT applied) was
indicated by the treatment number. Irrigation depletion amounts were sometimes different for
the Spring and the Fall/Winter parts of the growing season, as seen in Table 7-8 below. The
replication was indicated by a letter suffix. Only 15 of the 42 trees (six treatments with seven
replications each) were used for this mini-helicopter imaging effort. Treatment 6 had no
irrigation except periodic fertigation, and the trees lived on rainfall alone.
Table 7-8. Irrigation Treatments

Treatment   Spring Depletion (%)   Fall/Winter Depletion (%)
1           25                     25
2           25                     50
3           25                     75
4           50                     50
5           50                     75
6           100                    100
The mini-helicopter used for this work was a Gas Xcell model modified for increased
payload by its manufacturer [30]. It was purchased in 2004 for about US$ 2000 and can fly up to
32 kph and carry a 6.8 kg payload. Its rotor is rated to 1800 rpm and has a diameter of less than
1.6 m. The instrumentation platform is described in MacArthur et al. (2005) [29] and includes GPS
with WAAS, two compact flash drives, a digital compass, and wireless Ethernet. The machine
vision system uses a Videre model STH-MDCS-VAR-C stereovision sensor.
The mini-helicopter was flown at the Water Conserv II site on 10 January 2006, a mostly
sunny day, shortly before noon. The helicopter generally hovered over each tree for a short
period of time as it moved down the row taking images with the Videre camera. The images
were stored on the helicopter and some were simultaneously transferred to a laptop computer
over the wireless Ethernet. In addition, a Canon PowerShot S2 IS five-megapixel digital camera
was used to take photos of the trees (in north-south rows) from the east and west sides.
The fruit on the individual trees were hand harvested by professional pickers on 13
February 2006. The fruit from each tree was weighed and converted to the industry-standard
measurement unit of “field boxes”. A field box is defined as 40.8 kg (90 lbs.).
The images were later processed manually. A “best” image of each tree was selected,
generally on the basis of lighting and complete coverage of the tree. Each overhead image was
cropped into a square that enclosed the entire tree and scaled to 960 by 960 pixels. The pixel
data from several oranges were collected from several representative images in the data set. The data were assumed to be normally distributed, so a Gaussian probability density function was calculated for each orange pixel dataset. Using a mixture of Gaussians to represent the orange class model, the images were analyzed and a threshold established based on the color model. The number of “orange” pixels was then calculated in each image and used in the subsequent analysis.
Results
The results of the image processing and the individual tree harvesting of the 15 trees
studied in this work are presented in Table 7-9. As Figure 7-21 illustrates, only irrigation
depletion treatment 6 had a great effect on the individual tree yields. Treatment 6 was 100%
depletion, or no irrigation. The natural rainfall was such in this production year that the other
treatments produced yields of at least four boxes per tree.
Table 7-9. Results from Image Processing and Individual Tree Harvesting

Treatment   Replication   Orange Pixels   Boxes of Fruit
1           B             13,990          7
1           G             6,391           6
2           B             11,065          8
2           C             2,202           4
2           E             5,884           5
2           F             17,522          7.5
3           B             2,778           6
4           A             4,433           6.2
4           B             5,516           4.8
4           E             5,002           4
4           F             11,559          4.3
5           B             9,069           7
5           C             17,088          6.8
6           B             5,376           2.5
6           D             6,296           1

Figure 7-21. Individual Tree Yields as Affected by Irrigation Depletion Treatments (fruit yield per tree in boxes versus treatment number)
The images were processed as described above. The number of “orange” pixels varied from 2,202 to 17,522. More pixels should indicate more fruit. However, as Figure 7-22 shows, there was substantial scatter in the data. The correlation can be improved somewhat by removing the nonirrigated treatment 6, as shown in Figure 7-23.
Figure 7-22. Individual Tree Yield as a Function of Orange Pixels in Image (y = 0.0002x + 3.5919, R² = 0.2835)
Figure 7-23. Individual Tree Yield as a Function of Orange Pixels with Nonirrigated Removed (y = 0.0002x + 4.5087, R² = 0.373)
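The two fits reported above can be reproduced directly from the Table 7-9 data; this sketch uses ordinary least squares, with the last two rows being the nonirrigated treatment-6 trees:

```python
import numpy as np

# (orange pixels, boxes of fruit) for the 15 trees of Table 7-9;
# the final two rows are the nonirrigated treatment-6 trees (6B, 6D)
data = np.array([[13990, 7.0], [6391, 6.0], [11065, 8.0], [2202, 4.0],
                 [5884, 5.0], [17522, 7.5], [2778, 6.0], [4433, 6.2],
                 [5516, 4.8], [5002, 4.0], [11559, 4.3], [9069, 7.0],
                 [17088, 6.8], [5376, 2.5], [6296, 1.0]])

def linear_fit(xy):
    """Least-squares line y = slope * x + intercept and its R^2."""
    x, y = xy[:, 0], xy[:, 1]
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return slope, intercept, r2

fit_all = linear_fit(data)       # approx. y = 0.0002x + 3.5919, R^2 = 0.2835
fit_no6 = linear_fit(data[:-2])  # approx. y = 0.0002x + 4.5087, R^2 = 0.373
```

Dropping the two nonirrigated trees raises R² from about 0.28 to about 0.37, matching Figures 7-22 and 7-23.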
Discussion
This work showed that good overhead images of citrus trees could be taken by a mini-
helicopter and processed to have some correlation with the individual tree yield. A tree with
fewer oranges should have fewer pixels in the image of the “orange” color. For example, tree
2C had only 2202 pixels and 4 boxes of fruit while tree 2F had 17,522 pixels and 7.5 boxes of
fruit. These are shown as Figures 7-24 and 7-25 below in which the “oranges” in the “After”
photo are enhanced to indicate their detection by the image processing algorithm.
Figure 7-24. Image of Tree 2C Before and After Image Processing
Figure 7-25. Image of Tree 2F Before and After Image Processing
The image processing used in this initial research was very simple. More sophisticated
techniques would likely improve the ability to better separate oranges from other elements in the
images. The strong sunlight likely contributed to some of the errors. Again, the use of more
sophisticated techniques from other previous research, especially the techniques developed for
yield mapping of citrus from the ground, would likely improve the performance in overhead
yield mapping.
A major assumption in this work is that the number of orange pixels visible is proportional
to the tree yield. However, the tree canopy (leaves, branches, other fruit, etc.) does hide some of
the fruit. Differing percentages of the fruit may be visible on differing trees. This is quite
apparent with the treatment 6 trees. Figure 7-26 shows the images for tree 6D. This tree,
obviously greatly affected by the lack of irrigation and a blight disease, has 6296 “orange” pixels
but only yielded one box of fruit. The poor health of the tree meant that there were not many
leaves to hide the interior oranges. Hence, a falsely high estimate of the yield was given.
Figure 7-27 shows the images taken from the ground of Trees 6D and 2E. Even though they had
similar numbers of “orange” pixels on the images taken from the helicopter, Tree 2E had five
times the number of fruit. The more vigorous vegetation, especially the leaves, meant that the
visible oranges on Tree 2E represented a smaller percentage of the total tree yield.
Figure 7-26. Image of Tree 6D Before and After Image Processing
Figure 7-27. Ground Images of Tree 6D and Tree 2E
Mini-helicopters are smaller and less expensive than piloted aircraft. Accordingly, the
financial investment in them may be justifiable to growers and small industry firms. The mini-
helicopters would give their owners the flexibility of being able to take images on their own
schedule. The mini-helicopters also do not cause a big disturbance in the fruit grove. The noise
and wind are moderate. They can operate in a rather inconspicuous manner, as shown by Figure
7-28.
Figure 7-28. Mini-Helicopter Operating at Water Conserv II
While the yield mapping results of this work may appear to be a little disappointing at first
glance, the results do indicate that there is potential. Getting accurate yield estimates by mini-
helicopter image processing will be a somewhat complex image processing task. But this is a
start. The images acquired by the helicopter are not that different, other than the direction, from those acquired from the ground. Hence, the techniques developed in the ground-based yield
mapping might be applicable.
CHAPTER 8 CONCLUSIONS
This work has presented the theory and equipment used for tracking and state estimation
of an unmanned ground vehicle system using an unmanned aerial vehicle system. This research
is unique in that it presents a comprehensive system description and analysis from the sensor and
hardware level to the system dynamics. This work also couples the dynamics and kinematics of
two agents to form a robust state estimation using completely passive sensor technology. A
sensitivity analysis of the geo-positioning algorithm was performed which identified the
significance of the parameters used in the algorithm.
The simulation results showed that the elevation error was the most dominant parameter in
the geo-positioning algorithm. By assuming that the error distributions would not change
dramatically across varying system configurations, it seems intuitively obvious that errors in the
system position would dominate at low altitudes due to the close mapping of errors in system
position to target position. This was shown in the results by the dominance of the three position
parameters relative to all other parameters. It was hypothesized that as the elevation of the
aircraft increases, the dominance of the horizontal position errors would diminish and the
orientation and pixel errors would begin to dominate. While the errors attributed to the
horizontal position parameters would remain relatively constant, the errors attributed to the
orientation and pixel parameters would increase due to the projective nature of the geo-
positioning algorithm. This hypothesis was tested by performing the sensitivity analysis again at
varying elevations.
The sensitivity analysis was performed from 10 meters to 100 meters to show the
dominance trend of the various parameters. The results are shown in Figure 8-1.
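The dominance trend can be illustrated with a simple flat-ground, nadir-camera model (an illustrative assumption; the actual analysis used the full geo-positioning algorithm): a horizontal position error maps one-to-one into target error at any altitude, while an orientation or pixel-angle error of δ radians shifts the ground intercept by roughly h·tan(δ):

```python
import math

def target_error_from_position(pos_err_m):
    """A horizontal platform-position error shifts the ground intercept
    by the same amount, independent of altitude."""
    return pos_err_m

def target_error_from_angle(angle_err_rad, agl_m):
    """An orientation (or pixel-angle) error tilts the nadir ray, shifting
    the flat-ground intercept by agl * tan(err)."""
    return agl_m * math.tan(angle_err_rad)
```

With a 1 m horizontal error and a 0.01 rad orientation error, position dominates at 10 m (1 m vs. about 0.1 m) while orientation dominates at 200 m (1 m vs. about 2 m), consistent with the crossover behavior in Figure 8-1.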
Figure 8-1. Simulated error calculation versus elevation (log-log plot of maximum variance (m²) versus elevation (m) for the parameters Px, Py, Pz, Phi, Theta, Psi, un, and vn)
These results show how the geo-positioning error attributable to the horizontal position
error stays constant as the elevation increases. Also, all orientation and pixel parameters increase
their dominance as the elevation increases. These results validate the hypothesis that the
horizontal position parameters will dominate at low altitude and the orientation and pixel
parameters will dominate at higher altitudes. This finding is significant in that it shows the
usefulness of this analysis in predicting which parameters are most dominant in a given system.
Moreover, this analysis can be used to guide the prospective researcher to what sensor
specifications would most benefit their application given the anticipated system operating
conditions. For example, a low altitude UAV application should employ a high accuracy
horizontal and vertical positioning system. Conversely, a high altitude reconnaissance UAV
should employ a high accuracy IMU and camera system. Depending on the application, the
emphasis should shift towards the sensors which would most benefit the system performance.
This analysis can also provide the anticipated system performance for a given system
configuration. For the researcher, this research provides a valuable tool for assisting the system
level design.
These results were also validated experimentally. The experimental aircraft system was
flown at various altitudes. Using the previous data obtained for use in the geo-position error
analysis, the target position error was evaluated relative to the aircraft elevation. These results
show that the error distribution increased with increasing altitude as shown in Figure 8-2.
Figure 8-2. Geo-Position error versus elevation
This work can be extended by improving the tracking and state estimation techniques to further reduce system errors, and by performing a sensitivity analysis given the system configuration and parameter error statistics. Future work can also include autonomous control of the aircraft by way of UGV tracking to form collaborative heterogeneous control strategies.
LIST OF REFERENCES
1. Drake, P. 1991. Fire mapping using airborne global positioning. Engineering Field Notes 23:17-24. USDA Forest Service Engineering Staff.
2. Feron, E., and J. Paduano. 2004. Vision technology for precision landing of agricultural autonomous rotorcraft. Proceedings, Automation Technology for Off-Road Equipment. Kyoto, Japan. October 7-8. pp. 64-73.
3. Iwahori, T., R. Sugiura, K. Ishi, and N. Noguchi. 2004. Remote sensing technology using an unmanned helicopter with a control pan-head. Proceedings, Automation Technology for Off-Road Equipment. Kyoto, Japan. October 7-8. pp. 220-225.
4. Dana, P. H., “Global Positioning System Overview,” 5/1/2001, http://www.colorado.edu/geography/gcraft/notes/gps/gps.html, 1/14/2003.
5. Federal Aviation Administration “Wide Area Augmentation System,” http://gps.faa.gov/Programs/WAAS/waas.htm, 4/21/2003.
6. Schrage D., Yillikci Y., Liu S., Prasad J., Hanagud S., “Instrumentation of the Yamaha R-50/RMAX Helicopter testbeds for Airloads Identification and follow-on research”. 25th European Rotorcraft Forum, 1999.
7. Mettler B., Tischler M.B., Kanade T., "System Identification of Small-Size Unmanned Helicopter Dynamics". American Helicopter Society 55th Forum, May, 1999.
8. Mettler B., Tischler M., Kanade T., “System Identification of a Model-Scale Helicopter”. Technical report CMU-RI-TR-00-03, Robotics Institute, Carnegie Mellon University, January, 2000.
9. Mettler B., Dever C., Feron E., “Identification Modeling, Flying Qualities, and Dynamic Scaling of Miniature Rotorcraft”, Nato SCI-120 Symposium on “Challenges in Dynamics, System Identification, Control and Handling Qualities for Land, Sea”, Berlin, Germany, May, 2002.
10. Johnson E., DeBitetto P., “Modeling and Simulation for Small Autonomous Helicopter Development”. AIAA Modeling & Simulation Technologies Conference, 1997.
11. Civita M., Papageorgiou G., Messner W., Kanade T., “Design and Flight Testing of a High-Bandwidth H-infinity Loop Shaping Controller for a Robotic Helicopter”. Journal of Guidance, Control, and Dynamics, Vol. 29, No. 2, March-April 2006, pp. 485-494.
12. Civita M., Papageorgiou G., Messner W., Kanade T., “Design and Flight Testing of a Gain-Scheduled H-infinity Loop Shaping Controller for Wide-Envelope Flight of a Robotic Helicopter”. Proceedings of the 2003 American Control Conference, pp. 4195-4200, Denver, CO, 4-6 June 2003.
13. Kron A., Lafontaine J., Alazard D., “Robust 2-DOF H-infinity Controller for Highly Flexible Aircraft: Design Methodology and Numerical Results”. Can. Aeronautics and Space J., Vol. 49, No. 1, pp. 19-29, 2003.
14. B. Mettler, M.B. Tischler, and T. Kanade, "Attitude Control Optimization for a Small-Scale Unmanned Helicopter," AIAA Guidance, Navigation and Control Conference, 2000.
15. Bouguet, J., Camera Calibration Toolbox for MATLAB®, http://www.vision.caltech.edu/bouguetj/calib_doc/.
16. Koo T., Ma Y., Sastry S., “Nonlinear Control of a Helicopter Based Unmanned Aerial Vehicle Model”. IEEE Transactions on Control Systems Technology, January 2001.
17. Heffley R., Mnich M, “Minimum Complexity Helicopter Simulation Math Model Program”. Manudyne Report 83-2-3, October 1986.
18. Duda, R., Hart, P., Stork, D., Pattern Classification, John Wiley and Sons, Inc., 2001.
19. Schueller, J.K., J.D. Whitney, T.A. Wheaton, W.M. Miller, and A.E. Turner. 1999. Low-cost automatic yield mapping in hand-harvested citrus. Computers and Electronics in Agriculture 23:145-153.
20. Whitney, J.D., W.M. Miller, T.A. Wheaton, M. Salyani, and J.K. Schueller. 1999. Precision farming applications in Florida citrus. Applied Engineering in Agriculture. 15: 399-403.
21. Cugati, S.A., W.M. Miller, J.K. Schueller, and A.W. Schumann. 2006. Dynamic characteristics of two commercial hydraulic flow-control valves for a variable-rate granular fertilizer spreader. ASABE Paper No. 061071.
22. Sevier, B. J., and W. S. Lee. 2005. Precision farming adoption in Florida citrus: A grower case study. ASAE Paper No. 051054.
23. Schueller, J.K. and Y.H. Bae. 1987. Spatially attributed automatic combine data acquisition. Computers and Electronics in Agriculture. 2:119-127.
24. Schueller, J.K. 1992. A review and integrating analysis of spatially-variable control of crop production. Fertilizer Research. 33:1-34.
25. Slaughter, D.C., and R.C. Harrell. 1987. Color vision in robotic fruit harvesting. Transactions of the ASAE. 30(4):1144-1148.
26. Annamalai, P., W. S. Lee, and T. F. Burks. 2004. Color vision system for estimating citrus yield in real-time. ASAE Paper No. 043054.
27. Regunathan, M. and W. S. Lee. 2005. Citrus yield mapping and size determination using machine vision and ultrasonic sensors. ASAE Paper No. 053017.
28. Kane, K. E., and W. S. Lee. 2006. Spectral sensing of different citrus varieties for precision agriculture. ASABE Paper No. 061065.
29. MacArthur, D.K., J.K. Schueller, and C.D. Crane. 2005. Remotely-piloted mini-helicopter imaging of citrus. ASAE Paper No. 051055.
30. Miniature Aircraft of Sorrento, Florida, http://www.miniatureaircraftusa.com/.
31. DOC. 2006. CS Market Research: Unmanned Aerial Vehicles (UAVs). U.S.A. Department of Commerce. U.S. Commercial Service. www.buyusa.gov/newengland/155.pdf (accessed 25 May 2006).
32. NASS. 2006. Florida agricultural statistics: Citrus summary 2004-2005. United States Department of Agriculture, National Agricultural Statistics Service, Florida Field Office. February. 54 pp.
33. Crane, C., Duffy, J., Kinematic Analysis of Robot Manipulators, Cambridge University Press, 1998.
34. Faugeras, O., Luong, Q., The Geometry of Multiple Images, The MIT Press, 2001.
35. Center for Advanced Aviation Systems Development, The MITRE Corporation, “Navigation,” 25 February 2002, www.caasd.org/work/navigation.html (accessed 21 April 2003).
36. Garmin, GPS16 OEM GPS receiver, Olathe, Kansas.
37. Burschka, D., Hager, G., “Vision-Based Control of Mobile Robots,” Proc. of the IEEE International Conference on Robotics and Automation, pp. 1707-1713, 2001.
38. Chen, J., Dawson, D.M., Dixon, W.E., Behal, A., “Adaptive Homography-Based Visual Servo Tracking for Fixed and Camera-in-Hand Configurations,” IEEE Transactions on Control Systems Technology, accepted, to appear.
39. Chen, J., Dixon, W.E., Dawson, D.M., McIntire, M., “Homography-Based Visual Servo Tracking Control of a Wheeled Mobile Robot,” Proc. of the IEEE International Conference on Intelligent Robots and Systems, Las Vegas, Nevada, pp. 1814-1819, October 2003.
40. Chen, J., Dixon, W.E., Dawson, D.M., Chitrakaran, V., “Visual Servo Tracking Control of a Wheeled Mobile Robot with a Monocular Fixed Camera,” Proceedings of the IEEE Conference on Control Applications, Taipei, Taiwan, pp. 1061-1066, 2004.
41. Das, A.K., et al., “Real-Time Vision-Based Control of a Nonholonomic Mobile Robot,” Proc. of the IEEE International Conference on Robotics and Automation, pp. 1714-1719, 2001.
42. Dixon, W.E., Dawson, D.M., Zergeroglu, E., Behal, A., “Adaptive Tracking Control of a Wheeled Mobile Robot via an Uncalibrated Camera System,” IEEE Transactions on Systems, Man, and Cybernetics-Part B: Cybernetics, Vol. 31, No.
43. MacArthur, E., Crane, C., “Development of a Multi-Vehicle Simulator and Control Software,” 2005 Florida Conference on Recent Advances in Robotics, Gainesville, FL, 2005.
44. The Analytic Sciences Corporation, Applied Optimal Estimation, The MIT Press, Cambridge, Massachusetts, and London, England, 1974.
BIOGRAPHICAL SKETCH
Donald Kawika MacArthur was born in Miami, Florida. He attended the Maritime and Science Technology (MAST) High School and then the University of Florida, where he graduated summa cum laude with a B.S. in mechanical engineering. He continued his graduate studies at the University of Florida, earning his master's degree and, ultimately, his Ph.D. His research has spanned a range of vehicle automation technologies, including computer vision, autonomous ground vehicle control and navigation, sensor systems for guidance, navigation, and control, unmanned aircraft automation, and embedded hardware and software design.