California State Polytechnic University, Pomona
AUVSI Team
2017 Student UAS Competition
Technical Journal Paper
AUVSI Team Composition:
Team Lead: Andrew Rashid
Department of Aerospace Engineering
Thomas Fergus, Miguel Lopez, Noah Miller, Alexander Rey,
Cristal Ruano-Ramirez, Luis Rodriguez, Kyle Winterer
Department of Aerospace Engineering
Bogdan Pugach
Department of Electrical and Computer Engineering
Faculty Adviser: Dr. Subodh Bhandari
Abstract
With this being the seventh year that Cal Poly Pomona has participated in AUVSI's SUAS Competition, the team fully
expects to improve on the successes and performance of previous years. This year, the waypoint navigation, search
area, air delivery, interoperability, and sense, detect, and avoid tasks will be attempted. A new airframe, a
hexacopter, was chosen to replace the fixed-wing aircraft of previous years. A new camera system and a new payload
drop system have been designed and built to complete the image recognition and air delivery tasks. Multiple flight
tests were conducted to prove the performance of all system elements, giving the team assurance that the system
will perform successfully at the 2017 competition.
Table of Contents
1.0 System Engineering Approach
1.1 Mission Requirements Analysis
1.2 Design Rationale
1.2.1 Aircraft Subsystem
1.2.2 Autopilot Subsystem
1.3 Programmatic Risk and Mitigations
2.0 System Design
2.1 Aircraft Design
2.1.1 Airframe
2.1.2 Power System
2.2 Autopilot
2.3 Sense, Detect, and Avoid
2.4 Imaging System
2.4.1 Camera
2.4.2 Camera Gimbal
2.5 Object Detection, Classification, Localization
2.5.1 Imaging Computers
2.5.2 Image Processing
2.6 Communications
2.6.1 RF Transmitter Design
2.6.2 Radio Frequencies
2.6.3 Antenna Selection
2.6.4 Ground Control Station
2.6.5 Mission Planner Computer
2.6.6 Aircraft Tracking Antenna System
2.6.7 Telemetry Processing
2.6.8 Interoperability
2.7 Air Delivery
2.7.1 Air Delivery Mechanism
2.7.2 Air Delivery Software
2.8 Cyber Security
3.0 Testing and Evaluation
3.1 Developmental Testing
3.1.1 Interoperability Performance
3.1.2 Sense, Detect, and Avoid Performance
3.1.3 Imaging Software
3.2 Individual Component Testing
3.2.1 Camera
3.2.2 Payload Drop
3.3 Mission Testing Plan
3.3.1 Flight Testing
3.3.2 Overall Performance
4.0 Safety
4.1 Developmental Risks and Mitigations
4.2 Mission Risks and Mitigations
4.3 Operational Risks and Mitigations
5.0 Acknowledgements
References
1.0 System Engineering Approach
1.1 Mission Requirements Analysis
True to a Systems Engineering approach, the starting point for Cal Poly’s UAS design was to first dissect mission
objectives and their corresponding requirements to determine system level requirements. A detailed understanding of
the system level requirements is critical as it allows for the determination of the tradeoffs and complexity of each task.
These elements were looked at with respect to their corresponding point value to determine their priority in the overall
design of the UAS. Table 1.1-1 shows the breakdown of a few of the higher point-value mission objectives.
Table 1.1-1: System-level requirements of the AUVSI SUAS Competition

| Mission Objective | Points | Objective Requirements (for full points) | System Requirements | Considerations and Trade-Offs | Complexity |
|---|---|---|---|---|---|
| Autonomous Flight | 12 | No safety-pilot takeovers; auto takeoff and landing | Autopilot tuned for chosen platform; reliable takeoff and landing sequence | Auto takeoff/landing is a potential failure point | Med |
| Waypoint Accuracy | 15 | max(0, (100 ft - distance)/100); valid telemetry at 1 Hz | Ability to handle wind; turn radius of air vehicle; telemetry transmission to ground | Precision turn radius = lower airspeed | High |
| Stationary Obs. Avoidance | 10 | Avoid cylinders with 30-300 ft radius and 30-750 ft height | Customized flight paths | Non-linearized flight paths = less efficient search pattern | Med |
| Moving Obs. Avoidance | 10 | Avoid spheres with 30-200 ft radius at 0-40 KIAS | Highly customized code; dynamic flight path with real-time course correction | High impulse in flight pattern could cause the autopilot to fail to follow the path | High |
| Target Characteristics | 4 | Shape, shape color, alphanumeric, alphanumeric color, orientation; for emergent, description of scene | High-resolution camera/lens; gimbal; damping; data management (onboard or transmit to ground) | Weight (flight time, agility) | High |
| Target Geolocation | 4 | Location of target: max(0, (150 ft - distance)/150) | Integration of imaging and location/telemetry, orientation | | High |
| Air Delivery | 10 | 8-oz bottle; 80% retention; max(0, (150 ft - distance)/150) | Drop mechanism; drop prediction code | Weight | Low |
1.2 Design Rationale
To determine a proper design rationale, it was necessary to combine the system level requirements, overall complexity,
and point values from the previous section with the external factors which included the team’s prior experience,
available man hours, and budget.
Early on it was deduced that available man hours would be the most limiting external factor for this year. Therefore,
the initial design approach involved deciding which tasks would allow for the greatest return on the team’s investment
of limited man hours. To ascertain which tasks needed to be prioritized, it was necessary to identify which mission
objectives were most attainable by using the team’s previous experiences to estimate the amount of design hours and
test hours required to achieve each objective. Based on past team experience it was understood that, for a fixed wing
aircraft, many hours are required to guarantee the reliability of certain mission elements like autonomous takeoff and
landing, waypoint navigation, obstacle avoidance, and payload delivery. Thus, a trade study was conducted to determine
whether switching to a multicopter platform would deliver better results, as shown in Figure 1.2-1 and Figure 1.2-2.
Scoring for both figures: Score: 3 = within 90% of target, 2 = 70-89%, 1 = 50-70%. Weight: 3 = most important, 2 = important, 1 = less important. Final Score = Score x Weight.

Figure 1.2-1: Trade Study of a Fixed-Wing Platform (H9-Valiant)

| Criterion | Target Value | Actual Value | % of Target | Score | Weight | Final Score |
|---|---|---|---|---|---|---|
| Environmental: hrs of design & test work req. per week | 12 | 6 | 50 | 0 | 3 | 0 |
| Environmental: years of experience w/ platform | 1 | 1.5 | 150 | 3 | 1 | 3 |
| Environmental: test flight hours req./month | 4 | 3 | 75 | 2 | 2 | 4 |
| Mission: endurance (min) | 40 | 40 | 100 | 3 | 2 | 6 |
| Mission: autopilot compatibility (high=3, med=2, low=1) | 3 | 3 | 100 | 3 | 3 | 9 |
| Mission: auto takeoff/landing (easy=3, med=2, hard=1, not possible=0) | 3 | 1 | 33 | 0 | 2 | 0 |
| Mission: payload capacity (kg) | 5 | 7 | 140 | 3 | 3 | 9 |
| Total | | | | | | 31 |

Figure 1.2-2: Trade Study of a Multicopter Platform (S900)

| Criterion | Target Value | Actual Value | % of Target | Score | Weight | Final Score |
|---|---|---|---|---|---|---|
| Environmental: hrs of design & test work req. per week | 8 | 6 | 75 | 2 | 3 | 6 |
| Environmental: years of experience w/ platform | 1 | 0.8 | 80 | 2 | 1 | 2 |
| Environmental: test flight hours req./month | 4 | 6 | 150 | 3 | 2 | 6 |
| Mission: endurance (min) | 40 | 20 | 50 | 1 | 2 | 2 |
| Mission: autopilot compatibility (high=3, med=2, low=1) | 3 | 3 | 100 | 3 | 3 | 9 |
| Mission: auto takeoff/landing (easy=3, med=2, hard=1, not possible=0) | 3 | 3 | 100 | 3 | 2 | 6 |
| Mission: payload capacity (kg) | 5 | 6.6 | 132 | 3 | 3 | 9 |
| Total | | | | | | 40 |
This study determines how feasible certain objectives are with each platform. Heavy consideration is given to tasks
such as autopilot compatibility and endurance. A multicopter is predictable with an autopilot out of the box, whereas
a fixed wing aircraft requires hours of tuning for a task such as waypoint navigation to be reliable. Therefore, many
of the elements included in this study translate into a time requirement, the primary limiting factor identified
previously. Based on this study, it was determined that the multicopter platform would deliver better results based
upon its capabilities and our limiting factors.
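The scoring in these figures reduces to simple weighted arithmetic. As a minimal illustration (using the multicopter column values from Figure 1.2-2; the variable names are ours, not the team's):

```matlab
% Per-criterion scores (0-3) and importance weights (1-3) from Figure 1.2-2,
% in row order; the platform total is the weighted sum.
scores  = [2 2 3 1 3 3 3];
weights = [3 1 2 2 3 2 3];
total   = sum(scores .* weights);   % 6+2+6+2+9+6+9 = 40
```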
Other elements of our unmanned aerial system were driven by a similar rationale. This year, an off-the-shelf
aircraft-tracking antenna system as well as an off-the-shelf camera gimbal were used. In previous years, these components
were designed, built, and tested from the ground up. However, buying reliable off-the-shelf designs bypasses lengthy
designing and testing procedures, allowing for more time to be allocated towards other critical system elements.
1.2.1 Aircraft Subsystem
The aircraft subsystem utilizes a modified DJI S900 hexacopter. This model was chosen for its versatility and ease of
use, as well as its effectiveness with other Cal Poly Pomona UAS projects. This hexacopter features six electric motors.
One modification made to the hexacopter was the addition of access panels at the nose and rear of the fuselage, which
ease access to the interior of the aircraft for component installation.
Other modifications include internal mountings for the security of the two required payload components, cutouts to
accommodate the air-drop subsystem and gimbaled camera, and reinforcement of the landing gear to reduce or
eliminate stress and fatigue from the rigors of flight testing.
1.2.2 Autopilot Subsystem
The autopilot subsystem primarily utilizes a 3DR Pixhawk with Ardupilot software. This was chosen due to familiarity
with the hardware and software of the Pixhawk, as well as the multi-function capabilities of the hardware and open-
source nature of the software. In addition, the aircraft is equipped with a GPS receiver and a 915 MHz radio and
antenna to transmit telemetry to the Ground Control Station (GCS). The GCS uses a dedicated laptop computer
running the Mission Planner software for writing waypoints to the multicopter. For safety purposes, a 2.4 GHz radio
is used to allow the safety pilot to take over the aircraft at any time.
1.3 Programmatic Risk and Mitigations
A major programmatic risk was scheduling time to work on the project: because all members of the team were also
full-time undergraduate students, it was difficult to work around everyone's schedules. This was mitigated by
instituting one general meeting time that worked for most of
the members. Furthermore, the team was divided into sub-teams to allow more flexibility in scheduling the meeting
times. Related to this problem was the risk of not completing the task by the deadlines. This was mitigated by
requesting sub-teams to complete their tasks as soon as possible, with more time and personnel devoted to the
unfinished tasks as the deadline approached. This allowed tasks to be prioritized chronologically and be completed
on time.
Another major programmatic risk that affected the project this year was the risk of being unable to conduct flight tests
due to FAA restrictions on UAS testing, which includes student projects. To mitigate this risk, the team had to find an
airfield that would allow for legal flight testing under FAA regulations. The chosen airfield was Prado Airpark in
Chino, California. Any flight testing done for this competition was done at Prado Airpark.
2.0 System Design
2.1 Aircraft Design
2.1.1 Airframe
This year, the team decided to use a DJI S900 hexacopter. This is the team's first use of a multicopter, as previous
Cal Poly Pomona teams flew fixed-wing aircraft. The S900 was chosen for its versatility and ease of use; it is a
capable platform that can accommodate a wide variety of payloads. The airframe
construction consists of carbon fiber, plastic, and aluminum. Carbon fiber makes up the major components of the
frame such as the arms, center frame, landing gear, and the gimbal rails. Plastic makes up a small percentage of the
frame, mainly connections between the arms and center frame. Aluminum also makes up a small percentage of the
frame as the supporting structure of the gimbal frame. The landing gear is fully retractable. However, this feature
will not be used as it serves no benefit for the current payload configuration. The arms can collapse down to reduce
overall size, mitigating the risk of damage during transportation. The S900 has poor aerodynamic qualities, which
restricts its cruising speed.
Figure 2.1.1-1: Picture of the DJI S900 UAS that will be used in SUAS-AUVSI competition
2.1.2 Power System
The design of the power system for the UAS focused on increasing the endurance of the aircraft. Multicopters are
known for short flight times. The primary power source is a large 6-cell, 22.2-V, 16,000-mAh battery, the largest
battery that will fit the airframe, and it provides the most flight time with the current payload setup. One battery
provides a flight time of 10-15 min.
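That flight time is consistent with a back-of-the-envelope endurance estimate from battery capacity and average hover current; in the sketch below, the 60-A average draw and 20% reserve are assumed values, not team measurements:

```matlab
% Rough endurance estimate for the 6S 16,000-mAh pack.
capacity_Ah   = 16;    % battery capacity
usable_frac   = 0.8;   % assumed 20% reserve to protect the LiPo
avg_current_A = 60;    % assumed average hover current, A

endurance_min = capacity_Ah * usable_frac / avg_current_A * 60
% ~12.8 min, consistent with the observed 10-15 min flights
```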
2.2 Autopilot
The autopilot being used this year is the 3DR Pixhawk, as shown in Figure 2.2-1. This will be the first year this
autopilot will be used. In previous years, an APM 2.6 autopilot was used. The rationale behind switching autopilots is
the fact that the 3DR Pixhawk outperforms the APM 2.6 while retaining some of its best qualities, such as waypoint
navigation and being open source. It was determined that using this new autopilot would not require a substantial
amount of developmental time, as the Pixhawk operates in a manner similar to the APM 2.6. A LIDAR-Lite 2 sensor
from Garmin was integrated into the UAS for autonomous take-offs and landings.
Figure 2.2-1: Picture of the 3DR Pixhawk Autopilot attached to the DJI S900 UAS
2.3 Sense, Detect, and Avoid
This competition requires the UAS to avoid sets of stationary and moving obstacles, as obtained through the
interoperability server. To avoid the stationary objects, the obstacle avoidance algorithm takes the initial list of desired
waypoints and the list of stationary objects and checks for certain conditions. First, the algorithm iterates through the
list of waypoints and ensures that the path between any single waypoint and its following waypoint does not intersect
with any of the stationary obstacles. In the event that the path puts the UAS on a collision trajectory, a set of waypoints
to path around the obstacle is generated. This set of waypoints is then checked to make sure that the UAS does not
run into another obstacle or leave the competition's boundary area. Functionality for avoiding moving obstacles is
currently under development and is expected to be complete by the date of the competition.
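A minimal sketch of the per-leg collision check described above is shown below; it assumes waypoints and obstacle centers in a local east/north frame (an N-by-2 matrix `waypts`), and all names are illustrative rather than the team's actual code:

```matlab
% For each leg between consecutive waypoints, test whether the 2-D path
% passes within an obstacle cylinder's radius.
for k = 1:size(waypts,1)-1
    if legHitsObstacle(waypts(k,:), waypts(k+1,:), obsCenter, obsRadius)
        % generate detour waypoints around the obstacle here, then
        % re-check them against every obstacle and the mission boundary
    end
end

function hit = legHitsObstacle(p1, p2, c, r)
% True if segment p1->p2 passes within distance r of center c.
d   = p2 - p1;
t   = max(0, min(1, dot(c - p1, d) / dot(d, d)));  % clamp to the segment
hit = norm(p1 + t*d - c) <= r;
end
```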
2.4 Imaging System
2.4.1 Camera
A trade study on camera systems was conducted in order to replace the FLEA3 camera used in previous competitions
due to its low resolution and the new target sizes for this year. Other higher resolution and higher frame rate cameras
from Point Grey were researched first due to the minimal change to the imaging system they would require. However,
these were not selected due to budget limitations, and DSLR cameras were not selected due to weight limitations. In
order to meet the frame rate, resolution, and weight requirements, a modified GoPro camera was selected. The GoPro
Hero 4 Black is capable of 4K video at 30 fps; however, the HDMI output of the camera is limited to 1080p at 60 fps,
so the GoPro is set to match that output. A Ribcage Air kit from Back-Bone was installed to allow CS- and C-mount
lenses to be used. A 25-135-mm lens allows the field of view to be adjusted without changing the flight altitude,
optimizing the search pattern. During the camera selection process, accessing the images live on a computer was
initially overlooked, and an Avio.4K capture card had to be purchased to make the images available to the onboard
computer. With the weight and flight time of the multicopter becoming a growing concern, methods to remove the
onboard computer were investigated. The GoPro now sends live video through its HDMI output (30 Mbps) to an HDMI
extender over a CAT6 (1 Gbps) cable. The CAT6 cable connects to the team's M5 Bullet (80 Mbps at 5 GHz), which links
to another M5 Bullet at the ground station. From there, a CAT6 cable runs to the receiving HDMI extender, whose HDMI
cable connects to the team's Avio.4K capture card, and the live feed is displayed on the image processing computer
on the ground.
2.4.2 Camera Gimbal
A new two-axis gimbal was chosen this year. It features lightweight carbon fiber construction and brushless gimbal
motors to stabilize the 530-g camera and lens. It effectively keeps the camera lens pointed normal to the ground,
which is extremely important when the multicopter is banking or changing altitude above a target. The gimbal is
capable of rotating approximately ±30° in roll and ±30° in pitch.
2.5 Object Detection, Classification, Localization
2.5.1 Imaging Computers
Two computers, Computer A and Computer B, are used for manual and autonomous object detection, respectively.
2.5.1.1 Computer A
Computer A has limited performance requirements and is used only for manual detection tasks. The main objective of
this computer is to receive images from the camera as well as telemetry data from the primary Mission Planner
computer. MATLAB and OpenCV are used to process the images received from the camera. This station displays the
images as a video, and the user finds and identifies objects. The software is further described in Section 2.5.2.1.
When the user identifies an object, the information is sent using interoperability.
2.5.1.2 Computer B
Computer B has high performance requirements: the system must support CUDA acceleration for the Region-Based
Convolutional Neural Network, which means the computer must have an NVIDIA graphics card. The main objective of this
computer is to receive images from the camera along with telemetry and to detect objects autonomously. The software
is further described in Section 2.5.2.2. When the software identifies a standard object, the object description is
sent per the interoperability requirements.
2.5.2 Image Processing
Software was written to assist with the object detection, classification, and localization task, with both manual
and autonomous versions developed to meet the requirements. If issues arise, the manual object detection software
can perform all of the requirements; however, its primary use will be for the off-axis and emergent tasks. Standard
object detection, classification, and localization are handled by the autonomous object detection software.
2.5.2.1 Manual Object Detection
The manual object detection software provides a simple user interface that allows easy access to the video stream
from the UAV. The software is broken down into four sections: image retrieval and display, video review, object
evaluation, and object classification.
2.5.2.1.1 Image Retrieval and Display
The program retrieves images from the GoPro camera. Then, in real time, the software saves each image to a pre-set
location and displays it to the user. In this mode, if a target is seen, the user provides input that directs the
program to save the picture along with the location of the target. The information about each image is saved
into a text file within the same folder as the images. The image stream continues until the end of the flight or until the
user decides to end the stream.
2.5.2.1.2 Video Review
During video review, a target can be saved just as in Image Retrieval and Display; however, this mode adds rewind,
pause, and adjustable playback speeds. This is a redundant system that allows the user to watch
the video during the data processing portion of the mission if there were any issues with finding targets during the
flight.
2.5.2.1.3 Object Evaluation
Each image that was saved as containing an object during Image Retrieval and Display or Video Review is displayed
for further review. The user can navigate between the images before and after the marked picture to determine the
best target picture. If any false positives were included, the user can request that they be removed from the list
of targets.
2.5.2.1.4 Object Classification
Target classification starts by matching the targets to their corresponding GPS data. Each target is then displayed,
and the information about the target is entered by the user. The user-provided information and the data from
telemetry processing are sent using interoperability.
2.5.2.2 Autonomous Object Detection
The object detection requirement of this competition presents a unique opportunity to implement machine learning for
target recognition. The human brain is extremely efficient at identifying shapes, even when there is significant
variation from what the viewer expects the shape to look like. Unfortunately, it is very difficult to write a
program that can handle variations in shapes: hardcoding a description of the shape is very inaccurate and usually
results in missed detections and false positives. The software written for the previous year was designed around
contour analysis. It can identify targets and is scale- and orientation-invariant; however, it fails when the
contours are obstructed or the shape differs from what is expected, an issue whose severity was highly amplified by
high grass and shadows. A different approach was taken this year to overcome these challenges by implementing an
Artificial Neural Network (ANN). The object detection software is designed and programmed in MATLAB R2016b.
2.5.2.2.1 Artificial Neural Network
An ANN is a computational model for machine learning that approximates the behavior of the human brain in performing
various tasks. The most basic unit of an ANN is the artificial neuron, which is equivalent to a single neuron in the
human brain. Several artificial neurons are linked together to create a layer, and the variations in the connections
between artificial neurons and between layers determine the type of ANN that is created. A large set of data is used
to train an ANN; training acts to change the strength of the connections between layers and artificial neurons. The
downfall of many artificial neural networks is the requirement for a large training data set. Two main factors drove
the type of ANN used for this competition: the input is an image, and the location of the target is needed.
Convolutional Neural Networks (CNNs) are a powerful tool for identifying images. The input to a CNN can be a
multi-channel image. CNNs have been shown to identify images effectively and are often used in robotics and
self-driving cars. CNNs alone, however, are not capable of determining the location of the detected feature, a
problem that would prevent accurate localization of the target. A scanning window can be used to determine the
location of the object being detected, but this has an extremely high computational cost.

A Region-Based Convolutional Neural Network (RCNN), much like a CNN, takes an image as input, but it also has the
ability to determine the region of the object within the image. It does this by processing only those regions within
the image that are likely to contain an object, which substantially reduces the computational cost compared to using
a scanning window with a standard CNN. A high-level diagram of the RCNN is shown in Figure 2.5.2.2.1-1.
Figure 2.5.2.2.1-1: RCNN high-level diagram [7]
2.5.2.2.2 Training the Region-Based Convolutional Neural Network
The most difficult barrier to overcome when using an ANN is obtaining a training data set for the network. Two
methods were used to overcome this issue: data augmentation and transfer learning.

Data augmentation generates targets within images to provide the RCNN with a larger training set and allows for
random variation of the targets' characteristics, as shown in Figure 2.5.2.2.2-1. Images without objects are also
used to train the network to ignore bad data and reduce false positives. Some of the generated images use Google
Maps satellite imagery of the airfield to increase detection accuracy.
Figure 2.5.2.2.2-1: Augmented airfield with an object
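As an illustration of this augmentation step, the sketch below pastes one randomly coloured circular target and an alphanumeric onto a background tile and records its bounding box; the file name, shape, and geometry are placeholders, not the team's actual pipeline:

```matlab
% Paste a synthetic target onto a background (e.g., an airfield satellite
% tile) using Computer Vision System Toolbox drawing functions.
bg  = imread('airfield_tile.jpg');               % assumed background image
r   = randi([20 40]);                            % target radius, px
ctr = [randi([r, size(bg,2)-r]), randi([r, size(bg,1)-r])];
rgb = randi([0 255], 1, 3);                      % random shape colour

img = insertShape(bg, 'FilledCircle', [ctr r], 'Color', rgb, 'Opacity', 1);
img = insertText(img, ctr, 'A', 'FontSize', r, 'TextColor', 255 - rgb, ...
                 'BoxOpacity', 0, 'AnchorPoint', 'Center');

bbox = [ctr - r, 2*r, 2*r];                      % [x y w h] for the training table
```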
Transfer learning allows the RCNN to be trained on far fewer images than normally required. Normally, training an
RCNN requires thousands of images of a single object; this would be extremely difficult here because only a limited
number of images of the competition's standard objects are available. To bypass this requirement, the RCNN is first
trained on a database of 1.2 million images of random objects. Once that training is complete, the augmented data
set of just a few hundred examples generated by the data augmentation software is used to fine-tune the RCNN to
detect the standard objects of this competition. Training the RCNN on such a large data set requires a very large
amount of processing power, so CUDA acceleration was implemented to increase training speed; it reduced training
times from nearly a week to a single day. Once training is complete, the connections between the artificial neurons,
known as weights, are saved.
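In MATLAB R2016b this fine-tuning step can be expressed with the Computer Vision System Toolbox; the sketch below is one plausible form of it, where `targets` is assumed to be a table of image file names and [x y w h] boxes from the augmentation step, and the hyperparameters are illustrative rather than the team's tuned values:

```matlab
% Transfer learning: start from a CNN pretrained on ~1.2M ImageNet images,
% then fine-tune an RCNN detector on the augmented competition targets.
net = alexnet;                         % pretrained network (support package)

opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-4, ...      % small rate: adapt, don't overwrite
    'Momentum',         0.9, ...
    'MaxEpochs',        10);

detector = trainRCNNObjectDetector(targets, net, opts);  % uses the GPU if present
save('rcnnDetector.mat', 'detector');  % persist the learned weights

% Later, on a new frame:
[bboxes, scores] = detect(detector, imread('frame0001.png'));
```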
2.5.2.2.3 Detection with the Region-Based Convolutional Neural Network
Detection happens in two steps: the object is identified, and its characteristics are extracted.
2.5.2.2.3.1 Identification
The saved weights from training are used to recreate the trained RCNN, and images are fed into it. The neural
network returns the probability that the image contains an object and the location of the object within the image,
as shown in Figure 2.5.2.2.3-1. In some cases the RCNN struggles to identify objects correctly, as shown in Figure
2.5.2.2.3-2. Future work will focus on increasing the accuracy and precision of the RCNN, likely through a larger
training data set, a higher required confidence, and changes to the learning rates and momentum of the RCNN.

Figure 2.5.2.2.3-1: Detection using the RCNN
Figure 2.5.2.2.3-2: Object incorrectly identified as a circle
2.5.2.2.3.2 Characteristic Extraction
There are five characteristics for a standard object: shape, shape color, alphanumeric, alphanumeric color, and
orientation. The shape is determined by the RCNN. Once the object is identified, the bounding box of the object is
cropped. The cropped region is converted to the HSV color space, and the HSV values are used to determine the colors
of the shape and the alphanumeric. The alphanumeric itself is read using Tesseract OCR, an open-source optical
character recognition engine whose development is sponsored by Google.
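A minimal sketch of this extraction step is shown below; the hue bins are illustrative rather than tuned values, and taking the median hue of the whole patch is a simplification of separating the shape and letter colours (MATLAB's ocr() is itself built on the Tesseract engine):

```matlab
% Crop the RCNN bounding box, classify colour by hue, and read the letter.
patch = imcrop(frame, bboxes(1,:));        % box from the detection step
hsv   = rgb2hsv(patch);
h     = hsv(:,:,1);
hue   = median(h(:));

bins  = [0 1/12 1/4 5/12 2/3 5/6 1];       % coarse, illustrative hue bins
names = {'red','yellow','green','blue','purple','red'};
shapeColor = names{discretize(hue, bins)};

res = ocr(patch, 'TextLayout', 'Word', ...
          'CharacterSet', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789');
alphanumeric = strtrim(res.Text);
```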
2.6 Communications
2.6.1 RF Transmitter Design
The 3DR telemetry radio used in last year's competition was selected again for its ease of connection to Mission
Planner. To create a strong connection between the aircraft and the ground station, the Ubiquiti Bullet M5 wireless
radio was used; last year, a lower-powered router was used to transmit video to the ground station. The M5 radios
were chosen because they support a range of up to 50 km at up to 100 Mbps, which allows good-quality imagery to be
transmitted from the far ends of the flight zone to the ground station.
2.6.2 Radio Frequencies
The UAS has three radio frequency (RF) sources for its data link: manual control of the aircraft, telemetry, and
video. Manual control is on a 2.4 GHz frequency to ensure that no
interference would occur for the safety pilot’s control. The telemetry communication between the autopilot and the
ground station is on a 915 MHz frequency. The video is streamed over Wi-Fi using a 5.8 GHz frequency. All of the
radios use frequency hopping spread spectrum technology to mitigate risk of interference.
2.6.3 Antenna Selection
After last year’s issues with maintaining connection to the camera payload and telemetry, it was necessary to change
the antennas for the ground station to be directional instead of omnidirectional. The antenna selection for the UAS
was narrowed down to what is shown in Table 2.6.3-1.
Table 2.6.3-1: Comparison of possible antenna selections

| Purpose | Type of Antenna | Polarity | Gain (dB) | Beam Width (deg) | Weight (kg) | Price ($) |
|---|---|---|---|---|---|---|
| 5.8 GHz Ground | Parabolic | Linear | 24 | 12 | 1.4 | 64 |
| 5.8 GHz Ground | Helical | Circular | 12.5 | 30 | 0.15 | 50 |
| 5.8 GHz Aircraft | Whip | Linear | 5.5 | 180 | 0.05 | 11 |
| 5.8 GHz Aircraft | Clover | Circular | 1.4 | 360 | 0.05 | 50 |
| 915 MHz Ground | Parabolic | Linear | 15 | 18 | 2.29 | 94 |
| 915 MHz Ground | Patch | Circular | 8 | 65 | 0.45 | 52 |
| 915 MHz Aircraft | Whip | Linear | 3 | 180 | 0.05 | 12 |
| 915 MHz Aircraft | Clover | Circular | 1.4 | 360 | 0.05 | 34 |
The drawback of a linearly polarized antenna is that it does not maintain a strong data link if the two antennas are
not properly aligned: when the antenna is rotated, a linearly polarized signal undergoes changes in both amplitude
and phase angle, whereas a circularly polarized signal changes only in phase [1]. Because the aircraft is constantly
pitching and rolling, a linear antenna can potentially lose data. A linear antenna generally has higher gain and
range capability than a circular antenna, owing to the difficulty of manufacturing circularly polarized antennas. A
major influence on antenna selection was that the two ends of a link must have matching polarization to achieve the
strongest connection. The trade study used to select the antennas is shown in Table 2.6.3-2.
Table 2.6.3-2: Antenna trade study (criterion scores; Overall is the row sum)

| Purpose | Type of Antenna | Polarity | Gain | Beam Width | Weight | Price | Overall |
|---|---|---|---|---|---|---|---|
| 5.8 GHz Ground | Parabolic | 0 | 10 | 10 | 1 | 7 | 28 |
| 5.8 GHz Ground | Helical | 10 | 5 | 5 | 10 | 10 | 40 |
| 5.8 GHz Aircraft | Whip | 0 | 10 | 5 | 10 | 10 | 35 |
| 5.8 GHz Aircraft | Clover | 10 | 3 | 10 | 10 | 2 | 35 |
| 915 MHz Ground | Parabolic | 0 | 10 | 10 | 2 | 6 | 28 |
| 915 MHz Ground | Patch | 10 | 5 | 3 | 10 | 10 | 38 |
| 915 MHz Aircraft | Whip | 0 | 10 | 5 | 10 | 10 | 35 |
| 915 MHz Aircraft | Clover | 10 | 5 | 10 | 10 | 5 | 40 |
Based on this trade study, circularly polarized antennas were chosen for both the 5.8 GHz and 915 MHz links. Because
directional antennas were selected for the ground station, a tracking system was necessary to maintain a strong
connection for the imagery and telemetry. This system is described in more detail in Section 2.6.6.
2.6.4 Ground Control Station
The Ground Control Station (GCS) consists of an antenna tracking system and two laptop computers. The two computers
are powered by the provided generator and form two stations: the first runs the Mission Planner software, and the
other runs the image processing software described in Section 2.5.2. Uniden handheld radios are used to ensure
proper communication among the GCS crew and to minimize errors and potential risks for the safety pilot.
2.6.5 Mission Planner Computer
The objective of this computer is to use the modified Mission Planner software as the main link between the aircraft
and the GCS. Mission Planner generates a flight mission after retrieving the waypoint path and search area details
from the interoperability server, showing where the aircraft will fly. The main interface of the Mission Planner
software, shown in Figure 2.6.5-1, provides the requested information: altitude, speed, heading, no-fly zones, and
obstacles.
Figure 2.6.5-1: Picture of Mission Planner Interface with a Flight Mission
This information is relayed to the image processing computer to provide the aircraft's telemetry, which is required
for the target information. This computer also connects to the SUAS interoperability server to collect the
information provided by the server, which includes the mission details and obstacle locations. The team member at
this station is responsible for watching the aircraft's path for smooth flight as well as monitoring the
interoperability program. The team member must press a button to activate each part of the mission for tasks such as
the bottle drop or the emergent target. A settings window allows the mission to be fully autonomous, meaning the
next task is started automatically. The autonomous setting is currently being tested and is expected to be ready by
the day of the competition.
2.6.6 Aircraft Tracking Antenna System
Due to the selection of directional antennas for the GCS, a tracking system was necessary to maintain a strong
connection for the imagery and telemetry. It was decided to purchase an off-the-shelf tracker primarily to ensure more
time was spent on mission-critical tasks. The antenna tracker includes a slip ring, a continuous-rotation servo, a
metal-geared servo for the tilt motion, and a 3DR Pixhawk running ground station firmware. The continuous-rotation
servo and slip ring allow the tracker to rotate as many times as needed during a mission. The Pixhawk uses GPS and
altitude data from the multicopter to predict its position. The tilt portion of the tracker assembly uses a servo
with 120° of rotation and a built-in potentiometer that measures the tilt angle from the pulse width modulation. The
tracker is controlled using the Mission Planner software, which already includes code for the tracker.
2.6.7 Telemetry Processing
The telemetry processing was added to the Mission Planner software. The software retrieves flight information over
MAVLink, the communication link between Mission Planner and the Pixhawk. This information, received at 10 Hz, is
saved as doubles: latitude, longitude, altitude, airspeed, and heading. The data is used in five tasks: the primary
objective; actionable intelligence; the emergent target; interoperability; and Sense, Detect, and Avoid (SDA). To
achieve this, the retrieved data is first saved to a text file in a shared folder. The primary image processing
computer then retrieves the information from the file so that the telemetry can be associated with an image. The
data is also sent to the interoperability server and the SDA software. Interoperability and SDA are discussed
further in Sections 2.6.8 and 2.3, respectively.
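As an illustration of how the imaging computer can pair a frame with the nearest 10-Hz sample, the sketch below assumes one timestamped line per sample in the shared text file (the exact file layout is an assumption; the paper only lists the five telemetry fields):

```matlab
% Associate an image with the telemetry row closest to its capture time.
T = dlmread(fullfile(sharedFolder, 'telemetry.txt'));  % t, lat, lon, alt, spd, hdg

tFrame = 4521.3;                        % capture time of the image, s
[~, i] = min(abs(T(:,1) - tFrame));     % nearest 10-Hz sample
lat = T(i,2);  lon = T(i,3);  alt = T(i,4);
```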
2.6.8 Interoperability
The interoperability program works through Mission Planner and the web server provided during the competition,
uploading and downloading information to and from each. It was discovered that the code needed to be written in C#
in order to communicate properly with the Mission Planner software. In previous years, the interoperability program
was added directly into a Mission Planner file; this year, a separate file with all of the interoperability
functions was created and added to the Mission Planner Visual Studio project to make the code easier to work with,
and buttons and forms were added for configuring the interoperability program. The program is a modification of the
Mission Planner code and is split into two parts. The first half runs requests and functions that do not need to be
updated at 10 Hz, including the login POST request to the web server and the mission details GET request. To ensure
that all of the requests work properly, the login response is saved into a cookie, and the cookie is supplied each
time a request is made. The other half of the program includes the requests that receive obstacle information and
upload UAS telemetry data at 10 Hz. The program acquires the UAS telemetry from Mission Planner and uploads it to
the server at 10 Hz to meet the requirement given in the competition rules. In previous years, all responses were
saved as strings and parsed by a purpose-written function; this year, the Newtonsoft Json.NET library is used to
deserialize the JSON responses from the server, making the code easier to work with in the future. The obstacle
information is deserialized using Json.NET and then sent to the obstacle avoidance part of the code.
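The team's client is written in C# with Json.NET, as described above; purely to illustrate the deserialization step, the same idea in MATLAB R2016b's jsondecode looks like the sketch below (the field names follow the pattern of the competition server's replies but should be treated as an assumption):

```matlab
% Decode a server-style obstacle reply and hand the values to the
% obstacle avoidance code.
reply = ['{"stationary_obstacles":[{"latitude":38.14,"longitude":-76.43,', ...
         '"cylinder_radius":50,"cylinder_height":300}]}'];
obs  = jsondecode(reply);
r_ft = obs.stationary_obstacles(1).cylinder_radius;
```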
2.7 Air Delivery
2.7.1 Air Delivery Mechanism
The water bottle drop system was designed for the air delivery task. It consists of a drop mechanism, a parachute,
and shock absorption material. The parachute is secured to the top of the water bottle, and the shock absorption
material is secured to the bottom. The drop mechanism is a 3D-printed casing, designed in SolidWorks, that encloses
the water bottle and its components; this model is shown in Figure 2.7.1-1. The drop mechanism features one servo
and two hinged doors. The doors are held shut by the servo, and rubber bands are attached to the doors and the
bottom of the drop mechanism. To release, the servo arm swings away from the doors, allowing the water bottle to
fall out. The rubber bands stretch as the doors open and pull the doors shut once the water bottle exits the drop
mechanism.
Figure 2.7.1-1: SolidWorks model of the Drop Mechanism
2.7.2 Air Delivery Software
To determine the optimal position at which to drop the water bottle, a combination of MATLAB and C# was used. First,
some assumptions were made to simplify the problem: the wind acts in a plane parallel to the ground, and the
hexacopter is in steady-state flight parallel to the ground with the drop mechanism perpendicular to the ground.
Under these assumptions, the flight of the bottle is that of a simple projectile, and the drop position follows from
calculating the displacement of the bottle given an initial velocity equal to the hexacopter's flight speed. To
account for the wind and the drag on the bottle and parachute, a fourth-order Runge-Kutta method was used in MATLAB
to solve the resulting nonlinear differential equation. From the displacement, the change in position was calculated
in terms of latitude and longitude, and the optimal drop position was determined by subtracting this change in
position from the known coordinates of the intended target.
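A minimal sketch of that integration is shown below, in a vertical plane with the wind along the flight direction; the mass, drag coefficient, wind, and release conditions are placeholders rather than the team's values:

```matlab
% State s = [x; z; vx; vz]. Gravity plus quadratic drag on the
% air-relative velocity; integrate with fixed-step RK4 until impact.
m = 0.25;  g = 9.81;  k = 0.02;  w = 3;   % kg, m/s^2, lumped drag, wind m/s
f = @(s) [ s(3); s(4);
          -k/m * norm([s(3)-w; s(4)]) * (s(3)-w);
          -k/m * norm([s(3)-w; s(4)]) *  s(4) - g ];

h = 0.01;                      % time step, s
s = [0; 30; 5; 0];             % release 30 m AGL at 5 m/s ground speed
while s(2) > 0                 % until the bottle reaches the ground
    k1 = f(s);          k2 = f(s + h/2*k1);
    k3 = f(s + h/2*k2); k4 = f(s + h*k3);
    s  = s + h/6*(k1 + 2*k2 + 2*k3 + k4);
end
offset_m = s(1);               % drop this far upstream of the target
```

The resulting downrange offset is then converted to a latitude/longitude shift and subtracted from the target coordinates, as described above.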
2.8 Cyber Security
The potential security threats lie mostly between the ground station and the UAV, on the three radio links from the
UAV to the pilot or ground station: the Pixhawk telemetry connection, the GoPro Hero 4 Black video stream, and the
radio control system connection.

The UAV is connected to the Mission Planner ground station using MAVLink, which is well known for not being secure:
MAVLink is designed to ensure that packets can be sent without loss, but it is not designed to keep other people
from connecting to the same device. To address this issue, the XBee radio system is used to connect the UAV to the
Mission Planner ground station. The XBee radios offer a 128-bit AES-encrypted connection, which helps prevent
unwanted outside connections and data leeching [4].

The Bullet M5 is used to transmit the GoPro Hero 4 Black's video stream down to the ground station. The data is sent
through an HDMI extender, which uses TCP/IP to reach the receiving HDMI extender. This connection is not secure on
its own, but the Bullet M5 that carries the Ethernet connection over the 5.8 GHz radio is set up to use WPA2-AES,
protecting the video stream transmitted to the ground station.

The radio control system is a Spektrum DX18 transmitter with a Spektrum DSMX remote receiver; DSM technology
features a Globally Unique Identifier that ensures the connection remains between the two devices.
3.0 Testing and Evaluation
3.1 Developmental Testing
3.1.1 Interoperability Performance
To test the interoperability software, the Django web server provided by the competition judges was run in a
VirtualBox virtual machine. A separate computer running Mission Planner modified with the interoperability code
connects to the server to test the validity of the program, and every change made to the code is tested against the
server. The program reliably achieves download and display at a rate of at least 1 Hz, which complies with the
competition rules.
3.1.2 Sense, Detect, and Avoid Performance
Evaluation of the stationary obstacle avoidance portion of the Sense, Detect, and Avoid software was done in MATLAB.
By plotting a hypothetical field of obstacles and waypoints, a simulated path was generated. Four
different scenarios were considered when testing the software algorithm. The first situation was for pathing around a
single object. The second situation was for pathing around a group of overlapping obstacles. The third situation was
for pathing around an obstacle located on the border of the boundary area. The fourth and final scenario was for
waypoints placed within an obstacle. All of these situations were validated as successful. The software for avoiding
moving obstacles is expected to meet the mission requirements in time for this year’s competition.
3.1.3 Imaging Software
Due to the variety of issues that can occur during flight, the imaging software underwent extensive testing and the
program was designed with redundancy in mind. At every stage of the design and programming, it was tested for
possible failures. The goal was to develop a program that was stable and reliable during unforeseen events. After
completion of the program, it went through an initial testing phase to confirm that the software acted as intended.
A stationary mock-up test was set up in which all the elements of a flight were present. The test was initiated as
it would be during flight, and each section of the software discussed in Section 2.5.2 was tested. Once this was
complete, the code was tested for how it handles interruptions: the software was terminated in the middle of video
streaming and restarted, and upon restart the code continued where it left off, as intended. It was also tested for
loss of the video stream; upon loss of video, the code notified the user of an issue and returned to an outer menu,
where it waited for the user to reinitiate the video stream. During the testing phase, an issue was encountered in
which part of the data was lost when the code was shut down to save the text file. The issue was fixed by backing up
all saved data before any alterations are attempted. This solved the data loss and added a safety margin in case the
main save file is corrupted.
The autonomous object detection software from Section 2.5.2.2 was designed and tested by a team of engineers. The
primary system and user requirements that the software was designed around are listed in Table 3.1.3-1 and Table
3.1.3-2. The user interface was tested for potential failures by giving randomly chosen users access to it; the
software held up for several minutes before an unexpected action by a user caused a fatal error. Although slightly
comedic, this alerted the developers to a weakness in the system, and further error handling was added to prevent a
user from crashing the software. The software was run on multiple computers and was compatible with all systems
tested. It is well documented for easy maintainability and adaptability to future requirements, and it was also
tested to ensure high reliability and availability for the duration of the competition.

Table 3.1.3-1: System Requirements

Table 3.1.3-2: User Requirements
3.2 Individual Component Testing
3.2.1 Camera
With a new camera system, extensive testing was done on the ground before integration with the vehicle. Targets were
placed at distances ranging from 150 to 250 feet from the camera and recorded on the GoPro. Various camera settings
were tested, but a resolution of 1080p at 60 fps was selected because it is the camera's maximum output while still
being a high-definition resolution. The focal length was adjusted to find a good field of view for initial testing,
and the images taken on the ground were run through the previous image processing code to confirm that the image
quality is satisfactory. While the frame rate could be lowered to reduce the bit rate and increase the exposure
time, the GoPro uses a CMOS sensor, which could lead to image distortion at low frame rates. Given the slower speed
of the hexacopter, the ideal focal length and frame rate are still being tested in flight to reduce the search time
while maintaining high-resolution images and minimizing the risk of missing targets. The video transmission system
(see Section 2.4.1) was tested component by component. First, the GoPro was connected to the HDMI extender over
CAT6, with the receiver connected to a monitor. Next, the M5 Bullets were configured and connected; after images
were displayed on a monitor and received through the capture card, a program was written to capture images during
flight. This confirmed that the image quality captured during flight was still satisfactory after the signal had
been converted several times.
3.2.2 Payload Drop
The payload drop system has completed several successful drop tests to date with no mishaps. The drops were
simulated by holding the drop mechanism a few feet off the ground and manually triggering it. The mechanism has been
tested both by physically moving the release servo and by computer command. The water bottle and parachute clear the
bay doors without being caught. The water bottle drop system has been tested multiple times in flight and has been
determined safe and effective for competition.
3.3 Mission Testing Plan
3.3.1 Flight Testing
This UAS has completed ten flight tests in the previous academic year and eight further flight tests this year, for a
total of eighteen flight tests overall. These tests were performed at Prado Airpark in Chino, California. These flight
tests were student led and student conducted, following detailed flight cards and pre-flight checklists. About two flight
tests were performed during each trip, with a break between each test to change flight batteries and modify autopilot
parameters. The eight flight tests this year were completed without any significant mishaps. Waypoint
navigation was successfully accomplished during flight testing.
3.3.2 Overall Performance
The subsystem testing and full mock-up system testing have given evidence that the UAS will succeed at its expected
mission of autonomous flight, search area, actionable intelligence, off-axis target, emergent target, air-drop,
interoperability, and SDA tasks. First, hardware-in-the-loop testing of the autopilot system has shown that the UAS
can successfully accomplish the autonomous waypoint navigation, search area, off-axis target, payload drop, and
emergent target tasks. Second, the imaging system was built with redundancies and was tested thoroughly with
success. Interoperability was extremely successful and has given the team great confidence for this task. Tasks such
as the payload drop, autonomous takeoff and landing, and SDA are expected to be functional at this year's
competition, but with less confidence.
4.0 Safety
4.1 Developmental Risks and Mitigations
The most critical developmental risks originate from the fact that a new aircraft platform is being used this year.
The team's limited experience with a multicopter means that the in-flight behavior of each subsystem is a potential
risk, as it is not fully understood how each will behave on a multirotor platform. Waypoint rewriting during flight
was identified as a critical risk due to limited understanding of how a multirotor would respond to real-time
changes. This concern is specific to a multicopter platform because, unlike a fixed-wing platform that generates
lift through forward motion, any unpredicted change to a multirotor's throttle could result in fatal flight
behavior. This risk was mitigated through rapid safety pilot response until it could be verified that the aircraft
responded consistently and safely to real-time waypoint rewriting. Another critical developmental risk lies with
complex subsystems such as imaging: the software intended for autonomous image recognition was a developmental risk
in itself due to its complexity, which could have resulted in it not being ready in time for the competition. The
mitigation for this risk is the ability to fall back to manual image recognition during the competition if
necessary.
4.2 Mission Risks and Mitigations
The system’s safety methodology is based on redundant subsystems to ensure that the aircraft never poses a threat to
personnel or property. The electric motors, autopilot, and payload subsystems all have their own dedicated batteries.
This ensures that the loss of one electrical subsystem does not cascade throughout the entire UAS. The autopilot
telemetry frequency is separate from the safety pilot's radio control frequency, so RF interference cannot take down
both links at once. In the event that both the autopilot GCS and the safety pilot
cannot communicate with the aircraft, the autopilot is programmed to loiter until connection is reestablished. If this
does not occur in a predetermined time period, a failsafe is triggered where the aircraft will immediately land in order
to prevent damage to personnel or property. There are two ways that the flight termination failsafe can be triggered.
At any time, the ground control station operator can manually trigger an abort that will send a failsafe command to
the aircraft. Alternatively, if the autopilot has lost its telemetry link with the ground for more than 20 seconds, it will
automatically trigger the failsafe. This ensures that the flight can be terminated in a safe way in all possible scenarios.
4.3 Operational Risks and Mitigations
Allowances for safety are made at every step of flight operations. A checklist is followed prior to each flight in order
to verify the operation of all critical systems. Safety pilot, GCS operators, and ground crew work in conjunction to
ensure that all functions of the system are checked. The checked tasks include:
● Checking and recording the voltages of all batteries and safety of battery mounting
● Inspection of all the servos, GPS, and communications wiring and connections
● Powering up the aircraft system and radio and verifying telemetry connection to aircraft
● Transmitter calibration and range check
● Checking all sensor outputs, including the accelerometers, voltmeter, ammeter, and LIDAR-Lite
If a component fails to pass a check, the flight is suspended until the problem can be determined and remedied. A
safety pilot and observer are always present to maintain line of sight with the aircraft and take over control in the
case of a malfunction. They both stay in constant contact with the GCS operators to ensure that the aircraft is monitored
at all stages of the mission.
A procedure was also created to mitigate risks that could occur midflight; see Table 4.3-1. Midflight is the most
dangerous period of a mission, and therefore all risks that could occur during it must be mitigated, with personnel
safety being the most crucial factor.
Table 4.3-1: Potential midflight risks and mitigation methods

| Risk | Mitigation Method |
|---|---|
| Loss of command and control link | Have the safety pilot immediately take over and attempt to re-establish communications. Alert bystanders of the situation. If communications cannot be re-established, have the safety pilot land the aircraft. If neither option can be done, allow the aircraft to time out and trigger its failsafe. |
| Loss of position or line of sight | Command the autopilot to loiter until line of sight can be re-established. Alert bystanders of the situation. If line of sight cannot be re-established, command the failsafe condition to minimize potential damage to personnel or property. |
| Unresponsive flight controls | Command the autopilot to loiter until the problem can be resolved. Alert bystanders of the situation. If the problem cannot be resolved, trigger the failsafe command to bring down the aircraft safely. |
| Loss of electric power | Before takeoff, all personnel in the area move a safe distance away from the flight area. The safety pilot alerts all present if loss of electric power occurs midflight. |
| Ground control station failure | Immediately have the safety pilot take over and land the aircraft. |
5.0 Acknowledgements
The Cal Poly Pomona AUVSI team would like to thank Northrop Grumman for sponsoring the project. The team
would also like to thank SolidWorks for providing access to the 3D solid modeling software for this project. The team
would finally like to thank its advisor, Dr. Subodh Bhandari, for his help and support in guiding the team in the
right direction.
References
[1] Milligan, Thomas A. "Properties of Antennas." Modern Antenna Design. New York: McGraw-Hill, 1985, p. 22. Print.
[2] "Spreading Wings S900 – Highly Portable, Powerful Aerial System for the Demanding Filmmaker." DJI Official.
N.p., n.d. Web. 15 Apr. 2017.
[3] "GoPro Hero4 Black Specs." CNET. N.p., n.d. Web. 15 Apr. 2017.
[4] Digi International Inc. (2008). XBee-Pro 900: Data Sheet. Retrieved from
https://www.sparkfun.com/datasheets/Wireless/Zigbee/XBee-900-Manual.pdf
[5] Ubiquiti Networks Inc. (2015). airOS 5: User Guide. Retrieved from
https://dl.ubnt.com/guides/airOS/airOS_UG.pdf
[6] Horizon Hobby, Inc. (2012). SPM9645 DSMX Remote Receiver User Guide. Retrieved from
https://www.spektrumrc.com/ProdInfo/Files/SPM9645-Manual.pdf
[7] Leonardo Araujo Santos (2017). Object Localization and Detection. Retrieved from
https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/object_localization_and_detection.html