CSUN Autonomous Small Unmanned Aerial System for
Intelligence, Surveillance, and Reconnaissance
for
2012 AUVSI Seafarers Student Design Competition
Andres Chavez, Ben Hapipat, David Penniman, Hakim Bachmid, Hansolo Dela Cruz, Ivan Alvarez,
James Zimmerman, Jincun Wang, Joseph Saeed, Karam Kaoud, Maaz Waheed, Mustafa Qudsi, Nadine
Menjuga, Paulus Sunarli, Ruben Cuellar, Scott Schultz, and Ulysses Marquez
Department of Mechanical Engineering, California State University Northridge, Northridge, California 91330
Omar Flores, Pipat Jetawatana, and Ramzey Elallamy
Department of Computer & Electrical Engineering, California State University Northridge, Northridge, California 91330
and
Dr. Tim Fox
Department of Mechanical Engineering, California State University Northridge, Northridge, California 91330
Abstract of Proposal:
The California State University Northridge (CSUN) Aeronautics entry into the 2012 AUVSI Seafarers design competition is a Small Unmanned Aerial System (SUAS) consisting of a fixed-wing Unmanned Aerial Vehicle (UAV), a payload, and a Ground Control Station (GCS). The system is capable of autonomous flight from launch to recovery and can navigate through a series of waypoints while collecting images of the ground below. Simultaneously, an onboard computer processes each picture for potential targets, then characterizes each target by color, shape, and orientation, as well as the co-located alphanumeric character and its color. The user interface for the UAV features flight planning software along with a graphical autopilot control. The payload operator uses a graphical interface for payload control and a self-populating spreadsheet that is displayed to the user, allowing verification of the autonomous system. Additionally, the system can connect to a Simulated Remote Intelligence Center (SRIC), collecting a data file that can be downloaded by the operator.
California State University, Northridge AUVSI Page 2
Table of Contents
1. Introduction……………………………………………………………………………………… 3
1.1 CSUN Aeronautics Team…………………………………………………………… 3
1.2 Mission Requirements………………………………………………………………. 3
1.3 System Concept……………………………………………………………………… 4
1.4 Mission Preview…………………………………………………………………….. 4
2. Aircraft Design…………………………………………………………………………………… 4
2.1 Aircraft………………………………………………………………………………. 4
2.1.1 Airframe……………………………………………………………. 5
2.1.2 Launch and Recovery……………………………………………… 5
2.1.3 Propulsion…………………………………………………………. 5
2.1.4 Flight Control System……………………………………………… 5
2.1.5 Nose Video Camera……………………………………………….. 6
2.2 Navigation and Flight Planning ……………………………………………………… 6
3. Payload System…………………………………………………………………………………… 8
3.1 Imaging System………………………………………………………………………. 8
3.1.1 Camera……………………………………………………………. 9
3.1.2 Camera Interface………………………………………………….. 9
3.2 Camera Stabilization……………………………………………………….. 9
3.2.1 Gimbal…………………………………………………………….. 9
3.2.2 Gimbal Control……………………………………………………. 10
3.3 Target Recognition & Characterization………………………………………………. 11
3.4 Onboard Computer……………………………………………………………………. 14
3.5 SRIC Capability………………………………………………………………………. 15
4. Ground Control Station……………………………………………………………………………. 15
4.1 Pilot Operators Station………………………………………………………………… 16
4.2 Payload Operators Station…………………………………………………………….. 16
5. Communication…………………………………………………………………………………….. 17
5.1 UAV…………………………………………………………………………………… 17
5.2 Payload………………………………………………………………………………. 18
6. Safety……………………………………………………………………………………….. 18
7. System Validation…………………………………………………………………………………. 18
7.1 Aircraft and Airframe Components…………………………………………………… 18
7.2 Autopilot and Flight Planning………………………………………………………… 18
7.3 Imaging and Targeting………………………………………………………………... 19
7.4 Ground Control Station……………………………………………………………….. 19
7.5 Communication Test………………………………………………………………….. 19
7.6 SRIC………………………………………………………………………………….. 19
7.7 Checklist……………………………………………………………………………… 20
7.8 Full System Test……………………………………………………………………… 20
7.9 Safety…………………………………………………………………………………. 20
8. Conclusion………………………………………………………………………………………… 20
9. Acknowledgements………………………………………………………………………………. 20
1. Introduction
CSUN Aeronautics has designed and developed many mission-specific unmanned aircraft for multiple collegiate competitions. The design of this UAS was based on previous experience from SUAS competitions, where a reliable yet limited-lifetime UAV was designed and paired with a payload system to meet multiple mission-specific requirements. Specifically, the UAS developed for this competition can be easily transported, quickly assembled, and operated with a minimal crew while providing a robust and mature ISR system.
1.1 CSUN Aeronautics Team & Design Method
CSUN Aeronautics consists of CSUN students from multiple engineering disciplines, including Computer Science and Aerospace, Mechanical, Civil, Electrical, and Computer Engineering. The current team was formed during the summer of 2011.
The team used a systematic approach to the design of the UAS, starting with a preliminary analysis of the mission concept defined by AUVSI. Requirements and objectives documents based on the Key Performance Parameters (KPPs) for the various systems, subsystems, and components were then completed, and a preliminary design was developed. A Preliminary Design Review (PDR) was held as exit criteria from the definition phase of the project before entering the design phase. In December, a Critical Design Review (CDR) was held to verify that the design met the established requirements. Once the design was approved, the development phase began with fabrication and integration. Attendance at the AUVSI Seafarers competition defines the operational phase.
1.2 Mission Requirements & Goal Statement
AUVSI’s mission concept required the design of a UAS that is reliable, robust, and capable of accurate ISR while utilizing both system autonomy and human interaction where practical. The mission timeline provides a 40-minute setup window prior to launch, after which the planned tasks are autonomous flight through a waypoint series while acquiring targets both on and off the UAV flight path, searching a predetermined area for potential targets, and acquiring data from an SRIC. Additional points are awarded for autonomous takeoff and landing, actionable intelligence, in-flight re-tasking, autonomous target characterization, and gathering data from an SRIC.
The team’s goal was to design and create a mature and robust small unmanned aerial system that
meets the AUVSI key performance parameters while being a user friendly and innovative design.
Fig. 1 Illustrated Mission Requirements
1.3 System Concept
The UAS developed consists of three main components, a UAV, a payload, and a Ground Control Station, along with operations manuals and checklists for each component.
The UAV is a hybrid canard configuration, with a forward horizontal tail providing increased endurance and aft vertical tails providing yaw stability. To optimize system performance, the UAV was designed in house, providing greater payload flexibility.
The payload is an independent system contained in the UAV, utilizing the aircraft fuselage only as containment. To minimize future ground control station size and complexity, as well as decrease required communication bandwidth, image processing is accomplished within the UAV. The camera is mounted on a single-axis gimbal acting about the aircraft’s longitudinal axis, which increases system reliability.
The Ground Control Station was designed to be operated by a two-person crew: a pilot responsible for operation of the UAV as well as system safety, and a payload operator responsible for operation of the payload and mission planning. The pilot interfaces with the aircraft and autopilot through a program called Virtual Cockpit that supplies the operator with real-time flight data, including a map overlay and artificial horizon. Communication with the payload is done through a CSUN-developed payload control program that features camera and gimbal control as well as a real-time potential-target spreadsheet.
1.4 Mission Preview
Within 40 minutes of arriving at the designated mission site the aircraft should be capable of beginning the mission; preflight and safety briefings may take place during this time frame. After the 40-minute setup time, the aircraft will take off, fly autonomously through the assigned waypoints, stay within the assigned airspace, enter an assigned search area at an altitude between 100 and 750 feet MSL, image and characterize potential targets, land autonomously, and deliver image data to the judges. During the mission the UAS may also be re-tasked to perform a search of a new location and to collect data from an SRIC. The mission is to be completed within a 20-to-40-minute window.
2. Aircraft Design and Overview
2.1 Aircraft
The FF12 aircraft was designed and constructed primarily by CSUN Aeronautics specifically for the mission and payload requirements. A design load of +5 and -3 G’s was utilized for flight load conditions, and a safety factor of 2.5 was used for all design limits. Additionally, the airframe was designed to be quickly and easily constructed as well as repaired. An emphasis on low parts count and interchangeable parts resulted in a very simple and lightweight UAV weighing 16.4 pounds with an overall length of 98 inches.
Fig. 2 The Flying Fox 2012 showing payload access
2.1.1 Airframe
The fuselage features a monocoque shell design composed of a fiberglass/epoxy laminate with
strategically placed wood formers that transfer load into the skin from various components. The fuselage
utilizes a removable top access hatch that is secured with Velcro allowing quick and easy access to the
payload and avionics. Construction was accomplished using three female molds fabricated in house
allowing multiple parts to be built for mockup and development.
Foam-core, laminate sandwich construction was used for the wings, canard, and tail of the aircraft. This method used an outsourced CNC-cut foam core laminated with a fiberglass/epoxy skin, resulting in a simple and lightweight structure. In-house construction provided the ability to iterate designs, as well as construct replacement and spare parts as needed. The 80 inch span wing is constructed in two halves that join together with a 1 inch carbon fiber tube. The wings are locked onto the aircraft with two metal pins that have large red caps, allowing easy assembly verification.
A 40 inch span canard made of 1.5lb density foam with a fiberglass/epoxy skin mounts to the
front bottom of the fuselage with one ¼ inch nylon bolt designed to break prior to aircraft structural
damage.
At the rear of the aircraft, the vertical stabilizers and rudders mount to the wing using removable rigid carbon tubes and are interchangeable left to right. The vertical stabilizers are made with 1 lb density foam skinned in fiberglass/epoxy laminate. They are joined to the aircraft using four red nylon bolts visible to the ground crew for assembly verification. The lower portion of the tail is designed to impact the ground before the propeller at high pitch angles. Located 20 inches apart on either side of the propeller, the tail booms also act as a safety barrier around it.
2.1.2 Launch and Recovery (Landing Gear)
Early in the design process it was decided that a rolling takeoff and landing was most appropriate
for our system, both for development and operation. Although a catapult launch and belly landing can be
accomplished with this system, the competition aircraft uses a tricycle type landing gear with fixed rear
main gear and a retractable steerable nose gear. Landing gear parts are standard COTS hobby type.
2.1.3 Propulsion
Aircraft propulsion is all electric, provided by an AXI 4120/20 motor with a 17x8 propeller capable of 9 lb static thrust. The system uses two 22.2 V lithium polymer batteries, a 2100 mAh pack for the onboard computer and an 8000 mAh pack for the motor, capable of providing the UAV with 50 minutes of endurance at nominal cruise conditions.
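As a rough cross-check of the endurance figure, electric endurance scales as battery capacity over average current draw. A minimal sketch, where the 9.6 A average cruise draw is inferred from the stated figures rather than measured:

```python
def endurance_minutes(capacity_mah, avg_current_a):
    """Electric endurance in minutes: battery capacity divided by the
    average current draw at cruise."""
    return 60.0 * (capacity_mah / 1000.0) / avg_current_a

# 8000 mAh motor pack at an inferred 9.6 A average cruise draw -> ~50 min
print(round(endurance_minutes(8000, 9.6)))
```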
2.1.4 Flight Control System
Aircraft flight control consists of a standard layout of elevator for pitch, aileron for roll, and rudder for yaw. Each axis uses two separate surfaces with independent servo drives, which reduces single-point-of-failure items. All flight control surfaces are constructed from balsa wood covered in lightweight Monokote film and attached to the aircraft using hinge tape. Flight control servos are located as close as practical to the flight control surfaces and are accessible without the need to remove access panels, allowing easier adjustment.
The flight control system is primarily commanded by a Procerus Kestrel autopilot, selected based on CSUN’s experience with the system and its availability. With this system, aircraft speed and altitude are determined through a nose-mounted pitot tube that supplies the autopilot with outside static and dynamic pressure. Aircraft orientation is determined using a 3-axis inertial measurement unit (IMU) contained within the autopilot itself. Location is determined using a GPS receiver connected directly to the autopilot.
Flight control protocol requires the safety pilot to give control of the UAV to the ground control station before any autonomous commands can be executed. Control is transferred from R/C to autopilot through a Pololu multiplexer board that connects either the RC receiver or the autopilot to the flight control servos. The multiplexer board defaults to RC control if power or control is lost. The autopilot uses an additional expansion board to control nose gear steering during takeoff and landing, and to allow the pilot to retract the nose gear from the ground control station.
Fig 3. Flight control system design
2.1.5 Nose Video Camera
CSUN developed a requirement to provide the pilot and payload operator a view from the aircraft as if they were onboard. The nose video camera system was designed to give the operators a real-time feel for the mission, increasing situational awareness and previewing the upcoming terrain for the payload operator. The nose camera is mounted in a rapid-prototyped mount angled 15° downward.
2.2 Navigation & Flight Planning
Aircraft navigation is performed by the Kestrel autopilot, which guides the aircraft through a series of waypoints defined in three dimensions. Each waypoint is surrounded by an acceptance sphere; when the UAV enters the sphere, the waypoint objective is considered met and the aircraft proceeds to the next waypoint. The aircraft may also be commanded to hold at a location, which results in the aircraft leaving its flight plan and orbiting that location until told to return to its flight plan or begin a new task.
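The acceptance-sphere test reduces to a simple distance check; a minimal sketch (the coordinate frame and function names are illustrative, not the Kestrel interface):

```python
import math

def waypoint_reached(uav_pos, waypoint, radius_m):
    """Return True once the UAV enters the acceptance sphere around a
    waypoint. Positions are (north, east, down) offsets in metres from a
    local origin; the frame convention here is an assumption."""
    dn = uav_pos[0] - waypoint[0]
    de = uav_pos[1] - waypoint[1]
    dd = uav_pos[2] - waypoint[2]
    return math.sqrt(dn * dn + de * de + dd * dd) <= radius_m

# Example: UAV 20 m short of a waypoint with a 30 m acceptance radius
print(waypoint_reached((0.0, 0.0, -100.0), (20.0, 0.0, -100.0), 30.0))
```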
For all flights, a series of waypoints must be generated to compose the flight plan; during the competition, the waypoint navigation series is provided. For the search areas, however, a flight plan must be created. For this, a VBA macro is used in conjunction with Excel: the search area boundaries are entered as latitude/longitude coordinates, and the macro generates a flight path through the search area. The flight plan is then saved, ready to be loaded into Virtual Cockpit and ultimately uploaded to the autopilot.
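The macro's output can be approximated by a back-and-forth sweep over the search box; a Python sketch of the equivalent logic (the real macro is VBA, handles arbitrary polygon boundaries, and derives pass spacing from the camera footprint):

```python
def search_pattern(lat_min, lat_max, lon_min, lon_max, spacing_deg):
    """Generate a back-and-forth ("lawnmower") waypoint list over a
    rectangular search box, alternating sweep direction on each pass."""
    waypoints = []
    lon = lon_min
    leg = 0
    while lon <= lon_max + 1e-12:
        if leg % 2 == 0:
            waypoints.append((lat_min, lon))
            waypoints.append((lat_max, lon))
        else:
            waypoints.append((lat_max, lon))
            waypoints.append((lat_min, lon))
        lon += spacing_deg
        leg += 1
    return waypoints

# Three passes over a 1-degree box at 0.5-degree spacing -> 6 waypoints
print(len(search_pattern(0.0, 1.0, 0.0, 1.0, 0.5)))
```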
Fig. 4 Training flight plan showing operator interface
Fig. 5 Internal layout of UAV fuselage
The UAV was designed to meet the AUVSI objectives of an autonomous platform that can position the payload over a desired imaging location. Additional design decisions were made in favor of autonomy, which has safety and mission assurance benefits; for example, flight control autonomy from takeoff to touchdown provides a more predictable flight path than an RC pilot. Human interaction was included where safety and mission assurance required it, such as re-tasking, flight plan development, and verification.
3. Payload
The AUVSI KPPs and the mission’s “should” and “shall” statements generated payload design requirements that were developed into component requirements for an imaging system, an image stabilization system, a target detection and characterization system, and an SRIC system, all of which interface with or are contained in the onboard computer.
The SP12 payload is a stand-alone system with self-contained power and sensor sources, allowing operation independent of the UAV. The payload imaging and stabilization system uses a Canon PowerShot A620 camera located within a single-axis gimbal system that allows acquisition of off-axis targets and minimizes flight disturbances. The camera connects to a PCM computer with an Intel dual-core processor that runs the target detection and characterization system. CSUN-developed software autonomously locates potential targets, crops around each possible target, and then characterizes it. The computer is also able to perform SRIC tasks using a separate Wi-Fi network carried aboard, allowing independent data links.
3.1 Imaging System
To satisfy the payload requirement to gather images of the terrain designated as a search area, an imaging system was designed to photograph the area specified by the payload operator. The imaging system consists of a camera and the associated software required to control it.
3.1.1 Camera
For the competition, the AUVSI requirements and CSUN flight strategy required that the system be capable of recognizing a target off axis (i.e., 250 ft off the flight path) at an altitude of 200 ft AGL and an en-route target at an altitude of 500 ft AGL. Based on the imaging software developed, it was determined that each image would need to maintain a ratio of at least 12 pixels/foot. An additional requirement was that the image transfer rate from the camera to the onboard computer be at most 2 s per image.
A Canon A620 PowerShot camera was chosen for its simplicity, the ease of writing interface software, and the availability of development kits for altering the camera’s software controls. The A620 also had significant weight and cost advantages over a larger DSLR-type camera. Its 7.1 MP CCD provides sufficient resolution for the imaging payload.
Fig. 7 Canon A620 Powershot
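The 12 pixels/foot requirement can be sanity-checked against the A620's 3072-pixel image width with simple footprint geometry. A sketch, where the 52° horizontal field of view is an assumed value rather than a measured A620 figure:

```python
import math

def pixels_per_foot(image_width_px, altitude_ft, hfov_deg):
    """Ground resolution in pixels per foot for a nadir-pointed camera:
    image width divided by the ground footprint width at altitude."""
    footprint_ft = 2.0 * altitude_ft * math.tan(math.radians(hfov_deg) / 2.0)
    return image_width_px / footprint_ft

# Check the 12 px/ft requirement at the 200 ft AGL search altitude,
# under the assumed field of view (not a measured A620 value)
print(pixels_per_foot(3072, 200.0, 52.0) >= 12.0)
```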
3.1.2 Camera Interface
The software used is a beta program called SDMcon, currently being developed by a freelance programmer in the United Kingdom. The program allows a user to remotely access the camera from a computer over a USB connection. From the computer, the user has access to various camera functions such as zoom, ISO setting, and shutter speed. These are made more readily accessible through a developmental GUI, also still in beta. The program can currently be set up and “left alone” to perform a set of functions, including taking a user-specified number of pictures and subsequently uploading them at the push of a button.
These capabilities allow the project to accomplish the task of capturing target images autonomously during flight. Essentially, the camera can be tasked with a set of functions and left to perform them throughout the course of the flight. During the mission, the camera will be set up to take a batch of images, the number depending on the section of interest, and the corresponding camera settings will be adjusted. The camera will then automatically upload those images, reset, and be ready for the next batch.
3.2 Camera Stabilization
Controlling the orientation of the camera was vital to the CSUN Aeronautics flight strategy. The UAV was determined to be most variable about the longitudinal axis during flight, and with off-axis targets now placed in the waypoint navigation sequence, the design requirements called for a single-axis gimbal. To meet these requirements, a stabilization system was designed to actively control the camera about the longitudinal axis. The stabilization system has two main components: a gimbal allowing rotation about the aircraft roll axis, and a controller that reads orientation from an IMU and converts it to a servo drive command controlling a Parallax continuous-rotation servo that is gear-coupled to the gimbal.
Fig. 8 Canon A620 and Gimbal
3.2.1 Gimbal
As the entire system was iterated, optimum camera placement was determined to be within the fuselage with a hole or slot for the camera’s field of view. The gimbal components were printed on a rapid-prototyping machine in a plastic similar in properties to ABS, resulting in quick manufacturing and a low parts count. The gimbal uses four main components: a frame bonded to the fuselage during manufacture, two removable bulkheads, a camera frame with an integral gear, and a servo mount.
While imaging, the aircraft roll rate was determined to be below 100°/second; a 3:1 gear ratio with the Parallax servo was therefore selected. The camera gear was designed as an integral part of the camera frame, while the servo gear was designed to fit over an existing servo head, reducing manufacturing needs. The system has a travel arc of 180°, which mitigates an off-line condition, and neither gear contains travel stops, ensuring that if an off-line condition is reached the gimbal will not bind.
Multiple fasteners are embedded into the plastic components, reducing both maintenance requirements and the risk of foreign object damage. The gimbal is removed from the aircraft by removing either the front or rear bulkhead, which frees the camera frame from the aircraft. The gimbal system is modular, compact, and easily iterated for camera updates.
3.2.2 Gimbal Control
To manipulate the gimbal, a control system was required that could position the camera at an angle relative to vertical as determined by the payload operator. The control architecture of the image stabilization system, named S-Chain, was designed to exist separately while supporting the imaging system within the payload. The S-Chain’s primary task is to hold the camera stable relative to a vertical plane coincident with the longitudinal axis of the aircraft; the relative angle is nominally 0° during most tasks but can be commanded by the payload operator up to 50° when off-axis target acquisition is desired.
The control system is based on orientation feedback from a 9DOF Razor IMU, whose ATmega328 microcontroller runs a script controlling a continuous-rotation Parallax servo. The IMU is mounted directly to the camera housing with foam tape to reduce vibration feedback. The microcontroller is powered over USB from the onboard computer. The control script uses proportional control, where rotational speed is based on the difference between the desired and actual roll angles. In addition to controlling the rotation, the system can lock the gimbal in place if needed.
Fig. 9 Control Architecture of Camera Stabilization System
This system met our design requirements by holding the camera stable about the roll axis during
roll rates predicted to be seen during ISR flight.
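The proportional control law described above can be sketched as follows; the gain and rate limit are illustrative values, not the flight-tested ones:

```python
def gimbal_rate_command(desired_roll_deg, actual_roll_deg, kp=2.0,
                        max_rate_deg_s=100.0):
    """Proportional controller for the single-axis gimbal: servo rotation
    rate is proportional to the roll-angle error, clamped to a rate limit.
    Gain and limit values here are assumptions for illustration."""
    error = desired_roll_deg - actual_roll_deg
    rate = kp * error
    # Saturate so the command stays within the servo's capability
    return max(-max_rate_deg_s, min(max_rate_deg_s, rate))

# 10 degrees of roll error -> a -20 deg/s correction at kp = 2
print(gimbal_rate_command(0.0, 10.0))
```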
3.3 Target Recognition & Characterization
Vital to the autonomous ISR design goal, a target recognition system was required that could identify potential targets in the images provided by the camera. This system was integral to meeting the AUVSI KPPs related to target recognition.
The entry to this year’s AUVSI competition includes a prototype software program, called the Malinoski program, that performs autonomous object detection and recognition using Open Computer Vision (OpenCV). OpenCV is an open-source computer vision library originally developed by Intel, written in C/C++, with the ability to interface with Python. Unlike proprietary systems such as MATLAB and LabVIEW, OpenCV does not need a runtime engine and can be incorporated into existing code.
The object detection and recognition process examines all input imagery for potential targets and their characteristics. Images received from the camera undergo a contrast threshold for edge detection. The source image, in RGB format, is split into its individual color channels, and each channel is evaluated at three threshold levels (64, 128, and 192). Each thresholded channel becomes a binary image, and pixels are grouped with their neighbors to form contiguous surfaces. This process is performed to separate the target from the background. Surfaces whose area falls within the range of legal target sizes are kept for further examination. The figure shows this process performed on an image.
Fig. 10 Filtering Process of Retrieved Image
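The per-channel, three-level thresholding step can be illustrated on a handful of pixels; a pure-Python sketch of the logic (the actual program operates on full images with OpenCV):

```python
def threshold_channels(pixels, levels=(64, 128, 192)):
    """Split RGB pixels into channels and binarize each at three levels,
    mirroring the first stage of the detection pipeline. `pixels` is a
    flat list of (r, g, b) tuples rather than a full image."""
    binaries = {}
    for ch in range(3):                      # 0=R, 1=G, 2=B
        channel = [p[ch] for p in pixels]
        for level in levels:
            # 1 where the channel value exceeds the threshold, else 0
            binaries[(ch, level)] = [1 if v > level else 0 for v in channel]
    return binaries

masks = threshold_channels([(200, 30, 90), (70, 140, 250)])
print(masks[(0, 128)])   # red channel at level 128 -> [1, 0]
```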
The next phase examines potential targets that meet a polygon description. Edges in the binary images are represented by contour objects. A contour in this context is a collection of two-dimensional points, stored linearly so that the points immediately before and after any given point are its direct neighbors. Because these contours carry potential information about a shape, each is evaluated for geometrical properties. Contours are matched against a list of shape descriptors: convexity, the number of vertices, the length of edges, the relationship between edges, and the angles at each vertex. The figure below shows that a triangle and the outline of the Latin character ‘A’ are what remain after this process. Edges that fail to meet a polygon description are discarded.
Fig. 11 Target being evaluated for Outlines and Characters
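Two of the shape descriptors named above, vertex count and convexity, can be sketched for an ordered contour of (x, y) points; a simplified stand-in for the full descriptor matching:

```python
def is_convex(vertices):
    """Convexity check: the cross products of successive edge pairs must
    all share a sign for a convex polygon."""
    n = len(vertices)
    sign = 0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        x3, y3 = vertices[(i + 2) % n]
        cross = (x2 - x1) * (y3 - y2) - (y2 - y1) * (x3 - x2)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True

def classify_polygon(vertices):
    """Name a contour by vertex count, a simplified stand-in for matching
    against the full descriptor list (edge lengths, vertex angles, ...)."""
    names = {3: "triangle", 4: "quadrilateral", 5: "pentagon",
             6: "hexagon", 8: "octagon"}
    return names.get(len(vertices), "other")

print(classify_polygon([(0, 0), (4, 0), (2, 3)]))  # triangle
```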
Edges that meet the polygon descriptors are kept for the next process: color recognition. The edges that describe the shape are used to crop the target from the original image, and pixels that fall within the edge boundaries are considered target surface area. These pixels are used as input to a k-means clustering algorithm. K-means is an unsupervised clustering algorithm that partitions a data set into k subsets. Each target is expected to have only two colors, a surface color and an alphanumeric painting, which should form two natural clusters in RGB space. By setting the partition to k = 2, the group ‘centers’ are found. These centers represent the average value of each partition, which consequently are the colors of the target and the alphanumeric character. Each center is matched against a list of predefined colors using Euclidean distance, and the closest values are tagged with color names.
Fig. 12 Process of Identifying Target & Training Examples
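The k = 2 clustering and nearest-named-color matching can be sketched in pure Python; a minimal Lloyd's-algorithm stand-in for the OpenCV k-means step, with an illustrative (not the team's) color list:

```python
def dominant_two_colors(pixels, iters=20):
    """Minimal Lloyd's-algorithm k-means with k = 2: partition target
    pixels into a shape-color cluster and a character-color cluster."""
    # Seed the two centres with the lexicographic extremes for simplicity
    centers = [min(pixels), max(pixels)]
    for _ in range(iters):
        groups = ([], [])
        for p in pixels:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[d.index(min(d))].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else c
                   for g, c in zip(groups, centers)]
    return centers

# Hypothetical named-color list for the Euclidean-distance matching step
NAMED = {"red": (255, 0, 0), "blue": (0, 0, 255), "white": (255, 255, 255)}

def name_color(rgb):
    """Tag an RGB centre with the closest predefined color name."""
    return min(NAMED, key=lambda n: sum((a - b) ** 2
                                        for a, b in zip(rgb, NAMED[n])))

centers = dominant_two_colors([(250, 10, 10)] * 5 + [(10, 10, 250)] * 5)
print(sorted(name_color(c) for c in centers))  # one red, one blue centre
```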
The target has now been identified by shape, shape color, and alphanumeric color. The process continues to alphanumeric character recognition and orientation. Early prototypes for character recognition used neural networks for identification. This approach proved useful, with the caveat that it only worked on single-orientation characters: large distortions in rotation rendered a character unrecognizable. The solution was to use support vector machines (SVMs), which work by dividing spatial data into different categories. Each target is pushed through another threshold level to separate the character from the background, and this new binary image is used as input to the SVM. The SVM uses the spatial data to identify not only the character but also its orientation.
Fig. 13 Spatial Data for Identification & Character Orientation
The SVM required several thousand training images using fonts that could plausibly represent the alphanumeric character. After examining fonts available through commercial word processors and on the Internet, a total of eight fonts were used for training. Each font was recorded for the Latin alphabet and the numbers 0 through 9. For each character, the training process randomly selected a font, rotated the character to a cardinal direction, and slightly warped the perspective. The different fonts compensate for variability in the appearance of characters, the rotations allow the target to be recognized in different configurations, and the perspective warps add a window of noise to the image, since images will not be taken at a perfectly flat angle and distortions will arise. The figures below show the fonts used for training and some sample training data.
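The structure of the training set, 36 characters by 4 cardinal orientations across randomly chosen fonts and warps, can be sketched as follows; rendering and the SVM itself are omitted, and the font names are placeholders:

```python
import itertools
import random

CHARS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + \
        [str(d) for d in range(10)]
FONTS = ["font%d" % i for i in range(8)]          # placeholder font names
ROTATIONS = [0, 90, 180, 270]                     # cardinal directions

def training_labels():
    """Enumerate the (character, rotation) classes the SVM distinguishes:
    36 characters x 4 orientations = 144 classes."""
    return [(ch, rot) for ch, rot in itertools.product(CHARS, ROTATIONS)]

def sample_training_case(rng=random):
    """One randomized training example, as described in the text: a random
    font, a cardinal rotation, and a small perspective-warp magnitude
    (the warp range here is an arbitrary illustration)."""
    return {
        "char": rng.choice(CHARS),
        "font": rng.choice(FONTS),
        "rotation_deg": rng.choice(ROTATIONS),
        "warp": rng.uniform(0.0, 0.1),
    }

print(len(training_labels()))  # 144
```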
Targets have now been identified with a shape descriptor, a color, an alphanumeric character, the alphanumeric character’s color, and an orientation. All characteristics are then packaged into an XML file for transmission. This XML file and a cropped target image are transferred to the ground station via the FTP protocol. Communication between the UAV and the ground station is facilitated by the cURL software library, which is freely available, written in C, and abstracts away much of the complexity involved in network communication. Both files are transmitted to a file server on the ground station.
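Packaging the characteristics as XML and uploading via FTP can be sketched with standard-library tools; the tag names are illustrative, and the real system uses the cURL library rather than Python's ftplib:

```python
import io
import xml.etree.ElementTree as ET
from ftplib import FTP

def target_xml(shape, shape_color, char, char_color, orientation):
    """Package one target's characteristics as XML, analogous to the file
    the payload sends to the ground station (tag names are assumptions)."""
    root = ET.Element("target")
    for tag, val in [("shape", shape), ("shapeColor", shape_color),
                     ("character", char), ("characterColor", char_color),
                     ("orientation", orientation)]:
        ET.SubElement(root, tag).text = str(val)
    return ET.tostring(root)

def send_to_gcs(host, user, password, filename, payload_bytes):
    """Upload the XML (or cropped image) to a ground-station FTP server."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary("STOR " + filename, io.BytesIO(payload_bytes))

doc = target_xml("triangle", "red", "A", "white", "N")
```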
One of the ground station computers has FileZilla and Microsoft Excel installed. FileZilla receives target transmission data from the UAV. All data received is placed into a directory where an automated script extracts target details and imports them into an Excel workbook. The workbook presents real-time, updated target data to the judges and can be printed at any stage.
Fig. 14 Spreadsheet generated for Payload Operator
3.4 Onboard Computer
The payload design required that the images taken by the camera be processed to identify possible targets and to determine each target’s characteristics. This could be accomplished either at the ground control station or within the UAV. With increasing autonomy desired, the program designed to accomplish this required little human interaction; containing the image processing within the UAV was therefore a logical choice. Accomplishing target recognition and characterization within the UAV increases system flexibility, reduces bandwidth requirements, and, in a defense environment, increases security. While the images can be processed on board the aircraft, the ground control station is equipped with duplicate software allowing ground and/or post-flight processing if needed.
To sufficiently run the target identification and characterization program discussed earlier, a
1GHz processor speed compatible with Linux or Windows, and at least an 8GB hard drive was required
to store the operating system and data. The on-board computer selected was the PCM-9363, which
features a 1.8GHz dual core Intel processor, and 4GB of RAM running Windows 7. The hard drive picked
to match this computer was a 32GB Patriot Torqx 2 solid state drive and is used to meet the memory
requirements and provide system modularity. The computer is responsible for controlling and triggering
image capture and geotagging data, processing images using the Malinoski program, and executing data
transfer to the GCS. When an imaging sequence is desired, the payload operator will send a batch amount
and a start command to the onboard computer, which then triggers the camera to begin imaging until
the batch amount is reached. As the images are being taken, the camera's autofocus light triggers a
photodiode which synchronously captures the geotagging information (GPS, IMU, and magnetometer
readings) and saves it in an XML file. When the batch is complete, the computer will then download the
images from the camera and store them to be processed when available.
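The synchronized geotag capture described above can be sketched as follows. This is a minimal illustration, not the team's actual code: the element names, file layout, and sensor-value arguments are assumptions, with each record saved to a time-stamped XML file as the paper describes.

```python
import os
import xml.etree.ElementTree as ET

def write_geotag(timestamp, gps, imu, mag_heading, out_dir="geotags"):
    """Write one geotag record (GPS, IMU, magnetometer) to <timestamp>.xml.

    gps is (lat, lon, alt); imu is (roll, pitch, yaw). The photodiode trigger
    would call this at the instant the camera's autofocus light fires.
    """
    os.makedirs(out_dir, exist_ok=True)
    root = ET.Element("geotag", timestamp=timestamp)
    ET.SubElement(root, "gps", lat=str(gps[0]), lon=str(gps[1]), alt=str(gps[2]))
    ET.SubElement(root, "imu", roll=str(imu[0]), pitch=str(imu[1]), yaw=str(imu[2]))
    ET.SubElement(root, "mag", heading=str(mag_heading))
    path = os.path.join(out_dir, timestamp + ".xml")
    ET.ElementTree(root).write(path)
    return path
```

Naming the file by the same time stamp as the image allows the two to be matched later without any additional bookkeeping.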
Processing is accomplished using the Malinoski program, which copies files from the queue folder
and evaluates them for potential targets as previously discussed. The positive files are then sent to a
target folder where they are correlated with their respective data files containing camera orientation,
altitude, and position. Once correlated, the files are transmitted to the GCS via the primary Wi-Fi
network for operator-in-the-loop review. All files are stored on the onboard hard drive in case they are
needed for further review or in the event of a communication failure.
Fig. 15 Image transfer architecture
This selection of components met the basic design requirements by processing the images within
the UAV, allowing reduced ground control station requirements in the future.
3.5 SRIC Capability
The payload is also required to carry an SRIC capability. This is accomplished by entering the
provided information for the SRIC Wi-Fi network into the network configuration of the onboard
computer's Windows 7 operating system. When within the vicinity of the SRIC location, the operating
system automatically connects a secondary Wi-Fi adapter to the SRIC network, allowing the payload
operator to search for the file at the given file path via Remote Desktop.
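Because Windows exposes its wireless configuration through the `netsh wlan` command, the SRIC join could also be scripted rather than left to the automatic connect. The sketch below is an assumption about how that might look; the profile and interface names are placeholders, and the profile itself (SSID plus WEP key) is assumed to be pre-configured in Windows as the paper describes.

```python
import subprocess

def sric_connect_cmd(profile_name, interface="Wireless Network Connection 2"):
    """Build the Windows `netsh wlan connect` command that joins a
    pre-configured Wi-Fi profile on the secondary adapter."""
    return ["netsh", "wlan", "connect",
            f"name={profile_name}", f"interface={interface}"]

def connect_to_sric(profile_name, runner=subprocess.run):
    """Issue the connect command; `runner` is injectable for testing."""
    return runner(sric_connect_cmd(profile_name), check=True)
```

Separating command construction from execution keeps the logic testable on a machine without the actual adapter present.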
4. Ground Station
With a focus on increasing UAS autonomy, the ground control station requirements and duties should
decrease; however, relying on an overly autonomous system generates a higher level of risk in development.
To mitigate this risk, the ground station was required to provide both the pilot and the payload operator
enough information to manually regain control of the system and safely complete or terminate the
mission. On the pilot side this is done through the use of a backup RC pilot, assisted by a forward-looking
camera and HUD. On the payload operator's side it is accomplished by providing the operator on the
ground with duplicates of the software in the plane, and by storing all raw data, which the payload
operator can access either by virtual desktop or through a LAN network post-flight.
Fig. 16 Ground control station design
4.1 Pilot Operator Station
Control of the UAV is essential to mission execution and safety; for this, a reliable and user-friendly
interface with the UAV is required. The pilot's station is based around a program called Virtual
Cockpit, which allows the user to interface with the UAV in a manner that mimics an actual aircraft
cockpit. It provides the user real-time UAV data via a primary flight display, including an artificial
horizon with a flight data overlay, and a ground map of the desired area with a graphical representation of
the flight plan and current aircraft position. The flight plan overlaid on the ground map also highlights
the waypoint the aircraft is currently navigating to, providing the operator feedback on the UAV's intended
course.
Virtual Cockpit also allows the user to communicate a flight plan to the aircraft, either by
visually entering waypoints on the map, which is applicable during a hold task, or by pasting in
lat/long coordinates as discussed in the flight planning section, applicable for a complex area search task.
Aircraft control is further enhanced by a suite of features, such as hold around a point, return home, and
land, that the pilot may select depending on mission requirements.
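The pasted-coordinate workflow implies a small parsing step between the clipboard text and the uploaded flight plan. A minimal sketch, assuming one comma-separated decimal-degree pair per line (the actual format Virtual Cockpit accepts is not specified here):

```python
def parse_waypoints(pasted_text):
    """Parse pasted 'lat, lon' lines (decimal degrees) into waypoint tuples,
    skipping blank lines and rejecting out-of-range values."""
    waypoints = []
    for line in pasted_text.strip().splitlines():
        if not line.strip():
            continue
        lat_s, lon_s = line.split(",")
        lat, lon = float(lat_s), float(lon_s)
        if not (-90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0):
            raise ValueError(f"coordinate out of range: {line!r}")
        waypoints.append((lat, lon))
    return waypoints
```

Rejecting out-of-range values at paste time catches transcription errors before they ever reach the aircraft, which is cheaper than discovering them during the simulated-flight verification pass.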
Virtual Cockpit also provides its own flight plan verification system in which the user can simulate a
flight plan based on the aircraft performance and current weather conditions, after which the user can
iterate the flight plan to ensure the aircraft will travel over the intended path.
The pilot's station is further enhanced by displaying the nose video camera feed directly to the
pilot. This, combined with a data overlay provided by the autopilot system, provides a true heads-up
display. The nose video camera is mainly a cursory feature; however, it was vital to the development
process and was determined to give the pilot a real-time feel for the mission.
4.2 Payload Operator Station
While the pilot's station ensures the payload will be placed over the intended locations, it is the
payload operator who must ensure the proper data is gathered. To accomplish this task, the payload
operator is required to set up and monitor the imaging system, ensure proper camera positioning, and
monitor target acquisition and characterization.
The system's primary task is to collect images of ground targets along with their characteristics
and locations. This data leaves the UAV and travels to the ground station in several formats. The
pictures are sent as time-stamped .jpg files from the camera, along with corresponding .xml files that
use the time stamp as a title and contain the matching GPS, IMU, and magnetometer data. The
ground station algorithm collects these files, determines the image location using the data in the .xml file,
then uploads that into a spreadsheet with the corresponding .jpg image of the target for the operator to
review.
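Determining the image location from the .xml data amounts to projecting the camera boresight onto the ground from the aircraft's position, altitude, heading, and gimbal angle. The flat-earth sketch below illustrates the geometry under simplifying assumptions (level flight, gimbal tilted along the heading, small distances); it is not the team's algorithm.

```python
import math

def image_ground_point(lat, lon, alt_agl, heading_deg, gimbal_deg):
    """Estimate the ground point under the camera boresight (flat-earth sketch).

    gimbal_deg is the camera angle from vertical, assumed tilted in the
    direction of heading_deg; alt_agl is height above ground in meters.
    """
    # Horizontal distance from the aircraft's nadir to the boresight point.
    ground_range = alt_agl * math.tan(math.radians(gimbal_deg))
    north = ground_range * math.cos(math.radians(heading_deg))
    east = ground_range * math.sin(math.radians(heading_deg))
    # Convert meter offsets to degrees (approx. 111,320 m per degree latitude).
    dlat = north / 111320.0
    dlon = east / (111320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon
```

With the gimbal vertical (0°) the image location reduces to the aircraft's own position; a 45° tilt at 100 m altitude displaces the point roughly 100 m along the heading.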
To support the imaging task, the payload is equipped with a gimbal that can be commanded to
hold an angle relative to vertical. To meet the off-axis target requirement, the payload operator station is
equipped with a gimbal control that allows the user to input the desired angle relative to vertical that
the gimbal should hold.
As a secondary task, the payload is designed to access an SRIC. To accomplish this, the payload
operator is provided with an interface allowing the input of a WEP key and pass-code. When SRIC
operations are desired, the user simply triggers the task and can view the files as they are downloaded.
5. Communication
Constant contact with both the UAV and payload is required for a successful mission. To
accomplish this, the ground control station is equipped with an antenna array communicating with both
the UAV and GPS. The UAV features a communication protocol in which failures trigger a communication
handoff to another system, while the payload uses single-source links in which interruptions are handled
by re-sending data. The two systems handle abnormalities differently, providing both safety and mission
assurance.
5.1 UAV
Primary flight control of the UAV is accomplished through a 900 MHz radio modem that connects
the Kestrel autopilot to the pilot and Virtual Cockpit within the GCS. This is accomplished with a flat
patch antenna located on the UAV belly and a Commbox with a self-contained power supply located at
the ground control station. The Commbox is then connected to a laptop via an RS232 port and to a
primary RC transmitter used by the backup pilot.
Secondary flight control is accomplished through a 2.4 GHz RC transmitter communicating with its
respective receiver within the UAV. This transmitter is manned by the safety pilot and is used if the
900 MHz link is lost. This link is also used to arm the system, by giving control to the ground control
station pilot, or to disarm it, by removing the ground control station from the loop.
Tertiary communications are accomplished through the downlink of the live nose video feed via a
1.3 GHz link using a transmitter in the aircraft and a dipole antenna on the ground. The nose video
receiver uses an omnidirectional antenna.
Position data is supplied to the autopilot via a uBlox GPS receiver mounted on a ground plane
located under the fiberglass upper skin of the fuselage. A second receiver is connected to the Commbox
and provides differential GPS to the system, reducing position error.
5.2 Payload
Payload communication is accomplished through two Wi-Fi networks operating at 2.4 GHz. The
primary network connects the ground control station to the onboard computer and consists of
an Alfa 2.4 GHz USB wireless antenna onboard the aircraft and an ultra-long-range directional Wi-Fi
adapter at the ground control station. The ground station antenna must be pointed at the UAV within a
14° cone and is manually positioned.
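Keeping the directional adapter inside its 14° cone means the operator needs the bearing from the ground station to the UAV, which is available from the telemetry downlink. A sketch of that calculation, an illustration rather than the team's procedure (the cone is treated as ±7° about the boresight):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def within_cone(antenna_az_deg, target_az_deg, cone_deg=14.0):
    """True if the target bearing lies inside the antenna's cone
    (half-angle = cone_deg / 2), handling the 0/360 wrap-around."""
    err = abs((target_az_deg - antenna_az_deg + 180.0) % 360.0 - 180.0)
    return err <= cone_deg / 2.0
```

In practice the check would be run continuously against the UAV's telemetered position so the operator knows when to re-aim the antenna.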
The secondary network uses an identical Alfa antenna, also connected to the onboard computer,
but is used for identification of and connection to the SRIC.
To determine target location, the Arduino microcontroller is triggered to write the GPS data
collected from an Eagle Tree GPS antenna to an XML file when a picture is taken. The Eagle Tree antenna
is located under the upper skin of the fuselage.
6. Safety
System safety and mission assurance are accomplished through design and practice. Design
choices were made to promote safety, such as having the tail booms act as a propeller barrier and
providing a redundant flight control system both in the air and on the ground. Safety and mission
assurance are practiced during operation through training, team briefings, assigned duties, and the use of
iterated checklists.
7. System Validation
Validation of the FF12 UAS was accomplished using both analysis and testing on individual
subsystems and, when applicable, the full system.
7.1 Aircraft & Airframe Components
The aircraft design was validated using X-Plane simulation software, which allowed performance
models to be validated prior to flight. The design was then further validated using the Kestrel autopilot
data-logging feature to record flight data for post-flight analysis. Airframe structure was validated using
both FEA and coupon testing: coupon testing was performed on representative structures, while solid
modeling and FEA were used to verify component interaction. The aircraft design and structure were
further validated during 15 individual test flights designed to validate the aircraft design and identify
operational parameters.
7.2 Autopilot & Flight Planning
Validation of the autopilot comes from a flight simulator called Aviones. A model of CSUN's
UAV was constructed with accurate parameters and performance; lift, center of gravity, moments, and wing
loading were all considered in the construction of the model in Aviones. The simulator tested the
performance of the aircraft to verify that the calculations were accurate and that the model performed as
expected. The Aviones model was then imported into Virtual Cockpit, where a simulation of the UAV can
be run. In the simulation inside Virtual Cockpit, it can be seen whether or not the aircraft can handle all
the waypoints uploaded to it; Virtual Cockpit makes its calculations based on the Aviones model.
To adjust for incomplete data or missing parameters in the simulation, the stability of the aircraft can be
refined even further with the use of PID controls through Virtual Cockpit.
7.3 Imaging & Targeting
Payload verification was accomplished using task scenarios on both individual subsystems and
the complete payload system. A full-system bench test was performed prior to installation in the aircraft.
The targeting subsystem was verified through many training scenarios in which the system learned
multiple fonts and colors. Verification of the targeting system resulted in a 79% success rate.
7.4 Ground Control Station
Ground control station verification was performed in conjunction with correlated systems. The
pilot station was verified in conjunction with the aircraft navigation system through the use of the
Aviones software, which was able to mimic aircraft performance through simulated flights. The payload
operator station was verified through individual-component and full-system tests on the payload.
Additionally, the payload ground umbilical system was verified through testing.
7.5 Communication
Communication systems were verified individually and as a complete system through ground
range testing prior to flight. The following table lists the demonstrated ranges of each of the
communication links.
Table 1 System range testing

Apollo Field, Van Nuys, CA
  Network          Frequency   Range Tested   Successful
  Autopilot        900 MHz     3000 ft        Y
  Nose Cam         1.3 GHz     3000 ft        Y
  R/C              2.4 GHz     3000 ft        Y

Porter Ranch, CA
  Network          Frequency   Range Tested   Successful
  Computer Wi-Fi   2.4 GHz     3000 ft        Y
The control priority discussed in the flight control section was verified, showing that at any time
the RC pilot can take control of the aircraft using the 2.4 GHz transmitter.
7.6 SRIC
The SRIC connectivity protocol was tested in lab conditions, while the full SRIC system
including aircraft orbit will be tested during the payload verification test flights.
7.7 Checklist
Checklist verification was, and is currently being, accomplished during flight test and training scenarios.
7.8 Full System Test
A full UAS system with payload is projected to be flown on June 1st and will be verified using
multiple flight plan scenarios to accomplish takeoff, waypoint navigation, area search, SRIC acquisition,
and landing. During these flights, real targets will be used to establish full-system target reliability.
8. Conclusion
Based on the requirements and goals for this year's UAV, a complete system was successfully designed
and built through a systematic approach evaluating all aspects of autonomy, safety, and the overall
mission. This paper has shown that process and how it was implemented throughout the design,
manufacturing, and testing of the complete UAV system. Furthermore, to ensure a successful mission, a
significant number of hours were specifically reserved for testing. Having met all of the requirements, the
Flying Fox 12 team is confident that the mission will be accomplished successfully and looks forward to
this year's AUVSI student competition.
9. Acknowledgements
The CSUN Aeronautics team wishes to thank our advisor, Professor Tim Fox, after whom the
aircraft is named, and our graduate advisors: Ammy Cardona (USAF), Anton Bouckaert (Boeing-
SpectroLab), Franz Revalo (USAF-CIV), Hooman Fatinajed, Jack Carrick (L-3 Communications), Mahdi
Ghalami, Phillip Malinoski (HAAS), Ryan Schaafsma, and Tomasz Dykier.
We would also like to acknowledge the financial contributions from the CSUN Department of
Mechanical Engineering, CSUN Associated Students, Astro Aluminum Inc., Dickson Testing Company
Inc., and Aerocraft Heat Treating.
Additionally, the team has been honored to have the help of student engineer volunteers Curtis
Darby, Sandy Otero, and Thad Moody, who have been invaluable to the team and are excited to lead next
year's team to success.
Fig. 17 UAV climbing during test flight