
Mosquito Chasers
Mid-Project Report
Fall Semester 2015

-Full Report-

By
Chad Wachsmann
Austin Worden
Josh Olson
Ryan Robinson

Department of Electrical and Computer Engineering
Colorado State University
Fort Collins, Colorado 80521

Project Advisor: Dr. Ali Pezeshki
Approved by: Dr. Ali Pezeshki


Abstract:

Every year there are upwards of 350 million cases of Dengue Fever worldwide. This mosquito-borne illness spreads rapidly in tropical and subtropical climates and is currently without a cure. Experts in the field of disease control have expressed interest in knowing the exact location of breeding grounds for mosquitoes. There is a need for a system for effectively and efficiently identifying mosquito breeding grounds in large, urban population centers, with Mexico City at the forefront.

This report presents a new approach to creating such a system. We have used the Scale Invariant Feature Transform (SIFT) algorithm to process large sets of images. These pictures are taken from a quadcopter flying 20 feet above the ground. We then use Support Vector Machines (SVM) and vector quantization to determine the probability that an image contains a potential mosquito breeding ground. Our program contains two main sections: training and testing. First, we use a large set of known images to "train" the program so that it can recognize which new images contain potential breeding grounds and which do not. The new images are then fed into the testing program, which uses data from the training to determine a probability.

The images fed into the program are also geotagged. In the future, we plan to develop a type of heat map using the geotagging information and probability outputs, allowing us to see the relative probabilities of mosquito breeding grounds in a certain city or other locations on a map.


Table of Contents

Title
Abstract
Table of Contents
1. Introduction
2. Objectives and Design Constraints
3. SIFT Theory
4. Code
5. Picking a Correct Camera
6. Geotagging Images and Time Stamping Images
7. Risk Mitigation
8. Conclusion and Future Work
References
Appendix A - Abbreviations
Appendix B - Budget
Appendix C - Project Plan Evolution
Appendix D - Training Code
Appendix E - Testing Code
Appendix F - Remove Noise Function
Appendix G - Confusion Matrix Generator
Appendix H - Field of View Calculations


1. Introduction

Every year there are upwards of 350 million cases of Dengue Fever worldwide. This mosquito-borne illness spreads rapidly in tropical and subtropical climates and is currently without a cure. Experts in the field of disease control have expressed interest in knowing the exact location of breeding grounds for mosquitoes. There is a need for a system for effectively and efficiently identifying mosquito breeding grounds in large, urban population centers, with Mexico City at the forefront.

Our proposed solution to this problem is to attach a camera to a quadcopter and use the quadcopter to take aerial pictures of the area. We will then create a computer program to analyze the images and determine which ones contain potential mosquito breeding grounds. This approach is viable because the quadcopter can cover a large amount of land in a short period of time and has a unique viewing angle that allows us to spot potential breeding grounds on roofs or in hard-to-reach areas. In this application, we will need to quickly analyze and classify hundreds of images in a very short amount of time, a task that does not make sense to do by hand. We will use MATLAB to write a program that performs these tasks on as many images as needed. A handful of robust image processing algorithms exist that can be used in the first step of our solution: using a computer to "look" at a large set of images instead of viewing and classifying every single one ourselves. In 1999, David Lowe, a professor of computer science at the University of British Columbia, published a paper detailing an image processing algorithm called SIFT (Scale Invariant Feature Transform). This algorithm is ideal for our application because it is resistant to changes in scale, rotation, and relative position; the algorithm is able to recognize objects no matter where they are in the image, which way they face, or what size they are. SIFT is our primary tool for identifying breeding grounds. The other primary step of the process is classifying these images according to whether or not they contain a potential mosquito breeding ground. This is done using Support Vector Machines (SVM). Mathworks has written their own SVM library, and it is included in MATLAB. This SVM is what will be used to classify each image as either containing a potential breeding ground or not.

Once the quadcopter has images of an area, we will use our code to search for specific items in the images that indicate the possibility of a mosquito breeding ground. Mosquito breeding grounds occur wherever there is stagnant water. Some common sources of stagnant water are puddles, buckets, and old tires. The first step of our project will be to create a program that can recognize if any of these three items are in a specific image.


Alongside this, our more specific task is to pair this program with a method of gathering these images quickly and efficiently. We will be using a quadcopter that was created by a previous year's senior design team. We will attach a GoPro to the quadcopter and generate an automated flight plan to fly it around and take pictures. We will then take the images from the camera and feed them through the program. The program will then generate a map describing which areas have potential mosquito breeding grounds.

In Chapter 2, we will discuss the constraints we need to respect while still achieving our objectives. Chapter 3 presents the necessary theory behind the SIFT algorithm: where it comes from, how it functions, and the key features that make it useful for our purpose. Chapter 4 details our code, the process we followed, and the algorithms used to complete the task. In Chapter 5, we explain the process we went through to choose the appropriate camera for this project, along with the calculations we performed to determine the optimal height at which to fly the quadcopter. Chapter 6 describes our current progress in geotagging the images, how we gained this capability despite the GoPro's limitations, and what we plan to do with the geotagging in the future. Chapter 7 details the risks we might encounter and how to avoid them. Chapter 8 concludes our report and discusses our plans for next semester.

2. Objectives and Design Constraints

Objectives: As the semester has progressed and we have set reasonable goals for ourselves to accomplish, we decided that our best course of action would be to pursue a proof-of-concept approach. It was our goal by the end of the first semester to be able to identify a puddle, a bucket, and a tire with over 80% accuracy from the air.

1st Semester Objectives:
1. Identify a test set from the air with 80% accuracy.
2. Have a concrete plan and idea for geotagging images.

Design Constraints:

We are limited to one pound of payload on the quadcopter and to 30 minutes of flight time. We are also limited to testing only in Coby's facility until FAA approval is given to operate the quadcopter in other areas. Finally, because there is no hard requirement on how quickly the algorithm must run, the only runtime constraint we imposed was one of convenience: our entire program needs to be able to run through our data set overnight (a ten-hour period).

Flight Time: 30 minutes
Weight of Camera: 1 lb
Budget: $600
Time for code to run: 10 hours

3. SIFT Theory

SIFT stands for Scale-Invariant Feature Transform, developed by David Lowe [1]. It is an image processing technique used to determine whether or not an image contains a specific feature. The technique works on any feature regardless of its scale, rotation, translation, or, to a certain extent, illumination. SIFT works by calculating the difference between two copies of an image, each convolved with a Gaussian of a different width. The difference image is represented by a large collection of vectors. Several training sets are used to determine the dominant orientations, and these dominant orientations are assigned to localized keypoints. After the localized keypoints are determined, the gradient is computed to determine the direction and magnitude of the greatest change at that location. The result is a 1-by-128 feature vector that represents the localized key feature. Matching features in new images accesses these keypoints and compares them against the new image to find the keypoints closest to the descriptor vector.
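The difference-of-Gaussians step at the heart of SIFT can be sketched briefly. This is an illustration only, written in Python with NumPy rather than the project's MATLAB: real SIFT builds a full scale-space pyramid and localizes extrema across scales, and the sigma values below are arbitrary choices.

```python
# Sketch of the difference-of-Gaussians (DoG) computation used by SIFT.
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: filter along rows, then along columns."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 0, rows, k, mode='same')

def difference_of_gaussians(img, sigma1=1.0, sigma2=1.6):
    """Subtract two blurs of the same image; edges and blobs survive."""
    return blur(img, sigma1) - blur(img, sigma2)

# A featureless (flat) image gives a near-zero DoG response away from borders.
flat = np.ones((32, 32))
dog = difference_of_gaussians(flat)
```

Keypoints are then found where this DoG response is locally extremal across both position and scale.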

4. Code

In order to detect potential mosquito breeding grounds, we are following a method detailed in the paper "Detection of Potential Mosquito Breeding Sites based on Community Sourced Geotagged Images" [2]. In their paper, Ankit Agrawal, Ushasi Chaudhuri, Subhasis Chaudhuri, and Guna Seetharaman used several key steps to determine mosquito breeding grounds, all centered around the SIFT algorithm. By using two sets of training data, one where a potential mosquito breeding ground is present and one where it is not, this method can predict whether a new image contains a mosquito breeding ground. The first step in this process is to run the training images through the SIFT algorithm. The output of the algorithm is an array of all the feature vectors for each image. A feature vector is a 1-by-128 array that contains information about the orientation and magnitude of a key feature. Each image produces several thousand feature vectors, resulting in an X-by-128 array. The next step is to take the feature vectors from each image in the training set and union them together, combining all the feature vectors while removing any duplicates. After we have a master list of all the unique feature vectors, we use a method called k-means clustering to essentially group together all the vectors that are very similar. This method takes in an array of data and reduces it into K clusters, each of which represents several of the unique feature vectors. Following Agrawal, Chaudhuri, Chaudhuri, and Seetharaman's method, we choose K as a function of N, where N is the total number of unique feature vectors. After we have K clusters, we go back and compare each original image's full set of feature vectors to the generated clusters. This process is called vector quantization. The result of this comparison is an X-by-1 array, where each cell is an integer identifying which cluster was closest to the corresponding feature vector. We then compute a histogram of the array produced for each image. This new array is a K-by-1 array where each cell is a decimal number between 0 and 1. Each cell corresponds to a cluster (cell 1 represents cluster 1, and so on), and its value is the probability that the cluster appears in the image. After this step, we have a K-by-1 array for each image in the training data set. These arrays are then used to train a Support Vector Machine (SVM). An SVM is a tool that classifies data by graphing all the training data points fed into it and trying to find a hyperplane that separates the two sets of data. Once the SVM is trained, we can use it to predict whether or not a new image contains a mosquito breeding ground. A new image has to go through a similar process in order for the SVM to make an accurate prediction. The first step is for the image to be analyzed using the SIFT algorithm. After this, the image goes through a vector quantizer using the same clusters from the training data. This transforms the image's feature vectors into an array describing which of the training clusters each feature vector is closest to.
The next step is to compute a histogram of the newly produced array. Now we have an array describing the probability of each cluster appearing in the image being tested. This is the same data format that the SVM was trained with, so we can now use the SVM to predict whether the new image contains a mosquito breeding ground.

All of our coding was done in MATLAB. The first step in our coding process was to either create or find a working SIFT algorithm. After researching David Lowe, we found a demo version of his SIFT algorithm offered by the University of British Columbia [1]. Since this algorithm was created by David Lowe and is compatible with MATLAB, it was the more reliable option. Because we are using the demo version of his software, the algorithm has some limitations, the biggest being that it will throw an error if the input image has more than 1800 pixels in either dimension. This limits the image size to just over 3 megapixels. Using MATLAB, we can reduce a picture from its original size to one the SIFT algorithm will accept. However, this reduces the resolution of the image, which may make it more difficult to detect a mosquito breeding ground because the breeding ground will occupy fewer pixels. All of the remaining steps (k-means clustering, vector quantization, histogram, and SVM) were accomplished using functions written by Mathworks and included in MATLAB. The remainder of our code ensures that the data is handled correctly between these functions and that it is in the format each one requires. After we created a rough draft of our code, we tested it first on a sample set of 30 black squares and 30 black triangles on a white background. The goal was for our classifier to accurately predict whether an image contained a square or a triangle. To determine how well our classifier was working, we developed a confusion matrix, an easy way to visualize how well the classifier predicts the contents of an image. Figure 1 shows the confusion matrix layout we used to visualize our accuracy, and Figure 2 shows the result of our first test. This first test was not accurate enough, so we needed a way to improve the accuracy. After looking at an image with the SIFT feature vectors overlaid on it, we noticed many small vectors that looked like noise. We then developed a MATLAB function to remove these noise vectors after the SIFT step. The function takes the information generated by the SIFT algorithm directly and regenerates it; the only difference between the two sets of data is that the second set contains no information associated with feature vectors whose magnitude is below X, where X is a variable that can be adjusted to remove more or less noise. For this sample set, removing some of the noise improved the accuracy. After testing different values of X, we determined that the best value for the squares-and-triangles data set was X = 0.9, which gave us an accuracy of about 85%.
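The training pipeline described above (union of SIFT descriptors, k-means clustering, vector quantization, histogramming) can also be sketched outside MATLAB. The NumPy version below is illustrative only: random vectors stand in for real SIFT output, the minimal k-means stands in for MATLAB's kmeans function, and the cluster count of 8 is arbitrary.

```python
# Sketch of the bag-of-visual-words steps that follow SIFT.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(data, k, iters=20):
    """Minimal k-means; MATLAB's kmeans() plays this role in the report."""
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest center
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def quantize(desc, centers):
    """X-by-128 descriptors -> X-by-1 array of nearest-cluster indices."""
    d = np.linalg.norm(desc[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)

def cluster_histogram(labels, k):
    """K-by-1 vector of cluster probabilities; each cell lies in [0, 1]."""
    counts = np.bincount(labels, minlength=k)
    return counts / counts.sum()

# Two fake "images", each with a few hundred 128-D descriptors.
images = [rng.normal(size=(300, 128)), rng.normal(size=(250, 128))]
# Union of all descriptors with duplicates removed.
all_desc = np.unique(np.vstack(images), axis=0)
centers = kmeans(all_desc, k=8)
features = [cluster_histogram(quantize(img, centers), 8) for img in images]
```

Each resulting feature vector sums to 1 and is the per-image input the SVM is trained on.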

Figure 1 (confusion matrix layout):

Pictures containing potential mosquito breeding grounds, correctly classified   | Pictures containing potential mosquito breeding grounds, incorrectly classified
Pictures containing no potential mosquito breeding grounds, incorrectly classified | Pictures containing no potential mosquito breeding grounds, correctly classified

Figure 2 (first squares vs. triangles test):

23 | 7
 2 | 28
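Given a confusion matrix in this layout, overall accuracy is the sum of the diagonal (correct classifications) divided by the total count. A small sketch, in Python rather than the project's MATLAB, using the Figure 2 counts and assuming the correct counts sit on the main diagonal:

```python
# Accuracy from a 2-by-2 confusion matrix.
def accuracy(confusion):
    correct = confusion[0][0] + confusion[1][1]   # main diagonal
    total = sum(confusion[0]) + sum(confusion[1])
    return correct / total

figure2 = [[23, 7], [2, 28]]          # counts from Figure 2
print(round(accuracy(figure2), 2))    # about 0.85, matching the ~85% reported
```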

We decided to run another simple test to further verify that our code was working as planned. We generated a set of 40 black squares on a white background and 40 circles on a white background, then used 30 images from each set to train the classifier. After the classifier was trained, we tested it on all 40 images in each data set. After testing with different values of X, the most accurate value ended up being 0.5, with an accuracy of about 81%. We consider this accuracy high enough, since Agrawal, Chaudhuri, Chaudhuri, and Seetharaman's paper reported a similar accuracy. After proving that our code works with simple images, we started to gather and test images that might represent mosquito breeding grounds. We started with pictures of a bucket filled with water versus pictures with no bucket. We first ran the code with no noise reduction, which resulted in an undesirable accuracy of 56%. After testing the code with many different values of X, we realized that the noise reduction function was having little effect on the accuracy. This caused us to go back to basics and review our code, during which we discovered a significant mistake.

One of the key features of the SIFT algorithm is that it is scale-invariant and rotation-invariant. When we went to create the remove-noise function, we saw that all the key feature vectors were unit vectors. We were so focused on trying to remove the noise from the image that we started doing our calculations based on the unit vectors scaled to their appropriate lengths. The net result was that we lost scale invariance, and our accuracy was greatly diminished. After fixing this error and running the code on the bucket-of-water data set versus the no-bucket data set, we ended up with the confusion matrix shown in Figure 3, which corresponds to an overall accuracy of 86%.

25 | 0
 8 | 17

Figure 3 (bucket-of-water test)
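The remove-noise step can be sketched as a simple magnitude threshold. This Python version is illustrative only; the data layout (one magnitude per keypoint alongside its 128-D descriptor) is an assumption about the SIFT output format. Crucially, it discards low-magnitude keypoints without rescaling the descriptors themselves, which is what preserved scale invariance after the fix described above.

```python
# Hypothetical remove-noise filter: keep keypoints with magnitude >= X.
import numpy as np

def remove_noise(magnitudes, descriptors, x):
    """Drop keypoints whose magnitude falls below the threshold x,
    leaving the surviving descriptors untouched."""
    keep = magnitudes >= x
    return magnitudes[keep], descriptors[keep]

# Four fake keypoints with magnitudes and 128-D descriptors.
mags = np.array([0.2, 0.95, 1.4, 0.6])
descs = np.arange(4 * 128, dtype=float).reshape(4, 128)
m, d = remove_noise(mags, descs, x=0.9)   # keeps the 0.95 and 1.4 keypoints
```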

5. Picking a Camera - GoPro Hero 3

Early in the project we knew that we had a few design constraints to consider, and picking a camera with the right specifications was especially important. We are limited to a $600 budget for our entire team, so naturally an infrared sensor was not within our budget, as anything worth using would consume the entire budget. We also needed a camera with at least 30 minutes of battery life, as that is the maximum flight time our battery allows. Finally, the camera had to weigh under one pound for the quadcopter to operate correctly.


After a series of calculations and developing an understanding of Field of View (FOV), we decided that a GoPro Hero 3 would have enough resolution to accomplish what we needed. We were mainly concerned with how high we could fly the quadcopter while maintaining a satisfactory pixels-per-centimeter rating. Appendix H contains the calculations performed to determine the pixels per centimeter at each height we could fly the quadcopter. These were the final results at possible flying heights:

20 ft: 2.8 pixels/centimeter
30 ft: 1.9 pixels/centimeter
40 ft: 1.4 pixels/centimeter
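The pixels-per-centimeter figure follows from the horizontal field of view: the ground strip imaged at height h is 2 h tan(FOV/2) wide, and dividing the sensor's horizontal pixel count by that width gives the ground resolution. The camera parameters below (3680-pixel-wide stills and a 94.4-degree horizontal FOV) are assumptions that happen to reproduce the numbers above; Appendix H contains the report's own calculations.

```python
# Ground resolution (pixels per centimeter) versus flight height.
import math

def pixels_per_cm(height_ft, h_pixels=3680, h_fov_deg=94.4):
    height_cm = height_ft * 30.48                                  # feet -> cm
    ground_width_cm = 2 * height_cm * math.tan(math.radians(h_fov_deg / 2))
    return h_pixels / ground_width_cm

for h in (20, 30, 40):
    print(h, "ft:", round(pixels_per_cm(h), 1), "px/cm")
```

Note that the resolution falls inversely with height, which is why 40 ft gives exactly half the pixels per centimeter of 20 ft.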

We concluded that the GoPro Hero 3 Silver Edition would give us the image quality necessary for our SIFT algorithm to work correctly toward our goals this semester.

The camera also made all of the initial testing go very smoothly because of the GoPro's user-friendly interface: we could retrieve the images from our phones before ever needing to create a triggering mechanism for the camera. If it turns out we need higher-quality images later in the project, when we incorporate a flight plan with a constantly moving quadcopter, we will be able to re-evaluate then. For now, the images behave as we would expect.

Downside to the GoPro: one major downside to working with a GoPro is that it unfortunately does not have an internal GPS chip. Because the overall goal of the project is to develop a heat map, it will be necessary to figure out a way to geotag each image, as described in the following section.

6. Geotagging

As stated above, one of the downsides to using a GoPro is that it has no internal GPS chip. However, because we decided to move forward with this camera, the only way to create a heat map is to geotag the images. After some research and discussion, we constructed a way to do this. Every JPEG image, which is the file type a GoPro captures, carries a set of metadata known as EXIF (Exchangeable Image File Format) metadata. This metadata includes copious amounts of information.

The two pieces we are most concerned with are the date stamp and the GPS tag. These two lines of data are important to us because we have found that the PIXHAWK system flown with our quadcopter can output a set of IMU data. This IMU data tracks the GPS coordinates of the quadcopter to the nearest meter, along with a timestamp measured from the moment the quadcopter was turned on. However, since this is not the same format as the GoPro's timestamp, it is necessary to create an algorithm to convert between the two.

In order to write to an image's EXIF data, we found an open-source MATLAB script by Peter Burns [3]. This MATLAB code takes the existing metadata and overwrites it to create an image carrying the GPS coordinates from the IMU data. An example of the generated IMU data set is attached in an appendix. Using one large MATLAB script with a few large "for" loops, we will be able to take the GPS coordinates and timestamped data from the IMU data file the PIXHAWK generates and rewrite the metadata of a GoPro-captured image. By doing this, we will most likely be able to use Google Maps with an add-on to create a heat map of where we spotted possible mosquito breeding grounds.
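The core of that script is matching each image to the nearest-in-time IMU record so its EXIF data can be rewritten with GPS coordinates. A minimal sketch of the matching step, in Python rather than the MATLAB used in the project; the record layout (epoch seconds, latitude, longitude) and the sample coordinates are hypothetical, and the actual EXIF write is left to Peter Burns' script [3]:

```python
# Match an image timestamp to the closest IMU GPS fix.
def nearest_fix(image_time, imu_records):
    """imu_records: list of (epoch_seconds, lat, lon) tuples.
    Returns the (lat, lon) of the record closest in time."""
    t, lat, lon = min(imu_records, key=lambda r: abs(r[0] - image_time))
    return lat, lon

# Hypothetical IMU log: three fixes five seconds apart.
imu = [(100.0, 19.4326, -99.1332),
       (105.0, 19.4327, -99.1333),
       (110.0, 19.4328, -99.1334)]
fix = nearest_fix(106.2, imu)   # closest record is the one at t = 105
```

In practice the GoPro clock and the PIXHAWK boot-relative clock must first be brought onto a common time base, which is the conversion algorithm described above.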

7. Risk Mitigation

The following table lists possible risks that may prevent a successful working product from being delivered by May. We have therefore created a possible testing plan for each risk listed.

Risk Mitigation

Risk # | Risk Description                     | Risk Result               | Risk Level | Mitigation Plan
1      | Incompatible Image                   | Software would be useless | 8          | Double check image type and be in constant communication with Josh and Austin
2      | Unable to Trigger Camera             | Would have no pictures    | 9          | Work a plan with Coby
3      | Images are Blurry                    | Software may not work     | 7          | Work with GoPro and understand how to receive a clear image
4      | Inadequate Sample Set                | Code would be useless     | 8          | Understand the research enough to generate our own sample set
5      | Inadequate Open Source Software      | The project is a bust     | 10         | Invest some of our budget into commercial software
6      | Code doesn't work during dry seasons | Only works for wet times  | 3          | Create a more robust code and do further research
7      | SIFT and SVM are not Compatible      | Project is a bust         | 10         | Constant communication between Josh and Austin

Risk #2: This is the first risk we are concerned with. Triggering the camera is important to test, and our testing plan for it can be simple. Right now the camera is triggered on a 5-second interval; however, as we further develop the project with a flight plan, we will need to make sure the camera triggers when we need it to. A simple check of triggering the camera 20 times and verifying there is no delay would be sufficient to determine whether more time needs to be spent on this process.

Risk #3: Blurry images would prevent the system from working. We have therefore decided to implement a MATLAB script that can sort images by quality and eliminate those that will not work, using a technique called Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE). We will run the BRISQUE code on our images and then do a visual check on 50 or so images to make sure the code eliminated all of the blurry images from our data set.
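BRISQUE itself involves fitting natural-scene statistics and is beyond a short example, but the idea of scoring images and discarding low scorers can be sketched with a much simpler blur proxy: the variance of the image's Laplacian. Blurred images have weak second derivatives, so they score low. This is a stand-in for the BRISQUE check described above, not BRISQUE, and the threshold is arbitrary.

```python
# Simple blur proxy: variance of the discrete Laplacian (4-neighbor stencil).
import numpy as np

def laplacian_variance(img):
    """Variance of the Laplacian over the image interior."""
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] - 4 * img[1:-1, 1:-1])
    return lap.var()

def is_sharp(img, threshold=0.01):
    """Flag images whose Laplacian variance clears the threshold."""
    return laplacian_variance(img) > threshold

# A checkerboard (hard edges) scores far higher than a smooth gradient.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
smooth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
```

A sorting script would compute this score for every image and drop those below the threshold before the SIFT stage.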

Risk #5: Implementing our Design. As our design is built, several steps will be taken to ensure that the code is being written correctly. Our code has four major sections: SIFT, k-means clustering, vector quantization, and SVM. Each of these sections will be written separately and verified.

The SIFT section needs to be able to read in multiple images and run them through the SIFT algorithm. To test this, we created the section that reads in the images. This section needs to read the image names from a folder and feed each name into the SIFT algorithm. To verify it, all that was needed was a simple print statement that prints the name of each image before feeding it into the algorithm. The algorithm itself was tricky to verify. SIFT is a process for determining the keypoints in an image, and David Lowe created this method. Ultimately, we concluded that the SIFT algorithm we were using was correct and accurate because it was written by David Lowe himself. The last task for this section of code is to correctly store the output from the algorithm. This was validated by running a couple of test pictures through the algorithm, looking at the output, and comparing it to what the code stored. In MATLAB, it is very easy to compare variables in the workspace. While testing this task, we determined that the data type we were using to store the data was not flexible enough to hold differently sized data sets: each picture run through SIFT has a different-sized output, so the global variable needs to be flexible enough to account for this.

The next section of the code is the k-means clustering. This is a relatively small section of code. The first portion combines all the unique vectors output by the SIFT section into one variable. This union of data was relatively easy to verify: we created a couple of small variables, combined them, and looked at the output in MATLAB's workspace. Since the union of these vectors was done correctly on a small scale, we expect it to work when we union thousands of vectors. Once the union was complete, the vectors had to go through k-means clustering, which groups vectors into K unique clusters. Luckily, MATLAB provides a function to do just this. The easiest way to verify it was working correctly was to feed in several vectors and examine the output, which was a matrix containing information about the K clusters. Since this is an official MATLAB function and the output was in the expected format, we conclude that the k-means clustering section works correctly.

The third section of the code is vector quantization. This section compares the clusters to the original SIFT data and classifies each of the original SIFT vectors as one of the new clusters. MATLAB also provides a function for this. To verify that the output was what we expected, we generated a couple of sample clusters and sample vectors and fed them into the vector quantizer. The output should be a matrix of size 1 by the number of vectors, where each cell is an integer between 1 and the number of clusters. When we looked at the output of the quantizer, the matrix was the correct size and contained only valid values. This, coupled with the fact that MATLAB wrote the function, makes it safe to assume that the vector quantizer works correctly. Once quantization is complete, a histogram is created for each image. The histogram produces a matrix of size 1 by the number of clusters, where each cell shows how many times a certain cluster appears in the image. Each value is then divided by the total number of vectors, so that the value in each cell represents the probability of a certain cluster appearing in the image. This was verified much like the vector quantizer: we fed in sample data and looked at the output, which was a matrix of the correct size where each cell had a value between 0 and 1. This is exactly as it should be and verifies that the histogram section works correctly.

The final section to verify is the SVM (Support Vector Machine) section. The job of this section of code is to classify new data into one of two groups based on the training data received. MATLAB provides a function that does just this. As with the other sections, we fed sample data into it and looked at the output, which should be a label classifying the new data into one of the two predefined groups. The output was indeed one of the two groups, so we were able to verify that the function at least does what MATLAB says it should. However, it was very tricky to create a sample data set from which we could determine the accuracy of the SVM, so we performed a proof-of-concept test to verify that the classifier works along with the rest of the code.
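The SVM's job here is to find a hyperplane separating the two classes of histogram features. As a sketch of that idea only (this is not MATLAB's SVM, nor a true maximum-margin solver), a perceptron finds some separating hyperplane whenever the training data are linearly separable:

```python
# Toy separating-hyperplane finder: a perceptron standing in for the SVM idea.
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """y in {-1, +1}; returns weights w and bias b with sign(Xw + b) = y
    once the data are separated."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified: nudge the plane
                w += lr * yi * xi
                b += lr * yi
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)

# Two well-separated clusters standing in for "breeding ground" vs "not".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 0.5, (20, 2)), rng.normal(-2, 0.5, (20, 2))])
y = np.array([1] * 20 + [-1] * 20)
w, b = train_perceptron(X, y)
```

An SVM additionally maximizes the margin between the hyperplane and the nearest points of each class, which is why the real pipeline uses MATLAB's SVM rather than a perceptron.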

Proof of Concept

The first step in verifying that the implementation of our design works correctly was to test our code on a set of simple images. We chose to generate a set of images of black squares on a white background and a set of images of black triangles on a white background. If our code and classifier are working correctly, then these simple images should let us test whether the code works. Once all the code sections were integrated, we fed the two simple sample sets of images into our program. As a result, our code created a classifier that we could use to predict whether a new image contains a square or a triangle. To verify the classifier, we ask it to predict the class of the very images used to train it; a correctly working classifier should achieve 100% accuracy on its own training images. The first time we did this, we found that the classifier did not work. After examining the code, we found a possible reason: the vectors output by the SIFT code were all unit vectors, and we believe these vectors need to be scaled back to their respective lengths. We will continue to test our classifier using this simple data set. Our plan is that this data set, along with the sectional testing, will allow us to find all the errors in our code. If we work out all the bugs and still do not reach 100% accuracy, we will have to go back to the drawing board to determine a new method of classifying images. The sections with the highest probability of not working the way we need are the SIFT section and the SVM section. If one of these sections proves insufficiently accurate, we will find a new method to accomplish the same goal.
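Generating the simple test images takes only a few lines. This sketch shows one possible way, with hypothetical canvas size (200 by 200 pixels) and output file names; the real set contains many such images.

```matlab
% One black square on a white background.
img = uint8(255 * ones(200, 200));           % white canvas
img(60:140, 60:140) = 0;                     % black square
imwrite(img, 'square_01.jpg');

% One black triangle on a white background, drawn row by row.
tri = uint8(255 * ones(200, 200));
for r = 60:140
    halfWidth = r - 60;                      % widens toward the base
    tri(r, 100-halfWidth:100+halfWidth) = 0;
end
imwrite(tri, 'triangle_01.jpg');
```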

8. Conclusion and Future Work


We had a very productive semester. We were able to successfully create a classifier that can predict the presence of a potential mosquito breeding ground. This classifier was created using the SIFT algorithm developed by David Lowe [1] and by following the implementation described by Agarwal, Chaudhuri, Chaudhuri, and Seetharaman [2]. On top of this success, we started to generate a set of data that can be used to train our classifier. We were able to generate a robust data set because of the calculations done to determine the best altitude from which to take pictures. We have also made good progress toward geotagging the images, which is the first major step toward eventually generating a heatmap of potential mosquito breeding grounds.

After looking back at all we have accomplished this semester, we realize that there is still plenty of work to be done in several areas. The first area to focus on is the code. As of now, our code can predict the possible presence of mosquito breeding grounds with an accuracy of up to 86%, and there is room to improve this statistic. There are several steps we can take to increase the accuracy of the classifier. The first is to gather more training data. Agarwal, Chaudhuri, Chaudhuri, and Seetharaman [2] gathered close to 500 images: 250 with potential mosquito breeding grounds and 250 without. We plan to increase our training data from 50 images to at least 500. More training data usually improves accuracy; however, as the number of training images grows, the classifier can become overtrained and the overall accuracy of the system will decrease. Next semester, we would ideally determine an optimal number of images for training our classifier.

Another way to improve accuracy would be to use a different SVM. The SVM we used is the one MathWorks wrote and included in MATLAB, but there are many other open-source SVMs we could try; for example, Agarwal, Chaudhuri, Chaudhuri, and Seetharaman mention which SVM they used for their implementation of this design. Currently, our code has to reduce the resolution of the images before passing them to the SIFT algorithm, and this is a possible area that could be hurting our accuracy. A possible fix would be to take a single picture, crop it into several smaller pictures, and run the predictions on those smaller pictures.

One remaining problem is that the output of the code could be improved. Currently, the output is an array, which could be made much easier to read and interpret. One possible solution we have discussed is to generate a heatmap displaying the probability of a mosquito breeding ground being at a certain location. To do this, we would need to find and use a different SVM, one that can report back the probability that its prediction is correct.
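The cropping idea mentioned above can be sketched in a few lines. The image file name and the 4-by-4 grid size here are hypothetical placeholders; each tile would then be fed to the prediction code at full resolution.

```matlab
% Split one large image into a grid of tiles (hypothetical 4x4 grid).
X = imread('aerial.jpg');                    % assumed input image name
[rows, cols, ~] = size(X);
n = 4;                                       % tiles per side
rStep = floor(rows/n);
cStep = floor(cols/n);
for i = 1:n
    for j = 1:n
        % extract tile (i,j) and save it for individual prediction
        tile = X((i-1)*rStep+1 : i*rStep, (j-1)*cStep+1 : j*cStep, :);
        imwrite(tile, sprintf('tile_%d_%d.jpg', i, j));
    end
end
```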

Once we have our core code working, we have several possible additions that would improve our overall project. Currently, when we attach the GoPro to the quadcopter, the quadcopter has to be flown manually. There are many open-source software packages that we could use to automate the flight path of the quadcopter. This would allow us to optimize the coverage of the quadcopter and the coverage of our prediction software.

Currently, we are only able to predict whether there is a potential mosquito breeding ground in an image right now. The next step would be to train our SVM to predict whether a location will contain a potential breeding ground once it starts raining there. For example, we currently assume that a bucket full of water is a potential breeding ground while an empty bucket is not; however, once it starts raining near the bucket, the bucket will fill with water and become a potential breeding ground for mosquitos. There are two possible ways to implement this change. The simplest would be to create two instances of our code: one instance trained to predict whether an image currently contains a potential breeding ground, and a second trained to predict whether the image will contain a breeding ground in the future. The second way would be to choose a different SVM that supports a 3-class classifier. The three classes would be current breeding ground, future breeding ground, and no chance of being a breeding ground. The plan for next semester is to try both solutions and implement whichever is more accurate.
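One hedged sketch of the 3-class option: MATLAB's fitcecoc combines binary SVM learners into a multiclass model, which may avoid the need for a third-party SVM. The data, class layout, and label encoding below are hypothetical toy values, not our real histogram features.

```matlab
% Three-class classification built from binary SVMs via fitcecoc.
rng(2);
data   = [randn(15,2) + 4; randn(15,2); randn(15,2) - 4];
labels = [2*ones(15,1); ones(15,1); zeros(15,1)];  % 2 = current breeding
                                                   % 1 = future breeding
                                                   % 0 = no chance

model = fitcecoc(data, labels);   % one-vs-one SVM learners by default

% A point at the first cluster's center should land in the "current" class.
predict(model, [4 4])             % should return 2
```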


References

[1] D. Lowe, 'Keypoint detector', Cs.ubc.ca, 2015. [Online]. Available: http://www.cs.ubc.ca/~lowe/keypoints/. [Accessed: 12-Dec-2015].

[2] A. Agarwal, U. Chaudhuri, S. Chaudhuri and G. Seetharaman, 'Detection of potential mosquito breeding sites based on community sourced geotagged images', Geospatial InfoFusion and Video Analytics IV; and Motion Imagery for ISR and Situation Awareness II, 2014.

[3] P. Burns, 'run_exiftool - File Exchange - MATLAB Central', Mathworks.com, 2015. [Online]. Available: http://www.mathworks.com/matlabcentral/fileexchange/42000-run-exiftool. [Accessed: 12-Dec-2015].


Appendix A - Abbreviations

SIFT - Scale Invariant Feature Transform
SVM - Support Vector Machine
Metadata - a set of data that describes and gives information about each image
IMU - Inertial Measurement Unit
GPS - Global Positioning System
EXIF - Exchangeable Image File Format


Appendix B - Budget

The budget for this project was $200 for each member, totaling $800, all of which came from the ECE Student Project Fund. The only item we spent part of this budget on was the GoPro camera, which we bought used. In order to obtain a complete set of sample data to train our program, we visited thrift stores looking for a 5-gallon bucket and tire stores looking for an old tire; in the end, the bucket and tire were donated by CSU Surplus and Alpine Tire Center, respectively. Fortunately, we were also able to borrow a quadcopter to take the aerial images, free of charge.

Items Purchased                  Cost
GoPro Hero 3 Silver Edition      $200
5 Gallon Bucket                  Donation: CSU Surplus
Old Tire                         Donation: Alpine Tire Center
Total:                           $200

Figure 4


Appendix C - Project Plan Evolution

Initial Project Plan -


Mid-semester Plan Revision –


End of Fall Semester Project Plan


Final Project Plan:

Project Plan Explanation:

As the semester progressed, we naturally needed to make changes to our project plan. However, we still managed to stay largely on schedule and accomplish a large portion of our early-semester goals. One of the major changes was losing one of our group members, which meant reassigning tasks to the other members; in the end, it proved much less of a blow to the project than we originally thought it would be. One of the main objectives we are still working on is the geotagging process and a MATLAB script that automates it. As stated earlier, we do have a way


Appendix D – Training Code

%read in images
%loop until all images are read in
clear;
addpath('Yes');
addpath('No');
YesPics = dir('Yes/*.jpg'); %directory containing water photos
NoPics = dir('No/*.jpg');   %directory containing no water photos

%read in the Yes images and perform SIFT on them
SizeYes = length(YesPics);
clear('A');
A = cell(1,SizeYes);
for i = 1:SizeYes
    %Convert selected image to .pgm file

    X = imread(YesPics(i).name);
    %resize image so that SIFT doesn't error out
    c = imresize(X, [1600,900]);
    imwrite(c,'test.pgm');
    %call SIFT function on the image
    [image, descriptors, locs] = sift('test.pgm');
    %show the image with the SIFT vectors in the image
    %showkeys(image, locs);

    % [cleandes,cleanlocs] = removeNoise(descriptors,locs);
    % newdes = cleandes;
    % for j = 1:size(newdes,1)
    %     newdes(j,:) = newdes(j,:)*cleanlocs(j,3);
    % end

    %put the vectors from each picture into the cell array 'A'
    A{i} = descriptors;
    disp(i)
end

%read in all the No pics and perform SIFT on them
SizeNo = length(NoPics);
clear('B');
B = cell(1,SizeNo);
for i = 1:SizeNo
    %Convert selected image to .pgm file

    X = imread(NoPics(i).name);
    %resize image so that SIFT doesn't error out
    c = imresize(X, [1600,900]);
    imwrite(c,'test.pgm');
    %call SIFT function on the image
    [image, descriptors, locs] = sift('test.pgm');
    %show the image with the SIFT vectors in the image
    %showkeys(image, locs);

    % [cleandes,cleanlocs] = removeNoise(descriptors,locs);
    %put the vectors from each picture into the cell array 'B'


    % newdes = cleandes;
    % for j = 1:size(newdes,1)
    %     newdes(j,:) = newdes(j,:)*cleanlocs(j,3);
    % end

    B{i} = descriptors;
    disp(i)
end

%union them together
N = A{1};   %seed the union with the first image's vectors
for i = 2:SizeYes
    N = union(N, A{i}, 'rows');
end

for i = 1:SizeNo
    N = union(N, B{i}, 'rows');
end

disp('union')
%k-means clustering of the unioned set
%get the size of the matrix (# of vectors in the images)
[D,E] = size(N);
%http://www.mathworks.com/help/stats/kmeans.html#buefthh-2
%Clusters contains the cluster information
K = ceil(sqrt(D));
[idx,Clusters] = kmeans(N, K);
disp('clustered')

%vector quantization of the input images
%create vector quantizer
%Codebook: k-by-N matrix, each column is a codeword
Codebook = transpose(Clusters);
%step returns which cluster the input vector is closest to
%Input can be k-by-1 or k-by-M, k = # of vector descriptors
%http://www.mathworks.com/help/dsp/ref/dsp.vectorquantizerencoder-class.html
%http://www.mathworks.com/help/dsp/ref/dsp.vectorquantizerencoder.step.html
VectorQuantizer = dsp.VectorQuantizerEncoder('Codebook',Codebook);

%Vector Quantize A
%Create cell structure to hold all the clustered arrays
YesClustered = cell(1,SizeYes);
%loop through all the images and vector quantize each of them
for i = 1:SizeYes


    %get the vectors for each image
    Asimple = A{i};
    %transpose the vectors for each image
    Atranspose = transpose(Asimple);
    %vector quantize each image

    release(VectorQuantizer);
    YesClustered{i} = step(VectorQuantizer,Atranspose);
    disp(i)
end

%Vector Quantize B
%Create cell structure to hold all the clustered arrays
NoClustered = cell(1,SizeNo);
%loop through all the images and vector quantize each of them
for i = 1:SizeNo

    %get the vectors for each image
    Bsimple = squeeze(B{i});
    %transpose the vectors for each image
    Btranspose = transpose(Bsimple);
    %vector quantize each image

    release(VectorQuantizer);
    NoClustered{i} = step(VectorQuantizer,Btranspose);
    disp(i)
end

disp('quantized')
%histogram of outputs from vector quantization
%calculate the total number of clusters
NumClusters = size(Clusters);
NumClusters = NumClusters(1);
%create array to hold the resulting histograms
Yes = zeros(SizeYes,NumClusters);
%loop through all the images and calculate the histogram for each
for i = 1:SizeYes

    %convert the clustered array into the right data format
    Y = double(YesClustered{1,i});
    %perform histogram of each image
    [h,counts] = histcounts(Y,NumClusters);
    ysize = size(Y);
    ysize = ysize(2);
    %convert the histogram into the relative freq of each cluster in each image
    Yes(i,:) = h/ysize;
end

%create array to hold the resulting histograms
No = zeros(SizeNo,NumClusters);
%loop through all the images and calculate the histogram for each
for i = 1:SizeNo

    %convert the clustered array into the right data format
    nope = double(NoClustered{1,i});
    %perform histogram of each image
    [h,counts] = histcounts(nope,NumClusters);
    nsize = size(nope);
    nsize = nsize(2);
    %convert the histogram into the relative freq of each cluster in each image
    No(i,:) = h/nsize;
end
disp('histo')

%create inputs for the SVM model
%one array with the histogram information
%one array with the water/no-water labels
Size = SizeYes + SizeNo;


SvmData = zeros(Size,NumClusters);
SvmLabel = zeros(Size,1);
for i = 1:SizeYes

    SvmData(i,:) = Yes(i,:);
    SvmLabel(i,1) = 1;
end

for i = 1:SizeNo
    SvmData(SizeYes + i,:) = No(i,:);
    SvmLabel(SizeYes + i,1) = 0;
end

%create SVM model
SvmModel = fitcsvm(SvmData,SvmLabel);
%export SVM model, quantizer, and cluster count
save 'SvnModel.mat' SvmModel;
save 'VectorQuantizer.mat' VectorQuantizer;
save 'NumClsuters.mat' NumClusters;
save 'everything.mat'


Appendix E – Testing Code

%Main function to test each image
function [label] = testing(image)
addpath('Yes');
addpath('No');
addpath('testing');
%load the output variables from the training program
load('SvnModel.mat');
load('VectorQuantizer.mat');
load('NumClsuters.mat');
%Read in image and convert to .pgm file
X = imread(image);
%resize image so that SIFT doesn't error out
c = imresize(X, [1600,900]);
imwrite(c,'tester.pgm');
%call SIFT function on the image
[~, descriptors, locs] = sift('tester.pgm');
%call remove function
% [cleandes,cleanlocs] = removeNoise(descriptors,locs);
% newdes = cleandes;
% for i = 1:size(newdes,1)
%     newdes(i,:) = newdes(i,:)*cleanlocs(i,3);
% end
%transpose the descriptors matrix so it can be quantized
descriptorsT = transpose(descriptors);
%Vector Quantization of image
release(VectorQuantizer);
quantizedImage = step(VectorQuantizer,descriptorsT);
%Histogram
y = double(quantizedImage);
[h,counts] = histcounts(y,NumClusters); %h holds histogram data
ysize = size(y);
ysize = ysize(2);
%Use SVM model to predict image


relativeFreqs = h/ysize;
label = predict(SvmModel,relativeFreqs); %Report prediction


Appendix F – Remove Noise Function

%Removes SIFT vectors below magnitude X
function [newDesc,newLocs] = removeNoise(desc,locs)
%determine the total number of vectors
Size = size(locs);
Size = Size(1);
X = .5;
%Variable used to store the number of vectors with magnitude above X
newSize = 0;
for i = 1:Size

    %column 3 of the locs array contains vector magnitude information
    if(locs(i,3) > X)
        %if the vector has a magnitude above X, then increment the counter
        newSize = newSize + 1;
    end
end
%generate empty arrays to store the data about the vectors that are
%larger than X
newLocs = zeros(newSize,4);
newDesc = zeros(newSize,128);
%local counter to track how many vectors have been stored in the new arrays
a = 1;
for i = 1:Size
    if(locs(i,3) > X)
        %if the magnitude of the vector is above X, copy all the information
        %corresponding to that vector to the new arrays, then increment a
        newLocs(a,:) = locs(i,:);
        newDesc(a,:) = desc(i,:);
        a = a + 1;
    end
end
%showkeys(image, newLocs);
end


Appendix G - Confusion Matrix Code

clear;
%gather the name and location of all the images to be tested
YesPics = dir('testing/Yes/*.jpg'); %directory containing water photos
NoPics = dir('testing/No/*.jpg');   %directory containing no water photos
addpath('testing');
addpath('testing/Yes');
addpath('testing/No');
%determine number of pictures to be tested
SizeYes = length(YesPics);
SizeNo = length(NoPics);
%clear variable and recreate it with the correct size
clear('A');
A = cell(1,SizeYes+SizeNo);
%create a master list with all the image names
for i = 1:SizeYes
    %test each image and record the result in A

    A{i} = testing(YesPics(i).name);
    A{i+SizeYes} = testing(NoPics(i).name);

    disp(i)
end
%initialize the confusion matrix
Confusion = zeros(2,2);
%sort the results into a confusion matrix
for i = 1:(SizeYes+SizeNo)
    if(i < (SizeYes+1))

        if(A{i} == 1)
            %Yes picture correctly classified
            Confusion(1,1) = Confusion(1,1) + 1;
        else
            %Yes picture incorrectly classified
            Confusion(1,2) = Confusion(1,2) + 1;
        end
    else
        if(A{i} == 1)
            %No picture incorrectly classified
            Confusion(2,1) = Confusion(2,1) + 1;
        else
            %No picture correctly classified
            Confusion(2,2) = Confusion(2,2) + 1;
        end
    end
end


Appendix H - Field of View Calculations