Machine Vision for Line Detection with LabVIEW


LabVIEW Machine Vision for Line Detection

EGR315: Instrumentation Spring 2010

Than Aung and Mike Conlow

Department of Physics and Engineering

Elizabethtown College

Elizabethtown, Pennsylvania

Email: [email protected], and [email protected].

Abstract – This project improves upon the previous development of a visual obstacle avoidance algorithm without using proprietary software that is not financially practical to purchase. The goal is to use more advanced methods to produce more precise output in terms of turning angle and the nearest point of interest.

I. Introduction

The following is an analysis of improvements made to a system previously developed using NI Vision Development software to detect white lines on non-uniform grass. The need for this arose from the excessive complexity of the vision system on an autonomous robot that is used as a learning platform. The current system [5] uses a DVT Legend 554C that collects and filters the images internally and transmits the relevant data, via a TCP/IP string, to a LabVIEW program that performs closed-loop motor control. During the fall semester of 2009, a prototype virtual instrument was developed using the NI Vision Development package to attempt to improve processing speed by using LabVIEW for the entire image processing procedure through a USB webcam [6].

There were several improvements that needed to be made to the prototype in order to justify its implementation over the previous vision system. The turning algorithm depended on a set of line detection sub-virtual instruments that generated large amounts of noise due to inadequate intensity filtering. To resolve these and other issues, the filtering, thresholding, and line detection were programmed using the base package of LabVIEW, with LabVIEW IMAQ and IMAQ USB used to capture the images from a webcam [6].

The result is a great improvement over the previous version. Further enhancements still need to be implemented in order to operate properly in the field, but the goals that were set for this semester have been met.


II. Background

The previous project mainly employed NI Vision Development Module 9.0 (Trial Version), which provides various image processing and machine vision tools. Using its edge-detection sub-virtual instrument, we implemented the following line detection algorithm. The image resolution is set to 320x240 pixels, capturing at 8 frames per second. Each frame is converted to an 8-bit grayscale image, and the image is then segmented into regions as follows [6]:

Figure 1: Edge Detection Regions

White lines are detected with IMAQ Edge Detection by finding lines in the eight border regions represented in green in Figure 1. In our algorithm, we use two vertical line detectors (VL1 and VL2) and two horizontal line detectors (HL1 and HL2). VCL (Vertical Center Line) is then calculated by averaging VL1 and VL2. Likewise, HCL (Horizontal Center Line) is calculated by averaging HL1 and HL2. The line angle α is then calculated by finding the angle between HCL and VCL, using

$$\tan\alpha = \left|\frac{m_2 - m_1}{1 + m_1 m_2}\right|$$

where $m_2$ is the slope of HCL and $m_1$ is the slope of VCL. By using the intersection point and the angle between HCL and VCL, the appropriate heading for the robot is determined.
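As a quick sanity check on this formula, here is a minimal Python sketch (our illustration language; the original implementation is a LabVIEW block diagram) that computes the angle between two lines from their slopes:

```python
import math

def line_angle(m1, m2):
    """Angle between two lines with slopes m1 (VCL) and m2 (HCL),
    from tan(alpha) = |(m2 - m1) / (1 + m1*m2)|."""
    if 1 + m1 * m2 == 0:  # perpendicular lines: tan(alpha) is undefined
        return 90.0
    return math.degrees(math.atan(abs((m2 - m1) / (1 + m1 * m2))))

# Example: a steep VCL (slope 8.0) against a shallow HCL (slope 0.2)
print(line_angle(8.0, 0.2))  # ~71.6 degrees
```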

Although the algorithm seems simple enough, it has several drawbacks. First, when converting from 32-bit color images to 8-bit grayscale images there is a loss of edge information in every frame; in the presence of background noise it is very difficult to detect stable edges, making line detection less accurate. Second, using four edge detectors is unnecessarily redundant, and over-use of edge detectors results in slower processing. Third, we did not have time to implement the filters to eliminate the noise and to threshold away the unnecessary pixel information. Finally, since we used the 30-day trial version of NI Vision Development Module, the only option for continuing to use the program was to purchase the three-thousand-dollar full version.

Therefore, the primary motivation of our project was to solve the problems we faced by using NI Vision Development, and to improve upon the shortcomings of the first project. With these goals in mind, we developed the second version of our line detection algorithm.

III. Implementation

Our project goals were to reduce the noise during image acquisition, enhance the edge information, and stabilize the detected line even in the presence of background reflections and light sources. We therefore divided the project into modular processes to achieve these goals.

A. Single Color Extraction

The images acquired from the camera (Creative VF-0050) are 320x240 32-bit color images. Although we could simply convert the 32-bit color (RGB) images to 8-bit grayscale images by averaging the color channels, we learned a better method for eliminating the noise and enhancing the edge information. Since the background of the images is mostly green, we decided that by simply extracting the blue color pixels from the RGB images, we could reduce the noise and enhance the white lines. The thought process behind this is that the dirt and grass are mostly composed of reds and greens, so if we were to only look at objects composed of some amount of blue, the most intense blues would be whites.

In binary format, a 32-bit color pixel is represented as follows:

Alpha Red Green Blue
xxxx xxxx xxxx xxxx xxxx xxxx xxxx xxxx

(x is a binary 1 or 0). In order to extract the blue color information, we performed an AND operation on the 32 color bits with the following binary bit mask [2]:

0000 0000 0000 0000 0000 0000 1111 1111
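As an illustration, the following Python sketch (names are ours; the project itself does this with LabVIEW array operations) applies the same bit mask to packed 32-bit ARGB pixels:

```python
import numpy as np

def extract_blue(argb: np.ndarray) -> np.ndarray:
    """Return the 8-bit blue plane of an (H, W) array of packed
    Alpha-Red-Green-Blue uint32 pixels."""
    # AND with 0000 0000 0000 0000 0000 0000 1111 1111 keeps only blue
    return (argb & 0x000000FF).astype(np.uint8)

# Example: a white pixel (blue = 255) next to a pure green pixel (blue = 0)
img = np.array([[0xFFFFFFFF, 0xFF00FF00]], dtype=np.uint32)
print(extract_blue(img))  # [[255   0]]
```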

Figure 2: 32-bit Color Image

Figure 3: 8-bit Blue Color Image


Figure 4: Blue Color Extraction

By extracting the blue color plane from an image, only pixels with a high blue intensity will appear white. This reduces some noise from high-intensity greens and reds. To eliminate noise from natural reflections, a spatial convolution using a flattening filter is used to further enhance the image edges.

B. Spatial Convolution Filter

To prevent large quantities of noise it was necessary to implement a convolution using a 7x7 flattening kernel [2]. Since the image is represented as a two-dimensional matrix of blue intensity values after the color extraction, a convolution using a kernel of all ones is applied to the image, reducing high-level intensity values. The reason this is done after the blue plane is extracted is to prevent the high-intensity greens and reds from mixing with the blues, which would make the extracted blue plane inaccurate.

Figure 5: LabVIEW Convolution

The figure above is the virtual instrument for a convolution, where X is the image matrix and Y is the convolution kernel. Since many convolutions are being performed, the algorithm uses frequency-domain convolution. The operation first requires that the image matrix is padded horizontally and vertically by the width and height of the kernel minus one [4]. Conceptually the kernel is shifted over the image; in the frequency domain this is computed by multiplying the Fourier transforms of X and Y together element by element as two-dimensional matrices and then transforming the result back, yielding the desired values at the original resolution of 320x240 [4]. The following figure is a representation of how spatial convolution is utilized to apply a kernel to a simple set of data.

Figure 6: Padding and Filtering


From the resulting matrix it is possible to get the average of every 7x7 neighborhood by dividing the elements by forty-nine, keeping the values in the matrix within one byte per index. All of the high-intensity noise that would have thrown off the later line detection should now be gone, as long as the noise doesn't appear in large groups.
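A minimal NumPy sketch of this frequency-domain flattening filter, assuming an 8-bit blue-plane image and the 7x7 kernel of ones described above (the padding, element-wise FFT product, and division by forty-nine follow the steps in the text; the cropping detail is our assumption):

```python
import numpy as np

def flatten_filter(image: np.ndarray, ksize: int = 7) -> np.ndarray:
    kernel = np.ones((ksize, ksize))
    # Pad by the kernel size minus one so the circular FFT convolution
    # equals the linear convolution
    H = image.shape[0] + ksize - 1
    W = image.shape[1] + ksize - 1
    F = np.fft.rfft2(image, s=(H, W))      # FFT of zero-padded image
    G = np.fft.rfft2(kernel, s=(H, W))     # FFT of zero-padded kernel
    full = np.fft.irfft2(F * G, s=(H, W))  # element-wise product, inverted
    crop = ksize // 2                      # trim back to the input size
    out = full[crop:crop + image.shape[0], crop:crop + image.shape[1]]
    # Divide by 49 so each pixel is the 7x7 average, within one byte
    return np.clip(out / (ksize * ksize), 0, 255).astype(np.uint8)

blue = np.random.randint(0, 256, (240, 320)).astype(np.uint8)
print(flatten_filter(blue).shape)  # (240, 320)
```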

From the flattened image it should now be much easier to find the white lines. However, since the image was flattened, the line edges will not be as intense as they were. The next step is therefore to determine the highest intensity values in the image, so that only the intensities within a set range of the maximum are detected.

C. Intensity Analysis

After the single-color extraction and filtering, the image seems ready for edge detection. However, there is still one problem to solve before performing the edge detection. Under non-uniform background lighting, the maximum image intensity and the intensity distribution of the image change accordingly, and it is almost impossible to perform normal thresholding to detect the edges. Therefore, we need to analyze the image intensity distribution. To do so, we first acquire the intensity histogram of the image, which includes both the intensity range and the frequency of each intensity value. Once we know the intensity values of the image and their frequencies, it becomes easier to determine the edges that we are interested in.

Figure 7: Intensity Histogram

In Figure 7, we can see clearly that the maximum image intensity is around 200 and the minimum image intensity is around 15. Even with different background lighting, there is one thing we know for sure: the white lines always have the maximum intensity. Therefore, if we extract the intensity range from 180 to 200, we can detect the white lines of the image.
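A minimal sketch of this intensity analysis in NumPy (names are ours), building the 256-bin histogram and passing on the maximum occupied intensity together with the Interval control:

```python
import numpy as np

def max_intensity(image: np.ndarray) -> int:
    """Return the highest intensity value actually present in an
    8-bit image, read off a 256-bin histogram."""
    hist = np.bincount(image.ravel(), minlength=256)  # frequency of 0..255
    return int(np.flatnonzero(hist)[-1])              # highest occupied bin

flattened = np.random.randint(15, 201, (240, 320)).astype(np.uint8)
i_max = max_intensity(flattened)  # around 200, as in Figure 7
interval = 20                     # user control: intensities accepted as white
print(i_max - interval, i_max)    # e.g. the 180-to-200 window from the text
```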


Figure 8: Intensity Analysis

In Figure 8 it can be seen that the highest value found in the histogram is being passed on to the next part of the program. At the bottom of the figure there is a user control called Interval that contains the range of intensities accepted as white. This is where the adaptive threshold receives its maximum intensity and interval range from.

D. Adaptive Thresholding

Thresholding is the simplest yet most powerful method for image segmentation. Mathematically, thresholding can be described as [1]:

$$g(x,y) = \begin{cases} 1 & \text{if } f(x,y) > T \\ 0 & \text{if } f(x,y) \le T \end{cases}$$

where f(x,y) is the input image, g(x,y) is the thresholded output image, and T is the threshold. Generally, thresholding uses a fixed value of T to segment the images. In this case we use a variable threshold value, which is adjusted according to the background lighting as discussed in the previous section. Since we already know the maximum intensity of the image from the intensity analysis, we calculate the variable threshold as

$$T = I_{\max} - \text{Interval}$$

where $I_{\max}$ is the maximum intensity found in the histogram and Interval is the user-defined range of intensities accepted as white.

Figure 9: Adaptive Thresholding

Figure 10: Thresholded Image, Interval = 20
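Assuming the threshold $T = I_{\max} - \text{Interval}$ reconstructed above, a minimal NumPy sketch of the adaptive thresholding step looks like this:

```python
import numpy as np

def adaptive_threshold(image: np.ndarray, interval: int = 20) -> np.ndarray:
    """Binary segmentation with a threshold that tracks the lighting:
    T = I_max - Interval (reconstructed from the text, not a quoted VI)."""
    i_max = int(image.max())             # maximum intensity from the analysis
    T = i_max - interval
    return (image > T).astype(np.uint8)  # g(x,y) = 1 if f(x,y) > T else 0

flattened = np.random.randint(15, 201, (240, 320)).astype(np.uint8)
binary = adaptive_threshold(flattened, interval=20)  # as in Figure 10
print(binary.shape, int(binary.sum()))  # (240, 320) and the edge-pixel count
```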

E. Hough Transformation

Once we get the edge pixels after the adaptive thresholding, we need to link them together to form a meaningful line. To accomplish this task, a Hough Transformation is used to bridge any gaps that may appear in the line. This gives us the position and direction of the line in the field of view [1] [2].

In the Hough Transformation, each pixel (x, y) is transformed from Cartesian space to Hough space, H(R, θ), as follows:

$$R = x\cos\theta + y\sin\theta$$

where $0 < R < \sqrt{320^2 + 240^2} = 400$ and $-90^\circ \le \theta < 90^\circ$. If two pixels (x1, y1) and (x2, y2) are collinear, we get the same values of R and θ for both. In other words, a line in Cartesian space is represented as a point in Hough space. A simple Hough Transformation can be achieved by using a two-dimensional accumulator (array) whose dimensions correspond to R and θ. Each cell of the accumulator is defined by unique R and θ values, and the value inside each cell is incremented according to the number of collinear points in Cartesian space. However, for practical purposes this naive algorithm is too slow for real-time image processing, so we modeled our line detection on the optimized MATLAB 'hough' function [2]. Once the accumulator is filled, we look for the maximum cell value stored in the accumulator and its related R and θ. The resulting R and θ represent the line we are interested in.

Figure 11: Hough Space (R, θ)

Once we have R and θ, we need to shift them back into image coordinates to obtain the line endpoints. The most efficient way to implement these equations in LabVIEW is to use a formula node.
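For illustration, here is a small accumulator-based Hough transform in NumPy, following the description above (this is the naive version the text calls too slow for real time, not the optimized implementation actually used):

```python
import numpy as np

def hough_line(binary: np.ndarray):
    """Return (R, theta in degrees) of the strongest line in a binary image."""
    thetas = np.deg2rad(np.arange(-90, 90))   # theta from -90 to 89 degrees
    r_max = int(np.hypot(*binary.shape))      # image diagonal (400 at 320x240)
    acc = np.zeros((2 * r_max, len(thetas)), dtype=np.int32)
    ys, xs = np.nonzero(binary)               # coordinates of edge pixels
    for x, y in zip(xs, ys):                  # vote: R = x cos(t) + y sin(t)
        rs = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rs + r_max, np.arange(len(thetas))] += 1
    r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
    return r_idx - r_max, float(np.rad2deg(thetas[t_idx]))

# Example: a vertical line at x = 160 should peak near R = 160, theta = 0
img = np.zeros((240, 320), dtype=np.uint8)
img[:, 160] = 1
print(hough_line(img))  # roughly (160, 0.0)
```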


Once the line is generated by the Hough Transformation, the line values are sent to a line detection algorithm to determine how to properly handle the possibility of crossing the line.

F. Line Detection Algorithm

Once we get the values of x, y, x1, and y1, we use the following line detection algorithm to calculate the line angle. This is used to determine whether the robot needs to turn, along with the direction and intensity of the turn.

Figure 12: Turning Algorithm (flowchart: the intersection point is checked with (x > 0) AND (y > 0) and the line angle with (α > 0) AND (α < 30); depending on the outcomes the robot goes straight, turns right, or turns left, with Right = x1 > 160 and Left = Not(Right))

Right will tell us if the detected line is located on the left side of the camera, and Left will tell us if the detected line is located on the right side of the camera. These are decided by the x coordinate of the pixel where the nearest line meets the bottom row of the image.
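Since the exact branch structure of Figure 12 is only partially recoverable from the transcript, the following Python sketch is a hedged reconstruction of the turning logic; the 30-degree cutoff and the 160-pixel midline come from the flowchart, while the mapping of line side to turn direction (steer away from the line) is our assumption:

```python
def turn_decision(x: int, y: int, x1: int, alpha: float) -> str:
    """x, y: intersection point; x1: line position at the bottom row;
    alpha: line angle in degrees."""
    right = x1 > 160          # nearest line in the right half of a 320px image
    left = not right
    if x > 0 and y > 0:       # a valid intersection point was found
        if 0 < alpha < 30:    # line is nearly parallel to our heading
            return "Go Straight"
        # Assumption: steer away from whichever side the line is on
        return "Turn Right" if left else "Turn Left"
    return "Go Straight"      # no intersection detected: keep heading

print(turn_decision(x=100, y=50, x1=200, alpha=45.0))  # Turn Left
```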

IV. Results and Performance Analysis

In order to test the reliability and performance of our algorithm, we carried out a series of tests with different scenarios. The results were captured and are shown below; for each set of conditions there is a picture of what the camera sees, followed by what the program interprets as the proper avoidance maneuver along with where the nearest line is.

Test 1: Simple Right


Test 2: Right w/ Obstacle

Test 3: Parallel

Test 4: Simple Left


Test 5: Left w/ Obstacle

According to the test results, our new algorithm gives more accurate and reliable results than our old algorithm. In addition, since we did not use NI Vision Development and wrote the whole project with intrinsic LabVIEW functions, we also solved the problems related to software expiration. One problem that still needs to be dealt with is that, if no line is present, the adaptive threshold will still hand the line detection the largest set of intensities in the image. This has to be solved before the system can be declared a fully functional obstacle avoidance utility.

V. Further Improvements

Although our algorithm is satisfactory to some extent, there is a lot that could be improved upon with more time and a budget for additional equipment. First of all, we use a monocular vision system to detect the lines. By adding a second camera, the system could be reprogrammed to have one camera handle the left line and the other dedicated to the right line. This would allow greater control due to the visual field being doubled.

For the project to be a feasible substitute for the current system, the algorithm will need the ability to distinguish whether or not a line is even present. Without this ability, the system would need a guarantee that either the left or the right line is always in the field of view. For our purposes this is not an acceptable limitation, but the improvements made are still enough to show definite progress over the previous prototype.

VI. References

1. Davies, E. R. Machine Vision. 2nd ed. San Diego: Academic Press, 1997, pp. 80-269.
2. González, Rafael C.; Woods, Richard E.; and Eddins, Steven L. Digital Image Processing Using MATLAB. Pearson Prentice Hall, 2004, pp. 380-406.
3. Jähne, Bernd. Digital Image Processing. 6th ed. Heidelberg: Springer-Verlag, 2005, pp. 331-340.
4. "NI Vision Acquisition Software." National Instruments, 30 Nov 2009. http://sine.ni.com/psp/app/doc/p/id/psp-394
5. Painter, James G. Vision System for Wunderbot IV Autonomous Robot. Elizabethtown College, 9 May 2008.
6. Aung, Than L. and Conlow, Michael. Alternative Vision System for Wunderbot V. Elizabethtown College, 9 Dec 2009.