Detection Probability Model (Goodrich)


Nomenclature

Sx, Sy        pixel scale factors
u_c           direction vector in the camera frame
u'_c          distorted direction vector in the camera frame
(x0, y0)      image center point in the pixel frame
Xc, Yc, Zc    axes of the camera coordinate frame
Xp, Yp        axes of the pixel coordinate frame
(x_tp, y_tp)  target point in the pixel frame

    I. Introduction

This paper offers a probability model for target detection using a charge-coupled device (CCD) camera that is searching for a spherical object. The model is analogous to B. O. Koopman's inverse cube detection model,2 and can be used in finding an optimal allocation of search effort according to search theory. This theory was invaluable during World War II in searching out and exposing the threat of enemy ocean vessels.1

The value of probabilistic search is still sought after by many today. However, researchers typically re-use Koopman's detection models3 or assume an overly simplified sensor footprint that is circular or rectangular around the agent.4 Matthew Flint recognizes the primary requirement of search width, and uses a simple model that detects targets when a specified line passes over them.5 These simplified models may be useful in some cases, but fail to capture the detail inherent in a real search. They fail to acknowledge the fact that even when looking directly at a target there may still be a chance of not detecting it. They also do not account for the effect of the agent turning or changing altitude. In contrast, this paper provides a valuable and realistic model for describing the search performance of one of today's common sensors: the CCD camera.

    II. Camera Characteristics and Calibration

When determining a target's detection probability from a CCD camera, it is first necessary to understand some basic terms and definitions. This section offers an explanation of these definitions, and illustrates the camera calibration procedure. Section III uses this information to develop a detection probability model.

Figure 1. This figure shows the relationship between the camera and pixel coordinate frames. The Xc, Yc and Zc axes represent the camera coordinate frame. The Xp and Yp axes represent the pixel coordinate frame.

For modeling purposes a camera image is assumed to exist on a plane at one focal length, f, in front of the camera center. (See Figure 1.) The focal length of an image is rarely known, and is typically set to unity. A camera may be calibrated to find its intrinsic matrix, A, which describes how the pixel coordinate system relates to the camera coordinate system. A simplified intrinsic matrix is

A = \begin{bmatrix} f_x & 0 & x_0 \\ 0 & f_y & y_0 \\ 0 & 0 & 1 \end{bmatrix}.   (1)

Ignoring image skew for the moment, each of the terms in Eq.(1) has physical meaning. For example, fx represents the number of pixels that could be lined up side by side to equal the focal length. Similarly, fy represents the number of pixels that could be stacked on top of each other to equal one focal length. The parameters x0 and y0 indicate the pixel coordinates where the camera z-axis intersects the pixel frame's x-y plane. (See Figure 1.)

There are also two related scale factors, Sx and Sy, that are not explicitly included in the intrinsic matrix. They are defined as

S_x = \frac{f}{f_x} \quad \text{and} \quad S_y = \frac{f}{f_y}.   (2)


Sx is the width of one pixel that has been projected onto a perpendicular plane one focal length away, and Sy is the height of one pixel at the same distance. With this understanding, one can easily construct a vector in the camera frame that points in the direction of a target pixel, (x_tp, y_tp). This vector is

u'_c = [x'_{u_c},\; y'_{u_c},\; z'_{u_c}]^T = [S_y(y_0 - y_{tp}),\; S_x(x_{tp} - x_0),\; f]^T.   (3)
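For concreteness, Eq.(3) can be evaluated with a few lines of code. The following is a minimal sketch (Python/NumPy and the function name are illustrative, not part of the original implementation); the numeric values in the example call are the calibration results reported later in Table 1.

```python
import numpy as np

def pixel_to_distorted_direction(xtp, ytp, fx, fy, x0, y0, f=1.0):
    """Eq.(3): direction of a target pixel in the camera frame, before
    the radial distortion correction.  Sx = f/fx and Sy = f/fy, Eq.(2)."""
    Sx, Sy = f / fx, f / fy
    return np.array([Sy * (y0 - ytp), Sx * (xtp - x0), f])

# Example with the calibration values from Table 1:
uc_dist = pixel_to_distorted_direction(500, 300, 503.652, 500.110, 319.5, 239.5)
```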

However, image skew must be taken into account before the true direction is known. The radial distortion coefficients follow the Plumb Bob model introduced by Brown in 1966.6 The lens used in this paper does not exhibit extreme radial distortion, however, so the reduced camera model is warranted.6 Therefore, the equation

r'_m = k_2 r_m^5 + k_1 r_m^3 + r_m   (4)

is assumed to be an accurate representation of radial distortion, and higher-order terms are not considered. This uses

r_m = \sqrt{x_{u_c}^2 + y_{u_c}^2} \quad \text{and} \quad r'_m = \sqrt{x'^2_{u_c} + y'^2_{u_c}},   (5)

    along with parameters k1 and k2, which are identified during camera calibration.

Figure 2. Information that would appear at radial distance r_m from the image center is distorted to distance r'_m by the lens.

Figure 2 shows that radial distortion is a phenomenon that alters an element's radial distance from the distortion center, which is approximated in this paper as the distance from the camera z-axis. The goal is to obtain the undistorted r_m in terms of the distorted r'_m that appears in the image.

Unfortunately, Eq.(4) provides no direct solution for r_m. However, the fifth-order polynomial may be approximated by a second-order polynomial within the region of interest.a Within this range, points may be generated from Eq.(4) and a second-order least-squares approximation can be made of the form

r'_m = k_4 r_m^2 + k_3 r_m.   (6)

If this is done, then a value for r_m in terms of r'_m may now be readily obtained as

r_m = \frac{-k_3 + \sqrt{k_3^2 + 4 k_4 r'_m}}{2 k_4}.   (7)

Note that the region of interest is only for positive r_m, so only the positive root applies. Added insight may be gained by re-writing Eq.(6) as a scaling of r_m such that

r'_m = r_m k_s \quad \text{and} \quad k_s = k_4 r_m + k_3.   (8)

Substituting the r_m from Eq.(7) into Eq.(8), the image skew constant becomes

k_s(r'_m) = \frac{k_3}{2} + \sqrt{\left(\frac{k_3}{2}\right)^2 + k_4 r'_m}.   (9)

Eq.(8) can also be separated into its x and y components such that

x'_{u_c} = x_{u_c} k_s \quad \text{and} \quad y'_{u_c} = y_{u_c} k_s.   (10)

In other words, the true x_{u_c} and y_{u_c} distances are scaled by k_s before appearing in the image at location (x'_{u_c}, y'_{u_c}). A vector in the true target direction is

u_c = [x_{u_c},\; y_{u_c},\; z_{u_c}]^T = \left[\frac{x'_{u_c}}{k_s},\; \frac{y'_{u_c}}{k_s},\; f\right]^T.   (11)

a For a 640 x 480 image with fx = 503.652 and fy = 500.110, the furthest distorted radial distance in the image is 0.796 m, at an assumed focal length of 1 m.
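A short sketch of the complete undistortion step may also be helpful. It assumes the second-order constants k3 and k4 obtained from calibration later in this section; the function name is illustrative.

```python
import numpy as np

def undistort_direction(uc_dist, k3, k4, f=1.0):
    """Eqs.(5), (9), (11): divide the distorted x and y components by the
    skew constant k_s evaluated at the distorted radial distance r'_m."""
    x_d, y_d, _ = uc_dist
    rm_dist = np.hypot(x_d, y_d)                             # Eq.(5), r'_m
    ks = k3 / 2.0 + np.sqrt((k3 / 2.0) ** 2 + k4 * rm_dist)  # Eq.(9)
    return np.array([x_d / ks, y_d / ks, f])                 # Eq.(11)
```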


This scaling also causes an object's image size to be reduced. Consider some small rectangular region, p, whose projected area onto a plane one meter away is A_p = S_x S_y. Due to image skew, the information that would be viewed through region p is actually viewed through the smaller region p', with area A_{p'} = S'_x S'_y. Due to this scaling in both the Xp and Yp directions, the area of p' is related to the area of p through

A_{p'} = S'_x S'_y = (S_x k_s)(S_y k_s) = A_p k_s^2.   (12)

This is true as long as regions p and p' are small enough that the same skew factor applies to the whole region with sufficient approximation.

Now call region p an image pixel, and let the region c represent a collection of qualifying target pixels that is small enough to assume uniform k_s within c. If the number of pixels within c is P_s, then the number of pixels within c' is P'_s, where P'_s = P_s k_s^2. This size reduction is herein referred to as shrinkage.

    Figure 3. The test camera.

To model these effects for a given camera (see Figure 3), a calibration procedure must be used. New techniques in camera calibration have been surfacing as CCD cameras have become widely used in the home and professional communities. A flexible technique is presented in,7 and similar techniques have been developed in Matlab6 and OpenCV.8 Camera calibration is not the main focus of this paper, and the reader may pursue the calibration algorithms in the mentioned sources. However, the following information may be helpful for those wanting to calibrate a camera.

The method of camera calibration used in this paper is based on the CameraCalibration2 function in OpenCV. Using a CCD camera, a chessboard of known dimension is viewed in various orientations. In each position the Imperx VCE-Pro frame grabber, or similar hardware, is used to transfer the pixel data to host memory. Corner points are then detected using the cvFindChessboardCorners OpenCV function, and these object points are given to the CameraCalibration2 function, along with the number of corner points in the image. This causes the intrinsic and extrinsic parameters to be calculated for the test camera, and returned in matrix form. The average result for three repetitions (20 images each) of this process is shown in Table 1.
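The functions named above belong to OpenCV's original C interface; a rough modern equivalent using the OpenCV Python bindings is sketched below. The chessboard pattern size and the image path are placeholders, not the setup used for the test camera.

```python
import glob
import cv2
import numpy as np

# Hypothetical chessboard: 9x6 inner corners, unit square size.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):        # placeholder path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the intrinsic matrix A of Eq.(1) and the distortion coefficients
# (k1, k2, p1, p2, ...) in OpenCV's ordering.
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```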

Table 1. Camera calibration results.

Camera Parameter    fx        fy        x0      y0      k1       k2
Average Result      503.652   500.110   319.5   239.5   -0.363   0.193

Figure 4. Radial distortion approximation. The fifth-order distortion model can be approximated using a second-order function.

Points for the fifth-order function of Eq.(4) can now be generated using k1 = -0.363 and k2 = 0.193 from calibration. Regression can then be used to determine the coefficients for the second-order approximation of Eq.(6), yielding k3 = 1.026 and k4 = -0.217. (See Figure 4.)
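A sketch of this regression step is shown below. It assumes the calibration values of Table 1 and a region of interest consistent with footnote a; the exact fitted values depend on how the points are generated, so the result only roughly reproduces the reported constants.

```python
import numpy as np

k1, k2 = -0.363, 0.193

# Points from the fifth-order model of Eq.(4), over undistorted radii chosen
# so that the distorted radius stays within the image (footnote a).
rm = np.linspace(0.0, 0.96, 200)
rm_dist = k2 * rm**5 + k1 * rm**3 + rm                 # Eq.(4)

# Least-squares fit of Eq.(6): r'_m = k4*rm^2 + k3*rm (no constant term).
M = np.column_stack([rm**2, rm])
(k4, k3), *_ = np.linalg.lstsq(M, rm_dist, rcond=None)
print(k3, k4)   # roughly 1.03 and -0.22 under these assumptions
```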

In addition to image skew, the light that enters a camera undergoes other effects such as defocusing, diffraction, photosensitive variation, etc. The imperfect pixel pattern that results from a viewed point of light can be modeled as a Gaussian distribution called the point spread function (PSF).9

Due to distortion complexity, the PSF parameters are often estimated empirically. One way to do this is to view a distant point source of light against a black background.9 However, when measuring the PSF for the test camera there are two considerations that will help in obtaining reasonable results: First, a light source should be chosen and placed far away, such that its projected area is significantly less than one pixel. Second, the shutter speed should be similar to what it would be during an outdoor search, even though the PSF is measured in a dark room.


    For comparison, the projected area of one pixel at a distance of one meter is

A_p = S_x S_y = \frac{1}{503.652} \cdot \frac{1}{500.110} = 3.97 \times 10^{-6} \text{ m}^2.   (13)

In addition, the light source selected for this experiment is the diode inside a Mag-Lite Solitaire flashlight, as shown in Figure 5. When the diode is placed a known distance away (limited to 7.9 m by the room), Figure 5(a) can be used to calculate the diode's projected area at one meter away:

A_{diode} = \frac{0.003}{7.9} \cdot \frac{0.006}{7.9} = 2.88 \times 10^{-7} \text{ m}^2.   (14)

    (a) Mag-Lite diode size. (b) The Mag-Lite diode lit.

Figure 5. The Mag-Lite diode used to determine the camera's point spread function.

Thus, if the light source is 0.003 m x 0.006 m at a distance of 7.9 m, then the diode's projected area is 0.0726 square pixels. This is considered sufficiently small to give a reasonable PSF estimate, because the point spread function of this camera has a standard deviation on the order of one pixel. (See Figure 6.)

A feature of the test camera that must be dealt with is that it has automatic shutter speed control based on the ambient lighting. As a result, the test must begin with full lighting. The lights are then turned off and the PSF is measured in the next frame, before the shutter speed has time to adjust. A view of the diode using this method is shown in Figure 6(a). The raw image can be analyzed to determine the standard deviation of the PSF by using each pixel's value (also called intensity) to indicate the degree of illumination. Figure 6(b) graphically shows the value of the pixels in the vicinity of the light source.
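One simple way to extract the PSF standard deviations from such a raw image is to treat each pixel's intensity as a weight and compute the weighted spread about the intensity centroid. The sketch below assumes the diode region has already been cropped into a small array and that the dark background contributes negligible intensity; the function name is illustrative.

```python
import numpy as np

def psf_sigma(patch):
    """Estimate the PSF standard deviation (in pixels) from a small image
    patch containing the point source: pixel intensity is used as a weight
    and the weighted spread about the intensity centroid is computed."""
    patch = patch.astype(float)
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    w = patch / patch.sum()
    cx, cy = (w * xs).sum(), (w * ys).sum()
    sx = np.sqrt((w * (xs - cx) ** 2).sum())   # spread in the Xp direction
    sy = np.sqrt((w * (ys - cy) ** 2).sum())   # spread in the Yp direction
    return sx, sy, 0.5 * (sx + sy)             # e.g. 2.052, 0.688, ~1.37 here
```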

(a) A raw image of the diode against a dark background (7.9 m away).

(b) The value of each pixel from Figure 6(a).

Figure 6. The test camera's point spread function.

One interesting characteristic of the test camera is that its PSF has a greater standard deviation in the Xp direction (2.052 pixels) than in the Yp direction (0.688 pixels), independent of the diode orientation. This reveals the true shape of the point spread function for this camera and lens. However, for simplicity an average standard deviation of σ_PSF = 1.37 pixels is used.

    III. Detection Probability

Here it is important to understand that color is one of the most noticeable characteristics of a passive target. For example, BYU's recent search and rescue exercise (SAREX) was based on a lost victim who had


been seen carrying an orange jacket and wearing a red or white t-shirt and blue jeans. Color was the primary information used when establishing initial target contact. Comments like "I thought I saw something white" or "There's an orange spot!" were typical observations.

Although the BYU SAREX was based on human detection, color is also a key factor in computer-aided search. Figure 7(b) is a zoom-in on one of the targets as seen from a CCD camera during a SUAV flight test.

(a) An image of six red targets on a road, as seen by a CCD camera.

    (b) A close-up view of one red target.

    Figure 7. Image colors seen during a SUAV flight test (red darkened for grayscale contrast).

The color variation in this image can be surprising when considering that the target is just a flat piece of red cloth on a road. However, a CCD image can vary for several reasons, including environmental lighting changes, object position and orientation differences, lens distortion, shutter speed variation, the Bayer filter, interlacing, and signal noise. These sources of color variation generate a unique color distribution that can be difficult to predict. Figure 8 shows the measured hue, saturation, and value (HSV) color distributions of the gray road and red targets present during the flight test. Six cloths of identical color were laid on the ground, and were viewed by the UAV from approximately 63 meters in altitude. The color data were obtained by selecting sections in many images from the field test video, and accruing the HSV information in histogram form. There was no noticeable variation in ambient lighting during the experiment.

    (a) Road hue. (b) Road saturation. (c) Road value.

    (d) Target hue. (e) Target saturation. (f) Target value.

Figure 8. These are the HSV color distributions for the road and target. They are obtained by using an application that allows the user to repetitively select different areas of pixels, and put the pixel data into a collective histogram. Sections of road were sampled in several images to get an accurate distribution for road color, and small sections were sampled at the center of each target (many times, and in many images) to get an accurate distribution for target color. The bi-modal nature of the road value distribution was seen during the field test as a series of shaded diagonal lines that are likely to have been caused by signal noise from the electronic speed control. (See Figure 7(a).)

When compared to other scenarios, searching for a red piece of cloth on a road may seem very simplistic. However, the color distributions of Figure 8 still embody many complex factors. For instance, the bi-modal nature of the road value distribution was seen during the field test as a series of dark diagonal


lines that (according to testing in the MAGICC Lab) are likely to have been caused by signal noise from the electronic speed control. In addition, the left tail of the target hue distribution probably represents impure sampling, where some pixels already appear partially blended between the target and road colors. The complexity of these and other factors often causes each experiment to have unique color distributions.

The goal is to use the color data from a given target and background to model the probability of successfully detecting the target. This probability can be found by generating many target images based on assumed color distributions. One approach to modeling these color distributions might be to pick randomly from the exact set of colors that was generated by sampling target/background images. Another approach is to calculate the mean and standard deviation for each distribution (see Table 2), and generate target images based on Gaussian distributions with these parameters.

Table 2. Mean and standard deviation for the HSV distributions.

                    Road Color             Target Color
                    Mean      St. Dev.     Mean      St. Dev.
Hue                 222.37    11.83        327.09    21.86
Saturation          27.63     7.51         65.37     18.60
Value               154.80    15.34        178.19    14.62
Pixels Sampled      24,749                 985

Although the Gaussian assumption may not fit all of the data exactly (see Table 3), some of the distributions are primarily Gaussian, and others could be Gaussian in the absence of the unintended experimental conditions described. A single target color is expected to generate a color distribution characterized by a single hue, saturation, and value. Thus, the bell-shaped variation experienced in the actual measurements is assumed to be random error that follows a Gaussian distribution. Accordingly, the discrepancies in skew and kurtosis shown in Table 3 are rejected, and the Gaussian model is used.

Table 3. Skew and kurtosis for the HSV distributions.

                       Skew      Kurtosis
Road Hue               -0.459    4.025
Road Saturation        -0.136    2.568
Road Value              0.180    1.786
Target Hue             -0.366    1.044
Target Saturation      -0.324    3.048
Target Value            0.051    3.018
Normal Distribution     0.000    3.000
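The Table 3 comparison can be reproduced with standard statistics routines, for example as in the sketch below; `road_hue` is a stand-in array rather than the actual sampled data.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def normality_summary(samples):
    """Skew and kurtosis as compared in Table 3.  A Gaussian has skew 0 and
    kurtosis 3; fisher=False returns the non-excess (Pearson) kurtosis."""
    return skew(samples), kurtosis(samples, fisher=False)

# road_hue, target_hue, ... would be 1-D arrays of sampled pixel values.
road_hue = np.random.normal(222.37, 11.83, 24749)   # stand-in data only
print(normality_summary(road_hue))
```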

Figure 9. A pixel's color depends on how much of its point spread function overlaps the target projection.

To apply the color data from Table 2 and generate simulated target images, the camera's point spread function must be used. Notice that the PSF describes the effect of light as it enters the camera, but it can also be used to describe the sources of light that illuminate each pixel. Thus, if a PSF is assumed to be centered on each pixel, then a pixel's color depends on what projected regions the PSF covers.

To illustrate this principle, consider the pixel shown in Figure 9. Let R = [h_R, s_R, v_R]^T represent the HSV road color and T = [h_T, s_T, v_T]^T represent the target color. These colors are independent random samples in H, S, and V from the target and road distributions of Figure 8 and Table 2. The color for any pixel in the image can now be defined as

P = R(1 - ρ_T) + T ρ_T,   (15)

where ρ_T is the portion of the PSF that overlaps the target projection. Notice that although the PSF is


a Gaussian distribution, it represents the color blending that occurs and is not a probability distribution. Also note that Eq.(15) can be used in this case because of the sequential nature of the road and target hue. However, for hue distributions that span 360° (say one mean hue at 10° and another at 350°), a method must be used that appropriately transitions between these colors.

For the pixel shown in Figure 9, an approximation for ρ_T can be obtained by integrating under the Gaussian curve:

\rho_T = \int_{p_t}^{\infty} \int_{-\infty}^{\infty} \frac{1}{2\pi\sigma_{PSF}^2} \exp\left(-\frac{x^2 + y^2}{2\sigma_{PSF}^2}\right) dy\, dx = \frac{1}{2}\left[1 - \mathrm{erf}\left(\frac{p_t}{\sigma_{PSF}\sqrt{2}}\right)\right].   (16)

Eq.(15) can now be used to estimate the color of any image pixel based on its distance from the target boundary. For a circular target projection with an area of P_s square pixels and a target radius of r_t = \sqrt{P_s/\pi}, p_t can be defined as r_p - r_t, where r_p is a given pixel's radial distance from the target center in pixels. By defining each pixel in this way, one can generate simulated images of the target. (See Figure 10.)
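A sketch of such an image generator is given below. It uses σ_PSF = 1.37 pixels and the Table 2 statistics; the image size, and the choice to draw an independent road and target color for every pixel (the text does not specify whether sampling is per pixel or per image), are assumptions.

```python
import numpy as np
from scipy.special import erf

SIGMA_PSF = 1.37  # pixels, from the PSF measurement

ROAD_MEAN, ROAD_STD = np.array([222.37, 27.63, 154.80]), np.array([11.83, 7.51, 15.34])
TARG_MEAN, TARG_STD = np.array([327.09, 65.37, 178.19]), np.array([21.86, 18.60, 14.62])

def simulate_target_image(Ps, shape=(60, 60), rng=np.random.default_rng()):
    """Generate one simulated HSV image of a circular target of Ps square
    pixels on a road background, using Eqs.(15) and (16)."""
    rt = np.sqrt(Ps / np.pi)                       # target radius in pixels
    cy, cx = shape[0] / 2.0, shape[1] / 2.0
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    rp = np.hypot(xs - cx, ys - cy)                # pixel distance from target center
    pt = rp - rt                                   # distance to the target boundary
    rho = 0.5 * (1.0 - erf(pt / (SIGMA_PSF * np.sqrt(2.0))))   # Eq.(16)

    img = np.empty(shape + (3,))
    for ch in range(3):                            # independent H, S, V samples
        R = rng.normal(ROAD_MEAN[ch], ROAD_STD[ch], shape)
        T = rng.normal(TARG_MEAN[ch], TARG_STD[ch], shape)
        img[..., ch] = R * (1.0 - rho) + T * rho   # Eq.(15)
    return img
```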

Figure 10. A simulated target image (red darkened for grayscale contrast).

However, a specific algorithm must be used to analyze the probability of target detection. This paper uses a connected-component algorithm based on color segmentation. Some pixel threshold, P0, and some color limits in hue, saturation, and value are first chosen as parameters for the algorithm. It scans through each image to determine which pixels are within the color limits, and these become possible target pixels. The algorithm then counts how many possible target pixels are connected together. If the size of any of these pixel groups is greater than P0, then the pixel group is considered a target.

It is hoped that image noise and color distributions from other objects will rarely generate a group of P0 pixels, and that an actual target will often generate a group of at least P0 pixels. While searching for red targets using the test camera and a typical wireless transmission system,10 a practical value for P0 is 20 pixels. This is just large enough for the algorithm to distinguish between the intended targets and image noise or background objects in most cases.
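A minimal sketch of this detection check is shown below; it uses a generic connected-component labeling routine rather than the exact implementation used in the paper, and the color limits follow the mean ± (width × standard deviation) convention of Figure 11.

```python
import numpy as np
from scipy import ndimage

P0 = 20  # minimum connected group size, in pixels

def detects_target(img_hsv, mean, std, width=1.7, p0=P0):
    """Return True if any connected group of pixels inside the color limits
    (mean +/- width*std in each of H, S, V) has at least p0 members."""
    lo, hi = mean - width * std, mean + width * std
    mask = np.all((img_hsv >= lo) & (img_hsv <= hi), axis=-1)
    labels, n = ndimage.label(mask)                # 4-connected components
    if n == 0:
        return False
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return sizes.max() >= p0
```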

Using the P0 size threshold and the derived pixel colors, many images can be created and analyzed using the connected-component algorithm. Figure 11 shows the detection probability for various target sizes and color thresholds, and Figure 10 shows a simulated target image.

Figure 11. Target detection probability is shown as a function of target size (pixels). Each point represents the success ratio for 1000 trials. Images were analyzed using connected-component color segmentation on a target of size Ps. The color limits used in this analysis are centered on the mean, and spaced according to the series label. For example, using data from Table 2, the series labeled 1.0σ identifies target pixels within a hue of 327.09 ± 21.86, a saturation of 65.37 ± 18.60, and a value of 178.19 ± 14.62.


Each point in Figure 11 represents the ratio of successful detection events to the total number of simulated target images processed. These points can now be used to create the connected-component detection model:

p\big(D \mid I(P_s), M_{CC}\big) = \max\left\{\frac{1 - \exp\left(-\frac{P_s - s_1}{a_1}\right)}{1 + \exp\left(-\frac{P_s - s_2}{a_2}\right)},\; 0\right\}.   (17)

This equation has several necessary terms. The term p(·) indicates the probability of its argument, D indicates the detection event, s1, s2, a1, and a2 are given constants, I(P_s) denotes an instantaneous look (one processed image) at a target with an image projection size of P_s pixels, and M_CC denotes that the probability is based on the connected-component detection model.

Notice that the model is a direct multiplication between a sigmoid and an exponential curve. The data in Table 4 show the constants used for various color limits, and Figure 12 shows the simulation results and their approximations together.

Table 4. Target detection model parameters.

       1.3σ    1.5σ    1.7σ    2.0σ    3.0σ
s1     38      32      26      19.8    6.5
s2     46      40      34      27.8    14.5
a1     72.5    19      9       5       2.5
a2     8       3.8     2.8     1.6     0.96
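Eq.(17) together with the Table 4 constants can be evaluated directly, as in the following sketch (the dictionary keys mirror the table's column labels; the function name is illustrative).

```python
import numpy as np

# (s1, s2, a1, a2) for each color-limit width from Table 4.
MODEL_PARAMS = {
    1.3: (38.0, 46.0, 72.5, 8.0),
    1.5: (32.0, 40.0, 19.0, 3.8),
    1.7: (26.0, 34.0, 9.0, 2.8),
    2.0: (19.8, 27.8, 5.0, 1.6),
    3.0: (6.5, 14.5, 2.5, 0.96),
}

def p_detect_pixels(Ps, width=1.7):
    """Eq.(17): instantaneous detection probability for a target that
    occupies Ps square pixels, under the given color-limit width."""
    s1, s2, a1, a2 = MODEL_PARAMS[width]
    p = (1.0 - np.exp(-(Ps - s1) / a1)) / (1.0 + np.exp(-(Ps - s2) / a2))
    return np.maximum(p, 0.0)
```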

Figure 12. Detection probability results, along with their approximations. The data points here are identical to those in Figure 11. However, a black approximation curve is also shown for each case. The approximation curve is generated using values from Table 4 in Eq.(17).

If the detection probability is dependent on the color thresholds, then what are the color limits likely to be? The appropriate limits are estimated based on the assumed color distributions of the target and its surrounding environment. These limits are constrained by how close the background color is to the target color, and how accurately the colors have been sampled or estimated. Often the color limits are only vague estimates based on a statement like "a bright orange jacket," and there is no way to know the true color of the target. However, threshold estimates can be significantly improved before the search by sampling the color of something similar to the target in the lighting that will be present during the search.

Due to these uncertainties, color limits typically do not extend as far as 3σ from the mean, nor are they precisely centered on the mean. Accurately approximating this human-based selection would require an extensive experiment where many users would set the color limits in a variety of conditions. Such an experiment is outside the scope of this paper. However, during a flight test it was observed that it becomes nearly impossible to detect these targetsb with this camera and algorithm if the UAV is further than 100 m away.

b Each target has an area of 1 m².


As will be shown, this indicates that a detection probability closer to the 1.7σ curve might be representative of typical limit setting. Undoubtedly the accuracy of the detection model could be improved by more sophisticated methods, but this offers a starting point for now and is based on real experience with a common sensor. Based on the 1.7σ assumption, the detection probability becomes

p\big(D \mid I(P_s), M_{CC}\big) = \max\left\{\frac{1 - \exp\left(-\frac{P_s - 26}{9}\right)}{1 + \exp\left(-\frac{P_s - 34}{2.8}\right)},\; 0\right\}.   (18)

This models the detection probability for a target projection whose size in the image is P_s. However, it is desirable to extend the model by defining detection probability as a function of target distance.

    Figure 13. A spherical object viewed by a CCD camera.

Consider an unobstructed spherical object that is observed through a CCD camera as shown in Figure 13. By similar triangles, r_{obj_p}/f_p = r_{obj}/d, and the number of target pixels on-screen (occupied by an object of radius r_obj at d units away) is

P_s = \pi r_{obj_p}^2 = \pi \left(\frac{f_p}{d}\right)^2 r_{obj}^2.   (19)

It can also be represented as

P_s = \left(\frac{f_p}{d}\right)^2 A_{obj},   (20)

where f_p is the focal length measured in average pixels, d is the object distance, and A_obj is the cross-sectional area of the object.

Using Eq.(20), one can determine the expected number of target pixels in an image based on the target's size and distance. Note that P_s is not necessarily an integer value but represents the precise size of the target projected onto the image plane in units of square pixels. Substituting the result from Eq.(20) into Eq.(18) yields an expression for the instantaneous detection probability as a function of the observer's distance from the object:

p\big(D \mid I(d), M_{CC}\big) = \max\left\{\frac{1 - \exp\left(-\frac{\left(\frac{f_p}{d}\right)^2 A_{obj} - 26}{9}\right)}{1 + \exp\left(-\frac{\left(\frac{f_p}{d}\right)^2 A_{obj} - 34}{2.8}\right)},\; 0\right\}.   (21)

Here I(d) denotes an instantaneous look from distance d, and M_CC denotes the connected-component detection model. If an agent is searching for a target that has a typical cross-sectional area of A_obj = 1 m², then this may be inserted to gain some intuition about the detection probability as a function of distance. The parameters fx and fy are taken from Table 1, and averaged to obtain f_p. The instantaneous detection probability is as shown in Figure 14.

This predicts that if the UAV is more than 98 m from the target, then no targets will be detected. Also, being much closer than about 60 meters to a target does not significantly increase the probability of target detection. Of course, the accuracy of the model still depends on whether or not the conditions and assumptions used to derive it are valid in a given environment. The target color must be different enough from the environment color such that thresholds can be set that achieve the performance modeled by the 1.7σ curve.
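The Figure 14 curve can be reproduced under the stated assumptions with a few lines such as the following sketch; the scanned range of distances is arbitrary.

```python
import numpy as np

fp = 0.5 * (503.652 + 500.110)   # average focal length in pixels (Table 1)
A_obj = 1.0                      # target cross-sectional area, m^2

def p_detect_distance(d):
    """Eq.(21): instantaneous detection probability for an unobstructed
    1 m^2 target viewed from d meters, at the image center (1.7-sigma limits)."""
    Ps = (fp / d) ** 2 * A_obj   # Eq.(20)
    p = (1.0 - np.exp(-(Ps - 26.0) / 9.0)) / (1.0 + np.exp(-(Ps - 34.0) / 2.8))
    return np.maximum(p, 0.0)

d = np.linspace(20.0, 120.0, 1000)
p = p_detect_distance(d)
print(d[p > 0].max())            # the probability falls to zero near 98 m
```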


    Figure 14. Instantaneous detection probability as a function of distance.

However, notice that Eq.(21) applies only to a target appearing at the center of the image, where image skew does not significantly affect its projection. Adding the shrinkage from image skew as shown in Eq.(12), the expected pixel size is

P_s = \left(\frac{k_s(r'_m)\, f_p}{d}\right)^2 A_{obj}.   (22)

The term k_s(r'_m) shows that k_s depends on the distance of the target center from the image center, as shown in Eq.(9). Thus, a more accurate version of Eq.(21) is

p\big(D \mid I(d, r'_m), M_{CC}\big) = \max\left\{\frac{1 - \exp\left(-\frac{\left(\frac{k_s(r'_m)\, f_p}{d}\right)^2 A_{obj} - 26}{9}\right)}{1 + \exp\left(-\frac{\left(\frac{k_s(r'_m)\, f_p}{d}\right)^2 A_{obj} - 34}{2.8}\right)},\; 0\right\}.   (23)

Here I(d, r'_m) denotes an instantaneous look from distance d, with the target appearing at radius r'_m from the image center. Figure 15 shows what happens to the sensor footprint when image skew is taken into account.

(a) Detection probability without image skew taken into account.

(b) Detection probability with image skew directionality taken into account.

(c) Detection probability with image skew directionality and shrinkage taken into account.

Figure 15. The effect of image skew on detection probability and sensor footprint. A plane is viewed from 80 meters away using the camera calibrated in Table 1. Pure green indicates p(D | I(d, r'_m), M_CC) ≈ 1, and pure red indicates p(D | I(d, r'_m), M_CC) = 0. Although image skew increases the viewing angle, it also decreases the peripheral detection probability.


For a numerical example, assume that a target has a cross-sectional area of A_obj = 1 m², and is a distance of d = 80 m away. Also assume that the target center appears at pixel (x_tp, y_tp) = (500, 300) when the image is analyzed. Using this information and the camera calibration results, the radial distance from the image center is

r'_m = \sqrt{\left(\frac{500 - 319.5}{503.652}\right)^2 + \left(\frac{300 - 239.5}{500.110}\right)^2} = 0.378,

which yields k_s = 0.939 and p(D | I(d, r'_m), M_CC) = 0.346. Thus, even though the target is unobstructed and in view, there is only a 34.6% chance of detecting it in the image.
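The arithmetic of this example can be checked with a short script such as the one below; it simply hard-codes the calibration and model constants quoted above.

```python
import numpy as np

fx, fy, x0, y0 = 503.652, 500.110, 319.5, 239.5
k3, k4 = 1.026, -0.217
fp, d, A_obj = 0.5 * (fx + fy), 80.0, 1.0

# Distorted radial distance of pixel (500, 300) from the image center.
rm_dist = np.hypot((500 - x0) / fx, (300 - y0) / fy)        # ~0.378
ks = k3 / 2.0 + np.sqrt((k3 / 2.0) ** 2 + k4 * rm_dist)     # Eq.(9), ~0.939
Ps = (ks * fp / d) ** 2 * A_obj                             # Eq.(22)
p = max((1 - np.exp(-(Ps - 26) / 9)) / (1 + np.exp(-(Ps - 34) / 2.8)), 0.0)
print(round(p, 3))                                          # ~0.346, Eq.(23)
```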

One might ask how motion affects this detection probability. The answer is that the detection probability can be affected by motion if the detection algorithm is analyzing interlaced video. Otherwise the shutter speed is typically fast enough that any foreseeable UAV motion does not cause detectable blurring.

Interlacing is a process where every other pixel line from one shutter opening is combined with the remaining pixel lines from the next shutter opening. The test camera streams video at a rate of 30 frames per second, with 1/60 s between shutter openings. If image motion is enough to move the target more than one target width in 1/60 s, then target pixels in every other line will not be connected. One way to correct for this problem is to run the algorithm on only half of the image lines and interpolate the color values of the missing pixels, as sketched below. If this is done, then the detection probability should remain as derived.
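A simple sketch of this correction is given below; the choice of which field to keep and the use of neighbor averaging for the missing lines are assumptions.

```python
import numpy as np

def deinterlace_single_field(frame, keep_even=True):
    """Keep only the rows from one field of an interlaced frame and replace
    the discarded rows with the average of their vertical neighbors."""
    out = frame.astype(float).copy()
    start = 1 if keep_even else 0                  # rows to overwrite
    for r in range(start, frame.shape[0], 2):
        above = out[r - 1] if r > 0 else out[r + 1]
        below = out[r + 1] if r + 1 < frame.shape[0] else out[r - 1]
        out[r] = 0.5 * (above + below)
    return out
```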

The detection probability developed in this paper depends on well-lit conditions. In the dark, the shutter speed is slower and color blurring from motion is more prevalent. In addition, hue measurements are much less accurate without adequate light.

    IV. Conclusion

A probability model has been presented for target detection using a CCD camera. The model is dependent on many assumptions, and is unlikely to apply in many search cases. However, the model has been generated based on color, which is one of the most noticeable characteristics of a passive target. In addition, the process of creating a detection model has been shown.

The given model assumes that a search is being made for an unobstructed spherical object, although other targets with comparable cross-sectional areas may follow nearly the same model. The results are only valid if the detection algorithm uses color segmentation and connected components to identify targets. The parameters of the model depend on the separation between the target color and background color distributions, and the model's validity depends on well-lit conditions.

If all of these assumptions are met, then an approximation for realistic search progress can be made using the given model. The model includes the effect of image skew, which makes a target's detection probability dependent on its location in the image. In a dynamic sense, if this model is connected to the motion of an airborne agent, then it can reveal what is likely to happen when the agent turns or changes altitude. To provide this insight over time, the model needs to be applied once for every processed image. Finally, concepts of optimal search theory may be used with this model to determine an optimal search allocation, and paths may be planned to approximate the optimal solution.2

    References

1. Koopman, B. O., Search and Screening: General Principles with Historical Applications, Pergamon Press, New York, USA, 1980.

2. Hansen, S. R., "Applications of Search Theory to Coordinated Searching by Unmanned Aerial Vehicles," M.S. thesis, Brigham Young University, 2008, http://contentdm.lib.byu.edu/ETD/image/etd1809.pdf.

3. Stone, L. D., Theory of Optimal Search, Vol. 118 of Mathematics in Science and Engineering, Academic Press, New York, USA, 1975.

4. Tang, Z., "Information-Theoretic Management of Mobile Sensor Agents," M.S. thesis, Ohio State University, Columbus, Ohio, 2005, http://www.ohiolink.edu/etd/send-pdf.cgi?osu1126882086.

5. Flint, M., Polycarpou, M., and Fernandez-Gaucherand, E., "Cooperative Control for Multiple Autonomous UAVs Searching for Targets," Proc. IEEE Conference on Decision and Control, June 2002.

6. Camera Calibration Toolbox for Matlab, June 2006, http://www.vision.caltech.edu/bouguetj/calib_doc/.

7. Zhang, Z., "A Flexible New Technique for Camera Calibration," Tech. rep., Microsoft Corporation, Redmond, WA, 1998.

8. OpenCV Reference Manual, June 2006, http://www710.univ-lyon1.fr/~ameyer/devel/opencv/docs/ref/opencvref_cv.htm.

9. Baker, S. and Kanade, T., "Limits on Super-Resolution and How to Break Them," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2000.

10. Black Widow AV, June 2006, http://www.blackwidowav.com.
