
Separability between pedestrians in hyperspectral imagery

Jared Herweg,1,2,* John Kerekes,2 and Michael Eismann3

1Air Force Institute of Technology, Dayton, Ohio 45433, USA
2Rochester Institute of Technology, Rochester, New York 14623, USA

3Air Force Research Laboratory, Dayton, Ohio 45433, USA

*Corresponding author: [email protected]

Received 15 October 2012; accepted 2 January 2013; posted 14 January 2013 (Doc. ID 177965); published 19 February 2013

The popularity of hyperspectral imaging (HSI) in remote sensing continues to lead to it being adapted in novel ways to overcome challenging imaging problems. This paper reports on research efforts exploring the phenomenology of using HSI as an aid in detecting and tracking human pedestrians. An assessment of the likelihood of distinguishing between pedestrians based on the measured spectral reflectance of observable materials and the presence of noise is presented. The assessments included looking at the spectral separation between pedestrian material subregions using different spectral-reflectance regions within the full range (450–2500 nm), as well as when the spectral content of the pedestrian subregions are combined. In addition to the pedestrian spectral-reflectance data analysis, the separability of pedestrian subregions in remotely sensed hyperspectral images was assessed using a unique data set garnered as part of this work. Results indicated that skin was the least distinguishable material between pedestrians using the spectral Euclidean distance metric. The clothing, especially the shirt, offered the most salient feature for distinguishing the pedestrian. Additionally, significant spectral separability performance is realized when combining the reflectance information of two or more subregions. © 2013 Optical Society of America

OCIS codes: 100.3008, 100.4999.

1. Introduction

Hyperspectral imaging (HSI) continues to mature and progress as a technology for improving target detection and discrimination in remote-sensing applications. Traditionally, HSI has been used in remote sensing for land-use and land-cover classification research [1]. HSI has also been used for smaller targets, such as vehicles, where spectral features serve as additional discriminants between targets for improved tracking [2,3]. More recently, research has shown how HSI can be used for skin detection as well as clothing classification [4,5], independently. The work here builds upon these studies related to the constituent materials found on human pedestrians. This paper reports on work performed to understand the ability to distinguish between pedestrians in a complex urban environment when using HSI. The work focused on the spectral separability phenomenology for the reflectance signatures of predominant materials on a pedestrian (e.g., skin, clothing, and hair). This work only looked at distinguishing between already detected pedestrians, whether by manual extraction or some other outside process. Scene-wide detection of the constituent materials will not be covered in this paper. Analysis included looking at classification error of materials in the presence of noise when using just the pedestrian-material spectral reflectance signatures and also using the sensor-reaching radiance in HSI. For this work a unique HSI data set with extensive ground

1559-128X/13/061330-09$15.00/0 © 2013 Optical Society of America

1330 APPLIED OPTICS / Vol. 52, No. 6 / 20 February 2013


truth was gathered to assess the spectral separability of the materials of interest.

2. Pedestrian as a Multiregion Target

One of the primary challenges to a tracking algorithm is distinguishing among moving objects within a scene; in this case, between pedestrians. This becomes even more of a challenge when the pedestrians come within close proximity of each other. Intuitively, a pedestrian can be thought of as a complex target whose spectral signature is made up of his or her subregion characteristics, where "subregion" refers to items such as hair, skin, and clothing. Typically, it can be observed that individual recognition among pedestrians is largely due to these subregions. Note that in many cultures a pedestrian's shirt and trousers are made of different materials or are distinguishable by different colors. Thus the pedestrian can be thought of as having four contiguous subregions: hair, skin, torso, and trousers. These contiguous subregions could be used to differentiate between pedestrians when used individually or together. One step in this research effort looked at the spectral separability between pedestrians based on their subregions. There may be further distinguishing aspects associated with these subregions such as texture, but they will not be considered here. Again, this work assumed that the individual pedestrians had been detected and segmented by some outside process. Scene-wide detection of pedestrians against the background was not considered within the scope of this paper.

3. Real-World Imaging of Pedestrians Using a Hyperspectral Imager

As part of this work, a hyperspectral imager was used to collect imagery data of pedestrians while they posed in an urban scene. This imagery was collected as part of the Hyperspectral Measurements of Natural Signatures for Pedestrian (HYMNS-P) experiment [6]. The imager was placed on a roof overlooking an urban scene. The HSI sensor collected 220 spectral bands from 450 to 2450 nm with a 1 mrad resolution per pixel [7]. The HSI sensor also had a spectral resolution range of 8–12 nm across the bands. There were 18 pedestrians placed around the scene in natural poses at known locations during the image capture. An example true-color image of the scene is shown in Fig. 1, where the ground sample distance (GSD) was approximately 2.5 cm at the center of the scene. During the experiment, several frames of imagery were captured with the pedestrians moved between known locations for each frame.

Besides the several frames of imagery, an extensive ground-truth effort was conducted. In certain frames of imagery, there were two roaming pedestrians collecting ground-truth information. Several in-scene spectral-reflectance measurements of the background materials were collected. Calibration panels were also placed in the scene, both propped up facing the imager and lying flat on the ground with direct illumination. Additionally, each of the pedestrians was characterized by collecting spectral reflectance measurements of his or her hair, skin, and clothing. High-resolution photographs and metadata, such as skin type, clothing materials, and hair color, were also collected. Portions of this unique data set will become publicly accessible for future studies on pedestrian detection and tracking phenomenology.

As part of this data set, the spectral-reflectance measurements of the hair, skin, shirt, and trousers or shorts (if worn) for 28 unique pedestrians were collected. The volunteer pedestrians were allowed to wear clothing of their choice from their respective wardrobes. Though metadata were collected to capture information about the clothing worn, costuming was not imposed during the data collection. Note that there were 10 additional volunteer pedestrians who provided spectral reflectance measurements and metadata for this portion of the study but were not in the hyperspectral images. For the skin measurements, measurements of each person's right cheek, right forearm, and right calf (if exposed) were taken. These measurements were collected using an Analytical Spectral Devices, Inc., Full Range Field Spectrometer [8]. The spectrometer was a three-detector instrument which could collect spectra from 350 to 2500 nm and had a full width at half-maximum spectral resolution of 3 to 12 nm, depending on the detector. A contact probe with an integrated illumination source was used to collect the relative reflectance samples [9]. An example of the collection setup for measuring a forearm using the contact probe with integrated illumination source is shown in Fig. 2. The material

Fig. 1. (Color online) True-color radiance image of the HYMNS-P scene with pedestrians present. Several variations of this static scene were used for this research effort.

Fig. 2. (Color online) Collection setup of a skin spectral-reflectance measurement on the forearm of a pedestrian using the custom field contact probe.


reflectance was measured at 1 nm intervals and subsequently resampled from 2151 spectral bands down to 216 spectral bands using the nearest-neighbor method in order to approximate the spectral bands of the hyperspectral imagery. Examples of the data collected from the 28 pedestrians for the hair and trousers subregions are shown in Figs. 3–6.
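The nearest-neighbor resampling step described above can be sketched as follows. This is a minimal illustration, not the authors' processing code; the destination band centers and the placeholder reflectance values are assumptions for the example.

```python
import bisect

def resample_nearest(src_wavelengths, src_values, dst_wavelengths):
    """Resample a spectrum to new band centers by nearest-neighbor lookup.

    src_wavelengths must be sorted ascending; for each destination band
    center, the value at the closest source wavelength is taken.
    """
    out = []
    for w in dst_wavelengths:
        i = bisect.bisect_left(src_wavelengths, w)
        if i == 0:
            out.append(src_values[0])
        elif i == len(src_wavelengths):
            out.append(src_values[-1])
        else:
            # pick the nearer of the two bracketing source wavelengths
            left, right = src_wavelengths[i - 1], src_wavelengths[i]
            nearer = i if (right - w) < (w - left) else i - 1
            out.append(src_values[nearer])
    return out

# Hypothetical example: a 1 nm source grid (2151 bands, 350-2500 nm)
# resampled to a few coarser, assumed imager band centers.
src_wl = list(range(350, 2501))
src_val = [w / 2500.0 for w in src_wl]   # placeholder reflectance values
dst_wl = [450.0, 1000.5, 2450.0]
resampled = resample_nearest(src_wl, src_val, dst_wl)
```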

4. Pedestrian Spectral Characteristics

The pedestrian can be treated as a multiregion target with subregions defined as hair, skin, torso, and trousers. Each of these subregions has different spectral features, which allows them to be independently distinguished within hyperspectral imagery. Using the HYMNS-P data set, some of these features can be identified.

A. Hair Spectral Features

The spectral reflectance samples shown in Fig. 3, as taken from the pedestrians, show visible differences among the hair samples across the spectrum, with the most noticeable variations below 1800 nm. By inspection, specific features can be seen at approximately 1200, 1500, and 1740 nm. It should be noted that differing amounts of hair on the top of the head may allow for skin reflectance to be mixed into the measured spectra.

B. Skin Spectral Features

The spectral-reflectance samples for skin are seen in Fig. 4. The spectral differences in the skin primarily occur below 1400 nm. The water content of the skin causes the high absorption above 1400 nm [10]. Note that due to different exposure levels to natural sunlight and other ultraviolet sources, the skin of the different body parts may have differences in melanin levels. This leads to slight variations of the skin reflectance signature of a pedestrian depending on the visible skin observed. Blood content can also have an impact on the skin reflectance signature [4] but was not evaluated under this study.

C. Clothing Spectral Features

While the torso and trouser regions of the pedestrian are treated as different subregions, the clothing materials can be similar with different colorants. In the HYMNS-P data set the pedestrians' clothes were primarily cotton based, with some minor materials typically blended in with the cotton. There was only one pedestrian who had a 100% polyester shirt; all other materials were blended with cotton. As such, the spectral-reflectance curves shown in Figs. 5 and 6 are very similar above 1000 nm. However, there are spectral features seen above 1000 nm for the clothing materials, as discussed below. It should be pointed out that the constant reflectance spectrum seen in Fig. 5 was considered suspect but was not removed from the sample set used for the analysis described in Section 5.A.

Three of the spectral-reflectance signatures fromthe collection shown in Fig. 5 are shown in Fig. 7.Each of these materials exhibits a different spectral

Fig. 3. (Color online) Spectral profiles for the several relative spectral-reflectance measurements of pedestrians' hair.

Fig. 4. (Color online) Spectral profiles for the several relative spectral-reflectance measurements of pedestrians' facial skin.

Fig. 5. (Color online) Spectral profiles for the several relative spectral-reflectance measurements of pedestrians' clothing fabric on the torso.


characteristic as illustrated in Fig. 7. The 100% cotton material was a reddish-brown t-shirt, the 67/33 cotton–polyester blend was a yellow t-shirt, and the 100% polyester was a blue shirt. It is clear that there are spectral differences in the visible region (450–700 nm) and in the short-wave infrared (SWIR) region around 1550 nm and 1900–2250 nm among the spectral-reflectance profiles. Additionally, research by Haran [11] on the reflectance properties of cotton and polyester in the SWIR spectrum (1000–2500 nm) showed the major absorption bands for cotton are centered at approximately 1196, 1492, 1930, 2106, and 2328 nm. For polyester, the absorption features are centered at approximately 1122, 1395, 1656, 1900, 2132, 2254, and 2328 nm. Certainly the features at 1500, 1930, and 2106 nm for cotton and 1656 nm for polyester are seen in Fig. 7. These features could be used to distinguish between 100% cotton and 100% polyester materials. However, the 67/33 cotton–polyester blended fabric exhibits features from both materials. It should be noted that there are significant water absorption bands at approximately 1400 and 1900 nm [1], which would mask the spectral features near those wavelengths.

5. Estimating Separability

A. Separability of Spectral Reflectance

We wanted to assess the likelihood of separability at different signal-to-noise ratio (SNR) levels. Distributions of noisy samples were generated for each signature such that

$$\tilde{x} = \vec{x} + \frac{\vec{x}}{\mathrm{SNR}} \odot \aleph(0,1), \qquad (1)$$

where $\vec{x}$ was the p-dimensional sample spectral vector for a pedestrian's subregion, $\aleph(0,1)$ was a p-dimensional vector of random variables from the standard normal distribution, the sample vector $\vec{x}$ divided by the SNR was used to scale the noise to achieve a uniform SNR across bands [1], and $\odot$ signifies an element-by-element multiply operation. By applying a flat SNR level across all bands, this assessment was independent of noise characteristics due to a particular sensor and maintains generality. The separability was assessed by computing the spectral distance from each non-noise-modulated subregion spectral sample to all noisy samples within the same subregion. The adjusted spectral Euclidean distance metric was used such that [12]

$$d_e(\vec{x}, \vec{y}) = \sqrt{\frac{1}{p} \sum_{i=1}^{p} (x_i - y_i)^2}, \qquad (2)$$

where $\vec{x}$ represents the p-dimensional spectral vector of a material measurement from the pedestrian of interest (POI) and $\vec{y}$ represents the spectral-reflectance vector of one of the POI's noisy samples or any of the noisy or non-noise-added samples of the same subregion from other pedestrians.
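A minimal sketch of the noise model of Eq. (1) and the adjusted distance of Eq. (2), assuming a short placeholder spectrum; the spectrum, seed, and SNR value are illustrative only, not values from the study.

```python
import math
import random

def add_noise(x, snr, rng):
    # Eq. (1): per-band noise scaled by x_i / SNR gives a uniform SNR
    # across bands, independent of any particular sensor's noise model.
    return [xi + (xi / snr) * rng.gauss(0.0, 1.0) for xi in x]

def adjusted_euclidean(x, y):
    # Eq. (2): Euclidean distance normalized by sqrt(p) so distances stay
    # comparable when the dimensionality p changes.
    p = len(x)
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / p)

rng = random.Random(0)               # fixed seed for repeatability
x = [0.2, 0.4, 0.6, 0.8]             # placeholder 4-band reflectance spectrum
noisy = add_noise(x, snr=8.0, rng=rng)
d = adjusted_euclidean(x, noisy)
```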

To assess how likely the subregion samples from one person could be distinguished from the same subregion samples of all other pedestrians, two classes were defined. The first class represented the distribution of distances between a particular pedestrian's subregion sample and the noise-modulated versions of that sample. This was called the POI class. The second class represented the distribution of distances between the non-noise-modulated POI subregion sample and all the remaining noisy samples of the other pedestrians. This constituted a non-POI class. With this one-versus-all construct, the probability of error per SNR for the POI class, $p(\mathrm{error} \mid \omega_{\mathrm{POI}})$, was calculated according to the Bayes minimum error threshold [13]. The probability of error for the POI class was chosen instead of the total

Fig. 6. (Color online) Spectral profiles for the several relative spectral reflectance measurements of pedestrians' trousers or shorts.

Fig. 7. Textile reflectance spectra taken from pedestrian clothing in the HYMNS-P data set. The spectral differences between the three material types are apparent.


probability of error because in real-world imaging it was expected each subregion would only be covered by very few pixels. As such, we were mainly concerned with the SNR levels where the distributions begin to overlap and that lead to missed detections. The Bayes rule for minimum cost could be used to overcome this limitation, but the costs in our case were assumed to be equal, with no particular application presently considered. An example of the two class distributions from the hair subregion data at an SNR level of 8 is shown in Fig. 8. The distributions were estimated empirically using kernel-based density estimation with a Gaussian kernel [13] and 100,000 samples generated using Eqs. (1) and (2). The non-POI class follows a nonstandard distribution.
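The one-versus-all error estimate can be sketched as follows, using a Gaussian kernel density estimate and a numerical sweep to find where the non-POI density dominates (Bayes minimum error, equal priors and costs). The distance samples, kernel bandwidth, and grid size are hypothetical; the paper's actual bandwidth selection and sample counts are not reproduced here.

```python
import math

def gaussian_kde(samples, bandwidth):
    """One-dimensional kernel density estimate with a Gaussian kernel."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    def pdf(x):
        return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                          for s in samples)
    return pdf

def poi_error(poi_dists, nonpoi_dists, bandwidth=0.01, grid=400):
    """Estimate p(error | POI): the POI-class probability mass lying where
    the non-POI density dominates, i.e. missed detections."""
    f_poi = gaussian_kde(poi_dists, bandwidth)
    f_non = gaussian_kde(nonpoi_dists, bandwidth)
    lo = min(poi_dists + nonpoi_dists)
    hi = max(poi_dists + nonpoi_dists)
    step = (hi - lo) / grid
    err = 0.0
    for k in range(grid):
        x = lo + (k + 0.5) * step
        p = f_poi(x)
        if f_non(x) > p:      # non-POI class wins here -> POI mass is error
            err += p * step
    return err

# Hypothetical distance samples: POI distances cluster low, non-POI high
poi = [0.02, 0.03, 0.025, 0.035, 0.028]
non = [0.10, 0.12, 0.11, 0.13, 0.115]
e = poi_error(poi, non)
```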

B. Combining Subregions for Improved Separability

In addition to computing the separability for the individual subregions, the separability of the spectral data between pedestrians was assessed when the spectral vectors of subregions were combined. This was accomplished by concatenating the spectral vectors such that

$$\vec{x}_{a,b} = \begin{bmatrix} \vec{x}_a \\ \vec{x}_b \end{bmatrix}, \qquad (3)$$

where a and b represent two different subregions of the same POI, and it was assumed $\vec{x}$ is a column vector with p dimensions. Note that $\vec{x}_{a,b}$ now has dimension 2p, assuming vectors of equal length, which is why the adjusted spectral Euclidean distance was used in Eq. (2). Note that there is an automatic bias in the spectral distance, but normalizing by the dimensionality allows for comparing separability metrics between single- and multisubregion vectors as dimensionality increases. The same process for computing probability of error versus SNR, as outlined above, was followed for this portion of the study. Note that in addition to the pairwise combinations, combinations of three and all four subregions together were assessed.
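The concatenation of Eq. (3), and the way the 1/p normalization in Eq. (2) removes the dimensionality bias, can be illustrated with placeholder spectra (the values below are invented for the example):

```python
import math

def adjusted_euclidean(x, y):
    # Eq. (2): Euclidean distance divided by sqrt(p).
    p = len(x)
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / p)

def concat_subregions(*vectors):
    # Eq. (3): stack subregion spectra into a single longer feature vector.
    out = []
    for v in vectors:
        out.extend(v)
    return out

# Placeholder spectra: every band of y differs from x by a constant 0.1
hair_x, hair_y = [0.1, 0.2, 0.3], [0.2, 0.3, 0.4]
shirt_x, shirt_y = [0.5, 0.6, 0.7], [0.6, 0.7, 0.8]

d_single = adjusted_euclidean(hair_x, hair_y)
d_combined = adjusted_euclidean(concat_subregions(hair_x, shirt_x),
                                concat_subregions(hair_y, shirt_y))
# Both distances come out the same (0.1) even though the combined vector
# has dimension 2p, which is what makes single- and multi-subregion
# separability metrics comparable.
```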

C. Spectral Separability after Spectrum Subsetting

It should be pointed out that many current pedestrian-detection systems only utilize the visible portion of the spectrum [14]. One of the objectives of this work included looking at how the different spectral regions of the data affected the separability. The probability of error analysis was performed for the full range (FR, 450–2250 nm), the visible region (450–700 nm) with only three bands, the visible region with 22 bands, the visible to near-infrared region (VNIR, 450–1000 nm) with 39 bands, the first SWIR region (SWIR1, 1000–1700 nm) with 66 bands, and a second SWIR region (SWIR2, 1800–2250 nm) with 48 bands. It should be pointed out that for the RGB case, the 22-band data from the visible region were spectrally smoothed using three Gaussian kernels centered on 450, 545, and 600 nm. The Gaussian kernels had a 50 nm standard deviation. The peak wavelengths were chosen to correspond with the wavelength locations of the tristimulus value peaks from the Commission Internationale de l'Éclairage 1931 Standard Observer Visual Response model [15].
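The Gaussian smoothing used to synthesize the three RGB channels might be sketched as below. The kernel centers (450, 545, 600 nm) and the 50 nm standard deviation follow the text, while the even 22-band grid and the flat test spectrum are assumptions for the example.

```python
import math

def gaussian_weights(centers_nm, band_centers, sigma_nm):
    """For each target channel, Gaussian weights over the source bands,
    normalized to sum to 1."""
    weights = []
    for c in centers_nm:
        w = [math.exp(-0.5 * ((b - c) / sigma_nm) ** 2) for b in band_centers]
        total = sum(w)
        weights.append([wi / total for wi in w])
    return weights

def smooth_to_rgb(spectrum, band_centers,
                  centers_nm=(450.0, 545.0, 600.0), sigma_nm=50.0):
    """Collapse a visible-region spectrum to three channels by Gaussian
    smoothing centered on the given wavelengths."""
    W = gaussian_weights(centers_nm, band_centers, sigma_nm)
    return [sum(wi * si for wi, si in zip(w, spectrum)) for w in W]

# Assumed: 22 visible bands evenly spaced over 450-700 nm
bands = [450.0 + i * (250.0 / 21.0) for i in range(22)]
flat = [0.5] * 22           # a flat spectrum should map to ~0.5 per channel
rgb = smooth_to_rgb(flat, bands)
```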

D. Separability in Remotely Sensed Imagery

In addition to computing the separability of the pedestrians' spectral-reflectance samples, the separability of the pedestrian subregion data from remotely sensed HSI data was assessed. For this portion of the study, two images, labeled Image A for the first image and Image B for the second image, were used from the HYMNS-P data set, where the pedestrians were in two different locations between images, which were captured about 10 minutes apart. Each of the pedestrians in the respective images was manually extracted and segmented, where regions of interest were selected according to the respective subregions. The spectral data were thus labeled according to pedestrian and subregion. The imagery data were assessed in sensor-reaching radiance and were not atmospherically compensated to convert to estimated reflectance. However, known bad bands were removed prior to processing [14].

Each pedestrian was set as the POI in turn. For each subregion on the POI, the respective sample mean was calculated. The adjusted spectral Euclidean distances between the POI and their subregion samples were calculated to generate the POI distance class. Likewise, the adjusted spectral Euclidean distances between the POI subregion mean and all the samples of the other pedestrians' same subregion were calculated. This constituted the non-POI class. For the subregions, there were a limited number of spectral samples, ranging from as few as 7 for hair on some pedestrians to as many as 192 for the torso on certain pedestrians. As such, the kernel-density-estimation

Fig. 8. Example of the two class distance distributions for a pedestrian's face–skin reflectance sample. The probability density function of the POI class distribution, denoted by the solid curve, is seen on the left, while the probability density function of the non-POI class distribution, denoted by the dashed curve, is seen on the right. The SNR level for these two distributions was 8.


approach, which was performed using the spectral reflectance data, was not used in this case. Rather, the discrete probability mass functions were estimated from the normalized discrete histograms. An example normalized histogram from one of the subregions is shown in Fig. 9, with the POI and non-POI class distributions separated for visibility. Using the Bayes rule for minimum error and assuming equal costs, the probability of error for the POI class was calculated for each pedestrian. This process was performed twice. The classification error was computed using the subregion spectral-radiance mean on the pedestrians of the first image. This can be referred to as a test-on-train case. The classification error was then computed using the subregion spectral-radiance mean of the first image and the subregion data from the second image. This constituted looking at the separability of pedestrians using data across images.
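The histogram-based error estimate can be sketched as follows; the bin count, range, and distance samples are hypothetical stand-ins for the per-subregion radiance distances described above.

```python
def histogram_pmf(samples, lo, hi, nbins):
    """Normalized discrete histogram (probability mass function estimate)."""
    counts = [0] * nbins
    width = (hi - lo) / nbins
    for s in samples:
        i = min(int((s - lo) / width), nbins - 1)
        counts[i] += 1
    n = len(samples)
    return [c / n for c in counts]

def poi_error_discrete(poi, non, lo, hi, nbins):
    """p(error | POI) under Bayes minimum error with equal priors and costs:
    the POI mass in bins where the non-POI pmf is larger."""
    p_poi = histogram_pmf(poi, lo, hi, nbins)
    p_non = histogram_pmf(non, lo, hi, nbins)
    return sum(p for p, q in zip(p_poi, p_non) if q > p)

# Hypothetical radiance-distance samples for one subregion; the two classes
# are well separated, so the estimated POI error should be zero.
poi = [0.5, 0.6, 0.55, 0.7, 0.65, 0.6, 0.58]
non = [2.0, 2.5, 3.0, 2.2, 2.8, 3.5, 2.6, 2.4]
e = poi_error_discrete(poi, non, lo=0.0, hi=4.0, nbins=20)
```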

6. Results and Discussion

A. Spectral Reflectance Data Results

The probability of error was computed for each subregion of each pedestrian. Figure 10 shows the probability of error versus SNR using the full spectral range for the four subregions: hair, face, shirt, and trousers. Each of the four curves is the average probability of error for all 28 pedestrians for the respective subregion. From the results in Fig. 10, SNRs as low as 14 achieved good separability for the subregions of hair, shirt, and trousers, with hair being the most separable. The face samples required a much higher SNR to achieve the same probability of error performance as the other regions. The results of combining the subregions using two, three, and four subregions are shown in Figs. 11–13. Inspection of the graphs indicated that, for this particular data set, significant improvements can be realized when combining the spectral information of just two subregions.

Table 1 tabulates the results of the classification error when different spectral regions are used. From the results, it is apparent that using only the three-color RGB bands, there is a high probability of error compared to other spectral regions where there is more spectral resolution. Additionally, the skin reflectance samples remained the most difficult over all spectral ranges. It also appears that, for this data set, the VNIR spectral range has probability-of-error results similar to the FR results when compared

Fig. 9. (Color online) POI and non-POI class distributions for a pedestrian's torso subregion pixels.

Fig. 10. Plot of the probability of error for the POI class versus SNR for each of the subregions. Note that fairly low probabilities of error were achieved with relatively low SNR, but in typical imagery there are very few pixels on each subregion.

Fig. 11. Plot of the probability of error for the POI class versus SNR when two of the considered subregions are combined.

Fig. 12. Plot of the probability of error for the POI class versus SNR when three of the considered subregions are combined.


to the other spectral ranges. This is an interesting result when considering the design of systems that do not rely on the SWIR bands. This is not surprising for this data set because most of the pedestrians wore cotton-based clothing, which all had similar spectral signatures in the SWIR, as shown in Figs. 5 and 6. Additionally, it is clear that significant performance improvement can be realized for all spectral ranges when combining the spectral information of just two subregions together. Combining three or four subregions together offers even greater performance improvement, though not at the same significance as going from just one to two combined subregions.

B. Spectral Imaging Results

The results for computing the probability of error of subregions detected in remotely sensed imagery are shown in Tables 2 and 3. It is evident from looking at Table 2 for the probability of error in Image A that the classification performance is relatively low among the pedestrians. However, given the probability of error was averaged among the 18 pedestrians in the imagery, the standard deviations indicate there was a range of error among the pedestrians. Using the FR spectral range, the torso and trousers had the lowest error rate while the skin had the highest. Similar results were found for the other spectral regions. When looking at the binary classification performance among the spectral ranges for a single subregion, it is evident that the VNIR performs similarly to the FR. It is interesting to note that the results show a similar trend as that shown in Table 1 regarding the FR and VNIR spectral regions. The skin remained the most difficult subregion to distinguish, and the RGB offered the lowest performance among the spectral regions. Due to the GSD, the hair and skin pixels had a high probability of being spectrally mixed with other materials, leading to lower classification performance. It is also evident that for this data set the SWIR2 region offered little to no significant information for distinguishing the subregions. Again, this is likely in part due to the similarity among the clothing types of the different pedestrians, where features in the SWIR were similar. Results

Table 1. Summary Table of P(error|ω_POI) for Pedestrian Subregion Spectral-Reflectance Combinations at SNR = 5

Subregion                   FR      RGB     Vis     VNIR    SWIR1   SWIR2
Hair                        0.250   0.860   0.640   0.516   0.465   0.610
Face                        0.906   1.000   0.985   0.973   0.977   0.833
Shirt                       0.525   0.707   0.314   0.534   0.941   0.934
Trousers                    0.406   0.903   0.652   0.481   0.820   0.928
Hair–Face                   0.178   0.896   0.801   0.642   0.388   0.320
Hair–Shirt                  0.082   0.256   0.111   0.279   0.424   0.762
Hair–Trousers               0.043   0.470   0.273   0.168   0.452   0.827
Face–Shirt                  0.269   0.600   0.363   0.334   0.879   0.914
Face–Trousers               0.248   0.871   0.421   0.299   0.781   0.881
Shirt–Trousers              0.136   0.305   0.158   0.137   0.656   0.827
Hair–Face–Shirt             0.031   0.429   0.219   0.126   0.277   0.573
Hair–Face–Trousers          0.019   0.531   0.248   0.109   0.305   0.606
Hair–Shirt–Trousers         0.018   0.222   0.081   0.076   0.292   0.652
Face–Shirt–Trousers         0.051   0.302   0.124   0.077   0.582   0.807
Hair–Face–Shirt–Trousers    0.005   0.206   0.089   0.034   0.188   0.475

Table 2. Summary Table of the Subregion Probability of Error for Subregions in Image A

         Torso          Skin           Trousers       Hair
         Mean   s.d.    Mean   s.d.    Mean   s.d.    Mean   s.d.

FR       0.377  0.335   0.906  0.205   0.372  0.309   0.595  0.361
RGB      0.788  0.310   0.969  0.097   0.792  0.346   0.875  0.244
VIS      0.553  0.375   0.975  0.060   0.596  0.358   0.762  0.337
VNIR     0.360  0.337   0.901  0.183   0.348  0.305   0.612  0.324
SWIR1    0.832  0.314   0.949  0.142   0.787  0.344   0.792  0.339
SWIR2    0.998  0.006   0.990  0.040   0.962  0.242   0.820  0.242


Fig. 13. Plot of the probability of error for the POI class versus SNR when all four of the considered subregions are combined.

1336 APPLIED OPTICS / Vol. 52, No. 6 / 20 February 2013


in Table 3 repeat this trend; however, it is evident that using the spectral information from the first image to classify the subregions of the second leads to approximately a 10% degradation in performance. Because these were real-world images, several environmental factors, such as illumination variations, adjacency effects, and atmospheric characteristics, could not be explicitly known or fully characterized. These aspects likely increased the variability among the subregions, further overlapping the spectral distance distributions.
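The cross-image experiment summarized in Table 3 amounts to building a signature library from Image A and scoring labeled subregion pixels from Image B against it. A minimal sketch of that bookkeeping, again assuming a nearest-neighbor Euclidean-distance classifier (the function name cross_image_error_rate and the array layout are illustrative assumptions, not the paper's code):

```python
import numpy as np

def cross_image_error_rate(library_a, pixels_b, labels_b):
    """Fraction of Image-B pixels misclassified when each pixel spectrum is
    assigned to the nearest Image-A subregion signature.

    library_a : (n_classes, n_bands) mean subregion spectra from Image A
    pixels_b  : (n_pixels, n_bands) labeled pixel spectra from Image B
    labels_b  : (n_pixels,) true class index for each Image-B pixel
    """
    lib = np.asarray(library_a, float)
    px = np.asarray(pixels_b, float)
    # Pairwise Euclidean distances: (n_pixels, n_classes)
    dists = np.linalg.norm(px[:, None, :] - lib[None, :, :], axis=2)
    predicted = dists.argmin(axis=1)
    return float(np.mean(predicted != np.asarray(labels_b)))
```

In this framing, the roughly 10% degradation reported above corresponds to the Image-B pixel distributions shifting away from the Image-A library signatures because of uncompensated illumination and geometry differences.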

7. Conclusion

This paper described a unique and novel data set and the results from studying the spectral appearance of pedestrians in hyperspectral imagery. The characterization of the empirical spectral-reflectance data collected during this work indicated that significant improvements in pedestrian distinguishability can be realized when using the spectral information of more than one subregion. While this requires a higher spatial resolution than typical remote-sensing engagement scenarios (better than approximately 10 cm GSD) and lends itself more to a close-in-sensing geometry, having the HSI information on pedestrians would enable tracking systems to better distinguish among them. The results showed that, for pedestrians in the HYMNS-P data set, the VNIR spectral region offered classification performance similar to the FR spectral range. Also, skin was the most difficult subregion to distinguish among pedestrians using the adjusted spectral Euclidean distance metric.
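The multi-subregion improvement noted above can be realized by fusing the subregion spectra into a single feature vector before computing spectral distances. A minimal sketch of one such fusion, assuming simple concatenation with per-subregion normalization so that no single material dominates the Euclidean distance (the function name combined_signature and the normalization choice are assumptions for illustration, not the paper's stated method):

```python
import numpy as np

def combined_signature(subregions):
    """Concatenate per-subregion reflectance spectra (e.g., hair, face,
    shirt, trousers) into one feature vector.  Each subregion spectrum is
    scaled to unit Euclidean norm before concatenation so that every
    material contributes comparably to subsequent distance computations."""
    parts = []
    for spectrum in subregions:
        s = np.asarray(spectrum, float)
        norm = np.linalg.norm(s)
        parts.append(s / norm if norm > 0 else s)
    return np.concatenate(parts)
```

With a combined vector like this, two pedestrians remain separable whenever any one of their subregions differs spectrally, which is consistent with the monotonic drop in error seen in Table 1 as subregions are added.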

The separability among pedestrians using remotely sensed imagery was much poorer, and part of this may be due to the increased variability among the pixel samples for the subregions under mixed illumination and viewing geometry. This variability, which was not fully characterized here, increased the overlap of the spectral distance distributions used to assess the separability based on the binary classification-error rates. Further work is needed to consider additional spectral separability techniques, illumination impacts, and data from other HSI systems. Additionally, techniques that compensate for illumination and geometry variability in the remote-sensing case demonstrated here (using an oblique-oriented, building-mounted sensor) could be applied, such as those reported by Ientilucci and Bajorski [16].

The main points learned from the results are that the FR and VNIR spectral ranges provide the best classification performance among the pedestrians, and that the clothing of a pedestrian, especially the shirt, appears to be the most salient feature for improving the spectral separation between pedestrians in real-world imagery. Portions of the unique data set used in this research will be made publicly available upon request.

This material is based on research sponsored by the Air Force Office of Scientific Research under agreement number FA9550-08-1-0028. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation thereon. Additionally, the authors would like to thank the Air Force Research Laboratory for volunteer pedestrian support during the data-collection efforts and the Air Force Institute of Technology for use of the HST3 sensor. Finally, we would like to thank the Institute for the Development and Commercialization of Advanced Sensor Technology (IDCAST) for use of their facility during the HYMNS-P data collection. The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the United States Government.

References

1. J. R. Schott, Remote Sensing, 2nd ed. (Oxford University, 2007).

2. J. Blackburn, M. Mendenhall, A. Rice, P. Shelnutt, N. Soliman, and J. Vasquez, "Feature aided tracking with hyperspectral imagery," Proc. SPIE 6699, 1–12 (2007).

3. A. Rice, J. Vasquez, M. Mendenhall, and J. Kerekes, "Feature-aided tracking via synthetic hyperspectral imagery," in First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2009 (IEEE, 2009), pp. 1–4.

4. A. S. Nunez, "A physical model of human skin and its application for search and rescue," Ph.D. dissertation, Air Force Institute of Technology, Wright–Patterson Air Force Base, OH (2010).

5. J. D. Clark, M. J. Mendenhall, and G. L. Peterson, "Stochastic feature selection with distributed feature spacing for hyperspectral data," in 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2010 (IEEE, 2010), pp. 1–4.

6. J. A. Herweg, J. P. Kerekes, and M. Eismann, "Hyperspectral imaging of natural signatures for pedestrians," Proc. SPIE 8390, 83901C (2012).

Table 3. Summary Table of the Subregion Probability of Error When Using Spectral Information from Image A to Classify POI Subregions from Image B

         Torso          Skin           Trousers       Hair
         Mean   s.d.    Mean   s.d.    Mean   s.d.    Mean   s.d.

FR       0.477  0.363   0.958  0.083   0.415  0.339   0.629  0.354
RGB      0.865  0.176   1.000  0.000   0.956  0.053   0.942  0.141
VIS      0.651  0.341   1.000  0.002   0.730  0.255   0.875  0.209
VNIR     0.456  0.363   0.950  0.098   0.452  0.318   0.751  0.326
SWIR1    0.786  0.313   0.980  0.037   0.702  0.280   0.806  0.310
SWIR2    0.911  0.222   1.000  0.000   0.915  0.230   0.892  0.210



7. C. M. Jengo and J. LaVeigne, "Sensor performance comparison of HyperSpecTIR instruments 1 and 2," in Aerospace Conference 2004 Proceedings (IEEE, 2004), Vol. 3, pp. 1799–1805.

8. Analytical Spectral Devices, Inc., "FieldSpec Pro User's Guide," 2002, retrieved 15 October 2010, http://www.asdi.com.

9. D. Simmons, "Performance characterization of an innovative illumination source for the Analytical Spectral Devices spectroradiometer, FieldSpec Pro FR," Rochester Institute of Technology Digital Imaging and Remote Sensing Laboratory Probe Development Status Report (2006).

10. I. Pavlidis, P. Symosek, B. Fritz, M. Bazakos, and N. Papanikolopoulos, "Automatic detection of vehicle occupants: the imaging problem and its solution," Machine Vis. Appl. 11, 313–320 (2000).

11. T. L. Haran, "Short-wave infrared diffuse reflectance of textile materials," Master's thesis (Georgia State University, 2008).

12. P. Bajorski, Statistics for Imaging, Optics, and Photonics (Wiley, 2011).

13. A. Webb, Statistical Pattern Recognition, 2nd ed. (Wiley, 2005).

14. J. A. Herweg, "Pedestrian detection phenomenology in a cluttered urban environment using hyperspectral imaging," Ph.D. dissertation (Rochester Institute of Technology, 2012).

15. R. S. Berns, Billmeyer and Saltzman's Principles of Color Technology, 3rd ed. (Wiley, 2000).

16. E. J. Ientilucci and P. Bajorski, "Stochastic modeling of physically derived signature spaces," J. Appl. Remote Sens. 2, 1–10 (2008).
