
Transcript of 2002_dec_1307-1315

  • 8/12/2019 2002_dec_1307-1315

    1/9

Creation of Digital Terrain Models Using an Adaptive Lidar Vegetation Point Removal Process

George T. Raber, John R. Jensen, Steven R. Schill, and Karen Schuckman

Abstract
Commercial small-footprint lidar remote sensing has become an established tool for the creation of digital terrain models (DTMs). Unfortunately, even after the application of lidar vegetation point removal algorithms, vertical DTM error is not uniform and varies according to land cover. This paper presents the results of the application of an adaptive lidar vegetation removal process to a raw lidar dataset of a small area in North Carolina. This process utilized an existing lidar vegetation point removal algorithm in which the parameters were adaptively adjusted based on a vegetation map. The vegetation map was derived through the exclusive use of the lidar dataset, making the process independent of ancillary data. The vertical error and surface form of the resulting DTM were then compared to DTMs created using traditional techniques. The results indicate that the adaptive method produces a superior DTM.

Introduction
Digital terrain models (DTMs) are used to model landscape elevation, slope, and aspect characteristics for hydrologic, environmental, and urban/suburban applications. Methods for creating DTMs include (1) in situ measurement using surveying instruments (including the Global Positioning System), (2) photogrammetric techniques based on stereoscopic aerial photography, (3) interferometric synthetic aperture radar (IFSAR), and (4) active light detection and ranging (lidar) technology (Jensen, 2000). Each method has strengths and weaknesses and produces DTMs of varying accuracy. Recently, there has been confusion in the DTM marketplace regarding which method is superior. As a response, a significant amount of research has been conducted to determine the best methods for creating accurate bald-earth DTMs using lidar technology (e.g., Plaut et al., 1999; Hodgson et al., 2002).
This research describes adaptive digital image post-processing techniques that can be used to create more accurate DTMs using lidar-derived vegetation canopy information.

Multiple-Return Lidar Characteristics
The majority of current lidar instruments receive multiple returns from a single transmitted laser pulse. This is commonly referred to as multiple-return lidar. The interaction of the laser pulse with the Earth's surface may be relatively simple or complex. In the simplest case, a pulse of laser energy interacts directly with bare soil, asphalt, and/or concrete, and most of it is scattered back toward the receiver. This results in a single return. Similarly, depending upon the wavelength of the laser, a single laser pulse may be scattered directly from the top

of a very dense vegetation canopy, also resulting in a single return (although this is less likely). The situation becomes more complex when a pulse of laser energy passes through the top of the canopy and interacts with (1) the tree canopy trunk, branches, stems, and leaves; (2) understory trunk, branches, stems, and leaves; and, finally, (3) the terrain surface. This series of events may result in several returns being recorded for a single pulse of transmitted energy. Current commercial lidar instruments generate, at this writing, 5,000 to 50,000 pulses per second, each of which may produce several returns. A detailed discussion of lidar technology, capabilities, and applications is available in Maune (2001).

Improving the Accuracy of Lidar-Derived DTMs
It is necessary to identify and segregate lidar returns that hit the terrain surface from non-terrain returns (e.g., within the vegetation canopy) in order to create accurate lidar-derived DTMs. To this end, research has been conducted to assess the accuracy of, and the potential sources of error in, DTMs. For example, Huising et al. (1998) examined the impact of instrument and calibration error on lidar-derived DTMs. Cowen et al. (2000) evaluated the effect of percent canopy closure on lidar-derived DTMs. Hodgson et al. (2003) found that land-cover types were a significant factor when extracting elevation information from leaf-on lidar data in North Carolina.

Setting aside instrument error (i.e., assuming all recorded lidar observation points are in their precise planimetric location), there are theoretically two major explanations for the introduction of lidar-produced DTM error: (1) interpolation error, and (2) error that results from the incorporation of non-terrain points in the creation of the DTM. Interpolation error has two major sources. The first source of interpolation error is semi-systematic because it is related to the post spacing or distance between lidar returns (e.g., 3 meters between lidar return postings).
Post spacing is the result of a number of collection parameters, including (1) flying height of the aircraft above ground level (AGL), (2) speed of the aircraft, (3) the pulse rate, and (4) the scan angle.

The second source of interpolation error is a result of the lidar vegetation point removal process. Many common lidar vegetation point removal algorithms use a statistical surface trend to remove lidar-derived elevation points that appear to be the result of interaction with vegetation cover rather than the desired bald earth (Tao and Hu, 2001; Kraus and Pfeifer, 1998). The removal of lidar vegetation points means that fewer points are available for interpolation in certain areas (i.e., data voids exist). In addition, if true bald-earth lidar points are mistakenly

G.T. Raber, J.R. Jensen, and S.R. Schill are with the Department of Geography, University of South Carolina, Columbia, SC 29208 ([email protected]). K. Schuckman is with EarthData Technologies, LLC, 1912 Eastchester Drive, High Point, NC 27265.

Photogrammetric Engineering & Remote Sensing, Vol. 68, No. 12, December 2002, pp. 1307-1315. 0099-1112/02/6812-1307$3.00/0 © 2002 American Society for Photogrammetry and Remote Sensing

PHOTOGRAMMETRIC ENGINEERING & REMOTE SENSING, December 2002, p. 1307


removed because they are confused with non-ground points, additional interpolation error may be introduced. Both conditions most often cause under-prediction of terrain elevation because high slopes are smoothed and small peaks are removed (Figure 1). This type of error often appears in patterns (not random) because it is the result of smoothing a slope or clipping a peak. When non-ground lidar points are used in the interpolation process, terrain elevation will be overestimated in the DTM (Figure 1).

The goal in the DTM creation process is to have a lidar vegetation point removal algorithm that results in the least overall DTM error. To date, little research has been conducted to account for the different types of vegetation encountered during the lidar vegetation point removal process. This study compares vertical DTM accuracy in DTMs produced using traditional lidar vegetation point removal techniques to a DTM created using a more sophisticated adaptive lidar vegetation point removal technique. The adaptive technique minimizes the overall error by applying different vegetation point removal parameters based on information the lidar data itself reveals about the vegetation type (land-cover class) present.
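A statistical-surface-trend filter of the kind cited above (Tao and Hu, 2001; Kraus and Pfeifer, 1998) can be sketched as follows. This is a minimal illustration, not the algorithm used in this study: the function name, the per-cell-minimum ground estimate, and the threshold value are all assumptions.

```python
import numpy as np

def remove_vegetation_points(points, cell=3.0, tolerance=0.5):
    """Drop lidar returns that sit well above a crude local ground trend.

    points    : (N, 3) array of x, y, z returns.
    cell      : grid spacing (m) used to estimate the local ground level.
    tolerance : returns more than this height (m) above the lowest return
                in their grid cell are treated as vegetation hits.
    """
    pts = np.asarray(points, dtype=float)
    ix = np.floor(pts[:, 0] / cell).astype(int)
    iy = np.floor(pts[:, 1] / cell).astype(int)
    ground = {}                              # per-cell minimum elevation
    for cx, cy, z in zip(ix, iy, pts[:, 2]):
        key = (cx, cy)
        if key not in ground or z < ground[key]:
            ground[key] = z
    keep = np.array([z - ground[(cx, cy)] <= tolerance
                     for cx, cy, z in zip(ix, iy, pts[:, 2])])
    return pts[keep]
```

Note that a cell containing only canopy returns keeps its lowest hit, illustrating both failure modes described in the text: over-prediction where non-ground points survive, and data voids where true ground points are discarded.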


Lidar-Derived Vegetation Information
A multiple-return lidar system records the three-dimensional location of large numbers of points beneath the aircraft. In vegetated or built-up areas many of these points are not on the terrain surface. Previous research has suggested that some of this information can be used to characterize vegetation for environmental and forestry applications. For example, a number of studies used large-footprint Scanning Lidar Imager of Canopies by Echo Recovery (SLICER) data to measure vegetation characteristics. The SLICER instrument only recorded five samples, or footprints, for each scan. Because one footprint is 10 meters across, the swath width is approximately 50 meters (Lefsky, 19__). Means et al. (1999) correlated characteristics of the

Figure 1. A model of the vertical error present in a lidar-derived DTM after the vegetation point removal process has been performed. This model assumes no instrument error (i.e., all recorded lidar observation points are in their precise planimetric location).

SLICER waveform with in situ canopy height measurements. Such large-footprint lidar data are not commercially available.

Only a few studies have used conventional, small-footprint lidar data to extract vegetation characteristics. Jensen et al. (1987) used first- and last-return lidar data to obtain bottomland hardwood wetland tree height information. Naesset (1996) used last-return lidar data to predict mean height within forest stands (r² = 0.91). Means et al. (2000) achieved good predictions (r² > 0.90) of height, volume, and basal area using lidar-derived indices created from aggregate statistical values derived from the lidar points found within each cell of a 10- by 10-m grid. Height was estimated using the 90th percentile of the range (maximum minus minimum return) and the average maximum height of the first return. Volume and basal area were estimated based on the relative height positions of the vertical point distributions at various percentiles of those distributions. A number of studies conducted as part of the European Union HIGH-SCAN project have also explored the link between lidar data and vegetation characteristics. Hyyppä and Hyyppä (1999) tested the ability of a number of remote sensing instruments, including SPOT, TM, and lidar data, to predict forest stand attributes, including height and stem volume. They found that lidar data predicted these relationships better than did the other sensors. Hyyppä and Inkinen (1999) demonstrated a technique that employs lidar data collected with a high measurement density (4 to 5 per m²) to predict attributes for individual tree crowns.

Results from these studies suggest that useful vegetation information can be extracted from multiple-return, small-footprint lidar data. This paper evaluates a unique method that was developed to identify vegetation characteristics based on information derived exclusively from multiple-return lidar data.
The resulting vegetation classification information derived from the lidar data was then input to an adaptive lidar vegetation point removal algorithm to determine if it increased the accuracy of the resultant bald-earth DTM.

Study Area
Hurricane Floyd made landfall in North Carolina on 16 September 1999. The storm remained over North Carolina for several days, causing an estimated $3.5 billion in damage, mostly from flooding. This catastrophic event revealed that current flood plain maps needed to be updated. Consequently, North Carolina agencies decided to re-map the entire state using lidar to obtain an accurate bald-earth DTM. The statewide DTM will be used to predict areas at risk during any future flood event. The lidar research in this paper was initiated as a partnership between the NASA Affiliated Research Center at the University of South Carolina, the North Carolina Geodetic Survey, and EarthData International of North Carolina, LLC, one of the contractors for the North Carolina mapping project.

The 3.25- by 3.5-km study area was in eastern North Carolina, approximately 42 km northwest of Rocky Mount and 70 km northeast of Raleigh (Figure 2). The area was in the coastal plain and consisted primarily of agriculture and forest with very few buildings (fewer than ten farmhouses and related structures). Agriculture consisted of tobacco, soybean, corn, cotton, and alfalfa. Forest cover consisted of deciduous, coniferous, and mixed stands. Many of the coniferous stands were plantation pine at various levels of maturity. Elevation in the study area ranged from 50 to 80 m above sea level. For the most part, the terrain was gently rolling with little dramatic variation. Red Bud Creek and Swift Creek converge near the center of the study area.
Small valleys created by these two creeks add the important elements of terrain variation and dense riparian vegetation to the study area.

Methodology
This study investigated several of the basic theoretical problems associated with the traditional approach to removing



Figure 2. The location of the study site is shown as a square on this map of the counties of North Carolina.

lidar returns in vegetated canopies. These problems include: (1) traditional methods usually require a large amount of costly human supervision and manual editing, (2) they are typically inconsistent in the application of parameters, especially in large projects, and (3) the lidar vegetation point removal algorithms usually treat the entire dataset equally, i.e., they are not adaptive to local, site-specific conditions. It was hoped that these problems would be minimized by the use of an adaptive lidar vegetation point removal algorithm that incorporates vegetation-type information. Three hypotheses were tested in relation to the following objectives:

• Information contained in the vertical distribution of lidar points can be used to classify vegetation cover type;
• Digital terrain model elevation accuracy can be improved through an adaptive application of a lidar vegetation point removal algorithm that incorporates lidar-derived vegetation type information; and
• Digital terrain model surface form (slope) can be improved through an adaptive application of a lidar vegetation point removal algorithm that incorporates lidar-derived vegetation type information.

These hypotheses, stated above in alternative form, were

tested in two separate phases. The first phase involved the extraction of vegetation type information using only lidar data. The second phase compared DTM results from:
(a) the application of a traditional, automatic lidar vegetation point removal algorithm consisting of a single set of algorithm parameters determined by a human operator;
(b) a human-edited version of the results from (a) above;
(c) an optimized version of the automatic lidar vegetation point removal algorithm in (a), consisting of a single set of algorithm parameters determined through an automatic optimization process; and
(d) an adaptive lidar vegetation point removal algorithm in which different algorithm parameters were developed for each land-cover class, with the land-cover classes derived from the lidar data and each set of parameters determined through the automatic optimization process in (c) above.

Lidar Data Collection
The lidar data were collected by EarthData, using the AeroScan lidar instrument, on 12 June 2000 during leaf-on conditions. The instrument was flown at 5,000 ft (1524 m) AGL to obtain a 3-meter post spacing, i.e., the lidar points were on average 3 meters apart. The instrument transmitted 15,000 pulses per second and collected up to five returns from each pulse. The lidar data were processed by EarthData and delivered as x, y, z ASCII files. A total of three lidar datasets were provided by EarthData, including:

Raw Returns: Lidar-derived elevation postings processed to remove known instrument error.

Automatic: Lidar-derived elevation postings after the application of a traditional vegetation removal algorithm.

Automatic with Human Editing: Automatically processed data subjected to human manual editing. This is the product EarthData normally delivers to a customer that has requested a bald-earth DTM.

The raw return lidar data were merged from three different

flight lines into a single dataset that was clipped to the study area boundary. Obvious blunder elevation postings that were vertically well above the main point cloud were discarded automatically based on a threshold value. These postings were most likely the result of some airborne object (e.g., a bird). The lidar data were then converted to ESRI point Shapefiles.

Reference Data
Three different sets of reference data were collected to test the hypotheses: (1) high spatial resolution color-infrared aerial photography, (2) ground-surveyed elevation data, and (3) in situ vegetated land-cover class observations. Stereoscopic 1:20,000-scale color-infrared aerial photography was collected on 15 September 2000 (Figure 3). The photography was digitized and orthorectified to 0.5- by 0.5-m pixels, and was used in the reference, training, and error assessment phases of the project. The land cover in the study area was exceptionally stable over the two-month period between the collection of the lidar data and the aerial photography.

Approximately 650 surveyed ground elevation points were collected by personnel of the North Carolina Geodetic Survey (NCGS) and the National Aeronautics and Space Administration (NASA) using survey-grade GPS units with an accuracy of 1 cm in the horizontal (X, Y) and vertical (Z). The points were obtained at approximately 5- to 6-m intervals along six transects which ran across stream channels to capture information on terrain variation. The vegetation land cover was measured in situ at approximately 500 observations along the six transects. This information was used to assess the accuracy of the land-cover classification map derived strictly from the lidar data, and to calibrate the lidar vegetation point removal algorithms.

Vegetation Classification Using Only Lidar Data
One goal of this research was to derive vegetation type information using only lidar data.
If successful, land-cover information could be used adaptively in a lidar vegetation point removal algorithm to produce more accurate DTMs. The land-cover classification system used to characterize the land cover in the study area is summarized in Table 1.

The methodology to extract vegetation class information from lidar data was initiated by first examining the vertical distribution of the lidar elevation points. Multiple-return lidar data collected in forested environments are often characterized



Figure 3. A black-and-white reproduction of a 1:20,000-scale color-infrared photograph of the study area.

by point distributions throughout the top of the canopy, the mid-canopy and understory vegetation, and finally the terrain surface (Figure 4a). Rendering these points graphically yields a lidar "point cloud" pattern that can be used to appreciate the vertical structure of the ground cover (Figure 4b). Integrating the point cloud over a 15- by 15-m geographic area yielded a histogram that revealed the frequency of laser returns along the Z-axis (Figure 4c). This histogram is similar to a waveform because it changes based on the vertical vegetation structure present within each 15- by 15-m geographic area. Blair and Hofton (1999) referred to a similar histogram as the vertical distribution of intercepted surfaces. Interestingly, Blair and Hofton (1999) used only single-return lidar data. They applied

TABLE 1. LAND-COVER CLASSES AND DESCRIPTIONS USED IN THE CLASSIFICATION PROCESS

Class       Description
Pine        Natural-growth forested areas dominated by pine species; plantation pine
Deciduous   Forested areas dominated by deciduous species; old growth and riparian areas
Mixed       Areas where neither pine nor deciduous species seemed to dominate; often pine forests with young, dense deciduous growth in gaps
Scrub       Short trees or shrubs (



Figure 4. A terrestrial view of a hypothetical land-cover scene (a), along with an actual lidar point cloud (b), and the corresponding vertical histogram of intercepted surfaces (c) that was created from the point cloud.
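A vertical histogram like the one in Figure 4c can be computed in a few lines. This is an illustrative sketch: the bin count and the use of heights relative to the lowest return in the cell are assumptions, not parameters reported in the paper.

```python
import numpy as np

def vertical_histogram(z, n_bins=10):
    """Frequency of lidar returns along the Z axis for one 15 x 15 m
    area, computed on heights relative to the lowest return."""
    rel = np.asarray(z, dtype=float)
    rel = rel - rel.min()
    top = max(float(rel.max()), 1e-6)   # guard against an all-flat cell
    counts, edges = np.histogram(rel, bins=n_bins, range=(0.0, top))
    return counts, edges
```

A cell of open ground concentrates all counts in the lowest bin, while a forested cell spreads counts between the ground and canopy-top bins, which is exactly the waveform-like behavior the text describes.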

the center of the pixel represented by i and j; Z represents the set of the relative height values for the lidar points within a 15- by 15-m square surrounding the point represented by i and j; and P is the mathematical notation for a percentile.

This technique yielded a ten-band statistical image. Of the possible ten bands, Plate 1a displays three in a color composite image. The red image plane contains the relative Z-value of the point below which all of the other points in the kernel for that grid cell fell (i.e., the maximum point). The green plane contains the relative Z-value of the point below which 40 percent of the points fell. Finally, the blue plane contains the relative elevation value of the point below which 10 percent of the points fell.

Shades of gray, including white and black, are present in areas where all the values were approximately equal to one another (i.e., all the points were near maximum points). Dense canopy prevented much lidar penetration, so nearly all of the points resided near the top of the distribution. This means that the very bright-white areas represent the tallest canopy with the densest vegetation. An examination of the color-infrared imagery revealed that these areas usually contained bottomland hardwood found in the riparian stream channels. The darker gray shades indicate shorter regions of very dense canopy. Finally, the very dark areas contain no trees. Like gray shades, blue hues represent areas where most of the points were near the maximum Z-value. These areas consist of shorter stands with even denser canopy. Yellowish-green hues represent areas with a relatively high or balanced central region of the vertical histogram distribution. These areas consist of partially open canopy. Pure green areas contain shorter pine, particularly plantation pine, while red areas are indicative of open canopies typical of tall pine, or of transitional areas between deciduous stands and open fields.
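The per-cell percentile computation behind the ten-band statistical image can be sketched as follows. The text names the maximum, 40th-percentile, and 10th-percentile bands; the assumption here that the ten bands are the 10th through 100th percentiles in steps of 10 is illustrative, as are the function and variable names.

```python
import numpy as np

def percentile_image(points, cell=15.0,
                     percentiles=(10, 20, 30, 40, 50, 60, 70, 80, 90, 100)):
    """Build a multi-band image: each band holds, per 15 x 15 m cell,
    one percentile of the relative heights of the lidar points there."""
    pts = np.asarray(points, dtype=float)
    # Assign each point to a grid cell (assumes non-negative coordinates).
    ix = np.floor(pts[:, 0] / cell).astype(int)
    iy = np.floor(pts[:, 1] / cell).astype(int)
    nx, ny = ix.max() + 1, iy.max() + 1
    image = np.full((len(percentiles), nx, ny), np.nan)
    for cx in range(nx):
        for cy in range(ny):
            z = pts[(ix == cx) & (iy == cy), 2]
            if z.size == 0:
                continue                    # empty cell stays NaN
            rel = z - z.min()               # heights relative to lowest return
            image[:, cx, cy] = np.percentile(rel, percentiles)
    return image
```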
Figure 5 depicts the curves associated with four land-cover types found at four specific pixel locations in the study area. In effect, these curves represent a lidar-based signature for these land-cover classes.

This ten-band image was classified using a supervised minimum-distance classification algorithm (Jensen, 1996). The classification was performed by selecting approximately 500 pixels for each class as training sites using the color-infrared aerial photography. These training sites were verified through in situ land-cover observation as previously discussed. The resulting classification map is shown in Plate 1b.
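A standard minimum-distance-to-means classifier of the kind cited (Jensen, 1996) can be sketched as below; the training-data format and function name are assumptions for illustration.

```python
import numpy as np

def minimum_distance_classify(image, training):
    """Assign each pixel the class whose mean training signature is
    closest in Euclidean distance.

    image    : (bands, rows, cols) statistical image.
    training : dict mapping class name -> (n_samples, bands) array of
               training-pixel signatures.
    """
    bands, rows, cols = image.shape
    names = sorted(training)
    # One mean signature per class, stacked as (classes, bands).
    means = np.stack([np.asarray(training[n], dtype=float).mean(axis=0)
                      for n in names])
    pixels = image.reshape(bands, -1).T            # (rows*cols, bands)
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = dists.argmin(axis=1).reshape(rows, cols)
    return labels, names
```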

Assessment of Elevation Error by Vegetation Class
The mean absolute vertical error was calculated for each of the lidar-derived vegetation classes using a DTM generated from the automatic dataset. An ANOVA was used to test for significant differences in this statistic. This test was conducted to validate the theory behind the creation of an adaptive vegetation removal algorithm. It was thought that if lidar-derived elevation error varied with vegetation type, it might be possible to minimize the error in a particular class. Reducing the error in any class would cause the overall elevation error to be reduced.

Mean absolute vertical error by land cover is shown in Table 2. The one-way ANOVA test revealed that there was a significant difference in the accuracy of the lidar-derived elevation information by lidar-derived land-cover class. Note that the errors in the open low- and high-grass areas are at or near the optimum error level and close to zero. The other vegetated areas could benefit from further algorithm refinement. For example, while the low-grass error may be near its optimum level, a more aggressive set of parameters for the lidar vegetation point removal algorithm could help minimize the elevation error in forested areas.

Algorithm Calibration (Optimization)
The process used to create the automatic lidar dataset is not without human intervention. It requires a human operator to determine the input parameters for the lidar vegetation point removal algorithm. The operator's decisions on what parameters to use are based on the theoretical error model discussed previously (Figure 1), and his or her own past experience. Using overly aggressive parameters can result in the semi-systematic interpolation error that is the result of shaving off small peaks and smoothing steep slopes.
Parameters that are not as aggressive can result in over-prediction because non-ground points are left in the DTM. Often the operator decides to err in the latter direction, because during the manual editing process it is easier to see and remove unwanted points than to detect the absence of valid points and reinstate them.

Creation of a truly comparable adaptive surface, in which the operator arbitrarily chooses algorithm parameters based on past experience, would be impossible, as well as undesirable. To overcome this problem, and create two comparable surfaces, it was decided to calibrate both the automatic and adaptive lidar



Plate 1. (a) A color composite created by displaying three of the ten bands of the lidar-derived statistical image described in the text. (b) A vegetation map produced by classifying the lidar-derived statistical image. Legend classes: Pine, Deciduous, Mixed, Scrub, High Grass, Low Grass.

vegetation point removal techniques by using the reference elevation points, thus effectively optimizing these methods. This allowed both methods to be compared based on the best surface each could create. The optimization methodology systematically generated algorithm parameters, created a surface, and then tested the error for that surface (Figure 6). The process simultaneously derived algorithm parameters (1) for each land cover (based on the lidar-derived land-cover map), and (2) for the entire set of elevation points. When this process was completed, two optimized surfaces existed. The adaptive optimized automatic surface was created by considering land cover in the optimization process. Conversely, the optimized automatic surface considered all the elevation points together (i.e., it ignored vegetation class).
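The calibration loop can be sketched as a grid search. This is a simplified stand-in, with the function arguments as assumptions: the study built TIN surfaces and scored them with Arc/Info's TINSPOT procedure, whereas here the removal and interpolation steps are passed in as callables.

```python
import numpy as np

def calibrate_parameters(raw_points, reference, param_grid,
                         remove_fn, interpolate_fn):
    """Grid-search calibration sketch: for each candidate parameter set,
    run vegetation point removal, build a surface, and score it against
    surveyed reference elevations; keep the parameters with the lowest
    mean absolute error.

    raw_points     : (N, 3) raw lidar returns.
    reference      : iterable of (x, y, z) surveyed ground points.
    param_grid     : list of parameter dicts to try.
    remove_fn      : (points, **params) -> filtered ground points.
    interpolate_fn : (ground points) -> callable surface, (x, y) -> z.
    """
    best_params, best_mae = None, np.inf
    for params in param_grid:
        ground = remove_fn(raw_points, **params)
        surface = interpolate_fn(ground)
        errors = [abs(surface(x, y) - z) for x, y, z in reference]
        mae = float(np.mean(errors))
        if mae < best_mae:
            best_params, best_mae = params, mae
    return best_params, best_mae
```

Running this once over all reference points yields the "optimized automatic" parameters; running it separately on the reference points within each lidar-derived land-cover class yields the "adaptive" parameter sets.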

Figure 5. Lidar-derived signatures (relative height by percentile band) for selected pixels of known land cover contained in the statistical image shown in Plate 1a.


TABLE 2. THE MEAN ABSOLUTE ERRORS ASSOCIATED WITH EACH LIDAR-DERIVED VEGETATION CLASS IN THE DTM CREATED USING THE AUTOMATIC DATASET

Land Cover    Number of Reference Points    Mean Absolute Vertical Error (m)
Low Grass     100                           0.10
High Grass     73                           0.27
Scrub          63                           1.51
Pine          134                           0.46
Deciduous     144                           2.43
Mixed         134                           2.42
Total         648                           1.33
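The one-way ANOVA applied to the per-class absolute errors can be reduced to the F statistic below. This is a minimal from-scratch sketch of the test, not the statistical package the authors used; in practice the groups would be the per-point absolute errors for each land-cover class in Table 2.

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square, for absolute errors grouped by class."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_vals = np.concatenate(groups)
    grand = all_vals.mean()
    k, n = len(groups), all_vals.size
    ss_between = sum(g.size * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) indicates that mean error differs significantly among land-cover classes, which is the premise behind calibrating removal parameters per class.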


[Figure 6 diagram: systematic generation of algorithm parameters; application of the algorithm to the raw lidar dataset; creation of a TIN from the resulting points; error calculation (mean absolute error and RMSE) using the Arc/Info TINSPOT procedure, performed per vegetation type for the adaptive branch and for the entire set of points for the automatic branch; recording of the parameters and associated errors for each vegetation type.]

Figure 6. A flow diagram documenting the calibration process used to create optimized DTM surfaces for comparison. The process was implemented using a Visual Basic program that took advantage of Arc/Info's Open Development Environment (ODE) and a number of Arc Macro Language (AML) scripts.

Surface Form (Slope) Accuracy of Various Lidar-Derived Digital Elevation Models
Surface form (slope and aspect) accuracy is often considered just as important as the vertical accuracy of DTMs. For example, when modeling the overland flow of liquids, it is vitally important to have a correct model of the shape or form of the surface. Because all the reference data were collected in transects, an overall assessment of surface form was not possible. However, it was possible to use the elevation information obtained along the transects. The slope was calculated between each pair of in situ elevation points from the rise and run information between the points. This process was performed for each of the four lidar-derived test surfaces and compared with the slope derived from the in situ reference data. The absolute slope error was calculated by taking the absolute difference between the slope of the in situ reference information and the slope of the surface derived using the lidar data. The means of the absolute slope errors for each surface were then calculated and tested for significant differences from the adaptive surface using three t-tests.

Results

TABLE 3. ACCURACY ASSESSMENT FOR THE LIDAR-DERIVED CLASSIFICATION. A LARGE AMOUNT OF CONFUSION EXISTS BETWEEN THE HIGH- AND LOW-GRASS CATEGORIES

Lidar-Derived Vegetation Classification Map
The lidar-based vegetation classification map was smoothed using a 5- by 5-pixel majority filter to reduce noise. This was performed because the field surveys in which the reference data were collected did not account for isolated or small changes in land cover. The overall accuracy of the map was 72 percent, with a Kappa Coefficient of Agreement of 0.65 (Table 3). One factor that may contribute to the confusion between classes is terrain variability. For example, if a steep elevation change occurs entirely within the kernel size (15 by 15 m), the signature from the vertical distribution of points may look more like a heavily vegetated area when in reality it is an open field or is void of vegetation entirely. A striking problem with this classification was the confusion between high grass and low grass. Basically, it was not possible to discriminate between low- and high-grass cover using the lidar data. However, when these classes were combined into a single Open class, the overall accuracy increased to 84 percent and the Kappa Coefficient of

                          Reference Data
Lidar Class   Low   High  Scrub  Pine  Mixed  Dec
Low            51     -     -      -     -     -
High            7    70     -      -     -     1
Scrub           -     -    14      -     -     -
Pine            2     2     5     55     3     6
Mixed           -     9     2      -    96    20
Dec             -     -     -      -    36    94
Total          60   128    21     64   135   121
Producers (%)  85    55    67     86    71    78
Overall accuracy = 72 percent; K-Hat = 0.65. The overall accuracy was 84 percent and the K-Hat 0.780 when the high- and low-grass classes were combined. (Cells lost in reproduction are shown as "-".)

Agreement increased to 0.780. Based on these results, it was possible to reject the null hypothesis and state that information contained in the vertical distribution of lidar points can be used to classify vegetation cover type.

Vertical Accuracy of DTMs Created Using Lidar Data
The differences in mean absolute vertical error and root-mean-squared error (RMSE) for the four DTMs created using the four alternative methods are displayed in Figure 7. Each of the t-tests was significant at the 0.01 level. Based on the significance of the t-tests, it was possible to reject the null hypothesis and state that DTM vertical accuracy can be improved through an adaptive application of a lidar vegetation point removal algorithm that incorporates lidar-derived vegetation type information. The adaptive technique creates a superior DTM when compared to DTMs created in which vegetation is not accounted for in the application of the vegetation removal algorithm. As an additional note of clarification, the automatic dataset does not represent a commercial lidar product that would be delivered to a DTM user. Although the

Figure 7. A comparison of the mean absolute vertical error and RMSE found in each of the lidar-derived DTMs (Automatic; Automatic with Human Edits; Optimized Automatic; Adaptive Optimized Automatic). The mean absolute error of the DTM produced using the adaptive optimized method was significantly smaller than that of all of the other techniques at the 0.01 level. These results indicate that an adaptive approach to vegetation point removal is of value.



automatic with human edits surface was processed identically to a consumer product, the lidar data for this project were acquired leaf-on. It has been shown that lidar-derived DTMs are less accurate when collected during leaf-on vs. leaf-off conditions (Raber et al., 2002). This effect is likely a contributor to the reason the DTM errors reported in Figure 7 are not consistent with commercial lidar accuracy claims.

Surface Form (Slope) Accuracy of Lidar-Derived DTMs
The slope error descriptive statistics associated with each method are summarized in Table 4. The DTM derived using the optimized adaptive method was significantly better than each of the DTMs created using alternative methods at the 0.05 level, except for the optimized automatic method, which was not significantly different. Based on these results, it was possible to reject the null hypothesis and state that DTM surface form (slope) can be improved through an adaptive application of a lidar vegetation point removal algorithm that incorporates lidar-derived vegetation type information. However, the optimized automatic methodology and the optimized adaptive methodology produced surface form results that were for all practical purposes nearly identical.

Summary
Potential applications for vegetation classification based solely on lidar data are exciting. Refinement of the methodology discussed in this paper could result in the fully automated creation of a virtual-reality visualization of an area that includes elements of terrain and land cover based entirely on lidar data, without the aid of aerial photography or other remote sensor data.
Land-cover maps could be created at the same time lidar missions are flown, thus creating a new tool for land planners and foresters.

A DTM produced using an adaptive lidar vegetation point removal algorithm was more accurate than one produced using techniques that did not consider vegetation cover (Figure 7). The differences in DTM elevation error between the optimized adaptive and the optimized automatic algorithms (approximately cm), as well as the differences between the optimized adaptive and the automatic-with-manual-edits algorithms (approximately 13 cm), were not that large. However, these differences were statistically significant.

The method discussed in this paper classifies the lidar point distribution into land-cover classes and then uses elevation reference data to calibrate vegetation removal algorithm parameters for each land-cover class. Calibration requires minimal ground survey. However, there is potential for the development of a predictive relationship between the vertical distribution of lidar points and the algorithm parameters needed for a given area. Another topic for future research is the testing of this methodology using other lidar sensors, and under various physiographic and temporal conditions. It is important to note that the adaptive method requires no manual editing and little extra computation. Also, when humans edit lidar data to create and then refine a DTM, they (1) calibrate the algorithm parameters to their own past experiences, and (2) manually edit out points based on their own heuristic rules of thumb.
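The per-class calibration described above can be illustrated with a simple block-minimum ground filter whose window size and height tolerance vary by land-cover class. The filter, the class names, and the parameter values below are all hypothetical stand-ins for the paper's actual (unspecified) removal algorithm:

```python
from collections import defaultdict

# Hypothetical (window size m, height tolerance m) per land-cover class.
# In the paper these parameters are calibrated from elevation reference data.
PARAMS = {
    "grass":     (5.0, 0.2),
    "deciduous": (15.0, 0.5),
    "conifer":   (20.0, 0.8),
}

def classify_ground(points, land_cover, params=PARAMS):
    """points: iterable of (x, y, z); land_cover(x, y) -> class name.

    A point is kept as ground if it lies within the class-specific height
    tolerance of the lowest return in its class-specific grid cell.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        cls = land_cover(x, y)
        window, _ = params[cls]
        cells[(cls, int(x // window), int(y // window))].append((x, y, z))

    ground = []
    for (cls, _, _), cell_pts in cells.items():
        zmin = min(p[2] for p in cell_pts)          # lowest return in cell
        _, tolerance = params[cls]
        ground.extend(p for p in cell_pts if p[2] - zmin <= tolerance)
    return ground
```

The point of the sketch is the structure, not the filter: any existing vegetation point removal algorithm could be substituted, with its parameters looked up per class rather than held constant across the scene.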

    TABLE 4. SLOPE ERROR

    DTM                           Maximum Absolute    Mean Absolute    Significance (Different
                                  Slope Error         Slope Error      from Adaptive)
    Automatic                     53°                 6.7°             0.0001
    Automatic with Manual Edits   12°                 1.6°             0.022
    Optimized Automatic           12°                 1.4°             0.994*
    Optimized Adaptive            11°                 1.4°             NA
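The slope errors summarized above compare per-cell slopes of each DTM against a reference surface. A minimal sketch of that comparison, using central differences on a gridded DTM (a simpler scheme than the Horn-style algorithms in commercial packages; the function names are illustrative, not the paper's):

```python
import math

def slope_deg(grid, i, j, cell=1.0):
    """Slope in degrees at interior cell (i, j) of a gridded DTM,
    from central differences of elevation over the cell spacing."""
    dzdx = (grid[i][j + 1] - grid[i][j - 1]) / (2 * cell)
    dzdy = (grid[i + 1][j] - grid[i - 1][j]) / (2 * cell)
    return math.degrees(math.atan(math.hypot(dzdx, dzdy)))

def slope_error_stats(dtm, ref, cell=1.0):
    """Mean and maximum absolute slope error over interior cells,
    the two statistics reported in Table 4."""
    errs = [abs(slope_deg(dtm, i, j, cell) - slope_deg(ref, i, j, cell))
            for i in range(1, len(dtm) - 1)
            for j in range(1, len(dtm[0]) - 1)]
    return sum(errs) / len(errs), max(errs)
```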

    Conversely, when a DTM is produced using the adaptive algorithm applied to lidar data, it is possible to document explicitly the rules that were used to create the DTM.

    Acknowledgments
    The research presented in this paper was performed at the NASA Affiliated Research Center at the University of South Carolina. This center is supported through a cooperative agreement with the NASA Commercial Remote Sensing Program at the John C. Stennis Space Center. The authors wish to thank Michael Hodgson, Jason Tullis, Laura Schmidt, Yingming Zhou, Chris Robinson, Karl Heidemann, and Neil Eggleston for their contributions to various aspects of the research.

    References
    Blair, J.B., and M.A. Hofton, 1999. Modeling laser altimeter return waveforms over complex vegetation using high-resolution elevation data, Geophysical Research Letters, 26(16):2509-2512.
    Cowen, D.J., J.R. Jensen, C. Hendrix, M.E. Hodgson, and S.R. Schill, 2000. A GIS-assisted rail construction econometric model that incorporates LIDAR data, Photogrammetric Engineering & Remote Sensing, 66(11):1323-1328.
    Hodgson, M.E., J.R. Jensen, L. Schmidt, S.R. Schill, and B. Davis, 2003. An evaluation of LIDAR- and IFSAR-derived digital elevation models in leaf-on conditions with USGS Level 1 and Level 2 DEMs, Remote Sensing of Environment, in press.
    Huising, E.J., and L.M.G. Pereira, 1998. Errors and accuracy of laser data acquired by various laser scanning systems for topographic applications, ISPRS Journal of Photogrammetry and Remote Sensing, 53:245-261.
    Hyyppä, H., and J. Hyyppä, 1999. Comparing the accuracy of laser scanner with other optical remote sensing data sources for stand attribute retrieval, The Photogrammetric Journal of Finland, 16(2):5-15.
    Hyyppä, J., and M. Inkinen, 1999. Detection and estimating attributes for single trees using laser scanner, The Photogrammetric Journal of Finland, 16(2):27-42.
    Jensen, J.R., 1996. Introductory Digital Image Processing: A Remote Sensing Perspective, Prentice-Hall, Inc., Upper Saddle River, New Jersey, 318 p.
    Jensen, J.R., 2000. Remote Sensing of the Environment: An Earth Resource Perspective, Prentice-Hall, Inc., Upper Saddle River, New Jersey, 544 p.
    Jensen, J.R., M.E. Hodgson, H.E. Mackey, and W. Krabill, 1987. Correlation between aircraft and LIDAR remotely sensed data on a forested wetland, Geocarto International, 4:39-54.
    Kraus, K., and N. Pfeifer, 1998. Determination of terrain models in wooded areas with airborne laser scanner data, ISPRS Journal of Photogrammetry and Remote Sensing, 53:193-203.
    Lefsky, M.A., W.B. Cohen, S.A. Acker, G.G. Parker, T.A. Spies, and D. Harding, 1999. LIDAR remote sensing of the canopy structure and biophysical properties of Douglas-fir western hemlock forests, Remote Sensing of Environment, 70:339-361.
    Maune, D.F. (editor), 2001. Digital Elevation Model Technologies and Applications: The DEM Users Manual, The American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland, 539 p.
    Means, J.E., S.A. Acker, D.J. Harding, J.B. Blair, M.A. Lefsky, W.B. Cohen, M.E. Harmon, and W.A. McKee, 1999. Use of large-footprint LIDAR to estimate forest stand characteristics in the western Cascades of Oregon, Remote Sensing of Environment, 67:298-308.
    Means, J.E., S.A. Acker, B.J. Fitt, M. Renslow, L. Emerson, and C.J. Hendrix, 2000. Predicting forest stand characteristics with airborne scanning LIDAR, Photogrammetric Engineering & Remote Sensing, 66(11):1367-1371.
    Naesset, E., 1996. Determination of mean tree height of forest stands using airborne laser scanner data, ISPRS Journal of Photogrammetry and Remote Sensing, 60(3):327-334.
    Plaut, J.J., B. Rivard, and M.A. D'Iorio, 1999. Radar: Sensors and case studies, Remote Sensing for the Earth Sciences: Manual of Remote Sensing, Third Edition (A.N. Rencz, editor), John Wiley and Sons, New York, N.Y., pp. 613-642.


    Raber, G.T., M.E. Hodgson, J.R. Jensen, J.A. Tullis, G. Thompson, B. Davis, and K. Schuckman, 2002. Comparison of LIDAR data collected leaf-on vs. leaf-off for the creation of digital elevation models, Proceedings of the ASPRS 2002 Annual Convention, 19-26 April, Washington, D.C. (American Society for Photogrammetry and Remote Sensing, Bethesda, Maryland), unpaginated CD-ROM.

    (Received 03 October 2001; accepted 12 March 2002; revised 19 April 2002)
