
APPLICATION OF eCOGNITION SOFTWARE FOR PINE PLANTATION ASSESSMENT USING LIDAR AND DIGITAL CAMERA DATA IN NSW

Peter Worsley1, Catherine Carney1, Christine Stone1, Russell Turner1, Christian Hoffmann2

1Forest Science Centre, Industry & Investment NSW, P.O. Box 100, Beecroft, NSW 2119, Australia

    02 [email protected]

2Trimble Germany GmbH, Am Prime Parc 11, 65479 Raunheim, Germany

    [email protected]

    Abstract

Plantation managers require spatially accurate and current information on the structure, stocking and health of their plantation estate. Harvesting, thinning and re-planting operations occur across plantations throughout the year, and incremental stock losses due to drought, pests and diseases are commonplace. Consequently, these forests are constantly changing, and the systematic assessment of their planted resource is essential for predicting current and future stand volumes and implementing silvicultural regimes aimed at maximizing returns. Keeping plantation database records up to date is becoming more challenging as the commercial forestry sector consolidates its workforce. This paper presents the results to date of the application of eCognition Developer software for the multi-scale image segmentation and classification of P. radiata plantations using either airborne light detection and ranging (lidar) data or high spatial resolution, multispectral imagery acquired with a Leica ADS40 linear scanner sensor. We successfully mapped compartment net stocked areas (NSA) and classified treatment zones (i.e. thinned, unthinned, bare ground and exclusion zones). For both the lidar canopy height model (CHM) and the ADS40 imagery, commission and omission errors in classifying unthinned and thinned stands for net stocked areas were minimal (< 5.0%). The greatest number of misclassified points in the lidar imagery was associated with the exclusion zones, areas of vegetation, mostly native species, outside the compartment net stocked areas (producer's accuracy = 84.5% and user's accuracy = 85.5%). A better result in classifying the non P. radiata vegetation was achieved with the ADS40 imagery (producer's accuracy = 98.8% and user's accuracy = 95.5%). However, there was more confusion in classifying bare ground compared to the lidar results (producer's accuracy = 92.8% and user's accuracy = 82.5%). These rulesets will be incorporated into an overall system enabling the operational adoption of high resolution airborne data for accurate and cost-effective plantation inventory.


    Introduction

Forests New South Wales (Forests NSW) manages the largest softwood plantation estate in Australia. In 2009, the net market value of timber of its 193,000 ha estate was approximately A$600 million. The agency annually harvests millions of tonnes of products (pulp and sawlogs) within a long-term, sustainable management framework. These highly productive plantations are continuously harvested, thinned and re-planted, and the dynamic nature of the estate presents a significant challenge for forest planners.

Systematic assessment of their planted resource is essential for predicting current and future stand volumes and implementing silvicultural regimes aimed at maximizing returns. Keeping database records up to date with these plantation activities is becoming more challenging as the commercial forestry sector consolidates its workforce. This has traditionally been driven by plot-based inventory and dGPS measurements and supported by Aerial Photogrammetric Interpretation (API). However, manual delineation and interpretation of air photos is a time consuming and hence expensive process. Mapping accuracies depend on the skills and experience of the image interpreters. This declining skill base is a problem facing forestry agencies internationally (e.g. Leckie et al. 2003; Wulder et al. 2008). As a consequence, cost effective, semi-automated stand classification mapping techniques to supplement traditional methods need to be developed (e.g. Leckie et al. 2003, Hay et al. 2005, Wulder et al. 2008, Haywood and Stone 2010).

A Pinus radiata plantation has a hierarchy of spatial features defined by the presence of P. radiata planted in management units (compartments), the age of the P. radiata stand and its thinning history. Within Forests NSW, compartments are planted to approximately 1000 stems per hectare (ha), thinned between the ages of 13 and 17 years down to 450-500 stems per ha, and then thinned again after 23 years down to 200-250 stems per ha. Most compartments are harvested before 35 years of age. We are using eCognition software (Trimble Navigation Ltd.) to develop a semi-automated, object-based approach for the multi-scale image segmentation and classification of these P. radiata plantations using airborne light detection and ranging (lidar) data or high spatial resolution, multispectral imagery acquired with a Leica ADS40 linear scanner. Our first aim was to compare segmentation schemes that extract net stocked areas at the compartment scale, derived solely from either the lidar data or the digital multispectral imagery. Net stocked area (NSA) is the area of land that is effectively stocked with the species of interest. Although the synergistic use of lidar data and high spatial resolution multispectral imagery has been demonstrated for object-based forest classification (e.g. Ke et al. 2010), access to coincident imagery may not always be possible or affordable, and so we sought to identify their strengths and weaknesses before integrating the two types of imagery.

This study forms part of a larger project with the overall objective of promoting the operational adoption of high resolution airborne data within existing P. radiata resource inventory systems. Numerous recent studies have successfully demonstrated the application of object-based image analysis (OBIA) to classify forested landscapes (e.g. Tiede et al. 2006, Pascual et al. 2008, Kim et al. 2009, Johansen et al. 2010, Ke et al. 2010). This paper presents progress so far, in particular on the application of eCognition software to classify a selection of feature class items that are managed in the Forests NSW forest resource geo-database. Smith and Morton (2010) claim that to exploit the benefits of geospatial object-based image analysis, the object structures must align as closely as possible with what is already in use and with what suits the users, in our case Forests NSW. Failure to do so will prevent adoption of the proposed software tools.

    Methodology

    Existing plantation management databases

Forests NSW has two integrated databases that are integral to their plantation planning and management activities: the non-spatial GeoMaster database (Atlas Technology, Rotorua, New Zealand), an event management system that captures the temporal dynamics of plantation activities, and a geo-database (supported by ESRI software). The geo-database holds a series of feature classes that are described by an array of items such as compartment number, age class, species etc. The base unit within the softwoods feature class is the resource unit, a distinct sub-compartment object with a unique management history. The geo-database also contains all the other administrative boundaries and operational spatial data, for example, native vegetation retention areas (exclusion zones) and net stocked areas within compartments. Forests NSW has developed a front-end interface that enables operational staff to access both databases.

At present, the databases are updated through inventory assessment in plots located within the areas of interest using Atlas Cruiser software. Summaries of these data can be assigned as an event and entered into GeoMaster. The boundary line-work within the geo-database is sourced either from hand-held GPS data or from manual delineation using digital camera imagery or high resolution satellite data (e.g. IKONOS).

    Study area

The 5,000 ha study site is located within Green Hills State Forest (SF) (35.5°S, 148.0°E), near Batlow within the Hume Region of Forests NSW. Green Hills SF is a large commercial P. radiata plantation planted on mostly undulating topography, with a mean elevation of 750 m and an annual rainfall of approximately 1200 mm.

    Acquisition of imagery

Small-footprint discrete return lidar data was acquired using a Lite Mapper LMS-Q5600 ALS system (Riegl, Australia) mounted in a fixed-wing aircraft and supplied through Digital Mapping Australia Pty. Ltd. (Perth, Australia). The lidar mission was flown in July 2008 to coincide with winter. The winter season was selected because it is the leaf-off period for deciduous blackberries, which are the key understorey weed species in the plantation. The near infra-red (NIR) lidar system was configured for a pulse rate of 88,000 pulses per second, a mean footprint size of 50 cm, a maximum scan angle of 15° (off vertical), a mean swath width of 500 m and a mean point density of 2 pulses m⁻². The first and last returns for each laser pulse were recorded.

Laser scanning points were processed, geo-referenced and classified by the service provider into ground and non-ground categories using TerraScan software (TerraSolid, Finland). Processed lidar point data was supplied on an external drive in LAS format, with each file representing a 1 km x 1 km tile (GDA94, MGA 55). The company also provided processed tiles of the Digital Terrain Model (DTM) and Vegetation Elevation Model (VEM) generated at 1.0 m pixel resolution. Additional 0.5 m pixel resolution rasters were later generated directly from the original LAS point data using ENVI 4.7 (ITT Visual Information Solutions). The tiles were mosaicked and a Canopy Height Model (CHM) was derived by subtracting the DTM from the VEM using ENVI.
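The CHM derivation itself is a simple raster difference (canopy height = vegetation surface elevation minus terrain elevation). The sketch below illustrates the operation in Python with rasterio and numpy rather than the ENVI workflow used in the study; the file names are hypothetical and the DTM and VEM are assumed to be already mosaicked onto a common 1 m grid.

```python
# Illustrative sketch (not the authors' ENVI workflow): derive a canopy height
# model (CHM) by subtracting the DTM from the VEM on a common 1 m grid.
# File names are hypothetical.
import numpy as np
import rasterio

with rasterio.open("dtm_1m.tif") as dtm_src, rasterio.open("vem_1m.tif") as vem_src:
    dtm = dtm_src.read(1).astype("float32")
    vem = vem_src.read(1).astype("float32")
    profile = dtm_src.profile

chm = vem - dtm                  # canopy height = vegetation surface - terrain
chm[chm < 0] = 0                 # clamp small negative differences to ground level

profile.update(dtype="float32", count=1)
with rasterio.open("chm_1m.tif", "w", **profile) as dst:
    dst.write(chm, 1)
```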

The digital multispectral aerial photography (DMAP) was supplied by Fugro Spatial Solutions Pty. Ltd., acquired using an airborne ADS40 linear scanner sensor in late September 2009. The company provided 16-bit, 4-band (NIR, R, G, B) orthophoto mosaics acquired at 0.3 m ground sample distance and with a forward overlap and side-lap of 60%. The processed imagery was then re-sampled from 30 cm to 50 cm pixel size. The 16-bit data was rescaled to 8-bit using ERDAS IMAGINE 2010 (ERDAS Inc., Hexagon Group, Sweden), as a number of eCognition Developer algorithms are not optimized for 16-bit data.
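The paper does not state which stretch was applied during the 16-bit to 8-bit conversion in ERDAS IMAGINE, so the sketch below shows one common option, a linear percentile stretch in numpy, purely as an illustration; the clip percentages and the synthetic band are assumptions.

```python
# Minimal sketch of a 16-bit to 8-bit rescale using a linear percentile stretch.
# The stretch used in the study is not reported; the 2% clip here is an assumption.
import numpy as np

def rescale_to_8bit(band16, low_pct=2.0, high_pct=98.0):
    """Linearly map a 16-bit band to 0-255, clipping extreme values."""
    lo, hi = np.percentile(band16, [low_pct, high_pct])
    scaled = (band16.astype("float32") - lo) / max(hi - lo, 1.0)
    return np.clip(scaled * 255.0, 0, 255).astype("uint8")

# Example on a synthetic 16-bit NIR band
nir16 = np.random.randint(0, 65535, size=(512, 512), dtype=np.uint16)
nir8 = rescale_to_8bit(nir16)
```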

    Application of eCognition

    We developed rule-sets using eCognition Network Language (CNL) within theeCognition Developer (version 8.0.1) environment. The eCognition suite waschosen because of its ability to treat imagery both as a raster and vectordataset, significantly increasing the analysts ability to produce clean end useroutputs. Two separate rulesets were developed for the lidar and ADS40

    imagery and their classification accuracies compared. We also utilised theautomated tiling and stitching procedures available in eCognition Server. Thissoftware module is designed for the batch execution of thousands of image tilesprocessed using eCognitionDeveloper.
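The tiling and stitching is handled internally by eCognition Server; the sketch below merely illustrates the concept of splitting a large raster into 2000 x 2000 pixel tiles for batch processing and reassembling the per-tile results, using numpy arrays as stand-ins. Array sizes and the per-tile step are illustrative only.

```python
# Conceptual sketch of the tile-and-stitch idea (handled by eCognition Server in
# the study): split a large raster into 2000 x 2000 pixel tiles, process each
# tile, then reassemble the results.
import numpy as np

TILE = 2000

def make_tiles(raster):
    """Yield ((row, col), tile_array) for each 2000 x 2000 block (edges may be smaller)."""
    rows, cols = raster.shape
    for r in range(0, rows, TILE):
        for c in range(0, cols, TILE):
            yield (r, c), raster[r:r + TILE, c:c + TILE]

def stitch_tiles(tiles, shape):
    """Reassemble processed tiles into a single array of the original shape."""
    out = np.zeros(shape, dtype=np.float32)
    for (r, c), tile in tiles:
        out[r:r + tile.shape[0], c:c + tile.shape[1]] = tile
    return out

chm = np.zeros((5500, 7200), dtype=np.float32)                    # placeholder mosaic
processed = [((r, c), t * 1.0) for (r, c), t in make_tiles(chm)]  # stand-in per-tile step
mosaic = stitch_tiles(processed, chm.shape)
```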

The data workflow developed to extract net stocked areas from the lidar-derived CHM is summarized in Figure 1. The CHM image was initially smoothed to remove gaps within individual crowns using a median and Gaussian filter. Thematic layers containing compartment level and resource unit level data were extracted from the Forests NSW geo-database and loaded into the Developer workspace. Over 380 rules were defined, resulting in repeated segmentations, image object fusions and looping processes (Figure 1). Ground and exclusion areas were identified before locating thinned and unthinned stands. Exclusion zones consist of a range of vegetation types including pine wildlings, eucalypts, blackberries and grasses. Thematic information was used in the process of separating the P. radiata vegetation from non P. radiata vegetation. Because the thematic line work was often inconsistent with many of the true boundaries (visible in the lidar imagery), a series of looping algorithms was used throughout the ruleset to incrementally remove sliver errors and improve the existing boundary line work. A threshold area of 0.25 ha was used for locating unthinned areas that fell completely within a thinned compartment, i.e. areas that may have been missed in the silvicultural treatment. Another section of the routine was written to simplify convoluted object line work so that it more closely resembles manual delineations. A final routine improved road connectivity in places where tree crowns merged across the roads, which in turn improved the thematic line-work. After stitching the tiles back together in eCognition Server, the results were exported as a vector shape file to ArcGIS for manual processing of any remaining errors that could not be fixed in eCognition Server.
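Two of the steps above translate naturally into simple raster operations. The sketch below is a rough analogue in Python (scipy.ndimage) rather than eCognition: median plus Gaussian smoothing of the CHM, followed by removal of candidate patches smaller than the 0.25 ha threshold (2,500 pixels at 1 m resolution). The filter sizes and the use of a 5 m height cut-off to define candidate stands are assumptions for illustration.

```python
# Hedged sketch (scipy analogue of the eCognition steps described above):
# (1) smooth the CHM with median and Gaussian filters to close within-crown gaps,
# (2) drop candidate patches smaller than 0.25 ha (= 2500 pixels at 1 m resolution).
import numpy as np
from scipy import ndimage

def smooth_chm(chm, median_size=3, gaussian_sigma=1.0):
    """Median filter to remove speckle, then Gaussian filter to close crown gaps."""
    filled = ndimage.median_filter(chm, size=median_size)
    return ndimage.gaussian_filter(filled, sigma=gaussian_sigma)

def drop_small_patches(mask, min_pixels=2500):
    """Remove connected patches below the minimum area (0.25 ha at 1 m pixels)."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep_labels = np.flatnonzero(sizes >= min_pixels) + 1
    return np.isin(labels, keep_labels)

chm = np.random.rand(1000, 1000).astype("float32") * 30.0   # placeholder CHM (metres)
smoothed = smooth_chm(chm)
candidates = smoothed > 5.0                                  # e.g. crowns taller than 5 m
retained = drop_small_patches(candidates)
```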

The ADS40 image was initially split by age class categories (5 years or younger; 6 to 23 years old; older than 23 years) and processed by three age class category specific modules (Figure 2). Parameterisation of the ruleset was based on a priori knowledge of average crown diameter per age class and stem density per silvicultural treatment. The extensive array of spectral and textural statistics available in eCognition was examined through recursive partitioning using classification trees (Breiman et al. 1983) in R (R Development Core Team 2010). The separation of P. radiata and native vegetation (mostly eucalypt species) was significantly improved with the inclusion of the Plant Pigment Ratio, (Red band - Blue band)/(Red band + Blue band) (Metternicht et al. 2000).
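The Plant Pigment Ratio and the R/G ratio are simple band arithmetic. A minimal sketch in Python (numpy) is given below; in the study these ratios were evaluated as object-level features within eCognition and screened with classification trees in R, so the per-pixel arrays here are purely illustrative.

```python
# Minimal sketch of the two band ratios mentioned above, computed per pixel from
# ADS40-style 8-bit bands. In the study these were object-level eCognition features.
import numpy as np

def plant_pigment_ratio(red, blue):
    """PPR = (Red - Blue) / (Red + Blue), after Metternicht et al. (2000)."""
    red = red.astype("float32")
    blue = blue.astype("float32")
    return (red - blue) / np.maximum(red + blue, 1e-6)

def red_green_ratio(red, green):
    """Simple R/G ratio used alongside PPR to help separate P. radiata from other vegetation."""
    return red.astype("float32") / np.maximum(green.astype("float32"), 1e-6)

red = np.random.randint(0, 256, (512, 512)).astype("uint8")
green = np.random.randint(0, 256, (512, 512)).astype("uint8")
blue = np.random.randint(0, 256, (512, 512)).astype("uint8")
ppr = plant_pigment_ratio(red, blue)
rg = red_green_ratio(red, green)
```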


Figure 1. Summary description of the methodology for NSA classification of a Pinus radiata plantation using eCognition Developer and Server software with a lidar-derived CHM at 1 m resolution. The figure is a flowchart organised into a Pre-processing stage, an NSA-Detection process routine and a Post-Process process routine, comprising the following steps:

- Create workspace & load lidar-derived 1 m CHM image
- Create tiles 2000 x 2000 pixels & submit scenes for analysis
- Load thematic layers & apply smoothing filters
- Segment to compartment level using thematic layers & clean up slivers
- Locate Ground & small trees in Ground using height thresholds
- Segment remaining non-Ground pixels using thematic layers
- Locate obvious Thinned & UT stands > 5 m tall & clean up; assign difficult stands to Post Active for amendment in Post-Process
- Segment on height to protect stands of 0.25 ha in Thinned stands & reclassify
- Stitch tiles, load thematic layers & apply smoothing filters
- Classify Post Active to relevant classes using relationships to neighbours
- Merge each class, then segment to compartment level again & clean up slivers
- Export results as a shape file to ArcGIS for final manual clean up


Figure 2. Summary description of the methodology for NSA classification of a Pinus radiata plantation using eCognition Developer software with an ADS40 image (NIR, R, G, B bands) at 60 cm resolution. The figure is a flowchart organised into a Pre-processing stage, an NSA-Detection process routine and a Post-Process process routine, comprising the following steps:

- Create workspace & load ADS40 60 cm image
- Create tiles 2000 x 2000 pixels & submit scenes for analysis
- Segment image to age class level using thematic layers
- Remove non-vegetation (roads, shadow & water) using NDVI & Brightness values
- From remaining objects extract P. radiata stands from all vegetation types (other pine spp., eucalypts, blackberries, grasses) using PPR & R/G ratios
- Segment P. radiata stands to compartment level using the thematic layers & improve the compartment boundaries
- Assign P. radiata compartments to Thinned/UT stands by their number of internal objects
- Locate small UT stands that fall within Thinned stands
- Stitch tiles to form a continuous surface representation of the plantation
- Identify NSA polygons by their silvicultural treatment (Thinned/UT)
- Export results as a shape file to ArcGIS with area & perimeter statistics
- Generalise NSA boundaries in ArcGIS to improve visual appearance for cartographic purposes


Accuracy analysis

We examined classification accuracy through the derivation of a point-based error matrix. We acknowledge, however, that when dealing with spatial objects their geometric accuracy also needs to be assessed, i.e. locational and semantic agreement as well as an evaluation of how each object was delineated (e.g. Tiede et al. 2006); this will be done at a later stage. One thousand reference points were visually identified in the original imagery and compared with the classified images. We calculated producer's and user's accuracy statistics and the Kappa coefficient of agreement for both the classified lidar and ADS40 images.
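For reference, the sketch below shows how the producer's and user's accuracies and the Kappa coefficient can be computed from such an error matrix with numpy; applying it to the Table 1 matrix reproduces the figures reported in the Results (overall accuracy 96.2%, Kappa 0.949). This is an illustrative calculation, not the software used in the study.

```python
# Accuracy statistics from a confusion matrix (rows = classified, columns = reference).
# Running this on the Table 1 matrix reproduces the reported overall accuracy,
# per-class producer's/user's accuracies and the Kappa coefficient.
import numpy as np

def accuracy_stats(cm):
    """Return overall accuracy, producer's accuracy, user's accuracy and Cohen's Kappa."""
    cm = cm.astype("float64")
    n = cm.sum()
    diag = np.diag(cm)
    overall = diag.sum() / n
    producers = diag / cm.sum(axis=0)          # column totals = reference totals
    users = diag / cm.sum(axis=1)              # row totals = classified totals
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (overall - expected) / (1.0 - expected)
    return overall, producers, users, kappa

# Table 1 error matrix (lidar CHM classification)
table1 = np.array([
    [225,   0,   0,  0,  0],   # Unthinned < 5.0 m Ht
    [  0, 234,   9,  1,  0],   # Unthinned > 5.0 m Ht
    [  0,   1, 349,  2,  2],   # Thinned (T1 & T2)
    [  4,   2,   6, 71,  0],   # Exclusion zone
    [  1,   0,   0, 10, 83],   # Ground
])
overall, producers, users, kappa = accuracy_stats(table1)
print(round(overall, 3), np.round(producers, 3), np.round(users, 3), round(kappa, 3))
```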

    Results

The final classification derived from the lidar CHM identifies five classes: thinned, unthinned < 5.0 m in height, unthinned > 5.0 m in height, ground and exclusion zone, with the first three classes representing P. radiata NSA (Figure 3). The error matrix for the classified scene selected in the CHM lidar data prior to manual editing (Figure 3) produced an overall accuracy of 96.2%, supported by a Kappa coefficient of 94.9% (Table 1). The commission and omission errors for classifying the three stand categories of P. radiata for net stocked areas were minimal (< 5.0%). The greatest number of misclassified points was associated with the exclusion zones, areas of vegetation, mostly native species, outside the compartment net stocked areas (producer's accuracy = 84.5% and user's accuracy = 85.5%). We did not include the near-infrared intensity values in the classification scheme because of issues with calibration (e.g. Donoghue et al. 2007). The other source of confusion was associated with separating ground visible from above within the thinned stands from ground outside the net stocked areas (Table 1).

Table 1. Error matrix of high order plantation classification derived from the 1.0 m lidar Canopy Height Model of a 2.5 x 2.5 km scene over the Green Hills Pinus radiata plantation

                              Reference (visual classification)
eCognition                Unthinned     Unthinned     Thinned      Exclusion              User's
classification            < 5.0 m Ht    > 5.0 m Ht    (T1 & T2)    zone        Ground     accuracy (%)
Unthinned < 5.0 m Ht          225             0            0            0          0         100
Unthinned > 5.0 m Ht            0           234            9            1          0          95.9
Thinned (T1 & T2)               0             1          349            2          2          98.6
Exclusion zone                  4             2            6           71          0          85.5
Ground                          1             0            0           10         83          88.3
Producer's accuracy (%)      97.8          98.7         95.9         84.5       97.6

Overall accuracy: 96.2%    Kappa coefficient: 94.9%

The overall accuracy achieved using the ADS40 imagery was 97.9%, with a Kappa coefficient of agreement of 96.1% (Table 2). Both the user's and producer's accuracies for correctly classifying thinned and unthinned stands exceeded 95% (Table 2). Separation of the exclusion zone areas (non P. radiata vegetation) was also successful (user's accuracy = 95.5% and producer's accuracy = 98.8%). This is a better result than that obtained using the lidar data. The multispectral imagery, however, did not achieve the same level of accuracy as the lidar data in detecting bare ground (Tables 1 and 2). The presence of shadows in the ADS40 image contributed to these errors.

Table 2. Error matrix of high order plantation classification derived from 4-band digital multispectral aerial photography (ADS40, 60 cm) covering a 1.8 x 1.9 km scene over the Green Hills Pinus radiata plantation

                            Reference (visual classification)
eCognition                Unthinned     Thinned      Exclusion              User's
classification            stands        (T1 & T2)    zone        Ground     accuracy (%)
Unthinned stands              125            0            0          0         100
Thinned (T1 & T2)               0          633            2          0          99.7
Exclusion zone                  0            4          169          4          95.5
Ground                          4            7            0         52          82.5
Producer's accuracy (%)      96.9         98.3         98.8       92.8

Overall accuracy: 97.9%    Kappa coefficient: 96.1%


Figure 3. Lidar CHM scene used for accuracy assessment after stand classification in eCognition Developer. Purple = thinned stands; pink = unthinned stands > 5.0 m Ht; green = unthinned stands < 5 m Ht; brown = ground; pale blue = exclusion zones.


Figure 4. ADS40 scene used for accuracy assessment after stand classification in eCognition Developer.


    Discussion

This study has demonstrated the use of eCognition Developer software to delineate and correctly identify silvicultural stands within a P. radiata plantation for the extraction of net stocked areas and thinning status. Both the lidar and ADS40 imagery were successfully classified to produce stand level features as a file geodatabase that requires little manual editing before being used as an input to the Forests NSW geo-database. While the process does require some supervision, our results indicate that this OBIA approach will improve both accuracies and efficiencies relative to current inventory methodology.

We anticipate gaining improved classification accuracies through combining lidar CHM data with digital camera images. This has been demonstrated in several past studies (e.g. Leckie et al. 2003, McCombs et al. 2003, Holmgren et al. 2008). Although the synergistic use of lidar data and high spatial resolution multispectral imagery has been demonstrated for object-based forest classification elsewhere (e.g. Ke et al. 2010), access to coincident imagery may not always be possible or affordable, and hence we needed to determine what could be achieved using just the lidar data or the ADS40 multispectral imagery.

An issue arising from this study relates to reconciling the line-work associated with vector-based features currently in the Forests NSW geo-database (e.g. API-derived polygon line work) with the often more accurate object boundaries defined through the OBIA segmentation process. The updating or correction of compartment boundaries and thinning status at the compartment and sub-compartment (resource unit) level was achieved through the application of these rule sets.

Our overall aim is to develop a series of integrated rulesets that perform multi-scale segmentation matching the spatial resolution of GIS features accessed by plantation foresters. Tiede et al. (2006) also advocated the need to establish a script library for scale-specific target features. While not presented here, we have significantly progressed the automated delineation of tree crowns using OBIA techniques at the individual tree scale. In addition to delineating individual tree crowns, we will further classify the thinned stands into first and second thinnings using a stem density function derived from the tree delineation process.

After final classification, image information at both the stand and tree crown levels will be extracted and modelled for a range of inventory parameters including mean stand height, volume and biomass. These statistics can also be used for optimising the design of plots required for stem grade assessment. Finally, we will collate data capture standards for lidar and ADS40 imagery of plantations to improve accuracies and ruleset transferability.


    Acknowledgements

The authors wish to thank Duncan Watt (Planning Manager, Hume Region, Forests NSW) for his advice and Amrit Kathuria (Industry & Investment NSW) for biometrical assistance. The results presented here are part of a project partially funded by Forest & Wood Products Australia Ltd.

    References

Breiman, L., Friedman, J.H., Olshen, R.A. and Stone, C.J., 1983, Classification and Regression Trees. Wadsworth, Belmont, California.

Donoghue, D.N.M., Watt, P.J., Cox, N.J. and Wilson, J., 2007, Remote sensing of species mixtures in conifer plantations using LiDAR height and intensity data. Remote Sensing of Environment 110, pp. 509-522.

Hay, G.J., Castilla, G., Wulder, M.A. and Ruiz, J.R., 2005, An automated object-based approach for the multiscale image segmentation of forest scenes. International Journal of Applied Earth Observation and Geoinformation 7, pp. 339-359.

Haywood, A. and Stone, C., 2010, Updating forest stand information: Part A: Semi-automated stand delineation. Australian Forestry, in press.

Holmgren, J., Persson, A. and Söderman, U., 2008, Species identification of individual trees by combining high resolution LiDAR data with multi-spectral images. International Journal of Remote Sensing 29, pp. 1537-1552.

Johansen, K., Arroyo, L.A., Armston, J., Phinn, S. and Witte, C., 2010, Mapping riparian condition indicators in a sub-tropical savanna environment from discrete return LiDAR data using object-based image analysis. Ecological Indicators 10, pp. 796-807.

Ke, Y., Quackenbush, L.J. and Im, J., 2010, Synergistic use of QuickBird multispectral imagery and LIDAR data for object-based forest species classification. Remote Sensing of Environment 114, pp. 1141-1154.

Kim, M., Madden, M. and Warner, T.A., 2009, Forest type mapping using object-specific texture measures from multispectral Ikonos imagery: Segmentation quality and image classification issues. Photogrammetric Engineering & Remote Sensing 75, pp. 819-829.

Leckie, D.G., Gougeon, F.A., Walsworth, N. and Paradine, D., 2003, Stand delineation and composition estimation using semi-automated individual tree crown analysis. Remote Sensing of Environment 85, pp. 355-369.

McCombs, J.W., Roberts, S.D. and Evans, D.L., 2003, Influence of fusing lidar and multispectral imagery on remotely sensed estimates of stand density and mean tree height in a managed loblolly pine plantation. Forest Science 49, pp. 457-466.


Metternicht, G., Honey, F., Beeston, G. and Gonzalez, S., 2000, Potential of high resolution airborne videography for rapid assessment and monitoring of vegetation conditions in agricultural landscapes. International Archives of Photogrammetry and Remote Sensing XXXIII, Part B7, pp. 868-875.

Pascual, C., García-Abril, A., García-Montero, L.G., Martín-Fernández, S. and Cohen, W.B., 2008, Object-based semi-automatic approach for forest structure characterization using lidar data in heterogeneous Pinus sylvestris stands. Forest Ecology and Management 255, pp. 3677-3685.

R Development Core Team, 2010, R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.

Smith, G.M. and Morton, R.D., 2010, Real world objects in GEOBIA through the exploitation of existing digital cartography and image segmentation. Photogrammetric Engineering & Remote Sensing 76, pp. 163-171.

Tiede, D., Lang, S. and Hoffmann, C., 2006, Supervised and forest type-specific multi-scale segmentation for a one-level-representation of single trees. In: International Archives of Photogrammetry, Remote Sensing and Spatial Information Science, Vol. XXXVI-4/C42, Salzburg, Austria.

Wulder, M.A., White, J.C., Hay, G.J. and Castilla, G., 2008, Towards automated segmentation of forest inventory polygons on high spatial resolution satellite imagery. The Forestry Chronicle 84, pp. 221-224.