
ASTER PROCESSING METHOD

July 25, 2008

Jeffrey C. Milder

Department of Natural Resources, Cornell University

[email protected]

We carried out the following sequence of steps for pre-processing ASTER imagery and using

this imagery to derive ecologically relevant vegetation metrics. The steps are shown in the

flowchart below and described in this document.

Steps 1-7: Pre-process ASTER imagery to correct radiometric and geographic errors and to

convert raw digital numbers to radiance and reflectance values.

Steps 8-9: Use the reflectance data to calculate vegetation metrics.

Steps 10-11: Create a single landscape-wide composite of the vegetation metrics that maximizes

data availability across the landscape.


STEP 1: Select & purchase ASTER images

We obtained the ASTER product DMO 14, which provides orthorectified ASTER scenes as well

as a 30m digital elevation model (Abrams et al. 1999). In cases where there was no single scene

that provided substantially cloud-free coverage for a given landscape, we obtained two or more

scenes that, together, provided the maximum possible cloud-free coverage for the date range of

interest.

STEP 2: Re-sample bands 4-9 to 15m resolution

The ASTER bands of interest for this project are provided at different resolutions. The visible

and near-infrared (VNIR) bands 1, 2, and 3N are delivered at 15m resolution. The short-wave

infrared (SWIR) bands 4, 5, 6, 7, 8, and 9 are delivered at 30m resolution. To attain a consistent

resolution for use in subsequent processing, we used ArcMap 9.2 to resample the SWIR bands to

15m resolution based on a nearest neighbor algorithm.
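For readers replicating this step outside ArcMap, the 2:1 nearest-neighbor resampling amounts to duplicating each 30m pixel into a 2 x 2 block of 15m pixels. The following NumPy sketch illustrates this under the assumption that a SWIR band has already been read into an array (the array name swir_30m is hypothetical); it is not the ArcMap procedure we used, only an equivalent illustration.

import numpy as np

def resample_nearest_2x(band_30m):
    # Nearest-neighbor resampling for an exact 2:1 ratio: each 30m pixel
    # becomes a 2 x 2 block of identical 15m pixels.
    return np.kron(band_30m, np.ones((2, 2), dtype=band_30m.dtype))

# Example with a hypothetical 2 x 2 SWIR array of digital numbers
swir_30m = np.array([[10, 20],
                     [30, 40]], dtype=np.uint8)
swir_15m = resample_nearest_2x(swir_30m)   # 4 x 4 array; each value repeated four times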

STEP 3: Clip ASTER images to the study area

To reduce the ASTER file size and processing time, we clipped all the input rasters to the size

and shape of our study area (plus a 1.5 kilometer buffer) in each landscape. This was done only

for the three VNIR bands and six SWIR bands. We did not use the five thermal infrared (TIR)

bands in this study.

STEP 4: Geo-rectify ASTER image to underlying base layer (if necessary)

Although the ASTER product that we ordered is supposed to provide orthorectified imagery, in

some areas of some of the landscapes, we found significant discrepancies (several hundred

meters) between the ASTER imagery and the underlying GIS base layers, which were based on

high-resolution QuickBird or IKONOS imagery. (We verified that these errors were not due to

different projections or coordinate systems.) In the instances where we found such errors, we

were able to obtain additional ASTER scenes that did not produce the same discrepancies.

However, if this had not been possible, we would have needed to geo-rectify the mis-aligned

ASTER scenes to the underlying base layer by using control points.

INTERMEDIATE PRODUCT: geo-rectified, clipped rasters for each of the nine VNIR and

SWIR ASTER bands.

STEP 5: Calculate at-sensor radiance

Steps 5-7 consisted of image correction to convert the raw "digital number" values recorded by

the ASTER sensor into calibrated data that better represent surface features within the landscapes

of interest. These corrections were critical for the meta-analysis because they allowed us to use

multiple ASTER images (both across the four study landscapes and within individual

landscapes) without introducing confounding covariates associated with differing properties

from the multiple images (e.g., different dates, times of day, and sensor settings).

In Step 5, we used a standard method to convert the ASTER raw digital number to at-sensor

radiance (Smith 2007, Yuksel et al. 2008). This calibration corrects for the sensor gain and offset.

First, we identified any pixels with a value of 255, which are "saturated" pixels that exceeded the

sensor’s detection limit. Because they are saturated, these pixels do not provide useful data for


subsequent analysis. We also identified all cloudy areas, which were generally the same as the

areas with saturated pixels. Then, for each band, we created a NO DATA "mask" of all areas

with saturated pixels or cloud cover.

Next, for each ASTER band, we created a new raster file of at-sensor radiance (ASRAD) by

applying the following equation:

Lsat = (DN-1) * UCC

Where:

Lsat = at-sensor spectral radiance

DN = digital number (the pixel values in the original ASTER files)

UCC = Unit Conversion Coefficient. This is different for each ASTER band, and also depends

on the gain setting that was used to acquire the image. To determine which gain setting was

used, we consulted the ASTER metadata files (these files accompany the rasters and are

denoted by the *.met suffix). Within this file, there is a section called "ASTERGAINS" that

lists the gain setting that was used in acquiring each band: NOR = normal; HGH = high;

LOW = low gain 1; LO2 = low gain 2. Based on these gain settings, we selected the

appropriate UCC for each band from the table below:

Unit conversion coefficients ((W/(m2*sr*um))/DN) by gain setting:

Band   High Gain   Normal   Low Gain 1   Low Gain 2
1      0.676       1.688    2.25         N/A
2      0.708       1.415    1.89         N/A
3N     0.423       0.862    1.15         N/A
3B     0.423       0.862    1.15         N/A
4      0.1087      0.2174   0.2900       0.2900
5      0.0348      0.0696   0.0925       0.4090
6      0.0313      0.0625   0.0830       0.3900
7      0.0299      0.0597   0.0795       0.3320
8      0.0209      0.0417   0.0556       0.2450
9      0.0159      0.0318   0.0424       0.2650

Source: Abrams et al. 1999 (p. 26).
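To make the Step 5 calculation concrete, the following NumPy sketch applies the saturation/cloud mask and the Lsat = (DN - 1) * UCC conversion for one band. It assumes the band has already been read into an array and that the normal gain setting applies; the dictionary and variable names (UCC_NORMAL, dn, cloud_mask) are illustrative, not part of our ArcGIS workflow.

import numpy as np

# Unit conversion coefficients ((W/(m2*sr*um))/DN) for the normal gain setting,
# taken from the table above.
UCC_NORMAL = {"1": 1.688, "2": 1.415, "3N": 0.862, "3B": 0.862,
              "4": 0.2174, "5": 0.0696, "6": 0.0625, "7": 0.0597,
              "8": 0.0417, "9": 0.0318}

def at_sensor_radiance(dn, band, cloud_mask=None):
    # dn         : 2-D array of raw digital numbers (0-255) for one band
    # band       : band label, e.g. "2" or "3N"
    # cloud_mask : optional boolean array flagging cloud-covered pixels
    lsat = (dn.astype(np.float32) - 1.0) * UCC_NORMAL[band]
    no_data = (dn == 255)                  # saturated pixels
    if cloud_mask is not None:
        no_data |= cloud_mask              # add cloudy areas to the NO DATA mask
    lsat[no_data] = np.nan                 # NO DATA
    return lsat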

The ASRAD raster consisted of floating point values in order to accommodate decimal values.

We used the following naming convention for the ASRAD raster:

AST_COPAN_17735_V2_02042007_ASRAD.tif

Where:

AST = ASTER – same for all images

COPAN = landscape name

17735 = last 4 or 5 numbers of the original ASTER file name

V2 = band number

02042007 = Date that image was acquired in DDMMYYYY format

ASRAD = stands for at-sensor radiance


INTERMEDIATE PRODUCT: at-sensor radiance rasters for each of the nine VNIR and SWIR

ASTER bands.

STEP 6: Calculate surface reflectance

We calculated surface reflectance to use as the input for the Normalized Difference Vegetation

Index (NDVI). Surface reflectance corrects for two sets of factors: 1) variations in solar

illumination influenced by properties such as the solar elevation angle and earth-sun distance,

and 2) the influence of atmospheric haze and aerosols on the signal detected by the sensor. By

correcting for these factors, surface reflectance should characterize the land features themselves.

Since we used the surface reflectance as an input for the NDVI, we only needed to calculate

surface reflectance for ASTER bands 2 and 3N (red and near infrared, respectively).

To calculate surface reflectance, we applied the following equation from the method of Warner (2008), which is based on earlier methods published by Chavez (1996) and Lu et al. (2002):

ρ = (π * (Lsat - Lhaze) * d2) / (ESUNλ * (cos(θs))2)

Where:

ρ = surface reflectance

Lsat = at-sensor radiance (the output raster from Step 5)

d = earth-sun distance, calculated using this equation:

d = (1 - 0.01672 * COS(RADIANS(0.9856 * (Julian Day - 4))))

Esunλ = a constant that is different for each ASTER band. The constants are listed in the table

below.

ASTER Band   ESUNλ
B1           1845.99
B2           1555.74
B3N          1119.47
B4           231.25
B5           79.81
B6           74.99
B7           68.66
B8           59.74
B9           56.92

Source: Smith 2007

θs (solar zenith angle): The solar zenith angle = 90 - Solar Elevation Angle. The Solar Elevation

Angle can be obtained from the ASTER metadata file in the "Solar_Elevation_Angle"

section. In the metadata file, this angle is given in degrees; note, however, that many

software programs calculate cosine based on radians, in which case it will first be necessary

to convert degrees to radians. Note that in the equation, the denominator contains the term

(cos(θs))2. Some versions of this equation use only the term cos(θs), but here the second

cos(θs) is used to approximate tau, the atmospheric transmittance. This method is appropriate

for humid climates, such as the tropical landscapes in this study (Warner 2008, USU 2008).


Lhaze = estimate of upwelling scattered path radiance due to atmospheric haze, aerosols, etc. The

subtraction of Lhaze from Lsat is a "dark object subtraction" approach to determine the portion

of the at-sensor radiance that is attributable to ground properties, while subtracting out the

portion that is attributable to atmospheric effects. This method has been found to be

reasonably accurate, and is the most feasible approach to atmospheric correction when actual

atmospheric data are not available (Chavez 1996, Lu et al. 2002). To determine Lhaze for each

ASTER band, we viewed the histogram of data values for the ASRAD raster and manually

selected the value at the toe of the histogram (right at the point where the histogram began to

register a significant number of pixels). This value should be around the 0.05th to 0.1st

percentile of all pixel values, as illustrated below. We avoided selecting the lowest value on

the histogram, which could be outlying "noise" not representative of a typical dark object on

the landscape.

We verified that these "dark object" pixels indeed represented suitable dark objects by using

ENVI to display the pixels that had been selected as dark objects. These pixels were typically

shadow areas for Band 2 (red) and water for Band 3N (NIR).

The raster of surface reflectance (SURFREF) consisted of floating point values. Those few

values that were calculated to be <0 were re-assigned to 0.
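The surface-reflectance calculation for one band can be sketched as follows, again assuming the inputs are NumPy arrays and scalars; the function and variable names (and the commented example values) are placeholders rather than our actual ArcGIS raster-calculator expressions.

import numpy as np

def earth_sun_distance(julian_day):
    # Earth-sun distance (d) from the equation above
    return 1.0 - 0.01672 * np.cos(np.radians(0.9856 * (julian_day - 4)))

def surface_reflectance(lsat, esun, solar_elev_deg, julian_day, l_haze):
    # lsat           : at-sensor radiance array (output of Step 5)
    # esun           : ESUN constant for the band (table above)
    # solar_elev_deg : solar elevation angle from the ASTER metadata, in degrees
    # l_haze         : dark-object radiance read from the ASRAD histogram
    d = earth_sun_distance(julian_day)
    theta_s = np.radians(90.0 - solar_elev_deg)          # solar zenith angle
    rho = (np.pi * (lsat - l_haze) * d**2) / (esun * np.cos(theta_s)**2)
    return np.clip(rho, 0.0, None)                       # re-assign values < 0 to 0

# Hypothetical example for Band 2 (red), ESUN = 1555.74:
# rho_b2 = surface_reflectance(lsat_b2, 1555.74, solar_elev_deg=55.0,
#                              julian_day=93, l_haze=30.2)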

[Figure: histogram of ASRAD values for an example band. Toe of the histogram: 30.2; percentile (of all values): 0.06th.]

We used the following naming convention for the new raster of surface reflectance:

AST_COPAN_17735_V2_02042007_SURFREF.tif

Where:

AST = ASTER – same for all images

COPAN = landscape name

17735 = last 4 or 5 numbers of the original ASTER file name

V2 = band number

02042007 = Date that image was acquired in DDMMYYYY format

SURFREF = stands for surface reflectance

INTERMEDIATE PRODUCT: surface reflectance rasters for ASTER bands 2 and 3N only.

STEP 7: Calculate at-sensor reflectance

Although surface reflectance provides the best available representation of surface features, it is

inappropriate for calculating the second set of vegetation indices, the Kauth-Thomas

transformation (K-T). The K-T (which is a generalized term for the more specific "tasseled-cap
transformation," a term that is used more commonly but inaccurately in the literature) is a

method for reducing the dimensionality of data from multispectral sensors into a few

components that reflect surface properties of the landscape (Kauth & Thomas 1976, Crist 1985,

Crist & Kauth 1986). This can be accomplished by means of a principal component algorithm

(e.g., Yarbrough 2006) or a Gram-Schmidt transformation (e.g., Yajuan & Danfeng 2005). Both

approaches typically resolve multispectral data into three components: brightness, greenness, and

wetness.

Once the K-T coefficients have been derived for a particular sensor, they can be applied in any

context, and the wetness, brightness, and greenness components can be calculated as simple

linear combinations of the input bands (in this case, ASTER bands 1 through 9). However, this

holds true only for K-T coefficients based on at-sensor radiance or at-sensor reflectance. Since

surface reflectance is a function of the specific atmospheric conditions of each ASTER scene, the

K-T coefficients would need to vary from scene to scene. The at-sensor reflectance, however,

corrects for sensor and planetary effects but not atmospheric effects. Therefore, ASTER K-T

coefficients derived with at-sensor reflectance data in one context should apply to such data in

any other context. Compared to at-sensor radiance, the at-sensor reflectance data have the benefit

of correcting for planetary variables such as the solar elevation angle and earth-sun distance.

Trials of different approaches to the K-T transformation confirmed that the at-sensor reflectance

algorithm did indeed provide superior results to the at-sensor radiance or surface reflectance

approaches. Accordingly, we calculated at-sensor reflectance to use as the input for generating

the K-T brightness, greenness, and wetness components.

The equation for at-sensor reflectance is similar to that for surface reflectance, except that it does

not include the dark object subtraction (Lhaze) or the provision for outgoing path radiance (the

second cos(θs)) (Smith 2007):

RTOA = (π * Lsat * d2) / (ESUNλ * cos(θs))


Where:

RTOA = at-sensor reflectance (or top-of-atmosphere reflectance)

Lsat = at-sensor radiance (the output raster from Step 5)

d = earth-sun distance, calculated using this equation:

d = (1 - 0.01672 * COS(RADIANS(0.9856 * (Julian Day - 4))))

Esunλ = a constant that is different for each ASTER band. The constants are listed in the table

above, in Step 6.

θs (solar zenith angle): The solar zenith angle = 90 - Solar Elevation Angle. The Solar Elevation

Angle can be obtained from the ASTER metadata file in the "Solar_Elevation_Angle"

section.
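A corresponding sketch for at-sensor reflectance simply drops the dark-object subtraction and the second cos(θs) term (same assumptions and placeholder names as in the Step 6 sketch):

import numpy as np

def at_sensor_reflectance(lsat, esun, solar_elev_deg, julian_day):
    # Top-of-atmosphere (at-sensor) reflectance, RTOA
    d = 1.0 - 0.01672 * np.cos(np.radians(0.9856 * (julian_day - 4)))
    theta_s = np.radians(90.0 - solar_elev_deg)
    return (np.pi * lsat * d**2) / (esun * np.cos(theta_s))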

The rasters of at-sensor reflectance (ASREF) consisted of floating point values. We used the

following naming convention for these rasters:

AST_COPAN_17735_V2_02042007_ASREF.tif

Where:

AST = ASTER – same for all images

COPAN = landscape name

17735 = last 4 or 5 numbers of the original ASTER file name

V2 = band number

02042007 = Date that image was acquired in DDMMYYYY format

ASREF = stands for at-sensor reflectance

INTERMEDIATE PRODUCT: at-sensor reflectance rasters for each of the nine VNIR and

SWIR ASTER bands.

STEP 8: Calculate NDVI

The Normalized Difference Vegetation Index (NDVI) is an indicator of the density and

photosynthetic activity of living vegetation that has been widely used in ecological studies. This

index is a function of the relative reflectance in the red and near infrared bands, and is calculated

according to the following formula:

NDVI = (NIR – RED) / (NIR + RED)

Where:

NIR = Reflectance in the near infrared band

RED = Reflectance in the red band

We used the raster calculator function in ArcGIS 9.2 to calculate NDVI based on the surface

reflectance rasters generated in Step 6.
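A NumPy equivalent of the raster-calculator expression (with hypothetical array names for the Band 2 and Band 3N surface reflectance rasters) is:

import numpy as np

def ndvi(red, nir):
    # red, nir: surface reflectance arrays for Band 2 and Band 3N
    with np.errstate(invalid="ignore", divide="ignore"):
        return (nir - red) / (nir + red)   # NaN propagates through NO DATA pixels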


INTERMEDIATE PRODUCT: one or more NDVI rasters for each landscape (one raster for

each ASTER scene used).

STEP 9: Calculate Kauth-Thomas transformation (K-T) components

As discussed above, we implemented the Kauth-Thomas transformation with previously derived

coefficients, using at-sensor reflectance as the input data. These coefficients are shown in the

table below. Although coefficients are available for nine transformation components (which

would transform the original ASTER 9-dimensional space into a new 9-dimensional space), the

first three K-T components typically explain 97-99% of the variability in the data (Yarbrough

2006). In addition, these first three K-T components (brightness, greenness, and wetness) are

demonstrably related to biophysical properties of the land and therefore have clear interpretive

value. This is not the case with components 4 through 9, so we did not calculate these

components. Each K-T component is a different linear combination of the nine input bands, as

shown in the following table.

Axis         Band 1   Band 2   Band 3N   Band 4   Band 5   Band 6   Band 7   Band 8   Band 9
Brightness   -0.274    0.676    0.303    -0.256   -0.020    0.415   -0.255    0.073   -0.262
Greenness    -0.006   -0.648    0.564     0.061   -0.055    0.394   -0.193    0.021   -0.249
Wetness       0.166   -0.087   -0.703     0.187    0.040    0.500   -0.287    0.030   -0.318

Source: Yarbrough et al. 2005

We used the raster calculator in ArcGIS 9.2 to calculate each K-T component based on the

preceding coefficients. For example, the brightness axis is calculated as:

Brightness = (-0.274 * Band 1) + (0.676 * Band 2) + (0.303 * Band 3N) + (-0.256 * Band 4) +

(-0.02 * Band 5) + (0.415 * Band 6) + (-0.255 * Band 7) + (0.073 * Band 8) + (-0.262 * Band 9)
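Equivalently, all three components can be produced at once as a matrix product of the coefficient table with a stack of the nine at-sensor reflectance bands. The NumPy sketch below assumes the bands have been stacked in the order shown in the table (bands 1 through 9); the array names are illustrative.

import numpy as np

# Rows: brightness, greenness, wetness; columns: bands 1, 2, 3N, 4, 5, 6, 7, 8, 9
KT_COEFFS = np.array([
    [-0.274,  0.676,  0.303, -0.256, -0.020, 0.415, -0.255, 0.073, -0.262],
    [-0.006, -0.648,  0.564,  0.061, -0.055, 0.394, -0.193, 0.021, -0.249],
    [ 0.166, -0.087, -0.703,  0.187,  0.040, 0.500, -0.287, 0.030, -0.318],
])

def kauth_thomas(bands):
    # bands: array of shape (9, rows, cols) holding at-sensor reflectance
    # returns: array of shape (3, rows, cols): brightness, greenness, wetness
    return np.tensordot(KT_COEFFS, bands, axes=([1], [0]))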

INTERMEDIATE PRODUCT: one or more rasters for brightness, greenness, and wetness for

each landscape (one K-Tb raster, one K-Tg raster, and one K-Tw raster for each ASTER scene

used).

STEP 10: Compare NDVI and K-T results from different ASTER images

In landscapes where we needed to use multiple ASTER scenes to provide complete cloud-free

coverage of the landscape, we identified areas of overlap between two or more images and used

these areas to compare the NDVI and K-T results from the different scenes. This comparison

allowed us to evaluate the degree to which our results depended on the specific ASTER scenes

used in the analysis. We conducted two types of comparisons:

1) We selected approximately 100 randomly situated points within the overlap area and

recorded pixel values for each scene for the following data: Band 2 surface reflectance,

Band 3N surface reflectance, Band 2 at-sensor reflectance, Band 3N at-sensor

reflectance, NDVI, K-T brightness, K-T greenness, K-T wetness. We then created

scatterplots for each data type to evaluate the correlation among values from the different

scenes.

2) For these same data types, we created tables indicating the percentage of pixel values

falling within each of 20 equally sized data ranges across the full range of pixel values.

We then evaluated the similarity in the data distribution among the different scenes.
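Both comparisons can be sketched for a single data type (for example, NDVI from two overlapping scenes) as follows. The sketch assumes the co-located, cloud-free pixel values from the overlap area have already been extracted into two 1-D arrays; this is not how we organized the work in ArcGIS, but it is equivalent in effect.

import numpy as np

def compare_scenes(values_a, values_b, n_points=100, n_bins=20, seed=0):
    # values_a, values_b: co-located pixel values (same data type) from two scenes,
    # with NO DATA pixels already excluded
    rng = np.random.default_rng(seed)
    idx = rng.choice(values_a.size, size=min(n_points, values_a.size), replace=False)
    r = np.corrcoef(values_a[idx], values_b[idx])[0, 1]       # comparison 1: correlation

    lo = min(values_a.min(), values_b.min())                  # comparison 2: distributions
    hi = max(values_a.max(), values_b.max())
    bins = np.linspace(lo, hi, n_bins + 1)                    # 20 equally sized ranges
    pct_a = 100.0 * np.histogram(values_a, bins=bins)[0] / values_a.size
    pct_b = 100.0 * np.histogram(values_b, bins=bins)[0] / values_b.size
    return r, pct_a, pct_b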


STEP 11: Create final composite NDVI and K-T rasters for each landscape

The final step was to composite data from multiple ASTER scenes and/or apply the NO DATA

masks generated in Step 5 to create final rasters with the maximum available data coverage.

In landscapes where we used only a single ASTER scene, we overlaid the NO DATA masks on

the interim NDVI and K-T rasters created in Steps 8 and 9, respectively. Pixels that were within

a NO DATA mask for one or more input bands were re-assigned to NO DATA.

In landscapes where we used multiple ASTER scenes, we created mosaics of data from adjacent

scenes to cover the entire landscape. When possible, we also patched NO DATA areas from each

scene (for example, cloud-covered areas) with available data from other scenes. The resulting

rasters provided maximum coverage for each landscape.
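The patching rule itself reduces to keeping the primary scene's value wherever it has data and filling NO DATA areas from another scene. A minimal NumPy sketch, assuming co-registered rasters with NaN as the NO DATA value:

import numpy as np

def composite(primary, secondary):
    # Fill NO DATA (NaN) pixels of the primary scene (e.g., clouds or saturated
    # areas) with values from a second, co-registered scene.
    return np.where(np.isnan(primary), secondary, primary)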

FINAL PRODUCT: one NDVI raster, one K-Tb raster, one K-Tg raster, and one K-Tw raster

for each landscape.

References:

Abrams, M., S. Hook, and B. Ramachandran. 1999. ASTER user handbook, version 2. Jet

Propulsion Laboratory, Pasadena, CA and EROS Data Center, Sioux Falls, SD.

Chavez, P. 1996. Image-based atmospheric corrections - revisited and improved.

Photogrammetric Engineering & Remote Sensing 62: 1025-1036.

Crist, E.P. 1985. A TM tasseled cap equivalent transformation for reflectance factor data. Remote

Sensing of Environment 17: 301-306.

Crist, E. P., and R. J. Kauth. 1986. The tasseled cap demystified. Photogrammetric Engineering

and Remote Sensing 52: 81–86.

Kauth, R.J., and G.S. Thomas. 1976. Tasselled Cap: A graphic description of the spectral-

temporal development of agricultural crops as seen by Landsat. Pages 41-51 in Symposium

on Machine Processing of Remotely Sensed Data. National Telecommunications Conference

Record, West Lafayette, Indiana.

Lu, D., P. Mausel, E. Brondizio and E. Moran. 2002. Assessment of atmospheric correction

methods for Landsat TM data applicable to Amazon basin LBA research. International

Journal of Remote Sensing 23: 2651–2671.

Smith, A.M.S. 2007. How to convert ASTER radiance values to reflectance: an online guide.

College of Natural Resources, University of Idaho. Online:

www.cnrhome.uidaho.edu/default.aspx?pid=85984, accessed 5/13/08.

[USU] Utah State University Remote Sensing/GIS Laboratory. 2008. Image standardization: at-

sensor reflectance and COST correction. USU, Logan, UT. Online:

http://ftp.nr.usu.edu/imagestd, accessed 5/13/08.

Warner, T. 2008. Conversion of Landsat DN to reflectance using the CosT approach. West
Virginia University. Unpublished class notes.


Yajuan, W. and S. Danfeng. 2005. The ASTER tasseled cap interactive transformation using

Gramm-Schmidt method. In L. Zhang, J. Zhang, and M. Liao, eds., MIPPR 2005: SAR and

Multispectral Image Processing. Proceedings of SPIE 6043: 60430R-1, doi:

10.1117/12.654861.

Yarbrough, L.D., G. Easson, and J.S. Kuszmaul. 2005. Using at-sensor radiance and reflectance

tasseled cap transforms applied to change detection for the ASTER sensor. IEEE Third

International Workshop on the Analysis of Multi-temporal Remote Sensing Images, 16–18

May 2005, Biloxi, MS.

Yarbrough, L.D. 2006. The legacy of the tasseled cap transform: a development of a more robust

Kauth-Thomas transform derivation. Dissertation, Department of Engineering Science:

Geological Engineering, University of Mississippi.

Yuksel, A., A.E. Akay, and R. Gundogan. 2008. Using ASTER imagery in land use/cover

classification of eastern Mediterranean landscapes according to CORINE land cover project.

Sensors 8:1237-1251.