
ERDAS Field Guide
Fourth Edition, Revised and Expanded

ERDAS®, Inc.
Atlanta, Georgia


Copyright 1982 - 1997 by ERDAS, Inc. All rights reserved.

First Edition published 1990. Second Edition published 1991.

Third Edition reprinted 1995. Fourth Edition printed 1997.

Printed in the United States of America.

ERDAS proprietary - copying and disclosure prohibited without express permission from ERDAS, Inc.

ERDAS, Inc.

2801 Buford Highway, NE
Atlanta, Georgia 30329-2137 USA
Phone: 404/248-9000
Fax: 404/248-9400
User Support: 404/248-9777

ERDAS International

Telford House, Fulbourn

Cambridge CB1 5HB England

Phone: 011 44 1223 881 774

Fax: 011 44 1223 880 160

The information in this document is subject to change without notice.

Acknowledgments

The ERDAS Field Guide was originally researched, written, edited, and designed by Chris Smith and Nicki Brown of ERDAS, Inc. The Second Edition was produced by Chris Smith, Nicki Brown, Nancy Pyden, and Dana Wormer of ERDAS, Inc., with assistance from Diana Margaret and Susanne Strater. The Third Edition was written and edited by Chris Smith, Nancy Pyden, and Pam Cole of ERDAS, Inc. The Fourth Edition was written and edited by Stacey Schrader and Russ Pouncey of ERDAS, Inc. Many, many thanks go to David Sawyer, ERDAS Engineering Director, and the ERDAS Software Engineers for their significant contributions to this and previous editions. Without them this manual would not have been possible. Thanks also to Derrold Holcomb for lending his expertise on the Enhancement chapter. Many others at ERDAS provided valuable comments and suggestions in an extensive review process.

A special thanks to those industry experts who took time out of their hectic schedules to review previous editions of the ERDAS Field Guide. Of these “external” reviewers, Russell G. Congalton, D. Cunningham, Thomas Hack, Michael E. Hodgson, David McKinsey, and D. Way deserve recognition for their contributions to previous editions.

Cover image: The image on the front cover of the ERDAS IMAGINE Ver. 8.3 manuals is Global Relief Data from the National Geophysical Data Center (National Oceanic and Atmospheric Administration, U.S. Department of Commerce).

Trademarks

ERDAS and ERDAS IMAGINE are registered trademarks of ERDAS, Inc. IMAGINE Essentials, IMAGINE Advantage, IMAGINE Professional, IMAGINE Vista, IMAGINE Production, Model Maker, CellArray, ERDAS Field Guide, and ERDAS IMAGINE Tour Guides are trademarks of ERDAS, Inc. OrthoMAX is a trademark of Autometric, Inc. Restoration is a trademark of Environmental Research Institute of Michigan. Other brands and product names are trademarks of their respective owners. ERDAS IMAGINE Ver. 8.3. January, 1997. Part No. SWE-MFG4-8.3.0-ALLP.


Table of Contents

Table of Contents . . . . . . . . . . i

Preface . . . . . . . . . . xiii
Introduction . . . . . . . . . . xiii

Conventions Used in this Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xiv

CHAPTER 1
Raster Data

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Image Data . . . . . . . . . . 1
Bands . . . . . . . . . . 2
Coordinate Systems . . . . . . . . . . 4

Remote Sensing . . . . . . . . . . 5
Absorption/Reflection Spectra . . . . . . . . . . 6

Resolution . . . . . . . . . . 15
Spectral . . . . . . . . . . 15
Spatial . . . . . . . . . . 15
Radiometric . . . . . . . . . . 17
Temporal . . . . . . . . . . 18

Data Correction . . . . . . . . . . 19
Line Dropout . . . . . . . . . . 19
Striping . . . . . . . . . . 19

Data Storage . . . . . . . . . . 20
Storage Formats . . . . . . . . . . 20
Storage Media . . . . . . . . . . 23
Calculating Disk Space . . . . . . . . . . 26
ERDAS IMAGINE Format (.img) . . . . . . . . . . 27

Image File Organization . . . . . . . . . . 31
Consistent Naming Convention . . . . . . . . . . 31
Keeping Track of Image Files . . . . . . . . . . 31

Geocoded Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

Using Image Data in GIS . . . . . . . . . . 33
Subsetting and Mosaicking . . . . . . . . . . 33
Enhancement . . . . . . . . . . 34
Multispectral Classification . . . . . . . . . . 34

Editing Raster Data . . . . . . . . . . 35
Editing Continuous (Athematic) Data . . . . . . . . . . 36
Interpolation Techniques . . . . . . . . . . 37

CHAPTER 2
Vector Layers

Introduction . . . . . . . . . . 39
Coordinates . . . . . . . . . . 41
Vector Layers . . . . . . . . . . 41


Topology . . . . . . . . . . 41
Vector Files . . . . . . . . . . 42

Attribute Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

Displaying Vector Data . . . . . . . . . . 45
Symbolization . . . . . . . . . . 45

Vector Data Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

Digitizing . . . . . . . . . . 47
Tablet Digitizing . . . . . . . . . . 47
Screen Digitizing . . . . . . . . . . 49

Imported Vector Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

Raster to Vector Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

CHAPTER 3
Raster and Vector Data Sources

Introduction . . . . . . . . . . 51
Importing and Exporting Raster Data . . . . . . . . . . 51
Importing and Exporting Vector Data . . . . . . . . . . 53

Satellite Data . . . . . . . . . . 54
Satellite System . . . . . . . . . . 54
Satellite Characteristics . . . . . . . . . . 55
Landsat . . . . . . . . . . 57
SPOT . . . . . . . . . . 60
NOAA Polar Orbiter Data . . . . . . . . . . 62

Radar Data . . . . . . . . . . 64
Advantages of Using Radar Data . . . . . . . . . . 64
Radar Sensors . . . . . . . . . . 64
Speckle Noise . . . . . . . . . . 67
Applications for Radar Data . . . . . . . . . . 68
Current Radar Sensors . . . . . . . . . . 69
Future Radar Sensors . . . . . . . . . . 69

Image Data from Aircraft . . . . . . . . . . 70
AIRSAR . . . . . . . . . . 70
AVIRIS . . . . . . . . . . 70

Image Data from Scanning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

ADRG Data . . . . . . . . . . 72
ARC System . . . . . . . . . . 72
ADRG File Format . . . . . . . . . . 72
.OVR (overview) . . . . . . . . . . 73
.IMG (scanned image data) . . . . . . . . . . 74
.Lxx (legend data) . . . . . . . . . . 74
ADRG File Naming Convention . . . . . . . . . . 76

ADRI Data . . . . . . . . . . 78
.OVR (overview) . . . . . . . . . . 79
.IMG (scanned image data) . . . . . . . . . . 80
ADRI File Naming Convention . . . . . . . . . . 80

Topographic Data . . . . . . . . . . 81
DEM . . . . . . . . . . 82


DTED . . . . . . . . . . 83
Using Topographic Data . . . . . . . . . . 83

Ordering Raster Data . . . . . . . . . . 84
Addresses to Contact . . . . . . . . . . 85

Raster Data from Other Software Vendors . . . . . . . . . . 87
ERDAS Ver. 7.X . . . . . . . . . . 87
GRID . . . . . . . . . . 88
Sun Raster . . . . . . . . . . 88
TIFF . . . . . . . . . . 88

Vector Data from Other Software Vendors . . . . . . . . . . 90
ARCGEN . . . . . . . . . . 90
AutoCAD (DXF) . . . . . . . . . . 90
DLG . . . . . . . . . . 92
ETAK . . . . . . . . . . 93
IGES . . . . . . . . . . 94
TIGER . . . . . . . . . . 95

CHAPTER 4
Image Display

Introduction . . . . . . . . . . 97
Display Memory Size . . . . . . . . . . 97
Pixel . . . . . . . . . . 98
Colors . . . . . . . . . . 98
Colormap and Colorcells . . . . . . . . . . 99
Display Types . . . . . . . . . . 101
8-bit PseudoColor . . . . . . . . . . 102
24-bit DirectColor . . . . . . . . . . 103
24-bit TrueColor . . . . . . . . . . 104
PC Displays . . . . . . . . . . 105

Displaying Raster Layers . . . . . . . . . . 106
Continuous Raster Layers . . . . . . . . . . 106
Thematic Raster Layers . . . . . . . . . . 110

Using the IMAGINE Viewer . . . . . . . . . . 113
Pyramid Layers . . . . . . . . . . 114
Dithering . . . . . . . . . . 116
Viewing Layers . . . . . . . . . . 117
Viewing Multiple Layers . . . . . . . . . . 119
Linking Viewers . . . . . . . . . . 120
Zoom and Roam . . . . . . . . . . 121
Geographic Information . . . . . . . . . . 122
Enhancing Continuous Raster Layers . . . . . . . . . . 122
Creating New Image Files . . . . . . . . . . 122

CHAPTER 5
Enhancement

Introduction . . . . . . . . . . 125
Display vs. File Enhancement . . . . . . . . . . 126
Spatial Modeling Enhancements . . . . . . . . . . 126


Correcting Data . . . . . . . . . . 129
Radiometric Correction - Visible/Infrared Imagery . . . . . . . . . . 129
Atmospheric Effects . . . . . . . . . . 130
Geometric Correction . . . . . . . . . . 131

Radiometric Enhancement . . . . . . . . . . 132
Contrast Stretching . . . . . . . . . . 133
Histogram Equalization . . . . . . . . . . 138
Histogram Matching . . . . . . . . . . 140
Brightness Inversion . . . . . . . . . . 142

Spatial Enhancement . . . . . . . . . . 143
Convolution Filtering . . . . . . . . . . 144
Crisp . . . . . . . . . . 149
Resolution Merge . . . . . . . . . . 150
Adaptive Filter . . . . . . . . . . 151

Spectral Enhancement . . . . . . . . . . 153
Principal Components Analysis . . . . . . . . . . 153
Decorrelation Stretch . . . . . . . . . . 158
Tasseled Cap . . . . . . . . . . 159
RGB to IHS . . . . . . . . . . 160
IHS to RGB . . . . . . . . . . 162
Indices . . . . . . . . . . 164

Hyperspectral Image Processing . . . . . . . . . . 167
Normalize . . . . . . . . . . 168
IAR Reflectance . . . . . . . . . . 168
Log Residuals . . . . . . . . . . 169
Rescale . . . . . . . . . . 169
Processing Sequence . . . . . . . . . . 170
Spectrum Average . . . . . . . . . . 171
Signal to Noise . . . . . . . . . . 172
Mean per Pixel . . . . . . . . . . 172
Profile Tools . . . . . . . . . . 172
Wavelength Axis . . . . . . . . . . 174
Spectral Library . . . . . . . . . . 174
Classification . . . . . . . . . . 175
System Requirements . . . . . . . . . . 175

Fourier Analysis . . . . . . . . . . 176
Fast Fourier Transform (FFT) . . . . . . . . . . 178
Fourier Magnitude . . . . . . . . . . 178
Inverse Fast Fourier Transform (IFFT) . . . . . . . . . . 182
Filtering . . . . . . . . . . 182
Windows . . . . . . . . . . 185
Fourier Noise Removal . . . . . . . . . . 188
Homomorphic Filtering . . . . . . . . . . 189

Radar Imagery Enhancement . . . . . . . . . . 191
Speckle Noise . . . . . . . . . . 191
Edge Detection . . . . . . . . . . 200
Texture . . . . . . . . . . 204
Radiometric Correction - Radar Imagery . . . . . . . . . . 207
Slant-to-Ground Range Correction . . . . . . . . . . 209


Merging Radar with VIS/IR Imagery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

CHAPTER 6
Classification

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215

The Classification Process . . . . . . . . . . 215
Training . . . . . . . . . . 215
Signatures . . . . . . . . . . 216
Decision Rule . . . . . . . . . . 217

Classification Tips . . . . . . . . . . 218
Classification Scheme . . . . . . . . . . 218
Iterative Classification . . . . . . . . . . 219
Supervised vs. Unsupervised Training . . . . . . . . . . 219
Classifying Enhanced Data . . . . . . . . . . 219
Dimensionality . . . . . . . . . . 220

Supervised Training . . . . . . . . . . 221
Training Samples and Feature Space Objects . . . . . . . . . . 221

Selecting Training Samples . . . . . . . . . . 222
Evaluating Training Samples . . . . . . . . . . 224

Selecting Feature Space Objects . . . . . . . . . . 224

Unsupervised Training . . . . . . . . . . 227
ISODATA Clustering . . . . . . . . . . 228
RGB Clustering . . . . . . . . . . 232

Signature Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235

Evaluating Signatures . . . . . . . . . . 236
Alarm . . . . . . . . . . 236
Ellipse . . . . . . . . . . 237
Contingency Matrix . . . . . . . . . . 238
Separability . . . . . . . . . . 238
Signature Manipulation . . . . . . . . . . 242

Classification Decision Rules . . . . . . . . . . 243
Non-parametric Rules . . . . . . . . . . 244
Parametric Rules . . . . . . . . . . 244
Parallelepiped . . . . . . . . . . 246
Feature Space . . . . . . . . . . 248
Minimum Distance . . . . . . . . . . 250
Mahalanobis Distance . . . . . . . . . . 251
Maximum Likelihood/Bayesian . . . . . . . . . . 252

Evaluating Classification . . . . . . . . . . 254
Thresholding . . . . . . . . . . 254
Accuracy Assessment . . . . . . . . . . 258

Output File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

CHAPTER 7
Photogrammetric Concepts

Introduction . . . . . . . . . . 261

Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262


Coordinate Systems . . . . . . . . . . 263
Pixel Coordinates . . . . . . . . . . 263
Image Coordinates . . . . . . . . . . 263
Ground Coordinates . . . . . . . . . . 264
Geocentric and Topocentric Coordinates . . . . . . . . . . 264

Work Flow . . . . . . . . . . 264
Image Acquisition . . . . . . . . . . 266

Aerial Camera Film . . . . . . . . . . 266
Exposure Station . . . . . . . . . . 266
Image Scale . . . . . . . . . . 266
Strip of Photographs . . . . . . . . . . 266
Block of Photographs . . . . . . . . . . 267

Digital Imagery from Satellites . . . . . . . . . . 268
Correction Levels for SPOT Imagery . . . . . . . . . . 268

Image Preprocessing . . . . . . . . . . 268
Scanning Aerial Film . . . . . . . . . . 268

Photogrammetric Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270

Triangulation . . . . . . . . . . 270
Aerial Triangulation . . . . . . . . . . 271
SPOT Triangulation . . . . . . . . . . 280
Triangulation Accuracy Measures . . . . . . . . . . 287

Stereo Imagery . . . . . . . . . . 288
Aerial Stereopairs . . . . . . . . . . 288
SPOT Stereopairs . . . . . . . . . . 289
Epipolar Stereopairs . . . . . . . . . . 289

Generate Elevation Models . . . . . . . . . . 291
Traditional Methods . . . . . . . . . . 291
Digital Methods . . . . . . . . . . 291
Elevation Model Definitions . . . . . . . . . . 292
DEM Interpolation . . . . . . . . . . 292
Image Matching . . . . . . . . . . 293

Image Matching Techniques . . . . . . . . . . 294
Area Based Matching . . . . . . . . . . 294
Feature Based Matching . . . . . . . . . . 297
Relation Based Matching . . . . . . . . . . 297

Orthorectification . . . . . . . . . . 298
Geometric Distortions . . . . . . . . . . 299
Aerial and SPOT Orthorectification . . . . . . . . . . 300
Landsat Orthorectification . . . . . . . . . . 301

Map Feature Collection . . . . . . . . . . 307
Stereoscopic Collection . . . . . . . . . . 307
Monoscopic Collection . . . . . . . . . . 307

Product Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308

Orthoimages . . . . . . . . . . 308
Orthomaps . . . . . . . . . . 308

Topographic Database . . . . . . . . . . 308
Topographic Maps . . . . . . . . . . 308


CHAPTER 8
Rectification

Introduction . . . . . . . . . . 311
Orthorectification . . . . . . . . . . 312

When to Rectify . . . . . . . . . . 313
When to Georeference Only . . . . . . . . . . 314
Disadvantages of Rectification . . . . . . . . . . 314
Rectification Steps . . . . . . . . . . 315

Ground Control Points . . . . . . . . . . 316
GCPs in ERDAS IMAGINE . . . . . . . . . . 316
Entering GCPs . . . . . . . . . . 316

Orders of Transformation . . . . . . . . . . 318
Linear Transformations . . . . . . . . . . 319
Nonlinear Transformations . . . . . . . . . . 321
Effects of Order . . . . . . . . . . 324
Minimum Number of GCPs . . . . . . . . . . 328
GCP Prediction and Matching . . . . . . . . . . 329

RMS Error . . . . . . . . . . 330
Residuals and RMS Error Per GCP . . . . . . . . . . 330
Total RMS Error . . . . . . . . . . 332
Error Contribution by Point . . . . . . . . . . 332
Tolerance of RMS Error . . . . . . . . . . 333
Evaluating RMS Error . . . . . . . . . . 334

Resampling Methods . . . . . . . . . . 335
“Rectifying” to Lat/Lon . . . . . . . . . . 336
Nearest Neighbor . . . . . . . . . . 337
Bilinear Interpolation . . . . . . . . . . 338
Cubic Convolution . . . . . . . . . . 341

Map to Map Coordinate Conversions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345

CHAPTER 9
Terrain Analysis

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347

Topographic Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348

Slope Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349

Aspect Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352

Shaded Relief . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354

Topographic Normalization . . . . . . . . . . 356
Lambertian Reflectance Model . . . . . . . . . . 356
Non-Lambertian Model . . . . . . . . . . 357

CHAPTER 10
Geographic Information Systems

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359

Data Input . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361


Continuous Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364

Thematic Layers . . . . . . . . . . 364
Statistics . . . . . . . . . . 366

Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367

Attributes . . . . . . . . . . 367
Raster Attributes . . . . . . . . . . 367
Vector Attributes . . . . . . . . . . 370

Analysis . . . . . . . . . . 371
ERDAS IMAGINE Analysis Tools . . . . . . . . . . 371
Analysis Procedures . . . . . . . . . . 372

Proximity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373

Contiguity Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374

Neighborhood Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375

Recoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378

Overlaying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379

Indexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380

Matrix Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381

Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382

Graphical Modeling . . . . . . . . . . 383
Model Maker Functions . . . . . . . . . . 386
Objects . . . . . . . . . . 387
Data Types . . . . . . . . . . 388
Output Parameters . . . . . . . . . . 388
Using Attributes in Models . . . . . . . . . . 389

Script Modeling . . . . . . . . . . 390
Statements . . . . . . . . . . 392
Data Types . . . . . . . . . . 393
Variables . . . . . . . . . . 393

Vector Analysis . . . . . . . . . . 394
Editing Vector Coverages . . . . . . . . . . 394

Constructing Topology . . . . . . . . . . 395
Building and Cleaning Coverages . . . . . . . . . . 395

CHAPTER 11
Cartography

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399

Types of Maps . . . . . . . . . . 400
Thematic Maps . . . . . . . . . . 402

Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404

Scale . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405

Legends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409

Neatlines, Tick Marks, and Grid Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410

Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411


Labels and Descriptive Text . . . . . . . . . . 412
Typography and Lettering . . . . . . . . . . 412

Map Projections . . . . . . . . . . 416
Properties of Map Projections . . . . . . . . . . 416
Projection Types . . . . . . . . . . 419

Geographical and Planar Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422

Available Map Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423

Choosing a Map Projection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427

Spheroids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429

Map Composition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432

Map Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434

CHAPTER 12: Hardcopy Output

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437

Printing Maps . . . 437
Scale and Resolution . . . 438
Map Scaling Examples . . . 439

Mechanics of Printing . . . 441
Halftone Printing . . . 441
Continuous Tone Printing . . . 442
Contrast and Color Tables . . . 442
RGB to CMY Conversion . . . 443

APPENDIX A: Math Topics

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445

Summation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445

Statistics . . . 446
Histogram . . . 446
Bin Functions . . . 447
Mean . . . 450
Normal Distribution . . . 450
Variance . . . 452
Standard Deviation . . . 453
Parameters . . . 454
Covariance . . . 454
Covariance Matrix . . . 455

Dimensionality of Data . . . 456
Measurement Vector . . . 456
Mean Vector . . . 457
Feature Space . . . 458
Feature Space Images . . . 459
n-Dimensional Histogram . . . 460
Spectral Distance . . . 460

Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461


Order . . . 461
Transformation Matrix . . . 461

Matrix Algebra . . . 462
Matrix Notation . . . 462
Matrix Multiplication . . . 463
Transposition . . . 464

APPENDIX B: File Formats and Extensions

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465

ERDAS IMAGINE File Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 465

ERDAS IMAGINE .img Files . . . 468
Sensor Information . . . 469
Raster Layer Information . . . 470
Attribute Data . . . 472
Statistics . . . 472
Map Information . . . 473
Map Projection Information . . . 474
Pyramid Layers . . . 474

Machine Independent Format . . . 475
MIF Data Elements . . . 475
MIF Data Dictionary . . . 483

ERDAS IMAGINE HFA File Format . . . 485
Hierarchical File Architecture . . . 485
Pre-defined HFA File Object Types . . . 487
Basic Objects of an HFA File . . . 488
HFA Object Directory for .img files . . . 490

Vector Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515

APPENDIX C: Map Projections

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517

USGS Projections . . . 518
Albers Conical Equal Area . . . 519
Azimuthal Equidistant . . . 522
Conic Equidistant . . . 525
Equirectangular (Plate Carrée) . . . 527
General Vertical Near-side Perspective . . . 529
Geographic (Lat/Lon) . . . 531
Gnomonic . . . 533
Lambert Azimuthal Equal Area . . . 535
Lambert Conformal Conic . . . 538
Mercator . . . 541
Miller Cylindrical . . . 544
Modified Transverse Mercator . . . 546
Oblique Mercator (Hotine) . . . 548
Orthographic . . . 551
Polar Stereographic . . . 554


Polyconic . . . 557
Sinusoidal . . . 559
Space Oblique Mercator . . . 561
State Plane . . . 563
Stereographic . . . 575
Transverse Mercator . . . 578
UTM . . . 580
Van der Grinten I . . . 583

External Projections . . . 585
Bipolar Oblique Conic Conformal . . . 586
Cassini-Soldner . . . 587
Laborde Oblique Mercator . . . 588
Modified Polyconic . . . 589
Modified Stereographic . . . 590
Mollweide Equal Area . . . 591
Rectified Skew Orthomorphic . . . 592
Robinson Pseudocylindrical . . . 592
Southern Orientated Gauss Conformal . . . 593
Winkel’s Tripel . . . 594

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 645


List of Figures

Figure 1: Pixels and Bands in a Raster Image . . . 2
Figure 2: Typical File Coordinates . . . 4
Figure 3: Electromagnetic Spectrum . . . 5
Figure 4: Sun Illumination Spectral Irradiance at the Earth’s Surface . . . 7
Figure 5: Factors Affecting Radiation . . . 8
Figure 6: Reflectance Spectra . . . 10
Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region . . . 12
Figure 8: IFOV . . . 17
Figure 9: Brightness Values . . . 17
Figure 10: Landsat TM - Band 2 (Four Types of Resolution) . . . 18
Figure 11: Band Interleaved by Line (BIL) . . . 21
Figure 12: Band Sequential (BSQ) . . . 22
Figure 13: Image Files Store Raster Layers . . . 27
Figure 14: Example of a Thematic Raster Layer . . . 28
Figure 15: Examples of Continuous Raster Layers . . . 29
Figure 16: Vector Elements . . . 39
Figure 17: Vertices . . . 40
Figure 18: Workspace Structure . . . 43
Figure 19: Attribute Example . . . 44
Figure 20: Symbolization Example . . . 46
Figure 21: Digitizing Tablet . . . 47
Figure 22: Raster Format Converted to Vector Format . . . 50
Figure 23: Multispectral Imagery Comparison . . . 56
Figure 24: Landsat MSS vs. Landsat TM . . . 59
Figure 25: SPOT Panchromatic vs. SPOT XS . . . 61
Figure 26: SLAR Radar (Lillesand and Kiefer 1987) . . . 65
Figure 27: Received Radar Signal . . . 65
Figure 28: Radar Reflection from Different Sources and Distances (Lillesand and Kiefer 1987) . . . 66
Figure 29: ADRG Overview File Displayed in ERDAS IMAGINE Viewer . . . 73
Figure 30: Subset Area with Overlapping ZDRs . . . 74
Figure 31: Seamless Nine Image DR . . . 78
Figure 32: ADRI Overview File Displayed in ERDAS IMAGINE Viewer . . . 79
Figure 33: ARC/Second Format . . . 81
Figure 34: Example of One Seat with One Display and Two Screens . . . 97
Figure 35: Transforming Data File Values to a Colorcell Value . . . 102
Figure 36: Transforming Data File Values to a Colorcell Value . . . 103
Figure 37: Transforming Data File Values to Screen Values . . . 104
Figure 38: Contrast Stretch and Colorcell Values . . . 107
Figure 39: Stretching by Min/Max vs. Standard Deviation . . . 108
Figure 40: Continuous Raster Layer Display Process . . . 109
Figure 41: Thematic Raster Layer Display Process . . . 112
Figure 42: Pyramid Layers . . . 115


Figure 43: Example of Dithering . . . 116
Figure 44: Example of Color Patches . . . 117
Figure 45: Linked Viewers . . . 120
Figure 46: Histograms of Radiometrically Enhanced Data . . . 132
Figure 47: Graph of a Lookup Table . . . 133
Figure 48: Enhancement with Lookup Tables . . . 133
Figure 49: Nonlinear Radiometric Enhancement . . . 134
Figure 50: Piecewise Linear Contrast Stretch . . . 135
Figure 51: Contrast Stretch By Manipulating Lookup Tables and the Effect on the Output Histogram . . . 137
Figure 52: Histogram Equalization . . . 138
Figure 53: Histogram Equalization Example . . . 139
Figure 54: Equalized Histogram . . . 140
Figure 55: Histogram Matching . . . 141
Figure 56: Spatial Frequencies . . . 143
Figure 57: Applying a Convolution Kernel . . . 144
Figure 58: Output Values for Convolution Kernel . . . 145
Figure 59: Local Luminance Intercept . . . 152
Figure 60: Two Band Scatterplot . . . 154
Figure 61: First Principal Component . . . 155
Figure 62: Range of First Principal Component . . . 155
Figure 63: Second Principal Component . . . 156
Figure 64: Intensity, Hue, and Saturation Color Coordinate System . . . 161
Figure 65: Hyperspectral Data Axes . . . 167
Figure 66: Rescale GUI . . . 170
Figure 67: Spectrum Average GUI . . . 171
Figure 68: Spectral Profile . . . 172
Figure 69: Two-Dimensional Spatial Profile . . . 173
Figure 70: Three-Dimensional Spatial Profile . . . 173
Figure 71: Surface Profile . . . 174
Figure 72: One-Dimensional Fourier Analysis . . . 177
Figure 73: Example of Fourier Magnitude . . . 179
Figure 74: The Padding Technique . . . 181
Figure 75: Comparison of Direct and Fourier Domain Processing . . . 183
Figure 76: An Ideal Cross Section . . . 185
Figure 77: High-Pass Filtering Using the Ideal Window . . . 186
Figure 78: Filtering Using the Bartlett Window . . . 186
Figure 79: Filtering Using the Butterworth Window . . . 187
Figure 80: Homomorphic Filtering Process . . . 190
Figure 81: Effects of Mean and Median Filters . . . 193
Figure 82: Regions of Local Region Filter . . . 194
Figure 83: One-dimensional, Continuous Edge, and Line Models . . . 200
Figure 84: A Very Noisy Edge Superimposed on an Ideal Edge . . . 201
Figure 85: Edge and Line Derivatives . . . 201
Figure 86: Adjust Brightness Function . . . 207
Figure 87: Range Lines vs. Lines of Constant Range . . . 208


Figure 88: Slant-to-Ground Range Correction . . . 209
Figure 89: Example of a Feature Space Image . . . 224
Figure 90: Process for Defining a Feature Space Object . . . 225
Figure 91: ISODATA Arbitrary Clusters . . . 229
Figure 92: ISODATA First Pass . . . 230
Figure 93: ISODATA Second Pass . . . 230
Figure 94: RGB Clustering . . . 233
Figure 95: Ellipse Evaluation of Signatures . . . 237
Figure 96: Classification Flow Diagram . . . 245
Figure 97: Parallelepiped Classification Using Plus or Minus Two Standard Deviations as Limits . . . 246
Figure 98: Parallelepiped Corners Compared to the Signature Ellipse . . . 248
Figure 99: Feature Space Classification . . . 248
Figure 100: Minimum Spectral Distance . . . 250
Figure 101: Histogram of a Distance Image . . . 254
Figure 102: Interactive Thresholding Tips . . . 256
Figure 103: Pixel Coordinates and Image Coordinates . . . 263
Figure 104: Sample Photogrammetric Work Flow . . . 265
Figure 105: Exposure Stations along a Flight Path . . . 266
Figure 106: A Regular (Rectangular) Block of Aerial Photos . . . 267
Figure 107: Triangulation Work Flow . . . 270
Figure 108: Focal and Image Plane . . . 272
Figure 109: Image Coordinates, Fiducials, and Principal Point . . . 273
Figure 110: Exterior Orientation of an Aerial Photo . . . 274
Figure 111: Control Points in Aerial Photographs (block of 8 X 4 photos) . . . 277
Figure 112: Ideal Point Distribution Over a Photograph for Aerial Triangulation . . . 278
Figure 113: Tie Points in a Block of Photos . . . 279
Figure 114: Perspective Centers of SPOT Scan Lines . . . 280
Figure 115: Image Coordinates in a Satellite Scene . . . 281
Figure 116: Interior Orientation of a SPOT Scene . . . 282
Figure 117: Inclination of a Satellite Stereo-Scene (View from North to South) . . . 284
Figure 118: Velocity Vector and Orientation Angle of a Single Scene . . . 285
Figure 119: Ideal Point Distribution Over a Satellite Scene for Triangulation . . . 286
Figure 120: Aerial Stereopair (60% Overlap) . . . 289
Figure 121: SPOT Stereopair (80% Overlap) . . . 289
Figure 122: Epipolar Stereopair Creation Work Flow . . . 290
Figure 123: Generate Elevation Models Work Flow (Method 1) . . . 291
Figure 124: Generate Elevation Models Work Flow (Method 2) . . . 291
Figure 125: Image Pyramid for Matching at Coarse to Full Resolution . . . 293
Figure 126: Orthorectification Work Flow (Method 1) . . . 298
Figure 127: Orthorectification Work Flow (Method 2) . . . 298
Figure 128: Orthographic Projection . . . 299
Figure 129: Digital Orthophoto - Finding Gray Values . . . 300
Figure 130: Image Displacement . . . 304


Figure 131: Feature Collection Work Flow (Method 1) . . . 307
Figure 132: Feature Collection Work Flow (Method 2) . . . 307
Figure 133: Polynomial Curve vs. GCPs . . . 318
Figure 134: Linear Transformations . . . 320
Figure 135: Nonlinear Transformations . . . 321
Figure 136: Transformation Example—1st-Order . . . 325
Figure 137: Transformation Example—2nd GCP Changed . . . 325
Figure 138: Transformation Example—2nd-Order . . . 326
Figure 139: Transformation Example—4th GCP Added . . . 326
Figure 140: Transformation Example—3rd-Order . . . 327
Figure 141: Transformation Example—Effect of a 3rd-Order Transformation . . . 327
Figure 142: Residuals and RMS Error Per Point . . . 331
Figure 143: RMS Error Tolerance . . . 333
Figure 144: Resampling . . . 335
Figure 145: Nearest Neighbor . . . 337
Figure 146: Bilinear Interpolation . . . 338
Figure 147: Linear Interpolation . . . 338
Figure 148: Cubic Convolution . . . 342
Figure 149: Regularly Spaced Terrain Data Points . . . 348
Figure 150: 3 × 3 Window Calculates the Slope at Each Pixel . . . 349
Figure 151: 3 × 3 Window Calculates the Aspect at Each Pixel . . . 352
Figure 152: Shaded Relief . . . 354
Figure 153: Data Input . . . 362
Figure 154: Raster Attributes for lnlandc.img . . . 368
Figure 155: Vector Attributes CellArray . . . 370
Figure 156: Proximity Analysis . . . 373
Figure 157: Contiguity Analysis . . . 374
Figure 158: Using a Mask . . . 376
Figure 159: Sum Option of Neighborhood Analysis (Image Interpreter) . . . 377
Figure 160: Overlay . . . 379
Figure 161: Indexing . . . 380
Figure 162: Graphical Model for Sensitivity Analysis . . . 384
Figure 163: Graphical Model Structure . . . 385
Figure 164: Modeling Objects . . . 388
Figure 165: Graphical and Script Models For Tasseled Cap Transformation . . . 391
Figure 166: Layer Errors . . . 397
Figure 167: Sample Scale Bars . . . 405
Figure 168: Sample Legend . . . 409
Figure 169: Sample Neatline, Tick Marks, and Grid Lines . . . 410
Figure 170: Sample Symbols . . . 411
Figure 171: Sample Sans Serif and Serif Typefaces with Various Styles Applied . . . 413
Figure 172: Good Lettering vs. Bad Lettering . . . 415
Figure 173: Projection Types . . . 418
Figure 174: Tangent and Secant Cones . . . 420
Figure 175: Tangent and Secant Cylinders . . . 420
Figure 176: Ellipse . . . 429


Figure 177: Layout for a Book Map and a Paneled Map . . . 438
Figure 178: Sample Map Composition . . . 439
Figure 179: Histogram . . . 447
Figure 180: Normal Distribution . . . 450
Figure 181: Measurement Vector . . . 456
Figure 182: Mean Vector . . . 457
Figure 183: Two Band Plot . . . 458
Figure 184: Two Band Scatterplot . . . 459
Figure 185: Examples of Objects Stored in an .img File . . . 468
Figure 186: Example of a 512 x 512 Layer with a Block Size of 64 x 64 Pixels . . . 471
Figure 187: HFA File Structure . . . 485
Figure 188: HFA File Structure Example . . . 486
Figure 189: Albers Conical Equal Area Projection . . . 521
Figure 190: Polar Aspect of the Azimuthal Equidistant Projection . . . 524
Figure 191: Geographic . . . 532
Figure 192: Lambert Azimuthal Equal Area Projection . . . 537
Figure 193: Lambert Conformal Conic Projection . . . 540
Figure 194: Mercator Projection . . . 543
Figure 195: Miller Cylindrical Projection . . . 545
Figure 196: Orthographic Projection . . . 553
Figure 197: Polar Stereographic Projection and its Geometric Construction . . . 556
Figure 198: Polyconic Projection of North America . . . 558
Figure 199: Zones of the State Plane Coordinate System . . . 564
Figure 200: Stereographic Projection . . . 577
Figure 201: Zones of the Universal Transverse Mercator Grid in the United States . . . 581
Figure 202: Van der Grinten I Projection . . . 584


List of Tables

Table 1: Description of File Types . . . 42
Table 2: Raster Data Formats for Direct Import . . . 52
Table 3: Vector Data Formats for Import and Export . . . 53
Table 4: Commonly Used Bands for Radar Imaging . . . 66
Table 5: Current Radar Sensors . . . 69
Table 6: ARC System Chart Types . . . 75
Table 7: Legend Files for the ARC System Chart Types . . . 76
Table 8: Common Raster Data Products . . . 84
Table 9: File Types Created by Screendump . . . 88
Table 10: The Most Common TIFF Format Elements . . . 89
Table 11: Conversion of DXF Entries . . . 91
Table 12: Conversion of IGES Entities . . . 94
Table 13: Colorcell Example . . . 100
Table 14: Commonly Used RGB Colors . . . 110
Table 15: Overview of Zoom Ratio . . . 121
Table 16: Description of Modeling Functions Available for Enhancement . . . 127
Table 17: Theoretical Coefficient of Variation Values . . . 195
Table 18: Parameters for Sigma Filter . . . 197
Table 19: Pre-Classification Sequence . . . 197
Table 20: Training Sample Comparison . . . 223
Table 21: Example of a Recoded Land Cover Layer . . . 378
Table 22: Attribute Information for parks.img . . . 389
Table 23: General Editing Operations and Supporting Feature Types . . . 394
Table 24: Comparison of Building and Cleaning Coverages . . . 396
Table 25: Common Map Scales . . . 406
Table 26: Pixels per Inch . . . 407
Table 27: Acres and Hectares per Pixel . . . 408
Table 28: Map Projections . . . 425
Table 29: Projection Parameters . . . 426
Table 30: Spheroids . . . 431
Table 31: ERDAS IMAGINE File Extensions . . . 466
Table 32: Usage of Binning Functions . . . 501
Table 33: NAD27 State Plane coordinate system zone numbers, projection types, and zone code numbers for the United States . . . 565
Table 34: NAD83 State Plane coordinate system zone numbers, projection types, and zone code numbers for the United States . . . 570
Table 35: UTM zones, central meridians, and longitude ranges . . . 582


Preface

Introduction

The purpose of the ERDAS Field Guide is to provide background information on why one might use particular GIS and image processing functions and how the software is manipulating the data, rather than what buttons to push to actually perform those functions. This book is also aimed at a diverse audience: from those who are new to geoprocessing to those savvy users who have been in this industry for years. For the novice, the ERDAS Field Guide provides a brief history of the field, an extensive glossary of terms, and notes about applications for the different processes described. For the experienced user, the ERDAS Field Guide includes the formulas and algorithms that are used in the code, so that he or she can see exactly how each operation works.

Although the ERDAS Field Guide is primarily a reference to basic image processing and GIS concepts, it is geared toward ERDAS IMAGINE users and the functions within ERDAS IMAGINE software, such as GIS analysis, image processing, cartography and map projections, graphics display hardware, statistics, and remote sensing. However, in some cases, processes and functions are described that may not be in the current version of the software, but planned for a future release. There may also be functions described that are not available on your system, due to the actual package that you are using.

The enthusiasm with which the first three editions of the ERDAS Field Guide were received has been extremely gratifying, both to the authors and to ERDAS as a whole. First conceived as a helpful manual for ERDAS users, the ERDAS Field Guide is now being used as a textbook, lab manual, and training guide throughout the world.

The ERDAS Field Guide will continue to expand and improve to keep pace with the profession. Suggestions and ideas for future editions are always welcome, and should be addressed to the Technical Writing division of Engineering at ERDAS, Inc., in Atlanta, Georgia.


Conventions Used in this Book

The following paragraphs are used throughout the ERDAS Field Guide and other ERDAS IMAGINE documentation.

These paragraphs contain strong warnings or important tips.

These paragraphs direct you to the ERDAS IMAGINE software function that accomplishes the described task.

These paragraphs lead you to other chapters in the ERDAS Field Guide or other manuals for additional information.



CHAPTER 1: Raster Data

Introduction

The ERDAS IMAGINE system incorporates the functions of both image processing and geographic information systems (GIS). These functions include importing, viewing, altering, and analyzing raster and vector data sets.

This chapter is an introduction to raster data, including:

• remote sensing

• data storage formats

• different types of resolution

• radiometric correction

• geocoded data

• using raster data in GIS

See "CHAPTER 2: Vector Layers" for more information on vector data.

Image Data

In general terms, an image is a digital picture or representation of an object. Remotely sensed image data are digital representations of the earth. Image data are stored in data files, also called image files, on magnetic tapes, computer disks, or other media. The data consist only of numbers. These representations form images when they are displayed on a screen or are output to hardcopy.

Each number in an image file is a data file value. Data file values are sometimes referred to as pixels. The term pixel is abbreviated from picture element. A pixel is the smallest part of a picture (the area being scanned) with a single value. The data file value is the measured brightness value of the pixel at a specific wavelength.

Raster image data are laid out in a grid similar to the squares on a checker board. Each cell of the grid is represented by a pixel, also known as a grid cell.

In remotely sensed image data, each pixel represents an area of the earth at a specific location. The data file value assigned to that pixel is the record of reflected radiation or emitted heat from the earth’s surface at that location.
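
To make the grid arrangement concrete, the following minimal sketch (illustrative only; the array values and the helper function are hypothetical and not part of ERDAS IMAGINE) treats a single-band raster layer as a two-dimensional array of data file values indexed by row and column, with the origin at the upper left:

import numpy as np

# A hypothetical single-band raster layer: 3 rows by 4 columns of data file values
# (for example, measured brightness values).
layer = np.array([[12, 15, 17, 20],
                  [11, 14, 18, 22],
                  [10, 13, 19, 25]])

def data_file_value(layer, row, col):
    """Return the data file value of the pixel at (row, col); the origin is 0,0 at the upper left."""
    return layer[row, col]

print(data_file_value(layer, 2, 3))   # 25, the value of the lower right pixel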


Data file values may also represent elevation, as in digital elevation models (DEMs).

NOTE: DEMs are not remotely sensed image data, but are currently being produced from stereo points in radar imagery.

The terms “pixel” and “data file value” are not interchangeable in ERDAS IMAGINE. Pixel is used as a broad term with many meanings, one of which is data file value. One pixel in a file may consist of many data file values. When an image is displayed or printed, other types of values are represented by a pixel.

See "CHAPTER 4: Image Display" for more information on how images are displayed.

Bands

Image data may include several bands of information. Each band is a set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, etc.) or some other user-defined information created by combining or enhancing the original bands, or creating new bands from other sources.

ERDAS IMAGINE programs can handle an unlimited number of bands of image data in a single file.

Figure 1: Pixels and Bands in a Raster Image

See "CHAPTER 5: Enhancement" for more information on combining or enhancing bands of data.

Bands vs. Layers

In ERDAS IMAGINE, bands of data are usually referred to as layers. Once a band is imported into a GIS, it becomes a layer of information which can be processed in various ways. Additional layers can be created and added to the image file (.img extension) in ERDAS IMAGINE, such as layers created by combining existing layers. Read more about .img files in ERDAS IMAGINE Format (.img) on page 27.
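
As a rough sketch of how bands become layers that can be combined (an assumption for illustration only; this does not reproduce how ERDAS IMAGINE manages .img files), the fragment below stacks three bands along a third axis and derives a new layer from two existing ones:

import numpy as np

# Hypothetical 2 x 2 data file values for three bands of the same image.
blue = np.array([[20.0, 25.0], [22.0, 28.0]])
red  = np.array([[50.0, 60.0], [55.0, 65.0]])
nir  = np.array([[90.0, 80.0], [85.0, 95.0]])

image = np.stack([blue, red, nir])            # shape: (layers, rows, columns)

# A new layer created by combining existing layers (here, a simple near-infrared/red ratio).
ratio = image[2] / image[1]

# Add the derived layer to the stack, much as a new layer is added to an image file.
image = np.concatenate([image, ratio[np.newaxis]], axis=0)
print(image.shape)                            # (4, 2, 2): the file now holds four layers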



Layers vs. Viewer Layers

The IMAGINE Viewer permits several images to be layered, in which case each image (including a multi-band image) may be a layer.

Numeral Types

The range and the type of numbers used in a raster layer determine how the layer is displayed and processed. For example, a layer of elevation data with values ranging from -51.257 to 553.401 would be treated differently from a layer using only two values to show land and water.

The data file values in raster layers will generally fall into these categories:

• Nominal data file values are simply categorized and named. The actual value used for each category has no inherent meaning—it is simply a class value. An example of a nominal raster layer would be a thematic layer showing tree species.

• Ordinal data are similar to nominal data, except that the file values put the classes in a rank or order. For example, a layer with classes numbered and named “1 - Good,” “2 - Moderate,” and “3 - Poor” is an ordinal system.

• Interval data file values have an order, but the intervals between the values are also meaningful. Interval data measure some characteristic, such as elevation or degrees Fahrenheit, which does not necessarily have an absolute zero. (The difference between two values in interval data is meaningful.)

• Ratio data measure a condition that has a natural zero, such as electromagnetic radiation (as in most remotely sensed data), rainfall, or slope.

Nominal and ordinal data lend themselves to applications in which categories, or themes, are used. Therefore, these layers are sometimes called categorical or thematic.

Likewise, interval and ratio layers are more likely to measure a condition, causing the file values to represent continuous gradations across the layer. Such layers are called continuous.
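
The distinction matters chiefly for how a layer is displayed and processed. The following fragment is only an illustrative sketch of that decision (the function and category names are hypothetical, not ERDAS IMAGINE behavior):

def layer_treatment(measurement_level):
    """Map a measurement level to the usual treatment of a raster layer."""
    if measurement_level in ("nominal", "ordinal"):
        # categorized classes, e.g., tree species or 1 - Good / 2 - Moderate / 3 - Poor
        return "thematic (categorical)"
    if measurement_level in ("interval", "ratio"):
        # measured conditions, e.g., elevation or electromagnetic radiation
        return "continuous"
    raise ValueError("unknown measurement level: %s" % measurement_level)

print(layer_treatment("ordinal"))   # thematic (categorical)
print(layer_treatment("ratio"))     # continuous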


Coordinate Systems

The location of a pixel in a file or on a displayed or printed image is expressed using a coordinate system. In two-dimensional coordinate systems, locations are organized in a grid of columns and rows. Each location on the grid is expressed as a pair of coordinates known as X and Y. The X coordinate specifies the column of the grid, and the Y coordinate specifies the row. Image data organized into such a grid are known as raster data.

There are two basic coordinate systems used in ERDAS IMAGINE:

• file coordinates — indicate the location of a pixel within the image (data file)

• map coordinates — indicate the location of a pixel in a map

File Coordinates

File coordinates refer to the location of the pixels within the image (data) file. File coordinates for the pixel in the upper left corner of the image always begin at 0,0.

Figure 2: Typical File Coordinates

Map Coordinates

Map coordinates may be expressed in one of a number of map coordinate or projection systems. The type of map coordinates used by a data file depends on the method used to create the file (remote sensing, scanning an existing map, etc.). In ERDAS IMAGINE, a data file can be converted from one map coordinate system to another.
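
A common way to relate the two systems is an affine mapping from column/row to map X/Y, given the map coordinates of the upper left corner and the pixel size. The sketch below is a generic illustration under those assumptions (a north-up image with square pixels); it is not the ERDAS IMAGINE conversion routine:

def file_to_map(col, row, upper_left_x, upper_left_y, pixel_width, pixel_height):
    """Convert file coordinates (column, row; origin 0,0 at the upper left pixel)
    to the map coordinates of that pixel's center."""
    map_x = upper_left_x + (col + 0.5) * pixel_width
    map_y = upper_left_y - (row + 0.5) * pixel_height   # map Y decreases as rows increase
    return map_x, map_y

# Example: 30-meter pixels, upper left corner of the image at (440000 E, 3751000 N).
print(file_to_map(3, 1, 440000.0, 3751000.0, 30.0, 30.0))   # (440105.0, 3750955.0)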

For more information on map coordinates and projection systems, see "CHAPTER 11: Cartography" or "APPENDIX C: Map Projections." See "CHAPTER 8: Rectification" for more information on changing the map coordinate system of a data file.

[Figure 2 depicts a grid of pixels with columns numbered along x and rows numbered along y, beginning at 0,0; the example pixel marked in the figure is at file coordinates (3,1).]


Remote Sensing

Remote sensing is the acquisition of data about an object or scene by a sensor that is far from the object (Colwell 1983). Aerial photography, satellite imagery, and radar are all forms of remotely sensed data.

Usually, remotely sensed data refer to data of the earth collected from sensors on satellites or aircraft. Most of the images used as input to the ERDAS IMAGINE system are remotely sensed. However, the user is not limited to remotely sensed data.

This section is a brief introduction to remote sensing. There are many books available for more detailed information, including Colwell 1983, Swain and Davis 1978, and Slater 1980 (see “Bibliography”).

Electromagnetic Radiation Spectrum

The sensors on remote sensing platforms usually record electromagnetic radiation. Electromagnetic radiation (EMR) is energy transmitted through space in the form of electric and magnetic waves (Star and Estes 1990). Remote sensors are made up of detectors that record specific wavelengths of the electromagnetic spectrum. The electromagnetic spectrum is the range of electromagnetic radiation extending from cosmic waves to radio waves (Jensen 1996).

All types of land cover—rock types, water bodies, etc.—absorb a portion of the electromagnetic spectrum, giving a distinguishable “signature” of electromagnetic radiation. Armed with the knowledge of which wavelengths are absorbed by certain features and the intensity of the reflectance, the user can analyze a remotely sensed image and make fairly accurate assumptions about the scene. Figure 3 illustrates the electromagnetic spectrum (Suits 1983; Star and Estes 1990).

Figure 3: Electromagnetic Spectrum

[Figure 3 charts the spectrum in micrometers (µm, one millionth of a meter): ultraviolet; visible (0.4 - 0.7 µm), subdivided into blue (0.4 - 0.5), green (0.5 - 0.6), and red (0.6 - 0.7); near-infrared (0.7 - 2.0 µm); middle-infrared (2.0 - 5.0 µm); far-infrared (8.0 - 15.0 µm); and radar wavelengths. The near- and middle-infrared regions are reflected (SWIR); the far-infrared region is thermal (LWIR).]
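
Using the approximate wavelength boundaries listed in Figure 3, a simple lookup such as the one below (an illustrative sketch only; it is not an ERDAS IMAGINE function) can label the spectral region of a wavelength given in micrometers:

def spectral_region(wavelength_um):
    """Classify a wavelength (in micrometers) using the approximate ranges of Figure 3."""
    if wavelength_um < 0.4:
        return "ultraviolet"
    if wavelength_um < 0.5:
        return "visible: blue"
    if wavelength_um < 0.6:
        return "visible: green"
    if wavelength_um < 0.7:
        return "visible: red"
    if wavelength_um < 2.0:
        return "near-infrared (reflected, SWIR)"
    if wavelength_um < 5.0:
        return "middle-infrared (reflected, SWIR)"
    if 8.0 <= wavelength_um <= 15.0:
        return "far-infrared (thermal, LWIR)"
    return "outside the ranges labeled in Figure 3"

print(spectral_region(0.55))   # visible: green
print(spectral_region(10.0))   # far-infrared (thermal, LWIR)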


SWIR and LWIR

The near-infrared and middle-infrared regions of the electromagnetic spectrum are sometimes referred to as the short wave infrared region (SWIR). This is to distinguish this area from the thermal or far infrared region, which is often referred to as the long wave infrared region (LWIR). The SWIR is characterized by reflected radiation, whereas the LWIR is characterized by emitted radiation.

Absorption/Reflection Spectra

When radiation interacts with matter, some wavelengths are absorbed and others are reflected. To enhance features in image data, it is necessary to understand how vegetation, soils, water, and other land covers reflect and absorb radiation. The study of the absorption and reflection of EMR waves is called spectroscopy.

Spectroscopy

Most commercial sensors, with the exception of imaging radar sensors, are passive solar imaging sensors. Passive solar imaging sensors can only receive radiation waves; they cannot transmit radiation. (Imaging radar sensors are active sensors which emit a burst of microwave radiation and receive the backscattered radiation.)

The use of passive solar imaging sensors to characterize or identify a material of interest is based on the principles of spectroscopy. Therefore, to fully utilize a visible/infrared (VIS/IR) multispectral data set and properly apply enhancement algorithms, it is necessary to understand these basic principles. Spectroscopy reveals the:

• absorption spectra — the EMR wavelengths that are absorbed by specific materials of interest

• reflection spectra — the EMR wavelengths that are reflected by specific materials of interest

Absorption Spectra

Absorption is based on the molecular bonds in the (surface) material. Which wavelengths are absorbed depends upon the chemical composition and crystalline structure of the material. For pure compounds, these absorption bands are so specific that the SWIR region is often called “an infrared fingerprint.”

Atmospheric Absorption

In remote sensing, the sun is the radiation source for passive sensors. However, the sun does not emit the same amount of radiation at all wavelengths. Figure 4 shows the solar irradiation curve—which is far from linear.


Figure 4: Sun Illumination Spectral Irradiance at the Earth’s Surface

Solar radiation must travel through the earth’s atmosphere before it reaches the earth’s surface. As it travels through the atmosphere, radiation is affected by four phenomena (Elachi 1987):

• absorption — the amount of radiation absorbed by the atmosphere

• scattering — the amount of radiation scattered by the atmosphere away from the field of view

• scattering source — divergent solar irradiation scattered into the field of view

• emission source — radiation re-emitted after absorption

[Figure 4 plots spectral irradiance (W m-2 µm-1) against wavelength (0.0 to 3.0 µm) across the UV, visible, and infrared regions, showing the solar irradiation curve outside the atmosphere and the curve at sea level; the dips in the sea-level curve mark absorption by H2O, CO2, and O3. Modified from Chahine, et al 1983.]


Figure 5: Factors Affecting Radiation

Absorption is not a linear phenomenon; it is logarithmic with concentration (Flaschka 1969). In addition, the concentration of atmospheric gases, especially water vapor, is variable. The other major gases of importance are carbon dioxide (CO2) and ozone (O3), which can vary considerably around urban areas. Thus, the extent of atmospheric absorbance will vary with humidity, elevation, proximity to (or downwind of) urban smog, and other factors.
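
As a supplemental note (the relationship is implied rather than written out in the text), this logarithmic dependence is usually expressed in the Beer-Lambert form, in which absorbance grows linearly with concentration while the transmitted intensity falls off exponentially:

A = \log_{10}(I_0 / I) = \varepsilon \, c \, l

where I0 is the incident intensity, I is the transmitted intensity, ε is the absorption coefficient of the absorbing gas, c is its concentration, and l is the path length through the atmosphere.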

Scattering is modeled as Rayleigh scattering with a commonly used algorithm thataccounts for the scattering of short wavelength energy by the gas molecules in theatmosphere (Pratt 1991)—for example, ozone. Scattering is variable with bothwavelength and atmospheric aerosols. Aerosols differ regionally (ocean vs. desert) anddaily (for example, Los Angeles smog has different concentrations daily).

Scattering source and emission source may account for only 5% of the variance. Thesefactors are minor, but they must be considered for accurate calculation. After inter-action with the target material, the reflected radiation must travel back through theatmosphere and be subjected to these phenomena a second time to arrive at the satellite.

[Figure 5 diagrams incoming radiation and the four atmospheric effects listed above: absorption (radiation absorbed by the atmosphere), scattering (radiation scattered by the atmosphere away from the field of view), scattering source (divergent solar irradiation scattered into the field of view), and emission source (radiation re-emitted after absorption). Source: Elachi 1987.]


The mathematical models that attempt to quantify the total atmospheric effect on the solar illumination are called radiative transfer equations. Some of the most commonly used are Lowtran (Kneizys 1988) and Modtran (Berk 1989).

See "CHAPTER 5: Enhancement" for more information on atmospheric modeling.

Reflectance Spectra

After rigorously defining the incident radiation (solar irradiation at target), it is possible to study the interaction of the radiation with the target material. When an electromagnetic wave (solar illumination in this case) strikes a target surface, three interactions are possible (Elachi 1987):

• reflection

• transmission

• scattering

It is the reflected radiation, generally modeled as bidirectional reflectance (Clark 1984), that is measured by the remote sensor.

Remotely sensed data are made up of reflectance values. The resulting reflectance values translate into discrete digital numbers (or values) recorded by the sensing device. These gray scale values will fit within a certain bit range (such as 0-255, which is 8-bit data) depending on the characteristics of the sensor.

Each satellite sensor detector is designed to record a specific portion of the electromagnetic spectrum. For example, Landsat TM band 1 records the 0.45 to 0.52 µm portion of the spectrum and is designed for water body penetration, making it useful for coastal water mapping. It is also useful for soil/vegetation discriminations, forest type mapping, and cultural features identification (Lillesand and Kiefer 1987).

The characteristics of each sensor provide the first level of constraints on how to approach the task of enhancing specific features, such as vegetation or urban areas. Therefore, when choosing an enhancement technique, one should pay close attention to the characteristics of the land cover types within the constraints imposed by the individual sensors.

The use of VIS/IR imagery for target discrimination, whether the target is mineral, vegetation, man-made, or even the atmosphere itself, is based on the reflectance spectrum of the material of interest (see Figure 6). Every material has a characteristic spectrum based on the chemical composition of the material. When sunlight (the illumination source for VIS/IR imagery) strikes a target, certain wavelengths are absorbed by the chemical bonds; the rest are reflected back to the sensor. It is, in fact, the wavelengths that are not returned to the sensor that provide information about the imaged area.


Specific wavelengths are also absorbed by gases in the atmosphere (H2O vapor, CO2, O2, etc.). If the atmosphere absorbs a large percentage of the radiation, it becomes difficult or impossible to use that particular wavelength(s) to study the earth. For the present Landsat and SPOT sensors, only the water vapor bands were considered strong enough to exclude the use of their spectral absorption region. Figure 6 shows how Landsat TM bands 5 and 7 were carefully placed to avoid these regions. Absorption by other atmospheric gases was not extensive enough to eliminate the use of the spectral region for present day broad band sensors.

Figure 6: Reflectance Spectra

NOTE: This chart is for comparison purposes only. It is not meant to show actual values. The spectra are offset to better display the lines.

An inspection of the spectra reveals the theoretical basis of some of the indices in the ERDAS IMAGINE Image Interpreter. Consider the vegetation index TM4/TM3. It is readily apparent that for vegetation this value could be very large; for soils, much smaller; and for clay minerals, near zero. Conversely, when the clay ratio TM5/TM7 is considered, the opposite applies.
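As a rough numerical illustration of these ratios (not an ERDAS Image Interpreter function), the sketch below computes the TM4/TM3 vegetation ratio and the TM5/TM7 clay ratio with NumPy; the band arrays and reflectance values are invented for the example, and a small offset guards against division by zero.

    import numpy as np

    # Hypothetical reflectance values for a vegetated pixel, a soil pixel,
    # a clay-mineral pixel, and a water pixel (values are illustrative only).
    tm3 = np.array([0.04, 0.18, 0.20, 0.03])   # red
    tm4 = np.array([0.45, 0.25, 0.22, 0.02])   # near infrared
    tm5 = np.array([0.22, 0.30, 0.38, 0.01])   # SWIR
    tm7 = np.array([0.10, 0.22, 0.15, 0.01])   # SWIR (clay absorption region)

    eps = 1e-6                                  # avoid division by zero
    vegetation_ratio = tm4 / (tm3 + eps)        # large for vegetation, small for clay
    clay_ratio = tm5 / (tm7 + eps)              # large for clay minerals

    print(vegetation_ratio)
    print(clay_ratio)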

[Figure 6 plots reflectance (%) from 0 to 100 against wavelength from 0.4 to 2.4 µm for green vegetation, silt loam soil, and kaolinite, with the atmospheric absorption bands shaded and the positions of Landsat MSS bands 4-7 and Landsat TM bands 1-5 and 7 marked above the curves. Spectra are offset vertically for clarity and scale. Modified from Fraser 1986, Crist 1986, Sabins 1987.]


Hyperspectral Data

As remote sensing moves toward the use of more and narrower bands (for example, AVIRIS with 224 bands only 10 nm wide), absorption by specific atmospheric gases must be considered. These multiband sensors are called hyperspectral sensors. As more and more of the incident radiation is absorbed by the atmosphere, the digital number (DN) values of that band get lower, eventually becoming useless—unless one is studying the atmosphere. Someone wanting to measure the atmospheric content of a specific gas could utilize the bands of specific absorption.

NOTE: Hyperspectral bands are generally measured in nanometers (nm).

Figure 6 shows the spectral bandwidths of the channels for the Landsat sensors plotted above the absorption spectra of some common natural materials (kaolin clay, silty loam soil, and green vegetation). Note that while the spectra are continuous, the Landsat channels are segmented or discontinuous. We can still use the spectra in interpreting the Landsat data. For example, an NDVI ratio for the three would be very different and, hence, could be used to discriminate between the three materials. Similarly, the ratio TM5/TM7 is commonly used to measure the concentration of clay minerals. Evaluation of the spectra shows why.

Figure 7 shows detail of the absorption spectra of three clay minerals. Because of the wide bandpass (2080-2350 nm) of TM band 7, it is not possible to discern between these three minerals with the Landsat sensor. As mentioned, the AVIRIS hyperspectral sensor has a large number of approximately 10 nm wide bands. With the proper selection of band ratios, mineral identification becomes possible. With this dataset, it would be possible to discriminate between these three clay minerals, again using band ratios. For example, a color composite image prepared from RGB = 2160nm/2190nm, 2220nm/2250nm, 2350nm/2488nm could produce a color coded clay mineral image map.

The commercial airborne multispectral scanners are used in a similar fashion. The Airborne Imaging Spectrometer from the Geophysical & Environmental Research Corp. (GER) has 79 bands in the UV, visible, SWIR, and thermal-infrared regions. The Airborne Multispectral Scanner Mk2 by Geoscan Pty, Ltd., has up to 52 bands in the visible, SWIR, and thermal-infrared regions. To properly utilize these hyperspectral sensors, the user must understand the phenomenon involved and have some idea of the target materials being sought.


Figure 7: Laboratory Spectra of Clay Minerals in the Infrared Region

NOTE: Spectra are offset vertically for clarity.

The characteristics of Landsat, AVIRIS, and other data types are discussed in "CHAPTER 3: Raster and Vector Data Sources". See page 166 of "CHAPTER 5: Enhancement" for more information on the NDVI ratio.

[Figure 7 plots laboratory reflectance (%) against wavelength from 2000 to 2600 nm for kaolinite, montmorillonite, and illite, with the Landsat TM band 7 bandpass (2080 nm to 2350 nm) marked. Spectra are offset vertically for clarity. Modified from Sabins 1987.]


Imaging Radar Data

Radar remote sensors can be broken into two broad categories: passive and active. The passive sensors record the very low intensity microwave radiation naturally emitted by the Earth. Because of the very low intensity, these images have low spatial resolution (i.e., large pixel size).

It is the active sensors, termed imaging radar, that are introducing a new generation of satellite imagery to remote sensing. To produce an image, these satellites emit a directed beam of microwave energy at the target and then collect the backscattered (reflected) radiation from the target scene. Because they must emit a powerful burst of energy, these satellites require large solar collectors and storage batteries. For this reason, they cannot operate continuously; some satellites are limited to 10 minutes of operation per hour.

The microwave energy emitted by an active radar sensor is coherent and defined by a narrow bandwidth. The following table summarizes the bandwidths used in remote sensing.

Band Designation*      Wavelength (λ), cm    Frequency (ν), GHz (10⁹ cycles · sec⁻¹)
Ka (0.86 cm)           0.8 to 1.1            40.0 to 26.5
K                      1.1 to 1.7            26.5 to 18.0
Ku                     1.7 to 2.4            18.0 to 12.5
X (3.0 cm, 3.2 cm)     2.4 to 3.8            12.5 to 8.0
C                      3.8 to 7.5            8.0 to 4.0
S                      7.5 to 15.0           4.0 to 2.0
L (23.5 cm, 25.0 cm)   15.0 to 30.0          2.0 to 1.0
P                      30.0 to 100.0         1.0 to 0.3

*Wavelengths commonly used in imaging radars are shown in parentheses.

A key element of a radar sensor is the antenna. For a given position in space, the resolution of the resultant image is a function of the antenna size. This is termed a real-aperture radar (RAR). At some point, it becomes impossible to make a large enough antenna to create the desired spatial resolution. To get around this problem, processing techniques have been developed which combine the signals received by the sensor as it travels over the target. Thus the antenna is perceived to be as long as the sensor path during backscatter reception. This is termed a synthetic aperture and the sensor a synthetic aperture radar (SAR).


The received signal is termed a phase history or echo hologram. It contains a time history of the radar signal over all the targets in the scene and is itself a low resolution RAR image. In order to produce a high resolution image, this phase history is processed through a hardware/software system called a SAR processor. The SAR processor software requires operator input parameters, such as information about the sensor flight path and the radar sensor's characteristics, to process the raw signal data into an image. These input parameters depend on the desired result or intended application of the output imagery.

One of the most valuable advantages of imaging radar is that it creates images from its own energy source and therefore is not dependent on sunlight. Thus one can record uniform imagery any time of the day or night. In addition, the microwave frequencies at which imaging radars operate are largely unaffected by the atmosphere. This allows image collection through cloud cover or rain storms. However, the backscattered signal can be affected. Radar images collected during heavy rainfall will often be seriously attenuated, which decreases the signal-to-noise ratio (SNR). In addition, the atmosphere does cause perturbations in the signal phase, which decreases resolution of output products, such as the SAR image or generated DEMs.


Resolution

Resolution is a broad term commonly used to describe:

• the number of pixels the user can display on a display device, or

• the area on the ground that a pixel represents in an image file.

These broad definitions are inadequate when describing remotely sensed data. Four distinct types of resolution must be considered:

• spectral - the specific wavelength intervals that a sensor can record

• spatial - the area on the ground represented by each pixel

• radiometric - the number of possible data file values in each band (indicated by the number of bits into which the recorded energy is divided)

• temporal - how often a sensor obtains imagery of a particular area

These four domains contain separate information that can be extracted from the raw data.

Spectral

Spectral resolution refers to the specific wavelength intervals in the electromagnetic spectrum that a sensor can record (Simonett 1983). For example, band 1 of the Landsat Thematic Mapper sensor records energy between 0.45 and 0.52 µm in the visible part of the spectrum.

Wide intervals in the electromagnetic spectrum are referred to as coarse spectral resolution, and narrow intervals are referred to as fine spectral resolution. For example, the SPOT panchromatic sensor is considered to have coarse spectral resolution because it records EMR between 0.51 and 0.73 µm. On the other hand, band 3 of the Landsat TM sensor has fine spectral resolution because it records EMR between 0.63 and 0.69 µm (Jensen 1996).

NOTE: The spectral resolution does not indicate how many levels into which the signal is broken down.

Spatial

Spatial resolution is a measure of the smallest object that can be resolved by the sensor, or the area on the ground represented by each pixel (Simonett 1983). The finer the resolution, the lower the number. For instance, a spatial resolution of 79 meters is coarser than a spatial resolution of 10 meters.

Scale

The terms large-scale imagery and small-scale imagery often refer to spatial resolution. Scale is the ratio of distance on a map as related to the true distance on the ground (Star and Estes 1990).

Large scale in remote sensing refers to imagery in which each pixel represents a small area on the ground, such as SPOT data, with a spatial resolution of 10 m or 20 m. Small scale refers to imagery in which each pixel represents a large area on the ground, such as AVHRR data, with a spatial resolution of 1.1 km.


This terminology is derived from the fraction used to represent the scale of the map, such as 1:50,000. Small-scale imagery is represented by a small fraction (one over a very large number). Large-scale imagery is represented by a larger fraction (one over a smaller number). Generally, anything smaller than 1:250,000 is considered small-scale imagery.
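For a quick numeric check of how the scale fraction works, the sketch below converts a distance measured on the map to a ground distance for a given scale denominator (the measured distance and scales are hypothetical).

    def ground_distance_m(map_distance_cm, scale_denominator):
        """Ground distance in meters for a distance measured on the map in centimeters."""
        return map_distance_cm * scale_denominator / 100.0  # 100 cm per meter

    print(ground_distance_m(1.0, 50000))    # 1 cm on a 1:50,000 map = 500 m on the ground
    print(ground_distance_m(1.0, 250000))   # 1 cm on a 1:250,000 map = 2,500 m on the ground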

NOTE: Scale and spatial resolution are not always the same thing. An image always has the same spatial resolution but it can be presented at different scales (Simonett 1983).

IFOV

Spatial resolution is also described as the instantaneous field of view (IFOV) of the sensor, although the IFOV is not always the same as the area represented by each pixel. The IFOV is a measure of the area viewed by a single detector in a given instant in time (Star and Estes 1990). For example, Landsat MSS data have an IFOV of 79 × 79 meters, but there is an overlap of 11.5 meters in each pass of the scanner, so the actual area represented by each pixel is 56.5 × 79 meters (usually rounded to 57 × 79 meters).

Even though the IFOV is not the same as the spatial resolution, it is important to know the number of pixels into which the total field of view for the image is broken. Objects smaller than the stated pixel size may still be detectable in the image if they contrast with the background, such as roads, drainage patterns, etc.

On the other hand, objects the same size as the stated pixel size (or larger) may not be detectable if there are brighter or more dominant objects nearby. In Figure 8, a house sits in the middle of four pixels. If the house has a reflectance similar to its surroundings, the data file values for each of these pixels will reflect the area around the house, not the house itself, since the house does not dominate any one of the four pixels. However, if the house has a significantly different reflectance than its surroundings, it may still be detectable.


Figure 8: IFOV

Radiometric

Radiometric resolution refers to the dynamic range, or number of possible data file values in each band. This is referred to by the number of bits into which the recorded energy is divided.

For instance, in 8-bit data, the data file values range from 0 to 255 for each pixel, but in 7-bit data, the data file values for each pixel range from 0 to 127.

In Figure 9, 8-bit and 7-bit data are illustrated. The sensor measures the EMR in its range. The total intensity of the energy from 0 to the maximum amount the sensor measures is broken down into 256 brightness values for 8-bit data and 128 brightness values for 7-bit data.

Figure 9: Brightness Values

[Figure 8 shows a house centered on the intersection of four 20 m × 20 m pixels, so that it does not dominate any one of them. Figure 9 divides the total measured intensity, from 0 to the maximum the sensor measures, into 256 brightness values (0-255) for 8-bit data and 128 brightness values (0-127) for 7-bit data.]
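The relationship between bit depth and the number of brightness values is simply 2 raised to the number of bits; the short loop below verifies the 8-bit and 7-bit cases (the other bit depths are included only as examples).

    for bits in (4, 7, 8, 16):
        levels = 2 ** bits
        print(f"{bits}-bit data: {levels} values, range 0 to {levels - 1}")
    # 7-bit data: 128 values, range 0 to 127
    # 8-bit data: 256 values, range 0 to 255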


Temporal

Temporal resolution refers to how often a sensor obtains imagery of a particular area. For example, the Landsat satellite can view the same area of the globe once every 16 days. SPOT, on the other hand, can revisit the same area every three days.

NOTE: Temporal resolution is an important factor to consider in change detection studies.

Figure 10 illustrates all four types of resolution:

Figure 10: Landsat TM - Band 2 (Four Types of Resolution)

[Figure 10 annotates a Landsat TM band 2 image with its four types of resolution: spatial resolution (1 pixel = 79 m × 79 m), spectral resolution (0.52 - 0.60 µm), radiometric resolution (8-bit, 0 - 255), and temporal resolution (the same area viewed every 16 days). Source: EOSAT.]


Data Correction

There are several types of errors that can be manifested in remotely sensed data. Among these are line dropout and striping. These errors can be corrected to an extent in GIS by radiometric and geometric correction functions.

NOTE: Radiometric errors are usually already corrected in data from EOSAT or SPOT.

See "CHAPTER 5: Enhancement" for more information on radiometric and geometric correction.

Line Dropout

Line dropout occurs when a detector either completely fails to function or becomes temporarily saturated during a scan (like the effect of a camera flash on a human retina). The result is a line or partial line of data with higher data file values, creating a horizontal streak until the detector(s) recovers, if it recovers.

Line dropout is usually corrected by replacing the bad line with a line of estimated data file values, based on the lines above and below it.
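A minimal sketch of that estimate, assuming the band is held in a NumPy array and the index of the dropped line is already known, is shown below; the function name and sample values are hypothetical, and this is not the ERDAS implementation.

    import numpy as np

    def repair_line_dropout(band, bad_row):
        """Replace a dropped scan line with the average of the lines above and below it."""
        fixed = band.astype(float).copy()
        above = fixed[bad_row - 1] if bad_row > 0 else fixed[bad_row + 1]
        below = fixed[bad_row + 1] if bad_row < fixed.shape[0] - 1 else fixed[bad_row - 1]
        fixed[bad_row] = (above + below) / 2.0
        return fixed

    band = np.array([[10, 12, 11],
                     [255, 255, 255],   # saturated (dropped) scan line
                     [11, 13, 12]])
    print(repair_line_dropout(band, bad_row=1))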

You can correct line dropout using the 5 x 5 Median Filter from the Radar Speckle Suppression function. The Convolution and Focal Analysis functions in Image Interpreter will also correct for line dropout.

Striping

Striping or banding will occur if a detector goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover.

Use Image Interpreter or Spatial Modeler for implementing algorithms to eliminate striping. The Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the data.
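One common destriping approach (not necessarily the algorithm the ERDAS functions apply) is to rescale the lines read by each detector so that their mean and standard deviation match the band as a whole; the rough sketch below assumes the detectors repeat every n_detectors lines.

    import numpy as np

    def destripe(band, n_detectors):
        """Match each detector's scan lines to the overall band mean and standard deviation."""
        out = band.astype(float).copy()
        target_mean, target_std = out.mean(), out.std()
        for d in range(n_detectors):
            rows = out[d::n_detectors]              # every n-th line belongs to detector d
            std = rows.std() or 1.0                 # guard against a constant detector
            out[d::n_detectors] = (rows - rows.mean()) / std * target_std + target_mean
        return out

    # Example usage: a Landsat MSS band is scanned by 6 detectors.
    # corrected = destripe(band_array, n_detectors=6)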


Data Storage

Image data can be stored on a variety of media—tapes, CD-ROMs, or floppy diskettes, for example—but how the data are stored (e.g., structure) is more important than on what they are stored.

All computer data are in binary format. The basic unit of binary data is a bit. A bit can have two possible values—0 and 1, or “off” and “on” respectively. A set of bits, however, can have many more values, depending on the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used.

A byte is 8 bits of data. Generally, file size and disk space are referred to by number of bytes. For example, a PC may have 640 kilobytes (1,024 bytes = 1 kilobyte) of RAM (random access memory), or a file may need 55,698 bytes of disk space. A megabyte (Mb) is about one million bytes. A gigabyte (Gb) is about one billion bytes.

Storage Formats

Image data can be arranged in several ways on a tape or other media. The most common storage formats are:

• BIL (band interleaved by line)

• BSQ (band sequential)

• BIP (band interleaved by pixel)

For a single band of data, all formats (BIL, BIP, and BSQ) are identical, as long as the data are not blocked.

Blocked data are discussed under "Storage Media" on page 23.

BIL

In BIL (band interleaved by line) format, each record in the file contains a scan line (row) of data for one band (Slater 1980). All bands of data for a given line are stored consecutively within the file as shown in Figure 11.


Figure 11: Band Interleaved by Line (BIL)

NOTE: Although a header and trailer file are shown in this diagram, not all BIL data contain header and trailer files.

BSQ

In BSQ (band sequential) format, each entire band is stored consecutively in the same file (Slater 1980). This format is advantageous, in that:

• one band can be read and viewed easily

• multiple bands can be easily loaded in any order

[Figure 11 shows the BIL file layout: a header, followed by the image records in the order Line 1, Band 1; Line 1, Band 2; ... Line 1, Band x; Line 2, Band 1; Line 2, Band 2; ... Line 2, Band x; ... Line n, Band 1; Line n, Band 2; ... Line n, Band x; followed by a trailer.]


Figure 12: Band Sequential (BSQ)

Landsat Thematic Mapper (TM) data are stored in a type of BSQ format known as fast format. Fast format data have the following characteristics:

• Files are not split between tapes. If a band starts on the first tape, it will end on the first tape.

• An end-of-file (EOF) marker follows each band.

• An end-of-volume marker marks the end of each volume (tape). An end-of-volume marker consists of three end-of-file markers.

• There is one header file per tape.

• There are no header records preceding the image data.

• Regular products (not geocoded) are normally unblocked. Geocoded products are normally blocked (EOSAT).

ERDAS IMAGINE will import all of the header and image file information.

See Geocoded Data on page 32 for more information on geocoded data.

[Figure 12 shows the BSQ file layout: header file(s), then the image file for Band 1 (Line 1, Band 1; Line 2, Band 1; Line 3, Band 1; ... Line n, Band 1) followed by an end-of-file marker, then the image file for Band 2, and so on through the image file for Band x, followed by trailer file(s).]


BIP

In BIP (band interleaved by pixel) format, the values for each band are ordered within a given pixel. The pixels are arranged sequentially on the tape (Slater 1980). The sequence for BIP format is:

Pixel 1, Band 1

Pixel 1, Band 2

Pixel 1, Band 3

.

.

.

Pixel 2, Band 1

Pixel 2, Band 2

Pixel 2, Band 3

.

.

.
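To make the three orderings concrete, the sketch below builds a tiny 2-line × 2-pixel × 3-band array with NumPy and prints the same values in BIP, BIL, and BSQ order (the axis conventions are chosen for illustration and are not a file format specification).

    import numpy as np

    lines, pixels, bands = 2, 2, 3
    # Each value encodes its position, so the three orderings are easy to compare.
    bip = np.arange(lines * pixels * bands).reshape(lines, pixels, bands)

    bil = bip.transpose(0, 2, 1)   # (line, band, pixel): all bands of line 1, then line 2, ...
    bsq = bip.transpose(2, 0, 1)   # (band, line, pixel): all of band 1, then all of band 2, ...

    print("BIP order:", bip.ravel())
    print("BIL order:", bil.ravel())
    print("BSQ order:", bsq.ravel())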

Storage Media

Today, most raster data are available on a variety of storage media to meet the needs of users, depending on the system hardware and devices available. When ordering data, it is sometimes possible to select the type of media preferred. The most common forms of storage media are discussed in the following section:

• 9-track tape

• 4 mm tape

• 8 mm tape

• 1/4” cartridge tape

• CD-ROM/optical disk


Other types of storage media are:

• floppy disk (3.5” or 5.25”)

• film, photograph, or paper

• videotape

Tape

The data on a tape can be divided into logical records and physical records. A record is the basic storage unit on a tape.

• A logical record is a series of bytes that form a unit. For example, all the data for one line of an image may form a logical record.

• A physical record is a consecutive series of bytes on a magnetic tape, followed by a gap, or blank space, on the tape.

Blocked Data

For reasons of efficiency, data can be blocked to fit more on a tape. Blocked data are sequenced so that there are more logical records in each physical record. The number of logical records in each physical record is the blocking factor. For instance, a physical record may contain 28,000 bytes but only 4,000 columns per line, due to a blocking factor of 7 (seven 4,000-byte logical records in each physical record).

Tape Contents

Tapes are available in a variety of sizes and storage capacities. To obtain information about the data on a particular tape, read the tape label or box, or read the header file. Often, there is limited information on the outside of the tape. Therefore, it may be necessary to read the header files on each tape for specific information, such as:

• number of tapes that hold the data set

• number of columns (in pixels)

• number of rows (in pixels)

• data storage format—BIL, BSQ, BIP

• pixel depth—4-bit, 8-bit, 10-bit, 12-bit, or 16-bit

• number of bands

• blocking factor

• number of header files and header records


4 mm Tapes

The 4 mm tape is a relative newcomer in the world of GIS. This tape is a mere 2” × 1.75” in size, but it can hold up to 2 Gb of data. This petite cassette offers an obvious shipping and storage advantage because of its size.

8 mm Tapes

The 8 mm tape offers the advantage of storing vast amounts of data. Tapes are available in 5 and 10 Gb storage capacities (although some tape drives cannot handle the 10 Gb size). The 8 mm tape is a 2.5” × 4” cassette, which makes it easy to ship and handle.

1/4” Cartridge Tapes

This tape format falls between the 8 mm and 9-track in physical size and storage capacity. The tape is approximately 4” × 6” in size and stores up to 150 Mb of data.

9-Track Tapes

A 9-track tape is an older format that was the standard for two decades. It is a large circular tape approximately 10” in diameter. It requires a 9-track tape drive as a peripheral device for retrieving data. The size and storage capability make 9-track less convenient than 8 mm or 1/4” tapes. However, 9-track tapes are still widely used.

A single 9-track tape may be referred to as a volume. The complete set of tapes that contains one image is referred to as a volume set.

The storage format of a 9-track tape in binary format is described by the number of bits per inch, bpi, on the tape. The tapes most commonly used have either 1600 or 6250 bpi. The number of bits per inch on a tape is also referred to as the tape density. Depending on the length of the tape, 9-tracks can store between 120-150 Mb of data.

CD-ROM

Data such as ADRG and DLG are most often available on CD-ROM, although many types of data can be requested in CD-ROM format. A CD-ROM is an optical read-only storage device which can be read with a CD player. CD-ROMs offer the advantage of storing large amounts of data in a small, compact device. Up to 644 Mb can be stored on a CD-ROM. Also, since this device is read-only, it protects the data from accidentally being overwritten, erased, or changed from its original integrity. This is the most stable of the current media storage types, and data stored on CD-ROM are expected to last for decades without degradation.


Calculating Disk Space

To calculate the amount of disk space a raster file will require on an ERDAS IMAGINE system, use the following formula:

[(x × y × b) × n] × 1.4 = output file size

where:

y = rows
x = columns
b = number of bytes per pixel
n = number of bands
1.4 adds 30% to the file size for pyramid layers and 10% for miscellaneous adjustments, such as histograms, lookup tables, etc.

NOTE: This output file size is approximate.

For example, to load a 3 band, 16-bit file with 500 rows and 500 columns, about 2,100,000 bytes of disk space would be needed.

Bytes Per Pixel

The number of bytes per pixel is listed below:

4-bit data: .5

8-bit data: 1.0

16-bit data: 2.0

NOTE: On the PC, disk space is shown in bytes. On the workstation, disk space is shown as kilobytes (1,024 bytes).

[(500 × 500 × 2) × 3] × 1.4 = 2,100,000 bytes, or about 2.1 Mb
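The same rule of thumb can be wrapped in a small function (a sketch of the formula above, not an ERDAS utility):

    def img_file_size_bytes(rows, columns, bytes_per_pixel, bands):
        """Approximate .img disk usage: [(columns x rows x bytes per pixel) x bands] x 1.4."""
        return rows * columns * bytes_per_pixel * bands * 1.4

    # 3-band, 16-bit (2 bytes per pixel), 500-row by 500-column file:
    print(img_file_size_bytes(500, 500, 2.0, 3))   # 2100000.0 bytes, about 2.1 Mb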


ERDAS IMAGINE Format (.img)

In ERDAS IMAGINE, file name extensions identify the file type. When data are imported into IMAGINE, they are converted to the ERDAS IMAGINE file format and stored in .img files. ERDAS IMAGINE image files (.img) can contain two types of raster layers:

• thematic

• continuous

An image file can store a combination of thematic and continuous layers or just one type.

Figure 13: Image Files Store Raster Layers

ERDAS Version 7.5 Users

For Version 7.5 users, when importing a GIS file from Version 7.5, it becomes an image file with one thematic raster layer. When importing a LAN file, each band becomes a continuous raster layer within an image file.

Thematic Raster Layer

Thematic data are raster layers that contain qualitative, categorical information about an area. A thematic layer is contained within an .img file. Thematic layers lend themselves to applications in which categories or themes are used. Thematic raster layers are used to represent data measured on a nominal or ordinal scale, such as:

• soils

• land use

• land cover

• roads

• hydrology

NOTE: Thematic raster layers are displayed as pseudo color layers.

[Figure 13 shows that an image file (.img) contains raster layers, which may be thematic raster layers, continuous raster layers, or a combination of both.]


Figure 14: Example of a Thematic Raster Layer

See "CHAPTER 4: Image Display" for information on displaying thematic raster layers.

Continuous Raster Layer

Continuous data are raster layers that contain quantitative (measuring a characteristic on an interval or ratio scale) and related, continuous values. Continuous raster layers can be multiband (e.g., Landsat Thematic Mapper data) or single band (e.g., SPOT panchromatic data). The following types of data are examples of continuous raster layers:

• Landsat

• SPOT

• digitized (scanned) aerial photograph

• digital elevation model (DEM)

• slope

• temperature

NOTE: Continuous raster layers can be displayed as either a gray scale raster layer or a true color raster layer.



Figure 15: Examples of Continuous Raster Layers

Tiled Data

Data in the .img format are tiled data. Tiled data are stored in tiles that can be set to any size.

The default tile size for .img files is 64 x 64 pixels.

Image File Contents

The .img files contain the following additional information about the data:

• the data file values

• statistics

• lookup tables

• map coordinates

• map projection

This additional information can be viewed in the Image Information function from the ERDAS IMAGINE icon panel.

[The examples in Figure 15 are a Landsat TM image and a DEM.]


Statistics

In ERDAS IMAGINE, the file statistics are generated from the data file values in the layer and incorporated into the .img file. This statistical information is used to create many program defaults, and helps the user make processing decisions.

Pyramid Layers

Sometimes a large image will take longer than normal to display in the ERDAS IMAGINE Viewer. The pyramid layer option enables the user to display large images faster. Pyramid layers are image layers which are successively reduced by the power of 2 and resampled.
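The idea can be sketched with simple 2 × 2 block averaging, in which each level halves the previous one in both directions; this illustrates the concept only and is not the resampling method ERDAS IMAGINE applies.

    import numpy as np

    def build_pyramid(image, levels):
        """Successively reduce an image by a factor of 2 using 2 x 2 block averaging."""
        pyramid = [np.asarray(image, dtype=float)]
        for _ in range(levels):
            prev = pyramid[-1]
            rows, cols = prev.shape[0] // 2 * 2, prev.shape[1] // 2 * 2   # trim odd edges
            blocks = prev[:rows, :cols].reshape(rows // 2, 2, cols // 2, 2)
            pyramid.append(blocks.mean(axis=(1, 3)))
        return pyramid

    layers = build_pyramid(np.random.rand(512, 512), levels=3)
    print([layer.shape for layer in layers])   # [(512, 512), (256, 256), (128, 128), (64, 64)]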

The Pyramid Layer option is available in the Image Information function from the ERDAS IMAGINE icon panel and from the Import function.

See "CHAPTER 4: Image Display" for more information on pyramid layers. See "APPENDIX B: File Formats and Extensions" for detailed information on ERDAS IMAGINE file formats.


Image File Organization

Data are easy to locate if the data files are well organized. Well organized files will also make data more accessible to anyone who uses the system. Using consistent naming conventions and the ERDAS IMAGINE Image Catalog will help keep image files well organized and accessible.

Consistent Naming Convention

Many processes create an output file, and every time a file is created, it will be necessary to assign a file name. The name which is used can either cause confusion about the process that has taken place, or it can clarify and give direction. For example, if the name of the output file is “junk,” it is difficult to determine the contents of the file. On the other hand, if a standard nomenclature is developed in which the file name refers to a process or contents of the file, it is possible to determine the progress of a project and contents of a file by examining the directory.

Develop a naming convention that is based on the contents of the file. This will help everyone involved know what the file contains. For example, in a project to create a map composition for Lake Lanier, a directory for the files may look similar to the one below:

lanierTM.img
lanierSPOT.img
lanierSymbols.ovr
lanierlegends.map.ovr
lanierScalebars.map.ovr
lanier.map
lanier.plt
lanier.gcc
lanierUTM.img

From this listing, one can make some educated guesses about the contents of each file based on naming conventions used. For example, “lanierTM.img” is probably a Landsat TM scene of Lake Lanier. “lanier.map” is probably a map composition that has map frames with lanierTM.img and lanierSPOT.img data in them. “lanierUTM.img” was probably created when lanierTM.img was rectified to a UTM map projection.

Keeping Track of Image Files

Using a database to store information about images enables the user to track image files (.img) without having to know the name or location of the file. The database can be queried for specific parameters (e.g., size, type, map projection) and the database will return a list of image files that match the search criteria. This file information will help to quickly determine which image(s) to use, where it is located, and its ancillary data. An image database is especially helpful when there are many image files and even many on-going projects. For example, one could use the database to search for all of the image files of Georgia that have a UTM map projection.

Use the Image Catalog to track and store information for image files (.img) that are imported and created in IMAGINE.

NOTE: All information in the Image Catalog database, except archive information, is extracted from the image file header. Therefore, if this information is modified in the Image Info utility, it will be necessary to re-catalog the image in order to update the information in the Image Catalog database.


ERDAS IMAGINE Image Catalog

The ERDAS IMAGINE Image Catalog database is designed to serve as a library and information management system for image files (.img) that are imported and created in ERDAS IMAGINE. The information for the .img files is displayed in the Image Catalog CellArray. This CellArray enables the user to view all of the ancillary data for the image files in the database. When records are queried based on specific criteria, the .img files that match the criteria will be highlighted in the CellArray. It is also possible to graphically view the coverage of the selected .img files on a map in a canvas window.

When it is necessary to store some data on a tape, the Image Catalog database enables the user to archive .img files to external devices. The Image Catalog CellArray will show which tape the .img file is stored on, and the file can be easily retrieved from the tape device to a designated disk directory. The archived .img files are copies of the files on disk—nothing is removed from the disk. Once the file is archived, it can be removed from the disk, if desired.

Geocoded Data

Geocoding, also known as georeferencing, is the geographical registration or coding of the pixels in an image. Geocoded data are images that have been rectified to a particular map projection and pixel size.

Raw, remotely sensed image data are gathered by a sensor on a platform, such as an aircraft or satellite. In this raw form, the image data are not referenced to a map projection. Rectification is the process of projecting the data onto a plane and making them conform to a map projection system.

It is possible to geocode raw image data with the ERDAS IMAGINE rectification tools. Geocoded data are also available from EOSAT and SPOT.

See "APPENDIX C: Map Projections" for detailed information on the different projections available. See "CHAPTER 8: Rectification" for information on geocoding raw imagery with ERDAS IMAGINE.


Using Image Data in GIS

ERDAS IMAGINE provides many tools designed to extract the necessary information from the images in a data base. The following chapters in this book describe many of these processes.

This section briefly describes some basic image file techniques that may be useful for any application.

Subsetting and Mosaicking

Within ERDAS IMAGINE, there are options available to make additional image files from those acquired from EOSAT, SPOT, etc. These options involve combining files, mosaicking, and subsetting.

ERDAS IMAGINE programs allow image data with an unlimited number of bands, but the most common satellite data types—Landsat and SPOT—have seven or fewer bands. Image files can be created with more than seven bands.

It may be useful to combine data from two different dates into one file. This is called multitemporal imagery. For example, a user may want to combine Landsat TM from one date with TM data from a later date, then perform a classification based on the combined data. This is particularly useful for change detection studies.

The user can also incorporate elevation data into an existing image file as another band, or create new bands through various enhancement techniques.

To combine two or more image files, each file must be georeferenced to the same coordinate system, or to each other. See "CHAPTER 8: Rectification" for information on georeferencing images.

Subset

Subsetting refers to breaking out a portion of a large file into one or more smaller files.

Often, image files contain areas much larger than a particular study area. In these cases, it is helpful to reduce the size of the image file to include only the area of interest. This not only eliminates the extraneous data in the file, but it speeds up processing due to the smaller amount of data to process. This can be important when dealing with multiband data.

The Import option lets you define a subset area of an image to preview or import. You can also use the Subset option from Image Interpreter to define a subset area.
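Conceptually, a subset is just a window of rows and columns cut from the full image array; the sketch below shows the idea with NumPy slicing (the scene size and window bounds are hypothetical).

    import numpy as np

    # A hypothetical 7-band scene: 700 rows x 800 columns.
    full_scene = np.random.randint(0, 256, size=(700, 800, 7), dtype=np.uint8)

    row_start, row_end = 250, 350      # study-area window (pixel bounds are invented)
    col_start, col_end = 400, 520

    subset = full_scene[row_start:row_end, col_start:col_end, :]
    print(subset.shape)                # (100, 120, 7)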

Mosaic

On the other hand, the study area in which the user is interested may span several image files. In this case, it is necessary to combine the images to create one large file. This is called mosaicking.

To create a mosaicked image, use the Mosaic Images option from the Data Prep menu. All of the images to be mosaicked must be georeferenced to the same coordinate system.


Enhancement

Image enhancement is the process of making an image more interpretable for a particular application (Faust 1989). Enhancement can make important features of raw, remotely sensed data and aerial photographs more interpretable to the human eye. Enhancement techniques are often used instead of classification for extracting useful information from images.

There are many enhancement techniques available. They range in complexity from a simple contrast stretch, where the original data file values are stretched to fit the range of the display device, to principal components analysis, where the number of image file bands can be reduced and new bands created to account for the most variance in the data.

See "CHAPTER 5: Enhancement" for more information on enhancement techniques.

Multispectral Classification

Image data are often used to create thematic files through multispectral classification. This entails using spectral pattern recognition to identify groups of pixels that represent a common characteristic of the scene, such as soil type or vegetation.

See "CHAPTER 6: Classification" for a detailed explanation of classification procedures.


Editing Raster Data

ERDAS IMAGINE provides raster editing tools for editing the data values of thematic and continuous raster data. This is primarily a correction mechanism that enables the user to correct bad data values which produce noise, such as spikes and holes in imagery. The raster editing functions can be applied to the entire image or a user-selected area of interest (AOI).

With raster editing, data values in thematic data can also be recoded according to class. Recoding is a function which reassigns data values to a region or to an entire class of pixels.

See "CHAPTER 10: Geographic Information Systems" for information about recoding data. See "CHAPTER 5: Enhancement" for information about reducing data noise using spatial filtering.

The ERDAS IMAGINE raster editing functions allow the use of focal and global spatial modeling functions for computing the values to replace noisy pixels or areas in continuous or thematic data.

Focal operations are filters which calculate the replacement value based on a window (3 × 3, 5 × 5, etc.) and replace the pixel of interest with the replacement value. Therefore this function affects one pixel at a time, and the number of surrounding pixels which influence the value is determined by the size of the moving window.
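A bare-bones focal operation of this kind might look like the sketch below, which replaces each pixel with the median of its 3 × 3 neighborhood; it is a conceptual illustration, not the ERDAS raster editing code.

    import numpy as np

    def focal_median(band, size=3):
        """Replace each pixel with the median of the size x size window centered on it."""
        pad = size // 2
        padded = np.pad(np.asarray(band, dtype=float), pad, mode="edge")
        out = np.empty(band.shape, dtype=float)
        for r in range(band.shape[0]):
            for c in range(band.shape[1]):
                out[r, c] = np.median(padded[r:r + size, c:c + size])
        return out

    noisy = np.array([[10., 11., 12.],
                      [11., 250., 12.],   # a single spike
                      [12., 12., 13.]])
    print(focal_median(noisy))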

Global operations calculate the replacement value for an entire area rather than affecting one pixel at a time. These functions, specifically the Majority option, are more applicable to thematic data.

See the ERDAS IMAGINE On-Line Help for information about using and selecting AOIs.

The raster editing tools are available in the IMAGINE Viewer.


Editing Continuous (Athematic) Data

Editing DEMs

DEMs will occasionally contain spurious pixels or bad data. These spikes, holes, and other noise caused by automatic DEM extraction can be corrected by editing the raster data values and replacing them with meaningful values. This discussion of raster editing will focus on DEM editing.

The ERDAS IMAGINE Raster Editing functionality was originally designed to edit DEMs, but it can also be used with images of other continuous data sources, such as radar, SPOT, Landsat, and digitized photographs.

When editing continuous raster data, the user can modify or replace original pixel values with the following:

• a constant value — enter a known constant value for areas such as lakes

• the average of the buffering pixels — replace the original pixel value with the average of the pixels in a specified buffer area around the AOI. This is used where the constant values of the AOI are not known, but the area is flat or homogeneous with little variation (for example, a lake).

• the original data value plus a constant value — add a negative constant value to the original data values to compensate for the height of trees and other vertical features in the DEM. This technique is commonly used in forested areas.

• spatial filtering — filter data values to eliminate noise such as spikes or holes in the data

• interpolation techniques (discussed below)


Interpolation Techniques

While the previously listed raster editing techniques are perfectly suitable for some applications, the following interpolation techniques provide the best methods for raster editing:

• 2-D polynomial — surface approximation

• Multi-surface functions — with least squares prediction

• Distance weighting

Each pixel’s data value is interpolated from the reference points in the data file. These interpolation techniques are described below.

2-D Polynomial

This interpolation technique provides faster interpolation calculations than distance weighting and multi-surface functions. The following equation is used:

V = a0 + a1x + a2y + a3x² + a4xy + a5y² + . . .

where:

V = data value (elevation value for DEM)
a = polynomial coefficients
x = x coordinate
y = y coordinate
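A least-squares fit of this surface to a handful of reference points can be sketched with NumPy as follows; the reference points are invented, and only the terms shown in the equation above are used.

    import numpy as np

    # Hypothetical reference points: columns are x, y, elevation.
    pts = np.array([[0., 0., 100.], [1., 0., 102.], [0., 1., 101.],
                    [1., 1., 104.], [2., 1., 107.], [1., 2., 106.]])
    x, y, v = pts[:, 0], pts[:, 1], pts[:, 2]

    # Design matrix for V = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)

    def poly2d(px, py):
        return float(np.dot(coeffs, [1.0, px, py, px**2, px * py, py**2]))

    print(poly2d(0.5, 0.5))   # interpolated elevation at (0.5, 0.5)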

Multi-surface Functions

The multi-surface technique provides the most accurate results for editing DEMs which have been created through automatic extraction. The following equation is used:

V = Σ WiQi (summed over all reference points i)

where:

V = output data value (elevation value for DEM)
Wi = coefficients which are derived by the least squares method
Qi = distance-related kernels which are actually interpretable as continuous single value surfaces

Source: Wang 1990



Distance Weighting

The weighting function determines how the output data values will be interpolated from a set of reference data points. For each pixel, the values of all reference points are weighted by a value corresponding with the distance between each point and the pixel.

The weighting function used in ERDAS IMAGINE is:

W = (S / D - 1)²

where:

S = normalization factor
D = distance from output data point to reference point

The value for any given pixel is calculated by taking the sum of weighting factors for all reference points multiplied by the data values of those points, and dividing by the sum of the weighting factors:

V = Σ (Wi × Vi) / Σ Wi (sums taken over i = 1 to n)

where:

V = output data value (elevation value for DEM)

i = ith reference point

Wi = weighting factor of point i

Vi = data value of point i

n = number of reference points

Source: Wang 1990

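A direct transcription of these two formulas is sketched below; the choice of the normalization factor S is an assumption here (taken just beyond the farthest reference point so that weights fall toward zero at that range), and the reference points are invented.

    import numpy as np

    def distance_weighted_value(px, py, ref_xy, ref_v):
        """Interpolate a value at (px, py) with weights W = (S/D - 1)^2 (after Wang 1990)."""
        d = np.hypot(ref_xy[:, 0] - px, ref_xy[:, 1] - py)
        d = np.maximum(d, 1e-9)            # avoid division by zero at a reference point
        s = d.max() * 1.001                # assumed normalization factor
        w = (s / d - 1.0) ** 2
        return float(np.sum(w * ref_v) / np.sum(w))

    ref_xy = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
    ref_v = np.array([100., 110., 105., 120.])
    print(distance_weighted_value(2.0, 3.0, ref_xy, ref_v))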


CHAPTER 2
Vector Layers

Introduction

ERDAS IMAGINE is designed to integrate two data types into one system: raster and vector. While the previous chapter explored the characteristics of raster data, this chapter is focused on vector data. The vector data structure in ERDAS IMAGINE is based on the ARC/INFO data model (developed by ESRI, Inc.). This chapter describes vector data, attribute information, and symbolization.

You do not need ARC/INFO software or an ARC/INFO license to use the vector capabilities in ERDAS IMAGINE. Since the ARC/INFO data model is used in ERDAS IMAGINE, you can use ARC/INFO coverages directly without importing them.

See "CHAPTER 10: Geographic Information Systems" for information on editing vector layers and using vector data in a GIS.

Vector data consist of:

• points

• lines

• polygons

Each is illustrated in Figure 16.

Figure 16: Vector Elements

[Figure 16 illustrates the three vector elements: points, lines (made up of vertices, with nodes at either end), and polygons (closed sets of lines with a label point).]


Points

A point is represented by a single x,y coordinate pair. Points can represent the location of a geographic feature or a point that has no area, such as a mountain peak. Label points are also used to identify polygons (see below).

Lines

A line (polyline) is a set of line segments and represents a linear geographic feature, such as a river, road, or utility line. Lines can also represent non-geographical boundaries, such as voting districts, school zones, contour lines, etc.

Polygons

A polygon is a closed line or closed set of lines defining a homogeneous area, such as soil type, land use, or water body. Polygons can also be used to represent non-geographical features, such as wildlife habitats, state borders, commercial districts, etc. Polygons also contain label points that identify the polygon. The label point links the polygon to its attributes.

Vertex

The points that define a line are vertices. A vertex is a point that defines an element, such as the endpoint of a line segment or a location in a polygon where the line segment defining the polygon changes direction. The ending points of a line are called nodes. Each line has two nodes: a from-node and a to-node. The from-node is the first vertex in a line. The to-node is the last vertex in a line. Lines join other lines only at nodes. A series of lines in which the from-node of the first line joins the to-node of the last line is a polygon.

Figure 17: Vertices

In Figure 17, the line and the polygon are each defined by three vertices.

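The relationships among vertices, nodes, lines, and label points can be pictured with a few simple data classes; these are illustrative structures only, not the ARC/INFO coverage file layout.

    from dataclasses import dataclass

    @dataclass
    class Vertex:
        x: float
        y: float

    @dataclass
    class Line:
        vertices: list          # ordered vertices; the first is the from-node, the last the to-node

        @property
        def from_node(self):
            return self.vertices[0]

        @property
        def to_node(self):
            return self.vertices[-1]

    @dataclass
    class Polygon:
        boundary: list          # lines joined node-to-node so that they close on themselves
        label_point: Vertex     # links the polygon to its attribute record

    road = Line([Vertex(0, 0), Vertex(5, 2), Vertex(9, 1)])
    print(road.from_node, road.to_node)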


Coordinates

Vector data are expressed by the coordinates of vertices. The vertices that define each element are referenced with x,y, or Cartesian, coordinates. In some instances, those coordinates may be inches (as in some CAD applications), but often the coordinates are map coordinates, such as State Plane, Universal Transverse Mercator (UTM), or Lambert Conformal Conic. Vector data digitized from an ungeoreferenced image are expressed in file coordinates.

Tics

Vector layers are referenced to coordinates or a map projection system using tic files that contain geographic control points for the layer. Every vector layer must have a tic file. Tics are not topologically linked to other features in the layer and do not have descriptive data associated with them.

Vector Layers

Although it is possible to have points, lines, and polygons in a single layer, a layer typically consists of one type of feature. It is possible to have one vector layer for streams (lines) and another layer for parcels (polygons). A vector layer is defined as a set of features where each feature has a location (defined by coordinates and topological pointers to other features) and, possibly, attributes (defined as a set of named items or variables) (ESRI 1989). Vector layers contain both the vector features (points, lines, polygons) and the attribute information.

Usually, vector layers are also divided by the type of information they represent. This enables the user to isolate data into themes, similar to the themes used in raster layers. Political districts and soil types would probably be in separate layers, even though both are represented with polygons. If the project requires that the coincidence of features in two or more layers be studied, the user can overlay them or create a new layer.

See "CHAPTER 10: Geographic Information Systems" for more information about analyzing vector layers.

Topology

The spatial relationships between features in a vector layer are defined using topology. In topological vector data, a mathematical procedure is used to define connections between features, identify adjacent polygons, and define a feature as a set of other features (e.g., a polygon is made of connecting lines) (ESRI 1990).

Topology is not automatically created when a vector layer is created. It must be added later using specific functions. Topology must also be updated after a layer is edited.

"Digitizing" on page 47 describes how topology is created for a new or edited vector layer.


Vector Files

As mentioned above, the ERDAS IMAGINE vector structure is based on the ARC/INFO data model used for ARC coverages. This georelational data model is actually a set of files using the computer’s operating system for file management and input/output. An ERDAS IMAGINE vector layer is stored in subdirectories on the disk. Vector data are represented by a set of logical tables of information, stored as files within the subdirectory. These files may serve the following purposes:

• define features

• provide feature attributes

• cross-reference feature definition files

• provide descriptive information for the coverage as a whole

A workspace is a location which contains one or more vector layers. Workspaces provide a convenient means for organizing layers into related groups. They also provide a place for the storage of tabular data not directly tied to a particular layer. Each workspace is completely independent. It is possible to have an unlimited number of workspaces and an unlimited number of vector layers in a workspace. Table 1 summarizes the types of files that are used to make up vector layers.

Table 1: Description of File Types

File Type                       File   Description

Feature Definition Files        ARC    Line coordinates and topology
                                CNT    Polygon centroid coordinates
                                LAB    Label point coordinates and topology
                                TIC    Tic coordinates

Feature Attribute Files         AAT    Line (arc) attribute table
                                PAT    Polygon or point attribute table

Feature Cross-Reference File    PAL    Polygon/line/node cross-reference file

Layer Description Files         BND    Coordinate extremes
                                LOG    Layer history file
                                PRJ    Coordinate definition file
                                TOL    Layer tolerance file


Figure 18 illustrates how a typical vector workspace is set up (ESRI 1992).

Figure 18: Workspace Structure

Because vector layers are stored in directories rather than in simple files, you MUST use the utilities provided in ERDAS IMAGINE to copy and rename them. A utility is also provided to update path names that are no longer correct due to the use of regular system commands on vector layers.

See the ESRI documentation for more detailed information about the different vector files.

[Figure 18 shows a workspace directory named georgia containing an INFO directory and vector layer subdirectories such as demo, parcels, roads, streets, and testdata.]


Attribute Information

Along with points, lines, and polygons, a vector layer can have a wealth of descriptive, or attribute, information associated with it. Attribute information is displayed in ERDAS IMAGINE CellArrays. This is the same information that is stored in the INFO database of ARC/INFO. Some attributes are automatically generated when the layer is created. Custom fields can be added to each attribute table. Attribute fields can contain numerical or character data.

The attributes for a roads layer may look similar to the example in Figure 19. The user can select features in the layer based on the attribute information. Likewise, when a row is selected in the attribute CellArray, that feature is highlighted in the Viewer.

Figure 19: Attribute Example

Using Imported Attribute Data

When external data types are imported into ERDAS IMAGINE, only the required attribute information is imported into the attribute tables (AAT and PAT files) of the new vector layer. The rest of the attribute information is written to one of the following INFO files:

• <layer name>.ACODE - arc attribute information

• <layer name>.PCODE - polygon attribute information

• <layer name>.XCODE - point attribute information

To utilize all of this attribute information, the INFO files can be merged into the PAT and AAT files. Once this attribute information has been merged, it can be viewed in IMAGINE CellArrays and edited as desired. This new information can then be exported back to its original format.

The complete path of the file must be specified when establishing an INFO file name in an ERDAS IMAGINE Viewer application, such as exporting attributes or merging attributes, as shown in the example below:

/georgia/parcels/info!arc!parcels.pcode


Use the Attributes option in the IMAGINE Viewer to view and manipulate vector attribute data, including merging and exporting. (The Raster Attribute Editor is for raster attributes only and cannot be used to edit vector attributes.)

See the ERDAS IMAGINE On-Line Help for more information about using CellArrays.

Displaying Vector Data

Vector data are displayed in Viewers, as are other data types in ERDAS IMAGINE. The user can display a single vector layer, overlay several layers in one Viewer, or display a vector layer(s) over a raster layer(s).

In layers that contain more than one feature (a combination of points, lines, and polygons), the user can select which features to display. For example, if a user is studying parcels, he or she may want to display only the polygons in a layer that also contains street centerlines (lines).

Color Schemes

Vector data are usually assigned class values in the same manner as the pixels in a thematic raster file. These class values correspond to different colors on the display screen. As with a pseudo color image, the user can assign a color scheme for displaying the vector classes.

See "CHAPTER 4: Image Display" for a thorough discussion of how images are displayed.

Symbolization

Vector layers can be displayed with symbolization, meaning that the attributes can be used to determine how points, lines, and polygons are rendered. Points, lines, polygons, and nodes are symbolized using styles and symbols similar to annotation. For example, if a point layer represents cities and towns, the appropriate symbol could be used at each point based on the population of that area.

Points

Point symbolization options include symbol, size, and color. The symbols available are the same symbols available for annotation.

Lines

Lines can be symbolized with varying line patterns, composition, width, and color. The line styles available are the same as those available for annotation.

Polygons

Polygons can be symbolized as lines or as filled polygons. Polygons symbolized as lines can have varying line styles (see Lines above). For filled polygons, either a solid fill color or a repeated symbol can be selected. When symbols are used, the user selects the symbol to use, the symbol size, symbol color, background color, and the x- and y-separation between symbols. Figure 20 illustrates a pattern fill.


Figure 20: Symbolization Example

See the ERDAS IMAGINE Tour Guides or On-Line Help for information about selecting features and using CellArrays.

The vector layer reflects the symbolization that is defined in the Symbology dialog.


Vector Data Sources

Vector data are created by:

• tablet digitizing—maps, photographs, or other hardcopy data can be digitized using a digitizing tablet

• screen digitizing—create new vector layers by using the mouse to digitize on the screen

• using other software packages—many external vector data types can be converted to ERDAS IMAGINE vector layers

• converting raster layers—raster layers can be converted to vector layers

Each of these options is discussed in a separate section below.

Digitizing

In the broadest sense, digitizing refers to any process that converts non-digital data into numbers. However, in ERDAS IMAGINE, the digitizing of vectors refers to the creation of vector data from hardcopy materials or raster images that are traced using a digitizer keypad on a digitizing tablet or a mouse on a displayed image.

Any image not already in digital format must be digitized before it can be read by the computer and incorporated into the data base. Most Landsat, SPOT, or other satellite data are already in digital format upon receipt, so it is not necessary to digitize them. However, the user may also have maps, photographs, or other non-digital data that contain information they want to incorporate into the study. Or, the user may want to extract certain features from a digital image to include in a vector layer. Tablet digitizing and screen digitizing enable the user to digitize certain features of a map or photograph, such as roads, bodies of water, voting districts, and so forth.

Tablet Digitizing

Tablet digitizing involves the use of a digitizing tablet to transfer non-digital data such as maps or photographs to vector format. The digitizing tablet contains an internal electronic grid that transmits data to ERDAS IMAGINE on cue from a digitizer keypad operated by the user.

Figure 21: Digitizing Tablet


Digitizer Set-Up

The map or photograph to be digitized is secured on the tablet, and a coordinate system is established with a set-up procedure.

Digitizer Operation

The hand-held digitizer keypad features a small window with cross-hairs and keypad buttons. Position the intersection of the cross-hairs directly over the point to be digitized. Depending on the type of equipment and the program being used, one of the input buttons is pushed to tell the system which function to perform, such as:

• digitize a point (i.e., transmit map coordinate data),

• connect a point to previous points,

• assign a particular value to the point or polygon,

• measure the distance between points, etc.

Move the puck along the desired polygon boundaries or lines, digitizing points at appropriate intervals (where lines curve or change direction), until all the points are satisfactorily completed.

Newly created vector layers do not contain topological data. You must create topology using the Build or Clean options. This is discussed further in “Chapter 9: Geographic Information Systems.”

Digitizing Modes

There are two modes used in digitizing:

• point mode — one point is generated each time a keypad button is pressed

• stream mode — points are generated continuously at specified intervals, while the puck is in proximity to the surface of the digitizing tablet

You can create a new vector layer from the IMAGINE Viewer. Select the Tablet Input function from the Viewer to use a digitizing tablet to enter new information into that layer.

Measurement

The digitizing tablet can also be used to measure both linear and areal distances on a map or photograph. The digitizer puck is used to outline the areas to measure; the measurements themselves reduce to coordinate arithmetic, as sketched after the list below. The user can measure:

• lengths and angles by drawing a line

• perimeters and areas using a polygonal, rectangular, or elliptical shape

• positions by specifying a particular point
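Perimeter and area measurements of a digitized outline reduce to simple coordinate arithmetic. The sketch below is a minimal illustration (not ERDAS IMAGINE code): it assumes the digitized vertices are already in a planar map coordinate system such as UTM, and applies the distance formula for perimeter and the shoelace formula for area.

# Minimal sketch: perimeter and area of a digitized polygon.
# Assumes vertices are (x, y) pairs in planar map units (e.g., meters).
from math import hypot

def perimeter(vertices):
    """Sum of edge lengths, closing the polygon back to the first vertex."""
    n = len(vertices)
    return sum(hypot(vertices[(i + 1) % n][0] - vertices[i][0],
                     vertices[(i + 1) % n][1] - vertices[i][1])
               for i in range(n))

def area(vertices):
    """Shoelace formula; the absolute value makes the result orientation-independent."""
    n = len(vertices)
    s = sum(vertices[i][0] * vertices[(i + 1) % n][1] -
            vertices[(i + 1) % n][0] * vertices[i][1]
            for i in range(n))
    return abs(s) / 2.0

# Example: a 100 m x 50 m rectangle digitized as four points
rect = [(0, 0), (100, 0), (100, 50), (0, 50)]
print(perimeter(rect), area(rect))   # 300.0 5000.0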


Measurements can be saved to a file, printed, and copied. These operations can also be performed with screen digitizing.

Select the Measure function from the IMAGINE Viewer or click on the Ruler tool in the Viewer tool bar to enable tablet or screen measurement.

Screen Digitizing

In screen digitizing, vector data are drawn in the Viewer with a mouse using the displayed image as a reference. These data are then written to a vector layer.

Screen digitizing is used for the same purposes as tablet digitizing, such as:

• digitizing roads, bodies of water, political boundaries

• selecting training samples for input to the classification programs

• outlining an area of interest for any number of applications

Create a new vector layer from the IMAGINE Viewer.

Imported Vector Data

Many types of vector data from other software packages can be incorporated into the ERDAS IMAGINE system. These data formats include:

• ARC/INFO GENERATE format files from ESRI, Inc.

• ARC/INFO INTERCHANGE files from ESRI, Inc.

• ARCVIEW Shape files from ESRI, Inc.

• Digital Line Graphs (DLG) from U.S.G.S.

• Digital Exchange Files (DXF) from Autodesk, Inc.

• ETAK MapBase files from ETAK, Inc.

• Initial Graphics Exchange Standard (IGES) files

• Intergraph Design (DGN) files from Intergraph

• Spatial Data Transfer Standard (SDTS) vector files

• Topologically Integrated Geographic Encoding and Referencing System (TIGER) files from the U.S. Census Bureau

• Vector Product Format (VPF) files from the Defense Mapping Agency

See "CHAPTER 3: Raster and Vector Data Sources" for more information on these data.


Raster to Vector Conversion

A raster layer can be converted to a vector layer and used as another layer in a vector data base. The diagram below illustrates a thematic file in raster format that has been converted to vector format.

Figure 22: Raster Format Converted to Vector Format

Most commonly, thematic raster data rather than continuous data are converted to vector format, since converting continuous layers may create more vector features than are practical or even manageable.

Convert vector data to raster data, and vice versa, using ERDAS IMAGINE Vector.

(Left: raster soils layer. Right: soils layer converted to vector polygon layer.)


CHAPTER 3
Raster and Vector Data Sources

Introduction

This chapter is an introduction to the most common raster and vector data types that can be used with the ERDAS IMAGINE software package. The raster data types covered include:

• visible/infrared satellite data

• radar imagery

• airborne sensor data

• digital terrain models

• scanned or digitized maps and photographs

The vector data types covered include:

• ARC/INFO GENERATE format files

• USGS Digital Line Graphs (DLG)

• AutoCAD Digital Exchange Files (DXF)

• MapBase digital street network files (ETAK)

• U.S. Department of Commerce Initial Graphics Exchange Standard files (IGES)

• U.S. Census Bureau Topologically Integrated Geographic Encoding and Referencing System files (TIGER)

Importing and Exporting Raster Data

There is an abundance of data available for use in GIS today. In addition to satellite and airborne imagery, raster data sources include digital x-rays, sonar, microscopic imagery, video digitized data, and many other sources.

Because of the wide variety of data formats, ERDAS IMAGINE provides two options for importing data:

• Direct import for specific formats

• Generic import for general formats

Direct Import

Table 2 lists some of the raster data formats that can be directly imported to and exported from ERDAS IMAGINE:



Once imported, the raster data are converted to the ERDAS IMAGINE file format (.img). The direct import function will import the data file values that make up the raster image, as well as the ephemeris or additional data inherent to the data structure. For example, when the user imports Landsat data, ERDAS IMAGINE also imports the georeferencing data for the image.

Raster data formats cannot be exported as vector data formats, unless they are converted with the Vector utilities.

Each direct function is programmed specifically for that type of data and cannot be used to import other data types.

Generic Import

The Generic import option is a flexible program which enables the user to define the data structure for ERDAS IMAGINE. This program allows the import of BIL, BIP, and BSQ data that are stored in left to right, top to bottom row order. Data formats from unsigned 1-bit up to 64-bit floating point can be imported. This program imports only the data file values—it does not import ephemeris data, such as georeferencing information. However, this ephemeris data can be viewed using the Data View option (from the Utility menu or the Import dialog).
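To make the interleaving terms concrete, the following sketch shows how a generic, header-less BIL (band interleaved by line) file could be read into an array once the user knows the rows, columns, bands, and data type. This is only an illustration of the layout the Generic import option expects, not ERDAS IMAGINE code; the file name and dimensions are hypothetical.

import numpy as np

# Hypothetical generic BIL file: values stored left to right, top to bottom,
# with all bands of each line stored together (band interleaved by line).
rows, cols, bands = 512, 512, 7          # assumed; normally supplied with the data
dtype = np.uint8                          # unsigned 8-bit in this example

raw = np.fromfile("scene.bil", dtype=dtype)   # hypothetical file name
raw = raw.reshape(rows, bands, cols)          # BIL order: line, band, column
image = raw.transpose(0, 2, 1)                # -> (row, column, band)

# For comparison:
#   BSQ (band sequential):          reshape(bands, rows, cols)
#   BIP (band interleaved by pixel): reshape(rows, cols, bands)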

Table 2: Raster Data Formats for Direct Import

ADRG; ADRI; ASCII; AVHRR; BIL, BIP, BSQ (see Generic Import on page 52); DTED; ERDAS 7.X (.LAN, .GIS, .ANT); GRID; Landsat; RADARSAT; SPOT; Sun Raster; TIFF; USGS DEM


Complex data cannot be imported using this program; however, they can be imported as two real images and then combined into one complex image using the IMAGINE Spatial Modeler.

You cannot import tiled or compressed data using the Generic import option.
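As noted above, complex data can be imported as two real images and then combined into one complex image. Combining two real-valued rasters this way is conceptually just pairing each in-phase value with its quadrature counterpart. The sketch below shows the idea with NumPy; it is an illustration of the concept, not the Spatial Modeler syntax, and the file names and dimensions are hypothetical.

import numpy as np

# Hypothetical real and imaginary components, imported as two real images
# of identical dimensions (e.g., I and Q channels of a radar scene).
real_part = np.fromfile("scene_i.raw", dtype=np.float32).reshape(1024, 1024)
imag_part = np.fromfile("scene_q.raw", dtype=np.float32).reshape(1024, 1024)

complex_image = real_part + 1j * imag_part      # one complex image
magnitude = np.abs(complex_image)               # often displayed as intensity
phase = np.angle(complex_image)                 # phase in radians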

Importing and Exporting Vector Data

Vector layers can be created within ERDAS IMAGINE by digitizing points, lines, and polygons using a digitizing tablet or the computer screen. Several vector data types, which are available from a variety of government agencies and private companies, can also be imported. Table 3 lists some of the vector data formats that can be imported to, and exported from, ERDAS IMAGINE:

Once imported, the vector data are automatically converted to ERDAS IMAGINE vector layers.

These vector formats are discussed in more detail in "Vector Data from Other Software Vendors" on page 90. See "CHAPTER 2: Vector Layers" for more information on ERDAS IMAGINE vector layers.

Import and export vector data with the Import/Export function. You can also convert vector layers to raster format, and vice versa, with the ERDAS IMAGINE Vector utilities.

Table 3: Vector Data Formats for Import and Export

GENERATE; DXF; DLG; ETAK; IGES; TIGER


Satellite Data

There are several data acquisition options available including photography, aerial sensors, and sophisticated satellite scanners. However, a satellite system offers these advantages:

• Digital data gathered by a satellite sensor can be transmitted over radio or microwave communications links and stored on magnetic tapes, so they are easily processed and analyzed by a computer.

• Many satellites orbit the earth, so the same area can be covered on a regular basis for change detection.

• Once the satellite is launched, the cost for data acquisition is less than that for aircraft data.

• Satellites have very stable geometry, meaning that there is less chance for distortion or skew in the final image.

Use the Import/Export function to import a variety of satellite data.

Satellite System

A satellite system is composed of a scanner with sensors and a satellite platform. The sensors are made up of detectors.

• The scanner is the entire data acquisition system, such as the Landsat Thematic Mapper scanner or the SPOT panchromatic scanner (Lillesand and Kiefer 1987). It includes the sensor and the detectors.

• A sensor is a device that gathers energy, converts it to a signal, and presents it in a form suitable for obtaining information about the environment (Colwell 1983).

• A detector is the device in a sensor system that records electromagnetic radiation. For example, in the sensor system on the Landsat Thematic Mapper scanner there are 16 detectors for each wavelength band (except band 6, which has 4 detectors).

In a satellite system, the total width of the area on the ground covered by the scanner is called the swath width, or width of the total field of view (FOV). FOV differs from IFOV (instantaneous field of view) in that the IFOV is a measure of the field of view of each detector. The FOV is a measure of the field of view of all the detectors combined.


Satellite Characteristics

The U.S. Landsat and the French SPOT satellites are two important data acquisition satellites. These systems provide the majority of remotely sensed digital images in use today. The Landsat and SPOT satellites have several characteristics in common:

• They have sun-synchronous orbits, meaning that the orbit keeps a constant relationship with the sun, so data are always collected at the same local time of day over the same region.

• They both record electromagnetic radiation in one or more bands. Multiband data are referred to as multispectral imagery. Single band, or monochrome, imagery is called panchromatic.

• Both scanners can produce nadir views. Nadir is the area on the ground directly beneath the scanner's detectors.

NOTE: The current SPOT system has the ability to collect off-nadir stereo imagery, as will the future Landsat 7 system.

Image Data Comparison

Figure 23 shows a comparison of the electromagnetic spectrum recorded by Landsat TM, Landsat MSS, SPOT, and NOAA AVHRR data. These data are described in detail in the following sections.


Figure 23: Multispectral Imagery Comparison

(Figure 23 charts, along a wavelength axis from 0.5 to 13.0 micrometers, the band positions of Landsat MSS (bands 1-4), Landsat TM (bands 1-7), SPOT XS (bands 1-3), SPOT Panchromatic, and NOAA AVHRR (bands 1-5).)

1 NOAA AVHRR band 5 is not on the NOAA 10 satellite, but is on NOAA 11.


Landsat

In 1972, the National Aeronautics and Space Administration (NASA) initiated the first civilian program specializing in the acquisition of remotely sensed digital satellite data. The first system was called ERTS (Earth Resources Technology Satellites), and later renamed to Landsat. There have been several Landsat satellites launched since 1972. Landsats 1, 2, and 3 are no longer operating, but Landsats 4 and 5 are still in orbit gathering data.

Landsats 1, 2, and 3 gathered Multispectral Scanner (MSS) data and Landsats 4 and 5 collect MSS and Thematic Mapper (TM) data. MSS and TM are discussed in more detail below.

NOTE: Landsat data are available through the Earth Observation Satellite Company (EOSAT) or the EROS Data Center. See "Ordering Raster Data" on page 84 for more information.

MSS

The MSS (Multispectral Scanner) has a swath width of approximately 185 km (a full scene covers approximately 185 × 170 km) from a height of approximately 900 km for Landsats 1, 2, and 3, and 705 km for Landsats 4 and 5. MSS data are widely used for general geologic studies as well as vegetation inventories.

The spatial resolution of MSS data is 56 × 79 m, with a 79 × 79 m IFOV. A typical scene contains approximately 2340 rows and 3240 columns. The radiometric resolution is 6-bit, but it is stored as 8-bit (Lillesand and Kiefer 1987).

Detectors record electromagnetic radiation (EMR) in four bands:

• Bands 1 and 2 are in the visible portion of the spectrum and are useful in detecting cultural features, such as roads. These bands also show detail in water.

• Bands 3 and 4 are in the near-infrared portion of the spectrum and can be used in land/water and vegetation discrimination.

1 = Green, 0.50-0.60 µm
This band scans the region between the blue and red chlorophyll absorption bands. It corresponds to the green reflectance of healthy vegetation, and it is also useful for mapping water bodies.

2 = Red, 0.60-0.70 µm
This is the red chlorophyll absorption band of healthy green vegetation and represents one of the most important bands for vegetation discrimination. It is also useful for determining soil boundary and geological boundary delineations and cultural features.

3 = Reflective infrared, 0.70-0.80 µm
This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

4 = Reflective infrared, 0.80-1.10 µm
This band is useful for vegetation surveys and for penetrating haze (Jensen 1996).


TM

The TM (Thematic Mapper) scanner is a multispectral scanning system much like the MSS, except that the TM sensor records reflected/emitted electromagnetic energy from the visible, reflective-infrared, middle-infrared, and thermal-infrared regions of the spectrum. TM has higher spatial, spectral, and radiometric resolution than MSS.

TM has a swath width of approximately 185 km from a height of approximately 705 km. It is useful for vegetation type and health determination, soil moisture, snow and cloud differentiation, rock type discrimination, etc.

The spatial resolution of TM is 28.5 × 28.5 m for all bands except the thermal (band 6), which has a spatial resolution of 120 × 120 m. The larger pixel size of this band is necessary for adequate signal strength. However, the thermal band is resampled to 28.5 × 28.5 m to match the other bands. The radiometric resolution is 8-bit, meaning that each pixel has a possible range of data values from 0 to 255.

Detectors record EMR in seven bands:

• Bands 1, 2, and 3 are in the visible portion of the spectrum and are useful in detecting cultural features such as roads. These bands also show detail in water.

• Bands 4, 5, and 7 are in the reflective-infrared portion of the spectrum and can be used in land/water discrimination.

• Band 6 is in the thermal portion of the spectrum and is used for thermal mapping (Jensen 1996; Lillesand and Kiefer 1987).

1 = Blue, 0.45-0.52 µm
Useful for mapping coastal water areas, differentiating between soil and vegetation, forest type mapping, and detecting cultural features.

2 = Green, 0.52-0.60 µm
Corresponds to the green reflectance of healthy vegetation. Also useful for cultural feature identification.

3 = Red, 0.63-0.69 µm
Useful for discriminating between many plant species. It is also useful for determining soil boundary and geological boundary delineations as well as cultural features.

4 = Reflective-infrared, 0.76-0.90 µm
This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

5 = Mid-infrared, 1.55-1.74 µm
This band is sensitive to the amount of water in plants, which is useful in crop drought studies and in plant health analyses. This is also one of the few bands that can be used to discriminate between clouds, snow, and ice.


6 = Thermal-infrared, 10.40-12.50 µm
This band is useful for vegetation and crop stress detection, heat intensity, insecticide applications, and for locating thermal pollution. It can also be used to locate geothermal activity.

7 = Mid-infrared, 2.08-2.35 µm
This band is important for the discrimination of geologic rock type and soil boundaries, as well as soil and vegetation moisture content.

Figure 24: Landsat MSS vs. Landsat TM

Band Combinations for Displaying TM Data

Different combinations of the TM bands can be displayed to create different composite effects. The following combinations are commonly used to display images; a simple compositing sketch follows the list:

NOTE: The order of the bands corresponds to the Red, Green, and Blue (RGB) color guns of the monitor.

• Bands 3, 2, 1 create a true color composite. True color means that objects look as they would to the naked eye—similar to a color photograph.

• Bands 4, 3, 2 create a false color composite. False color composites appear similar to an infrared photograph where objects do not have the same colors or contrasts as they would naturally. For instance, in an infrared image, vegetation appears red, water appears navy or black, etc.

• Bands 5, 4, 2 create a pseudo color composite. (A thematic image is also a pseudo color image.) In pseudo color, the colors do not reflect the features in natural colors. For instance, roads may be red, water yellow, and vegetation blue.
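As a rough illustration of how such composites are built outside of any particular package, the sketch below stacks three TM bands into the red, green, and blue channels of a display array and applies a simple linear stretch. It assumes the bands are already available as 2D NumPy arrays (tm4, tm3, tm2 are hypothetical names); it is not ERDAS IMAGINE code.

import numpy as np

def stretch(band):
    """Linear 0-255 stretch of one band for display."""
    band = band.astype(np.float32)
    lo, hi = band.min(), band.max()
    return ((band - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

def composite(red_band, green_band, blue_band):
    """Stack three bands into an RGB display array of shape (rows, cols, 3)."""
    return np.dstack([stretch(red_band), stretch(green_band), stretch(blue_band)])

# False color composite: TM band 4 -> red, band 3 -> green, band 2 -> blue.
# tm4, tm3, tm2 are hypothetical 2D arrays holding the band data.
# rgb = composite(tm4, tm3, tm2)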

(Figure 24 contrasts the two sensors: MSS has 4 bands, roughly 57 × 79 m pixels, and a 0-127 radiometric range; TM has 7 bands, 30 × 30 m pixels, and a 0-255 radiometric range.)


Different color schemes can be used to bring out or enhance the features under study. These are by no means all of the useful combinations of these seven bands. The bands to be used are determined by the particular application.

See "CHAPTER 4: Image Display" for more information on how images are displayed, "CHAPTER 5: Enhancement" for more information on how images can be enhanced, and "Ordering Raster Data" on page 84 for information on types of Landsat data available.

SPOT

The first Systeme Pour l'observation de la Terre (SPOT) satellite, developed by the French Centre National d'Etudes Spatiales (CNES), was launched in early 1986. The second SPOT satellite was launched in 1990 and the third was launched in 1993. The sensors operate in two modes, multispectral and panchromatic. SPOT is commonly referred to as a pushbroom scanner, meaning that all scanning parts are fixed and scanning is accomplished by the forward motion of the scanner. SPOT pushes 3000/6000 sensors along its orbit. This is different from Landsat, which scans with 16 detectors perpendicular to its orbit.

The SPOT satellite can observe the same area on the globe once every 26 days. The SPOT scanner normally produces nadir views, but it does have off-nadir viewing capability. Off-nadir refers to any point that is not directly beneath the detectors, but off to an angle. Using this off-nadir capability, one area on the earth can be viewed as often as every 3 days.

This off-nadir viewing can be programmed from the ground control station, and is quite useful for collecting data in a region not directly in the path of the scanner or in the event of a natural or man-made disaster, where timeliness of data acquisition is crucial. It is also very useful in collecting stereo data from which elevation data can be extracted.

The width of the swath observed varies between 60 km for nadir viewing and 80 km for off-nadir viewing at a height of 832 km (Jensen 1996).

Panchromatic

SPOT Panchromatic (meaning sensitive to all visible colors) has 10 × 10 m spatial resolution, contains 1 band—0.51 to 0.73 µm—and is similar to a black and white photograph. It has a radiometric resolution of 8 bits (Jensen 1996).

XS

SPOT XS, or multispectral, has 20 × 20 m spatial resolution, 8-bit radiometric resolution, and contains 3 bands (Jensen 1996).

1 = Green, 0.50-0.59 µm
Corresponds to the green reflectance of healthy vegetation.

2 = Red, 0.61-0.68 µm
Useful for discriminating between plant species. It is also useful for soil boundary and geological boundary delineations.


3 = Reflective infrared, 0.79-0.89 µm
This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.

Figure 25: SPOT Panchromatic vs. SPOT XS

See "Ordering Raster Data" on page 84 for information on the types of SPOT data available.

Stereoscopic Pairs

Two observations can be made by the panchromatic scanner on successive days, so that the two images are acquired at angles on either side of the vertical, resulting in stereoscopic imagery. Stereoscopic imagery can also be achieved by using one vertical scene and one off-nadir scene. This type of imagery can be used to produce a single image, or topographic and planimetric maps (Jensen 1996).

Topographic maps indicate elevation. Planimetric maps correctly represent horizontal distances between objects (Star and Estes 1990).

See "Topographic Data" on page 81 and "CHAPTER 9: Terrain Analysis" for more information about topographic data and how SPOT stereopairs and aerial photographs can be used to create elevation data and orthographic images.

(Figure 25 contrasts the two modes: Panchromatic has 1 band and 10 × 10 m pixels; XS has 3 bands and 20 × 20 m pixels; both have a 0-255 radiometric range.)


NOAA Polar Orbiter Data

The National Oceanic and Atmospheric Administration (NOAA) has sponsored several polar orbiting satellites to collect data of the earth. These satellites were originally designed for meteorological applications, but the data gathered have been used in many fields—from agronomy to oceanography (Needham 1986).

The first of these satellites to be launched was the TIROS-N in 1978. Since the TIROS-N, five additional NOAA satellites have been launched. Of these, the last three are still in orbit gathering data.

AVHRR

The NOAA AVHRR (Advanced Very High Resolution Radiometer) data are small-scale data and often cover an entire country. The swath width is 2700 km and the satellites orbit at a height of approximately 833 km (Kidwell 1988; Needham 1986).

The AVHRR system allows for direct transmission in real-time of data called High Resolution Picture Transmission (HRPT). It also allows for about ten minutes of data to be recorded over any portion of the world on two recorders on board the satellite. This recorded data are called Local Area Coverage (LAC). LAC and HRPT have identical formats; the only difference is that HRPT are transmitted directly and LAC are recorded.

There are three basic formats for AVHRR data which can be imported into ERDAS IMAGINE:

• Local Area Coverage (LAC) - data recorded on board the sensor with a spatial resolution of approximately 1.1 × 1.1 km.

• High Resolution Picture Transmission (HRPT) - direct transmission of AVHRR data in real-time with the same resolution as LAC.

• Global Area Coverage (GAC) - data produced from LAC data by using only 1 out of every 3 scan lines. GAC data have a spatial resolution of approximately 4 × 4 km.

AVHRR data are available in 10-bit packed and 16-bit unpacked format. The term packed refers to the way in which the data are written to the tape. Packed data are compressed to fit more data on each tape (Kidwell 1988).
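The following sketch illustrates what "packed" means in practice. It assumes a common NOAA convention in which three 10-bit samples are stored in each 32-bit word (with the two high-order bits unused); actual AVHRR Level 1b layouts vary by product and should be taken from the format documentation, so treat this purely as an illustration of bit unpacking.

import numpy as np

def unpack_10bit_words(words):
    """Unpack 32-bit words holding three 10-bit samples each (bits 29-20, 19-10, 9-0)."""
    words = np.asarray(words, dtype=np.uint32)
    s1 = (words >> 20) & 0x3FF      # first sample, value range 0-1023
    s2 = (words >> 10) & 0x3FF      # second sample
    s3 = words & 0x3FF              # third sample
    return np.column_stack([s1, s2, s3]).ravel()

# Example: one packed word containing the samples 5, 600, and 1023
packed = np.array([(5 << 20) | (600 << 10) | 1023], dtype=np.uint32)
print(unpack_10bit_words(packed))   # [   5  600 1023]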

AVHRR images are useful for snow cover mapping, flood monitoring, vegetation mapping, regional soil moisture analysis, wildfire fuel mapping, fire detection, dust and sandstorm monitoring, and various geologic applications (Lillesand and Kiefer 1987). The entire globe can be viewed in 14.5 days. There may be four or five bands, depending on when the data were acquired.

1 = Visible, 0.58-0.68 µm
This band corresponds to the green reflectance of healthy vegetation and is important for vegetation discrimination.

2 = Near-infrared, 0.725-1.10 µm
This band is especially responsive to the amount of vegetation biomass present in a scene. It is useful for crop identification and emphasizes soil/crop and land/water contrasts.


3 = Thermal-infrared, 3.55-3.93 µm
This is a thermal band that can be used for snow and ice discrimination. It is also useful for detecting fires.

4 = Thermal-infrared, 10.50-11.50 µm (NOAA-6, 8, 10), 10.30-11.30 µm (NOAA-7, 9, 11)
This band is useful for vegetation and crop stress detection. It can also be used to locate geothermal activity.

5 = Thermal-infrared, 10.50-11.50 µm (NOAA-6, 8, 10), 11.50-12.50 µm (NOAA-7, 9, 11)
See band 4.

AVHRR data have a radiometric resolution of 10 bits, meaning that each pixel has a possible data file value between 0 and 1023. AVHRR scenes may contain one band, a combination of bands, or all bands. All bands are referred to as a full set, and selected bands are referred to as an extract.

See "Ordering Raster Data" on page 84 for information on the types of NOAA data available.

Use the Import/Export function to import AVHRR data.


Radar Data

Simply put, radar data are produced when:

• a radar transmitter emits a beam of micro or millimeter waves,

• the waves reflect from the surfaces they strike, and

• the backscattered radiation is detected by the radar system's receiving antenna, which is tuned to the frequency of the transmitted waves.

The resultant radar data can be used to produce radar images.

While there is a specific importer for RADARSAT data, most types of radar image data can be imported into ERDAS IMAGINE with the Generic import option of Import/Export.

A radar system can be airborne, spaceborne, or ground-based. Airborne radar systems have typically been mounted on civilian and military aircraft, but in 1978, the radar satellite Seasat-1 was launched. The radar data from that mission and subsequent spaceborne radar systems have been a valuable addition to the data available for use in GIS processing. Researchers are finding that a combination of the characteristics of radar data and visible/infrared data is providing a more complete picture of the earth. In the last decade, the importance and applications of radar have grown rapidly.

Advantages of Using Radar Data

Radar data have several advantages over other types of remotely sensed imagery:

• Radar microwaves can penetrate the atmosphere day or night under virtually all weather conditions, providing data even in the presence of haze, light rain, snow, clouds, or smoke.

• Under certain circumstances, radar can partially penetrate arid and hyperarid surfaces, revealing sub-surface features of the earth.

• Although radar does not penetrate standing water, it can reflect the surface action of oceans, lakes, and other bodies of water. Surface eddies, swells, and waves are greatly affected by the bottom features of the water body, and a careful study of surface action can provide accurate details about the bottom features.

Radar Sensors

Radar images are generated by two different types of sensors:

• SLAR (Side-looking Airborne Radar) — uses an antenna which is fixed below an aircraft and pointed to the side to transmit and receive the radar signal. (See Figure 26.)

• SAR (Synthetic Aperture Radar) — uses a side-looking, fixed antenna to create a synthetic aperture. SAR sensors are mounted on satellites and the NASA Space Shuttle. The sensor transmits and receives as it is moving. The signals received over a time interval are combined to create the image.


Both SLAR and SAR systems use side-looking geometry. Figure 26 shows a representation of an airborne SLAR system. Figure 27 shows a graph of the data received from the radiation transmitted in Figure 26. Notice how the data correspond to the terrain in Figure 26. These data can be used to produce a radar image of the target area. (A target is any object or feature that is the subject of the radar scan.)

Figure 26: SLAR Radar (Lillesand and Kiefer 1987)

Figure 27: Received Radar Signal

(Figure 26 diagrams the SLAR geometry: azimuth direction and azimuth resolution, range direction, beamwidth, and sensor height at nadir over terrain containing trees, a river, and hills with a radar shadow. Figure 27 plots the corresponding received signal strength (DN) against time.)


Active and Passive Sensors

An active radar sensor gives off a burst of coherent radiation that reflects from the target, unlike a passive microwave sensor which simply receives the low-level radiation naturally emitted by targets.

Like the coherent light from a laser, the waves emitted by active sensors travel in phase and interact minimally on their way to the target area. After interaction with the target area, these waves are no longer in phase. This is due to the different distances they travel from different targets, or single versus multiple bounce scattering.

Figure 28: Radar Reflection from Different Sources and Distances (Lillesand and Kiefer 1987)

At present, these bands are commonly used for radar imaging systems:

More information about these radar systems is given later in this chapter.

(Figure 28 illustrates diffuse, specular, and corner reflectors: radar waves are transmitted in phase; once reflected, they are out of phase, interfering with each other and producing speckle noise.)

Table 4: Commonly Used Bands for Radar Imaging

Band   Frequency Range     Wavelength Range   Radar System
X      5.20-10.90 GHz      5.77-2.75 cm       USGS SLAR
C      3.9-6.2 GHz         3.8-7.6 cm         ERS-1, Fuyo 1
L      0.39-1.55 GHz       76.9-19.3 cm       SIR-A, B, Almaz
P      0.225-0.391 GHz     40.0-76.9 cm       AIRSAR


Radar bands were named arbitrarily when radar was first developed by the military.The letter designations have no special meaning.

NOTE: The C band overlaps the X band. Wavelength ranges may vary slightly between sensors.

Speckle Noise

Once out of phase, the radar waves can interfere constructively or destructively to produce light and dark pixels known as speckle noise. Speckle noise in radar data must be reduced before the data can be utilized. However, the radar image processing programs used to reduce speckle noise also produce changes to the image. This consideration, combined with the fact that different applications and sensor outputs necessitate different speckle removal models, has led ERDAS to offer several speckle reduction algorithms in ERDAS IMAGINE Radar.

When processing radar data, the order in which the image processing programs are implemented is crucial. This is especially true when considering the removal of speckle noise. Since any image processing done before removal of the speckle results in the noise being incorporated into and degrading the image, do not rectify, correct to ground range, or in any way resample the pixel values before removing speckle noise. A rotation using nearest neighbor might be permissible.
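To make the idea of a speckle reduction filter concrete, here is a minimal local-statistics (Lee-style) filter sketched in NumPy. It is not one of the ERDAS IMAGINE Radar algorithms, just an illustration of the general approach: estimate the local mean and variance in a moving window and damp each noisy observation toward the local mean. The window size and noise variance are assumed parameters.

import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(image, window=7, noise_var=0.25):
    """Simple Lee-style speckle filter: weight each pixel between its value
    and the local mean according to the local signal-to-noise estimate."""
    image = image.astype(np.float64)
    local_mean = uniform_filter(image, size=window)
    local_sq_mean = uniform_filter(image ** 2, size=window)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    weight = local_var / (local_var + noise_var)     # near 0 in flat areas, near 1 on edges
    return local_mean + weight * (image - local_mean)

# Usage (hypothetical single-band radar array):
# despeckled = lee_filter(radar_band, window=7, noise_var=0.25)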

ERDAS IMAGINE Radar enables the user to:

• import radar data into the GIS as a stand-alone source or as an additional layer with other imagery sources

• remove speckle noise

• enhance edges

• perform texture analysis

• perform radiometric and slant-to-ground range correction

See "CHAPTER 5: Enhancement" for more information on radar imagery enhancement.


Applications for Radar Data

Radar data can be used independently in GIS applications or combined with other satellite data, such as Landsat, SPOT, or AVHRR. Possible GIS applications for radar data include:

• Geology — radar's ability to partially penetrate land cover and sensitivity to micro relief makes radar data useful in geologic mapping, mineral exploration, and archaeology.

• Classification — a radar scene can be merged with visible/infrared data as an additional layer(s) in vegetation classification for timber mapping, crop monitoring, etc.

• Glaciology — the ability to provide imagery of ocean and ice phenomena makes radar an important tool for monitoring climatic change through polar ice variation.

• Oceanography — radar is used for wind and wave measurement, sea-state and weather forecasting, and monitoring ocean circulation, tides, and polar oceans.

• Hydrology — radar data are proving useful for measuring soil moisture content and mapping snow distribution and water content.

• Ship monitoring — the ability to provide day/night all-weather imaging, as well as detect ships and associated wakes, makes radar a tool which can be used for ship navigation through frozen ocean areas such as the Arctic or North Atlantic Passage. (The ERS-1 satellite provides excellent coverage of these specific target areas, revisiting every 35 days.)

• Offshore Oil Activities — radar data are used to provide ice updates for offshore drilling rigs, determining weather and sea conditions for drilling and installation operations, and detecting oil spills.

• Pollution monitoring — radar can detect oil on the surface of water and can be used to track the spread of an oil spill.


Current Radar Sensors

Table 5 gives a brief description of currently available radar sensors. This is not a complete list of such sensors, but it does represent the ones most useful for GIS applications.

Future Radar Sensors

Several radar satellites are planned for launch within the next several years, but only a few programs will be successful. Following are two scheduled programs which are known to be highly achievable.

Almaz 1-b

NPO Mashinostroenia plans to launch and operate Almaz-1b as a commercial program in 1998. The Almaz-1b system will include a unique, complex multisensor payload consisting of eight high resolution sensors which can operate in various sensor combinations, including high resolution, two-pass radar stereo and single-pass stereo coverage in the optical and multispectral bandwidths. Almaz-1b will feature three synthetic aperture radars (SAR) that can collect multipolar, multifrequency (X, P, S band) imagery in high resolution (5-7 m spatial; 20-30 km swath), intermediate (5-15 m spatial; 60-70 km swath), or survey (20-40 m spatial; 120-170 km swath) modes.

Light SAR

NASA/JPL is currently designing a radar satellite called Light SAR. Present plans are for this to be a multi-polar sensor operating at L-band.

Table 5: Current Radar Sensors

                ERS-1, 2       JERS-1        SIR-A, B     SIR-C            RADARSAT                  Almaz-1
Availability    operational    operational   1981, 1984   1994             operational               1991-1992
Resolution      12.5 m         18 m          25 m         25 m             10-100 m                  15 m
Revisit Time    35 days        44 days       NA           NA               3 days                    NA
Scene Area      100 × 100 km   75 × 100 km   30 × 60 km   variable swath   50 × 50 to 500 × 500 km   40 × 100 km
Bands           C band         L band        L band       L, C, X bands    C band                    C band


Image Data from Aircraft

Image data can also be acquired from multispectral scanners or radar sensors aboard aircraft, as well as satellites. This is useful if there isn't time to wait for the next satellite to pass over a particular area, or if it is necessary to achieve a specific spatial or spectral resolution that cannot be attained with satellite sensors.

For example, this type of data can be beneficial in the event of a natural or man-made disaster, because there is more control over when and where the data are gathered.

Two common types of airborne image data are:

• AIRSAR

• AVIRIS

AIRSAR

AIRSAR (Airborne Synthetic Aperture Radar) is an experimental airborne radar sensor developed by Jet Propulsion Laboratories (JPL), Pasadena, California, under a contract with NASA. AIRSAR data have been available since 1983.

This sensor collects data at three frequencies:

• C-band

• L-band

• P-band

Because this sensor measures at three different wavelengths, different scales of surface roughness are obtained. The AIRSAR sensor has an IFOV of 10 m and a swath width of 12 km.

AIRSAR data have been used in many applications such as measuring snow wetness, classifying vegetation, and estimating soil moisture.

NOTE: These data are distributed in a compressed format. They must be decompressed before loading with an algorithm available from JPL. See "Addresses to Contact" on page 85 for contact information.

AVIRIS

The AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) was also developed by JPL (Pasadena, California) under a contract with NASA. AVIRIS data have been available since 1987.

This sensor produces multispectral data that have 224 narrow bands. These bands are 10 nm wide and cover the spectral range of 0.4 - 2.4 µm. The swath width is 11 km and the spatial resolution is 20 m. This sensor is flown at an altitude of approximately 20 km. The data are recorded at 10-bit radiometric resolution.


Image Data from Scanning

Hardcopy maps and photographs can be incorporated into the ERDAS IMAGINE system through the use of a scanning camera. Scanning is remote sensing in a manner of speaking, but the term "remote sensing" is usually reserved for satellite or aerial data collection. In GIS, scanning refers to the transfer of analog data, such as photographs, maps, or other viewable images, into a digital (raster) format.

In scanning, the map, photograph, transparency, or other object to be scanned is typically placed on a flat surface, and the camera scans across the object to record the image, transferring it from analog to digital data. Different scanning systems have different setups for scanning.

There are many commonly used scanning cameras for GIS and other desktop applications, such as Eikonix (Eikonix Corp., Huntsville, Alabama) or Vexcel (Vexcel Imaging Corp., Boulder, Colorado). Many scanners produce a TIFF file, which can be read directly by ERDAS IMAGINE.

Use the Import/Export function to import scanned data.

Eikonix data can be obtained in the ERDAS IMAGINE .img format using the XSCAN™ Tool by Ektron and then imported directly into ERDAS IMAGINE.


ADRG Data

ADRG (ARC Digitized Raster Graphic) data, from the Defense Mapping Agency (DMA), are primarily used for military purposes by defense contractors. The data are in 128 × 128 pixel tiled, 8-bit format stored on CD-ROM. ADRG data provide large amounts of hardcopy graphic data without having to store and maintain the actual hardcopy graphics.

ADRG data consist of digital copies of DMA hardcopy graphics transformed into the ARC system and accompanied by ASCII encoded support files. These digital copies are produced by scanning each hardcopy graphic into three images: red, green, and blue. The data are scanned at a nominal collection interval of 100 microns (254 lines per inch). When these images are combined, they provide a 3-band digital representation of the original hardcopy graphic.

ARC System

The ARC system (Equal Arc-Second Raster Chart/Map) provides a rectangular coordinate and projection system at any scale for the earth's ellipsoid, based on the World Geodetic System 1984 (WGS 84). The ARC System divides the surface of the ellipsoid into 18 latitudinal bands called zones. Zones 1 - 9 cover the Northern hemisphere and zones 10 - 18 cover the Southern hemisphere. Zone 9 is the North Polar region. Zone 18 is the South Polar region.

Distribution Rectangles

For distribution, ADRG are divided into geographic data sets called Distribution Rectangles (DRs). A DR may include data from one or more source charts or maps. The boundary of a DR is a geographic rectangle which typically coincides with chart and map neatlines.

Zone Distribution Rectangles (ZDRs)

Each DR is divided into Zone Distribution Rectangles (ZDRs). There is one ZDR for each ARC System zone covering any part of the DR. The ZDR contains all the DR data that fall within that zone's limits. ZDRs typically overlap by 1,024 rows of pixels, which allows for easier mosaicking. Each ZDR is stored on the CD-ROM as a single raster image file (.IMG). Included in each image file are all raster data for a DR from a single ARC System zone, and padding pixels needed to fulfill format requirements. The padding pixels are black and have a zero value.

The padding pixels are not imported by ERDAS IMAGINE, nor are they counted when figuring the pixel height and width of each image.

ADRG File Format

Each CD-ROM contains up to eight different file types which make up the ADRG format. ERDAS IMAGINE imports three types of ADRG data files:

• .OVR (Overview)

• .IMG (Image)

• .Lxx (Legend or marginalia data)

NOTE: Compressed ADRG (CADRG) is a different format, with its own importer.


The ADRG .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr file formats.

.OVR (overview)

The overview file contains a 16:1 reduced resolution image of the whole DR. There is an overview file for each DR on a CD.

Importing ADRG Subsets

Since DRs can be rather large, it may be beneficial to import a subset of the DR data for the application. ERDAS IMAGINE enables the user to define a subset of the data from the preview image (see Figure 30).

You can import from only one ZDR at a time. If a subset covers multiple ZDRs, they must be imported separately and mosaicked with the ERDAS IMAGINE Mosaic option.

Figure 29: ADRG Overview File Displayed in ERDAS IMAGINE Viewer


The white rectangle in Figure 30 represents the DR. The subset area in this illustration would have to be imported as three files, one for each zone in the DR.

Notice how the ZDRs overlap. Therefore, the .IMG files for Zones 2 and 4 would also be included in the subset area.

Figure 30: Subset Area with Overlapping ZDRs

.IMG (scanned image data)

The .IMG files are the data files containing the actual scanned hardcopy graphic(s). Each .IMG file contains one ZDR plus padding pixels. The ERDAS IMAGINE Import function converts the .IMG data files on the CD-ROM to the IMAGINE file format (.img). The .img file can then be displayed in a Viewer.

.Lxx (legend data)

Legend files contain a variety of diagrams and accompanying information. This is information which typically appears in the margin or legend of the source graphic.

This information can be imported into ERDAS IMAGINE and viewed. It can also be added to a map composition with the ERDAS IMAGINE Map Composer.



Each legend file contains information based on one of these diagram types:

• Index (IN) — shows the approximate geographical position of the graphic and its relationship to other graphics in the region.

• Elevation/Depth Tint (EL) — a multicolored graphic depicting the colors or tints used to represent different elevations or depth bands on the printed map or chart.

• Slope (SL) — represents the percent and degree of slope appearing in slope bands.

• Boundary (BN) — depicts the geopolitical boundaries included on the map or chart.

• Accuracy (HA, VA, AC) — depicts the horizontal and vertical accuracies of selected map or chart areas. AC represents a combined horizontal and vertical accuracy diagram.

• Geographic Reference (GE) — depicts the positioning information as referenced to the World Geographic Reference System.

• Grid Reference (GR) — depicts specific information needed for positional determination with reference to a particular grid system.

• Glossary (GL) — gives brief lists of foreign geographical names appearing on the map or chart with their English-language equivalents.

• Landmark Feature Symbols (LS) — landmark feature symbols are used to depict navigationally-prominent entities.

ARC System Charts

The ADRG data on each CD-ROM are based on one of these chart types from the ARC system:

Table 6: ARC System Chart Types

ARC System Chart Type Scale

GNC (Global Navigation Chart) 1:5,000,000

JNC-A (Jet Navigation Chart - Air) 1:3,000,000

JNC (Jet Navigation Chart) 1:2,000,000

ONC (Operational Navigation Chart) 1:1,000,000

TPC (Tactical Pilot Chart) 1:500,000

JOG-A (Joint Operations Graphic - Air) 1:250,000

JOG-G (Joint Operations Graphic - Ground) 1:250,000

JOG-C (Joint Operations Graphic - Combined) 1:250,000

JOG-R (Joint Operations Graphic - Radar) 1:250,000

ATC (Series 200 Air Target Chart) 1:200,000

TLM (Topographic Line Map) 1:50,000


Each ARC System chart type has certain legend files associated with the image(s) on the CD-ROM. The legend files associated with each chart type are checked in Table 7.

ADRG File Naming Convention

The ADRG file naming convention is based on a series of codes: ssccddzz. A short parsing sketch follows the example below.

• ss = the chart series code (see the table of ARC System charts)

• cc = the country code

• dd = the DR number on the CD-ROM (01-99). DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east, then north to south.

• zz = the zone rectangle number (01-18)

For example, in the ADRG filename JNUR0101.IMG:

• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.

• UR = Europe. The data is coverage of a European continent.

• 01 = This is the first Distribution Rectangle on the CD-ROM, providing coverage of the northwestern edge of the image area.

• 01 = This is the first zone rectangle of the Distribution Rectangle.

• .IMG = This file contains the actual scanned image data for a ZDR.
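Because the ssccddzz pattern is fixed-width, decoding an ADRG file name is a matter of slicing the string. The sketch below is an illustrative parser for the convention described above; a real tool would validate the fields against the full code lists in the DMA product specification.

def parse_adrg_name(filename):
    """Split an ADRG file name of the form ssccddzz.ext into its code fields."""
    stem, _, ext = filename.partition(".")
    return {
        "series": stem[0:2],          # chart series code, e.g. JN = Jet Navigation
        "country": stem[2:4],         # country/area code, e.g. UR = Europe
        "dr_number": stem[4:6],       # Distribution Rectangle number (01-99)
        "zone_rectangle": stem[6:8],  # zone rectangle number (01-18)
        "extension": ext,             # IMG, OVR, or Lxx
    }

print(parse_adrg_name("JNUR0101.IMG"))
# {'series': 'JN', 'country': 'UR', 'dr_number': '01', 'zone_rectangle': '01', 'extension': 'IMG'}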

You may change this name when the file is imported into ERDAS IMAGINE. If you do not specify a file name, IMAGINE will use the ADRG file name for the image.

Table 7: Legend Files for the ARC System Chart Types

ARC System Chart IN EL SL BN VA HA AC GE GR GL LS

GNC

JNC / JNC-A

ONC

TPC

JOG-A

JOG-G / JOG-C

JOG-R

ATC

TLM


Legend File Names

Legend file names will include a code to designate the type of diagram information contained in the file (see the previous legend file description). For example, the file JNUR01IN.L01 means:

• JN = Jet Navigation. This ADRG file is taken from a Jet Navigation chart.

• UR = Europe. The data is coverage of a European continent.

• 01 = This is the first Distribution Rectangle on the CD-ROM, providing coverage of the northwestern edge of the image area.

• IN = This indicates that this file is an index diagram from the original hardcopy graphic.

• .L01 = This legend file contains information for the source graphic 01. The source graphics in each DR are numbered beginning with 01 for the northwesternmost source graphic, increasing sequentially west to east, then north to south. Source directories and their files include this number code within their names.

For more detailed information on ADRG file naming conventions, see the Defense Mapping Agency Product Specifications for ARC Digitized Raster Graphics (ADRG), published by DMA Aerospace Center.


ADRI Data

ADRI (ARC Digital Raster Imagery), like ADRG data, are also from the DMA and are currently available only to Department of Defense contractors. The data are in 128 × 128 tiled, 8-bit format, stored on 8 mm tape in band sequential format.

ADRI consists of SPOT panchromatic satellite imagery transformed into the ARC system and accompanied by ASCII encoded support files.

Like ADRG, ADRI data are stored in the ARC system in DRs. Each DR consists of all or part of one or more images mosaicked to meet the ARC bounding rectangle, which encloses a 1 degree by 1 degree geographic area. (See Figure 31.) Source images are orthorectified to mean sea level using DMA Level I Digital Terrain Elevation Data (DTED) or equivalent data (Air Force Intelligence Support Agency 1991).

See the previous section on ADRG data for more information on the ARC system. See more about DTED data on page 83.

Figure 31: Seamless Nine Image DR

In ADRI data, each DR contains only one ZDR. Each ZDR is stored as a single raster image file, with no overlapping areas.



There are six different file types that make up the ADRI format: two types of data files, three types of header files, and a color test patch file. ERDAS IMAGINE imports the two types of ADRI data files:

• .OVR (Overview)

• .IMG (Image)

The ADRI .IMG and .OVR file formats are different from the ERDAS IMAGINE .img and .ovr file formats.

.OVR (overview)

The overview file (.OVR) contains a 16:1 reduced resolution image of the whole DR. There is an overview file for each DR on a tape. The .OVR images show the mosaicking from the source images and the dates when the source images were collected. (See Figure 32.) This does not appear on the ZDR image.

Figure 32: ADRI Overview File Displayed in ERDAS IMAGINE Viewer


.IMG (scanned image data)

The .IMG files contain the actual mosaicked images. Each .IMG file contains one ZDR plus any padding pixels needed to fit the ARC boundaries. Padding pixels are black and have a zero data value. The ERDAS IMAGINE Import function converts the .IMG data files to the IMAGINE file format (.img). The .img file can then be displayed in a Viewer. Padding pixels are not imported, nor are they counted in image height or width.

ADRI File Naming Convention

The ADRI file naming convention is based on a series of codes: ssccddzz

• ss = the image source code:

- SP (SPOT panchromatic)

- SX (SPOT multispectral) (not currently available)

- TM (Landsat Thematic Mapper) (not currently available)

• cc = the country code

• dd = the DR number on the tape (01-99). DRs are numbered beginning with 01 for the northwesternmost DR and increasing sequentially west to east, then north to south.

• zz = the zone rectangle number (01-18)

For example, in the ADRI filename SPUR0101.IMG:

• SP = SPOT 10 m panchromatic image.

• UR = Europe. The data is coverage of a European continent.

• 01 = This is the first Distribution Rectangle on the tape, providing coverage of the northwestern edge of the image area.

• 01 = This is the first zone rectangle of the Distribution Rectangle.

• .IMG = This file contains the actual scanned image data for a ZDR.

You may change this name when the file is imported into ERDAS IMAGINE. If you do not specify a file name, IMAGINE will use the ADRI file name for the image.


Topographic Data

Satellite data can also be used to create elevation, or topographic, data through the use of stereoscopic pairs, as discussed above under SPOT. Radar sensor data can also be a source of topographic information, as discussed in “Chapter 8: Terrain Analysis.” However, most available elevation data are created with stereo photography and topographic maps.

ERDAS IMAGINE software can load and use:

• USGS Digital Elevation Models (DEMs)

• Digital Terrain Elevation Data (DTED)

Arc/Second Format

Most elevation data are in arc/second format. Arc/second refers to data in the Latitude/Longitude (Lat/Lon) coordinate system. The data are not rectangular, but follow the arc of the earth's latitudinal and longitudinal lines.

Each degree of latitude and longitude is made up of 60 minutes. Each minute is made up of 60 seconds. Arc/second data are often referred to by the number of seconds in each pixel. For example, 3 arc/second data have pixels which are 3 × 3 seconds in size. The actual area represented by each pixel is a function of its latitude. Figure 33 illustrates a 1° × 1° area of the earth.

A row of data file values from a DEM or DTED file is called a profile. The profiles of DEM and DTED run south to north, that is, the first pixel of the record is the southernmost pixel.

Figure 33: ARC/Second Format

In Figure 33 there are 1201 pixels in the first row and 1201 pixels in the last row, but the area represented by each pixel increases in size from the top of the file to the bottom of the file. The extracted section in the example above has been exaggerated to illustrate this point.
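The dependence of pixel footprint on latitude is easy to quantify: the north-south extent of an arc-second is essentially constant, while the east-west extent shrinks with the cosine of the latitude. The sketch below computes an approximate ground footprint for a 3 arc-second pixel using a spherical-earth approximation (mean radius 6,371 km); it is an illustration, not a replacement for a proper projection.

from math import cos, radians, pi

EARTH_RADIUS_M = 6_371_000        # mean earth radius, spherical approximation

def arcsecond_pixel_size(latitude_deg, seconds=3):
    """Approximate ground size (north-south, east-west) of a pixel in meters."""
    meters_per_degree = (pi / 180.0) * EARTH_RADIUS_M     # roughly 111 km
    ns = meters_per_degree * (seconds / 3600.0)
    ew = ns * cos(radians(latitude_deg))
    return ns, ew

print(arcsecond_pixel_size(0))    # equator: roughly (92.7, 92.7) m
print(arcsecond_pixel_size(60))   # 60 degrees latitude: roughly (92.7, 46.3) m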



Arc/second data used in conjunction with other image data, such as TM or SPOT, must be rectified or projected onto a planar coordinate system such as UTM.

DEM

DEMs are digital elevation model data. DEM was originally a term reserved for elevation data provided by the United States Geological Survey (USGS), but it is now used to describe any digital elevation data.

DEMs can be:

• purchased from USGS (for US areas only)

• created from stereopairs (derived from satellite data or aerial photographs)

See "CHAPTER 9: Terrain Analysis" for more information on using DEMs. See "OrderingRaster Data" on page 84 for information on ordering DEMs.

USGS DEMs

There are two types of DEMs that are most commonly available from USGS:

• 1:24,000 scale, also called 7.5-minute DEM, is usually referenced to the UTM coordinate system. It has a spatial resolution of 30 × 30 m.

• 1:250,000 scale available only in arc/second format.

Both types have a 16-bit range of elevation values, meaning each pixel can have a possible elevation of -32,768 to 32,767.

DEM data are stored in ASCII format. The data file values in ASCII format are stored as ASCII characters rather than as zeros and ones like the data file values in binary data.

DEM data files from USGS are initially oriented so that North is on the right side of the image instead of at the top. ERDAS IMAGINE rotates the data 90° counterclockwise as part of the Import process so that coordinates read with any IMAGINE program will be correct.
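The reorientation described above is a simple array rotation. As a minimal illustration (not the IMAGINE importer itself), rotating a profile-ordered grid 90° counterclockwise with NumPy moves North from the right-hand edge to the top:

import numpy as np

# Hypothetical 3 x 4 grid of elevations in which columns run south-to-north,
# so North lies along the right-hand edge.
profiles = np.array([[10, 11, 12, 13],
                     [20, 21, 22, 23],
                     [30, 31, 32, 33]])

north_up = np.rot90(profiles)   # 90 degrees counterclockwise: North is now the top row
print(north_up)
# [[13 23 33]
#  [12 22 32]
#  [11 21 31]
#  [10 20 30]]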


DTED DTED data are produced by the Defense Mapping Agency (DMA) and are availableonly to US government agencies and their contractors. DTED data are distributed on 9-track tapes and on CD-ROM.

There are two types of DTED data available:

• DTED 1—a 1° × 1° area of coverage

• DTED 2—a 1° × 1° or less area of coverage

Both are in arc/second format and are distributed in cells. A cell is a 1° × 1° area of coverage. Both have a 16-bit range of elevation values.

Like DEMs, DTED data files are also oriented so that North is on the right side of the image instead of at the top. IMAGINE rotates the data 90° counterclockwise as part of the Import process so that coordinates read with any ERDAS IMAGINE program will be correct.

Using Topographic Data

Topographic data have many uses in a GIS. For example, topographic data can be used in conjunction with other data to:

• calculate the shortest and most navigable path over a mountain range

• assess the visibility from various lookout points or along roads

• simulate travel through a landscape

• determine rates of snow melt

• orthocorrect satellite or airborne images

• create aspect and slope layers

• provide ancillary data for image classification

See "CHAPTER 9: Terrain Analysis" for more information about using topographic andelevation data.


Ordering Raster Data

Table 8 describes the different Landsat, SPOT, AVHRR, and DEM products that can be ordered. Information in this chart does not reflect all the products that are available, but only the most common types that can be imported into ERDAS IMAGINE.

Table 8: Common Raster Data Products

Data Type                  Ground Covered    Pixel Size      # of Bands   Format Available            Geocoded
Landsat TM Full Scene      185 × 170 km      28.5 m          7            Fast (BSQ)
Landsat TM Quarter Scene   92.5 × 80 km      28.5 m          7            Fast (BSQ)
Landsat MSS Full Scene     185 × 170 km      79 × 56 m       4            BSQ, BIL
SPOT                       60 × 60 km        10 m and 20 m   1 - 3        BIL
NOAA AVHRR (LAC)           2700 × 2700 km    1.1 km          1-5          10-bit packed or unpacked
NOAA AVHRR (GAC)           4000 × 4000 km    4 km            1-5          10-bit packed or unpacked
USGS DEM 1:24,000          7.5' × 7.5'       30 m            1            ASCII (UTM)
USGS DEM 1:250,000         1° × 1°           3" × 3"         1            ASCII


Addresses to Contact

For more information about these and related products, contact the agencies below:

• Landsat MSS, TM, and ETM data:
  EOSAT
  International Headquarters
  4300 Forbes Blvd.
  Lanham, MD 20706 USA
  Telephone: 1-800-344-9933
  Fax: 301-552-0507
  Internet: www.eosat.com

• SPOT data:
  SPOT Image Corporation
  1897 Preston White Dr.
  Reston, VA 22091-4368 USA
  Telephone: 703-620-2200
  Fax: 703-648-1813
  Internet: www.spot.com

• NOAA AVHRR data:
  Satellite Data Service Branch
  NOAA/National Environmental Satellite, Data, and Information Service
  World Weather Building, Room 100
  Washington, DC 20233 USA

• AVHRR Dundee Format:
  NERC Satellite Station
  University of Dundee
  Dundee, Scotland DD1 4HN

• Cartographic data including maps, airphotos, space images, DEMs, planimetric data, and related information from federal, state, and private agencies:
  National Cartographic Information Center
  U.S. Geological Survey
  507 National Center
  Reston, VA 22092 USA

• ADRG data (available only to defense contractors):
  DMA (Defense Mapping Agency)
  ATTN: PMSC
  Combat Support Center
  Washington, DC 20315-0010 USA

• ADRI data (available only to defense contractors):
  Rome Laboratory/IRRP
  Image Products Branch
  Griffiss AFB, NY 13440-5700 USA

• Landsat data:
  EROS Data Center
  Sioux Falls, SD 57198 USA


• ERS-1 radar data:
  RADARSAT International, Inc.
  265 Carling Ave., Suite 204
  Ottawa, Ontario
  Canada K1S 2E1
  Telephone: 613-238-6413
  Fax: 613-238-5425
  Internet: www.rsi.ca

• JERS-1 (Fuyo 1) radar data:
  National Space Development Agency of Japan (NASDA)
  Earth Observation Program Office
  Tokyo 105, Japan
  Telephone: 81-3-5470-4254
  Fax: 81-3-3432-3969

• SIR-A, B, C radar data:
  Jet Propulsion Laboratory
  California Institute of Technology
  4800 Oak Grove Dr.
  Pasadena, CA 91109-8099 USA
  Telephone: 818-354-2386
  Internet: www.jpl.nasa.gov

• RADARSAT data:
  RADARSAT International, Inc.
  265 Carling Ave., Suite 204
  Ottawa, Ontario
  Canada K1S 2E1
  Telephone: 613-238-5424
  Fax: 613-238-5425
  Internet: www.rsi.ca

• Almaz radar data:
  NPO Mashinostroenia
  Scientific Engineering Center "Almaz"
  Gagarin st., 33, Moscow Region
  143952, Reutov, Russia
  Telephone: 7.095.307-9194
  Fax: 7.095.302-2001
  Email: [email protected]

• U.S. Government RADARSAT sales:
  Joel Porter
  Lockheed Martin Astronautics
  M/S: DC4001
  12999 Deer Creek Canyon Rd.
  Littleton, CO 80127
  Telephone: 303-977-3233
  Fax: 303-971-9827
  Email: [email protected]


Raster Data from Other Software Vendors

ERDAS IMAGINE also enables the user to import data created by other software vendors. This way, if another type of digital data system is currently in use, or if data is received from another system, it can easily be converted to the ERDAS IMAGINE file format for use in ERDAS IMAGINE. The Import function will directly import these raster data types from other software systems:

• ERDAS Ver. 7.X

• GRID

• Sun Raster

• TIFF

Other data types might be imported using the Generic import option.

Vector to Raster Conversion

Vector data can also be a source of raster data by converting it to raster format.

Convert a vector layer to a raster layer, or vice versa, by using ERDAS IMAGINE Vector.

ERDAS Ver. 7.X

The ERDAS Ver. 7.X series was the predecessor of ERDAS IMAGINE software. The two basic types of ERDAS Ver. 7.X data files are indicated by the file name extensions:

• .LAN — a multiband continuous image file (the name is derived from the Landsat satellite)

• .GIS — a single-band thematic data file in which pixels are divided into discrete categories (the name is derived from geographic information system)

.LAN and .GIS image files are stored in the same format. The image data are arranged in a BIL format and can be 4-bit, 8-bit, or 16-bit. The ERDAS Ver. 7.X file structure includes:

• a header record at the beginning of the file

• the data file values

• a statistics or trailer file

When you import a .GIS file, it becomes an .img file with one thematic raster layer. When you import a .LAN file, each band becomes a continuous raster layer within the .img file.


GRID

GRID is a raster geoprocessing program distributed by Environmental Systems Research Institute, Inc. (Redlands, California). GRID is a spatial analysis and modeling language that enables the user to perform per-cell, per-neighborhood, per-zone, and per-layer analyses. It was designed to function as a complement to the vector data model system, ARC/INFO, a well-known vector GIS which is also distributed by ESRI.

GRID files are in a compressed tiled raster data structure. The name is taken from the raster data format of presenting information in a grid of cells.

Sun Raster

A Sun raster file is an image captured from a monitor display. In addition to GIS, Sun Raster files can be used in desktop publishing applications or any application where a screen capture would be useful.

There are two basic ways to create a Sun raster file on a Sun workstation:

• use the OpenWindows Snapshot application

• use the UNIX screendump command

Both methods read the contents of a frame buffer and write the display data to a user-specified file. Depending on the display hardware and options chosen, screendump can create any of the file types listed in Table 9.

The data are stored in BIP format.

TIFF

The Tagged Image File Format (TIFF) was developed by Aldus Corp. (Seattle, Washington) in 1986 in conjunction with major scanner vendors who needed an easily portable file format for raster image data. Today, the TIFF format is a widely supported format used in video, fax transmission, medical imaging, satellite imaging, document storage and retrieval, and desktop publishing applications. In addition, the GEOTIFF extensions permit TIFF files to be geocoded.

The TIFF format's main appeal is its flexibility. It handles black and white line images, as well as gray scale and color images, which can be easily transported between different operating systems and computers.

TIFF File Formats

TIFF's great flexibility can also cause occasional problems in compatibility. This is because TIFF is really a family of file formats that comprise a variety of elements within the format.

Table 10 shows the most common TIFF format elements. The elements supported in ERDAS IMAGINE are checked.

Table 9: File Types Created by Screendump

File Type                           Available Compression
1-bit black and white               None, RLE (run-length encoded)
8-bit color paletted (256 colors)   None, RLE
24-bit RGB true color               None, RLE
32-bit RGB true color               None, RLE


Any TIFF format that contains an unsupported element may not be compatible with ERDAS IMAGINE.

NOTE: The checked items in Table 10 are supported in IMAGINE.

Table 10: The Most Common TIFF Format Elements

Byte Order         Intel (LSB/MSB)
                   Motorola (MSB/LSB)

Image Type         Black and white
                   Gray scale
                   Inverted gray scale
                   Color palette
                   RGB (3-band)

Configuration      BIP
                   BSQ

Bits Per Plane**   1*, 2*, 4, 8
                   3, 5, 6, 7

Compression***     None
                   CCITT G3 (B&W only)
                   CCITT G4 (B&W only)
                   Packbits
                   LZW****
                   LZW with horizontal differencing****

*Must be imported and exported as 4-bit data.
**All bands must contain the same number of bits (i.e., 4,4,4 or 8,8,8). Multi-band data assigned to different bits cannot be imported into IMAGINE.
***Compression supported on import only.
****LZW is governed by patents and is not supported by the basic version of IMAGINE.
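Because a TIFF file may use either byte order, software reading it must check the file header first. The following sketch is not ERDAS IMAGINE code; it relies only on the published TIFF header layout (a 2-byte order mark of "II" or "MM" followed by the 16-bit value 42), and the file name in the comment is hypothetical:

    import struct

    def tiff_byte_order(path):
        """Return 'Intel (LSB/MSB)' or 'Motorola (MSB/LSB)' for a TIFF file."""
        with open(path, "rb") as f:
            header = f.read(4)
        if header[:2] == b"II":
            endian, label = "<", "Intel (LSB/MSB)"       # little-endian
        elif header[:2] == b"MM":
            endian, label = ">", "Motorola (MSB/LSB)"    # big-endian
        else:
            raise ValueError("not a TIFF file")
        magic = struct.unpack(endian + "H", header[2:4])[0]
        if magic != 42:                                  # every TIFF header carries the value 42
            raise ValueError("not a TIFF file")
        return label

    # print(tiff_byte_order("scene.tif"))  # hypothetical file name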


Vector Data from Other Software Vendors

It is possible to directly import several common vector formats into ERDAS IMAGINE. These files become vector layers when imported. These data can then be used for the analyses and, in most cases, exported back to their original format (if desired).

Although data can be converted from one type to another by importing a file into IMAGINE and then exporting the IMAGINE file into another format, the import and export routines were designed to work together. For example, if a user has information in AutoCAD that they would like to use in the GIS, they can import a DXF file into ERDAS IMAGINE, do the analysis, and then export the data back to DXF format.

In most cases, attribute data are also imported into ERDAS IMAGINE. Each section below lists the types of attribute data that are imported.

Use Import/Export to import vector data from other software vendors into ERDAS IMAGINE vector layers. These routines are based on ARC/INFO data conversion routines.

See "CHAPTER 2: Vector Layers" for more information on ERDAS IMAGINE vector layers. See "CHAPTER 10: Geographic Information Systems" for more information about using vector data in a GIS.

ARCGEN

ARCGEN files are ASCII files created with the ARC/INFO UNGENERATE command. The import ARCGEN program is used to import features to a new layer. Topology is not created or maintained; therefore, the coverage must be built or cleaned after it is imported into ERDAS IMAGINE.

ARCGEN files must be properly prepared before they are imported into ERDAS IMAGINE. If there is a syntax error in the data file, the import process may not work. If this happens, you must kill the process, correct the data file, and then try importing again.

See the ARC/INFO documentation for more information about these files.

AutoCAD (DXF)

AutoCAD is a vector software package distributed by Autodesk, Inc. (Sausalito, California). AutoCAD is a computer-aided design program that enables the user to draw two- and three-dimensional models. This software is frequently used in architecture, engineering, urban planning, and many other applications.

The AutoCAD Drawing Interchange File (DXF) is the standard interchange format used by most CAD systems. The AutoCAD program DXFOUT will create a DXF file that can be converted to an ERDAS IMAGINE vector layer. AutoCAD files can also be output to IGES format using the AutoCAD program IGESOUT.

See "IGES" on page 94 for more information about IGES files.


DXF files can be converted in either ASCII or binary format. The binary format is an optional format for AutoCAD Releases 10 and 11. It is structured just like the ASCII format, only the data are in binary format.

DXF files are composed of a series of related layers. Each layer contains one or more drawing elements or entities. An entity is a drawing element that can be placed into an AutoCAD drawing with a single command. When converted to an ERDAS IMAGINE vector layer, each entity becomes a single feature. Table 11 describes how various DXF entities are converted to IMAGINE.

The ERDAS IMAGINE import process also imports line and point attribute data (if they exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point attributes) files. If an imported DXF file is exported back to DXF format, this information will also be exported.

Refer to an AutoCAD manual for more information about the format of DXF files.

Table 11: Conversion of DXF Entries

DXF Entity    IMAGINE Feature   Comments
Line          Line              These entities become two point lines. The initial Z value of 3D entities is stored.
3DLine
Trace         Line              These entities become four or five point lines. The initial Z value of 3D entities is stored.
Solid
3DFace
Circle        Line              These entities form lines. Circles are composed of 361 points—one vertex for each degree. The first and last point is at the same location.
Arc
Polyline      Line              These entities can be grouped to form a single line having many vertices.
Point         Point             These entities become point features in a layer.
Shape


DLG

Digital Line Graphs (DLG) are furnished by the U.S. Geological Survey and provide planimetric base map information, such as transportation, hydrography, contours, and public land survey boundaries. DLG files are available for the following USGS map series:

• 7.5- and 15-minute topographic quadrangles

• 1:100,000-scale quadrangles

• 1:2,000,000-scale national atlas maps

DLGs are topological files that contain nodes, lines, and areas (similar to the points, lines, and polygons in an ERDAS IMAGINE vector layer). DLGs also store attribute information in the form of major and minor code pairs. Code pairs are encoded in two integer fields, each containing six digits. The major code describes the class of the feature (road, stream, etc.) and the minor code stores more specific information about the feature.

DLGs can be imported in standard format (144 bytes per record) and optional format (80 bytes per record). The user can export to DLG-3 optional format. Most DLGs are in the Universal Transverse Mercator map projection. However, the 1:2,000,000-scale series are in geographic coordinates.

The ERDAS IMAGINE import process also imports point, line, and polygon attribute data (if they exist) and creates an INFO directory with the appropriate ACODE (arc attributes), PCODE (polygon attributes), and XCODE (point attributes) files. If an imported DLG file is exported back to DLG format, this information will also be exported.

To maintain the topology of a vector layer created from a DLG file, you must Build or Clean it. See "CHAPTER 10: Geographic Information Systems" for information on this process.


ETAK

ETAK's MapBase is an ASCII digital street centerline map product available from ETAK, Inc. (Menlo Park, California). ETAK files are similar in content to the Dual Independent Map Encoding (DIME) format used by the U.S. Census Bureau. Each record represents a single linear feature with address and political, census, and ZIP code boundary information. ETAK has also included road class designations and, in some areas, major landmark features.

There are four possible types of ETAK features:

• DIME or D types — if the feature type is D, a line is created along with a corresponding ACODE (arc attribute) record. The coordinates are stored in Lat/Lon decimal degrees.

• Alternate address or A types — each record contains an alternate address record for a line. These records are written to the attribute file, and are useful for building address coverages.

• Shape features or S types — shape records are used to add vertices to the lines. The coordinates for these features are in Lat/Lon decimal degrees.

• Landmark or L types — if the feature type is L and the user opts to output a landmark layer, then a point feature is created along with an associated PCODE record.

ERDAS IMAGINE vector data cannot be exported to ETAK format.


IGES

Initial Graphics Exchange Standard (IGES) files are often used to transfer CAD data between systems. IGES Version 3.0 format, published by the U.S. Department of Commerce, is in uncompressed ASCII format only.

IGES files can be produced in AutoCAD using the IGESOUT command. The following IGES entities can be converted:

The ERDAS IMAGINE import process also imports line and point attribute data (if they exist) and creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point attributes) files. If an imported IGES file is exported back to IGES format, this information will also be exported.

Table 12: Conversion of IGES Entities

IGES Entity                               IMAGINE Feature
IGES Entity 100 (Circular Arc Entities)   Lines
IGES Entity 106 (Copious Data Entities)   Lines
IGES Entity 110 (Line Entities)           Lines
IGES Entity 116 (Point Entities)          Points


TIGER

Topologically Integrated Geographic Encoding and Referencing System (TIGER) files are line network products of the U.S. Census Bureau. The Census Bureau is using the TIGER system to create and maintain a digital cartographic database that covers the United States, Puerto Rico, Guam, the Virgin Islands, American Samoa, and the Trust Territories of the Pacific.

TIGER/Line is the line network product of the TIGER system. The cartographic base is taken from Geographic Base File/Dual Independent Map Encoding (GBF/DIME), where available, and from the USGS 1:100,000-scale national map series, SPOT imagery, and a variety of other sources in all other areas, in order to have continuous coverage for the entire United States. In addition to line segments, TIGER files contain census geographic codes and, in metropolitan areas, address ranges for the left and right sides of each segment. TIGER files are available in ASCII format on both CD-ROM and tape media. All released versions after April 1989 are supported.

There is a great deal of attribute information provided with TIGER/Line files. Line and point attribute information can be converted into ERDAS IMAGINE format. The ERDAS IMAGINE import process creates an INFO directory with the appropriate ACODE (arc attributes) and XCODE (point attributes) files. If an imported TIGER file is exported back to TIGER format, this information will also be exported.

TIGER attributes include the following:

• Version numbers—TIGER/Line file version number

• Permanent record numbers—each line segment is assigned a permanent record number that is maintained throughout all versions of TIGER/Line files

• Source codes—each line and landmark point feature is assigned a code to specify the original source

• Census feature class codes—line segments representing physical features are coded based on the USGS classification codes in DLG-3 files

• Street attributes—includes street address information for selected urban areas

• Legal and statistical area attributes—legal areas include states, counties, townships, towns, incorporated cities, Indian reservations, and national parks. Statistical areas are areas used during the census-taking, where legal areas are not adequate for reporting statistics.

• Political boundaries—the election precincts or voting districts may contain a variety of areas, including wards, legislative districts, and election districts.

• Landmarks—landmark area and point features include schools, military installations, airports, hospitals, mountain peaks, campgrounds, rivers, and lakes

TIGER files for major metropolitan areas outside of the United States (e.g., Puerto Rico, Guam) do not have address ranges.


Disk Space Requirements

TIGER/Line files are partitioned into counties ranging in size from less than a megabyte to almost 120 megabytes. The average size is approximately 10 megabytes. To determine the amount of disk space required to convert a set of TIGER/Line files, use this rule: the size of the converted layers is approximately the same size as the files used in the conversion. The amount of additional scratch space needed depends on the largest file and whether it will need to be sorted. The amount usually required is about double the size of the file being sorted.
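That rule of thumb translates directly into a small calculation. The sketch below only restates the rule above (the file sizes are made-up numbers); it is not a utility shipped with ERDAS IMAGINE:

    def tiger_conversion_space_mb(file_sizes_mb):
        """Estimate disk space needed to convert a set of TIGER/Line files.

        Converted layers take roughly the same space as the input files; scratch
        space is roughly double the size of the largest file (if it must be sorted).
        """
        converted = sum(file_sizes_mb)
        scratch = 2 * max(file_sizes_mb)
        return converted + scratch

    # Three hypothetical county files of 8, 10, and 120 MB:
    print(tiger_conversion_space_mb([8, 10, 120]))  # about 378 MB in total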

The information presented in this section, "Vector Data from Other Software Vendors", was obtained from the Data Conversion and the 6.0 ARC Command References manuals, both published by ESRI, Inc., 1992.


CHAPTER 4
Image Display

Introduction

This section defines some important terms that are relevant to image display. Most of the terminology and definitions used in this chapter are based on the X Window System (Massachusetts Institute of Technology) terminology. This may differ from other systems, such as Microsoft Windows NT.

A seat is a combination of an X-server and a host workstation.

• A host workstation consists of a CPU, keyboard, mouse, and a display.

• A display may consist of multiple screens. These screens work together, making it possible to move the mouse from one screen to the next.

• The display hardware contains the memory that is used to produce the image. This hardware determines which types of displays are available (e.g., true color or pseudo color) and the pixel depth (e.g., 8-bit or 24-bit).

Figure 34: Example of One Seat with One Display and Two Screens

Display Memory Size

The size of memory varies for different displays. It is expressed in terms of:

• display resolution, which is expressed as the horizontal and vertical dimensions of memory—the number of pixels that can be viewed on the display screen. Some typical display resolutions are 1152 × 900, 1280 × 1024, and 1024 × 780. For the PC, typical resolutions are 640 × 480, 800 × 600, 1024 × 768, and 1280 × 1024.

• the number of bits for each pixel or pixel depth, as explained below.


Bits for Image Plane

A bit is a binary digit, meaning a number that can have two possible values—0 and 1, or "off" and "on." A set of bits, however, can have many more values, depending upon the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used. For example, the number of values that can be expressed by 3 bits is 8 (2³ = 8).

Displays are referred to in terms of a number of bits, such as 8-bit or 24-bit. These bits are used to determine the number of possible brightness values. For example, in a 24-bit display, 24 bits per pixel breaks down to eight bits for each of the three color guns per pixel. The number of possible values that can be expressed by eight bits is 2⁸, or 256. Therefore, on a 24-bit display, each color gun of a pixel can have any one of 256 possible brightness values, expressed by the range of values 0 to 255.

The combination of the three color guns, each with 256 possible brightness values, yields 256³ (or 2²⁴, for the 24-bit image display), or 16,777,216 possible colors for each pixel on a 24-bit display. If the display being used is not 24-bit, the example above will calculate the number of possible brightness values and colors that can be displayed.
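The same arithmetic can be written out as a small calculation. This sketch simply restates the powers of two used above; the function name is illustrative and is not part of ERDAS IMAGINE:

    def colors_for_display(bits_per_pixel, guns=3):
        """Number of brightness values per color gun and total colors for a display."""
        bits_per_gun = bits_per_pixel // guns
        values_per_gun = 2 ** bits_per_gun          # e.g., 2**8 = 256 for a 24-bit display
        total_colors = values_per_gun ** guns       # e.g., 256**3 = 16,777,216
        return values_per_gun, total_colors

    print(colors_for_display(24))           # (256, 16777216)
    print(colors_for_display(8, guns=1))    # an 8-bit display: 256 simultaneous colors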

Pixel

The term pixel is abbreviated from picture element. As an element, a pixel is the smallest part of a digital picture (image). Raster image data are divided by a grid, in which each cell of the grid is represented by a pixel. A pixel is also called a grid cell.

Pixel is a broad term that is used for both:

• the data file value(s) for one data unit in an image (file pixels), or

• one grid location on a display or printout (display pixels).

Usually, one pixel in a file corresponds to one pixel in a display or printout. However, an image can be magnified or reduced so that one file pixel no longer corresponds to one pixel in the display or printout. For example, if an image is displayed with a magnification factor of 2, then one file pixel will take up 4 (2 × 2) grid cells on the display screen.

To display an image, a file pixel that consists of one or more numbers must be transformed into a display pixel with properties that can be seen, such as brightness and color. Whereas the file pixel has values that are relevant to data (such as wavelength of reflected light), the displayed pixel must have a particular color or gray level that represents these data file values.

Colors

Human perception of color comes from the relative amounts of red, green, and blue light that are measured by the cones (sensors) in the eye. Red, green, and blue light can be added together to produce a wide variety of colors—a wider variety than can be formed from the combinations of any three other colors. Red, green, and blue are therefore the additive primary colors.

A nearly infinite number of shades can be produced when red, green, and blue light are combined. On a display, different colors (combinations of red, green, and blue) allow the user to perceive changes across an image. Color displays that are available today yield 2²⁴, or 16,777,216 colors. Each color has a possible 256 different values (2⁸).


Color Guns

On a display, color guns direct electron beams that fall on red, green, and blue phosphors. The phosphors glow at certain frequencies to produce different colors. Color monitors are often called RGB monitors, referring to the primary colors.

The red, green, and blue phosphors on the picture tube appear as tiny colored dots on the display screen. The human eye integrates these dots together, and combinations of red, green, and blue are perceived. Each pixel is represented by an equal number of red, green, and blue phosphors.

Brightness Values

Brightness values (or intensity values) are the quantities of each primary color to be output to each displayed pixel. When an image is displayed, brightness values are calculated for all three color guns, for every pixel.

All of the colors that can be output to a display can be expressed with three brightness values—one for each color gun.

Colormap and Colorcells

A color on the screen is created by a combination of red, green, and blue values, where each of these components is represented as an 8-bit value. Therefore, 24 bits are needed to represent a color. Since many systems have only an 8-bit display, a colormap is used to translate the 8-bit value into a color. A colormap is an ordered set of colorcells, which is used to perform a function on a set of input values. To display or print an image, the colormap translates data file values in memory into brightness values for each color gun. Colormaps are not limited to 8-bit displays.
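A colormap can be pictured as a table of red, green, and blue brightness values indexed by colorcell. The sketch below is a simplified illustration of that lookup, not the X server's actual data structure:

    import numpy as np

    # A tiny colormap: each row is a colorcell holding (red, green, blue) brightness values.
    colormap = np.array([
        [0,   0,   0],    # colorcell 0: black
        [255, 0,   0],    # colorcell 1: red
        [0,   170, 90],   # colorcell 2: green-ish
        [0,   0,   255],  # colorcell 3: blue
    ], dtype=np.uint8)

    # Data file values for a 2 x 2 thematic layer, used directly as colorcell indices.
    data = np.array([[1, 3],
                     [2, 0]])

    rgb = colormap[data]    # shape (2, 2, 3): brightness values for each displayed pixel
    print(rgb[0, 1])        # pixel with value 3 is displayed as blue -> [  0   0 255]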

Colormap vs. Lookup Table

The colormap is a function of the display hardware, whereas a lookup table is a function of ERDAS IMAGINE. When a contrast adjustment is performed on an image in IMAGINE, lookup tables are used. However, if the auto-update function is being used to view the adjustments in near real-time, then the colormap is being used to map the image through the lookup table. This process allows the colors on the screen to be updated in near real-time. This chapter explains how the colormap is used to display imagery.

Colorcells

There is a colorcell in the colormap for each data file value. The red, green, and blue values assigned to the colorcell control the brightness of the color guns for the displayed pixel (Nye 1990). The number of colorcells in a colormap is determined by the number of bits in the display (e.g., 8-bit, 24-bit).


For example, if a pixel with a data file value of 40 was assigned a display value (colorcell value) of 24, then this pixel would use the brightness values for the 24th colorcell in the colormap. In the colormap below (Table 13), this pixel would be displayed as blue.

The colormap is controlled by the Windows system. There are 256 colorcells in a colormap with an 8-bit display. This means that 256 colors can be displayed simultaneously on the display. With a 24-bit display, there are 256 colorcells for each color: red, green, and blue. This offers 256 × 256 × 256 or 16,777,216 different colors.

When an application requests a color, the server will specify which colorcell contains that color and will return the color. Colorcells can be read-only or read/write.

Read-Only Colorcells

The color assigned to a read-only colorcell can be shared by other application windows, but it cannot be changed once it is set. To change the color of a pixel on the display, it would not be possible to change the color for the corresponding colorcell. Instead, the pixel value would have to be changed and the image redisplayed. For this reason, it is not possible to use auto update operations in ERDAS IMAGINE with read-only colorcells.

Read/Write Colorcells

The color assigned to a read/write colorcell can be changed, but it cannot be shared by other application windows. An application can easily change the color of displayed pixels by changing the color for the colorcell that corresponds to the pixel value. This allows applications to use auto update operations. However, this colorcell cannot be shared by other application windows, and all of the colorcells in the colormap could quickly be utilized.

Changeable Colormaps

Some colormaps can have both read-only and read/write colorcells. This type of colormap allows applications to utilize the type of colorcell that would be most preferred.

Table 13: Colorcell Example

Colorcell Index   Red   Green   Blue
       1          255     0       0
       2            0   170      90
       3            0     0     255
      ...
      24            0     0     255


Display Types

The possible range of different colors is determined by the display type. ERDAS IMAGINE supports the following types of displays:

• 8-bit PseudoColor

• 15-bit HiColor (for Windows NT)

• 24-bit DirectColor

• 24-bit TrueColor

The above display types are explained in more detail below.

A display may offer more than one visual type and pixel depth. See "ERDAS IMAGINE 8.3 Installing and Configuring" for more information on specific display hardware.

32-bit Displays

A 32-bit display is a combination of an 8-bit PseudoColor and 24-bit DirectColor or TrueColor display. Whether it is DirectColor or TrueColor depends on the display hardware.


8-bit PseudoColor

An 8-bit PseudoColor display has a colormap with 256 colorcells. Each cell has a red, green, and blue brightness value, giving 256 combinations of red, green, and blue. The data file value for the pixel is transformed into a colorcell value. The brightness values for the colorcell that is specified by this colorcell value are used to define the color to be displayed.

Figure 35: Transforming Data File Values to a Colorcell Value

In Figure 35, data file values for a pixel of three continuous raster layers (bands) are transformed to a colorcell value. Since the colorcell value is four, the pixel is displayed with the brightness values of the fourth colorcell (blue).

This display grants a small number of colors to ERDAS IMAGINE. It works well with thematic raster layers containing fewer than 200 colors and with gray scale continuous raster layers. For image files with three continuous raster layers (bands), the colors will be severely limited because, under ideal conditions, 256 colors are available on an 8-bit display, while 8-bit, 3-band image files can contain over 16,000,000 different colors.

Auto Update

An 8-bit PseudoColor display has read-only and read/write colorcells, allowing ERDAS IMAGINE to perform near real-time color modifications using Auto Update and Auto Apply options.


24-bit DirectColor

A 24-bit DirectColor display enables the user to view up to three bands of data at one time, creating displayed pixels that represent the relationships between the bands by their colors. Since this is a 24-bit display, it offers up to 256 shades of red, 256 shades of green, and 256 shades of blue, which is approximately 16 million different colors (256³). The data file values for each band are transformed into colorcell values. The colorcell that is specified by these values is used to define the color to be displayed.

Figure 36: Transforming Data File Values to a Colorcell Value

In Figure 36, data file values for a pixel of three continuous raster layers (bands) are transformed to separate colorcell values for each band. Since the colorcell value is 1 for the red band, 2 for the green band, and 6 for the blue band, the RGB brightness values are 0, 90, 200. This displays the pixel as a blue-green color.

This type of display grants a very large number of colors to ERDAS IMAGINE and it works well with all types of data.

Auto Update

A 24-bit DirectColor display has read-only and read/write colorcells, allowing ERDAS IMAGINE to perform real-time color modifications using the Auto Update and Auto Apply options.


24-bit TrueColor

A 24-bit TrueColor display enables the user to view up to three continuous raster layers (bands) of data at one time, creating displayed pixels that represent the relationships between the bands by their colors. The data file values for the pixels are transformed into screen values and the colors are based on these values. Therefore, the color for the pixel is calculated without querying the server and the colormap. The colormap for a 24-bit TrueColor display is not available for ERDAS IMAGINE applications. Once a color is assigned to a screen value, it cannot be changed, but the color can be shared by other applications.

The screen values are used as the brightness values for the red, green, and blue color guns. Since this is a 24-bit display, it offers 256 shades of red, 256 shades of green, and 256 shades of blue, which is approximately 16 million different colors (256³).

Figure 37: Transforming Data File Values to Screen Values

In Figure 37, data file values for a pixel of three continuous raster layers (bands) are transformed to separate screen values for each band. Since the screen value is 0 for the red band, 90 for the green band, and 200 for the blue band, the RGB brightness values are 0, 90, and 200. This displays the pixel as a blue-green color.

Auto Update

The 24-bit TrueColor display does not use the colormap in ERDAS IMAGINE, and thus does not provide IMAGINE with any real-time color changing capability. Each time a color is changed, the screen values must be calculated and the image must be redrawn.

Color Quality

The 24-bit TrueColor visual provides the best color quality possible with standard equipment. There is no color degradation under any circumstances with this display.


PC Displays

ERDAS IMAGINE for Microsoft Windows NT supports the following visual types and pixel depths:

• 8-bit PseudoColor

• 15-bit HiColor

• 24-bit TrueColor

8-bit PseudoColor

An 8-bit PseudoColor display for the PC uses the same type of colormap as the X Windows 8-bit PseudoColor display, except that each colorcell has a range of 0 to 63 on most video display adapters, instead of 0 to 255. Therefore, each colorcell has a red, green, and blue brightness value, giving 64 different combinations of red, green, and blue. The colormap, however, is the same as the X Windows 8-bit PseudoColor display. It has 256 colorcells allowing 256 different colors to be displayed simultaneously.

15-bit HiColor

A 15-bit HiColor display for the PC assigns colors the same way as the X Windows 24-bit TrueColor display, except that it offers 32 shades of red, 32 shades of green, and 32 shades of blue, for a total of 32,768 possible color combinations. Some video display adapters allocate 6 bits to the green color gun, allowing 64 thousand colors. These adapters use a 16-bit color scheme.
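The 15-bit and 16-bit schemes can be illustrated by packing per-gun values into a single pixel word. This is a generic sketch of 5-5-5 and 5-6-5 packing, not code from ERDAS IMAGINE, and the helper name is made up:

    def pack_hicolor(red, green, blue, green_bits=5):
        """Pack 8-bit R, G, B values into a 15-bit (5-5-5) or 16-bit (5-6-5) pixel."""
        r5 = red >> 3                      # keep the top 5 bits of each 8-bit value
        g = green >> (8 - green_bits)      # 5 bits (15-bit) or 6 bits (16-bit) of green
        b5 = blue >> 3
        return (r5 << (green_bits + 5)) | (g << 5) | b5

    # 32 x 32 x 32 = 32,768 colors with 5 bits per gun;
    # 32 x 64 x 32 = 65,536 colors when green gets 6 bits.
    print(hex(pack_hicolor(255, 128, 0)))                 # 15-bit style packing
    print(hex(pack_hicolor(255, 128, 0, green_bits=6)))   # 16-bit style packing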

24-bit TrueColor

A 24-bit TrueColor display for the PC assigns colors the same way as the X Windows 24-bit TrueColor display.


Displaying Raster Layers

Image files (.img) are raster files in the IMAGINE format. There are two types of raster layers:

• continuous

• thematic

Thematic raster layers require a different display process than continuous raster layers. This section explains how each raster layer type is displayed.

Continuous Raster Layers

An image file (.img) can contain several continuous raster layers, and therefore, each pixel can have multiple data file values. When displaying an image file with continuous raster layers, it is possible to assign which layers (bands) are to be displayed with each of the three color guns. The data file values in each layer are input to the assigned color gun. The most useful color assignments are those that allow for an easy interpretation of the displayed image. For example:

• a natural-color image will approximate the colors that would appear to a human observer of the scene.

• a color-infrared image shows the scene as it would appear on color-infrared film, which is familiar to many analysts.

Band assignments are often expressed in R,G,B order. For example, the assignment 4,2,1 means that band 4 is assigned to red, band 2 to green, and band 1 to blue. Below are some widely used band to color gun assignments (Faust 1989):

• Landsat TM - natural color: 3,2,1
  This is natural color because band 3 = red and is assigned to the red color gun, band 2 = green and is assigned to the green color gun, and band 1 is blue and is assigned to the blue color gun.

• Landsat TM - color-infrared: 4,3,2
  This is infrared because band 4 = infrared.

• SPOT Multispectral - color-infrared: 3,2,1
  This is infrared because band 3 = infrared.
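A band-to-color gun assignment amounts to picking three layers out of the image file and stacking them in R,G,B order. This sketch is illustrative only (it assumes the multiband image is already a NumPy array indexed band-first); it builds a 4,3,2 color-infrared display array from a Landsat TM style cube:

    import numpy as np

    def assign_bands(image_bands, assignment):
        """Stack the bands named in (red, green, blue) order into an RGB display array.

        image_bands: array of shape (bands, rows, cols), band numbers starting at 1.
        assignment:  tuple such as (4, 3, 2) for a Landsat TM color-infrared display.
        """
        red, green, blue = (image_bands[b - 1] for b in assignment)
        return np.stack([red, green, blue], axis=-1)   # shape (rows, cols, 3)

    tm = np.random.randint(0, 256, size=(7, 4, 4), dtype=np.uint8)  # fake 7-band scene
    color_infrared = assign_bands(tm, (4, 3, 2))
    natural_color = assign_bands(tm, (3, 2, 1))
    print(color_infrared.shape)  # (4, 4, 3)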

Contrast Table

When an image is displayed, ERDAS IMAGINE automatically creates a contrast table for continuous raster layers. The red, green, and blue brightness values for each band are stored in this table.

Since the data file values in continuous raster layers are quantitative and related, the brightness values in the colormap are also quantitative and related. The screen pixels represent the relationships between the values of the file pixels by their colors. For example, a screen pixel that is bright red has a high brightness value in the red color gun, and a high data file value in the layer assigned to red, relative to other data file values in that layer.


The brightness values often differ from the data file values, but they usually remain in the same order of lowest to highest. Some meaningful relationships between the values are usually maintained.

Contrast Stretch

Different displays have different ranges of possible brightness values. The range of most displays is 0 to 255 for each color gun.

Since the data file values in a continuous raster layer often represent raw data (such as elevation or an amount of reflected light), the range of data file values is often not the same as the range of brightness values of the display. Therefore, a contrast stretch is usually performed, which stretches the range of the values to fit the range of the display.

For example, Figure 38 shows a layer that has data file values from 30 to 40. When these values are used as brightness values, the contrast of the displayed image is poor. A contrast stretch simply "stretches" the range between the lower and higher data file values, so that the contrast of the displayed image is higher—that is, lower data file values are displayed with the lowest brightness values, and higher data file values are displayed with the highest brightness values.

The colormap stretches the range of colorcell values from 30 to 40 to the range 0 to 255. Since the output values are incremented at regular intervals, this stretch is a linear contrast stretch. (The numbers in Figure 38 are approximations and do not show an exact linear relationship.)

Figure 38: Contrast Stretch and Colorcell Values

See "CHAPTER 5: Enhancement" for more information about contrast stretching. Contraststretching is performed the same way for display purposes as it is for permanent imageenhancement.

[Figure 38 plots input colorcell values (30 to 40) against output brightness values (0 to 255): 30 → 0, 31 → 25, 32 → 51, 33 → 76, 34 → 102, 35 → 127, 36 → 153, 37 → 178, 38 → 204, 39 → 229, 40 → 255.]
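The stretch in Figure 38 is simply a linear rescaling of the input range onto 0 to 255. Here is a minimal sketch of that mapping (the function name is illustrative, not an ERDAS IMAGINE routine, and the rounding differs slightly from the approximate figure values):

    import numpy as np

    def linear_stretch(values, in_min, in_max, out_min=0, out_max=255):
        """Linearly map data file values from [in_min, in_max] onto [out_min, out_max]."""
        scale = (out_max - out_min) / (in_max - in_min)
        stretched = (np.asarray(values, dtype=float) - in_min) * scale + out_min
        return np.clip(np.round(stretched), out_min, out_max).astype(np.uint8)

    # Data file values 30-40 become brightness values spread across 0-255.
    print(linear_stretch(np.arange(30, 41), 30, 40))
    # [  0  26  51  76 102 128 153 178 204 230 255]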


A two standard deviation linear contrast stretch is applied to stretch pixel values from 0 to 255 of all .img files before they are displayed in the Viewer, unless a saved contrast stretch exists (the file is not changed). This often improves the initial appearance of the data in the Viewer.

Statistics Files

To perform a contrast stretch, certain statistics are necessary, such as the mean and the standard deviation of the data file values in each layer.

Use the Image Information utility to create and view statistics for a raster layer.

Usually, not all of the data file values are used in the contrast stretch calculations. The minimum and maximum data file values of each band are often too extreme to produce good results. When the minimum and maximum are extreme in relation to the rest of the data, then the majority of data file values are not stretched across a very wide range, and the displayed image has low contrast.

Figure 39: Stretching by Min/Max vs. Standard Deviation

The mean and standard deviation of the data file values for each band are used to locate the majority of the data file values. The number of standard deviations above and below the mean can be entered, which determines the range of data used in the stretch.

See "APPENDIX A: Math Topics" for more information on mean and standard deviation.

[Figure 39 compares an original histogram of stored data file values (frequency vs. data file value, with -2σ, the mean, and +2σ marked) after stretching to 0-255. In the Min/Max Stretch panel, most of the data occupy only a narrow portion of the output range. In the Standard Deviation Stretch panel, most of the data fill the output range; values stretched below 0 or above 255 are not displayed.]
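The standard deviation stretch shown in Figure 39 can be expressed the same way as the linear stretch above, with the input range taken as the mean ± n standard deviations and everything outside it clipped. Again, this is only an illustrative sketch, not the Viewer's implementation:

    import numpy as np

    def std_dev_stretch(values, n_std=2.0):
        """Stretch mean +/- n_std standard deviations onto 0-255, clipping the tails."""
        values = np.asarray(values, dtype=float)
        mean, std = values.mean(), values.std()
        low, high = mean - n_std * std, mean + n_std * std
        stretched = (values - low) / (high - low) * 255.0
        return np.clip(np.round(stretched), 0, 255).astype(np.uint8)

    data = np.random.normal(loc=120, scale=15, size=1000)  # synthetic band values
    print(std_dev_stretch(data)[:10])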


Use the Contrast Tools dialog, which is accessible from the Lookup Table Modification dialog, to enter the number of standard deviations to be used in the contrast stretch.

24-bit DirectColor and TrueColor Displays

Figure 40 illustrates the general process of displaying three continuous raster layers on a 24-bit DirectColor display. The process would be similar on a TrueColor display except that the colormap would not be used.

Figure 40: Continuous Raster Layer Display Process

[Figure 40 traces the display process: band-to-color gun assignments (Band 3 assigned to RED, Band 2 to GREEN, Band 1 to BLUE); histograms of each band; the ranges of data file values to be displayed; the colormap, which maps data file values in to brightness values out (0 to 255); the color guns; the brightness values in each color gun; and the resulting color display.]


8-bit PseudoColor Display

When displaying continuous raster layers on an 8-bit PseudoColor display, the data file values from the red, green, and blue bands are combined and transformed to a colorcell value in the colormap. This colorcell then provides the red, green, and blue brightness values. Since there are only 256 colors available, a continuous raster layer looks different when it is displayed in an 8-bit display than a 24-bit display that offers 16 million different colors. However, the ERDAS IMAGINE Viewer performs dithering with the available colors in the colormap to let a smaller set of colors appear to be a larger set of colors.

See "Dithering" on page 116 for more information.

Thematic Raster Layers

A thematic raster layer generally contains pixels that have been classified, or put into distinct categories. Each data file value is a class value, which is simply a number for a particular category. A thematic raster layer is stored in an image (.img) file. Only one data file value—the class value—is stored for each pixel.

Since these class values are not necessarily related, the gradations that are possible in true color mode are not usually useful in pseudo color. The class system gives the thematic layer a discrete look, in which each class can have its own color.

Color Table

When a thematic raster layer is displayed, ERDAS IMAGINE automatically creates a color table. The red, green, and blue brightness values for each class are stored in this table.

RGB Colors

Individual color schemes can be created by combining red, green, and blue in different combinations, and assigning colors to the classes of a thematic layer.

Colors can be expressed numerically, as the brightness values for each color gun. Brightness values of a display generally range from 0 to 255; however, IMAGINE translates the values to a range of 0 to 1. The maximum brightness value for the display device is scaled to 1. The colors listed in Table 14 are based on the range that would be used to assign brightness values in ERDAS IMAGINE.

Table 14 contains only a partial listing of commonly used colors. Over 16 million colors are possible on a 24-bit display.

Table 14: Commonly Used RGB Colors

Color          Red    Green   Blue
Red            1      0       0
Red-Orange     1      .392    0
Orange         .608   .588    0
Yellow         1      1       0
Yellow-Green   .490   1       0
Green          0      1       0
Cyan           0      1       1
Blue           0      0       1
Blue-Violet    .392   0       .471
Violet         .588   0       .588
Black          0      0       0
White          1      1       1
Gray           .498   .498    .498
Brown          .373   .227    0


NOTE: Black is the absence of all color (0,0,0) and white is created from the highest values of all three colors (1,1,1). To lighten a color, increase all three brightness values. To darken a color, decrease all three brightness values.

Use the Raster Attribute Editor to create your own color scheme.

24-bit DirectColor and TrueColor Displays

Figure 41 illustrates the general process of displaying thematic raster layers on a 24-bit DirectColor display. The process would be similar on a TrueColor display except that the colormap would not be used.

Display a thematic raster layer from the ERDAS IMAGINE Viewer.



Figure 41: Thematic Raster Layer Display Process

8-bit PseudoColor Display

The colormap is a limited resource which is shared among all of the applications that are running concurrently. Due to the limited resources, ERDAS IMAGINE does not typically have access to the entire colormap.

[Figure 41 shows a 3 × 3 thematic image with class values (1 2 3 / 4 3 5 / 2 1 4) and a color scheme of class 1 = Red (255, 0, 0), class 2 = Orange (255, 128, 0), class 3 = Yellow (255, 255, 0), class 4 = Violet (128, 0, 255), and class 5 = Green (0, 255, 0). The class values are mapped through the colormap to brightness values for the red, green, and blue color guns, producing the displayed image (R O Y / V Y G / O R V).]


Using the IMAGINE Viewer

The ERDAS IMAGINE Viewer is a window for displaying raster, vector, and annotation layers. The user can open as many Viewer windows as their window manager supports.

NOTE: The more Viewers that are opened simultaneously, the more RAM is necessary.

The ERDAS IMAGINE Viewer not only makes digital images visible quickly, but it can also be used as a tool for image processing and raster GIS modeling. The uses of the Viewer are listed briefly in this section, and described in greater detail in other chapters of the ERDAS Field Guide.

Colormap

ERDAS IMAGINE does not use the entire colormap because there are other applications that also need to use it, including the window manager, terminal windows, ARC View, or a clock. Therefore, there are some limitations to the number of colors that the Viewer can display simultaneously, and flickering may occur as well.

Color Flickering

If an application requests a new color that does not exist in the colormap, the server will assign that color to an empty colorcell. However, if there are not any available colorcells and the application requires a private colorcell, then a private colormap will be created for the application window. Since this is a private colormap, when the cursor is moved out of the window, the server will use the main colormap and the brightness values assigned to the colorcells. Therefore, the colors in the private colormap will not be applied and the screen will flicker. Once the cursor is moved into the application window, the correct colors will be applied for that window.

Resampling

When a raster layer(s) is displayed, the file pixels may be resampled for display on the screen. Resampling is used to calculate pixel values when one raster grid must be fitted to another. In this case, the raster grid defined by the file must be fit to the grid of screen pixels in the Viewer.

All Viewer operations are file-based. So, any time an image is resampled in the Viewer, the Viewer uses the file as its source. If the raster layer is magnified or reduced, the Viewer re-fits the file grid to the new screen grid.

The resampling methods available are:

• Nearest Neighbor - uses the value of the closest pixel to assign to the output pixel value.

• Bilinear Interpolation - uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.

• Cubic Convolution - uses the data file values of 16 pixels in a 4 × 4 window to calculate an output value with a cubic function.

These are discussed in detail in "CHAPTER 8: Rectification".
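To make the bilinear method in the list above concrete, here is a minimal sketch of bilinear interpolation for one output pixel. It assumes the four surrounding file pixel values and the fractional position between them are already known; it is not the Viewer's resampling code:

    def bilinear(upper_left, upper_right, lower_left, lower_right, fx, fy):
        """Interpolate within a 2 x 2 window.

        fx, fy are the fractional offsets (0-1) of the output location measured
        from the upper-left pixel toward the right and downward.
        """
        top = upper_left * (1 - fx) + upper_right * fx        # blend along the top row
        bottom = lower_left * (1 - fx) + lower_right * fx     # blend along the bottom row
        return top * (1 - fy) + bottom * fy                   # blend between the two rows

    # A point one quarter of the way right and halfway down within the window:
    print(bilinear(10, 20, 30, 40, fx=0.25, fy=0.5))  # 22.5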


The default resampling method is Nearest Neighbor.

Preference Editor

The ERDAS IMAGINE Preference Editor enables the user to set parameters for the ERDAS IMAGINE Viewer that affect the way the Viewer operates.

See the ERDAS IMAGINE On-Line Help for the Preference Editor for information on how to set preferences for the Viewer.

Pyramid Layers

Sometimes a large .img file may take a long time to display in the ERDAS IMAGINE Viewer or to be resampled by an application. The Pyramid Layer option enables the user to display large images faster and allows certain applications to rapidly access the resampled data. Pyramid layers are image layers which are copies of the original layer successively reduced by the power of 2 and then resampled. If the raster layer is thematic, then it is resampled using the Nearest Neighbor method. If the raster layer is continuous, it is resampled by a method that is similar to cubic convolution. The data file values for sixteen pixels in a 4 × 4 window are used to calculate an output data file value with a filter function.

See "CHAPTER 8: Rectification" for more information on Nearest Neighbor.

The number of pyramid layers created depends on the size of the original image. A larger image will produce more pyramid layers. When the Create Pyramid Layer option is selected, ERDAS IMAGINE automatically creates successively reduced layers until the final pyramid layer can be contained in one block. The default block size is 64 × 64 pixels.

See "CHAPTER 1: Raster Data" for information on block size.

Pyramid layers are added as additional layers in the .img file. However, these layers cannot be accessed for display. The file size is increased by approximately one-third when pyramid layers are created. The actual increase in file size can be determined by multiplying the layer size by this formula:

    Σ (i = 0 to n) 1 / 4^i

where:

n = number of pyramid layers
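Evaluating that sum shows why the overhead approaches one-third of the original layer size. A quick illustrative calculation (not ERDAS IMAGINE code):

    def pyramid_size_factor(n):
        """Total size of a layer plus its pyramid layers, as a multiple of the layer size."""
        return sum(1 / 4 ** i for i in range(n + 1))

    for n in (1, 3, 6):
        factor = pyramid_size_factor(n)
        print(n, round(factor, 4), round(factor - 1, 4))  # the overhead tends toward 1/3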


Pyramid layers do not appear as layers which can be processed; they are for viewing purposes only. Therefore, they will not appear as layers in other parts of the ERDAS IMAGINE system (e.g., the Arrange Layers dialog).

Pyramid layers can be deleted through the Image Information utility. However, when pyramid layers are deleted, they will not be deleted from the .img file - so the .img file size will not change, but ERDAS IMAGINE will utilize this file space, if necessary. Pyramid layers are deleted from viewing and resampling access only - that is, they can no longer be viewed or used in an application.

Figure 42: Pyramid Layers

For example, a file which is 4K × 4K pixels could take a long time to display when the image is fit to the Viewer. The Pyramid Layer option creates additional layers successively reduced from 4K × 4K, to 2K × 2K, 1K × 1K, 512 × 512, 128 × 128, down to 64 × 64. ERDAS IMAGINE then selects the pyramid layer size most appropriate for display in the Viewer window when the image is displayed.

The pyramid layer option is available from Import and the Image Information utility.

[Figure 42 shows an .img file containing the original 4K × 4K image together with pyramid layers of 2K × 2K, 1K × 1K, 512 × 512, 128 × 128, and 64 × 64; IMAGINE selects the pyramid layer that will display the fastest in the Viewer window.]


For more information about the .img format, see "CHAPTER 1: Raster Data" and "APPENDIX B: File Formats and Extensions".

Dithering

A display is capable of viewing only a limited number of colors simultaneously. For example, an 8-bit display has a colormap with 256 colorcells; therefore, a maximum of 256 colors can be displayed at the same time. If some colors are being used for auto update color adjustment while other colors are still being used for other imagery, the color quality will degrade.

Dithering lets a smaller set of colors appear to be a larger set of colors. If the desired display color is not available, a dithering algorithm mixes available colors to provide something that looks like the desired color.

For a simple example, assume the system can display only two colors, black and white, and the user wants to display gray. This can be accomplished by alternating the display of black and white pixels.

Figure 43: Example of Dithering

In Figure 43, dithering is used between a black pixel and a white pixel to obtain a gray pixel.

The colors that the ERDAS IMAGINE Viewer will dither between will be similar to each other, and will be dithered on the pixel level. Using similar colors and dithering on the pixel level makes the image appear smooth.

Dithering allows multiple images to be displayed in different Viewers without refreshing the currently displayed image(s) each time a new image is displayed.


Color Patches

When the Viewer performs dithering, it uses patches of 2 × 2 pixels. If the desired color has an exact match, then all of the values in the patch will match it. If the desired color is halfway between two of the usable colors, the patch will contain two pixels of each of the surrounding usable colors. If it is 3/4 of the way between two usable colors, the patch will contain 3 pixels of the color it is closest to and 1 pixel of the color that is second closest. Figure 44 shows what the color patches would look like if the usable colors were black and white and the desired color was gray.

Figure 44: Example of Color Patches

If the desired color is not an even multiple of 1/4 of the way between two allowable colors, it is rounded to the nearest 1/4. The Viewer separately dithers the red, green, and blue components of a desired color.
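The 2 × 2 patch rule can be sketched as follows. This is only an illustration of the quarter-step mixing described above (for a single color component, with two usable colors), not the Viewer's dithering algorithm:

    def dither_patch(desired, lower, upper):
        """Build a 2 x 2 patch approximating `desired` from two usable component values."""
        fraction = (desired - lower) / (upper - lower)   # how far desired lies toward upper
        quarters = round(fraction * 4)                   # rounded to the nearest 1/4
        cells = [upper] * quarters + [lower] * (4 - quarters)
        return [cells[0:2], cells[2:4]]                  # arrange the four cells as 2 x 2

    # Gray (128) between black (0) and white (255): two white and two black pixels.
    print(dither_patch(128, 0, 255))   # [[255, 255], [0, 0]]
    # Three quarters of the way toward white: three white pixels and one black pixel.
    print(dither_patch(191, 0, 255))   # [[255, 255], [255, 0]]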

Color Artifacts

Since the Viewer requires 2 × 2 pixel patches to represent a color, and actual imagestypically have a different color for each pixel, artifacts may appear in an image that hasbeen dithered. Usually, the difference in color resolution is insignificant, becauseadjacent pixels are normally similar to each other. Similarity between adjacent pixelsusually smooths out artifacts that would appear.

Viewing Layers

The ERDAS IMAGINE Viewer displays layers as one of the following types of view layers:

• annotation

• vector

• pseudo color

• gray scale

• true color


Annotation View Layer

When an annotation layer (xxx.ovr) is displayed in the Viewer, it is displayed as an annotation view layer.

Vector View Layer

Vector layers are displayed in the Viewer as a vector view layer.

Pseudo Color View Layer

When a raster layer is displayed as a pseudo color layer in the Viewer, the colormap uses the RGB brightness values for the one layer in the RGB table. This is most appropriate for thematic layers. If the layer is a continuous raster layer, the layer would initially appear gray, since there are not any values in the RGB table.

Gray Scale View Layer

When a raster layer is displayed as a gray scale layer in the Viewer, the colormap uses the brightness values in the contrast table for one layer. This layer is then displayed in all three color guns, producing a gray scale image. A continuous raster layer may be displayed as a gray scale view layer.

True Color View Layer

Continuous raster layers should be displayed as true color layers in the Viewer. The colormap uses the RGB brightness values for three layers in the contrast table, one for each color gun to display the set of layers.


Viewing Multiple Layers

It is possible to view as many layers of all types (with the exception of vector layers, which have a limit of 10) at one time in a single Viewer.

To overlay multiple layers in one Viewer, they must all be referenced to the same map coordinate system. The layers are positioned geographically within the window, and resampled to the same scale as previously displayed layers. Therefore, raster layers in one Viewer can have different cell sizes.

When multiple layers are magnified or reduced, raster layers are resampled from the file to fit to the new scale.

Display multiple layers from the Viewer. Be sure to turn off the Clear Display check box when you open subsequent layers.

Overlapping Layers

When layers overlap, the order in which the layers are opened is very important. The last layer that is opened will always appear to be “on top” of the previously opened layers.

In a raster layer, it is possible to make values of zero transparent in the Viewer, meaning that they have no opacity. Thus, if a raster layer with zeros is displayed over other layers, the areas with zero values will allow the underlying layers to show through.

Opacity is a measure of how opaque, or solid, a color is displayed in a raster layer. Opacity is a component of the color scheme of categorical data displayed in pseudo color.

• 100% opacity means that a color is completely opaque, and cannot be seen through.

• 50% opacity lets some color show, and lets some of the underlying layers show through. The effect is like looking at the underlying layers through a colored fog.

• 0% opacity allows underlying layers to show completely.

By manipulating opacity, you can compare two or more layers of raster data that are displayed in a Viewer. Opacity can be set at any value in the range of 0% to 100%. Use the Arrange Layers dialog to re-stack layers in a Viewer so that they overlap in a different order, if needed.

Non-Overlapping Layers

Multiple layers that are opened in the same Viewer do not have to overlap. Layers that cover distinct geographic areas can be opened in the same Viewer. The layers will be automatically positioned in the Viewer window according to their map coordinates, and will be positioned relative to one another geographically. The map coordinate systems for the layers must be the same.


Linking Viewers

Linking Viewers is appropriate when two Viewers cover the same geographic area (at least partially) and are referenced to the same map units. When two Viewers are linked:

• either the same geographic point is displayed in the centers of both Viewers, or a box shows where one view fits inside the other

• scrolling one Viewer affects the other

• the user can manipulate the zoom ratio of one Viewer from another

• any inquire cursors in one Viewer appear in the other, for multiple-Viewer pixel inquiry

• the auto-zoom is enabled, if the Viewers have the same zoom ratio and nearly the same window size

It is often helpful to display a wide view of a scene in one Viewer, and then a close-up of a particular area in another Viewer. When two such Viewers are linked, a box opens in the wide view window to show where the close-up view lies.

Any image that is displayed at a magnification (higher zoom ratio) of another image in a linked Viewer is represented in the other Viewer by a box. If several Viewers are linked together, there may be multiple boxes in that Viewer.

Figure 45 shows how one view fits inside the other linked Viewer. The link box shows the extent of the larger-scale view.

Figure 45: Linked Viewers


Zoom and Roam

Zooming enlarges an image on the display. When an image is zoomed, it can be roamed (scrolled) so that the desired portion of the image appears on the display screen. Any image that does not fit entirely in the Viewer can be roamed and/or zoomed. Roaming and zooming have no effect on how the image is stored in the file.

The zoom ratio describes the size of the image on the screen in terms of the number of file pixels used to store the image. It is the ratio of the number of screen pixels in the X or Y dimension to the number that are used to display the corresponding file pixels.

A zoom ratio greater than 1 is a magnification, which makes the image features appear larger in the Viewer. A zoom ratio less than 1 is a reduction, which makes the image features appear smaller in the Viewer.

NOTE: ERDAS IMAGINE allows floating point zoom ratios, so that images can be zoomed at virtually any scale (e.g., continuous fractional zoom). Resampling is necessary whenever an image is displayed with a new pixel grid. The resampling method used when an image is zoomed is the same one used when the image is displayed, as specified in the Open Raster Layer dialog. The default resampling method is Nearest Neighbor.

Zoom the data in the Viewer via the Viewer menu bar, the Viewer tool bar, or the Quick View right-button menu.

Table 15: Overview of Zoom Ratio

A zoom ratio of 1 means...   each file pixel is displayed with 1 screen pixel in the Viewer.

A zoom ratio of 2 means...   each file pixel is displayed with a block of 2 × 2 screen pixels. Effectively, the image is displayed at 200%.

A zoom ratio of 0.5 means... each block of 2 × 2 file pixels is displayed with 1 screen pixel. Effectively, the image is displayed at 50%.


Geographic Information

To prepare to run many programs, it may be necessary to determine the data file coordinates, map coordinates, or data file values for a particular pixel or a group of pixels. By displaying the image in the Viewer and then selecting the pixel(s) of interest, important information about the pixel(s) can be viewed.

The Quick View right-button menu gives you options to view information about a specific pixel. Use the Raster Attribute Editor to access information about classes in a thematic layer.

See "CHAPTER 10: Geographic Information Systems" for information about attribute data.

Enhancing Continuous Raster Layers

Working with the brightness values in the colormap is useful for image enhancement. Often, a trial-and-error approach is needed to produce an image that has the right contrast and highlights the right features. By using the tools in the Viewer, it is possible to quickly view the effects of different enhancement techniques, undo enhancements that aren’t helpful, and then save the best results to disk.

Use the Raster options from the Viewer to enhance continuous raster layers.

See "CHAPTER 5: Enhancement" for more information on enhancing continuous raster layers.

Creating New Image Files

It is easy to create a new image file (.img) from the layer(s) displayed in the Viewer. The new .img file will contain three continuous raster layers (RGB), regardless of how many layers are currently displayed. The IMAGINE Image Info utility must be used to create statistics for the new .img file before the file is enhanced.

Annotation layers can be converted to raster format, and written to an .img file. Or, vector data can be gridded into an image, overwriting the values of the pixels in the image plane, and incorporated into the same band as the image.

Use the Viewer to .img function to create a new .img file from the currently displayed raster layers.


CHAPTER 5: Enhancement

Introduction

Image enhancement is the process of making an image more interpretable for a particular application (Faust 1989). Enhancement makes important features of raw, remotely sensed data more interpretable to the human eye. Enhancement techniques are often used instead of classification techniques for feature extraction—studying and locating areas and objects on the ground and deriving useful information from images.

The techniques to be used in image enhancement depend upon:

• The user’s data — the different bands of Landsat, SPOT, and other imaging sensors were selected to detect certain features. The user must know the parameters of the bands being used before performing any enhancement. (See "CHAPTER 1: Raster Data" for more details.)

• The user’s objective — for example, sharpening an image to identify features that can be used for training samples will require a different set of enhancement techniques than reducing the number of bands in the study. The user must have a clear idea of the final product desired before enhancement is performed.

• The user’s expectations — what the user thinks he or she will find.

• The user’s background — the experience of the person performing the enhancement.

This chapter will briefly discuss the following enhancement techniques available with ERDAS IMAGINE:

• Data correction — radiometric and geometric correction

• Radiometric enhancement — enhancing images based on the values of individual pixels

• Spatial enhancement — enhancing images based on the values of individual and neighboring pixels

• Spectral enhancement — enhancing images by transforming the values of each pixel on a multiband basis


• Hyperspectral image processing — an extension of the techniques used for multispectral datasets

• Fourier analysis — techniques for eliminating periodic noise in imagery

• Radar imagery enhancement — techniques specifically designed for enhancing radar imagery

See "Bibliography" on page 635 to find current literature which will provide a more detailed discussion of image processing enhancement techniques.

Display vs. File Enhancement

With ERDAS IMAGINE, image enhancement may be performed:

• temporarily, upon the image that is displayed in the Viewer (by manipulating the function and display memories), or

• permanently, upon the image data in the data file.

Enhancing a displayed image is much faster than enhancing an image on disk. If one is looking for certain visual effects, it may be beneficial to perform some trial-and-error enhancement techniques on the display. Then, when the desired results are obtained, the values that are stored in the display device memory can be used to make the same changes to the data file.

For more information about displayed images and the memory of the display device, see "CHAPTER 4: Image Display".

Spatial Modeling Enhancements

Two types of models for enhancement can be created in ERDAS IMAGINE:

• Graphical models — use Model Maker (Spatial Modeler) to easily, and with great flexibility, construct models which can be used to enhance the data.

• Script models — for even greater flexibility, use the Spatial Modeler Language to construct models in script form. The Spatial Modeler Language (SML) enables the user to write scripts which can be written, edited, and run from the Spatial Modeler component or directly from the command line. The user can edit models created with Model Maker using the Spatial Modeling Language or Model Maker.

Although a graphical model and a script model look different, they will produce the same results when applied.

Image Interpreter

ERDAS IMAGINE supplies many algorithms constructed as models, ready to be applied with user-input parameters at the touch of a button. These graphical models, created with Model Maker, are listed as menu functions in the Image Interpreter. These functions are mentioned throughout this chapter. Just remember, these are modeling functions which can be edited and adapted as needed with Model Maker or the Spatial Modeler Language.


See "CHAPTER 10: Geographic Information Systems"for more information on RasterModeling.

The modeling functions available for enhancement in Image Interpreter are brieflydescribed in Table 16.

Table 16: Description of Modeling Functions Available for Enhancement

SPATIAL ENHANCEMENT: These functions enhance the image using the values of individual and surrounding pixels.

Convolution: Uses a matrix to average small sets of pixels across an image.

Non-directional Edge: Averages the results from two orthogonal 1st derivative edge detectors.

Focal Analysis: Enables the user to perform one of several analyses on class values in an .img file using a process similar to convolution filtering.

Texture: Defines texture as a quantitative characteristic in an image.

Adaptive Filter: Varies the contrast stretch for each pixel depending upon the DN values in the surrounding moving window.

Statistical Filter: Produces the pixel output DN by averaging pixels within a moving window that fall within a statistically defined range.

Resolution Merge: Merges imagery of differing spatial resolutions.

Crisp: Sharpens the overall scene luminance without distorting the thematic content of the image.

RADIOMETRIC ENHANCEMENT: These functions enhance the image using the values of individual pixels within each band.

LUT (Lookup Table) Stretch: Creates an output image that contains the data values as modified by a lookup table.

Histogram Equalization: Redistributes pixel values with a nonlinear contrast stretch so that there are approximately the same number of pixels with each value within a range.

Histogram Match: Mathematically determines a lookup table that will convert the histogram of one image to resemble the histogram of another.

Brightness Inversion: Allows both linear and nonlinear reversal of the image intensity range.

Haze Reduction*: De-hazes Landsat 4 and 5 Thematic Mapper data and panchromatic data.

Noise Reduction*: Removes noise using an adaptive filter.

Destripe TM Data: Removes striping from a raw TM4 or TM5 data file.


* Indicates functions that are not graphical models.

NOTE: There are other Image Interpreter functions that do not necessarily apply to image enhancement.

SPECTRAL ENHANCEMENT: These functions enhance the image by transforming the values of each pixel on a multiband basis.

Principal Components: Compresses redundant data values into fewer bands which are often more interpretable than the source data.

Inverse Principal Components: Performs an inverse principal components analysis.

Decorrelation Stretch: Applies a contrast stretch to the principal components of an image.

Tasseled Cap: Rotates the data structure axes to optimize data viewing for vegetation studies.

RGB to IHS: Transforms red, green, blue values to intensity, hue, saturation values.

IHS to RGB: Transforms intensity, hue, saturation values to red, green, blue values.

Indices: Performs band ratios that are commonly used in mineral and vegetation studies.

Natural Color: Simulates natural color for TM data.

FOURIER ANALYSIS: These functions enhance the image by applying a Fourier Transform to the data. NOTE: These functions are currently view only—no manipulation is allowed.

Fourier Transform*: Enables the user to utilize a highly efficient version of the Discrete Fourier Transform (DFT).

Fourier Transform Editor*: Enables the user to edit Fourier images using many interactive tools and filters.

Inverse Fourier Transform*: Computes the inverse two-dimensional Fast Fourier Transform of the spectrum stored.

Fourier Magnitude*: Converts the Fourier Transform image into the more familiar Fourier Magnitude image.

Periodic Noise Removal*: Automatically removes striping and other periodic noise from images.

Homomorphic Filter*: Enhances imagery using an illumination/reflectance model.


Correcting Data

Each generation of sensors shows improved data acquisition and image quality over previous generations. However, some anomalies still exist that are inherent to certain sensors and can be corrected by applying mathematical formulas derived from the distortions (Lillesand and Kiefer 1979). In addition, the natural distortion that results from the curvature and rotation of the earth in relation to the sensor platform produces distortions in the image data, which can also be corrected.

Radiometric Correction

Generally, there are two types of data correction: radiometric and geometric. Radiometric correction addresses variations in the pixel intensities (digital numbers, or DNs) that are not caused by the object or scene being scanned. These variations include:

• differing sensitivities or malfunctioning of the detectors

• topographic effects

• atmospheric effects

Geometric Correction

Geometric correction addresses errors in the relative positions of pixels. These errors are induced by:

• sensor viewing geometry

• terrain variations

Because of the differences in radiometric and geometric correction between traditional, passively detected visible/infrared imagery and actively acquired radar imagery, the two will be discussed separately. See "Radar Imagery Enhancement" on page 191.

Radiometric Correction - Visible/Infrared Imagery

Striping

Striping or banding will occur if a detector goes out of adjustment—that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover.

Some Landsat 1, 2, and 3 data have striping every sixth line, due to improper calibration of some of the 24 detectors that were used by the MSS. The stripes are not constant data values, nor is there a constant error factor or bias. The differing response of the errant detector is a complex function of the data value sensed.

This problem has been largely eliminated in the newer sensors. Various algorithms have been advanced in current literature to help correct this problem in the older data. Among these algorithms are simple along-line convolution, high-pass filtering, and forward and reverse principal component transformations (Crippen 1989).


Data from airborne multi- or hyperspectral imaging scanners will also show a pronounced striping pattern due to varying offsets in the multi-element detectors. This effect can be further exacerbated by unfavorable sun angle. These artifacts can be minimized by correcting each scan line to a scene-derived average (Kruse 1988).

Use the Image Interpreter or the Spatial Modeler to implement algorithms to eliminate striping. The Spatial Modeler editing capabilities allow you to adapt the algorithms to best address the data. The Radar Adjust Brightness function will also correct some of these problems.

Line Dropout

Another common remote sensing device error is line dropout. Line dropout occurs when a detector either completely fails to function, or becomes temporarily saturated during a scan (like the effect of a camera flash on the retina). The result is a line or partial line of data with higher data file values, creating a horizontal streak until the detector(s) recovers, if it recovers.

Line dropout is usually corrected by replacing the bad line with a line of estimated data file values, based on the lines above and below it.

Atmospheric Effects

The effects of the atmosphere upon remotely-sensed data are not considered “errors,” since they are part of the signal received by the sensing device (Bernstein 1983). However, it is often important to remove atmospheric effects, especially for scene matching and change detection analysis.

Over the past 20 years a number of algorithms have been developed to correct for variations in atmospheric transmission. Four categories will be mentioned here:

• dark pixel subtraction

• radiance to reflectance conversion

• linear regressions

• atmospheric modeling

Use the Spatial Modeler to construct the algorithms for these operations.

Dark Pixel Subtraction

The dark pixel subtraction technique assumes that the pixel of lowest DN in each band should really be zero and hence its radiometric value (DN) is the result of atmosphere-induced additive errors. These assumptions are very tenuous and recent work indicates that this method may actually degrade rather than improve the data (Crane 1971, Chavez et al 1977).
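The operation itself is simple to express. A minimal sketch, assuming plain NumPy arrays of DNs rather than any ERDAS file format, subtracts each band's minimum value:

```python
import numpy as np

def dark_pixel_subtraction(image):
    """Subtract each band's minimum DN from that band, on the assumption
    that the darkest pixel should have been zero.  'image' is a
    (bands, rows, cols) array; an array of the same shape is returned."""
    corrected = np.empty_like(image)
    for b in range(image.shape[0]):
        dark_value = image[b].min()      # offset attributed to the atmosphere
        corrected[b] = image[b] - dark_value
    return corrected

# Example: a tiny two-band image with additive offsets of 12 and 30.
bands = np.array([[[12, 20], [35, 50]],
                  [[30, 42], [61, 90]]])
print(dark_pixel_subtraction(bands))
```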


Radiance to Reflectance Conversion

Radiance to reflectance conversion requires knowledge of the true ground reflectance of at least two targets in the image. These can come from either at-site reflectance measurements or they can be taken from a reflectance table for standard materials. The latter approach involves assumptions about the targets in the image.

Linear Regressions

A number of methods using linear regressions have been tried. These techniques use bispectral plots and assume that the position of any pixel along that plot is strictly a result of illumination. The slope then equals the relative reflectivities for the two spectral bands. At an illumination of zero, the regression plots should pass through the bispectral origin. Offsets from this represent the additive extraneous components, due to atmosphere effects (Crippen 1987).

Atmospheric Modeling

Atmospheric modeling is computationally complex and requires either assumptions or inputs concerning the atmosphere at the time of imaging. The atmospheric model used to define the computations is frequently Lowtran or Modtran (Kneizys et al 1988). This model requires inputs such as atmospheric profile (pressure, temperature, water vapor, ozone, etc.), aerosol type, elevation, solar zenith angle, and sensor viewing angle.

Accurate atmospheric modeling is essential in preprocessing hyperspectral data sets where bandwidths are typically 10 nm or less. These narrow bandwidth corrections can then be combined to simulate the much wider bandwidths of Landsat or SPOT sensors (Richter 1990).

Geometric Correction

As previously noted, geometric correction is applied to raw sensor data to correct errors of perspective due to the earth’s curvature and sensor motion. Today, some of these errors are commonly removed at the sensor’s data processing center. But in the past, some data from Landsat MSS 1, 2, and 3 were not corrected before distribution.

Many visible/infrared sensors are not nadir-viewing; they look to the side. For some applications, such as stereo viewing or DEM generation, this is an advantage. For other applications, it is a complicating factor.

In addition, even a nadir-viewing sensor is viewing only the scene center at true nadir. Other pixels, especially those on the view periphery, are viewed off-nadir. For scenes covering very large geographic areas (such as AVHRR), this can be a significant problem.

This and other factors, such as earth curvature, result in geometric imperfections in the sensor image. Terrain variations have the same distorting effect but on a smaller (pixel-by-pixel) scale. These factors can be addressed by rectifying the image to a map.

See "CHAPTER 8: Rectification" for more information on geometric correction usingrectification and "CHAPTER 7: Photogrammetric Concepts" for more information onorthocorrection.


A more rigorous geometric correction utilizes a DEM and sensor position information to correct these distortions. This is orthocorrection.

Radiometric Enhancement

Radiometric enhancement deals with the individual values of the pixels in the image. It differs from spatial enhancement (discussed on page 143), which takes into account the values of neighboring pixels.

Depending on the points and the bands in which they appear, radiometric enhancements that are applied to one band may not be appropriate for other bands. Therefore, the radiometric enhancement of a multiband image can usually be considered as a series of independent, single-band enhancements (Faust 1989).

Radiometric enhancement usually does not bring out the contrast of every pixel in an image. Contrast can be lost between some pixels, while gained on others.

Figure 46: Histograms of Radiometrically Enhanced Data

In Figure 46, the range between j and k in the histogram of the original data is about one third of the total range of the data. When the same data are radiometrically enhanced, the range between j and k can be widened. Therefore, the pixels between j and k gain contrast—it is easier to distinguish different brightness values in these pixels.

However, the pixels outside the range between j and k are more grouped together than in the original histogram, to compensate for the stretch between j and k. Contrast among these pixels is lost.

Each histogram in Figure 46 plots frequency against data file values from 0 to 255; j and k are reference points.


Contrast Stretching

When radiometric enhancements are performed on the display device, the transformation of data file values into brightness values is illustrated by the graph of a lookup table.

For example, Figure 47 shows the graph of a lookup table that increases the contrast of data file values in the middle range of the input data (the range within the brackets). Note that the input range within the bracket is narrow, but the output brightness values for the same pixels are stretched over a wider range. This process is called contrast stretching.

Figure 47: Graph of a Lookup Table

Notice that the graph line with the steepest (highest) slope brings out the most contrast by stretching output values farther apart.

Linear and Nonlinear

The terms linear and nonlinear, when describing types of spectral enhancement, refer to the function that is applied to the data to perform the enhancement. A piecewise linear stretch uses a polyline function to increase contrast to varying degrees over different ranges of the data, as in Figure 48.

Figure 48: Enhancement with Lookup Tables

Figures 47 and 48 each plot output brightness values (0 to 255) against input data file values (0 to 255); Figure 48 shows linear, nonlinear, and piecewise linear lookup-table functions.


Linear Contrast Stretch

A linear contrast stretch is a simple way to improve the visible contrast of an image. It is often necessary to contrast-stretch raw image data, so that they can be seen on the display.

In most raw data, the data file values fall within a narrow range—usually a range much narrower than the display device is capable of displaying. That range can be expanded to utilize the total range of the display device (usually 0 to 255).

A two standard deviation linear contrast stretch is automatically applied to images displayed in the IMAGINE Viewer.

Nonlinear Contrast Stretch

A nonlinear spectral enhancement can be used to gradually increase or decrease contrast over a range, instead of applying the same amount of contrast (slope) across the entire image. Usually, nonlinear enhancements bring out the contrast in one range while decreasing the contrast in other ranges. The graph of the function in Figure 49 shows one example.

Figure 49: Nonlinear Radiometric Enhancement

Piecewise Linear Contrast Stretch

A piecewise linear contrast stretch allows for the enhancement of a specific portion of data by dividing the lookup table into three sections: low, middle, and high. It enables the user to create a number of straight line segments which can simulate a curve. The user can enhance the contrast or brightness of any section in a single color gun at a time. This technique is very useful for enhancing image areas in shadow or other areas of low contrast.

In ERDAS IMAGINE, the Piecewise Linear Contrast function is set up so that there are always pixels in each data file value from 0 to 255. You can manipulate the percentage of pixels in a particular range but you cannot eliminate a range of data file values.

Figure 49 plots output brightness values (0 to 255) against input data file values (0 to 255) for one nonlinear function.


A piecewise linear contrast stretch normally follows two rules:

1) The data values are continuous; there can be no break in the values between High, Middle, and Low. Range specifications will adjust in relation to any changes to maintain the data value range.

2) The data values specified can go only in an upward, increasing direction, as shown in Figure 50.

Figure 50: Piecewise Linear Contrast Stretch

The contrast value for each range represents the percent of the available output range that particular range occupies. The brightness value for each range represents the middle of the total range of brightness values occupied by that range. Since rules 1 and 2 above are enforced, as the contrast and brightness values are changed, they may affect the contrast and brightness of other ranges. For example, if the contrast of the low range increases, it forces the contrast of the middle to decrease.
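As an illustration of the general idea (not the ERDAS IMAGINE dialog itself), a piecewise linear lookup table can be built from hypothetical breakpoints; each segment's slope sets the contrast over its input range:

```python
import numpy as np

def piecewise_linear_lut(breakpoints):
    """Build a 256-entry lookup table from (input, output) breakpoints.
    Values between breakpoints are interpolated linearly, so each
    segment's slope controls the contrast over that input range."""
    inputs = [p[0] for p in breakpoints]
    outputs = [p[1] for p in breakpoints]
    return np.interp(np.arange(256), inputs, outputs).astype(np.uint8)

# Hypothetical breakpoints: stretch the middle range (50-150) over most
# of the output range, compressing the low and high ranges.
lut = piecewise_linear_lut([(0, 0), (50, 20), (150, 230), (255, 255)])

# Applying the LUT to a band of 8-bit data file values:
band = np.array([[10, 60], [120, 200]], dtype=np.uint8)
print(lut[band])
```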

Contrast Stretch on the Display

Usually, a contrast stretch is performed on the display device only, so that the data file values are not changed. Lookup tables are created that convert the range of data file values to the maximum range of the display device. The user can then edit and save the contrast stretch values and lookup tables as part of the raster data .img file. These values will be loaded to the Viewer as the default display values the next time the image is displayed.

In ERDAS IMAGINE, you can permanently change the data file values to the lookup table values. Use the Image Interpreter LUT Stretch function to create an .img output file with the same data values as the displayed contrast stretched image.

See "CHAPTER 1: Raster Data" for more information on the data contained in .img files.

Figure 50 plots the LUT value (as a percent of the output range) against the data value range from 0 to 255, divided into Low, Middle, and High sections.


The statistics in the .img file contain the mean, standard deviation, and other statistics on each band of data. The mean and standard deviation are used to determine the range of data file values to be translated into brightness values or new data file values. The user can specify the number of standard deviations from the mean that are to be used in the contrast stretch. Usually the data file values that are two standard deviations above and below the mean are used. If the data have a normal distribution, then this range represents approximately 95 percent of the data.

The mean and standard deviation are used instead of the minimum and maximum data file values, because the minimum and maximum data file values are usually not representative of most of the data. (A notable exception occurs when the feature being sought is in shadow. The shadow pixels are usually at the low extreme of the data file values, outside the range of two standard deviations from the mean.)

The use of these statistics in contrast stretching is discussed and illustrated in "CHAPTER 4: Image Display". Statistics terms are discussed in "APPENDIX A: Math Topics".
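A minimal sketch of this statistics-based stretch, assuming plain NumPy arrays in place of .img band statistics and an 8-bit display range:

```python
import numpy as np

def two_std_dev_stretch(band, n_std=2.0):
    """Linearly stretch data file values that lie within n_std standard
    deviations of the band mean to the display range 0-255; values
    outside that range are clipped."""
    mean, std = band.mean(), band.std()
    low, high = mean - n_std * std, mean + n_std * std
    stretched = (band.astype(np.float64) - low) / (high - low) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Example: raw values concentrated in a narrow range around 100.
band = np.random.normal(loc=100, scale=10, size=(4, 4))
print(two_std_dev_stretch(band))
```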

Varying the Contrast Stretch

There are variations of the contrast stretch that can be used to change the contrast of values over a specific range, or by a specific amount. By manipulating the lookup tables as in Figure 51, the maximum contrast in the features of an image can be brought out.

Figure 51 shows how the contrast stretch manipulates the histogram of the data, increasing contrast in some areas and decreasing it in others. This is also a good example of a piecewise linear contrast stretch, created by adding breakpoints to the histogram.


Figure 51: Contrast Stretch By Manipulating Lookup Tables and the Effect on the Output Histogram

Each graph in Figure 51 plots output brightness values against input data file values (both 0 to 255), together with the input histogram and the resulting output histogram:

1. Linear stretch. Values are clipped at 255.

2. A breakpoint is added to the linear function, redistributing the contrast.

3. Another breakpoint is added. Contrast at the peak of the histogram continues to increase.

4. The breakpoint at the top of the function is moved so that values are not clipped.


Histogram Equalization

Histogram equalization is a nonlinear stretch that redistributes pixel values so that there are approximately the same number of pixels with each value within a range. The result approximates a flat histogram. Therefore, contrast is increased at the “peaks” of the histogram and lessened at the “tails.”

Histogram equalization can also separate pixels into distinct groups, if there are few output values over a wide range. This can have the visual effect of a crude classification.

Figure 52: Histogram Equalization

To perform a histogram equalization, the pixel values of an image (either data file values or brightness values) are reassigned to a certain number of bins, which are simply numbered sets of pixels. The pixels are then given new values, based upon the bins to which they are assigned.

The following parameters are entered:

• N - the number of bins to which pixel values can be assigned. If there are many bins or many pixels with the same value(s), some bins may be empty.

• M - the maximum of the range of the output values. The range of the output values will be from 0 to M.

The total number of pixels is divided by the number of bins, equaling the number of pixels per bin, as shown in the following equation:

EQUATION 1

A = T / N

where:

N = the number of bins
T = the total number of pixels in the image
A = the equalized number of pixels per bin


The pixels of each input value are assigned to bins, so that the number of pixels in each bin is as close to A as possible. Consider Figure 53:

Figure 53: Histogram Equalization Example

The histogram bars in Figure 53 show frequencies of 5, 5, 10, 15, 60, 60, 40, 30, 10, and 5 pixels for data file values 0 through 9.

There are 240 pixels represented by this histogram. To equalize this histogram to 10 bins, there would be:

240 pixels / 10 bins = 24 pixels per bin = A

To assign pixels to bins, the following equation is used:

EQUATION 2

Bi = int ( ( (H1 + H2 + ... + Hi-1) + Hi / 2 ) / A )

where:

A = equalized number of pixels per bin (see above)
Hi = the number of pixels with value i (histogram)
int = integer function (truncating real numbers to integer)
Bi = bin number for pixels with value i

Source: Modified from Gonzalez and Wintz 1977


The 10 bins are rescaled to the range 0 to M. In this example, M = 9, since the input values ranged from 0 to 9, so that the equalized histogram can be compared to the original. The output histogram of this equalized image looks like Figure 54:

Figure 54: Equalized Histogram

In Figure 54, the numbers inside the bars are the input data file values, and A = 24.

Effect on Contrast

By comparing the original histogram of the example data with the one above, one can see that the enhanced image gains contrast in the “peaks” of the original histogram—for example, the input range of 3 to 7 is stretched to the range 1 to 8. However, data values at the “tails” of the original histogram are grouped together—input values 0 through 2 all have the output value of 0. So, contrast among the “tail” pixels, which usually make up the darkest and brightest regions of the input image, is lost.

The resulting histogram is not exactly flat, since the pixels can rarely be grouped together into bins with an equal number of pixels. Sets of pixels with the same value are never split up to form equal bins.
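The binning can be illustrated with a short sketch that applies Equations 1 and 2 to the example histogram; the final rescaling of bin numbers to the output range 0 to M is an assumption for the general case (here it is the identity, since M = 9 and there are 10 bins):

```python
import numpy as np

def equalize(values, n_bins=10, m=9):
    """Histogram equalization following the equations above:
    A = T / N, and Bi = int((H1 + ... + H(i-1) + Hi/2) / A)."""
    values = np.asarray(values)
    hist = np.bincount(values, minlength=values.max() + 1)  # Hi per value i
    A = values.size / float(n_bins)                         # pixels per bin
    cumulative = np.cumsum(hist) - hist                     # sum of Hk, k < i
    bins = ((cumulative + hist / 2.0) / A).astype(int)      # bin Bi per value i
    # Rescale bin numbers 0..N-1 to the output range 0..M (assumption).
    output = np.round(bins * m / float(n_bins - 1)).astype(int)
    return output[values]

# The example histogram: 240 pixels with data file values 0-9.
counts = [5, 5, 10, 15, 60, 60, 40, 30, 10, 5]
pixels = np.repeat(np.arange(10), counts)
out = equalize(pixels, n_bins=10, m=9)
print(np.bincount(out, minlength=10))   # the equalized histogram
```

Running this reproduces the behavior described above: input values 0 through 2 collapse to output value 0, while the peak values 3 through 7 spread across output values 1 to 8.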

Level Slice

A level slice is similar to a histogram equalization in that it divides the data into equal amounts. A level slice on a true color display creates a “stair-stepped” lookup table. The effect on the data is that input file values are grouped together at regular intervals into a discrete number of levels, each with one output brightness value.

To perform a true color level slice, the user must specify a range for the output brightness values and a number of output levels. The lookup table is then “stair-stepped” so that there is an equal number of input pixels in each of the output levels.
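One way to sketch such a stair-stepped lookup table, assuming that percentile breakpoints are an acceptable way to place an equal number of input pixels in each level:

```python
import numpy as np

def level_slice_lut(band, n_levels, out_min=0, out_max=255):
    """Build a stair-stepped lookup table that places roughly the same
    number of input pixels in each of n_levels output levels."""
    # Breakpoints chosen so each slice holds about the same pixel count.
    percentiles = np.linspace(0, 100, n_levels + 1)[1:-1]
    thresholds = np.percentile(band, percentiles)
    # Each input value is assigned to a level, then to one output value.
    levels = np.digitize(np.arange(256), thresholds)
    outputs = np.linspace(out_min, out_max, n_levels)
    return outputs[levels].astype(np.uint8)

band = np.random.randint(0, 256, size=(100, 100), dtype=np.uint8)
lut = level_slice_lut(band, n_levels=8)
sliced = lut[band]
print(np.unique(sliced))   # only 8 distinct output brightness values
```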

Histogram Matching

Histogram matching is the process of determining a lookup table that will convert the histogram of one image to resemble the histogram of another. Histogram matching is useful for matching data of the same or adjacent scenes that were scanned on separate days, or are slightly different because of sun angle or atmospheric effects. This is especially useful for mosaicking or change detection.


To achieve good results with histogram matching, the two input images should have similar characteristics:

• The general shape of the histogram curves should be similar.

• Relative dark and light features in the image should be the same.

• For some applications, the spatial resolution of the data should be the same.

• The relative distributions of land covers should be about the same, even when matching scenes that are not of the same area. If one image has clouds and the other does not, then the clouds should be “removed” before matching the histograms. This can be done using the Area of Interest (AOI) function. The AOI function is available from the Viewer menu bar.

In ERDAS IMAGINE, histogram matching is performed band to band (e.g., band 2 of one image is matched to band 2 of the other image, etc.).

To match the histograms, a lookup table is mathematically derived, which serves as a function for converting one histogram to the other, as illustrated in Figure 55.

Figure 55: Histogram Matching

Source histogram (a), mapped through the lookup table (b), approximates model histogram (c).

Each histogram in Figure 55 plots frequency against input values from 0 to 255.


Brightness Inversion

The brightness inversion functions produce images that have the opposite contrast of the original image. Dark detail becomes light, and light detail becomes dark. This can also be used to invert a negative image that has been scanned and produce a positive image.

Brightness inversion has two options: inverse and reverse. Both options convert the input data range (commonly 0 - 255) to 0 - 1.0. A min-max remapping is used to simultaneously stretch the image and handle any input bit format. The output image is in floating point format, so a min-max stretch is used to convert the output image into 8-bit format.

Inverse is useful for emphasizing detail that would otherwise be lost in the darkness of the low DN pixels. This function applies the following algorithm:

DNout = 1.0 if 0.0 < DNin < 0.1

DNout = 0.1 / DNin if 0.1 < DNin < 1

Reverse is a linear function that simply reverses the DN values:

DNout = 1.0 - DNin

Source: Pratt 1986


Spatial Enhancement

While radiometric enhancements operate on each pixel individually, spatial enhancement modifies pixel values based on the values of surrounding pixels. Spatial enhancement deals largely with spatial frequency, which is the difference between the highest and lowest values of a contiguous set of pixels. Jensen (1986) defines spatial frequency as “the number of changes in brightness value per unit distance for any particular part of an image.”

Consider the examples in Figure 56:

• zero spatial frequency - a flat image, in which every pixel has the same value

• low spatial frequency - an image consisting of a smoothly varying gray scale

• highest spatial frequency - an image consisting of a checkerboard of black and white pixels


Figure 56: Spatial Frequencies

This section contains a brief description of the following:

• Convolution, Crisp, and Adaptive filtering

• Resolution merging

See "Radar Imagery Enhancement" on page 191 for a discussion of Edge Detection and TextureAnalysis. These spatial enhancement techniques can be applied to any type of data.


Convolution Filtering

Convolution filtering is the process of averaging small sets of pixels across an image. Convolution filtering is used to change the spatial frequency characteristics of an image (Jensen 1996).

A convolution kernel is a matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels in a particular way. The numbers in the matrix serve to weight this average toward particular pixels. These numbers are often called coefficients, because they are used as such in the mathematical equations.

In ERDAS IMAGINE, there are four ways you can apply convolution filtering to an image:

1) The kernel filtering option in the Viewer
2) The Convolution function in Image Interpreter
3) The Radar Edge Enhancement function
4) The Convolution function in Model Maker

Filtering is a broad term, referring to the altering of spatial or spectral features for image enhancement (Jensen 1996). Convolution filtering is one method of spatial filtering. Some texts may use the terms synonymously.

Convolution Example

To understand how one pixel is convolved, imagine that the convolution kernel is overlaid on the data file values of the image (in one band), so that the pixel to be convolved is in the center of the window.

Figure 57: Applying a Convolution Kernel

Figure 57 shows a 3 × 3 convolution kernel being applied to the pixel in the third column, third row of the sample data (the pixel that corresponds to the center of the kernel).

Data:

2 8 6 6 6
2 8 6 6 6
2 2 8 6 6
2 2 2 8 6
2 2 2 2 8

Kernel:

-1 -1 -1
-1 16 -1
-1 -1 -1


To compute the output value for this pixel, each value in the convolution kernel is multiplied by the image pixel value that corresponds to it. These products are summed, and the total is divided by the sum of the values in the kernel, as shown here:

integer ( ((-1 × 8) + (-1 × 6) + (-1 × 6) + (-1 × 2) + (16 × 8) + (-1 × 6) + (-1 × 2) + (-1 × 2) + (-1 × 8)) / (-1 + -1 + -1 + -1 + 16 + -1 + -1 + -1 + -1) )

= int ((128 - 40) / (16 - 8))

= int (88 / 8) = int (11) = 11

When the 2 × 2 set of pixels in the center of this 5 × 5 image is convolved, the output values are:

Figure 58: Output Values for Convolution Kernel

The kernel used in this example is a high frequency kernel, as explained below. It is important to note that the relatively lower values become lower, and the higher values become higher, thus increasing the spatial frequency of the image.

      1   2   3   4   5
1     2   8   6   6   6
2     2  11   5   6   6
3     2   0  11   6   6
4     2   2   2   8   6
5     2   2   2   2   8


Convolution Formula

The following formula is used to derive an output data file value for the pixel being convolved (in the center):

V = ( Σ (i = 1 to q) Σ (j = 1 to q) fij × dij ) / F

where:

fij = the coefficient of a convolution kernel at position i,j (in the kernel)

dij = the data value of the pixel that corresponds to fij

q = the dimension of the kernel, assuming a square kernel (if q = 3, the kernel is 3 × 3)

F = either the sum of the coefficients of the kernel, or 1 if the sum of coefficients is zero

V = the output pixel value

In cases where V is less than 0, V is clipped to 0.

Source: Modified from Jensen 1996, Schowengerdt 1983

The sum of the coefficients (F) is used as the denominator of the equation above, so that the output values will be in relatively the same range as the input values. Since F cannot equal zero (division by zero is not defined), F is set to 1 if the sum is zero.
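A direct (unoptimized) sketch of this formula, using the sample data and kernel from the example above; unlike Figure 58, it convolves every interior pixel rather than only the central 2 × 2 set:

```python
import numpy as np

def convolve(data, kernel):
    """Convolve one band with a square kernel using the formula above:
    V = (sum of fij * dij) / F, where F is the kernel sum (or 1 if the
    sum is zero); negative results are clipped to 0."""
    q = kernel.shape[0]
    pad = q // 2
    F = kernel.sum()
    if F == 0:
        F = 1
    out = data.astype(np.int64)        # edge pixels keep their original values
    rows, cols = data.shape
    for r in range(pad, rows - pad):
        for c in range(pad, cols - pad):
            window = data[r - pad:r + pad + 1, c - pad:c + pad + 1]
            v = int((kernel * window).sum() / F)   # truncate to integer
            out[r, c] = max(v, 0)                  # clip negatives to 0
    return out

# The 5 x 5 sample data and high-frequency kernel from the example.
data = np.array([[2, 8, 6, 6, 6],
                 [2, 8, 6, 6, 6],
                 [2, 2, 8, 6, 6],
                 [2, 2, 2, 8, 6],
                 [2, 2, 2, 2, 8]])
kernel = np.array([[-1, -1, -1],
                   [-1, 16, -1],
                   [-1, -1, -1]])
print(convolve(data, kernel))
```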

Zero-Sum Kernels

Zero-sum kernels are kernels in which the sum of all coefficients in the kernel equals zero. When a zero-sum kernel is used, then the sum of the coefficients is not used in the convolution equation, as above. In this case, no division is performed (F = 1), since division by zero is not defined.

This generally causes the output values to be:

• zero in areas where all input values are equal (no edges)

• low in areas of low spatial frequency

• extreme (high values become much higher, low values become much lower) in areas of high spatial frequency


Therefore, a zero-sum kernel is an edge detector, which usually smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high, which is at the edges between homogeneous groups of pixels. The resulting image often consists of only edges and zeros.


Zero-sum kernels can be biased to detect edges in a particular direction. For example, this 3 × 3 kernel is biased to the south (Jensen 1996):

-1 -1 -1
 1 -2  1
 1  1  1

See the section on "Edge Detection" on page 200 for more detailed information.

High-Frequency Kernels

A high-frequency kernel, or high-pass kernel, has the effect of increasing spatial frequency:

-1 -1 -1
-1 16 -1
-1 -1 -1

High-frequency kernels serve as edge enhancers, since they bring out the edges between homogeneous groups of pixels. Unlike edge detectors (such as zero-sum kernels), they highlight edges and do not necessarily eliminate other features.

When this kernel is used on a set of pixels in which a relatively low value is surrounded by higher values, like this...

BEFORE              AFTER
204 200 197         204 200 197
201 100 209         201   9 209
198 200 210         198 200 210

...the low value gets lower. Inversely, when the kernel is used on a set of pixels in which a relatively high value is surrounded by lower values...

BEFORE              AFTER
64  60  57          64  60  57
61 125  69          61 187  69
58  60  70          58  60  70

...the high value becomes higher. In either case, spatial frequency is increased by this kernel.


Low-Frequency Kernels

Below is an example of a low-frequency kernel, or low-pass kernel, which decreases spatial frequency:

1 1 1
1 1 1
1 1 1

This kernel simply averages the values of the pixels, causing them to be more homogeneous (homogeneity is low spatial frequency). The resulting image looks either smoother or more blurred.

For information on applying filters to thematic layers, see "Chapter 9: Geographic Information Systems."

Crisp

The Crisp filter sharpens the overall scene luminance without distorting the interband variance content of the image. This is a useful enhancement if the image is blurred due to atmospheric haze, rapid sensor motion, or a broad point spread function of the sensor.

The algorithm used for this function is:

1) Calculate principal components of multiband input image.

2) Convolve PC-1 with summary filter.

3) Retransform to RGB space.

Source: ERDAS (Faust 1993)

The logic of the algorithm is that the first principal component (PC-1) of an image is assumed to contain the overall scene luminance. The other PC’s represent intra-scene variance. Thus, the user can sharpen only PC-1 and then reverse the principal components calculation to reconstruct the original image. Luminance is sharpened, but variance is retained.


Resolution Merge

The resolution of a specific sensor can refer to radiometric, spatial, spectral, or temporal resolution.

See "CHAPTER 1: Raster Data" for a full description of resolution types.

Landsat TM sensors have seven bands with a spatial resolution of 28.5 m. SPOT panchromatic has one broad band with very good spatial resolution—10 m. Combining these two images to yield a seven band data set with 10 m resolution would provide the best characteristics of both sensors.

A number of models have been suggested to achieve this image merge. Welch and Ehlers (1987) used forward-reverse RGB to IHS transforms, replacing I (from transformed TM data) with the SPOT panchromatic image. However, this technique is limited to three bands (R,G,B).

Chavez (1991), among others, uses the forward-reverse principal components transforms with the SPOT image, replacing PC-1.

In the above two techniques, it is assumed that the intensity component (PC-1 or I) is spectrally equivalent to the SPOT panchromatic image, and that all the spectral information is contained in the other PC’s or in H and S. Since SPOT data do not cover the full spectral range that TM data do, this assumption does not strictly hold. It is unacceptable to resample the thermal band (TM6) based on the visible (SPOT panchromatic) image.

Another technique (Schowengerdt 1980) combines a high frequency image derived from the high spatial resolution data (i.e., SPOT panchromatic) additively with the high spectral resolution Landsat TM image.

The Resolution Merge function has two different options for resampling low spatial resolution data to a higher spatial resolution while retaining spectral information:

• forward-reverse principal components transform

• multiplicative

Principal Components Merge

Because a major goal of this merge is to retain the spectral information of the six TM bands (1-5, 7), this algorithm is mathematically rigorous. It is assumed that:

• PC-1 contains only overall scene luminance; all interband variation is contained in the other 5 PCs, and

• Scene luminance in the SWIR bands is identical to visible scene luminance.

With the above assumptions, the forward transform into principal components is made. PC-1 is removed and its numerical range (min to max) is determined. The high spatial resolution image is then remapped so that its histogram shape is kept constant, but it is in the same numerical range as PC-1. It is then substituted for PC-1 and the reverse transform is applied. This remapping is done so that the mathematics of the reverse transform do not distort the thematic information (Welch and Ehlers 1987).
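A compact sketch of this substitution, assuming the panchromatic band has already been resampled to the multispectral grid and using a plain eigen-decomposition for the forward and reverse transforms (not the ERDAS IMAGINE implementation):

```python
import numpy as np

def pc_merge(multispectral, pan):
    """Principal components merge sketch: the high resolution 'pan' band
    is remapped to the numerical range of PC-1 and substituted for it
    before the reverse transform."""
    bands, rows, cols = multispectral.shape
    flat = multispectral.reshape(bands, -1).astype(np.float64)
    mean = flat.mean(axis=1, keepdims=True)
    centered = flat - mean
    # Forward transform: eigenvectors of the band covariance matrix.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]   # PC-1 first
    pcs = eigvecs.T @ centered
    # Remap the pan band so it spans the same numerical range as PC-1,
    # keeping its histogram shape constant.
    p = pan.reshape(-1).astype(np.float64)
    p = (p - p.min()) / (p.max() - p.min())
    pcs[0] = p * (pcs[0].max() - pcs[0].min()) + pcs[0].min()
    # Reverse transform back to the original band space.
    merged = eigvecs @ pcs + mean
    return merged.reshape(bands, rows, cols)

# Hypothetical 6-band image and a co-registered panchromatic band.
ms = np.random.randint(0, 255, (6, 64, 64)).astype(np.float64)
pan = np.random.randint(0, 255, (64, 64)).astype(np.float64)
print(pc_merge(ms, pan).shape)
```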


Multiplicative

The second technique in the Image Interpreter uses a simple multiplicative algorithm:

(DNTM1) (DNSPOT) = DNnew TM1

The algorithm is derived from the four component technique of Crippen (Crippen 1989). In this paper, it is argued that of the four possible arithmetic methods to incorporate an intensity image into a chromatic image (addition, subtraction, division, and multiplication), only multiplication is unlikely to distort the color.

However, in his study Crippen first removed the intensity component via band ratios, spectral indices, or PC transform. The algorithm shown above operates on the original image. The result is an increased presence of the intensity component. For many applications, this is desirable. Users involved in urban or suburban studies, city planning, utilities routing, etc., often want roads and cultural features (which tend toward high reflection) to be pronounced in the image.

Adaptive Filter

Contrast enhancement (image stretching) is a widely applicable standard image processing technique. However, even adjustable stretches like the piecewise linear stretch act on the scene globally. There are many circumstances where this is not the optimum approach. For example, coastal studies where much of the water detail is spread through a very low DN range and the land detail is spread through a much higher DN range would be such a circumstance. In these cases, a filter that “adapts” the stretch to the region of interest (the area within the moving window) would produce a better enhancement. Adaptive filters attempt to achieve this (Fahnestock and Schowengerdt 1983, Peli and Lim 1982, Schwartz 1977).

ERDAS IMAGINE supplies two adaptive filters with user-adjustable parameters. The Adaptive Filter function in Image Interpreter can be applied to undegraded images, such as SPOT, Landsat, and digitized photographs. The Image Enhancement function in Radar is better for degraded or difficult images.


Scenes to be adaptively filtered can be divided into three broad and overlapping categories:

• Undegraded — these scenes have good and uniform illumination overall. Given a choice, these are the scenes one would prefer to obtain from imagery sources such as EOSAT or SPOT.

• Low luminance — these scenes have an overall or regional less-than-optimum intensity. An underexposed photograph (scanned) or shadowed areas would be in this category. These scenes need an increase in both contrast and overall scene luminance.

• High luminance — these scenes are characterized by overall excessively high DN values. Examples of such circumstances would be an over-exposed (scanned) photograph or a scene with a light cloud cover or haze. These scenes need a decrease in luminance and an increase in contrast.

No one filter with fixed parameters can address this wide variety of conditions. In addition, multiband images may require different parameters for each band. Without the use of adaptive filters, the different bands would have to be separated into one-band files, enhanced, and then recombined.

For this function, the image is separated into high and low frequency component images. The low frequency image is considered to be overall scene luminance. These two component parts are then recombined in various relative amounts using multipliers derived from look-up tables. These LUTs are driven by the overall scene luminance:

DNout = K(DNHi) + DNLL

where:

K = user-selected contrast multiplier

Hi = high luminance (derives from the LUT)

LL = local luminance (derives from the LUT)

Figure 59: Local Luminance Intercept

Figure 59 plots local luminance against the low frequency image DN (both from 0 to 255); the intercept (I) is marked on the local luminance axis.


Figure 59 shows the local luminance intercept, which is the output luminance value that an input luminance value of 0 would be assigned.
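The recombination can be sketched as follows; a moving-window mean stands in for the low frequency (local luminance) image, and the lookup-table-driven multipliers are simplified to a constant K and a hypothetical linear luminance remap with an intercept:

```python
import numpy as np

def adaptive_stretch(band, window=7, K=1.5, intercept=30):
    """Simplified sketch of DNout = K(DN_Hi) + DN_LL: the band is split
    into low and high frequency components; the high frequency part is
    amplified by K and added to a remapped local luminance.  A real
    adaptive filter would derive K and the remap from lookup tables
    driven by the local luminance itself."""
    band = band.astype(np.float64)
    pad = window // 2
    padded = np.pad(band, pad, mode="edge")
    low = np.zeros_like(band)
    rows, cols = band.shape
    for r in range(rows):                     # moving-window mean
        for c in range(cols):
            low[r, c] = padded[r:r + window, c:c + window].mean()
    high = band - low                         # high frequency component
    # Hypothetical luminance remap: a linear lift with an intercept.
    local_luminance = intercept + (255.0 - intercept) / 255.0 * low
    out = K * high + local_luminance
    return np.clip(out, 0, 255).astype(np.uint8)

band = np.random.randint(0, 120, (50, 50)).astype(np.uint8)  # dark scene
print(adaptive_stretch(band).mean())
```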

Spectral Enhancement

The enhancement techniques that follow require more than one band of data. They can be used to:

• compress bands of data that are similar

• extract new bands of data that are more interpretable to the eye

• apply mathematical transforms and algorithms

• display a wider variety of information in the three available color guns (R,G,B)

In this documentation, some examples are illustrated with two-dimensional graphs. However, you are not limited to two-dimensional (two-band) data. ERDAS IMAGINE programs allow an unlimited number of bands to be used.

Keep in mind that processing such data sets can require a large amount of computer swap space. In practice, the principles outlined below apply to any number of bands.

Some of these enhancements can be used to prepare data for classification. However, this is a risky practice unless you are very familiar with your data, and the changes that you are making to it. Anytime you alter values, you risk losing some information.

Principal Components Analysis

Principal components analysis (or PCA) is often used as a method of data compression. It allows redundant data to be compacted into fewer bands—that is, the dimensionality of the data is reduced. The bands of PCA data are non-correlated and independent, and are often more interpretable than the source data (Jensen 1996; Faust 1989).

The process is easily explained graphically with an example of data in two bands. Below is an example of a two-band scatterplot, which shows the relationships of data file values in two bands. The values of one band are plotted against those of the other. If both bands have normal distributions, an ellipse shape results.

Scatterplots and normal distributions are discussed in "APPENDIX A: Math Topics."


Figure 60: Two Band Scatterplot


In an n-dimensional histogram, an ellipse (2 dimensions), ellipsoid (3 dimensions) or hyperellipsoid (more than 3) is formed if the distributions of each input band are normal or near normal. (The term “ellipse” will be used for general purposes here.)

To perform principal components analysis, the axes of the spectral space are rotated, changing the coordinates of each pixel in spectral space, and the data file values as well. The new axes are parallel to the axes of the ellipse.

First Principal Component

The length and direction of the widest transect of the ellipse are calculated using matrix algebra in a process explained below. The transect, which corresponds to the major (longest) axis of the ellipse, is called the first principal component of the data. The direction of the first principal component is the first eigenvector, and its length is the first eigenvalue (Taylor 1977).

A new axis of the spectral space is defined by this first principal component. The points in the scatterplot are now given new coordinates, which correspond to this new axis. Since, in spectral space, the coordinates of the points are the data file values, new data file values are derived from this process. These values are stored in the first principal component band of a new data file.

[Figure 60 plots Band B data file values against Band A data file values (0 to 255), with the histogram of each band drawn along its axis.]


Figure 61: First Principal Component

The first principal component shows the direction and length of the widest transect of the ellipse. Therefore, as an axis in spectral space, it measures the highest variation within the data. In Figure 62 it is easy to see that the first eigenvalue will always be greater than the ranges of the input bands, just as the hypotenuse of a right triangle must always be longer than the legs.

Figure 62: Range of First Principal Component

Successive Principal Components

The second principal component is the widest transect of the ellipse that is orthogonal (perpendicular) to the first principal component. As such, the second principal component describes the largest amount of variance in the data that is not already described by the first principal component (Taylor 1977). In a two-dimensional analysis, the second principal component corresponds to the minor axis of the ellipse.

[Figures 61 and 62 plot Band B data file values against Band A data file values (0 to 255). Figure 61 shows the first principal component as a new axis through the ellipse; Figure 62 compares the range of PC 1 with the ranges of Band A and Band B.]


Figure 63: Second Principal Component

In n dimensions, there are n principal components. Each successive principal component:

• is the widest transect of the ellipse that is orthogonal to the previous components in the n-dimensional space of the scatterplot (Faust 1989)

• accounts for a decreasing amount of the variation in the data which is not already accounted for by previous principal components (Taylor 1977)

Although there are n output bands in a principal components analysis, the first few bands account for a high proportion of the variance in the data—in some cases, almost 100%. Therefore, principal components analysis is useful for compressing data into fewer bands.

In other applications, useful information can be gathered from the principal component bands with the least variance. These bands can show subtle details in the image that were obscured by higher contrast in the original image. These bands may also show regular noise in the data (for example, the striping in old MSS data) (Faust 1989).

Computing Principal Components

To compute a principal components transformation, a linear transformation is performed on the data—meaning that the coordinates of each pixel in spectral space (the original data file values) are recomputed using a linear equation. The result of the transformation is that the axes in n-dimensional spectral space are shifted and rotated to be relative to the axes of the ellipse.


To perform the linear transformation, the eigenvectors and eigenvalues of the n principal components must be mathematically derived from the covariance matrix, as shown in the following equation:

E Cov E^T = V

where:

Cov = the covariance matrix

E = the matrix of eigenvectors

T = the transposition function

V = a diagonal matrix of eigenvalues, in which all non-diagonal elements are zeros

V is computed so that its non-zero elements are ordered from greatest to least, so that v1 > v2 > v3 ... > vn.

Source: Faust 1989

A full explanation of this computation can be found in Gonzalez and Wintz 1977.

The matrix V is the covariance matrix of the output principal component file. The zeros represent the covariance between bands (there is none), and the eigenvalues are the variance values for each band. Because the eigenvalues are ordered from v1 to vn, the first eigenvalue is the largest and represents the most variance in the data.

V = | v1  0   0   ...  0  |
    | 0   v2  0   ...  0  |
    | ...                 |
    | 0   0   0   ...  vn |


Each column of the resulting eigenvector matrix, E, describes a unit-length vector in spectral space, which shows the direction of the principal component (the ellipse axis). The numbers are used as coefficients in the following equation, to transform the original data file values into the principal component values:

Pe = Σ (k = 1 to n) dk Eke

where:

e = the number of the principal component (first, second, etc.)

Pe = the output principal component value for principal component band e

k = a particular input band

n = the total number of bands

dk = an input data file value in band k

E = the eigenvector matrix, such that Eke = the element of that matrix at row k, column e

Source: Modified from Gonzalez and Wintz 1977
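A minimal numpy sketch of this computation (the function name and the (rows, cols, bands) array layout are illustrative assumptions, not ERDAS IMAGINE code):

    import numpy as np

    def principal_components(image):
        """image: array of shape (rows, cols, n_bands) of data file values."""
        rows, cols, n = image.shape
        d = image.reshape(-1, n).astype(np.float64)    # pixels as band vectors
        cov = np.cov(d, rowvar=False)                  # covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)         # E Cov E^T = V
        order = np.argsort(eigvals)[::-1]              # v1 > v2 > ... > vn
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]
        pcs = d @ eigvecs                              # Pe = sum over k of dk * Eke
        return pcs.reshape(rows, cols, n), eigvals, eigvecs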

Decorrelation Stretch

The purpose of a contrast stretch is to:

• alter the distribution of the image DN values within the 0 - 255 range of the display device and

• utilize the full range of values in a linear fashion.

The decorrelation stretch applies a contrast stretch to the principal components of an image, not to the original image.

A principal components transform converts a multiband image into a set of mutually orthogonal images portraying inter-band variance. Depending on the DN ranges and the variance of the individual input bands, these new images (PCs) will occupy only a portion of the possible 0 - 255 data range.

Each PC is separately stretched to fully utilize the data range. The new stretched PC composite image is then retransformed to the original data areas.

Either the original PCs or the stretched PCs may be saved as a permanent image file for viewing after the stretch.

NOTE: Storage of PCs as floating point-single precision is probably appropriate in this case.
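The sequence can be sketched as follows, reusing the principal_components() helper from the sketch above; the simple per-PC min-max stretch is an assumption standing in for whatever stretch is chosen in practice:

    import numpy as np

    def decorrelation_stretch(image):
        """Stretch each principal component, then retransform to band space."""
        pcs, eigvals, eigvecs = principal_components(image)   # sketch above
        rows, cols, n = pcs.shape
        flat = pcs.reshape(-1, n)
        lo, hi = flat.min(axis=0), flat.max(axis=0)
        stretched = (flat - lo) / (hi - lo) * 255.0           # per-PC min-max stretch
        restored = stretched @ eigvecs.T                      # back to original axes
        return np.clip(restored.reshape(rows, cols, n), 0, 255)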


Tasseled Cap

The different bands in a multispectral image can be visualized as defining an N-dimensional space where N is the number of bands. Each pixel, positioned according to its DN value in each band, lies within the N-dimensional space. This pixel distribution is determined by the absorption/reflection spectra of the imaged material. This clustering of the pixels is termed the data structure (Crist & Kauth 1986).

See "CHAPTER 1: Raster Data" for more information on absorption/reflection spectra. See the discussion on "Principal Components Analysis" on page 153.

The data structure can be considered a multi-dimensional hyperellipsoid. The principal axes of this data structure are not necessarily aligned with the axes of the data space (defined as the bands of the input image). They are more directly related to the absorption spectra. For viewing purposes, it is advantageous to rotate the N-dimensional space, such that one or two of the data structure axes are aligned with the viewer X and Y axes. In particular, the user could view the axes that are largest for the data structure produced by the absorption peaks of special interest for the application.

For example, a geologist and a botanist are interested in different absorption features. They would want to view different data structures and therefore, different data structure axes. Both would benefit from viewing the data in a way that would maximize visibility of the data structure of interest.

The Tasseled Cap transformation offers a way to optimize data viewing for vegetation studies. Research has produced three data structure axes which define the vegetation information content (Crist et al 1986, Crist & Kauth 1986):

• Brightness — a weighted sum of all bands, defined in the direction of the principal variation in soil reflectance.

• Greenness — orthogonal to brightness, a contrast between the near-infrared and visible bands. Strongly related to the amount of green vegetation in the scene.

• Wetness — relates to canopy and soil moisture (Lillesand and Kiefer 1987).

A simple calculation (linear combination) then rotates the data space to present any of these axes to the user.

These rotations are sensor-dependent, but once defined for a particular sensor (say Landsat 4 TM), the same rotation will work for any scene taken by that sensor. The increased dimensionality (number of bands) of TM vs. MSS allowed Crist et al to define three additional axes, termed Haze, Fifth, and Sixth. Laurin (1986) has used this haze parameter to devise an algorithm to de-haze Landsat imagery.


The Tasseled Cap algorithm implemented in the Image Interpreter provides the correct coefficients for MSS, TM4, and TM5 imagery. For TM4, the calculations are:

Brightness = .3037 (TM1) + .2793 (TM2) + .4743 (TM3) + .5585 (TM4) + .5082 (TM5) + .1863 (TM7)

Greenness = -.2848 (TM1) - .2435 (TM2) - .5436 (TM3) + .7243 (TM4) + .0840 (TM5) - .1800 (TM7)

Wetness = .1509 (TM1) + .1973 (TM2) + .3279 (TM3) + .3406 (TM4) - .7112 (TM5) - .4572 (TM7)

Haze = .8832 (TM1) - .0819 (TM2) - .4580 (TM3) - .0032 (TM4) - .0563 (TM5) + .0130 (TM7)

Source: Modified from Crist et al 1986, Jensen 1996
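A small sketch of how these coefficients might be applied with numpy (illustrative only; an array holding TM bands 1-5 and 7 in that order is assumed):

    import numpy as np

    # Tasseled Cap coefficients for Landsat 4 TM (bands 1, 2, 3, 4, 5, 7),
    # taken from the equations above.
    TC_TM4 = np.array([
        [ 0.3037,  0.2793,  0.4743,  0.5585,  0.5082,  0.1863],   # Brightness
        [-0.2848, -0.2435, -0.5436,  0.7243,  0.0840, -0.1800],   # Greenness
        [ 0.1509,  0.1973,  0.3279,  0.3406, -0.7112, -0.4572],   # Wetness
    ])

    def tasseled_cap(tm_bands):
        """tm_bands: array of shape (rows, cols, 6) holding TM1-5 and TM7."""
        return tm_bands.astype(np.float64) @ TC_TM4.T   # (rows, cols, 3)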

RGB to IHS

The color monitors used for image display on image processing systems have three color guns. These correspond to red, green, and blue (R,G,B), the additive primary colors. When displaying three bands of a multiband data set, the viewed image is said to be in R,G,B space.

However, it is possible to define an alternate color space that uses Intensity (I), Hue (H), and Saturation (S) as the three positioned parameters (in lieu of R, G, and B). This system is advantageous in that it presents colors more nearly as perceived by the human eye.

• Intensity is the overall brightness of the scene (like PC-1) and varies from 0 (black) to 1 (white).

• Saturation represents the purity of color and also varies linearly from 0 to 1.

• Hue is representative of the color or dominant wavelength of the pixel. It varies from 0 at the red midpoint through green and blue back to the red midpoint at 360. It is a circular dimension (see Figure 64). In Figure 64, 0-255 is the selected range; it could be defined as any data range. However, hue must vary from 0-360 to define the entire sphere (Buchanan 1979).


Figure 64: Intensity, Hue, and Saturation Color Coordinate System

To use the RGB to IHS transform, use the RGB to IHS function from Image Interpreter.

The algorithm used in the Image Interpreter RGB to IHS transform is (Conrac 1980):

r = (M - R) / (M - m)
g = (M - G) / (M - m)
b = (M - B) / (M - m)

where:

R, G, B are each in the range of 0 to 1.0.
M = largest value, R, G, or B
m = least value, R, G, or B
r, g, b = the normalized values used in the hue equations below

NOTE: At least one of the r, g, or b values is 0, corresponding to the color with the largest value, and at least one of the r, g, or b values is 1, corresponding to the color with the least value.

[Figure 64 shows the IHS coordinate system: intensity is the vertical axis (0 to 255), hue is the circular dimension around it (passing from red through green and blue and back to red), and saturation increases outward from the intensity axis. Source: Buchanan 1979]


The equation for calculating intensity in the range of 0 to 1.0 is:

I = (M + m) / 2

The equations for calculating saturation in the range of 0 to 1.0 are:

If M = m, S = 0

If I < 0.5, S = (M - m) / (M + m)

If I > 0.5, S = (M - m) / (2 - M - m)

The equations for calculating hue in the range of 0 to 360 are:

If M = m, H = 0
If R = M, H = 60 (2 + b - g)
If G = M, H = 60 (4 + r - b)
If B = M, H = 60 (6 + g - r)

where:

R, G, B are each in the range of 0 to 1.0.
M = largest value, R, G, or B
m = least value, R, G, or B
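The forward transform can be sketched directly from the equations above (a numpy illustration with array inputs, not the Image Interpreter code):

    import numpy as np

    def rgb_to_ihs(R, G, B):
        """R, G, B are arrays scaled to 0-1; returns I, H (degrees), S."""
        M = np.maximum(np.maximum(R, G), B)
        m = np.minimum(np.minimum(R, G), B)
        rng = M - m
        I = (M + m) / 2.0
        # Saturation (the two expressions agree at I = 0.5)
        S = np.zeros_like(I)
        low = (rng > 0) & (I <= 0.5)
        high = (rng > 0) & (I > 0.5)
        S[low] = rng[low] / (M + m)[low]
        S[high] = rng[high] / (2.0 - M - m)[high]
        # Hue, using the normalized r, g, b values
        with np.errstate(invalid="ignore", divide="ignore"):
            r = (M - R) / rng
            g = (M - G) / rng
            b = (M - B) / rng
        H = np.zeros_like(I)
        H = np.where(B == M, 60 * (6 + g - r), H)
        H = np.where(G == M, 60 * (4 + r - b), H)
        H = np.where(R == M, 60 * (2 + b - g), H)
        H = np.where(rng == 0, 0.0, H)        # If M = m, H = 0
        return I, H, S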

IHS to RGB

The family of IHS to RGB transforms is intended as a complement to the standard RGB to IHS transform.

The values for hue (H), a circular dimension, are 0 - 360. However, depending on the dynamic range of the DN values of the input image, it is possible that I or S or both will occupy only a part of the 0 - 1 range. In the IHS to RGB algorithm, a min-max stretch is applied to either intensity (I), saturation (S), or both, so that they more fully utilize the 0 - 1 value range. After stretching, the full IHS image is retransformed back to the original RGB space. Because the hue parameter is not modified, and hue largely defines what we perceive as color, the resultant image looks very much like the input image.

It is not essential that the input parameters (IHS) to this transform be derived from an RGB to IHS transform. The user could define I and/or S as other parameters, set Hue at 0-360, and then transform to RGB space. This is a method of color coding other data sets.

In another approach (Daily 1983), H and I are replaced by low- and high-frequency radar imagery. The user can also replace I with radar intensity before the IHS to RGB transform (Holcomb 1993). Chavez evaluates the use of the IHS to RGB transform to resolution merge Landsat TM with SPOT panchromatic imagery (Chavez 1991).

NOTE: Use the Spatial Modeler for this analysis.


See the previous section on RGB to IHS transform for more information.

The algorithm used by ERDAS IMAGINE for the IHS to RGB function is (Conrac 1980):

Given: H in the range of 0 to 360; I and S in the range of 0 to 1.0

If I ≤ 0.5, M = I (1 + S)

If I > 0.5, M = I + S - I (S)

m = 2I - M

The equations for calculating R in the range of 0 to 1.0 are:

If H < 60, R = m + (M - m)(H / 60)
If 60 ≤ H < 180, R = M
If 180 ≤ H < 240, R = m + (M - m)((240 - H) / 60)
If 240 ≤ H ≤ 360, R = m

The equations for calculating G in the range of 0 to 1.0 are:

If H < 120, G = m
If 120 ≤ H < 180, G = m + (M - m)((H - 120) / 60)
If 180 ≤ H < 300, G = M
If 300 ≤ H ≤ 360, G = m + (M - m)((360 - H) / 60)

Equations for calculating B in the range of 0 to 1.0:

If H < 60, B = M
If 60 ≤ H < 120, B = m + (M - m)((120 - H) / 60)
If 120 ≤ H < 240, B = m
If 240 ≤ H < 300, B = m + (M - m)((H - 240) / 60)
If 300 ≤ H ≤ 360, B = M
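A corresponding numpy sketch of the inverse transform, written directly from the piecewise equations above (illustrative only):

    import numpy as np

    def ihs_to_rgb(I, H, S):
        """I, S in 0-1; H in degrees 0-360. Returns R, G, B in 0-1."""
        M = np.where(I <= 0.5, I * (1 + S), I + S - I * S)
        m = 2 * I - M
        R = np.select(
            [H < 60, H < 180, H < 240],
            [m + (M - m) * H / 60.0, M, m + (M - m) * (240 - H) / 60.0],
            default=m)
        G = np.select(
            [H < 120, H < 180, H < 300],
            [m, m + (M - m) * (H - 120) / 60.0, M],
            default=m + (M - m) * (360 - H) / 60.0)
        B = np.select(
            [H < 60, H < 120, H < 240, H < 300],
            [M, m + (M - m) * (120 - H) / 60.0, m, m + (M - m) * (H - 240) / 60.0],
            default=M)
        return R, G, B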


Indices

Indices are used to create output images by mathematically combining the DN values of different bands. These may be simplistic:

(Band X - Band Y)

or more complex:

(Band X - Band Y) / (Band X + Band Y)

In many instances, these indices are ratios of band DN values:

Band X / Band Y

These ratio images are derived from the absorption/reflection spectra of the material of interest. The absorption is based on the molecular bonds in the (surface) material. Thus, the ratio often gives information on the chemical composition of the target.

See "CHAPTER 1: Raster Data" for more information on the absorption/reflection spectra.

Applications

• Indices are used extensively in mineral exploration and vegetation analyses to bring out small differences between various rock types and vegetation classes. In many cases, judiciously chosen indices can highlight and enhance differences which cannot be observed in the display of the original color bands.

• Indices can also be used to minimize shadow effects in satellite and aircraft multispectral images. Black and white images of individual indices or a color combination of three ratios may be generated.

• Certain combinations of TM ratios are routinely used by geologists for interpretation of Landsat imagery for mineral type. For example: Red 5/7, Green 5/4, Blue 3/1.

Integer Scaling Considerations

The output images obtained by applying indices are generally created in floating point to preserve all numerical precision. If there are two bands, A and B, then:

ratio = A/B

If A>>B (much greater than), then a normal integer scaling would be sufficient. If A>B and A is never much greater than B, scaling might be a problem in that the data range might only go from 1 to 2 or 1 to 3. Integer scaling in this case would give very little contrast.


For cases in which A<B or A<<B, integer scaling would always truncate to 0. All fractional data would be lost. A multiplication constant factor would also not be very effective in seeing the data contrast between 0 and 1, which may very well be a substantial part of the data image. One approach to handling the entire ratio range is to actually process the function:

ratio = atan(A/B)

This would give a better representation for A/B < 1 as well as for A/B > 1 (Faust 1992).
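For example, a minimal numpy sketch of this arctangent scaling; the final rescale to 0-255 is one possible display choice, not part of the formula itself:

    import numpy as np

    def atan_ratio(a, b):
        """Ratio image scaled with arctangent so that both A/B < 1 and
        A/B > 1 retain contrast; output rescaled to 0-255 for display."""
        with np.errstate(invalid="ignore", divide="ignore"):
            r = np.arctan(a.astype(np.float64) / b.astype(np.float64))
        r = np.nan_to_num(r)                          # handle B == 0
        # atan of a non-negative ratio lies between 0 and pi/2
        return (r / (np.pi / 2) * 255).astype(np.uint8)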

Index Examples

The following are examples of indices which have been preprogrammed in the Image Interpreter in ERDAS IMAGINE:

• IR/R (infrared/red)

• SQRT (IR/R)

• Vegetation Index = IR-R

• Normalized Difference Vegetation Index (NDVI) = (IR - R) / (IR + R)

• Transformed NDVI (TNDVI) = SQRT [(IR - R) / (IR + R) + 0.5]

• Iron Oxide = TM 3/1

• Clay Minerals = TM 5/7

• Ferrous Minerals = TM 5/4

• Mineral Composite = TM 5/7, 5/4, 3/1

• Hydrothermal Composite = TM 5/7, 3/1, 4/3

Source: Modified from Sabins 1987, Jensen 1996, Tucker 1979
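A minimal numpy sketch of the two vegetation indices above (illustrative function names):

    import numpy as np

    def ndvi(ir, red):
        """Normalized Difference Vegetation Index from IR and red bands."""
        ir = ir.astype(np.float64)
        red = red.astype(np.float64)
        with np.errstate(invalid="ignore", divide="ignore"):
            index = (ir - red) / (ir + red)
        return np.nan_to_num(index)                 # 0 where IR + R == 0

    def tndvi(ir, red):
        """Transformed NDVI = SQRT(NDVI + 0.5)."""
        return np.sqrt(np.clip(ndvi(ir, red) + 0.5, 0, None))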


The following table shows the infrared (IR) and red (R) band for some common sensors (Tucker 1979, Jensen 1996):

Sensor         IR Band    R Band
Landsat MSS       7          5
SPOT XS           3          2
Landsat TM        4          3
NOAA AVHRR        2          1

Image Algebra

Image algebra is a general term used to describe operations that combine the pixels of two or more raster layers in mathematical combinations. For example, the calculation:

(infrared band) - (red band)

DNir - DNred

yields a simple, yet very useful, measure of the presence of vegetation. At the other extreme is the Tasseled Cap calculation (described in the following pages), which uses a more complicated mathematical combination of as many as six bands to define vegetation.

Band ratios, such as:

TM5 / TM7 = clay minerals

are also commonly used. These are derived from the absorption spectra of the material of interest; the numerator is a baseline of background absorption and the denominator is an absorption peak.

See "CHAPTER 1: Raster Data" for more information on absorption/reflection spectra.

The Normalized Difference Vegetation Index (NDVI) is a combination of addition, subtraction, and division:

NDVI = (IR - R) / (IR + R)


Hyperspectral Image Processing

Hyperspectral image processing is in many respects simply an extension of the techniques used for multi-spectral datasets; indeed, there is no set number of bands beyond which a dataset is hyperspectral. Thus, many of the techniques or algorithms currently used for multi-spectral datasets are logically applicable, regardless of the number of bands in the dataset (see the discussion of Figure 7 on page 12 of this manual). What is of relevance in evaluating these datasets is not the number of bands per se, but the spectral band-width of the bands (channels). As the bandwidths get smaller, it becomes possible to view the dataset as an absorption spectrum rather than a collection of discontinuous bands. Analysis of the data in this fashion is termed "imaging spectrometry".

A hyperspectral image data set is recognized as a three-dimensional pixel array. As in a traditional raster image, the x-axis is the column indicator and the y-axis is the row indicator. The z-axis is the band number or, more correctly, the wavelength of that band (channel). A hyperspectral image can be visualized as shown in Figure 65.

Figure 65: Hyperspectral Data Axes

A dataset with narrow contiguous bands can be plotted as a continuous spectrum and compared to a library of known spectra using full profile spectral pattern fitting algorithms. A serious complication in using this approach is assuring that all spectra are corrected to the same background.

At present, it is possible to obtain spectral libraries of common materials. The JPL and USGS mineral spectra libraries are included in IMAGINE. These are laboratory measured reflectance spectra of reference minerals, often of high purity and defined particle size. The spectrometer is commonly purged with pure nitrogen to avoid absorbance by atmospheric gases. Conversely, the remote sensor records an image after the sunlight has (twice) passed through the atmosphere with variable and unknown amounts of water vapor, CO2, etc. (This atmospheric absorbance curve is shown in Figure 4.) The unknown atmospheric absorbances superimposed upon the Earth surface reflectances make comparison to laboratory spectra, or to spectra taken with a different atmosphere, inexact. Indeed, it has been shown that atmospheric composition can vary within a single scene. This complicates the use of spectral signatures even within one scene. Atmospheric absorption and scattering is discussed on pages 6 through 10 of this manual.


A number of approaches have been advanced to help compensate for this atmospheric contamination of the spectra. These are introduced briefly on page 130 of this manual for the general case. Two specific techniques, Internal Average Relative Reflectance (IARR) and Log Residuals, are implemented in IMAGINE 8.3. These have the advantage of not requiring auxiliary input information; the correction parameters are scene-derived. The disadvantage is that they produce relative reflectances (i.e., they can be compared to reference spectra in a semi-quantitative manner only).

Normalize

Pixel albedo is affected by sensor look angle and local topographic effects. For airborne sensors this look angle effect can be large across a scene; it is less pronounced for satellite sensors. Some scanners look to both sides of the aircraft. For these datasets, the difference in average scene luminance between the two half-scenes can be large. To help minimize these effects, an "equal area normalization" algorithm can be applied (Zamudio and Atkinson 1990). This calculation shifts each (pixel) spectrum to the same overall average brightness. This enhancement must be used with a consideration of whether this assumption is valid for the scene. For an image which contains 2 (or more) distinctly different regions (e.g., half ocean and half forest), this may not be a valid assumption. Correctly applied, this normalization algorithm helps remove albedo variations and topographic effects.

IAR Reflectance

As discussed above, it is desired to convert the spectra recorded by the sensor into a form that can be compared to known reference spectra. This technique calculates a relative reflectance by dividing each spectrum (pixel) by the scene average spectrum (Kruse 1988). The algorithm is based on the assumption that this scene average spectrum is largely composed of the atmospheric contribution and that the atmosphere is uniform across the scene. However, these assumptions are not always valid. In particular, the average spectrum could contain absorption features related to target materials of interest. The algorithm could then overcompensate for (i.e., remove) these absorbance features. The average spectrum should be visually inspected to check for this possibility. Properly applied, this technique can remove the majority of atmospheric effects.
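A minimal numpy sketch of the IARR calculation, assuming a band-interleaved (rows, cols, bands) array; the small eps guard is an added safeguard against division by zero, not part of the published method:

    import numpy as np

    def iar_reflectance(cube, eps=1e-6):
        """Divide each pixel spectrum by the scene average spectrum to
        obtain Internal Average Relative Reflectance (IARR)."""
        cube = cube.astype(np.float64)
        average_spectrum = cube.mean(axis=(0, 1))     # one value per band
        return cube / (average_spectrum + eps)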


Log Residuals

The Log Residuals technique was originally described by Green and Craig (1985), but has been variously modified by researchers. The version implemented here is similar to the approach of Lyon (1987). The algorithm can be conceptualized as:

Output Spectrum = (input spectrum) - (average spectrum) - (pixel brightness) + (image brightness)

All parameters in the above equation are in logarithmic space, hence the name.

This algorithm corrects the image for atmospheric absorption, systemic instrumentalvariation, and illuminance differences between pixels.
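A minimal numpy sketch of this conceptual equation, with every term taken in log space; the array layout and the eps guard are assumptions of the sketch:

    import numpy as np

    def log_residuals(cube, eps=1e-6):
        """cube: (rows, cols, bands) radiance or DN values.

        Output = input - band average - pixel brightness + image brightness,
        with all terms in logarithmic space."""
        logc = np.log(cube.astype(np.float64) + eps)
        band_avg = logc.mean(axis=(0, 1), keepdims=True)   # average spectrum
        pixel_avg = logc.mean(axis=2, keepdims=True)       # pixel brightness
        image_avg = logc.mean()                            # image brightness
        return logc - band_avg - pixel_avg + image_avg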

Rescale

Many hyperspectral scanners record the data in a format larger than 8-bit. In addition, many of the calculations used to correct the data will be performed with a floating point format to preserve precision. At some point, it will be advantageous to compress the data back into an 8-bit range for effective storage and/or display. However, when rescaling data to be used for imaging spectrometry analysis, it is necessary to consider all data values within the data cube, not just within the layer of interest. This algorithm is designed to maintain the 3-dimensional integrity of the data values. Any bit format can be input. The output image will always be 8-bit.

When rescaling a data cube, a decision must be made as to which bands to include in the rescaling. Clearly, a “bad” band (i.e., a low S/N layer) should be excluded. Some sensors image in different regions of the electromagnetic (EM) spectrum (e.g., reflective and thermal infra-red or long- and short-wave reflective infra-red). When rescaling these data sets, it may be appropriate to rescale each EM region separately. These can be input using the Select Layer option in the IMAGINE Viewer.


Figure 66: Rescale GUI

NOTE: Bands 26 through 28 and 46 through 55 have been deleted from the calculation. The deleted bands will still be rescaled, but they will not be factored into the rescale calculation.

Processing Sequence

The above (and other) processing steps are utilized to convert the raw image into a form that is easier to interpret. This interpretation often involves comparing the imagery, either visually or automatically, to laboratory spectra or other known "end-member" spectra. At present there is no widely accepted standard processing sequence to achieve this, although some have been advanced in the scientific literature (Zamudio and Atkinson 1990; Kruse 1988; Green and Craig 1985; Lyon 1987). Two common processing sequences have been programmed as single automatic enhancements, as follows:

• Automatic Relative Reflectance — Implements the following algorithms: Normalize, IAR Reflectance, Rescale.

• Automatic Log Residuals — Implements the following algorithms: Normalize, Log Residuals, Rescale.


Spectrum Average

In some instances, it may be desirable to average together several pixels. This is mentioned above under IAR Reflectance as a test for applicability. In preparing reference spectra for classification, or to save in the Spectral Library, an average spectrum may be more representative than a single pixel. Note that to implement this function it is necessary to define which pixels to average using the IMAGINE AOI tools. This enables the user to average any set of pixels which are defined; they do not need to be contiguous and there is no limit on the number of pixels averaged. Note that the output from this program is a single pixel with the same number of input bands as the original image.

Figure 67: Spectrum Average GUI


Signal to Noise

The signal-to-noise (S/N) ratio is commonly used to evaluate the usefulness or validity of a particular band. In this implementation, S/N is defined as Mean/Std.Dev. in a 3 × 3 moving window. After running this function on a data set, each layer in the output image should be visually inspected to evaluate suitability for inclusion into the analysis. Layers deemed unacceptable can be excluded from the processing by using the Select Layers option of the various Graphical User Interfaces (GUIs). This can be used as a sensor evaluation tool.
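The per-band statistic can be sketched with scipy's moving-window filters (illustrative only):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def snr_per_band(band):
        """Signal-to-noise as mean / standard deviation in a 3 x 3 window."""
        b = band.astype(np.float64)
        mean = uniform_filter(b, size=3)
        mean_sq = uniform_filter(b * b, size=3)
        std = np.sqrt(np.maximum(mean_sq - mean * mean, 0))
        return np.where(std > 0, mean / std, 0)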

Mean per Pixel

This algorithm outputs a single band, regardless of the number of input bands. By visually inspecting this output image, it is possible to see if particular pixels are "outside the norm". While this does not mean that these pixels are incorrect, they should be evaluated in this context. For example, a CCD detector could have several sites (pixels) that are dead or have an anomalous response; these would be revealed in the Mean per Pixel image. This can be used as a sensor evaluation tool.

Profile Tools

To aid in visualizing this three-dimensional data cube, three basic tools have been designed:

• Spectral Profile — a display that plots the reflectance spectrum of a designated pixel, as shown in Figure 68.

Figure 68: Spectral Profile


• Spatial Profile — a display that plots spectral information along a user-defined polyline. The data can be displayed two-dimensionally for a single band, as in Figure 69.

Figure 69: Two-Dimensional Spatial Profile

The data can also be displayed three-dimensionally for multiple bands, as in Figure 70.

Figure 70: Three-Dimensional Spatial Profile


• Surface Profile — a display that allows the operator to designate an x,y area and view any selected layer, z.

Figure 71: Surface Profile

Wavelength Axis

Data tapes containing hyperspectral imagery commonly designate the bands as a simple numerical sequence. When plotted using the profile tools, this yields an x-axis labeled as 1, 2, 3, 4, etc. Elsewhere on the tape or in the accompanying documentation is a file which lists the center frequency and width of each band. This information should be linked to the image intensity values for accurate analysis or comparison to other spectra, such as the Spectra Libraries.

Spectral Library

As discussed on page 167, two spectral libraries are presently included in the software package (JPL and USGS). In addition, it is possible to extract spectra (pixels) from a data set or prepare average spectra from an image and save these in a user-derived spectral library. This library can then be used for visual comparison with other image spectra, or it can be used as input signatures in a classification.


Classification

The advent of datasets with very large numbers of bands has pressed the limits of the "traditional classifiers" such as Isodata, Maximum Likelihood, and Minimum Distance, but has not obviated their usefulness. Much research has been directed toward the use of Artificial Neural Networks (ANN) to more fully utilize the information content of hyperspectral images (Merenyi, Taranik, Monor, and Farrand 1996). To date, however, these advanced techniques have proven to be only marginally better at a considerable cost in complexity and computation. For certain applications, both Maximum Likelihood (Benediktsson, Swain, Ersoy, and Hong 1990) and Minimum Distance (Merenyi, Taranik, Monor, and Farrand 1996) have proven to be appropriate. "CHAPTER 6: Classification" contains a detailed discussion of these classification techniques.

A second category of classification techniques utilizes the imaging spectroscopy model for approaching hyperspectral datasets. This approach requires a library of possible end-member materials. These can be from laboratory measurements using a scanning spectrometer and reference standards (Clark, Gallagher, and Swayze 1990). The JPL and USGS libraries are compiled this way. Or the reference spectra (signatures) can be scene-derived from either the scene under study or another similar scene (Adams, Smith, and Gillespie 1989).

System Requirements

Because of the large number of bands, a hyperspectral dataset can be surprisingly large. For example, an AVIRIS scene is only 512 × 614 pixels in dimension, which seems small. However, when multiplied by 224 bands (channels) and 16 bits, it requires over 140 megabytes of data storage space. To process this scene will require correspondingly large swap and temp space. In practice, it has been found that a 48 Mb memory board and 100 Mb of swap space is a minimum requirement for efficient processing. Temporary file space requirements will, of course, depend upon the process being run.


Fourier Analysis

Image enhancement techniques can be divided into two basic categories: point and neighborhood. Point techniques enhance the pixel based only on its value, with no concern for the values of neighboring pixels. These techniques include contrast stretches (non-adaptive), classification, level slices, etc. Neighborhood techniques enhance a pixel based on the values of surrounding pixels. As a result, these techniques require the processing of a possibly large number of pixels for each output pixel. The most common way of implementing these enhancements is via a moving window convolution. However, as the size of the moving window increases, the number of requisite calculations becomes enormous. An enhancement that requires a convolution operation in the spatial domain can be implemented as a simple multiplication in frequency space—a much faster calculation.

In ERDAS IMAGINE, the Fast Fourier Transform (FFT) is used to convert a raster image from the spatial (normal) domain into a frequency domain image. The FFT calculation converts the image into a series of two-dimensional sine waves of various frequencies. The Fourier image itself cannot be easily viewed, but the magnitude of the image can be calculated, which can then be displayed either in the IMAGINE Viewer or in the FFT Editor. Analysts can edit the Fourier image to reduce noise or remove periodic features, such as striping. Once the Fourier image is edited, it is then transformed back into the spatial domain by using an inverse Fast Fourier Transform. The result is an enhanced version of the original image.

This section focuses on the Fourier editing techniques available in the ERDAS IMAGINE FFT Editor. Some rules and guidelines for using these tools are presented in this document. Also included are some examples of techniques that will generally work for specific applications, such as striping.

NOTE: You may also want to refer to the works cited at the end of this section for more information.

The basic premise behind a Fourier transform is that any one-dimensional function, f(x) (which might be a row of pixels), can be represented by a Fourier series consisting of some combination of sine and cosine terms and their associated coefficients. For example, a line of pixels with a high spatial frequency gray scale pattern might be represented in terms of a single coefficient multiplied by a sin(x) function. High spatial frequencies are those that represent frequent gray scale changes in a short pixel distance. Low spatial frequencies represent infrequent gray scale changes that occur gradually over a relatively large number of pixel distances. A more complicated function, f(x), might have to be represented by many sine and cosine terms with their associated coefficients.


Figure 72: One-Dimensional Fourier Analysis

Figure 72 shows how a function f(x) can be represented as a linear combination of sine and cosine. The Fourier transform of that same function is also shown.

A Fourier transform is a linear transformation that allows calculation of the coefficients necessary for the sine and cosine terms to adequately represent the image. This theory is used extensively in electronics and signal processing, where electrical signals are continuous and not discrete. Therefore, a discrete Fourier transform (DFT) has been developed. Because of the computational load in calculating the values for all the sine and cosine terms along with the coefficient multiplications, a highly efficient version of the DFT was developed and called the Fast Fourier Transform (FFT).

To handle images which consist of many one-dimensional rows of pixels, a two-dimensional FFT has been devised that incrementally uses one-dimensional FFTs in each direction and then combines the result. These images are symmetrical about the origin.

Applications

Fourier transformations are typically used for the removal of noise such as striping, spots, or vibration in imagery by identifying periodicities (areas of high spatial frequency). Fourier editing can be used to remove regular errors in data such as those caused by sensor anomalies (e.g., striping). This analysis technique can also be used across bands as another form of pattern/feature recognition.

[Figure 72 panels: Original Function f(x); Sine and Cosine of f(x); Fourier Transform of f(x). These graphics are for illustration purposes only and are not mathematically accurate.]


Fast Fourier Transform (FFT)

The Fast Fourier Transform (FFT) calculation is:

F(u,v) ← Σ (x = 0 to M-1) Σ (y = 0 to N-1) f(x,y) e^[-j2πux/M - j2πvy/N]

for 0 ≤ u ≤ M-1, 0 ≤ v ≤ N-1

where:

M = the number of pixels horizontally
N = the number of pixels vertically
u,v = spatial frequency variables
e ≈ 2.71828, the natural logarithm base
j = the imaginary component of a complex number

The number of pixels horizontally and vertically must each be a power of two. If the dimensions of the input image are not a power of two, they are padded up to the next highest power of two. There is more information about this later in this section.

Source: Modified from Oppenheim 1975 and Press 1988

Images computed by this algorithm are saved with an .fft file extension.

You should run a Fourier Magnitude transform on an .fft file before viewing it in the ERDAS IMAGINE Viewer. The FFT Editor automatically displays the magnitude without further processing.

Fourier Magnitude

The raster image generated by the FFT calculation is not an optimum image for viewing or editing. Each pixel of a Fourier image is a complex number (i.e., it has two components—real and imaginary). For display as a single image, these components are combined in a root-sum of squares operation. Also, since the dynamic range of Fourier spectra vastly exceeds the range of a typical display device, the Fourier Magnitude calculation involves a logarithmic function.

Finally, a Fourier image is symmetric about the origin (u,v = 0,0). If the origin is plotted at the upper left corner, the symmetry is more difficult to see than if the origin is at the center of the image. Therefore, in the Fourier magnitude image, the origin is shifted to the center of the raster array.


In this transformation, each .fft layer is processed twice. First, the maximum magnitude, |x|max, is computed. Then, the following computation is performed for each FFT element magnitude x:

y(x) = 255.0 ln [ (|x| / |x|max)(e - 1) + 1 ]

where:

x = input FFT element
y = the normalized log magnitude of the FFT element
|x|max = the maximum magnitude
e ≈ 2.71828, the natural logarithm base
| | = the magnitude operator

This function was chosen so that y would be proportional to the logarithm of a linear function of x, with y(0) = 0 and y(|x|max) = 255.

Source: ERDAS
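The magnitude scaling can be sketched with numpy's FFT routines (illustrative only; the ERDAS .fft file format itself is not reproduced here):

    import numpy as np

    def fourier_magnitude(band):
        """Shift the origin to the center and log-scale the FFT magnitude
        so that it fits an 8-bit display (y(0) = 0, y(|x|max) = 255)."""
        fft = np.fft.fftshift(np.fft.fft2(band.astype(np.float64)))
        mag = np.abs(fft)                                 # root-sum of squares
        scaled = 255.0 * np.log((mag / mag.max()) * (np.e - 1) + 1)
        return scaled.astype(np.uint8)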

In Figure 73, Image A is one band of a badly striped Landsat TM scene. Image B is the Fourier Magnitude image derived from the Landsat image.

Figure 73: Example of Fourier Magnitude


Note that although Image A has been transformed into Image B, these raster images are very different symmetrically. The origin of Image A is at (x,y) = (0,0) in the upper left corner. In Image B, the origin (u,v) = (0,0) is in the center of the raster. The low frequencies are plotted near this origin while the higher frequencies are plotted further out. Generally, the majority of the information in an image is in the low frequencies. This is indicated by the bright area at the center (origin) of the Fourier image.

It is important to realize that a position in a Fourier image, designated as (u,v), does not always represent the same frequency, because it depends on the size of the input raster image. A large spatial domain image contains components of lower frequency than a small spatial domain image. As mentioned, these lower frequencies are plotted nearer to the center (u,v = 0,0) of the Fourier image. Note that the units of spatial frequency are inverse length, e.g., m^-1.

The sampling increments in the spatial and frequency domain are related by:

Δu = 1 / (M Δx)
Δv = 1 / (N Δy)

where:

M = horizontal image size in pixels
N = vertical image size in pixels
Δx = pixel size
Δy = pixel size

For example, converting a 512 × 512 Landsat TM image (pixel size = 28.5 m) into a Fourier image:

Δu = Δv = 1 / (512 × 28.5) = 6.85 × 10^-5 m^-1

u or v    Frequency
0         0
1         6.85 × 10^-5 m^-1
2         13.7 × 10^-5 m^-1


If the Landsat TM image was 1024 × 1024:

Δu = Δv = 1 / (1024 × 28.5) = 3.42 × 10^-5 m^-1

u or v    Frequency
0         0
1         3.42 × 10^-5 m^-1
2         6.85 × 10^-5 m^-1

So, as noted above, the frequency represented by a (u,v) position depends on the size of the input image.

For the above calculation, the sample images are 512 × 512 and 1024 × 1024—powers of two. These were selected because the FFT calculation requires that the height and width of the input image be a power of two (although the image need not be square). In practice, input images will usually not meet this criterion. Three possible solutions are available in ERDAS IMAGINE:

• Subset the image.

• Pad the image — the input raster is increased in size to the next power of two by imbedding it in a field of the mean value of the entire input image.

• Resample the image so that its height and width are powers of two.

For example:

Figure 74: The Padding Technique

[In the figure, a 400 × 300 input image is embedded in a 512 × 512 field of the image mean value.]

The padding technique is automatically performed by the FFT program. It produces a minimum of artifacts in the output Fourier image. If the image is subset using a power of two (i.e., 64 × 64, 128 × 128, 64 × 128, etc.) no padding is used.
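A minimal numpy sketch of padding an image up to the next power of two with its mean value (illustrative only, not the FFT program itself):

    import numpy as np

    def pad_to_power_of_two(band):
        """Embed the image in a field of its mean value whose height and
        width are the next powers of two."""
        rows, cols = band.shape
        new_rows = 1 << (rows - 1).bit_length()
        new_cols = 1 << (cols - 1).bit_length()
        padded = np.full((new_rows, new_cols), band.mean(), dtype=np.float64)
        padded[:rows, :cols] = band
        return padded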


Inverse Fast Fourier Transform (IFFT)

The Inverse Fast Fourier Transform (IFFT) computes the inverse two-dimensional Fast Fourier Transform of the spectrum stored.

• The input file must be in the compressed .fft format described earlier (i.e., output from the Fast Fourier Transform or FFT Editor).

• If the original image was padded by the FFT program, the padding will automatically be removed by IFFT.

• This program creates (and deletes, upon normal termination) a temporary file large enough to contain one entire band of .fft data. The specific expression calculated by this program is:

f(x,y) ← (1 / (M × N)) Σ (u = 0 to M-1) Σ (v = 0 to N-1) F(u,v) e^[j2πux/M + j2πvy/N]

for 0 ≤ x ≤ M-1, 0 ≤ y ≤ N-1

where:

M = the number of pixels horizontally
N = the number of pixels vertically
u,v = spatial frequency variables
e ≈ 2.71828, the natural logarithm base

Source: Modified from Oppenheim 1975 and Press 1988

Images computed by this algorithm are saved with an .ifft.img file extension by default.

Filtering

Operations performed in the frequency (Fourier) domain can be visualized in the context of the familiar convolution function. The mathematical basis of this interrelationship is the convolution theorem, which states that a convolution operation in the spatial domain is equivalent to a multiplication operation in the frequency domain:

g(x,y) = h(x,y) ∗ f(x,y) ≡ G(u,v) = H(u,v) × F(u,v)

where:

f(x,y) = input image
h(x,y) = position invariant operation (convolution kernel)
g(x,y) = output image
G, F, H = Fourier transforms of g, f, h

The names high-pass, low-pass, high-frequency, etc., indicate that these convolution functions derive from the frequency domain.


Low-Pass Filtering

The simplest example of this relationship is the low-pass kernel. The name, low-pass kernel, is derived from a filter that would pass low frequencies and block (filter out) high frequencies. In practice, this is easily achieved in the spatial domain by the M = N = 3 kernel:

1 1 1
1 1 1
1 1 1

Obviously, as the size of the image and, particularly, the size of the low-pass kernel increases, the calculation becomes more time-consuming. Depending on the size of the input image and the size of the kernel, it can be faster to generate a low-pass image via Fourier processing.

Figure 75 compares Direct and Fourier domain processing for finite area convolution.

Figure 75: Comparison of Direct and Fourier Domain Processing

[The figure plots size of neighborhood for calculation (0 to 16) against size of input image (200 to 1200 pixels); Fourier processing is more efficient above the dividing curve, direct processing below it. Source: Pratt, 1991]

In the Fourier domain, the low-pass operation is implemented by attenuating the pixels whose frequencies satisfy:

u² + v² > D0²

D0 is frequently called the “cutoff frequency.”

As mentioned, the low-pass information is concentrated toward the origin of the Fourier image. Thus, a smaller radius (r) has the same effect as a larger N (where N is the size of a kernel) in a spatial domain low-pass convolution.


As was pointed out earlier, the frequency represented by a particular u,v (or r) position depends on the size of the input image. Thus, a low-pass operation of r = 20 will be equivalent to a spatial low-pass of various kernel sizes, depending on the size of the input image.

For example:

Image Size    Fourier Low-Pass r =    Convolution Low-Pass N =
64 × 64              50                        3
                     30                        3.5
                     20                        5
                     10                        9
                      5                       14
128 × 128            20                       13
                     10                       22
256 × 256            20                       25
                     10                       42

This table shows that using a window on a 64 × 64 Fourier image with a radius of 50 as the cutoff is the same as using a 3 × 3 low-pass kernel on a 64 × 64 spatial domain image.

High-Pass Filtering

Just as images can be smoothed (blurred) by attenuating the high-frequency components of an image using low-pass filters, images can be sharpened and edge-enhanced by attenuating the low-frequency components using high-pass filters. In the Fourier domain, the high-pass operation is implemented by attenuating pixels whose frequencies satisfy:

u² + v² < D0²


Windows

The attenuation discussed above can be done in many different ways. In ERDAS IMAGINE Fourier processing, five window functions are provided to achieve different types of attenuation:

• Ideal

• Bartlett (triangular)

• Butterworth

• Gaussian

• Hanning (cosine)

Each of these windows must be defined when a frequency domain process is used. This application is perhaps easiest understood in the context of the high-pass and low-pass filter operations. Each window is discussed in more detail below.

Ideal

The simplest low-pass filtering is accomplished using the ideal window, so named because its cutoff point is absolute. Note that in Figure 76 the cross section is “ideal.”

H(u,v) = 1 if D(u,v) ≤ D0
H(u,v) = 0 if D(u,v) > D0

Figure 76: An Ideal Cross Section

All frequencies inside a circle of a radius D0 are retained completely (passed), and all frequencies outside the radius are completely attenuated. The point D0 is termed the cutoff frequency.

High-pass filtering using the ideal window looks like the illustration below.


H(u,v) = 0 if D(u,v) ≤ D0
H(u,v) = 1 if D(u,v) > D0

Figure 77: High-Pass Filtering Using the Ideal Window

All frequencies inside a circle of a radius D0 are completely attenuated, and all frequencies outside the radius are retained completely (passed).

A major disadvantage of the ideal filter is that it can cause “ringing” artifacts, particularly if the radius (r) is small. The smoother functions (i.e., Butterworth, Hanning, etc.) minimize this effect.

Bartlett

Filtering using the Bartlett window is a triangular function, as shown in the low- and high-pass cross-sections below.

Figure 78: Filtering Using the Bartlett Window


Butterworth, Gaussian, and Hanning

The Butterworth, Gaussian, and Hanning windows are all “smooth” and greatly reduce the effect of ringing. The differences between them are minor and are of interest mainly to experts. For most “normal” types of Fourier image enhancement, they are essentially interchangeable.

The Butterworth window reduces the ringing effect because it does not contain abrupt changes in value or slope. The low- and high-pass cross sections below illustrate this.

Figure 79: Filtering Using the Butterworth Window

The equation for the low-pass Butterworth window is:

H(u,v) = 1 / (1 + [D(u,v) / D0]^2n)

NOTE: The Butterworth window approaches its window center gain asymptotically.

The equation for the Gaussian low-pass window is:

H(u,v) = e^-(x / D0)²

The equation for the Hanning low-pass window is:

H(u,v) = 1/2 [1 + cos(πx / 2D0)]   for 0 ≤ x ≤ 2D0
H(u,v) = 0   otherwise
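A low-pass Butterworth window of this form can be sketched with numpy (illustrative only; it assumes the FFT origin has been shifted to the array center):

    import numpy as np

    def butterworth_lowpass(shape, d0, n=1):
        """Build a low-pass Butterworth window H(u,v) = 1 / (1 + (D/D0)^(2n))
        for an FFT whose origin has been shifted to the array center."""
        rows, cols = shape
        u = np.arange(rows) - rows // 2
        v = np.arange(cols) - cols // 2
        D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance from origin
        return 1.0 / (1.0 + (D / d0) ** (2 * n))

    # Applying the window: multiply the shifted FFT, then invert.
    # fft = np.fft.fftshift(np.fft.fft2(band))
    # smoothed = np.real(np.fft.ifft2(np.fft.ifftshift(
    #     fft * butterworth_lowpass(band.shape, d0=30))))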


Fourier Noise Removal

Occasionally, images are corrupted by “noise” that is periodic in nature. An example of this is the scan lines that are present in some TM images. When these images are transformed into Fourier space, the periodic line pattern becomes a radial line. The ERDAS IMAGINE Fourier Analysis functions provide two main tools for reducing noise in images:

• Editing

• Automatic removal of periodic noise

Editing

In practice, it has been found that radial lines centered at the Fourier origin (u,v = 0,0) are best removed using back-to-back wedges centered at (0,0). It is possible to remove these lines using very narrow wedges with the Ideal window. However, the sudden transitions resulting from zeroing out sections of a Fourier image will cause a ringing of the image when it is transformed back into the spatial domain. This effect can be lessened by using a less abrupt window, such as Butterworth.

Other types of noise can produce artifacts, such as lines not centered at u,v = 0,0 or circular spots in the Fourier image. These can be removed using the tools provided in the IMAGINE FFT Editor. As these artifacts are always symmetrical in the Fourier magnitude image, editing tools operate on both components simultaneously. The Fourier Editor contains tools that enable the user to attenuate a circular or rectangular region anywhere on the image.

Automatic Periodic Noise Removal

The use of the FFT Editor, as described above, enables the user to selectively and accurately remove periodic noise from any image. However, operator interaction and a bit of trial and error are required. The automatic periodic noise removal algorithm has been devised to address images degraded uniformly by striping or other periodic anomalies. Use of this algorithm requires a minimum of input from the user.

The image is first divided into 128 × 128 pixel blocks. The Fourier Transform of each block is calculated and the log-magnitudes of each FFT block are averaged. The averaging removes all frequency domain quantities except those which are present in each block (i.e., some sort of periodic interference). The average power spectrum is then used as a filter to adjust the FFT of the entire image. When the inverse Fourier Transform is performed, the result is an image which should have any periodic noise eliminated or significantly reduced. This method is partially based on the algorithms outlined in Cannon et al. 1983 and Srinivasan et al. 1988.

Select the Periodic Noise Removal option from Image Interpreter to use this function.


Homomorphic Filtering

Homomorphic filtering is based upon the principle that an image may be modeled as the product of illumination and reflectance components:

I(x,y) = i(x,y) × r(x,y)

where:

I(x,y) = image intensity (DN) at pixel x,y
i(x,y) = illumination of pixel x,y
r(x,y) = reflectance at pixel x,y

The illumination image is a function of lighting conditions, shadows, etc. The reflectance image is a function of the object being imaged. A log function can be used to separate the two components (i and r) of the image:

ln I(x,y) = ln i(x,y) + ln r(x,y)

This transforms the image from multiplicative to additive superposition. With the two component images separated, any linear operation can be performed. In this application, the image is now transformed into Fourier space. Because the illumination component usually dominates the low frequencies, while the reflectance component dominates the higher frequencies, the image may be effectively manipulated in the Fourier domain.

By using a filter on the Fourier image, which increases the high-frequency components, the reflectance image (related to the target material) may be enhanced, while the illumination image (related to the scene illumination) is de-emphasized.

Select the Homomorphic Filter option from Image Interpreter to use this function.
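The whole chain can be sketched with numpy as an illustration of the principle, not the Image Interpreter implementation; the high-emphasis gains and cutoff used here are arbitrary assumptions:

    import numpy as np

    def homomorphic_filter(band, d0=30, low_gain=0.5, high_gain=2.0):
        """Suppress illumination (low frequencies) and boost reflectance
        (high frequencies): log -> FFT -> high-emphasis filter -> IFFT -> exp."""
        log_img = np.log1p(band.astype(np.float64))
        fft = np.fft.fftshift(np.fft.fft2(log_img))
        rows, cols = band.shape
        u = np.arange(rows) - rows // 2
        v = np.arange(cols) - cols // 2
        D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
        # Butterworth-style high-emphasis window between low_gain and high_gain
        H = low_gain + (high_gain - low_gain) / (1.0 + (d0 / np.maximum(D, 1e-6)) ** 2)
        filtered = np.fft.ifft2(np.fft.ifftshift(fft * H))
        return np.expm1(np.real(filtered))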

By applying an inverse fast Fourier transform followed by an exponential function, the enhanced image is returned to the normal spatial domain. The flow chart in Figure 80 summarizes the homomorphic filtering process in ERDAS IMAGINE.


Figure 80: Homomorphic Filtering Process

As mentioned earlier, if an input image is not a power of two, the ERDAS IMAGINE Fourier analysis software will automatically pad the image to the next largest size to make it a power of two. For manual editing, this causes no problems. However, in automatic processing, such as the homomorphic filter, the artifacts induced by the padding may have a deleterious effect on the output image. For this reason, it is recommended that images that are not a power of two be subset before being used in an automatic process.

A detailed description of the theory behind Fourier series and Fourier transforms is given in Gonzalez and Wintz (1977). See also Oppenheim (1975) and Press (1988).

[Figure 80 flow: Input Image (i × r) → Log (ln i + ln r) → FFT → Fourier Image → Butterworth Filter (i = low frequency, r = high frequency) → Filtered Fourier Image (i decreased, r increased) → IFFT → Exponential → Enhanced Image]


Radar Imagery Enhancement

The nature of the surface phenomena involved in radar imaging is inherently different from that of VIS/IR images. When VIS/IR radiation strikes a surface it is either absorbed, reflected, or transmitted. The absorption is based on the molecular bonds in the (surface) material. Thus, this imagery provides information on the chemical composition of the target.

When radar microwaves strike a surface, they are reflected according to the physical and electrical properties of the surface, rather than the chemical composition. The strength of radar return is affected by slope, roughness, and vegetation cover. The conductivity of a target area is related to the porosity of the soil and its water content. Consequently, radar and VIS/IR data are complementary; they provide different information about the target area. An image in which these two data types are intelligently combined can present much more information than either image by itself.

See "CHAPTER 1: Raster Data" and "CHAPTER 3: Raster and Vector Data Sources" for moreinformation on radar data.

This section describes enhancement techniques that are particularly useful for radarimagery. While these techniques can be applied to other types of image data, thisdiscussion will focus on the special requirements of radar imagery enhancement.ERDAS IMAGINE Radar provides a sophisticated set of image processing toolsdesigned specifically for use with radar imagery. This section will describe thefunctions of ERDAS IMAGINE Radar.

For information on the Radar Image Enhancement function, see the section on "Radiometric Enhancement" on page 132.

Speckle Noise Speckle noise is commonly observed in radar (microwave or millimeter wave) sensing systems, although it may appear in any type of remotely sensed image utilizing coherent radiation. An active radar sensor gives off a burst of coherent radiation that reflects from the target, unlike a passive microwave sensor that simply receives the low-level radiation naturally emitted by targets.

Like the light from a laser, the waves emitted by active sensors travel in phase and interact minimally on their way to the target area. After interaction with the target area, these waves are no longer in phase. This is due to the different distances they travel from targets, or single versus multiple bounce scattering.

Once out of phase, radar waves can interact to produce light and dark pixels known as speckle noise. Speckle noise must be reduced before the data can be effectively utilized. However, the image processing programs used to reduce speckle noise produce changes in the image.

Since any image processing done before removal of the speckle results in the noise being incorporated into and degrading the image, you should not rectify, correct to ground range, or in any way resample, enhance, or classify the pixel values before removing speckle noise. Functions using nearest neighbor are technically permissible, but not advisable.


Since different applications and different sensors necessitate different speckle removal models, IMAGINE Radar includes several speckle reduction algorithms:

• Mean filter

• Median filter

• Lee-Sigma filter

• Local Region filter

• Lee filter

• Frost filter

• Gamma-MAP filter

These filters are described below.

NOTE: Speckle noise in radar images cannot be completely removed. However, it can be reduced significantly.

Mean Filter

The Mean filter is a simple calculation. The pixel of interest (center of window) is replaced by the arithmetic average of all values within the window. This filter does not remove the aberrant (speckle) value—it averages it into the data.

In theory, a bright and a dark pixel within the same window would cancel each other out. This consideration would argue in favor of a large window size (i.e., 7 × 7). However, averaging results in a loss of detail, which argues for a small window size.

In general, this is the least satisfactory method of speckle reduction. It is useful for “quick and dirty” applications or those where loss of resolution is not a problem.

Median Filter

A better way to reduce speckle, but still simplistic, is the Median filter. This filter operates by arranging all DN (digital number) values within the user-defined window in sequential order. The pixel of interest is replaced by the value in the center of this distribution. A Median filter is useful for removing pulse or spike noise. Pulse functions of less than one-half of the moving window width are suppressed or eliminated. In addition, step functions or ramp functions are retained.

The effect of Mean and Median filters on various signals is shown (for one dimension) in Figure 81.
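
The contrast illustrated in Figure 81 can be reproduced on a one-dimensional signal with SciPy's neighborhood filters. The signal and the 3-sample window below are arbitrary choices for illustration, not values taken from this guide.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, median_filter

# A step followed by a single-sample speckle spike, in one dimension.
signal = np.array([10, 10, 10, 10, 50, 50, 50, 200, 50, 50], dtype=float)

mean_filtered = uniform_filter1d(signal, size=3)   # spike is averaged into its neighbors
median_filtered = median_filter(signal, size=3)    # spike is removed, step is preserved

print(mean_filtered)
print(median_filtered)
```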


Figure 81: Effects of Mean and Median Filters

The Median filter is useful for noise suppression in any image. It does not affect step or ramp functions—it is an edge preserving filter (Pratt 1991). It is also applicable in removing pulse function noise which results from the inherent pulsing of microwaves. An example of the application of the Median filter is the removal of dead-detector striping, such as is found in Landsat 4 TM data (Crippen 1989).

Local Region Filter

The Local Region filter divides the moving window into eight regions based on angular position (North, South, East, West, NW, NE, SW, and SE). Figure 82 shows a 5 × 5 moving window and the regions of the Local Region filter.

(Figure 81 compares the original, Mean-filtered, and Median-filtered versions of four one-dimensional signals: a step, a ramp, a single pulse, and a double pulse.)


Figure 82: Regions of Local Region Filter

For each region, the variance is calculated as follows:

Variance = Σ (DNx,y − Mean)² / (n − 1)

Source: Nagao 1979

The algorithm compares the variance values of the regions surrounding the pixel of interest. The pixel of interest is replaced by the mean of all DN values within the region with the lowest variance, i.e., the most uniform region. A region with low variance is assumed to have pixels minimally affected by wave interference yet very similar to the pixel of interest. A region of low variance will probably be such for several surrounding pixels.

The result is that the output image is composed of numerous uniform areas, the size of which is determined by the moving window size. In practice, this filter can be utilized sequentially 2 or 3 times, increasing the window size. The resultant output image is an appropriate input to a classification application.

(In Figure 82, the legend marks the pixel of interest and example regions, such as the North, NE, and SW regions, within the 5 × 5 window.)
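
A minimal sketch of the Local Region logic is shown below, assuming a 5 × 5 window split into eight compass sub-regions. The sub-region slices are a simplified stand-in for the exact ERDAS IMAGINE region geometry, and the loop-based implementation is written for clarity rather than speed.

```python
import numpy as np

def local_region_filter(img):
    """Simplified Local Region filter: replace each pixel with the mean of the
    lowest-variance sub-region of its 5 x 5 neighborhood (after Nagao 1979)."""
    out = img.astype(np.float64).copy()
    padded = np.pad(img.astype(np.float64), 2, mode="edge")

    # Eight compass sub-regions of a 5 x 5 window, as (row slice, column slice).
    regions = [
        (slice(0, 3), slice(1, 4)),  # North
        (slice(2, 5), slice(1, 4)),  # South
        (slice(1, 4), slice(2, 5)),  # East
        (slice(1, 4), slice(0, 3)),  # West
        (slice(0, 3), slice(0, 3)),  # NW
        (slice(0, 3), slice(2, 5)),  # NE
        (slice(2, 5), slice(0, 3)),  # SW
        (slice(2, 5), slice(2, 5)),  # SE
    ]
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 5, c:c + 5]
            subsets = [window[rs, cs] for rs, cs in regions]
            variances = [s.var(ddof=1) for s in subsets]      # (n - 1) denominator
            out[r, c] = subsets[int(np.argmin(variances))].mean()
    return out
```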


Sigma and Lee Filters

The Sigma and Lee filters utilize the statistical distribution of the DN values within the moving window to estimate what the pixel of interest should be.

Speckle in imaging radar can be mathematically modeled as multiplicative noise with a mean of 1. The standard deviation of the noise can be mathematically defined as:

Standard Deviation = sqrt(VARIANCE) / MEAN = Coefficient of Variation = sigma (σ)

The coefficient of variation, as a scene-derived parameter, is used as an input parameter in the Sigma and Lee filters. It is also useful in evaluating and modifying visible/infrared (VIS/IR) data for input to a 4-band composite image or in preparing a 3-band ratio color composite (Crippen 1989).

It can be assumed that imaging radar data noise follows a Gaussian distribution. This would yield a theoretical value for Standard Deviation (SD) of .52 for 1-look radar data and SD = .26 for 4-look radar data.

Table 17 gives theoretical coefficient of variation values for various look-average radar scenes:

Table 17: Theoretical Coefficient of Variation Values

# of Looks (scenes)    Coef. of Variation Value
1                      .52
2                      .37
3                      .30
4                      .26
6                      .21
8                      .18

The Lee filters are based on the assumption that the mean and variance of the pixel of interest are equal to the local mean and variance of all pixels within the user-selected moving window.



The actual calculation used for the Lee filter is:

DNout = [Mean] + K [DNin − Mean]

where:

Mean = average of pixels in a moving window

K = Var(x) / ( [Mean]² σ² + Var(x) )

The variance of x [Var(x)] is defined as:

Var(x) = ( [Variance within window] + [Mean within window]² ) / ( [Sigma]² + 1 ) − [Mean within window]²

Source: Lee 1981
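
A compact sketch of the Lee calculation above, using local means and variances from uniform filters. The window size and the sigma (coefficient of variation) value are user inputs; the 0.26 default below is only an illustration corresponding to 4-look data in Table 17.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=5, sigma=0.26):
    """Lee speckle filter: DN_out = Mean + K * (DN_in - Mean), following the
    Var(x) and K definitions given above (Lee 1981)."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img * img, size)
    local_var = local_sq_mean - local_mean ** 2              # variance within window

    # Var(x) = ([variance within window] + [mean]^2) / (sigma^2 + 1) - [mean]^2
    var_x = (local_var + local_mean ** 2) / (sigma ** 2 + 1.0) - local_mean ** 2
    var_x = np.maximum(var_x, 0.0)

    # K = Var(x) / ([mean]^2 * sigma^2 + Var(x))
    k = var_x / (local_mean ** 2 * sigma ** 2 + var_x + 1e-12)
    return local_mean + k * (img - local_mean)
```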

The Sigma filter is based on the probability of a Gaussian distribution. Briefly, it is assumed that 95.5% of random samples are within a 2 standard deviation (2 sigma) range. This noise suppression filter replaces the pixel of interest with the average of all DN values within the moving window that fall within the designated range.

As with all the Radar speckle filters, the user must specify a moving window size, the center pixel of which is the pixel of interest.

As with the Statistics filter, a coefficient of variation specific to the data set must be input. Finally, the user must specify how many standard deviations to use (2, 1, or 0.5) to define the accepted range.

The statistical filters (Sigma and Statistics) are logically applicable to any data set for preprocessing. Any sensor system has various sources of noise, resulting in a few erratic pixels. In VIS/IR imagery, most natural scenes are found to follow a normal distribution of DN values, thus filtering at 2 standard deviations should remove this noise. This is particularly true of experimental sensor systems that frequently have significant noise problems.



These speckle filters can be used iteratively. The user must view and evaluate the resultant image after each pass (the data histogram is useful for this), and then decide if another pass is appropriate and what parameters to use on the next pass. For example, three passes of the Sigma filter with the following parameters are very effective when used with any type of data:

Table 18: Parameters for Sigma Filter

Pass    Sigma Value    Sigma Multiplier    Window Size
1       0.26           0.5                 3 × 3
2       0.26           1                   5 × 5
3       0.26           2                   7 × 7

Similarly, there is no reason why successive passes must be of the same filter. The following sequence is useful prior to a classification:

Table 19: Pre-Classification Sequence

Filter          Pass    Sigma Value    Sigma Multiplier    Window Size
Lee             1       0.26           NA                  3 × 3
Lee             2       0.26           NA                  5 × 5
Local Region    3       NA             NA                  5 × 5 or 7 × 7

With all speckle reduction filters there is a trade-off between noise reduction and loss of resolution. Each data set and each application will have a different acceptable balance between these two factors. The ERDAS IMAGINE filters have been designed to be versatile and gentle in reducing noise (and resolution).


Frost Filter

The Frost filter is a minimum mean square error algorithm which adapts to the local statistics of the image. The local statistics serve as weighting parameters for the impulse response of the filter (moving window). This algorithm assumes that noise is multiplicative with stationary statistics.

The formula used is:

DN = Σ (over the n × n window) K α e^(−α|t|)

Where

K = normalization constant
Ī = local mean
σ = local variance
σ̄ = image coefficient of variation value
|t| = |X − X0| + |Y − Y0|
n = moving window size

And

α = ( 4 / (n σ̄²) ) ( σ² / Ī² )

Source: Lopes, Nezry, Touzi, Laur, 1990



Gamma-MAP Filter

The Maximum A Posteriori (MAP) filter attempts to estimate the original pixel DN, which is assumed to lie between the local average and the degraded (actual) pixel DN. MAP logic maximizes the a posteriori probability density function with respect to the original image.

Many speckle reduction filters (e.g., Lee, Lee-Sigma, Frost) assume a Gaussian distribution for the speckle noise. Recent work has shown this to be an invalid assumption. Natural vegetated areas have been shown to be more properly modeled as having a Gamma distributed cross section. This algorithm incorporates this assumption. The exact formula used is the cubic equation:

Î³ − Ī Î² + σ (Î − DN) = 0

Where

Î = sought value

Ī = local mean

DN = input value

σ = original image variance

Source: Frost, Stiles, Shanmugan, Holtzman, 1982
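
The cubic above can be solved numerically per pixel. The sketch below uses NumPy's polynomial root finder and keeps the real root lying between the local mean and the input DN, which is the interval the filter assumes; the fallback to the local mean and the per-pixel calling pattern are illustrative assumptions, not the IMAGINE implementation.

```python
import numpy as np

def gamma_map_pixel(dn, local_mean, sigma):
    """Solve I^3 - (local mean) * I^2 + sigma * (I - DN) = 0 for the restored value,
    where sigma is the original image variance parameter defined above."""
    coeffs = [1.0, -local_mean, sigma, -sigma * dn]
    roots = np.roots(coeffs)
    real_roots = roots[np.isclose(roots.imag, 0.0)].real
    lo, hi = sorted((local_mean, dn))
    candidates = [r for r in real_roots if lo <= r <= hi]
    # Fall back to the local mean when no root lands in the expected interval.
    return candidates[0] if candidates else local_mean
```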



Edge Detection Edge and line detection are important operations in digital image processing. For example, geologists are often interested in mapping lineaments, which may be fault lines or bedding structures. For this purpose, edge and line detection is a major enhancement technique.

In selecting an algorithm, it is first necessary to understand the nature of what is being enhanced. Edge detection could imply amplifying an edge, a line, or a spot (see Figure 83).

Figure 83: One-dimensional, Continuous Edge, and Line Models

• Ramp edge — an edge modeled as a ramp, increasing in DN value from a low to a high level, or vice versa. Distinguished by DN change, slope, and slope midpoint.

• Step edge — a ramp edge with a slope angle of 90 degrees.

• Line — a region bounded on each end by an edge; width must be less than the moving window size.

• Roof edge — a line with a width near zero.

The models in Figure 83 represent ideal theoretical edges. However, real data values will vary to produce a more distorted edge, due to sensor noise, vibration, etc. (see Figure 84). There are no perfect edges in raster data, hence the need for edge detection algorithms.



Figure 84: A Very Noisy Edge Superimposed on an Ideal Edge

Edge detection algorithms can be broken down into 1st-order derivative and 2nd-order derivative operations. Figure 85 shows ideal one-dimensional edge and line intensity curves with the associated 1st-order and 2nd-order derivatives.

Figure 85: Edge and Line Derivatives

(Figure 84 plots intensity against x, with the actual data values scattered about an ideal model step edge. Figure 85 plots the original feature g(x), its 1st derivative ∂g/∂x, and its 2nd derivative ∂²g/∂x² for both a step edge and a line.)


The 1st-order derivative kernel(s) derives from the simple Prewitt kernel:

∂/∂x =
 1  1  1
 0  0  0
-1 -1 -1

and

∂/∂y =
 1  0 -1
 1  0 -1
 1  0 -1

The 2nd-order derivative kernel(s) derives from Laplacian operators:

∂²/∂x² =
-1  2 -1
-1  2 -1
-1  2 -1

and

∂²/∂y² =
-1 -1 -1
 2  2  2
-1 -1 -1

1st-Order Derivatives (Prewitt)

ERDAS IMAGINE Radar utilizes sets of template matching operators. These operators approximate the eight possible compass orientations (North, South, East, West, Northeast, Northwest, Southeast, Southwest). The compass names indicate the slope direction creating maximum response. (Gradient kernels with zero weighting, i.e., the sum of the kernel coefficients is zero, have no output in uniform regions.) The detected edge will be orthogonal to the gradient direction.

To avoid positional shift, all operating windows are odd number arrays, with the center pixel being the pixel of interest. Extension of the 3 × 3 impulse response arrays to a larger size is not clear cut—different authors suggest different lines of rationale. For example, it may be advantageous to extend the 3-level (Prewitt 1970) to:

 1  1  0 -1 -1
 1  1  0 -1 -1
 1  1  0 -1 -1
 1  1  0 -1 -1
 1  1  0 -1 -1

or the following might be beneficial:

 2  1  0 -1 -2
 2  1  0 -1 -2
 2  1  0 -1 -2
 2  1  0 -1 -2
 2  1  0 -1 -2

or

 4  2  0 -2 -4
 4  2  0 -2 -4
 4  2  0 -2 -4
 4  2  0 -2 -4
 4  2  0 -2 -4


Larger template arrays provide a greater noise immunity, but are computationally more demanding.

Zero-Sum Filters

A common type of edge detection kernel is a zero-sum filter. For this type of filter, the coefficients are designed to add up to zero. Examples of two zero-sum filters are given below:

Sobel =

 1  0 -1              -1 -2 -1
 2  0 -2     and       0  0  0
 1  0 -1               1  2  1
(vertical)           (horizontal)

Prewitt =

 1  0 -1              -1 -1 -1
 1  0 -1     and       0  0  0
 1  0 -1               1  1  1
(vertical)           (horizontal)

Prior to edge enhancement, you should reduce speckle noise by using the Radar Speckle Suppression function.

2nd-Order Derivatives (Laplacian Operators)

The second category of edge enhancers is 2nd-order derivative or Laplacian operators. These are best for line (or spot) detection as distinct from ramp edges. ERDAS IMAGINE Radar offers two such arrays:

Unweighted line:

-1  2 -1
-1  2 -1
-1  2 -1

Weighted line:

-1  2 -1
-2  4 -2
-1  2 -1

Source: Pratt 1991
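
Applying a zero-sum kernel is a plain convolution. The sketch below convolves the two Sobel kernels listed above and combines the results into a gradient magnitude; the magnitude combination is a common convention for display rather than something prescribed by this guide.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_VERTICAL = np.array([[ 1, 0, -1],
                           [ 2, 0, -2],
                           [ 1, 0, -1]], dtype=float)
SOBEL_HORIZONTAL = np.array([[-1, -2, -1],
                             [ 0,  0,  0],
                             [ 1,  2,  1]], dtype=float)

def sobel_edges(img):
    """Convolve with both zero-sum Sobel kernels; uniform regions produce zero output."""
    gx = convolve(img.astype(np.float64), SOBEL_VERTICAL)
    gy = convolve(img.astype(np.float64), SOBEL_HORIZONTAL)
    return np.hypot(gx, gy)   # gradient magnitude
```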


Some researchers have found that a combination of 1st- and 2nd-order derivative images produces the best output. See Eberlein and Weszka (1975) for information about subtracting the 2nd-order derivative (Laplacian) image from the 1st-order derivative image (gradient).

Texture According to Pratt (1991), “Many portions of images of natural scenes are devoid of sharp edges over large areas. In these areas the scene can often be characterized as exhibiting a consistent structure analogous to the texture of cloth. Image texture measurements can be used to segment an image and classify its segments.”

As an enhancement, texture is particularly applicable to radar data, although it may be applied to any type of data with varying results. For example, it has been shown (Blom et al 1982) that a three-layer variance image using 15 × 15, 31 × 31, and 61 × 61 windows can be combined into a three-color RGB (red, green, blue) image that is useful for geologic discrimination. The same could apply to a vegetation classification.

The user could also prepare a three-color image using three different functions operating through the same (or different) size moving window(s). However, each data set and application would need different moving window sizes and/or texture measures to maximize the discrimination.

Radar Texture Analysis

While texture analysis has been useful in the enhancement of visible/infrared image data (VIS/IR), it is showing even greater applicability to radar imagery. In part, this stems from the nature of the imaging process itself.

The interaction of the radar waves with the surface of interest is dominated by reflection involving the surface roughness at the wavelength scale. In VIS/IR imaging, the phenomenon involved is absorption at the molecular level. Also, as we know from array-type antennae, radar is especially sensitive to regularity that is a multiple of its wavelength. This provides for a more precise method for quantifying the character of texture in a radar return.

The ability to use radar data to detect texture and provide topographic information about an image is a major advantage over other types of imagery where texture is not a quantitative characteristic.

The texture transforms can be used in several ways to enhance the use of radar imagery. Adding the radar intensity image as an additional layer in a (vegetation) classification is fairly straightforward and may be useful. However, the proper texture image (function and window size) can greatly increase the discrimination. Using known test sites, one can experiment to discern which texture image best aids the classification. For example, the texture image could then be added as an additional layer to the TM bands.

As radar data come into wider use, other mathematical texture definitions will prove useful and will be added to ERDAS IMAGINE Radar. In practice, you will interactively decide which algorithm and window size is best for your data and application.


Texture Analysis Algorithms

While texture has typically been a qualitative measure, it can be enhanced with mathematical algorithms. Many algorithms appear in the literature for specific applications (Haralick 1979, Irons 1981).

The algorithms incorporated into ERDAS IMAGINE are those which are applicable in a wide variety of situations and are not computationally over-demanding. This latter point becomes critical as the moving window size increases. Research has shown that very large moving windows are often needed for proper enhancement. For example, Blom (Blom et al 1982) uses up to a 61 × 61 window.

Four algorithms are currently utilized for texture enhancement in ERDAS IMAGINE:

• mean Euclidean distance (1st-order)

• variance (2nd-order)

• skewness (3rd-order)

• kurtosis (4th order)

Mean Euclidean Distance

These algorithms are shown below (Irons and Petersen 1981):

Mean Euclidean Distance = { Σij [ Σλ (xcλ − xijλ)² ] / (n − 1) }^(1/2)

where:

xijλ = DN value for spectral band λ and pixel (i,j) of a multispectral image
xcλ = DN value for spectral band λ of a window’s center pixel
n = number of pixels in a window

Variance

Variance = Σ (xij − M)² / (n − 1)

where:

xij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window, where:

Mean = Σ xij / n


Skewness

Skew = Σ (xij − M)³ / [ (n − 1) V^(3/2) ]

where:

xij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)

Kurtosis

Kurtosis = Σ (xij − M)⁴ / [ (n − 1) V² ]

where:

xij = DN value of pixel (i,j)
n = number of pixels in a window
M = Mean of the moving window (see above)
V = Variance (see above)

Texture analysis is available from the Texture function in Image Interpreter and from the Radar Texture Analysis function.
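
The four measures can be computed over a moving window with SciPy's generic_filter, as sketched below. The formulas follow the (n − 1) definitions above; the 15 × 15 window and the choice of measure are illustrative, and the small constant added to the denominators simply guards against flat windows.

```python
import numpy as np
from scipy.ndimage import generic_filter

def window_variance(values):
    m = values.mean()
    return ((values - m) ** 2).sum() / (values.size - 1)

def window_skewness(values):
    m, v = values.mean(), window_variance(values)
    return ((values - m) ** 3).sum() / ((values.size - 1) * v ** 1.5 + 1e-12)

def window_kurtosis(values):
    m, v = values.mean(), window_variance(values)
    return ((values - m) ** 4).sum() / ((values.size - 1) * v ** 2 + 1e-12)

def texture_image(band, size=15, measure=window_variance):
    """Apply one texture measure over a size x size moving window of a single band."""
    return generic_filter(band.astype(np.float64), measure, size=size)
```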


Radiometric Correction - Radar Imagery

The raw radar image frequently contains radiometric errors due to:

• imperfections in the transmit and receive pattern of the radar antenna

• errors due to the coherent pulse (i.e., speckle)

• the inherently stronger signal from a near range (closest to the sensor flight path) than a far range (farthest from the sensor flight path) target

Many imaging radar systems use a single antenna that transmits the coherent radar burst and receives the return echo. However, no antenna is perfect; it may have various lobes, dead spots, and imperfections. This causes the received signal to be slightly distorted radiometrically. In addition, range fall-off will cause far range targets to be darker (less return signal).

These two problems can be addressed by adjusting the average brightness of each range line to a constant—usually the average overall scene brightness (Chavez 1986). This requires that each line of constant range be long enough to reasonably approximate the overall scene brightness (see Figure 86). This approach is generic; it is not specific to any particular radar sensor.

The Adjust Brightness function in ERDAS IMAGINE works by correcting each range line average. For this to be a valid approach, the number of data values must be large enough to provide good average values. Be careful not to use too small an image. This will depend upon the character of the scene itself.

Figure 86: Adjust Brightness Function

(Figure 86 depicts rows and columns of image data. For each row of data, ax is the average data value of line x. Adding the averages of all data rows and dividing by the number of rows gives the overall average: (a1 + a2 + a3 + a4 + ... + ax) / x. The calibration coefficient of line x is then the overall average divided by ax. The figure also marks a small subset that would not give an accurate average for correcting the entire scene.)
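
The correction in Figure 86 amounts to scaling every line of constant range by the ratio of the overall scene average to that line's average. A minimal sketch of that idea, assuming the lines of constant range run along the image columns (the lines_in_columns flag is a hypothetical parameter for this example):

```python
import numpy as np

def adjust_brightness(img, lines_in_columns=True):
    """Scale each line of constant range so its average data value matches
    the overall scene average (compare Figure 86)."""
    img = img.astype(np.float64)
    overall_average = img.mean()
    if lines_in_columns:
        line_averages = img.mean(axis=0)                     # one average per column
        coefficients = overall_average / (line_averages + 1e-12)
        return img * coefficients[np.newaxis, :]
    line_averages = img.mean(axis=1)                         # one average per row
    coefficients = overall_average / (line_averages + 1e-12)
    return img * coefficients[:, np.newaxis]
```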


Range Lines/Lines of Constant Range

Lines of constant range are not the same thing as range lines:

• Range lines — lines that are perpendicular to the flight of the sensor

• Lines of constant range — lines that are parallel to the flight of the sensor

• Range direction — same as range lines

Because radiometric errors are a function of the imaging geometry, the image must be correctly oriented during the correction process. For the algorithm to correctly address the data set, the user must tell ERDAS IMAGINE whether the lines of constant range are in columns or rows in the displayed image.

Figure 87 shows the lines of constant range in columns, parallel to the sides of the display screen:

Figure 87: Range Lines vs. Lines of Constant Range

(Figure 87 shows the display screen with the lines of constant range running vertically, parallel to the flight (azimuth) direction, and the range lines running in the range direction, perpendicular to the flight path.)


Slant-to-Ground Range Correction

Radar images also require slant-to-ground range correction, which is similar in concept to orthocorrecting a VIS/IR image. By design, an imaging radar is always side-looking. In practice, the depression angle is usually 75° at most. In operation, the radar sensor determines the range (distance) to each target, as shown in Figure 88.

Figure 88: Slant-to-Ground Range Correction

Assuming that angle ACB is a right angle, the user can approximate:

Dists ≈ Distg (cos θ), that is, cos θ = Dists / Distg

where:

Dists = slant range distance
Distg = ground range distance

Source: Leberl 1990

(Figure 88 shows the across-track antenna at height H, the depression angle θ, the slant-range distance Dists, the ground-range distance Distg, points A, B, and C with the right angle at C, and arcs of constant range.)


This has the effect of compressing the near range areas more than the far range areas. For many applications, this may not be important. However, to geocode the scene or to register radar to infrared or visible imagery, the scene must be corrected to a ground range format. To do this, the following parameters relating to the imaging geometry are needed:

• Depression angle (θ) — angular distance between sensor horizon and scene center

• Sensor height (H) — elevation of sensor (in meters) above its nadir point

• Beam width — angular distance between near range and far range for entire scene

• Pixel size (in meters) — range input image cell size

This information is usually found in the header file of the data. Use the Data View... option to view this information. If it is not contained in the header file, you must obtain this information from the data supplier.

Once the scene is range-format corrected, pixel size can be changed for coregistration with other data sets.
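
Under the right-angle approximation above, converting a slant-range pixel spacing to ground range is a single division by cos θ. A minimal sketch; the depression angle and pixel size below are placeholder values that would normally come from the header file.

```python
import numpy as np

def slant_to_ground_spacing(slant_pixel_size, depression_angle_deg):
    """Ground-range pixel size from slant-range pixel size: Distg ≈ Dists / cos(theta)."""
    theta = np.radians(depression_angle_deg)
    return slant_pixel_size / np.cos(theta)

# Example: 12.5 m slant-range pixels imaged at a 30 degree depression angle
# stretch to roughly 14.4 m on the ground.
print(slant_to_ground_spacing(12.5, 30.0))
```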


Merging Radar with VIS/IR Imagery

As mentioned earlier, the phenomena involved in radar imaging are quite different from those in VIS/IR imaging. Because these two sensor types give different information about the same target (chemical vs. physical), they are complementary data sets. If the two images are correctly combined, the resultant image will convey both chemical and physical information and could prove more useful than either image alone.

The methods for merging radar and VIS/IR data are still experimental and open for exploration. The following methods are suggested for experimentation:

• Co-displaying in a Viewer

• RGB to IHS transforms

• Principal components transform

• Multiplicative

The ultimate goal of enhancement is not mathematical or logical purity - it is feature extraction. There are currently no rules to suggest which options will yield the best results for a particular application; you must experiment. The option that proves to be most useful will depend upon the data sets (both radar and VIS/IR), your experience, and your final objective.

Co-Displaying

The simplest and most frequently used method of combining radar with VIS/IR imagery is co-displaying on an RGB color monitor. In this technique the radar image is displayed with one (typically the red) gun, while the green and blue guns display VIS/IR bands or band ratios. This technique follows from no logical model and does not truly merge the two data sets.

Use the Viewer with the Clear Display option disabled for this type of merge. Select the color guns to display the different layers.

RGB to IHS Transforms

Another common technique uses the RGB to IHS transforms. In this technique, an RGB (red, green, blue) color composite of bands (or band derivatives, such as ratios) is transformed into intensity, hue, saturation color space. The intensity component is replaced by the radar image, and the scene is reverse transformed. This technique integrally merges the two data types.

For more information, see "RGB to IHS" on page 160.
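
The intensity-substitution idea can be sketched as below. This is only an illustration of the pattern: it uses matplotlib's HSV conversion as a simple stand-in for the RGB to IHS transform described in this guide, and the min-max stretch is an arbitrary normalization for the example.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def intensity_substitution_merge(red, green, blue, radar):
    """Transform a VIS/IR composite out of RGB, replace the intensity-like
    component with the radar image, and transform back."""
    def stretch(band):
        band = band.astype(np.float64)
        return (band - band.min()) / (band.max() - band.min() + 1e-12)

    rgb = np.dstack([stretch(red), stretch(green), stretch(blue)])
    hsv = rgb_to_hsv(rgb)
    hsv[..., 2] = stretch(radar)        # value (intensity) channel replaced by radar
    return hsv_to_rgb(hsv)
```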


Principal Components Transform

A similar image merge involves utilizing the principal components (PC) transformation of the VIS/IR image. With this transform, more than three components can be used. These are converted to a series of principal components. The first principal component, PC-1, is generally accepted to correlate with overall scene brightness. This value is replaced by the radar image and the reverse transform is applied.

For more information, see "Principal Components Analysis" on page 153.

Multiplicative

A final method to consider is the multiplicative technique. This requires several chromatic components and a multiplicative component, which is assigned to the image intensity. In practice, the chromatic components are usually band ratios or PCs; the radar image is input multiplicatively as intensity (Holcomb 1993).

The two sensor merge models using transforms to integrate the two data sets (Principal Components and RGB to IHS) are based on the assumption that the radar intensity correlates with the intensity that the transform derives from the data inputs. However, the logic of mathematically merging radar with VIS/IR data sets is inherently different from the logic of the SPOT/TM merges (as discussed under the section in this chapter on Resolution Merge). It cannot be assumed that the radar intensity is a surrogate for, or equivalent to, the VIS/IR intensity. The acceptability of this assumption will depend on the specific case.

For example, Landsat TM imagery is often used to aid in mineral exploration. A common display for this purpose is RGB = TM5/TM7, TM5/TM4, TM3/TM1; the logic being that if all three ratios are high, the sites suited for mineral exploration will be bright overall. If the target area is accompanied by silicification, which results in an area of dense angular rock, this should be the case. However, if the alteration zone was basaltic rock to kaolinite/alunite, then the radar return could be weaker than the surrounding rock. In this case, radar would not correlate with high 5/7, 5/4, 3/1 intensity and the substitution would not produce the desired results (Holcomb 1993).


CHAPTER 6
Classification

Introduction Multispectral classification is the process of sorting pixels into a finite number of individual classes, or categories of data, based on their data file values. If a pixel satisfies a certain set of criteria, the pixel is assigned to the class that corresponds to those criteria. This process is also referred to as image segmentation.

Depending on the type of information the user wants to extract from the original data, classes may be associated with known features on the ground or may simply represent areas that “look different” to the computer. An example of a classified image is a land cover map, showing vegetation, bare land, pasture, urban, etc.

The Classification Process

Pattern Recognition

Pattern recognition is the science—and art—of finding meaningful patterns in data, which can be extracted through classification. By spatially and spectrally enhancing an image, pattern recognition can be performed with the human eye—the human brain automatically sorts certain textures and colors into categories.

In a computer system, spectral pattern recognition can be more scientific. Statistics are derived from the spectral characteristics of all pixels in an image. Then, the pixels are sorted based on mathematical criteria. The classification process breaks down into two parts—training and classifying (using a decision rule).

Training First, the computer system must be trained to recognize patterns in the data. Training is the process of defining the criteria by which these patterns are recognized (Hord 1982). Training can be performed with either a supervised or an unsupervised method, as explained below.

Supervised Training

Supervised training is closely controlled by the analyst. In this process, the user selects pixels that represent patterns or land cover features that they recognize, or that they can identify with help from other sources, such as aerial photos, ground truth data, or maps. Knowledge of the data, and of the classes desired, is required before classification.

By identifying patterns, the user can “train” the computer system to identify pixels with similar characteristics. If the classification is accurate, the resulting classes represent the categories within the data that the user originally identified.


Unsupervised Training

Unsupervised training is more computer-automated. It enables the user to specify some parameters that the computer uses to uncover statistical patterns that are inherent in the data. These patterns do not necessarily correspond to directly meaningful characteristics of the scene, such as contiguous, easily recognized areas of a particular soil type or land use. They are simply clusters of pixels with similar spectral characteristics. In some cases, it may be more important to identify groups of pixels with similar spectral characteristics than it is to sort pixels into recognizable categories.

Unsupervised training is dependent upon the data itself for the definition of classes. This method is usually used when less is known about the data before classification. It is then the analyst’s responsibility, after classification, to attach meaning to the resulting classes (Jensen 1996). Unsupervised classification is useful only if the classes can be appropriately interpreted.

Signatures The result of training is a set of signatures that defines a training sample or cluster. Each signature corresponds to a class, and is used with a decision rule (explained below) to assign the pixels in the image file (.img) to a class. Signatures in ERDAS IMAGINE can be parametric or non-parametric.

A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. Supervised and unsupervised training can generate parametric signatures. A set of parametric signatures can be used to train a statistically-based classifier (e.g., maximum likelihood) to define the classes.

A non-parametric signature is not based on statistics, but on discrete objects (polygons or rectangles) in a feature space image. These feature space objects are used to define the boundaries for the classes. A non-parametric classifier will use a set of non-parametric signatures to assign pixels to a class based on their location either inside or outside the area in the feature space image. Supervised training is used to generate non-parametric signatures (Kloer 1994).

ERDAS IMAGINE enables the user to generate statistics for a non-parametric signature. This function will allow a feature space object to be used to create a parametric signature from the image being classified. However, since a parametric classifier requires a normal distribution of data, the only feature space object for which this would be mathematically valid would be an ellipse (Kloer 1994).

When both parametric and non-parametric signatures are used to classify an image, the user is more able to analyze and visualize the class definitions than either type of signature provides independently (Kloer 1994).

See "APPENDIX A: Math Topics" for information on feature space images and how they are created.


Decision Rule After the signatures are defined, the pixels of the image are sorted into classes based on the signatures, by use of a classification decision rule. The decision rule is a mathematical algorithm that, using data contained in the signature, performs the actual sorting of pixels into distinct class values.

Parametric Decision Rule

A parametric decision rule is trained by the parametric signatures. These signatures are defined by the mean vector and covariance matrix for the data file values of the pixels in the signatures. When a parametric decision rule is used, every pixel is assigned to a class, since the parametric decision space is continuous (Kloer 1994).

Non-Parametric Decision Rule

A non-parametric decision rule is not based on statistics; therefore, it is independent from the properties of the data. If a pixel is located within the boundary of a non-parametric signature, then this decision rule will assign the pixel to the signature’s class. Basically, a non-parametric decision rule determines whether or not the pixel is located inside or outside of a non-parametric signature boundary.


Classification Tips

Classification Scheme Usually, classification is performed with a set of target classes in mind. Such a set is called a classification scheme (or classification system). The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data (Jensen 1983). The proper classification scheme will include classes that are both important to the study and discernible from the data on hand. Most schemes have a hierarchical structure, which can describe a study area in several levels of detail.

A number of classification schemes have been developed by specialists who have inventoried a geographic region. Some references for professionally-developed schemes are listed below:

• Anderson, J.R., et al. 1976. “A Land Use and Land Cover Classification System for Use with Remote Sensor Data.” U.S. Geological Survey Professional Paper 964.

• Cowardin, Lewis M., et al. 1979. Classification of Wetlands and Deepwater Habitats of the United States. Washington, D.C.: U.S. Fish and Wildlife Service.

• Florida Topographic Bureau, Thematic Mapping Section. 1985. Florida Land Use, Cover and Forms Classification System. Florida Department of Transportation, Procedure No. 550-010-001-a.

• Michigan Land Use Classification and Reference Committee. 1975. Michigan Land Cover/Use Classification System. Lansing, Michigan: State of Michigan Office of Land Use.

Other states or government agencies may also have specialized land use/cover studies.

It is recommended that the user begin the classification process by defining a classification scheme for the application, using previously developed schemes, like those above, as a general framework.


Iterative Classification A process is iterative when it repeats an action. The objective of the ERDAS IMAGINE system is to enable the user to iteratively create and refine signatures and classified .img files to arrive at a desired final classification. The IMAGINE classification utilities are a “tool box” to be used as needed, not a numbered list of steps that must always be followed in order.

The total classification can be achieved with either the supervised or unsupervised methods, or a combination of both. Some examples are below:

• Signatures created from both supervised and unsupervised training can be merged and appended together.

• Signature evaluation tools can be used to indicate which signatures are spectrally similar. This will help to determine which signatures should be merged or deleted. These tools also help define optimum band combinations for classification. Using the optimum band combination may reduce the time required to run a classification process.

• Since classifications (supervised or unsupervised) can be based on a particular area of interest (either defined in a raster layer or an .aoi layer), signatures and classifications can be generated from previous classification results.

Supervised vs. Unsupervised Training

In supervised training, it is important to have a set of desired classes in mind, and then create the appropriate signatures from the data. The user must also have some way of recognizing pixels that represent the classes that he or she wants to extract.

Supervised classification is usually appropriate when the user wants to identify relatively few classes, when the user has selected training sites that can be verified with ground truth data, or when the user can identify distinct, homogeneous regions that represent each class.

On the other hand, if the user wants the classes to be determined by spectral distinctions that are inherent in the data, so that he or she can define the classes later, then the application is better suited to unsupervised training. Unsupervised training enables the user to define many classes easily, and identify classes that are not in contiguous, easily recognized regions.

NOTE: Supervised classification also includes using a set of classes that was generated from an unsupervised classification. Using a combination of supervised and unsupervised classification may yield optimum results, especially with large data sets (e.g., multiple Landsat scenes). For example, unsupervised classification may be useful for generating a basic set of classes, then supervised classification could be used for further definition of the classes.

Classifying Enhanced Data

For many specialized applications, classifying data that have been merged, spectrally merged or enhanced—with principal components, image algebra, or other transformations—can produce very specific and meaningful results. However, without understanding the data and the enhancements used, it is recommended that only the original, remotely-sensed data be classified.


Dimensionality Dimensionality refers to the number of layers being classified. For example, a data file with 3 layers is said to be 3-dimensional, since 3-dimensional feature space is “plotted” to analyze the data.

Feature space and dimensionality are discussed in "APPENDIX A: Math Topics".

Adding Dimensions

Using programs in ERDAS IMAGINE, the user can add layers to existing .img files. Therefore, the user can incorporate data (called ancillary data) other than remotely-sensed data into the classification. Using ancillary data enables the user to incorporate variables into the classification from, for example, vector layers, previously classified data, or elevation data. The data file values of the ancillary data become an additional feature of each pixel, thus influencing the classification (Jensen 1996).

Limiting Dimensions

Although ERDAS IMAGINE allows an unlimited number of layers of data to be usedfor one classification, it is usually wise to reduce the dimensionality of the data as muchas possible. Often, certain layers of data are redundant or extraneous to the task at hand.Unnecessary data take up valuable disk space, and cause the computer system toperform more arduous calculations, which slows down processing.

Use the Signature Editor to evaluate separability to calculate the best subset of layer combina-tions. Use the Image Interpreter functions to merge or subset layers. Use the Image Informationtool (in the ERDAS IMAGINE icon panel) to delete a layer(s).


Supervised Training Supervised training requires a priori (already known) information about the data, such as:

• What type of classes need to be extracted? Soil type? Land use? Vegetation?

• What classes are most likely to be present in the data? That is, which types of land cover, soil, or vegetation (or whatever) are represented by the data?

In supervised training, the user relies on his or her own pattern recognition skills and a priori knowledge of the data to help the system determine the statistical criteria (signatures) for data classification.

To select reliable samples, the user should know some information—either spatial or spectral—about the pixels that they want to classify.

The location of a specific characteristic, such as a land cover type, may be knownthrough ground truthing. Ground truthing refers to the acquisition of knowledgeabout the study area from field work, analysis of aerial photography, personalexperience, etc. Ground truth data are considered to be the most accurate (true) dataavailable about the area of study. They should be collected at the same time as theremotely sensed data, so that the data correspond as much as possible (Star and Estes1990). However, some ground data may not be very accurate due to a number of errors,inaccuracies, and human shortcomings.

Training Samples and Feature Space Objects

Training samples (also called samples) are sets of pixels that represent what is recognized as a discernible pattern, or potential class. The system will calculate statistics from the sample pixels to create a parametric signature for the class.

The following terms are sometimes used interchangeably in reference to training samples. For clarity, they will be used in this documentation as follows:

• Training sample, or sample, is a set of pixels selected to represent a potential class. The data file values for these pixels are used to generate a parametric signature.

• Training field, or training site, is the geographical area(s) of interest (AOI) in the image represented by the pixels in a sample. Usually, it is previously identified with the use of ground truth data.

Feature space objects are user-defined areas of interest (AOIs) in a feature space image. The feature space signature is based on this object(s).


Selecting Training Samples

It is important that training samples be representative of the class that the user is trying to identify. This does not necessarily mean that they must contain a large number of pixels or be dispersed across a wide region of the data. The selection of training samples depends largely upon the user’s knowledge of the data, of the study area, and of the classes that he or she wants to extract.

ERDAS IMAGINE enables the user to identify training samples via one or more of the following methods:

• using a vector layer

• defining a polygon in the image

• identifying a training sample of contiguous pixels with similar spectral characteristics

• identifying a training sample of contiguous pixels within a certain area, with or without similar spectral characteristics

• using a class from a thematic raster layer from an image file of the same area (i.e., the result of an unsupervised classification)

Digitized Polygon

Training samples can be identified by their geographical location (training sites, using maps, ground truth data). The locations of the training sites can be digitized from maps with the ERDAS IMAGINE Vector or AOI tools. Polygons representing these areas are then stored as vector layers. The vector layers can then be used as input to the AOI tools and used as training samples to create signatures.

Use the Vector and AOI tools to digitize training samples from a map. Use the Signature Editor to create signatures from training samples that are identified with digitized polygons.

User-Defined Polygon

Using his or her pattern recognition skills (with or without supplemental ground truthinformation), the user can identify samples by examining a displayed image of the dataand drawing a polygon around the training site(s) of interest. For example, if it isknown that oak trees reflect certain frequencies of green and infrared light according toground truth data, the user may be able to base his or her sample selections on the data(taking atmospheric conditions, sun angle, time, date, and other variations intoaccount). The area within the polygon(s) would be used to create a signature.

Use the AOI tools to define the polygon(s) to be used as the training sample. Use the SignatureEditor to create signatures from training samples that are identified with the polygons.


Identify Seed Pixel

With the Seed Properties dialog and AOI tools, the cursor (cross hair) can be used to identify a single pixel (seed pixel) that is representative of the training sample. This seed pixel will be used as a model pixel, against which the pixels that are contiguous to it are compared based on parameters specified by the user.

When one or more of the contiguous pixels is accepted, the mean of the sample is calculated from the accepted pixels. Then, the pixels contiguous to the sample are compared in the same way. This process repeats until no pixels that are contiguous to the sample satisfy the spectral parameters. In effect, the sample “grows” outward from the model pixel with each iteration. These homogenous pixels will be converted from individual raster pixels to a polygon and used as an area of interest (AOI) layer.

Select the Seed Properties option in the Viewer to identify training samples with a seed pixel.
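
The growing process described above is essentially a constrained region grow. A minimal sketch, assuming a single band and a simple spectral-distance threshold as the only acceptance parameter (the actual Seed Properties dialog exposes more options, such as geographic constraints):

```python
import numpy as np
from collections import deque

def grow_training_sample(band, seed_row, seed_col, max_distance=10.0):
    """Grow a training sample outward from a seed pixel: accept contiguous pixels
    whose values stay within max_distance of the running sample mean."""
    band = band.astype(np.float64)
    rows, cols = band.shape
    accepted = np.zeros((rows, cols), dtype=bool)
    accepted[seed_row, seed_col] = True
    total, count = band[seed_row, seed_col], 1
    frontier = deque([(seed_row, seed_col)])

    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not accepted[nr, nc]:
                if abs(band[nr, nc] - total / count) <= max_distance:
                    accepted[nr, nc] = True
                    total += band[nr, nc]
                    count += 1
                    frontier.append((nr, nc))
    return accepted   # boolean mask usable as an AOI-style training sample
```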

Seed Pixel Method with Spatial Limits

The training sample identified with the seed pixel method can be limited to a particular region by defining the geographic distance and area.

Vector layers (polygons or lines) can be displayed as the top layer in the Viewer, and the boundaries can then be used as an AOI for training samples defined under Seed Properties.

Thematic Raster Layer

A training sample can be defined by using class values from a thematic raster layer (see Table 20). The data file values in the training sample will be used to create a signature. The training sample can be defined by as many class values as desired.

NOTE: The thematic raster layer must have the same coordinate system as the image file being classified.

Table 20: Training Sample Comparison

Training Samples

Method                   Advantages                                                      Disadvantages
Digitized Polygon        precise map coordinates, represents known ground information    may overestimate class variance, time-consuming
User-Defined Polygon     high degree of user control                                     may overestimate class variance, time-consuming
Seed Pixel               auto-assisted, less time                                        may underestimate class variance
Thematic Raster Layer    allows iterative classifying                                    must have previously defined thematic layer


Evaluating Training Samples

Selecting training samples is often an iterative process. To generate signatures that accurately represent the classes to be identified, the user may have to repeatedly select training samples, evaluate the signatures that are generated from the samples, and then either take new samples or manipulate the signatures as necessary. Signature manipulation may involve merging, deleting, or appending from one file to another. It is also possible to perform a classification using the known signatures, then mask out areas that are not classified to use in gathering more signatures.

See "Evaluating Signatures" on page 236 for methods of determining the accuracy of the signatures created from your training samples.

Selecting Feature Space Objects

The ERDAS IMAGINE Feature Space tools enable the user to interactively define feature space objects (AOIs) in the feature space image(s). A feature space image is simply a graph of the data file values of one band of data against the values of another band (often called a scatterplot). In ERDAS IMAGINE, a feature space image has the same data structure as a raster image; therefore, feature space images can be used with other IMAGINE utilities, including zoom, color level slicing, virtual roam, Spatial Modeler, and Map Composer.

Figure 89: Example of a Feature Space Image

The transformation of a multilayer raster image into a feature space image is done bymapping the input pixel values to a position in the feature space image. This transfor-mation defines only the pixel position in the feature space image. It does not define thepixel’s value. The pixel values in the feature space image can be the accumulatedfrequency, which is calculated when the feature space image is defined. The pixelvalues can also be provided by a thematic raster layer of the same geometry as thesource multilayer image. Mapping a thematic layer into a feature space image can beuseful for evaluating the validity of the parametric and non-parametric decision bound-aries of a classification (Kloer 1994).

When you display a feature space image file (.fsp.img) in an ERDAS IMAGINE Viewer, thecolors reflect the density of points for both bands. The bright tones represent a high density andthe dark tones represent a low density.



Create Non-parametric Signature

The user can define a feature space object (AOI) in the feature space image and use it directly as a non-parametric signature. Since the IMAGINE Viewers for the feature space image and the image being classified are both linked to the IMAGINE Signature Editor, it is possible to mask AOIs from the image being classified to the feature space image, and vice versa. The user can also directly link a cursor in the image Viewer to the feature space Viewer. These functions will help determine a location for the AOI in the feature space image.

A single feature space image, but multiple AOIs, can be used to define the signature. This signature is taken within the feature space image, not the image being classified. The pixels in the image that correspond to the data file values in the signature (i.e., feature space object) will be assigned to that class.

One fundamental difference between using the feature space image to define a training sample and the other traditional methods is that it is a non-parametric signature. The decisions made in the classification process have no dependency on the statistics of the pixels. This helps improve classification accuracies for specific non-normal classes, such as urban and exposed rock (Faust, et al 1991).

See "APPENDIX A: Math Topics" for information on feature space images.

Figure 90: Process for Defining a Feature Space Object

1. Display the .img file to be classified in a Viewer (layers 3, 2, 1).

2. Create a feature space image from the .img file being classified (layer 1 vs. layer 2).

3. Draw an AOI (feature space object) around the desired area in the feature space image. Once the user has a desired AOI, it can be used as a signature.

4. A decision rule is used to analyze each pixel in the .img file being classified, and the pixels with the corresponding data file values are assigned to the feature space class.


Evaluate Feature Space Signatures

Via the Feature Space tools, it is also possible to use a feature space signature to generate a mask. Once it is defined as a mask, the pixels under the mask will be identified in the image file and highlighted in the Viewer. The image displayed in the Viewer must be the image from which the feature space image was created. This process will help the user to visually analyze the correlations between various spectral bands to determine which combination of bands brings out the desired features in the image.

The user can have as many feature space images with different band combinations as desired. Any polygon or rectangle in these feature space images can be used as a non-parametric signature. However, only one feature space image can be used per signature. The polygons in the feature space image can be easily modified and/or masked until the desired regions of the image have been identified.

Use the Feature Space tools in the Signature Editor to create a feature space image and mask the signature. Use the AOI tools to draw polygons.

Feature Space Signatures

Advantages:

• Provide an accurate way to classify a class with a non-normal distribution (e.g., residential and urban).

• Certain features may be more visually identifiable in a feature space image.

• The classification decision process is fast.

Disadvantages:

• The classification decision process allows overlap and unclassified pixels.

• The feature space image may be difficult to interpret.


Unsupervised Training

Unsupervised training requires only minimal initial input from the user. However, the user will have the task of interpreting the classes that are created by the unsupervised training algorithm.

Unsupervised training is also called clustering, because it is based on the natural groupings of pixels in image data when they are plotted in feature space. According to the specified parameters, these groups can later be merged, disregarded, otherwise manipulated, or used as the basis of a signature.

Feature space is explained in "APPENDIX A: Math Topics".

Clusters

Clusters are defined with a clustering algorithm, which often uses all or many of the pixels in the input data file for its analysis. The clustering algorithm has no regard for the contiguity of the pixels that define each cluster.

• The ISODATA clustering method uses spectral distance as in the sequential method, but iteratively classifies the pixels, redefines the criteria for each class, and classifies again, so that the spectral distance patterns in the data gradually emerge.

• The RGB clustering method is more specialized than the ISODATA method. It applies to three-band, 8-bit data. RGB clustering plots pixels in three-dimensional feature space, and divides that space into sections that are used to define clusters.

Each of these methods is explained below, along with its advantages and disadvantages.

Some of the statistics terms used in this section are explained in "APPENDIX A: Math Topics".


ISODATA Clustering ISODATA stands for Iterative Self-Organizing Data Analysis Technique (Tou andGonzalez 1974). It is iterative in that it repeatedly performs an entire classification(outputting a thematic raster layer) and recalculates statistics. Self-Organizing refers tothe way in which it locates clusters with minimum user input.

The ISODATA method uses minimum spectral distance to assign a cluster for each candidate pixel. The process begins with a specified number of arbitrary cluster means or the means of existing signatures, and then it processes repetitively, so that those means will shift to the means of the clusters in the data.

Because the ISODATA method is iterative, it is not biased to the top of the data file, as are the one-pass clustering algorithms.

Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA clustering.

ISODATA Clustering Parameters

To perform ISODATA clustering, the user specifies:

• N - the maximum number of clusters to be considered. Since each cluster is the basis for a class, this number becomes the maximum number of classes to be formed. The ISODATA process begins by determining N arbitrary cluster means. Some clusters with too few pixels can be eliminated, leaving fewer than N clusters.

• T - a convergence threshold: the clustering stops when the percentage of pixels whose class values are unchanged between iterations reaches T.

• M - the maximum number of iterations to be performed.


Initial Cluster Means

On the first iteration of the ISODATA algorithm, the means of N clusters can be arbitrarily determined. After each iteration, a new mean for each cluster is calculated, based on the actual spectral locations of the pixels in the cluster, instead of the initial arbitrary calculation. Then, these new means are used for defining clusters in the next iteration. The process continues until there is little change between iterations (Swain 1973).

The initial cluster means are distributed in feature space along a vector that runs between the point at spectral coordinates (µ1-s1, µ2-s2, µ3-s3, ..., µn-sn) and the coordinates (µ1+s1, µ2+s2, µ3+s3, ..., µn+sn). Such a vector in two dimensions is illustrated in Figure 91. The initial cluster means are evenly distributed between (µ1-s1, ..., µn-sn) and (µ1+s1, ..., µn+sn).

Figure 91: ISODATA Arbitrary Clusters

Pixel Analysis

Pixels are analyzed beginning with the upper-left corner of the image and going left to right, block by block.

The spectral distance between the candidate pixel and each cluster mean is calculated. The pixel is assigned to the cluster whose mean is the closest. The ISODATA function creates an output .img file with a thematic raster layer and/or a signature file (.sig) as a result of the clustering. At the end of each iteration, an .img file exists that shows the assignments of the pixels to the clusters.

(Figure 91 plots five arbitrary cluster means spaced along a vector in two-dimensional spectral space, with Band A and Band B data file values on the axes.)


Considering the regular, arbitrary assignment of the initial cluster means, the first iteration of the ISODATA algorithm will always give results similar to those in Figure 92.

Figure 92: ISODATA First Pass

For the second iteration, the means of all clusters are recalculated, causing them to shift in feature space. The entire process is repeated: each candidate pixel is compared to the new cluster means and assigned to the closest cluster mean.

Figure 93: ISODATA Second Pass

Percentage Unchanged

After each iteration, the normalized percentage of pixels whose assignments are unchanged since the last iteration is displayed in the dialog. When this number reaches T (the convergence threshold), the program terminates.

(Figures 92 and 93 plot the pixels assigned to Clusters 1 through 5 against Band A and Band B data file values after the first and second passes.)


It is possible for the percentage of unchanged pixels to never converge or reach T (the convergence threshold). Therefore, it may be beneficial to monitor the percentage, or specify a reasonable maximum number of iterations, M, so that the program will not run indefinitely.
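To make the iteration concrete, the following is a minimal sketch of the procedure described above; it is not the ERDAS IMAGINE implementation. It assumes the image is available as a NumPy array of shape (pixels, bands) and uses the parameters N, T, and M defined earlier.

```python
import numpy as np

def isodata(data, N, T, M):
    """Minimal ISODATA-style clustering sketch (illustrative only).

    data : (num_pixels, num_bands) array of data file values
    N    : maximum number of clusters
    T    : convergence threshold, fraction of pixels unchanged (e.g., 0.95)
    M    : maximum number of iterations
    """
    mean, std = data.mean(axis=0), data.std(axis=0)
    # Spread the N initial cluster means evenly along the vector from
    # (mean - std) to (mean + std) in feature space.
    means = mean + np.linspace(-1.0, 1.0, N)[:, None] * std

    labels = np.full(len(data), -1)
    for _ in range(M):
        # Assign every pixel to the cluster whose mean is spectrally closest.
        dists = np.linalg.norm(data[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        unchanged = np.mean(new_labels == labels)
        labels = new_labels
        # Recalculate each cluster mean from the pixels now assigned to it.
        for k in range(N):
            if np.any(labels == k):
                means[k] = data[labels == k].mean(axis=0)
        if unchanged >= T:      # convergence threshold reached
            break
    return labels, means
```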

Recommended Decision Rule

Although the ISODATA algorithm is the most similar to the minimum distance decision rule, the signatures can produce good results with any type of classification. Therefore, no particular decision rule is recommended over others.

In most cases, the signatures created by ISODATA will be merged, deleted, or appended to other signature sets. The .img file created by ISODATA is the same as the .img file that would be created by a minimum distance classification, except for the non-convergent pixels (100-T% of the pixels).

Use the Merge and Delete options in the Signature Editor to manipulate signatures.

Use the Unsupervised Classification utility in the Signature Editor to perform ISODATA clustering, generate signatures, and classify the resulting signatures.

ISODATA Clustering

Advantages:

• Clustering is not geographically biased to the top or bottom pixels of the data file, because it is iterative.

• This algorithm is highly successful at finding the spectral clusters that are inherent in the data. It does not matter where the initial arbitrary cluster means are located, as long as enough iterations are allowed.

• A preliminary thematic raster layer is created, which gives results similar to using a minimum distance classifier (as explained below) on the signatures that are created. This thematic raster layer can be used for analyzing and manipulating the signatures before actual classification takes place.

Disadvantages:

• The clustering process is time-consuming, because it can repeat many times.

• Does not account for pixel spatial homogeneity.


RGB Clustering

The RGB Clustering and Advanced RGB Clustering functions in Image Interpreter create a thematic raster layer. However, no signature file is created and no other classification decision rule is used. In practice, RGB Clustering differs greatly from the other clustering methods, but it does employ a clustering algorithm and, therefore, it is explained here.

RGB clustering is a simple classification and data compression technique for three bands of data. It is a fast and simple algorithm that quickly compresses a 3-band image into a single band pseudocolor image, without necessarily classifying any particular features.

The algorithm plots all pixels in 3-dimensional feature space and then partitions this space into clusters on a grid. In the more simplistic version of this function, each of these clusters becomes a class in the output thematic raster layer.

The advanced version requires that a minimum threshold on the clusters be set, so that only clusters at least as large as the threshold will become output classes. This allows for more color variation in the output file. Pixels which do not fall into any of the remaining clusters are assigned to the cluster with the smallest city-block distance from the pixel. In this case, the city-block distance is calculated as the sum of the distances in the red, green, and blue directions in 3-dimensional space.

Along each axis of the 3-dimensional scatterplot, each input histogram is scaled so that the partitions divide the histograms between specified limits: either a specified number of standard deviations above and below the mean, or between the minimum and maximum data values for each band.

The default number of divisions per band is listed below:

• RED is divided into 7 sections (32 for advanced version)

• GREEN is divided into 6 sections (32 for advanced version)

• BLUE is divided into 6 sections (32 for advanced version)


Figure 94: RGB Clustering

Partitioning Parameters

It is necessary to specify the number of R, G, and B sections in each dimension of the 3-dimensional scatterplot. The number of sections should vary according to the histograms of each band. Broad histograms should be divided into more sections, and narrow histograms should be divided into fewer sections (see Figure 94).

It is possible to interactively change these parameters in the RGB Clustering function in the Image Interpreter. The number of classes is calculated based on the current parameters, and it displays on the command screen.

(Figure 94 shows the R, G, and B input histograms, each divided into sections between its limits. For example, one cluster contains pixels between 16 and 34 in RED, between 35 and 55 in GREEN, and between 0 and 16 in BLUE.)


Tips

Some starting values that usually produce good results with the simple RGB clustering are:

R = 7, G = 6, B = 6

which results in 7 × 6 × 6 = 252 classes.

To decrease the number of output colors/classes or to darken the output, decrease these values.

For the Advanced RGB clustering function, start with higher values for R, G, and B. Adjust by raising the threshold parameter and/or decreasing the R, G, and B parameter values until the desired number of output classes is obtained.
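The following sketch illustrates how the simple partitioning described above could map each pixel to one of the 7 × 6 × 6 = 252 classes. It is only an illustration, not the Image Interpreter code, and it assumes each band histogram is scaled between its minimum and maximum data values.

```python
import numpy as np

def rgb_cluster(red, green, blue, sections=(7, 6, 6)):
    """Sketch of simple RGB clustering: partition 3-D feature space on a grid.

    red, green, blue : arrays of 8-bit data file values (any common shape)
    sections         : number of divisions per band (7 x 6 x 6 = 252 classes)
    """
    classes = np.zeros(red.shape, dtype=np.int32)
    for band, n_sec, weight in zip(
            (red, green, blue),
            sections,
            (sections[1] * sections[2], sections[2], 1)):
        lo, hi = band.min(), band.max()
        # Scale the band between its limits and find its section index (0 .. n_sec-1).
        idx = ((band - lo) / max(hi - lo, 1) * n_sec).astype(np.int32)
        idx = np.clip(idx, 0, n_sec - 1)
        classes += idx * weight
    return classes + 1      # class values start at 1 in the output layer
```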

RGB Clustering

Advantages:

• The fastest classification method. It is designed to provide a fast, simple classification for applications that do not require specific classes.

• Not biased to the top or bottom of the data file. The order in which the pixels are examined does not influence the outcome.

• (Advanced version only) A highly interactive function, allowing an iterative adjustment of the parameters until the number of clusters and the thresholds are satisfactory for analysis.

Disadvantages:

• Exactly three bands must be input, which is not suitable for all applications.

• Does not always create thematic classes that can be analyzed for informational purposes.


Signature Files

A signature is a set of data that defines a training sample, feature space object (AOI), or cluster. The signature is used in a classification process. Each classification decision rule (algorithm) requires some signature attributes as input; these are stored in the signature file (.sig). Signatures in ERDAS IMAGINE can be parametric or non-parametric.

The following attributes are standard for all signatures (parametric and non-parametric):

• name — identifies the signature and is used as the class name in the output thematic raster layer. The default signature name is Class <number>.

• color — the color for the signature and is used as the color for the class in the output thematic raster layer. This color is also used with other signature visualization functions, such as alarms, masking, ellipses, etc.

• value — the output class value for the signature. The output class value does not necessarily need to be the class number of the signature. This value should be a positive integer.

• order — the order to process the signatures for order-dependent processes, such as signature alarms and parallelepiped classifications.

• parallelepiped limits — the limits used in the parallelepiped classification.

Parametric Signature

A parametric signature is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster. A parametric signature includes the following attributes in addition to the standard attributes for signatures (a short computational sketch follows the list):

• the number of bands in the input image (as processed by the training program)

• the minimum and maximum data file value in each band for each sample or cluster (minimum vector and maximum vector)

• the mean data file value in each band for each sample or cluster (mean vector)

• the covariance matrix for each sample or cluster

• the number of pixels in the sample or cluster
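The following minimal sketch shows how the parametric attributes listed above could be computed from the pixels of one training sample or cluster held in a NumPy array. The dictionary keys are illustrative only and do not reflect the .sig file format.

```python
import numpy as np

def parametric_signature(sample):
    """Compute the parametric attributes for one training sample or cluster.

    sample : (num_pixels, num_bands) array of data file values
    """
    return {
        "n_bands":    sample.shape[1],
        "n_pixels":   sample.shape[0],
        "minimum":    sample.min(axis=0),            # minimum vector
        "maximum":    sample.max(axis=0),            # maximum vector
        "mean":       sample.mean(axis=0),           # mean vector
        "covariance": np.cov(sample, rowvar=False),  # covariance matrix
    }
```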

Non-parametric Signature

A non-parametric signature is based on an AOI that the user defines in the feature space image for the .img file being classified. A non-parametric classifier will use a set of non-parametric signatures to assign pixels to a class based on their location, either inside or outside the area in the feature space image.

The format of the .sig file is described in "APPENDIX B: File Formats and Extensions". Information on these statistics can be found in "APPENDIX A: Math Topics".


Evaluating Signatures

Once signatures are created, they can be evaluated, deleted, renamed, and merged with signatures from other files. Merging signatures enables the user to perform complex classifications with signatures that are derived from more than one training method (supervised and/or unsupervised, parametric and/or non-parametric).

Use the Signature Editor to view the contents of each signature, manipulate signatures, and perform your own mathematical tests on the statistics.

Using Signature Data

There are tests to perform that can help determine whether the signature data are a true representation of the pixels to be classified for each class. The user can evaluate signatures that were created either from supervised or unsupervised training. The evaluation methods in ERDAS IMAGINE include:

• Alarm — using his or her own pattern recognition ability, the user views the estimated classified area for a signature (using the parallelepiped decision rule) against a display of the original image.

• Ellipse — view ellipse diagrams and scatterplots of data file values for every pair of bands.

• Contingency matrix — do a quick classification of the pixels in a set of training samples to see what percentage of the sample pixels are actually classified as expected. These percentages are presented in a contingency matrix. This method is for supervised training only, for which polygons of the training samples exist.

• Divergence — measure the divergence (statistical distance) between signatures and determine band subsets that will maximize the classification.

• Statistics and histograms — analyze statistics and histograms of the signatures to make evaluations and comparisons.

NOTE: If the signature is non-parametric (i.e., a feature space signature), you can use only the alarm evaluation method.

After analyzing the signatures, it would be beneficial to merge or delete them, eliminate redundant bands from the data, add new bands of data, or perform any other operations to improve the classification.

Alarm

The alarm evaluation enables the user to compare an estimated classification of one or more signatures against the original data, as it appears in the ERDAS IMAGINE Viewer. According to the parallelepiped decision rule, the pixels that fit the classification criteria are highlighted in the displayed image. The user also has the option to indicate an overlap by having it appear in a different color.

With this test, the user can use his or her own pattern recognition skills, or some ground-truth data, to determine the accuracy of a signature.


Use the Signature Alarm utility in the Signature Editor to perform n-dimensional alarms on the image in the Viewer, using the parallelepiped decision rule. The alarm utility creates a functional layer, and the IMAGINE Viewer allows you to toggle between the image layer and the functional layer.

Ellipse

In this evaluation, ellipses of concentration are calculated with the means and standard deviations stored in the signature file. It is also possible to generate parallelepiped rectangles, means, and labels.

In this evaluation, the mean and the standard deviation of every signature are used to represent the ellipse in 2-dimensional feature space. The ellipse is displayed in a feature space image.

Ellipses are explained and illustrated in "APPENDIX A: Math Topics" under the discussion of Scatterplots.

When the ellipses in the feature space image show extensive overlap, then the spectral characteristics of the pixels represented by the signatures cannot be distinguished in the two bands that are graphed. In the best case, there is no overlap. Some overlap, however, is expected.

Figure 95 shows how ellipses are plotted and how they can overlap. The first graph shows how the ellipses are plotted based on the range of 2 standard deviations from the mean. This range can be altered, changing the ellipse plots. Analyzing the plots with differing numbers of standard deviations is useful for determining the limits of a parallelepiped classification.

Figure 95: Ellipse Evaluation of Signatures

(The left graph, Signature Overlap, plots two signature ellipses that overlap in Band A and Band B data file values; the right graph, Distinct Signatures, plots two well-separated ellipses in Band C and Band D. Each ellipse extends two standard deviations from the signature mean.)


By analyzing the ellipse graphs for all band pairs, the user can determine which signatures and which bands will provide accurate classification results.

Use the Signature Editor to create a feature space image and to view an ellipse(s) of signature data.

Contingency Matrix

NOTE: This evaluation classifies all of the pixels in the selected AOIs and compares the results to the pixels of a training sample.

The pixels of each training sample are not always so homogeneous that every pixel in a sample will actually be classified to its corresponding class. Each sample pixel only weights the statistics that determine the classes. However, if the signature statistics for each sample are distinct from those of the other samples, then a high percentage of each sample's pixels will be classified as expected.

In this evaluation, a quick classification of the sample pixels is performed using the minimum distance, maximum likelihood, or Mahalanobis distance decision rule. Then, a contingency matrix is presented, which contains the number and percentages of pixels that were classified as expected.

Use the Signature Editor to perform the contingency matrix evaluation.

Separability

Signature separability is a statistical measure of distance between two signatures. Separability can be calculated for any combination of bands that will be used in the classification, enabling the user to rule out any bands that are not useful in the results of the classification.

For the distance (Euclidean) evaluation, the spectral distance between the mean vectors of each pair of signatures is computed. If the spectral distance between two samples is not significant for any pair of bands, then they may not be distinct enough to produce a successful classification.

The spectral distance is also the basis of the minimum distance classification (as explained below). Therefore, computing the distances between signatures will help the user predict the results of a minimum distance classification.

Use the Signature Editor to compute signature separability and distance and automatically generate the report.

The formulas used to calculate separability are related to the maximum likelihood decision rule. Therefore, evaluating signature separability helps the user predict the results of a maximum likelihood classification. The maximum likelihood decision rule is explained below.

There are three options for calculating the separability. All of these formulas take into account the covariances of the signatures in the bands being compared, as well as the mean vectors of the signatures.


Refer to "APPENDIX A: Math Topics" for information on the mean vector and covariancematrix.

Divergence

The formula for computing Divergence (D_ij) is as follows:

$$D_{ij} = \frac{1}{2}\,tr\!\left[(C_i - C_j)(C_j^{-1} - C_i^{-1})\right] + \frac{1}{2}\,tr\!\left[(C_i^{-1} + C_j^{-1})(\mu_i - \mu_j)(\mu_i - \mu_j)^T\right]$$

where:

i and j = the two signatures (classes) being compared
C_i = the covariance matrix of signature i
µ_i = the mean vector of signature i
tr = the trace function (matrix algebra)
T = the transposition function

Source: Swain and Davis 1978

Transformed Divergence

The formula for computing Transformed Divergence (TD) is as follows:

$$TD_{ij} = 2000\left[1 - \exp\!\left(\frac{-D_{ij}}{8}\right)\right]$$

where D_ij is the divergence between signatures i and j, computed with the formula above.

Source: Swain and Davis 1978

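The following is a minimal sketch of the two calculations as reconstructed above, assuming each signature is available as a mean vector and covariance matrix (NumPy arrays). The 2000 scale factor matches the 0-2000 range given in the Separability discussion below; this is an illustration, not the ERDAS IMAGINE implementation.

```python
import numpy as np

def divergence(mu_i, cov_i, mu_j, cov_j):
    """Divergence D_ij between two parametric signatures (Swain and Davis 1978)."""
    ci_inv, cj_inv = np.linalg.inv(cov_i), np.linalg.inv(cov_j)
    dmu = (mu_i - mu_j).reshape(-1, 1)
    term1 = 0.5 * np.trace((cov_i - cov_j) @ (cj_inv - ci_inv))
    term2 = 0.5 * np.trace((ci_inv + cj_inv) @ dmu @ dmu.T)
    return term1 + term2

def transformed_divergence(mu_i, cov_i, mu_j, cov_j):
    """Transformed divergence, scaled so that 2000 indicates total separability."""
    d = divergence(mu_i, cov_i, mu_j, cov_j)
    return 2000.0 * (1.0 - np.exp(-d / 8.0))
```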


Jeffries-Matusita Distance

The formula for computing Jeffries-Matusita Distance (JM) is as follows:

$$\alpha = \frac{1}{8}(\mu_i - \mu_j)^T \left(\frac{C_i + C_j}{2}\right)^{-1}(\mu_i - \mu_j) + \frac{1}{2}\ln\!\left(\frac{\left|\,(C_i + C_j)/2\,\right|}{\sqrt{\left|C_i\right| \times \left|C_j\right|}}\right)$$

$$JM_{ij} = \sqrt{2\left(1 - e^{-\alpha}\right)}$$

where:

i and j = the two signatures (classes) being compared
C_i = the covariance matrix of signature i
µ_i = the mean vector of signature i
ln = the natural logarithm function
|C_i| = the determinant of C_i (matrix algebra)

Source: Swain and Davis 1978

Separability

Both transformed divergence and Jeffries-Matusita distance have upper and lower bounds. If the calculated divergence is equal to the appropriate upper bound, then the signatures can be said to be totally separable in the bands being studied. A calculated divergence of zero means that the signatures are inseparable.

• TD is between 0 and 2000.

• JM is between 0 and 1414.

A separability listing is a report of the computed divergence for every class pair and one band combination. The listing contains every divergence value for the bands studied for every possible pair of signatures.

The separability listing also contains the average divergence and the minimum divergence for the band set. These numbers can be compared to other separability listings (for other band combinations), to determine which set of bands is the most useful for classification.

Weight Factors

As with the Bayesian classifier (explained below with maximum likelihood), weight factors may be specified for each signature. These weight factors are based on a priori (already known) probabilities that any given pixel will be assigned to each class. For example, if the user knows that twice as many pixels should be assigned to Class A as to Class B, then Class A should receive a weight factor that is twice that of Class B.

NOTE: The weight factors do not influence the divergence equations (for TD or JM), but they do influence the report of the best average and best minimum separability.



The weight factors for each signature are used to compute a weighted divergence with the following calculation:

$$W_{ij} = \frac{\displaystyle\sum_{i=1}^{c-1}\;\sum_{j=i+1}^{c} f_i\, f_j\, U_{ij}}{\displaystyle\frac{1}{2}\left[\left(\sum_{i=1}^{c} f_i\right)^{2} - \sum_{i=1}^{c} f_i^{2}\right]}$$

where:

i and j = the two signatures (classes) being compared
U_ij = the unweighted divergence between i and j
W_ij = the weighted divergence between i and j
c = the number of signatures (classes)
f_i = the weight factor for signature i

Probability of Error

The Jeffries-Matusita distance is related to the pairwise probability of error, which is the probability that a pixel assigned to class i is actually in class j. Within a range, this probability can be estimated according to the expression below:

$$\frac{1}{16}\left(2 - JM_{ij}^{2}\right)^{2} \;\leq\; P_e \;\leq\; 1 - \frac{1}{2}\left(1 + \frac{1}{2}JM_{ij}^{2}\right)$$

where:

i and j = the signatures (classes) being compared
JM_ij = the Jeffries-Matusita distance between i and j
P_e = the probability that a pixel will be misclassified from i to j

Source: Swain and Davis 1978



Signature Manipulation

In many cases, training must be repeated several times before the desired signatures are produced. Signatures can be gathered from different sources: different training samples, feature space images, and different clustering programs, all using different techniques. After each signature file is evaluated, the user may merge, delete, or create new signatures. The desired signatures can finally be moved to one signature file to be used in the classification.

The following operations upon signatures and signature files are possible with ERDAS IMAGINE:

• View the contents of the signature statistics

• View histograms of the samples or clusters that were used to derive the signatures

• Delete unwanted signatures

• Merge signatures together, so that they form one larger class when classified

• Append signatures from other files. The user can combine signatures that are derived from different training methods for use in one classification.

Use the Signature Editor to view statistics and histogram listings and to delete, merge, append, and rename signatures within a signature file.


Classification Decision Rules

Once a set of reliable signatures has been created and evaluated, the next step is to perform a classification of the data. Each pixel is analyzed independently. The measurement vector for each pixel is compared to each signature, according to a decision rule, or algorithm. Pixels that pass the criteria that are established by the decision rule are then assigned to the class for that signature. ERDAS IMAGINE enables the user to classify the data both parametrically with statistical representation, and non-parametrically as objects in feature space. Figure 96 shows the flow of an image pixel through the classification decision making process in ERDAS IMAGINE (Kloer 1994).

If a non-parametric rule is not set, then the pixel is classified using only the parametric rule. All of the parametric signatures will be tested. If a non-parametric rule is set, the pixel will be tested against all of the signatures with non-parametric definitions. This rule results in the following conditions (a short sketch of this flow follows the list):

• If the non-parametric test results in one unique class, the pixel will be assigned to that class.

• If the non-parametric test results in zero classes (i.e., the pixel lies outside all the non-parametric decision boundaries), then the unclassified rule will be applied. With this rule, the pixel will either be classified by the parametric rule or left unclassified.

• If the pixel falls into more than one class as a result of the non-parametric test, the overlap rule will be applied. With this rule, the pixel will either be classified by the parametric rule, processing order, or left unclassified.
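A compact sketch of this flow (the case in which a non-parametric rule is set) is shown below. The function names nonparametric_classes and parametric_class are placeholders for the tests described in the following sections, not ERDAS IMAGINE routines.

```python
def classify_pixel(pixel, nonparametric_classes, parametric_class,
                   unclassified_option="parametric", overlap_option="parametric"):
    """Sketch of the decision flow in Figure 96 (names are illustrative only).

    nonparametric_classes(pixel) -> list of classes whose feature space object
                                    or parallelepiped contains the pixel
    parametric_class(pixel)      -> class chosen by the parametric rule
    """
    hits = nonparametric_classes(pixel)
    if len(hits) == 1:                      # one unique class
        return hits[0]
    if len(hits) == 0:                      # unclassified options
        return parametric_class(pixel) if unclassified_option == "parametric" else None
    # more than one class: overlap options
    if overlap_option == "parametric":
        return parametric_class(pixel)
    if overlap_option == "by order":
        return hits[0]                      # signatures tested in processing order
    return None                             # left unclassified
```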


Non-parametric Rules

ERDAS IMAGINE provides these decision rules for non-parametric signatures:

• parallelepiped

• feature space

Unclassified Options

ERDAS IMAGINE provides these options if the pixel is not classified by the non-parametric rule:

• parametric rule

• unclassified

Overlap Options

ERDAS IMAGINE provides these options if the pixel falls into more than one feature space object:

• parametric rule

• by order

• unclassified

Parametric Rules

ERDAS IMAGINE provides these commonly-used decision rules for parametric signatures:

• minimum distance

• Mahalanobis distance

• maximum likelihood (with Bayesian variation)


Figure 96: Classification Flow Diagram

(Figure 96 diagrams the flow of a candidate pixel: the non-parametric rule is applied first; depending on the resulting number of classes (0, 1, or more than 1), the unclassified options or overlap options (parametric rule, by order, or unclassified) determine whether the pixel receives a class assignment or is left unclassified.)


Parallelepiped

In the parallelepiped decision rule, the data file values of the candidate pixel are compared to upper and lower limits. These limits can be either:

• the minimum and maximum data file values of each band in the signature,

• the mean of each band, plus and minus a number of standard deviations, or

• any limits that the user specifies, based on his or her knowledge of the data and signatures. This knowledge may come from the signature evaluation techniques discussed above.

These limits can be set using the Parallelepiped Limits utility in the Signature Editor.

There are high and low limits for every signature in every band. When a pixel's data file values are between the limits for every band in a signature, then the pixel is assigned to that signature's class. Figure 97 is a two-dimensional example of a parallelepiped classification.

Figure 97: Parallelepiped Classification Using Plus or Minus Two Standard Deviations as Limits

The large rectangles in Figure 97 are called parallelepipeds. They are the regions within the limits for each signature.

(Figure 97 plots pixels from three classes against Band A and Band B data file values. Each parallelepiped extends two standard deviations above and below the class means in each band; pixels outside every parallelepiped remain unclassified.)
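As a minimal sketch of the test described above, the following assigns a pixel to the first signature whose limits contain it in every band, using the mean plus or minus two standard deviations as the limits. It is illustrative only, not the ERDAS IMAGINE implementation.

```python
import numpy as np

def parallelepiped_class(pixel, means, stds, n_std=2.0):
    """Assign a pixel to the first signature whose limits contain it in every band.

    pixel : (num_bands,) data file values
    means : (num_classes, num_bands) signature mean vectors
    stds  : (num_classes, num_bands) signature standard deviations
    Returns the class number (1-based) or 0 if the pixel is unclassified.
    """
    low, high = means - n_std * stds, means + n_std * stds
    inside = np.all((pixel >= low) & (pixel <= high), axis=1)
    hits = np.flatnonzero(inside)
    return int(hits[0]) + 1 if hits.size else 0   # first hit = processing order
```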


Overlap Region

In cases where a pixel may fall into the overlap region of two or more parallelepipeds, the user must define how the pixel will be classified.

• The pixel can be classified by the order of the signatures. If one of the signatures is first and the other signature is fourth, the pixel will be assigned to the first signature's class. This order can be set in the ERDAS IMAGINE Signature Editor.

• The pixel can be classified by the defined parametric decision rule. The pixel will be tested against the overlapping signatures only. If neither of these signatures is parametric, then the pixel will be left unclassified. If only one of the signatures is parametric, then the pixel will be assigned automatically to that signature's class.

• The pixel can be left unclassified.

Regions Outside of the Boundaries

If the pixel does not fall into one of the parallelepipeds, then the user must define how the pixel will be classified.

• The pixel can be classified by the defined parametric decision rule. The pixel will be tested against all of the parametric signatures. If none of the signatures is parametric, then the pixel will be left unclassified.

• The pixel can be left unclassified.

Use the Supervised Classification utility in the Signature Editor to perform a parallelepiped classification.

Parallelepiped Decision Rule

Advantages:

• Fast and simple, since the data file values are compared to limits that remain constant for each band in each signature.

• Often useful for a first-pass, broad classification, this decision rule quickly narrows down the number of possible classes to which each pixel can be assigned before the more time-consuming calculations are made (e.g., minimum distance, Mahalanobis distance, or maximum likelihood), thus cutting processing time.

• Not dependent on normal distributions.

Disadvantages:

• Since parallelepipeds have "corners," pixels that are actually quite far, spectrally, from the mean of the signature may be classified. An example of this is shown in Figure 98.


Figure 98: Parallelepiped Corners Compared to the Signature Ellipse

Feature Space

The feature space decision rule determines whether or not a candidate pixel lies within the non-parametric signature in the feature space image. When a pixel's data file values are in the feature space signature, then the pixel is assigned to that signature's class. Figure 99 is a two-dimensional example of a feature space classification. The polygons in this figure are AOIs used to define the feature space signatures.

Figure 99: Feature Space Classification

(Figure 98 shows a signature ellipse inside its parallelepiped boundary in Band A and Band B data file values, with a candidate pixel falling inside a corner of the parallelepiped but far from the ellipse. Figure 99 plots pixels from three classes against Band A and Band B data file values, with AOI polygons defining the feature space signatures and pixels outside every polygon left unclassified.)


Overlap Region

In cases where a pixel may fall into the overlap region of two or more AOIs, the user must define how the pixel will be classified.

• The pixel can be classified by the order of the feature space signatures. If one of the signatures is first and the other signature is fourth, the pixel will be assigned to the first signature's class. This order can be set in the ERDAS IMAGINE Signature Editor.

• The pixel can be classified by the defined parametric decision rule. The pixel will be tested against the overlapping signatures only. If neither of these feature space signatures is parametric, then the pixel will be left unclassified. If only one of the signatures is parametric, then the pixel will be assigned automatically to that signature's class.

• The pixel can be left unclassified.

Regions Outside of the AOIs

If the pixel does not fall into one of the AOIs for the feature space signatures, then the user must define how the pixel will be classified.

• The pixel can be classified by the defined parametric decision rule. The pixel will be tested against all of the parametric signatures. If none of the signatures is parametric, then the pixel will be left unclassified.

• The pixel can be left unclassified.

Use the Decision Rules utility in the Signature Editor to perform a feature space classification.

Feature Space Decision Rule

Advantages:

• Often useful for a first-pass, broad classification.

• Provides an accurate way to classify a class with a non-normal distribution (e.g., residential and urban).

• Certain features may be more visually identifiable, which can help discriminate between classes that are spectrally similar and hard to differentiate with parametric information.

• The feature space method is fast.

Disadvantages:

• The feature space decision rule allows overlap and unclassified pixels.

• The feature space image may be difficult to interpret.


Minimum Distance

The minimum distance decision rule (also called spectral distance) calculates the spectral distance between the measurement vector for the candidate pixel and the mean vector for each signature.

Figure 100: Minimum Spectral Distance

In Figure 100, spectral distance is illustrated by the lines from the candidate pixel to the means of the three signatures. The candidate pixel is assigned to the class with the closest mean.

The equation for classifying by spectral distance is based on the equation for Euclidean distance:

$$SD_{xyc} = \sqrt{\sum_{i=1}^{n}\left(\mu_{ci} - X_{xyi}\right)^{2}}$$

where:

n = number of bands (dimensions)
i = a particular band
c = a particular class
X_xyi = data file value of pixel x,y in band i
µ_ci = mean of data file values in band i for the sample for class c
SD_xyc = spectral distance from pixel x,y to the mean of class c

Source: Swain and Davis 1978

When spectral distance is computed for all possible values of c (all possible classes), the class of the candidate pixel is assigned to the class for which SD is the lowest.
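A minimal sketch of this rule, assuming the class mean vectors are held in a NumPy array, might look like the following; it is not the ERDAS IMAGINE implementation.

```python
import numpy as np

def minimum_distance_classify(pixel, class_means):
    """Assign the pixel to the class whose mean vector is spectrally closest.

    pixel       : (num_bands,) measurement vector X_xy
    class_means : (num_classes, num_bands) mean vectors, one per class c
    Returns the 1-based class number and the spectral distance SD to its mean.
    """
    sd = np.sqrt(((class_means - pixel) ** 2).sum(axis=1))   # Euclidean distance
    c = int(sd.argmin())
    return c + 1, float(sd[c])
```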


Minimum Distance Decision Rule

Advantages:

• Since every pixel is spectrally closer to either one sample mean or another, there are no unclassified pixels.

• The fastest decision rule to compute, except for parallelepiped.

Disadvantages:

• Pixels which should be unclassified (that is, they are not spectrally close to the mean of any sample, within limits that are reasonable to the user) will become classified. However, this problem is alleviated by thresholding out the pixels that are farthest from the means of their classes. (See the discussion of Thresholding below.)

• Does not consider class variability. For example, a class like an urban land cover class is made up of pixels with a high variance, which may tend to be farther from the mean of the signature. Using this decision rule, outlying urban pixels may be improperly classified. Inversely, a class with less variance, like water, may tend to overclassify (that is, classify more pixels than are appropriate to the class), because the pixels that belong to the class are usually spectrally closer to their mean than those of other classes to their means.

Mahalanobis Distance

The Mahalanobis distance algorithm assumes that the histograms of the bands have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.

Mahalanobis distance is similar to minimum distance, except that the covariance matrix is used in the equation. Variance and covariance are figured in so that clusters that are highly varied will lead to similarly varied classes, and vice versa. For example, when classifying urban areas, typically a class whose pixels vary widely, correctly classified pixels may be farther from the mean than those of a class for water, which is usually not a highly varied class (Swain and Davis 1978).


The equation for the Mahalanobis distance classifier is as follows:

$$D = (X - M_c)^T \,(Cov_c)^{-1}\,(X - M_c)$$

where:

D = Mahalanobis distance
c = a particular class
X = the measurement vector of the candidate pixel
M_c = the mean vector of the signature of class c
Cov_c = the covariance matrix of the pixels in the signature of class c
Cov_c^-1 = inverse of Cov_c
T = transposition function

The pixel is assigned to the class, c, for which D is the lowest.
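A minimal sketch of the Mahalanobis rule, assuming each signature is available as a mean vector and covariance matrix, is shown below; it is illustrative only, not the ERDAS IMAGINE code.

```python
import numpy as np

def mahalanobis_classify(pixel, means, covariances):
    """Assign the pixel to the class with the smallest Mahalanobis distance D.

    pixel       : (num_bands,) measurement vector X
    means       : list of mean vectors M_c, one per class
    covariances : list of covariance matrices Cov_c, one per class
    """
    distances = []
    for mc, covc in zip(means, covariances):
        d = pixel - mc
        distances.append(float(d @ np.linalg.inv(covc) @ d))  # (X-Mc)^T Covc^-1 (X-Mc)
    c = int(np.argmin(distances))
    return c + 1, distances[c]
```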

Mahalanobis Decision Rule

Advantages:

• Takes the variability of classes into account, unlike minimum distance or parallelepiped.

• May be more useful than minimum distance in cases where statistical criteria (as expressed in the covariance matrix) must be taken into account, but the weighting factors that are available with the maximum likelihood/Bayesian option are not needed.

Disadvantages:

• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature will contain large values.

• Slower to compute than parallelepiped or minimum distance.

• Mahalanobis distance is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.

Maximum Likelihood/Bayesian

The maximum likelihood algorithm assumes that the histograms of the bands of data have normal distributions. If this is not the case, you may have better results with the parallelepiped or minimum distance decision rule, or by performing a first-pass parallelepiped classification.

The maximum likelihood decision rule is based on the probability that a pixel belongs to a particular class. The basic equation assumes that these probabilities are equal for all classes, and that the input bands have normal distributions.

Bayesian Classifier

If the user has a priori knowledge that the probabilities are not equal for all classes, he or she can specify weight factors for particular classes. This variation of the maximum likelihood decision rule is known as the Bayesian decision rule (Hord 1982). Unless the user has a priori knowledge of the probabilities, it is recommended that they not be specified. In this case, these weights default to 1.0 in the equation.


The equation for the maximum likelihood/Bayesian classifier is as follows:

$$D = \ln(a_c) - \left[0.5\,\ln\!\left(\left|Cov_c\right|\right)\right] - \left[0.5\,(X - M_c)^T\,(Cov_c^{-1})\,(X - M_c)\right]$$

where:

D = weighted distance (likelihood)
c = a particular class
X = the measurement vector of the candidate pixel
M_c = the mean vector of the sample of class c
a_c = percent probability that any candidate pixel is a member of class c (defaults to 1.0, or is entered from a priori knowledge)
Cov_c = the covariance matrix of the pixels in the sample of class c
|Cov_c| = determinant of Cov_c (matrix algebra)
Cov_c^-1 = inverse of Cov_c (matrix algebra)
ln = natural logarithm function
T = transposition function (matrix algebra)

The inverse and determinant of a matrix, along with the difference and transposition of vectors, are explained in any textbook of matrix algebra.

The value of D is computed for every class, and the pixel is assigned to the class, c, for which D (the weighted likelihood) is the highest.
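A minimal sketch of the maximum likelihood/Bayesian rule follows, with the a priori weights defaulting to 1.0 as described above; the function and parameter names are illustrative, not part of ERDAS IMAGINE.

```python
import numpy as np

def maximum_likelihood_classify(pixel, means, covariances, priors=None):
    """Assign the pixel to the class with the highest weighted likelihood D.

    pixel       : (num_bands,) measurement vector X
    means       : list of mean vectors M_c
    covariances : list of covariance matrices Cov_c
    priors      : a priori weights a_c (default 1.0 for every class)
    """
    priors = priors if priors is not None else [1.0] * len(means)
    scores = []
    for mc, covc, ac in zip(means, covariances, priors):
        d = pixel - mc
        score = (np.log(ac)
                 - 0.5 * np.log(np.linalg.det(covc))
                 - 0.5 * d @ np.linalg.inv(covc) @ d)
        scores.append(float(score))
    return int(np.argmax(scores)) + 1      # class with the largest D
```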

Maximum Likelihood/Bayesian Decision Rule

Advantages:

• The most accurate of the classifiers in the ERDAS IMAGINE system (if the input samples/clusters have a normal distribution), because it takes the most variables into consideration.

• Takes the variability of classes into account by using the covariance matrix, as does Mahalanobis distance.

Disadvantages:

• An extensive equation that takes a long time to compute. The computation time increases with the number of input bands.

• Maximum likelihood is parametric, meaning that it relies heavily on a normal distribution of the data in each input band.

• Tends to overclassify signatures with relatively large values in the covariance matrix. If there is a large dispersion of the pixels in a cluster or training sample, then the covariance matrix of that signature will contain large values.


Evaluating Classification

After a classification is performed, these methods are available for testing the accuracy of the classification:

• Thresholding — Use a probability image file to screen out misclassified pixels.

• Accuracy Assessment — Compare the classification to ground truth or other data.

Thresholding

Thresholding is the process of identifying the pixels in a classified image that are the most likely to be classified incorrectly. These pixels are put into another class (usually class 0). These pixels are identified statistically, based upon the distance measures that were used in the classification decision rule.

Distance File

When a minimum distance, Mahalanobis distance, or maximum likelihood classification is performed, a distance image file can be produced in addition to the output thematic raster layer. A distance image file is a one-band, 32-bit offset continuous raster layer in which each data file value represents the result of a spectral distance equation, depending upon the decision rule used.

• In a minimum distance classification, each distance value is the Euclidean spectral distance between the measurement vector of the pixel and the mean vector of the pixel's class.

• In a Mahalanobis distance or maximum likelihood classification, the distance value is the Mahalanobis distance between the measurement vector of the pixel and the mean vector of the pixel's class.

The brighter pixels (with the higher distance file values) are spectrally farther from the signature means for the classes to which they were assigned. They are more likely to be misclassified.

The darker pixels are spectrally nearer, and more likely to be classified correctly. If supervised training was used, the darkest pixels are usually the training samples.

Figure 101: Histogram of a Distance Image

(The histogram plots the number of pixels against the distance value.)


Figure 101 shows how the histogram of the distance image usually appears. This distribution is called a chi-square distribution, as opposed to a normal distribution, which is a symmetrical bell curve.

Threshold

The pixels that are the most likely to be misclassified have the higher distance file values at the tail of this histogram. At some point that the user defines, either mathematically or visually, the "tail" of this histogram is cut off. The cutoff point is the threshold.

To determine the threshold:

• interactively change the threshold with the mouse, when a distance histogram is displayed while using the threshold function. This option enables the user to select a chi-square value by selecting the cut-off value in the distance histogram, or

• input a chi-square parameter or distance measurement, so that the threshold can be calculated statistically.

In both cases, thresholding has the effect of cutting the tail off of the histogram of the distance image file, representing the pixels with the highest distance values.
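As a sketch of the statistical option, the chi-square value for a given confidence level and number of degrees of freedom can be obtained from a chi-square function such as the one in SciPy (an assumption here, not part of ERDAS IMAGINE); pixels whose distance value meets or exceeds the threshold are the candidates for class 0.

```python
from scipy.stats import chi2

def threshold_distance_image(distance, n_bands, c_percent=0.05):
    """Flag pixels whose distance value is at or above the chi-square threshold T.

    distance  : distance image array (e.g., Mahalanobis distances)
    n_bands   : number of bands used in the classification = degrees of freedom
    c_percent : fraction of pixels believed to be misclassified (confidence level C%)
    """
    T = chi2.ppf(1.0 - c_percent, df=n_bands)  # distance value exceeded by C% of pixels
    return distance >= T                       # True = move to class 0
```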


Figure 102: Interactive Thresholding Tips

Figure 102 shows some example distance histograms. With each example is an explanation of what the curve might mean, and how to threshold it.

• Smooth chi-square shape: try to find the "breakpoint" where the curve becomes more horizontal, and cut off the tail at that point.

• Minor mode(s) (peaks) in the curve probably indicate that the class picked up other features that were not intended to be in the class.

• Not a good class: the signature for this class probably represented a polymodal (multi-peaked) distribution.

• Peak of the curve is shifted from 0: indicates that the signature mean is off-center from the pixels it represents.


Chi-square Statistics

If the minimum distance classifier was used, then the threshold is simply a certain spectral distance. However, if Mahalanobis or maximum likelihood were used, then chi-square statistics are used to compare probabilities (Swain and Davis 1978).

When statistics are used to calculate the threshold, the threshold is more clearly defined as follows:

T is the distance value at which C% of the pixels in a class have a distance value greater than or equal to T.

where:

T = the threshold for a class
C% = the percentage of pixels that are believed to be misclassified, known as the confidence level

T is related to the distance values by means of chi-square statistics. The value X² (chi-squared) is used in the equation. X² is a function of:

• the number of bands of data used, known in chi-square statistics as the number of degrees of freedom

• the confidence level

When classifying an image in ERDAS IMAGINE, the classified image automatically has the degrees of freedom (i.e., number of bands) used for the classification. The chi-square table is built into the threshold application.

NOTE: In this application of chi-square statistics, the value of X² is an approximation. Chi-square statistics are generally applied to independent variables (having no covariance), which is not usually true of image data.

A further discussion of chi-square statistics can be found in a statistics text.

Use the Classification Threshold utility to perform the thresholding.


Accuracy Assessment

Accuracy assessment is a general term for comparing the classification to geographical data that are assumed to be true, in order to determine the accuracy of the classification process. Usually, the assumed-true data are derived from ground truth data.

It is usually not practical to ground truth or otherwise test every pixel of a classified image. Therefore, a set of reference pixels is usually used. Reference pixels are points on the classified image for which actual data are (or will be) known. The reference pixels are randomly selected (Congalton 1991).

NOTE: You can use the ERDAS IMAGINE Accuracy Assessment utility to perform an accuracy assessment for any thematic layer. This layer did not have to be classified by IMAGINE (e.g., you can run an accuracy assessment on a thematic layer that was classified in ERDAS Version 7.5 and imported into IMAGINE).

Random Reference Pixels

When reference pixels are selected by the analyst, it is often tempting to select the same pixels for testing the classification as were used in the training samples. This biases the test, since the training samples are the basis of the classification. By allowing the reference pixels to be selected at random, the possibility of bias is lessened or eliminated (Congalton 1991).

The number of reference pixels is an important factor in determining the accuracy of the classification. It has been shown that more than 250 reference pixels are needed to estimate the mean accuracy of a class to within plus or minus five percent (Congalton 1991).

ERDAS IMAGINE uses a square window to select the reference pixels. The size of the window can be defined by the user. Three different types of distribution are offered for selecting the random pixels:

• random — no rules will be used

• stratified random — the number of points will be stratified to the distribution of thematic layer classes

• equalized random — each class will have an equal number of random points

Use the Accuracy Assessment utility to generate random reference points.

Accuracy Assessment CellArray

An Accuracy Assessment CellArray is created to compare the classified image with reference data. This CellArray is simply a list of class values for the pixels in the classified .img file and the class values for the corresponding reference pixels. The class values for the reference pixels are input by the user. The CellArray data reside in an .img file.

Use the Accuracy Assessment CellArray to enter reference pixels for the class values.


Error Reports

From the Accuracy Assessment CellArray, two kinds of reports can be derived.

• The error matrix simply compares the reference points to the classified points in a c × c matrix, where c is the number of classes (including class 0).

• The accuracy report calculates statistics of the percentages of accuracy, based upon the results of the error matrix.

When interpreting the reports, it is important to observe the percentage of correctly classified pixels and to determine the nature of errors of the producer and the user.

Use the Accuracy Assessment utility to generate the error matrix and accuracy reports.

Kappa Coefficient

The Kappa coefficient expresses the proportionate reduction in error generated by a classification process, compared with the error of a completely random classification. For example, a value of .82 would imply that the classification process was avoiding 82 percent of the errors that a completely random classification would generate (Congalton 1991).

For more information on the Kappa coefficient, see a statistics manual.
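A minimal sketch of the error matrix, overall accuracy, and Kappa calculation from lists of reference and classified class values is shown below; it is illustrative only and assumes the class values are integers from 0 up to the number of classes minus one.

```python
import numpy as np

def error_matrix_and_kappa(reference, classified, n_classes):
    """Build the error (confusion) matrix and compute overall accuracy and Kappa.

    reference, classified : arrays of class values for the reference pixels
    n_classes             : number of classes, including class 0
    """
    matrix = np.zeros((n_classes, n_classes), dtype=np.int64)
    for ref, cls in zip(reference, classified):
        matrix[ref, cls] += 1                               # rows = reference, columns = classified

    total = matrix.sum()
    observed = np.trace(matrix) / total                     # overall accuracy
    expected = (matrix.sum(axis=0) * matrix.sum(axis=1)).sum() / total ** 2
    kappa = (observed - expected) / (1.0 - expected)        # chance-corrected accuracy
    return matrix, observed, kappa
```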

Output File

When classifying an .img file, the output file is an .img file with a thematic raster layer. This file will automatically contain the following data:

• class values

• class names

• color table

• statistics

• histogram

The .img file will also contain any signature attributes that were selected in the ERDAS IMAGINE Supervised Classification utility.

The class names, values, and colors can be set with the Signature Editor or the Raster Attribute Editor.


CHAPTER 7
Photogrammetric Concepts

Introduction

This chapter is an introduction to photogrammetric concepts, many of which are equally applicable to traditional and digital photogrammetry. However, the focus here is digital photogrammetry, so topics exclusive to traditional methods are omitted. In addition, some of the presented concepts, such as use of satellite data or automatic image correlation, exist only in the digital realm.

There are numerous sources of image data for both traditional and digital photogrammetry. This document focuses on three main sources: aerial photographs (metric frame cameras), SPOT satellite imagery, and Landsat satellite data. Many of the concepts presented for aerial photographs also pertain to most imagery which has a single perspective center. Likewise, the SPOT concepts have much in common with other sensors that also use a linear Charge-Coupled Device (CCD) in a pushbroom fashion. Finally, a significantly different geometric model and approach is discussed for the Landsat satellite, an across-track scanning device.


Definitions

Photogrammetry is the "art, science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena." (ASP, 1980)

Photogrammetry was invented in 1851 by Laussedat, and has continued to develop over the last 140 years. Over time, the development of photogrammetry has passed through the phases of Plane Table Photogrammetry, Analog Photogrammetry, Analytical Photogrammetry, and has now entered the phase of Digital Photogrammetry.

The traditional, and largest, application of photogrammetry is to extract topographic information (e.g., topographic maps) from aerial images. However, photogrammetric techniques have also been applied to process satellite images and close range images in order to acquire topographic or non-topographic information of photographed objects.

Prior to the invention of the airplane, photographs taken on the ground were used to extract the relationships between objects using geometric principles. This was during the phase of Plane Table Photogrammetry.

In Analog Photogrammetry, starting with stereomeasurement in 1901, optical or mechanical instruments were used to reconstruct three-dimensional geometry from two overlapping photographs. The main product during this phase was topographic maps.

In Analytical Photogrammetry, the computer replaces some expensive optical and mechanical components by substituting analog measurement and calculation with mathematical computation. The resulting devices were analog/digital hybrids. Analytical aerotriangulation, analytical plotters, and orthophoto projectors were the main developments during this phase. Outputs of analytical photogrammetry can be topographic maps, but can also be digital products, such as digital maps and digital elevation models (DEMs).

Digital Photogrammetry is photogrammetry as applied to digital images that are stored and processed on a computer. Digital images can be scanned from photographs or can be directly captured by digital cameras. Many photogrammetric tasks can be highly automated in digital photogrammetry (e.g., automatic DEM extraction and digital orthophoto generation). Digital Photogrammetry is sometimes called Softcopy Photogrammetry. The output products are in digital form, such as digital maps, DEMs, and digital orthophotos saved on computer storage media. Therefore, they can be easily stored, managed, and applied by the user. With the development of digital photogrammetry, photogrammetric techniques are more closely integrated into Remote Sensing and GIS.


Coordinate Systems

There are a variety of coordinate systems used in photogrammetry. This chapter will reference these systems as described below.

Pixel Coordinates

The file coordinates of a digital image are defined in a pixel coordinate system. A pixel coordinate system is usually a coordinate system with its origin in the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing downward, and the unit in pixels, as shown by axis c and r in Figure 103. These file coordinates (c,r) can also be thought of as the pixel column and row number. This coordinate system is referenced as pixel coordinates (c,r) in this chapter.

Image Coordinates

An image coordinate system is usually defined as a two-dimensional coordinate system occurring on the image plane with its origin at the image center, as illustrated by axis x and y in Figure 103. Image coordinates are used to describe positions on the film plane. Image coordinate units are usually millimeters or microns. This coordinate system is referenced as image coordinates (x,y) in this chapter.

An image space coordinate system is identical to image coordinates, except that it adds a third axis (z). Image space coordinates are used to describe positions inside the camera and usually use units in millimeters or microns. This coordinate system is referenced as image space coordinates (x,y,z) in this chapter.

Figure 103: Pixel Coordinates and Image Coordinates



Ground Coordinates

A ground coordinate system is usually defined as a three-dimensional coordinate system which utilizes a known map projection. Ground coordinates (X,Y,Z) are usually expressed in feet or meters. The Z value is elevation above mean sea level for a given vertical datum. This coordinate system is referenced as ground coordinates (X,Y,Z) in this chapter.

Geocentric and Topocentric Coordinates

Most photogrammetric applications account for earth curvature in their calculations. This is done by adding a correction value or by computing geometry in a coordinate system which includes curvature. Two such systems are geocentric and topocentric coordinates.

A geocentric coordinate system has its origin at the center of the earth ellipsoid. The ZG-axis equals the rotational axis of the earth, and the XG-axis passes through the Greenwich meridian. The YG-axis is perpendicular to both the ZG-axis and XG-axis, so as to create a three-dimensional coordinate system that follows the right hand rule.

A topocentric coordinate system has its origin at the center of the image projected on the earth ellipsoid. The three perpendicular coordinate axes are defined on a tangential plane at this center point. The plane is called the reference plane or the local datum. The x-axis is oriented eastward, the y-axis northward, and the z-axis is vertical to the reference plane (up).

For simplicity of presentation, the remainder of this chapter will not explicitly referencegeocentric or topocentric coordinates. Basic photogrammetric principles can bepresented without adding this additional level of complexity.

Work Flow

The work flow of photogrammetry can be summarized in three steps: image acquisition, photogrammetric processing, and product output.


Figure 104: Sample Photogrammetric Work Flow

The remainder of this chapter is presented in the same sequence as the items in Figure 104. For each section, the aerial model is presented first, followed by the SPOT model, when appropriate. A Landsat satellite model is described at the end of the orthorectification section.

[Figure 104 summarizes the work flow: Image Acquisition (aerial camera film or digital imagery from satellites), Image Preprocessing (scan aerial film or import digital imagery), Photogrammetric Processing (triangulation, stereopair creation, generate elevation models, orthorectification, map feature collection), and Product Output (orthoimages and orthomaps, topographic database and topographic maps).]


Image Acquisition

Aerial Camera Film

Exposure Station

Each point in the flight path at which the camera exposes the film is called an exposure station.

Figure 105: Exposure Stations along a Flight Path

Image Scale

The image scale expresses the average ratio between a distance in the image and the same distance on the ground. It is computed as focal length divided by the flying height above the mean ground elevation. For example, with a flying height of 1,000 m and a focal length of 15 cm, the image scale (SI) would be 1:6667.

NOTE: The flying height above the ground is used, not the altitude above sea level.
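
The scale computation is simple enough to script. The following is a minimal sketch (the function name and unit handling are illustrative, not from the Field Guide); it assumes the focal length is given in millimeters and the flying height above the mean ground elevation in meters:

```python
def image_scale(focal_length_mm, flying_height_m):
    """Return the image scale denominator, e.g. 6667 for a 1:6667 scale."""
    focal_length_m = focal_length_mm / 1000.0
    return flying_height_m / focal_length_m

# Example from the text: 150 mm focal length, 1,000 m flying height -> 1:6667
print(round(image_scale(150, 1000)))   # 6667
```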

Strip of Photographs

A strip of photographs consists of images captured along a flight line, normally with an overlap of 60%. All photos in the strip are assumed to be taken at approximately the same flying height and with a constant distance between exposure stations. Camera tilt relative to the vertical is assumed to be minimal.



Block of Photographs

The photographs from the flight path can be combined to form a block. A block of photographs consists of a number of parallel strips, normally with a sidelap of 20-30%. Photogrammetric triangulation is performed on the whole block of photographs to transform images and ground points into a homologous coordinate system.

A regular block of photos is a rectangular block in which the number of photos in each strip is the same. The figure below shows a block of 5 X 2 photographs.

Figure 106: A Regular (Rectangular) Block of Aerial Photos



Digital Imagery from Satellites

Digital image data from satellites are distributed on a variety of media, such as tapes and CD-ROMs. The internal format varies, depending on the specific sensor and data vendor. This section addresses photogrammetric operations on SPOT satellite images, though most of the concepts are universal for any pushbroom sensor.

Correction Levels for SPOT Imagery

SPOT scenes are delivered at different levels of correction. For example, SPOT Image Corporation provides two correction levels that are of interest:

• Level 1A images correspond to raw camera data to which only radiometric corrections have been applied.

• Level 1B images have been corrected for the earth's rotation and viewing angle, producing roughly the same ground pixel size throughout the scene. Pixels are resampled from the level 1A camera data by cubic polynomials. This data is internally transformed to level 1A before the triangulation calculations are applied.

Refer to "CHAPTER 3: Raster and Vector Data Sources" for more information on satellite remote sensing and the characteristics of satellite data that can be read into ERDAS.

A SPOT scene covers an area of approximately 60 X 60 km, depending on the inclination of the sensors. The resolution of one pixel corresponds to about 10 X 10 m on the ground for panchromatic images. (Off-nadir scenes can cover up to 80 X 60 km.)

Image Preprocessing

Images must be read into the computer before processing can begin. Usually the images are not digitally enhanced prior to photogrammetric processing. Most digital photogrammetric software packages have basic image enhancement tools. The common practice is to perform more sophisticated enhancements on the end products (e.g., orthoimages or orthomosaics).

Scanning Aerial Film

Aerial film must be scanned (digitized) to create a digital image. Once scanned, the digital image can be imported into a digital photogrammetric system.

Scanning Resolution

The storage requirement for digital image data can be huge. Therefore, obtaining the optimal pixel size (or scanning density) is often a trade-off between capturing maximum image information and the digital storage burden. For example, a standard panchromatic image is 9 by 9 inches (23 x 23 cm). Scanning at 25 microns (roughly 1000 pixels per inch) results in a file with 9000 rows and 9000 columns. Assuming 8 bits per pixel and no image compression, this file occupies about 81 megabytes. Photogrammetric projects often have hundreds or even thousands of photographs.
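
The trade-off is easy to quantify. The sketch below is illustrative only (the function name is not from the Field Guide) and reproduces the 81-megabyte figure for a 9 inch film scanned at roughly 1000 pixels per inch:

```python
def scan_size_megabytes(film_size_inches, pixels_per_inch, bits_per_pixel=8):
    """Approximate uncompressed file size for a square scanned photograph."""
    pixels_per_side = film_size_inches * pixels_per_inch
    total_pixels = pixels_per_side ** 2
    return total_pixels * bits_per_pixel / 8 / 1_000_000   # bytes -> megabytes

# Example from the text: 9 inch film at about 1000 pixels per inch (25 microns)
print(scan_size_megabytes(9, 1000))   # 81.0
```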


Photogrammetric Scanners

Photogrammetric quality scanners are special devices capable of high image quality and excellent positional accuracy. Use of this type of scanner results in geometric accuracies similar to traditional analog and analytical photogrammetric instruments. These scanners are necessary for digital photogrammetric applications that have high accuracy requirements. These units usually scan only film (either positive or negative), because film is superior to paper, both in terms of image detail and geometry. These units usually have an RMSE (Root Mean Square Error) positional accuracy of 4 microns or less, and are capable of scanning at a maximum resolution of 5 to 10 microns (5 microns is equivalent to approximately 5,000 pixels per inch). The needed pixel resolution varies depending on the application. Aerial triangulation and feature collection applications often scan in the 10 to 15 micron range. Orthophoto applications often use 15 to 30 micron pixels. Color film is less sharp than panchromatic film; therefore, color ortho applications often use 20 to 40 micron pixels.

Desktop Scanners

Desktop scanners are general purpose devices. They lack the image detail and geometric accuracy of photogrammetric quality units, but they are much less expensive. When using a desktop scanner, the user should make sure that the active area is at least 9 X 9 inches, enabling the entire photo frame to be captured. Desktop scanners are appropriate for less rigorous uses, such as digital photogrammetry in support of GIS or remote sensing applications. Calibrating these units improves geometric accuracy, but the results are still inferior to photogrammetric units. The image correlation techniques that are necessary for automatic elevation extraction are often sensitive to scan quality. Therefore, elevation extraction can become problematic if the scan quality is only marginal.


Photogrammetric Processing

Photogrammetric processing consists of triangulation, stereopair creation, elevation model generation, orthorectification, and map feature collection.

Triangulation

Triangulation establishes the geometry of the camera or sensor relative to objects on the earth's surface. It is the first and most critical step of photogrammetric processing. Figure 107 illustrates the triangulation work flow. First, the interior orientation establishes the geometry inside the camera or sensor. For aerial photographs, fiducial marks are measured on the digital imagery and camera calibration information is entered. The interior orientation information for SPOT is already known (they are fixed values). The final step is to calculate the exterior orientation, which establishes the location and attitude (rotation angles) of the camera or sensor during the time of image acquisition. Ground control points aid this process.

Once triangulation is completed, the next step is usually stereopair generation. However, the user could proceed directly to generating elevation models or orthoimages.

Figure 107: Triangulation Work Flow

[Figure 107 shows the triangulation work flow: starting from uncorrected digital imagery, calculate the interior orientation (using camera or sensor information), calculate the exterior orientation (using ground control points), and output the triangulation results.]


Aerial Triangulation

The following discussion assumes that a standard metric aerial camera is being used, in which the fiducial marks are readily visible on the scanned images and the camera calibration information is available from an external source.

Aerial triangulation determines the exterior orientation parameters of images and the three-dimensional coordinates of unknown points, using ground control points or other kinds of known information. It is an economical technique for measuring a large number of object points with very high accuracy.

Aerial triangulation is normally carried out for a block of images containing a minimum of two images. The strip, independent model, and bundle methods are the common approaches for implementing triangulation, of which bundle block adjustment is the most mathematically rigorous.

In bundle block adjustment, there are usually image coordinate observations, ground coordinate point observations, and possibly observations from GPS and satellite orbit information. The observation equation can be represented as follows:

V = AX - L

where

V = the matrix containing the image coordinate residuals
A = the matrix containing the partial derivatives with respect to the unknown parameters (exterior orientation parameters and XYZ ground coordinates)
X = the matrix containing the corrections to the unknown parameters
L = the matrix containing the observations (i.e., image coordinates and control point coordinates)

The equations can be solved using the iterative least squares adjustment:

X = (A^T P A)^-1 A^T P L

where

X = the matrix containing the corrections to the unknown parameters
A = the matrix containing the partial derivatives with respect to the unknown parameters
P = the matrix containing the weights of the observations
L = the matrix containing the observations

Before the triangulation can be computed, the user should acquire images that overlap in the block, measure the tie points on the images, and digitize some control points.
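
As an illustration of how that least squares solution is evaluated numerically, here is a minimal Python/NumPy sketch. It assumes the design matrix A, the weight matrix P, and the observation vector L have already been formed from the collinearity equations; forming them, and iterating with updated approximations, is the substantial part of a real bundle block adjustment. The function name is illustrative, not from the Field Guide:

```python
import numpy as np

def least_squares_correction(A, P, L):
    """One weighted least squares step: X = (A^T P A)^-1 A^T P L."""
    N = A.T @ P @ A              # normal equation matrix
    t = A.T @ P @ L              # right-hand side
    X = np.linalg.solve(N, t)    # corrections to the unknown parameters
    V = A @ X - L                # residuals of the observations
    return X, V
```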


Interior Orientation of an Aerial Photo

The interior orientation of an image defines the geometry within the camera. During triangulation, the interior orientation must be available in order to accurately define the external geometry of the camera. Concepts pertaining to interior orientation are described in this section.

To record an image, light rays reflected by an object on the ground are projected through a lens. Ideally, all light rays are straight and intersect at the perspective center. The light rays are then projected onto the film.

The plane of the film is called the focal plane. A virtual focal plane exists between the perspective center and the terrain. The virtual focal plane is the same distance (focal length) from the perspective center as is the plane of the film or scanner. The light rays intersect both planes in the same manner. Virtual focal planes are often more convenient to diagram, and therefore are often used in place of focal planes in photogrammetric diagrams.

NOTE: In the discussion following, the virtual focal plane is called the "image plane," and is used to describe photogrammetric concepts.

Figure 108: Focal and Image Plane

For purposes of photogrammetric triangulation, a local coordinate system is defined in each image. The location of each point in the image can be expressed in terms of this image coordinate system.

The perspective center is projected onto a point in the image plane that lies directly beneath it. This point is called the principal point. The orthogonal distance from the perspective center to the image plane is the focal length of the lens.



Figure 109: Image Coordinates, Fiducials, and Principal Point

Fiducials are four or eight reference markers fixed on the frame of an aerial metric camera and visible in each exposure, as illustrated by points F1, F2, F3, and F4 in Figure 109. The image coordinates of the fiducials are provided in a camera calibration report. Fiducials are used to compute the transformation from file coordinates to image coordinates.

The file coordinates of a digital image are defined in a pixel coordinate system. For example, in digital photogrammetry, it is usually a coordinate system with its origin in the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing downward, and the unit in pixels, as shown by A-XF and A-YF in Figure 109. These file coordinates (XF, YF) can also be thought of as the pixel column and row number.

Once the file coordinates of fiducials are measured, the transformation from file coordinates to image coordinates can be carried out. Usually the six-parameter affine transformation is used here:

x = a0 + a1·XF + a2·YF
y = b0 + b1·XF + b2·YF

where

a0, a1, a2 = affine parameters
b0, b1, b2 = affine parameters
XF, YF = file coordinates
x, y = image coordinates
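
In practice the six parameters are estimated from the measured fiducials by least squares, since four or eight fiducials give more equations than unknowns. Below is a minimal NumPy sketch (the function name and argument layout are illustrative, not from the Field Guide); it assumes the fiducial file coordinates are in pixels and the calibrated image coordinates are in millimeters:

```python
import numpy as np

def fit_affine(file_coords, image_coords):
    """Solve x = a0 + a1*XF + a2*YF and y = b0 + b1*XF + b2*YF by least squares.

    file_coords:  (n, 2) array of measured fiducial file coordinates (XF, YF)
    image_coords: (n, 2) array of calibrated fiducial image coordinates (x, y)
    Returns the parameter triples (a0, a1, a2) and (b0, b1, b2); needs n >= 3.
    """
    XF, YF = file_coords[:, 0], file_coords[:, 1]
    design = np.column_stack([np.ones_like(XF), XF, YF])
    a, *_ = np.linalg.lstsq(design, image_coords[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(design, image_coords[:, 1], rcond=None)
    return a, b
```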


Once this transformation is in place, image coordinates are directly obtained and the interior orientation is complete.

Exterior Orientation

The exterior orientation determines the relationship of an image to the ground coordinate system. Each aerial camera image has six exterior orientation parameters: the three coordinates of the perspective center (XO, YO, ZO) in the ground coordinate system, and the three rotation angles (ω, ϕ, κ), as shown in Figure 110.

Figure 110: Exterior Orientation of an Aerial Photo


Where

PP = principal point
O = perspective center with ground coordinates (XO, YO, ZO)
O-x, O-y, O-z = image space coordinate system with its origin at the perspective center and the x,y-axes parallel to the image coordinate system axes
XG, YG, ZG = ground coordinates
O-X', O-Y', O-Z' = a local coordinate system which is parallel to the ground coordinate system, but has its origin at the perspective center; used for expressing the rotation angles (ω, ϕ, κ)
ω = omega, the rotation angle around the X'-axis
ϕ = phi, the rotation angle around the Y'-axis
κ = kappa, the rotation angle around the Z'-axis
PI = point in the image plane
PG = point on the ground

Collinearity Equations

The relationship among image coordinates, ground coordinates, and orientation parameters is described by the following collinearity equations:

x = -f · [r11(X - XO) + r21(Y - YO) + r31(Z - ZO)] / [r13(X - XO) + r23(Y - YO) + r33(Z - ZO)]

y = -f · [r12(X - XO) + r22(Y - YO) + r32(Z - ZO)] / [r13(X - XO) + r23(Y - YO) + r33(Z - ZO)]

where

x, y = image coordinates
X, Y, Z = ground coordinates
f = focal length
XO, YO, ZO = ground coordinates of the perspective center
r11 ... r33 = coefficients of a 3 X 3 rotation matrix, defined by the angles ω, ϕ, κ, that transforms the image system to the ground system

The collinearity equations are the most basic principle of photogrammetry.
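
For readers who want to experiment, the following is a minimal Python/NumPy sketch that evaluates the collinearity equations for one ground point. It assumes one common convention for building the rotation matrix from ω, ϕ, κ (the exact convention varies between systems, so treat the matrix construction as an assumption), and the function names are illustrative rather than from the Field Guide:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """3 x 3 rotation matrix from omega, phi, kappa (radians): R = Rx(omega) Ry(phi) Rz(kappa)."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return rx @ ry @ rz

def ground_to_image(X, Y, Z, XO, YO, ZO, omega, phi, kappa, f):
    """Project a ground point (X, Y, Z) to image coordinates (x, y) via the collinearity equations."""
    r = rotation_matrix(omega, phi, kappa)      # r[i-1, j-1] corresponds to rij in the text
    dX, dY, dZ = X - XO, Y - YO, Z - ZO
    denom = r[0, 2] * dX + r[1, 2] * dY + r[2, 2] * dZ
    x = -f * (r[0, 0] * dX + r[1, 0] * dY + r[2, 0] * dZ) / denom
    y = -f * (r[0, 1] * dX + r[1, 1] * dY + r[2, 1] * dZ) / denom
    return x, y
```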


Control for Aerial Triangulation

The distribution and density of ground control is a major factor in the accuracy of photogrammetric triangulation. Control points for aerial triangulation are created by identifying points with known ground coordinates in the aerial photos.

A control point is a point with known coordinates in the ground coordinate system, expressed in the units of the specified map projection. Control points are used to establish a reference frame for the photogrammetric triangulation of a block of images.

These ground coordinates are typically three-dimensional. They consist of X,Y coordinates in a map projection system and Z coordinates, which are elevation values expressed in units above datum that are consistent with the map coordinate system.

The user selects these points based on their relation to clearly defined and visible ground features. Ground control points serve as stable (known) values, so their accuracy determines the accuracy of the triangulation.

Ground coordinates of control points can be acquired by digitizing existing maps or from geodetic measurements, such as the global positioning system (GPS), a surveying instrument, or Electronic Distance Measuring devices (EDMs). For optimal accuracy, the coordinates should be accurate to within the distance on the ground that is represented by approximately 0.1 to 0.5 pixels in the image.

In triangulation, there can be several types of control points. A full control point specifies map X,Y coordinates along with a Z value (the elevation of the point). Horizontal control only specifies the X,Y, while vertical control only specifies the Z.

Optimizing control distribution is part art and part science, and goes beyond the scope of this document. However, the example presented in Figure 111 illustrates a specific case.

General rules for control distribution within a block are:

• Whenever possible, locate control points that lie on multiple images

• Control is needed around the outside of a block

• Control is needed at certain distances within the block


Figure 111: Control Points in Aerial Photographs (block of 8 X 4 photos)

For optimal results, control points should be measured by geodetic techniques with an accuracy that corresponds to about 0.1 to 0.5 pixels in the image. Digitization of existing maps often does not yield this degree of accuracy.

For example, if a photograph was scanned with a resolution of 1000 dpi (9000 X 9000 pixels), the pixel size in the image is 25 microns (0.025 mm). For an image scale of 1:40,000, each pixel covers approximately 1.0 X 1.0 meters on the ground. Applying the above rule, the ground control points should be accurate to about 0.1 to 0.5 meters.
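
This rule of thumb is straightforward to compute for other scan resolutions and image scales. The sketch below is illustrative only (the function name is not from the Field Guide) and assumes the pixel size is given in microns and the image scale as its denominator:

```python
def gcp_accuracy_m(pixel_size_microns, scale_denominator):
    """Ground pixel size and the 0.1-0.5 pixel control point accuracy rule, in meters."""
    ground_pixel_m = pixel_size_microns * 1e-6 * scale_denominator
    return ground_pixel_m, (0.1 * ground_pixel_m, 0.5 * ground_pixel_m)

# Example from the text: 25 micron scan of a 1:40,000 photo
pixel_m, (low, high) = gcp_accuracy_m(25, 40_000)
print(pixel_m, low, high)   # 1.0 0.1 0.5
```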

A greater number of known ground points should be available than will actually be used in the triangulation. These additional points become check points, and can be used to independently verify the degree of accuracy of the triangulation. This verification, called check point analysis, is discussed on page 287.

[Figure 111 legend: control points are located along all edges of the block and after the 3rd photo of each strip.]


Ground control points need not be available in every image, but can be supplemented by other points which are identified as tie points. A tie point is a point whose ground coordinates are not known, but which can be recognized visually in the overlap or sidelap area between two or more images. Ground coordinates for tie points are computed during the photogrammetric triangulation.

Tie points should be visually well-defined in all images. Ideally, they should show good contrast in two directions, like the corner of a building or a road intersection. Tie points should also be well distributed over the area of the block. Typically, nine tie points in each image are adequate for photogrammetric triangulation of aerial photographs. If a control point already exists in the candidate location of a tie point, the tie point can be omitted.

Figure 112: Ideal Point Distribution Over a Photograph for Aerial Triangulation


In a block of aerial photographs with 60% overlap and 25-30% sidelap, nine points are sufficient to tie together the block as well as individual strips.

Figure 113: Tie Points in a Block of Photos

In summary:

• A control point must be visually identifiable in one or more images and have known ground coordinates. If, in later processing, the ground coordinates for a control point are found to have low reliability, the control point can be changed to a tie point.

• If a control point is in the overlap area, it helps to control both images.

• If the ground coordinates of a control point are not used in the triangulation, they can serve as a check point for independent analysis of the accuracy of the triangulation.

• A tie point is a point that is visually identifiable in at least two images, for which ground coordinates are unknown.

[Figure 113: nine tie points in each image tie the block together.]


SPOT Triangulation

The SPOT satellite carries two HRV (High Resolution Visible) sensors, each of which is a pushbroom scanner that takes a sequence of line images while the satellite circles the earth.

The focal length of the camera optic is 1,084 mm, which is very large relative to the length of the camera (78 mm). The field of view is 4.12°.

The satellite orbit is circular, north-south and south-north, about 830 km above the earth, and sun-synchronous. In a sun-synchronous orbit, the orbital plane keeps a nearly fixed orientation relative to the sun, so the satellite passes over a given area at about the same local solar time.

For each line scanned, there is a unique perspective center and a unique set of rotation angles. The location of the perspective center relative to the line scanner is constant for each line (interior orientation and focal length). Since the motion of the satellite is smooth and practically linear over the length of a scene, the perspective centers of all scan lines of a scene are assumed to lie along a smooth line.

Figure 114: Perspective Centers of SPOT Scan Lines

The satellite exposure station is defined as the perspective center in ground coordinates for the center scan line.

The image captured by the satellite is called a scene. A scene (SPOT Pan 1A) is composed of 6,000 lines. Each of these lines consists of 6,000 pixels. Each line is exposed for 1.5 milliseconds, so it takes 9 seconds to scan the entire scene. (A scene from SPOT XS 1A is composed of only 3,000 lines and 3,000 columns and has 20 meter pixels, while Pan has 10 meter pixels.)

NOTE: This section will address only the 10 meter Pan scenario.


A pixel in the SPOT image records the light detected by one of the 6,000 light-sensitive elements in the camera. Each pixel is defined by file coordinates (column and row numbers).

The physical dimension of a single, light-sensitive element is 13 X 13 microns. This is the pixel size in image coordinates.

The center of the scene is the center pixel of the center scan line. It is the origin of the image coordinate system.

Figure 115: Image Coordinates in a Satellite Scene

Where

A = origin of file coordinates
A-XF, A-YF = file coordinate axes
C = origin of image coordinates (center of scene)
C-x, C-y = image coordinate axes


SPOT Interior Orientation

Figure 116 shows the interior orientation of a satellite scene. The transformation between file coordinates and image coordinates is constant.

Figure 116: Interior Orientation of a SPOT Scene

For each scan line, a separate bundle of light rays is defined, where

Pk = image point
xk = x value of image coordinates for scan line k
f = focal length of the camera
Ok = perspective center for scan line k, aligned along the orbit
PPk = principal point for scan line k
lk = light rays for scan line k, bundled at perspective center Ok


SPOT Exterior Orientation

SPOT satellite geometry is stable and the sensor parameters (e.g., focal length) are well-known. However, the triangulation of SPOT scenes is somewhat unstable because of the narrow, almost parallel bundles of light rays.

Ephemeris data for the orbit are available in the header file of SPOT scenes. They give the satellite's position in three-dimensional, geocentric coordinates at 60-second increments. The velocity vector and some rotational velocities relating to the attitude of the camera are given, as well as the exact time of the center scan line of the scene.

The header of the data file of a SPOT scene contains ephemeris data, which provide information about the recording of the data and the satellite orbit.

Ephemeris data that are used in satellite triangulation are:

• the position of the satellite in geocentric coordinates (with the origin at the center of the earth) to the nearest second,

• the velocity vector, which is the direction of the satellite’s travel,

• attitude changes of the camera, and

• the exact time of exposure of the center scan line of the scene.

The geocentric coordinates included with the ephemeris data are converted to a local ground system for use in triangulation. The center of a satellite scene is interpolated from the header data.

Light rays in a bundle defined by the SPOT sensor are almost parallel, lessening the importance of the satellite's position. Instead, the inclination angles of the cameras become the critical data.

The scanner can produce a nadir view. Nadir is the point directly below the camera. SPOT has off-nadir viewing capability. Off-nadir refers to any point that is not directly beneath the satellite, but is off to an angle (i.e., east or west of the nadir).

A stereo-scene is achieved when two images of the same area are acquired on different days from different orbits, one taken east of the other. For this to occur, there must be significant differences in the inclination angles.

Inclination is the angle between a vertical on the ground at the center of the scene and a light ray from the exposure station. This angle defines the degree of off-nadir viewing when the scene was recorded. The cameras can be tilted in increments of 0.6° to a maximum of 27° to the east (negative inclination) or west (positive inclination).


Figure 117: Inclination of a Satellite Stereo-Scene (View from North to South)

Where

C = center of the scene
I- = eastward inclination
I+ = westward inclination
O1, O2 = exposure stations (perspective centers of imagery)

The orientation angle of a satellite scene is the angle between a perpendicular to the center scan line and the North direction. The spatial motion of the satellite is described by the velocity vector. The real motion of the satellite above the ground is further distorted by earth rotation.

The velocity vector of a satellite is the satellite's velocity if measured as a vector through a point on the spheroid. It provides a technique to represent the satellite's speed as if the imaged area were flat instead of being a curved surface.


Figure 118: Velocity Vector and Orientation Angle of a Single Scene

Where

O = orientation angle
C = center of the scene
V = velocity vector

Satellite triangulation provides a model for calculating the spatial relationship between the SPOT sensor and the ground coordinate system for each line of data. This relationship is expressed as the exterior orientation, which consists of:

• the perspective center of the center scan line,

• the change of perspective centers along the orbit,

• the three rotations of the center scan line, and

• the changes of angles along the orbit.

In addition to fitting the bundle of light rays to the known points, satellite triangulation also accounts for the motion of the satellite by determining the relationship of the perspective centers and rotation angles of the scan lines. It is assumed that the satellite travels in a smooth motion as a scene is being scanned. Therefore, once the exterior orientation of the center scan line is determined, the exterior orientation of any other scan line is calculated based on the distance of that scan line from the center and the changes of the perspective center location and rotation angles.


Bundle adjustment for triangulating a satellite scene is similar to the bundle adjustment used for aerial photos. Least squares adjustment is used to derive a set of parameters that comes closest to fitting the control points to their known ground coordinates and to intersecting the tie points.

The resulting parameters of satellite bundle adjustment are:

• the ground coordinates of the perspective center of the center scan line,

• the rotation angles for the center scan line,

• the coefficients, from which the perspective center and rotation angles of all other scan lines can be calculated, and

• the ground coordinates of all tie points.

Collinearity Equations

Modified collinearity equations are applied to analyze the exterior orientation of satellite scenes. Each scan line has a unique perspective center and individual rotation angles. When the satellite moves from one scan line to the next, these parameters change. Due to the smooth motion of the satellite in orbit, the changes are small and can be modeled by low order polynomial functions.

Control for SPOT Triangulation

Both control and tie points can be used for satellite triangulation of a stereo scene. For triangulating a single scene, only control points are used; this is then called space resection. A minimum of six control points is necessary for good triangulation results.

The best locations for control points in the scene are shown below.

Figure 119: Ideal Point Distribution Over a Satellite Scene for Triangulation


In some cases, there are no reliable control points available in the area for which a DEM or orthophoto is to be created. In this instance, a local coordinate system may be defined. The coordinate center is the center of the scene, expressed in the longitude and latitude taken from the header. When a local coordinate system is defined, the satellite positions, velocity vectors, and rotation angles from the image header are used to define a datum.

The ground coordinates of tie points will be computed in such a case. The resulting DEM would display relative elevations, and the coordinate system would approximately correspond to the real system of this area. However, the coordinate system is limited by the accuracy of the ephemeris information.

This might be especially useful for remote islands, in which case points along the shoreline can be very easily detected as tie points.

Triangulation Accuracy Measures

The triangulation solution usually provides the standard deviation, the covariance matrix of the unknowns, the residuals of the observations, and a check point analysis to aid in determining the accuracy of the triangulation.

Standard Deviation (σ0)

Each time the triangulation program completes one iteration, the σ0 value (square root of the variance of unit weight) is calculated. It gives the mean error of the image coordinate measurements used in the adjustment. This value decreases as the bundle fits better to the control and tie points.

NOTE: The σ0 value usually should not be larger than 0.25 to 0.75 pixels.

Precision and Residual Values

After the adjustment, information describing the precision of the computed parameters, as well as the residuals of the observations, can be computed. The variance matrix and covariance matrix give the theoretical precision of the unknown parameters, depending on the geometry of the photo block. Residuals describe how much the measured image coordinates differ from their computed locations after the adjustment.

Check Point Analysis

An independent measure is needed to describe the accuracy of the computed ground coordinates of tie points.

A check point analysis compares photogrammetrically computed ground coordinates of tie points with their known coordinates, measured independently by geodetic methods or by digitizing existing maps. The result of this analysis is an RMS error, which is split into a horizontal (X,Y) and a vertical (Z) component.

NOTE: In the case of aerial mapping, the vertical accuracy is usually lower than the horizontal accuracy by a factor of 1.5. For satellite stereo-scenes, the vertical accuracy depends on the dimension of the inclination angles (the separation of the two scenes).


Robust Estimation

Using robust estimation or data-snooping techniques, some gross errors in the observations can be detected and flagged.

Editing Points and Repeating the Adjustment

To increase the accuracy of the triangulation, you may refine the input image coordinates to eliminate gross errors in the observations. The residual of each image coordinate measurement can be a good reference to help decide whether to remove the observation from further computations. However, because of the different redundancy of the different measurements, the residuals are not equivalent to their observation errors. A better way is to consider a residual together with its redundancy. After editing the input image coordinates, the triangulation can be repeated.

Stereo Imagery

To perform photogrammetric stereo operations, two views of the same ground area captured from different locations are required. A stereopair is a set of two images that overlap, providing two views of the terrain in the overlap area. The relief displacement in a stereopair is required to extract three-dimensional information about the terrain.

Though digital photogrammetric principles can be applied to any type of imagery, this document focuses on two main sources: aerial photographs (metric frame cameras) and SPOT satellite imagery. Many of the concepts presented for aerial photographs also pertain to most imagery that has a single perspective center. Likewise, the SPOT concepts have much in common with other sensors that also use a linear Charge-Coupled Device (CCD) in a pushbroom fashion.

Aerial Stereopairs

For decades, aerial photographs have been used to create topographic maps in analog and analytical stereoplotters. Aerial photographs are taken by specialized cameras, mounted so that the lens is close to vertical, pointing out of a hole in the bottom of an airplane. Photos are taken in sequence at regular intervals. Neighboring photos along the flight line usually overlap by 60% or more. A stereopair can be constructed from any two overlapping photographs that share a common area on the ground, most commonly along the flight line.


Figure 120: Aerial Stereopair (60% Overlap)

SPOT Stereopairs

Satellite stereopairs are created by two scenes of the same terrain that are recorded from different viewpoints. Because of its off-nadir viewing capability, it is easy to acquire stereopairs from the SPOT satellite. The SPOT stereopairs are recorded from different orbits on different days.

Figure 121: SPOT Stereopair (80% Overlap)

Epipolar Stereopairs

Epipolar stereopairs are created from triangulated, overlapping imagery using the process in Figure 122. Digital photogrammetry creates a new set of digital images by resampling the overlap region into a stereo orientation. This orientation, called epipolar geometry, is characterized by relief displacement occurring in only one dimension (along the flight line). A feature unique to digital photogrammetry is that there is no need to create a relative stereo model before proceeding to absolute map coordinates.


Figure 122: Epipolar Stereopair Creation Work Flow

[Figure 122: the uncorrected digital imagery and the triangulation results are used to generate the epipolar stereopair.]


Generate Elevation Models

Elevation models are generated from overlapping imagery (Figure 123). There are two methods in digital photogrammetry. Method 1 uses the original images and the triangulation results. Method 2 uses only the epipolar stereopairs (which are assumed to include geometric information derived from the triangulation results).

Figure 123: Generate Elevation Models Work Flow (Method 1)

Figure 124: Generate Elevation Models Work Flow (Method 2)

Traditional Methods

The traditional method of deriving elevations was to visualize the stereopair in three dimensions using an analog or analytical stereo plotter. The user would then place points and breaklines at critical terrain locations. An alternative method was to set the pointer to a fixed elevation and then proceed to trace contour lines.

Digital Methods

Both of the traditional methods described above can also be used in digital photogrammetry utilizing specialized stereo viewing hardware. However, a powerful new method is introduced with the advent of all-digital systems: image correlation. The general idea is to use pattern matching algorithms to locate the same ground features on two overlapping photographs. The triangulation information is then used to calculate ground (X,Y,Z) values for each correlated feature.

[Figures 123 and 124: in Method 1, elevation information is extracted from the uncorrected digital imagery and the triangulation results; in Method 2, it is extracted from the epipolar stereopair. Both methods produce a generated DTM.]


Elevation Model Definitions

There are a variety of terms used to describe digital elevation models, and their usage in the industry is not always consistent. In this section the following meanings are assigned to specific terms, and this terminology is used consistently.

DTMs

A digital terrain model (DTM) is a discrete expression of the terrain surface in a data array, consisting of a group of planimetric coordinates (X,Y) and the elevations of the ground points and breaklines. A DTM can be in regular grid form, or it can be represented with irregular points. Consider DTM as being a general term for elevation models, with DEMs and TINs (defined below) as specific representations.

A DTM can be extracted from stereo imagery based on the automatic matching of points in the overlap areas of a stereo model. The stereo model can be a satellite stereo scene or a pair of digitized aerial photographs.

The resulting DTM can be used as input to geoprocessing software. In particular, it can be utilized to produce an orthophoto or used in an appropriate 3-D viewing package (e.g., IMAGINE Virtual GIS).

DEMs

A digital elevation model (DEM) is a specific representation of a DTM in which the elevation points consist of a regular grid. Often, DEMs are stored as raster files in which each grid cell value contains an elevation value.

TINs

A triangulated irregular network (TIN) is a specific representation of a DTM in which elevation points can occur at irregular intervals. In addition to elevation points, breaklines are often included in TINs. A breakline is an elevation polyline, in which each vertex has its own X, Y, Z value.

DEM Interpolation

The direct results from most image matching techniques are irregular and discrete object surface points. In order to generate a DEM, the irregular set of object points needs to be interpolated. For each grid point, the elevation is computed by a surface interpolation method.

There are many algorithms for DEM interpolation. Some of them have been introduced in "CHAPTER 1: Raster Data." Other methods, including Least Squares Collocation and Finite Elements, are also used for DEM interpolation.

Often, to describe the terrain surface more accurately, breaklines should be added and used in the DEM interpolation. TIN based interpolation methods can deal with breaklines more efficiently.
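
As a simple illustration of grid interpolation (not one of the specific algorithms named above), the sketch below fills each DEM grid cell by inverse distance weighting of the matched object points. It is a minimal, assumption-laden example: the function name is illustrative, breaklines are ignored, and a real implementation would use a spatial index rather than scanning every point for every cell:

```python
import numpy as np

def idw_dem(points, xs, ys, power=2.0):
    """Interpolate a DEM grid from irregular (X, Y, Z) points by inverse distance weighting.

    points: (n, 3) array of matched object points
    xs, ys: 1-D arrays of grid cell center coordinates
    Returns a (len(ys), len(xs)) elevation grid.
    """
    dem = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            d2 = (points[:, 0] - x) ** 2 + (points[:, 1] - y) ** 2
            if d2.min() == 0:                      # grid cell falls exactly on a point
                dem[i, j] = points[d2.argmin(), 2]
                continue
            w = 1.0 / d2 ** (power / 2.0)          # weights fall off with distance
            dem[i, j] = (w * points[:, 2]).sum() / w.sum()
    return dem
```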


Image Matching

Image matching refers to the automatic acquisition of corresponding image points in the overlapping area of two images.

For more information on image matching, see "Image Matching Techniques" on page 294.

Image Pyramid

Because of the large amounts of image data, an image pyramid is usually adopted in image matching techniques to reduce the computation time and to increase the matching reliability. The pyramid is a data structure consisting of the same image represented several times, at a decreasing spatial resolution each time. Each level of the pyramid contains the image at a particular resolution.

The matching process is performed at each level of resolution. The search is first performed at the lowest resolution level and subsequently at each higher level of resolution. Figure 125 shows a four-level image pyramid.

Figure 125: Image Pyramid for Matching at Coarse to Full Resolution

Epipolar Image Pair

The epipolar image pair is a stereopair without y-parallax. It can be generated from the original stereopair if the orientation parameters are known. When image matching algorithms are applied to the epipolar stereopair, the search can be constrained to the epipolar line to significantly reduce search times and false matching. However, some of the image detail may be lost during the epipolar resampling process.

[Figure 125: a four-level image pyramid. Level 1 is the full-resolution image (512 x 512 pixels, 1:1), level 2 is 256 x 256 pixels (resolution of 1:2), level 3 is 128 x 128 pixels (1:4), and level 4 is 64 x 64 pixels (1:8). Matching begins on level 4 and finishes on level 1.]
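
A minimal sketch of building such a pyramid is shown below. It is illustrative only: it halves the resolution at each level by averaging 2 x 2 pixel blocks, which is one of several possible reduction schemes, and the function name is not from the Field Guide:

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Return a list of images: level 1 is full resolution, each further level is half the size."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2   # trim to even dimensions
        reduced = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(reduced)
    return pyramid

# Matching would start on pyramid[-1] (coarsest) and finish on pyramid[0] (full resolution).
```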


Image Matching Techniques

The image matching methods can be divided into three categories:

• area based matching

• feature based matching

• relation based matching

Area Based Matching

Area based matching can also be called signal based matching. This method determines the correspondence between two image areas according to the similarity of their gray level values. The cross correlation and least squares correlation techniques are well-known methods for area based matching.

Correlation Windows

Area based matching uses correlation windows. These windows consist of a local neighborhood of pixels. One example of correlation windows is square neighborhoods (e.g., 3 X 3, 5 X 5, 7 X 7 pixels). In practice, the windows vary in shape and dimension, based on the matching technique. Area correlation uses the characteristics of these windows to match ground feature locations in one image to ground features on the other.

A reference window is the source window on the first image, which remains at a constant location. Its dimensions are usually square (e.g., 3 X 3, 5 X 5, etc.). Search windows are candidate windows on the second image that are evaluated relative to the reference window. During correlation, many different search windows are examined until a location is found that best matches the reference window.

Correlation Calculations

Two correlation calculations are described below: cross correlation and least squares correlation. Most area based matching calculations, including these methods, normalize the correlation windows. Therefore, it is not necessary to balance the contrast or brightness prior to running correlation. Cross correlation is more robust in that it requires a less accurate a priori position than least squares. However, its precision is limited to 1.0 pixels. Least squares correlation can achieve precision levels of 0.1 pixels, but requires an a priori position that is accurate to about 2 pixels. In practice, cross correlation is often followed by least squares.


Cross Correlation

Cross correlation computes the correlation coefficient of the gray values between the template window and the search window, according to the following equation:

ρ = Σ [g1(c1,r1) - ḡ1] · [g2(c2,r2) - ḡ2] / √( Σ [g1(c1,r1) - ḡ1]² · Σ [g2(c2,r2) - ḡ2]² )

with

ḡ1 = (1/n) Σ g1(c1,r1)        ḡ2 = (1/n) Σ g2(c2,r2)

(all sums are taken over the pixel indices i, j of the correlation window)

where

ρ = the correlation coefficient
g(c,r) = the gray value of the pixel (c,r)
c1, r1 = the pixel coordinates on the left image
c2, r2 = the pixel coordinates on the right image
n = the total number of pixels in the window
i, j = pixel index into the correlation window

When using area based cross correlation, it is necessary to have a good initial position for the two correlation windows. Also, if the contrast in the windows is very poor, the correlation will fail.
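
The following is a minimal NumPy sketch of this calculation (the function name is illustrative; it assumes the reference and search windows have already been extracted and have the same size):

```python
import numpy as np

def cross_correlation(reference, search):
    """Correlation coefficient of the gray values between two same-sized windows."""
    g1 = reference.astype(float) - reference.mean()
    g2 = search.astype(float) - search.mean()
    denom = np.sqrt((g1 ** 2).sum() * (g2 ** 2).sum())
    if denom == 0.0:          # flat, contrast-free window: correlation fails
        return 0.0
    return (g1 * g2).sum() / denom

# During matching, many candidate search windows are evaluated and the
# position with the highest coefficient is taken as the match.
```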

Least Squares Correlation

Least squares correlation uses least squares estimation to derive parameters that best fit a search window to a reference window. This technique accounts for both gray scale and geometric differences, making it especially useful when ground features on one image look somewhat different on the other image (differences which occur when the surface terrain is quite steep or when the viewing angles are quite different).

Least squares correlation is iterative. The parameters calculated during the initial pass are used in the calculation of the second pass, and so on, until an optimum solution has been determined. Least squares matching can result in high positional accuracy (about 0.1 pixels). However, it is sensitive to initial approximations. The initial coordinates for the search window prior to correlation must be accurate to about 2 pixels or better.

When least squares correlation fits a search window to the reference window, both radiometric (pixel gray values) and geometric (location, size, and shape of the search window) transformations are calculated.



For example, suppose the change in gray values between two correlation windows can be represented as a linear relationship. Also, assume that the change in the window's geometry can be represented by an affine transformation.

NOTE: The following formulas do not follow the coordinate system nomenclature established elsewhere in this chapter. The pixel coordinate values are presented as (x,y) instead of (c,r).

g2(x2, y2) = h0 + h1 · g1(x1, y1)

x2 = a0 + a1·x1 + a2·y1
y2 = b0 + b1·x1 + b2·y1

where

x1, y1 = the pixel coordinate in the reference window
x2, y2 = the pixel coordinate in the search window
g1(x1,y1) = the gray value of pixel (x1,y1)
g2(x2,y2) = the gray value of pixel (x2,y2)
h0, h1 = linear gray value transformation parameters
a0, a1, a2 = affine geometric transformation parameters
b0, b1, b2 = affine geometric transformation parameters

Based on this assumption, the error equation for each pixel can be derived, as shown in the following equation:

v = (a1 + a2·x1 + a3·y1)·gx + (b1 + b2·x1 + b3·y1)·gy - h1 - h2·g1(x1, y1) + ∆g

with ∆g = g2(x2, y2) - g1(x1, y1)

where gx and gy are the gradients of g2(x2, y2).


Feature Based Matching

Feature based matching determines the correspondence between two image features. Most feature based techniques match extracted point features (this is called feature point matching), as opposed to other features, such as lines or complex objects. Poor contrast areas can be avoided with feature based matching.

In order to implement feature based matching, the image features must initially be extracted. There are several well-known operators for feature point extraction. Examples include:

• Moravec Operator

• Dreschler Operator

• Förstner Operator

After the features are extracted, the attributes of the features are compared between the two images. The feature pair with the attributes that are the best fit is recognized as a match.

Relation Based Matching

Relation based matching is also called structure based matching. This kind of matching technique uses not only the image features, but also the relations among the features. With relation based matching, corresponding image structures can be recognized automatically, without any a priori information. However, the process is time-consuming, since it deals with varying types of information. Relation based matching can also be applied for the automatic recognition of control points.


Orthorectification

Orthorectification takes a raw digital image and applies an elevation model (DTM) and the triangulation results to create an orthoimage (digital orthophoto). The DTM can be sourced from an externally derived product, or generated from the raw digital images earlier in the photogrammetric process (see Figures 126 and 127).

Figure 126: Orthorectification Work Flow (Method 1)

Figure 127: Orthorectification Work Flow (Method 2)

An image or photograph with an orthographic projection is one for which every point looks as if an observer were looking straight down at it, along a line of sight that is orthogonal (perpendicular) to the earth.

[Figures 126 and 127: uncorrected digital imagery is orthorectified using the triangulation results together with either a generated DTM (Method 1) or an external DTM (Method 2), producing the generated orthoimage.]


Figure 128 illustrates the relationship of a remotely-sensed image to the orthographic projection. The digital orthophoto is a representation of the surface projected onto the plane of zero elevation, which is called the datum, or reference plane.

Figure 128: Orthographic Projection

The resulting image is known as a digital orthophoto or orthoimage. An aerial photo or satellite scene transformed by the orthographic projection yields a map that is free of most significant geometric distortions.

Geometric Distortions

When a remotely-sensed image or an aerial photograph is recorded, there is inherent geometric distortion caused by terrain and by the angle of the sensor or camera to the ground. In addition, there are distortions caused by earth curvature, atmospheric refraction, the camera or sensor itself (e.g., radial lens distortion), and the mechanics of acquisition (e.g., for SPOT, earth rotation and change in orbital position during acquisition). In the following material, only the most significant of these distortions are presented. These are terrain, sensor position, and rotation angles, as well as earth curvature for small-scale images.


Aerial and SPOT Orthorectification

Creating Digital Orthophotos

The following are necessary to create a digital orthophoto:

• a satellite image or scanned aerial photograph,

• a digital terrain model (DTM) of the area covered by the image, and

• the orientation parameters of the sensor or camera.

In the overlap regions of orthoimage mosaics, the digital orthophoto can be used, which minimizes problems with contrast, cloud cover, occlusions, and reflections from water and snow.

Relief displacement is corrected for by taking each pixel of a DTM and finding the equivalent position in the satellite or aerial image. A brightness value is determined for this location based on resampling of the surrounding pixels. The brightness value, elevation, and orientation are used to calculate the equivalent location in the orthoimage file.

Figure 129: Digital Orthophoto - Finding Gray Values

Where

P = ground point
P1 = image point
O = perspective center (origin)
X, Z = ground coordinates (in the DTM file)
f = focal length
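
To make the per-pixel procedure concrete, here is a minimal sketch of an orthorectification loop for a frame camera image. It is illustrative only: it reuses the hypothetical ground_to_image helper sketched in the Triangulation section, uses nearest neighbor resampling for brevity, and assumes the DTM grid and the orthoimage grid coincide:

```python
import numpy as np

def orthorectify(image, dtm, x_coords, y_coords, exterior, f, pixel_size_mm, principal_point):
    """Fill an orthoimage grid by projecting each DTM cell into the raw image.

    image: 2-D array of gray values; dtm: elevation grid aligned with x_coords, y_coords
    exterior: (XO, YO, ZO, omega, phi, kappa); f and pixel_size_mm in millimeters
    principal_point: (column, row) of the principal point in the raw image
    """
    XO, YO, ZO, omega, phi, kappa = exterior
    ortho = np.zeros_like(dtm, dtype=image.dtype)
    for i, Y in enumerate(y_coords):
        for j, X in enumerate(x_coords):
            Z = dtm[i, j]
            # ground_to_image is the collinearity sketch from the Triangulation section
            x_mm, y_mm = ground_to_image(X, Y, Z, XO, YO, ZO, omega, phi, kappa, f)
            # image coordinates (mm) -> file coordinates (pixels), nearest neighbor resampling
            c = int(round(principal_point[0] + x_mm / pixel_size_mm))
            r = int(round(principal_point[1] - y_mm / pixel_size_mm))
            if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
                ortho[i, j] = image[r, c]
    return ortho
```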


The resulting orthoimages have similar basic characteristics to images created by other means of rectification, such as polynomial warping or rubber sheeting. On any rectified image, a map ground coordinate can be quickly calculated for a pixel position. The orthorectification process almost always explicitly models the ground terrain and sensor attitude (position and rotation angles), which makes it much more accurate for off-nadir imagery, larger image scales, and mountainous regions. Also, orthorectification often requires fewer control points than other methods.

Resampling methods used are nearest neighbor, bilinear interpolation, and cubic convolution.

See "CHAPTER 8: Rectification" for a complete explanation of rectification and resamplingmethods.

Generally, when the cell sizes of orthoimage pixels are selected, they should be similar to or larger than the cell sizes of the original image. For example, if the image was scanned at 9K X 9K, one pixel would represent 0.025 mm on the image. Assuming that the image scale (SI) of this photo is 1:40,000, then the cell size on the ground is about 1 m. For the orthoimage, it would be appropriate to choose a pixel spacing of 1 m or larger. Choosing a smaller pixel size would oversample the original image.

For SPOT Pan images, a cell size of 10 x 10 meters is appropriate. Any further enlargement from the original scene to the orthophoto would not improve the image detail.

Landsat Orthorectification

Landsat TM or Landsat MSS sensor systems have a complex geometry which includes factors such as a rotating mirror inside the sensor, changes in orbital position during acquisition, and earth rotation. The resulting "zero level" imagery requires sophisticated rectification that is beyond the capabilities of many end users. For this reason, almost all Landsat data formats have already been preprocessed to minimize these distortions. Applying simple polynomial rectification techniques to these formats usually fulfills most georeferencing needs when the terrain is relatively flat. However, imagery of mountainous regions needs to account for relief displacement for effective georeferencing. A solution to this problem is discussed below.

This section illustrates just one example of how to correct the relief distortion by using the polynomial formulation.

Vertical imagery taken by a line scanner, such as Landsat TM or Landsat MSS, can be used with an existing DTM to yield an orthoimage. No information about the sensor or the orbit of the satellite is needed. Instead, a transformation is calculated using information about the ground region and image capture. The correction takes into account:

• elevation

• local earth curvature

• distance from nadir

• flying height above datum


The image coordinates are adjusted for the above factors, and a least squares regression method similar to the one used for rectification is used to get a polynomial transformation between image and ground coordinates.

Vertical imagery is assumed to be captured by Landsat satellites.

Approximating the Nadir Line

The nadir line is a mathematically derived line based on the nadir. The nadir is the point directly beneath the satellite. For vertically viewed imagery, the nadir point for a scan line is assumed to be the center of the line; the nadir points of all scan lines form the nadir line.

The edges of a Landsat image can be determined by a search based on a gray level threshold between the image and background fill values. Sample points are determined along the edges of the image. Each edge line is then obtained by a least squares regression. The nadir line is found by averaging the left and right edge lines.


For simplicity, each line (the four edges and the nadir line) can be approximated by a straight line without losing generality.

NOTE: The following formulas do not follow the coordinate system nomenclature established elsewhere in this chapter. The pixel coordinate values are presented as (x,y) instead of (c,r).

left edge: x = a0 + a1y

right edge: x = b0 + b1y

nadir line: x = c0 + c1y

where

x, y = image coordinates,
a0, b0, c0 = intercepts of the straight lines, and
a1, b1, c1 = slopes of the straight lines.

and

c0 = 0.5 * (a0 + b0)

c1 = 0.5 * (a1 + b1)

Determining a Point’s Distance from Nadir

The distance (d) of a point from the nadir is calculated based on its position along the scan line, as follows:

d = [√(1 + g1²) / (1 - c1·g1)] · (x - c0 - c1·y)        (1)

where g1 is the slope of the scan line (y = g0 + g1·x), which can be obtained from the top and bottom edges with a method similar to the one described for the left and right edges.


Displacement

Displacement is the degree of geometric distortion for a point that is not on the nadir line. A point on the nadir line represents an area in the image with zero distortion. The displacement of a point is based on its relationship to the nadir point in the scan line in which the point is located.

Figure 130: Image Displacement

where

R = radius of local earth curvature at the nadir point
H = flying height of the satellite above the datum at the nadir point
∆d = displacement
d = distance of the image point from the nadir point
Z = elevation of the ground point
α = angle between the nadir and the image point before adjustment for terrain displacement
β = angle between the nadir and the image point by vertical view

[Figure 130 shows the exposure station, the image plane, the earth ellipsoid, the datum, and the earth’s center, with the quantities defined above]


Solving for Displacement (∆d)

The displacement of an image point is expressed in the following equations:

(2)

(R + Z) sin β / [(R + H) - (R + Z) cos β] = R sin α / (R + H - R cos α)

(3)

∆d / d = 1 - tan β / tan α

Considering α and β are very tiny values, the following approximations can be used with sufficient accuracy:

cos α ≈ 1

cos β ≈ 1

Then, an explicit approximate equation can be derived from equations (1), (2), and (3):

(4)

∆d = [√(1 + g1²) / (1 - c1g1)] (Z / H) [(R + H) / (R + Z)] (x - c0 - c1y)

Solving for Transformations (F1,F2)

Before performing the polynomial transformation, the displacement is applied. Thus, the polynomial equations become:

x - [1 / (1 - c1g1)] (Z / H) [(R + H) / (R + Z)] (x - c0 - c1y) = F1(X,Y)

y - [g1 / (1 - c1g1)] (Z / H) [(R + H) / (R + Z)] (x - c0 - c1y) = F2(X,Y)

where

X,Y,Z = 3-D ground coordinates
x,y = image coordinates
F1, F2 = polynomial expressions

and where

F1 = A0 + A1X + A2Y + A3X² + A4XY + A5Y² + ...

F2 = B0 + B1X + B2Y + B3X² + B4XY + B5Y² + ...

and where

Ai, Bi = polynomial transformation coefficients


Solving for Image Coordinates

Once all the polynomial coefficients are derived using least squares regression, the following pair of equations can easily be solved for image coordinates (x,y) during orthocorrection:

(1.0 - p)x + c1py = F1(X,Y) - c0p

g1px + (1.0 - c1g1p)y = F2(X,Y) - c0g1p

where

p = [1 / (1 - c1g1)] (Z / H) [(R + H) / (R + Z)]
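A minimal Python sketch of this solve is shown below, following the pair of equations above. It assumes 1st-order polynomials for F1 and F2 and uses purely hypothetical values for the coefficients, the nadir/scan-line parameters (c0, c1, g1), the earth radius R, and the flying height H:

import numpy as np

def solve_image_coords(X, Y, Z, F1_coeffs, F2_coeffs, c0, c1, g1, R, H):
    # p combines elevation, local earth curvature, and flying height, as defined above.
    p = (1.0 / (1.0 - c1 * g1)) * (Z / H) * ((R + H) / (R + Z))

    # F1 and F2 are evaluated as 1st-order polynomials here for brevity: F = A0 + A1*X + A2*Y.
    F1 = F1_coeffs[0] + F1_coeffs[1] * X + F1_coeffs[2] * Y
    F2 = F2_coeffs[0] + F2_coeffs[1] * X + F2_coeffs[2] * Y

    # Linear system from the pair of equations above:
    #   (1 - p) x + c1 p y        = F1 - c0 p
    #   g1 p x  + (1 - c1 g1 p) y = F2 - c0 g1 p
    A = np.array([[1.0 - p, c1 * p],
                  [g1 * p,  1.0 - c1 * g1 * p]])
    b = np.array([F1 - c0 * p, F2 - c0 * g1 * p])
    return np.linalg.solve(A, b)             # (x, y) image coordinates

x, y = solve_image_coords(X=450000.0, Y=3750000.0, Z=350.0,
                          F1_coeffs=(-120.0, 0.002, 0.0001),
                          F2_coeffs=(80.0, -0.0001, 0.002),
                          c0=3000.0, c1=0.001, g1=0.02,
                          R=6370000.0, H=705000.0)
print(f"image coordinates: x = {x:.2f}, y = {y:.2f}")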


Map Feature Collection

Feature collection is the process of identifying, delineating, and labeling various types of natural and man-made phenomena from remotely-sensed images. The features are represented by attribute points, lines, and polygons. General categories of features include elevation models, hydrology, infrastructure, and land cover. There can be many different elements within a general category. For instance, infrastructure can be broken down into roads, utilities, and buildings. To achieve high levels of positional accuracy, photogrammetric processing is applied to the imagery prior to collecting features (see Figures 131 and 132).

Figure 131: Feature Collection Work Flow (Method 1)

Figure 132: Feature Collection Work Flow (Method 2)

Stereoscopic Collection Method 1 (Figure 131), which uses a stereopair as the image backdrop, is the most common approach. Viewing the stereopair in three dimensions provides greater image content and the ability to obtain three-dimensional feature ground coordinates (X,Y,Z).

Monoscopic Collection Method 2 (Figure 132), which uses an orthoimage as the image backdrop, works well for non-urban areas and/or smaller image scales. The features are collected from orthoimages while viewing them in mono. Therefore, only X and Y ground coordinates can be obtained. Monoscopic collection from orthoimages has no special hardware requirements, making orthoimages an ideal image source for many applications.

[Figure 131 work flow: Stereopair → Collect Features Stereoscopically → Collected Map Features. Figure 132 work flow: Generated Orthoimage → Collect Features Monoscopically → Collected Map Features.]


Product Output

Orthoimages Orthoimages are the end product of orthorectification. Once created, these digital images can be enhanced, merged with other data sources, and mosaicked with adjacent orthoimages. The resulting digital file makes an ideal image backdrop for many applications, including feature collection, visualization, and input into GIS/Remote sensing systems. Orthoimages have very good positional accuracy, making them an excellent primary data source for all types of mapping.

Orthomaps Orthomaps use orthoimages, or orthoimage mosaics, to produce an imagemap product. This product is similar to a standard map in that it usually includes additional information, such as map coordinate grids, scale bars, north arrows, etc. The images are often clipped to represent the same ground area and map projection as standard map series. In addition, other geographic data can be superimposed on the image. For example, items from a topographic database (e.g., roads, contour lines, and feature descriptions) can improve the interpretability of the orthomap.

Topographic Database

Features obtained from elevation extraction and feature collection can serve as primary inputs into a topographic database. This database can then be utilized by GIS and map publishing systems.

Topographic Maps Topographic maps are the traditional end product of the photogrammetric process. In the digital era, topographic maps are often produced by map publishing systems which utilize a topographic database.


CHAPTER 8: Rectification

Introduction Raw, remotely sensed image data gathered by a satellite or aircraft are representations of the irregular surface of the earth. Even images of seemingly “flat” areas are distorted by both the curvature of the earth and the sensor being used. This chapter covers the processes of geometrically correcting an image so that it can be represented on a planar surface, conform to other images, and have the integrity of a map.

A map projection system is any system designed to represent the surface of a sphere or spheroid (such as the earth) on a plane. There are a number of different map projection methods. Since flattening a sphere to a plane causes distortions to the surface, each map projection system compromises accuracy between certain properties, such as conservation of distance, angle, or area. For example, in equal area map projections, a circle of a specified diameter drawn at any location on the map will represent the same total area. This is useful for comparing land use area, density, and many other applications. However, to maintain equal area, the shapes, angles, and scale in parts of the map may be distorted (Jensen 1996).

There are a number of map coordinate systems for determining location on an image. These coordinate systems conform to a grid, and are expressed as X,Y (column, row) pairs of numbers. Each map projection system is associated with a map coordinate system.

Rectification is the process of transforming the data from one grid system into another grid system using an nth order polynomial. Since the pixels of the new grid may not align with the pixels of the original grid, the pixels must be resampled. Resampling is the process of extrapolating data values for the pixels on the new grid from the values of the source pixels.

Registration

In many cases, images of one area that are collected from different sources must be used together. To be able to compare separate images pixel by pixel, the pixel grids of each image must conform to the other images in the data base. The tools for rectifying image data are used to transform disparate images to the same coordinate system. Registration is the process of making an image conform to another image. A map coordinate system is not necessarily involved. For example, if image A is not rectified and it is being used with image B, then image B must be registered to image A, so that they conform to each other. In this example, image A is not rectified to a particular map projection, so there is no need to rectify image B to a map projection.


Georeferencing

Georeferencing refers to the process of assigning map coordinates to image data. The image data may already be projected onto the desired plane, but not yet referenced to the proper coordinate system. Rectification, by definition, involves georeferencing, since all map projection systems are associated with map coordinates. Image-to-image registration involves georeferencing only if the reference image is already georeferenced. Georeferencing, by itself, involves changing only the map coordinate information in the image file. The grid of the image does not change.

Geocoded data are images that have been rectified to a particular map projection and pixel size, and usually have had radiometric corrections applied. It is possible to purchase image data that is already geocoded. Geocoded data should be rectified only if they must conform to a different projection system or be registered to other rectified data.

Latitude/Longitude

Latitude/Longitude is a spherical coordinate system that is not associated with a map projection. Lat/Lon expresses locations in the terms of a spheroid, not a plane. Therefore, an image is not usually “rectified” to Lat/Lon, although it is possible to convert images to Lat/Lon, and some tips for doing so are included in this chapter.

You can view map projection information for a particular file using the ERDAS IMAGINE Image Information utility. Image Information allows you to modify map information that is incorrect. However, you cannot rectify data using Image Information. You must use the Rectification tools described in this chapter.

The properties of map projections and of particular map projection systems are discussed in "CHAPTER 11: Cartography" and "APPENDIX C: Map Projections."

Orthorectification Orthorectification is a form of rectification that corrects for terrain displacement and can be used if there is a digital elevation model (DEM) of the study area. In relatively flat areas, orthorectification is not necessary, but in mountainous areas (or on aerial photographs of buildings), where a high degree of accuracy is required, orthorectification is recommended.

See "CHAPTER 7: Photogrammetric Concepts" for more information on orthocorrection.


When to Rectify Rectification is necessary in cases where the pixel grid of the image must be changed to fit a map projection system or a reference image. There are several reasons for rectifying image data:

• scene-to-scene comparisons of individual pixels in applications, such as change detection or thermal inertia mapping (day and night comparison)

• developing GIS data bases for GIS modeling

• identifying training samples according to map coordinates prior to classification

• creating accurate scaled photomaps

• overlaying an image with vector data, such as ARC/INFO

• comparing images that are originally at different scales

• extracting accurate distance and area measurements

• mosaicking images

• performing any other analyses requiring precise geographic locations

Before rectifying the data, one must determine the appropriate coordinate system for the data base. To select the optimum map projection and coordinate system, the primary use for the data base must be considered.

If the user is doing a government project, the projection may be pre-determined. A commonly used projection in the United States government is State Plane. Use an equal area projection for thematic or distribution maps and conformal or equal area projections for presentation maps. Before selecting a map projection, consider the following:

• How large or small an area will be mapped? Different projections are intended for different size areas.

• Where on the globe is the study area? Polar regions and equatorial regions require different projections for maximum accuracy.

• What is the extent of the study area? Circular, north-south, east-west, and oblique areas may all require different projection systems (ESRI 1992).


When to Georeference Only

Rectification is not necessary if there is no distortion in the image. For example, if an image file is produced by scanning or digitizing a paper map that is in the desired projection system, then that image is already planar and does not require rectification unless there is some skew or rotation of the image. Scanning and digitizing produce images that are planar, but do not contain any map coordinate information. These images need only to be georeferenced, which is a much simpler process than rectification. In many cases, the image header can simply be updated with new map coordinate information. This involves redefining:

• the map coordinate of the upper left corner of the image, and

• the cell size (the area represented by each pixel).

This information is usually the same for each layer of an image (.img) file, although it could be different. For example, the cell size of band 6 of Landsat TM data is different than the cell size of the other bands.

Use the Image Information utility to modify image file header information that is incorrect.

Disadvantages of Rectification

During rectification, the data file values of rectified pixels must be resampled to fit into a new grid of pixel rows and columns. Although some of the algorithms for calculating these values are highly reliable, some spectral integrity of the data can be lost during rectification. If map coordinates or map units are not needed in the application, then it may be wiser not to rectify the image. An unrectified image is more spectrally correct than a rectified image.

Classification

Some analysts recommend classification before rectification, since the classification will then be based on the original data values. Another benefit is that a thematic file has only one band to rectify instead of the multiple bands of a continuous file. On the other hand, it may be beneficial to rectify the data first, especially when using Global Positioning System (GPS) data for the ground control points. Since these data are very accurate, the classification may be more accurate if the new coordinates help to locate better training samples.

Thematic Files

Nearest neighbor is the only appropriate resampling method for thematic files, which may be a drawback in some applications. The available resampling methods are discussed in detail later in this chapter.


Rectification Steps NOTE: Registration and rectification involve similar sets of procedures. Throughout this documentation, many references to rectification also apply to image-to-image registration.

Usually, rectification is the conversion of data file coordinates to some other grid and coordinate system, called a reference system. Rectifying or registering image data on disk involves the following general steps, regardless of the application:

1. Locate ground control points.

2. Compute and test a transformation matrix.

3. Create an output image file with the new coordinate information in the header. The pixels must be resampled to conform to the new grid.

Images can be rectified on the display (in a Viewer) or on the disk. Display rectification is temporary, but disk rectification is permanent, because a new file is created. Disk rectification involves:

• rearranging the pixels of the image onto a new grid, which conforms to a plane in the new map projection and coordinate system, and

• inserting new information to the header of the file, such as the upper left corner map coordinates and the area represented by each pixel.


Ground Control Points

Ground control points (GCPs) are specific pixels in an image for which the output map coordinates (or other output coordinates) are known. GCPs consist of two X,Y pairs of coordinates:

• source coordinates — usually data file coordinates in the image being rectified

• reference coordinates — the coordinates of the map or reference image to which the source image is being registered

The term “map coordinates” is sometimes used loosely to apply to reference coordinates and rectified coordinates. These coordinates are not limited to map coordinates. For example, in image-to-image registration, map coordinates are not necessary.

GCPs in ERDAS IMAGINE

Any ERDAS IMAGINE image can have one GCP set associated with it. The GCP set is stored in the image file (.img) along with the raster layers. If a GCP set exists for the top file that is displayed in the Viewer, then those GCPs can be displayed when the GCP Tool is opened.

In the CellArray of GCP data that displays in the GCP Tool, one column shows the point ID of each GCP. The point ID is a name given to GCPs in separate files that represent the same geographic location. Such GCPs are called corresponding GCPs.

A default point ID string is provided (such as “GCP #1”), but the user can enter his or her own unique ID strings to set up corresponding GCPs as needed. Even though only one set of GCPs is associated with an image file, one GCP set can include GCPs for a number of rectifications by changing the point IDs for different groups of corresponding GCPs.

Entering GCPs Accurate ground control points are essential for an accurate rectification. From the ground control points, the rectified coordinates for all other points in the image are extrapolated. Select many GCPs throughout the scene. The more dispersed the GCPs are, the more reliable the rectification will be. GCPs for large-scale imagery might include the intersection of two roads, airport runways, utility corridors, towers, or buildings. For small-scale imagery, larger features such as urban areas or geologic features may be used. Landmarks that can vary (e.g., the edges of lakes or other water bodies, vegetation, etc.) should not be used.

The source and reference coordinates of the ground control points can be entered in the following ways:

• They may be known a priori, and entered at the keyboard.

• Use the mouse to select a pixel from an image in the Viewer. With both the source and destination Viewers open, enter source coordinates and reference coordinates for image-to-image registration.

• Use a digitizing tablet to register an image to a hardcopy map.

Information on the use and setup of a digitizing tablet is discussed in "CHAPTER 2: Vector Layers."


Digitizing Tablet Option

If GCPs will be digitized from a hardcopy map and a digitizing tablet, accurate base maps must be collected. The user should try to match the resolution of the imagery with the scale and projection of the source map. For example, 1:24,000 scale USGS quadrangles make good base maps for rectifying Landsat TM and SPOT imagery. Avoid using maps over 1:250,000, if possible. Coarser maps (i.e., 1:250,000) are more suitable for imagery of lower resolution (i.e., AVHRR) and finer base maps (i.e., 1:24,000) are more suitable for imagery of finer resolution (i.e., Landsat and SPOT).

Mouse Option

When entering GCPs with the mouse, the user should try to match coarser resolution imagery to finer resolution imagery (i.e., Landsat TM to SPOT) and avoid stretching resolution spans greater than a cubic convolution radius (a 4 × 4 area). In other words, the user should not try to match Landsat MSS to SPOT or Landsat TM to an aerial photograph.

How GCPs are Stored

GCPs entered with the mouse are stored in the .img file, and those entered at the keyboard or digitized using a digitizing tablet are stored in a separate file with the extension .gcc.

Refer to "APPENDIX B: File Formats and Extensions" for more information on the format of .img and .gcc files.


Orders of Transformation

Polynomial equations are used to convert source file coordinates to rectified map coordinates. Depending upon the distortion in the imagery, the number of GCPs used, and their locations relative to one another, complex polynomial equations may be required to express the needed transformation. The degree of complexity of the polynomial is expressed as the order of the polynomial. The order is simply the highest exponent used in the polynomial.

The order of transformation is the order of the polynomial used in the transformation. ERDAS IMAGINE allows 1st- through nth-order transformations. Usually, 1st-order or 2nd-order transformations are used.

You can specify the order of the transformation you want to use in the Transform Editor.

A discussion of polynomials and order is included in "APPENDIX A: Math Topics".

Transformation Matrix

A transformation matrix is computed from the GCPs. The matrix consists of coefficients which are used in polynomial equations to convert the coordinates. The size of the matrix depends upon the order of transformation. The goal in calculating the coefficients of the transformation matrix is to derive the polynomial equations for which there is the least possible amount of error when they are used to transform the reference coordinates of the GCPs into the source coordinates. It is not always possible to derive coefficients that produce no error. For example, in Figure 133, GCPs are plotted on a graph and compared to the curve that is expressed by a polynomial.

Figure 133: Polynomial Curve vs. GCPs

Every GCP influences the coefficients, even if there is not a perfect fit of each GCP to the polynomial that the coefficients represent. The distance between the GCP reference coordinate and the curve is called RMS error, which is discussed later in this chapter. The least squares regression method is used to calculate the transformation matrix from the GCPs. This common method is discussed in statistics textbooks.

[Figure 133: GCPs and the polynomial curve plotted with the source X coordinate on the horizontal axis and the reference X coordinate on the vertical axis]
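The least squares computation can be illustrated with a short Python sketch that fits the six coefficients of a 1st-order transformation to a handful of GCPs. The GCP coordinates below are hypothetical, and this is an illustration of the method, not the ERDAS IMAGINE code:

import numpy as np

# Reference (map) coordinates and source (file) coordinates of the GCPs (hypothetical).
ref = np.array([[350200.0, 3885400.0], [352900.0, 3885100.0], [350500.0, 3882700.0], [353100.0, 3882300.0]])
src = np.array([[10.0, 12.0], [910.0, 15.0], [14.0, 905.0], [921.0, 908.0]])

# Design matrix for a 1st-order polynomial: output = c0 + c1*xi + c2*yi.
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])

coeffs_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)   # coefficients for the X output
coeffs_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)   # coefficients for the Y output

print("X coefficients:", coeffs_x)
print("Y coefficients:", coeffs_y)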


Linear Transformations A 1st-order transformation is a linear transformation. A linear transformation can change:

• location in X and/or Y

• scale in X and/or Y

• skew in X and/or Y

• rotation

First-order transformations can be used to project raw imagery to a planar map projection, convert a planar map projection to another planar map projection, and when rectifying relatively small image areas. The user can perform simple linear transformations to an image displayed in a Viewer or to the transformation matrix itself. Linear transformations may be required before collecting GCPs on the displayed image. The user can reorient skewed Landsat TM data, rotate scanned quad sheets according to the angle of declination stated in the legend, and rotate descending data so that north is up.

A 1st-order transformation can also be used for data that are already projected onto a plane. For example, SPOT and Landsat Level 1B data are already transformed to a plane, but may not be rectified to the desired map projection. When doing this type of rectification, it is not advisable to increase the order of transformation if at first a high RMS error occurs. Examine other factors first, such as the GCP source and distribution, and look for systematic errors.

ERDAS IMAGINE provides the following options for 1st-order transformations:

• scale

• offset

• rotate

• reflect

Scale

Scale is the same as the zoom option in the Viewer, except that the user can specify different scaling factors for X and Y.

If you are scaling an image in the Viewer, the zoom option will undo any scale changes that you make, and vice versa.

Offset

Offset moves the image by a user-specified number of pixels in the X and Y directions. For rotation, the user can specify any positive or negative number of degrees for clockwise and counterclockwise rotation. Rotation occurs around the center pixel of the image.


Reflection

Reflection options enable the user to perform the following operations:

• left to right reflection

• top to bottom reflection

• left to right and top to bottom reflection (equal to a 180° rotation)

Linear adjustments are available from the Viewer or from the Transform Editor. You can perform linear transformations in the Viewer and then load that transformation to the Transform Editor, or you can perform the linear transformations directly on the transformation matrix.

Figure 134 illustrates how the data are changed in linear transformations.

Figure 134: Linear Transformations

[Figure 134 panels: original image; change of scale in X; change of scale in Y; change of skew in X; change of skew in Y; rotation]


The transformation matrix for a 1st-order transformation consists of six coefficients—three for each coordinate (X and Y)...

a1 a2 a3

b1 b2 b3

...which are used in a 1st-order polynomial as follows:

EQUATION 3

xo = b1 + b2xi + b3yi

yo = a1 + a2xi + a3yi

where:

xi and yi are source coordinates (input)
xo and yo are rectified coordinates (output)
the coefficients of the transformation matrix are as above

The position of the coefficients in the matrix and the assignment of the coefficients in the polynomial is an ERDAS IMAGINE convention. Other representations of a 1st-order transformation matrix may take a different form.
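For illustration, applying the 1st-order polynomial above to a source pixel looks like this in Python (the six coefficients are hypothetical):

# Hypothetical six coefficients of a 1st-order transformation matrix.
a1, a2, a3 = 3885600.0, 0.5, -28.5     # used for the yo equation
b1, b2, b3 = 350000.0, 28.5, 0.5       # used for the xo equation

def transform(xi, yi):
    """EQUATION 3: map source coordinates (xi, yi) to rectified coordinates (xo, yo)."""
    xo = b1 + b2 * xi + b3 * yi
    yo = a1 + a2 * xi + a3 * yi
    return xo, yo

print(transform(100.0, 200.0))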

Nonlinear Transformations

Transformations of the 2nd-order or higher are nonlinear transformations. These transformations can correct nonlinear distortions. The process of correcting nonlinear distortions is also known as rubber sheeting. Figure 135 illustrates the effects of some nonlinear transformations.

Figure 135: Nonlinear Transformations

[Figure 135 panels: original image; some possible outputs]


Second-order transformations can be used to convert Lat/Lon data to a planar projection, for data covering a large area (to account for the earth’s curvature), and with distorted data (for example, due to camera lens distortion). Third-order transformations are used with distorted aerial photographs, on scans of warped maps and with radar imagery. Fourth-order transformations can be used on very distorted aerial photographs.

The transformation matrix for a transformation of order t contains this number of coefficients:

EQUATION 4

2 Σ i     (summed over i = 1 to t + 1)

It is multiplied by two for the two sets of coefficients—one set for X, one for Y.

An easier way to arrive at the same number is:

EQUATION 5

(t + 1) × (t + 2)

Clearly, the size of the transformation matrix increases with the order of the transformation.


Higher Order Polynomials

The polynomial equations for a t-order transformation take this form:

EQUATION 6

xo = A + Bx + Cy + Dx² + Exy + Fy² + ... + Qx^i y^j + ... + Ωy^t

where:

A, B, C, D, E, F ... Q ... Ω are coefficients

t is the order of the polynomial

i and j are exponents

All combinations of x^i times y^j are used in the polynomial expression, such that:

EQUATION 7

i + j ≤ t

The equation for yo takes the same format with different coefficients. An example of 3rd-order transformation equations for X and Y, using numbers, is:

xo = 5 + 4x - 6y + 10x² - 5xy + 1y² + 3x³ + 7x²y - 11xy² + 4y³

yo = 13 + 12x + 4y + 1x² - 21xy + 11y² - 1x³ + 2x²y + 5xy² + 12y³

These equations use a total of 20 coefficients, or

EQUATION 8

(3 + 1) × (3 + 2)
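The term structure of EQUATION 6 can be sketched in Python; the helper below builds every monomial x^i y^j with i + j ≤ t and evaluates the 3rd-order example equation for xo (an illustration only, not part of ERDAS IMAGINE):

def poly_terms(x, y, t):
    """Return the monomials x**i * y**j with i + j <= t, ordered by total degree."""
    return [x**i * y**j for total in range(t + 1)
                        for i in range(total, -1, -1)
                        for j in [total - i]]

def eval_poly(coeffs, x, y, t):
    terms = poly_terms(x, y, t)
    assert len(coeffs) == len(terms)        # (t + 1)(t + 2) / 2 coefficients per output coordinate
    return sum(c * m for c, m in zip(coeffs, terms))

# 3rd-order example from the text: xo = 5 + 4x - 6y + 10x^2 - 5xy + 1y^2 + 3x^3 + 7x^2y - 11xy^2 + 4y^3
coeffs_x = [5, 4, -6, 10, -5, 1, 3, 7, -11, 4]
print(eval_poly(coeffs_x, 2.0, 1.0, t=3))   # 72.0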


Effects of Order The computation and output of a higher-order polynomial equation are more complex than that of a lower-order polynomial equation. Therefore, higher-order polynomials are used to perform more complicated image rectifications. To understand the effects of different orders of transformation in image rectification, it is helpful to see the output of various orders of polynomials.

The example below uses only one coordinate (X), instead of two (X,Y), which are used in the polynomials for rectification. This enables the user to draw two-dimensional graphs that illustrate the way that higher orders of transformation affect the output image.

NOTE: Because only the X coordinate is used in these examples, the number of GCPs used is less than the numbers required to actually perform the different orders of transformation.

Coefficients like those presented in this example would generally be calculated by the least squares regression method. Suppose GCPs are entered with these X coordinates:

Source X Coordinate (input)     Reference X Coordinate (output)
1                               17
2                               9
3                               1

These GCPs allow a 1st-order transformation of the X coordinates, which is satisfied by this equation (the coefficients are in parentheses):

EQUATION 9

xr = (25) + (-8)xi

where:

xr = the reference X coordinate
xi = the source X coordinate

This equation takes on the same format as the equation of a line (y = mx + b). In mathematical terms, a 1st-order polynomial is linear. Therefore, a 1st-order transformation is also known as a linear transformation. This equation is graphed in Figure 136.


Figure 136: Transformation Example—1st-Order

[Figure 136: graph of xr = (25) + (-8)xi, with the source X coordinate on the horizontal axis and the reference X coordinate on the vertical axis]

However, what if the second GCP were changed as follows?

Source X Coordinate (input)     Reference X Coordinate (output)
1                               17
2                               7
3                               1

These points are plotted against each other in Figure 137.

Figure 137: Transformation Example—2nd GCP Changed

[Figure 137: the three points plotted with the source X coordinate on the horizontal axis and the reference X coordinate on the vertical axis]

A line cannot connect these points, which illustrates that they cannot be expressed by a 1st-order polynomial, like the one above. In this case, a 2nd-order polynomial equation will express these points:

EQUATION 10

xr = (31) + (-16)xi + (2)xi²

Polynomials of the 2nd-order or higher are nonlinear. The graph of this curve is drawn in Figure 138.


Figure 138: Transformation Example—2nd-Order

[Figure 138: graph of xr = (31) + (-16)xi + (2)xi², with the source X coordinate on the horizontal axis and the reference X coordinate on the vertical axis]

What if one more GCP were added to the list?

Source X Coordinate (input)     Reference X Coordinate (output)
1                               17
2                               7
3                               1
4                               5

Figure 139: Transformation Example—4th GCP Added

[Figure 139: the same 2nd-order curve with the new point (4,5), which does not fall on the curve]

As illustrated in Figure 139, this fourth GCP does not fit on the curve of the 2nd-order polynomial equation. So that all of the GCPs would fit, the order of the transformation could be increased to 3rd-order. The equation and graph in Figure 140 would then result.


Figure 140: Transformation Example—3rd-Order

[Figure 140: graph of xr = (25) + (-5)xi + (-4)xi² + (1)xi³, with the source X coordinate on the horizontal axis and the reference X coordinate on the vertical axis]

Figure 140 illustrates a 3rd-order transformation. However, this equation may be unnecessarily complex. Performing a coordinate transformation with this equation may cause unwanted distortions in the output image for the sake of a perfect fit for all the GCPs. In this example, a 3rd-order transformation probably would be too high, because the output pixels would be arranged in a different order than the input pixels, in the X direction.

Source X Coordinate (input)     Reference X Coordinate (output)
1                               xo(1) = 17
2                               xo(2) = 7
3                               xo(3) = 1
4                               xo(4) = 5

EQUATION 11

xo(1) > xo(2) > xo(4) > xo(3)

EQUATION 12

17 > 7 > 5 > 1

Figure 141: Transformation Example—Effect of a 3rd-Order Transformation

[Figure 141: the input image X coordinates (1, 2, 3, 4) and the rearranged output image X coordinates]


In this case, a higher order of transformation would probably not produce the desired results.

Minimum Number of GCPs

Higher orders of transformation can be used to correct more complicated types of distortion. However, to use a higher order of transformation, more GCPs are needed. For instance, three points define a plane. Therefore, to perform a 1st-order transformation, which is expressed by the equation of a plane, at least three GCPs are needed. Similarly, the equation used in a 2nd-order transformation is the equation of a paraboloid. Six points are required to define a paraboloid. Therefore, at least six GCPs are required to perform a 2nd-order transformation. The minimum number of points required to perform a transformation of order t equals:

EQUATION 13

((t + 1) × (t + 2)) / 2

Use more than the minimum number of GCPs whenever possible. Although it is possible to get a perfect fit, it is rare, no matter how many GCPs are used.

For 1st- through 10th-order transformations, the minimum number of GCPs required to perform a transformation is listed in the table below.

For the best rectification results, you should always use more than the minimum number of GCPs and they should be well distributed.

Order of Transformation     Minimum GCPs Required
1                           3
2                           6
3                           10
4                           15
5                           21
6                           28
7                           36
8                           45
9                           55
10                          66
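EQUATION 13 and the table above can be reproduced with a one-line Python function (illustration only):

def min_gcps(t):
    """Minimum number of GCPs for a transformation of order t (EQUATION 13)."""
    return (t + 1) * (t + 2) // 2

for t in range(1, 11):
    print(t, min_gcps(t))    # 3, 6, 10, 15, 21, 28, 36, 45, 55, 66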


GCP Prediction and Matching

Automated GCP prediction enables the user to pick a GCP in either coordinate system and automatically locate that point in the other coordinate system based on the current transformation parameters.

Automated GCP matching is a step beyond GCP prediction. For image to image rectification, a GCP selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the transformation matrix. GCP matching enables the user to fine tune a rectification for highly accurate results.

Both of these methods require an existing transformation matrix. A transformation matrix is a set of coefficients used to convert the coordinates to the new projection. Transformation matrices are covered in more detail earlier in this chapter.

GCP Prediction

GCP prediction is a useful technique to help determine if enough ground control points have been gathered. After selecting several GCPs, select a point in either the source or the destination image, then use GCP prediction to locate the corresponding GCP on the other image (map). This point is determined based on the current transformation matrix. Examine the automatically generated point and see how accurate it is. If it is within an acceptable range of accuracy, then there may be enough GCPs to perform an accurate rectification (depending upon how evenly dispersed the GCPs are). If the automatically generated point is not accurate, then more GCPs should be gathered before rectifying the image.

GCP prediction can also be used when applying an existing transformation matrix to another image in a data set. This saves time in selecting another set of GCPs by hand. Once the GCPs are automatically selected, those that do not meet an acceptable level of error can be edited.

GCP Matching

In GCP matching the user can select which layers from the source and destination images to use. Since the matching process is based on the reflectance values, select layers that have similar spectral wavelengths, such as two visible bands or two infrared bands. The user can perform histogram matching to ensure that there is no offset between the images. The user can also select the radius from the predicted GCP from which the matching operation will search for a spectrally similar pixel. The search window can be any odd size between 5 × 5 and 21 × 21.

Histogram matching is discussed in "CHAPTER 5: Enhancement".

A correlation threshold is used to accept or discard points. The correlation ranges from -1.000 to +1.000. The threshold is an absolute value threshold ranging from 0.000 to 1.000. A value of 0.000 indicates a bad match and a value of 1.000 indicates an exact match. Values above 0.8000 or 0.9000 are recommended. If a match cannot be made because the absolute value of the correlation is less than the threshold, the user has the option to discard points.
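The correlation test can be sketched in Python as below; the image chips are hypothetical arrays, and the function computes a normalized correlation coefficient that is compared with the threshold (an illustration of the idea, not the ERDAS IMAGINE matching code):

import numpy as np

def correlation(chip_a, chip_b):
    """Normalized correlation coefficient between two equally sized image chips."""
    a = chip_a.astype(float).ravel()
    b = chip_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
reference_chip = rng.integers(0, 255, (11, 11))                 # chip around the GCP in one image
candidate_chip = reference_chip + rng.normal(0, 5, (11, 11))    # similar chip in the other image

threshold = 0.8
r = correlation(reference_chip, candidate_chip)
print(f"correlation = {r:.3f}; match accepted: {abs(r) >= threshold}")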


RMS Error In most cases, a perfect fit for all GCPs would require an unnecessarily high order of transformation. Instead of increasing the order, the user has the option to tolerate a certain amount of error. When a transformation matrix is calculated, the inverse of the transformation matrix is used to retransform the reference coordinates of the GCPs back to the source coordinate system. Unless the order of transformation allows for a perfect fit, there is some discrepancy between the source coordinates and the retransformed reference coordinates.

RMS error (root mean square) is the distance between the input (source) location of a GCP and the retransformed location for the same GCP. In other words, it is the difference between the desired output coordinate for a GCP and the actual output coordinate for the same point, when the point is transformed with the transformation matrix.

RMS error is calculated with a distance equation:

EQUATION 14

RMS error = √( (xr - xi)² + (yr - yi)² )

where:

xi and yi are the input source coordinates

xr and yr are the retransformed coordinates

RMS error is expressed as a distance in the source coordinate system. If data file coordinates are the source coordinates, then the RMS error is a distance in pixel widths. For example, an RMS error of 2 means that the reference pixel is 2 pixels away from the retransformed pixel.

Residuals and RMS Error Per GCP

The ERDAS IMAGINE GCP Tool contains columns for the X and Y residuals. Residuals are the distances between the source and retransformed coordinates in one direction. They are shown for each GCP. The X residual is the distance between the source X coordinate and the retransformed X coordinate. The Y residual is the distance between the source Y coordinate and the retransformed Y coordinate.

If the GCPs are consistently off in either the X or the Y direction, more points should be added in that direction. This is a common problem in off-nadir data.


RMS Error Per GCP

The RMS error of each point is reported to help the user evaluate the GCPs. This is calculated with a distance formula:

EQUATION 15

Ri = √( XRi² + YRi² )

where:

Ri = the RMS error for GCPi
XRi = the X residual for GCPi
YRi = the Y residual for GCPi

Figure 142 illustrates the relationship between the residuals and the RMS error per point.

Figure 142: Residuals and RMS Error Per Point

[Figure 142: a source GCP and its retransformed GCP, with the X residual, the Y residual, and the RMS error shown as distances between them]


Total RMS Error From the residuals, the following calculations are made to determine the total RMS error, the X RMS error, and the Y RMS error:

Rx = √( (1/n) Σ XRi² )

Ry = √( (1/n) Σ YRi² )

T = √( Rx² + Ry² )   or   √( (1/n) Σ (XRi² + YRi²) )

(each Σ is taken over i = 1 to n)

where:

Rx = X RMS error
Ry = Y RMS error
T = total RMS error
n = the number of GCPs
i = GCP number
XRi = the X residual for GCPi
YRi = the Y residual for GCPi

Error Contribution by Point

A normalized value representing each point’s RMS error in relation to the total RMS error is also reported. This value is listed in the Contribution column of the GCP Tool.

EQUATION 16

Ei = Ri / T

where:

Ei = error contribution of GCPi
Ri = the RMS error for GCPi
T = total RMS error
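The residual and RMS error bookkeeping above can be sketched in Python; the source GCPs and the retransformed coordinates below are hypothetical:

import numpy as np

src = np.array([[100.0, 200.0], [450.0, 120.0], [300.0, 480.0], [620.0, 610.0]])
retransformed = np.array([[100.6, 199.2], [449.1, 120.7], [300.4, 481.1], [618.9, 609.4]])

residuals = retransformed - src                              # X and Y residuals per GCP
per_gcp_rms = np.sqrt((residuals ** 2).sum(axis=1))          # EQUATION 15

Rx = np.sqrt((residuals[:, 0] ** 2).mean())                  # X RMS error
Ry = np.sqrt((residuals[:, 1] ** 2).mean())                  # Y RMS error
T = np.sqrt(Rx ** 2 + Ry ** 2)                               # total RMS error

contribution = per_gcp_rms / T                               # EQUATION 16

print("per-GCP RMS:", np.round(per_gcp_rms, 3))
print("Rx, Ry, T:", round(Rx, 3), round(Ry, 3), round(T, 3))
print("contribution:", np.round(contribution, 3))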


Tolerance of RMS Error In most cases, it will be advantageous to tolerate a certain amount of error rather than take a higher order of transformation. The amount of RMS error that is tolerated can be thought of as a window around each source coordinate, inside which a retransformed coordinate is considered to be correct (that is, close enough to use). For example, if the RMS error tolerance is 2, then the retransformed pixel can be 2 pixels away from the source pixel and still be considered accurate.

Figure 143: RMS Error Tolerance

Acceptable RMS error is determined by the end use of the data base, the type of data being used, and the accuracy of the GCPs and ancillary data being used. For example, GCPs acquired from GPS should have an accuracy of about 10 m, but GCPs from 1:24,000-scale maps should have an accuracy of about 20 m.

It is important to remember that RMS error is reported in pixels. Therefore, if the user is rectifying Landsat TM data and wants the rectification to be accurate to within 30 meters, the RMS error should not exceed 0.50. If the user is rectifying AVHRR data, an RMS error of 1.50 might be acceptable. Acceptable accuracy will depend on the image area and the particular project.

[Figure 143: a source pixel surrounded by a 2-pixel RMS error tolerance (radius); retransformed coordinates within this range are considered correct]


Evaluating RMS Error To determine the order of transformation, the user can assess the relative distortion in going from image to map or map to map. One should start with a 1st-order transformation unless it is known that it will not work. It is possible to repeatedly compute transformation matrices until an acceptable RMS error is reached.

Most rectifications are either 1st-order or 2nd-order. The danger of using higher order rectifications is that the more complicated the equation for the transformation, the less regular and predictable the results will be. To fit all of the GCPs, there may be very high distortion in the image.

After each computation of a transformation matrix and RMS error, there are four options:

• Throw out the GCP with the highest RMS error, assuming that this GCP is the least accurate. Another transformation matrix can then be computed from the remaining GCPs. A closer fit should be possible. However, if this is the only GCP in a particular region of the image, it may cause greater error to remove it.

• Tolerate a higher amount of RMS error.

• Increase the order of transformation, creating more complex geometric alterations in the image. A transformation matrix can then be computed that can accommodate the GCPs with less error.

• Select only the points for which you have the most confidence.


Resampling Methods

The next step in the rectification/registration process is to create the output file. Since the grid of pixels in the source image rarely matches the grid for the reference image, the pixels are resampled so that new data file values for the output file can be calculated.

Figure 144: Resampling

The following resampling methods are supported in ERDAS IMAGINE:

• Nearest neighbor — uses the value of the closest pixel to assign to the output pixel value.

• Bilinear interpolation — uses the data file values of four pixels in a 2 × 2 window to calculate an output value with a bilinear function.

• Cubic convolution — uses the data file values of sixteen pixels in a 4 × 4 window to calculate an output value with a cubic function.

Additionally, IMAGINE Restoration, a deconvolution algorithm that models known sensor-specific parameters to produce a better estimate of the original scene radiance, is available as an add-on module to ERDAS IMAGINE. The Restoration™ algorithm was developed by the Environmental Research Institute of Michigan (ERIM). It produces sharper, crisper rectified images by preserving and enhancing the high spatial frequency component of the image during the resampling process. Restoration can also provide higher spatial resolution from oversampled images and is well suited for data fusion and GIS data integration applications.

See the IMAGINE Restoration documentation for more information.

[Figure 144 steps:
1. The input image with source GCPs.
2. The output grid, with reference GCPs shown.
3. To compare the two grids, the input image is laid over the output grid, so that the GCPs of the two grids fit together.
4. Using a resampling method, the pixel values of the input image are assigned to pixels in the output grid.]


In all methods, the number of rows and columns of pixels in the output is calculated from the dimensions of the output map, which is determined by the transformation matrix and the cell size. The output corners (upper left and lower right) of the output file can be specified. The default values are calculated so that the entire source file will be resampled to the destination file.

If an image to image rectification is being performed, it may be beneficial to specify the output corners relative to the reference file system, so that the images will be co-registered. In this case, the upper left X and upper left Y coordinate will be 0,0 and not the defaults.

If the output units are pixels, then the origin of the image is the upper left corner. Otherwise, the origin is the lower left corner.

“Rectifying” to Lat/Lon The user can specify the nominal cell size if the output coordinate system is Lat/Lon. The output cell size for a geographic projection (i.e., Lat/Lon) is always in angular units of decimal degrees. However, if the user wants the cell to be a specific size in meters, he or she can enter meters and calculate the equivalent size in decimal degrees. For example, if the user wants the output file cell size to be 30 × 30 meters, then the program would calculate what this size would be in decimal degrees and automatically update the output cell size. Since the transformation between angular (decimal degrees) and nominal (meters) measurements varies across the image, the transformation is based on the center of the output file.

Enter the nominal cell size in the Nominal Cell Size dialog.
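The meters-to-decimal-degrees conversion can be approximated as in the Python sketch below, using a spherical earth and a hypothetical center latitude (ERDAS IMAGINE performs this calculation for you; the sketch only shows the idea):

import math

cell_m = 30.0
center_lat = 34.0              # degrees; hypothetical center latitude of the output file
earth_radius = 6378137.0       # meters

deg_lat = math.degrees(cell_m / earth_radius)
deg_lon = math.degrees(cell_m / (earth_radius * math.cos(math.radians(center_lat))))

print(f"~{deg_lat:.6f} deg latitude, ~{deg_lon:.6f} deg longitude per 30 m cell")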


Nearest Neighbor To determine an output pixel’s nearest neighbor, the rectified coordinates (xo,yo) of the pixel are retransformed back to the source coordinate system using the inverse of the transformation matrix. The retransformed coordinates (xr,yr) are used in bilinear interpolation and cubic convolution as well. The pixel that is closest to the retransformed coordinates (xr,yr) is the nearest neighbor. The data file value(s) for that pixel become the data file value(s) of the pixel in the output image.

Figure 145: Nearest Neighbor

[Figure 145: the retransformed coordinate (xr,yr) and the source pixel nearest to (xr,yr)]
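A minimal nearest neighbor sketch in Python is shown below; the inverse transformation is a hypothetical 1st-order polynomial and the source image is a toy array:

import numpy as np

def inverse_transform(xo, yo):
    # Output grid coordinates (xo, yo) -> source file coordinates (xr, yr); hypothetical coefficients.
    xr = 0.5 * xo + 2.0
    yr = 0.5 * yo + 3.0
    return xr, yr

source = np.arange(100, dtype=np.uint8).reshape(10, 10)     # toy source image
output = np.zeros((6, 6), dtype=source.dtype)

for yo in range(output.shape[0]):
    for xo in range(output.shape[1]):
        xr, yr = inverse_transform(xo, yo)
        col, row = int(round(xr)), int(round(yr))            # nearest source pixel
        output[yo, xo] = source[row, col]

print(output)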

Nearest Neighbor Resampling

Advantages:

• Transfers original data values without averaging them, as the other methods do; therefore the extremes and subtleties of the data values are not lost. This is an important consideration when discriminating between vegetation types, locating an edge associated with a lineament, or determining different levels of turbidity or temperatures in a lake (Jensen 1996).

• Suitable for use before classification.

• The easiest of the three methods to compute and the fastest to use.

• Appropriate for thematic files, which can have data file values based on a qualitative (nominal or ordinal) system or a quantitative (interval or ratio) system. The averaging that is performed with bilinear interpolation and cubic convolution is not suited to a qualitative class value system.

Disadvantages:

• When this method is used to resample from a larger to a smaller grid size, there is usually a “stair stepped” effect around diagonal lines and curves.

• Data values may be dropped, while other values may be duplicated.

• Using on linear thematic data (e.g., roads, streams) may result in breaks or gaps in a network of linear data.


Bilinear Interpolation In bilinear interpolation, the data file value of the rectified pixel is based upon the distances between the retransformed coordinate location (xr,yr) and the four closest pixels in the input (source) image (see Figure 146). In this example, the neighbor pixels are numbered 1, 2, 3, and 4. Given the data file values of these four pixels on a grid, the task is to calculate a data file value for r (Vr).

Figure 146: Bilinear Interpolation

To calculate Vr, first Vm and Vn are considered. By interpolating Vm and Vn, the user can perform linear interpolation, which is a simple process to illustrate. If the data file values are plotted in a graph relative to their distances from one another, then a visual linear interpolation is apparent. The data file value of m (Vm) is a function of the change in the data file value between pixels 3 and 1 (that is, V3 - V1).

Figure 147: Linear Interpolation

[Figure 146: pixels 1, 2, 3, and 4 around the retransformed coordinate location r = (xr,yr), with points m and n, the offsets dx and dy, and the pixel spacing D]

[Figure 147: data file values (Y axis) plotted against data file coordinates (X axis), showing V1 at Y1, Vm at Ym, and V3 at Y3 along a line of slope (V3 - V1) / D — calculating a data file value as a function of spatial distance between two pixels]


The equation for calculating Vm from V1 and V3 is:

EQUATION 17

Vm = ((V3 - V1) / D) × dy + V1

where:

Yi = the Y coordinate for pixel i
Vi = the data file value for pixel i
dy = the distance between Y1 and Ym in the source coordinate system
D = the distance between Y1 and Y3 in the source coordinate system

If one considers that ((V3 - V1) / D) is the slope of the line in the graph above, then this equation translates to the equation of a line in y = mx + b form.

Similarly, the equation for calculating the data file value for n (Vn) in the pixel grid is:

EQUATION 18

Vn = ((V4 - V2) / D) × dy + V2

From Vn and Vm, the data file value for r, which is at the retransformed coordinate location (xr,yr), can be calculated in the same manner:

EQUATION 19

Vr = ((Vn - Vm) / D) × dx + Vm


The following is attained by plugging in the equations for Vm and Vn to this final equation for Vr:

Vr = [(((V4 - V2) / D) × dy + V2) - (((V3 - V1) / D) × dy + V1)] / D × dx + ((V3 - V1) / D) × dy + V1

which simplifies to:

Vr = [V1(D - dx)(D - dy) + V2(dx)(D - dy) + V3(D - dx)(dy) + V4(dx)(dy)] / D²

In most cases D = 1, since data file coordinates are used as the source coordinates and data file coordinates increment by 1.

Some equations for bilinear interpolation express the output data file value as:

EQUATION 20

Vr = Σ wiVi

where:

wi is a weighting factor

The equation above could be expressed in a similar format, in which the calculation of wi is apparent:

EQUATION 21

Vr = Σ [ (D - ∆xi)(D - ∆yi) / D² ] × Vi     (summed over i = 1 to 4)

where:

∆xi = the change in the X direction between (xr,yr) and the data file coordinate of pixel i
∆yi = the change in the Y direction between (xr,yr) and the data file coordinate of pixel i
Vi = the data file value for pixel i
D = the distance between pixels (in X or Y) in the source coordinate system

For each of the four pixels, the data file value is weighted more if the pixel is closer to (xr,yr).
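EQUATIONS 17 through 19 can be applied directly, as in the short Python sketch below with hypothetical neighbor values:

# V1..V4 are the data file values of the four neighbors; dx and dy are offsets from pixel 1;
# D is the pixel spacing (1 for data file coordinates). All values below are hypothetical.
V1, V2, V3, V4 = 50.0, 60.0, 54.0, 66.0
dx, dy, D = 0.3, 0.7, 1.0

Vm = (V3 - V1) / D * dy + V1          # EQUATION 17
Vn = (V4 - V2) / D * dy + V2          # EQUATION 18
Vr = (Vn - Vm) / D * dx + Vm          # EQUATION 19

print(f"Vr = {Vr:.2f}")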


See "CHAPTER 5: Enhancement" for more about convolution filtering.

Cubic Convolution Cubic convolution is similar to bilinear interpolation, except that:

• a set of 16 pixels, in a 4 × 4 array, are averaged to determine the output data file value, and

• an approximation of a cubic function, rather than a linear function, is applied to those 16 input values.

To identify the 16 pixels in relation to the retransformed coordinate (xr,yr), the pixel (i,j) is used, such that...

i = int (xr)

j = int (yr)

...and assuming that (xr,yr) is expressed in data file coordinates (pixels). The pixels around (i,j) make up a 4 × 4 grid of input pixels, as illustrated in Figure 148.

Bilinear Interpolation Resampling

Advantages:

• Results in output images that are smoother, without the “stair stepped” effect that is possible with nearest neighbor.

• More spatially accurate than nearest neighbor.

• This method is often used when changing the cell size of the data, such as in SPOT/TM merges within the 2 × 2 resampling matrix limit.

Disadvantages:

• Since pixels are averaged, bilinear interpolation has the effect of a low-frequency convolution. Edges are smoothed, and some extremes of the data file values are lost.


Figure 148: Cubic Convolution

Since a cubic, rather than a linear, function is used to weight the 16 input pixels, the pixels farther from (xr,yr) have exponentially less weight than those closer to (xr,yr).

Several versions of the cubic convolution equation are used in the field. Different equations have different effects upon the output data file values. Some convolutions may have more of the effect of a low-frequency filter (like bilinear interpolation), serving to average and smooth the values. Others may tend to sharpen the image, like a high-frequency filter. The cubic convolution used in ERDAS IMAGINE is a compromise between low-frequency and high-frequency. The general effect of the cubic convolution will depend upon the data.



The formula used in ERDAS IMAGINE is:

EQUATION 22

Vr = Σ (n = 1 to 4) [ V(i-1, j+n-2) × f(d(i-1, j+n-2) + 1)
                    + V(i, j+n-2) × f(d(i, j+n-2))
                    + V(i+1, j+n-2) × f(d(i+1, j+n-2) - 1)
                    + V(i+2, j+n-2) × f(d(i+2, j+n-2) - 2) ]

where:

i = int (xr)

j = int (yr)

d(i,j) = the distance between a pixel with coordinates (i,j) and (xr,yr)

V(i,j) = the data file value of pixel (i,j)

Vr = the output data file value

a = -0.5 (a constant which differs in other applications of cubic convolution)

f(x) = the following function:

f(x) = (a + 2)|x|³ - (a + 3)|x|² + 1     if |x| < 1
       a|x|³ - 5a|x|² + 8a|x| - 4a       if 1 < |x| < 2
       0                                 otherwise

Source: Atkinson 1985
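The weighting function f(x) of EQUATION 22 can be sketched in Python as follows; the one-dimensional example below uses hypothetical sample values and shows that the four weights sum to approximately 1:

import numpy as np

def f(x, a=-0.5):
    """Cubic convolution weighting function with the constant a (a = -0.5 per the text)."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

# Weighting a 1-D set of four neighboring samples at integer positions around a fractional location.
samples = np.array([52.0, 55.0, 61.0, 58.0])       # values at positions -1, 0, 1, 2 (hypothetical)
frac = 0.4                                         # location between positions 0 and 1
weights = np.array([f(frac + 1), f(frac), f(frac - 1), f(frac - 2)])
print(float(weights @ samples), weights.sum())     # weighted value; weights sum to ~1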


In most cases, a value for a of -0.5 tends to produce output layers with a mean and standard deviation closer to that of the original data (Atkinson 1985).

Cubic Convolution Resampling

Advantages:

• Uses 4 × 4 resampling. In most cases, the mean and standard deviation of the output pixels match the mean and standard deviation of the input pixels more closely than any other resampling method.

• The effect of the cubic curve weighting can both sharpen the image and smooth out noise (Atkinson 1985). The actual effects will depend upon the data being used.

• This method is recommended when the user is dramatically changing the cell size of the data, such as in TM/aerial photo merges (i.e., matches the 4 × 4 window more closely than the 2 × 2 window).

Disadvantages:

• Data values may be altered.

• The most computationally intensive resampling method, and is therefore the slowest.


Map to Map Coordinate Conversions

There are many instances when the user will need to change a map that is already registered to a planar projection to another projection. Some examples of when this is required are listed below (ESRI 1992).

• When combining two maps with different projection characteristics.

• When the projection used for the files in the data base does not produce the desired properties of a map.

• When it is necessary to combine data from more than one zone of a projection, such as UTM or State Plane.

A change in the projection is a geometric change—distances, areas, and scale are represented differently. Therefore, the conversion process requires that pixels be resampled.

Resampling causes some of the spectral integrity of the data to be lost (see the disadvantages of the resampling methods explained previously). So, it is not usually wise to resample data that have already been resampled if the accuracy of data file values is important to the application. If the original unrectified data are available, it is usually wiser to rectify that data to a second map projection system than to “lose a generation” by converting rectified data and resampling it a second time.

Conversion Process

To convert the map coordinate system of any georeferenced image, ERDAS IMAGINE provides a shortcut to the rectification process. In this procedure, GCPs are generated automatically along the intersections of a grid that the user specifies. The program calculates the reference coordinates for the GCPs with the appropriate conversion formula and a transformation that can be used in the regular rectification process.

Vector Data

Converting the map coordinates of vector data is much easier than converting raster data. Since vector data are stored by the coordinates of nodes, each coordinate is simply converted using the appropriate conversion formula. There are no coordinates between nodes to extrapolate.
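As a modern illustration (using the pyproj library, which is not part of ERDAS IMAGINE), converting vector node coordinates between two hypothetical UTM zones looks like this in Python:

from pyproj import Transformer

# Each node is converted directly with the projection formulas; there is nothing to resample.
transformer = Transformer.from_crs("EPSG:26917", "EPSG:26916", always_xy=True)   # NAD83 UTM 17N -> 16N

nodes = [(560000.0, 3750000.0), (560500.0, 3750250.0), (561000.0, 3750600.0)]    # hypothetical nodes
converted = [transformer.transform(x, y) for x, y in nodes]
print(converted)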


CHAPTER 9
Terrain Analysis

Introduction

Terrain analysis involves the processing and graphic simulation of elevation data. Terrain analysis software functions usually work with topographic data (also called terrain data or elevation data), in which an elevation (or Z value) is recorded at each X,Y location. Terrain analysis functions are not restricted to topographic data, however. Any series of values, such as population densities, ground water pressure values, magnetic and gravity measurements, and chemical concentrations, may be used.

Topographic data are essential for studies of trafficability, route design, non-point source pollution, intervisibility, siting of recreation areas, etc. (Welch 1990). Especially useful are products derived from topographic data. These include:

• slope images — illustrating changes in elevation over distance. Slope images are usually color-coded according to the steepness of the terrain at each pixel.

• aspect images — illustrating the prevailing direction that the slope faces at each pixel.

• shaded relief images — illustrating variations in terrain by differentiating areas that would be illuminated or shadowed by a light source simulating the sun.

Topographic data and its derivative products have many applications, including:

• calculating the shortest and most navigable path over a mountain range for constructing a road or routing a transmission line

• determining rates of snow melt based on variations in sun shadow, which is influenced by slope, aspect, and elevation

Terrain data are often used as a component in complex GIS modeling or classification routines. They can, for example, be a key to identifying wildlife habitats that are associated with specific elevations. Slope and aspect images are often an important factor in assessing the suitability of a site for a proposed use. Terrain data can also be used for vegetation classification based on species that are terrain-sensitive (i.e., Alpine vegetation).

Although this chapter mainly discusses the use of topographic data, the ERDAS IMAGINE terrain analysis functions can be used on data types other than topographic data.


See "CHAPTER 10: Geographic Information Systems" for more information about GIS modeling.

Topographic Data

Topographic data are usually expressed as a series of points with X, Y, and Z values. When topographic data are collected in the field, they are surveyed at a series of points including the extreme high and low points of the terrain, along features of interest that define the topography such as streams and ridge lines, and at various points in between.

DEM (digital elevation models) and DTED (Digital Terrain Elevation Data) are expressed as regularly spaced points. To create DEM and DTED files, a regular grid is overlaid on the topographic contours. Elevations are read at each grid intersection point, as shown in Figure 149.

Figure 149: Regularly Spaced Terrain Data Points

Elevation data are derived from ground surveys and through manual photogrammetric methods. Elevation points can also be generated through digital orthographic methods.

See "CHAPTER 3: Raster and Vector Data Sources" for more details on DEM and DTED data. See "CHAPTER 7: Photogrammetric Concepts" for more information on the digital orthographic process.

To make topographic data usable in ERDAS IMAGINE, they must be represented as a surface, or DEM. A DEM is a one band .img file where the value of each pixel is a specific elevation value. A gray scale is used to differentiate variations in terrain.

DEMs can be edited with the Raster Editing capabilities of ERDAS IMAGINE. See "CHAPTER 1: Raster Data" for more information.

Topographic image with grid overlay, and the resulting DEM of regularly spaced terrain data points (Z values), for example:

20  22  29  34
31  39  38  34
45  48  41  30


Slope Images

Slope is expressed as the change in elevation over a certain distance. In this case, the certain distance is the size of the pixel. Slope is most often expressed as a percentage, but can also be calculated in degrees.

Use the Slope function in Image Interpreter to generate a slope image.

In ERDAS IMAGINE, the relationship between percentage and degree expressions of slope is as follows:

• a 45° angle is considered a 100% slope

• a 90° angle is considered a 200% slope

• slopes less than 45° fall within the 1 - 100% range

• slopes between 45° and 90° are expressed as 100 - 200% slopes

Slope images are often used in road planning. For example, if the Department of Transportation specifies a maximum of 15% slope on any road, it would be possible to recode all slope values that are greater than 15% as unsuitable for road building.

A 3 × 3 pixel window is used to calculate the slope at each pixel. For a pixel at location X,Y, the elevations around it are used to calculate the slope as shown below. A hypothetical example is shown with the slope calculation formulas. In Figure 150, each pixel is 30 × 30 meters.

Figure 150: 3 × 3 Window Calculates the Slope at Each Pixel

a  b  c                10 m  20 m  25 m
d  e  f                22 m  30 m  25 m
g  h  i                20 m  24 m  18 m

Pixel X,Y has elevation e; a, b, c, d, f, g, h, and i are the elevations of the pixels around it in the 3 × 3 window.


First, the average elevation changes per unit of distance in the x and y directions (∆x and ∆y) are calculated as:

∆x1 = c - a        ∆y1 = a - g
∆x2 = f - d        ∆y2 = b - h
∆x3 = i - g        ∆y3 = c - i

∆x = (∆x1 + ∆x2 + ∆x3) / (3 × xs)
∆y = (∆y1 + ∆y2 + ∆y3) / (3 × ys)

where:

a...i = elevation values of pixels in a 3 × 3 window, as shown above
xs = x pixel size = 30 meters
ys = y pixel size = 30 meters

So, for the hypothetical example:

∆x1 = 25 - 10 = 15        ∆y1 = 10 - 20 = -10
∆x2 = 25 - 22 = 3         ∆y2 = 20 - 24 = -4
∆x3 = 18 - 20 = -2        ∆y3 = 25 - 18 = 7

∆x = (15 + 3 - 2) / (3 × 30) = 0.177
∆y = (-10 - 4 + 7) / (3 × 30) = -0.078


...the slope at pixel x,y is calculated as:

s = √((∆x)² + (∆y)²) / 2

If s ≤ 1:

percent slope = s × 100
slope in degrees = tan⁻¹ (s) × (180 / π)

or else (if s > 1):

percent slope = 200 - (100 / s)

For the example, the slope is:

s = 0.0967

percent slope = 0.0967 × 100 = 9.67%

slope in degrees = tan⁻¹ (0.0967) × 57.30 = 5.54
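The same calculation can be sketched in a few lines of Python for a single 3 × 3 window. This is an illustration of the formulas above, not the ERDAS IMAGINE Slope function; the window is assumed to be held in a NumPy array.

    import numpy as np

    def slope_from_window(win, xs=30.0, ys=30.0):
        """Percent and degree slope for the center pixel of a 3 x 3 window
        laid out as a b c / d e f / g h i (the center value e is not used)."""
        a, b, c, d, e, f, g, h, i = win.ravel()
        dx = ((c - a) + (f - d) + (i - g)) / (3 * xs)
        dy = ((a - g) + (b - h) + (c - i)) / (3 * ys)
        s = np.sqrt(dx**2 + dy**2) / 2

        if s <= 1:
            percent = s * 100
        else:
            percent = 200 - 100 / s
        degrees = np.degrees(np.arctan(s))
        return percent, degrees

    window = np.array([[10, 20, 25],
                       [22, 30, 25],
                       [20, 24, 18]], dtype=float)
    print(slope_from_window(window))   # approximately (9.7, 5.5)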


Aspect Images

An aspect image is an .img file that is gray scale-coded according to the prevailing direction of the slope at each pixel. Aspect is expressed in degrees from north, clockwise, from 0 to 360. Due north is 0 degrees. A value of 90 degrees is due east, 180 degrees is due south, and 270 degrees is due west. A value of 361 degrees is used to identify flat surfaces such as water bodies.

Aspect files are used in many of the same applications as slope files. In transportation planning, for example, north facing slopes are often avoided. Especially in northern climates, these would be exposed to the most severe weather and would hold snow and ice the longest. It would be possible to recode all pixels with north facing aspects as undesirable for road building.

Use the Aspect function in Image Interpreter to generate an aspect image.

As with slope calculations, aspect uses a 3 × 3 window around each pixel to calculate the prevailing direction it faces. For pixel X,Y with the following elevation values around it, the average changes in elevation in both x and y directions are calculated first. Each pixel is 30 × 30 meters in the following example:

Figure 151: 3 × 3 Window Calculates the Aspect at Each Pixel

a  b  c                10 m  20 m  25 m
d  e  f                22 m  30 m  25 m
g  h  i                20 m  24 m  18 m

Pixel X,Y has elevation e; a, b, c, d, f, g, h, and i are the elevations of the pixels around it in the 3 × 3 window.

∆x1 = c - a        ∆y1 = a - g
∆x2 = f - d        ∆y2 = b - h
∆x3 = i - g        ∆y3 = c - i

∆x = (∆x1 + ∆x2 + ∆x3) / 3
∆y = (∆y1 + ∆y2 + ∆y3) / 3


where:

a...i = elevation values of pixels in a 3 × 3 window, as shown above

If ∆x = 0 and ∆y = 0, then the aspect is flat (coded to 361 degrees). Otherwise, θ is calculated as:

θ = tan⁻¹ (∆x / ∆y)

then aspect is 180 + θ (in degrees).

For the example above:

∆x = (15 + 3 - 2) / 3 = 5.33
∆y = (-10 - 4 + 7) / 3 = -2.33

θ = tan⁻¹ (5.33 / -2.33) = 1.98 radians = 113.6 degrees (taking the angle in the quadrant indicated by the signs of ∆x and ∆y)

aspect = 180 + 113.6 = 293.6 degrees
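A short Python sketch of the aspect calculation for a single window follows; it is illustrative only and not the ERDAS IMAGINE Aspect function. The quadrant-aware arctangent (arctan2) reproduces the document's placement of θ in the correct quadrant.

    import numpy as np

    def aspect_from_window(win):
        """Aspect in degrees from north for the center pixel of a 3 x 3 window
        laid out as a b c / d e f / g h i.  Flat cells are coded to 361."""
        a, b, c, d, e, f, g, h, i = win.ravel()
        dx = ((c - a) + (f - d) + (i - g)) / 3.0
        dy = ((a - g) + (b - h) + (c - i)) / 3.0
        if dx == 0 and dy == 0:
            return 361.0                       # flat surface
        theta = np.arctan2(dx, dy)             # quadrant-aware angle, in radians
        return 180.0 + np.degrees(theta)

    window = np.array([[10, 20, 25],
                       [22, 30, 25],
                       [20, 24, 18]], dtype=float)
    print(aspect_from_window(window))          # approximately 293.6 degrees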


Shaded Relief

A shaded relief image provides an illustration of variations in elevation. Based on a user-specified position of the sun, areas that would be in sunlight are highlighted and areas that would be in shadow are shaded. Shaded relief images are generated from an elevation surface, alone or in combination with an .img file draped over the terrain.

It is important to note that the relief program identifies shadowed areas—i.e., those that are not in direct sun. It does not calculate the shadow that is cast by topographic features onto the surrounding surface.

For example, a high mountain with sunlight coming from the northwest would be symbolized as follows in shaded relief. Only the portions of the mountain that would be in shadow from a northwest light would be shaded. The software would not simulate a shadow that the mountain would cast on the southeast side.

Figure 152: Shaded Relief

Shaded relief images are an effective graphic tool. They can also be used in analysis, e.g., of snow melt over an area spanned by an elevation surface. A series of relief images can be generated to simulate the movement of the sun over the landscape. Snow melt rates can then be estimated for each pixel based on the amount of time it spends in sun or shadow. Shaded relief images can also be used to enhance subtle detail in gray scale images such as aeromagnetic, radar, gravity maps, etc.

Use the Shaded Relief function in Image Interpreter to generate a relief image.



In calculating relief, the software compares the user-specified sun position and angle with the angle each pixel faces. Each pixel is assigned a value between -1 and +1 to indicate the amount of light reflectance at that pixel.

• Negative numbers and zero values represent shadowed areas.

• Positive numbers represent sunny areas, with +1 assigned to the areas of highest reflectance.

The reflectance values are then applied to the original pixel values to get the final result. All negative values are set to 0 or to the minimum light level specified by the user. These indicate shadowed areas. Light reflectance in sunny areas falls within a range of values depending on whether the pixel is directly facing the sun or not. (In the example above, pixels facing northwest would be the brightest. Pixels facing north-northwest and west-northwest would not be quite as bright.)

In a relief file that includes an .img file along with the elevation surface, the surface reflectance values are multiplied by the color lookup values for the .img file.
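A minimal sketch of a per-pixel reflectance surface of this kind is shown below, assuming slope and aspect arrays have already been derived from the DEM. It uses a common hillshade-style formulation and is not the ERDAS IMAGINE Shaded Relief algorithm; the sun position values are placeholders.

    import numpy as np

    def shaded_relief(slope_deg, aspect_deg, sun_elevation=30.0, sun_azimuth=315.0,
                      min_light=0.0):
        """Per-pixel reflectance in [-1, +1]; negative (shadowed) values are
        clipped to a user-specified minimum light level."""
        zenith = np.radians(90.0 - sun_elevation)
        slope = np.radians(slope_deg)
        rel_az = np.radians(sun_azimuth - aspect_deg)
        reflectance = (np.cos(zenith) * np.cos(slope)
                       + np.sin(zenith) * np.sin(slope) * np.cos(rel_az))
        return np.maximum(reflectance, min_light)

    # Draping: multiply the reflectance surface into the image brightness values.
    # relief = shaded_relief(slope, aspect, sun_elevation=30, sun_azimuth=315)
    # draped = image * relief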


Topographic Normalization

Digital imagery from mountainous regions often contains a radiometric distortion known as topographic effect. Topographic effect results from the differences in illumination due to the angle of the sun and the angle of the terrain. This causes a variation in the image brightness values. Topographic effect is a combination of:

• incident illumination — the orientation of the surface with respect to the rays of the sun

• exitance angle — the amount of reflected energy as a function of the slope angle

• surface cover characteristics — rugged terrain with high mountains or steep slopes (Hodgson and Shelley 1993)

One way to reduce topographic effect in digital imagery is by applying transformations based on the Lambertian or Non-Lambertian reflectance models. These models normalize the imagery, making it appear as if it were a flat surface.

The Topographic Normalize function in Image Interpreter uses a Lambertian Reflectance model to normalize topographic effect in VIS/IR imagery.

When using the Topographic Normalization model, the following information is needed:

• solar elevation and azimuth at time of image acquisition

• DEM file

• original imagery file (after atmospheric corrections)

Lambertian Reflectance Model

The Lambertian Reflectance model assumes that the surface reflects incident solar energy uniformly in all directions, and that variations in reflectance are due to the amount of incident radiation.

The following equation produces normalized brightness values (Colby 1991, Smith et al 1980):

BVnormal λ = BVobserved λ / cos i

where:

BVnormal λ = normalized brightness values
BVobserved λ = observed brightness values
cos i = cosine of the incidence angle


Incidence Angle

The incidence angle is defined from:

cos i = cos (90 - θs) cos θn + sin (90 - θs) sin θn cos (φs - φn)

where:

i = the angle between the solar rays and the normal to the surface
θs = the elevation of the sun
φs = the azimuth of the sun
θn = the slope of each surface element
φn = the aspect of each surface element

If the surface has a slope of 0 degrees, then aspect is undefined and i is simply 90 - θs.
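A minimal sketch of the Lambertian correction, combining the incidence-angle equation with BVnormal = BVobserved / cos i, is given below. It assumes the slope, aspect, and brightness values are NumPy arrays and is illustrative only, not the Topographic Normalize implementation; clipping cos i away from zero is an added safeguard, not part of the published equation.

    import numpy as np

    def lambertian_normalize(bv_observed, slope_deg, aspect_deg,
                             sun_elevation, sun_azimuth):
        """BVnormal = BVobserved / cos i, with cos i from the incidence-angle equation."""
        theta_s = np.radians(sun_elevation)     # elevation of the sun
        phi_s = np.radians(sun_azimuth)         # azimuth of the sun
        theta_n = np.radians(slope_deg)         # slope of each surface element
        phi_n = np.radians(aspect_deg)          # aspect of each surface element

        cos_i = (np.cos(np.pi / 2 - theta_s) * np.cos(theta_n)
                 + np.sin(np.pi / 2 - theta_s) * np.sin(theta_n)
                 * np.cos(phi_s - phi_n))
        cos_i = np.clip(cos_i, 1e-6, None)      # avoid dividing by near-zero values
        return bv_observed / cos_i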

Non-Lambertian Model

Minnaert (1961) proposed that the observed surface does not reflect incident solar energy uniformly in all directions. Instead, he formulated the Non-Lambertian model, which takes into account variations in the terrain. This model, although more computationally demanding than the Lambertian model, may present more accurate results.

In a Non-Lambertian Reflectance model, the following equation is used to normalize the brightness values in the image (Colby 1991, Smith et al 1980):

BVnormal λ = (BVobserved λ cos e) / (cos^k i cos^k e)

where:

BVnormal λ = normalized brightness values
BVobserved λ = observed brightness values
cos i = cosine of the incidence angle
cos e = cosine of the exitance angle, or slope angle
k = the empirically derived Minnaert constant

Minnaert Constant

The Minnaert constant (k) may be found by regressing a set of observed brightness values from the remotely sensed imagery with known slope and aspect values, provided that all the observations in this set are the same type of land cover. The k value is the slope of the regression line (Hodgson and Shelley 1993):

log (BVobserved λ cos e) = log BVnormal λ + k log (cos i cos e)
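The regression and the resulting correction can be sketched as follows, assuming cos i and cos e have already been computed for a set of sample pixels of a single land cover type. This is a hedged illustration of the equations above, not an ERDAS IMAGINE function.

    import numpy as np

    def minnaert_k(bv_observed, cos_i, cos_e):
        """Estimate k as the slope of the regression
        log(BV cos e) = log(BVnormal) + k log(cos i cos e)."""
        x = np.log(cos_i * cos_e)
        y = np.log(bv_observed * cos_e)
        k, intercept = np.polyfit(x, y, 1)      # slope of the regression line
        return k

    def non_lambertian_normalize(bv_observed, cos_i, cos_e, k):
        """BVnormal = (BVobserved cos e) / (cos^k i cos^k e)."""
        return (bv_observed * cos_e) / (cos_i**k * cos_e**k)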

Use the Spatial Modeler to create a model based on the Non-Lambertian Model.

NOTE: The Non-Lambertian model does not detect surfaces that are shadowed by intervening topographic features between each pixel and the sun. For these areas, a line-of-sight algorithm will identify such shadowed pixels.


CHAPTER 10
Geographic Information Systems

Introduction

The beginnings of geographic information systems (GIS) can legitimately be traced back to the beginnings of man. The earliest known map dates back to 2500 B.C., but there were probably maps before that. Since then, man has been continually improving the methods of conveying spatial information. The late eighteenth century brought the use of map overlays to show troop movements in the Revolutionary War. This could be considered an early GIS. The first British census in 1825 led to the science of demography, another application for GIS. During the 1800s, many different cartographers and scientists were all discovering the power of overlays to convey multiple levels of information about an area (Star and Estes 1990).

Frederick Law Olmsted has long been considered the father of Landscape Architecture for his pioneering work in the early 20th century. Many of the methods Olmsted used in Landscape Architecture also involved the use of hand-drawn overlays. This type of analysis was beginning to be used for a much wider range of applications, such as change detection, urban planning, and resource management (Rado 1992).

The first system to be called a GIS was the Canadian Geographic Information System, developed in 1962 by Roger Tomlinson of the Canada Land Inventory. Unlike earlier systems that were developed for a specific application, this system was designed to store digitized map data and land-based attributes in an easily accessible format for all of Canada. This system is still in operation today (Parent and Church 1987).

In 1969, Ian McHarg's influential work, Design with Nature, was published. This work on land suitability/capability analysis (SCA), a system designed to analyze many data layers to produce a plan map, discussed the use of overlays of spatially referenced data layers for resource planning and management (Star and Estes 1990).

The era of modern GIS really started in the 1970s, as analysts began to program computers to automate some of the manual processes. Software companies like ESRI (Redlands, CA) and ERDAS developed software packages that could input, display, and manipulate geographic data to create new layers of information. The steady advances in features and power of the hardware over the last ten years and the decrease in hardware costs have made GIS technology accessible to a wide range of users. The growth rate of the GIS industry in the last several years has exceeded even the most optimistic projections.


Today, a geographic information system (or GIS) is a unique system designed to input, store, retrieve, manipulate, and analyze layers of geographic data to produce interpretable information. A GIS should also be able to create reports and maps (Marble 1990). The GIS data base may include computer images, hardcopy maps, statistical data, or any other data that is needed in a study. Although the term GIS is commonly used to describe software packages, a true GIS includes knowledgeable staff, a training program, budgets, marketing, hardware, data, and software (Walker and Miller 1990). GIS technology can be used in almost any geography-related discipline, from Landscape Architecture to natural resource management to transportation routing.

The central purpose of a GIS is to turn geographic data into useful information—the answers to real-life questions—questions such as:

• How will we monitor the influence of global climatic changes on the earth's resources?

• How should political districts be redrawn in a growing metropolitan area?

• Where is the best place for a shopping center that will be most convenient to shoppers and least harmful to the local ecology?

• What areas should be protected to ensure the survival of endangered species?

• How can communities be better prepared to face natural disasters, such as earthquakes, tornadoes, hurricanes, and floods?

Information vs. Data

Information, as opposed to data, is independently meaningful. It is relevant to a particular problem or question:

• "The land cover at coordinate N875250, E757261 has a data file value 8," is data.

• "Land cover with a value of 8 is on slopes too steep for development," is information.

The user can input data into a GIS and output information. The information the user wishes to derive determines the type of data that must be input. For example, if one is looking for a suitable refuge for bald eagles, zip code data is probably not needed, while land cover data may be useful.

For this reason, the first step in any GIS project is usually an assessment of the scope and goals of the study. Once the project is defined, the user can begin the process of building the data base. Although software and data are commercially available, a custom data base must be created for the particular project and study area. It must be designed to meet the needs of the organization and objectives. ERDAS IMAGINE provides all the tools required to build and manipulate a GIS data base.


Successful GIS implementation typically includes two major steps:

• data input

• analysis

Data input involves collecting the necessary data layers into the image data base. In the analysis phase, these data layers will be combined and manipulated in order to create new layers and to extract meaningful information from them. This chapter discusses these steps in detail.

Data Input

Acquiring the appropriate data for a project involves creating a data base of layers that encompass the study area. A data base created with ERDAS IMAGINE can consist of:

• continuous layers (satellite imagery, aerial photographs, elevation data, etc.)

• thematic layers (land use, vegetation, hydrology, soils, slope, etc.)

• vector layers (streets, utility and communication lines, parcels, etc.)

• statistics (frequency of an occurrence, population demographics, etc.)

• attribute data (characteristics of roads, land, imagery, etc.)

The ERDAS IMAGINE software package employs a hierarchical, object-oriented architecture that utilizes both raster imagery and topological vector data. Raster images are stored in .img files, and vector layers are coverages based on the ARC/INFO data model. The seamless integration of these two data types enables the user to reap the benefits of both data formats in one system.


Figure 153: Data Input

Raster data might be more appropriate in the following applications:

• site selection

• natural resource management

• petroleum exploration

• mission planning

• change detection

On the other hand, vector data may be better suited for these applications:

• urban planning

• tax assessment and planning

• traffic engineering

• facilities management

The advantage of an integrated raster and vector system such as ERDAS IMAGINE is that one data structure does not have to be chosen over the other. Both data formats can be used and the functions of both types of systems can be accessed. Depending upon the project, only raster or vector data may be needed, but most applications benefit from using both.

Figure 153 depicts raster data input (Landsat TM, SPOT panchromatic, aerial photograph, soils data, land cover) with raster attributes, and vector data input (roads, census data, ownership parcels, political boundaries, landmarks) with vector attributes, all feeding the GIS analyst using ERDAS IMAGINE.


Themes and Layers

A data base usually consists of files with data of the same geographical area, with each file containing different types of information. For example, a data base for the city recreation department might include files of all the parks in the area. These files might depict park boundaries, county and municipal boundaries, vegetation types, soil types, drainage basins, slope, roads, etc. Each of these files contains different information—each is a different theme. The concept of themes has evolved from early GISs, in which transparent overlays were created for each theme and combined (overlaid) in different ways to derive new information.

A single theme may require more than a simple raster or vector file to fully describe it. In addition to the image, there may be attribute data that describe the information, a color scheme, or meaningful annotation for the image. The full collection of data that describe a certain theme is called a layer.

Depending upon the goals of a project, it may be helpful to combine several themes into one layer. For example, if a user wanted to propose a new park site, he or she might create one layer that shows roads, land cover, land ownership, slope, etc., and indicate through the use of colors and/or annotation which areas would be best for the new site. This one layer would then include many separate themes. Much of GIS analysis is concerned with combining individual themes into one or more layers that answer the questions driving the analysis. This chapter explores these analysis techniques.


Continuous Layers

Continuous raster layers are quantitative (measuring a characteristic) and have related, continuous values. Continuous raster layers can be multiband (e.g., Landsat TM) or single band (e.g., SPOT panchromatic).

Satellite images, aerial photographs, elevation data, scanned maps, and other continuous raster layers can be incorporated into a data base and provide a wealth of information that is not available in thematic layers or vector layers. In fact, these layers often form the foundation of the data base. Extremely accurate base maps can be created from rectified satellite images or aerial photographs. Then, all other layers that are added to the data base can be registered to this base map.

Once used only for image processing, continuous data are now being incorporated into GIS data bases and used in combination with thematic data to influence processing algorithms or as backdrop imagery on which to display the results of analyses. Current satellite data and aerial photographs are also effective in updating outdated vector data. The vectors can be overlaid on the raster backdrop and updated dynamically to reflect new or changed features, such as roads, utility lines, or land use. This chapter will explore the many uses of continuous data in a GIS.

See "CHAPTER 1: Raster Data" for more information on continuous data.

Thematic Layers

Thematic data are typically represented as single layers of information stored as .img files and containing discrete classes. Classes are simply categories of pixels which represent the same condition. An example of a thematic layer is a vegetation classification with discrete classes representing coniferous forest, deciduous forest, wetlands, agriculture, urban, etc.

A thematic layer is sometimes called a variable, because it represents one of many characteristics about the study area. Since thematic layers usually have only one "band," they are usually displayed in pseudo color mode, where particular colors are often assigned to help others visualize the information. For example, blues are usually used for water features, greens for healthy vegetation, etc.

See "CHAPTER 4: Image Display" for information on pseudo color display.


Class Numbering Systems

As opposed to the data file values of continuous raster layers, which are generally multiband and statistically related, the data file values of thematic raster layers can have a nominal, ordinal, interval, or ratio relationship (Star and Estes 1990).

• Nominal classes represent categories with no particular order. Usually, these are characteristics that are not associated with quantities (e.g., soil type or political area).

• Ordinal classes are those that have a sequence, such as "poor," "good," "better," and "best." An ordinal class numbering system is often created from a nominal system, in which classes have been ranked by some criteria. In the case of the recreation department data base used in the previous example, the final layer may rank the proposed park sites according to their overall suitability.

• Interval classes also have a natural sequence, but the distance between each value is meaningful as well. This numbering system might be used for temperature data.

• Ratio classes differ from interval classes only in that ratio classes have a natural zero point, such as rainfall amounts.

The variable being analyzed and the way that it contributes to the final product determines the class numbering system used in the thematic layers. Layers that have one numbering system can easily be recoded to a new system. This is discussed in detail under "Recoding" on page 378.

Classification

Thematic layers can be generated from remotely sensed data (e.g., Landsat TM, SPOT) by using the ERDAS IMAGINE Image Interpreter, Classification, and Spatial Modeler tools. A frequent and popular application is the creation of land cover classification schemes through the use of both supervised (user-assisted) and unsupervised (automatic) pattern-recognition algorithms contained within ERDAS IMAGINE. The output is a single thematic layer which represents specific classes based on the approach selected.

See "CHAPTER 6: Classification" for more information.

Vector Data Converted to Raster Format

Vector layers can be converted to raster format if the raster format is more appropriate for an application. Typical vector layers, such as communication lines, streams, boundaries, and other linear features, can easily be converted to raster format within ERDAS IMAGINE for further analysis.

Use the Vector Utilities menu from the Vector icon in the IMAGINE icon panel to convert vector layers to raster format.


Other sources of raster data are discussed in "CHAPTER 3: Raster and Vector Data Sources".

Statistics

Both continuous and thematic layers include statistical information. Thematic layers contain the following information:

• a histogram of the data values, which is the total number of pixels in each class

• a list of class names that correspond to class values

• a list of class values

• a color table, stored as brightness values in red, green, and blue, which make up the colors of each class when the layer is displayed

For thematic data, these statistics are called attributes and may be accompanied by many other types of information, as described below.

Use the Image Information option in the ERDAS IMAGINE icon panel to generate or update statistics for .img files.

See "CHAPTER 1: Raster Data" for more information about the statistics stored with continuous layers.


Vector Layers

The vector layers used in ERDAS IMAGINE are based on the ARC/INFO data model and consist of points, lines, and polygons. These layers are topologically complete, meaning that the spatial relationships between features are maintained. Vector layers can be used to represent transportation routes, utility corridors, communication lines, tax parcels, school zones, voting districts, landmarks, population density, etc. Vector layers can be analyzed independently or in combination with continuous and thematic raster layers.

Vector data can be acquired from several private and governmental agencies. Vector data can also be created in ERDAS IMAGINE by digitizing on the screen, using a digitizing tablet, or converting other data types to vector format.

See "CHAPTER 2: Vector Layers" for more information on the characteristics of vector data.

Attributes

Text and numerical data that are associated with the classes of a thematic layer or the features in a vector layer are called attributes. This information can take the form of character strings, integer numbers, or floating point numbers. Attributes work much like the data that are handled by data base management software. The user may define fields, which are categories of information about each class. A record is the set of all attribute data for one class. Each record is like an index card, containing information about one class or feature, in a file of many index cards, which contain similar information for the other classes or features.

Attribute information for raster layers is stored in the image (.img) file. Vector attribute information is stored in an INFO file. In both cases, there are fields that are automatically generated by the software, but more fields can be added as needed to fully describe the data. Both are viewed in ERDAS IMAGINE CellArrays, which allow the user to display and manipulate the information. However, raster and vector attributes are handled slightly differently, so a separate section on each follows.

Raster Attributes

In ERDAS IMAGINE, raster attributes for .img files are accessible from the Raster Attribute Editor. The Raster Attribute Editor contains a CellArray, which is similar to a table or spreadsheet that not only displays the information, but includes options for importing, exporting, copying, editing, and other operations.

Figure 154 shows the attributes for a land cover classification layer.


Figure 154: Raster Attributes for lnlandc.img

Most thematic layers contain the following attribute fields:

• Class Name

• Class Value

• Color table (red, green, and blue values)

• Opacity percentage

• Histogram (number of pixels in the file that belong to the class)

As many additional attribute fields as needed can be defined for each class.

See "CHAPTER 6: Classification" for more information about the attribute information that is automatically generated when new thematic layers are created in the classification process.

Viewing Raster Attributes

Simply viewing attribute information can be a valuable analysis tool. Depending on the type of information associated with the layers of a data base, processing may be further refined by comparing the attributes of several files. When both the raster layer and its associated attribute information are displayed, the user can select features in one using the other. For example, to locate the class name associated with a particular polygon in a displayed image, simply click on that polygon with the mouse and that row is highlighted in the Raster Attribute Editor.

Attribute information is accessible in several places throughout ERDAS IMAGINE. In some cases it is read-only and in other cases it is a fully functioning editor, allowing the information to be modified.


Manipulating Raster Attributes

The applications for manipulating attributes are as varied as the applications for GIS. The attribute information in a data base will depend on the goals of the project. Some of the attribute editing capabilities in ERDAS IMAGINE include:

• import/export ASCII information to and from other software packages, such as spreadsheets and word processors

• cut, copy, and paste individual cells, rows, or columns to and from the same Raster Attribute Editor or among several Raster Attribute Editors

• generate reports that include all or a subset of the information in the Raster Attribute Editor

• use formulas to populate cells

• directly edit cells by entering in new information

The Raster Attribute Editor in ERDAS IMAGINE also includes a color cell column, so that class (object) colors can be viewed or changed. In addition to direct user manipulation, attributes can be changed by other programs. For example, some of the Image Interpreter functions calculate statistics that are automatically added to the Raster Attribute Editor. Models that read and/or modify attribute information can also be written.

See "CHAPTER 5: Enhancement" for more information on the Image Interpreter. There is more information on GIS modeling, starting on page 383.


Vector Attributes

Vector attributes are stored in the Vector Attributes CellArrays. The user can simply view attributes or use them to:

• select features in a vector layer for further processing

• determine how vectors are symbolized

• label features

Figure 155 shows the attributes for a roads layer.

Figure 155: Vector Attributes CellArray

See "CHAPTER 2: Vector Layers" for more information about vector attributes.


Analysis

ERDAS IMAGINE Analysis Tools

In ERDAS IMAGINE, GIS analysis functions and algorithms are accessible through three main tools:

• script models created with the Spatial Modeler Language

• graphical models created with Model Maker

• pre-packaged functions in Image Interpreter

Spatial Modeler Language

The Spatial Modeler Language is the basis for all ERDAS IMAGINE GIS functions and it is the most powerful. It is a modeling language that enables the user to create script (text) models for a variety of applications. Models may be used to create custom algorithms that best suit the user's data and objectives.

Model Maker

Model Maker is essentially the Spatial Modeler Language linked to a graphical interface. This enables the user to create graphical models using a palette of easy-to-use tools. Graphical models can be run, edited, saved in libraries, or converted to script form and edited further, using the Spatial Modeler Language.

NOTE: References to the Spatial Modeler in this chapter mean that the named procedure can be accomplished using both Model Maker and the Spatial Modeler Language.

Image Interpreter

The Image Interpreter houses a set of common functions that were all created using either Model Maker or the Spatial Modeler Language. They have been given a dialog interface to match the other processes in ERDAS IMAGINE. In most cases, these processes can be run from a single dialog. However, the actual models are also provided with the software to enable customized processing.

Many of the functions described in the following sections can be accomplished using any of these tools. Model Maker is also easy to use and requires many of the same steps that would be performed when drawing a flow chart of an analysis. The Spatial Modeler Language is intended for more advanced analyses, and has been designed using natural language commands and simple syntax rules. Some applications may require a combination of these tools.

Customizing ERDAS IMAGINE Tools

ERDAS Macro Language (EML) enables the user to create and add new and/or customized dialogs. If new capabilities are needed, they can be created with the C Programmers' Toolkit. Using these tools, a GIS that is completely customized to a specific application and its preferences can be created.

The ERDAS Macro Language and the C Programmers' Toolkit are part of the ERDAS IMAGINE Developers' Toolkit.


See the ERDAS IMAGINE On-Line Help for more information about the Developers’ Toolkit.

Analysis Procedures

Once the data base (layers and attribute data) is assembled, the layers can be analyzed and new information extracted. Some information can be extracted simply by looking at the layers and visually comparing them to other layers. However, new information can be retrieved by combining and comparing layers using the procedures outlined below:

• Proximity analysis — the process of categorizing and evaluating pixels based on their distances from other pixels in a specified class or classes.

• Contiguity analysis — enables the user to identify regions of pixels in the same class and to filter out small regions.

• Neighborhood analysis — any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning. This is similar to the convolution filtering performed on continuous data. Several types of analyses can be performed, such as boundary, density, mean, sum, etc.

• Recoding — enables the user to assign new class values to all or a subset of the classes in a layer.

• Overlaying — creates a new file with either the maximum or minimum value of the input layers.

• Indexing — adds the values of the input layers.

• Matrix analysis — outputs the coincidence of values in the input layers.

• Graphical modeling — enables the user to combine data layers in an unlimited number of ways. For example, an output layer created from modeling can represent the desired combination of themes from many input layers.

• Script modeling — offers all of the capabilities of graphical modeling with the ability to perform more complex functions, such as conditional looping.

Using an Area of Interest (AOI)

Any of these functions can be performed on a single layer or multiple layers. The user can also select a particular area of interest (AOI) that is defined in a separate file (AOI layer, thematic raster layer, or vector layer) or an area of interest that is selected immediately preceding the operation by entering specific coordinates or by selecting the area in a Viewer.


Proximity Analysis

Many applications require some measurement of distance, or proximity. For example, a real estate developer would be concerned with the distance between a potential site for a shopping center and an interchange to a major highway.

A proximity analysis determines which pixels of a layer are located at specified distances from pixels in a certain class or classes. A new thematic layer (.img file) is created, which is categorized by the distance of each pixel from specified classes of the input layer. This new file then becomes a new layer of the data base and provides a buffer zone around the specified class(es). In further analysis, it may be beneficial to weight other factors, based on whether they fall in or outside the buffer zone.

Figure 156 shows a layer containing lakes and streams and the resulting layer after a proximity analysis is run to create a buffer zone around all of the water features.

Figure 156: Proximity Analysis

Use the Search (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform a proximity analysis.
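The idea of categorizing pixels by distance from a target class can be sketched with a Euclidean distance transform, as below. This uses the SciPy library rather than the Search function, and the class values, cell size, and distance bands are placeholders for illustration only.

    import numpy as np
    from scipy import ndimage

    def proximity_buffer(thematic, target_classes, cell_size=30.0,
                         distances=(100.0, 300.0)):
        """Classify each pixel by its distance (map units) from the nearest pixel
        of the target classes: 0 = closer than distances[0], 1 = between the
        first and second distances, and so on."""
        targets = np.isin(thematic, target_classes)
        # distance from every pixel to the nearest target pixel, in map units
        dist = ndimage.distance_transform_edt(~targets) * cell_size
        return np.digitize(dist, bins=distances)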


Contiguity Analysis

A contiguity analysis is concerned with the ways in which pixels of a class are grouped together. Groups of contiguous pixels in the same class, called raster regions, or "clumps," can be identified by their sizes and manipulated. One application of this tool would be an analysis for locating helicopter landing zones that require at least 250 contiguous pixels at 10 meter resolution.

Contiguity analysis can be used to:

• further divide a large class into separate raster regions, or

• eliminate raster regions that are too small to be considered for an application.

Filtering Clumps

In cases where very small clumps are not useful, they can be filtered out according to their sizes. This is sometimes referred to as eliminating the "salt and pepper" effects, or "sieving." In Figure 157, all of the small clumps in the original (clumped) layer are eliminated.

Figure 157: Contiguity Analysis

Use the Clump and Sieve (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform contiguity analysis.
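A minimal sketch of clumping and sieving with connected-component labeling follows; it uses SciPy's ndimage module rather than the Clump and Sieve functions, and the class value and minimum size are placeholders drawn from the helicopter landing zone example.

    import numpy as np
    from scipy import ndimage

    def clump_and_sieve(class_layer, class_value, min_pixels=250):
        """Return a boolean layer marking contiguous regions ("clumps") of one
        class that contain at least min_pixels pixels."""
        mask = class_layer == class_value
        clumps, n = ndimage.label(mask)                   # label raster regions
        sizes = ndimage.sum(mask, clumps, range(1, n + 1))
        keep = np.flatnonzero(sizes >= min_pixels) + 1    # labels of regions to keep
        return np.isin(clumps, keep)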


Neighborhood Analysis

With a process similar to the convolution filtering of continuous raster layers, thematic raster layers can also be filtered. The GIS filtering process is sometimes referred to as "scanning," but is not to be confused with data capture via a digital camera. Neighborhood analysis is based on local or neighborhood characteristics of the data (Star and Estes 1990).

Every pixel is analyzed spatially, according to the pixels that surround it. The number and the location of the surrounding pixels is determined by a scanning window, which is defined by the user. These operations are known as focal operations. The scanning window can be:

• circular, with a maximum diameter of 512 pixels

• doughnut-shaped, with a maximum outer radius of 256

• rectangular, up to 512 × 512 pixels, with the option to mask out certain pixels

Use the Neighborhood (GIS Analysis) function in Image Interpreter or Spatial Modeler to perform neighborhood analysis. The scanning window used in Image Interpreter can be 3 × 3, 5 × 5, or 7 × 7. The scanning window in Model Maker is user-defined and can be up to 512 × 512.

Defining Scan Area

The user may define the area of the file to be scanned. The scanning window will move only through this area as the analysis is performed. The area may be defined in one or all of the following ways:

• Specify a rectangular portion of the file to scan. The output layer will contain only the specified area.

• Specify an area of interest that is defined by an existing AOI layer, an annotation overlay, or a vector layer. The area(s) within the polygon will be scanned, and the other areas will remain the same. The output layer will be the same size as the input layer or the selected rectangular portion.

• Specify a class or classes in another thematic layer to be used as a mask. The pixels in the scanned layer that correspond to the pixels of the selected class or classes in the mask layer will be scanned, while the other pixels will remain the same.


Figure 158: Using a Mask

In Figure 158, class 2 in the mask layer was selected for the mask. Only the corresponding (shaded) pixels in the target layer will be scanned—the other values will remain unchanged.

Neighborhood analysis creates a new thematic layer. There are several types of analysis that can be performed upon each window of pixels, as described below:

• Boundary — detects boundaries between classes. The output layer contains only boundary pixels. This is useful for creating boundary or edge lines from classes, such as a land/water interface.

• Density — outputs the number of pixels that have the same class value as the center (analyzed) pixel. This is also a measure of homogeneity (sameness), based upon the analyzed pixel. This is often useful in assessing vegetation crown closure.

• Diversity — outputs the number of class values that are present within the window. Diversity is also a measure of heterogeneity (difference).

• Majority — outputs the class value that represents the majority of the class values in the window. The value is user-defined. This option operates like a low-frequency filter to clean up a "salt and pepper" layer.

• Maximum — outputs the greatest class value within the window. This can be used to emphasize classes with the higher class values or to eliminate linear features or boundaries.

• Mean — averages the class values. If class values represent quantitative data, then this option can work like a convolution filter. This is mostly used on ordinal or interval data.

• Median — outputs the statistical median of the class values in the window. This option may be useful if class values represent quantitative data.

• Minimum — outputs the least or smallest class value within the window. The value is user-defined. This can be used to emphasize classes with the low class values.


• Minority — outputs the least common of the class values that are within the window. This option can be used to identify the least common classes. It can also be used to highlight disconnected linear features.

• Rank — outputs the number of pixels in the scan window whose value is less than the center pixel.

• Standard deviation — outputs the standard deviation of class values in the window.

• Sum — totals the class values. In a file where class values are ranked, totaling enables the user to further rank pixels based on their proximity to high-ranking pixels.

Figure 159: Sum Option of Neighborhood Analysis (Image Interpreter)

In Figure 159, the Sum option of Neighborhood (Image Interpreter) is applied to a 3 × 3 window of pixels in the input layer. In the output layer, the analyzed pixel will be given a value based on the total of all of the pixels in the window.

The analyzed pixel is always the center pixel of the scanning window. In this example, only the pixel in the third column and third row of the file is "summed."

Input layer:

2  2  2  2  2
8  8  2  2  2
6  6  8  2  2
6  6  6  8  2
6  6  6  6  8

Output of one iteration of the sum operation (for the pixel in the third column and third row):

8 + 6 + 6 + 2 + 8 + 6 + 2 + 2 + 8 = 48
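A brute-force sketch of the focal sum is shown below for illustration; it is not the Neighborhood function, edge pixels are simply left unchanged, and the example grid is the one shown above.

    import numpy as np

    def focal_sum(layer, size=3):
        """Sum of the class values in a size x size scanning window
        centered on each analyzed pixel (edge pixels are left as-is)."""
        out = layer.astype(float).copy()
        half = size // 2
        rows, cols = layer.shape
        for r in range(half, rows - half):
            for c in range(half, cols - half):
                out[r, c] = layer[r - half:r + half + 1, c - half:c + half + 1].sum()
        return out

    values = np.array([[2, 2, 2, 2, 2],
                       [8, 8, 2, 2, 2],
                       [6, 6, 8, 2, 2],
                       [6, 6, 6, 8, 2],
                       [6, 6, 6, 6, 8]])
    print(focal_sum(values)[2, 2])   # 48, as in the example above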


Recoding

Class values can be recoded to new values. Recoding involves the assignment of new values to one or more classes. Recoding is used to:

• reduce the number of classes

• combine classes

• assign different class values to existing classes

When an ordinal, ratio, or interval class numbering system is used, recoding can be used to assign classes to appropriate values. Recoding is often performed to make later steps easier. For example, in creating a model that will output "good," "better," and "best" areas, it may be beneficial to recode the input layers so all of the "best" classes have the highest class values.

In the following example (Table 21), a land cover layer is recoded so that the most environmentally sensitive areas (Riparian and Wetlands) have higher class values.

Use the Recode (GIS Analysis) function in Image Interpreter or Spatial Modeler to recode layers.

Table 21: Example of a Recoded Land Cover Layer

Value New Value Class Name

0 0 Background

1 4 Riparian

2 1 Grassland and Scrub

3 1 Chaparral

4 4 Wetlands

5 1 Emergent Vegetation

6 1 Water
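A recode of this kind amounts to a simple lookup-table substitution. The sketch below, illustrative only, applies the Table 21 recode to a thematic layer held as a NumPy array.

    import numpy as np

    # Recode table from Table 21 (original class value -> new class value).
    recode_table = {0: 0, 1: 4, 2: 1, 3: 1, 4: 4, 5: 1, 6: 1}

    def recode(layer, table):
        """Assign new class values to a thematic layer through a lookup table."""
        lut = np.arange(max(int(layer.max()), max(table)) + 1)
        for old, new in table.items():
            lut[old] = new
        return lut[layer]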


Overlaying

Thematic data layers can be overlaid to create a composite layer. The output layer contains either the minimum or the maximum class values of the input layers. For example, if an area was in class 5 in one layer, and in class 3 in another, and the maximum class value dominated, then the same area would be coded to class 5 in the output layer, as shown in Figure 160.

Figure 160: Overlay

The application example in Figure 160 shows the result of combining two layers—slope and land use. The slope layer is first recoded to combine all steep slopes into one value. When overlaid with the land use layer, the highest data file values (the steep slopes) dominate in the output layer.

Use the Overlay (GIS Analysis) function in Image Interpreter or Spatial Modeler to overlay layers.

Basic Overlay Application Example (Figure 160): the original slope layer (1-5 = flat slopes, 6-9 = steep slopes) is recoded to 0 = flat slopes and 9 = steep slopes, then overlaid with a land use layer (1 = commercial, 2 = residential, 3 = forest, 4 = industrial, 5 = wetlands). In the overlay composite, the land use classes remain where slopes are flat, and class 9 (steep slopes) dominates elsewhere, masking the land use.
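A maximum overlay of this kind can be expressed in one line of array code. The sketch below is illustrative only, with the recode threshold taken from the example legend above.

    import numpy as np

    def overlay_max(*layers):
        """Composite layer holding the maximum class value of the input layers."""
        return np.maximum.reduce([np.asarray(layer) for layer in layers])

    # Following Figure 160: recode the slope layer so steep slopes become 9,
    # then let the steep slopes dominate the land use classes.
    # steep = np.where(slope >= 6, 9, 0)
    # composite = overlay_max(land_use, steep)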


Indexing

Thematic layers can be indexed (added) to create a composite layer. The output layer contains the sums of the input layer values. For example, the intersection of class 3 in one layer and class 5 in another would result in class 8 in the output layer, as shown in Figure 161.

Figure 161: Indexing

The application example in Figure 161 shows the result of indexing. In this example, the user wants to develop a new subdivision, and the most likely sites are where there is the best combination (highest value) of good soils, good slope, and good access. Since good slope is a more critical factor to the user than good soils or good access, a weighting factor is applied to the slope layer. A weighting factor has the effect of multiplying all input values by some constant. In this example, slope is given a weight of 2.

Use the Index (GIS Analysis) function in the Image Interpreter or Spatial Modeler to index layers.

Basic Index Application Example (Figure 161): a soils layer (9 = good, 5 = fair, 1 = poor) with a weighting of ×1, a slope layer (9 = good, 5 = fair, 1 = poor) with a weighting of ×2, and an access layer (9 = good, 5 = fair, 1 = poor) with a weighting of ×1 are added to calculate the output values. For example, a pixel rated good in all three layers receives 9 + (9 × 2) + 9 = 36 in the output layer.
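The weighted sum can be sketched as follows; this is illustrative only, with the layers and weights taken from the example above.

    import numpy as np

    def index_layers(layers, weights):
        """Add thematic layers after multiplying each by its weighting factor."""
        return sum(w * np.asarray(layer) for layer, w in zip(layers, weights))

    # Following Figure 161: slope is twice as important as soils or access.
    # composite = index_layers([soils, slope, access], weights=[1, 2, 1])
    # A pixel rated good (9) in all three layers scores 9*1 + 9*2 + 9*1 = 36.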


Matrix Analysis

Matrix analysis produces a thematic layer that contains a separate class for every coincidence of classes in two layers. The output is best described with a matrix diagram.

In this diagram, the classes of the two input layers represent the rows and columns of the matrix. The output classes are assigned according to the coincidence of any two input classes.

All combinations of 0 and any other class are coded to 0, because 0 is usually the background class, representing an area that is not being studied.

Unlike overlaying or indexing, the resulting class values of a matrix operation are unique for each coincidence of two input class values. In this example, the output class value at column 1, row 3 is 11, and the output class at column 3, row 1 is 3. If these files were indexed (summed) instead of matrixed, both combinations would be coded to class 4.

Use the Matrix (GIS Analysis) function in Image Interpreter or Spatial Modeler to matrix layers.

                              input layer 2 data values (columns)
                               0    1    2    3    4    5

input layer 1          0       0    0    0    0    0    0
data values            1       0    1    2    3    4    5
(rows)                 2       0    6    7    8    9   10
                       3       0   11   12   13   14   15
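The unique coincidence coding in the diagram can be sketched as a simple arithmetic rule; the function below is an illustration of that rule, not the Matrix function, and assumes layer 2 has five non-zero classes as in the diagram.

    import numpy as np

    def matrix_analysis(layer1, layer2, n_cols=5):
        """Assign a unique output class to every coincidence of a layer 1 class
        (rows) and a layer 2 class (columns); combinations with 0 stay 0."""
        layer1 = np.asarray(layer1)
        layer2 = np.asarray(layer2)
        out = (layer1 - 1) * n_cols + layer2
        return np.where((layer1 == 0) | (layer2 == 0), 0, out)

    # layer 1 class 3 with layer 2 class 1 -> (3-1)*5 + 1 = 11, while
    # layer 1 class 1 with layer 2 class 3 -> 3, matching the diagram above.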


Modeling

Modeling is a powerful and flexible analysis tool. Modeling is the process of creating new layers from combining or operating upon existing layers. Modeling enables the user to create a small set of layers—perhaps even a single layer—which, at a glance, contains many types of information about the study area.

For example, if a user wants to find the best areas for a bird sanctuary, taking into account vegetation, availability of water, climate, and distance from highly developed areas, he or she would create a thematic layer for each of these criteria. Then, each of these layers would be input to a model. The modeling process would create one thematic layer, showing only the best areas for the sanctuary.

The set of procedures that define the criteria is called a model. In ERDAS IMAGINE, models can be created graphically and resemble a flow chart of steps, or they can be created using a script language. Although these two types of models look different, they are essentially the same—input files are defined, functions and/or operators are specified, and outputs are defined. The model is run and a new output layer(s) is created. Models can utilize analysis functions that have been previously defined, or new functions can be created by the user.

Use the Model Maker function in Spatial Modeler to create graphical models and the Spatial Modeler Language to create script models.

Data Layers

In modeling, the concept of layers is especially important. Before computers were used for modeling, the most widely used approach was to overlay registered maps on paper or transparencies, with each map corresponding to a separate theme. Today, digital files replace these hardcopy layers and allow much more flexibility for recoloring, recoding, and reproducing geographical information (Steinitz, Parker, and Jordan 1976).

In a model, the corresponding pixels at the same coordinates in all input layers are addressed as if they were physically overlaid like hardcopy maps.


Graphical Modeling

Graphical modeling enables the user to "draw" models using a palette of tools that defines inputs, functions, and outputs. This type of modeling is very similar to drawing flowcharts, in that the user identifies a logical flow of steps needed to perform the desired action. Through the extensive functions and operators available in the ERDAS IMAGINE graphical modeling program, the user can analyze many layers of data in very few steps, without creating intermediate files that occupy extra disk space. Modeling is performed using a graphical editor that eliminates the need to learn a programming language. Complex models can be developed easily and then quickly edited and re-run on another data set.

Use the Model Maker function in Spatial Modeler to create graphical models.

Image Processing and GIS

In ERDAS IMAGINE, the traditional GIS functions (e.g., neighborhood analysis, proximity analysis, recode, overlay, index, etc.) can be performed in models, as well as image processing functions. Both thematic and continuous layers can be input into models that accomplish many objectives at once.

For example, suppose there is a need to assess the environmental sensitivity of an area for development. An output layer can be created that ranks most to least sensitive regions based on several factors, such as slope, land cover, and floodplain. To visualize the location of these areas, the output thematic layer can be overlaid onto a high resolution, continuous raster layer (e.g., SPOT panchromatic) that has had a convolution filter applied. All of this can be accomplished in a single model (as shown in Figure 162).


Figure 162: Graphical Model for Sensitivity Analysis

See the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on creating the environmental sensitivity model in Figure 162. Descriptions of all of the graphical models delivered with ERDAS IMAGINE are available in the On-Line Help.

Model Structure

A model created with Model Maker is essentially a flow chart that defines:

• the input image(s), matrix(ces), table(s), and scalar(s) to be analyzed

• calculations, functions, or operations to be performed on the input data

• the output image(s) to be created

The graphical models created in Model Maker all have the same basic structure: input, function, output. The number of inputs, functions, and outputs can vary, but the overall form remains constant. All components must be connected to one another before the model can be executed. The model on the left in Figure 163 is the most basic form. The model on the right is more complex, but it retains the same input/function/output flow.


Figure 163: Graphical Model Structure

Graphical models are stored in ASCII files with the .gmd extension. There are several sample graphical models delivered with ERDAS IMAGINE that can be used as is or edited for more customized processing.

See the On-Line Help for instructions on editing existing models.

Figure 163 panels: Basic Model (Input, Function, Output) and Complex Model (multiple inputs and functions feeding multiple outputs).


Model Maker Functions

The functions available in Model Maker are divided into 19 categories:

These functions are also available for script modeling.

Category Description

Analysis Includes convolution filtering, histogram matching, contrast stretch, principal components, and more.

Arithmetic Perform basic arithmetic functions including addition, subtraction, multiplication, division, factorial, and modulus.

Bitwise Use bitwise and, or, exclusive or, and not.

Boolean Perform logical functions including and, or, and not.

Color Manipulate colors to and from RGB (red, green, blue) and IHS (intensity, hue, saturation).

Conditional Run logical tests using conditional statements and either...if...or...otherwise.

Data Generation Create raster layers from map coordinates, column numbers, or row numbers. Create a matrix or table from a list of scalars.

Descriptor Read attribute information and map a raster through an attribute column.

Distance Perform distance functions, including proximity analysis.

Exponential Use exponential operators, including natural and common logarithmic, power, and square root.

Focal (Scan) Perform neighborhood analysis functions, including boundary, density, diversity, majority, mean, minority, rank, standard deviation, sum, and others.

Global Analyze an entire layer and output one value, such as diversity, maximum, mean, minimum, standard deviation, sum, and more.

Matrix Multiply, divide, and transpose matrices, as well as convert a matrix to a table and vice versa.

Other Includes over 20 miscellaneous functions for data type conversion, various tests, and other utilities.

Relational Includes equality, inequality, greater than, less than, greater than or equal, less than or equal, and others.

Statistical Includes density, diversity, majority, mean, rank, standard deviation, and more.

String Manipulate character strings.

Surface Calculate aspect and degree/percent slope and produce shaded relief.

Trigonometric Use common trigonometric functions, including sine/arcsine, cosine/arccosine, tangent/arctangent, and hyperbolic arcsine, arccosine, cosine, sine, and tangent.


See the ERDAS IMAGINE Tour Guides manual and the on-line Spatial Modeler Language manual for complete instructions on using Model Maker and more detailed information about the available functions and operators.

Objects

Within Model Maker, an object is an input to or output from a function. The four basic object types used in Model Maker are:

• raster

• scalar

• matrix

• table

Raster

A raster object is a single layer or set of layers. Rasters are typically used to specify and manipulate data from image (.img) files.

Scalar

A scalar object is a single numeric value. Scalars are often used as weighting factors.

Matrix

A matrix object is a set of numbers arranged in a two-dimensional array. A matrix has a fixed number of rows and columns. Matrices may be used to store convolution kernels or the neighborhood definition used in neighborhood functions. They can also be used to store covariance matrices, eigenvector matrices, or matrices of linear combination coefficients.
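For example, a matrix object can hold a 3 x 3 low-pass kernel that is then applied to a raster. The following fragment is only a minimal sketch in the Spatial Modeler Language; the CONVOLVE function name and the file names are illustrative assumptions, and the declaration forms simply follow the script shown later in Figure 165.

# Hypothetical sketch: store a 3 x 3 averaging kernel in a matrix object
# and apply it to an input raster (function name and paths are assumed).
INTEGER RASTER n1_input FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_kernel;
FLOAT RASTER n3_smooth FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/smooth.img";
SET CELLSIZE MIN;
SET WINDOW UNION;
# each weight is 1/9, so the kernel averages the 3 x 3 neighborhood
n2_kernel = MATRIX(3, 3:
  0.111, 0.111, 0.111,
  0.111, 0.111, 0.111,
  0.111, 0.111, 0.111);
n3_smooth = CONVOLVE ( $n1_input , $n2_kernel );
QUIT;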

Table

A table object is a series of numeric values or character strings. A table has one column and a fixed number of rows. Tables are typically used to store columns from the Raster Attribute Editor or a list of values which pertains to the individual layers of a set of layers. For example, a table with four rows could be used to store the maximum value from each layer of a four-layer image file. A table may consist of up to 32,767 rows. Information in the table can be attributes, calculated (e.g., histograms), or user-defined.

The graphics used in Model Maker to represent each of these objects are shown in Figure 164.


Figure 164: Modeling Objects

Data Types

The four object types described above may be any of the following data types:

• Binary — either 0 (false) or 1 (true)

• Integer — integer values from -2,147,483,648 to 2,147,483,647 (signed 32-bit integer)

• Float — floating point data (double precision)

• String — a character string (for table objects only)

Input and output data types do not have to be the same. Using the Spatial Modeler Language, the user can change the data type of input files before they are processed.
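As a hedged illustration, an integer input could be promoted to floating point before a division so that fractional values are not truncated. The FLOAT conversion function and the file names below are assumptions used only for this sketch.

# Hypothetical sketch: convert an integer raster to float before dividing.
INTEGER RASTER n1_counts FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/counts.img";
FLOAT RASTER n2_ratio FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/ratio.img";
SET CELLSIZE MIN;
SET WINDOW UNION;
# dividing the converted values keeps the fractional part of the result
n2_ratio = FLOAT ( $n1_counts ) / 255.0;
QUIT;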

Output Parameters

Since it is possible to have several inputs in one model, one can optionally define the working window and the pixel cell size of the output data.

Working Window

Raster layers of differing areas can be input into one model. However, the image area, or working window, must be specified in order to be used in the model calculations. Either of the following options can be selected:

• Union — the model will operate on the union of all input rasters. (This is the default.)

• Intersection — the model will use only the area of the rasters that is common to all input rasters.

Input layers must be referenced to the same coordinate system (e.g., Lat/Lon, UTM, State Plane, etc.).


Pixel Cell Size

Input rasters may also be of differing resolution (pixel size), so the user must select the output cell size as either:

• Minimum — the minimum cell size of the input layers will be used (this is the default setting).

• Maximum — the maximum cell size of the input layers will be used.

• Other — specify a new cell size.

Using Attributes in Models

With the criteria function in Model Maker, attribute data can be used to determine output values. The criteria function simplifies the process of creating a conditional statement. The criteria function can be used to build a table of conditions that must be satisfied to output a particular row value for an attribute (or cell value) associated with the selected raster.

The inputs to a criteria function are rasters. The columns of the criteria table represent either attributes associated with a raster layer or the layer itself, if the cell values are of direct interest. Criteria which must be met for each output column are entered in a cell in that column (e.g., >5). Multiple sets of criteria may be entered in multiple rows. The output raster will contain the first row number of a set of criteria that were met for a raster cell.

Example

For example, consider the sample thematic layer, parks.img, that contains the following attribute information:

A simple model could create one output layer that showed only the parks in need of repairs. The following logic would be coded into the model:

“If Turf Condition is not Good or Excellent, and if Path Condition is not Good or Excellent, then the output class value is 1. Otherwise, the output class value is 2.”
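In the Spatial Modeler Language, this kind of test can be written with the either...if...or...otherwise conditional rather than the criteria function. The fragment below is only a hypothetical sketch: it assumes the turf and path conditions have first been recoded into numeric rasters (1 = Poor, 2 = Fair, 3 = Good, 4 = Excellent), the file names are invented, and the output declaration simply follows the form shown in Figure 165.

# Hypothetical sketch of the repair logic above (recoded ratings assumed:
# 1 = Poor, 2 = Fair, 3 = Good, 4 = Excellent).
INTEGER RASTER n1_turf FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/turf_condition.img";
INTEGER RASTER n2_path FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/path_condition.img";
FLOAT RASTER n3_repair FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/repair.img";
SET CELLSIZE MIN;
SET WINDOW UNION;
# class 1 = needs repairs, class 2 = acceptable
n3_repair = EITHER 1 IF ( $n1_turf < 3 AND $n2_path < 3 ) OR 2 OTHERWISE ;
QUIT;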

More than one input layer could also be used. For example, a model could be created, using the input layers parks.img and soils.img, which would show the soil types for parks with Fair or Poor turf condition. Attributes can be used from every input file.

Table 22: Attribute Information for parks.img

Class Name Histogram Acres Path Condition Turf Condition Car Spaces

Grant Park 2456 403.45 Fair Good 127

Piedmont Park 5167 547.88 Good Fair 94

Candler Park 763 128.90 Excellent Excellent 65

Springdale Park 548 46.33 None Excellent 0


The following is a slightly more complex example:

If a user had a land cover file and wanted to create a file of pine forests larger than 10 acres, the criteria function could be used to output values only for areas that satisfied the conditions of being both pine forest and larger than 10 acres. The output file would have two classes: pine forests larger than 10 acres and background. If the user wanted the output file to show varying sizes of pine forest, he or she would simply add more conditions to the criteria table.

Comparisons of attributes can also be combined with mathematical and logical functions on the class values of the input file(s). With these capabilities, highly complex models can be created.

See the ERDAS IMAGINE Tour Guides manual or the On-Line Help for specific instructions on using the criteria function.

Script Modeling

The Spatial Modeler Language is a script language used internally by Model Maker to execute the operations specified in the graphical models that are created. The Spatial Modeler Language can also be used to write user-created models directly. It includes all of the functions available in Model Maker, plus:

• conditional branching and looping

• the ability to use complex and color data types

• more flexibility in using raster objects and attributes

Graphical models created with Model Maker can be output to a script file (text only) in the Spatial Modeler Language. These scripts can then be edited with a text editor using the Spatial Modeler Language syntax and re-run or saved in a library. Script models can also be written from scratch in the text editor. They are stored in ASCII .mdl files.

The Text Editor is available from the ERDAS IMAGINE icon panel and from the Script Library (Spatial Modeler).

In Figure 165, both the graphical and script models are shown for a tasseled cap transformation. Notice how even the annotation on the graphical model is included in the automatically generated script model. Generating script models from graphical models may aid in learning the Spatial Modeler Language.


Figure 165: Graphical and Script Models For Tasseled Cap Transformation

Convert graphical models to scripts using Model Maker. Open existing script models from the Script Librarian (Spatial Modeler).

Graphical Model (annotated “Tasseled Cap Transformation”)

Script Model:

# TM Tasseled Cap Transformation
# of Lake Lanier, Georgia
#
# declarations
#
INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";
FLOAT MATRIX n2_Custom_Matrix;
FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.img";
#
# set cell size for the model
#
SET CELLSIZE MIN;
#
# set window for the model
#
SET WINDOW UNION;
#
# load matrix n2_Custom_Matrix
#
n2_Custom_Matrix = MATRIX(3, 7:
  0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,
  -0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,
  0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);
#
# function definitions
#
n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;
QUIT;


Statements

A script model consists primarily of one or more statements. Each statement falls into one of the following categories:

• Declaration — defines objects to be manipulated within the model

• Assignment — assigns a value to an object

• Show and View — enables the user to see and interpret results from the model

• Set — defines the scope of the model or establishes default values used by the Modeler

• Macro Definition — defines substitution text associated with a macro name

• Quit — ends execution of the model

The Spatial Modeler Language also includes flow control structures, so that the user can utilize conditional branching and looping in the models, as well as statement block structures, which cause a set of statements to be executed as a group.

Declaration Example

In the script model in Figure 165, the following lines form the declaration portion of the model:

INTEGER RASTER n1_tm_lanier FILE OLD NEAREST NEIGHBOR "/usr/imagine/examples/tm_lanier.img";

FLOAT MATRIX n2_Custom_Matrix;

FLOAT RASTER n4_lntassel FILE NEW ATHEMATIC FLOAT SINGLE "/usr/imagine/examples/lntassel.img";

Set Example

The following set statements are used:

SET CELLSIZE MIN;

SET WINDOW UNION;


Assignment Example

The following assignment statements are used:

n2_Custom_Matrix = MATRIX(3, 7:

  0.331830, 0.331210, 0.551770, 0.425140, 0.480870, 0.000000, 0.252520,

  -0.247170, -0.162630, -0.406390, 0.854680, 0.054930, 0.000000, -0.117490,

  0.139290, 0.224900, 0.403590, 0.251780, -0.701330, 0.000000, -0.457320);

n4_lntassel = LINEARCOMB ( $n1_tm_lanier , $n2_Custom_Matrix ) ;

Data Types

In addition to the data types utilized by Graphical Modeling, script model objects can store data in the following data types:

• Complex — complex data (double precision)

• Color — three floating point numbers in the range of 0.0 to 1.0, representing intensity of red, green, and blue

Variables

Variables are objects in the Modeler which have been associated with a name using a declaration statement. The declaration statement defines the data type and object type of the variable. The declaration may also associate a raster variable with certain layers of an image file or a table variable with an attribute table. Assignment statements are used to set or change the value of a variable.

For script model syntax rules, descriptions of all available functions and operators, and sample models, see the on-line Spatial Modeler Language manual.


Vector Analysis

Most of the operations discussed in the previous pages of this chapter focus on raster data. However, in a complete GIS data base, both raster and vector layers will be present. One of the most common applications involving the combination of raster and vector data is the updating of vector layers using current raster imagery as a backdrop for vector editing. For example, if a vector data base is more than one or two years old, then there are probably errors due to changes in the area (new roads, moved roads, new development, etc.). When displaying existing vector layers over a raster layer, the user can dynamically update the vector layer by digitizing new or changed features on the screen.

Vector layers can also be used to indicate an area of interest (AOI) for further processing. Assume the user wants to run a site suitability model on only areas designated for commercial development in the zoning ordinances. By selecting these zones in a vector polygon layer, the user could restrict the model to only those areas in the raster input files.

Editing Vector Coverages

Editable features are polygons (as lines), lines, label points, and nodes. There can be multiple features selected with a mixture of any and all feature types. Editing operations and commands can be performed on multiple or single selections. In addition to the basic editing operations (e.g., cut, paste, copy, delete), the user can also perform the following operations on the line features in multiple or single selections:

• spline — smooths or generalizes all currently selected lines using a specified grain tolerance

• generalize — weeds vertices from selected lines using a specified tolerance

• split/unsplit — makes two lines from one by adding a node or joins two lines by removing a node

• densify — adds vertices to selected lines at a user-specified tolerance

• reshape (for single lines only) — enables the user to move the vertices of a line

Reshaping (adding, deleting, or moving a vertex or node) can be done on a single selected line. Below, in Table 23, are general editing operations and the feature types that will support each of those operations.

The Undo utility may be applied to any edits. The software stores all edits in sequential order, so that continually pressing Undo will reverse the editing.

Table 23: General Editing Operations and Supporting Feature Types

Add Delete Move Reshape

Points yes yes yes no

Lines yes yes yes yes

Polygons yes yes yes no

Nodes yes yes yes no


Constructing Topology


To create spatial relationships between features in a vector layer, it is necessary to construct topology; either the build or clean option can be used to do this. After a vector layer is edited, the topology must be constructed again to maintain the topological relationships between features. When topology is constructed, each feature is assigned an internal number. These numbers are then used to determine line connectivity and polygon contiguity. Once calculated, these values are recorded and stored in that layer's associated attribute table.

You must also reconstruct the topology of vector layers imported into ERDAS IMAGINE.

When topology is constructed, feature attribute tables are created with several automatically created fields. Different fields are stored for the different types of layers. The automatically generated fields for a line layer are:

• FNODE# — the internal node number for the beginning of a line (from-node)

• TNODE# — the internal number for the end of a line (to-node)

• LPOLY# — the internal number for the polygon to the left of the line (will be zero for layers containing only lines and no polygons)

• RPOLY# — the internal number for the polygon to the right of the line (will be zero for layers containing only lines and no polygons)

• LENGTH — length of each line, measured in layer units

• Cover# — internal line number (values assigned by ERDAS IMAGINE)

• Cover-ID — user-ID (values modified by the user)

The automatically generated fields for a point or polygon layer are:

• AREA — area of each polygon, measured in layer units (will be zero for layers containing only points and no polygons)

• PERIMETER — length of each polygon boundary, measured in layer units (will be zero for layers containing only points and no polygons)

• Cover# — internal polygon number (values assigned by ERDAS IMAGINE)

• Cover-ID — user-ID (values modified by the user)

Building and Cleaning Coverages

Build processes points, lines, and polygons, but clean processes only lines and polygons. Build recognizes only existing intersections (nodes), whereas clean creates intersections (nodes) wherever lines cross one another. The differences in these two options are summarized in Table 24 (ESRI 1990).


Errors

Constructing topology also helps to identify errors in the layer. Some of the common errors found are:

• Lines with fewer than two nodes

• Polygons that are not closed

• Polygons that have no label point or too many label points

• User-IDs that are not unique

Constructing topology can identify the errors mentioned above. When topology is constructed, line intersections are created, the lines that make up each polygon are identified, and a label point is associated with each polygon. Until topology is constructed, no polygons exist, and lines that cross each other are not connected at a node, since there is no intersection.

Construct topology using the Vector Utilities menu from the Vector icon in the IMAGINE icon panel.

You should not build or clean a layer that is displayed in a Viewer, nor should you try to display a layer that is being built or cleaned.

Table 24: Comparison of Building and Cleaning Coverages

Capabilities Build Clean

Processes:

Polygons Yes Yes

Lines Yes Yes

Points Yes No

Numbers features Yes Yes

Calculates spatial measurements Yes Yes

Creates intersections No Yes

Processing speed Faster Slower


When the build or clean options are used to construct the topology of a vector layer, potential node errors are marked with special symbols. These symbols are listed below (ESRI 1990).

Pseudo nodes, drawn with a diamond symbol, occur where a single line connects with itself (an island) or where only two lines intersect. Pseudo nodes do not necessarily indicate an error or a problem. Acceptable pseudo nodes may represent an island (a spatial pseudo node) or the point where a road changes from pavement to gravel (an attribute pseudo node).

A dangling node, represented by a square symbol, refers to the unconstructed node of a dangling line. Every line begins and ends at a node point. So if a line does not close properly, or was digitized past an intersection, it will register as a dangling node. In some cases, a dangling node may be acceptable. For example, in a street centerline map, cul-de-sacs are often represented by dangling nodes.

In polygon layers there may be label errors: usually no label point for a polygon, or more than one label point for a polygon. In the latter case, two or more points may have been mistakenly digitized for a polygon, or it may be that a line does not intersect another line, resulting in an open polygon.

Figure 166: Layer Errors

Errors detected in a layer can be corrected by changing the tolerances set for that layer and building or cleaning again, or by editing the layer manually, then running build or clean.

Refer to the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on editing vector layers.

Figure 166 labels: dangling nodes; a pseudo node (island); a polygon with no label point; and two label points in one polygon (due to a dangling node).


CHAPTER 11: Cartography

Introduction

Maps and mapping are the subject of the art and science known as cartography: creating 2-dimensional representations of our 3-dimensional Earth. These representations were once hand-drawn with paper and pen. But now, map production is largely automated, and the final output is not always paper. The capabilities of a computer system are invaluable to map users, who often need to know much more about an area than can be reproduced on paper, no matter how large that piece of paper is or how small the annotation is. Maps stored on a computer can be queried, analyzed, and updated quickly.

As the veteran GIS and image processing authority Roger F. Tomlinson said, “Mapped and related statistical data do form the greatest storehouse of knowledge about the condition of the living space of mankind.” With this thought in mind, it only makes sense that maps be created as accurately as possible and be as accessible as possible.

In the past, map making was carried out by mapping agencies who took the analyst's (be they surveyors, photogrammetrists, or draftsmen) information and created a map to illustrate that information. But today, in many cases, the analyst is the cartographer and can design maps to best suit the data and the end user.

This chapter defines some basic cartographic terms and explains how maps are created within the ERDAS IMAGINE environment.

Use the ERDAS IMAGINE Map Composer to create hardcopy and softcopy maps and presentation graphics.

This chapter concentrates on the production of digital maps. See "CHAPTER 12: Hardcopy Output" for information about printing hardcopy maps.


Types of Maps

A map is a graphic representation of spatial relationships on the earth or other planets. Maps can take on many forms and sizes, depending on the intended use of the map. Maps no longer refer only to hardcopy output. In this manual, the maps discussed begin as digital files and may be printed later as desired.

Some of the different types of maps are defined below.

Map Purpose

Aspect A map that shows the prevailing direction that a slope faces at each pixel. Aspect maps are often color-coded to show the eight major compass directions or any of 360 degrees.

Base A map portraying background reference information onto which other information is placed. Base maps usually show the location and extent of natural earth surface features and permanent man-made objects. Raster imagery, orthophotos, and orthoimages are often used as base maps.

Bathymetric A map portraying the shape of a water body or reservoir using isobaths (depth contours).

Cadastral A map showing the boundaries of the subdivisions of land for purposes of describing and recording ownership or taxation.

Choropleth A map portraying properties of a surface using area symbols. Area symbols usually represent categorized classes of the mapped phenomenon.

Composite A map on which the combined information from different thematic maps is presented.

Contour A map in which lines are used to connect points of equal elevation. Lines are often spaced in increments of ten or twenty feet or meters.

Derivative A map created by altering, combining, or through the analysis of other maps.

Index A reference map that outlines the mapped area, identifies all of the component maps for the area if several map sheets are required, and identifies all adjacent map sheets.

Inset A map that is an enlargement of some congested area of a smaller scale map, and that is usually placed on the same sheet with the smaller scale main map.

Isarithmic A map that uses isarithms (lines connecting points of the same value for any of the characteristics used in the representation of surfaces) to represent a statistical surface. Also called an isometric map.

Isopleth A map on which isopleths (lines representing quantities that cannot exist at a point, such as population density) are used to represent some selected quantity.

Morphometric A map representing morphological features of the earth's surface.

Outline A map showing the limits of a specific set of mapping entities, such as counties, NTS quads, etc. Outline maps usually contain a very small number of details over the desired boundaries with their descriptive codes.

Planimetric A map showing only the horizontal position of geographic objects, without topographic features or elevation contours.


Relief Any map that appears to be, or is, 3-dimensional. Also called a shaded relief map.

Slope A map which shows changes in elevation over distance. Slope maps are usually color-coded according to the steepness of the terrain at each pixel.

Thematic A map illustrating the class characterizations of a particular spatial variable such as soils, land cover, hydrology, etc.

Topographic A map depicting terrain relief.

Viewshed A map showing only those areas visible (or invisible) from a specified point(s). Also called a line-of-sight map or a visibility map.

In ERDAS IMAGINE, maps are stored as a map file with a .map extension.

See "APPENDIX B: File Formats and Extensions" for information on the format of the .map file.


Thematic Maps

Thematic maps comprise a large portion of the maps that many organizations create. For this reason, this map type will be explored in more detail.

Thematic maps may be subdivided into two groups:

• qualitative

• quantitative

A qualitative map shows the spatial distribution or location of a kind of nominal data. For example, a map showing corn fields in the United States would be a qualitative map. It would not show how much corn is produced in each location, or production relative to the other areas.

A quantitative map displays the spatial aspects of numerical data. A map showing corn production (volume) in each area would be a quantitative map. Quantitative maps show ordinal (less than/greater than) and interval/ratio (how much different) scale data (Dent 1985).

You can create thematic data layers from continuous data (aerial photography and satellite images) using the ERDAS IMAGINE classification capabilities. See "Chapter 6: Classification" for more information.

Base Information

Thematic maps should include a base of information so that the reader can easily relate the thematic data to the real world. This base may range from something as simple as an outline of counties, states, or countries to something more complex, such as an aerial photograph or satellite image. In the past, it was difficult and expensive to produce maps that included both thematic and continuous data, but technological advances have made this easy.

For example, in a thematic map showing flood plains in the Mississippi River valley, the user could overlay the thematic data onto a line coverage of state borders or a satellite image of the area. The satellite image can provide more detail about the areas bordering the flood plains. This may be valuable information when planning emergency response and resource management efforts for the area. Satellite images can also provide very current information about an area, and can assist the user in assessing the accuracy of a thematic image.

In ERDAS IMAGINE, you can include multiple layers in a single map composition. See Map Composition on page 432 for more information about creating maps.


Color Selection

The colors used in thematic maps may or may not have anything to do with the class or category of information shown. Cartographers usually try to use a color scheme that highlights the primary purpose of the map. The map reader's perception of colors also plays an important role. Most people are more sensitive to red, followed by green, yellow, blue, and purple. Although color selection is left entirely up to the map designer, some guidelines have been established (Robinson and Sale 1969).

• When mapping interval or ordinal data, the higher ranks and greater amounts are generally represented by darker colors.

• Use blues for water.

• When mapping elevation data, start with blues for water, greens in the lowlands, ranging up through yellows and browns to reds in the higher elevations. This progression should not be used for series other than elevation.

• In temperature mapping, use red, orange, and yellow for warm temperatures and blue, green, and gray for cool temperatures.

• In land cover mapping, use yellows and tans for dryness and sparse vegetation and greens for lush vegetation.

• Use browns for land forms.

Use the Raster Attributes option in the Viewer to select and modify class colors.


Annotation

A map is more than just an image(s) on a background. Since a map is a form of communication, it must convey information that may not be obvious by looking at the image. Therefore, maps usually contain several annotation elements to explain the map. Annotation is any explanatory material that accompanies a map to denote graphical features on the map. This annotation may take the form of:

• scale bars

• legends

• neatlines, tick marks, and grid lines

• symbols (north arrows, etc.)

• labels (rivers, mountains, cities, etc.)

• descriptive text (title, copyright, credits, production notes, etc.)

The annotation listed above is made up of single elements. The basic annotation elements in ERDAS IMAGINE include:

• rectangles (including squares)

• ellipses (including circles)

• polygons and polylines

• text

These elements can be used to create more complex annotation, such as legends, scale bars, etc. These annotation components are actually groups of the basic elements and can be ungrouped and edited like any other graphic. The user can also create his or her own groups to form symbols that are not in the IMAGINE symbol library. (Symbols are discussed in more detail under "Symbols" on page 411.)

Create annotation using the Annotation tool palette in the Viewer or in a map composition.

How Annotation is Stored

An annotation layer is a set of annotation elements that is drawn in a Viewer or Map Composer window and stored in a file. Annotation that is created in a Viewer window is stored in a separate file from the other data in the Viewer. These annotation files are called overlay files (.ovr extension). Map annotation that is created in a Map Composer window is also stored in a .ovr file, which is named after the map composition. For example, the annotation for a file called lanier.map would be lanier.map.ovr.

See "APPENDIX B: File Formats and Extensions" for information on the format of the .ovr file.


Scale

Map scale is a statement that relates distance on a map to distance on the earth's surface. It is perhaps the most important information on a map, since the level of detail and map accuracy are both factors of the map scale. Scale is directly related to the map extent, or the area of the earth's surface to be mapped. If a relatively small area is to be mapped, such as a neighborhood or subdivision, then the scale can be larger. If a large area is to be mapped, such as an entire continent, the scale must be smaller. Generally, the smaller the scale, the less detailed the map can be. As a rule, anything smaller than 1:250,000 is considered small-scale.

Scale can be reported in several ways, including:

• representative fraction

• verbal statement

• scale bar

Representative Fraction

Map scale is often noted as a simple ratio or fraction called a representative fraction. A map in which one inch on the map equals 24,000 inches on the ground could be described as having a scale of 1:24,000 or 1/24,000. The units on both sides of the ratio must be the same.

Verbal Statement

A verbal statement of scale describes the distance on the map to the distance on the ground. A verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to 16 miles. The units on the map and on the ground do not have to be the same in a verbal statement. One-inch and 6-inch maps of the British Ordnance Survey are often referred to by this method (1 inch to 1 mile, 6 inches to 1 mile) (Robinson and Sale 1969).
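The verbal statement follows directly from the representative fraction. At 1:1,000,000, one inch on the map represents 1,000,000 inches on the ground, and with 63,360 inches per mile:

\[
\frac{1{,}000{,}000\ \text{in}}{63{,}360\ \text{in/mi}} \approx 15.8\ \text{mi} \approx 16\ \text{mi}
\]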

Scale Bars

A scale bar is a graphic annotation element that describes map scale. It shows the distance on paper that represents a given geographical distance on the earth's surface. Maps often include more than one scale bar to indicate various measurement systems, such as kilometers and miles.

Figure 167: Sample Scale Bars

Use the Scale Bar tool in the Annotation tool palette to automatically create representative fractions and scale bars. Use the Text tool to create a verbal statement.


Common Map Scales

The user can create maps with an unlimited number of scales; however, there are some commonly used scales. Table 25 lists these scales and their equivalents (Robinson and Sale 1969).

Table 25: Common Map Scales

Columns: Map Scale; 1/40 inch represents; 1 inch represents; 1 centimeter represents; 1 mile is represented by; 1 kilometer is represented by

1:2,000 4.200 ft 56.000 yd 20.000 m 31.680 in 50.00 cm

1:5,000 10.425 ft 139.000 yd 50.000 m 12.670 in 20.00 cm

1:10,000 6.952 yd 0.158 mi 0.100 km 6.340 in 10.00 cm

1:15,840 11.000 yd 0.250 mi 0.156 km 4.000 in 6.25 cm

1:20,000 13.904 yd 0.316 mi 0.200 km 3.170 in 5.00 cm

1:24,000 16.676 yd 0.379 mi 0.240 km 2.640 in 4.17 cm

1:25,000 17.380 yd 0.395 mi 0.250 km 2.530 in 4.00 cm

1:31,680 22.000 yd 0.500 mi 0.317 km 2.000 in 3.16 cm

1:50,000 34.716 yd 0.789 mi 0.500 km 1.270 in 2.00 cm

1:62,500 43.384 yd 0.986 mi 0.625 km 1.014 in 1.60 cm

1:63,360 0.025 mi 1.000 mi 0.634 km 1.000 in 1.58 cm

1:75,000 0.030 mi 1.180 mi 0.750 km 0.845 in 1.33 cm

1:80,000 0.032 mi 1.260 mi 0.800 km 0.792 in 1.25 cm

1:100,000 0.040 mi 1.580 mi 1.000 km 0.634 in 1.00 cm

1:125,000 0.050 mi 1.970 mi 1.250 km 0.507 in 8.00 mm

1:250,000 0.099 mi 3.950 mi 2.500 km 0.253 in 4.00 mm

1:500,000 0.197 mi 7.890 mi 5.000 km 0.127 in 2.00 mm

1:1,000,000 0.395 mi 15.780 mi 10.000 km 0.063 in 1.00 mm


Table 26 shows the number of pixels per inch for selected scales and pixel sizes.
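These values can be reproduced from the scale and the pixel size: one inch on the map covers the scale denominator times 0.0254 meters on the ground, so, approximately,

\[
\text{pixels per inch} = \frac{\text{scale denominator} \times 0.0254\ \text{m/in}}{\text{pixel size (m)}}
\]

For example, at 1:24,000 with 30 m pixels, \(24{,}000 \times 0.0254 / 30 \approx 20.3\) pixels per inch, which matches the table.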

Courtesy of D. Cunningham and D. Way, The Ohio State University.

Table 26: Pixels per Inch

Columns: Pixel Size (m); then pixels per inch at each scale: 1"=100' (1:1200); 1"=200' (1:2400); 1"=500' (1:6000); 1"=1000' (1:12000); 1"=1500' (1:18000); 1"=2000' (1:24000); 1"=4167' (1:50000); 1"=1 mile (1:63360)

1 30.49 60.96 152.40 304.80 457.20 609.60 1270.00 1609.35

2 15.24 30.48 76.20 152.40 228.60 304.80 635.00 804.67

2.5 12.13 24.38 60.96 121.92 182.88 243.84 508.00 643.74

5 6.10 12.19 30.48 60.96 91.44 121.92 254.00 321.87

10 3.05 6.10 15.24 30.48 45.72 60.96 127.00 160.93

15 2.03 4.06 10.16 20.32 30.48 40.64 84.67 107.29

20 1.52 3.05 7.62 15.24 22.86 30.48 63.50 80.47

25 1.22 2.44 6.10 12.19 18.29 24.38 50.80 64.37

30 1.02 2.03 5.08 10.16 15.24 20.32 42.33 53.64

35 .87 1.74 4.35 8.71 13.08 17.42 36.29 45.98

40 .76 1.52 3.81 7.62 11.43 15.24 31.75 40.23

45 .68 1.35 3.39 6.77 10.16 13.55 28.22 35.76

50 .61 1.22 3.05 6.10 9.14 12.19 25.40 32.19

75 .41 .81 2.03 4.06 6.10 8.13 16.93 21.46

100 .30 .61 1.52 3.05 4.57 6.10 12.70 16.09

150 .20 .41 1.02 2.03 3.05 4.06 8.47 10.73

200 .15 .30 .76 1.52 2.29 3.05 6.35 8.05

250 .12 .24 .61 1.22 1.83 2.44 5.08 6.44

300 .10 .20 .51 1.02 1.52 2.03 4.23 5.36

350 .09 .17 .44 .87 1.31 1.74 3.63 4.60

400 .08 .15 .38 .76 1.14 1.52 3.18 4.02

450 .07 .14 .34 .68 1.02 1.35 2.82 3.58

500 .06 .12 .30 .61 .91 1.22 2.54 3.22

600 .05 .10 .25 .51 .76 1.02 2.12 2.69

700 .04 .09 .22 .44 .65 .87 1.81 2.30

800 .04 .08 .19 .38 .57 .76 1.59 2.01

900 .03 .07 .17 .34 .51 .68 1.41 1.79

1000 .03 .06 .15 .30 .46 .61 1.27 1.61


Table 27 lists the number of acres and hectares per pixel for various pixel sizes.
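Each entry follows from the area of a single pixel:

\[
\text{hectares per pixel} = \frac{(\text{pixel size in m})^2}{10{,}000},
\qquad
\text{acres per pixel} \approx 2.471 \times \text{hectares per pixel}
\]

For example, a 30 m pixel covers 900 square meters, which is 0.09 hectare or about 0.2224 acre.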

Courtesy of D. Cunningham and D. Way, The Ohio State University.

Table 27: Acres and Hectares per Pixel

Pixel Size (m) Acres Hectares

1 0.0002 0.0001

2 0.0010 0.0004

2.5 0.0015 0.0006

5 0.0062 0.0025

10 0.0247 0.0100

15 0.0556 0.0225

20 0.0988 0.0400

25 0.1544 0.0625

30 0.2224 0.0900

35 0.3027 0.1225

40 0.3954 0.1600

45 0.5004 0.2025

50 0.6178 0.2500

75 1.3900 0.5625

100 2.4710 1.0000

150 5.5598 2.2500

200 9.8842 4.0000

250 15.4440 6.2500

300 22.2394 9.0000

350 30.2703 12.2500

400 39.5367 16.0000

450 50.0386 20.2500

500 61.7761 25.0000

600 88.9576 36.0000

700 121.0812 49.0000

800 158.1468 64.0000

900 200.1546 81.0000

1000 247.1044 100.0000


Legends

A legend is a key to the colors, symbols, and line styles that are used in a map. Legends are especially useful for maps of categorical data displayed in pseudo color, where each color represents a different feature or category. A legend can also be created for a single layer of continuous data, displayed in gray scale. Legends are likewise used to describe all unknown or unique symbols utilized. Symbols in legends should appear exactly the same size and color as they appear on the map (Robinson and Sale 1969).

Figure 168: Sample Legend

Use the Legend tool in the Annotation tool palette to automatically create color legends. Symbol legends are not created automatically, but can be created manually.

Figure 168 shows a sample legend with classes for pasture, forest, swamp, and developed land.


Neatlines, Tick Marks, and Grid Lines

Neatlines, tick marks, and grid lines serve to provide a georeferencing system for map detail and are based on the map projection of the image shown.

• A neatline is a rectangular border around the image area of a map. It differs from the map border in that the border usually encloses the entire map, not just the image area.

• Tick marks are small lines along the edge of the image area or neatline that indicate regular intervals of distance.

• Grid lines are intersecting lines that indicate regular intervals of distance, based on a coordinate system. Usually, they are an extension of tick marks. It is often helpful to place grid lines over the image area of a map. This is becoming less common on thematic maps, but is really up to the map designer. If the grid lines will help readers understand the content of the map, they should be used.

Figure 169: Sample Neatline, Tick Marks, and Grid Lines

Grid lines may also be referred to as a graticule.

Graticules are discussed in more detail in "Map Projections" on page 416.

Use the Grid/Tick tool in the Annotation tool palette to create neatlines, tick marks, and grid lines. Tick marks and grid lines can also be created over images displayed in a Viewer. See the On-Line Help for instructions.


Symbols

Since maps are a greatly reduced version of the real world, objects cannot be depicted in their true shape or size. Therefore, a set of symbols is devised to represent real-world objects. There are two major classes of symbols:

• replicative

• abstract

Replicative symbols are designed to look like their real-world counterparts; they represent tangible objects, such as coastlines, trees, railroads, and houses. Abstract symbols usually take the form of geometric shapes, such as circles, squares, and triangles. They are traditionally used to represent amounts that vary from place to place, such as population density, amount of rainfall, etc. (Dent 1985).

Both replicative and abstract symbols are composed of one or more of the following annotation elements:

• point

• line

• area

Symbol Types

These basic elements can be combined to create three different types of replicative symbols:

• plan — formed after the basic outline of the object it represents. For example, the symbol for a house might be a square, since most houses are rectangular.

• profile — formed like the profile of an object. Profile symbols generally represent vertical objects, such as trees, windmills, oil wells, etc.

• function — formed after the activity that a symbol represents. For example, on a map of a state park, a symbol of a tent would indicate the location of a camping area.

Figure 170: Sample Symbols


Symbols can have different sizes, colors, and patterns to indicate different meanings within a map. Size, color, and pattern are generally used to show qualitative or quantitative differences among the areas marked. For example, if a circle is used to show cities and towns, larger circles would be used to show areas with higher population. A specific color could be used to indicate county seats. Since symbols are not drawn to scale, their placement is crucial to effective communication.

Use the Symbol tool in the Annotation tool palette and the symbol library to place symbols in maps.

Labels and Descriptive Text

Place names and other labels convey important information to the reader about the features on the map. Any features that will help orient the reader or are important to the content of the map should be labeled. Descriptive text on a map can include the map title and subtitle, copyright information, captions, credits, production notes, or other explanatory material.

Title

The map title usually draws attention by virtue of its size. It focuses the reader's attention on the primary purpose of the map. The title may be omitted, however, if captions are provided outside of the image area (Dent 1985).

Credits

Map credits (or source information) can include the data source and acquisition date, accuracy information, and other details that are required or helpful to readers. For example, if the user includes data that they do not own in a map, credit must be given to the owner.

Use the Text tool in the Annotation tool palette to add labels and descriptive text to maps.

Typography and Lettering

The choice of type fonts and styles and how names are lettered can make the difference between a clear and attractive map and a jumble of imagery and text. As with many other aspects of map design, this is a very subjective area and many organizations already have guidelines to use. This section is intended as an introduction to the concepts involved and to convey traditional guidelines, where available.

If your organization does not have a set of guidelines for the appearance of maps and you plan to produce many in the future, it would be beneficial to develop a style guide specifically for mapping. This will ensure that all of the maps produced follow the same conventions, regardless of who actually makes the map.

ERDAS IMAGINE enables you to make map templates to facilitate the development of map standards within your organization.


Type Styles

Type style refers to the appearance of the text and may include font, size, and style (bold, italic, underline, etc.). Although the type styles used in maps are purely a matter of the designer's taste, the following techniques help to make maps more legible (Robinson and Sale 1969; Dent 1985).

• Do not use too many different typefaces in a single map. Generally, one or two styles are enough when also using the variations of those typefaces (e.g., bold, italic, underline, etc.). When using two typefaces, use a serif and a sans serif, rather than two different serif fonts or two different sans serif fonts [e.g., Sans (sans serif) and Roman (serif) could be used together in one map].

• Avoid ornate text styles because they can be difficult to read.

• Exercise caution in using very thin letters that may not reproduce well. On the other hand, using letters that are too bold may obscure important information in the image.

• Use different sizes of type for showing varying levels of importance. For example, on a map with city and town labels, city names will usually be in a larger type size than the town names. Use no more than four to six different type sizes.

• Put more important labels, titles, and names in all capital letters and less important text in lowercase with initial capitals. This is a matter of personal preference, although names in which the letters must be spread out across a large area are better in all capital letters. (Studies have found that capital letters are more difficult to read, so lowercase letters might improve the legibility of the map.)

• In the past, hydrology, landform, and other natural features were labeled in italic. However, this is not strictly adhered to by map makers today, although water features are still nearly always labeled in italic.

Figure 171: Sample Sans Serif and Serif Typefaces with Various Styles Applied

Use the Styles dialog to adjust the style of text.


Lettering

Lettering refers to the way in which place names and other labels are added to a map. Letter spacing, orientation, and position are the three most important factors in lettering. Here again, there are no set rules for how lettering is to appear. Much is determined by the purpose of the map and the end user. Many organizations have developed their own rules for lettering. Here is a list of guidelines that have been used by cartographers in the past (Robinson and Sale 1969; Dent 1985).

• Names should be either entirely on land or entirely on water, not overlapping both.

• Lettering should generally be oriented to match the orientation structure of the map. In large-scale maps this means parallel with the upper and lower edges; in small-scale maps, in line with the parallels of latitude.

• Type should not be curved (i.e., different from the preceding bullet) unless it is necessary to do so.

• If lettering must be disoriented, it should never be set in a straight line, but should always have a slight curve.

• Names should be letter spaced (space inserted between individual letters, or kerning) as little as necessary.

• Where the continuity of names and other map data, such as lines and tones, conflicts with the lettering, the data, not the names, should be interrupted.

• Lettering should never be upside-down in any respect.

• Lettering that refers to point locations should be placed above or below the point, preferably above and to the right.

• The letters identifying linear features (roads, rivers, railroads, etc.) should not be spaced. The word(s) should be repeated along the feature as often as necessary to facilitate identification. These labels should be placed above the feature, and river names should slant in the direction of the river flow (if the label is italic).

• For geographical names, use the native language of the intended map user. For an English-speaking audience, the name “Germany” should be used, rather than “Deutschland.”


Figure 172: Good Lettering vs. Bad Lettering

Text Color

Many cartographers argue that all lettering on a map should be black. However, the map may be well served by incorporating color into its design. In fact, studies have shown that coding labels by color can improve a reader's ability to find information (Dent 1985).


Map Projections

This section is adapted from Map Projections for Use with the Geographic Information System by Lee and Walsh, 1984.

A map projection is the manner in which the spherical surface of the earth is represented on a flat (two-dimensional) surface. This can be accomplished by direct geometric projection or by a mathematically derived transformation. There are many kinds of projections, but all involve transfer of the distinctive global patterns of parallels of latitude and meridians of longitude onto an easily flattened surface, or developable surface.

The three most common developable surfaces are the cylinder, cone, and plane (Figure 173). A plane is already flat, while a cylinder or cone may be cut and laid out flat, without stretching. Thus, map projections may be classified into three general families: cylindrical, conical, and azimuthal or planar.

Map projections are selected in the Projection Chooser. The Projection Chooser is accessible from the ERDAS IMAGINE icon panel, and from several other locations.

Properties of Map Projections

Regardless of what type of projection is used, it is inevitable that some error or distortion will occur in transforming a spherical surface into a flat surface. Ideally, a distortion-free map has four valuable properties:

• conformality

• equivalence

• equidistance

• true direction

Each of these properties is explained below. No map projection can be true in all of these properties. Therefore, each projection is devised to be true in selected properties, or, most often, a compromise among selected properties. Projections that compromise in this manner are known as compromise projections.

Conformality is the characteristic of true shape, wherein a projection preserves the shape of any small geographical area. This is accomplished by exact transformation of angles around points. One necessary condition is the perpendicular intersection of grid lines as on the globe. The property of conformality is important in maps which are used for analyzing, guiding, or recording motion, as in navigation. A conformal map or projection is one that has the property of true shape.


Equivalence is the characteristic of equal area, meaning that areas on one portion of a map are in scale with areas in any other portion. Preservation of equivalence involves inexact transformation of angles around points and thus is mutually exclusive with conformality except along one or two selected lines. The property of equivalence is important in maps that are used for comparing density and distribution data, as in populations.

Equidistance is the characteristic of true distance measuring. The scale of distance is constant over the entire map. This property can be fulfilled on any given map from one, or at most two, points in any direction or along certain lines. Equidistance is important in maps that are used for analyzing measurements (i.e., road distances). Typically, reference lines such as the equator or a meridian are chosen to have equidistance and are termed standard parallels or standard meridians.

True direction is characterized by a direction line between two points that crosses reference lines, for example, meridians, at a constant angle or azimuth. An azimuth is an angle measured clockwise from a meridian, going north to east. The line of constant or equal direction is termed a rhumb line.

The property of constant direction makes it comparatively easy to chart a navigational course. However, on a spherical surface, the shortest surface distance between two points is not a rhumb line, but a great circle, being an arc of a circle whose center is the center of the earth. Along a great circle, azimuths constantly change (unless the great circle is the equator or a meridian).

Thus, a more desirable property than true direction may be where great circles are represented by straight lines. This characteristic is most important in aviation. Note that all meridians are great circles, but the only parallel that is a great circle is the equator.


Figure 173: Projection Types

Figure 173 panels: Regular Cylindrical, Transverse Cylindrical, Oblique Cylindrical, Regular Conic, Polar Azimuthal (planar), and Oblique Azimuthal (planar).


Projection Types

Although a great number of projections have been devised, the majority of them are geometric or mathematical variants of the basic direct geometric projection families described below. Choice of the projection to be used will depend upon the true property or combination of properties desired for effective cartographic analysis.

Azimuthal Projections

Azimuthal projections, also called planar projections, are accomplished by drawing lines from a given perspective point through the globe onto a tangent plane. This is conceptually equivalent to tracing a shadow of a figure cast by a light source. A tangent plane intersects the global surface at only one point and is perpendicular to a line passing through the center of the sphere. Thus, these projections are symmetrical around a chosen center or central meridian. Choice of the projection center determines the aspect, or orientation, of the projection surface.

Azimuthal projections may be centered:

• on the poles (polar aspect)

• at a point on the equator (equatorial aspect)

• at any other orientation (oblique aspect)

The origin of the projection lines, that is, the perspective point, may also assume various positions. For example, it may be:

• the center of the earth (gnomonic)

• an infinite distance away (orthographic)

• on the earth’s surface, opposite the projection plane (stereographic)

Conical Projections

Conical projections are accomplished by intersecting, or touching, a cone with the global surface and mathematically projecting lines onto this developable surface.

A tangent cone intersects the global surface to form a circle. Along this line of intersection, the map will be error-free and possess equidistance. Usually, this line is a parallel, termed the standard parallel.

Cones may also be secant, and intersect the global surface, forming two circles that will possess equidistance. In this case, the cone slices underneath the global surface, between the standard parallels. Note that the use of the word "secant," in this instance, is only conceptual and not geometrically accurate. Conceptually, the conical aspect may be polar, equatorial, or oblique. Only polar conical projections are supported in ERDAS IMAGINE.


Figure 174: Tangent and Secant Cones

Cylindrical Projections

Cylindrical projections are accomplished by intersecting, or touching, a cylinder with the global surface. The surface is mathematically projected onto the cylinder, which is then "cut" and "unrolled."

A tangent cylinder will intersect the global surface on only one line to form a circle, as with a tangent cone. This central line of the projection is commonly the equator and will possess equidistance.

If the cylinder is rotated 90 degrees from the vertical (i.e., the long axis becomes horizontal), then the aspect becomes transverse, wherein the central line of the projection becomes a chosen standard meridian as opposed to a standard parallel. A secant cylinder, one slightly less in diameter than the globe, will have two lines possessing equidistance.

Figure 175: Tangent and Secant Cylinders

Perhaps the most famous cylindrical projection is the Mercator, which became the standard navigational map, possessing true direction and conformality.

Figures 174 and 175 labels: tangent (one standard parallel) and secant (two standard parallels).


Other Projections

The projections discussed so far are projections that are created by projecting from a sphere (the earth) onto a plane, cone, or cylinder. Many other projections cannot be created so easily.

Modified projections are modified versions of another projection. For example, the Space Oblique Mercator projection is a modification of the Mercator projection. These modifications are made to reduce distortion, often by including additional standard lines or a different pattern of distortion.

Pseudo projections have only some of the characteristics of another class of projection. For example, the Sinusoidal is called a pseudocylindrical projection because all lines of latitude are straight and parallel, and all meridians are equally spaced. However, it cannot truly be a cylindrical projection, because all meridians except the central meridian are curved. This results in the Earth appearing oval instead of rectangular (ESRI 1991).


Geographical and Planar Coordinates

Map projections require a point of reference on the earth’s surface. Most often this is the center, or origin, of the projection. This point is defined in two coordinate systems:

• geographical

• planar

Geographical

Geographical, or spherical, coordinates are based on the network of latitude and longitude (Lat/Lon) lines that make up the graticule of the earth. Within the graticule, lines of longitude are called meridians, which run north/south, with the prime meridian at 0˚ (Greenwich, England). Meridians are designated as 0˚ to 180˚, east or west of the prime meridian. The 180˚ meridian (opposite the prime meridian) is the International Dateline.

Lines of latitude are called parallels, which run east/west. Parallels are designated as 0˚ at the equator to 90˚ at the poles. The equator is the largest parallel. Latitude and longitude are defined with respect to an origin located at the intersection of the equator and the prime meridian. Lat/Lon coordinates are reported in degrees, minutes, and seconds. Map projections are various arrangements of the earth’s latitude and longitude lines onto a plane.

Planar

Planar, or Cartesian, coordinates are defined by a column and row position on a planar grid (X,Y). The origin of a planar coordinate system is typically located south and west of the origin of the projection. Coordinates increase from 0,0 going east and north. The origin of the projection, being a “false” origin, is defined by values of false easting and false northing. Grid references always contain an even number of digits, and the first half refers to the easting and the second half the northing.

In practice, this eliminates negative coordinate values and allows locations on a map projection to be defined by positive coordinate pairs. Values of false easting are read first and may be in meters or feet.


Available Map Projections

In ERDAS IMAGINE, map projection information appears in the Projection Chooser, which is used to georeference images and to convert map coordinates from one type of projection to another. The Projection Chooser provides the following projections:

USGS Projections

Albers Conical Equal Area
Azimuthal Equidistant
Equidistant Conic
Equirectangular
General Vertical Near-Side Perspective
Geographic (Lon/Lat)
Gnomonic
Lambert Azimuthal Equal Area
Lambert Conformal Conic
Mercator
Miller Cylindrical
Modified Transverse Mercator
Oblique Mercator (Hotine)
Orthographic
Polar Stereographic
Polyconic
Sinusoidal
Space Oblique Mercator
State Plane
Stereographic
Transverse Mercator
UTM
Van Der Grinten I

External Projections

Bipolar Oblique Conic Conformal
Cassini-Soldner
Laborde Oblique Mercator
Modified Polyconic
Modified Stereographic
Mollweide Equal Area
Plate Carrée
Rectified Skew Orthomorphic
Robinson Pseudocylindrical
Southern Orientated Gauss Conformal
Winkel’s Tripel


Choice of the projection to be used will depend upon the desired major property and the region to be mapped (Table 28). After choosing the desired map projection, several parameters are required for its definition (Table 29). These parameters fall into three general classes:

• definition of the spheroid

• definition of the surface viewing window

• definition of scale

For each map projection, a menu of spheroids displays, along with appropriate prompts that enable the user to specify these parameters.

Units

Use the units of measure that are appropriate for the map projection type.

• Lat/Lon coordinates are expressed in decimal degrees. When prompted, the user can use the DD function to convert coordinates in degrees, minutes, seconds format to decimal (a sketch of this conversion follows this list). For example, for 30˚51’12’’:

dd(30,51,12) = 30.85333
-dd(30,51,12) = -30.85333

or

30:51:12 = 30.85333

The user can also enter Lat/Lon coordinates in radians.

• State Plane coordinates are expressed in feet.

• All other coordinates are expressed in meters.

Note also that values for longitude west of Greenwich, England, and values for latitude south of the equator are to be entered as negatives.
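The arithmetic behind the DD conversion is easy to reproduce. The following Python sketch is illustrative only; the function name and sign handling are assumptions, not the IMAGINE implementation:

    def dms_to_decimal(degrees, minutes, seconds):
        """Convert a degrees, minutes, seconds angle to decimal degrees."""
        sign = -1 if degrees < 0 else 1
        return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

    print(dms_to_decimal(30, 51, 12))    # 30.85333... (matches dd(30,51,12) above)
    print(dms_to_decimal(-30, 51, 12))   # -30.85333...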


Table 28: Map Projections

#    Map projection                           Construction         Property                    Use
0    Geographic                               N/A                  N/A                         Data entry, spherical coordinates
1    Universal Transverse Mercator            Cylinder (see #9)    Conformal                   Data entry, plane coordinates
2    State Plane                              (see #4, 7, 9, 20)   Conformal                   Data entry, plane coordinates
3    Albers Conical Equal Area                Cone                 Equivalent                  Middle latitudes, E-W expanses
4    Lambert Conformal Conic                  Cone                 Conformal, True Direction   Middle latitudes, E-W expanses, flight (straight great circles)
5    Mercator                                 Cylinder             Conformal, True Direction   Non-polar regions, navigation (straight rhumb lines)
6    Polar Stereographic                      Plane                Conformal                   Polar regions
7    Polyconic                                Cone                 Compromise                  N-S expanses
8    Equidistant Conic                        Cone                 Equidistant                 Middle latitudes, E-W expanses
9    Transverse Mercator                      Cylinder             Conformal                   N-S expanses
10   Stereographic                            Plane                Conformal                   Hemispheres, continents
11   Lambert Azimuthal Equal Area             Plane                Equivalent                  Square or round expanses
12   Azimuthal Equidistant                    Plane                Equidistant                 Polar regions, radio/seismic work (straight great circles)
13   Gnomonic                                 Plane                Compromise                  Navigation, seismic work (straight great circles)
14   Orthographic                             Plane                Compromise                  Globes, pictorial
15   General Vertical Near-Side Perspective   Plane                Compromise                  Hemispheres or less
16   Sinusoidal                               Pseudo-Cylinder      Equivalent                  N-S expanses or equatorial regions
17   Equirectangular                          Cylinder             Compromise                  City maps, computer plotting (simplistic)
18   Miller Cylindrical                       Cylinder             Compromise                  World maps
19   Van der Grinten I                        N/A                  Compromise                  World maps
20   Oblique Mercator                         Cylinder             Conformal                   Oblique expanses (e.g., Hawaiian islands), satellite tracking
21   Space Oblique Mercator                   Cylinder             Conformal                   Mapping of Landsat imagery
22   Modified Transverse Mercator             Cylinder             Conformal                   Alaska


a. Numbers are used for reference only and correspond to the numbers used in Table 28. Parameters for definition of map projection types 0-2 are not applicable and are described in the text.

b. Additional parameters required for definition of the map projection are described in the text of Appendix C.

Table 29: Projection Parameters

The parameters required to define each map projection (types 3-22 in Table 28) fall into the three classes below; the particular parameters prompted for depend on the projection chosen.

Definition of Spheroid:
• Spheroid selection (required for every projection type)

Definition of Surface Viewing Window:
• False easting
• False northing
• Longitude of central meridian
• Latitude of origin of projection
• Longitude of center of projection
• Latitude of center of projection
• Latitude of first standard parallel
• Latitude of second standard parallel
• Latitude of true scale
• Longitude below pole

Definition of Scale:
• Scale factor at central meridian
• Height of perspective point above sphere
• Scale factor at center of projection


Choosing a Map Projection

Map Projection Uses in a GIS

Selecting a map projection for the GIS data base will enable the user to (Maling 1992):

• decide how to best display the area of interest or illustrate the results of analysis

• register all imagery to a single coordinate system for easier comparisons

• test the accuracy of the information and perform measurements on the data

Deciding Factors

Depending on the user’s applications and the uses for the maps created, one or several map projections may be used. Many factors must be weighed when selecting a projection, including:

• type of map

• special properties that must be preserved

• types of data to be mapped

• map accuracy

• scale

If the user is mapping a relatively small area, virtually any map projection will do. It is in mapping large areas (entire countries, continents, and the world) that the choice of map projection becomes more critical. In small areas, the amount of distortion in a particular projection is barely, if at all, noticeable. In large areas, there may be little or no distortion in the center of the map, but distortion will increase outward toward the edges of the map.

Guidelines

Since the sixteenth century, there have been three fundamental rules regarding map projection use (Maling 1992):

• if the country to be mapped lies in the tropics, use a cylindrical projection

• if the country to be mapped lies in the temperate latitudes, use a conical projection

• if the map is required to show one of the polar regions, use an azimuthal projection


These rules are no longer held so strongly. There are too many factors to consider in map projection selection for broad generalizations to be effective today. The purpose of a particular map and the merits of the individual projections must be examined before an educated choice can be made. However, there are some guidelines that may help a user select a projection (Pearson 1990):

• Statistical data should be displayed using an equal area projection to maintain proper proportions (although shape may be sacrificed)

• Equal area projections are well suited to thematic data

• Where shape is important, use a conformal projection


Spheroids

The previous discussion of direct geometric map projections assumes that the earth is a sphere, and for many maps this is satisfactory. However, due to rotation of the earth around its axis, the planet bulges slightly at the equator. This flattening of the sphere makes it an oblate spheroid, which is an ellipse rotated around its shorter axis.

Figure 176: Ellipse

An ellipse is defined by its semi-major (long) and semi-minor (short) axes.

The amount of flattening of the earth is expressed as the ratio:

f = (a - b) / a

where:

a = the equatorial radius (semi-major axis)

b = the polar radius (semi-minor axis)

Most map projections use eccentricity (e²) rather than flattening. The relationship is:

e² = 2f - f²

The flattening of the earth is about 1 part in 300 and becomes significant in map accuracy at a scale of 1:100,000 or larger.
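As a worked example, the flattening and eccentricity of the Clarke 1866 spheroid can be computed directly from its axes (values from Table 30). A minimal Python sketch:

    # Clarke 1866 spheroid axes in meters (see Table 30)
    a = 6378206.4   # semi-major (equatorial) axis
    b = 6356583.8   # semi-minor (polar) axis

    f = (a - b) / a        # flattening
    e2 = 2 * f - f ** 2    # eccentricity squared

    print(1 / f)   # roughly 295, i.e., about 1 part in 300
    print(e2)      # roughly 0.0068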

Calculation of a map projection requires definition of the spheroid (or ellipsoid) in terms of axes lengths and eccentricity squared (or radius of the reference sphere). Several principal spheroids are in use by one or more countries. Differences are due primarily to calculation of the spheroid for a particular region of the earth’s surface. Only recently have satellite tracking data provided spheroid determinations for the entire earth. However, these spheroids may not give the “best fit” for a particular region. In North America, the spheroid in use is the Clarke 1866 for NAD27 and GRS 1980 for NAD83 (State Plane).



If other regions are to be mapped, different spheroids should be used. Upon choosing a desired projection type, the user has the option to choose from the following list of spheroids:

Clarke 1866
Clarke 1880
Bessel
New International 1967
International 1909
WGS 72
Everest
WGS 66
GRS 1980
Airy
Modified Everest
Modified Airy
Walbeck
Southeast Asia
Australian National
Krasovsky
Hough
Mercury 1960
Modified Mercury 1968
Sphere of Radius 6370997m
WGS 84
Helmert
Sphere of Nominal Radius of Earth

The spheroids listed above are the most commonly used. There are many other spheroids available, and they are listed in the Projection Chooser. These additional spheroids are not documented in this manual. You can use the ERDAS IMAGINE Developers’ Toolkit to add your own map projections and spheroids to IMAGINE.


The semi-major and semi-minor axes of all supported spheroids are listed in Table 30, as well as the principal uses of these spheroids.

Table 30: Spheroids

Spheroid                               Semi-Major Axis (m)   Semi-Minor Axis (m)    Use
Clarke 1866                            6378206.4             6356583.8              North America and the Philippines
Clarke 1880                            6378249.145           6356514.86955          France and Africa
Bessel (1841)                          6377397.155           6356078.96284          Central Europe, Chile, and Indonesia
New International 1967                 6378157.5             6356772.2              As International 1909 below, more recent calculation
International 1909 (= Hayford)         6378388.0             6356911.94613          Remaining parts of the world not listed here
WGS 72 (World Geodetic System 1972)    6378135.0             6356750.519915         NASA (satellite)
Everest (1830)                         6377276.3452          6356075.4133           India, Burma, and Pakistan
WGS 66 (World Geodetic System 1966)    6378145.0             6356759.769356         As WGS 72 above, older version
GRS 1980 (Geodetic Reference System)   6378137.0             6356752.31414          Expected to be adopted in North America for 1983 earth-centered coordinate system (satellite)
Airy (1940)                            6377563.0             6356256.91             England
Modified Everest                       6377304.063           6356103.039            As Everest above, more recent version
Modified Airy                          6377341.89            6356036.143            As Airy above, more recent version
Walbeck (1819)                         6376896.0             6355834.8467           Soviet Union, up to 1910
Southeast Asia                         6378155.0             6356773.3205           As named
Australian National (1965)             6378160.0             6356774.719            Australia
Krasovsky (1940)                       6378245.0             6356863.0188           Former Soviet Union and some East European countries
Hough                                  6378270.0             6356794.343479         As International 1909 above, with modification of ellipse axes
Mercury 1960                           6378166.0             6356794.283666         Early satellite, rarely used
Modified Mercury 1968                  6378150.0             6356768.337303         As Mercury 1960 above, more recent calculation
Sphere of Radius 6370997m              6370997.0             6370997.0              A perfect sphere with the same surface area as the Clarke 1866 spheroid
WGS 84                                 6378137.0             6356752.31424517929    As WGS 72, more recent calculation
Helmert                                6378200.0             6356818.16962789092    Egypt
Sphere of Nominal Radius of Earth      6370997.0             6370997.0              A perfect sphere


Map Composition

Learning Map Composition

Cartography and map composition may seem like an entirely new discipline to many GIS and image processing analysts—and that is partly true. But, by learning the basics of map design, the results of a user’s analyses can be communicated much more effectively. Map composition is also much easier than in the past, when maps were hand drawn. Many GIS analysts may already know more about cartography than they realize, simply because they have access to map making software. Perhaps the first maps you made were imitations of existing maps, but that is how we learn. This chapter is certainly not a textbook on cartography; it is merely an overview of some of the issues involved in creating cartographically-correct products.

Plan the Map

After the user’s analysis is complete, he or she can begin map composition. The first step in creating a map is to plan its contents and layout. The following questions will aid in the planning process:

• How will this map be used?

• Will the map have a single theme or many?

• Is this a single map, or is it part of a series of similar maps?

• Who is the intended audience? What is the level of their knowledge about the subject matter?

• Will it remain in digital form and be viewed on the computer screen or will it be printed?

• If it is going to be printed, how big will it be? Will it be printed in color or black and white?

• Are there map guidelines already set up by your organization?



The answers to these questions will help to determine the type of information that must go into the composition and the layout of that information. For example, suppose you are going to do a series of maps about global deforestation for presentation to Congress, and you are going to print these maps in color on an electrostatic printer. This scenario might lead to the following conclusions:

• A format (layout) should be developed for the series, so that all the maps produced have the same style.

• The colors used should be chosen carefully, since the maps will be printed in color.

• Political boundaries might need to be included, since they will influence the types of actions that can be taken in each deforested area.

• The typeface size and style to be used for titles, captions, and labels will have to be larger than for maps printed on 8.5” x 11.0” sheets. The type styles selected should be the same for all maps.

• Select symbols that are widely recognized, and make sure they are all explained in a legend.

• Cultural features (roads, urban centers, etc.) may be added for locational reference.

• Include a statement about the accuracy of each map, since these maps may be used in very high-level decisions.

Once this information is in hand, the user can actually begin sketching the look of the map on a sheet of paper. It is helpful for the user to know how they want the map to look before starting the ERDAS IMAGINE Map Composer. Doing so will ensure that all of the necessary data layers are available, and will make the composition phase go quickly.

See the Map Composer section of the ERDAS IMAGINE Tour Guides manual for step-by-step instructions on creating a map. Refer to the On-Line Help for details about how Map Composer works.


Map Accuracy

Maps are often used to influence legislation, promote a cause, or enlighten a particular group before decisions are made. In these cases, especially, map accuracy is of the utmost importance. There are many factors that influence map accuracy: the projection used, scale, base data, generalization, etc. The analyst/cartographer must be aware of these factors before map production begins, because the accuracy of the map will, in a large part, determine its usefulness. It is usually up to individual organizations to perform accuracy assessment and decide how those findings are reflected in the products they produce. However, several agencies have established guidelines for map makers.

US National Map Accuracy Standard

The United States Bureau of the Budget has developed the US National Map Accuracy Standard in an effort to standardize accuracy reporting on maps. These guidelines are summarized below (Fisher 1991):

• On scales smaller than 1:20,000, not more than 10 percent of points tested should be more than 1/50 inch in horizontal error, where points refer only to points that can be well-defined on the ground.

• On maps with scales larger than 1:20,000, the corresponding error term is 1/30 inch.

• At no more than 10 percent of the elevations tested will contours be in error by more than one half of the contour interval.

• Accuracy should be tested by comparison of actual map data with survey data of higher accuracy (not necessarily with ground truth).

• If maps have been tested and do meet these standards, a statement should be made to that effect in the legend.

• Maps that have been tested but fail to meet the requirements should omit all mention of the standards on the legend.

USGS Land Use and Land Cover Map Guidelines

The United States Geological Survey (USGS) has set standards of their own for land use and land cover maps (Fisher 1991):

• The minimum level of accuracy in identifying land use and land cover categories is 85%.

• The several categories shown should have about the same accuracy.

• Accuracy should be maintained between interpreters and times of sensing.


USDA SCS Soils Maps Guidelines

The United States Department of Agriculture (USDA) has set standards for Soil Conservation Service (SCS) soils maps (Fisher 1991):

• Up to 25% of the pedons may be of other soil types than those named if they do not present a major hindrance to land management.

• Up to only 10% of pedons may be of other soil types than those named if they do present a major hindrance to land management.

• No single included soil type may occupy more than 10% of the area of the map unit.

Digitized Hardcopy Maps

Another method of expanding the data base is by digitizing existing hardcopy maps. Although this may seem like an easy way to gather more information, care must be taken in pursuing this avenue if it is necessary to maintain a particular level of accuracy. If the hardcopy maps that are digitized are outdated, or were not produced using the same accuracy standards that are currently in use, the digitized map may negatively influence the overall accuracy of the data base.


CHAPTER 12: Hardcopy Output

Introduction

Hardcopy output refers to any output of image data to paper. These topics are covered in this chapter:

• printing maps

• the mechanics of printing

Printing Maps

ERDAS IMAGINE enables the user to create and output a variety of types of hardcopy maps, with several referencing features.

Scaled Maps

A scaled map is a georeferenced map that has been projected to a map projection, and is accurately laid out and referenced to represent distances and locations. A scaled map usually has a legend that includes a scale, such as “1 inch = 1000 feet”. The scale is often expressed as a ratio, like 1:12,000, where 1 inch on the map represents 12,000 inches on the ground.

See “CHAPTER 8: Rectification” for information on rectifying and georeferencing images and “CHAPTER 11: Cartography” for information on creating maps.

Printing Large Maps

Some scaled maps will not fit on the paper that is used by the printer. These methods are used to print and store large maps:

• A book map is laid out like the pages of a book. Each page fits on the paper used by the printer. There is a border, but no tick marks on every page.

• A paneled map is designed to be spliced together into a large paper map. Therefore, borders and tick marks appear on the outer edges of the large map.


Figure 177: Layout for a Book Map and a Paneled Map

Scale and Resolution

The following scales and resolutions will be noticeable during the process of creating a map composition and sending the composition to a hardcopy device:

• spatial resolution of the image

• display scale of the map composition

• map scale of the image(s)

• map composition to paper scale

• device resolution

Spatial Resolution

Spatial resolution is the area on the ground represented by each raw image data pixel.

Display Scale

Display scale is the distance on the screen as related to one unit on paper. For example, if the map composition is 24 inches by 36 inches, it would not be possible to view the entire composition on the screen. Therefore, the scale could be set to 1:0.25 so that the entire map composition would be in view.

Map Scale

The map scale is the distance on a map as related to the true distance on the ground, or the area that one pixel represents, measured in map units. The map scale is defined when the user creates an image area in the map composition. One map composition can have multiple image areas set at different scales. These areas may need to be shown at different scales for different applications.


Map Composition to Paper Scale

This scale is the original size of the map composition as related to the desired output size on paper.

Device Resolution

The number of dots that are printed per unit—for example, 300 dots per inch (DPI).

Use the ERDAS IMAGINE Map Composer to define the above scales and resolutions.

Map Scaling Examples

The ERDAS IMAGINE Map Composer enables the user to define a map size, as well as the size and scale for the image area within the map composition. The examples in this section focus on the relationship between these factors and the output file created by Map Composer for the specific hardcopy device or file format. Figure 178 is the map composition that will be used in the examples. This composition was originally created using IMAGINE Map Composer at a size of 22” × 34” and the hardcopy output must be in two different formats.

• It must be output to a PostScript printer on an 8.5” × 11” piece of paper.

• A TIFF file must be created and sent to a film recorder having a 1,000 DPI resolution.

Figure 178: Sample Map Composition


Output to PostScript Printer

Since the map was created at 22” × 34”, the map composition to paper scale will need to be calculated so that the composition will fit on an 8.5” × 11” piece of paper. If this scale is set for a 1 to 1 ratio, then the composition will be paneled.

To determine the map composition to paper scale factor, it is necessary to calculate the most limiting direction. Since the printable area for the printer is approximately 8.1” × 8.6”, these numbers will be used in the calculation.

• 8.1” / 22” = 0.36 (horizontal direction)

• 8.6” / 34” = 0.25 (vertical direction)

The vertical direction is the most limiting, therefore the map composition to paper scale would be set for 0.25.
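The limiting-direction arithmetic can be sketched in a few lines of Python (the printable-area figures are the approximate values assumed above and vary by printer):

    # Map composition size and printable paper area, in inches
    comp_width, comp_height = 22.0, 34.0
    paper_width, paper_height = 8.1, 8.6

    # The smaller ratio is the most limiting direction and becomes the
    # map composition to paper scale.
    scale = min(paper_width / comp_width, paper_height / comp_height)
    print(round(scale, 2))   # 0.25, limited by the vertical direction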

If the specified size of the map (width and height) is greater than the printable area for the printer, the output hardcopy map will be paneled. See the hardware manual of the hardcopy device for information about the printable area of the device.

Use the Print Map Composition dialog to output a map composition to a PostScript printer.

Output to TIFF

The limiting factor in this example is not page size, but disk space (600 MB total). A three-band .img file must be created in order to convert the map composition to a .tif file. Due to the three bands and the high resolution, the .img file could be very large. The .tif file will be output to a film recorder with a 1,000 DPI device resolution.

To determine the number of megabytes for the map composition, the X and Y dimensions need to be calculated:

• X = 22 inches * 1,000 dots/inch = 22,000

• Y = 34 * 1,000 = 34,000

• 22,000 * 34,000 * 3 = 2244 MB (multiplied by 3 since there are 3 bands)

Although this appears to be an unmanageable file size, it is possible to reduce the file size with little image degradation. The .img file created from the map composition must be less than half to accommodate the .tif file, since the total disk space is only 600 megabytes. Dividing the map composition by three in both X and Y directions (2,244 MB / 3 / 3) results in approximately a 250 megabyte file. This file size is small enough to process and leaves enough room for the .img to .tif conversion. This division is accomplished by specifying a 1/3 or 0.333 map composition to paper scale when outputting the map composition to an .img file.
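The same file-size arithmetic, sketched in Python (this assumes one byte per pixel per band, as the example above implies):

    width_in, height_in = 22, 34   # map composition size in inches
    dpi = 1000                     # film recorder device resolution
    bands = 3                      # three-band .img file

    full_mb = width_in * dpi * height_in * dpi * bands / 1_000_000
    print(full_mb)                 # 2244.0 MB, far too large for a 600 MB disk

    # A 1/3 map composition to paper scale divides both dimensions by three,
    # shrinking the file by a factor of nine.
    print(round(full_mb / 9))      # about 249 MB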

Once the .img file is created and exported to TIFF format, it can be sent to a film recorder that accepts .tif files. Remember, the file must be enlarged three times to compensate for the reduction during the .img file creation.


See the hardware manual of the hardcopy device for information about the DPI device resolution.

Use the ERDAS IMAGINE Print Map Composition dialog to output a map composition to an .img file.

Mechanics of Printing

This section describes the mechanics of transferring an image or map composition from a data file to a hardcopy map.

Halftone Printing

Halftoning is the process of converting a continuous tone image into a pattern of dots. A newspaper photograph is a common example of halftoning.

To make a color illustration, halftones in the primary colors (cyan, magenta, and yellow), plus black, are overlaid. The halftone dots of different colors, in close proximity, create the effect of blended colors in much the same way that phosphorescent dots on a color computer monitor combine red, green, and blue to create other colors. By using different patterns of dots, colors can have different intensities. The dots for halftoning are a fixed density—either a dot is there or it is not there.

For scaled maps, each output pixel may contain one or more dot patterns. If a very large image file is being printed onto a small piece of paper, data file pixels will be skipped to accommodate the reduction.

Hardcopy Devices

The following hardcopy devices use halftoning to output an image or map composition:

• CalComp Electrostatic Plotters

• Canon PostScript Intelligent Processing Unit

• Linotronic Imagesetter

• Tektronix Inkjet Printer

• Tektronix Phaser Printer

• Versatec Electrostatic Plotter

See the user’s manual for the hardcopy device for more information about halftone printing.


Continuous Tone Printing

Continuous tone printing enables the user to output color imagery using the four process colors (cyan, magenta, yellow, and black). By using varying percentages of these colors, it is possible to create a wide range of colors. The printer converts digital data from the host computer into a continuous tone image. The quality of the output picture is similar to a photograph. The output is smoother than halftoning because the dots for continuous tone printing can vary in density.

Example

There are different processes by which continuous tone printers generate a map. One example is a process called thermal dye transfer. The entire image or map composition is loaded into the printer’s memory. While the paper moves through the printer, heat is used to transfer the dye from a ribbon, which has the dyes for all of the four process colors, to the paper. The density of the dot depends on the amount of heat applied by the printer to transfer the dye. The amount of heat applied is determined by the brightness values of the input image. This allows the printer to control the amount of dye that is transferred to the paper to create a continuous tone image.

Hardcopy Devices

The following hardcopy devices use continuous toning to output an image or map composition:

• IRIS Color Inkjet Printer

• Kodak XL7700 Continuous Tone Printer

• Tektronix Phaser II SD

NOTE: The above printers do not necessarily use the thermal dye transfer process to generate a map.

See the user’s manual for the hardcopy device for more information about continuous tone printing.

Contrast and Color Tables

ERDAS IMAGINE contrast and color tables are used for some printing processes, just as they are used in displaying an image. For continuous raster layers, they are loaded from the IMAGINE contrast table. For thematic layers, they are loaded from the color table. The translation of data file values to brightness values is performed entirely by the software program.


RGB to CMY Conversion

Colors

Since a printer uses ink instead of light to create a visual image, the primary colors of pigment (cyan, magenta, and yellow) are used in printing, instead of the primary colors of light (red, green, and blue). Cyan, magenta, and yellow can be combined to make black through a subtractive process, whereas the primary colors of light are additive—red, green, and blue combine to make white (Gonzalez and Wintz 1977).

The data file values that are sent to the printer and the contrast and color tables that accompany the data file are all in the RGB color scheme. The RGB brightness values in the contrast and color tables must be converted to CMY values.

The RGB primary colors are the opposites of the CMY colors—meaning, for example, that the presence of cyan in a color means an equal lack of red. To convert the values, each RGB brightness value is subtracted from the maximum brightness value to produce the brightness value for the opposite color.

C = MAX - R
M = MAX - G
Y = MAX - B

where:

MAX = the maximum brightness value
R = red value from lookup table
G = green value from lookup table
B = blue value from lookup table
C = calculated cyan value
M = calculated magenta value
Y = calculated yellow value
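A minimal sketch of the conversion for 8-bit brightness values, where MAX is 255 (the function is illustrative, not an IMAGINE routine):

    def rgb_to_cmy(r, g, b, max_value=255):
        """Convert RGB brightness values to their CMY complements."""
        return max_value - r, max_value - g, max_value - b

    # Pure red contains no cyan but full magenta and yellow.
    print(rgb_to_cmy(255, 0, 0))   # (0, 255, 255)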

Black Ink

Although, theoretically, cyan, magenta, and yellow combine to create black ink, the color that results is often a dark, muddy brown. Many printers also use black ink for a truer black.

NOTE: Black ink is not available on all printers. Consult the user’s manual for your printer.

Images often appear darker when printed than they do when displayed on the display device. Therefore, it may be beneficial to improve the contrast and brightness of an image before it is printed.

Use the programs discussed in "CHAPTER 5: Enhancement" to brighten or enhance an image before it is printed.


APPENDIX A: Math Topics

Introduction

This appendix is a cursory overview of some of the basic mathematical concepts that are applicable to image processing. Its purpose is to educate the novice reader, and to put these formulas and concepts into the context of image processing and remote sensing applications.

Summation

A commonly used notation throughout this and other discussions is the Sigma (Σ), used to denote a summation of values.

For example, the notation

Σ i,  summed over i = 1 to 10

is the sum of all values of i, ranging from 1 to 10, which equals:

1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 = 55


Similarly, the value i may be a subscript, which denotes an ordered set of values. For example,

Σ Qi, summed over i = 1 to 4, equals 3 + 5 + 7 + 2 = 17

where:

Q1 = 3
Q2 = 5
Q3 = 7
Q4 = 2

Statistics

Histogram

In ERDAS IMAGINE image data files (.img), each data file value (defined by its row, column, and band) is a variable. IMAGINE supports the following data types:

• 1, 2, and 4-bit

• 8, 16, and 32-bit signed

• 8, 16, and 32-bit unsigned

• 32 and 64-bit floating point

• 64 and 128-bit complex floating point

Distribution, as used in statistics, is the set of frequencies with which an event occurs, or that a variable will have a particular value.

A histogram is a graph of data frequency or distribution. For a single band of data, the horizontal axis of a histogram is the range of all possible data file values. The vertical axis is the number of pixels that have each data value.


Figure 179: Histogram

Figure 179 shows the histogram for a band of data in which Y pixels have data value X. For example, in this graph, 300 pixels (y) have the data file value of 100 (x).

Bin Functions

Bins are used to group ranges of data values together for better manageability. Histograms and other descriptor columns for 1, 2, 4, and 8-bit data are easy to handle, since they contain a maximum of 256 rows. However, to have a row in a descriptor table for every possible data value in floating point, complex, and 32-bit integer data would yield an enormous amount of information. Therefore, the bin function is provided to serve as a data reduction tool.

Example of a Bin Function

Suppose a user has a floating point data layer with values ranging from 0.0 to 1.0. The user could set up a descriptor table of 100 rows, with each row or bin corresponding to a data range of .01 in the layer.

The bins would look like the following:

Bin Number    Data Range
0             X < 0.01
1             0.01 ≤ X < 0.02
2             0.02 ≤ X < 0.03
.             .
.             .
.             .
98            0.98 ≤ X < 0.99
99            0.99 ≤ X

Then, for example, row 23 of the histogram table would contain the number of pixels in the layer whose value fell between 0.23 and 0.24.


Types of Bin Functions

The bin function establishes the relationship between data values and rows in the descriptor table. There are four types of bin functions used in ERDAS IMAGINE image layers:

• DIRECT — one bin per integer value. Used by default for 1, 2, 4, and 8-bit integer data, but may be used for other data types as well. The direct bin function may include an offset for negative data or data in which the minimum value is greater than zero.

For example, a direct bin with 900 bins and an offset of -601 would look like the following:

Bin Number Data Range

0 X ≤ -600.5

1 -600.5 < X ≤ -599.5

.

.

.

599 -2.5 < X ≤ -1.5

600 -1.5 < X ≤ -0.5

601 -0.5 < X < 0.5

602 0.5 ≤ X < 1.5

603 1.5 ≤ X < 2.5

.

.

.

898 296.5 ≤ X < 297.5

899 297.5 ≤ X


• LINEAR — establishes a linear mapping between data values and bin numbers, as in our first example, mapping the data range 0.0 to 1.0 to bin numbers 0 to 99 (a sketch of the LINEAR and LOG mappings follows this list).

The bin number is computed by:

bin = numbins * (x - min) / (max - min)

if (bin < 0) bin = 0

if (bin >= numbins) bin = numbins - 1

where:

bin = resulting bin number

numbins = number of bins

x = data value

min = lower limit (usually minimum data value)

max = upper limit (usually maximum data value)

• LOG — establishes a logarithmic mapping between data values and bin numbers.

The bin number is computed by:

bin = numbins * (ln (1.0 + ((x - min)/(max - min)))/ ln (2.0))

if (bin < 0) bin = 0

if (bin >= numbins) bin = numbins - 1

• EXPLICIT — explicitly defines mapping between each bin number and data range.
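The LINEAR and LOG mappings described above can be sketched as follows (illustrative Python, not the IMAGINE implementation; the clamping mirrors the pseudocode above):

    import math

    def linear_bin(x, numbins, lo, hi):
        """LINEAR bin function: linear mapping of a data value to a bin number."""
        b = int(numbins * (x - lo) / (hi - lo))
        return min(max(b, 0), numbins - 1)

    def log_bin(x, numbins, lo, hi):
        """LOG bin function: logarithmic mapping of a data value to a bin number."""
        b = int(numbins * math.log(1.0 + (x - lo) / (hi - lo)) / math.log(2.0))
        return min(max(b, 0), numbins - 1)

    # The earlier example: values from 0.0 to 1.0 grouped into 100 bins.
    print(linear_bin(0.235, 100, 0.0, 1.0))   # 23
    print(log_bin(0.235, 100, 0.0, 1.0))      # 30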


Mean

The mean (µ) of a set of values is its statistical average, such that, if Qi represents a set of k values:

µ = (Q1 + Q2 + Q3 + ... + Qk) / k

or

µ = Σ (Qi / k),  summed over i = 1 to k

The mean of data with a normal distribution is the value at the peak of the curve—the point where the distribution balances.

Normal Distribution

Our general ideas about an average, whether it be average age, average test score, or the average amount of spectral reflectance from oak trees in the spring, are made visible in the graph of a normal distribution, or bell curve.

Figure 180: Normal Distribution

Average usually refers to a central value on a bell curve, although all distributions have averages. In a normal distribution, most values are at or near the middle, as shown by the peak of the bell curve. Values that are more extreme are more rare, as shown by the tails at the ends of the curve.

The Normal Distributions are a family of bell shaped distributions that turn up frequently under certain special circumstances. For example, a normal distribution would occur if one were to compare the bands in a desert image. The bands would be very similar, but would vary slightly.


Each Normal Distribution uses just two parameters, σ and µ, to control the shape and location of the resulting probability graph through the equation:

f(x) = [ 1 / (σ √(2π)) ] e^( -(x - µ)² / (2σ²) )

where

x = the quantity whose distribution is being approximated

π and e = famous mathematical constants

The parameter, µ, controls how much the bell is shifted horizontally so that its average will match the average of the distribution of x, while σ adjusts the width of the bell to try to encompass the spread of the given distribution. In choosing to approximate a distribution by the nearest of the Normal Distributions, we describe the many values in the bin function of its distribution with just two parameters. It is a significant simplification that can greatly ease the computational burden of many operations, but like all simplifications, it reduces the accuracy of the conclusions we can draw.

The normal distribution is the most widely encountered model for probability. Many natural phenomena can be predicted or estimated according to “the law of averages” that is implied by the bell curve (Larsen and Marx 1981).

A normal distribution in remotely sensed data is meaningful—it is a sign that some characteristic of an object can be measured by the average amount of electromagnetic radiation that the object reflects. This relationship between the data and a physical scene or object is what makes image processing applicable to various types of land analysis.

The mean and standard deviation are often used by computer programs that process and analyze image data.


Variance

The mean of a set of values locates only the average value—it does not adequately describe the set of values by itself. It is helpful to know how much the data varies from its mean. However, a simple average of the differences between each value and the mean equals zero in every case, by definition of the mean. Therefore, the squares of these differences are averaged so that a meaningful number results (Larsen and Marx 1981).

In theory, the variance is calculated as follows:

VarQ = E ⟨ (Q - µQ)² ⟩

where:

E = expected value (weighted average)
2 = squared to make the distance a positive number

In practice, the use of this equation for variance does not usually reflect the exact nature of the values that are used in the equation. These values are usually only samples of a large data set, and therefore, the mean and variance of the entire data set are estimated, not known.

The equation used in practice is shown below. This is called the “minimum variance unbiased estimator” of the variance, or the sample variance (notated σ²):

σQ² ≈ [ Σ (Qi - µQ)² ] / (k - 1),  summed over i = 1 to k

where:

i = a particular pixel
k = the number of pixels (the higher the number, the better the approximation)

The theory behind this equation is discussed in chapters on “Point Estimates” and “Sufficient Statistics,” and covered in most statistics texts.

NOTE: The variance is expressed in units squared (e.g., square inches, square data values, etc.), so it may result in a number that is much higher than any of the original values.


Standard Deviation

Since the variance is expressed in units squared, a more useful value is the square root of the variance, which is expressed in units and can be related back to the original values (Larsen and Marx 1981). The square root of the variance is the standard deviation.

Based on the equation for sample variance (s²), the sample standard deviation (sQ) for a set of values Q is computed as follows:

sQ = √( [ Σ (Qi - µQ)² ] / (k - 1) ),  summed over i = 1 to k

In any distribution:

• approximately 68% of the values are within one standard deviation of µ: that is, between µ-s and µ+s

• more than 1/2 of the values are between µ-2s and µ+2s

• more than 3/4 of the values are between µ-3s and µ+3s

An example of a simple application of these rules is seen in the ERDAS IMAGINE Viewer. When 8-bit data are displayed in the Viewer, IMAGINE automatically applies a 2 standard deviation stretch that remaps all data file values between µ-2s and µ+2s (more than 1/2 of the data) to the range of possible brightness values on the display device.

Standard deviations are used because the lowest and highest data file values may be much farther from the mean than 2s.

For more information on contrast stretch, see "CHAPTER 5: Enhancement."
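A small sketch of these statistics and of a 2 standard deviation stretch on a list of sample data file values (plain Python; the sample values are made up for illustration):

    def mean(values):
        return sum(values) / len(values)

    def sample_std(values):
        m = mean(values)
        return (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5

    data = [98, 100, 103, 107, 110, 112, 115, 121]
    m, s = mean(data), sample_std(data)

    def stretch(v, out_max=255):
        """Remap mu - 2s .. mu + 2s onto 0 .. out_max, clipping the tails."""
        lo, hi = m - 2 * s, m + 2 * s
        scaled = (v - lo) / (hi - lo) * out_max
        return int(min(max(scaled, 0), out_max))

    print(round(m, 2), round(s, 2))
    print([stretch(v) for v in data])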


Parameters

As described above, the standard deviation describes how a fixed percentage of the data varies from the mean. The mean and standard deviation are known as parameters, which are sufficient to describe a normal curve (Johnston 1980).

When the mean and standard deviation are known, they can be used to estimate other calculations about the data. In computer programs, it is much more convenient to estimate calculations with a mean and standard deviation than it is to repeatedly sample the actual data.

Algorithms that use parameters are parametric. The closer that the distribution of the data resembles a normal curve, the more accurate the parametric estimates of the data will be. ERDAS IMAGINE classification algorithms that use signature files (.sig) are parametric, since the mean and standard deviation of each sample or cluster are stored in the file to represent the distribution of the values.

Covariance

In many image processing procedures, the relationships between two bands of data are important. Covariance measures the tendencies of data file values in the same pixel, but in different bands, to vary with each other, in relation to the means of their respective bands. These bands must be linear.

Theoretically speaking, whereas variance is the average square of the differences between values and their mean in one band, covariance is the average product of the differences of corresponding values in two different bands from their respective means. Compare the following equation for covariance to the previous one for variance:

CovQR = E ⟨ (Q - µQ)(R - µR) ⟩

where:

Q and R = data file values in two bands
E = expected value

In practice, the sample covariance is computed with this equation:

CQR ≈ [ Σ (Qi - µQ)(Ri - µR) ] / k,  summed over i = 1 to k

where:

i = a particular pixel
k = the number of pixels

Like variance, covariance is expressed in units squared.


Covariance Matrix

The covariance matrix is an n × n matrix that contains all of the variances and covariances within n bands of data. Below is an example of a covariance matrix for 4 bands of data:

         band A    band B    band C    band D
band A   VarA      CovBA     CovCA     CovDA
band B   CovAB     VarB      CovCB     CovDB
band C   CovAC     CovBC     VarC      CovDC
band D   CovAD     CovBD     CovCD     VarD

The covariance matrix is symmetrical—for example, CovAB = CovBA.

The covariance of one band of data with itself is the variance of that band:

CQQ ≈ [ Σ (Qi - µQ)(Qi - µQ) ] / (k - 1)  =  [ Σ (Qi - µQ)² ] / (k - 1),  summed over i = 1 to k

Therefore, the diagonal of the covariance matrix consists of the band variances.

The covariance matrix is an organized format for storing variance and covariance information on a computer system, so that it needs to be computed only once. Also, the matrix itself can be used in matrix equations, as in principal components analysis.

See "Matrix Algebra" on page 462 for more information on matrices.
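A sketch of how the band variances and covariances are assembled into this matrix for a handful of pixels (illustrative Python; the k - 1 divisor of the sample variance is used throughout so that the diagonal equals the band variances):

    def sample_cov(p, q):
        """Sample covariance of two equal-length lists of data file values."""
        k = len(p)
        mp, mq = sum(p) / k, sum(q) / k
        return sum((a - mp) * (b - mq) for a, b in zip(p, q)) / (k - 1)

    # Data file values of the same five pixels in three bands (made-up numbers).
    bands = [
        [10, 12, 15, 20, 23],   # band A
        [30, 31, 36, 40, 42],   # band B
        [55, 50, 48, 45, 40],   # band C
    ]

    n = len(bands)
    cov_matrix = [[sample_cov(bands[i], bands[j]) for j in range(n)] for i in range(n)]

    for row in cov_matrix:      # symmetric; the diagonal holds the variances
        print([round(c, 2) for c in row])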


Dimensionality of Data

Spectral dimensionality is determined by the number of sets of values being used in a process. In image processing, each band of data is a set of values. An image with four bands of data is said to be 4-dimensional (Jensen 1996).

NOTE: The letter n is used consistently in this documentation to stand for the number of dimensions (bands) of image data.

Measurement Vector

The measurement vector of a pixel is the set of data file values for one pixel in all n bands. Although image data files are stored band-by-band, it is often necessary to extract the measurement vectors for individual pixels.

Figure 181: Measurement Vector

According to Figure 181:

i = a particular band
Vi = the data file value of the pixel in band i

then the measurement vector for this pixel is:

    | V1 |
    | V2 |
    | V3 |

(for n = 3, as shown in Figure 181)

See "Matrix Algebra" on page 462 for an explanation of vectors.


Mean Vector

When the measurement vectors of several pixels are analyzed, a mean vector is often calculated. This is the vector of the means of the data file values in each band. It has n elements.

Figure 182: Mean Vector

According to Figure 182:

i = a particular band
µi = the mean of the data file values of the pixels being studied, in band i

then the mean vector for this training sample is:

    | µ1 |
    | µ2 |
    | µ3 |

(for n = 3, as shown in Figure 182)


Feature Space

Many algorithms in image processing compare the values of two or more bands of data. The programs that perform these functions abstractly plot the data file values of the bands being studied against each other. An example of such a plot in two dimensions (two bands) is illustrated in Figure 183.

Figure 183: Two Band Plot

NOTE: If the image is 2-dimensional, the plot doesn’t always have to be 2-dimensional.

In Figure 183, the pixel that is plotted has a measurement vector of:

    | 180 |
    |  85 |

The graph above implies physical dimensions for the sake of illustration. Actually, these dimensions are based on spectral characteristics, represented by the digital image data. As opposed to physical space, the pixel above is plotted in feature space. Feature space is an abstract space that is defined by spectral units, such as an amount of electromagnetic radiation.


Feature Space Images

Several techniques for the processing of multiband data make use of a two-dimensional histogram, or feature space image. This is simply a graph of the data file values of one band of data against the values of another band.

Figure 184: Two Band Scatterplot

The scatterplot pictured in Figure 184 can be described as a simplification of a 2-dimensional histogram, where the data file values of one band have been plotted against the data file values of another band. This figure shows that when the values in the bands being plotted have jointly normal distributions, the feature space forms an ellipse.

This ellipse is used in several algorithms—specifically, for evaluating training samples for image classification. Also, two-dimensional feature space images with ellipses are helpful to illustrate principal components analysis.

See "CHAPTER 5: Enhancement" for more information on principal components analysis, "CHAPTER 6: Classification" for information on training sample evaluation, and "CHAPTER 8: Rectification" for more information on orders of transformation.


n-Dimensional Histogram

If 2-dimensional data can be plotted on a 2-dimensional histogram, as above, then n-dimensional data can, abstractly, be plotted on an n-dimensional histogram, defining n-dimensional spectral space.

Each point on an n-dimensional scatterplot has n coordinates in that spectral space — a coordinate for each axis. The n coordinates are the elements of the measurement vector for the corresponding pixel.

In some image enhancement algorithms (most notably, principal components), the points in the scatterplot are replotted, or the spectral space is redefined in such a way that the coordinates are changed, thus transforming the measurement vector of the pixel.

When all data sets (bands) have jointly normal distributions, the scatterplot forms a hyperellipsoid. The prefix “hyper” refers to an abstract geometrical shape, which is defined in more than three dimensions.

NOTE: In this documentation, 2-dimensional examples are used to illustrate concepts that apply to any number of dimensions of data. The 2-dimensional examples are best suited for creating illustrations to be printed.

Spectral Distance

Euclidean spectral distance is distance in n-dimensional spectral space. It is a number that allows two measurement vectors to be compared for similarity. The spectral distance between two pixels can be calculated as follows:

D = √( Σ (di - ei)² ),  summed over i = 1 to n

where:

D = spectral distance
n = number of bands (dimensions)
i = a particular band
di = data file value of pixel d in band i
ei = data file value of pixel e in band i

This is the equation for Euclidean distance—in two dimensions (when n = 2), it can be simplified to the Pythagorean Theorem (c² = a² + b²), or in this case:

D² = (di - ei)² + (dj - ej)²
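A sketch of the spectral distance calculation for two measurement vectors with any number of bands (plain Python, illustrative values):

    def spectral_distance(d, e):
        """Euclidean spectral distance between two n-band measurement vectors."""
        return sum((di - ei) ** 2 for di, ei in zip(d, e)) ** 0.5

    # Two pixels measured in three bands.
    print(spectral_distance([180, 85, 40], [160, 70, 55]))   # about 29.2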


Polynomials

A polynomial is a mathematical expression consisting of variables and coefficients. A coefficient is a constant, which is multiplied by a variable in the expression.

Order

The variables in polynomial expressions can be raised to exponents. The highest exponent in a polynomial determines the order of the polynomial.

A polynomial with one variable, x, takes this form:

A + Bx + Cx² + Dx³ + ... + Ωxᵗ

where:

A, B, C, D ... Ω = coefficients
t = the order of the polynomial

NOTE: If one or all of A, B, C, D ... are 0, then the nature, but not the complexity, of the transformation is changed. Mathematically, Ω cannot be 0.

A polynomial with two variables, x and y, takes this form:

A + Bx + Cy + Dx² + Exy + Fy² + ... + Qxⁱyʲ + ... + Wyᵗ

where:

A, B, C, D, E, F ... Q ... W = coefficients
t = the order of the polynomial
i and j = exponents

All combinations of xⁱ times yʲ are used in the polynomial expression, such that:

i + j ≤ t

A numerical example of 3rd-order transformation equations for x and y is:

xo = 5 + 4x - 6y + 10x² - 5xy + 1y² + 3x³ + 7x²y - 11xy² + 4y³

yo = 13 + 12x + 4y + 1x² - 21xy + 1y² - 1x³ + 2x²y + 5xy² + 12y³
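These equations can be evaluated directly. A short Python sketch using the example coefficients above:

    def transform_3rd_order(x, y):
        """Evaluate the example 3rd-order transformation at source coordinates (x, y)."""
        xo = (5 + 4*x - 6*y + 10*x**2 - 5*x*y + 1*y**2
              + 3*x**3 + 7*x**2*y - 11*x*y**2 + 4*y**3)
        yo = (13 + 12*x + 4*y + 1*x**2 - 21*x*y + 1*y**2
              - 1*x**3 + 2*x**2*y + 5*x*y**2 + 12*y**3)
        return xo, yo

    print(transform_3rd_order(2.0, 1.0))   # rectified coordinates for source (2, 1)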

Polynomial equations are used in image rectification to transform the coordinates of an input file to the coordinates of another system. The order of the polynomial used in this process is the order of transformation.

Transformation Matrix

In the case of first order image rectification, the variables in the polynomials (x and y) are the source coordinates of a ground control point (GCP). The coefficients are computed from the GCPs and stored as a transformation matrix.

A detailed discussion of GCPs, orders of transformation, and transformation matrices is included in "CHAPTER 8: Rectification."


Matrix Algebra

A matrix is a set of numbers or values arranged in a rectangular array. If a matrix has i rows and j columns, it is said to be an i by j matrix.

A one-dimensional matrix, having one column (i by 1), is one of many kinds of vectors. For example, the measurement vector of a pixel is an n-element vector of the data file values of the pixel, where n is equal to the number of bands.

See "CHAPTER 5: Enhancement" for information on eigenvectors.

Matrix Notation

Matrices and vectors are usually designated with a single capital letter, such as M. For example:

    M = |  2.2   4.6 |
        |  6.1   8.3 |
        | 10.0  12.4 |

One value in the matrix M would be specified by its position, which is its row and column (in that order) in the matrix. One element of the array (one value) is designated with a lower case letter and its position:

m3,2 = 12.4

With column vectors, it is simpler to use only one number to designate the position. For example, for the column vector

    G = |  2.8 |
        |  6.5 |
        | 10.1 |

the second element is designated:

G2 = 6.5


Matrix Multiplication

A simple example of the application of matrix multiplication is a 1st-order transformation matrix. The coefficients are stored in a 2 by 3 matrix:

    C = | a1  a2  a3 |
        | b1  b2  b3 |

Then, where:

xo = a1 + a2xi + a3yi
yo = b1 + b2xi + b3yi

xi and yi = source coordinates
xo and yo = rectified coordinates
the coefficients of the transformation matrix are as above

The above could be expressed by a matrix equation:

R = CS, or

    | xo |     | a1  a2  a3 |   | 1  |
    | yo |  =  | b1  b2  b3 | × | xi |
                                | yi |

where:

S = a matrix of the source coordinates (3 by 1)
C = the transformation matrix (2 by 3)
R = the matrix of rectified coordinates (2 by 1)

The sizes of the matrices are shown above to demonstrate a rule of matrix multiplication. To multiply two matrices, the first matrix must have the same number of columns as the second matrix has rows. For example, if the first matrix is a by b, and the second matrix is m by n, then b must equal m, and the product matrix will have the size a by n.
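The multiplication rule can be sketched for the 1st-order case above: the 2 by 3 coefficient matrix C times the 3 by 1 source matrix S yields the 2 by 1 rectified matrix R (the coefficient values here are made up for illustration):

    def matmul(f, g):
        """Multiply an (a by b) matrix by a (b by n) matrix, row by column."""
        a, b, n = len(f), len(g), len(g[0])
        return [[sum(f[i][k] * g[k][j] for k in range(b)) for j in range(n)]
                for i in range(a)]

    C = [[10.0, 0.5, 0.0],    # a1, a2, a3
         [20.0, 0.0, 0.5]]    # b1, b2, b3

    xi, yi = 100.0, 200.0
    S = [[1.0], [xi], [yi]]   # source coordinates as a 3 by 1 matrix

    R = matmul(C, S)          # 2 by 1 matrix of rectified coordinates
    print(R)                  # [[60.0], [120.0]]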


The formula for multiplying two matrices is:

(fg)ij = Σ fik gkj,  summed over k = 1 to m

for every i from 1 to a
for every j from 1 to n

where:

i = a row in the product matrix
j = a column in the product matrix
f = an (a by b) matrix
g = an (m by n) matrix (b must equal m)

fg is an a by n matrix.

Transposition

The transposition of a matrix is derived by interchanging its rows and columns. Transposition is denoted by T, as in the example below (Cullen 1972):

    G = |  2   3 |
        |  6   4 |
        | 10  12 |

    GT = | 2  6  10 |
         | 3  4  12 |

For more information on transposition, see "Computing Principal Components" in "CHAPTER 5: Enhancement" and "Classification Decision Rules" in "CHAPTER 6: Classification."

fg( )ij f ikgkjk 1=

m

∑=

G2 3

6 4

10 12

=

GT 2 6 10

3 4 12=
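The multiplication rule and transposition can be checked with a short NumPy sketch (illustrative only; the coefficient and coordinate values are arbitrary placeholders, not real GCP-derived values):

    import numpy as np

    # R = CS: a (2 x 3) transformation matrix times a (3 x 1) source matrix.
    C = np.array([[2.0, 0.5, 0.1],      # a1, a2, a3
                  [3.0, 0.2, 0.4]])     # b1, b2, b3
    S = np.array([[1.0],                # 1
                  [10.0],               # xi
                  [20.0]])              # yi
    R = C @ S                           # result is (2 x 1): [xo, yo]
    print(R.ravel())

    # Transposition interchanges rows and columns, as in the G example above.
    G = np.array([[2, 3], [6, 4], [10, 12]])
    print(G.T)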


APPENDIX B
File Formats and Extensions

Introduction

This appendix describes all of the file formats and extensions that are used within ERDAS IMAGINE software. However, this does not include files that are introduced into IMAGINE by third party products. Please refer to the product's documentation for information on those files.

Topics include:

• IMAGINE file extensions

• .img Files

• Hierarchical File Architecture (HFA) System

• IMAGINE Machine Independent Format (MIF)

• MIF Data Dictionary

ERDAS IMAGINE File Extensions

A file name extension is a suffix, usually preceded by a period, that often identifies the type of data in a file. ERDAS IMAGINE automatically assigns the default extension when the user is prompted to enter a file name. The part of the file name before the extension can be used in a manner that is helpful to the user and others. The files used within the ERDAS IMAGINE system, their extensions, and their formats are conventions of ERDAS, Inc.

All of the types of files used within IMAGINE are listed in Table 31 by their extensions. Files with an ASCII format are simply text files which can be viewed with the IMAGINE Text Editor utility. IMAGINE HFA (hierarchical file architecture) files can be viewed with the IMAGINE HfaView utility. The list in Table 31 does not include files that are used by third party products. Please refer to the product's documentation for information on those files.


Table 31: ERDAS IMAGINE File Extensions

Extension   Format   Description

.aoi        HFA      Area of Interest file — stores a user-defined area of an .img file. It includes everything that an .img file contains.

.aux        HFA      Auxiliary information (Projection, Attributes) — used to augment a directly readable format (e.g., SGI FIT) when the format does not handle such information.

.atx        ASCII    Text form of .aux file — used for input purposes only.

.cff        HFA      Coefficient file — stores transformation matrices created by rectifying a file.
                     Note: This format is now obsolete. Use .gms instead.

.chp        HFA      Image chip — a greatly reduced preview of a raster image.

.clb        ASCII    Color Library file

.eml        ASCII    ERDAS Macro Language file — stores scripts which control the operation of the IMAGINE graphical user interface. New .eml files can be created with the ERDAS Macro Language and incorporated into the IMAGINE interface.

.fft        HFA      Fast Fourier Transform file — stores raster layers in a compressed format, created by performing a Fast Fourier Transformation on an .img file.

.flb        HFA      Fill styles Library file

.fls        ASCII    File List file — stores the list of files that is used for mosaicking images.

.fsp.img    HFA      Feature Space Image file — stores the same information as an .img file plus the information required to create the feature space image (e.g., transformation).

.gcc        HFA      Ground Control Coordinates file — stores ground control points.

.gmd        ASCII    Graphical Model file — stores scripts that draw the graphical model (i.e., flow chart), created with the Spatial Modeler Model Maker.

.gms        HFA      Geometric Model file — contains the parameters of a transformation and, optionally, the transformation itself for any geometric model.

.ifft.img   HFA      Inverse Fast Fourier Transform file — stores raster layers created by performing an inverse Fast Fourier Transformation on an .img file.

.img        HFA      Image file — stores single or multiple raster layers, contrast and color tables, descriptor tables, pyramid layers, and file information. See the .img Files section in this chapter for more information on .img files.

.klb        ASCII    Kernel Library file — stores convolution kernels.

.llb        HFA      Line style Library file


.mag.img            HFA              Magnitude Image file — an .img file that stores the magnitude of a Fourier Transform image file.

.map                HFA (ver. 8.3)
                    ASCII (pre-8.3)  Map file — stores map frames created with Map Composer.

.map.ovr            HFA              Map/Overlay file — stores annotation layers created in Map Composer outside of the map frame (e.g., legends, grids, lines, scales).

.mdl                ASCII            Model file — stores Spatial Modeler scripts. It does not store any graphical model (i.e., flow chart) information. This file is necessary for running a model. If only a .gmd file exists, then a temporary .mdl file is created when a model is run.

.olh                FrameMaker       On-Line Help file — stores the IMAGINE On-Line Help documentation.

.ovr                HFA              Overlay file — stores an annotation layer that was created in a map frame, in a blank Viewer, or on an image in a Viewer.

.pdf                ASCII            Preference Definition file — stores information that is used by the Preference Editor.

.plb                ASCII            Projection Library file

.plt                ASCII            Plot file — stores the names of the panel files produced by MapMaker. MapMaker processes the .map file to produce one or more map panels. Each panel consists of two files. One is the name file with the extension .plt.panel_xx.name, which names the various fonts used in the panel along with name of the actual file that contains the panel output. The other is the panel file itself with the .plt.panel_xx extension. The .plt file contains the complete path names (one per line) of the panel name files.

.plt.panel_xx.name  ASCII            Panel Name file — stores the name of the panel data file and any fonts used by the panel (the font names are present only for PostScript output).

.plt.panel_xx       ASCII/HFA        Panel Data file — stores actual processed data output by MapMaker. If the destination device was a PostScript device, then this is an ASCII file that contains PostScript commands. If the output device was a non-PostScript raster device, then this file is an HFA file that contains one or three layers of raster imagery. It can be viewed with the Viewer.

.pmdl               ASCII            Permanent Model files — stores the permanent version of the .mdl files that are provided by ERDAS.

.sig                HFA              Signature file — stores a signature set, which was created by the Classification Signature Editor or imported from ERDAS Version 7.5.

.sml                HFA              Symbol Library file — stores annotation symbols for the symbol library.

.tlb                HFA              Text style Library file


ERDAS IMAGINE .img Files

ERDAS IMAGINE uses .img files to store raster data. These files use the HFA structure. Figure 185 shows the different objects of data stored in an .img file. The user's file may not have all of this information present, because these objects are not definitive, that is, data may be added or removed when a process is run (e.g., add ground control points). Also, other sources, such as programs incorporated by third party vendors, may add objects to the .img file.

The information stored in an .img file can be used to help the user visualize how different processes change the data. For example, if the user runs a filter over the file and creates a new file, the statistics for the two files can be compared to see how the filter changed the data.

Figure 185: Examples of Objects Stored in an .img File

[Figure 185 diagram: an .img file contains Ground Control Points, a Covariance Matrix, Sensor Information, and Layer_1 through Layer_n Information; each layer's information includes Attribute Data, Statistics, Map Information, Projection Information, Pyramid Layers, and Data File Values.]

The objects of an .img file are described in more detail on the following pages.

Use the IMAGINE Image Information utility or the HfaView utility to view the information that is stored in an .img file.

The information in the Image Info and HfaView utilities should be modified with caution because IMAGINE programs use this information for data input. If it is incorrect, there will be errors in the output data for these programs.


Sensor Information

When importing satellite imagery, there is usually a header file on the tape or CD-ROM that is separate from the data. This object contains ephemeris information about the sensor, such as:

• date and time scene was scanned

• calibration information of the sensor

• orientation of the sensor

• original dimensions for data

• data storage format

• number of bands

The data presented are dependent upon the sensor. Each sensor provides different types of information. The sensor object is named:

<format type>_Header

Some examples of the various sensor types are listed in the chart below.

Use the HfaView utility to view the header file.

Sensor Sensor Object

ADRG ADRG_Header

Landsat TM TM_Header

NOAA AVHRR AVHRR_Header

RADARSAT RADARSAT_Header

SPOT SPOT_Header


Raster Layer Information

Each raster layer within an .img file has its own ancillary data, including the following parameters:

• height and width (rows and columns)

• layer type (continuous or thematic)

• data type (signed 8-bit, floating point, etc.)

• compression (see below)

• block size (see below)

This information is usually the same for each layer.

These parameters are defined when the raster layer is created or imported into IMAGINE. Use the Image Information utility to view the parameters.

Compression

When importing a file into IMAGINE, the user has the option to compress the data. Currently, IMAGINE uses the run-length compression method. The amount that the data are compressed depends on data in the layer. For example, if the layer contains large, homogeneous areas (e.g., blocks of water), then compressing the layer would save on disk space. However, if the layer is very heterogeneous, run-length compression would not save much disk space.

Data will be compressed only when it is stored. IMAGINE automatically uncompresses data before the layer is run through a process. The time that it takes to uncompress the data is minimal.

Use the Import function to compress data when it is imported into IMAGINE.

Block Size

IMAGINE software uses a tiled format to store raster layers. The tiled format allows raster layers to be displayed and resampled quickly. The raster layer is divided into tiles (i.e., blocks) when IMAGINE creates or imports an .img file. The size of this block can be defined when the user either creates the file or imports it. The default block size is 64 pixels by 64 pixels.

NOTE: The default block size is acceptable for most applications and should not need to be changed.


Figure 186: Example of a 512 x 512 Layer with a Block Size of 64 x 64 Pixels



Attribute Data

Continuous Raster Layer

The attribute table object for a continuous raster layer, by default, includes the following information:

• histogram

• contrast table

Thematic Raster Layer

For a thematic raster layer, the attribute table object, by default, includes the following information:

• histogram

• class names

• class values

• color table (red, green, and blue values).

Attribute data can also include additional information for thematic raster layers, such as the area, opacity, and attributes for each class.

Use the Raster Attribute Editor to view or modify the contents of these attribute tables.

Statistics

The following statistics are calculated for each raster layer:

• minimum and maximum data file values

• mean of the data file values

• median of the data file values

• mode of the data file values

• standard deviation of the data file values

See "APPENDIX A: Math Topics" for more information on these statistics.

These statistics are based on the data file values of the pixels in the layer. Knowing the statistics for a layer will aid the user in determining the process to be used in extracting the features that are of the most interest. For example, if a user is planning to use the ISODATA classifier, the statistics could be used to see if the layer has a normal distribution of data, which is preferred.

If they do not exist, statistics should be created for a layer. Certain Viewer functions (e.g., contrast tools) will not run without layer statistics. Rebuilding statistics for a raster layer may be necessary. For example, if the user does not want to include zero file values in the statistics calculation (and they are currently included), the statistics could be rebuilt without zero file values.


Use the Image Information utility to view, create, or rebuild statistics for a raster layer. If the statistics do not exist, the information in the Image Information utility will be inactive and shaded.
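Outside of IMAGINE, the effect of excluding zero file values can be sketched with NumPy (illustrative only, with made-up data file values):

    import numpy as np

    layer = np.array([[0, 0, 12, 40],
                      [7, 0, 25, 33]], dtype=np.float64)   # toy data file values

    nonzero = layer[layer != 0]          # drop zero file values before computing
    print("min/max:", nonzero.min(), nonzero.max())
    print("mean:", nonzero.mean())
    print("median:", np.median(nonzero))
    print("std dev:", nonzero.std())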

Map Information

Map information for a raster layer will be created only when the layer has been georeferenced. If the layer has been georeferenced, the following information will be stored in the raster layer:

• upper left X,Y coordinates

• pixel size

• map unit used for measurement (e.g., meters, inches, feet)

See "CHAPTER 11: Cartography" for information on map data.

The user should add or change the map information only when he or she has valid map information to enter. If incorrect information is entered, then the data for this file will no longer be valid. Since IMAGINE programs use these data, they must be correct.

When you import a file, the map information may not have imported correctly. If this occurs, use the Image Info utility to update the information.

Use the Image Information utility to view, add, or change map information for a raster layer in an .img file.


Map Projection Information

If the raster layer has been georeferenced, the following projection information will be generated for the layer:

• map projection

• spheroid

• zone number

See "APPENDIX C: Map Projections" for information on map projections.

Do not add or change the map projection unless the projection listed in the Image Info utility is incorrect or missing. This may occur when you import a file.

Changing the map projection with the Projections Editor dialog will not rectify the layer. If the user enters incorrect information, then the data for this layer will no longer be valid. Since IMAGINE programs use these data, they need to be correct.

Use the Image Information utility to view, add, or change the map projection for a raster layer in an .img file. If the layer has not been georeferenced, the information in the Image Information utility will be inactive and shaded.

Pyramid Layers

IMAGINE gives the user the option to "pyramid" large raster layers for faster processing and display in the Viewer. When the user generates pyramid layers, reduced subsampled raster layers are created from the original raster layer. The number of pyramid layers that are created depends on the size of the raster layer and the block size.

For example, a raster layer that is 4k × 4k pixels could take a long time to display when using the Fit To Window option in the Viewer. Using the Create Pyramid Layers option, IMAGINE would create additional raster layers successively reduced from 4k × 4k, to 2k × 2k, 1k × 1k, 512 × 512, 128 × 128, down to 64 × 64. Then IMAGINE would select the pyramid layer size most appropriate for display in the Viewer window.

See "CHAPTER 4: Image Display" for more information on pyramid layers.

Pyramid layers can be created using the Image Information utility or when the raster layer is imported. You can also use the Image Information utility to delete pyramid layers.
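As a rough sketch of the idea (not the actual IMAGINE reduction logic), the successive pyramid layer sizes can be generated by halving the layer dimensions until they reach the block size:

    # Hedged sketch: successive pyramid-layer dimensions, halving until the
    # block size is reached. IMAGINE's actual reduction steps may differ.
    def pyramid_sizes(rows, cols, block=64):
        sizes = []
        while rows > block and cols > block:
            rows, cols = rows // 2, cols // 2
            sizes.append((rows, cols))
        return sizes

    print(pyramid_sizes(4096, 4096))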


Machine Independent Format

MIF Data Elements

ERDAS IMAGINE uses the Machine Independent Format (MIF) to store data in a fashion which can be read by a variety of machines. This format provides support for converting data between the IMAGINE standard data format and that of the specific host's architecture. Files created using this package on one machine will be readable from another machine with no explicit data translation.

Each MIF file is made up of one or more of the data elements explained below.

EMIF_T_U1 (Unsigned 1-bit Integer)

U1 is for unsigned 1-bit integers (0 - 1). This data type can be used for bitmap images with "yes/no" conditions. When the data are read from a MIF file, they are automatically expanded to give one value per byte in memory. When they are written to the file, they are automatically compressed to place eight values into one output byte.

EMIF_T_U2 (Unsigned 2-bit Integer)

U2 is for unsigned 2-bit integers (0 - 3). This data type can be used for thematic data with 4 or fewer classes. When the data are read from a MIF file, they are automatically expanded to give one value per byte in memory. When they are written to the file, they are automatically compressed to place four values into one output byte.

EMIF_T_U4 (Unsigned 4-bit Integer)

U4 is for unsigned 4-bit integers (0 - 15). This data type can be used for thematic data with 16 or fewer classes. When these data are read from a MIF file, they are automatically expanded to give one value per byte. When they are written to the file, they are automatically compressed to place two values into one output byte.

[Bit layout diagrams: for U1 data, bits 7 through 0 of byte 0 hold values U1_7 through U1_0; for U2 data, byte 0 holds values U2_3 through U2_0 in 2-bit fields starting at bits 7, 5, 3, and 1; for U4 data, byte 0 holds U4_1 in the high nibble and U4_0 in the low nibble.]
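A small sketch (not ERDAS code) of the packing described above, for the U4 case: two 4-bit values per output byte on write, expanded back to one value per byte on read. The nibble ordering follows the layout described above and should be verified against real MIF files:

    def pack_u4(values):
        out = bytearray()
        for i in range(0, len(values), 2):
            lo = values[i] & 0x0F                                      # U4_0 in bits 3-0
            hi = (values[i + 1] & 0x0F) if i + 1 < len(values) else 0  # U4_1 in bits 7-4
            out.append((hi << 4) | lo)
        return bytes(out)

    def unpack_u4(data, count):
        vals = []
        for byte in data:
            vals.append(byte & 0x0F)
            vals.append((byte >> 4) & 0x0F)
        return vals[:count]

    packed = pack_u4([1, 15, 7, 2, 9])
    print(list(packed), unpack_u4(packed, 5))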


EMIF_T_UCHAR (8-bit Unsigned Integer)

This stores an 8-bit unsigned integer. It is most typically used to store characters and raster imagery.

EMIF_T_CHAR (8-bit Signed Integer)

This stores an 8-bit signed integer.

EMIF_T_USHORT (16-bit Unsigned Integer)

This stores a 16-bit unsigned integer, stored in Intel byte order. The least significant byte is stored first.

EMIF_T_SHORT (16-bit Signed Integer)

This stores a 16-bit two's-complement signed integer, stored in Intel byte order. The least significant byte is stored first.



EMIF_T_ENUM (Enumerated Data Types)

This stores an enumerated data type as a 16-bit unsigned integer, stored in Intel byte order. The least significant byte is stored first. The list of strings associated with the type are defined in the data dictionary which is defined below. The first item in the list is indicated by 0.

EMIF_T_ULONG (32-bit Unsigned Integer)

This stores a 32-bit unsigned integer, stored in Intel byte order. The least significant byte is stored first.

EMIF_T_LONG (32-bit Signed Integer)

This stores a 32-bit two's-complement signed integer value, stored in Intel byte order. The least significant byte is stored first.

EMIF_T_PTR (32-bit Unsigned Integer)

This stores a 32-bit unsigned integer, which is used to provide a byte address within the file. Byte 0 is the first byte, byte 1 is the second, etc. This allows for indexing into a 4-Gigabyte file; however, most UNIX systems only allow 2-Gigabyte files.

NOTE: Currently, this element appears in the data dictionary as an EMIF_T_ULONG element. In future versions of the file format, the EMIF_T_PTR will be expanded to an 8-byte format which will allow indexing using 64 bits, which allows addressing of 16 billion Gigabytes of file space.



EMIF_T_TIME (32-bit Unsigned Integer)

This stores a 32-bit unsigned integer, which represents the number of seconds since 00:00:00 1 JAN 1970. This is the standard used in UNIX time keeping. The least significant byte is stored first.

EMIF_T_FLOAT (Single Precision Floating Point)

Single precision floating point values are IEEE floating point values.

s = sign (0 = positive, 1 = negative)
exp = 8 bit excess 127 exponent
fraction = 24 bits of precision (+1 hidden bit)

EMIF_T_DOUBLE (Double Precision Floating Point)

Double precision floating point data are IEEE double precision.

s = sign (0 = positive, 1 = negative)
exp = 11 bit excess 1023 exponent
fraction = 53 bits of precision (+1 hidden bit)



EMIF_T_COMPLEX (Single Precision Complex)

A complex data element has a real part and an imaginary part. Single precision floating point values are IEEE floating point values.

s = sign (0 = positive, 1 = negative)
exp = 8 bit excess 127 exponent
fraction = 24 bits of precision (+1 hidden bit)

Real part: first single precision

Imaginary part: second single precision

EMIF_T_DCOMPLEX (Double Precision Complex)

A complex data element has a real part and an imaginary part. Double precision floating point data are IEEE double precision.

s = sign (0 = positive, 1 = negative)
exp = 11 bit excess 1023 exponent
fraction = 53 bits of precision (+1 hidden bit)

Real part: first double precision

Imaginary part: second double precision



EMIF_T_BASEDATA (Matrix of Numbers)

A Basedata is a generic two dimensional array of values. It can store any of the types of data used by IMAGINE. It is a variable length object whose size is determined by the data type, the number of rows, and the number of columns.

numrows: This indicates the number of rows of data in this item.

numcolumns: This indicates the number of columns of data in this item.

datatype: This indicates the type of data stored here. The types are:

    DataType             BytesPerObject

    0   EMIF_T_U1        1/8
    1   EMIF_T_U2        1/4
    3   EMIF_T_U4        1/2
    4   EMIF_T_UCHAR     1
    5   EMIF_T_CHAR      1
    6   EMIF_T_USHORT    2
    7   EMIF_T_SHORT     2
    8   EMIF_T_ULONG     4
    9   EMIF_T_LONG      4
    10  EMIF_T_FLOAT     4
    11  EMIF_T_DOUBLE    8
    12  EMIF_T_COMPLEX   8
    13  EMIF_T_DCOMPLEX  16

objecttype: This indicates the object type of the data. This is used in the IMAGINE Spatial Modeler. The valid values are:

    0  SCALAR: This will not normally be the case, since a scalar has a single value.
    1  TABLE: This indicates that the object is an array. The numcolumns should be 1.
    2  MATRIX: This indicates the number of rows and columns is greater than one. This is used for Coefficient matrices, etc.
    3  RASTER: This indicates that the number of rows and columns is greater than one and the data are just a part of a larger raster object. This would be the case for blocks of images which are written to the file.


data: This is the actual data. The number of bytes is given as:

    bytecount = numrows * numcolumns * BytesPerObject

EMIF_M_INDIRECT (Indication of Indirect Data)

This is used when the following data belongs to an indirect reference of data. For example, when one object is defined by referring to another object.

The first four bytes provide the object repeat count.

The next four bytes provide the file pointer, which points to the data comprising the object.

EMIF_M_PTR (Indication of Indirect Data)

This is used when the following data belong to an indirect reference of data of variable length. For example, when one object is defined by referring to another object. This is identical in file format to the EMIF_M_INDIRECT element. Its main difference is in the memory resident object which gets created. In the case of the EMIF_M_PTR, the count and data pointer are placed into memory, whereas only the data gets placed into memory when the EMIF_M_INDIRECT element is read in. (The size of the object is inherent in the data definitions.)


The first four bytes provide the object repeat count.

The next four bytes provide the file pointer which points to the data comprising the object.


MIF Data Dictionary

IMAGINE HFA files have a data dictionary that describes the contents of each of the different types of nodes. The dictionary is a compact ASCII string which is usually placed at the end of the file, with a pointer to the start of the dictionary stored in the header of the file.

Each object is defined like a structure in C, and consists of one or more items. Each item is composed of an ItemType and a name. The ItemType indicates the type of data and the name indicates the name by which the item will be known.

The syntax of the dictionary string is:

Dictionary          ObjectDefinition[ObjectDefinition...].

                    The dictionary is one or more ObjectDefinitions terminated by a period. This is the complete collection of object type definitions.

ObjectDefinition    {ItemDefinition[ItemDefinition...]}name,

                    An ObjectDefinition is an ItemDefinition enclosed in braces, followed by a name and terminated by a comma. This is a complete definition of a single object.

ItemDefinition      number:[*|p]ItemType[EnumData]name,

                    An ItemDefinition is a number followed by a colon, followed optionally by either an asterisk or a p, followed by an ItemType, followed optionally by EnumData, followed by an item name, and terminated by a comma. This is the complete definition of a single Item. The * and the p both indicate that when the data are read into memory, they will not be placed directly into the structure being built, but that a new structure will be allocated and filled with the data. The pointer to that structure is placed into the initial structure. The asterisk indicates that the number of items in the indirect object is given by the number in the item definition. The p indicates that the number is variable. In both cases, the count precedes the data in the input stream.

EnumData            number:name,[<name>,...]

                    EnumData is a number, followed by a colon, followed by one or more names each of which is terminated by a comma. The number defines the number of names which will follow. This is the complete set of names associated with an individual enum type.

name                Any sequence of alphanumeric characters excluding the comma.

number              A positive integer number. This is composed of any sequence of these digits: 0,1,2,3,4,5,6,7,8,9.

ItemType            1|2|4|c|C|s|S|l|L|f|d|t|m|M|b|e|o|x

                    This is used to indicate the type of an item. The following table indicates how the characters correspond to one of the basic EMIF_T types.


The following table describes the single character codes used to identify the ItemType in the MIF Dictionary Definition. The Interpretation column describes the type of data indicated by the item type. The Number of Bytes column is the number of bytes that the data type will occupy in the MIF file. If the number of bytes is not fixed, then it is given as dynamic.

ItemType   Interpretation     Number of Bytes

1          EMIF_T_U1          1
2          EMIF_T_U2          1
4          EMIF_T_U4          1
c          EMIF_T_UCHAR       1
C          EMIF_T_CHAR        1
e          EMIF_T_ENUM        2
s          EMIF_T_USHORT      2
S          EMIF_T_SHORT       2
t          EMIF_T_TIME        4
l          EMIF_T_ULONG       4
L          EMIF_T_LONG        4
f          EMIF_T_FLOAT       4
d          EMIF_T_DOUBLE      8
m          EMIF_T_COMPLEX     8
M          EMIF_T_DCOMPLEX    16
b          EMIF_T_BASEDATA    dynamic
o          Previously defined object. This indicates that the description of the following data has been previously defined in the dictionary. This is like using a previously defined structure in a structure definition.   dynamic
x          Defined object for this entry. This indicates that the description of the following data follows. This is like using a structure definition within a structure definition.   dynamic
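As a rough illustration of this grammar (a simplified sketch, not a full dictionary parser), the item definitions of a single object, such as the Ehfa_HeaderTag definition shown later ("16:clabel,1:lheaderPtr"), can be pulled apart in Python. Nested objects, enum lists, and the '*'/'p' indirection markers are not handled here:

    ITEMTYPE = {'1': 'EMIF_T_U1', '2': 'EMIF_T_U2', '4': 'EMIF_T_U4',
                'c': 'EMIF_T_UCHAR', 'C': 'EMIF_T_CHAR', 'e': 'EMIF_T_ENUM',
                's': 'EMIF_T_USHORT', 'S': 'EMIF_T_SHORT', 't': 'EMIF_T_TIME',
                'l': 'EMIF_T_ULONG', 'L': 'EMIF_T_LONG', 'f': 'EMIF_T_FLOAT',
                'd': 'EMIF_T_DOUBLE', 'm': 'EMIF_T_COMPLEX',
                'M': 'EMIF_T_DCOMPLEX', 'b': 'EMIF_T_BASEDATA'}

    def parse_simple_items(body):
        """Split simple 'number:typename' item definitions from one object."""
        items = []
        for piece in body.rstrip(',').split(','):
            count, rest = piece.split(':', 1)
            code, name = rest[0], rest[1:]
            items.append((int(count), ITEMTYPE.get(code, code), name))
        return items

    print(parse_simple_items("16:clabel,1:lheaderPtr"))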


ERDAS IMAGINE HFA File Format

Many of the files created and used by ERDAS IMAGINE are stored in a hierarchical file architecture (HFA). This format allows any number of different types of data elements to be stored in the file in a tree structured fashion. This tree is built of nodes which contain a variety of types of data. The contents of the nodes (as well as the structural information) is saved in the file in a machine independent format (MIF) which allows the files to be shared between computers of differing architectures.

Hierarchical File Architecture

The hierarchical file architecture maintains an object-oriented representation of data in an IMAGINE disk file through use of a tree structure. Each object is called an entry and occupies one node in the tree. Each object has a name and a type. The type refers to a description of the data contained by that object. Additionally, each object may contain a pointer to a subtree of more nodes. All entries are stored in MIF and can be accessed directly by name.

Use the IMAGINE HfaView utility to view the objects of a file that uses the HFA format.

Figure 187: HFA File Structure

[Figure 187 diagram: the file Header points to the Dictionary and to the Root Node; the Root Node has child nodes (Node_1, Node_2, Node_3), each with its own Data, and nodes may have further children (Node_4, Node_5) with their own Data.]


Nodes and Objects

Each node within the HFA tree structure contains an object and each object has its own data. The types of objects in a file are dependent upon the type of file. For example, an .img file will have different objects than an .ovr file because these files store different types of data. The list of objects in a file is not fixed, that is, objects may be added or removed depending on the data in the file (e.g., all .img files with continuous raster layers will not have a node for ground control points).

Figure 188 is an example of an HFA file structure for a thematic raster layer in an .img file. If there were more attributes in the IMAGINE Raster Attribute Editor, then they would appear as objects under the Descriptor Table object.

Figure 188: HFA File Structure Example

[Figure 188 diagram: a Layer_1 node of type Eimg_Layer has child nodes Statistics (Esta_Statistics), Projection (Eprj_ProParameters), and Descriptor Table (Edsc_Table); the Descriptor Table node has child nodes #BinFunction# (Edsc_BinFunction), Red, Green, Blue, Class_Names, and Histogram (each an Edsc_Column).]


Pre-defined HFA File Object Types

There are three categories of pre-defined HFA File Object Types found in .img files:

• Basic HFA File Object Types

• .img Object Types

• External File Format Header Object Types

These sections list each object with two different detailed definitions. The first definition shows how the object appears in the data dictionary in the HFA file. The second definition is a table that shows the type, name, and description of each item in the object. An item within an object can be an element or another object.

If an item is an element, then the item type is one of the basic types previously given with the EMIF_T_ prefix omitted. For example, the item type for EMIF_T_CHAR would be shown as CHAR.

If an item is a previously defined object type, then the type is simply the name of the previously defined item.

If the item is an array, then the number of elements is given in square brackets [n] after the type. For example, the type for an item with an array of 16 EMIF_T_CHAR would appear as CHAR[16]. If the item is an indirect item of fixed size (it is a pointer to an item), then the type is followed by an asterisk "*". For example, a pointer to an item with an array of 16 EMIF_T_CHAR would appear as CHAR[16] *. If the item is an indirect item of variable size (it is a pointer to an item and the number of items), then the type is followed by a "p". For example, a pointer to an item with a variable sized array of characters would look like CHAR p.

NOTE: If the item type is shown as PTR, then this item will be encoded in the data dictionary as a ULONG element.


Basic Objects of an HFA File

This is a list of types of basic objects found in all HFA files:

• Ehfa_HeaderTag

• Ehfa_File

• Ehfa_Entry

Ehfa_HeaderTag

The Ehfa_HeaderTag is used as a unique signature at the beginning of an ERDAS IMAGINE HFA file. It must always occupy the first 20 bytes of the file.

16:clabel,1:lheaderPtr,Ehfa_HeaderTag,

Type       Name       Description

CHAR[16]   label      This contains the string "EHFA_HEADER_TAG".

PTR        headerPtr  The file pointer to the Ehfa_File header record.

Ehfa_File

The Ehfa_File is composed of several main parts, including the free list, the dictionary, and the object tree. This entry is used to keep track of these items in the file, since they may begin anywhere in the file.

1:Lversion,1:lfreeList,1:lrootEntryPtr,1:SentryHeaderLength,1:ldictionaryPtr,Ehfa_File,

Type   Name               Description

LONG   version            This defines the version number of the ehfa file. It is currently 1.

PTR    freeList           This points to list of freed blocks within the file. This list is searched first whenever new space is needed. As blocks of space are released in the file, they are placed on the free list so that they may be reused later.

PTR    rootEntryPtr       This points to the root node of the object tree.

SHORT  entryHeaderLength  This defines the length of the entry portion of each node. Each node consists of two parts. The first part is the entry which contains the node name, node type, and parent/child information. The second part is the data for the node.

PTR    dictionaryPtr      This points to the starting position of the file for the MIF Dictionary. The dictionary must be read and decoded before any of the other objects in the file can be decoded.
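A hedged sketch of reading these two records from the start of an HFA file (field sizes follow the definitions above; '<' selects little-endian, i.e. Intel, byte order; the file name is hypothetical):

    import struct

    def read_hfa_header(path):
        with open(path, 'rb') as f:
            label, header_ptr = struct.unpack('<16sI', f.read(20))   # Ehfa_HeaderTag
            if not label.startswith(b'EHFA_HEADER_TAG'):
                raise ValueError('not an HFA file')
            f.seek(header_ptr)
            version, free_list, root_entry, entry_len, dict_ptr = \
                struct.unpack('<iIIhI', f.read(18))                  # Ehfa_File record
            return {'version': version, 'rootEntryPtr': root_entry,
                    'entryHeaderLength': entry_len, 'dictionaryPtr': dict_ptr}

    # print(read_hfa_header('example.img'))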


Ehfa_Entry

The Ehfa_Entry contains the header information for each node in the object tree, including the name and type of the node as well as the parent/child information.

1:lnext,1:lprev,1:lparent,1:lchild,1:ldata,1:LdataSize,64:cname,32:ctype,1:tmodTime,Ehfa_Entry,

Type      Name      Description

PTR       next      This is a file pointer which gives the location of the next node in the tree at the current level. If this is the last node at this level, then this contains 0.

PTR       prev      This is a file pointer which gives the location of the previous node in the tree at the current level. If this is the first node at this level, then this contains 0.

PTR       parent    This is a file pointer which gives the location of the parent for this node. This is 0 for the root node.

PTR       child     This is a file pointer which gives the location of the first of the list of children for this node. If there are no children, then this contains 0.

PTR       data      This points to the data for this node. If there is no data for this node then it contains 0.

LONG      dataSize  This contains the number of bytes contained in the data record associated with this node.

CHAR[64]  name      This contains a NULL terminated string that is the name for this node. The string can be no longer than 64 bytes including the NULL terminator byte.

CHAR[32]  type      This contains a NULL terminated string which names the type of data to be found at this node. The type must match one of the types found in the data dictionary. The type name can be no longer than 32 bytes including the NULL terminator byte.

TIME      modTime   This contains the time of the last modification to the data in this node.
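Continuing the sketch above, each Ehfa_Entry can be unpacked with the layout in the table (124 bytes per entry) and the node tree walked depth-first. This is an illustration only, not ERDAS code:

    import struct

    ENTRY_FMT = '<IIIIIi64s32sI'   # next, prev, parent, child, data, dataSize,
                                   # name[64], type[32], modTime

    def walk(f, entry_ptr, depth=0):
        while entry_ptr:
            f.seek(entry_ptr)
            (nxt, _prev, _parent, child, _data, _size,
             name, ntype, _mtime) = struct.unpack(ENTRY_FMT, f.read(124))
            print('  ' * depth, name.split(b'\0')[0].decode(),
                  ntype.split(b'\0')[0].decode())
            if child:
                walk(f, child, depth + 1)
            entry_ptr = nxt

    # with open('example.img', 'rb') as f:   # hypothetical file name
    #     walk(f, read_hfa_header('example.img')['rootEntryPtr'])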


HFA Object Directory for .img files

The following section defines the list of objects which comprise IMAGINE image files (.img extension). This is not a complete list, because users and developers may create new items and add them to any ERDAS IMAGINE file.

Eimg_Layer

An Eimg_Layer object is the base node for a single layer of imagery. This object describes the basic information for the layer, including its width and height in pixels, its data type, and the width and height of the blocks used to store the image. Other information, such as the actual pixel data, map information, projection information, etc., are stored as child objects under this node. The child objects that are usually found under the Eimg_Layer include:

• RasterDMS (an Edms_State which actually contains the imagery)

• Descriptor_Table (an Edsc_Table object which contains the histogram and other pixel value related data)

• Projection (an Eprj_ProParameters object which contains the projection information)

• Map_Info (an Eprj_MapInfo object which contains the map information)

• Ehfa_Layer (an Ehfa_Layer object which describes the type of data in the layer)


1:lwidth,1:lheight,1:e3:thematic,athematic,fft of real valued data,layerType,1:e13:u1,u2,u4,u8,s8,u16,s16,u32,s32,f32,f64,c64,c128,pixelType,1:lblockWidth,1:lblockHeight,Eimg_Layer,

NOTE: In the following definitions, an Emif_String is of type CHAR p (i.e., 0:pcstring,Emif_String).

Type   Name         Description

LONG   width        The width of the layer in pixels.

LONG   height       The height of the layer in pixels.

ENUM   layerType    The type of layer. 0="thematic", 1="athematic"

ENUM   pixelType    The type of the pixels. 0="u1", 1="u2", 2="u4", 3="u8", 4="s8", 5="u16", 6="s16", 7="u32", 8="s32", 9="f32", 10="f64", 11="c64", 12="c128"

LONG   blockWidth   The width of each block in the layer.

LONG   blockHeight  The height of each block in the layer.


Eimg_DependentFile

The Eimg_DependentFile object contains the base name of the file for which the current file is serving as an .aux. The object is written as a child of the root with the name DependentFile.

1:oEmif_String,dependent,Eimg_DependentFile,

Type         Name       Description

Emif_String  dependent  The dependent file name.

Eimg_DependentLayerName

The Eimg_DependentLayerName object normally exists as the child of an Eimg_Layer in an .aux file. It contains the original name of the layer of which it is a child in the original imagery file being served by this .aux. It only exists in .aux files serving imagery files of a format supported by a RasterFormats DLL Instance which does not define a FileLayerNamesSet interface function (because these DLL Instances are obviously incapable of supporting layer name changes).

1:oEmif_String,ImageLayerName,Eimg_DependentLayerName,

Type         Name            Description

Emif_String  ImageLayerName  The original dependent layer name.


Eimg_Layer_SubSample

An Eimg_Layer_SubSample object is a node which contains a subsampled version of the layer defined by the parent node. Nodes of this form are named _ss_2, _ss_4, _ss_8, etc. This stands for SubSampled by 2, SubSampled by 4, etc. This node will have an Edms_State node called RasterDMS and an Ehfa_Layer node called Ehfa_layer under it. This will be present if pyramid layers have been computed.

1:lwidth,1:lheight,1:e3:thematic,athematic,fft of real valued data,layerType,1:e13:u1,u2,u4,u8,s8,u16,s16,u32,s32,f32,f64,c64,c128,pixelType,1:lblockWidth,1:lblockHeight,Eimg_Layer_SubSample,

Type   Name         Description

LONG   width        The width of the layer in pixels.

LONG   height       The height of the layer in pixels.

ENUM   layerType    The type of layer. 0="thematic", 1="athematic"

ENUM   pixelType    The type of the pixels. 0="u1", 1="u2", 2="u4", 3="u8", 4="s8", 5="u16", 6="s16", 7="u32", 8="s32", 9="f32", 10="f64", 11="c64", 12="c128"

LONG   blockWidth   The width of each block in the layer.

LONG   blockHeight  The height of each block in the layer.


Eimg_NonInitializedValue

The Eimg_NonInitializedValue object is used to record the value that is to be assigned to any uninitialized blocks of raster data in a layer.

1:*bvalueBD,Eimg_NonInitializedValue,

Type        Name     Description

BASEDATA *  valueBD  A basedata structure containing the excluded values.

Eimg_MapInformation

The Eimg_MapInformation object contains the map projection system and the map units applicable to the MapToPixelXForm object that is its sibling. As a child of an Eimg_Layer, it will have the name MapInformation.

1:oEmif_String,projection,1:oEmif_String,units,Eimg_MapInformation,

Type         Name        Description

Emif_String  projection  The name of the map projection system associated with the MapToPixelTransform sibling object.

Emif_String  units       The name of the map units of the coordinates returned by transforming layer pixel coordinates through the inverse of the MapToPixelTransform sibling object.

Eimg_RRDNamesList

The Eimg_RRDNamesList object contains a list of layers of a resolution different (reduced) than the original. As a child of an Eimg_Layer, it will have the name RRDNamesList.

1:oEmif_String,algorithm,0:poEmif_String,nameList,Eimg_RRDNamesList,

Type           Name       Description

Emif_String    algorithm  The name of the algorithm used to compute the layers in nameList.

Emif_String p  nameList   A list of the reduced resolution layers associated with the parent layer. These are full layer names.


Eimg_StatisticsParameters830

The Eimg_StatisticsParameters830 object contains statistics parameters that control the computation of certain statistics. The parameters can apply to the computation of Covariance, scalar Statistics of a layer, or the Histogram of a layer. In these cases, the object will be named CovarianceParameters, StatisticsParameters, and HistogramParameters. The CovarianceParameters will exist as a sibling of the Covariance, and the StatisticsParameters and HistogramParameters will be children of the Eimg_Layer to which they apply.

0:poEmif_String,LayerNames,1:*bExcludedValues,1:oEmif_String,AOIname,1:lSkipFactorX,1:lSkipFactorY,1:*oEdsc_BinFunction,BinFunction,Eimg_StatisticsParameters830,

Type                Name            Description

Emif_String p       LayerNames      The list of (full) layer names that were involved in this computation (covariance only).

BASEDATA *          ExcludedValues  The values excluded during this computation.

Emif_String         AOIname         The name of the AOI file used to limit the computation.

LONG                SkipFactorX     The skip factor in X.

LONG                SkipFactorY     The skip factor in Y.

Edsc_BinFunction *  BinFunction     The bin function used for this computation (statistics and histogram only).

Ehfa_Layer

The Ehfa_Layer is used to indicate the type of layer. The initial design for the IMAGINE files allowed for both raster and vector layers. Currently, the vector layers have not been implemented.

1:e2:raster,vector,type,1:ldictionaryPtr,Ehfa_Layer,

Type   Name           Description

ENUM   type           The type of layer. 0="raster", 1="vector"

ULONG  dictionaryPtr  This points to a dictionary entry which describes the data. In the case of raster data, it points to a dictionary pointer which describes the contents of each block via the RasterDMS definition given below.


RasterDMS

The RasterDMS object definition must be present in the EMIF dictionary pointed to by an Ehfa_Layer object that is of type "raster". It describes the logical make-up of a block of raster data in the Ehfa_Layer. The physical representation of the raster data is actually managed by the DMS system through objects of type Ehfa_Layer and Edms_State. The RasterDMS definition should describe the raster data in terms of total number of data values in a block and the type of data value.

<n>:<t>data,RasterDMS,

Type      Name  Description

<t>[<n>]  data  The data is described in terms of total number, <n>, of data file values in a block of the raster layer (which is simply <block width> * <block height>) and the data value type, <t>, which can have any one of the following values:

                1 - Unsigned 1-bit
                2 - Unsigned 2-bit
                4 - Unsigned 4-bit
                c - Unsigned 8-bit
                C - Signed 8-bit
                s - Unsigned 16-bit
                S - Signed 16-bit
                l - Unsigned 32-bit
                L - Signed 32-bit
                f - Single precision floating point
                d - Double precision floating point
                m - Single precision complex
                M - Double precision complex


Edms_VirtualBlockInfo

An Edms_VirtualBlockInfo object describes a single raster data block of a layer. It describes where to find the data in the file, how many bytes are in the data block, and how to unpack the data from the block. For uncompressed data the unpacking is straightforward. The scheme for compressed data is described below.

1:SfileCode,1:loffset,1:lsize,1:e2:false,true,logvalid,1:e2:no compression,ESRI GRID compression,compressionType,1:LblockHeight,Edms_VirtualBlockInfo,

For uncompressed blocks, the data are simply packed into the block one pixel value at a time. Each pixel is read from the block as indicated by its data type. All non-integer data are uncompressed.

Type   Name             Description

SHORT  fileCode         This is included to allow expansion of the layer into multiple files. The number indicates the file in which the block is located. Currently this is always 0, since the multiple file scheme has not been implemented.

PTR    offset           This points to the byte location in the file where the block data actually resides.

LONG   size             The number of bytes in the block.

ENUM   logvalid         This indicates whether the block actually contains valid data. This allows blocks to exist in the map, but not in the file. 0="false", 1="true"

ENUM   compressionType  This indicates the type of compression used for this block. 0="no compression", 1="ESRI GRID compression". No compression indicates that the data located at offset are uncompressed data. The stream of bytes is to be interpreted as a sequence of bytes which defines the data as indicated by the data type. The ESRI GRID compression is a two stage run-length encoding.


The compression scheme used by ERDAS IMAGINE is a two level run-length encoding scheme. If the data are an integral type, then the following steps are performed:

• The minimum and maximum values for a block are determined.

• The byte size of the output pixels is determined by examining the difference between the maximum and the minimum. If the difference is less than or equal to 256, then 8-bit data are used. If the difference is less than 65,536, then 16-bit data are used; otherwise 32-bit data are used.

• The minimum is subtracted from each of the values.

• A run-length encoding scheme is used to encode runs of the same pixel value. The data minimum value occupies the first 4 bytes of the block. The number of run-length segments occupies the next 4 bytes, and the next 4 bytes are an offset into the block which indicates where the compressed pixel values begin. The next byte indicates the number of bits per pixel (1, 2, 4, 8, 16, 32). These four values are encoded in the standard MIF format (unsigned long, or ULONG). Following this is the list of segment counts; following the segment counts are the pixel values. There is one segment count per pixel value.

NOTE: No compression scheme is used if the data are non-integral.

Each data count is encoded as follows:

There may be 1, 2, 3, or 4 bytes per count. The first two bits of the first count byte contain 0, 1, 2, or 3, indicating that the count is contained in 1, 2, 3, or 4 bytes. The rest of the byte (6 bits) represents the six most significant bits of the count. The next byte, if present, represents decreasing significance.

NOTE: This order is different than the rest of the package. This was done so that the high byte with the encoded byte count would be first in the byte stream. This pattern is repeated as many times as indicated by the numsegments field.

The data values are compressed into the remaining space, packed into as many bits per pixel as indicated by the numbitspervalue field.

[Block layout diagram: the block begins with min, numsegments, dataoffset, and numbitspervalue, followed by the data counts and then the data values. Count byte layout: the first byte holds the 2-bit byte count and the high 6 bits of the count; any following bytes each hold the next 8 bits.]
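A hedged sketch of decoding just the block header and the variable-length segment counts described above (simplified and not verified against real files; unpacking of the bit-packed pixel values is omitted):

    import struct

    def decode_counts(block):
        minimum, numsegments, dataoffset = struct.unpack('<III', block[:12])
        numbitspervalue = block[12]
        pos, counts = 13, []
        for _ in range(numsegments):
            first = block[pos]
            nbytes = (first >> 6) + 1        # top 2 bits: total bytes in this count
            count = first & 0x3F             # low 6 bits: most significant bits
            for b in block[pos + 1:pos + nbytes]:
                count = (count << 8) | b     # following bytes, decreasing significance
            counts.append(count)
            pos += nbytes
        return minimum, numbitspervalue, dataoffset, counts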


Edms_FreeIDList

An Edms_FreeIDList is used to track blocks which have been freed from the layer. The freelist consists of an array of min/max pairs which indicate unused contiguous blocks of data which lie within the allocated layer space. Currently this object is unused and reserved for future expansion.

1:Lmin,1:Lmax,Edms_FreeIDList,

Type  Name  Description

LONG  min   The minimum block number in the group.

LONG  max   The maximum block number in the group.

Edms_State

The Edms_State describes the location of each of the blocks of a single layer of imagery. Basically, this object is an index of all of the blocks in the layer.

1:lnumvirtualblocks,1:lnumobjectsperblock,1:lnextobjectnum,1:e2:no compression,RLC compression,compressionType,0:poEdms_VirtualBlockInfo,blockinfo,0:poEdms_FreeIDList,freelist,1:tmodTime,Edms_State

Type                     Name                Description

LONG                     numvirtualblocks    The number of blocks in this layer.

LONG                     numobjectsperblock  The number of pixels represented by one block.

LONG                     nextobjectnum       Currently, this type is not being used and is reserved for future expansion.

ENUM                     compressionType     This indicates the type of compression used for this block. 0="no compression", 1="ESRI GRID compression". No compression indicates that the data located at offset are uncompressed data. The stream of bytes is to be interpreted as a sequence of bytes which defines the data as indicated by the data type. The ESRI GRID compression is a two stage run-length encoding.

Edms_VirtualBlockInfo p  blockinfo           This is the table of entries which describes the state and location of each block in the layer.

Edms_FreeIDList p        freelist            Currently, this type is not being used and is reserved for future expansion.

TIME                     modTime             This is the time of the last modification to this layer.


Edsc_Table

An Edsc_Table is a base node used to store columns of information. This serves simply as a parent node for each of the columns which are a part of the table.

1:lnumRows,Edsc_Table,

Type  Name     Description

LONG  numRows  This defines the number of rows in the table.

Edsc_BinFunction

The Edsc_BinFunction describes how pixel values from the associated layer are to be mapped into an index for the columns.

1:lnumBins,1:e4:direct,linear,logarithmic,explicit,binFunctionType,1:dminLimit,1:dmaxLimit,1:*bbinLimits,Edsc_BinFunction,

Type      Name             Description

LONG      numBins          The number of bins.

ENUM      binFunctionType  The type of bin function. 0="direct", 1="linear", 2="exponential", 3="explicit"

DOUBLE    minLimit         The lowest value defined by the bin function.

DOUBLE    maxLimit         The highest value defined by the bin function.

BASEDATA  binLimits        The limits used to define the bins.

Table 32 describes how the binning functions are used.

Table 32: Usage of Binning Functions

Bin Type     Description

DIRECT       Direct binning means that the pixel value minus the minimum is used as is with no translation to index into the columns. For example, if the minimum value is zero, then value 0 is indexed into location 0, 1 is indexed into 1, etc.

LINEAR       Linear binning means that the pixel value is first scaled by the formula: index = (value - minLimit) * numBins / (maxLimit - minLimit). This allows a very large range of data, or even floating point data, to be used to index into a table.

EXPONENTIAL  Exponential binning is used to compress data with a large dynamic range. The formula used is index = numBins * (log(1 + (value - minLimit)) / (maxLimit - minLimit)).

EXPLICIT     Explicit binning is used to map the data into indices using an arbitrary set of boundaries. The data are compared against the limits set in the binLimit table. If the pixel is less than or equal to the first value, then the index is 0. If the pixel is less than or equal to the next value, then the index is 1, etc.
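For illustration, the mappings in Table 32 can be written out directly (a hedged sketch; rounding and boundary handling are assumptions, and the exponential formula is transcribed as printed):

    import math

    def direct_bin(value, min_limit):
        return int(value - min_limit)

    def linear_bin(value, min_limit, max_limit, num_bins):
        return int((value - min_limit) * num_bins / (max_limit - min_limit))

    def exponential_bin(value, min_limit, max_limit, num_bins):
        return int(num_bins * math.log(1 + (value - min_limit)) / (max_limit - min_limit))

    def explicit_bin(value, bin_limits):
        for index, limit in enumerate(bin_limits):
            if value <= limit:
                return index
        return len(bin_limits) - 1

    print(linear_bin(0.37, 0.0, 1.0, 256))   # a float layer binned into 256 bins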


Edsc_Column

The columns of information which are stored in a table are stored in this format.

1:lnumRows,1:LcolumnDataPtr,1:e4:integer,real,complex,string,dataType,1:lmaxNumChar,Edsc_Column,


Type   Name           Description

LONG   numRows        The number of rows in this column.

PTR    columnDataPtr  Starting point of column data in the file. This points to the location in the file which contains the data.

ENUM   dataType       The data type of this column. 0="integer" (EMIF_T_LONG), 1="real" (EMIF_T_DOUBLE), 2="complex" (EMIF_T_DCOMPLEX), 3="string" (EMIF_T_CHAR)

LONG   maxNumChars    The maximum string length (for string data only). It is 0 if the type is not a String.

The types of information stored in columns are given in the following table.


Name         Data Type  Description

Histogram    real       This is found in the descriptor table of almost every layer. It defines the number of pixels which fall into each bin.

Class_Names  string     This is found in the descriptor table of almost every thematic layer. It defines the name for each class.

Red          real       This is found in the descriptor table of almost every thematic layer. It defines the red component of the color for each class. The range of the value is from 0.0 to 1.0.

Green        real       This is found in the descriptor table of almost every thematic layer. It defines the green component of the color for each class. The range of the value is from 0.0 to 1.0.

Blue         real       This is found in the descriptor table of almost every thematic layer. It defines the blue component of the color for each class. The range of the value is from 0.0 to 1.0.

Opacity      real       This is found in the descriptor table of almost every thematic layer. It defines the opacity associated with the class. A value of 0 means that the color will be solid. A value of 0.5 means that 50% of the underlying pixel would show through, and 1.0 means that all of the pixel value in the underlying layer would show through.

Contrast     real       This is found in the descriptor table of most continuous raster layers. It is used to define an intensity stretch which is normally used to improve contrast. The table is stored as normalized values from 0.0 to 1.0.

GCP_Names    string     This is found in the GCP_Table in files which have ground control points. This is the table of names for the points.

GCP_xCoords  real       This is found in the GCP_Table in files which have ground control points. This is the X coordinate for the point.

GCP_yCoords  real       This is found in the GCP_Table in files which have ground control points. This is the Y coordinate for the point.

GCP_Color    string     This is found in the GCP_Table in files which have ground control points. This is the name of the color that is used to display this point.


Eded_ColumnAttributes_1

The Eded_ColumnAttributes_1 stores the descriptor column properties which are used by the Raster Attribute Editor for the format and layout of the descriptor column display in the Raster Attribute Editor CellArray. The properties include the position of the descriptor column within the CellArray, the name, alignment, format, and width of the column, whether the column is editable, the formula (if any) for the column, the units (for numeric data), and whether the column is a component of a color column. Each Eded_ColumnAttributes_1 is a child of the Edsc_Column containing the data for the descriptor column. The properties for a color column are stored as a child of the Eded_ColumnAttributes_1 for the red component of the color column.

1:lposition,0:pcname,1:e2:FALSE,TRUE,editable,1:e3:LEFT,CENTER,RIGHT,alignment,0:pcformat,1:e3:DEFAULT,APPLY,AUTO-APPLY,formulamode,0:pcformula,1:dcolumnwidth,0:pcunits,1:e5:NO_COLOR,RED,GREEN,BLUE,COLOR,colorflag,0:pcgreenname,0:pcbluename,Eded_ColumnAttributes_1,

Type    Name         Description

LONG    position     The position of this descriptor column in the Raster Attribute Editor CellArray. The positions for all descriptor columns are sorted and the columns are displayed in ascending order.

CHAR P  name         The name of the descriptor column. This is the same as the name of the parent Edsc_Column node for all columns except color columns. Color columns have no corresponding Edsc_Column.

ENUM    editable     Specifies whether this column is editable. 0 = NO, 1 = YES

ENUM    alignment    Alignment of this column in the CellArray. 0 = LEFT, 1 = CENTER, 2 = RIGHT

CHAR P  format       The format for display of numeric data.

ENUM    formulamode  Mode for formula application. 0 = DEFAULT, 1 = APPLY, 2 = AUTO-APPLY

CHAR P  formula      The formula for the column.

DOUBLE  columnwidth  The width of the CellArray column.

CHAR P  units        The name of the units for numeric data stored in the column.

ENUM    colorflag    Indicates whether the column is a color column, a component of a color column, or a normal column. 0 = NO_COLOR, 1 = RED, 2 = GREEN, 3 = BLUE, 4 = COLOR

CHAR P  greenname    Name of the green component column associated with a color column. Empty string for other column types.

CHAR P  bluename     Name of the blue component column associated with a color column. Empty string for other column types.


Esta_Statistics

The Esta_Statistics is used to describe the statistics for a layer.

1:dminimum,1:dmaximum,1:dmean,1:dmedian,1d:mode,1:dstddev,Esta_Statistics,

Type Name Description

DOUBLE minimum The minimum of all of the pixels in the image. This may exclude values as defined by the user.

DOUBLE maximum The maximum of all of the pixels in the image. This may exclude values as defined by the user.

DOUBLE mean The mean of all of the pixels in the image. This may exclude values as defined by the user.

DOUBLE median The median of all of the pixels in the image. This may exclude values as defined by the user.

DOUBLE mode The mode of all of the pixels in the image. This may exclude values as defined by the user.

DOUBLE stddev The standard deviation of the pixels in the image. This may exclude values as defined by the user.


Esta_Covariance

The Esta_Covariance object is used to record the covariance matrix for the layers in an .img file.

1:bcovariance,Esta_Covariance,

Esta_SkipFactors

The Esta_SkipFactors object is used to record the skip factors that were used when the statistics or histogram was calculated for a raster layer or when the covariance was calculated for an .img file.

1:LskipFactorX,1:LskipFactorY,Esta_SkipFactors,

Esta_ExcludedValues

The Esta_ExcludedValues object is used to record the values that were excluded from consideration when the statistics or histogram was calculated for a raster layer or when the covariance was calculated for an .img file.

1:*bvalueBD,Esta_ExcludedValues,

Type Name Description

BASEDATA covariance A basedata structure containing the covariance matrix.

Type Name Description

LONG skipFactorX The horizontal sampling interval used for statistics, measured in image columns/sample.

LONG skipFactorY The vertical sampling interval used for statistics, measured in image rows/sample.

Type Name Description

BASEDATA * valueBD A basedata structure containing the excluded values.
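To make the roles of Esta_Statistics, Esta_SkipFactors, and Esta_ExcludedValues concrete, the following Python sketch computes layer statistics in the way the descriptions above suggest: pixels are sampled at the X and Y skip intervals, excluded values are dropped, and the remaining sample yields the fields of Esta_Statistics. This is an illustration of the concepts only, not the ERDAS implementation.

import numpy as np

def layer_statistics(band, skip_x=1, skip_y=1, excluded=()):
    # Sample a 2D band at the skip factors (Esta_SkipFactors), drop the
    # excluded values (Esta_ExcludedValues), and return the Esta_Statistics fields.
    sample = band[::skip_y, ::skip_x].astype(float).ravel()
    for value in excluded:
        sample = sample[sample != value]
    values, counts = np.unique(sample, return_counts=True)
    return {
        "minimum": float(sample.min()),
        "maximum": float(sample.max()),
        "mean":    float(sample.mean()),
        "median":  float(np.median(sample)),
        "mode":    float(values[counts.argmax()]),
        "stddev":  float(sample.std()),
    }

# Example: statistics on every 2nd row and column, ignoring a background value of 0.
stats = layer_statistics(np.random.randint(0, 255, (512, 512)), skip_x=2, skip_y=2, excluded=(0,))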


Eprj_Datum

The Eprj_Datum object is used to record the datum information which is part of the projection information for an .img file.

0:pcdatumname,1:e3:EPRJ_DATUM_PARAMETRIC,EPRJ_DATUM_GRID,EPRJ_DATUM_REGRESSION,type,0:pdparams,0:pcgridname,Eprj_Datum,

Eprj_Spheroid

The Eprj_Spheroid is used to describe the spheroid parameters that define the shape of the earth.

0:pcsphereName,1:da,1:db,1:deSquared,1:dradius,Eprj_Spheroid,

Type Name Description

CHAR datumname The datum name.

ENUM type The datum type, which can be one of three different types: parametric type, grid type, and regression type.

DOUBLE params The seven parameters of a parametric datum which describe the translations, rotations, and scale change between the current datum and the reference datum WGS84.

CHAR gridname The name of a grid datum file which stores the coordinate shifts among the North American Datums NAD27, NAD83, and HARN.
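The seven parameters of a parametric datum are commonly applied to earth-centered XYZ coordinates as a similarity (Helmert) transformation. The sketch below shows one common small-angle, position-vector convention; the sign convention and the units of the stored parameters (for example, radians versus arc-seconds for the rotations) should be confirmed for a given file rather than assumed from this example.

def helmert_7param(x, y, z, tx, ty, tz, rx, ry, rz, scale_ppm):
    # One common (position-vector, small-angle) form of the seven-parameter
    # datum shift toward WGS84; rotations are assumed to be in radians here.
    s = 1.0 + scale_ppm * 1e-6
    xt = tx + s * (x - rz * y + ry * z)
    yt = ty + s * (rz * x + y - rx * z)
    zt = tz + s * (-ry * x + rx * y + z)
    return xt, yt, zt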

Type Name Description

CHAR p sphereName The name of the spheroid/ellipsoid. This name can be found in <$IMAGINE_HOME>/etc/spheroid.tab.

DOUBLE a The semi-major axis of the ellipsoid in meters.

DOUBLE b The semi-minor axis of the ellipsoid in meters.

DOUBLE eSquared The eccentricity of the ellipsoid, squared.

DOUBLE radius The radius of the spheroid in meters.
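The eSquared value is not independent of a and b; it follows from the semi-major and semi-minor axes through the standard relation e^2 = (a^2 - b^2) / a^2, as this small check shows.

def e_squared(a, b):
    # Eccentricity squared of an ellipsoid with semi-axes a and b (meters).
    return (a * a - b * b) / (a * a)

# Clarke 1866, for example: a = 6378206.4 m, b = 6356583.8 m gives roughly 0.006768658.
print(e_squared(6378206.4, 6356583.8))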


Eprj_ProParameters

The Eprj_ProParameters is used to define the map projection for a layer.

1:e2:EPRJ_INTERNAL,EPRJ_EXTERNAL,proType,1:lproNumber,0:pcproExeName,0:pcproName,1:lproZone,0:pdproParams,1:*oEprj_Spheroid,proSpheroid,Eprj_ProParameters.

Type Name Description

ENUM proType This defines whether the projection is internal or external.

0=”EPRJ_INTERNAL”

1=”EPRJ_EXTERNAL”

LONG proNumber The projection number for internal projections. The current internal projections are:

0=”Geographic(Latitude/Longitude)”

1=”UTM”

2=”State Plane”

3=”Albers Conical Equal Area”

4=”Lambert Conformal Conic”

5=”Mercator”

6=”Polar Stereographic”

7=”Polyconic”

8=”Equidistant Conic”

9=”Transverse Mercator”

10=”Stereographic”

11=”Lambert Azimuthal Equal-area”

12=”Azimuthal Equidistant”

13=”Gnomonic”

14=”Orthographic”

15=”General Vertical Near-Side Perspective”

16=”Sinusoidal”

17=”Equirectangular”

18=”Miller Cylindrical”

19=”Van der Grinten I”

20=”Oblique Mercator (Hotine)”

21=”Space Oblique Mercator”

22=”Modified Transverse Mercator”

CHAR p proExeName The name of the executable to run for an external projection.


CHAR p proName The name of the projection. This will be one of the names given above in the description of proNumber.

LONG proZone The zone number for internal State Plane or UTM projections.

DOUBLE p proParams The array of parameters for the projection.

Eprj_Spheroid * proSpheroid The parameters of the spheroid used to approximate the earth. See the preceding description of the Eprj_Spheroid object.

The following table defines the contents of the proParams array which is defined above. The Parameters column defines the meaning of the various elements of the proParams array for the different projections. Each one is described by one or more statements of the form n: Description, where n is the index into the array.

Name Parameters

0 ”Geographic(Latitude/Longitude)” None Used

1 ”UTM” 3: 1=North, -1=South

2 ”State Plane” 0: 0=NAD27, 1=NAD83

3 ”Albers Conical Equal Area” 2: Latitude of 1st standard parallel

3: Latitude of 2nd standard parallel

4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing

4 ”Lambert Conformal Conic” 2: Latitude of 1st standard parallel

3: Latitude of 2nd standard parallel

4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing

5 ”Mercator” 4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing


6 ”Polar Stereographic” 4: Longitude directed straight down below pole of map.

5: Latitude of true scale.

6: False Easting

7: False Northing.

7 ”Polyconic” 4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing

8 ”Equidistant Conic” 2: Latitude of standard parallel (Case 0)

2: Latitude of 1st Standard Parallel (Case 1)

3: Latitude of 2nd standard Parallel (Case 1)

4: Longitude of central meridian

5: Latitude of origin of projection

6: False Easting

7: False Northing

8: 0=Case 0, 1=Case 1.

9 ”Transverse Mercator” 2: Scale Factor at Central Meridian

4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing

10 ”Stereographic” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing

11 ”Lambert Azimuthal Equal-area” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing


12 ”Azimuthal Equidistant” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing

13 ”Gnomonic” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing

14 ”Orthographic” 4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing

15 ”General Vertical Near-Side Perspective”

2: Height of perspective point above sphere.

4: Longitude of center of projection

5: Latitude of center of projection

6: False Easting

7: False Northing

16 ”Sinusoidal” 4: Longitude of central meridian

6: False Easting

7: False Northing

17 ”Equirectangular” 4: Longitude of central meridian

5: Latitude of True Scale.

6: False Easting

7: False Northing

18 ”Miller Cylindrical” 4: Longitude of central meridian

6: False Easting

7: False Northing

19 ”Van der Grinten I” 4: Longitude of central meridian

6: False Easting

7: False Northing


20 ”Oblique Mercator (Hotine)” 2: Scale Factor at center of projection

3: Azimuth east of north for central line. (Case 1)

4: Longitude of point of origin (Case 1)

5: Latitude of point of origin.

6: False Easting

7: False Northing.

8: Longitude of 1st Point defining central line (Case 0)

9: Latitude of 1st Point defining central line (Case 0)

10: Longitude of 2nd Point defining central line (Case 0)

11: Latitude of 2nd Point defining central line (Case 0)

12: 0=Case 0, 1=Case 1

21 ”Space Oblique Mercator” 4: Landsat Vehicle ID (1-5)

5: Orbital Path Number (1-251 or 1-233)

6: False Easting

7: False Northing

22 ”Modified Transverse Mercator” 6: False Easting

7: False Northing
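As an example of reading the table above, the fragment below pulls the Albers Conical Equal Area parameters (projection number 3) out of a proParams array by index. The helper is hypothetical, not part of the Developers’ Toolkit; only the index positions come from the table, and the units of the angular entries should be taken from the file itself rather than assumed.

def unpack_albers(pro_params):
    # Index positions follow the proParams table for projection 3 (Albers Conical Equal Area).
    return {
        "lat_1st_standard_parallel": pro_params[2],
        "lat_2nd_standard_parallel": pro_params[3],
        "lon_central_meridian":      pro_params[4],
        "lat_origin_of_projection":  pro_params[5],
        "false_easting":             pro_params[6],
        "false_northing":            pro_params[7],
    }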


Eprj_Coordinate

An Eprj_Coordinate is a pair of doubles used to define X and Y.

1:dx,1:dy,Eprj_Coordinate,

Eprj_Size

The Eprj_Size is a pair of doubles used to define a rectangular size.

1:dx,1:dy,Eprj_Size,

Eprj_MapInfo

The Eprj_MapInfo object is used to define the basic map information for a layer. It defines the map coordinates for the center of the upper left and lower right pixels, as well as the cell size and the name of the map projection.

0:pcproName,1:*oEprj_Coordinate,upperLeftCenter,1:*oEprj_Coordinate,lowerRightCenter,1:*oEprj_Size,pixelSize,0:pcunits,Eprj_MapInfo,

Type Name Description

DOUBLE x The X value of the coordinate.

DOUBLE y The Y value of the coordinate.

Type Name Description

DOUBLE width The width of the rectangle (the X dimension).

DOUBLE height The height of the rectangle (the Y dimension).

Type Name Description

CHAR p proName The name of the projection.

Eprj_Coordinate * upperLeftCenter The coordinates of the center of the upper left pixel.

Eprj_Coordinate * lowerRightCenter The coordinates of the center of the lower right pixel.

Eprj_Size * pixelSize The size of the pixel in the image.

CHAR * units The units of the above values.
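For an unrotated image, Eprj_MapInfo is sufficient to convert a file (pixel) position into map coordinates. The sketch below is a minimal illustration that assumes pixel (0, 0) is the upper left pixel, that columns increase eastward, and that rows increase southward so map Y decreases with increasing row; it is not the toolkit routine.

def pixel_to_map(col, row, upper_left_center, pixel_size):
    # upper_left_center and pixel_size are (x, y) pairs in map units, as in Eprj_MapInfo.
    map_x = upper_left_center[0] + col * pixel_size[0]
    map_y = upper_left_center[1] - row * pixel_size[1]
    return map_x, map_y

# Example: third column, second row of an image whose upper left pixel center
# is at (500000.0, 4200000.0) with 30-meter pixels.
print(pixel_to_map(2, 1, (500000.0, 4200000.0), (30.0, 30.0)))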


Efga_Polynomial

The Efga_Polynomial is used to store transformation coefficients created by the IMAGINE GCP Tool.

1:Lorder,1:Lnumdimtransforms,1:numdimpolynomial,1:Ltermcount,1:*exponentList,1:bpolycoefmtx,1:bpolycoefvector,Efga_Polynomial,

Exfr_GenericXFormHeader

The Exfr_GenericXFormHeader contains a list of GeometricModels titles for the component XForms making up a composite Exfr_XForm. The components are written as children of the Exfr_GenericXFormHeader with names XForm0, XForm1, ..., XFormi, where i is the number of components listed by the Exfr_GenericXFormHeader. The design of component XFormi is defined by the specific GeometricModels DLL instance that controls XForms of the title specified as the ith title string in the Exfr_GenericXFormHeader, unless XFormi is of type Exfr_ASCIIXForm (see below). As a child of an Eimg_Layer, it will have the name MapToPixelXForm.

0:poEmif_String,titleList,Exfr_GenericXFormHeader,

Type Name Description

LONG order The order of the polynomial.

LONG numdimtransform The number of dimensions of the transformation (always 2).

LONG numdimpolynomial The number of dimensions of the polynomial (always 2).

LONG termcount The number of terms in the polynomial.

LONG * exponentlist The ordered list of powers for the polynomial.

BASEDATA polycoefmtx The polynomial coefficients.

BASEDATA polycoefvector The polynomial vectors.
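For a first-order polynomial the transformation reduces to an affine mapping. The sketch below illustrates the idea for order 1, taking the constant terms from polycoefvector and the linear terms from polycoefmtx; the exact layout of the BASEDATA structures should be read from the file rather than assumed from this fragment.

def apply_order1(x, y, coef_vector, coef_matrix):
    # Illustrative first-order (affine) transform:
    #   x' = v0 + m00*x + m01*y
    #   y' = v1 + m10*x + m11*y
    xp = coef_vector[0] + coef_matrix[0][0] * x + coef_matrix[0][1] * y
    yp = coef_vector[1] + coef_matrix[1][0] * x + coef_matrix[1][1] * y
    return xp, yp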

Type Name Description

Emif_String titleList The list of titles of the component XForms that are children of this node.


Exfr_ASCIIXForm

An Exfr_ASCIIXForm is an ASCII string representation of an Exfr_XForm component controlled by a DLL that does not have an XFormConvertToMIF function defined but does define an XFormSprintf function.

0:pcxForm,Exfr_ASCIIXForm,

Calibration_Node

An object of type Calibration_Node is an empty object — it contains no data. A node of this type simply serves as the parent node of four related child objects. The children of the Calibration_Node are used to provide information which converts pixel coordinates to map coordinates and vice versa. There is no dictionary definition for this object type. A node of this type will be a child of the root node and will be named “Calibration.” The “Calibration” node will have the four children described below.

Type Name Description

CHAR p xForm An ASCII string representation of an XForm component.

Node ObjectType Description

Projection Eprj_ProParameters The projection associated with the output coordinate system.

Map_Info Eprj_MapInfo The nominal map information associated with the transformation.

InversePolynomial Efga_Polynomial This is the nth order polynomial used to convert from map coordinates to pixel coordinates.

ForwardPolynomial Efga_Polynomial This is the nth order polynomial used to convert from pixel coordinates to map coordinates.


Vector Layers The vector data structure in ERDAS IMAGINE is based on the ARC/INFO data model (developed by ESRI, Inc.).

See "CHAPTER 2: Vector Layers" for more information on vector layers. Refer to theARC/INFO users manuals for detailed information on the vector data structure.


APPENDIX C
Map Projections

Introduction This appendix is an alphabetical listing of the map projections supported in ERDAS IMAGINE. It is divided into two sections:

• USGS projections, beginning on page 518

• External projections, beginning on page 585

The external projections were implemented outside of ERDAS IMAGINE so that users could add to these using the Developers’ Toolkit. The projections in each section are presented in alphabetical order.

The information in this appendix is adapted from:

• Map Projections for Use with the Geographic Information System (Lee and Walsh 1984)

• Map Projections—A Working Manual (Snyder 1987)

Other sources are noted in the text.

For general information about map projection types, refer to "CHAPTER 11: Cartography".

Rectify an image to a particular map projection using the ERDAS IMAGINE Rectification tools. View, add, or change projection information using the Image Information option.

NOTE: You cannot rectify to a new map projection using the Image Information option. You should change map projection information using Image Information only if you know the information to be incorrect. Use the rectification tools to actually georeference an image to a new map projection system.


USGS Projections The following USGS map projections are supported in ERDAS IMAGINE and are described in this section:

Albers Conical Equal Area

Azimuthal Equidistant

Equidistant Conic

Equirectangular

General Vertical Near-side Perspective

Geographic (Lat/Lon)

Gnomonic

Lambert Azimuthal Equal Area

Lambert Conformal Conic

Mercator

Miller Cylindrical

Modified Transverse Mercator

Oblique Mercator (Hotine)

Orthographic

Polar Stereographic

Polyconic

Sinusoidal

Space Oblique Mercator

State Plane

Stereographic

Transverse Mercator

UTM

Van der Grinten I


Albers Conical Equal Area

Summary

The Albers Conical Equal Area projection is mathematically based on a cone that is conceptually secant on two parallels. There is no areal deformation. The North or South Pole is represented by an arc. It retains its properties at various scales, and individual sheets can be joined along their edges.

This projection produces very accurate area and distance measurements in the middle latitudes (Figure 189). Thus, Albers Conical Equal Area is well-suited to countries or continents where north-south depth is about 3/5 the breadth of east-west. When this projection is used for the continental U.S., the two standard parallels are 29.5˚ and 45.5˚ North.

This projection possesses the property of equal area, and the standard parallels are correct in scale and in every direction. Thus, there is no angular distortion (i.e., meridians intersect parallels at right angles) and conformality exists along the standard parallels. Like other conics, Albers Conical Equal Area has concentric arcs for parallels and equally spaced radii for meridians. Parallels are not equally spaced, but are farthest apart between the standard parallels and closer together on the north and south edges.

Albers Conical Equal Area is the projection exclusively used by the USGS for sectional maps of all 50 states of the U.S. in the National Atlas of 1970.

Construction Cone

Property Equal area

Meridians Meridians are straight lines converging on the polar axis, but not at the pole.

Parallels Parallels are arcs of concentric circles concave toward a pole.

Graticule spacing Meridian spacing is equal on the standard parallels and decreases toward the poles. Parallel spacing decreases away from the standard parallels and increases between them. Meridians and parallels intersect each other at right angles. The graticule spacing preserves the property of equivalence of area. The graticule is symmetrical.

Linear scale Linear scale is true on the standard parallels. Maximum scale error is 1.25% on a map of the United States (48 states) with standard parallels of 29.5˚N and 45.5˚N.

Uses Used for thematic maps. Used for large countries with an east-west orientation. Maps based on the Albers Conical Equal Area for Alaska use standard parallels 55˚N and 65˚N; for Hawaii, the standard parallels are 8˚N and 18˚N. The National Atlas of the United States, United States Base Map (48 states), and the Geologic map of the United States are based on the standard parallels of 29.5˚N and 45.5˚N.
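For reference, the forward equations for the spherical form of Albers Conical Equal Area (Snyder 1987) can be sketched as follows; lat1 and lat2 are the two standard parallels, and the example uses the continental U.S. parallels of 29.5˚ and 45.5˚N mentioned above. The sphere radius and sample point are illustrative values only.

from math import sin, cos, sqrt, radians

def albers_forward(lon, lat, lon0, lat0, lat1, lat2, radius=6370997.0):
    # Spherical Albers Conical Equal Area forward projection (Snyder 1987).
    # All angles in degrees; x and y are returned in the units of radius.
    lam, phi = radians(lon), radians(lat)
    lam0, phi0 = radians(lon0), radians(lat0)
    phi1, phi2 = radians(lat1), radians(lat2)
    n = (sin(phi1) + sin(phi2)) / 2.0
    c = cos(phi1) ** 2 + 2.0 * n * sin(phi1)
    rho = radius * sqrt(c - 2.0 * n * sin(phi)) / n
    rho0 = radius * sqrt(c - 2.0 * n * sin(phi0)) / n
    theta = n * (lam - lam0)
    return rho * sin(theta), rho0 - rho * cos(theta)

x, y = albers_forward(-84.4, 33.7, -96.0, 23.0, 29.5, 45.5)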


Prompts

The following prompts display in the Projection Chooser once Albers Conical Equal Area is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Latitude of 1st standard parallel

Latitude of 2nd standard parallel

Enter two values for the desired control lines of the projection, i.e., the standard parallels. Note that the first standard parallel is the southernmost.

Then, define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of projection.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing, corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates from occurring within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Figure 189: Albers Conical Equal Area Projection

In Figure 189, the standard parallels are 20˚N and 60˚N. Note the change in spacing of the parallels.


Azimuthal Equidistant

Summary

Construction Plane

Property Equidistant

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency.

Oblique aspect: the meridians are complex curves concave toward the point of tangency.

Equatorial aspect: the meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.

Parallels

Polar aspect: the parallels are concentric circles.

Oblique aspect: the parallels are complex curves.

Equatorial aspect: the parallels are complex curves concave toward the nearest pole; the equator is straight.

Graticule spacing

Polar aspect: the meridian spacing is equal and increases away from the point of tangency. Parallel spacing is equidistant. Angular and area deformation increase away from the point of tangency.

Linear scale

Polar aspect: linear scale is true from the point of tangency along the meridians only.

Oblique and equatorial aspects: linear scale is true from the point of tangency. In all aspects, the projection shows distances true to scale when measured between the point of tangency and any other point on the map.

Uses

The Azimuthal Equidistant projection is used for radio and seismic work, as every place in the world will be shown at its true distance and direction from the point of tangency. The U.S. Geological Survey uses the oblique aspect in the National Atlas and for large-scale mapping of Micronesia. The polar aspect is used as the emblem of the United Nations.


The Azimuthal Equidistant projection is mathematically based on a plane tangent to the earth. The entire earth can be represented, but generally less than one hemisphere is portrayed; the other hemisphere can be portrayed, but is much distorted. It has true direction and true distance scaling from the point of tangency.

This projection is used mostly for polar projections because latitude rings divide meridians at equal intervals with a polar aspect (Figure 190). Linear scale distortion is moderate and increases toward the periphery. Meridians are equally spaced, and all distances and directions are shown accurately from the central point.

This projection can also be used to center on any point on the earth—a city, for example—and distance measurements will be true from that central point. Distances are not correct or true along parallels, and the projection is neither equal area nor conformal. Also, straight lines radiating from the center of this projection represent great circles.

Prompts

The following prompts display in the Projection Chooser if Azimuthal Equidistant is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Figure 190: Polar Aspect of the Azimuthal Equidistant Projection

This projection is commonly used in atlases for polar maps.


Equidistant Conic

Summary

With Equidistant Conic (Simple Conic) projections, correct distance is achieved along the line(s) of contact with the cone, and parallels are equidistantly spaced. It can be used with either one (A) or two (B) standard parallels. This projection is neither conformal nor equal area, but the north-south scale along meridians is correct. The North or South Pole is represented by an arc. Because scale distortion increases with increasing distance from the line(s) of contact, the Equidistant Conic is used mostly for mapping regions predominantly east-west in extent. The USGS uses the Equidistant Conic in an approximate form for a map of Alaska.

Prompts

The following prompts display in the Projection Chooser if Equidistant Conic is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Construction Cone

Property Equidistant

Meridians Meridians are straight lines converging on a polar axis but not at the pole.

Parallels Parallels are arcs of concentric circles concave toward a pole.

Graticule spacing Meridian spacing is true on the standard parallels and decreases toward the pole. Parallels are placed at true scale along the meridians. Meridians and parallels intersect each other at right angles. The graticule is symmetrical.

Linear scale Linear scale is true along all meridians and along the standard parallel or parallels.

Uses The Equidistant Conic projection is used in atlases for portraying mid-latitude areas. It is good for representing regions with a few degrees of latitude lying on one side of the equator. It was used in the former Soviet Union for mapping the entire country (ESRI 1992).


Define the origin of the projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for the longitude of the desired central meridian and the latitude of the origin of projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

One or two standard parallels?

Latitude of standard parallel

Enter one or two values for the desired control line(s) of the projection, i.e., the standard parallel(s). Note that if two standard parallels are used, the first is the southernmost.


Equirectangular (Plate Carrée)

Summary

Also called Simple Cylindrical, Equirectangular is composed of equally spaced, parallel meridians and latitude lines that cross at right angles on a rectangular map. Each rectangle formed by the grid is equal in area, shape, and size. Equirectangular is not conformal nor equal area, but it does contain less distortion than the Mercator in polar regions. Scale is true on all meridians and on the central parallel. Directions due north, south, east, and west are true, but all other directions are distorted. The equator is the standard parallel, true to scale and free of distortion. However, this projection may be centered anywhere.

This projection is valuable for its ease in computer plotting. It is useful for mapping small areas, such as city maps, because of its simplicity. The USGS uses Equirectangular for index maps of the conterminous U.S. with insets of Alaska, Hawaii, and various islands. However, neither scale nor projection is marked to avoid implying that the maps are suitable for normal geographic information.

Prompts

The following prompts display in the Projection Chooser if Equirectangular is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

Construction Cylinder

Property Compromise

Meridians All meridians are straight lines.

Parallels All parallels are straight lines.

Graticule spacing Equally spaced parallel meridians and latitude lines cross at right angles.

Linear scale The scale is correct along all meridians and along the standard parallels (ESRI 1992).

Uses Best used for city maps, or other small areas with map scales small enough to reduce the obvious distortion. Used for simple portrayals of the world or regions with minimal geographic data, such as index maps (ESRI 1992).


The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian

Latitude of true scale

Enter a value for longitude of the desired central meridian to center the projection and the latitude of true scale.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


General Vertical Near-side Perspective

Summary

General Vertical Near-side Perspective presents a picture of the earth as if a photograph were taken at some distance less than infinity. The map user simply identifies area of coverage, distance of view, and angle of view. It is a variation of the General Perspective projection in which the “camera” precisely faces the center of the earth.

Central meridian and a particular parallel (if shown) are straight lines. Other meridians and parallels are usually arcs of circles or ellipses, but some may be parabolas or hyperbolas. Like all perspective projections, General Vertical Near-side Perspective cannot illustrate the entire globe on one map—it can represent only part of one hemisphere.

Prompts

The following prompts display in the Projection Chooser if General Vertical Near-side Perspective is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

Construction Plane

Property Compromise

Meridians The central meridian is a straight line in all aspects. In the polar aspect all meridians are straight. In the equatorial aspect the equator is straight (ESRI 1992).

Parallels

Parallels on vertical polar aspects are concentric circles. Nearly all other parallels are elliptical arcs, except that certain angles of tilt may cause some parallels to be shown as parabolas or hyperbolas.

Graticule spacing

Polar aspect: parallels are concentric circles that are not evenly spaced. Meridians are evenly spaced and spacing increases from the center of the projection.

Equatorial and oblique aspects: parallels are elliptical arcs that are not evenly spaced. Meridians are elliptical arcs that are not evenly spaced, except for the central meridian, which is a straight line.

Linear scale Radial scale decreases from true scale at the center to zero on the projection edge. The scale perpendicular to the radii decreases, but not as rapidly (ESRI 1992).

Uses Often used to show the earth or other planets and satellites as seen from space. Used as an aesthetic presentation, rather than for technical applications (ESRI 1992).


The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Height of perspective point above sphere

Enter a value for desired height of the perspective point above the sphere in the same units as the radius.

Then, define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Geographic (Lat/Lon) The Geographic is a spherical coordinate system composed of parallels of latitude (Lat) and meridians of longitude (Lon) (Figure 191). Both divide the circumference of the earth into 360 degrees, which are further subdivided into minutes and seconds (60 sec = 1 minute, 60 min = 1 degree).

Because the earth spins on an axis between the North and South Poles, this allows construction of concentric, parallel circles, with a reference line exactly at the north-south center, termed the equator. The series of circles north of the equator are termed north latitudes and run from 0˚ latitude (the equator) to 90˚ North latitude (the North Pole), and similarly southward. Position in an east-west direction is determined from lines of longitude. These lines are not parallel and they converge at the poles. However, they intersect lines of latitude perpendicularly.

Unlike the equator in the latitude system, there is no natural zero meridian. In 1884, it was finally agreed that the meridian of the Royal Observatory in Greenwich, England, would be the prime meridian. Thus, the origin of the geographic coordinate system is the intersection of the equator and the prime meridian. Note that the 180˚ meridian is the international date line.

If the user chooses Geographic from the Projection Chooser, the following prompts will display:

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Note that in responding to prompts for other projections, values for longitude are negative west of Greenwich and values for latitude are negative south of the equator.
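Because latitude and longitude are subdivided into minutes and seconds, a value given in degrees, minutes, and seconds must be converted to decimal degrees before it is entered. A minimal sketch, following the sign convention in the note above (west and south negative):

def dms_to_decimal(degrees, minutes, seconds, negative=False):
    # Convert degrees/minutes/seconds to decimal degrees. Set negative=True for
    # longitudes west of Greenwich or latitudes south of the equator.
    dd = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -dd if negative else dd

# 84 degrees 23 minutes 30 seconds West is approximately -84.3917.
print(dms_to_decimal(84, 23, 30, negative=True))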


Figure 191: Geographic

Figure 191 shows the graticule of meridians and parallels on the global surface.



Gnomonic

Summary

Gnomonic is a perspective projection that projects onto a tangent plane from a position in the center of the earth. Because of the close perspective, this projection is limited to less than a hemisphere. However, it is the only projection which shows all great circles as straight lines. With a polar aspect, the latitude intervals increase rapidly from the center outwards.

With an equatorial or oblique aspect, the equator is straight. Meridians are straight and parallel, while intervals between parallels increase rapidly from the center and parallels are convex to the equator.

Because great circles are straight, this projection is useful for air and sea navigation. Rhumb lines are curved, which is the opposite of the Mercator projection.

Construction Plane

Property Compromise

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency.

Oblique and equatorial aspects: the meridians are straight lines.

Parallels

Polar aspect: the parallels are concentric circles.

Oblique and equatorial aspects: parallels are ellipses, parabolas, or hyperbolas concave toward the poles (except for the equator, which is straight).

Graticule spacing

Polar aspect: the meridian spacing is equal and increases away from the pole. The parallel spacing increases very rapidly from the pole.

Oblique and equatorial aspects: the graticule spacing increases very rapidly away from the center of the projection.

Linear scale Linear scale and angular and areal deformation are extreme, rapidly increasing away from the center of the projection.

Uses The Gnomonic projection is used in seismic work because seismic waves travel in approximately great circles. It is used with the Mercator projection for navigation.


Prompts

The following prompts display in the Projection Chooser if Gnomonic is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Lambert Azimuthal Equal Area

Summary

The Lambert Azimuthal Equal Area projection is mathematically based on a plane tangent to the earth. It is the only projection that can accurately represent both area and true direction from the center of the projection (Figure 192). This central point can be located anywhere. Concentric circles are closer together toward the edge of the map, and the scale distorts accordingly. This projection is well-suited to square or round land masses. This projection generally represents only one hemisphere.

In the polar aspect, latitude rings decrease their intervals from the center outwards. In the equatorial aspect, parallels are curves flattened in the middle. Meridians are also curved, except for the central meridian, and spacing decreases toward the edges.

Construction Plane

Property Equal Area

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency.

Oblique and equatorial aspects: meridians are complex curves concave toward a straight central meridian, except the outer meridian of a hemisphere, which is a circle.

Parallels

Polar aspect: parallels are concentric circles.

Oblique and equatorial aspects: the parallels are complex curves. The equator on the equatorial aspect is a straight line.

Graticule spacing

Polar aspect: the meridian spacing is equal and increases, and the parallel spacing is unequal and decreases toward the periphery of the projection. The graticule spacing, in all aspects, retains the property of equivalence of area.

Linear scale Linear scale is better than most azimuthals, but not as good as the equidistant. Angular deformation increases toward the periphery of the projection. Scale decreases radially toward the periphery of the map projection. Scale increases perpendicular to the radii toward the periphery.

Uses The polar aspect is used by the U.S. Geological Survey in the National Atlas. The polar, oblique, and equatorial aspects are used by the U.S. Geological Survey for the Circum-Pacific Map.


Prompts

The following prompts display in the Projection Chooser if Lambert Azimuthal Equal Area is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

In Figure 192, three views of the Lambert Azimuthal Equal Area projection are shown: A) Polar aspect, showing one hemisphere; B) Equatorial aspect, frequently used in old atlases for maps of the eastern and western hemispheres; C) Oblique aspect, centered on 40˚N.


Figure 192: Lambert Azimuthal Equal Area Projection



Lambert Conformal Conic

Summary

This projection is very similar to Albers Conical Equal Area, described previously. It is mathematically based on a cone that is tangent at one parallel or, more often, that is conceptually secant on two parallels (Figure 193). Areal distortion is minimal, but increases away from the standard parallels. North or South Pole is represented by a point—the other pole cannot be shown. Great circle lines are approximately straight. It retains its properties at various scales, and sheets can be joined along their edges. This projection, like Albers, is most valuable in middle latitudes, especially in a country sprawling east to west like the U.S. The standard parallels for the U.S. are 33˚ and 45˚N.

The major property of this projection is its conformality. At all coordinates, meridians and parallels cross at right angles. The correct angles produce correct shapes. Also, great circles are approximately straight. The conformal property of Lambert Conformal Conic and the straightness of great circles makes it valuable for landmark flying.

Lambert Conformal Conic is the State Plane coordinate system projection for states of predominant east-west expanse. Since 1962, Lambert Conformal Conic has been used for the International Map of the World between 84˚N and 80˚S.

In comparison with Albers Conical Equal Area, Lambert Conformal Conic possesses true shape of small areas, whereas Albers possesses equal area. Unlike Albers, parallels of Lambert Conformal Conic are spaced at increasing intervals the farther north or south they are from the standard parallels.

Construction Cone

Property Conformal

Meridians Meridians are straight lines converging at a pole.

Parallels Parallels are arcs of concentric circles concave toward a pole and centered at a pole.

Graticule spacing Meridian spacing is true on the standard parallels and decreases toward the pole. Parallel spacing increases away from the standard parallels and decreases between them. Meridians and parallels intersect each other at right angles. The graticule spacing retains the property of conformality. The graticule is symmetrical.

Linear scale Linear scale is true on standard parallels. Maximum scale error is 2.5% on a map of the United States (48 states) with standard parallels at 33˚N and 45˚N.

Uses Used for large countries in the mid-latitudes having an east-west orientation. The United States (50 states) Base Map uses standard parallels at 37˚N and 65˚N. Some of the National Topographic Map Series 7.5-minute and 15-minute quadrangles and the State Base Map series are constructed on this projection. The latter series uses standard parallels of 33˚N and 45˚N. Aeronautical charts for Alaska use standard parallels at 55˚N and 65˚N. The National Atlas of Canada uses standard parallels at 49˚N and 77˚N.


Prompts

The following prompts display in the Projection Chooser if Lambert Conformal Conic is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Latitude of 1st standard parallel

Latitude of 2nd standard parallel

Enter two values for the desired control lines of the projection, i.e., the standard parallels. Note that the first standard parallel is the southernmost.

Then, define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for longitude of the desired central meridian and latitude of the origin of projection.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough to ensure that there will be no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Figure 193: Lambert Conformal Conic Projection

In Figure 193, the standard parallels are 20˚N and 60˚N. Note the change in spacing of the parallels.


Mercator

Summary

This famous cylindrical projection was originally designed by Flemish map maker Gerhardus Mercator in 1569 to aid navigation (Figure 194). Meridians and parallels are straight lines and cross at 90˚ angles. Angular relationships are preserved. However, to preserve conformality, parallels are placed increasingly farther apart with increasing distance from the equator. Due to extreme scale distortion in high latitudes, the projection is rarely extended beyond 80˚N or S unless the latitude of true scale is other than the equator. Distance scales are usually furnished for several latitudes.

This projection can be thought of as being mathematically based on a cylinder tangent at the equator. Any straight line is a constant-azimuth (rhumb) line. Areal enlargement is extreme away from the equator; poles cannot be represented. Shape is true only within any small area. It is a reasonably accurate projection within a 15˚ band along the line of tangency.

Rhumb lines, which show constant direction, are straight. For this reason a Mercator map was very valuable to sea navigators. However, rhumb lines are not the shortest path; great circles are the shortest path. Most great circles appear as long arcs when drawn on a Mercator map.

Construction Cylinder

Property Conformal

Meridians Meridians are straight and parallel.

Parallels Parallels are straight and parallel.

Graticule spacing Meridian spacing is equal and the parallel spacing increases away from the equator. The graticule spacing retains the property of conformality. The graticule is symmetrical. Meridians intersect parallels at right angles.

Linear scale Linear scale is true along the equator only (line of tangency), or along two parallels equidistant from the equator (the secant form). Scale can be determined by measuring one degree of latitude, which equals 60 nautical miles, 69 statute miles, or 111 kilometers.

Uses An excellent projection for equatorial regions. Otherwise the Mercator is a special-purpose map projection best suited for navigation. Secant constructions are used for large-scale coastal charts. The use of the Mercator map projection as the base for nautical charts is universal. Examples are the charts published by the National Ocean Survey, U.S. Dept. of Commerce.
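The spherical form of the Mercator forward equations makes this behavior easy to see: x grows linearly with longitude, while y grows without bound as latitude approaches the poles, which is why the poles cannot be represented. A minimal sketch of the spherical case (Snyder 1987), with an illustrative sphere radius:

from math import log, tan, pi, radians

def mercator_forward(lon_deg, lat_deg, lon0_deg=0.0, radius=6370997.0):
    # Spherical Mercator forward projection; undefined at the poles.
    lam, phi, lam0 = radians(lon_deg), radians(lat_deg), radians(lon0_deg)
    x = radius * (lam - lam0)
    y = radius * log(tan(pi / 4.0 + phi / 2.0))
    return x, y

# Parallels are spaced farther apart with latitude: compare y at 30˚N and 60˚N.
print(mercator_forward(0, 30)[1], mercator_forward(0, 60)[1])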


Prompts

The following prompts display in the Projection Chooser if Mercator is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of true scale

Enter values for longitude of the desired central meridian and latitude at which true scale is desired. Selection of a parameter other than the equator can be useful for making maps in extreme north or south latitudes.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of true scale. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Figure 194: Mercator Projection

In Figure 194, all angles are shown correctly, therefore small shapes are true (i.e., the map is conformal). Rhumb lines are straight, which makes it useful for navigation.


Miller Cylindrical

Summary

Miller Cylindrical is a modification of the Mercator projection (Figure 195). It is similar to the Mercator from the equator to 45˚, but latitude line intervals are modified so that the distance between them increases less rapidly. Thus, beyond 45˚, Miller Cylindrical lessens the extreme exaggeration of the Mercator. Miller Cylindrical also includes the poles as straight lines whereas the Mercator does not.

Meridians and parallels are straight lines intersecting at right angles. Meridians are equidistant, while parallels are spaced farther apart the farther they are from the equator. Miller Cylindrical is not equal-area, equidistant, or conformal. Miller Cylindrical is used for world maps and in several atlases.

Prompts

The following prompts display in the Projection Chooser if Miller Cylindrical is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

Construction Cylinder

Property Compromise

Meridians All meridians are straight lines.

Parallels All parallels are straight lines.

Graticule spacing Meridians are parallel and equally spaced, the lines of latitude are parallel, and the distance between them increases toward the poles. Both poles are represented as straight lines. Meridians and parallels intersect at right angles (ESRI 1992).

Linear scale While the standard parallels, or lines true to scale and free of distortion, are at latitudes 45˚N and S, only the equator is standard.

Uses Useful for world maps.


False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 195: Miller Cylindrical Projection

This projection resembles the Mercator, but has less distortion in polar regions. Miller Cylindrical is neither conformal nor equal area.


Modified Transverse Mercator

Summary

In 1972, the USGS devised a projection specifically for the revision of a 1954 map of Alaska which, like its predecessors, was based on the Polyconic projection. This projection was drawn to a scale of 1:2,000,000 and published at 1:2,500,000 (map “E”) and 1:1,584,000 (map “B”). Graphically prepared by adapting coordinates for the Universal Transverse Mercator projection, it is identified as the Modified Transverse Mercator projection. It resembles the Transverse Mercator in a very limited manner and cannot be considered a cylindrical projection. It resembles the Equidistant Conic projection for the ellipsoid in actual construction. The projection was also used in 1974 for a base map of the Aleutian-Bering Sea Region published at 1:2,500,000 scale.

It is found to be most closely equivalent to the Equidistant Conic for the Clarke 1866 ellipsoid, with the scale along the meridians reduced to 0.9992 of true scale and the standard parallels at latitude 66.09˚ and 53.50˚N.

Construction Cone

Property Equidistant

Meridians On pre-1973 editions of the Alaska Map E, meridians are curved concave toward the center of the projection. On post-1973 editions, the meridians are straight.

Parallels Parallels are arcs concave to the pole.

Graticule spacing Meridian spacing is approximately equal and decreases toward the pole. Parallels are approximately equally spaced. The graticule is symmetrical on post-1973 editions of the Alaska Map E.

Linear scale Linear scale is more nearly correct along the meridians than along the parallels.

Uses The U.S. Geological Survey’s Alaska Map E at the scale of 1:2,500,000. The Bathymetric Maps Eastern Continental Margin U.S.A., published by the American Association of Petroleum Geologists, uses the straight meridians on its Modified Transverse Mercator and is more equivalent to the Equidistant Conic map projection.


Prompts

The following prompts display in the Projection Chooser if Modified Transverse Mercator is selected. Respond to the prompts as described.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Oblique Mercator (Hotine)

Summary

Oblique Mercator is a cylindrical, conformal projection that intersects the global surface along a great circle. It is equivalent to a Mercator projection that has been altered by rotating the cylinder so that the central line of the projection is a great circle path instead of the equator. Shape is true only within any small area. Areal enlargement increases away from the line of tangency. It is a reasonably accurate projection within a 15˚ band along the line of tangency.

The USGS uses the Hotine version of Oblique Mercator. The Hotine version is based on a study of conformal projections published by British geodesist Martin Hotine in 1946-47. Prior to the implementation of the Space Oblique Mercator, the Hotine version was used for mapping Landsat satellite imagery.

Prompts

The following prompts display in the Projection Chooser if Oblique Mercator (Hotine)is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Construction Cylinder

Property Conformal

Meridians Meridians are complex curves concave toward the line of tangency, except each 180th meridian is straight.

Parallels Parallels are complex curves concave toward the nearest pole.

Graticule spacing Graticule spacing increases away from the line of tangency and retains the property of conformality.

Linear scale Linear scale is true along the line of tangency, or along two lines of equidistance from and parallel to the line of tangency.

Uses Useful for plotting linear configurations that are situated along a line oblique to the earth’s equator. Examples are: NASA Surveyor Satellite tracking charts, ERTS flight indexes, strip charts for navigation, and the National Geographic Society’s maps “West Indies,” “Countries of the Caribbean,” “Hawaii,” and “New Zealand.”


Scale factor at center of projection

Designate the desired scale factor along the central line of the projection. This parameter may be used to modify scale distortion away from this central line. A value of 1.0 indicates true scale only along the central line. A value of less than, but close to, one is often used to lessen scale distortion away from the central line.

Latitude of point of origin of projection

False easting

False northing

The center of the projection is defined by rectangular coordinates of false easting and false northing. The origin of rectangular coordinates on this projection occurs at the nearest intersection of the central line with the earth’s equator. To shift the origin to the intersection of the latitude of the origin entered above and the central line of the projection, compute coordinates of the latter point with zero false eastings and northings, reverse the signs of the coordinates obtained, and use these for false eastings and northings. These values must be in meters. It is very often convenient to add additional values so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.
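In other words, the shift amounts to projecting the desired origin once with zero false values and negating the result. For instance, with illustrative numbers only:

# Coordinates of the desired origin computed with zero false easting and northing
# (illustrative values, in meters):
x0, y0 = 152387.6, -418204.3

# Reverse the signs and use the results as the false easting and northing.
false_easting, false_northing = -x0, -y0   # -152387.6, 418204.3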

Do you want to enter either:

A) Azimuth East of North for central line and the longitude of the point of origin

B) The latitude and longitude of the first and second points defining the central line

These formats differ slightly in definition of the central line of the projection.

Format A

For format A the additional prompts are:

Azimuth east of north for central line

Longitude of point of origin

Format A defines the central line of the projection by the angle east of north to the desired great circle path and by the latitude and longitude of the point along the great circle path from which the angle is measured. Appropriate values should be entered.


Format B

For format B the additional prompts are:

Longitude of 1st point defining central line

Latitude of 1st point defining central line

Longitude of 2nd point defining central line

Latitude of 2nd point defining central line

Format B defines the central line of the projection by the latitude of a point on the central line which has the desired scale factor entered previously, and by the longitude and latitude of two points along the desired great circle path. Appropriate values should be entered.


Orthographic

Summary

The Orthographic projection is geometrically based on a plane tangent to the earth, and the point of projection is at infinity (Figure 196). The earth appears as it would from outer space. Light rays that cast the projection are parallel and intersect the tangent plane at right angles. This projection is a truly graphic representation of the earth and is a projection in which distortion becomes a visual aid. It is the most familiar of the azimuthal map projections. Directions from the center of the projection are true.

Construction Plane

Property Compromise

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency.

Oblique aspect: the meridians are ellipses, concave toward the center of the projection.

Equatorial aspect: the meridians are ellipses concave toward the straight central meridian.

Parallels

Polar aspect: the parallels are concentric circles.

Oblique aspect: the parallels are ellipses concave toward the poles.

Equatorial aspect: the parallels are straight and parallel.

Graticule spacing

Polar aspect: meridian spacing is equal and increases away from the point of tangency; parallel spacing decreases away from the point of tangency.

Oblique and equatorial aspects: the graticule spacing decreases away from the center of the projection.

Linear scale Scale is true on the parallels in the polar aspect and on all circles centered at the pole of the projection in all aspects. Scale decreases along lines radiating from the center of the projection.

Uses The U.S. Geological Survey uses the Orthographic map projection in the National Atlas.


This projection is limited to one hemisphere and shrinks those areas toward the periphery. In the polar aspect, latitude ring intervals decrease from the center outwards at a much greater rate than with Lambert Azimuthal. In the equatorial aspect, the central meridian and parallels are straight, with spaces closing up toward the outer edge.

The Orthographic projection seldom appears in atlases. Its utility is more pictorial than technical. Orthographic has been used as a basis for artistic maps by Rand McNally and the USGS.
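For reference, the spherical Orthographic forward equations can be sketched as follows. This is the standard textbook formulation on a sphere, shown only for illustration; it is not the ERDAS IMAGINE routine, and the default radius is an assumed example value.

```python
import math

def orthographic_forward(lon, lat, lon0, lat1, radius=6370997.0):
    """Spherical Orthographic forward projection (illustrative sketch).
    lon0/lat1 give the center of projection; angles in degrees,
    output x, y in meters."""
    lam, phi = math.radians(lon), math.radians(lat)
    lam0, phi1 = math.radians(lon0), math.radians(lat1)
    # cos(c) tests whether the point is on the visible hemisphere.
    cos_c = (math.sin(phi1) * math.sin(phi)
             + math.cos(phi1) * math.cos(phi) * math.cos(lam - lam0))
    if cos_c < 0.0:
        raise ValueError("point is on the hidden hemisphere")
    x = radius * math.cos(phi) * math.sin(lam - lam0)
    y = radius * (math.cos(phi1) * math.sin(phi)
                  - math.sin(phi1) * math.cos(phi) * math.cos(lam - lam0))
    return x, y
```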

Prompts

The following prompts display in the Projection Chooser if Orthographic is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Three views of the Orthographic projection are shown in Figure 196: A) Polar aspect; B) Equatorial aspect; C) Oblique aspect, centered at 40˚N and showing the classic globe-like view.


Figure 196: Orthographic Projection


Polar Stereographic

Summary

The Polar Stereographic may be used to accommodate all regions not included in the UTM coordinate system, that is, regions north of 84˚N and south of 80˚S. This form is called Universal Polar Stereographic (UPS). The projection is equivalent to the polar aspect of the Stereographic projection on a spheroid. The central point is either the North Pole or the South Pole. Of all the polar aspect planar projections, this is the only one that is conformal.

The point of tangency is a single point: either the North Pole or the South Pole. If the plane is secant instead of tangent, the point of global contact is a line of latitude (ESRI 1992).

Polar Stereographic is an azimuthal projection obtained by projecting from the opposite pole (Figure 197). Either the northern or the southern hemisphere can be shown in its entirety, but not both. This projection produces a circular map with one of the poles at the center.

Polar Stereographic stretches areas toward the periphery, and scale increases for areas farther from the central pole. Meridians are straight and radiating; parallels are concentric circles. Even though scale and area are not constant with Polar Stereographic, this projection, like all stereographic projections, possesses the property of conformality.

The Astrogeology Center of the Geological Survey at Flagstaff, Arizona, has been using the Polar Stereographic projection for the mapping of polar areas of every planet and satellite for which there is sufficient information.

Construction Plane

Property Conformal

Meridians Meridians are straight.

Parallels Parallels are concentric circles.

Graticule spacing The distance between parallels increases with distance from the central pole.

Linear scale The scale increases with distance from the center. If a standard parallel is chosen rather than one of the poles, this latitude represents the true scale, and the scale nearer the pole is reduced.

Uses Polar regions (conformal). In the UPS system, the scale factor at the pole is made 0.994, thus making the standard parallel (latitude of true scale) approximately 81˚07'N or S; a spherical check of this relationship is sketched below.
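On a sphere, the Polar Stereographic scale at latitude φ is k = 2k0 / (1 + sin φ), so true scale (k = 1) occurs where sin φ = 2k0 - 1. The sketch below uses that spherical approximation only to show why k0 = 0.994 pulls the standard parallel to roughly 81˚; the 81˚07' value quoted above comes from the ellipsoidal formulas.

```python
import math

def ups_latitude_of_true_scale(k0=0.994):
    """Spherical approximation of the latitude of true scale for a
    polar-aspect stereographic projection with pole scale factor k0."""
    return math.degrees(math.asin(2.0 * k0 - 1.0))

print(round(ups_latitude_of_true_scale(), 2))   # roughly 81.1
```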


Prompts

The following prompts display in the Projection Chooser if Polar Stereographic is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the origin of the map projection in both spherical and rectangular coordinates. Ellipsoid projections of the polar regions normally use the International 1909 spheroid (ESRI 1992).

Longitude directed straight down below pole of map

Enter a value for longitude directed straight down below the pole for a north polar aspect, or straight up from the pole for a south polar aspect. This is equivalent to centering the map with a desired meridian.

Latitude of true scale

Enter a value for latitude at which true scale is desired. For secant projections, specify the latitude of true scale as any line of latitude other than 90˚N or S. For tangential projections, specify the latitude of true scale as the North Pole, 90 00 00, or the South Pole, -90 00 00 (ESRI 1992).

False easting

False northing

Enter values of false easting and false northing corresponding to the pole. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Figure 197: Polar Stereographic Projection and its Geometric Construction

This projection is conformal and is the most scientific projection for polar regions.


Polyconic

Summary

Polyconic was developed in 1820 by Ferdinand Hassler specifically for mapping the eastern coast of the U.S. (Figure 198). Polyconic projections are made up of an infinite number of conic projections tangent to an infinite number of parallels. These conic projections are placed in relation to a central meridian. Polyconic projections compromise properties such as equal area and conformality, although the central meridian is held true to scale.

This projection is used mostly for north-south oriented maps. Distortion increases greatly the farther east and west an area is from the central meridian.
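A sketch of the spherical (American) Polyconic forward equations follows. It uses the common textbook formulas rather than the exact ERDAS IMAGINE routine, and the sphere radius is only an assumed example value.

```python
import math

def polyconic_forward(lon, lat, lon0, lat0, radius=6370997.0):
    """Spherical Polyconic forward projection (illustrative sketch).
    lon0/lat0 define the central meridian and latitude of origin;
    angles in degrees, output in meters."""
    lam, phi = math.radians(lon), math.radians(lat)
    lam0, phi0 = math.radians(lon0), math.radians(lat0)
    if phi == 0.0:
        # Along the equator the formulas reduce to simple scaling.
        return radius * (lam - lam0), -radius * phi0
    e = (lam - lam0) * math.sin(phi)
    cot_phi = 1.0 / math.tan(phi)
    x = radius * cot_phi * math.sin(e)
    y = radius * (phi - phi0 + cot_phi * (1.0 - math.cos(e)))
    return x, y
```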

Prompts

The following prompts display in the Projection Chooser if Polyconic is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Construction Cone

Property Compromise

Meridians The central meridian is a straight line, but all other meridians are complex curves.

Parallels Parallels (except the equator) are nonconcentric circular arcs. The equator is a straight line.

Graticule spacing

All parallels are arcs of circles, but not concentric. All meridians, except the central meridian, are concave toward the central meridian. Parallels cross the central meridian at equal intervals but get farther apart at the east and west peripheries.

Linear scale The scale along each parallel and along the central meridian of the projection is accurate. It increases along the meridians as the distance from the central meridian increases (ESRI 1992).

Uses

Used for 7.5-minute and 15-minute topographic USGS quad sheets, from 1886 to about 1957 (ESRI 1992). Used almost exclusively in slightly modified form for large-scale mapping in the United States until the 1950s.


Define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for the longitude of the desired central meridian and the latitude of the origin of projection.

False easting at central meridian

False northing at origin

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 198: Polyconic Projection of North America

In Figure 198, the central meridian is 100˚W. This projection is used by the U.S. Geological Survey for topographic quadrangle maps.


Sinusoidal

Summary

Sometimes called the Sanson-Flamsteed, Sinusoidal is a projection with some characteristics of a cylindrical projection; it is often called a pseudo-cylindrical type. The central meridian is the only straight meridian; all others become sinusoidal curves. All parallels are straight and the correct length. Parallels are also the correct distance from the equator, which, for a complete world map, is twice as long as the central meridian.

Sinusoidal maps achieve the property of equal area but not conformality. The equator and central meridian are distortion free, but distortion becomes pronounced near outer meridians, especially in polar regions.

Interrupting a Sinusoidal world or hemisphere map can lessen distortion. The interrupted Sinusoidal contains less distortion because each interrupted area can be constructed to contain a separate central meridian. Central meridians may be different for the northern and southern hemispheres and may be selected to minimize distortion of continents or oceans.

Sinusoidal is particularly suited for regions smaller than the entire world, especially those bordering the equator, such as South America or Africa. Sinusoidal is also used by the USGS as a base map for showing prospective hydrocarbon provinces and sedimentary basins of the world.
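Because each parallel is drawn at its true length, the spherical Sinusoidal forward equations are especially simple: x = R (λ - λ0) cos φ and y = R φ. A minimal sketch follows (illustrative only; the radius is an assumed example value, and this is not the ERDAS IMAGINE routine).

```python
import math

def sinusoidal_forward(lon, lat, lon0, radius=6370997.0):
    """Spherical Sinusoidal forward projection (illustrative sketch).
    Shrinking x by cos(lat) keeps every parallel at its true length,
    which is what makes the projection equal area."""
    lam, phi = math.radians(lon), math.radians(lat)
    lam0 = math.radians(lon0)
    x = radius * (lam - lam0) * math.cos(phi)
    y = radius * phi
    return x, y
```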

Construction Pseudo-cylinder

Property Equal area

Meridians Meridians are sinusoidal curves, curved toward a straight central meridian.

Parallels All parallels are straight, parallel lines.

Graticule spacing Meridian spacing is equal and decreases toward the poles. Parallel spacing is equal. The graticule spacing retains the property of equivalence of area.

Linear scale Linear scale is true on the parallels and the central meridian.

Uses

Used as an equal area projection to portray areas that have a maximum extent in a north-south direction. Used as a world equal-area projection in atlases to show distribution patterns. Used by the U.S. Geological Survey as the base for maps showing prospective hydrocarbon provinces of the world and sedimentary basins of the world.


Prompts

The following prompts display in the Projection Chooser if Sinusoidal is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


Space Oblique Mercator

Summary

The Space Oblique Mercator (SOM) projection is nearly conformal and has little scale distortion within the sensing range of an orbiting mapping satellite such as Landsat. It is the first projection to incorporate the earth's rotation with respect to the orbiting satellite.

The projection is a modified cylindrical type, for which the central line is curved and defined by the groundtrack of the orbit of the satellite. The line of tangency is conceptual and there are no graticules.

The Space Oblique Mercator projection is defined by USGS. According to USGS, the X axis passes through the descending node for each daytime scene. The Y axis is perpendicular to the X axis, to form a Cartesian coordinate system. The direction of the X axis in a daytime Landsat scene is in the direction of the satellite motion, that is, south. The Y axis is directed east. For SOM projections used by EOSAT, the axes are switched; the X axis is directed east and the Y axis is directed south.

The Space Oblique Mercator projection is specifically designed to minimize distortion within sensing range of a mapping satellite as it orbits the Earth. It can be used for the rectification of, and continuous mapping from, satellite imagery. It is the standard format for data from Landsats 4 and 5. Plots for adjacent paths do not match without transformation (ESRI 1991).

Construction Cylinder

Property Conformal

Meridians All meridians are curved lines except for the meridian crossed by the groundtrack at each polar approach.

Parallels All parallels are curved lines.

Graticule spacing There are no graticules.

Linear scale Scale is true along the groundtrack, and varies approximately 0.01% within sensing range (ESRI 1992).

Uses Used for georectification of, and continuous mapping from, satellite imagery. Standard format for data from Landsats 4 and 5 (ESRI 1992).

561Field Guide

Page 588: Erdas FieldGuide

Prompts

The following prompts display in the Projection Chooser if Space Oblique Mercator is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Landsat vehicle ID (1-5)

Specify whether the data are from Landsat 1, 2, 3, 4, or 5.

Orbital path number (1-251 or 1-233)

For Landsats 1, 2, and 3, the path range is from 1 to 251. For Landsats 4 and 5, the path range is from 1 to 233.

False easting

False northing

Enter values of false easting and false northing corresponding to the desired center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


State Plane

The State Plane is an X,Y coordinate system (not a map projection) whose zones divide the U.S. into over 130 sections, each with its own projection surface and grid network (Figure 199). With the exception of very narrow states, such as Delaware, New Jersey, and New Hampshire, most states are divided into two to ten zones. The Lambert Conformal projection is used for zones extending mostly in an east-west direction. The Transverse Mercator projection is used for zones extending mostly in a north-south direction. Alaska, Florida, and New York use either Transverse Mercator or Lambert Conformal for different areas. The Aleutian panhandle of Alaska is prepared on the Oblique Mercator projection.

Zone boundaries follow state and county lines, and, because each zone is small, distortion is less than one in 10,000. Each zone has a centrally located origin and a central meridian which passes through this origin. Two zone numbering systems are currently in use: the U.S. Geological Survey (USGS) code system and the National Ocean Service (NOS) code system (Tables 33 and 34). Other numbering systems also exist.

Prompts

The following prompts will appear in the Projection Chooser if State Plane is selected. Respond to the prompts as described.

State Plane Zone

Enter either the USGS zone code number as a positive value, or the NOS zone code number as a negative value.
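The sign convention at this prompt can be expressed as a small sketch (illustrative only; the example zone codes are taken from Table 33).

```python
def interpret_state_plane_zone(code):
    """Interpret a zone number typed at the State Plane Zone prompt:
    positive values are read as USGS zone codes, negative values as
    NOS zone codes (a sketch of the convention described above)."""
    if code == 0:
        raise ValueError("zone code must be non-zero")
    return ("USGS", code) if code > 0 else ("NOS", -code)

# Example: Alabama East can be given as 3101 (USGS) or -101 (NOS).
print(interpret_state_plane_zone(3101))   # ('USGS', 3101)
print(interpret_state_plane_zone(-101))   # ('NOS', 101)
```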

NAD27 or 83

Either the North American Datum 1927 (NAD27) or the North American Datum 1983 (NAD83) may be used to perform the State Plane calculations.

• NAD27 is based on the Clarke 1866 spheroid.

• NAD83 is based on the GRS 1980 spheroid. Some zone numbers have been changed or deleted from NAD27.

Tables for both NAD27 and NAD83 zone numbers follow (Tables 33 and 34). These tables include both USGS and NOS code systems.


Figure 199: Zones of the State Plane Coordinate System

The following abbreviations are used in Table 33 and Table 34:

Tr Merc = Transverse Mercator
Lambert = Lambert Conformal Conic
Oblique = Oblique Mercator (Hotine)
Polycon = Polyconic


Table 33: NAD27 State Plane coordinate system zone numbers, projection types, and zone code numbers for the United States

State  Zone Name  Type  USGS Code  NOS Code

Alabama East Tr Merc 3101 -101

West Tr Merc 3126 -102

Alaska 1 Oblique 6101 -5001

2 Tr Merc 6126 -5002

3 Tr Merc 6151 -5003

4 Tr Merc 6176 -5004

5 Tr Merc 6201 -5005

6 Tr Merc 6226 -5006

7 Tr Merc 6251 -5007

8 Tr Merc 6276 -5008

9 Tr Merc 6301 -5009

10 Lambert 6326 -5010

American Samoa ------- Lambert ------ -5302

Arizona East Tr Merc 3151 -201

Central Tr Merc 3176 -202

West Tr Merc 3201 -203

Arkansas North Lambert 3226 -301

South Lambert 3251 -302

California I Lambert 3276 -401

II Lambert 3301 -402

III Lambert 3326 -403

IV Lambert 3351 -404

V Lambert 3376 -405

VI Lambert 3401 -406

VII Lambert 3426 -407

Colorado North Lambert 3451 -501

Central Lambert 3476 -502

South Lambert 3501 -503

Connecticut -------- Lambert 3526 -600

Delaware -------- Tr Merc 3551 -700

District of Columbia Use Maryland or Virginia North

565Field Guide

Page 592: Erdas FieldGuide

Florida East Tr Merc 3601 -901

West Tr Merc 3626 -902

North Lambert 3576 -903

Georgia East Tr Merc 3651 -1001

West Tr Merc 3676 -1002

Guam ------- Polycon ------- -5400

Hawaii 1 Tr Merc 5876 -5101

2 Tr Merc 5901 -5102

3 Tr Merc 5926 -5103

4 Tr Merc 5951 -5104

5 Tr Merc 5976 -5105

Idaho East Tr Merc 3701 -1101

Central Tr Merc 3726 -1102

West Tr Merc 3751 -1103

Illinois East Tr Merc 3776 -1201

West Tr Merc 3801 -1202

Indiana East Tr Merc 3826 -1301

West Tr Merc 3851 -1302

Iowa North Lambert 3876 -1401

South Lambert 3901 -1402

Kansas North Lambert 3926 -1501

South Lambert 3951 -1502

Kentucky North Lambert 3976 -1601

South Lambert 4001 -1602

Louisiana North Lambert 4026 -1701

South Lambert 4051 -1702

Offshore Lambert 6426 -1703

Maine East Tr Merc 4076 -1801

West Tr Merc 4101 -1802

Maryland ------- Lambert 4126 -1900

Massachusetts Mainland Lambert 4151 -2001

Island Lambert 4176 -2002


Michigan (Tr Merc) East Tr Merc 4201 -2101

Central Tr Merc 4226 -2102

West Tr Merc 4251 -2103

Michigan (Lambert) North Lambert 6351 -2111

Central Lambert 6376 -2112

South Lambert 6401 -2113

Minnesota North Lambert 4276 -2201

Central Lambert 4301 -2202

South Lambert 4326 -2203

Mississippi East Tr Merc 4351 -2301

West Tr Merc 4376 -2302

Missouri East Tr Merc 4401 -2401

Central Tr Merc 4426 -2402

West Tr Merc 4451 -2403

Montana North Lambert 4476 -2501

Central Lambert 4501 -2502

South Lambert 4526 -2503

Nebraska North Lambert 4551 -2601

South Lambert 4576 -2602

Nevada East Tr Merc 4601 -2701

Central Tr Merc 4626 -2702

West Tr Merc 4651 -2703

New Hampshire --------- Tr Merc 4676 -2800

New Jersey --------- Tr Merc 4701 -2900

New Mexico East Tr Merc 4726 -3001

Central Tr Merc 4751 -3002

West Tr Merc 4776 -3003

New York East Tr Merc 4801 -3101

Central Tr Merc 4826 -3102

West Tr Merc 4851 -3103

Long Island Lambert 4876 -3104

North Carolina -------- Lambert 4901 -3200


North Dakota North Lambert 4926 -3301

South Lambert 4951 -3302

Ohio North Lambert 4976 -3401

South Lambert 5001 -3402

Oklahoma North Lambert 5026 -3501

South Lambert 5051 -3502

Oregon North Lambert 5076 -3601

South Lambert 5101 -3602

Pennsylvania North Lambert 5126 -3701

South Lambert 5151 -3702

Puerto Rico -------- Lambert 6001 -5201

Rhode Island -------- Tr Merc 5176 -3800

South Carolina North Lambert 5201 -3901

South Lambert 5226 -3902

South Dakota North Lambert 5251 -4001

South Lambert 5276 -4002

St. Croix --------- Lambert 6051 -5202

Tennessee --------- Lambert 5301 -4100

Texas North Lambert 5326 -4201

North Central Lambert 5351 -4202

Central Lambert 5376 -4203

South Central Lambert 5401 -4204

South Lambert 5426 -4205

Utah North Lambert 5451 -4301

Central Lambert 5476 -4302

South Lambert 5501 -4303

Vermont -------- Tr Merc 5526 -4400

Virginia North Lambert 5551 -4501

South Lambert 5576 -4502

Virgin Islands -------- Lambert 6026 -5201

Washington North Lambert 5601 -4601

South Lambert 5626 -4602


West Virginia North Lambert 5651 -4701

South Lambert 5676 -4702

Wisconsin North Lambert 5701 -4801

Central Lambert 5726 -4802

South Lambert 5751 -4803

Wyoming East Tr Merc 5776 -4901

East Central Tr Merc 5801 -4902

West Central Tr Merc 5826 -4903

West Tr Merc 5851 -4904


Table 34: NAD83 State Plane coordinate system zone numbers, projection types, and zone code numbers for the United States

State  Zone Name  Type  USGS Code  NOS Code

Alabama East Tr Merc 3101 -101

West Tr Merc 3126 -102

Alaska 1 Oblique 6101 -5001

2 Tr Merc 6126 -5002

3 Tr Merc 6151 -5003

4 Tr Merc 6176 -5004

5 Tr Merc 6201 -5005

6 Tr Merc 6226 -5006

7 Tr Merc 6251 -5007

8 Tr Merc 6276 -5008

9 Tr Merc 6301 -5009

10 Lambert 6326 -5010

Arizona East Tr Merc 3151 -201

Central Tr Merc 3176 -202

West Tr Merc 3201 -203

Arkansas North Lambert 3226 -301

South Lambert 3251 -302

California I Lambert 3276 -401

II Lambert 3301 -402

III Lambert 3326 -403

IV Lambert 3351 -404

V Lambert 3376 -405

VI Lambert 3401 -406

Colorado North Lambert 3451 -501

Central Lambert 3476 -502

South Lambert 3501 -503

Connecticut -------- Lambert 3526 -600

Delaware -------- Tr Merc 3551 -700

District of Columbia Use Maryland or Virginia North


Florida East Tr Merc 3601 -901

West Tr Merc 3626 -902

North Lambert 3576 -903

Georgia East Tr Merc 3651 -1001

West Tr Merc 3676 -1002

Hawaii 1 Tr Merc 5876 -5101

2 Tr Merc 5901 -5102

3 Tr Merc 5926 -5103

4 Tr Merc 5951 -5104

5 Tr Merc 5976 -5105

Idaho East Tr Merc 3701 -1101

Central Tr Merc 3726 -1102

West Tr Merc 3751 -1103

Illinois East Tr Merc 3776 -1201

West Tr Merc 3801 -1202

Indiana East Tr Merc 3826 -1301

West Tr Merc 3851 -1302

Iowa North Lambert 3876 -1401

South Lambert 3901 -1402

Kansas North Lambert 3926 -1501

South Lambert 3951 -1502

Kentucky North Lambert 3976 -1601

South Lambert 4001 -1602

Louisiana North Lambert 4026 -1701

South Lambert 4051 -1702

Offshore Lambert 6426 -1703

Maine East Tr Merc 4076 -1801

West Tr Merc 4101 -1802

Maryland ------- Lambert 4126 -1900

Massachusetts Mainland Lambert 4151 -2001

Island Lambert 4176 -2002


Michigan North Lambert 6351 -2111

Central Lambert 6376 -2112

South Lambert 6401 -2113

Minnesota North Lambert 4276 -2201

Central Lambert 4301 -2202

South Lambert 4326 -2203

Mississippi East Tr Merc 4351 -2301

West Tr Merc 4376 -2302

Missouri East Tr Merc 4401 -2401

Central Tr Merc 4426 -2402

West Tr Merc 4451 -2403

Montana --------- Lambert 4476 -2500

Nebraska --------- Lambert 4551 -2600

Nevada East Tr Merc 4601 -2701

Central Tr Merc 4626 -2702

West Tr Merc 4651 -2703

New Hampshire --------- Tr Merc 4676 -2800

New Jersey --------- Tr Merc 4701 -2900

New Mexico East Tr Merc 4726 -3001

Central Tr Merc 4751 -3002

West Tr Merc 4776 -3003

New York East Tr Merc 4801 -3101

Central Tr Merc 4826 -3102

West Tr Merc 4851 -3103

Long Island Lambert 4876 -3104

North Carolina --------- Lambert 4901 -3200

North Dakota North Lambert 4926 -3301

South Lambert 4951 -3302

Ohio North Lambert 4976 -3401

South Lambert 5001 -3402

Oklahoma North Lambert 5026 -3501

South Lambert 5051 -3502


Oregon North Lambert 5076 -3601

South Lambert 5101 -3602

Pennsylvania North Lambert 5126 -3701

South Lambert 5151 -3702

Puerto Rico --------- Lambert 6001 -5201

Rhode Island --------- Tr Merc 5176 -3800

South Carolina --------- Lambert 5201 -3900

South Dakota North Lambert 5251 -4001

South Lambert 5276 -4002

Tennessee --------- Lambert 5301 -4100

Texas North Lambert 5326 -4201

North Central Lambert 5351 -4202

Central Lambert 5376 -4203

South Central Lambert 5401 -4204

South Lambert 5426 -4205

Utah North Lambert 5451 -4301

Central Lambert 5476 -4302

South Lambert 5501 -4303

Vermont --------- Tr Merc 5526 -4400

Virginia North Lambert 5551 -4501

South Lambert 5576 -4502

Virgin Islands --------- Lambert 6026 -5201

Washington North Lambert 5601 -4601

South Lambert 5626 -4602

West Virginia North Lambert 5651 -4701

South Lambert 5676 -4702

Wisconsin North Lambert 5701 -4801

Central Lambert 5726 -4802

South Lambert 5751 -4803


Wyoming East Tr Merc 5776 -4901

East Central Tr Merc 5801 -4902

West Central Tr Merc 5826 -4903

West Tr Merc 5851 -4904


Stereographic

Summary

Stereographic is a perspective projection in which points are projected from a position on the opposite side of the globe onto a plane tangent to the earth (Figure 200). All of one hemisphere can easily be shown, but it is impossible to show both hemispheres in their entirety from one center. It is the only azimuthal projection that preserves truth of angles and local shape. Scale increases and parallels become more widely spaced farther from the center.

In the equatorial aspect, all parallels except the equator are circular arcs. In the polar aspect, latitude rings are spaced farther apart with increasing distance from the pole.
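For reference, the spherical oblique Stereographic forward equations can be sketched as follows. This is the standard textbook form, shown only to illustrate how scale grows away from the center; it is not the ERDAS IMAGINE routine, and the sphere radius is an assumed example value.

```python
import math

def stereographic_forward(lon, lat, lon0, lat1, k0=1.0, radius=6370997.0):
    """Spherical oblique Stereographic forward projection (sketch).
    (lon0, lat1) is the center of projection; the local scale factor k
    grows toward the periphery, and the antipode of the center cannot
    be projected (the denominator goes to zero there)."""
    lam, phi = math.radians(lon), math.radians(lat)
    lam0, phi1 = math.radians(lon0), math.radians(lat1)
    cos_c = (math.sin(phi1) * math.sin(phi)
             + math.cos(phi1) * math.cos(phi) * math.cos(lam - lam0))
    k = 2.0 * k0 / (1.0 + cos_c)          # local scale factor
    x = radius * k * math.cos(phi) * math.sin(lam - lam0)
    y = radius * k * (math.cos(phi1) * math.sin(phi)
                      - math.sin(phi1) * math.cos(phi) * math.cos(lam - lam0))
    return x, y
```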

Construction Plane

Property Conformal

Meridians

Polar aspect: the meridians are straight lines radiating from the point of tangency.

Oblique and equatorial aspects: the meridians are arcs of circles concave toward a straight central meridian. In the equatorial aspect, the outer meridian of the hemisphere is a circle centered at the projection center.

Parallels

Polar aspect: the parallels are concentric circles.

Oblique aspect: the parallels are nonconcentric arcs of circles concave toward one of the poles, with one parallel being a straight line.

Equatorial aspect: parallels are nonconcentric arcs of circles concave toward the poles; the equator is straight.

Graticule spacing The graticule spacing increases away from the center of the projection in all aspects and it retains the property of conformality.

Linear scale Scale increases toward the periphery of the projection.

Uses

The Stereographic projection is the most widely used azimuthal projection, mainly used for portraying large, continent-size areas of similar extent in all directions. It is used in geophysics for solving problems in spherical geometry. The polar aspect is used for topographic maps and navigational charts. The American Geographical Society uses this projection as the basis for its "Map of the Arctic." The U.S. Geological Survey uses it as the basis for maps of Antarctica.


Prompts

The following prompts display in the Projection Chooser if Stereographic is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Define the center of the map projection in both spherical and rectangular coordinates.

Longitude of center of projection

Latitude of center of projection

Enter values for the longitude and latitude of the desired center of the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough so that no negative coordinates will occur within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

The Stereographic is the only azimuthal projection which is conformal. Figure 200 shows two views: A) Equatorial aspect, often used in the 16th and 17th centuries for maps of hemispheres; B) Oblique aspect, centered on 40˚N.


Figure 200: Stereographic Projection


Transverse Mercator

Summary

Transverse Mercator is similar to the Mercator projection except that the axis of the projection cylinder is rotated 90˚ from the vertical (polar) axis. The contact line is then a chosen meridian instead of the equator, and this central meridian runs from pole to pole. It loses the properties of straight meridians and straight parallels of the standard Mercator projection (except for the central meridian, the two meridians 90˚ away, and the equator).

Transverse Mercator also loses the straight rhumb lines of the Mercator map, but it is a conformal projection. Scale is true along the central meridian or along two straight lines equidistant from, and parallel to, the central meridian. It cannot be edge-joined in an east-west direction if each sheet has its own central meridian.

In the United States, Transverse Mercator is the projection used in the State Plane coordinate system for states with predominant north-south extent. The entire earth from 84˚N to 80˚S is mapped with a system of projections called the Universal Transverse Mercator.
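The spherical Transverse Mercator forward equations give a feel for the projection; the ellipsoidal series actually used for State Plane and UTM work is considerably longer. The sketch below is illustrative only, with an assumed sphere radius, and is not the ERDAS IMAGINE routine.

```python
import math

def transverse_mercator_forward(lon, lat, lon0, lat0=0.0, k0=0.9996,
                                radius=6370997.0):
    """Spherical Transverse Mercator forward projection (sketch).
    lon0/lat0 are the central meridian and latitude of origin; k0 is
    the scale factor at the central meridian. Degrees in, meters out."""
    lam, phi = math.radians(lon), math.radians(lat)
    lam0, phi0 = math.radians(lon0), math.radians(lat0)
    b = math.cos(phi) * math.sin(lam - lam0)
    x = 0.5 * radius * k0 * math.log((1.0 + b) / (1.0 - b))
    y = radius * k0 * (math.atan2(math.tan(phi), math.cos(lam - lam0)) - phi0)
    return x, y
```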

Construction Cylinder

Property Conformal

Meridians Meridians are complex curves concave toward a straight central meridian that is tangent to the globe. The straight central meridian intersects the equator and one meridian at a 90˚ angle.

Parallels Parallels are complex curves concave toward the nearest pole; the equator is straight.

Graticule spacing Parallels are spaced at their true distances on the straight central meridian. Graticule spacing increases away from the tangent meridian. The graticule retains the property of conformality.

Linear scale Linear scale is true along the line of tangency, or along two lines equidistant from, and parallel to, the line of tangency.

Uses

Used where the north-south dimension is greater than the east-west dimension. Used as the base for the U.S. Geological Survey's 1:250,000-scale series, and for some of the 7.5-minute and 15-minute quadrangles of the National Topographic Map Series.


Prompts

The following prompts display in the Projection Chooser if Transverse Mercator is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Scale factor at central meridian

Designate the desired scale factor at the central meridian. This parameter is used to modify scale distortion. A value of one indicates true scale only along the central meridian. It may be desirable to have true scale along two lines equidistant from and parallel to the central meridian, or to lessen scale distortion away from the central meridian. A factor of less than, but close to, one is often used.

Finally, define the origin of the map projection in both spherical and rectangular coordinates.

Longitude of central meridian

Latitude of origin of projection

Enter values for the longitude of the desired central meridian and the latitude of the origin of projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the intersection of the central meridian and the latitude of the origin of projection. These values must be in meters. It is very often convenient to make them large enough so that there will be no negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.


UTM

The Universal Transverse Mercator (UTM) is an international plane (rectangular) coordinate system developed by the U.S. Army that extends around the world from 84˚N to 80˚S. The world is divided into 60 zones, each covering six degrees of longitude. Each zone extends three degrees eastward and three degrees westward from its central meridian. Zones are numbered consecutively west to east from the 180˚ meridian (Figure 201, Table 35).

The Transverse Mercator projection is then applied to each UTM zone. Transverse Mercator is a transverse form of the Mercator cylindrical projection. The projection cylinder is rotated 90˚ from the vertical (polar) axis and can then be placed to intersect at a chosen central meridian. The UTM system specifies the central meridian of each zone. With a separate projection for each UTM zone, a high degree of accuracy is possible (one part in 1000 maximum distortion within each zone).
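The zone numbering rule above is easy to compute. The following sketch derives the zone number and central meridian from a longitude; it is illustrative only and ignores the special grid exceptions used in some mapping practice (for example, around Norway and Svalbard).

```python
def utm_zone(lon):
    """Return the UTM zone number (1-60) for a longitude in degrees:
    zone 1 starts at 180W and zones run eastward in 6-degree strips."""
    lon = ((lon + 180.0) % 360.0) - 180.0     # normalize to [-180, 180)
    return int((lon + 180.0) // 6) + 1

def utm_central_meridian(zone):
    """Central meridian of a UTM zone, in degrees (negative = west)."""
    return zone * 6 - 183

# Example from Table 35: zone 16 spans 90W-84W with central meridian 87W.
print(utm_zone(-86.5), utm_central_meridian(16))   # 16 -87
```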

If the map to be projected extends beyond the border of the UTM zone, the entire map may be projected for any UTM zone specified by the user.

See "Transverse Mercator" on page 578 for more information.

Prompts

The following prompts display in the Projection Chooser if UTM is chosen.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

UTM Zone

Is the data North or South of the equator?

All values in Table 35 are in full degrees east (E) or west (W) of the Greenwich prime meridian (0).


Figure 201: Zones of the Universal Transverse Mercator Grid in the United States


Table 35: UTM zones, central meridians, and longitude ranges

Zone  Central Meridian  Range        Zone  Central Meridian  Range

1 177W 180W-174W 31 3E 0-6E

2 171W 174W-168W 32 9E 6E-12E

3 165W 168W-162W 33 15E 12E-18E

4 159W 162W-156W 34 21E 18E-24E

5 153W 156W-150W 35 27E 24E-30E

6 147W 150W-144W 36 33E 30E-36E

7 141W 144W-138W 37 39E 36E-42E

8 135W 138W-132W 38 45E 42E-48E

9 129W 132W-126W 39 51E 48E-54E

10 123W 126W-120W 40 57E 54E-60E

11 117W 120W-114W 41 63E 60E-66E

12 111W 114W-108W 42 69E 66E-72E

13 105W 108W-102W 43 75E 72E-78E

14 99W 102W-96W 44 81E 78E-84E

15 93W 96W-90W 45 87E 84E-90E

16 87W 90W-84W 46 93E 90E-96E

17 81W 84W-78W 47 99E 96E-102E

18 75W 78W-72W 48 105E 102E-108E

19 69W 72W-66W 49 111E 108E-114E

20 63W 66W-60W 50 117E 114E-120E

21 57W 60W-54W 51 123E 120E-126E

22 51W 54W-48W 52 129E 126E-132E

23 45W 48W-42W 53 135E 132E-138E

24 39W 42W-36W 54 141E 138E-144E

25 33W 36W-30W 55 147E 144E-150E

26 27W 30W-24W 56 153E 150E-156E

27 21W 24W-18W 57 159E 156E-162E

28 15W 18W-12W 58 165E 162E-168E

29 9W 12W-6W 59 171E 168E-174E

30 3W 6W-0 60 177E 174E-180E


Van der Grinten I

Summary

The Van der Grinten I projection produces a map that is neither conformal nor equal area (Figure 202). It compromises all properties, and represents the earth within a circle.

All lines are curved except the central meridian and the equator. Parallels are spaced farther apart toward the poles. Meridian spacing is equal at the equator. Scale is true along the equator, but increases rapidly toward the poles, which are usually not represented.

Van der Grinten I avoids the excessive stretching of the Mercator and the shape distortion of many of the equal area projections. It has been used to show distribution of mineral resources on the ocean floor.

Prompts

The following prompts display in the Projection Chooser if Van der Grinten I is selected. Respond to the prompts as described.

Spheroid Name:

Datum Name:

Select the spheroid and datum to use.

The list of available spheroids is located on page 430 in "CHAPTER 11: Cartography."

Construction Miscellaneous

Property Compromise

Meridians Meridians are circular arcs concave toward a straight central meridian.

Parallels Parallels are circular arcs concave toward the poles, except for a straight equator.

Graticule spacing

Meridian spacing is equal at the equator. The parallels are spaced farther apart toward the poles. The central meridian and equator are straight lines. The poles commonly are not represented. The graticule spacing results in a compromise of all properties.

Linear scale Linear scale is true along the equator. Scale increases rapidly toward the poles.

Uses The Van der Grinten projection is used by the National Geographic Society for world maps. Used by the U.S. Geological Survey to show distribution of mineral resources on the sea floor.


Longitude of central meridian

Enter a value for the longitude of the desired central meridian to center the projection.

False easting

False northing

Enter values of false easting and false northing corresponding to the center of the projection. These values must be in meters. It is very often convenient to make them large enough to prevent negative coordinates within the region of the map projection. That is, the origin of the rectangular coordinate system should fall outside of the map projection to the south and west.

Figure 202: Van der Grinten I Projection

The Van der Grinten I projection resembles the Mercator, but it is not conformal.


External Projections

The following external projections are supported in ERDAS IMAGINE and are described in this section. Some of these projections were discussed in the previous section. Those descriptions are not repeated here; simply refer to the page number in parentheses for more information.

NOTE: ERDAS IMAGINE does not support datum shifts for these external projections.

• Albers Conical Equal Area (see page 519)

• Azimuthal Equidistant (see page 522)

• Bipolar Oblique Conic Conformal

• Cassini-Soldner

• Conic Equidistant (see page 525)

• Laborde Oblique Mercator

• Lambert Azimuthal Equal Area (see page 535)

• Lambert Conformal Conic (see page 538)

• Mercator (see page 541)

• Modified Polyconic

• Modified Stereographic

• Mollweide Equal Area

• Oblique Mercator (see page 548)

• Orthographic (see page 551)

• Plate Carrée (see page 527)

• Rectified Skew Orthomorphic

• Regular Polyconic (see page 557)

• Robinson Pseudocylindrical

• Sinusoidal (see page 559)

• Southern Orientated Gauss Conformal

• Stereographic (see page 575)

• Stereographic (Oblique) (see page 575)


• Transverse Mercator (see page 578)

• Universal Transverse Mercator (see page 580)

• Van der Grinten (see page 583)

• Winkel’s Tripel

Bipolar Oblique Conic Conformal

Summary

The Bipolar Oblique Conic Conformal projection was developed by O.M. Miller and William A. Briesemeister in 1941 specifically for mapping North and South America, and maintains conformality for these regions. It is based upon the Lambert Conformal Conic, using two oblique conic projections side-by-side. The two oblique conics are joined with the poles 104˚ apart. A great circle arc 104˚ long begins at 20˚S and 110˚W, cuts through Central America, and terminates at 45˚N and approximately 19˚59'36"W. The scale of the map is then increased by approximately 3.5%. The origin of the coordinates is made 17˚15'N, 73˚02'W.

Refer to "Lambert Conformal Conic" on page 538 for more information.

Prompts

The following prompts display in the Projection Chooser if Bipolar Oblique Conic Conformal is selected.

Projection Name

Spheroid Type

Datum Name

Construction Cone

Property Conformal

Meridians Meridians are complex curves concave toward the center of the projection.

Parallels Parallels are complex curves concave toward the nearest pole.

Graticule spacing Graticule spacing increases away from the lines of true scale and retains the property of conformality.

Linear scale

Linear scale is true along two lines that do not lie along any meridian or parallel. Scale is compressed between these lines and expanded beyond them. Linear scale is generally good, but there is as much as a 10% error at the edge of the projection as used.

Uses Used to represent one or both of the American continents. Examples are the Basement map of North America and the Tectonic map of North America.


Cassini-Soldner

Summary

The Cassini projection was devised by C. F. Cassini de Thury in 1745 for the survey of France. Mathematical analysis by J. G. von Soldner in the early 19th century led to more accurate ellipsoidal formulas. Today, it has largely been replaced by the Transverse Mercator projection, although it is still in limited use outside of the United States. It was one of the major topographic mapping projections until the early 20th century.

The spherical form of the projection bears the same relation to the Equidistant Cylindrical or Plate Carrée projection that the spherical Transverse Mercator bears to the regular Mercator. Instead of having the straight meridians and parallels of the Equidistant Cylindrical, the Cassini has complex curves for each, except for the equator, the central meridian, and each meridian 90˚ away from the central meridian, all of which are straight.

There is no distortion along the central meridian if it is maintained at true scale, which is the usual case. If the central meridian is instead given a reduced scale factor, the lines of true scale are two straight lines on the map, parallel to, and equidistant from, the central meridian, and there is no distortion along those two lines.

The scale is correct along the central meridian and also along any straight line perpendicular to the central meridian. It gradually increases in a direction parallel to the central meridian as the distance from that meridian increases, but the scale is constant along any straight line on the map that is parallel to the central meridian. Therefore, Cassini-Soldner is more suitable for regions that are predominantly north-south in extent, such as Great Britain, than for regions extending in other directions. The projection is neither equal area nor conformal, but it is a compromise of both features.
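A sketch of the spherical Cassini-Soldner forward equations follows, using the common textbook form rather than the ellipsoidal formulas used for actual surveys; the sphere radius is only an assumed example value, and this is not the ERDAS IMAGINE routine.

```python
import math

def cassini_forward(lon, lat, lon0, lat0, radius=6370997.0):
    """Spherical Cassini-Soldner forward projection (sketch).
    lon0/lat0 are the central meridian and latitude of origin;
    degrees in, meters out."""
    lam, phi = math.radians(lon), math.radians(lat)
    lam0, phi0 = math.radians(lon0), math.radians(lat0)
    b = math.cos(phi) * math.sin(lam - lam0)
    x = radius * math.asin(b)
    y = radius * (math.atan2(math.tan(phi), math.cos(lam - lam0)) - phi0)
    return x, y
```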

Construction Cylinder

Property Compromise

Meridians The central meridian, each meridian 90˚ from the central meridian, and the equator are straight lines. Other meridians are complex curves.

Parallels Parallels are complex curves.

Graticule spacing Complex curves for all meridians and parallels, except for the equator, the central meridian, and each meridian 90˚ away from the central meridian, all of which are straight.

Linear scale

Scale is true along the central meridian, and along lines perpendicular to the central meridian. Scale is constant but not true along lines parallel to the central meridian on the spherical form, and nearly so for the ellipsoid.

Uses Used for topographic mapping, formerly in England and currently in a few other countries, such as Denmark, Germany, and Malaysia.


The Cassini-Soldner projection was adopted by the Ordnance Survey for the official survey of Great Britain during the second half of the 19th century. A system equivalent to the oblique Cassini-Soldner projection was used in early coordinate transformations for ERTS (now Landsat) satellite imagery, but it was changed to Oblique Mercator (Hotine) in 1978 and to the Space Oblique Mercator in 1982.

Prompts

The following prompts display in the Projection Chooser if Cassini-Soldner is selected.

Projection Name

Spheroid Type

Datum Name

Laborde Oblique Mercator

In 1928, Laborde combined a conformal sphere with a complex-algebra transformation of the Oblique Mercator projection for the topographic mapping of Madagascar. This variation is now known as the Laborde Oblique Mercator. The central line is a great circle arc.

See "Oblique Mercator (Hotine)" on page 548 for more information.

Prompts

The following prompts display in the Projection Chooser if Laborde Oblique Mercator is selected.

Projection Name

Spheroid Type

Datum Name


Modified Polyconic

Summary

The Modified Polyconic projection was devised by Lallemand of France, and in 1909 it was adopted by the International Map Committee (IMC) in London as the basis for the 1:1,000,000-scale International Map of the World (IMW) series.

The projection differs from the ordinary Polyconic in two principal features: all meridians are straight, and there are two meridians that are made true to scale. Adjacent sheets fit together exactly not only north to south, but also east to west. When sheets are mosaicked in all directions, however, a small gap remains between each diagonally adjacent sheet and one of its neighbors.

In 1962, a U.N. conference on the IMW adopted the Lambert Conformal Conic and the Polar Stereographic projections to replace the Modified Polyconic.

See "Polyconic" on page 557 for more information.

Prompts

The following prompts display in the Projection Chooser if Modified Polyconic is selected.

Projection Name

Spheroid Type

Datum Name

Construction Cone

Property Compromise

Meridians All meridians are straight.

Parallels Parallels are circular arcs. The top and bottom parallels of each sheet are nonconcentric circular arcs.

Graticule spacing

The top and bottom parallels of each sheet are nonconcentric circular arcs. The two parallels are spaced from each other according to the true scale along the central meridian, which is slightly reduced.

Linear scale Scale is true along each parallel and along two meridians, but no parallel is "standard."

Uses Used for the International Map of the World series until 1962.


Modified Stereographic

Summary

The meridians and parallels of the Modified Stereographic projection are generally curved, and there is usually no symmetry about any point or line. There are limitations to these transformations. Most of them can only be used within a limited range. As the distance from the projection center increases, the meridians, parallels, and shorelines begin to exhibit loops, overlapping, and other undesirable curves. A world map using the GS50 (50-State) projection is almost illegible with meridians and parallels intertwined like wild vines.

Prompts

The following prompts display in the Projection Chooser if Modified Stereographic is selected.

Projection Name

Spheroid Type

Datum Name

Construction Plane

Property Conformal

Meridians All meridians are normally complex curves, although some may be straight under certain conditions.

Parallels All parallels are complex curves, although some may be straight under certain conditions.

Graticule spacing The graticule is normally not symmetrical about any axis or point.

Linear scale Scale is true along irregular lines, but the map is usually designed to minimize scale variation throughout a selected region.

Uses Used for maps of continents in the Eastern Hemisphere, for the Pacific Ocean, and for maps of Alaska and the 50 United States.


Mollweide Equal Area

Summary

The second oldest pseudo-cylindrical projection that is still in use (after the Sinusoidal) was presented by Carl B. Mollweide (1774 - 1825) of Halle, Germany, in 1805. It is an equal area projection of the earth within an ellipse. It has had a profound effect on world map projections in the 20th century, especially as an inspiration for other important projections, such as the Van der Grinten.

The Mollweide is normally used for world maps and occasionally for a very large region, such as the Pacific Ocean. This is because only two points on the Mollweide are completely free of distortion unless the projection is interrupted. These are the points at latitudes 40˚44'12"N and S on the central meridian(s).

The world is shown in an ellipse with the equator, its major axis, twice as long as the central meridian, its minor axis. The meridians 90˚ east and west of the central meridian form a complete circle. All other meridians are elliptical arcs which, with their opposite numbers on the other side of the central meridian, form complete ellipses that meet at the poles.
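The Mollweide forward equations use an auxiliary angle θ that satisfies 2θ + sin 2θ = π sin φ, with x = (2√2/π) R (λ - λ0) cos θ and y = √2 R sin θ. The sketch below solves for θ by Newton iteration; it is illustrative only, with an assumed sphere radius, and is not the ERDAS IMAGINE routine.

```python
import math

def mollweide_forward(lon, lat, lon0=0.0, radius=6370997.0):
    """Spherical Mollweide forward projection (illustrative sketch)."""
    lam, phi = math.radians(lon), math.radians(lat)
    lam0 = math.radians(lon0)
    theta = phi
    for _ in range(100):
        denom = 2.0 + 2.0 * math.cos(2.0 * theta)
        if abs(denom) < 1e-14:                 # effectively at a pole
            theta = math.copysign(math.pi / 2, phi)
            break
        delta = (2.0 * theta + math.sin(2.0 * theta)
                 - math.pi * math.sin(phi)) / denom
        theta -= delta
        if abs(delta) < 1e-12:                 # converged
            break
    x = (2.0 * math.sqrt(2.0) / math.pi) * radius * (lam - lam0) * math.cos(theta)
    y = math.sqrt(2.0) * radius * math.sin(theta)
    return x, y
```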

Prompts

The following prompts display in the Projection Chooser if Mollweide Equal Area is selected.

Projection Name

Spheroid Type

Datum Name

Construction Pseudo-cylinder

Property Equal area

Meridians All of the meridians are ellipses. The central meridian is a straight line and the 90˚ meridians are circular arcs (Pearson 1990).

Parallels The equator and parallels are straight lines perpendicular to the central meridian, but they are not equally spaced.

Graticule spacing

Linear graticules include the central meridian and the equator (ESRI 1992). Meridians are equally spaced along the equator and along all other parallels. The parallels are straight parallel lines, but they are not equally spaced. The poles are points.

Linear scale Scale is true along latitudes 40˚44'N and S. Distortion increases with distance from these lines and becomes severe at the edges of the projection (ESRI 1992).

Uses Often used for world maps (Pearson 1990). Suitable for thematic or distribution mapping of the entire world, frequently in interrupted form (ESRI 1992).


Rectified Skew Orthomorphic

Martin Hotine (1898 - 1968) called the Oblique Mercator projection the Rectified Skew Orthomorphic projection.

See "Oblique Mercator (Hotine)" on page 548 for more information.

Prompts

The following prompts display in the Projection Chooser if Rectified Skew Orthomorphic is selected.

Projection Name

Spheroid Type

Datum Name

Robinson Pseudocylindrical

Summary

The Robinson Pseudocylindrical projection provides a means of showing the entire earth in an uninterrupted form. The continents appear as units and are in relatively correct size and location. Poles are represented as lines.

Meridians are equally spaced and resemble elliptical arcs, concave toward the central meridian. The central meridian is a straight line 0.51 times the length of the equator. Parallels are equally spaced straight lines between 38˚N and S, and then the spacing decreases beyond these limits. The poles are 0.53 times the length of the equator. The projection is based upon tabular coordinates instead of mathematical formulas (ESRI 1992).

Construction Pseudo-cylinder

Property Compromise

Meridians Meridians are elliptical arcs, equally spaced, and concave toward the central meridian.

Parallels Parallels are straight lines.

Graticule spacing Parallels are straight lines and are parallel. The individual parallels are evenly divided by the meridians (Pearson 1990).

Linear scale Generally, scale is made true along latitudes 38˚N and S. Scale is constant along any given latitude, and for the latitude of opposite sign (ESRI 1992).

Uses

Developed for use in general and thematic world maps. Used by Rand McNally since the 1960s and by the National Geographic Society since 1988 for general and thematic world maps (ESRI 1992).


Prompts

The following prompts display in the Projection Chooser if Robinson Pseudocylindrical is selected.

Projection Name

Spheroid Type

Datum Name

Southern Orientated Gauss Conformal

Southern Orientated Gauss Conformal is another name for the Transverse Mercator projection, after mathematician Friedrich Gauss (1777 - 1855). It is also called the Gauss-Krüger projection.

See "Transverse Mercator" on page 578 for more information.

Prompts

The following prompts display in the Projection Chooser if Southern Orientated Gauss Conformal is selected.

Projection Name

Spheroid Type

Datum Name


Winkel’s Tripel

Summary

Winkel's Tripel was formulated in 1921 by Oswald Winkel of Germany. It is a combined projection that is the arithmetic mean of the Plate Carrée and Aitoff's projection (Maling).

Prompts

The following prompts display in the Projection Chooser if Winkel’s Tripel is selected.

Projection Name

Spheroid Type

Datum Name

Construction Modified azimuthal

Property Neither conformal nor equal area

Meridians The central meridian is straight. Other meridians are curved, are equally spaced along the equator, and are concave toward the central meridian.

Parallels Equidistant spacing of parallels. The equator and the poles are straight. Other parallels are curved and concave toward the nearest pole.

Graticule spacing Symmetry is maintained along the central meridian or the equator.

Linear scale Scale is true along the central meridian and constant along the equator.

Uses Used for world maps.


Glossary

A absorption spectra - the electromagnetic radiation wavelengths that are absorbed by specific materials of interest.

abstract symbol - an annotation symbol that has a geometric shape, such as a circle, square, or triangle. These symbols often represent amounts that vary from place to place, such as population density, yearly rainfall, etc.

a priori - already or previously known.

accuracy assessment - the comparison of a classification to geographical data that is assumed to be true. Usually, the assumed-true data are derived from ground truthing.

accuracy report - in classification accuracy assessment, a list of the percentages of accuracy, computed from the error matrix.

active sensors - the imaging sensors that both emit and receive radiation, such as radar sensors.

ADRG - see ARC Digitized Raster Graphic.

ADRI - see ARC Digital Raster Imagery.

aerial stereopair - two photos taken at adjacent exposure stations.

Airborne Synthetic Aperture Radar (AIRSAR) - an experimental airborne radar sensor developed by the Jet Propulsion Laboratory (JPL), Pasadena, California, under a contract with NASA. AIRSAR data have been available since 1983.

Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) - a sensor developed by JPL (Pasadena, California) under a contract with NASA that produces multispectral data with 224 narrow bands. These bands are 10 nm wide and cover the spectral range of 0.4 - 2.4 μm. AVIRIS data have been available since 1987.

alarm - a test of a training sample, usually used before the signature statistics are calculated. An alarm highlights an area on the display which is an approximation of the area that would be classified with a signature. The original data can then be compared to the highlighted area.

Almaz - a Russian radar satellite that completed its mission in 1992.

Analog Photogrammetry - optical or mechanical instruments used to reconstruct three-dimensional geometry from two overlapping photographs.

Analytical Photogrammetry - the computer replaces some expensive optical and mechanical components by substituting analog measurement and calculation with mathematical computation.


ancillary data - the data, other than remotely sensed data, that are used to aid in the classification process.

annotation - the explanatory material accompanying an image or map. In ERDAS IMAGINE, annotation consists of text, lines, polygons, ellipses, rectangles, legends, scale bars, grid lines, tick marks, neatlines, and symbols which denote geographical features.

annotation layer - a set of annotation elements that is drawn in a Viewer or Map Composer window and stored in a file (.ovr extension).

arc - see line.

ARC system (Equal Arc-Second Raster Chart/Map) - a system that provides a rectangular coordinate and projection system at any scale for the earth’s ellipsoid, based on the World Geodetic System 1984 (WGS 84).

ARC Digital Raster Imagery (ADRI) - Defense Mapping Agency (DMA) data that consist of SPOT panchromatic, SPOT multispectral, or Landsat TM satellite imagery transformed into the ARC system and accompanied by ASCII encoded support files. These data are available only to Department of Defense contractors.

ARC Digitized Raster Graphic (ADRG) - data from the Defense Mapping Agency (DMA) that consist of digital copies of DMA hardcopy graphics transformed into the ARC system and accompanied by ASCII encoded support files. These data are primarily used for military purposes by defense contractors.

ARC GENERATE data - vector data created with the ARC/INFO UNGENERATE command.

arc/second - a unit of measure that can be applied to data in the Lat/Lon coordinate system. Each pixel represents the distance covered by one second of latitude or longitude. For example, in “3 arc/second” data, each pixel represents an area three seconds latitude by three seconds longitude.

area - a measurement of a surface.

area based matching - an image matching technique that determines the correspondence between two image areas according to the similarity of their gray level values.

area of interest (AOI) - a point, line, or polygon that is selected as a training sample or as the image area to be used in an operation. AOIs can be stored in separate .aoi files.

aspect - the orientation, or the direction that a surface faces, with respect to the directions of the compass: north, south, east, west.

aspect image - a thematic raster image which shows the prevailing direction that each pixel faces.

aspect map - a map that is color-coded according to the prevailing direction of the slope at each pixel.

attribute - the tabular information associated with a raster or vector layer.


average - the statistical mean; the sum of a set of values divided by the number of values in the set.

AVHRR - Advanced Very High Resolution Radiometer data. Small-scale imagery produced by an NOAA polar orbiting satellite. It has a spatial resolution of 1.1 × 1.1 km or 4 × 4 km.

azimuth - an angle measured clockwise from a meridian, going north to east.

azimuthal projection - a map projection that is created from projecting the surface of the earth to the surface of a plane.

B band - a set of data file values for a specific portion of the electromagnetic spectrum of reflected light or emitted heat (red, green, blue, near-infrared, infrared, thermal, etc.) or some other user-defined information created by combining or enhancing the original bands, or creating new bands from other sources. Sometimes called “channel.”

banding - see striping.

base map - a map portraying background reference information onto which other information is placed. Base maps usually show the location and extent of natural surface features and permanent man-made features.

batch file - a file that is created in the batch mode of ERDAS IMAGINE. All steps are recorded for a later run. This file can be edited.

batch mode - a mode of operating ERDAS IMAGINE in which steps are recorded for later use.

bathymetric map - a map portraying the shape of a water body or reservoir using isobaths (depth contours).

Bayesian - a variation of the maximum likelihood classifier, based on the Bayes Law of probability. The Bayesian classifier allows the application of a priori weighting factors, representing the probabilities that pixels will be assigned to each class.

BIL - band interleaved by line. A form of data storage in which each record in the file contains a scan line (row) of data for one band. All bands of data for a given line are stored consecutively within the file.

bilinear interpolation - a resampling method that uses the data file values of four pixels in a 2 by 2 window to calculate an output data file value by computing a weighted average of the input data file values with a bilinear function.
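
The following is a minimal Python sketch of this weighted average for one output pixel, assuming the retransformed coordinate (x, y) falls at least one pixel inside the input band; the array and function names are illustrative, not part of ERDAS IMAGINE.

import numpy as np

def bilinear(band, x, y):
    # upper left pixel of the 2 by 2 window surrounding (x, y)
    c0, r0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - c0, y - r0                                        # fractional offsets within the window
    top = band[r0, c0] * (1 - dx) + band[r0, c0 + 1] * dx
    bottom = band[r0 + 1, c0] * (1 - dx) + band[r0 + 1, c0 + 1] * dx
    return top * (1 - dy) + bottom * dy                            # blend the two rows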

bin function - a mathematical function that establishes the relationship between data file values and rows in a descriptor table.

bins - ordered sets of pixels. Pixels are sorted into a specified number of bins. The pixels are then given new values based upon the bins to which they are assigned.

BIP - band interleaved by pixel. A form of data storage in which the values for each band are ordered within a given pixel. The pixels are arranged sequentially on the tape.


bit - a binary digit, meaning a number that can have two possible values 0 and 1, or “off” and “on.” A set of bits, however, can have many more values, depending upon the number of bits used. The number of values that can be expressed by a set of bits is 2 to the power of the number of bits used. For example, the number of values that can be expressed by 3 bits is 8 (2³ = 8).

block of photographs - formed by the combined exposures of a flight. The block consists of a number of parallel strips with a sidelap of 20-30%.

blocked - a method of storing data on 9-track tapes so that there are more logical records in each physical record.

blocking factor - the number of logical records in each physical record. For instance, a record may contain 28,000 bytes, but only 4,000 columns due to a blocking factor of 7.

book map - a map laid out like the pages of a book. Each page fits on the paper used by the printer. There are neatlines and tick marks on all sides of every page.

Boolean - logical; based upon, or reducible to a true or false condition.

border - on a map, a line that usually encloses the entire map, not just the image area as does a neatline.

boundary - a neighborhood analysis technique that is used to detect boundaries between thematic classes.

bpi - bits per inch. A measure of data storage density for magnetic tapes.

breakline - an elevation polyline, in which each vertex has its own X, Y, Z value.

brightness value - the quantity of a primary color (red, green, blue) to be output to a pixel on the display device. Also called “intensity value,” “function memory value,” “pixel value,” “display value,” “screen value.”

BSQ - band sequential. A data storage format in which each band is contained in a separate file.

buffer zone - a specific area around a feature that is isolated for or from further analysis. For example, buffer zones are often generated around streams in site assessment studies, so that further analyses will exclude these areas that are often unsuitable for development.

build - the process of constructing the topology of a vector layer by processing points, lines, and polygons. See clean.

bundle - the unit of photogrammetric triangulation after each point measured in an image is connected with the perspective center by a straight light ray. There is one bundle of light rays for each image.

bundle attitude - defined by a spatial rotation matrix consisting of three angles (κ, ω, ϕ).

bundle location - defined by the perspective center, expressed in units of the specified map projection.

byte - 8 bits of data.


C cadastral map - a map showing the boundaries of the subdivisions of land for purposes of describing and recording ownership or taxation.

calibration certificate/report - in aerial photography, the manufacturer of the camera specifies the interior orientation in the form of a certificate or report.

Cartesian - a coordinate system in which data are organized on a grid and points on the grid are referenced by their X,Y coordinates.

cartography - the art and science of creating maps.

categorical data - see thematic data.

CCT - see computer compatible tape.

CD-ROM - a read-only storage device read by a CD-ROM player.

cell - 1. a 1˚ × 1˚ area of coverage. DTED (Digital Terrain Elevation Data) are distributed in cells. 2. a pixel; grid cell.

cell size - the area that one pixel represents, measured in map units. For example, one cell in the image may represent an area 30 feet by 30 feet on the ground. Sometimes called “pixel size.”

center of the scene - the center pixel of the center scan line; the center of a satellite image.

character - a number, letter, or punctuation symbol. One character usually occupies one byte when stored on a computer.

check point - additional ground points used to independently verify the degree of accuracy of a triangulation.

check point analysis - the act of using check points to independently verify the degree of accuracy of a triangulation.

chi-square distribution - a non-symmetrical data distribution, whose curve is characterized by a “tail” that represents the highest and least frequent data values. In classification thresholding, the “tail” represents the pixels that are most likely to be classified incorrectly.

choropleth map - a map portraying properties of a surface using area symbols. Area symbols usually represent categorized classes of the mapped phenomenon.

city-block distance - the physical or spectral distance that is measured as the sum of distances that are perpendicular to one another.
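
A minimal Python sketch of this measure between two measurement vectors follows; the function and variable names are illustrative only.

import numpy as np

def city_block_distance(pixel_a, pixel_b):
    # sum of the absolute differences along each band (perpendicular axes)
    return np.sum(np.abs(np.asarray(pixel_a, dtype=float) - np.asarray(pixel_b, dtype=float)))

# For example, for the 3-band vectors (10, 40, 35) and (12, 38, 30),
# the city-block distance is 2 + 2 + 5 = 9.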

class - a set of pixels in a GIS file which represent areas that share some condition. Classes are usually formed through classification of a continuous raster layer.

class value - a data file value of a thematic file which identifies a pixel as belonging to a particular class.

classification - the process of assigning the pixels of a continuous raster image to discrete categories.


classification accuracy table - for accuracy assessment, a list of known values of reference pixels, supported by some ground truth or other a priori knowledge of the true class, and a list of the classified values of the same pixels, from a classified file to be tested.

classification scheme - (or classification system) a set of target classes. The purpose of such a scheme is to provide a framework for organizing and categorizing the information that can be extracted from the data.

clean - the process of constructing the topology of a vector layer by processing lines and polygons. See build.

client - on a computer on a network, a program that accesses a server utility that is on another machine on the network.

clump - a contiguous group of pixels in one class. Also called raster region.

clustering - unsupervised training; the process of generating signatures based on the natural groupings of pixels in image data when they are plotted in spectral space.

clusters - the natural groupings of pixels when plotted in spectral space.

coefficient - one number in a matrix, or a constant in a polynomial expression.

coefficient of variation - a scene-derived parameter that is used as input to the Sigma and Local Statistics radar enhancement filters.

collinearity - a non-linear mathematical model that photogrammetric triangulation is based upon. Collinearity equations describe the relationship among image coordinates, ground coordinates, and orientation parameters.

colorcell - the location where the data file values are stored in the colormap. The red, green, and blue values assigned to the colorcell control the brightness of the color guns for the displayed pixel.

color guns - on a display device, the red, green, and blue phosphors that are illuminated on the picture tube in varying brightnesses to create different colors. On a color printer, color guns are the devices that apply cyan, yellow, magenta, and sometimes black ink to paper.

colormap - an ordered set of colorcells, which is used to perform a function on a set of input values.

color printer - a printer that prints color or black-and-white imagery, as well as text. ERDAS IMAGINE supports several color printers.

color scheme - a set of lookup tables that assigns red, green, and blue brightness values to classes when a layer is displayed.

composite map - a map on which the combined information from different thematic maps is presented.

compromise projection - a map projection that compromises among two or more of the map projection properties of conformality, equivalence, equidistance, and true direction.


computer compatible tape (CCT) - a magnetic tape used to transfer and store digital data.

confidence level - the percentage of pixels that are believed to be misclassified.

conformal - a map or map projection that has the property of conformality, or true shape.

conformality - the property of a map projection to represent true shape, wherein a projection preserves the shape of any small geographical area. This is accomplished by exact transformation of angles around points.

conic projection - a map projection that is created from projecting the surface of the earth to the surface of a cone.

connectivity radius - the distance (in pixels) that pixels can be from one another to be considered contiguous. The connectivity radius is used in connectivity analysis.

contiguity analysis - a study of the ways in which pixels of a class are grouped together spatially. Groups of contiguous pixels in the same class, called raster regions, or “clumps,” can be identified by their sizes and manipulated.

contingency matrix - a matrix which contains the number and percentages of pixels that were classified as expected.

continuous - a term used to describe raster data layers that contain quantitative and related values. See continuous data.

continuous data - a type of raster data that are quantitative (measuring a characteristic) and have related, continuous values, such as remotely sensed images (e.g., Landsat, SPOT, etc.).

contour map - a map in which a series of lines connects points of equal elevation.

contrast stretch - the process of reassigning a range of values to another range, usually according to a linear function. Contrast stretching is often used in displaying continuous raster layers, since the range of data file values is usually much narrower than the range of brightness values on the display device.
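
A minimal Python sketch of a linear stretch from the data file value range to the 0-255 brightness range of a display device follows; the names are illustrative only.

import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    in_min, in_max = float(band.min()), float(band.max())
    scaled = (band.astype(float) - in_min) / (in_max - in_min)        # rescale to 0.0 - 1.0
    return (scaled * (out_max - out_min) + out_min).astype(np.uint8)  # map to brightness values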

control point - a point with known coordinates in the ground coordinate system, expressed in the units of the specified map projection.

convolution filtering - the process of averaging small sets of pixels across an image. Used to change the spatial frequency characteristics of an image.

convolution kernel - a matrix of numbers that is used to average the value of each pixel with the values of surrounding pixels in a particular way. The numbers in the matrix serve to weight this average toward particular pixels.
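
A minimal Python sketch of convolution filtering with a 3 × 3 kernel follows; edge pixels are left unchanged for brevity, the normalization rule is a simplification, and the names are illustrative only.

import numpy as np

def convolve_3x3(band, kernel):
    out = band.astype(float).copy()
    divisor = kernel.sum() if kernel.sum() != 0 else 1     # zero-sum kernels are not normalized
    rows, cols = band.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = band[r - 1:r + 2, c - 1:c + 2].astype(float)
            out[r, c] = np.sum(window * kernel) / divisor  # weighted average of the neighborhood
    return out

# A simple low-frequency (averaging) kernel: convolve_3x3(band, np.ones((3, 3)))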

coordinate system - a method for expressing location. In two-dimensional coordinate systems, locations are expressed by a column and row, also called x and y.

correlation threshold - a value used in rectification to determine whether to accept or discard ground control points. The threshold is an absolute value threshold ranging from 0.000 to 1.000.


correlation windows - windows which consist of a local neighborhood of pixels. One example is square neighborhoods (e.g., 3 X 3, 5 X 5, 7 X 7 pixels).

corresponding GCPs - the ground control points that are located in the same geographic location as the selected GCPs, but were selected in different files.

covariance - measures the tendencies of data file values for the same pixel, but in different bands, to vary with each other in relation to the means of their respective bands. These bands must be linear. Covariance is defined as the average product of the differences between the data file values in each band and the mean of each band.
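
A minimal Python sketch of this definition for two bands follows; the array names are illustrative only.

import numpy as np

def band_covariance(band_i, band_j):
    dev_i = band_i.astype(float) - band_i.mean()   # differences from the mean of band i
    dev_j = band_j.astype(float) - band_j.mean()   # differences from the mean of band j
    return np.mean(dev_i * dev_j)                  # average product of the differences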

covariance matrix - a square matrix which contains all of the variances and covariances within the bands in a data file.

credits - on maps, the text that can include the data source and acquisition date, accuracy information, and other details that are required or helpful to readers.

crisp filter - a filter used to sharpen the overall scene luminance without distorting the interband variance content of the image.

cross correlation - a calculation which computes the correlation coefficient of the gray values between the template window and the search window.

cubic convolution - a method of resampling which uses the data file values of sixteen pixels in a 4 by 4 window to calculate an output data file value with a cubic function.

current directory - also called “default directory,” it is the directory that you are “in.” It is the default path.

cylindrical projection - a map projection that is created from projecting the surface of the earth to the surface of a cylinder.

D dangling node - a line that does not close to form a polygon, or that extends past an intersection.

data - 1. in the context of remote sensing, a computer file containing numbers which represent a remotely sensed image, and can be processed to display that image. 2. a collection of numbers, strings, or facts that require some processing before they are meaningful.

database (one word) - a relational data structure usually used to store tabular information. Examples of popular databases include SYBASE, dBase, Oracle, INFO, etc.

data base (two words) - in ERDAS IMAGINE, a set of continuous and thematic raster layers, vector layers, attribute information, and other kinds of data which represent one area of interest. A data base is usually part of a geographic information system.

data file - a computer file that contains numbers which represent an image.


data file value - each number in an image file. Also called “file value,” “image file value,” “digital number (DN),” “brightness value,” “pixel.”

datum - see reference plane.

decision rule - an equation or algorithm that is used to classify image data after signatures have been created. The decision rule is used to process the data file values based upon the signature statistics.

decorrelation stretch - a technique used to stretch the principal components of an image, not the original image.

default directory - see current directory.

degrees of freedom - when chi-square statistics are used in thresholding, the number of bands in the classified file.

DEM - see digital elevation model.

densify - the process of adding vertices to selected lines at a user-specified tolerance.

density - 1. the number of bits per inch on a magnetic tape. 9-track tapes are commonly stored at 1600 and 6250 bpi. 2. a neighborhood analysis technique that outputs the number of pixels that have the same value as the analyzed pixel in a user-specified window.

derivative map - a map created by altering, combining, or analyzing other maps.

descriptor - see attribute.

desktop scanners - general purpose devices which lack the image detail and geometric accuracy of photogrammetric quality units, but are much less expensive.

detector - the device in a sensor system that records electromagnetic radiation.

developable surface - a flat surface, or a surface that can be easily flattened by being cut and unrolled, such as the surface of a cone or a cylinder.

digital elevation model (DEM) - continuous raster layers in which data file values represent elevation. DEMs are available from the USGS at 1:24,000 and 1:250,000 scale, and can be produced with terrain analysis programs and IMAGINE OrthoMAX.

digital orthophoto - an aerial photo or satellite scene which has been transformed by the orthogonal projection, yielding a map that is free of most significant geometric distortions.

Digital Photogrammetry - photogrammetry as applied to digital images that are stored and processed on a computer. Digital images can be scanned from photographs or can be directly captured by digital cameras.

digital terrain model (DTM) - a discrete expression of topography in a data array, consisting of a group of planimetric coordinates (X,Y) and the elevations of the ground points and breaklines.

digitized raster graphic (DRG) - a digital replica of Defense Mapping Agency hardcopy graphic products. See also ADRG.


digitizing - any process that converts non-digital data into numeric data, usually to be stored on a computer. In ERDAS IMAGINE, digitizing refers to the creation of vector data from hardcopy materials or raster images that are traced using a digitizer keypad on a digitizing tablet, or a mouse on a display device.

dimensionality - a term referring to the number of bands being classified. For example, a data file with 3 bands is said to be 3-dimensional, since 3-dimensional spectral space is plotted to analyze the data.

directory - an area of a computer disk that is designated to hold a set of files. Usually, directories are arranged in a tree structure, in which directories can also contain many levels of subdirectories.

displacement - the degree of geometric distortion for a point that is not on the nadir line.

display device - the computer hardware consisting of a memory board and a monitor. It displays a visible image from a data file or from some user operation.

display driver - the ERDAS IMAGINE utility that interfaces between the computer running IMAGINE software and the display device.

display memory - the subset of image memory that is actually viewed on the display screen.

display pixel - one grid location on a display device or printout.

display resolution - the number of pixels that can be viewed on the display device monitor, horizontally and vertically (i.e., 512 × 512 or 1024 × 1024).

distance - see Euclidean distance, spectral distance.

distance image file - a one-band, 16-bit file that can be created in the classification process, in which each data file value represents the result of the distance equation used in the program. Distance image files generally have a chi-square distribution.

distribution - the set of frequencies with which an event occurs, or the set of probabilities that a variable will have a particular value.

distribution rectangles (DRs) - the geographic data sets into which ADRG data are divided.

dithering - a display technique that is used in ERDAS IMAGINE to allow a smaller set of colors to appear to be a larger set of colors.

divergence - a statistical measure of distance between two or more signatures. Divergence can be calculated for any combination of bands that will be used in the classification; bands that diminish the results of the classification can be ruled out.

diversity - a neighborhood analysis technique that outputs the number of different values within a user-specified window.

DLG - Digital Line Graph. A vector data format created by the USGS.


dot patterns - the matrices of dots used to represent brightness values on hardcopy maps and images.

double precision - a measure of accuracy in which 15 significant digits can be stored for a coordinate.

downsampling - the skipping of pixels during the display or processing of the scanning process.

DTM - see digital terrain model.

DXF - Drawing Exchange Format. A format for storing vector data in ASCII files, used by AutoCAD software.

dynamic range - see radiometric resolution.

E edge detector - a convolution kernel, usually a zero-sum kernel, which smooths out or zeros out areas of low spatial frequency and creates a sharp contrast where spatial frequency is high, which is at the edges between homogeneous groups of pixels.

edge enhancer - a high-frequency convolution kernel that brings out the edges between homogeneous groups of pixels. Unlike an edge detector, it only highlights edges; it does not necessarily eliminate other features.

eigenvalue - the length of a principal component which measures the variance of a principal component band. See also principal components.

eigenvector - the direction of a principal component represented as coefficients in an eigenvector matrix which is computed from the eigenvalues. See also principal components.

electromagnetic radiation - the energy transmitted through space in the form of electric and magnetic waves.

electromagnetic spectrum - the range of electromagnetic radiation extending from cosmic waves to radio waves, characterized by frequency or wavelength.

element - an entity of vector data, such as a point, a line, or a polygon.

elevation data - see terrain data, DEM.

ellipse - a two-dimensional figure that is formed in a two-dimensional scatterplot when both bands plotted have normal distributions. The ellipse is defined by the standard deviations of the input bands. Ellipse plots are often used to test signatures before classification.

end-of-file mark (EOF) - usually a half-inch strip of blank tape which signifies the end of a file that is stored on magnetic tape.

end-of-volume mark (EOV) - usually three EOFs marking the end of a tape.

enhancement - the process of making an image more interpretable for a particular application. Enhancement can make important features of raw, remotely sensed data more interpretable to the human eye.


entity - an AutoCAD drawing element that can be placed in an AutoCAD drawing with a single command.

EOSAT - Earth Observation Satellite Company. A private company that directs the Landsat satellites and distributes Landsat imagery.

ephemeris data - contained in the header of the data file of a SPOT scene, provides information about the recording of the data and the satellite orbit.

epipolar stereopair - a stereopair without y-parallax.

equal area - see equivalence.

equatorial aspect - a map projection that is centered around the equator or a point on the equator.

equidistance - the property of a map projection to represent true distances from an identified point.

equivalence - the property of a map projection to represent all areas in true proportion to one another.

error matrix - in classification accuracy assessment, a square matrix showing the number of reference pixels that have the same values as the actual classified points.

ERS-1 - the European Space Agency’s (ESA) radar satellite launched in July 1991, which currently provides the most comprehensive radar data available. ERS-2 is scheduled for launch in 1994.

ETAK MapBase - an ASCII digital street centerline map product available from ETAK, Inc. (Menlo Park, California).

Euclidean distance - the distance, either in physical or abstract (e.g., spectral) space, that is computed based on the equation of a straight line.
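
A minimal Python sketch of this straight-line (spectral) distance between two measurement vectors follows; the names are illustrative only.

import numpy as np

def euclidean_distance(pixel_a, pixel_b):
    diff = np.asarray(pixel_a, dtype=float) - np.asarray(pixel_b, dtype=float)
    return np.sqrt(np.sum(diff ** 2))   # straight-line distance in n-dimensional space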

exposure station - during image acquisition, each point in the flight path at which the camera exposes the film.

extend - the process of moving selected dangling lines up a specified distance so that they intersect existing lines.

extension - the three letters after the period in a file name that usually identify the type of file.

extent - 1. the image area to be displayed in a Viewer. 2. the area of the earth’s surface to be mapped.

exterior orientation - all images of a block of aerial photographs in the ground coordinate system are computed during photogrammetric triangulation, using a limited number of points with known coordinates. The exterior orientation of an image consists of the exposure station and the camera attitude at this moment.

exterior orientation parameters - the perspective center’s ground coordinates in a specified map projection and three rotation angles around the coordinate axes.

extract - selected bands of a complete set of NOAA AVHRR data.


F false color - a color scheme in which features have “expected” colors. For instance, vegetation is green, water is blue, etc. These are not necessarily the true colors of these features.

false easting - an offset between the x-origin of a map projection and the x-origin of a map. Usually used so that no x-coordinates are negative.

false northing - an offset between the y-origin of a map projection and the y-origin of a map. Usually used so that no y-coordinates are negative.

fast format - a type of BSQ format used by EOSAT to store Landsat TM (Thematic Mapper) data.

feature based matching - an image matching technique that determines the correspondence between two image features.

feature collection - the process of identifying, delineating, and labeling various types of natural and man-made phenomena from remotely-sensed images.

feature extraction - the process of studying and locating areas and objects on the ground and deriving useful information from images.

feature space - an abstract space that is defined by spectral units (such as an amount of electromagnetic radiation).

feature space area of interest - a user-selected area of interest (AOI) that is selected from a feature space image.

feature space image - a graph of the data file values of one band of data against the values of another band (often called a scatterplot).

fiducial center - the center of an aerial photo.

fiducials - four or eight reference markers fixed on the frame of an aerial metric camera and visible in each exposure. Fiducials are used to compute the transformation from data file to image coordinates.

field - in an attribute data base, a category of information about each class or feature, such as “Class name” and “Histogram.”

field of view - in perspective views, an angle which defines how far the view will be generated to each side of the line of sight.

file coordinates - the location of a pixel within the file in x,y coordinates. The upper left file coordinate is usually 0,0.

file pixel - the data file value for one data unit in an image file.

file specification or filespec - the complete file name, including the drive and path, if necessary. If a drive or path is not specified, the file is assumed to be in the current drive and directory.

filled - referring to polygons; a filled polygon is solid or has a pattern, but is not transparent. An unfilled polygon is simply a closed vector which outlines the area of the polygon.


filtering - the removal of spatial or spectral features for data enhancement. Convolution filtering is one method of spatial filtering. Some texts may use the terms “filtering” and “spatial filtering” synonymously.

flip - the process of reversing the from-to direction of selected lines or links.

focal length - the orthogonal distance from the perspective center to the image plane.

focal operations - filters which use a moving window to calculate new values for each pixel in the image based on the values of the surrounding pixels.

focal plane - the plane of the film or scanner used in obtaining an aerial photo.

Fourier analysis - an image enhancement technique that was derived from signal processing.

from-node - the first vertex in a line.

full set - all bands of an NOAA AVHRR (Advanced Very High Resolution Radiometer) data set.

function memories - areas of the display device memory that store the lookup tables, which translate image memory values into brightness values.

function symbol - an annotation symbol that represents an activity. For example, on a map of a state park, a symbol of a tent would indicate the location of a camping area.

Fuyo 1 (JERS-1) - the Japanese radar satellite launched in February 1992.

G GAC - see global area coverage.

GCP - see ground control point.

GCP matching - for image to image rectification, a ground control point (GCP) selected in one image is precisely matched to its counterpart in the other image using the spectral characteristics of the data and the transformation matrix.

GCP prediction - the process of picking a ground control point (GCP) in either coordinate system and automatically locating that point in the other coordinate system based on the current transformation parameters.

generalize - the process of weeding vertices from selected lines using a specified tolerance.

geocentric coordinate system - a coordinate system which has its origin at the center of the earth ellipsoid. The ZG-axis equals the rotational axis of the earth, and the XG-axis passes through the Greenwich meridian. The YG-axis is perpendicular to both the ZG-axis and XG-axis, so as to create a three-dimensional coordinate system that follows the right hand rule.

geocoded data - an image(s) that has been rectified to a particular map projection and cell size and has had radiometric corrections applied.


geographic information system (GIS) - a unique system designed for a particular application that stores, enhances, combines, and analyzes layers of geographic data to produce interpretable information. A GIS may include computer images, hardcopy maps, statistical data, and any other data needed for a study, as well as computer software and human knowledge. GISs are used for solving complex geographic planning and management problems.

geographical coordinates - a coordinate system for expressing locations on the surface of the earth. Geographical coordinates are defined by latitude and by longitude (Lat/Lon), with respect to an origin located at the intersection of the equator and the prime (Greenwich) meridian.

geometric correction - the correction of errors of skew, rotation, and perspective in raw, remotely sensed data.

georeferencing - the process of assigning map coordinates to image data and resampling the pixels of the image to conform to the map projection grid.

gigabyte (Gb) - about one billion bytes.

GIS - see geographic information system.

GIS file - a single-band ERDAS Ver. 7.X data file in which pixels are divided into discrete categories.

global area coverage (GAC) - a type of NOAA AVHRR (Advanced Very High Resolution Radiometer) data with a spatial resolution of 4 × 4 km.

global operations - functions which calculate a single value for an entire area, rather than for each pixel like focal functions.

.gmd file - the ERDAS IMAGINE graphical model file created with Model Maker (Spatial Modeler).

gnomonic - an azimuthal projection obtained from a perspective at the center of the earth.

graphical modeling - a technique used to combine data layers in an unlimited number of ways using icons to represent input data, functions, and output data. For example, an output layer created from modeling can represent the desired combination of themes from many input layers.

graphical model - a model created with Model Maker (Spatial Modeler). Graphical models are put together like flow charts and are stored in .gmd files.

graticule - the network of parallels of latitude and meridians of longitude applied to the global surface and projected onto maps.

gray scale - a “color” scheme with a gradation of gray tones ranging from black to white.

great circle - an arc of a circle for which the center is the center of the earth. A great circle is the shortest possible surface route between two points on the earth.

grid cell - a pixel.


grid lines - intersecting lines that indicate regular intervals of distance based on a coordinate system. Sometimes called a graticule.

ground control point (GCP) - specific pixel in image data for which the output map coordinates (or other output coordinates) are known. GCPs are used for computing a transformation matrix, for use in rectifying an image.

ground coordinate system - a three-dimensional coordinate system which utilizes a known map projection. Ground coordinates (X,Y,Z) are usually expressed in feet or meters.

ground truth - data that are taken from the actual area being studied.

ground truthing - the acquisition of knowledge about the study area from field work, analysis of aerial photography, personal experience, etc. Ground truth data are considered to be the most accurate (true) data available about the area of study.

H halftoning - the process of using dots of varying size or arrangements (rather than varying intensity) to form varying degrees of a color.

hardcopy output - any output of digital computer (softcopy) data to paper.

header file - a file usually found before the actual image data on tapes or CD-ROMs that contains information about the data, such as number of bands, upper left coordinates, map projection, etc.

header record - the first part of an image file that contains general information about the data in the file, such as the number of columns and rows, number of bands, data base coordinates of the upper left corner, and the pixel depth. The contents of header records vary depending on the type of data.

high-frequency kernel - a convolution kernel that increases the spatial frequency of an image. Also called “high-pass kernel.”

High Resolution Picture Transmission (HRPT) - the direct transmission of AVHRR data in real-time with the same resolution as Local Area Coverage (LAC).

High Resolution Visible (HRV) sensor - a pushbroom scanner on a SPOT satellite that takes a sequence of line images while the satellite circles the earth.

histogram - a graph of data distribution, or a chart of the number of pixels that have each possible data file value. For a single band of data, the horizontal axis of a histogram graph is the range of all possible data file values. The vertical axis is the number of pixels that have each data value.

histogram equalization - the process of redistributing pixel values so that there are approximately the same number of pixels with each value within a range. The result is a nearly flat histogram.
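
A minimal Python sketch of this redistribution for an 8-bit band follows, built from the cumulative distribution of the data file values; the names are illustrative only.

import numpy as np

def equalize(band, levels=256):
    hist, _ = np.histogram(band, bins=levels, range=(0, levels))
    cdf = hist.cumsum() / band.size                       # cumulative distribution, 0.0 - 1.0
    lookup = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lookup[band]                                   # apply the lookup table to every pixel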

histogram matching - the process of determining a lookup table that will convert the histogram of one band of an image or one color gun to resemble another histogram.


horizontal control - the horizontal distribution of control points in aerial triangulation (x,y - planimetry).

host workstation - a CPU, keyboard, mouse, and a display.

hue - a component of IHS (intensity, hue, saturation) which is representative of the color or dominant wavelength of the pixel. It varies from 0 to 360. Blue = 0 (and 360), magenta = 60, red = 120, yellow = 180, green = 240, and cyan = 300.

hyperspectral sensors - the imaging sensors that record multiple bands of data, such as the AVIRIS with 224 bands.

I IGES - Initial Graphics Exchange Standard files are often used to transfer CAD data between systems. IGES Version 3.0 format, published by the U.S. Department of Commerce, is in uncompressed ASCII format only.

IHS - intensity, hue, saturation. An alternate color space from RGB (red, green, blue). This system is advantageous in that it presents colors more nearly as perceived by the human eye. See intensity, hue, and saturation.

image - a picture or representation of an object or scene on paper or a display screen. Remotely sensed images are digital representations of the earth.

image algebra - any type of algebraic function that is applied to the data file values in one or more bands.

image center - the center of the aerial photo or satellite scene.

image coordinate system - the location of each point in the image is expressed for purposes of photogrammetric triangulation.

image data - digital representations of the earth that can be used in computer image processing and geographic information system (GIS) analyses.

image file - a file containing raster image data. Image files in ERDAS IMAGINE have the extension .img. Image files from the ERDAS Ver. 7.X series software have the extension .LAN or .GIS.

image matching - the automatic acquisition of corresponding image points on the overlapping area of two images.

image memory - the portion of the display device memory that stores data file values (which may be transformed or processed by the software that accesses the display device).

image pair - see stereopair.

image processing - the manipulation of digital image data, including (but not limited to) enhancement, classification, and rectification operations.

image pyramid - a data structure consisting of the same image represented several times, at a decreasing spatial resolution each time. Each level of the pyramid contains the image at a particular resolution.


image scale - expresses the average ratio between a distance in the image and the same distance on the ground. It is computed as focal length divided by the flying height above the mean ground elevation.
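
A short worked example of this ratio follows; the focal length and flying height are illustrative values, not taken from a particular camera.

# image scale = focal length / flying height above mean ground elevation
focal_length = 0.152      # meters (a 152 mm metric camera)
flying_height = 3040.0    # meters above mean ground elevation
scale = focal_length / flying_height
print(scale)              # 5e-05, i.e. an image scale of 1:20,000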

image space coordinate system - identical to image coordinates, except that it adds a third axis (z) which is used to describe positions inside the camera. The units are usually in millimeters or microns.

.img file - an ERDAS IMAGINE file that stores continuous or thematic raster layers.

inclination - the angle between a vertical on the ground at the center of the scene and a light ray from the exposure station, which defines the degree of off-nadir viewing when the scene was recorded.

indexing - a function applied to thematic layers that adds the data file values of two or more layers together, creating a new output layer. Weighting factors can be applied to one or more layers to add more importance to those layers in the final sum.

index map - a reference map that outlines the mapped area, identifies all of the component maps for the area if several map sheets are required, and identifies all adjacent map sheets.

indices - the process used to create output images by mathematically combining the DN (digital number) values of different bands.

information - something that is independently meaningful, as opposed to data, which are not independently meaningful.

initialization - a process that insures that all values in a file or in computer memory are equal, until additional information is added or processed to overwrite these values. Usually the initialization value is 0. If initialization is not performed on a data file, there could be random data values in the file.

inset map - a map that is an enlargement of some congested area of a smaller scale map, and that is usually placed on the same sheet with the smaller scale main map.

instantaneous field of view (IFOV) - a measure of the area viewed by a single detector on a scanning system in a given instant in time.

intensity - a component of IHS (intensity, hue, saturation) which is the overall brightness of the scene and varies from 0 (black) to 1 (white).

interior orientation - defines the geometry of an image’s sensor.

intersection - the area or set that is common to two or more input areas or sets.

interval data - a type of data in which thematic class values have a natural sequence, and in which the distances between values are meaningful.

isarithmic map - a map that uses isarithms (lines connecting points of the same value for any of the characteristics used in the representation of surfaces) to represent a statistical surface. (Also called an isometric map.)


ISODATA clustering - Iterative Self-Organizing Data Analysis Technique; a method of clustering that uses spectral distance as in the sequential method, but iteratively classifies the pixels, redefines the criteria for each class, and classifies again, so that the spectral distance patterns in the data gradually emerge.

island - a single line that connects with itself.

isopleth map - a map on which isopleths (lines representing quantities that cannot exist at a point, such as population density) are used to represent some selected quantity.

iterative - a term used to describe a process in which some operation is performed repeatedly.

J JERS-1 (Fuyo 1) - the Japanese radar satellite launched in February 1992.

join - the process of interactively entering the side lot lines when the front and rear lines have already been established.

K Kappa coefficient - a number that expresses the proportionate reduction in error generated by a classification process compared with the error of a completely random classification.
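
A minimal Python sketch of this calculation from an error (contingency) matrix follows; the function name is illustrative only.

import numpy as np

def kappa(error_matrix):
    m = np.asarray(error_matrix, dtype=float)
    total = m.sum()
    observed = np.trace(m) / total                                   # overall agreement
    expected = np.sum(m.sum(axis=0) * m.sum(axis=1)) / total ** 2    # agreement expected by chance
    return (observed - expected) / (1 - expected)                    # proportionate reduction in error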

kernel - see convolution kernel.

L label - in annotation, the text that conveys important information to the reader about map features.

label point - a point within a polygon that defines that polygon.

LAC - see local area coverage.

.LAN files - multiband ERDAS Ver. 7.X image files (the name originally derived from the Landsat satellite). LAN files usually contain raw or enhanced remotely sensed data.

land cover map - a map of the visible ground features of a scene, such as vegetation, bare land, pasture, urban areas, etc.

Landsat - a series of earth-orbiting satellites that gather Multispectral Scanner (MSS) and Thematic Mapper (TM) imagery, operated by EOSAT.

large scale - a description used to represent a map or data file having a large ratio between the area on the map (such as inches or pixels) and the area that is represented (such as feet). In large-scale image data, each pixel represents a small area on the ground, such as SPOT data, with a spatial resolution of 10 or 20 meters.


layer - 1. a band or channel of data. 2. a single band or set of three bands displayed using the red, green, and blue color guns of the ERDAS IMAGINE Viewer. A layer could be a remotely sensed image, an aerial photograph, an annotation layer, a vector layer, an area of interest layer, etc. 3. a component of a GIS data base that contains all of the data for one theme. A layer consists of a thematic .img file and may also include attributes.

least squares correlation - uses the least squares estimation to derive parameters that best fit a search window to a reference window.

least squares regression - the method used to calculate the transformation matrix from the GCPs (ground control points). This method is discussed in statistics textbooks.

legend - the reference that lists the colors, symbols, line patterns, shadings, and other annotation that is used on a map, and their meanings. The legend often includes the map’s title, scale, origin, and other information.

lettering - the manner in which place names and other labels are added to a map, including letter spacing, orientation, and position.

level 1A (SPOT) - an image which corresponds to raw sensor data to which only radiometric corrections have been applied.

level 1B (SPOT) - an image that has been corrected for the earth’s rotation and to make all pixels 10 x 10 on the ground. Pixels are resampled from the level 1A sensor data by cubic polynomials.

level slice - the process of applying a color scheme by equally dividing the input values (image memory values) into a certain number of bins, and applying the same color to all pixels in each bin. Usually, a ROYGBIV (red, orange, yellow, green, blue, indigo, violet) color scheme is used.

line - 1. a vector data element consisting of a line (the set of pixels directly between two points), or an unclosed set of lines. 2. a row of pixels in a data file.

line dropout - a data error that occurs when a detector in a satellite either completely fails to function or becomes temporarily overloaded during a scan. The result is a line, or partial line, of data with incorrect data file values creating a horizontal streak until the detector(s) recovers, if it recovers.

linear - a description of a function that can be graphed as a straight line or a series of lines. Linear equations (transformations) can generally be expressed in the form of the equation of a line or plane. Also called “1st-order.”

linear contrast stretch - an enhancement technique that outputs new values at regular intervals.

linear transformation - a 1st-order rectification. A linear transformation can change location in X and/or Y, scale in X and/or Y, skew in X and/or Y, and rotation.

line of sight - in perspective views, the point(s) and direction from which the viewer is looking into the image.


local area coverage (LAC) - a type of NOAA AVHRR data with a spatial resolution of 1.1 × 1.1 km.

logical record - a series of bytes that form a unit on a 9-track tape. For example, all the data for one line of an image may form a logical record. One or more logical records make up a physical record on a tape.

long wave infrared region (LWIR) - the thermal or far-infrared region of the electromagnetic spectrum.

lookup table (LUT) - an ordered set of numbers which is used to perform a function on a set of input values. To display or print an image, lookup tables translate data file values into brightness values.

low-frequency kernel - a convolution kernel that decreases spatial frequency. Also called “low-pass kernel.”

LUT - see lookup table.

M magnify - the process of displaying one file pixel over a block of display pixels. For example, if the magnification factor is 3, then each file pixel will take up a block of 3 × 3 display pixels. Magnification differs from zooming in that the magnified image is loaded directly to image memory.

Mahalanobis distance - a classification decision rule that is similar to the minimum distance decision rule, except that a covariance matrix is used in the equation.

majority - a neighborhood analysis technique that outputs the most common value of the data file values in a user-specified window.

map - a graphic representation of spatial relationships on the earth or other planets.

map coordinates - a system of expressing locations on the earth’s surface using a particular map projection, such as Universal Transverse Mercator (UTM), State Plane, or Polyconic.

map frame - an annotation element that indicates where an image will be placed in a map composition.

map projection - a method of representing the three-dimensional spherical surface of a planet on a two-dimensional map surface. All map projections involve the transfer of latitude and longitude onto an easily flattened surface.

matrix - a set of numbers arranged in a rectangular array. If a matrix has i rows and j columns, it is said to be an i by j matrix.

matrix analysis - a method of combining two thematic layers in which the output layer contains a separate class for every combination of two input classes.

matrix object - in Model Maker (Spatial Modeler), a set of numbers in a two-dimensional array.

maximum - a neighborhood analysis technique that outputs the greatest value of the data file values in a user-specified window.


maximum likelihood - a classification decision rule based on the probability that a pixel belongs to a particular class. The basic equation assumes that these probabilities are equal for all classes, and that the input bands have normal distributions.

.mdl file - an ERDAS IMAGINE script model created with the Spatial Modeler Language.

mean - 1. the statistical average; the sum of a set of values divided by the number of values in the set. 2. a neighborhood analysis technique that outputs the mean value of the data file values in a user-specified window.

mean vector - an ordered set of means for a set of variables (bands). For a data file, the mean vector is the set of means for all bands in the file.

measurement vector - the set of data file values for one pixel in all bands of a data file.

median - 1. the central value in a set of data such that an equal number of values are greater than and less than the median. 2. a neighborhood analysis technique that outputs the median value of the data file values in a user-specified window.

megabyte (Mb) - about one million bytes.

memory resident - a term referring to the occupation of a part of a computer’s RAM (random access memory), so that a program is available for use without being loaded into memory from disk.

mensuration - the measurement of linear or areal distance.

meridian - a line of longitude, going north and south. See geographical coordinates.

minimum - a neighborhood analysis technique that outputs the least value of the data file values in a user-specified window.

minimum distance - a classification decision rule that calculates the spectral distance between the measurement vector for each candidate pixel and the mean vector for each signature. Also called spectral distance.
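
A minimal Python sketch of this decision rule follows: each candidate pixel is assigned to the class whose signature mean vector is closest in spectral space. The names are illustrative only.

import numpy as np

def classify_minimum_distance(pixel, mean_vectors):
    # pixel: measurement vector; mean_vectors: dict mapping class name to mean vector
    distances = {cls: np.linalg.norm(np.asarray(pixel, dtype=float) - np.asarray(mv, dtype=float))
                 for cls, mv in mean_vectors.items()}
    return min(distances, key=distances.get)      # class with the smallest spectral distance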

minority - a neighborhood analysis technique that outputs the least common value of the data file values in a user-specified window.

mode - the most commonly-occurring value in a set of data. In a histogram, the mode is the peak of the curve.

model - in a GIS, the set of expressions, or steps, that define your criteria and create an output layer.

modeling - the process of creating new layers from combining or operating upon existing layers. Modeling allows the creation of new classes from existing classes and the creation of a small set of images - perhaps even a single image - which, at a glance, contains many types of information about a scene.

modified projection - a map projection that is a modified version of another projection. For example, the Space Oblique Mercator projection is a modification of the Mercator projection.


monochrome image - an image produced from one band or layer, or contained in one color gun of the display device.

morphometric map - a map representing morphological features of the earth’s surface.

mosaicking - the process of piecing together images side by side, to create a larger image.

multispectral classification - the process of sorting pixels into a finite number of individual classes, or categories of data, based on data file values in multiple bands. See also classification.

multispectral imagery - satellite imagery with data recorded in two or more bands.

multispectral scanner (MSS) - Landsat satellite data acquired in 4 bands with a spatial resolution of 57 × 79 meters.

multitemporal - data from two or more different dates.

N nadir - the area on the ground directly beneath a scanner’s detectors.

nadir line - the average of the left and right edge lines of a Landsat image.

nadir point - the center of the nadir line in vertically viewed imagery.

nearest neighbor - a resampling method in which the output data file value is equal tothe input pixel whose coordinates are closest to the retransformed coordinates ofthe output pixel.

neatline - a rectangular border printed around a map. On scaled maps, neatlinesusually have tick marks which indicate intervals of map coordinates or distance.

negative inclination - the sensors are tilted in increments of 0.6° to a maximum of 27° to the east.

neighborhood analysis - any image processing technique that takes surrounding pixels into consideration, such as convolution filtering and scanning.
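As an illustration of the idea only (this is generic NumPy code, not ERDAS IMAGINE functionality; the function name, window size, and edge handling are assumptions), a moving-window statistic of the kind referred to in the mean, median, minimum, minority, rank, standard deviation, and sum entries can be sketched as:

```python
import numpy as np

def neighborhood_stat(data, size=3, stat=np.mean):
    """Apply a moving-window (neighborhood) statistic to a 2D array of data file values."""
    pad = size // 2
    padded = np.pad(data, pad, mode="edge")   # replicate edge pixels (one possible convention)
    out = np.empty(data.shape, dtype=float)
    rows, cols = data.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = stat(padded[r:r + size, c:c + size])
    return out

# Example: 3 x 3 mean and median of a small layer
layer = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(neighborhood_stat(layer, 3, np.mean))
print(neighborhood_stat(layer, 3, np.median))
```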

9-track - computer compatible tapes (CCTs) that hold digital data.

node - the ending points of a line. See from-node and to-node.

nominal data - a type of data in which the classes have no inherent order and are therefore qualitative.

nonlinear - describing a function that cannot be expressed as the graph of a line or in the form of the equation of a line or plane. Nonlinear equations usually contain expressions with exponents. "2nd-order" or higher-order equations and transformations are nonlinear.

nonlinear transformation - a 2nd-order or higher rectification.

non-parametric signature - a signature for classification that is based on polygons or rectangles that are defined in the feature space image for the .img file. There is no statistical basis for a non-parametric signature; it is simply an area in a feature space image.

normal - the state of having a normal distribution.

normal distribution - a symmetrical data distribution that can be expressed in terms of the mean and standard deviation of the data. The normal distribution is the most widely encountered model for probability and is characterized by the "bell curve." Also called "Gaussian distribution."

normalize - a process that makes an image appear as if it were a flat surface. This technique is used to reduce topographic effect.

number maps - maps that output actual data file values or brightness values, allowing the analysis of the values of every pixel in a file or on the display screen.

numeric keypad - the set of numeric and/or mathematical operator keys ("+", "-", etc.) that is usually on the right side of the keyboard.

O

object - in models, an input to or output from a function. See matrix object, raster object, scalar object, table object.

oblique aspect - a map projection that is not oriented around a pole or the equator.

observation - in photogrammetric triangulation, a grouping of the image coordinates for a control point.

off-nadir - any point that is not directly beneath a scanner's detectors, but off to an angle. The SPOT scanner allows off-nadir viewing.

1:24,000 - 1:24,000 scale data, also called "7.5-minute DEM" (Digital Elevation Model), available from USGS. It is usually referenced to the UTM coordinate system and has a spatial resolution of 30 × 30 meters.

1:250,000 - 1:250,000 scale DEM (Digital Elevation Model) data available from USGS. Available only in arc/second format.

opacity - a measure of how opaque, or solid, a color is displayed in a raster layer.

operating system - the most basic means of communicating with the computer. It manages the storage of information in files and directories, input from devices such as the keyboard and mouse, and output to devices such as the monitor.

orbit - a circular, north-south and south-north path that a satellite travels above the earth.

order - the complexity of a function, polynomial expression, or curve. In a polynomial expression, the order is simply the highest exponent used in the polynomial. See also linear, nonlinear.

ordinal data - a type of data that includes discrete lists of classes with an inherent order, such as classes of streams (first order, second order, third order, etc.).

orientation angle - the angle between a perpendicular to the center scan line and the North direction in a satellite scene.

orthographic - an azimuthal projection with an infinite perspective.

orthocorrection - see orthorectification.

orthoimage - see digital orthophoto.

orthomap - an image map product produced from orthoimages, or orthoimage mosaics, that is similar to a standard map in that it usually includes additional information, such as map coordinate grids, scale bars, north arrows, and other marginalia.

orthorectification - a form of rectification that corrects for terrain displacement and can be used if a digital elevation model (DEM) of the study area is available.

outline map - a map showing the limits of a specific set of mapping entities such as counties. Outline maps usually contain a very small number of details over the desired boundaries with their descriptive codes.

overlay - 1. a function that creates a composite file containing either the minimum or the maximum class values of the input files. "Overlay" sometimes refers generically to a combination of layers. 2. the process of displaying a classified file over the original image to inspect the classification.

overlay file - an ERDAS IMAGINE annotation file (.ovr extension).

.ovr file - an ERDAS IMAGINE annotation file.

P

pack - to store data in a way that conserves tape or disk space.

panchromatic imagery - single-band or monochrome satellite imagery.

paneled map - a map designed to be spliced together into a large paper map. Therefore, neatlines and tick marks appear on the outer edges of the large map.

pairwise mode - an operation mode in rectification that allows the registration of one image to an image in another Viewer, a map on a digitizing tablet, or coordinates entered at the keyboard.

parallel - a line of latitude, going east and west.

parallelepiped - 1. a classification decision rule, in which the data file values of the candidate pixel are compared to upper and lower limits. 2. the limits of a parallelepiped classification, especially when graphed as rectangles.

parameter - 1. any variable that determines the outcome of a function or operation. 2. the mean and standard deviation of data, which are sufficient to describe a normal curve.

parametric signature - a signature that is based on statistical parameters (e.g., mean and covariance matrix) of the pixels that are in the training sample or cluster.

passive sensors - solar imaging sensors that can only receive radiation waves and cannot transmit radiation.

path - the drive, directories, and subdirectories that specify the location of a file.

pattern recognition - the science and art of finding meaningful patterns in data, which can be extracted through classification.

perspective center - 1. a point in the image coordinate system defined by the x and y coordinates of the principal point and the focal length of the sensor. 2. after triangulation, a point in the ground coordinate system that defines the sensor's position relative to the ground.

perspective projection - the projection of points by straight lines from a given perspective point to an intersection with the plane of projection.

photogrammetric quality scanners - special devices capable of high image quality and excellent positional accuracy. Use of this type of scanner results in geometric accuracies similar to traditional analog and analytical photogrammetric instruments.

photogrammetry - the "art, science and technology of obtaining reliable information about physical objects and the environment through the process of recording, measuring and interpreting photographic images and patterns of electromagnetic radiant imagery and other phenomena." (ASP, 1980)

physical record - a consecutive series of bytes on a 9-track tape, followed by a gap, or blank space, on the tape.

piecewise linear contrast stretch - a spectral enhancement technique used to enhance a specific portion of data by dividing the lookup table into three sections: low, middle, and high.

pixel - abbreviated from “picture element;” the smallest part of a picture (image).

pixel coordinate system - a coordinate system with its origin in the upper-left corner of the image, the x-axis pointing to the right, the y-axis pointing downward, and the unit in pixels.

pixel depth - the number of bits required to store all of the data file values in a file. For example, data with a pixel depth of 8, or 8-bit data, have 256 values (2^8 = 256), ranging from 0 to 255.

pixel size - the physical dimension of a single light-sensitive element (13 x 13 microns).

planar coordinates - coordinates that are defined by a column and row position on agrid (x,y).

planar projection - see azimuthal projection.

Plane Table Photogrammetry - prior to the invention of the airplane, photographs taken on the ground were used to extract the geometric relationships between objects using the principles of Descriptive Geometry.

planimetric map - a map that correctly represents horizontal distances between objects.

plan symbol - an annotation symbol that is formed after the basic outline of the object it represents. For example, the symbol for a house might be a square, since most houses are rectangular.

point - 1. an element consisting of a single (x,y) coordinate pair. Also called "grid cell." 2. a vertex of an element. Also called "node."

point ID - in rectification, a name given to GCPs in separate files that represent the same geographic location.

point mode - a digitizing mode in which one vertex is generated each time a keypad button is pressed.

polar aspect - a map projection that is centered around a pole.

polygon - a set of closed line segments defining an area.

polynomial - a mathematical expression consisting of variables and coefficients. A coefficient is a constant, which is multiplied by a variable in the expression.

positive inclination - the sensors are tilted in increments of 0.6° to a maximum of 27° to the west.

primary colors - colors from which all other available colors are derived. On a display monitor, the primary colors red, green, and blue are combined to produce all other colors. On a color printer, cyan, yellow, and magenta inks are combined.

principal components - the transects of a scatterplot of two or more bands of data, which represent the widest variance and successively smaller amounts of variance that are not already represented. Principal components are orthogonal (perpendicular) to one another. In principal components analysis, the data are transformed so that the principal components become the axes of the scatterplot of the output data.
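A minimal sketch of the computation, assuming the bands are arranged as the columns of a matrix and using an eigen-decomposition of the band-to-band covariance matrix (a generic illustration, not the ERDAS IMAGINE implementation):

```python
import numpy as np

def principal_components(bands):
    """bands: array of shape (n_pixels, n_bands), one column per band."""
    centered = bands - bands.mean(axis=0)            # subtract each band's mean
    cov = np.cov(centered, rowvar=False)             # band-to-band covariance matrix
    eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh handles the symmetric matrix
    order = np.argsort(eigenvalues)[::-1]            # largest variance first
    components = centered @ eigenvectors[:, order]   # project pixels onto the new axes
    return components, eigenvalues[order]
```

Each output column is one principal component band; the eigenvalues give the variance each component accounts for.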

principal component band - a band of data that is output by principal components analysis. Principal component bands are uncorrelated and non-redundant, since each principal component describes different variance within the original data.

principal components analysis - the process of calculating principal components and outputting principal component bands. It allows redundant data to be compacted into fewer bands; that is, the dimensionality of the data is reduced.

principal point (Xp,Yp) - the point in the image plane onto which the perspective center is projected, located directly beneath the interior orientation.

printer - a device that prints text, full color imagery, and/or graphics. See color printer,text printer.

profile - a row of data file values from a DEM (Digital Elevation Model) or DTED (Digital Terrain Elevation Data) file. The profiles of DEM and DTED run south to north, that is, the first pixel of the record is the southernmost pixel.

profile symbol - an annotation symbol that is formed like the profile of an object. Profile symbols generally represent vertical objects such as trees, windmills, oil wells, etc.

proximity analysis - a technique used to determine which pixels of a thematic layer are located at specified distances from pixels in a class or classes. A new layer is created which is classified by the distance of each pixel from specified classes of the input layer.

pseudo color - a method of displaying an image (usually a thematic layer) that allows the classes to have distinct colors. The class values of the single band file are translated through all three function memories which store a color scheme for the image.

pseudo node - a node at which a single line connects with itself (an island) or where only two lines intersect.

pseudo projection - a map projection that has only some of the characteristics ofanother projection.

pushbroom - a scanner in which all scanning parts are fixed and scanning is accomplished by the forward motion of the scanner, such as the SPOT scanner.

pyramid layers - image layers which are successively reduced by the power of 2 and resampled. Pyramid layers enable large images to be displayed faster.

Q

quadrangle - 1. any of the hardcopy maps distributed by USGS such as the 7.5-minute quadrangle or the 15-minute quadrangle. 2. one quarter of a full Landsat TM scene. Commonly called a "quad."

qualitative map - a map that shows the spatial distribution or location of a kind of nominal data. For example, a map showing corn fields in the United States would be a qualitative map. It would not show how much corn is produced in each location, or production relative to the other areas.

quantitative map - a map that displays the spatial aspects of numerical data. A map showing corn production (volume) in each area would be a quantitative map.

R

radar data - the remotely sensed data that are produced when a radar transmitter emits a beam of micro or millimeter waves, the waves reflect from the surfaces they strike, and the backscattered radiation is detected by the radar system's receiving antenna which is tuned to the frequency of the transmitted waves.

RADARSAT - a Canadian radar satellite scheduled to be launched in 1995.

radiative transfer equations - the mathematical models that attempt to quantify the total atmospheric effect of solar illumination.

radiometric correction - the correction of variations in data that are not caused by the object or scene being scanned, such as scanner malfunction and atmospheric interference.

radiometric enhancement - an enhancement technique that deals with the individual values of pixels in an image.

radiometric resolution - the dynamic range, or number of possible data file values, in each band. This is referred to by the number of bits into which the recorded energy is divided. See pixel depth.

rank - a neighborhood analysis technique that outputs the number of values in a user-specified window that are less than the analyzed value.

raster data - data that are organized in a grid of columns and rows. Raster data usually represent a planar graph or geographical area. Raster data in ERDAS IMAGINE are stored in .img files.

raster object - in Model Maker (Spatial Modeler), a single raster layer or set of layers.

raster region - a contiguous group of pixels in one GIS class. Also called clump.

ratio data - a data type in which thematic class values have the same properties as interval values, except that ratio values have a natural zero or starting point.

Real-Aperture Radar (RAR) - a radar sensor that uses its side-looking, fixed antenna to transmit and receive the radar impulse. For a given position in space, the resolution of the resultant image is a function of the antenna size. The signal is processed independently of subsequent return signals.

recoding - the assignment of new values to one or more classes.

record - 1. the set of all attribute data for one class of feature. 2. the basic storage unit on a 9-track tape.

rectification - the process of making image data conform to a map projection system. In many cases, the image must also be oriented so that the north direction corresponds to the top of the image.

rectified coordinates - the coordinates of a pixel in a file that has been rectified, which are extrapolated from the ground control points. Ideally, the rectified coordinates for the ground control points are exactly equal to the reference coordinates. Since there is often some error tolerated in the rectification, this is not always the case.

reduce - the process of skipping file pixels when displaying an image, so that a larger area can be represented on the display screen. For example, a reduction factor of 3 would cause only the pixel at every third row and column to be displayed, so that each displayed pixel represents a 3 × 3 block of file pixels.

reference coordinates - the coordinates of the map or reference image to which a source (input) image is being registered. Ground control points consist of both input coordinates and reference coordinates for each point.

reference pixels - in classification accuracy assessment, pixels for which the correct GIS class is known from ground truth or other data. The reference pixels can be selected by you, or randomly selected.

reference plane - in a topocentric coordinate system, the tangential plane at the center of the image on the earth ellipsoid, on which the three perpendicular coordinate axes are defined.

reference system - the map coordinate system to which an image is registered.

reference window - the source window on the first image of an image pair, which remains at a constant location. See also correlation windows and search windows.

reflection spectra - the electromagnetic radiation wavelengths that are reflected by specific materials of interest.

registration - the process of making image data conform to another image. A map coordinate system is not necessarily involved.

regular block of photos - a rectangular block in which the number of photos in each strip is the same; this includes a single strip or a single stereopair.

relation based matching - an image matching technique that uses the image features and the relation among the features to automatically recognize the corresponding image structures without any a priori information.

relief map - a map that appears to be or is 3-dimensional.

remote sensing - the measurement or acquisition of data about an object or scene by a satellite or other instrument above or far from the object. Aerial photography, satellite imagery, and radar are all forms of remote sensing.

replicative symbol - an annotation symbol that is designed to look like its real-world counterpart. These symbols are often used to represent trees, railroads, houses, etc.

representative fraction - the ratio or fraction used to denote map scale.

resampling - the process of extrapolating data file values for the pixels in a new grid, when data have been rectified or registered to another image.

rescaling - the process of compressing data from one format to another. In ERDAS IMAGINE this typically means compressing a 16-bit file to an 8-bit file.

reshape - the process of redigitizing a portion of a line.

residuals - in rectification, the distances between the source and retransformed coordinates in one direction. In ERDAS IMAGINE they are shown for each GCP. The X residual is the distance between the source X coordinate and the retransformed X coordinate. The Y residual is the distance between the source Y coordinate and the retransformed Y coordinate.

resolution - a level of precision in data. For specific types of resolution see display resolution, radiometric resolution, spatial resolution, spectral resolution, and temporal resolution.

resolution merging - the process of sharpening a lower-resolution multiband image by merging it with a higher-resolution monochrome image.

retransformed - in the rectification process, a coordinate in the reference (output) coordinate system that has been transformed back into the input coordinate system. The amount of error in the transformation can be determined by computing the difference between the original coordinates and the retransformed coordinates. See RMS error.

RGB - red, green, blue. The primary additive colors which are used on most display hardware to display imagery.

RGB clustering - a clustering method for 24-bit data (three 8-bit bands) which plots pixels in 3-dimensional spectral space and divides that space into sections that are used to define clusters. The output color scheme of an RGB-clustered image resembles that of the input file.

rhumb line - a line of true direction, which crosses meridians at a constant angle.

right hand rule - a convention in three-dimensional coordinate systems (X,Y,Z) which determines the location of the positive Z axis. If you place your right hand fingers on the positive X axis and curl your fingers toward the positive Y axis, the direction your thumb is pointing is the positive Z axis direction.

RMS error - the distance between the input (source) location of a GCP and the retransformed location for the same GCP. RMS error is calculated with a distance equation.
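Written out, and using the X and Y residuals defined under residuals above (notation chosen here for illustration), the distance equation for one GCP is

RMS\;error = \sqrt{XR^{2} + YR^{2}}

where XR and YR are the X and Y residuals for that GCP.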

RMSE (Root Mean Square Error) - used to measure how well a specific calculated solution fits the original data. For each observation of a phenomenon, a variation can be computed between the actual observation and a calculated value. (The method of obtaining a calculated value is application-specific.) Each variation is then squared. The sum of these squared values is divided by the number of observations and then the square root is taken. This is the RMSE value.
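The same procedure as a formula, with x_i the i-th observation, \hat{x}_i the corresponding calculated value, and n the number of observations (illustrative notation):

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \hat{x}_i\right)^{2}}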

roam - the process of moving across a display so that different areas of the image appear on the display screen.

root - the first part of a file name, which usually identifies the file’s specific contents.

ROYGBIV - a color scheme ranging through red, orange, yellow, green, blue, indigo,and violet at regular intervals.

rubber sheeting - the application of a nonlinear rectification (2nd-order or higher).

S

sample - see training sample.

saturation - a component of IHS which represents the purity of color and also varies linearly from 0 to 1.

scale - 1. the ratio of distance on a map as related to the true distance on the ground. 2. cell size. 3. the processing of values through a lookup table.

scale bar - a graphic annotation element that describes map scale. It shows the distance on paper that represents a geographical distance on the map.

scalar object - in Model Maker (Spatial Modeler), a single numeric value.

scaled map - a georeferenced map that is accurately laid-out and referenced to represent distances and locations. A scaled map usually has a legend which includes a scale, such as "1 inch = 1000 feet." The scale is often expressed as a ratio like 1:12,000, where 1 inch on the map equals 12,000 inches on the ground.

scanner - the entire data acquisition system, such as the Landsat Thematic Mapper scanner or the SPOT panchromatic scanner.

scanning - 1. the transfer of analog data, such as photographs, maps, or another viewable image, into a digital (raster) format. 2. a process similar to convolution filtering which uses a kernel for specialized neighborhood analyses, such as total, average, minimum, maximum, boundary, and majority.

scatterplot - a graph, usually in two dimensions, in which the data file values of one band are plotted against the data file values of another band.

scene - the image captured by a satellite.

screen coordinates - the location of a pixel on the display screen, beginning with 0,0 in the upper left corner.

screen digitizing - the process of drawing vector graphics on the display screen with a mouse. A displayed image can be used as a reference.

script modeling - the technique of combining data layers in an unlimited number of ways. Script modeling offers all of the capabilities of graphical modeling with the ability to perform more complex functions, such as conditional looping.

script model - a model that consists of text only and is created with the Spatial Modeler Language. Script models are stored in .mdl files.

search radius - in surfacing routines, the distance around each pixel within which the software will search for terrain data points.

search windows - candidate windows on the second image of an image pair that are evaluated relative to the reference window.

seat - a combination of an X-server and a host workstation.

secant - an intersection at two points or lines. In the case of conic or cylindrical map projections, a secant cone or cylinder intersects the surface of a globe at two circles.

sensor - a device that gathers energy, converts it to a digital value, and presents it in a form suitable for obtaining information about the environment.

separability - a statistical measure of distance between two signatures.

separability listing - a report of signature divergence which lists the computed divergence for every class pair and one band combination. The listing contains every divergence value for the bands studied for every possible pair of signatures.

sequential clustering - a method of clustering that analyzes pixels of an image line by line and groups them by spectral distance. Clusters are determined based on relative spectral distance and the number of pixels per cluster.

server - on a computer in a network, a utility that makes some resource or service available to the other machines on the network (such as access to a tape drive).

shaded relief image - a thematic raster image which shows variations in elevation based on a user-specified position of the sun. Areas that would be in sunlight are highlighted and areas that would be in shadow are shaded.

shaded relief map - a map of variations in elevation based on a user-specified position of the sun. Areas that would be in sunlight are highlighted and areas that would be in shadow are shaded.

short wave infrared region (SWIR) - the near-infrared and middle-infrared regions of the electromagnetic spectrum.

Shuttle Imaging Radar (SIR-A, SIR-B, and SIR-C) - the radar sensors that fly aboard NASA space shuttles. SIR-A flew aboard the 1981 NASA Space Shuttle Columbia. That data and SIR-B data from a later Space Shuttle mission are still valuable sources of radar data. A future shuttle mission is scheduled to carry the SIR-C sensor.

Side-looking Airborne Radar (SLAR) - a radar sensor that uses an antenna which is fixed below an aircraft and pointed to the side to transmit and receive the radar signal.

signal based matching - see area based matching.

signature - a set of statistics that defines a training sample or cluster. The signature is used in a classification process. Each signature corresponds to a GIS class that is created from the signatures with a classification decision rule.

skew - a condition in satellite data, caused by the rotation of the earth eastward, which causes the position of the satellite relative to the earth to move westward. Therefore, each line of data represents terrain that is slightly west of the data in the previous line.

slope - the change in elevation over a certain distance. Slope can be reported as a percentage or in degrees.

slope image - a thematic raster image which shows changes in elevation over distance. Slope images are usually color-coded to show the steepness of the terrain at each pixel.

slope map - a map that is color-coded to show changes in elevation over distance.

small scale - for a map or data file, having a small ratio between the area of the imagery (such as inches or pixels) and the area that is represented (such as feet). In small-scale image data, each pixel represents a large area on the ground, such as NOAA AVHRR (Advanced Very High Resolution Radiometer) data, with a spatial resolution of 1.1 km.

Softcopy Photogrammetry - see Digital Photogrammetry.

source coordinates - in the rectification process, the input coordinates.

spatial enhancement - the process of modifying the values of pixels in an image relative to the pixels that surround them.

spatial frequency - the difference between the highest and lowest values of a contiguous set of pixels.

Spatial Modeler Language - a script language used internally by Model Maker (Spatial Modeler) to execute the operations specified in the graphical models you create. The Spatial Modeler Language can also be used to write application-specific models.

spatial resolution - a measure of the smallest object that can be resolved by the sensor,or the area on the ground represented by each pixel.

speckle noise - the light and dark pixel noise that appears in radar data.

spectral distance - the distance in spectral space, computed as Euclidean distance in n dimensions, where n is the number of bands.
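In equation form (illustrative notation), the spectral distance D between two measurement vectors d and e is

D = \sqrt{\sum_{i=1}^{n}\left(d_i - e_i\right)^{2}}

where d_i and e_i are the data file values of the two pixels in band i.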

spectral enhancement - the process of modifying the pixels of an image based on the original values of each pixel, independent of the values of surrounding pixels.

spectral resolution - the specific wavelength intervals in the electromagnetic spectrumthat a sensor can record.

spectral space - an abstract space that is defined by spectral units (such as an amount of electromagnetic radiation). The notion of spectral space is used to describe enhancement and classification techniques that compute the spectral distance between n-dimensional vectors, where n is the number of bands in the data.

spectroscopy - the study of the absorption and reflection of electromagnetic radiation(EMR) waves.

spliced map - a map that is printed on separate pages, but intended to be joined together into one large map. Neatlines and tick marks appear only on the pages which make up the outer edges of the whole map.

spline - the process of smoothing or generalizing all currently selected lines using a specified grain tolerance during vector editing.

split - the process of making two lines from one by adding a node.

SPOT - a series of earth-orbiting satellites operated by the Centre National d'Etudes Spatiales (CNES) of France.

standard deviation - 1. the square root of the variance of a set of values, which is used as a measurement of the spread of the values. 2. a neighborhood analysis technique that outputs the standard deviation of the data file values of a user-specified window.

standard meridian - see standard parallel.

standard parallel - the line of latitude where the surface of a globe conceptually intersects with the surface of the projection cylinder or cone.

statement - in script models, properly formatted lines that perform a specific task in a model. Statements fall into the following categories: declaration, assignment, show, view, set, macro definition, and quit.

statistical clustering - a clustering method that tests 3 × 3 sets of pixels for homogeneity and builds clusters only from the statistics of the homogeneous sets of pixels.

statistics (STA) file - an ERDAS Ver. 7.X trailer file for LAN data that contains statisticsabout the data.

stereographic - 1. the process of projecting onto a tangent plane from the opposite side of the earth. 2. the process of acquiring images at angles on either side of the vertical.

stereopair - a set of two remotely-sensed images that overlap, providing two views of the terrain in the overlap area.

stereo-scene - achieved when two images of the same area are acquired on different days from different orbits, one taken east of nadir and the other taken west of nadir.

stream mode - a digitizing mode in which vertices are generated continuously while the digitizer keypad is in proximity to the surface of the digitizing tablet.

string - a line of text. A string usually has a fixed length (number of characters).

strip of photographs - consists of images captured along a flight-line, normally with an overlap of 60% for stereo coverage. All photos in the strip are assumed to be taken at approximately the same flying height and with a constant distance between exposure stations. Camera tilt relative to the vertical is assumed to be minimal.

striping - a data error that occurs if a detector on a scanning system goes out of adjustment - that is, it provides readings consistently greater than or less than the other detectors for the same band over the same ground cover. Also called "banding."

structure based matching - see relation based matching.

subsetting - the process of breaking out a portion of a large image file into one or more smaller files.

sum - a neighborhood analysis technique that outputs the total of the data file values in a user-specified window.

Sun raster data - imagery captured from a Sun monitor display.

sun-synchronous - a term used to describe earth-orbiting satellites that rotate around the earth at the same rate as the earth rotates on its axis.

supervised training - any method of generating signatures for classification, in which the analyst is directly involved in the pattern recognition process. Usually, supervised training requires the analyst to select training samples from the data, which represent patterns to be classified.

surface - a one band file in which the value of each pixel is a specific elevation value.

swath width - in a satellite system, the total width of the area on the ground covered by the scanner.

symbol - an annotation element that consists of other elements (sub-elements). See plan symbol, profile symbol, and function symbol.

symbolization - a method of displaying vector data in which attribute information is used to determine how features are rendered. For example, points indicating cities and towns can appear differently based on the population field stored in the attribute database for each of those areas.

Synthetic Aperture Radar (SAR) - a radar sensor that uses its side-looking, fixed antenna to create a synthetic aperture. SAR sensors are mounted on satellites, aircraft, and the NASA Space Shuttle. The sensor transmits and receives as it is moving. The signals received over a time interval are combined to create the image.

T

table object - in Model Maker (Spatial Modeler), a series of numeric values or character strings.

tablet digitizing - the process of using a digitizing tablet to transfer non-digital data such as maps or photographs to vector format.

Tagged Image File Format - see TIFF data.

tangent - an intersection at one point or line. In the case of conic or cylindrical map projections, a tangent cone or cylinder intersects the surface of a globe in a circle.

Tasseled Cap transformation - an image enhancement technique that optimizes data viewing for vegetation studies.

temporal resolution - the frequency with which a sensor obtains imagery of a particular area.

terrain analysis - the processing and graphic simulation of elevation data.

terrain data - elevation data expressed as a series of x, y, and z values that are either regularly or irregularly spaced.

text printer - a device used to print characters onto paper, usually used for lists, documents, and reports. If a color printer is not necessary or is unavailable, images can be printed using a text printer. Also called a "line printer."

thematic data - raster data that are qualitative and categorical. Thematic layers often contain classes of related information, such as land cover, soil type, slope, etc. In ERDAS IMAGINE, thematic data are stored in .img files.

thematic layer - see thematic data.

thematic map - a map illustrating the class characterizations of a particular spatial variable such as soils, land cover, hydrology, etc.

Thematic Mapper (TM) - Landsat data acquired in 7 bands with a spatial resolution of 30 × 30 meters.

theme - a particular type of information, such as soil type or land use, that is represented in a layer.

3D perspective view - a simulated three-dimensional view of terrain.

threshold - a limit, or "cutoff point," usually a maximum allowable amount of error in an analysis. In classification, thresholding is the process of identifying a maximum distance between a pixel and the mean of the signature to which it was classified.

tick marks - small lines along the edge of the image area or neatline that indicate regular intervals of distance.

tie point - a point whose ground coordinates are not known, but can be recognized visually in the overlap or sidelap area between two images.

TIFF data - Tagged Image File Format data is a raster file format developed by Aldus Corp. (Seattle, Washington), in 1986 for the easy transportation of data.

TIGER - Topologically Integrated Geographic Encoding and Referencing System files are line network products of the U.S. Census Bureau.

tiled data - the storage format of ERDAS IMAGINE .img files.

TIN - see triangulated irregular network.

to-node - the last vertex in a line.

topocentric coordinate system - a coordinate system which has its origin at the center of the image on the earth ellipsoid. The three perpendicular coordinate axes are defined on a tangential plane at this center point. The x-axis is oriented eastward, the y-axis northward, and the z-axis is vertical to the reference plane (up).

topographic - a term indicating elevation.

topographic data - a type of raster data in which pixel values represent elevation.

topographic effect - a distortion found in imagery from mountainous regions that results from the differences in illumination due to the angle of the sun and the angle of the terrain.

topographic map - a map depicting terrain relief.

topology - a term that defines the spatial relationships between features in a vector layer.

total RMS error - the total root mean square (RMS) error for an entire image. Total RMS error takes into account the RMS error of each ground control point (GCP).

trailer file - 1. an ERDAS Ver. 7.X file with a .TRL extension that accompanies a GIS file and contains information about the GIS classes. 2. a file following the image data on a 9-track tape.

training - the process of defining the criteria by which patterns in image data are recognized for the purpose of classification.

training field - the geographical area represented by the pixels in a training sample. Usually, it is previously identified with the use of ground truth data or aerial photography. Also called "training site."

training sample - a set of pixels selected to represent a potential class. Also called "sample."

transformation matrix - a set of coefficients which are computed from ground control points, and used in polynomial equations to convert coordinates from one system to another. The size of the matrix depends upon the order of the transformation.
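For example, a 1st-order (affine) transformation uses six coefficients; writing them here as a_0, a_1, a_2 and b_0, b_1, b_2 (illustrative notation), the output coordinates are computed from the source coordinates (x, y) as

x_o = a_0 + a_1 x + a_2 y, \qquad y_o = b_0 + b_1 x + b_2 y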

transposition - the interchanging of the rows and columns of a matrix, denoted with T.

transverse aspect - the orientation of a map in which the central line of the projection, which is normally the equator, is rotated 90 degrees so that it follows a meridian.

triangulated irregular network (TIN) - a specific representation of DTMs in which elevation points can occur at irregular intervals.

triangulation - establishes the geometry of the camera or sensor relative to objects on the earth's surface.

true color - a method of displaying an image (usually from a continuous raster layer) which retains the relationships between data file values and represents multiple bands with separate color guns. The image memory values from each displayed band are translated through the function memory of the corresponding color gun.

true direction - the property of a map projection to represent the direction between two points with a straight rhumb line, which crosses meridians at a constant angle.

U

union - the area or set that is the combination of two or more input areas or sets, without repetition.

unscaled map - a hardcopy map that is not referenced to any particular scale, in which one file pixel is equal to one printed pixel.

unsplit - the process of joining two lines by removing a node.

unsupervised training - a computer-automated method of pattern recognition in which some parameters are specified by the user and are used to uncover statistical patterns that are inherent in the data.

V

variable - 1. a numeric value that is changeable, usually represented with a letter. 2. a thematic layer. 3. one band of a multiband image. 4. in models, objects which have been associated with a name using a declaration statement.

variance - a measure of the spread, or dispersion, of a set of values about their mean; the average of the squared deviations from the mean. See also standard deviation.
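As a formula, for n values x_i with mean \bar{x} (whether n or n - 1 appears in the denominator depends on whether the population or sample variance is wanted):

\sigma^{2} = \frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}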

vector - 1. a line element. 2. a one-dimensional matrix, having either one row (1 by j), or one column (i by 1). See also mean vector, measurement vector.

vector data - data that represent physical forms (elements) such as points, lines, and polygons. Only the vertices of vector data are stored, instead of every point that makes up the element. ERDAS IMAGINE vector data are based on the ARC/INFO data model and are stored in directories, rather than individual files. See workspace.

vector layer - a set of vector features and their associated attributes.

velocity vector - the satellite's velocity if measured as a vector through a point on the spheroid.

verbal statement - a statement that describes the distance on the map to the distance on the ground. A verbal statement describing a scale of 1:1,000,000 is approximately 1 inch to 16 miles. The units on the map and on the ground do not have to be the same in a verbal statement.

vertex - a point that defines an element, such as a point where a line changes direction.

vertical control - the vertical distribution of control points in aerial triangulation (z - elevation).

vertices - plural of vertex.

viewshed analysis - the calculation of all areas that can be seen from a particular viewing point or path.

viewshed map - a map showing only those areas visible (or invisible) from a specified point(s).

volume - a medium for data storage, such as a magnetic disk or a tape.

volume set - the complete set of tapes that contains one image.

W

weight - the number of values in a set; particularly, in clustering algorithms, the weight of a cluster is the number of pixels that have been averaged into it.

weighting factor - a parameter that increases the importance of an input variable. For example, in GIS indexing, one input layer can be assigned a weighting factor which multiplies the class values in that layer by that factor, causing that layer to have more importance in the output file.

weighting function - in surfacing routines, a function applied to elevation values for determining new output values.

working window - the image area to be used in a model. This can be set to either the union or intersection of the input layers.

workspace - a location which contains one or more vector layers. A workspace is made up of several directories.

write ring - a protection device that allows data to be written to a 9-track tape when the ring is in place, but not when it is removed.

X

X residual - in RMS error reports, the distance between the source X coordinate and the retransformed X coordinate.

X RMS error - the root mean square error (RMS) in the X direction.

Y

Y residual - in RMS error reports, the distance between the source Y coordinate and the retransformed Y coordinate.

Y RMS error - the root mean square error (RMS) in the Y direction.

Z

zero-sum kernel - a convolution kernel in which the sum of all the coefficients is zero. Zero-sum kernels are usually edge detectors.
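For example, one common 3 x 3 edge-detection kernel of this kind (one of many possible choices) is

\begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}

whose coefficients sum to zero, so uniform areas of the image produce output values of zero while edges produce nonzero values.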

zone distribution rectangles (ZDRs) - the images into which each distribution rectangle (DR) is divided in ADRG data.

zoom - the process of expanding displayed pixels on an image so that they can be more closely studied. Zooming is similar to magnification, except that it changes the display only temporarily, leaving image memory the same.

Bibliography

Adams, J.B., Smith, M.O., and Gillespie, A.R. 1989. "Simple Models for Complex Natural Surfaces: A Strategy for the Hyperspectral Era of Remote Sensing." Proceedings IEEE Intl. Geosciences and Remote Sensing Symposium. 1:16-21.

American Society of Photogrammetry. 1980. Photogrammetric Engineering and Remote Sensing XLVI:10:1249.

Atkinson, Paula. 1985. "Preliminary Results of the Effect of Resampling on Thematic Mapper Imagery." 1985 ACSM-ASPRS Fall Convention Technical Papers. Falls Church, Virginia: American Society for Photogrammetry and Remote Sensing and American Congress on Surveying and Mapping.

Battrick, Bruce, and Lois Proud, eds. May 1992. ERS-1 User Handbook. Noordwijk, The Netherlands: European Space Agency, ESA Publications Division, c/o ESTEC.

Benediktsson, J.A., Swain, P.H., Ersoy, O.K., and Hong, D. 1990. "Neural Network Approaches Versus Statistical Methods in Classification of Multisource Remote Sensing Data." IEEE Transactions on Geoscience and Remote Sensing 28:4:540-51.

Berk, A., et al. 1989. MODTRAN: A Moderate Resolution Model for LOWTRAN 7. Hanscom Air Force Base, Massachusetts: U.S. Air Force Geophysical Laboratory (AFGL).

Bernstein, Ralph, et al. 1983. "Image Geometry and Rectification." Chapter 21 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Billingsley, Fred C., et al. 1983. "Data Processing and Reprocessing." Chapter 17 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Blom, Ronald G., and Michael Daily. July 1982. "Radar Image Processing for Rock-Type Discrimination." IEEE Transactions on Geoscience and Remote Sensing, Vol. GE-20, No. 3.

Buchanan, M. D. 1979. "Effective Utilization of Color in Multidimensional Data Presentation." Proceedings of the Society of Photo-Optical Engineers, Vol. 199: 9-19.

Burrus, C. S., and T. W. Parks. 1985. DFT/FFT and Convolution Algorithms: Theory and Implementation. New York: John Wiley & Sons, Inc.

Cannon, Michael, Alex Lehar, and Fred Preston, 1983. "Background Pattern Removal by Power Spectral Filtering." Applied Optics, Vol. 22, No. 6: 777-779.

Carter, James R. 1989. "On Defining the Geographic Information System." Fundamentals of Geographic Information Systems: A Compendium, edited by William J. Ripple. Bethesda, Maryland: American Society for Photogrammetry and Remote Sensing and the American Congress on Surveying and Mapping.

Chahine, Moustafa T., et al. 1983. "Interaction Mechanisms within the Atmosphere." Chapter 5 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Chavez, Pat S., Jr., et al. 1991. "Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic." Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 3: 295-303.

Chavez, Pat S., Jr., and Graydon L. Berlin. 1986. "Restoration Techniques for SIR-B Digital Radar Images." Paper presented at the Fifth Thematic Conference: Remote Sensing for Exploration Geology, Reno, Nevada.

Clark, Roger N., and Ted L. Roush. 1984. "Reflectance Spectroscopy: Quantitative Analysis Techniques for Remote Sensing Applications." Journal of Geophysical Research, Vol. 89, No. B7: 6329-6340.

Clark, R.N., Gallagher, A.J., and Swayze, G.A. 1990. "Material Absorption Band Depth Mapping of Imaging Spectrometer Data using a Complete Band Shape Least-Square Fit with Library Reference Spectra." Proceedings of the Second AVIRIS Conference. JPL Pub. 90-54.

Colby, J. D. 1991. "Topographic Normalization in Rugged Terrain." Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 5: 531-537.

Colwell, Robert N., ed. 1983. Manual of Remote Sensing. Falls Church, Virginia: American Society of Photogrammetry.

Congalton, R. 1991. "A Review of Assessing the Accuracy of Classifications of Remotely Sensed Data." Remote Sensing of Environment, Vol. 37: 35-46.

Conrac Corp., Conrac Division. 1980. Raster Graphics Handbook. Covina, California: Conrac Corp.

Crippen, Robert E. July 1989. "Development of Remote Sensing Techniques for the Investigation of Neotectonic Activity, Eastern Transverse Ranges and Vicinity, Southern California." Ph.D. Diss., University of California, Santa Barbara.

Crippen, Robert E. 1989. "A Simple Spatial Filtering Routine for the Cosmetic Removal of Scan-Line Noise from Landsat TM P-Tape Imagery." Photogrammetric Engineering & Remote Sensing, Vol. 55, No. 3: 327-331.

Crippen, Robert E. 1987. "The Regression Intersection Method of Adjusting Image Data for Band Ratioing." International Journal of Remote Sensing, Vol. 8, No. 2: 137-155.

Crist, E. P., et al. 1986. "Vegetation and Soils Information Contained in Transformed Thematic Mapper Data." Proceedings of IGARSS' 86 Symposium, ESA Publications Division, ESA SP-254.

Crist, E. P., and R. J. Kauth. 1986. "The Tasseled Cap De-Mystified." Photogrammetric Engineering & Remote Sensing, Vol. 52, No. 1: 81-86.

Cullen, Charles G. 1972. Matrices and Linear Transformations. Reading, Massachusetts: Addison-Wesley Publishing Company.

Daily, Mike. 1983. "Hue-Saturation-Intensity Split-Spectrum Processing of Seasat Radar Imagery." Photogrammetric Engineering & Remote Sensing, Vol. 49, No. 3: 349-355.

Dangermond, Jack. 1988. "A Review of Digital Data Commonly Available and Some of the Practical Problems of Entering Them into a GIS." Fundamentals of Geographic Information Systems: A Compendium, edited by William J. Ripple. Bethesda, Maryland: American Society for Photogrammetry and Remote Sensing and the American Congress on Surveying and Mapping.

Defense Mapping Agency Aerospace Center. 1989. Defense Mapping Agency Product Specifications for ARC Digitized Raster Graphics (ADRG). St. Louis, Missouri: DMA Aerospace Center.

Dent, Borden D. 1985. Principles of Thematic Map Design. Reading, Massachusetts: Addison-Wesley Publishing Company.

Duda, Richard O., and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. New York: John Wiley & Sons, Inc.

Eberlein, R. B., and J. S. Weszka. 1975. "Mixtures of Derivative Operators as Edge Detectors." Computer Graphics and Image Processing, Vol. 4: 180-183.

Elachi, Charles. 1987. Introduction to the Physics and Techniques of Remote Sensing. New York: John Wiley & Sons.

Elachi, Charles. 1992. "Radar Images of the Earth from Space." Exploring Space.

Elachi, Charles. 1987. Spaceborne Radar Remote Sensing: Applications and Techniques. New York: IEEE Press.

Elassal, Atef A., and Vincent M. Caruso. 1983. USGS Digital Cartographic Data Standards: Digital Elevation Models. Circular 895-B. Reston, Virginia: U.S. Geological Survey.

ESRI. 1992. ARC Command References 6.0. Redlands, California: ESRI, Inc.

ESRI. 1992. Data Conversion: Supported Data Translators. Redlands, California: ESRI, Inc.

ESRI. 1992. Managing Tabular Data. Redlands, California: ESRI, Inc.

ESRI. 1992. Map Projections & Coordinate Management: Concepts and Procedures. Redlands, California: ESRI, Inc.

ESRI. 1990. Understanding GIS: The ARC/INFO Method. Redlands, California: ESRI, Inc.

Fahnestock, James D., and Robert A. Schowengerdt. 1983. "Spatially Variant Contrast Enhancement Using Local Range Modification." Optical Engineering, Vol. 22, No. 3.

Faust, Nickolas L. 1989. "Image Enhancement." Volume 20, Supplement 5 of Encyclopedia of Computer Science and Technology, edited by Allen Kent and James G. Williams. New York: Marcel Dekker, Inc.

Fisher, P. F. 1991. "Spatial Data Sources and Data Problems." Geographical Information Systems: Principles and Applications, edited by David J. Maguire, Michael F. Goodchild, and David W. Rhind. New York: Longman Scientific & Technical.

Flaschka, H. A. 1969. Quantitative Analytical Chemistry: Vol 1. New York: Barnes & Noble, Inc.

Fraser, S. J., et al. 1986. "Targeting Epithermal Alteration and Gossans in Weathered and Vegetated Terrains Using Aircraft Scanners: Successful Australian Case Histories." Paper presented at the Fifth Thematic Conference: Remote Sensing for Exploration Geology, Reno, Nevada.

Freden, Stanley C., and Frederick Gordon, Jr. 1983. "Landsat Satellites." Chapter 12 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Frost, Victor S., Stiles, Josephine A., Shanmugan, K. S., and Holtzman, Julian C. 1982. "A Model for Radar Images and Its Application to Adaptive Digital Filtering of Multiplicative Noise." IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-4, No. 2, March 1982.

Geological Remote Sensing Group Newsletter. May 1992. No. 5. Institute of Hydrology, Wallingford, OX10, United Kingdom.

Gonzalez, Rafael C., and Paul Wintz. 1977. Digital Image Processing. Reading, Massachusetts: Addison-Wesley Publishing Company.

Gonzalez, Rafael C., and Richard E. Woods. 1992. Digital Image Processing. Reading, Massachusetts: Addison-Wesley Publishing Company.

Green, A.A. and Craig, M.D. 1985. "Analysis of Aircraft Spectrometer Data with Logarithmic Residuals." Proceedings of the AIS Data Analysis Workshop. JPL Pub. 85-41:111-119.

Guptill, Stephen C., ed. 1988. A Process for Evaluating Geographic Information Systems. U.S. Geological Survey Open-File Report 88-105.

Haralick, Robert M. 1979. "Statistical and Structural Approaches to Texture." Proceedings of the IEEE, Vol. 67, No. 5: 786-804. Seattle, Washington.

Hodgson, Michael E., and Bill M. Shelley. 1993. "Removing the Topographic Effect in Remotely Sensed Imagery." ERDAS Monitor, Fall 1993. Contact Dr. Hodgson, Dept. of Geography, University of Colorado, Boulder, CO 80309-0260.

Holcomb, Derrold W. 1993. "Merging Radar and VIS/IR Imagery." Paper submitted to the 1993 ERIM Conference, Pasadena, California.

Hord, R. Michael. 1982. Digital Image Processing of Remotely Sensed Data. New York: Academic Press.

Irons, James R., and Gary W. Petersen. 1981. "Texture Transforms of Remote Sensing Data," Remote Sensing of Environment, Vol. 11: 359-370.

Jensen, John R., et al. 1983. "Urban/Suburban Land Use Analysis." Chapter 30 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Jensen, John R. 1996. Introductory Digital Image Processing: A Remote Sensing Perspective. Englewood Cliffs, New Jersey: Prentice-Hall.

Johnston, R. J. 1980. Multivariate Statistical Analysis in Geography. Essex, England: Longman Group Ltd.

Jordan, III, Lawrie E., Bruce Q. Rado, and Stephen L. Sperry. 1992. "Meeting the Needs of the GIS and Image Processing Industry in the 1990s." Photogrammetric Engineering & Remote Sensing, Vol. 58, No. 8: 1249-1251.

Keates, J. S. 1973. Cartographic Design and Production. London: Longman Group Ltd.

Kidwell, Katherine B., ed. 1988. NOAA Polar Orbiter Data (TIROS-N, NOAA-6, NOAA-7, NOAA-8, NOAA-9, and NOAA-10) Users Guide. Washington, DC: National Oceanic and Atmospheric Administration.

Kloer, Brian R. 1994. "Hybrid Parametric/Non-parametric Image Classification." Paper presented at the ACSM-ASPRS Annual Convention, April 1994, Reno, Nevada.

Kneizys, F. X., et al. 1988. Users Guide to LOWTRAN 7. Hanscom AFB, Massachusetts: Air Force Geophysics Laboratory.

Knuth, Donald E. 1987. "Digital Halftones by Dot Diffusion." ACM Transactions on Graphics, Vol. 6: 245-273.

Kruse, Fred A. 1988. "Use of Airborne Imaging Spectrometer Data to Map Minerals Associated with Hydrothermally Altered Rocks in the Northern Grapevine Mountains, Nevada." Remote Sensing of the Environment, Vol. 24: 31-51.

Larsen, Richard J., and Morris L. Marx. 1981. An Introduction to Mathematical Statistics and Its Applications. Englewood Cliffs, New Jersey: Prentice-Hall.

Lavreau, J. 1991. "De-Hazing Landsat Thematic Mapper Images." Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 10: 1297-1302.

Leberl, Franz W. 1990. Radargrammetric Image Processing. Norwood, Massachusetts: Artech House, Inc.

Lee, J. E., and J. M. Walsh. 1984. Map Projections for Use with the Geographic Information System. U.S. Fish and Wildlife Service, FWS/OBS-84/17.

Lee, Jong-Sen. 1981. "Speckle Analysis and Smoothing of Synthetic Aperture Radar Images." Computer Graphics and Image Processing, Vol. 17: 24-32.

Lillesand, Thomas M., and Ralph W. Kiefer. 1987. Remote Sensing and Image Interpretation. New York: John Wiley & Sons, Inc.

Lopes, A., Nezry, E., Touzi, R., and Laur, H. 1990. "Maximum A Posteriori Speckle Filtering and First Order Texture Models in SAR Images." International Geoscience and Remote Sensing Symposium (IGARSS).

Lue, Yan and Kurt Novak. 1991. "Recursive Grid - Dynamic Window Matching for Automatic DEM Generation." 1991 ACSM-ASPRS Fall Convention Technical Papers.

Lyon, R.J.P. 1987. "Evaluation of AIS-2 Data over Hydrothermally Altered Granitoid Rocks." Proceedings of the Third AIS Data Analysis Workshop. JPL Pub. 87-30: 107-119.

Maling, D. H. 1992. Coordinate Systems and Map Projections. 2nd ed. New York: Pergamon Press.

Marble, Duane F. 1990. "Geographic Information Systems: An Overview." Introductory Readings in Geographic Information Systems, edited by Donna J. Peuquet and Duane F. Marble. Bristol, Pennsylvania: Taylor & Francis, Inc.

Mendenhall, William, and Richard L. Scheaffer. 1973. Mathematical Statistics with Applications. North Scituate, Massachusetts: Duxbury Press.

Menon, Sudhakar, Peng Gao, and CiXiang Zhan. 1991. "GRID: A Data Model and Functional Map Algebra for Raster Geo-processing." GIS/LIS '91 Proceedings, Vol. 2: 551-561. Bethesda, Maryland: American Society for Photogrammetry and Remote Sensing.

Merenyi, E., Taranik, J.V., Monor, Tim, and Farrand, W. March 1996. "Quantitative Comparison of Neural Network and Conventional Classifiers for Hyperspectral Imagery." Proceedings of the Sixth AVIRIS Conference. JPL Pub.

Minnaert, J. L., and G. Szeicz. 1961. "The Reciprocity Principle in Lunar Photometry." Astrophysics Journal, Vol. 93: 403-410.


Nagao, Makoto, and Takashi Matsuyama. 1978. “Edge Preserving Smoothing.” Computer Graphics and Image Processing, Vol. 9: 394-407.

Needham, Bruce H. 1986. “Availability of Remotely Sensed Data and Information from the U.S. National Oceanic and Atmospheric Administration’s Satellite Data Services Division.” Chapter 9 in Satellite Remote Sensing for Resources Development, edited by Karl-Heinz Szekielda. Gaithersburg, Maryland: Graham & Trotman, Inc.

Nichols, David, et al. 1983. “Digital Hardware.” Chapter 20 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Oppenheim, Alan V., and Ronald W. Schafer. 1975. Digital Signal Processing. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.

Parent, Phillip, and Richard Church. 1987. “Evolution of Geographic Information Systems as Decision Making Tools.” Fundamentals of Geographic Information Systems: A Compendium, edited by William J. Ripple. Bethesda, Maryland: American Society for Photogrammetry and Remote Sensing and American Congress on Surveying and Mapping.

Pearson, Frederick. 1990. Map Projections: Theory and Applications. Boca Raton, Florida: CRC Press, Inc.

Peli, Tamar, and Jae S. Lim. 1982. “Adaptive Filtering for Image Enhancement.” Optical Engineering, Vol. 21, No. 1.

Pratt, William K. 1991. Digital Image Processing. New York: John Wiley & Sons, Inc.

Press, William H., et al. 1988. Numerical Recipes in C. New York, New York: Cambridge University Press.

Prewitt, J. M. S. 1970. “Object Enhancement and Extraction.” Picture Processing and Psychopictorics, edited by B. S. Lipkin and A. Rosenfeld. New York: Academic Press.

Rado, Bruce Q. 1992. “An Historical Analysis of GIS.” Mapping Tomorrow’s Resources. Logan, Utah: Utah State University.

Richter, Rudolf. 1990. “A Fast Atmospheric Correction Algorithm Applied to Landsat TM Images.” International Journal of Remote Sensing, Vol. 11, No. 1: 159-166.

Robinson, Arthur H., and Randall D. Sale. 1969. Elements of Cartography. 3rd ed. New York: John Wiley & Sons, Inc.

Sabins, Floyd F., Jr. 1987. Remote Sensing Principles and Interpretation. New York: W. H. Freeman and Co.


Sader, S. A., and J. C. Winne. 1992. “RGB-NDVI Colour Composites For Visualizing Forest Change Dynamics.” International Journal of Remote Sensing, Vol. 13, No. 16: 3055-3067.

Schowengerdt, Robert A. 1983. Techniques for Image Processing and Classification in Remote Sensing. New York: Academic Press.

Schowengerdt, Robert A. 1980. “Reconstruction of Multispatial, Multispectral Image Data Using Spatial Frequency Content.” Photogrammetric Engineering & Remote Sensing, Vol. 46, No. 10: 1325-1334.

Schwartz, A. A., and J. M. Soha. 1977. “Variable Threshold Zonal Filtering.” Applied Optics, Vol. 16, No. 7.

Short, Nicholas M. 1982. The Landsat Tutorial Workbook: Basics of Satellite Remote Sensing. Washington, DC: National Aeronautics and Space Administration.

Simonett, David S., et al. 1983. “The Development and Principles of Remote Sensing.” Chapter 1 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.

Slater, P. N. 1980. Remote Sensing: Optics and Optical Systems. Reading, Massachusetts: Addison-Wesley Publishing Company, Inc.

Smith, J., T. Lin, and K. Ranson. 1980. “The Lambertian Assumption and Landsat Data.” Photogrammetric Engineering & Remote Sensing, Vol. 46, No. 9: 1183-1189.

Snyder, John P. 1987. Map Projections--A Working Manual. U.S. Geological Survey Professional Paper 1395. Washington, DC: United States Government Printing Office.

Snyder, John P., and Philip M. Voxland. 1989. An Album of Map Projections. U.S. Geological Survey Professional Paper 1453. Washington, DC: United States Government Printing Office.

Srinivasan, Ram, Michael Cannon, and James White. 1988. “Landsat Destriping Using Power Spectral Filtering.” Optical Engineering, Vol. 27, No. 11: 939-943.

Star, Jeffrey, and John Estes. 1990. Geographic Information Systems: An Introduction. Englewood Cliffs, New Jersey: Prentice-Hall.

Steinitz, Carl, Paul Parker, and Lawrie E. Jordan, III. 1976. “Hand Drawn Overlays: Their History and Perspective Uses.” Landscape Architecture, Vol. 66.

Stimson, George W. 1983. Introduction to Airborne Radar. El Segundo, California: Hughes Aircraft Company.

Suits, Gwynn H. 1983. “The Nature of Electromagnetic Radiation.” Chapter 2 in Manual of Remote Sensing, edited by Robert N. Colwell. Falls Church, Virginia: American Society of Photogrammetry.


Swain, Philip H. 1973. Pattern Recognition: A Basis for Remote Sensing Data Analysis (LARS Information Note 111572). West Lafayette, Indiana: The Laboratory for Applications of Remote Sensing, Purdue University.

Swain, Philip H., and Shirley M. Davis. 1978. Remote Sensing: The Quantitative Approach. New York: McGraw-Hill Book Company.

Taylor, Peter J. 1977. Quantitative Methods in Geography: An Introduction to Spatial Analysis. Boston, Massachusetts: Houghton Mifflin Company.

TIFF Developer’s Toolkit. 1990. Seattle, Washington: Aldus Corp.

Tou, Julius T., and Rafael C. Gonzalez. 1974. Pattern Recognition Principles. Reading, Massachusetts: Addison-Wesley Publishing Company.

Tucker, Compton J. 1979. “Red and Photographic Infrared Linear Combinations for Monitoring Vegetation.” Remote Sensing of Environment, Vol. 8: 127-150.

Walker, Terri C., and Richard K. Miller. 1990. Geographic Information Systems: An Assessment of Technology, Applications, and Products. Madison, Georgia: SEAI Technical Publications.

Wang, Zhizhuo. 1990. Principles of Photogrammetry. Wuhan, China: Wuhan Technological University of Surveying and Mapping.

Welch, Roy. 1990. “3-D Terrain Modeling for GIS Applications.” GIS World, Vol. 3, No. 5.

Welch, R., and W. Ehlers. 1987. “Merging Multiresolution SPOT HRV and Landsat TM Data.” Photogrammetric Engineering & Remote Sensing, Vol. 53, No. 3: 301-303.

Wolberg, George. 1990. Digital Image Warping. IEEE Computer Society Press Monograph.

Wolf, Paul R. 1980. “Definitions of Terms and Symbols used in Photogrammetry.” Manual of Photogrammetry. Ed. Chester C. Slama. Falls Church, Virginia: American Society of Photogrammetry.

Wong, K.W. 1980. “Basic Mathematics of Photogrammetry.” Manual of Photogrammetry. Ed. Chester C. Slama. Falls Church, Virginia: American Society of Photogrammetry.

Yang, Xinghe, R. Robinson, H. Lin, and A. Zusmanis. 1993. “Digital Ortho Corrections Using Pre-transformation Distortion Adjustment.” 1993 ASPRS Technical Papers. New Orleans. Vol. 3: 425-434.

Zamudio, J.A., and Atkinson, W.W. 1990. “Analysis of AVIRIS Data for Spectral Discrimination of Geologic Materials in the Dolly Varden Mountains.” Proceedings of the Second AVIRIS Conference. JPL Pub. 90-54: 162-66.


Index

A
a priori 221, 294, 316
absorption 7, 10
    spectra 6, 11
accuracy assessment 254, 258
accuracy report 259
adaptive filter 151
ADRG 25, 52, 72
    file naming convention 76
    ordering 85
ADRI 52, 78
    file naming convention 80
    ordering 85
aerial photos 5, 28
    interior orientation 272
airborne imagery 51
Airborne Imaging Spectrometer 11
Airborne Multispectral Scanner Mk2 11
AIRSAR 66, 70
Aitoff 594
Albers Conical Equal Area 538
Almaz 66
Almaz 1-B 69
Almaz 1-b 69
Almaz-1 69
Analog Photogrammetry 262
Analytical Photogrammetry 262
annotation 113, 118, 404
    element 404
    in script models 390
    layer 404
ARC system 72
ARC/INFO 51, 88, 90, 313, 361, 367, 515
    coverages 39
    data model 39, 42
    UNGENERATE 90
ARC/INFO GENERATE 49


ARC/INFO INTERCHANGE 49
arc/second format 81
ARCGEN 90
ARCVIEW 49
area based matching 294
area of interest 35, 141, 372
ASCII 52, 82, 369
aspect 347, 352, 419
    calculating 352
    equatorial 419
    oblique 419
    polar 419
    transverse 420
atmospheric correction 140
atmospheric effect 130
atmospheric modeling 131
attribute
    imported 44
    in models 389
    information 39, 41, 44
    raster 367
    thematic 366, 367
    vector 367, 370
    viewing 368
auto update 102, 103, 104
AutoCAD 49, 51, 90
average 450
AVHRR 15, 52, 55, 62, 131, 317
    extract 63
    full set 63
    ordering 84
AVIRIS 11, 70
azimuth 417
Azimuthal Equidistant 522

B
band 2, 57, 58, 60, 62
    displaying 106
banding 19, 129
    see also striping


Bartlett window 186
Bayesian classifier 240
BIL 20, 52
bin 138
binary format 20
BIP 20, 23, 52
Bipolar Oblique Conic Conformal 586
bit 20
    display device 98
    in files (depth) 17, 20, 24
block of photographs 267, 279
blocking factor 22, 24
border 410
bpi 25
breakline 292
brightness inversion 142
brightness value 17, 98, 99, 107, 110, 133
BSQ 20, 21, 52
buffer zone 373
bundle adjustment (SPOT) 286
Butterworth window 187
byte 20

C
C Programmers’ Toolkit 371
CalComp Electrostatic Plotter 441
Canadian Geographic Information System 359
Canon PostScript Intelligent Processing Unit 441
Cartesian coordinate 41
cartography 399
cartridge tape 23, 25
Cassini 587
Cassini-Soldner 587
CD-ROM 20, 23, 25
cell 83
center of the scene 281
change detection 18, 33, 54, 313
check point 277, 279


    analysis 277, 287
chi-square
    distribution 255
    statistics 257
class 215, 364
    name 366, 368
    value 110, 368
        numbering systems 365, 378
classification 34, 68, 194, 215, 365
    and enhanced data 153, 219
    and rectified data 314
    and terrain analysis 347
    evaluating 254
    flow chart 245
    iterative 219, 224, 242
    scheme 218
clump 374
clustering 227
    ISODATA 227, 228
    RGB 227, 232
coefficient 461
    in convolution 144
    of variation 195
color gun 98, 99, 107
color scheme 45, 366
color table 110, 368
    for printing 442
colorcell 99
    read-only 100
color-infrared 106
colormap 99, 113
    display 600
complex image 53
confidence level 257
conformality 416
contiguity analysis 372, 374
contingency matrix 236, 238
continuous data
    see data
contrast stretch 34, 133


    for display 107, 135, 453
    linear 133, 134
    min/max vs. standard deviation 108, 136
    nonlinear 134
    piecewise linear 134
contrast table 106
control point 276
convolution 19
    cubic 341
    filtering 144, 341, 342, 375
    kernel
        crisp 149
        edge detector 147
        edge enhancer 148
        gradient 202
        high frequency 145, 148
        low frequency 149, 341
        Prewitt 202
        zero-sum 146
convolution kernel 144
    high frequency 342
    low frequency 342
    Prewitt 202
coordinate
    Cartesian 41, 422
    conversion 345
    file 4, 41, 122
    geographic 422, 531
    map 4, 41, 311, 314, 316
    planar 422
    reference 315, 316
    retransformed 330, 338
    source 316
    spherical 422
coordinate system 4
correlation calculations 294
correlation threshold 329
correlation windows 294
covariance 238, 251, 454
    sample 454


covariance matrix 157, 235, 240, 253, 455
cross correlation 295

D
data 360
    airborne sensor 51
    ancillary 220
    categorical 3
    complex 53, 479
    compression 153, 232
    continuous 3, 27, 106, 364, 472
        displaying 109
    creating 122
    elevation 220, 347
    enhancement 126
    floating point 478
    from aircraft 70
    geocoded 22, 32, 312
    gray scale 118
    hyperspectral 11
    interval 3
    nominal 3
    ordering 84
    ordinal 3
    packed 62
    pseudo color 118
    radar 51, 64
        applications 68
        bands 66
        merging 211
    raster 4, 113
        converting to vector 87
        editing 35
        formats (BIL, etc.) 24
        importing and exporting 51
        in GIS 362
        sources 51
    ratio 3
    satellite 51
    structure 159


    thematic 3, 27, 110, 223, 366, 472
        displaying 112
    tiled 29, 53
    topographic 81, 347
        using 83
    true color 118
    vector 113, 118, 313, 345, 367
        converting to raster 87, 365
        copying 43
        displaying 45
        editing 394
            densify 394
            generalize 394
            reshape 394
            spline 394
            split 394
            unsplit 394
        from raster data 47
        importing 47, 53
        in GIS 362
        renaming 43
        sources 47, 49, 51
        structure 42
    viewing 117
        multiple layers 119
        overlapping layers 119
data correction 19, 35, 125, 129
    geometric 129, 131, 311
    radiometric 129, 207, 312
data file value 1, 34
    display 107, 122, 134
    in classification 215
data storage 20
database
    image 31
decision rule 217, 243
    Bayesian 252
    feature space 248
    Mahalanobis distance 251
    maximum likelihood 252, 254


    minimum distance 250, 252, 254, 257
    non-parametric 244
    parallelepiped 246
    parametric 244
decorrelation stretch 158
degrees of freedom 257
DEM 2, 28, 52, 81, 82, 131, 292, 312, 348
    editing 36
    interpolation 37, 292
    ordering 84
density 25
descriptive information
    see attribute information 44
Design with Nature (by Ian McHarg) 359
desktop scanners 269
detector 54, 129
Developers’ Toolkit 371, 430
DGN 49
digital elevation model (DEM) 292
digital image 47
digital orthophoto 299
    cell sizes 301
    creation 300
digital orthophotography 348
Digital Photogrammetry 262
digital picture
    see image 98
digital terrain model (DTM) 51, 292
digitizing 47, 314
    GCPs 316
    operation modes 48
    point mode 48
    screen 47, 49
    stream mode 48
    tablet 47
DIME 93
dimensionality 153, 220, 460
disk space 26
diskette 20, 24
displacement 304


display
    32-bit 101
    DirectColor 101, 103, 109, 111
    HiColor 101, 105
    PC 105
    PseudoColor 101, 102, 105, 110, 112
    TrueColor 101, 104, 105, 109, 111
display device 97, 107, 133, 135, 160
display memory 126
display resolution 97
distance image file 254, 255
distortion
    geometric 299
distribution 446
Distribution Rectangle 72
dithering 116
    color artifacts 117
    color patch 117
divergence 236
    signature 239
    transformed 239
DLG 25, 49, 51, 53, 92
Dreschler Operator 297
DTED 52, 78, 81, 348
DTM 51, 292, 300
DXF 49, 51, 53, 90
dynamic range 17

E
edge detection 147, 200
edge enhancement 148
eigenvalue 154
eigenvector 154, 157
8 mm tape 23, 25
Eikonix 71
electromagnetic radiation 5, 54
electromagnetic spectrum 2, 5
    long wave infrared region 6
    short wave infrared region 6
electrostatic plotter


    CalComp 441
    Versatec 441
elevation model 291
    generating
        digital method 291
        traditional method 291
ellipse 153, 236
enhancement 34, 60, 125, 215
    linear 107, 133
    nonlinear 133
    on display 113, 122, 126
    radar data 126
    radiometric 125, 132, 143
    spatial 125, 132, 143
    spectral 125, 153
entity (AutoCAD) 91
EOF (end of file) 22
EOSAT 19, 32, 57, 85
EOV (end of volume) 22
ephemeris data 283
ephemeris information 469
epipolar geometry 289
epipolar image pair 293
epipolar stereopair 289
equal area
    see equivalence
equidistance 417
Equidistant Conic 525, 546
Equidistant Cylindrical 587
Equirectangular 527
equivalence 417
ERDAS macro language (EML) 371
ERDAS Version 7.X 27, 52, 87
EROS Data Center 85
error matrix 259
ERS-1 66, 68, 69
    ordering 86
ERS-2 69
ESRI 39, 359
ETAK 49, 51, 53, 93


Euclidean distance 250, 254, 460
expected value 452
exposure station 266, 280
extent 405

F
.fsp.img file 224
false color 59
false easting 422
false northing 422
fast format 22
Fast Fourier Transform 176
feature based matching 297
feature collection 307
    monoscopic 307
    stereoscopic 307
    work flow 307
feature extraction 125
feature point matching 297
feature space 458
    area of interest 221, 225
    image 224, 459
fiducials 273
field 367
file
    .fsp.img 224
    .gcc 317
    .GIS 27, 87
    .gmd 385
    .img 27, 106, 468
    .LAN 27, 87
    .mdl 390
    .ovr 404
    .sig 454
    .tif 440
    archiving 32
    header 24
    output 26, 31, 335, 336, 390
        .img 135
        classification 259


    pixel 98
    tic 41
file format 465
    HFA 485
    IMAGINE 468
    MIF 475
    vector layers 515
file name 31
    extensions 465
film recorder 440
filter
    adaptive 151
    Frost 192, 198
    Gamma-MAP 192, 199
    homomorphic 189
    Lee 192, 195
    Lee-Sigma 192
    local region 192, 193
    mean 192
    median 19, 192, 193
    periodic noise removal 188
    Sigma 195
    zero-sum 203
filtering 144
    see also convolution filtering
focal analysis 19
focal length 272
focal operation 35, 375
focal plane 272
Förstner Operator 297
4 mm tape 23, 25
Fourier analysis 126
Fourier magnitude 176
    calculating 178
Fourier Transform
    calculation 178
    Editor
        window functions 185
    inverse 176
    neighborhood techniques 176


    noise removal 188
    point techniques 176
frequency
    statistical 446
Frost filter 192, 198
function memory 126
Fuyo 1 66
    ordering 86

G
.gcc file 317
.GIS file 27, 87
.gmd file 385
GAC (Global Area Coverage) 62
Gamma-MAP filter 192, 199
Gaussian distribution 196
Gauss-Krüger 593
GCP 316, 461
    corresponding 316
    digitizing 316, 317
    matching 329
    minimum required 328
    prediction 329
    selecting 316
General Vertical Near-side Perspective 529
geocentric coordinate system 264
geocoded data 22
geographic information system
    see GIS
geology 68
geometric distortion 299
georeference 312, 437
gigabyte 20
GIS 1
    data base 361
    defined 360
    history 359
glaciology 68
global operation 35
Gnomonic 533


gradient kernel 202
graphical model 126, 371
    convert to script 390
    create 384
graphical modeling 372, 383
graticule 410, 422
gray scale 348
great circle 417
GRID 52, 87, 88
grid cell 1, 98
grid line 410
ground control point
    see GCP
ground coordinate system 264
ground truth 215, 221, 222, 236

H
halftone 441
hardcopy 437
hardware 97
header
    file 22, 24
    record 22
hierarchical file architecture (HFA) 485
High Resolution Visible (HRV) sensors 280
histogram 132, 232, 366, 368, 446, 459, 460
    breakpoint 136
    signature 236, 242
histogram equalization
    formula 139
histogram match 140
homogeneity
    spatial frequency 149
homomorphic filtering 189
host workstation 97
Hotine 548
HRPT (High Resolution Picture Transmission) 62
HRV sensors 280
hydrology 68


hyperspectral data 11
hyperspectral image processing 126

I
.img file 2, 27, 468
file
    .img 2
ideal window 185
IFOV (instantaneous field of view) 16
IGES 49, 51, 53, 94
image 1, 98, 125
    airborne 51
    complex 53
    digital 47
    microscopic 51
    pseudo color 45
    radar 51
    raster 47
    ungeoreferenced 41
image algebra 166, 219
Image Catalog 31, 32
image coordinate system 263
image data 1
image display 2, 97
image file 1, 106, 348
    statistics 446
Image Information 108, 115, 312, 314, 366
Image Interpreter 10, 19, 126, 135, 151, 160, 206, 232, 349, 352, 354, 356, 365, 369, 371
    functions 127
image matching 293
    area based 294
    feature based 297
    feature point 297
image processing 1
image pyramid 293
image scale 266
image space coordinate system 263
import
    direct 51


    generic 52
inclination 283
index 372, 380
    vegetation 10
INFO 44, 91, 92, 94, 95, 367
    path 44
    see also ARC/INFO
information (vs. data) 360
interior orientation 272
International Dateline 422
interval
    classes 337, 365
    data 3
inverse Fast Fourier Transform 176
IRIS Color Inkjet Printer 442
ISODATA 472

J
Jeffries-Matusita distance 240
JERS-1 69
    ordering 86

K
Kappa coefficient 259
Kodak XL7700 Continuous Tone Printer 442
kurtosis 205

L
.LAN file 27, 87
Laborde Oblique Mercator 588
LAC (Local Area Coverage) 62
Lambert Azimuthal Equal Area 535
Lambert Conformal 563
Lambert Conformal Conic 538, 586, 589
Lambertian reflectance model 356
Landsat 10, 18, 28, 47, 52, 55, 125, 131, 151, 561
    description 55, 57
    history 57
    MSS 16, 57, 58, 129, 131


    ordering 84
    TM 9, 10, 15, 22, 58, 150, 193, 212, 314, 317
        displaying 106
Laplacian operator 202, 203
Latitude/Longitude 81, 312, 422, 531
    rectifying 336
layer 2, 363, 382
least squares correlation 295
least squares regression 318, 324
Lee filter 192, 195
Lee-Sigma filter 192
legend 409
level 1A data 268
level 1B data 268, 319
level slice 140
Light SAR 69
line 40, 45, 53, 92
line detection 200
line dropout 19, 130
linear regression 131
linear transformation 319, 324
lines 280
Linotronic Imagesetter 441
local region filter 192, 193
lookup table 99, 133
    display 136
Lowtran 9, 131

M
.mdl file 390
Machine Independent Format (MIF) 475
magnification 98, 120, 121
Mahalanobis distance 254
map 400
    accuracy 427, 434
    aspect 400
    base 400
    bathymetric 400
    book 437
    cadastral 400


    choropleth 400
    colors in 403
    composite 400
    composition 432
    contour 400
    credit 412
    derivative 400
    hardcopy 435
    index 400
    information 473
    inset 400
    isarithmic 400
    isopleth 400
    label 412
    land cover 215
    lettering 414
    morphometric 400
    outline 400
    output to TIFF 440
    paneled 437
    planimetric 61, 400
    printing 437
        continuous tone 442
        with black ink 443
    qualitative 402
    quantitative 402
    relief 401
    scale 427, 438
    scaled 437, 441
    shaded relief 401
    slope 401
    thematic 401, 402
    title 412
    topographic 61, 401
    typography 412
    viewshed 401
Map Composer 399, 439
map coordinate 314, 316
    conversion 345
map feature collection 307


map projection 311, 314, 416, 474
    azimuthal 416, 419, 427
    compromise 416
    conformal 428
    conical 416, 419, 427
    cylindrical 416, 420, 427
    equal area 428
    external 423, 585
    gnomonic 419
    modified 421
    orthographic 419
    planar 416, 419
    pseudo 421
    selecting 427
    stereographic 419
    types 419
    units 424
    USGS 423, 518
MapBase
    see ETAK
mapping 399
mask 375
matrix 462
    analysis 372, 381
    contingency 236, 238
    covariance 157, 235, 240, 253, 455
    error 259
    transformation 315, 329, 334, 461
matrix algebra
    and transformation matrix 463
    multiplication 463
    notation 462
    transposition 464
maximum likelihood
    classification decision rule 238
mean 108, 136, 237, 450, 451, 454
    of ISODATA clusters 229
    vector 240, 457
mean Euclidean distance 205
mean filter 192


measurement 48
measurement vector 456, 458, 462
median filter 192, 193
megabyte 20
Mercator 421, 541, 544, 548, 578, 583, 587
meridian 422
microscopic imagery 51
Microsoft Windows NT 97, 105
MIF data dictionary 483
Miller Cylindrical 544
minimum distance
    classification decision rule 231, 238
Minnaert constant 357
model 382, 384
Model Maker 126, 371, 382
    criteria function 389
    functions 386
    object 387
        data type 388
        matrix 387
        raster 387
        scalar 387
        table 387
    working window 388
modeling 35, 382
    and image processing 383
    and terrain analysis 347
    using conditional statements 389
Modified Polyconic 589
Modified Stereographic 590
Modified Transverse Mercator 546
Modtran 9, 131
Mollweide Equal Area 591
monoscopic collection 307
Moravec Operator 297
mosaic 33, 313
multiplicative algorithm 151
multispectral imagery 55, 60
multitemporal imagery 33


N
nadir 55, 283, 302
nadir line 302
nadir point 302
NASA 57, 64, 70
NASA/JPL 69
natural-color 106
nearest neighbor
    see resample
neatline 410
neighborhood analysis 372, 375
    boundary 376
    density 376
    diversity 376
    majority 376
    maximum 376
    mean 376
    median 376
    minimum 376
    minority 377
    rank 377
    standard deviation 377
    sum 377
9-track tape 23, 25
NOAA 62
node 40
    dangling 397
    from-node 40
    pseudo 397
    to-node 40
noise removal 188
nominal
    classes 337, 365
    data 3
Non-Lambertian reflectance model 356, 357
nonlinear transformation 321, 325, 327
normal distribution 153, 251, 252, 253, 450, 454, 459
Normalized Difference Vegetation Index (NDVI) 11, 166


NPO Mashinostroenia 69

O
.ovr file 404
Oblique Mercator 548, 563, 588, 592
oceanography 68
off-nadir 60, 283
offset 319
oil exploration 68
1:24,000 scale 82
1:250,000 scale 82
opacity 119, 368
optical disk 23
orbit 280
order
    of polynomial 461
    of transformation 461
ordinal
    classes 337, 365
    data 3
orientation angle 284
orthocorrection 83, 132, 209, 298, 306, 308, 312
orthogonal 298
orthogonal distance 272
Orthographic 551
orthographic projection 298, 299
orthoimage 299
orthomap 308
orthorectification 298, 308, 312
output file 26, 335, 336, 390
    .img 135
    classification 259
overlay 372, 379
overlay file 404

P
panchromatic imagery 55, 60
parallel 422
parallelepiped
    alarm 236


parameter 454
parametric 252, 253, 454
pattern recognition 215, 236
periodic noise removal 188
perspective center 272, 280
photogrammetric processing 270
photogrammetric quality scanners 269
photogrammetry 262
    work flow 265
photograph 51, 54, 151
    aerial 70
    ordering 85
pixel 1, 98, 281
    depth 97
    display 98
    file vs. display 98
    size 281
pixel coordinate system 263
Plane Table Photogrammetry 262
Plate Carrée 527, 587, 594
plotter
    CalComp Electrostatic 441
    Versatec Electrostatic 441
point 40, 45, 53, 92
    label 40
point ID 316
Polar Stereographic 554, 589
pollution monitoring 68
Polyconic 546, 557, 589
polygon 40, 45, 53, 92, 171, 222, 375, 394
polynomial 323, 461
PostScript 439
precision 287
Preference Editor 114
Prewitt kernel 202
primary color 98, 443
    RGB vs. CMY 443
principal component band 154
principal components 34, 149, 150, 153, 158, 212, 219, 455, 459, 460


    computing 156
principal point 272
printer
    Canon PostScript Intelligent Processing Unit 441
    IRIS Color Inkjet 442
    Kodak XL7700 Continuous Tone 442
    Linotronic Imagesetter 441
    PostScript 439
    Tektronix Inkjet 441
    Tektronix Phaser 441
    Tektronix Phaser II SD 442
probability 241, 252
profile 81
projection
    perspective 575
Projection Chooser 423
proximity analysis 372, 373
pseudo color 59
    display 364
pseudo color image 45
pushbroom scanner 60, 261, 268
pyramid layer 30, 114, 474
Pythagorean Theorem 460

R
Radar 5, 19, 67, 151, 202, 203, 204
radar imagery 51
RADARSAT 52, 69
    ordering 86
radiative transfer equation 9
RAM 113
range line 208
Raster Attribute Editor 111, 368, 486
raster editing 35
raster image 47
raster region 374
ratio
    classes 337, 365
    data 3


Rayleigh scattering 8
recode 35, 372, 378, 382
record 24, 367
    logical 24
    physical 24
rectification 32, 311
    process 315
Rectified Skew Orthomorphic 592
reduction 121
reference coordinate 315, 316
reference pixel 258
reference plane 264, 299
reference windows 294
reflect 319, 320
reflection spectra 6
registration 311
    vs. rectification 315
regular block of photos 267
relation based matching 297
remote sensing 311
    vs. scanning 71
report
    generate 369
resample 311, 314, 335
    Bilinear Interpolation 113, 335, 338
    Cubic Convolution 113, 335, 341
    for display 113
    Nearest Neighbor 113, 335, 337, 341
residuals 287, 330
resolution 15
    display 97
    merge 150
    radiometric 15, 17, 18, 57
    spatial 15, 18, 141, 438
    spectral 15, 18
    temporal 15, 18
Restoration 335
retransformed coordinate 330, 338
RGB monitor 99
RGB to IHS 211


rhumb line 417
right hand rule 264
RMS error 318, 329, 330, 334
    tolerance 333
    total 332
roam 121
Robinson Pseudocylindrical 592
robust estimation 288
Root Mean Square Error (RMSE) 269
rotate 319
rubber sheeting 321

S
.sig file 454
Sanson-Flamsteed 559
SAR 64
satellite 54
    imagery 5
    system 54
scale 15, 319, 405, 437
    display 438
    equivalents 406
    large 15
    map 438
    paper 439
        determining 440
    pixels per inch 407
    representative fraction 405
    small 15
    verbal statement 405
scale bar 405
scaled map 441
scanner 54
scanning 71, 314
scanning window 375
scattering 7
    Rayleigh 8
scatterplot 153, 232, 459
    feature space 224
scene 280


screendump command 88
Script Librarian 391
script model 126, 371
    data type 393
    library 390
    statement 392
script modeling 372
SDTS 49
search windows 294
Seasat-1 64
seat 97
secant 419
seed properties 223
sensor 54
    active 6, 66
    passive 6, 66
    radar 69
separability
    listing 240
    signature 238
7.5-minute DEM 82
shaded relief 347, 354
    calculating 355
shadow
    enhancing 134
ship monitoring 68
Sigma filter 195
Sigma notation 445
signature 216, 217, 221, 224, 235
    alarm 236
    append 242
    contingency matrix 238
    delete 242
    divergence 236, 239
    ellipse 236, 237
    evaluating 236
    file 454
    histogram 236, 242
    manipulating 224, 242
    merge 242


    non-parametric 216, 225, 235
    parametric 216, 235
    separability 238, 240
    statistics 236, 242
    transformed divergence 239
Simple Conic 525
Simple Cylindrical 527
Sinusoidal 421, 559
SIR-A 66, 69
    ordering 86
SIR-B 66, 69
    ordering 86
SIR-C 69
    ordering 86
skew 319
skewness 205
slant-to-ground range correction 209
SLAR 64, 66
slope 347, 349
    calculating 349
Softcopy Photogrammetry 262
source coordinate 316
Southern Orientated Gauss Conformal 593
Space Oblique Mercator 421, 548, 561
spatial frequency 143
Spatial Modeler 19, 53, 126, 357, 365, 382
Spatial Modeler Language 126, 371, 382, 390
speckle noise 67
    removing 192
speckle suppression 19
    local region filter 193
    mean filter 192
    median filter 192, 193
    Sigma filter 195
spectral dimensionality 456
spectral distance 238, 250, 254, 257, 460
    in ISODATA clustering 228
spectral space 154, 156, 458
spectroscopy 6
spheroid 429


SPOT 10, 15, 18, 19, 28, 32, 47, 52, 55, 60, 78, 95, 125, 131, 151, 317
    ordering 84
    panchromatic 15, 150
    XS 60
        displaying 106
SPOT bundle adjustment 286
standard deviation 108, 136, 237, 287, 451
    sample 453
standard meridian 417, 420
standard parallel 417, 419
State Plane 424, 429, 538, 563, 578
statistics 30, 446, 472
    signature 236
Stereographic 554, 575
stereopair 288
    aerial 288
    epipolar 289
    SPOT 289
stereo-scene 283
stereoscopic collection 307
stereoscopic imagery 61
strip of photographs 266
striping 19, 156, 193
subset 33
summation 445
sun angle 354
Sun Raster 52, 87, 88
sun-synchronous orbit 280
surface generation
    weighting function 38
swath width 54
symbol 411
    abstract 411
    function 411
    plan 411
    profile 411
    replicative 411
symbolization 39
symbology 45


T
.tif file 440
tangent 419
tape 20
Tasseled Cap transformation 159, 166
Tektronix
    Inkjet Printer 441
    Phaser II SD 442
    Phaser Printer 441
texture analysis 204
thematic data
    see data
theme 363
threshold 255
thresholding 254
thresholding (classification) 251
tick mark 410
tie point 278, 287
TIFF 52, 71, 87, 88, 439, 440
TIGER 49, 51, 53, 95
    disk space requirement 96
tiled format 470
TIN 292
topocentric coordinate system 264
topographic database 308
topographic effect 356
topographic map 308
topology 41, 395
    build 395
    clean 395
    constructing 395
total field of view 54
total RMS error 332
training 215
    supervised 215, 219
    supervised vs. unsupervised 219
    unsupervised 216, 219, 227
training field 221
training sample 221, 224, 258, 313, 459
    defining 222


    evaluating 224
training site
    see training field
transformation
    1st-order 319
    linear 319, 324
    matrix 318, 322, 330
    nonlinear 321, 325, 327
    order 318
transformation matrix 315, 318, 322, 329, 330, 334, 461
transposition 253, 464
transposition function 239, 252, 253
Transverse Mercator 546, 563, 578, 580, 587, 593
triangulated irregular network (TIN) 292
triangulation 270
    accuracy measures 287
    aerial 269, 271
    SPOT 280
true color 59
true direction 417
type style
    on maps 413

U
ungeoreferenced image 41
Universal Polar Stereographic 554
Universal Transverse Mercator 546, 578, 580
UTM
    see Universal Transverse Mercator

V
Van der Grinten I 583, 591
variable 364
    in models 393
variance 205, 251, 452, 453, 454, 455
    sample 452
vector 462
vector data
    see data


vector layer 41
velocity vector 284
Versatec Electrostatic Plotter 441
vertex 40
Viewer 113, 114, 126, 372
    dithering 116
    linking 120
volume 25
    set 25
VPF 49

W
weight factor 380
    classification
        separability 240
weighting function (surfacing) 38
windows
    correlation 294
    reference 294
    search 294
Winkel’s Tripel 594
workspace 42

X
X residual 330
X RMS error 332
X Window 97
XSCAN 71

Y
Y residual 330
Y RMS error 332

Z
zero-sum filter 146, 203
zone 72
zone distribution rectangle (ZDR) 72
zoom 120, 121
