Introduction to computer graphics
Transcript of Introduction to computer graphics
Introduction to computer graphics
Year 3A
What is a digital image? What is a pixel?
Any image from a scanner, from a digital camera, or in a computer is a digital image. Computer images have been "digitized", a process which converts the real-world color picture into numeric computer data consisting of rows and columns of millions of color samples measured from the original image.
How does a camera make an image? How is it able to tell the little girl from the tree or from the pickup truck? It simply cannot, of course; the camera is indescribably dumb about the scene, compared to human brains. All the camera can see is a blob of light, which it attempts to reproduce, whatever it is (it has no clue what it is).
An image is a visual representation of something. In information technology, the term has several usages:
1) An image is a picture that has been created or copied and stored in electronic form. An image can be described in terms of vector graphics or raster graphics. An image stored in raster form is sometimes called a bitmap. An image map is a file containing information that associates different locations on a specified image with hypertext links.
What is a Pixel?
Learn a few digital camera basics. What is a pixel, anyway? What information does it store? How many megapixels do you need? Learn about RGB, CMYK, color depth and resolution.
A Pixel
The word "pixel" means a picture element. Every photograph, in digital form, is made up of pixels. They are the smallest unit of information that makes up a picture. Usually round or square, they are typically arranged in a 2-dimensional grid.
The pixel (a word invented from "picture element") is the basic unit of programmable color on a computer display or in a computer image. Think of it as a logical, rather than a physical, unit. The physical size of a pixel depends on how you've set the resolution for the display screen. If you've set the display to its maximum resolution, the physical size of a pixel will equal the physical size of the dot pitch (let's just call it the dot size) of the display. If, however, you've set the resolution to something less than the maximum resolution, a pixel will be larger than the physical size of the screen's dot (that is, a pixel will use more than one dot).
In the image below, one portion has been magnified many times over so that you can see its individual composition in pixels. As you can see, the pixels approximate the actual image. The more pixels you have, the more closely the image resembles the original.
Resolution
The number of pixels in an image is sometimes called the resolution, even though this is a bit of a misuse of the term. If we are using the term to describe pixel count, one convention is to express resolution as the width by the height, for example a monitor resolution of 1280x1024. This means there are 1280 pixels from one side to the other, and 1024 from top to bottom.
Another convention is to express the number of pixels as a single number, like a 5 megapixel camera (a megapixel is a million pixels). This means the pixels along the width multiplied by the pixels along the height of the image taken by the camera equals 5 million pixels. In the case of our 1280x1024 monitor, it could also be expressed as 1280 x 1024 = 1,310,720, or 1.31 megapixels.
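To make the arithmetic concrete, here is a minimal Python sketch; the function name is illustrative, not something from the original slides:

```python
# Pixel count ("resolution" as a single number) is just width x height.
def megapixels(width: int, height: int) -> float:
    """Total pixels expressed in megapixels (1 megapixel = 1,000,000 pixels)."""
    return width * height / 1_000_000

# The 1280x1024 monitor from the text:
print(megapixels(1280, 1024))  # ~1.31 megapixels
```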
So, How Many Pixels Do I Need?
Now that we've answered the question "What is a Pixel?" let's examine how many of them you need in your image.
Image resolution describes the amount of detail that an image contains. The term can be applied to digital images, film images, and prints. The bottom line is that higher resolution means more image detail.
Camera manufacturers are always trying to sell you on the number of megapixels. The fact is, from a strictly megapixel point of view, most camera phones have "enough" for the average home user.
The answer to how many pixels are "enough" depends on what you want to do with the image, and how big you want to enlarge it. As you see from the image above, which is a fairly low resolution image, when I blow it up too much, I start to see the individual pixels. That effect is called "pixelation."
For excellent quality prints, you'd ideally like a minimum of 240 pixels per inch in each dimension. This means for a 4"x6" print, you need 240x4 pixels in the width, and 240x6 pixels in the height. That's 960px wide x 1440px high. Multiplied together, that's 1,382,400 pixels, or approximately 1.4 megapixels. By the same token, to make a decent 8"x10" print, you'd need a 4.6 megapixel camera.
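A small sketch of the same print-size arithmetic (the function name is hypothetical):

```python
# Pixels needed to print at a target density (ppi = pixels per inch).
def pixels_needed(width_in: float, height_in: float, ppi: int = 240) -> int:
    """Total pixel count for a print of the given size at the given density."""
    return round(width_in * ppi) * round(height_in * ppi)

print(pixels_needed(4, 6))    # 960 x 1440 = 1382400 (~1.4 megapixels)
print(pixels_needed(8, 10))   # 1920 x 2400 = 4608000 (~4.6 megapixels)
```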
Keep in mind that for a point and shoot camera, beyond a certain point (probably around 4 to 5 megapixels), more megapixels will not necessarily yield a better image. Other issues, like lack of overall image sharpness due to poor image or lens quality, or poor lighting, will limit the usefulness of more megapixels.
Color Information
What is a pixel used for? Each pixel stores color information for your image. It will usually store it in either 3 components, known as RGB (Red, Green, Blue), or 4 components, known as CMYK (Cyan, Magenta, Yellow, blacK).
The number of distinct colors that can be represented by a pixel depends on the amount of information stored for each pixel. Information is stored as bits. The more bits per pixel (bpp) that are stored, the more colors a pixel can represent. For example, in the simplest case, if only a single bit of information is stored for a pixel, then it can be "on" or "off" -- black or white. The actual number of bits used to represent the color of a single pixel is known as color depth, or bit depth.
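The bits-to-colors relationship can be sketched in a couple of lines: a pixel with b bits can take 2^b distinct values.

```python
# Number of distinct colors representable at a given bit depth.
def distinct_colors(bits_per_pixel: int) -> int:
    return 2 ** bits_per_pixel

print(distinct_colors(1))   # 2 (black or white, the single-bit case above)
print(distinct_colors(8))   # 256 (typical grayscale or palette image)
print(distinct_colors(24))  # 16777216 ("true color": 8 bits each for R, G, B)
```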
What is image processing?
It is enhancing an image, or extracting information or features from an image.
Computerized routines for information extraction (e.g., pattern recognition, classification) from remotely sensed images to obtain categories of information about specific features.
Many more
Image Processing Includes
Image quality and statistical evaluation
Radiometric correction
Geometric correction
Image enhancement and sharpening
Image classification
  Pixel based
  Object-oriented based
Accuracy assessment of classification
Post-classification and GIS
Change detection
GEO5083: Remote Sensing Image Processing and Analysis, spring 2012
Image Quality
Many remote sensing datasets contain high-quality, accurate data. Unfortunately, sometimes error (or noise) is introduced into the remote sensor data by:
the environment (e.g., atmospheric scattering, cloud),
random or systematic malfunction of the remote sensing system (e.g., an uncalibrated detector creates striping), or
improper pre-processing of the remote sensor data prior to actual data analysis (e.g., inaccurate analog-to-digital conversion).
[Figure: MODIS true-color image showing cloud cover.]
Clouds in ETM+
Striping Noise and Removal
CPCA: Combined Principal Component Analysis
Xie et al. 2004
Speckle Noise and Removal
G-MAP: Gamma Maximum A Posteriori Filter
Blurred objects and boundaries
Univariate descriptive image statistics
The mode is the value that occurs most frequently in a distribution and is usually the highest point on the curve (histogram). It is common, however, to encounter more than one mode in a remote sensing dataset.

The median is the value midway in the frequency distribution. One-half of the area below the distribution curve is to the right of the median, and one-half is to the left.

The mean is the arithmetic average and is defined as the sum of all brightness value observations divided by the number of observations.
$$\mu_k = \frac{1}{n}\sum_{i=1}^{n} BV_{ik}$$
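As an illustration, Python's standard statistics module computes all three measures of central tendency on a hypothetical list of brightness values:

```python
import statistics

# Hypothetical single-band brightness values:
bv = [100, 130, 135, 135, 145, 165]

print(statistics.mode(bv))    # most frequent value: 135
print(statistics.median(bv))  # middle of the sorted distribution: 135
print(statistics.mean(bv))    # sum of observations / number of observations: 135
```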
Cont’d
Min
Max
Variance
Standard deviation
Coefficient of variation (CV)
Skewness
Kurtosis
Moment
$$\mathrm{var}_k = \frac{\sum_{i=1}^{n}\left(BV_{ik} - \mu_k\right)^2}{n - 1}$$

$$s_k = \sqrt{\mathrm{var}_k}$$

$$CV_k = \frac{s_k}{\mu_k}$$
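A minimal sketch of these three formulas in plain Python, using the five Band 1 brightness values (130, 165, 100, 135, 145) that appear in the worked example later in these notes:

```python
# Sample variance, standard deviation, and coefficient of variation
# for one band of five pixels.
bv = [130, 165, 100, 135, 145]
n = len(bv)

mean = sum(bv) / n                                # 135.0
var = sum((x - mean) ** 2 for x in bv) / (n - 1)  # sample variance: 562.5
s = var ** 0.5                                    # standard deviation: ~23.717
cv = s / mean                                     # coefficient of variation: ~0.176

print(mean, var, round(s, 2), round(cv, 3))
```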
Multivariate Image Statistics
Remote sensing research is often concerned with the measurement of how much radiant flux is reflected or emitted from an object in more than one band. It is useful to compute multivariate statistical measures such as covariance and correlation among the several bands to determine how the measurements covary. Variance-covariance and correlation matrices are used in remote sensing principal components analysis (PCA), feature selection, classification and accuracy assessment.
Covariance
The different remote-sensing-derived spectral measurements for each pixel often change together in some predictable fashion. If there is no relationship between the brightness value in one band and that of another for a given pixel, the values are mutually independent; that is, an increase or decrease in one band's brightness value is not accompanied by a predictable change in another band's brightness value. Because spectral measurements of individual pixels may not be independent, some measure of their mutual interaction is needed. This measure, called the covariance, is the joint variation of two variables about their common mean.
$$SP_{kl} = \sum_{i=1}^{n} BV_{ik}\,BV_{il} - \frac{\left(\sum_{i=1}^{n} BV_{ik}\right)\left(\sum_{i=1}^{n} BV_{il}\right)}{n}$$

$$\mathrm{cov}_{kl} = \frac{SP_{kl}}{n - 1}$$
Correlation
To estimate the degree of interrelation between variables in a manner not influenced by measurement units, the correlation coefficient is commonly used. The correlation between two bands of remotely sensed data, rkl, is the ratio of their covariance (covkl) to the product of their standard deviations (sksl); thus:
$$r_{kl} = \frac{\mathrm{cov}_{kl}}{s_k\,s_l}$$
If we square the correlation coefficient (rkl), we obtain the sample coefficient of determination (r2), which expresses the proportion of the total variation in the values of “band l” that can be accounted for or explained by a linear relationship with the values of the random variable “band k.” Thus a correlation coefficient (rkl) of 0.70 results in an r2 value of 0.49, meaning that 49% of the total variation of the values of “band l” in the sample is accounted for by a linear relationship with values of “band k”.
Example

Band 1   Band 2   (Band 1 x Band 2)
130      57       7,410
165      35       5,775
100      25       2,500
135      50       6,750
145      65       9,425
675      232      31,860  (sums)
$$SP_{12} = 31{,}860 - \frac{(675)(232)}{5} = 31{,}860 - 31{,}320 = 540$$

$$\mathrm{cov}_{12} = \frac{540}{5 - 1} = 135$$
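The same shortcut computation can be reproduced in a few lines of Python (values are the five-pixel sample from the table above):

```python
# Covariance between Band 1 and Band 2 via the sum-of-products shortcut.
band1 = [130, 165, 100, 135, 145]
band2 = [57, 35, 25, 50, 65]
n = len(band1)

# SP = sum of cross-products minus (sum of band1)(sum of band2)/n
sp = sum(a * b for a, b in zip(band1, band2)) - sum(band1) * sum(band2) / n
cov = sp / (n - 1)

print(sp)   # 540.0
print(cov)  # 135.0
```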
Pixel    Band 1 (green)   Band 2 (red)   Band 3 (NIR)   Band 4 (NIR)
(1,1)    130              57             180            205
(1,2)    165              35             215            255
(1,3)    100              25             135            195
(1,4)    135              50             200            220
(1,5)    145              65             205            235

Univariate statistics:
                 Band 1   Band 2   Band 3    Band 4
Mean (μk)        135      46.40    187       222
Variance (vark)  562.50   264.80   1007      570
(sk)             23.71    16.27    31.4      23.87
(mink)           100      25       135       195
(maxk)           165      65       215       255
Range (BVr)      65       40       80        60

Covariance:
         Band 1   Band 2   Band 3    Band 4
Band 1   562.50   -        -         -
Band 2   135      264.80   -         -
Band 3   718.75   275.25   1007.50   -
Band 4   537.50   64       663.75    570

Correlation coefficient:
         Band 1   Band 2   Band 3   Band 4
Band 1   -        -        -        -
Band 2   0.35     -        -        -
Band 3   0.95     0.53     -        -
Band 4   0.94     0.16     0.87     -
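A quick cross-check of the Band 1 / Band 2 correlation entry in plain Python (the helper function is illustrative):

```python
# r = cov / (s1 * s2); squaring r gives the coefficient of determination.
band1 = [130, 165, 100, 135, 145]
band2 = [57, 35, 25, 50, 65]

def sample_std(values):
    """Sample standard deviation (n - 1 in the denominator)."""
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / (len(values) - 1)) ** 0.5

cov12 = 135.0  # covariance of bands 1 and 2, from the table above
r12 = cov12 / (sample_std(band1) * sample_std(band2))

print(round(r12, 2))       # 0.35, matching the correlation matrix
print(round(r12 ** 2, 2))  # 0.12: ~12% of band 2's variation is
                           # explained by a linear relationship with band 1
```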
Types of radiometric correction
Detector error or sensor error (internal error)
Atmospheric error (external error)
Topographic error (external error)
Atmospheric correction
There are several ways to atmospherically correct remotely sensed data. Some are relatively straightforward while others are complex, being founded on physical principles and requiring a significant amount of information to function properly. This discussion will focus on two major types of atmospheric correction:
Absolute atmospheric correction, and
Relative atmospheric correction.
[Figure: Various Paths of Satellite Received Radiance. The labeled paths (1-5) include solar irradiance, diffuse sky irradiance, reflectance from the study area, and reflectance from neighboring areas, which together make up the total radiance at the sensor; the atmosphere (roughly 100 km, or 60 miles, deep) scatters, absorbs, refracts, and reflects the radiation before it reaches the remote sensor detector.]
Absolute atmospheric correction
Solar radiation is largely unaffected as it travels through the vacuum of space. When it interacts with the Earth's atmosphere, however, it is selectively scattered and absorbed. The sum of these two forms of energy loss is called atmospheric attenuation. Atmospheric attenuation may 1) make it difficult to relate hand-held in situ spectroradiometer measurements with remote measurements, 2) make it difficult to extend spectral signatures through space and time, and 3) have an impact on classification accuracy within a scene if atmospheric attenuation varies significantly throughout the image.
The general goal of absolute radiometric correction is to turn the digital brightness values (or DN) recorded by a remote sensing system into scaled surface reflectance values. These values can then be compared or used in conjunction with scaled surface reflectance values obtained anywhere else on the planet.
a) Image containing substantial haze prior to atmospheric correction. b) Image after atmospheric correction using ATCOR (Courtesy Leica Geosystems and DLR, the German Aerospace Centre).
Relative radiometric correction
When the required data are not available for absolute radiometric correction, we can do relative radiometric correction.
Relative radiometric correction may be used for:
Single-image normalization using histogram adjustment
Multiple-date image normalization using regression
Single-image normalization using histogram adjustment
The method is based on the fact that infrared data (>0.7 μm) are free of atmospheric scattering effects, whereas the visible region (0.4-0.7 μm) is strongly influenced by them.
Use Dark Subtract to apply atmospheric scattering corrections to the image data. The digital number to subtract from each band can be either the band minimum, an average based upon a user-defined region of interest, or a specific value.
Dark Subtract using band minimum
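A minimal sketch of the band-minimum variant in plain Python; this illustrates the idea only, and is not the actual implementation of any particular software package's Dark Subtract tool:

```python
# Dark-object (haze) subtraction, band-minimum variant: subtract each
# band's minimum DN from every pixel in that band, clamping at zero.
def dark_subtract(band):
    """band: list of digital numbers (DNs) for one band."""
    dark = min(band)  # the band minimum is assumed to be the scattering offset
    return [max(dn - dark, 0) for dn in band]

# Hypothetical DNs in a hazy visible band; the minimum (12) is treated
# as the additive atmospheric scattering component:
blue = [12, 47, 33, 95, 12, 60]
print(dark_subtract(blue))  # [0, 35, 21, 83, 0, 48]
```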
Topographic correction
Topographic slope and aspect also introduce radiometric distortion (for example, areas in shadow).
The goal of a slope-aspect correction is to remove topographically induced illumination variation so that two objects having the same reflectance properties show the same brightness value (or DN) in the image despite their different orientation to the Sun's position.
Based on DEM and sun elevation.
Concepts of geometric correction
Geocoding: geographical referencing.
Registration: geographically or nongeographically (no coordinate system).
Image to Map (or Ground) Geocorrection: the correction of digital images to ground coordinates using ground control points collected from maps (topographic map, DLG) or ground GPS points.
Image to Image Geocorrection: image-to-image correction involves matching the coordinate systems or column and row systems of two digital images, with one image acting as a reference image and the other as the image to be rectified.
Spatial interpolation: from input position to output position or coordinates. RST (rotation, scale, and translation), Polynomial, Triangulation.
Root Mean Square Error (RMS): the RMS is the error term used to determine the accuracy of the transformation from one system to another. It is the difference between the desired output coordinate for a GCP and the actual coordinate.
Intensity (or pixel value) interpolation (also called resampling): the process of interpolating data values to a new grid; this is the step in rectifying an image that calculates pixel values for the rectified grid from the original data grid. Nearest neighbor, Bilinear, Cubic.
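The RMS computation can be sketched as follows (the GCP coordinates are hypothetical):

```python
# RMS error of a geometric transformation over a set of ground control
# points (GCPs): the root-mean-square distance between where each GCP
# should land (desired) and where the transformation actually puts it.
def rms_error(desired, actual):
    """desired, actual: lists of (x, y) output coordinates."""
    n = len(desired)
    sq = sum((xd - xa) ** 2 + (yd - ya) ** 2
             for (xd, yd), (xa, ya) in zip(desired, actual))
    return (sq / n) ** 0.5

# Hypothetical GCPs with residuals of about one pixel each:
desired = [(10, 20), (40, 25), (15, 60), (55, 70)]
actual  = [(11, 20), (40, 26), (16, 61), (55, 70)]
print(round(rms_error(desired, actual), 3))  # 1.0
```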
Image enhancement
image reduction,
image magnification,
transect extraction,
contrast adjustments (linear and non-linear),
band ratioing,
spatial filtering,
Fourier transformations,
principal components analysis,
texture transformations, and
image sharpening
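As one concrete example from the list above, a linear (min-max) contrast stretch can be sketched in a few lines of Python (the DN values are hypothetical):

```python
# Linear contrast stretch: map the band's [min, max] DN range onto the
# full [0, 255] display range.
def linear_stretch(band):
    lo, hi = min(band), max(band)
    return [round((dn - lo) * 255 / (hi - lo)) for dn in band]

# A hypothetical low-contrast band occupying only DNs 100-150:
band = [100, 110, 125, 140, 150]
print(linear_stretch(band))  # [0, 51, 128, 204, 255]
```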
Purposes of image classification
Land use and land cover (LULC)
Vegetation types
Geologic terrains
Mineral exploration
Alteration mapping
…
What is image classification or pattern recognition?
It is a process of classifying multispectral (or hyperspectral) images into patterns of varying gray or assigned colors that represent either:
clusters of statistically different sets of multiband data, some of which can be correlated with separable classes/features/materials (this is the result of Unsupervised Classification), or
numerical discriminators composed of these sets of data that have been grouped and specified by associating each with a particular class whose identity is known independently and which has representative areas (training sites) within the image where that class is located (this is the result of Supervised Classification).
Spectral classes are those that are inherent in the remote sensor data and must be identified and then labeled by the analyst.
Information classes are those that human beings define.
Supervised classification: classes are identified a priori through a combination of fieldwork, map analysis, and personal experience, and training sites are selected; the spectral characteristics of these sites are used to train the classification algorithm for eventual land-cover mapping of the remainder of the image. Every pixel both within and outside the training sites is then evaluated and assigned to the class of which it has the highest likelihood of being a member.
Unsupervised classification: the computer or algorithm automatically groups pixels with similar spectral characteristics (means, standard deviations, covariance matrices, correlation matrices, etc.) into unique clusters according to some statistically determined criteria. The analyst then relabels and combines the spectral clusters into information classes.
Hard vs. Fuzzy classification
Supervised and unsupervised classification algorithms typically use hard classification logic to produce a classification map that consists of hard, discrete categories (e.g., forest, agriculture).
Conversely, it is also possible to use fuzzy set classification logic, which takes into account the heterogeneous and imprecise nature (mixed pixels) of the real world. A fuzzy classifier records the proportion of each of the m classes within a pixel (e.g., 10% bare soil, 10% shrub, 80% forest). Fuzzy classification schemes are not currently standardized.
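A minimal sketch of the idea, using the example proportions above (the class names are illustrative): a fuzzy (soft) result stores a membership proportion per class, and "hardening" it collapses the pixel to its majority class, discarding the sub-pixel mixture.

```python
# Fuzzy (soft) membership for one mixed pixel: a proportion per class.
# Class names and proportions are illustrative, taken from the example above.
memberships = {"bare_soil": 0.10, "shrub": 0.10, "forest": 0.80}

def hardened_label(memberships):
    # Hard classification keeps only the single most likely class,
    # discarding the sub-pixel mixture information.
    return max(memberships, key=memberships.get)
```

Here `hardened_label(memberships)` returns "forest", which is exactly the information loss that fuzzy classification avoids.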
Pixel-based vs. Object-oriented classification
In the past, most digital image classification was based on processing the entire scene pixel by pixel. This is commonly referred to as per-pixel (pixel-based) classification.
Object-oriented classification techniques allow the analyst to decompose the scene into many relatively homogeneous image objects (referred to as patches or segments) using a multi-resolution image segmentation process. The various statistical characteristics of these homogeneous image objects in the scene are then subjected to traditional statistical or fuzzy logic classification. Object-oriented classification based on image segmentation is often used for the analysis of high-spatial-resolution imagery (e.g., 1 × 1 m Space Imaging IKONOS and 0.61 × 0.61 m DigitalGlobe QuickBird).
Unsupervised classification
Uses statistical techniques to group n-dimensional data into their natural spectral clusters, and uses iterative procedures to label certain clusters as specific information classes. Two widely used algorithms are K-means and ISODATA.
For the first iteration, arbitrary starting values (i.e., the cluster properties) have to be selected. These initial values can influence the outcome of the classification.
In general, both methods first assign arbitrary initial cluster values. The second step classifies each pixel to the closest cluster. In the third step, new cluster mean vectors are calculated based on all the pixels in each cluster. The second and third steps are repeated until the "change" between iterations is small. The "change" can be defined in several different ways: either by measuring how far the mean cluster vectors have moved from one iteration to the next, or by the percentage of pixels that have changed clusters between iterations.
The ISODATA algorithm adds some further refinements by splitting and merging clusters. Clusters are merged if either the number of members (pixels) in a cluster is less than a certain threshold or if the centers of two clusters are closer than a certain threshold. Clusters are split into two different clusters if the cluster standard deviation exceeds a predefined value and the number of members (pixels) is twice the threshold for the minimum number of members.
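The basic K-means iteration described above (assign each pixel to the closest cluster, recompute the cluster means, repeat until the change is small) can be sketched as follows. This is a minimal NumPy illustration, not any package's implementation, and it uses a simple deterministic initialization in place of the arbitrary starting values discussed above.

```python
import numpy as np

def kmeans(pixels, k, max_iter=100, tol=1e-4):
    # Step 1: initial cluster centers; here k pixels spread evenly through
    # the data (real implementations usually pick arbitrary/random values,
    # which can influence the outcome of the classification).
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].astype(float)
    for _ in range(max_iter):
        # Step 2: assign each pixel to the closest cluster center.
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each cluster mean vector; keep the old center
        # if a cluster ends up empty.
        new_centers = centers.copy()
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                new_centers[c] = members.mean(axis=0)
        # Repeat steps 2-3 until the "change" in the mean vectors is small.
        if np.linalg.norm(new_centers - centers) < tol:
            return labels, new_centers
        centers = new_centers
    return labels, centers
```

The stopping test here measures how far the mean cluster vectors moved; the percentage-of-pixels-changed criterion mentioned above would work equally well.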
Supervised classification: training site selection
Training sites are identified a priori through a combination of fieldwork, map analysis, and personal experience, using:
- on-screen selection of polygonal training data (ROIs), and/or
- on-screen seeding of training data (ENVI does not have this; Erdas Imagine does). The seed program begins at a single x, y location and evaluates neighboring pixel values in all bands of interest. Using criteria specified by the analyst, the seed algorithm expands outward like an amoeba as long as it finds pixels with spectral characteristics similar to the original seed pixel. This is a very effective way of collecting homogeneous training information.
- from a spectral library of field measurements.
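The seed/region-growing idea can be sketched as follows. This is an illustrative single-band, 4-neighbor version under a simple "within tolerance of the seed value" criterion, not the actual ENVI or Erdas Imagine algorithm.

```python
from collections import deque
import numpy as np

def grow_region(image, seed, tol):
    """Grow a region outward from `seed` (row, col) over pixels whose
    value is within `tol` of the original seed pixel's value."""
    rows, cols = image.shape
    seed_val = image[seed]
    region = np.zeros_like(image, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        # Expand to 4-connected neighbors that are spectrally similar
        # to the original seed pixel.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]
                    and abs(image[nr, nc] - seed_val) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region
```

A multiband version would compare the full pixel vector against the seed vector (e.g., by Euclidean distance) rather than a single value.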
Selecting ROIs: example training classes include Alfalfa, Cotton, Grass, and Fallow.
Supervised classification methods
Various supervised classification algorithms may be used to assign an unknown pixel to one of m possible classes. The choice of a particular classifier or decision rule depends on the nature of the input data and the desired output. Parametric classification algorithms assume that the observed measurement vectors Xc obtained for each class in each spectral band during the training phase of the supervised classification are Gaussian; that is, they are normally distributed. Nonparametric classification algorithms make no such assumption.
Several widely adopted nonparametric classification algorithms include: one-dimensional density slicing, parallelepiped, minimum distance, nearest-neighbor, neural network, and expert system analysis.
The most widely adopted parametric classification algorithm is maximum likelihood.
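As an illustration of the simplest of these decision rules, a minimum-distance-to-means classifier assigns each pixel to the class whose mean vector is closest. This is a hedged sketch: the class means are assumed to come from training-site statistics, and the example values in the test are made up.

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """pixels: (n, bands) array; class_means: (m, bands) array of
    per-class mean vectors from the training phase.
    Returns the index of the closest class mean for each pixel."""
    # Euclidean distance from every pixel to every class mean.
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1)
```

Maximum likelihood extends this idea by also using each class's covariance matrix, so that the "distance" accounts for the Gaussian spread of the training data rather than treating all directions equally.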
Hyperspectral classification methods: Binary Encoding, Spectral Angle Mapper, Matched Filtering, Spectral Feature Fitting, and Linear Spectral Unmixing.
Source: http://popo.jpl.nasa.gov/html/data.html
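Of the methods listed, the Spectral Angle Mapper has a particularly compact core: it treats each spectrum as a vector in band space and measures the angle between a pixel spectrum and a reference spectrum, with a small angle meaning a close match. A minimal sketch of that core computation:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a
    reference spectrum, each a 1-D array of band values."""
    cos_theta = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    # Clip for numerical safety before taking the arccos.
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))
```

Because the angle ignores vector length, SAM is relatively insensitive to overall illumination differences: a brighter version of the same material points in the same direction in band space.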
Supervised classification method: Spectral Feature Fitting
Accuracy assessment of classification
Remote sensing-derived thematic information is becoming increasingly important. Unfortunately, it contains errors.
Errors come from five sources:
- geometric error is still present
- no atmospheric correction is perfect
- clusters incorrectly labeled after unsupervised classification
- training sites incorrectly labeled before supervised classification
- no classification method is perfect
We should identify the sources of error, minimize them, perform accuracy assessment, and create metadata before the results are used in scientific investigations and policy decisions.
We usually need GIS layers to assist our classification.
Post-classification and GIS
Classified maps often contain isolated, misclassified pixels (salt-and-pepper noise). Common post-classification operation types include:
- Majority/Minority Analysis
- Clump Classes
- Morphology Filters
- Sieve Classes
- Combine Classes
- Classification to vector (GIS)
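A 3×3 majority filter is one common way to implement majority analysis and suppress salt-and-pepper noise. The following is a simple sketch (border pixels are left unfiltered), not any specific package's implementation.

```python
from collections import Counter
import numpy as np

def majority_filter(class_map):
    """Replace each interior pixel of an integer class map with the
    most common class in its 3x3 neighborhood."""
    out = class_map.copy()
    rows, cols = class_map.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = class_map[r - 1:r + 2, c - 1:c + 2].ravel()
            # The center pixel takes the majority class of its window,
            # which removes isolated single-pixel misclassifications.
            out[r, c] = Counter(window).most_common(1)[0][0]
    return out
```

Sieve and clump operations generalize this idea: sieving removes connected groups of pixels below a size threshold, while clumping merges adjacent same-class regions.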
Change detection
Change detection involves the use of multi-temporal datasets to discriminate areas of land-cover change between dates of imaging.
Ideally, it requires:
- the same or similar sensor, resolution, viewing geometry, spectral bands, radiometric resolution, acquisition time of day, and anniversary dates
- accurate spatial registration (less than 0.5 pixel error)
Methods
- Independently classify and register each date, then compare the results
- Classification of combined multi-temporal datasets
- Principal components analysis of combined multi-temporal datasets
- Image differencing (subtracting); a change/no-change threshold must be found, and change areas fall in the tails of the histogram distribution
- Image ratioing (dividing); a change/no-change threshold must be found, and change areas fall in the tails of the histogram distribution
- Change vector analysis
- Delta transformation
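Image differencing from the list above can be sketched as follows. The threshold separating change from no change is assumed here; in practice it would be chosen so that flagged pixels fall in the tails of the difference histogram.

```python
import numpy as np

def difference_change_mask(date1, date2, threshold):
    """Subtract two co-registered images of the same area and flag
    pixels whose absolute difference exceeds the threshold."""
    diff = date2.astype(float) - date1.astype(float)
    # Large absolute differences (the tails of the difference
    # histogram) are flagged as "change".
    return np.abs(diff) > threshold
```

Image ratioing works the same way with division in place of subtraction; unchanged pixels then cluster around a ratio of 1 rather than a difference of 0.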
Example: stages of development
Sun City – Hilton Head (1994 and 1996 imagery)
1974: 1,040 urban hectares
1994: 3,263 urban hectares
315% increase