Digital Correction for High-Resolution Images*

DR. HAGOP MARKARIAN
RALPH BERNSTEIN
DALLAM G. FERNEYHOUGH
LEON E. GREGG
FREEMAN S. SHARP
I.B.M. Corp.
Gaithersburg, Md. 20760

An efficient, high-speed algorithm was applied to two images representative of those generated by the return beam vidicon used during the Earth Resources Technology Satellite Program.

ABSTRACT: An efficient, high-speed algorithm for applying geometric corrections to high-resolution images has been developed and implemented as a digital computer program. This algorithm was applied to two images representative of those to be generated by the return beam vidicon which will be used during the Earth Resources Technology Satellite Program. Processing of each image required only 80 seconds of CPU time and 450 kilobytes of memory on an IBM System/360 Model 65. In a companion experiment using the same two images, a simple "shadow casting" technique for detecting and locating reseau marks in the RBV data was found to be quite adequate for support of the geometric correction process.

IN 1972, NASA plans to launch the first in its Earth Resources Technology Satellite (ERTS) series.1,2 The first satellite (ERTS-A) will carry two sensor systems: a group of three multispectral return beam vidicon (RBV) cameras built by the RCA Corporation and a four-channel multispectral scanner (MSS) built by the Hughes Aircraft Company. It is hoped that the images generated by these sensors will provide information which will enable improved monitoring and management of the earth's resources and the detection of environmental conditions, which may lead to significant economic benefits.

The images generated by the ERTS sensors will suffer from a variety of geometric distortions. Estimates of the portion of the ERTS imagery which will have to undergo a precise correction for these distortions in order to permit the extraction of useful data vary from 5 percent to 100 percent.3,4 As the volume of images to be generated will be quite large, precision geometric correction of even a small percentage of the images can be a formidable task unless efficient computational techniques and equipment are used.

The power and the flexibility of digital techniques for processing image data, as opposed to conventional electro-optical processing, have long been recognized. However, as recently as 1970 it was believed that only the most massive, state-of-the-art computer facilities could cope with the ERTS processing load.5 Since 1969, IBM has been developing fast digital algorithms for the correction of high-resolution images on general-purpose digital computers.

*Manuscript received by the Editor in November 1971.

An automated procedure for removing geometric distortions from RBV images involves the mathematical modeling of the distortions. This requires the detection in the image data of reseau marks (to characterize distortions interior to the sensor) and geodetic control points (to characterize distortions exterior to the sensor such as earth curvature and camera attitude and altitude deviations). A computer reseau detection technique

termed shadow casting is described in this paper.

The sampled and digitized RBV image (the geometrically distorted input image) is considered to be a uniform, two-dimensional array of picture elements (pixels), each of which has a specific gray level. Because the array is uniform (i.e., in a rectangular coordinate frame with axes U and V, the input pixels lie only at integer values of U and V), only the gray levels are stored in the computer; that is, no coordinate data need be stored. The U, V coordinates of each input pixel are the column and row indices of its gray level in the input data array.

In order to take advantage of both this implied position property of data defined on a uniform array and the accuracy of film recording devices (e.g., laser beam and drum recorders) which record uniform rasters of image points, the geometrically correct output image is also defined as a uniform, two-dimensional array of pixels.

A pair of global, bivariate mapping polynomials of the form

    U = F(X, Y) = Σ_i Σ_j a_ij X^i Y^j
    V = G(X, Y) = Σ_i Σ_j b_ij X^i Y^j          (1)

are used to determine the U, V input image coordinates of the pixel at coordinates X, Y in the output image. These polynomials account for the low-frequency sensor-associated distortions (centering, size, skew, pincushion, barrel, and S-term) as well as for distortions caused by earth curvature and camera attitude and altitude deviations.

Initial values for the polynomial coefficients are obtained by performing a least-squared error fit to the vector differences between the observed and nominal reseau locations. A least-squared error fit of standard photogrammetric resection equations to the vector differences between the observed and nominal geodetic control-point locations is used to determine the attitude and altitude of the camera. This information is used to modify the polynomial coefficients to account for the effects of earth curvature and camera attitude and altitude.
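In present-day terms, the least-squared error fit of the bivariate mapping polynomials can be sketched as follows. This is purely illustrative (the paper predates such tools); the function name and the numpy-based approach are assumptions, not the authors' implementation.

```python
import numpy as np

def fit_mapping_polynomial(xy_nominal, uv_observed, order=2):
    """Least-squared-error fit of bivariate mapping polynomials.

    xy_nominal : (N, 2) nominal (X, Y) reseau locations in the output image.
    uv_observed: (N, 2) observed (U, V) reseau locations in the input image.
    Returns the monomial exponent list and coefficient vectors (a, b) such
    that U ~ sum a[k] * X**i * Y**j and V ~ sum b[k] * X**i * Y**j.
    """
    X, Y = xy_nominal[:, 0], xy_nominal[:, 1]
    # Design matrix of monomials X**i * Y**j with i + j <= order.
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    M = np.column_stack([X**i * Y**j for i, j in terms])
    coef_u, *_ = np.linalg.lstsq(M, uv_observed[:, 0], rcond=None)
    coef_v, *_ = np.linalg.lstsq(M, uv_observed[:, 1], rcond=None)
    return terms, coef_u, coef_v
```

The same machinery applies whether the fit targets reseau displacements (interior errors) or, after modification of the coefficients, the exterior effects described above.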

These global mapping polynomials will not account for distortions which are high-frequency and/or local in nature. It is anticipated that there will be no such errors in the sensors. The only high-frequency external error source is terrain relief, and it has been estimated that this will cause no significant error in the general case.13 Therefore, use of techniques which correct only low-frequency errors appears justified.

As Figure 1 shows, the U, V coordinates of an output pixel mapped onto the input image plane do not generally coincide with the coordinates of any input pixel (i.e., the values of U = F(X, Y) and V = G(X, Y) are not generally integers). The mapped output pixel generally lies somewhere within a square defined by the four surrounding input pixels, and some form of interpolation on the gray levels of the input pixels must be used to determine a gray level for the output pixel. Therefore, the general geometric correction procedure involves mapping each pixel of the output image into the input image plane and interpolating on the gray levels of the surrounding input pixels to determine the output gray level.

[Figure 1. Output image pixels mapped onto the input image plane.]

[Figure 2. Horizontal and vertical breaks. Legend: input image pixel; output image pixel mapped onto input image plane; gray level g_(m,n) of the (m,n)th pixel of the input image assigned to the (i,j)th pixel of the output image.]

    POINT SHIFT ALGORITHM

The point shift algorithm is a mapping procedure that is based on the recognition that a point mapped on the input image has a location which is at most one-half pixel spacing removed in each axis from some input image pixel location. Therefore, in all cases where the error budget allows a one-half pixel location error in this phase of image processing, it is possible to assign to each pixel in the output image the gray level of the nearest pixel in the input image. The consequence of this extends beyond the mere resolution of the interpolation problem for the gray-level assignment. It simplifies the entire mapping procedure and, as a result, significantly decreases the execution time of the geometric correction process.
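In modern notation, the nearest-neighbor assignment at the heart of the algorithm can be sketched as a whole-image operation (an illustrative numpy version with hypothetical names; the paper's implementation instead proceeds line by line with string moves, for the efficiency reasons developed in the following sections).

```python
import numpy as np

def point_shift_nearest(input_img, F, G):
    """Nearest-neighbor geometric correction (conceptual sketch).

    For every output pixel (X, Y), map into the input image with the
    polynomials U = F(X, Y), V = G(X, Y) and copy the gray level of the
    nearest input pixel (at most half a pixel away in each axis).  For
    simplicity the output raster is taken to be the same size as the input.
    """
    rows, cols = input_img.shape
    Y, X = np.mgrid[0:rows, 0:cols]          # output pixel coordinates
    U = np.rint(F(X, Y)).astype(int).clip(0, cols - 1)
    V = np.rint(G(X, Y)).astype(int).clip(0, rows - 1)
    return input_img[V, U]                   # gray level of nearest input pixel
```

With the identity mapping this reproduces the input unchanged; with a small distortion it simply shuffles gray levels, never interpolating them.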

The procedure is illustrated in Figure 2, where the large rectangle subdivided into square blocks represents a portion of an input image. Each square block constitutes the half-pixel neighborhood of the input pixel location marked at its center. The slanting line represents the mildly curved map of some horizontal line in the output plane. The points marked o on this line are the mapped locations of the points that make up the horizontal line in the output image. The short arrows indicate the nearest neighbor assignment of levels. The assignment rule is that if the input pixel with coordinates m, n (i.e., mth row and nth column) is the pixel closest to the actual mapped location of the (i, j)th output pixel on the input image, the gray level g_(m,n) is assigned to the (i, j)th output pixel.

Nearest neighbor assignment simplifies the entire mapping problem because the expected total geometric distortion is small. Thus, if a line segment is mapped from the output plane onto the input plane, the first pixel of the output line segment with coordinates i, j is assigned the gray level of the pixel m, n in the input image. That is, g_(i,j) = g_(m,n), and this relationship is maintained for a number of successive points on the line segment. Until this relationship breaks down, gray-level assignments can be made from input image to output image pixels as

    g_(i, j+k) = g_(m, n+k),   k = 1, 2, 3, . . .

The break in this pattern occurs for the (i, j+k) output pixel if it maps somewhere with a closest input image neighbor at the location

m, n+k ± 1, a horizontal break, or

m ± 1, n+k, a vertical break,

instead of mapping at m, n+k. Figure 2 illustrates both types of breaks.

It is clear that if, on the average, the breaks occur P pixels apart, input gray levels can be transferred to the output image in strings on the average P pixels long. If use is made of instructions which manipulate strings of data as single units (e.g., the move character instruction of IBM System/360 and System/370), the computer operations required to transfer the input gray levels to the output array will be reduced by a factor of P relative to the operations required to transfer the data one pixel at a time.
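The string-transfer idea can be illustrated with a small sketch. The helper below is hypothetical; a numpy slice assignment stands in for the System/360 move character instruction, and a constant excess spacing A per pixel (introduced formally later in the paper) determines the break interval.

```python
import numpy as np

def copy_between_breaks(input_line, out_len, start_n, A):
    """Transfer gray levels in strings terminated by horizontal breaks.

    A is the (assumed constant) excess horizontal spacing per mapped pixel,
    so a break occurs roughly every 1/|A| pixels.  Between breaks a whole
    string of input pixels is moved in one slice operation -- the analogue
    of moving P pixels with a single move-character instruction.
    """
    out = np.empty(out_len, dtype=input_line.dtype)
    run = int(round(1 / abs(A))) if A != 0 else out_len  # pixels per string
    i = 0           # output index
    n = start_n     # input index
    while i < out_len:
        p = min(run, out_len - i)
        out[i:i + p] = input_line[n:n + p]      # one "move" of P pixels
        i += p
        n += p + (1 if A > 0 else -1)           # skip, or reuse, one input pixel
    return out
```

For A = 0.1 the strings are 10 pixels long, so ten times fewer move operations are issued than with pixel-at-a-time transfer.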

In the implementation of the point shift algorithm, an irregularly spaced, rectangular grid of lines is established on the output image plane (see Figure 3). The intersections of the grid lines define a lattice of anchor points in the output image. The separation (horizontal and vertical) of the anchor points is chosen such that, if they are mapped from the output to the input image and connected through straight lines into a distorted grid, these lines at no place deviate more than a small fraction of the interpixel spacing (for instance, 0.1) from the curved lines that would have been obtained if every point on the output grid had been mapped.

It is possible to show that for an allowable maximum deviation equal to K, the separation of two anchor points on a horizontal output image line can be computed as

    [ΔX]_F = 2 [2K / max |∂²F/∂X²|]^(1/2).          (2)

A similar expression for [ΔX]_G, based on G(X, Y), can be obtained. An acceptable approximation to the solution of Equation 2 can be obtained by evaluating ΔX using partials evaluated at X, Y, storing that solution as dX, and reevaluating ΔX by using partials evaluated at (X + dX/2, Y).

[Figure 3. Anchor points and grid mesh on the output image plane.]

Figure 4 illustrates the geometry relating to Equation 2.

As the geometric error on an RBV image increases with distance from the center of the image, the grid resulting from computation of anchor points (i.e., grid intersections) according to Equation 2 is one with a large mesh at the center of the image and increasingly smaller meshes moving towards the periphery. The advantage of this is that, in comparison to a uniformly spaced grid, the nonuniform grid requires fewer meshes for the same maximum inaccuracy K. In one example based on typical ERTS RBV errors, a variable grid set for a maximum of 0.1-pixel interpolation error required only a 23 x 23 mesh (i.e., required mapping of 24 x 24 = 576 anchor points). A uniformly spaced grid for the same error would have required a 50 x 50 mesh (i.e., 51 x 51 = 2601 anchor points).

The purpose of mapping anchor points and, hence, the grid they represent from the output image onto the input image in the manner described is that the four anchor points that represent a single grid mesh are sufficient to determine, through geometric interpolation, the location of every point in the mesh. Therefore, by this means, the problem of the mapping (through Equation 1) of 1.7 x 10^7 pixels has been reduced to the precise mapping of a few hundred points (namely 576 in the example cited).

To complete the description of the algorithm, it is sufficient to describe the operation on the boundary and inside a single mesh, as the procedure repeats for each grid mesh in the image. The operation inside and on the boundary of a mesh consists of the following computations:

[Figure 5. Input mesh and output mesh.]

a. With reference to Figure 5, the location of every pixel on the leading vertical edge of the input grid mesh is computed by interpolation between the coordinates of the pixels a and c, (U_a, V_a) and (U_c, V_c), respectively. If the entire image is considered, for the cited example this amounts to 24 x 4096 interpolations.

b. On the leading vertical edge of each mesh the partials

    A = ∂U/∂X − 1

and

    B = ∂V/∂X

are computed by interpolation between their values at the points a and c (Figure 5), respectively. The values of A and B are assumed to be constant on horizontal lines of the mesh.

c. For each horizontal line in the output image, the break points are computed, one line at a time, in the input image. To do this, note that the partials A and B indicate the relative motion of the pixels mapped from output into input with respect to the pixel locations of the input image. Thus, for example, in the case of horizontal spacings, A > 0 indicates that the horizontal spacing between two consecutive mapped pixels, ΔU, is larger than the regular spacing of unity between the input pixel locations. Continued mapping of additional pixels would indicate the mapped locations gaining on the input pixel locations. Still as an example, if A = 0.1, it means that the spacing ΔU between two consecutive mapped pixels is 1.1. Therefore, after 1/A = 1/0.1 = 10 pixels, the total distance covered by the mapped pixels is 10 × 1.1 = 11 units instead of 10 units, and the nearest-neighbor relationship for this string of pixels has broken down. In this case, although the 10th mapped pixel derives its gray level from the 10th pixel of the input image, the 11th mapped pixel is assigned the gray level of the 12th input pixel. The 11th input pixel is skipped in this case.

If the value of the partial A is negative, the mapped pixels migrate leftward relative to the pixel locations in the input image. In this instance, at a break point the last input pixel gray level has to be used twice. For vertical break points, which involve the crossing of horizontal lines of data of the input image, the same reasoning applies. The partials B are used to compute the break points in this situation.

d. For strings of pixels terminated by horizontal and vertical break points, video values are assigned according to the nearest-neighbor rule. Once the next break point on a line has been determined as P pixels ahead, computer instructions (e.g., the move character instruction of the IBM System/360 family) can be used to move the P-pixel string of data points to the output image in one operation.

In order to transfer gray-level data from the input to the output array at high speed, the data should be resident in computer memory. Because a single output image line will map across a swath of input image lines, the entire swath of input data must be available to the correction process. For the ±4 percent combined worst case errors of ERTS RBV images, this could require more than 1.3 megabytes of data storage. However, if the output image is processed one line at a time, the terms of the mapping polynomials which are constant across an output image line can be evaluated and the problem can be reduced to that of accommodating ±1 percent random error, which requires less than 340 kilobytes of data storage.
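The storage figures quoted above can be checked with a few lines of arithmetic (assuming 4096-line images, one byte per pixel, and symmetric percentage errors; the function name is illustrative).

```python
def swath_bytes(image_dim=4096, error_fraction=0.01, bytes_per_pixel=1):
    """Data storage needed to hold the swath of input lines that a single
    output line can map across, for a given symmetric geometric error."""
    half_swath = round(image_dim * error_fraction)   # lines above and below
    lines = 2 * half_swath + 1
    return lines * image_dim * bytes_per_pixel

# 4 percent combined worst case: 329 lines, about 1.35 megabytes.
# 1 percent random error: 83 lines, just under 340,000 bytes.
```

Both results agree with the 1.3-megabyte and 340-kilobyte figures in the text.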

A reseau pattern composed of a 9 x 9 array of opaque cruciform marks is inscribed on the RBV faceplate to provide the means of determining the geometric distortion introduced by the sensor. The mathematical characterization of the sensor-caused error in a given image requires the detection and location of the nominally black reseau marks in that image. The vector differences between the actual locations and the undistorted locations of the reseau marks are used to compute the coefficients of the bivariate mapping polynomials pertaining to the internal errors of the RBV.

A geometric error variation of ±1 percent creates an uncertainty of ±41 pixels in the X and Y directions about the last known location of the reseau mark. Given that the reseau mark can be inscribed inside a square with side dimension of 32 pixels, a search area of 128 x 128 pixels allows sufficient coverage for 1 percent geometric error. The shape of the reseau mark inside a 128 x 128 search area is shown in Figure 6.
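The 128 x 128 search area is consistent with the following arithmetic; the rounding up to a power of two is an assumption about why 128 in particular was chosen.

```python
def search_area_side(image_dim=4096, error_fraction=0.01, mark_side=32):
    """Side of the square search area around the last known reseau
    location: the mark itself plus the +/- position uncertainty caused
    by the geometric error, rounded up to a power of two (assumed)."""
    uncertainty = round(image_dim * error_fraction)   # +/- 41 pixels
    needed = mark_side + 2 * uncertainty              # 32 + 82 = 114 pixels
    side = 1
    while side < needed:
        side *= 2
    return side
```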

    The reseau detection routine developed by IBM is based on the following operational sequence:

    1. The last known locations of reseau marks are inputs to the program.

2. Within blocks of 128 x 128 elements, each centered around a previous reseau mark location, individual row and column sums of pixel gray levels are computed. This operation is called shadow casting. Thus, along the nth column, the sum would be

    S_n = Σ (m = 1 to 128) g_(m,n)

where g_(m,n) is the gray level of the pixel located at the mth row and the nth column of the 128 x 128 block.
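Shadow casting itself is just a pair of axis sums. An illustrative numpy sketch (function name assumed):

```python
import numpy as np

def shadow_cast(block):
    """Row and column gray-level sums over a 128 x 128 search block.

    Returns (S_m, S_n): S_m[m] is the sum of row m and S_n[n] the sum of
    column n, as used to locate the arms of a cruciform reseau mark.
    """
    S_m = block.sum(axis=1)   # row sums
    S_n = block.sum(axis=0)   # column sums
    return S_m, S_n
```

On a white (zero) background with a 4-pixel-wide black cross, a column that misses the vertical arm sums to 4 × 63 = 252 and a column through it to 128 × 63 = 8064, matching the two extreme cases developed below.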

3. The reseau mark contained within a block is detected by the application of the detection algorithm to the row and column sequences (S_m) and (S_n). The algorithm, described with reference to Figure 7, is based on moving a quadratic along the column sequence (S_n), fitting it to three successive sums, and detecting the presence of sums at the locations n+1, n+2, n+3, n+4 whose values exceed the quadratic function at those locations by a computed dynamic threshold. That is, each point at these four locations is tested according to the condition

    S_(n+p) − Ŝ_(n+p) ≥ δ_n,   p = 1, 2, 3, 4          (3)

where

    S_(n+p) = actual sum at the n+p location,
    Ŝ_(n+p) = estimated value of the sum at the n+p location computed from the quadratic, and
    δ_n = dynamic threshold.

A similar procedure is used for the row sequence (S_m).

The expression used for the dynamic threshold function δ can be developed as follows. Let the gray level for a white pixel be assigned the value 0, and let 63 represent the gray level of a black (or noise-free reseau) pixel for 6-bit quantization. Furthermore, if P_av is the average gray level for areas not containing reseau pixels, it follows that for 128-element column (or row) sums across the reseau pattern,

    S_n = sum containing 4 reseau pixels = (128 − 4) P_av + 4 × 63          (5)

and

    S_n' = sum containing 32 reseau pixels = (128 − 32) P_av + 32 × 63.          (6)

Therefore:

    ΔS = S_n' − S_n = 28 (63 − P_av)          (7)

where substitution for P_av from Equation 5 gives

    ΔS = 1821 − 0.226 S_n.

The significance of Equation 7 can be seen by taking two extreme examples.

a. If the reseau is on an all-white background,

    S_n = 4 × 63 = 252

and

    ΔS = S_n' − S_n = 1821 − 0.226 (252) = 1764.

b. If the reseau is on an all-black background,

    S_n = 128 × 63 = 8064

and

    ΔS = 1821 − 0.226 (8064) ≈ 0.

Thus ΔS, the difference between S_n' and S_n, varies from a value of 0 for an all-black background (in which case the reseau is not detectable) to a value of 1764 for an all-white background. If the value of S_n is used as a measure of background average brightness, ΔS as a function of S_n is the expected excess of S_n' over S_n for the measured background brightness.

[Figure: reseau mark.]

About 50 percent of ΔS is used as a dynamic threshold in Equation 3 for reseau detection. That is, by dividing Equation 7 by 1.82, the threshold is obtained as

    δ_n = 1000 − 0.123 Ŝ_n

where Ŝ_n is the midpoint value of the estimator quadratic.
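The detection rule can be sketched in one dimension. The exact sums used to fit the estimator quadratic are not recoverable from this copy, so the sketch below fits the three sums immediately preceding the candidate location; that choice is an assumption, and this naive extrapolation can also fire on the trailing edge of a mark, which a production implementation would suppress.

```python
import numpy as np

def detect_reseau_1d(S):
    """Sliding quadratic detector over a sequence of shadow-cast sums.

    A quadratic is fitted to three successive sums (an assumption) and
    extrapolated to the next four locations.  A group is flagged when all
    four sums exceed the extrapolated background by the dynamic
    threshold delta = 1000 - 0.123 * S_hat.
    """
    hits = []
    for n in range(2, len(S) - 4):
        # Fit S[n-2..n] with a quadratic in relative position.
        coeffs = np.polyfit([-2, -1, 0], S[n - 2:n + 1], 2)
        ok = True
        for p in range(1, 5):
            S_hat = np.polyval(coeffs, p)    # extrapolated background sum
            delta = 1000 - 0.123 * S_hat     # dynamic threshold
            if S[n + p] - S_hat < delta:
                ok = False
                break
        if ok:
            hits.append(n + 1)               # first of the four excess sums
    return hits
```

On a flat background of 2000 with four consecutive sums raised by 1700 (roughly the ΔS expected for a mid-gray background), the first flagged location is the start of the raised group.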

If the nearest-neighbor assignment technique intrinsic to the point shift algorithm is applied to an image of a grid of lines whose spacing is some small number of pixels, the horizontal and vertical break points will produce visible staircase or herringbone patterns in the processed image. We believed that the characteristics of the ERTS RBV images would be such as to exhibit no such objectionable cosmetic effects after a correction process using nearest-neighbor assignment. In order to test this hypothesis, to gain accurate information on the execution time of the point shift algorithm, and to investigate the efficacy of the shadow casting reseau detection technique, the techniques described above were experimentally reduced to practice and quantitative results were obtained.8

The steps of this experiment are shown in Figure 8. Two simulated ERTS RBV images were generated by scanning and digitizing two Gemini photographs on an IBM drum scanner/recorder. A 2-mil square spot was used in scanning and digitizing the 7.5-inch x 7.5-inch images, resulting in 3,750 lines of 3,750 samples for each image. Samples were quantized to 6 bits, giving 64 distinct gray levels. In order to approximate more closely the RBV image size, a uniform border was added to the data to expand each image to 4,096 lines of 4,096 samples.

[Figure 8. Experiment data flow: source material preparation and image digitizing (7-track, 556-bpi tape; 3750 x 3750 samples); image processing (expand image, locate reseau marks from approximate reseau mark locations, correct geometric distortions using mapping function coefficients for RBV distortions); digital image recording; and printout of reseau marks detected and located.]

Fifth-order mapping polynomials representative of worst case RBV errors (1 percent each for centering, skew, size, pincushion, and S-term; 0.2 percent for keystone) were assumed. An APL program was used to compute the locations of the anchor point grid lines, using the technique described. This resulted in a 25 x 25 mesh (as opposed to the 50 x 50 mesh that would have resulted if a regular grid were used). The mapping polynomials were used to compute the positions into which 81 reseau marks were inserted in the input image data. These marks were positioned so as to appear as a regular 9 x 9 array in the output image.

A program implementing the point shift algorithm was written and executed on an IBM System/360 Model 65. The total CPU time required for the geometric correction process was 80 seconds for each image, and 450 kilobytes of core memory were used. The processed images were recorded on computer tape and were then recorded on film by an IBM drum scanner/recorder. The input and output versions of both experimental images show no cosmetic defects in the processed images.

In a separate operation, the shadow-casting reseau detection algorithm was applied to both experimental images. In the first image, 71 of the 81 marks were detected and located (i.e., the 4 pixels of each arm width were detected unambiguously for both the horizontal and vertical arms). Of the 81 marks in the second image, 73 were detected. There were no cases of false identification in either image.

    CONCLUSIONS

These experiments show that digital techniques are a viable candidate for correction of high-resolution imagery. The processed images show none of the cosmetic defects which may result from the use of nearest-neighbor assignment.

The CPU time of 80 seconds per image on a System/360 Model 65 is quite reasonable and much lower than any previous estimate. No attempt was made to achieve efficient I/O operation in these experiments, but it has been estimated that a 4096 x 4096 image can be read from or recorded on standard 800-bits-per-inch computer tape in 135 seconds. As this I/O completely overlaps the processing, a single RBV image can be corrected in approximately 5 minutes. If 1600-bits-per-inch tape is used, the total correction time for each image decreases to 1.7 minutes.

The reseau detection technique discussed here is totally adequate for support of the correction of RBV images. In each of the two images tested, more than 70 of the 81 reseau marks were found and no false detections were made. Inasmuch as the mapping polynomials used require the detection of a minimum of 21 reseau marks in each image, this level of performance is more than sufficient.

REFERENCES

1. Jaffe, L., and Summers, R. A., "The Earth Resources Survey Program Jells," Astronautics and Aeronautics, April 1971.

2. George, T. A., "ERTS A and B-The Engineering System," Astronautics and Aeronautics, April 1971.

3. National Aeronautics and Space Administration, Design Study Specifications for the Earth Resources Technology Satellite ERTS-A and B, document no. S-701-P-3, Goddard Space Flight Center, Greenbelt, Maryland, released April 1969; revised October 1969, December 5, 1969, and January 13, 1970.

4. Wood, Peter, "User Requirements for Earth Resource Satellite Data," Electronic and Aerospace Systems EASCON '70 Convention Record, IEEE Transactions on Aerospace and Electronic Systems, 70 C 16-AES, October 26, 1970.

5. Eaton, Donald A., Wolf Research and Development Corporation, A Study of Digital Techniques for Data Processing for an Earth Resources Technology Satellite (ERTS), prepared for NASA-GSFC under contract no. NAS 5-11735, Mod 2, March 1970.

6. Barnea, D. I., and Silverman, H. F., International Business Machines Corporation, The Class of Sequential Similarity Detection Algorithms (SSDA's) for Fast Digital Image Registration, IBM T. J. Watson Research Center Report, Yorktown Heights, New York, submitted for publication.

7. Silverman, H. F., "On the Uses of Transforms for Satellite Image Processing," Seventh International Symposium on Remote Sensing of Environment, Univ. of Michigan, 17-21 May 1971.

8. Bernstein, R., Ferneyhough, D. G., Gregg, L., Higley, R., Markarian, H., Miklos, J., Mooney, P., Sharp, F., Experimental ERTS Image Processing, IBM Report CESC-70-0465, May 18, 1970.

9. TRW Systems Group, Earth Resources Technology Satellite Final Report, Volume 2: ERTS System Studies, April 17, 1970.

10. Will, P. M., Bakis, R., Wesley, M. A., International Business Machines Corporation, "On an All-Digital Approach to Image Processing for ERTS," ERTS Program: Final Report, IBM Thomas J. Watson Research Center, Yorktown Heights, New York, March 6, 1970.

11. Will, P. M., Bakis, R., Wesley, M. A., Bernstein, R., Markarian, H., International Business Machines Corporation, "Digital Image Processing for the Earth Resources Technology Satellite Data," ASP Meeting, Washington, D.C., March 7-12, 1971.

12. Ferneyhough, D. G., Geometric Correction of ERTS RBV Images by Automatic Digital Techniques, thesis submitted to the George Washington University School of Engineering and Applied Science, June 6, 1971.

13. Colvocoresses, Alden P., "ERTS-A Satellite Imagery," Photogrammetric Engineering, 36:6, June 1970, pp. 555-560.
