
Fig. 2. A few atoms y_0 and x_0 of the (left) LR and (right) HR dictionaries D_l and D_h, respectively. In this example, the LR and HR patches are of sizes 5 × 5 and 50 × 50 pixels, respectively, i.e., the downsampling factor is F_DS = 10, which is very high.

collections [9], dictionary training is the key issue in super-resolving the LR images. In our method, the dictionary pair is learnt directly from the image itself. Because the dictionaries are built up from the pan image, which observes the same area and is acquired at the same time as the multispectral channels, the LR multispectral image patches y_k and their corresponding HR patches x_k to be reconstructed are expected to have a sparse representation in this LR/HR dictionary pair. Furthermore, the corresponding y_k and x_k share the same sparse coefficients in D_h and D_l. This gives us an alternative method for image fusion when large collections of representative satellite images are not available.
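The patch extraction behind this coupled dictionary pair can be sketched as follows. This is a minimal illustration under assumed choices (a Gaussian low-pass filter, non-overlapping tiling, and joint normalization of each LR/HR atom pair), not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_dictionary_pair(pan, patch_lr=5, f_ds=10):
    """Assemble coupled LR/HR dictionaries D_l, D_h from the pan image alone.

    Each HR atom is a (patch_lr*f_ds)^2 pan patch; the matching LR atom is the
    same patch after low-pass filtering and downsampling by f_ds. The filter
    choice and the tiling step are illustrative assumptions.
    """
    patch_hr = patch_lr * f_ds
    pan_lr = gaussian_filter(pan, sigma=f_ds / 2.0)[::f_ds, ::f_ds]  # simulated LR pan
    d_h, d_l = [], []
    for i in range(0, pan.shape[0] - patch_hr + 1, patch_hr):
        for j in range(0, pan.shape[1] - patch_hr + 1, patch_hr):
            hr_patch = pan[i:i + patch_hr, j:j + patch_hr]
            lr_patch = pan_lr[i // f_ds:i // f_ds + patch_lr,
                              j // f_ds:j // f_ds + patch_lr]
            d_h.append(hr_patch.ravel())
            d_l.append(lr_patch.ravel())
    # atoms as columns; each LR/HR pair is normalized jointly (an assumption)
    D_h = np.stack(d_h, axis=1)
    D_l = np.stack(d_l, axis=1)
    norms = np.linalg.norm(D_l, axis=0) + 1e-12
    return D_h / norms, D_l / norms
```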

B. Sparse Coefficients Estimation

This step attempts to represent each LR multispectral patch in the kth channel, y_k, as a linear combination of LR pan patches y_0, i.e., of the atoms of the dictionary D_l, with a coefficient vector denoted by α_k. Since this dictionary is overcomplete, i.e., its columns are not orthogonal, there may be infinitely many solutions. We argue that it is very likely that the best solution is the one employing the least number of pan patches. Therefore, for each LR multispectral patch y_k, a sparse coefficient vector α_k is estimated by an L1-L2 minimization:

$$\hat{\alpha}_k = \arg\min_{\alpha_k}\left\{\lambda\,\|\alpha_k\|_1 + \tfrac{1}{2}\,\big\|\tilde{D}\,\alpha_k - \tilde{y}_k\big\|_2^2\right\} \qquad (1)$$

    where

$$\tilde{D} = \begin{bmatrix} D_l \\ \beta P D_h \end{bmatrix}, \qquad \tilde{y}_k = \begin{bmatrix} y_k \\ \beta w_k \end{bmatrix}. \qquad (2)$$

P is a matrix that extracts the region of overlap between the current target patch and previously reconstructed ones, and w_k contains the pixel values of the previously reconstructed HR multispectral image patches on the overlap region. Note that, if the patches do not overlap, D̃ = D_l and ỹ_k = y_k. The parameter β is a weighting factor that trades off goodness of fit to the LR input against the consistency of the reconstructed adjacent HR patches in the overlapping area. In our experiments, β is chosen to be 1/F_DS^2, i.e., we weight the overlapped and nonoverlapped areas according to their physical sizes. λ is the standard Lagrangian multiplier, balancing the sparsity of the solution and the fidelity of the approximation to ỹ_k.
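To make the construction of (1) and (2) concrete, the following sketch stacks D̃ and ỹ_k and runs a plain ISTA loop as a generic L1 solver. The boolean mask standing in for P, the function names, and the solver choice are our assumptions; as explained next, the paper itself uses SL1MMER rather than a plain convex solver.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_coefficients(D_l, D_h, y_k, w_k, overlap_mask, beta, lam, n_iter=500):
    """Estimate alpha_k for one LR patch y_k by minimizing
    lam*||a||_1 + 0.5*||D_tilde a - y_tilde||_2^2   (cf. (1), (2)).

    overlap_mask selects the HR pixels shared with previously reconstructed
    patches (the role of matrix P); w_k holds their pixel values.
    """
    P_D_h = D_h[overlap_mask.ravel(), :]                  # P D_h
    D_t = np.vstack([D_l, beta * P_D_h])                  # stacked dictionary D_tilde
    y_t = np.concatenate([y_k.ravel(), beta * w_k.ravel()])
    alpha = np.zeros(D_t.shape[1])
    step = 1.0 / (np.linalg.norm(D_t, 2) ** 2 + 1e-12)    # 1 / Lipschitz constant
    for _ in range(n_iter):                               # ISTA iterations
        grad = D_t.T @ (D_t @ alpha - y_t)
        alpha = soft_threshold(alpha - step * grad, step * lam)
    return alpha
```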

If and only if the dictionaries are stable [10], [11], solving (1) by means of standard convex optimization algorithms can give the sparsest solution, though with a systematic amplitude bias introduced by the L1-norm approximation of the NP-hard L0-norm minimization problem [12]. However, because the atoms in the dictionaries are often highly coherent, the dictionaries are normally not stable. Hence, we estimate the sparse coefficient vector α_k using the SL1MMER algorithm [13], [14], which includes a model order selection step [15] and a debiasing step [12] to get rid of the mentioned amplitude bias.
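As a hedged illustration of the debiasing idea only (SL1MMER's model order selection step is omitted here), the amplitudes can be refit by least squares on the support selected by the L1 step:

```python
import numpy as np

def debias(D_t, y_t, alpha_l1, thresh=1e-6):
    """Remove the L1 amplitude bias: keep the support of the L1 estimate and
    re-solve the amplitudes by unconstrained least squares on that support."""
    support = np.flatnonzero(np.abs(alpha_l1) > thresh)
    alpha = np.zeros_like(alpha_l1)
    if support.size:
        alpha[support], *_ = np.linalg.lstsq(D_t[:, support], y_t, rcond=None)
    return alpha
```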

    C. HR Multispectral Image Reconstruction

Since each of the HR image patches x_k is assumed to share the same sparse coefficients as the corresponding LR image patch y_k in the coupled HR/LR dictionary pair, i.e., the coefficients of x_k in D_h are identical to the coefficients of y_k in D_l, the final sharpened multispectral image patches x_k are reconstructed by

$$x_k = D_h\,\hat{\alpha}_k. \qquad (3)$$

The tiling and summation of all patches in all individual channels finally give the desired pan-sharpened image X̂.
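A minimal sketch of this reconstruction step, assuming that overlapping contributions are simply averaged when the patches x_k = D_h α_k are tiled back into a channel; the exact tiling and weighting scheme used by the authors is not reproduced here.

```python
import numpy as np

def reconstruct_channel(D_h, alphas, positions, patch_hr, out_shape):
    """Tile HR patches x_k = D_h @ alpha_k into one sharpened channel.

    positions: top-left HR pixel coordinates of each patch.
    Overlapping pixels are averaged via an accumulation/weight pair.
    """
    acc = np.zeros(out_shape)
    weight = np.zeros(out_shape)
    for alpha_k, (i, j) in zip(alphas, positions):
        x_k = (D_h @ alpha_k).reshape(patch_hr, patch_hr)   # eq. (3)
        acc[i:i + patch_hr, j:j + patch_hr] += x_k
        weight[i:i + patch_hr, j:j + patch_hr] += 1.0
    return acc / np.maximum(weight, 1.0)
```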

The proposed SparseFI algorithm is meant to demonstrate the potential of sparse signal reconstruction for data fusion. We do not expect it to outperform all other well-established algorithms that have been optimized over years and decades. The experimental results in the next two sections, however, show that SparseFI in its infancy compares quite favorably with conventional algorithms and even exceeds them in most of the quality metrics.

III. EXPERIMENTS WITH ULTRACAM DATA

SparseFI has been applied to UltraCam data. They are multispectral images with four channels (i.e., red, green, blue, and near infrared) with a spatial resolution of 10 cm. From this very HR multispectral image, we simulate the panchromatic image


Fig. 3. Input images: (left) simulated HR pan image with a model error of 2.7%; (right) LR multispectral image obtained by downsampling the original HR multispectral image with a factor of 10.

X_0 by linearly combining the multispectral bands and adding some model error:

$$X_0 = c_1 X_1 + c_2 X_2 + c_3 X_3 + c_4 X_4 + \varepsilon. \qquad (4)$$

X_k and c_k are the values of the kth band of the HR multispectral image and its corresponding weighting coefficient, respectively, and ε is the model error. A simulated LR multispectral image Y is obtained by low-pass filtering and downsampling the original image, i.e., the HR multispectral image X. As a validation, we use the proposed method to reconstruct the HR multispectral image X̂.
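A sketch of this validation setup follows, with placeholder band weights c_k, a Gaussian low-pass filter, and a white-noise model error; none of these specific choices are taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_inputs(ms_hr, weights=(0.25, 0.25, 0.25, 0.25),
                    model_error_frac=0.027, f_ds=10, seed=0):
    """ms_hr: HR multispectral cube of shape (4, M, N).

    Returns (pan, ms_lr): the simulated pan image per (4) and the LR
    multispectral image obtained by low-pass filtering and downsampling by f_ds.
    """
    rng = np.random.default_rng(seed)
    pan = sum(c * band for c, band in zip(weights, ms_hr))
    pan = pan + model_error_frac * pan.std() * rng.standard_normal(pan.shape)  # eps in (4)
    ms_lr = np.stack([gaussian_filter(b, sigma=f_ds / 2.0)[::f_ds, ::f_ds]
                      for b in ms_hr])
    return pan, ms_lr
```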

By comparing it to the original multispectral image X, we assess its performance with respect to conventional methods.

In this section, the performance of the proposed SparseFI algorithm against the model error is investigated. Furthermore, the optimum regularization parameter λ under different noise levels is analyzed.

    A. Robustness Against the Model Error

To investigate the robustness of the algorithm against model errors, as shown in Fig. 3(a), a panchromatic image of an urban area with a reasonable model error of 2.7% is simulated.

Fig. 3(b) illustrates the corresponding LR multispectral image obtained by downsampling the HR multispectral image with an extreme factor of 10. From the two input images, the HR multispectral image can be reconstructed and then compared with the original HR multispectral image [see Fig. 4(a)]. Fig. 4 shows the HR multispectral image reconstructed using (b) the proposed SparseFI method, (c) the IHS method, (d) the adaptive IHS method, (e) the PCA method, and (f) the Brovey transform method. Note that, for visual comparison, only the RGB channels are visualized. Compared with the results produced by conventional pan-sharpening methods, it is verified that SparseFI can provide visually satisfactory results even under the large downsampling factor of 10.

For quantitative assessment, the results shown in Fig. 4 are evaluated using well-known assessment criteria (see the detailed definitions in the Appendix). In brief, the utilized assessment metrics include the following [16]–[20]:

root-mean-square error (RMSE): calculates the changes in pixel values to compare the difference between the original and pan-sharpened images;

correlation coefficient (ρ): measures the similarity of spectral features;

spectral angle mapper (SAM): denotes the absolute value of the angle between the true and estimated spectral vectors;

degree of distortion (D): reflects the distortion level of the pan-sharpened image;

universal image quality index (UIQI): a widely used image sharpening quality assessment indicator;

average gradient: reflects the contrasts of details contained in the image and the image intelligibility;

relative dimensionless global error in synthesis (ERGAS): reflects the overall quality of the pan-sharpened image.

Table I summarizes the calculated assessment criteria values. The second row gives the reference values of the different criteria. The best value is highlighted for each criterion. It is obvious that SparseFI gives the best performance in most of the metrics.

Even more obvious results are achieved by adding a more significant model error of 25%.

Fig. 5 gives the reconstructed HR multispectral image by the proposed SparseFI method, and Table II summarizes the quality metrics. It is evident that the SparseFI method is far less sensitive to the model error of the panchromatic image compared with the other methods.

    B. Dependence on Patch Size and Overlapping Area Size

It is obvious that the performance of the SparseFI algorithm depends on the patch size and the size of the overlapping area between adjacent patches. It is a tradeoff between the compactness and completeness of the dictionary. On the one hand, a too small image patch size in the LR dictionary renders the LR


Fig. 4. (a) Original HR multispectral image. HR multispectral image reconstructed from the input images shown in Fig. 3 using (b) SparseFI, (c) IHS method, (d) adaptive IHS method, (e) PCA method, and (f) Brovey transform method.

dictionary not compact, i.e., the atoms in the LR dictionary are generally highly coherent. On the other hand, a too large image patch size endangers the sparse representation of multispectral image patches in the dictionary pair, i.e., the atoms in the

dictionary should contain primitive patterns instead of specific objects. Table III summarizes the quality assessment results when an HR multispectral image is reconstructed with different patch sizes. The data set in Fig. 3 is used in this experiment,


TABLE I
QUALITY METRICS FOR THE ULTRACAM DATA (PAN IMAGE SIMULATED WITH A 2.7% MODEL ERROR)

Fig. 5. HR multichannel image reconstructed by SparseFI with a model error of 25%.

TABLE II
QUALITY METRICS FOR ULTRACAM DATA (PAN IMAGE SIMULATED WITH A 25% MODEL ERROR)


TABLE III
PERFORMANCE ASSESSMENT DEPENDENCE ON PATCH SIZE

TABLE IV
PERFORMANCE ASSESSMENT DEPENDENCE ON REGULARIZATION PARAMETER

i.e., with a downsampling factor of 10 and a model error of 2.7%. The overlapping area size and the regularization parameter have been tuned to be optimum. Table III shows that the best overlap area size is 20%–40% of the patch size. The optimum patch size is 7 × 7 with an overlapping area size of 7 × 3 pixels. The dictionary size is determined by the image size, the patch size, and the overlap area size. In this experiment, with an LR image size of 150 × 150, the length of the unknown sparse coefficient vector to be estimated is 28 times larger than the measurement length, i.e., the number of pixels in an LR patch.
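The quoted factor can be cross-checked with a rough count. The exact number depends on how border patches are handled, so the result below is only on the order of the stated 28.

```python
# Rough check of the dictionary-size ratio quoted above (the tiling convention
# is an assumption, not the authors' exact bookkeeping).
lr_size, patch, overlap = 150, 7, 3
step = patch - overlap                                # 4-pixel stride between patches
patches_per_dim = (lr_size - patch) // step + 1       # 36 with this convention
n_atoms = patches_per_dim ** 2                        # length of the coefficient vector
measurement_len = patch ** 2                          # pixels in one LR patch
print(n_atoms, measurement_len, round(n_atoms / measurement_len, 1))  # 1296 49 26.4
```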

C. Dependence on Regularization Parameter λ

The regularization parameter λ in (1) balances the sparsity of the solution and the fidelity of the approximation to ỹ, and it guarantees robust image reconstruction from noisy data. Since the parameter λ (see Table IV) depends on the noise level of the input data [21], different levels of Gaussian white noise are added to the LR input image to find an appropriate value of λ that could give a common solution to this convex minimization problem. It is claimed in [22] that, for Gaussian white noise with standard deviation σ, one typically sets λ = Tσ with T ≈ √(2 log_e L), where L is the number of atoms in the dictionary. In our experiments, any λ smaller than 3.7σ could provide a solution. Within this region, we tuned the regularization parameter under different noise levels σ. The resulting optimum λ and its corresponding assessment metric values are listed in Table IV. It shows that the noisier the data, the larger the value of λ should be, and the best performance appears with λ having a value on the order of the noise level σ.
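For reference, the threshold rule quoted from [22] can be evaluated directly. The atom count below (about 1300, following the rough tiling count above) is an assumption, but it lands close to the empirically observed 3.7σ bound.

```python
import math

# Universal-threshold rule from [22]: lambda = T * sigma with T = sqrt(2 ln L).
L = 1300                        # assumed atom count, following the rough tiling count above
T = math.sqrt(2 * math.log(L))
print(round(T, 2))              # ~3.79, close to the empirically observed 3.7*sigma limit
```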

IV. EXPERIMENTS WITH WORLDVIEW-2 DATA

The data acquired by WorldView-2 are a panchromatic image with a spatial resolution of 0.5 m and multispectral images with a spatial resolution of 2 m. Hence, an HR multispectral image with a spatial resolution of 0.5 m should be reconstructed. In order to have a reference multispectral image for later quality assessment, we downsample the provided pan image to 2 m as the input HR pan image [see Fig. 6(c)] and the multispectral image to 8 m as the input LR multispectral image [see Fig. 6(b)]. Finally, the HR multispectral pan-sharpened image with a resolution of 2 m is reconstructed [see Fig. 6(d)] and compared with the original multispectral image [see Fig. 6(a)] at the same resolution level.

The reconstructed HR multispectral images are compared with the results produced by the original IHS, adaptive IHS, PCA, and Brovey transform methods in Table V. It confirms the conclusion we found in the previous experiments, i.e., SparseFI gives superior performance for most of the quality metrics. In particular, different from the moderate performance in SAM in the previous tests, in this experiment, the HR multispectral image reconstructed by the SparseFI algorithm shows superior performance in SAM.

    V. CONCLUSION

In this paper, we have proposed the SparseFI algorithm for image fusion and validated it using UltraCam data. The superior performance of SparseFI has been demonstrated by a statistical assessment. Compared with other conventional pan-sharpening methods, SparseFI does not assume an accurate spectral model of the panchromatic image, and hence, it is less sensitive to the model error of the panchromatic image. It outperforms the other algorithms in most of the assessment metrics. The analysis of the dependence on the regularization parameter indicates that the optimal λ has a value on the order of the noise level σ.

    A few additional remarks might be helpful for further use of our results.

The proposed algorithm is generally applicable to image fusion. The most straightforward example is hyperspectral image sharpening. Often, we have an LR hyperspectral image and an HR multispectral image of the same area, possibly, but not necessarily, taken at the same time. We can use the HR multispectral image to build up the overcomplete dictionary pair and sharpen the LR hyperspectral image to the spatial resolution of the HR multispectral image.


Fig. 6. (a) Original multispectral image with a resolution of 2 m. (b) Downsampled input LR multispectral image Y with a resolution of 8 m. (c) Downsampled input HR pan image X_0 with a resolution of 2 m. (d) HR multispectral image reconstructed by SparseFI.

TABLE V
QUALITY METRICS FOR WORLDVIEW-2 (TEST SITE: ROME)

The proposed algorithm can be easily extended to image sharpening under the condition that no HR pan image X_0 is available. The dictionaries can then be trained from numerous redundant image resources, e.g., a remote sensing data archive. Although the straightforward dictionary learning method mentioned in Section II cannot be directly applied in this case, it can be used for generating the initial overcomplete dictionaries. After the training process, the cotrained compact but overcomplete dictionaries D_l and D_h, which contain the essential geometric information of the reconstruction process, can be used together with the given LR image Y to generate the desired HR image X̂ [9].

It is worth mentioning that, when the patch size is chosen to be extremely large, the proposed method works similarly to IHS because (1) will then tend to simply choose a


single patch, i.e., the corresponding one. A further study on developing parameter-free algorithms, i.e., systematically tuning for an optimal patch size and regularization parameter, would be of great interest.

Since the dictionary of SparseFI is trained only from the data set at hand, it contains fewer atoms than a dictionary trained from a larger set of images. Therefore, the algorithm is computationally efficient even with patch sizes much larger than the 2 × 2 patches used in [7]. These larger patches also contain more textural information. A potential disadvantage is that texture or object classes that cover only a small fraction of the image might be underrepresented, leading to a less sparse solution. If additional images of similar content and from the same sensor are available, the dictionary can be readily extended to specifically cover the otherwise underrepresented object classes.

Although the proposed sparse reconstruction-based method leads to competitive results, it can still be improved by considering the fact that the information contained in different multispectral channels is correlated. Such correlation allows us to introduce a further prior, the so-called joint sparsity. The authors are currently working on extending the proposed SparseFI algorithm toward a Jointly Sparse Fusion of Images (J-SparseFI) algorithm.

    APPENDIX

The quality criteria used in this paper are defined as follows [16]–[20].

RMSE: The RMSE is frequently used to compare the difference between the original and pan-sharpened images by directly calculating the changes in pixel values. It is defined as

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(X_{i,j} - \hat{X}_{i,j}\right)^2}. \qquad (5)$$

$X_{i,j}$ is the pixel value of the original image X, and $\hat{X}_{i,j}$ is the pixel value of the pan-sharpened image X̂. M and N are the HR image sizes. The pan-sharpened image is closer to the original image when the RMSE is smaller.

Correlation Coefficient (ρ): The correlation coefficient of the pan-sharpened and original images measures the similarity of spectral features. It is defined as

$$\rho = \frac{\sum_{i,j}\left(X_{i,j}-\bar{x}\right)\left(\hat{X}_{i,j}-\bar{\hat{x}}\right)}{\sqrt{\sum_{i,j}\left(X_{i,j}-\bar{x}\right)^2\,\sum_{i,j}\left(\hat{X}_{i,j}-\bar{\hat{x}}\right)^2}} \qquad (6)$$

where $\bar{x}$ and $\bar{\hat{x}}$ are the mean values of the original image X and the pan-sharpened image X̂, respectively. A correlation coefficient ρ close to +1 means that the two images are highly correlated.

SAM: It denotes the absolute value of the angle between the true and estimated spectral vectors

$$\mathrm{SAM}\left(X_{i,j}, \hat{X}_{i,j}\right) = \arccos\left(\frac{\left\langle X_{i,j}, \hat{X}_{i,j}\right\rangle}{\|X_{i,j}\|_2\,\|\hat{X}_{i,j}\|_2}\right). \qquad (7)$$

A value of SAM equal to zero denotes the absence of spectral distortion, but radiometric distortion is possible (the two pixel vectors are parallel but have different lengths). SAM is measured in either degrees or radians and is usually averaged over the whole image to yield a global measurement of spectral distortion.

Degree of Distortion (D): The degree of distortion directly reflects the level of pan-sharpened image distortion. It is defined as

$$D = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left|X_{i,j} - \hat{X}_{i,j}\right|. \qquad (8)$$

The smaller the value of D, the smaller the distortion of the pan-sharpened image.

UIQI (Q-average): The UIQI has recently been widely used to assess the quality of image sharpening. It is defined as

$$Q_0 = \frac{\sigma_{x\hat{x}}}{\sigma_x\,\sigma_{\hat{x}}} \cdot \frac{2\,\bar{x}\,\bar{\hat{x}}}{\bar{x}^2 + \bar{\hat{x}}^2} \cdot \frac{2\,\sigma_x\,\sigma_{\hat{x}}}{\sigma_x^2 + \sigma_{\hat{x}}^2} \qquad (9)$$

where $\bar{x}$, $\bar{\hat{x}}$ and $\sigma_x$, $\sigma_{\hat{x}}$ are the mean values and standard deviations of the original image X and the pan-sharpened image X̂, respectively, and $\sigma_{x\hat{x}}$ is their covariance. It combines three different factors, namely, loss of correlation, luminance distortion, and contrast distortion. The best value of Q-average is 1.

Average Gradient (G): The average gradient reflects the contrasts of details contained in the image and the image intelligibility. It is defined as

$$G = \frac{1}{(N-1)(M-1)}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\left(\left(\Delta_1 x_{i,j}\right)^2 + \left(\Delta_2 x_{i,j}\right)^2\right)/2}. \qquad (10)$$

$\Delta_1 x_{i,j}$ and $\Delta_2 x_{i,j}$ are the first differences along the two directions, respectively. Generally, a bigger value of G indicates an image with higher definition.

ERGAS: The ERGAS reflects the overall quality of the pan-sharpened image. It represents the difference between the pan-sharpened and original images and is defined as

$$\mathrm{ERGAS} = 100\,\frac{h}{l}\sqrt{\frac{1}{K}\sum_{k=1}^{K}\left(\frac{\mathrm{RMSE}_k}{\bar{X}_k}\right)^2} \qquad (11)$$

where h/l is the ratio between the pixel sizes of the panchromatic and original multispectral images, and $\mathrm{RMSE}_k$ and $\bar{X}_k$ are the RMSE and mean value of the kth band, respectively. A small ERGAS value means small spectral distortion, so that the algorithm has high preservation of spectral information.
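A compact sketch of four of these criteria, (5)–(7) and (11), assuming band cubes of shape (K, M, N); the helper names are ours, not the authors'.

```python
import numpy as np

def rmse(x, x_hat):
    """Eq. (5), computed over whatever array (full cube or single band) is passed."""
    return np.sqrt(np.mean((x - x_hat) ** 2))

def correlation(x, x_hat):
    """Eq. (6): correlation coefficient rho between the two images."""
    xc, xhc = x - x.mean(), x_hat - x_hat.mean()
    return (xc * xhc).sum() / np.sqrt((xc ** 2).sum() * (xhc ** 2).sum())

def sam(x, x_hat, eps=1e-12):
    """Eq. (7): mean spectral angle (radians) over all pixels; x has shape (K, M, N)."""
    num = (x * x_hat).sum(axis=0)
    den = np.linalg.norm(x, axis=0) * np.linalg.norm(x_hat, axis=0) + eps
    return np.mean(np.arccos(np.clip(num / den, -1.0, 1.0)))

def ergas(x, x_hat, h_over_l):
    """Eq. (11): h_over_l is the pan/multispectral pixel-size ratio."""
    per_band = [(rmse(b, bh) / b.mean()) ** 2 for b, bh in zip(x, x_hat)]
    return 100.0 * h_over_l * np.sqrt(np.mean(per_band))
```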


    ACKNOWLEDGMENT

The authors would like to thank X. Wang for her support and for performing the experiments.

REFERENCES

[1] C. Pohl and J. L. Van Genderen, "Multisensor image fusion in remote sensing: Concepts, methods and applications," Int. J. Remote Sens., vol. 19, no. 5, pp. 823–854, Mar. 1998.

[2] T. Tu, S. Su, H. Shyu, and P. Huang, "A new look at IHS-like image fusion methods," Inf. Fusion, vol. 2, no. 3, pp. 177–186, Sep. 2001.

[3] V. P. Shah and N. H. Younan, "An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets," IEEE Trans. Geosci. Remote Sens., vol. 46, no. 5, pp. 1323–1335, May 2008.

[4] Y. Zhang, "Problems in the fusion of commercial high-resolution satellite images as well as Landsat-7 images and initial solutions," in Proc. Int. Archives Photogramm. Remote Sens. Spatial Inf. Sci., 2002, vol. 34, part 4, pp. 587–592.

[5] X. Otazu, M. Gonzalez-Audicana, O. Fors, and J. Nunez, "Introduction of sensor spectral response into image fusion methods. Application to wavelet-based methods," IEEE Trans. Geosci. Remote Sens., vol. 43, no. 10, pp. 2376–2385, Oct. 2005.

[6] K. Amolins, Y. Zhang, and P. Dare, "Wavelet based image fusion techniques: An introduction, review and comparison," ISPRS J. Photogramm. Remote Sens., vol. 62, no. 4, pp. 249–263, Sep. 2007.

[7] S. Li and B. Yang, "A new pan-sharpening method using a compressed sensing technique," IEEE Trans. Geosci. Remote Sens., vol. 49, no. 2, pp. 738–746, Feb. 2011.

[8] J. Cheng, H. Zhang, H. Shen, and L. Zhang, "A practical compressed sensing-based pan-sharpening method," IEEE Geosci. Remote Sens. Lett., vol. 9, no. 4, pp. 629–633, Jul. 2012.

[9] X. Wang, "Compressive sensing for pan-sharpening," M.S. thesis, Tech. Univ. Munich, Munich, Germany, 2011.

[10] E. Candès, "Compressive sampling," in Proc. Int. Congr. Math., Madrid, Spain, 2006, pp. 1433–1452.

[11] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, Apr. 2006.

[12] S. J. Wright, "Sparse optimization," in Proc. SIAM Conf. Optim., Darmstadt, Germany, May 2011.

[13] X. Zhu and R. Bamler, "Tomographic SAR inversion by L1-norm regularization–The compressive sensing approach," IEEE Trans. Geosci. Remote Sens., vol. 48, no. 10, pp. 3839–3846, Oct. 2010.

[14] X. Zhu and R. Bamler, "Super-resolution power and robustness of compressive sensing for spectral estimation with application to spaceborne tomographic SAR," IEEE Trans. Geosci. Remote Sens., vol. 50, no. 1, pp. 247–258, Jan. 2012.

[15] P. Stoica and Y. Selen, "Model-order selection: A review of information criterion rules," IEEE Signal Process. Mag., vol. 21, no. 4, pp. 36–47, Jul. 2004.

[16] L. Wald, "Quality of high resolution synthesized images: Is there a simple criterion?" in Proc. Int. Conf. Fusion Earth Data, 2000, pp. 99–103.

[17] Z. Wang and A. Bovik, "A universal image quality index," IEEE Signal Process. Lett., vol. 9, no. 3, pp. 81–84, Mar. 2002.

[18] Q. Du, O. Gungor, and J. Shan, "Performance evaluation for pan-sharpening techniques," in Proc. IEEE Int. Geosci. Remote Sens. Symp., 2005, pp. 4264–4266.

[19] M. Strait, S. Rahmani, and D. Merkurev, "Evaluation of pan-sharpening methods," UCLA, Los Angeles, CA, Tech. Rep., 2008.

[20] L. Alparone, L. Wald, J. Chanussot, C. Thomas, P. Gamba, and L. M. Bruce, "Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 10, pp. 3012–3021, Oct. 2007.

[21] X. Zhu, X. Wang, and R. Bamler, "Compressive sensing for image fusion–With application to pan-sharpening," in Proc. IGARSS Conf., 2011, pp. 2793–2796.

[22] S. Mallat, A Wavelet Tour of Signal Processing, 3rd ed. Amsterdam, The Netherlands: Academic, 2009, pp. 664–665.

Xiao Xiang Zhu (S'10–M'12) received the Bachelor's degree in space engineering from the National University of Defense Technology, Changsha, China, in 2006 and the Master's (M.Sc.) and Doctor of Engineering (Dr.-Ing.) degrees from the Technische Universität München (TUM), Munich, Germany, in 2008 and 2011, respectively.

In October/November 2009, she was a Guest Scientist at the Italian National Research Council (CNR)-Institute for Electromagnetic Sensing of the Environment (IREA), Naples, Italy. Since May 2011, she has been a Full-Time Scientific Collaborator with the Remote Sensing Technology Institute, German Aerospace Center (DLR), Wessling, Germany, and with the Remote Sensing Technology Department, TUM. Her main research interests are signal processing, including innovative algorithms such as compressive sensing and sparse reconstruction, with applications in the field of remote sensing, particularly 3-D, 4-D, or higher dimensional SAR imaging.

Dr. Zhu's Ph.D. dissertation, titled "Very high resolution tomographic SAR inversion for urban infrastructure monitoring–A sparse and nonlinear tour," won the Dimitri N. Chorafas Foundation Research Award in 2011 for its distinguished innovation and sustainability.

Richard Bamler (M'95–SM'00–F'05) received the Diploma degree in electrical engineering, the Doctorate in engineering, and the Habilitation in the field of signal and systems theory from the Technische Universität München, Munich, Germany, in 1980, 1986, and 1988, respectively.

He was with the university from 1981 to 1989, working on optical signal processing, holography, wave propagation, and tomography. In 1989, he joined the German Aerospace Center (DLR), Wessling, Germany, where he is currently the Director of the Remote Sensing Technology Institute. In early 1994, he was a Visiting Scientist at the Jet Propulsion Laboratory in preparation of the SIR-C/X-SAR missions, and in 1996, he was a Guest Professor at the University of Innsbruck. Since 2003, he has held a full professorship in remote sensing technology at the Technische Universität München as a double appointment with his DLR position. His teaching activities include university lectures and courses on signal processing, estimation theory, and SAR. Since he joined DLR, he, his team, and his institute have been working on SAR and optical remote sensing, image analysis and understanding, stereo reconstruction, computer vision, ocean color, passive and active atmospheric sounding, and laboratory spectrometry. They were and are responsible for the development of the operational processors for SIR-C/X-SAR, SRTM, TerraSAR-X, TanDEM-X, ERS-2/GOME, ENVISAT/SCIAMACHY, MetOp/GOME-2, and EnMAP. He is the author of more than 200 scientific publications, among them 50 journal papers and a book on multidimensional linear systems theory, and he is the holder of eight patents and patent applications in remote sensing. His current research interests are in algorithms for optimum information extraction from remote sensing data with emphasis on SAR. This involves new estimation algorithms such as sparse reconstruction and compressive sensing. He has devised several high-precision algorithms for SAR processing, SAR calibration and product validation, Ground Moving Target Identification for traffic monitoring, SAR interferometry, phase unwrapping, persistent scatterer interferometry, and differential SAR tomography.

Since 2010, Prof. Bamler has been a member of the executive board of Munich Aerospace, a newly founded research and education project between Munich universities and extramural research institutions, including DLR.