UNIVERSITY OF SOUTH CAROLINA Columbia, SC...
NEW FRONTIERS IN IMAGING & SENSING
February 17-22, 2011 Sumwalt Building Room 102
Message from the Workshop Moderator
Wolfgang Dahmen
The Interdisciplinary Mathematics Institute (IMI) and the
NanoCenter are pleased to announce a special research
workshop which is scheduled for February 17 – 22, 2011. As
the second event of this type, the workshop is a focus point of
our ongoing collaboration in developing new imaging methods
for electron microscopy, this time with special emphasis on
sparse recovery and compressed sensing concepts. Accordingly, the scope of applications
will be widened to other data acquisition methods of high current interest in and outside the
university, such as Synthetic Aperture Radar (SAR) and tomography-related image formation. This interactive workshop will bring together experts in relevant areas from
material science, microscopy, sensor systems, mathematics and computer science to identify
current obstacles and problems in the field that have the potential to be resolved by emerging
mathematical methods. We expect to have 23 lectures and 5 discussion sessions spread over a six-day period. In particular, ample time is reserved for discussions, which have proven very effective in the past in triggering synergies between application aspects and novel methodological developments.
The workshop is part of the fourth annual research seminar hosted by the IMI during the
period from mid-February to mid-April. While the first seminar had a methodological focus on emerging concepts in “Mathematical Learning Theory in High Dimensions,” the second
seminar was held in close collaboration with the NanoCenter focusing on new imaging
concepts for electron microscopy. In fact, recent advances in hardware-based aberration
correction have significantly expanded the nanoscale direct imaging capabilities of scanning
transmission electron microscopes (STEM). These instrumental advances are beginning to
radically transform the imaging of nanoscale matter and in the near future will provide huge
opportunities for the investigation of biological structures. However, severe bottlenecks of these techniques are the manual operation and labor-intensive search procedures, the damage caused by the electron beam, and the extreme environmental sensitivity of the instruments. One
focus point of the seminar is to tackle these scientific challenges, for instance, by formulating
and exploring a new mathematical model to treat a collection of electron microscopy scans of
two-dimensional projections which will facilitate the extraction of high-resolution images
from low-resolution/low-energy scans. The results of the previous seminars and last year’s
workshop will be part of the Springer book “Nano-scale Imaging in Electron Microscopy”
in Springer's Nanostructure Science & Technology Series. At this stage the workshop will
provide an excellent opportunity to learn from experts in essentially all relevant fields in the
above widened scope of applications and to coordinate future research in the area.
Benjamin Berkels Interdisciplinary Mathematics Institute
University of South Carolina
Columbia, SC 29208
Image Segmentation Based on Learned Discriminative Dictionaries
Nowadays, sparse signal representations based on overcomplete dictionaries are used for a wide range
of signal and image processing tasks. One of the major challenges in this context is the design of suit-
able dictionaries. The sparse representation itself is usually just a means to an end, used to solve a certain task such as denoising or compression. Here, we focus on dictionaries suitable for image segmentation
tasks and, picking up the discriminative dictionary model by Mairal et al., we introduce an improved minimization algorithm
for the underlying variational problem. This algorithm incorporates recent advances in orthogonal matching pursuit made by
Rubinstein et al., making it more efficient. Furthermore, it is more stable, since it ensures an energy decay in the dictionary
update unlike the truncated Newton iteration used by Mairal et al. Finally, we study the applicability of discriminative dic-
tionaries to detect sulci on intra-operative digital photographs of the exposed human cortex. In this application, a discrimina-
tive dictionary pair is learned from a set of training images where an experienced physician manually marked the sulci ge-
ometry. We demonstrate that this approach allows a robust segmentation of these brain structures as long as the training data
contains images sufficiently similar to the input images.
*Joint work with Martin Rumpf (Institute for Numerical Simulation, University of Bonn), Marc Kotowski and Carlo Schaller (University
Hospital of Geneva).
ABSTRACTS
Peter Binev Interdisciplinary Mathematics Institute
University of South Carolina
Columbia, SC 29208
High Quality Image Formation by Unconventional Data Acquisition in
STEM
The instrumental advances in scanning transmission electron microscopy (STEM), and especially
in high-angle annular dark field (HAADF) STEM, have led to significant improvement of the
resolution and the quality of the resulting images. High quality images are usually produced via high-resolution sampling, in which the distance between the centers of the samples is several times smaller than the diameter of the area of interaction of the beam with the specimen. However, this technique is not applicable to beam-sensitive materials, which can be damaged by the high amount of energy per square angstrom required.
In this talk we are addressing this issue by investigating an unconventional data acquisition process for high quality HAADF
STEM, in which the investigated portion of the specimen is scanned with a low-energy beam using large intervals between the samples (comparable to the diameter of the beam interaction area), and this is repeated several times on the same portion of the specimen. The consecutive frames are then registered to each other, and the data is assembled to produce a high quality image. This approach allows us to significantly reduce the spatial distortions introduced during the scanning process and to detect unusual changes indicating possible beam damage, so that data from the damaged regions can be suppressed.
The presentation will focus on the basic theoretical setup, discuss some practical issues, and give some examples involving
beam sensitive materials like zeolites.
*This is a joint research project with Douglas Blom, Wolfgang Dahmen, Philipp Lamby, Robert Sharpley, and Thomas Vogt.
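As an illustrative sketch of the frame-registration idea described above (not the authors' actual pipeline; synthetic data, purely translational integer shifts, and a NumPy-only implementation are all assumptions here), one can register low-dose frames by FFT cross-correlation and average them:

```python
import numpy as np

def register_shift(ref, frame):
    """Estimate the integer (dy, dx) shift of `frame` relative to `ref`
    via the peak of the FFT-based circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap the shifts into the symmetric range [-N/2, N/2)
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def fuse_frames(frames):
    """Register all frames to the first one and average them;
    averaging K registered frames reduces the noise variance by 1/K."""
    ref = frames[0]
    aligned = []
    for f in frames:
        dy, dx = register_shift(ref, f)
        aligned.append(np.roll(f, (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)
```

On real micrographs one would additionally need subpixel and non-rigid registration; this toy version only illustrates why assembling registered low-dose frames raises the SNR without increasing the dose per scan.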
Douglas Blom Electron Microscopy Center / NanoCenter
University of South Carolina
Columbia, SC 29208
Thomas Vogt NanoCenter / Dept. of Chemistry & Biochemistry
University of South Carolina
Columbia, SC 29208
Multislice Frozen Phonon HAADF Image Simulations of MoVNbTeO Complex Oxidation
Catalyst “M1”
We have recently reported on the analysis of the structure and composition of a complex oxide catalyst phase using aberra-
tion-corrected HAADF imaging. Good agreement with existing Rietveld refinement models of the structure from combined
synchrotron X-ray and neutron powder diffraction was found. To date, detailed image simulations of the structure have
been lacking. Frozen phonon multislice image simulations of the structure will be reported. The dependency of the image
simulations on detector geometry, Debye-Waller factors, and number of phonon configurations will be discussed. Sensitiv-
ity of the HAADF STEM technique to partial cation disorder for Mo-V-O columns sets a lower limit for HAADF STEM to
extract meaningful data on occupancy.
Nigel D. Browning (1,2,3)
(1) Dept. of Chemical Engineering & Materials Science; (2) Dept. of Molecular & Cellular Biology
University of California, Davis
Davis, CA 95616-5294
Quantifying Aberration Corrected Z-contrast Images of Interfaces/Defects
The development of spherical aberration correctors for scanning transmission electron microscopes
(STEM) has had a significant impact on the spatial resolution that can be obtained from experimental
images. In addition to the increase in spatial resolution (~0.05nm in the best microscopes), the use of
larger apertures to form the small electron probe has led to an increase in the beam current and a subsequent increase in the
sensitivity (contrast) of the images to small changes in structure and composition. However, the increase in beam current
brings with it the potential for electron beam modification of the specimen during image acquisition and the larger apertures
decrease the depth of focus, making image interpretation less straightforward. To fully realize the potential of aberration
corrected microscopes to quantify the changes in composition and structure that occur at interfaces and defects in materials
it is therefore important to develop a methodology that allows resolution and sensitivity to be quantitatively defined as a
function of the beam current and contrast available in each experiment – many experiments are now limited by the sample
rather than the microscope. In this presentation, the use of image processing and statistical analysis methods that allow for
quantitative structure and composition information to be extracted from both aberration corrected and uncorrected images
will be described. Analysis of 2-D images to determine dislocation core variability at grain boundaries in ceramic oxides
will also be highlighted.
*Work presented is joint with J.P. Buban (University of California, Davis), M. Chi (Oak Ridge National Laboratory), D. J. Masiel
(University of California, Davis), Q.M. Ramasse (SuperSTEM Laboratory, Warrington, UK), and M.C. Sarahan (SuperSTEM Labora-
tory, Warrington, UK).
(3) Condensed Matter & Materials Division
Lawrence Livermore National Laboratory
Livermore, CA 94550
Ronald DeVore Department of Mathematics
Texas A&M University
College Station, TX 77840
Directed Learning in High Dimensions
Problems of approximating, classifying, and learning functions in high dimensions present significant computational challenges: to avoid the curse of dimensionality, the function to be recovered must come from a class with small enough entropy. We shall
introduce various model classes for high dimensional functions based on sparsity and variable reduction and discuss what is
known about their optimal recovery through information queries.
Emre Ertin Department of Electrical & Computer Engineering
The Ohio State University
Columbus, OH 43210-1272
High Resolution Radar Sensing via Compressive Illumination
High range resolution radar systems use wideband frequency modulated waveforms to estimate
the spatial distribution of the scatterers in the scene. Digitizing ultrawideband radar returns at
high bit resolution is beyond the limits of current A/D technology. The emerging field of com-
pressive sensing has provided provable performance guarantees and signal recovery algorithms
for sub-sampling of sparse or compressible signals. In this talk we present a novel compressive
sensing strategy for high resolution radar. The key element is compressive illumination, using waveforms with frequency diversity on transmit and random aliasing on receive, which shifts the
burden of the sampling operator from the receiver to the transmitter. The transmitter and receiver structure for compressive
sensing is described and the sensing matrix for the proposed compressive sensing strategy is derived for use in compressive
sensing recovery algorithms based on sparsity regularized inversion. A preliminary experimental demonstration of the com-
pressive sensing strategy is given through sampling of staggered multifrequency linear FM signals through a single low rate
A/D.
James Evans Department of Molecular & Cellular Biology
University of California, Davis
Davis, CA 95616
Noise Reduction and Image Restoration Algorithms for Cryogenic
Electron Microscopy of Proteins
Cryogenic Transmission Electron Microscopy allows for high-resolution structural determination
of biological proteins. By freezing proteins in a vitrified layer of amorphous water, the samples are
optimally preserved. However, these organic samples exhibit low scattering and degrade rapidly
upon exposure to the high-energy electron beam. Additionally, the vitreous ice “background” is of
similar density to protein and results in low signal-to-noise ratios (SNR) in the image. During data
collection, the SNR can be improved by defocusing the microscope to induce increased phase contrast. Thus, a tradeoff exists between imaging with a high SNR but low spatial resolution and imaging with a low SNR but high spatial resolution. In this talk I
will describe these limitations for imaging of proteins that currently act as a bottleneck for high-throughput structure deter-
mination. I will further discuss adaptive algorithms being researched to help identify the position of proteins within high
spatial resolution images for subsequent 3D reconstruction.
Kevin Kelly Department of Electrical and Computer Engineering
Rice University
Houston, TX 77251-1892
Applications of Compressive Sensing in Imaging and Spectroscopy
Hardware
This talk will review the implementation of compressive sensing in various optical imaging and spec-
troscopy systems. The basis of these systems is a well-established body of work which asserts that one
can exploit sparsity or compressibility when acquiring signals of general interest, and that one can de-
sign nonadaptive sampling techniques that condense the information in a compressible signal using far fewer data points
than were thought necessary. For various systems, this strategy has many advantages over more traditional raster scan meth-
ods by enhancing sensitivity and resolution. Specific examples will include implementation in infrared, hyperspectral, and
fluorescence-based imaging systems. Recent uses of this technique in terahertz and millimeter wave regimes will also be
mentioned. Lastly, limitations and future challenges in the field of compressed imaging will be reviewed.
Holger Kohr Institute of Applied Mathematics
Universität des Saarlandes
D-66123 Saarbrücken, Germany
Electron Tomography - 3D Imaging of Subcellular Structures
In conventional transmission electron microscopic (TEM) imaging, a single two-dimensional projec-
tion image of a specimen is taken at a relatively high dose level. This procedure, which yields a high
signal-to-noise ratio (SNR), is sufficient for samples that do not vary much in beam direction. How-
ever, a sample with a truly three-dimensional structure such as a cell or a macromolecular assembly necessitates a tool for
3D imaging such as Electron Tomography (ET). To apply this technique, the specimen is mounted on a rotatable stage, and
a series of low-dose TEM images taken at different tilt angles is acquired. The problem of recovering the three-dimensional
structure from these two-dimensional projections can be formulated as an inverse problem which is hard to solve due to its
typical ill-posedness, the low SNR in the data, the low image contrast, and the impossibility of acquiring data at large tilt angles. In this talk, a mathematical method is presented which is capable of solving the problem in a stable and efficient manner. Moreover, the method is extended to the case where one is not interested in the solution itself, but rather in a feature like
location and orientation of structural elements. Numerical results based on a dataset from structural biology are presented for
both the structure determination problem and the feature reconstruction problem.
Gitta Kutyniok Institute of Mathematics
University of Osnabrück
49069 Osnabrück, Germany
Data Separation via Sparse Approximation in Neurobiological Imaging
Along with the deluge of data we face today, it is not surprising that the complexity of such data is
also increasing. One instance of this phenomenon is the occurrence of multiple components, and
hence, analyzing such data typically involves a separation step. One intriguing example comes from neurobiological imaging, where images of neurons from brains affected by Alzheimer's disease are studied in the hope of detecting specific artifacts of this disease. The prominent parts of images of neurons are spines (pointlike structures) and dendrites (curvelike structures), which require separate analyses, for instance, counting the number of spines of a particular shape and determining the thickness of dendrites.
In this talk, we will first introduce a general methodology for separating morphologically distinct components using ideas
from sparse approximation. More precisely, this methodology utilizes two representation systems each providing sparse ap-
proximations of one of the components; the separation is then performed by thresholding. After introducing this method, we
provide an estimate for its accuracy. We then study this separation approach using the pair of wavelets (adapted to pointlike
structures) and shearlets (adapted to curvelike structures) for separating spines and dendrites. Finally, we discuss details of
the implementation and present numerical examples to illustrate the performance of our methodology.
*This is joint work with David Donoho (Stanford University) and Wang-Q Lim (University of Osnabrueck).
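A minimal 1-D analogue of this separation-by-thresholding idea, with spikes standing in for the wavelet-sparse pointlike part and DCT atoms standing in for the shearlet-sparse smooth part (both dictionaries, the threshold schedule, and the data are illustrative stand-ins, not the authors' actual systems), might look like:

```python
import numpy as np

def dct_basis(n):
    """Rows are orthonormal DCT-II basis vectors (D @ D.T == I)."""
    i = np.arange(n)
    D = np.cos(np.pi * (2 * i[None, :] + 1) * i[:, None] / (2 * n))
    D *= np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

def separate(y, n_iter=20, t_max=4.0, t_min=0.5):
    """Split a 1-D signal into a spike part (sparse in the standard basis)
    and a smooth part (sparse in the DCT basis) by alternating hard
    thresholding with a decreasing threshold."""
    n = len(y)
    D = dct_basis(n)
    spikes = np.zeros(n)
    smooth = np.zeros(n)
    for t in np.linspace(t_max, t_min, n_iter):
        r = y - smooth                 # explain the residual by spikes
        spikes = r * (np.abs(r) > t)
        c = D @ (y - spikes)           # explain the rest in the DCT domain
        c *= np.abs(c) > t
        smooth = D.T @ c
    return spikes, smooth
```

The alternation mirrors the two-system thresholding described in the abstract: each component is kept only where its own dictionary represents it sparsely, so the two morphologies end up in different components.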
Philipp Lamby Interdisciplinary Mathematics Institute
University of South Carolina
Columbia, SC 29208
TV-Regularized Algebraic Reconstruction for Limited-Angle Tomography
The recent advances in sparse recovery and compressed sensing have also led to new developments in
the field of tomographic reconstruction. First, in [1] it was shown that certain piecewise constant func-
tions can be recovered exactly from a small number of projections with a regularized backprojection
algorithm using the TV-norm as a prior. Later, TV-regularization has been used in connection with the algebraic reconstruc-
tion technique [2] and equally-sloped tomography [3], mainly for application in medical imaging.
In the present talk we consider this approach in the context of STEM tomography. Here, the challenges are that the projec-
tion data is rather noisy, because one has to be concerned about the damage the electron beam inflicts on the specimen, and
that the view angle is restricted due to the intricate mechanics of the holder around which the specimen is tilted.
For this reason the algebraic reconstruction technique seems particularly well suited: unlike Fourier methods, it can be formulated without the full angular range of views. Whether the results are satisfactory is of course another question,
which we try to answer (affirmatively!) in this talk. For this purpose we present preliminary numerical results using artificial
phantoms, discuss the choice of regularization functionals and constraints, and in particular address the question how to
solve the arising convex optimization problems. This is joint work with Peter Binev, Wolfgang Dahmen, Ronald DeVore,
Andreas Platen, Dan Savu, and Robert Sharpley, and continues the program outlined in [4].
References:
[1] E. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency in-
formation, IEEE Trans. on Information Theory 52, pp. 489-509 (2006).
[2] G.T. Herman and R. Davidi. Image reconstruction from a small number of projections, Inverse Problems 24 (2008).
[3] Y. Mao, B.P. Fahimian, S.J. Osher, and J. Miao. Development and optimization of regularized tomographic reconstruction algorithms
utilizing equally-sloped tomography, IEEE Trans. on Image Processing 19, pp. 1259-1268 (2010).
[4] P. Binev, W. Dahmen, R. DeVore, P. Lamby, D. Savu, and R. Sharpley. Compressed sensing and electron microscopy, IMI-Preprint
10:09, University of South Carolina (2010).
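The flavor of such a TV-regularized algebraic reconstruction can be conveyed by a toy 1-D version, in which random Gaussian measurements stand in for the limited-angle projection geometry and the step sizes are illustrative guesses rather than the authors' choices:

```python
import numpy as np

def tv_art(A, b, n_sweeps=200, tv_weight=0.05, eps=1e-8):
    """Kaczmarz (ART) sweeps interleaved with a diminishing gradient step
    on the smoothed total-variation penalty sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A * A, axis=1)
    for s in range(n_sweeps):
        for i in range(m):                       # one full ART sweep
            x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)             # derivative of smoothed |d|
        grad = np.zeros(n)
        grad[:-1] -= g                           # d(TV)/dx[i] = g[i-1] - g[i]
        grad[1:] += g
        x -= tv_weight / np.sqrt(s + 1) * grad   # diminishing TV step
    return x
```

With `tv_weight=0` this reduces to plain ART, which converges to the minimum-norm consistent solution; the TV steps bias the iteration toward piecewise-constant reconstructions, which is exactly the prior that makes limited data sufficient for phantoms of this type.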
Markus Navratil Institut für Geometrie und Praktische Mathematik
RWTH Aachen
52056 Aachen, Germany
How to Compare Patches of Electron Micrographs?— Adapting
Nonlocal Means to Denoising and Super-Resolution Reconstruction of
HAADF-STEM Images
The idea of averaging intensity values in a nonlocal manner, based on comparing image patches,
has turned out to be very appropriate for processing electron micrographs. On the one hand, the so-called nonlocal means exploit the high degree of repetitiveness in electron micrographs when denoising them; on the other hand, the algorithm overcomes the problem of accurate motion estimation when super-resolving time series of such images. However, the algorithm has to be adapted to this special kind of data.
A major advantage of the patch-based strategy is that it is easy to make patch comparisons incorporate the extraordinary
characteristics of scanning transmission electron microscope (STEM) images that are due to their sequential data acquisi-
tion.
Two different adapted similarity notions for comparing patches of electron micrographs are presented. Both have in com-
mon that they apply an initial regression to the patches making their comparison more robust to noise. One of them makes
use of a continuous representation of the lines of a patch as spline fits. The other one employs the statistical method of principal component analysis for a set of patches. The resulting algorithms are applied to high-angle annular dark field (HAADF) STEM images of MoVTeNbO M1 catalysts and to more beam-sensitive zeolites. We also propose a strategy for statistical
noise analysis offering a more reliable and more accurate choice of denoising parameters than visual inspection.
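A bare-bones version of the nonlocal means principle underlying this work (without the STEM-specific patch regressions described in the abstract; patch size, search window, and filtering parameter below are arbitrary illustrative choices) is:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=5, h=0.4):
    """Basic nonlocal means: each pixel becomes a weighted average of
    nearby pixels, weighted by the similarity of their surrounding patches."""
    pad = patch // 2
    padded = np.pad(img, pad, mode='reflect')
    H, W = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            p = padded[y:y + patch, x:x + patch]
            y0, y1 = max(0, y - search), min(H, y + search + 1)
            x0, x1 = max(0, x - search), min(W, x + search + 1)
            w_sum = v_sum = 0.0
            for yy in range(y0, y1):
                for xx in range(x0, x1):
                    q = padded[yy:yy + patch, xx:xx + patch]
                    w = np.exp(-np.mean((p - q) ** 2) / h ** 2)
                    w_sum += w
                    v_sum += w * img[yy, xx]
            out[y, x] = v_sum / w_sum
    return out
```

The patch comparison is precisely the step the abstract proposes to replace with spline-fit or PCA-based similarity notions; everything else in the algorithm stays the same.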
Mauro Maggioni Department of Mathematics
Duke University
Durham, NC 27708-0320
Multiscale Geometric Methods for Noisy Point Clouds in High Dimensions
We discuss techniques for the geometric multiscale analysis of intrinsically low-dimensional point
clouds. We first show how such techniques may be used to estimate the intrinsic dimension of data sets,
then discuss a novel geometric multiscale transform, based on what we call geometric wavelets, that
leads to novel approximation schemes for point clouds, and dictionary learning methods for data sets.
Finally, we apply similar techniques to model estimation when points are sampled from a measure supported on a union of
an unknown number of unknown planes of unknown dimension.
Stanley Osher Department of Mathematics
University of California, Los Angeles
Los Angeles, CA 90095-1555
TBA
Bryan Reed Lawrence Livermore National Laboratory
P.O. Box 808
Livermore, CA 94551-0808
Data Analysis Methods for Dynamic Transmission Electron Microscopy
The Dynamic Transmission Electron Microscope (DTEM) at Lawrence Livermore National Labora-
tory is a unique instrument able to capture images of fast-evolving microstructure with exposure
times of only 15 ns [1,2]. This is more than six orders of magnitude faster than conventional in situ
electron microscopy and has enabled new insights into phase transformations, chemical reactions,
and materials dynamics on otherwise inaccessible spatiotemporal scales.
The DTEM's extremely short exposure time limits the number of electrons and therefore the amount of information obtainable in a single shot. This fact is crucial in determining the tradeoff between spatial and temporal resolution, and it also
demands data analysis methods that make the best use of the available information. We have found that a variant of princi-
pal component analysis (PCA) with iterative rescaling is a valuable way to examine a complicated set of DTEM diffraction
experiments. The method filters out noise and irrelevant instrumental parameter variations, such that the great majority of
the interesting information is encoded in just a few parameters. By using long-exposure measurements to identify signposts
in the resulting abstract space, we can then extract information about phase, crystal grain size, texture, morphology, and tem-
perature from each short-pulse measurement.
References:
[1] T. LaGrange et al., Appl. Phys. Lett. 89, 044105 (2006).
[2] J. S. Kim et al., Science 321, 1472 (2008).
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory and was
supported by the Office of Science, the Office of Basic Energy Sciences, the Division of Materials Sciences and Engineering, and the
U.S. Department of Energy under contract No. DE-AC52-07NA27344. This work was funded by the Laboratory Directed Research and
Development Program at LLNL under project tracking code 08-ERD-032.
*Work presented is joint with M.K. Santala, T.B. LaGrange, G.H. Campbell, and N. D. Browning (Lawrence Livermore National
Laboratory, Livermore, CA).
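The core of such an analysis, stripped of the iterative rescaling the abstract describes, is ordinary PCA via the SVD. The sketch below (synthetic rank-2 data and NumPy only are assumptions) keeps the top-k components of a stack of flattened patterns, which is how "just a few parameters" come to encode most of the interesting information:

```python
import numpy as np

def pca_fit(patterns, k):
    """PCA via SVD. `patterns` is a stack of 2-D frames; rows of the data
    matrix are flattened frames. Returns (mean, components, scores)."""
    X = patterns.reshape(len(patterns), -1).astype(float)
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k], U[:, :k] * S[:k]

def pca_reconstruct(mu, components, scores):
    """Low-rank reconstruction of the stack from the kept components."""
    return scores @ components + mu
```

In the DTEM setting the scores of each short-pulse pattern would then be compared against "signposts" computed from long-exposure measurements; that calibration step is not part of this sketch.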
George Rogers Protection, Measurements & Effects Branch
Naval Surface Warfare Center
Dahlgren, VA 22448
An Electro-Magnetic Signatures Approach to the Exploitation of
Polarimetric Synthetic Aperture Radar Data
The need to screen large areas to find missing light airplanes (e.g., the Steve Fossett crash) motivated
the NASA "Synthetic Aperture Radar for Search And Rescue (SAR2)" investigation into the use of
airborne Polarimetric Synthetic Aperture Radar (PolSAR) to detect crash sites. This presentation pro-
vides an overview of both the NASA effort and subsequent research into the use of PolSAR data to
detect specific objects in large data sets. After a brief introduction to SAR and PolSAR, a physics-based polarimetric decom-
position is presented along with the electro-magnetic signatures approach that we have developed. Results are presented
from a SAR2 data collection along with some ground truth examples of detections. The longer wavelengths needed for foli-
age penetration (L-Band, UHF) are comparable in size to most of the objects that might be of interest, meaning that the in-
teraction of the PolSAR pulses with the objects is Mie region scattering. This adds additional complexity to the problem,
which is the final area addressed in the presentation.
Daniel Savu Interdisciplinary Mathematics Institute
University of South Carolina
Columbia, SC 29208
A Basic Concept in Compressive Sensing Applied to a STEM Image
Reconstruction Problem
Compressive Sensing is a way of acquiring and reconstructing a given signal under the assumption that it has only a few nonzero coefficients in a convenient representation (a sparse signal), or only a few significant coefficients (a compressible signal).
We interpret a STEM (Scanning Transmission Electron Microscopy)-HAADF (High-Angle Annular Dark-Field) image as
the sparse representation of the atomic columns in the sample and we would like to exploit this sparsity by a basic idea in
Compressive Sensing, namely to use a small number of appropriate linear measurements of the sample in low-resolution and
some suitable algorithm to reconstruct the high-resolution image.
We demonstrate this concept with experiments that target the reconstruction of a computer simulated image and of a micro-
graph when we simulate low dose STEM measurements, and identify possible developments of this technique as a useful
tool in STEM imaging.
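A standard sparse-recovery building block of the kind alluded to here is iterative soft-thresholding (ISTA) for the l1-penalized least-squares problem. In the sketch below, random Gaussian measurements stand in for actual low-resolution STEM measurements, and the penalty weight and iteration count are illustrative choices:

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x
```

With far fewer measurements than unknowns, the l1 penalty still recovers the sparse vector of "atomic column" amplitudes, which is the basic phenomenon the abstract proposes to exploit.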
Zuowei Shen Department of Mathematics
National University of Singapore
Singapore 119076
MRA Based Wavelet Frames and Applications
One of the major driving forces in the area of applied and computational harmonic analysis during the last
two decades is the development and the analysis of redundant systems that produce sparse approximations
for classes of functions of interest. Such redundant systems include wavelet frames, ridgelets, curvelets and shearlets, to
name a few. This talk focuses on tight wavelet frames that are derived from multiresolution analysis and their applications
in imaging. The pillar of this theory is the unitary extension principle and its various generalizations; hence we will first
give a brief survey on the development of extension principles. The extension principles allow for systematic constructions
of wavelet frames that can be tailored to, and effectively used in, various problems in imaging science. We will discuss some
of these applications of wavelet frames. The discussion will include frame-based image analysis and restorations, image
inpainting, image denoising, image deblurring and blind deblurring, image decomposition, and image segmentation.
Otmar Scherzer Computational Science Center
University of Vienna
Nordbergstrasse 15, 1090 Vienna, Austria
Variational Methods in Banach Spaces for the Solution of Inverse Problems
In the talk we give an overview of variational regularization methods in Banach spaces. The theory is an extension of the classical theory of regularization methods in Hilbert spaces. An essential ingredient of such an analysis is source conditions, which are generalized and formulated for the Banach space setting. Sparsity regularization, which has proven to be a powerful tool in imaging, also fits into this setting. One of the powerful results proven by Candes et al. is a linear convergence rate result, which can also be obtained (even in an infinite-dimensional setting) from variational regularization theory in Banach spaces. Finally, we present some applications to photoacoustic and radar imaging.
*This is joint work with M. Grasmair, M. Haltmeier, C. Pöschl and E. Resmerita.
Vladimir Temlyakov Interdisciplinary Mathematics Institute
University of South Carolina
Columbia, SC 29208
Greedy Approximation in Compressed Sensing
While the l1 minimization technique plays an important role in designing computationally tractable
recovery methods in compressed sensing, its complexity is still impractical for many applications. An
attractive alternative to the l1 minimization is a family of greedy algorithms. We will discuss several
greedy algorithms from the point of view of their practical and theoretical performance.
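As a concrete member of the greedy family, a minimal orthogonal matching pursuit (one of the algorithms typically discussed in this context; the test data below is synthetic) can be written as:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the column most correlated
    with the residual, then refit by least squares on the chosen support."""
    support = []
    residual = y.astype(float).copy()
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

Each iteration costs one matrix-vector product and one small least-squares solve, which is why greedy methods are attractive when full l1 minimization is too expensive.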
Jörn Ungermann Institut für Chemie und Dynamik der Geosphäre (ICG)
Forschungszentrum Jülich GmbH
52425 Jülich, Germany
3-D Tomographic Reconstruction of Atmospheric Trace Gas
Concentrations for Infrared Limb Imagers
Atmospheric infrared limb sounding measures spectrally resolved infrared radiation emitted by mo-
lecular vibrational and rotational bands. By scanning the layers of the atmosphere vertically in the
limb, it enables the deduction of the vertical structure and composition of the atmosphere. The special
viewing geometry of limb sounders inherently provides very good vertical sampling and consequently vertical resolution, but has difficulty delivering good horizontal resolution along the line of
sight. One way to improve the achievable resolution in the desired way is tomography, which employs multiple views of the
same volume from different directions to produce a spatially resolved reconstruction of the examined object.
GLORIA (Gimballed Limb Observer for Radiance Imaging of the Atmosphere) is a new remote sensing instrument essen-
tially combining a Fourier transform infrared spectrometer with a two-dimensional (2-D) detector array in combination with
a highly flexible gimbal mount. Its ability to pan the detector array allows for tomographic measurements of mesoscale
events for a wide variety of atmospheric constituents.
This talk presents the inverse problem posed by evaluating infrared limb sounder measurements. A 3-D Tikhonov regularization employing a priori information about the atmosphere, combined with a Levenberg-Marquardt minimizer, is used to perform the inversion. This scheme is used to explore the capabilities of GLORIA to sound the atmosphere in full 3-D. Sparse matrix representations, iterative solvers, and an adjoint forward model reduce the computational effort. The long duration of airborne tomographic measurements necessitates taking atmospheric variability into account. Consequently, a framework is presented that allows the influence of advection on 3-D atmospheric tomographic retrievals to be examined, quantified, and compensated.
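The retrieval scheme described above, Tikhonov regularization minimized with Levenberg-Marquardt, can be sketched on a toy problem. The linear forward model, noise level, and regularization weight below are illustrative assumptions, not GLORIA's actual radiative-transfer model:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 1-D analogue of the 3-D retrieval: recover a state vector x from
# noisy measurements y = F(x) + noise, with a Tikhonov penalty pulling
# the solution toward an a priori state x_a.
rng = np.random.default_rng(0)
n_state, n_meas = 50, 20                # under-determined, as in limb sounding
F = rng.normal(size=(n_meas, n_state))  # stand-in for the forward model
x_true = np.sin(np.linspace(0, np.pi, n_state))
y = F @ x_true + 0.01 * rng.normal(size=n_meas)

x_a = np.zeros(n_state)                 # a priori state (assumed)
lam = 0.5                               # regularization strength (assumed)

def residuals(x):
    # Stacked residual: data misfit plus Tikhonov penalty sqrt(lam)*(x - x_a)
    return np.concatenate([F @ x - y, np.sqrt(lam) * (x - x_a)])

# scipy's method="lm" is a Levenberg-Marquardt implementation
sol = least_squares(residuals, x_a, method="lm")
print("relative error:", np.linalg.norm(sol.x - x_true) / np.linalg.norm(x_true))
```

In the real retrieval the forward model is nonlinear and its adjoint supplies the Jacobian-vector products; here finite differences suffice for the sketch.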
Paul Voyles Department of Materials Science & Engineering
University of Wisconsin– Madison
Madison, WI 53706-1595
Angular Correlations and New Structure in Bulk Metallic Glass from
Coherent Electron Nanodiffraction in the STEM
Coherent electron nanodiffraction contains a wealth of structural data. My group uses it to study the structure of amorphous materials. Extracting knowledge from this wealth of data is the hard part. I will discuss two methods we are using to address this problem. The first method is studying the angular correlations within the nanodiffraction patterns, which should reveal the rotational symmetry elements of the local atomic structure. Preliminary results support five-fold rotational symmetry, consistent with the currently popular icosahedral structural model, but also support four- and six-fold symmetry, which are consistent with crystalline packing. The second method is reverse Monte Carlo modeling, which generates computer structural models consistent with the data. These models contain some icosahedral and some non-icosahedral clusters of atoms.
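The angular-correlation analysis mentioned above can be illustrated with a small sketch. It assumes the diffraction pattern has already been resampled onto polar coordinates; the synthetic four-fold pattern is purely illustrative:

```python
import numpy as np

def angular_correlation(I):
    """Angular autocorrelation C(k, d) = <I(k,t) I(k,t+d)>_t / <I(k,t)>_t^2 - 1
    for a diffraction pattern resampled onto polar coordinates I[k, t]
    (radius x azimuth). A peak in C at d = 2*pi/n indicates n-fold
    rotational symmetry at that scattering vector k."""
    mean = I.mean(axis=1, keepdims=True)
    # FFT along the azimuth computes the circular autocorrelation efficiently
    F = np.fft.fft(I, axis=1)
    corr = np.fft.ifft(F * np.conj(F), axis=1).real / I.shape[1]
    return corr / (mean ** 2) - 1.0

# Synthetic pattern with four-fold symmetry at one radius (assumed test data)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
I = np.ones((10, 360))
I[5] += 0.5 * np.cos(4 * theta)
C = angular_correlation(I)
```

A Fourier transform of C over the angular lag then isolates each n-fold symmetry component separately.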
Masashi Watanabe Department of Materials Science and Engineering
Lehigh University
Bethlehem, PA 18015
Evaluation of Effective Data-Preprocessing for Principal Component
Analysis on Spectrum-Imaging Datasets
Spectrum imaging (SI) is one of the most essential approaches to characterizing materials at the nanoscale in (Scanning) Transmission Electron Microscopy ((S)TEM). This approach is nowadays routinely applicable to electron energy-loss spectrometry (EELS) and X-ray energy-dispersive spectrometry (XEDS) in STEM and to energy-filtering in TEM (EFTEM). Although the SI approach is useful, analysis of SI datasets may not be straightforward due to their large size and the unknown variables they contain. Principal component analysis (PCA) can handle large-scale datasets efficiently and extract statistically significant features. PCA is especially useful for signals that are weak but repeated many times across an SI dataset. Once the statistically significant features are revealed by PCA, noise-reduced datasets can be reconstructed. PCA has been applied to various SI datasets.
The key factor in applying PCA is how to distinguish weak features (which are often more important than the statistically significant features in a dataset) from heavy noise in SI datasets. To extract such hidden features effectively from datasets under heavy noise, several data preprocessing methods applied prior to PCA, such as centering, scaling, and more sophisticated processing, have been proposed. In this talk, several preprocessing methods will be evaluated, especially for SI datasets acquired at the nanoscale by the STEM-EELS, STEM-XEDS, and EFTEM approaches.
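As a minimal illustration of PCA-based noise reduction with centering and scaling, the sketch below treats an SI dataset as a (pixels x channels) matrix; the Poisson-motivated weighting is one simple scaling choice of the kind alluded to above, and all parameter values are assumptions:

```python
import numpy as np

def pca_denoise(si, n_components, scale=True):
    """PCA noise reduction of a spectrum-image (minimal sketch).
    si: (n_pixels, n_channels) array, each row one spectrum.
    Centering and, optionally, Poisson-motivated scaling are applied
    before PCA and undone after reconstruction."""
    mean = si.mean(axis=0)
    # Weighting by 1/sqrt(mean counts) approximately equalizes Poisson noise
    w = 1.0 / np.sqrt(np.maximum(mean, 1e-12)) if scale else np.ones_like(mean)
    X = (si - mean) * w
    # SVD-based PCA; keep only the leading components
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = U[:, :n_components] * s[:n_components] @ Vt[:n_components]
    return Xr / w + mean

# Synthetic low-rank spectrum-image with Poisson counting noise (assumed data)
rng = np.random.default_rng(1)
clean = np.outer(rng.random(200), rng.random(64)) * 100 + 50
noisy = rng.poisson(clean).astype(float)
denoised = pca_denoise(noisy, n_components=2)
```

The number of components to retain is the critical choice in practice; scree plots or noise-floor criteria are commonly used to pick it.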
POSTER PRESENTATIONS
Andreas Platen Institut für Geometrie und Praktische Mathematik
RWTH Aachen
52056 Aachen, Germany
Sequential Subspace Optimization Method for Tomography Applications
The Sequential Subspace Optimization method (SESOP), presented in this poster, is a nonlinear optimizer for high-dimensional smooth unconstrained minimization problems. The main idea of SESOP is to find, at each iteration, the minimum of the function over an appropriate subspace. A suitable choice of subspace guarantees convergence to the exact solution at a rate that is quadratic in the number of iterations.
We apply SESOP to an abstract tomography model problem. In this application only a few measurements can be taken, in order to minimize the exposure dose of the specimen, which leads to an under-determined system of linear equations. Under the assumption that the exact reconstruction is piecewise constant, i.e., has small total variation, we use a TV-norm penalty to select the solution with the smallest total variation. The resulting problem is nonlinear, and SESOP can be applied to it. We show that the reconstruction quality of SESOP is much higher than that of Kaczmarz's algorithm, the usual method for this class of reconstruction problems, so SESOP may be a very good alternative in tomography applications.
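The core SESOP idea, minimizing over a small subspace at each iteration, can be sketched as below on a hypothetical 2-D quadratic. The subspace choice (current gradient plus recent steps) and the inner solver are simplifications of the actual method:

```python
import numpy as np
from scipy.optimize import minimize

def sesop(f, grad, x0, n_iter=20, memory=2):
    """Minimal sketch of Sequential Subspace Optimization: at each step,
    minimize f over the affine subspace spanned by the current gradient
    and the last few update directions (an assumed subspace choice)."""
    x = x0.copy()
    dirs = []
    for _ in range(n_iter):
        g = grad(x)
        basis = [g] + dirs[-memory:]
        B = np.stack(basis, axis=1)          # columns span the subspace
        # Inner low-dimensional problem: minimize f(x + B a) over a
        res = minimize(lambda a, x=x: f(x + B @ a),
                       np.zeros(B.shape[1]),
                       jac=lambda a, x=x: B.T @ grad(x + B @ a))
        step = B @ res.x
        x = x + step
        dirs.append(step)
    return x

# Hypothetical quadratic test problem: f(x) = 0.5 x'Ax - b'x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
x = sesop(f, grad, np.zeros(2))
```

On a quadratic, this subspace recursion behaves like conjugate gradients; the strength of SESOP is that the same scheme applies to smooth nonlinear objectives such as TV-penalized reconstruction.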
Sonali Mitra Department of Chemistry & Biochemistry
University of South Carolina
Columbia, SC 29208
HAADF-STEM Image Simulation Study for Structural
Characterization
Aberration-corrected High Angle Annular Dark Field (HAADF) STEM (Scanning Transmission Electron Microscopy) images of industrially important molybdenum-vanadium bronze based selective oxidation catalysts were studied, and quantitative comparisons of atomic coordinates and metal site occupancies were made with X-ray and neutron powder diffraction Rietveld refinements. Multislice frozen-phonon HAADF-STEM image simulation studies were performed on the M1 phase of the Mo-V bronze based catalyst. Catalysts at operating temperature are a new frontier in research on heterogeneous catalysts. The Aduro heated holder system for electron microscopy released by Protochips will enable us to study different materials at high temperatures comparable to real-world operating conditions. An initial analysis was done by HAADF-STEM image simulation of thermal lattice expansion in single crystals of silicon, SrTiO3, and MgO, to compare the results with experimental high-temperature HAADF-STEM images of these crystals. In another study, simulations were performed on the Sr3AMO4F (M=Al, Ga) family of anti-perovskite structures.
BIOGRAPHY—Workshop Moderator
Wolfgang Dahmen1, 2 1Institut für Geometrie und Praktische Mathematik
RWTH Aachen
52056 Aachen, Germany
Wolfgang Dahmen was born in 1949 in Linnich, Germany. He graduated in 1974 with a degree in Mathematics with a secondary subject in Physics. He received his Dr.rer.nat. in 1976 and his habilitation in Mathematics in 1981.
Dr. Dahmen is the recipient of numerous awards, including the 2002 DFG Gottfried Wilhelm
Leibniz-Prize (the highest award in German Scientific Research). His work combines the devel-
opment of new theoretical concepts with applications including adaptive multi-scale methods, on-line and real-time optimization for process control, and computer aided geometric design. He is
a professor and head of the Institut für Geometrie und Praktische Mathematik at RWTH Aachen. Professor Dahmen special-
izes in Applied and Numerical Analysis, Approximation Theory, Mathematical Learning Theory, and interdisciplinary ap-
plications. He has over 185 publications on topics such as:
Trigonometric approximation;
Multivariate splines;
Subdivision algorithms;
Adaptive wavelet methods for partial differential and boundary integral equations;
Learning Theory;
Compressed sensing.
For more information, please visit Professor Dahmen’s homepage: http://www.igpm.rwth-aachen.de/en/dahmen
2Interdisciplinary Mathematics Institute
University of South Carolina
Columbia, SC 29208
Organizing Committee:
Peter Binev
Douglas Blom
Wolfgang Dahmen
Robert Sharpley
Thomas Vogt
The “New Frontiers in Imaging and Sensing” Workshop was made possible through
generous support from the College of Arts and Sciences, University of South Carolina.