
Adaptive flat multiresolution multiplexed computational imaging architecture utilizing micromirror arrays to steer subimager fields of view

Marc P. Christensen, Vikrant Bhakta, Dinesh Rajan, Tejaswini Mirani, Scott C. Douglas, Sally L. Wood, and Michael W. Haney

A thin, agile, multiresolution computational imaging sensor architecture, termed PANOPTES (processing arrays of Nyquist-limited observations to produce a thin electro-optic sensor), which utilizes arrays of microelectromechanical mirrors to adaptively redirect the fields of view of multiple low-resolution subimagers, is described. An information theory-based algorithm adapts the system and restores the image. The modulation transfer function (MTF) effects of utilizing micromirror arrays to steer imaging systems are analyzed, and computational methods for combining data collected from systems with differing MTFs are presented. © 2006 Optical Society of America

OCIS codes: 110.2970, 230.3990, 100.6640.

1. Background and Motivation

The desire for information superiority in matters of national security has created a requirement for pervasive optical sensors with flat form factors. Traditional imaging systems contain a lens, and the quality of the resulting image is typically proportional to the physical size of the lens used. Both the light-gathering capability and the resolving power of the imaging sensor derive directly from the size of the optical elements in such systems. This fact ultimately results in imaging devices that are bulky and cubelike—a constraint that has persisted since their invention. The costs associated with the design, manufacture, and packaging of such physically unwieldy systems have made them a relatively scarce resource in many applications in which their pervasive use would be beneficial. One only needs to consider recent developments in flat-panel technologies to gauge the possibilities for a flat imaging sensor. Flat displays are easier to place, take up less physical space, and are ultimately more useful because of their form factor.

The creation of a flat imaging sensor requires a paradigm shift in the imaging-system approach, coupled with the proper selection and integration of emerging technologies.1,2 Traditional imaging sensors utilize a lens or mirror to form the image that is then sampled onto a detector array. A thin optical sensor would be restricted to using smaller optical elements and would therefore require additional computation to augment image formation to achieve high resolution. Flat imaging sensors based on arrays of micro-optic elements have been proposed and prototyped.3 These sensors allocate additional imaging resources to each region to provide the necessary data for computation to enhance the inherent low resolution of the flat sensor. One constraint of the approach in Ref. 3 is the fixed overlap of the imaging resources, requiring the design to be optimized for a specific resolution and field of view (FOV). Clearly, adaptive sensor elements would increase the utility of flat cameras.

Technologies developed in the late 1990s offer an opportunity to create a useful and flat micro-optic imaging sensor. Micromirror arrays similar to the ones used in many laptop projectors today have been utilized in novel imaging and signal-processing systems.4 The precision and optical quality of these micromirror arrays make them attractive candidates

M. P. Christensen ([email protected]), V. Bhakta, D. Rajan, T. Mirani, and S. C. Douglas are with the Department of Electrical Engineering, Southern Methodist University, 6251 Airline Road, Dallas, Texas 75275-0338. S. L. Wood is with the Department of Electrical Engineering, Santa Clara University, 500 El Camino Real, Santa Clara, California 95053. M. W. Haney is with the Department of Computer Engineering, University of Delaware, Newark, Delaware 19716.

Received 24 August 2005; accepted 11 December 2005; posted 11 January 2006 (Doc. ID 64197).

0003-6935/06/132884-09$15.00/0 © 2006 Optical Society of America

2884 APPLIED OPTICS / Vol. 45, No. 13 / 1 May 2006

for a flat micro-optic imaging sensor. The use of analog-steerable micromirror arrays makes it possible to direct imaging resources at will.5 Envision a flat optical sensor that contains a multitude of low-resolution micro-optic sensors, each of which is being steered toward regions of interest by using precision micromirror arrays—an attentive multiresolution imager. Multiple low-resolution sensors interrogate these regions of interest, and the resulting data are digitally processed to extract high-resolution detail from the data. Regions with no features of interest are imaged at relatively low resolution with low numbers of subimagers (SIs), whereas areas of interest are continually updated with increasing numbers of SIs, resulting in improved resolution up to the optical resolution limit. Such a system can approach the performance of a high-resolution bulk imaging device and could even potentially surpass it in situations in which only a small portion of the image field is of interest.

There are numerous possible applications for such flat imaging devices. An unmanned aerial vehicle (UAV) could be tiled with flat imaging sensors that survey the entire scene simultaneously. A soldier's helmet could contain many lightweight flat imaging sensors that report data not only to the soldier but also to command operations, all without adding physical weight or hindering the user's movements. Physical security assets could have hallways tiled with attentive flat sensors that would prevent people from determining whether they are being observed. Form factor is the single greatest obstacle to widespread image gathering today, and the necessary technologies have emerged to fundamentally change the way we collect images.

In this paper we summarize the computational subsampling approach in Section 2. Next, in Section 3, we introduce an adaptive multiresolution version of multiplex imaging by using an information theoretic metric to drive the adaptation. In Section 4 we describe the optical implications of utilizing a micromirror array for steering individual SIs and present a method for intelligently designing diversity into the SI array. We present our conclusions in Section 5.

2. Computational Subsampling Approach

To clarify the use of computational imaging for subpixel resolution as used in Ref. 3 and adaptively in the approach described herein, we work through an example system. Let us consider a typical digital single lens reflex camera as a candidate design for reducing the form factor by a factor of 10 and look at the ramifications. The baseline camera has a focal length of 5 cm, a lens aperture of 2.5 cm, and a detector pixel size of 10 μm. It follows that the instantaneous field of view (IFOV) of a single detector is 10 μm/5 cm = 0.2 mrad. If we aim to reduce the working distance of this baseline system by an order of magnitude, we can consider using instead a lens with a focal length of 5 mm and an aperture of 2.5 mm. However, we are unable to reduce the size of the pixels in the detector commensurately (to 1 μm) owing to both manufacturing constraints and light-collection [signal-to-noise-ratio (SNR)] constraints. If we keep our 10 μm detector pixel size, then the new IFOV is 10 μm/5 mm = 2.0 mrad. We have lost an order of magnitude in angular resolution of the sensor. Yet the diffraction-limited spot size remains the same, since the f/#s of the two lenses are equal. In fact, the diffraction-limited spot size of the system remains ~5 times smaller than our detector size. Herein lies the benefit of the computational subpixel processing approach.
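The scaling argument above can be checked numerically. The sketch below uses the focal lengths, apertures, and pixel pitch from the text; the 550 nm wavelength for the diffraction-spot estimate is an assumption.

```python
# Check the form-factor scaling: IFOV = pixel / focal length, and an
# Airy-disk diameter of 2.44 * lambda * f/# (wavelength is assumed).
pixel = 10e-6            # 10 um detector pixel, from the text
wavelength = 550e-9      # assumed mid-visible wavelength

for name, f, D in [("baseline", 5e-2, 2.5e-2), ("miniaturized", 5e-3, 2.5e-3)]:
    ifov_mrad = pixel / f * 1e3          # instantaneous field of view
    f_number = f / D                     # f/2 in both cases
    spot_um = 2.44 * wavelength * f_number * 1e6
    print(f"{name}: IFOV = {ifov_mrad:.1f} mrad, f/# = {f_number:.0f}, "
          f"spot = {spot_um:.2f} um")
```

Both systems share the same f/2 diffraction limit, so the detector sampling, not the optics, limits the miniaturized design; that gap is what the subpixel processing recovers.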

Replicating the miniaturized optical system many times with precise offsets, which are less than individual detector IFOVs, allows superresolution signal-processing techniques to be applied to reconstruct up to the diffraction limit of the optical system.6 Now, instead of each detector having a nonoverlapping IFOV of 0.2 mrad for a total FOV of 200 mrad (with a 1000 × 1000 detector array) as in the baseline camera, we have an array of 10 × 10 SIs, each with 100 × 100 pixels and a corresponding FOV of 200 mrad, but interleaved to sample the object space to allow a superresolution reconstruction algorithm to restore the image to a 0.2 mrad resolution. It should be noted that in this paper we are discussing superresolution in a signal-processing sense with a goal of approaching the fundamental optically limited resolution, not superresolution in an optical sense (near-field effects) as an attempt to surpass the diffraction limit. In this paper we will consider an adaptive approach to subpixel overlapping IFOVs, which will be created through the use of two-dimensional (2-D) analog micromirror arrays.
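The interleaved sampling budget above can be illustrated along one axis; the per-SI offset pattern (each SI shifted by 1/10 of an IFOV) is an assumption chosen for illustration.

```python
# One axis of the SI array: 10 SIs, each sampling the 200 mrad FOV at a
# coarse 2.0 mrad pitch, offset from its neighbors by 1/10 of an IFOV.
n_si, n_pix, ifov = 10, 100, 2.0   # SIs per axis, pixels per SI, IFOV in mrad

samples = set()
for i in range(n_si):              # SI i is offset by i/10 of an IFOV
    for k in range(n_pix):
        samples.add(round(k * ifov + i * ifov / n_si, 6))

print(len(samples))                # 1000 distinct sample centers per axis
print(min(samples), max(samples))  # spanning the 200 mrad field
```

The ten coarse grids interleave into an effective 0.2 mrad sampling grid, matching the 1000-sample baseline along each axis.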

3. Adaptive Multiplexed Computational Imaging

We introduce a novel flat-image-sensor concept termed PANOPTES (processing arrays of Nyquist-limited observations to produce a thin electro-optic sensor).7,8 It derives its name from Argos Panoptes, a mythological giant with 100 eyes who was all seeing (panoptes) and was thought to be the ultimate sentry. Like this mythical character, the PANOPTES architecture seeks to extract all the relevant information from a scene, yet it is capable of adapting to any situation. This objective can be achieved with an order-of-magnitude decrease in sensor thickness relative to a conventional camera with similar performance. The proposed architecture can be likened to an adaptable and steerable FOV version of thin observation module by bound optics (TOMBO).3 Adaptability is paramount to the success of an imaging sensor's attempt to meet the goal of a flat form factor while maintaining a high image quality.

Based on the information theory of imaging described in Refs. 9 and 10, the spatial information available within a scene is typically not uniformly distributed. Take, for example, Fig. 1, which shows an aerial view of airplanes parked at an airport terminal. Figure 2 is a mapping of local entropy at lower resolution that corresponds to the local information content (e.g., local entropy measure; see Ref. 10) of the image in Fig. 1. From Fig. 2, it is clear that there


is a strong correlation between our subjective view of information-rich regions of the image and the spatial entropy, which can be exploited in designing an adaptive imaging sensor. It is also evident from Fig. 2 that it is wasteful to uniformly apply limited imaging resources as a traditional camera would. Instead, a strategy is needed to optimize the sensing device's information efficiency. For a given scene, this efficiency is measured as the number of bits of information per bit of data from the sensor. The goal of adaptability in the PANOPTES architecture is to apply imaging resources to match the information content of the scene and therefore to approach the performance of a traditional imaging sensor while reducing the thickness of the sensor by an order of magnitude. This architecture achieves the required adaptability by using micromirror technology originally developed for photonic switching.5 Figure 3 is a schematic depiction of the concept. It is a tiled architecture, in which each tile consists of a small array of detectors, an optical quality 2-D analog micromirror array, and a transparent superstrate containing the required micro-optic elements. The scene is imaged onto the relatively low-resolution detector array by a folded optical system that has the micromirror array at its pupil. Such configurations have been pursued in parallel with this effort by others11 for steering single-aperture, noncomputational, bulky imaging systems. Locating the micromirror array at the pupil of the imaging system makes it possible to steer the FOV of the detector array. Having the capability of adapting the FOVs of the SIs removes the requirement that the SIs cover the entire scene at once and thereby allows them to have a reduced FOV and improved angular resolution. In turn, this improved physical angular resolution relaxes the demands put on the reconstruction algorithms.
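As a rough illustration of such an information map (the paper's measure is based on the local power spectral density of Ref. 9; the block-wise histogram entropy below is a simplified stand-in):

```python
import numpy as np

def local_entropy_map(img, block=16, bins=32):
    """Block-wise Shannon entropy of pixel intensities: a simplified
    stand-in for the paper's local information measure. Assumes 8-bit input."""
    rows, cols = img.shape[0] // block, img.shape[1] // block
    ent = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = img[r*block:(r+1)*block, c*block:(c+1)*block]
            p, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = p[p > 0] / p.sum()               # normalized bin probabilities
            ent[r, c] = -np.sum(p * np.log2(p))  # bits per block
    return ent

# Flat regions score near zero; textured regions score high.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] = rng.integers(0, 256, (64, 32))  # right half: random texture
ent = local_entropy_map(img)
print(ent[:, :2].max(), ent[:, 2:].min())     # left half near 0, right half high
```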

Information theory-based metrics drive PANOPTES to reorient micromirror arrays to enhance the information rate obtainable from the visual scene over several frames by using a feedback mechanism. This feedback enables new adaptive algorithms, ones in which the actual sensor adapts to acquire the desired data, instead of those that merely postprocess the data according to the signal statistics. Additionally, the capability of creating precise absolute geometric changes in sensor content via micromirror positioning allows a structure to be built that admits a simple, local computational structure that can easily be distributed across multiple digital processors.

A preliminary reconstruction algorithm was developed as part of the initial validation of the PANOPTES concept. This algorithm fuses information from multiple sensors whose FOVs are overlapped and slightly shifted (by amounts corresponding to partial pixels in the raw low-resolution imagery). Figure 4 is an example low-resolution image that would be obtained from a SI inspection of the object of Fig. 1 (from the superimposed square). The output of each SI is highly pixelated and does not contain sufficient information about the object. As shown in Fig. 5, we reconstructed the image from a collection of many such overlapping (by a factor of 8 in each direction) low-resolution subimages by using a simple Wiener filter.12 For simplicity, the reconstruction is carried out separately over nonoverlapping regions of the object. The object in the region of interest is denoted by column vector f, and the output of a group of SIs is collectively represented by the column vector g. We use a linear model to represent the effect of the optics and detector. Thus g = Hf + v, where v is a

Fig. 1. Aerial image (original scene) of an airport terminal (from University of Southern California Signal and Image Processing Institute image database). The square represents the FOV of a single SI.

Fig. 2. Information content (entropy) map used to identify regions of interest for the sample airport image in Fig. 1.


noise vector that is modeled as Gaussian with zero mean and covariance matrix σ²I, and H represents the combined effect of the optical blurring and detector sampling. The reconstructed image f̂ obtained by using the simple Wiener filter is given by f̂ = RH^t(HRH^t + σ²I)^(−1)g, where R is the covariance of the image defined as R = E[ff^t].
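A toy one-dimensional version of this estimator can be sketched as follows; the rows of H (shifted box blurs standing in for the combined optical blur and detector sampling) and the white prior covariance R are assumptions for illustration.

```python
import numpy as np

# Toy 1-D Wiener reconstruction: g = H f + v, f_hat = R H^t (H R H^t + s^2 I)^-1 g.
rng = np.random.default_rng(1)
n, n_obs, sigma = 32, 64, 0.05
f = rng.standard_normal(n)                   # unknown zero-mean object

H = np.zeros((n_obs, n))                     # shifted, overlapping 4-tap box blurs
for i in range(n_obs):
    start = (i * 3) % (n - 3)
    H[i, start:start + 4] = 0.25

R = np.eye(n)                                # assumed white prior covariance E[f f^t]
g = H @ f + sigma * rng.standard_normal(n_obs)
f_hat = R @ H.T @ np.linalg.solve(H @ R @ H.T + sigma**2 * np.eye(n_obs), g)
print(np.mean((f - f_hat) ** 2))             # residual MSE, below the prior variance of 1
```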

We now illustrate the advantage of adaptively allocating available resources (sensors) by using the following simulation. The adaptive placement of multiple sensors is based on the entropy map I(x, y) (shown in Fig. 2), where x and y represent the spatial positions within the region of interest. The entropy map, which is calculated as the average of the normalized power spectral density in the region of interest,9 gives an indication of the information content in each region of the image. This entropy map is then linearly scaled to ensure that the entropy values lie in the range 1, . . . , M, where M determines the maximum resolution desired in the image. The number of SIs N(x, y) that are focused on region (x, y) is thus given by

N(x, y) = 1 + [(M − 1)(I(x, y) − Imin)]/(Imax − Imin),   (1)

where Imin and Imax are the minimum and maximum I(x, y) over all x and y, respectively. Note that each SI does not completely cover the desired region but has different amounts of overlap with adjacent SIs. We use peak SNR (PSNR) as the metric for quantifying the quality of the reconstruction. This PSNR is defined as PSNR = 10 log(255²/mse), where mse = E[(f − f̂)²] is the mean-square error between the original image and the reconstructed image. The variation of PSNR for different values of M is plotted in Fig. 6. Note that the PSNR is plotted against the total number of SIs used to focus on the image for different values of M. The total number of SIs used is given by Σx,y N(x, y). For comparison and to show the benefit of adaptation, we also plot in Fig. 6 the variation of PSNR with the number of SIs, where all the SIs are uniformly allocated to the entire scene. The advantage of adaptive allocation of SIs is clearly evident from Fig. 6. For instance, by using the same number of SIs, the adaptive information content–based allocation achieves a PSNR that is ~1.5 dB higher than that of uniform resource allocation. Alternatively, the adaptive scheme achieves a PSNR of 30 dB by using less than half of the number of SIs required with an equal allocation. It should also be emphasized that the PSNR metric only partially captures the advantages of the adaptive scheme. For example, to achieve the same peak resolution (corresponding to 1/64th the area of a pixel), the adaptive scheme requires one fifth as many SIs as does an approach with a uniform allocation strategy.

Fig. 3. PANOPTES tiled approach. Each SI consists of a micromirror array, a reflective optical system in a single superstrate, and a low-resolution detector array.

Fig. 4. Output of a single SI. The FOV of this SI is the square shown in Fig. 1.

Fig. 5. Reconstructed figure using data from multiple SIs that are adaptively assigned to the scene of interest.

Fig. 6. Plot of PSNR improvement achieved by using adaptive SI allocation and equal SI allocation.

As expected, the quality of the reconstructed image increases with an increasing number of overlapped SIs. Operating points of the system can be chosen based on performance curves that measure information content. The minimum number of SIs required to apply to a particular part of the image to obtain the desired quality and information can be determined by analyses such as the one that produced Fig. 6. These analyses will result in adaptive algorithms that drive SI resource allocation.

It should be noted that the adaptive allocation of SIs to the different regions according to the information content used in Eq. (1) is not unique. Other nonlinear allocations that optimize the PSNR (or any other desired metric) will be investigated in future studies. The heuristic allocation defined by Eq. (1) is used to illustrate the potential performance improvements achieved by using adaptive allocation of imaging resources.
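A small sketch of the heuristic of Eq. (1) and of the PSNR metric; the flooring of the scaled entropy to an integer SI count, and the toy entropy values, are assumptions for illustration.

```python
import numpy as np

def allocate_sis(entropy_map, M):
    """Eq. (1): linearly scale local entropy I(x, y) into 1..M subimagers
    per region (integer counts via flooring, an assumed rounding choice)."""
    I = np.asarray(entropy_map, dtype=float)
    Imin, Imax = I.min(), I.max()
    return 1 + np.floor((M - 1) * (I - Imin) / (Imax - Imin)).astype(int)

def psnr(f, f_hat):
    """PSNR = 10 log10(255^2 / mse) for 8-bit imagery, as defined above."""
    mse = np.mean((np.asarray(f, float) - np.asarray(f_hat, float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

I = np.array([[0.1, 0.9], [0.5, 0.3]])  # toy entropy map
N = allocate_sis(I, M=8)
print(N, N.sum())                       # per-region SI counts and total used
f = np.full((8, 8), 128.0)
print(round(psnr(f, f + 4.0), 2))       # mse = 16 -> 36.09 dB
```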

4. Modulation Transfer Function of Steered Subimagers

Utilizing micromirror arrays to effect the steering of SIs, as described in Section 3, has the deleterious effect of degrading the MTF of the system. This effect stems from the fact that a flat segmented mirror is used in place of its bulky gimbal-mounted counterpart to steer the FOV. Here we seek to leverage the multiplexed nature of the computational architecture to overcome the limitations of the individual SIs through the combination of a set of SIs with a diversity of MTFs.13 MTF diversity has been shown to provide performance enhancements in computational imagers.14–17 Below we present a framework for determining the resultant system MTF when several SIs with varying MTFs are combined.

For a single-aperture imaging sensor, the MTF can be obtained directly by applying a frequency-response analysis. Autocorrelation of the pupil function gives the optical transfer function (OTF), and the magnitude of the OTF is the MTF.18 The angular tilt of individual mirrors creates an optical path difference (OPD) error between rays that encounter different micromirrors. This OPD increases with the tilt angle. The analysis of the optical system must account for this increasing OPD. If the OPD is much smaller than the coherence length of the incident light, as would be the case for extremely small tilt angles or for laser-illuminated scenes, then the segmented aperture acts as a single coherent aperture. If, instead, the OPD exceeds the coherence length of the incident light, as in the case of large tilt angles and natural illumination, the corresponding mirror facets will combine incoherently, resulting in a resolution penalty. The analyses of coherent and incoherent apertures are presented in Subsections 4.A and 4.B, respectively.

A. Case 1: Coherent Aperture

When the OPD is smaller than the coherence length of the light, the autocorrelation of the wavefront error evident at the optical system's pupil provides the OTF. To isolate the effects of the micromirror array, we assume a perfect optical system and utilize a wavefront error function at the pupil. This wavefront error is the sawtooth phase error that occurs when all mirrors in the array are tilted to the same angle. The amplitude of the resultant pupil function is unity, and the phase is a sawtooth waveform (as shown in Fig. 7), where the period of the waveform corresponds to the mirror center-to-center spacing and the magnitude of the phase is the round-trip phase error due to the tilted micromirror. A slice through the magnitude of the OTF is depicted in Fig. 8 for a range of mirror tilt angles (amplitudes of the phase sawtooth). The tilting of the mirrors in the array creates dips in the MTF. The spatial frequencies of these dips correspond to the mirror pitch (sawtooth period). It is clear that if a coherent multiplexed imaging system were to be built by using micromirror arrays for steering FOVs, then it would be desirable to have micromirror arrays with several different mirror pitches to recover the frequencies lost by each. The desire to combine data from several SIs to overcome single-SI MTF deficiencies is even stronger in the incoherent case presented next.
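The coherent-case computation can be sketched in one dimension: build a unit-amplitude pupil with a sawtooth phase and autocorrelate it. The sample count, pitch, and phase amplitudes below are illustrative, not the paper's values.

```python
import numpy as np

# 1-D pupil: unit amplitude, sawtooth phase with period = mirror pitch.
# OTF = autocorrelation of the pupil; MTF = |OTF|, normalized to 1 at dc.
n, pitch = 512, 64                           # pupil samples, mirror pitch in samples
x = np.arange(n)

for amp in [0.0, np.pi / 4, np.pi / 2]:      # round-trip sawtooth phase amplitude
    phase = amp * (2 * (x % pitch) / pitch - 1)
    pupil = np.exp(1j * phase)
    otf = np.correlate(pupil, pupil, mode="full")
    mtf = np.abs(otf) / np.abs(otf).max()
    # Sample the MTF at a half-pitch shear, where the tilt-induced dip sits.
    print(f"phase amp {amp:.2f} rad -> MTF value {mtf[n - 1 + pitch // 2]:.3f}")
```

Larger tilts produce deeper dips at spatial frequencies tied to the mirror pitch, which is the motivation for mixing mirror pitches across SIs.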

B. Case 2: Incoherent Aperture

Fig. 7. Sawtooth phase function at the pupil of a SI due to steering all micromirrors in an array to an identical angle, which in this case is along the x direction.

For natural illumination (e.g., sunlight), the coherence length of the light will be smaller than the path-length error due to the tilt angle, and the aperture will be incoherent (i.e., each mirror facet will act as an independent aperture). An incoherent aperture of tilted micromirrors can be modeled as a slit (the largest coherent aperture) in which the orientation is chosen to have the thin dimension aligned with the steered direction of the micromirror array. As the multiple facets of the micromirror array are incoherent, the frequency response can be determined from a single facet. The overall response will be brighter than the light passed by the slit aperture but will have the same frequency response. Figure 9 shows the MTF for a vertically oriented slit aperture. Owing to the vertical orientation, the frequency components in the horizontal direction are lost. Figure 10 shows the MTF of the vertical slit aperture (Fig. 9) applied to the image shown in Fig. 1. This MTF would correspond to a micromirror-array-steered SI that steered horizontally far enough that the mirror tilt displacement exceeded the coherence length of the light. The lost information can be recovered if we choose to combine data from a set of micromirror arrays that are steered in different directions and are therefore modeled by different slit orientations. An approach to

combining the MTFs of these apertures by using MTF synthesis is described in Subsection 4.C.

C. Modulation Transfer Function Synthesis

It has been shown that diversity in the MTF of the system may be required to improve the performance of a reconstruction algorithm.14–17 If the SIs can be categorized into a set of different MTFs, then it is possible to reconstruct an image with an effective MTF that is better than any of the individual SI MTFs. For the purpose of this example, we assume there is only one SI of each unique MTF, but in fact this SI could be representative of a collection of SIs with identical MTFs. Here we describe one method to combine the MTF of each SI by using a linear minimum mean-square estimator (LMMSE).19 The MSE is chosen as the metric for estimation of the effective MTF. In this approach, which is depicted in Fig. 11(a), there are n SIs looking at the same input scene X, and each SI is described by its own MTF, Hi, additive white Gaussian noise, Zi, and output Yi. Each SI's output can be represented as

Yi = XHi + Zi,   i = 1, . . . , n,   (2)

where Yi is the ith scalar component of the observed spatial-frequency spectrum, X is the object spectrum, H is the MTF, and Z is the spectrum of the noise. Applying LMMSE yields the MSE of the estimate of the input as19

MSE = σx²σz²/(σz² + σx²H^TH),   (3)

where H is a concatenation of the various Hi, and σx and σz represent the standard deviations of the input and noise, respectively. Recall that, in this case, our goal is not an optimum reconstruction, but rather

Fig. 8. Family of curves representing slices through the resultant MTF of the pupil function depicted in Fig. 7. The curves correspond to the following steering angles of the micromirror array: (from top to bottom) 0.5, 1, 1.5, and 2 deg.

Fig. 9. MTF of a vertical slit aperture. Horizontal spatial frequencies are truncated, whereas vertical frequencies are readily passed.

Fig. 10. Image from Fig. 1 with the MTF of the vertical slit aperture (Fig. 9) applied.


we are determining an equivalent single-system MTF. Now the effective MTF, Heff, is calculated such that replacing the n SIs by this one effective Heff will result in a similar performance, i.e., the same MSE. As shown in Fig. 11(b), such a hypothetical imager can be modeled by

Y = XHeff + Zeff.   (4)

Applying LMMSE to this system yields

MSEeff = σx²σZeff²/(σZeff² + σx²Heff^THeff).   (5)

Equating the MSE from both cases yields the relation between the effective MTF and the individual SI MTFs:

Heff² = (σZeff²/σz²)H^TH.   (6)

This framework allows for a range of different effective MTFs to be calculated, from improving only the noise, to improving only the MTF, and anything in between. An obvious choice for the balance between the two is to pick the effective noise variance to be reduced by a factor of n. This assumption is convenient not only in terms of the noise, but it also has the property that if we are analyzing normalized

MTFs [MTF(0) = 1], then the resultant effective MTF is also normalized. In this case we have

σZeff² = σz²/n,   (7)

Heff = [(H1² + H2² + · · · + Hn²)/n]^(1/2).   (8)
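As a numerical sketch of Eqs. (7) and (8) (the four 1-D MTF curves below are illustrative stand-ins for the slit MTFs, not the paper's measured curves):

```python
import numpy as np

# Combine n subimager MTFs into one effective MTF via Eq. (8), with the
# effective noise variance reduced by n via Eq. (7).
freq = np.linspace(0, 1, 64)                 # normalized spatial frequency
H = np.stack([
    np.clip(1 - freq, 0, None),              # broad aperture
    np.clip(1 - 2 * freq, 0, None),          # narrower slit: loses more highs
    np.clip(1 - 4 * freq, 0, None),          # narrowest slit
    np.where(freq < 0.9, 1 - freq / 2, 0.0), # differently oriented slit
])
n_si = H.shape[0]
sigma_z = 0.1
sigma_zeff_sq = sigma_z**2 / n_si            # Eq. (7)
H_eff = np.sqrt((H**2).sum(axis=0) / n_si)   # Eq. (8)

print(H_eff[0])                              # 1.0: normalization is preserved
print(bool((H_eff + 1e-12 >= H.min(axis=0)).all()))  # never worse than worst SI
```

Because Heff is the root-mean-square of the individual curves, it fills in frequencies that any single slit loses, which is the point of the diversity argument above.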

Fig. 11. (a) Schematic representation of the SI array output model for LMMSE. (b) Schematic representation of the equivalent imager output model for LMMSE to determine the effective MTF.

Fig. 12. Effective aperture of four differently steered SI arrays. The fact that the coherence length of the incident light is smaller than the OPD (i.e., case 2) makes each individually appear as a slit (white), while the gray slits add incoherently (in intensity) and therefore do not improve the resolution.

2890 APPLIED OPTICS / Vol. 45, No. 13 / 1 May 2006

The proposed strategy can be used to combine SIs with diverse MTFs to yield an improved effective MTF. To illustrate this method, we return to the steered-micromirror (slit-aperture) example. To improve upon the performance of Fig. 10, we can utilize a number of different steering orientations. For this example we utilize the four orientations depicted in Fig. 12 (vertical, horizontal, clockwise 45 deg, and counterclockwise 45 deg). Utilizing the MTFs of apertures 1 and 2 ensures that horizontal and vertical spatial frequencies are recovered, and using the MTFs of apertures 3 and 4 ensures the retrieval of diagonal frequencies. Figure 13 depicts the results of applying the individual MTFs to the image of Fig. 1. These figures represent the image detected by each of the SIs. The result of MTF synthesis of four diverse apertures is shown in Fig. 14, and the equivalent reconstructed image from the combination is depicted in Fig. 15. The results confirm that diversity in MTF improves the effective MTF and therefore the imaging performance.14–17 While the four configurations of this example did not retrieve all frequencies in the original image, one can determine the number of configurations that would be required. It is straightforward to determine the number of SIs required to cover the unit circle of a normalized frequency plot of the spectrum of the image as θ = arctan(w/h), where w is the slit width and h is its height. Therefore, if we have a SI of 5 mm × 5 mm with a micromirror of size 0.5 mm × 0.5 mm, then we will need ~32 SIs, each oriented at 5.71 deg, to cover the entire spectrum. Similarities between the resultant effective MTFs and the early research in MTF synthesis should be noted,20,21 as well as the relationship of combining images blurred along different directions to computed tomography.22 While the adaptability afforded by micromirrors is vital to achieving an efficient use of imaging resources, it is apparent that SIs should be predistributed across the expected field of view to minimize steering across several dimensions.
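The orientation count above amounts to a one-line geometric calculation: each slit recovers a wedge of angle θ = arctan(w/h) in the frequency plane, and MTF symmetry means roughly 180 deg/θ orientations tile the full plane. A quick check using the 5 mm × 5 mm SI and 0.5 mm micromirror dimensions quoted in the text (the function name is ours, for illustration):

```python
import math

def slit_orientations(w_mm, h_mm):
    """Orientations needed to cover the frequency plane with slit MTFs.

    Each slit of width w and height h recovers a wedge of angle
    theta = arctan(w / h); about 180 deg / theta orientations tile
    the plane (MTF symmetry covers the opposite half)."""
    theta_deg = math.degrees(math.atan(w_mm / h_mm))
    return theta_deg, math.ceil(180.0 / theta_deg)

theta, n = slit_orientations(0.5, 5.0)
print(round(theta, 2), n)  # 5.71 32 -- matches the ~32 SIs quoted in the text
```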

5. Conclusion

The PANOPTES architecture is that of an adaptive, multiresolution, and attentive computational imaging sensor that directs its resources based on the information content and distribution across the scene. A key feature of the PANOPTES concept is its capability of adjusting the quality of the reconstructed image according to its information content. This is achieved through the use of precisely controllable microelectromechanical mirror arrays in the sensor pupil plane to vary the FOVs of the SIs. A preliminary adaptive image reconstruction algorithm based on information-theoretic metrics was introduced. The results indicate that nonuniform distribution of imaging resources yields performance enhancements. The MTF effects created through the use of an analog-steerable micromirror array at the imaging system's pupil plane were analyzed, and a method for combining a diverse set of SI data to provide an improved effective MTF was presented. Nonlinear mappings of the sampled probability density functions and their related heuristic algorithms are areas for future research.

The authors gratefully acknowledge the useful discussions with Gary Euliss on the application of information theory to imaging. This research is supported by the U.S. Defense Advanced Research Projects Agency (DARPA).

References

1. J. N. Mait, R. Athale, and J. van der Gracht, "Evolutionary paths in imaging and recent trends," Opt. Express 11, 2093–2101 (2003).

2. J. N. Mait, M. W. Haney, K. Goossen, and M. P. Christensen, "Shedding light on the battlefield: tactical applications of photonic technology," Ref. A370034 (National Defense University Center for Technology and National Security Policy, 2004).

3. J. Tanida, T. Kumagai, K. Yamada, and S. Miyatake, "Thin observation module by bound optics (TOMBO): concept and experimental verification," Appl. Opt. 40, 1806–1813 (2001).

4. M. P. Christensen, G. Euliss, M. J. McFadden, K. M. Coyle, P. Milojkovic, M. W. Haney, J. van der Gracht, and R. Athale, "ACTIVE-EYES: an adaptive pixel-by-pixel image segmentation sensor architecture for high dynamic range hyperspectral imaging," Appl. Opt. 41, 6093–6103 (2002).

Fig. 13. Resultant images from applying the MTFs for four different orientations of the slit aperture depicted in Fig. 12 to the image of Fig. 1.

Fig. 14. Effective MTF from the combination of apertures depicted in Fig. 12.

Fig. 15. Results of applying the MTF depicted in Fig. 14 to the image of Fig. 1.

5. X. Zheng, V. Kaman, S. Yuan, Y. Xu, O. Jerphagnon, A. Keating, R. C. Anderson, H. N. Poulsen, B. Liu, J. R. Sechrist, C. Pusarla, R. Helkey, D. J. Blumenthal, and J. E. Bowers, "Three-dimensional MEMS photonic cross-connect switch design and performance," IEEE J. Sel. Top. Quantum Electron. 9, 571–578 (2003).

6. S. Baker and T. Kanade, "Limits on super-resolution and how to break them," IEEE Trans. Pattern Anal. Mach. Intell. 24, 1167–1183 (2002).

7. M. P. Christensen, M. W. Haney, D. Rajan, S. Wood, and S. Douglas, "PANOPTES: a thin agile multi-resolution imaging sensor," presented at the Government Microcircuit Applications and Critical Technology Conference (GOMACTech-05), Las Vegas, Nev., 4–7 April 2005, paper 21.5.

8. M. W. Haney, M. P. Christensen, D. Rajan, S. C. Douglas, and S. L. Wood, "Adaptive flat micro-mirror-based computational imaging architecture," presented at the OSA Topical Meeting on Computational Optical Sensing and Imaging (COSI), Charlotte, N.C., 6–9 June 2005.

9. P. B. Fellgett and E. H. Linfoot, "On the assessment of optical images," Philos. Trans. R. Soc. London 247, 269–407 (1955).

10. F. O. Huck, C. L. Fales, and Z. Rahman, "An information theory of visual communication," Philos. Trans. R. Soc. London 354, 2193–2247 (1996).

11. S. K. Nayar and V. Branzoi, "Programmable imaging using a digital micromirror array," in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'04) (IEEE, 2004), Vol. 1, pp. 436–443.

12. R. C. Gonzalez and R. E. Woods, Digital Image Processing (Prentice-Hall, 2002).

13. V. R. Bhakta and M. P. Christensen, "Performance metrics for multi-aperture computational imaging sensors," presented at the OSA Topical Meeting on Computational Optical Sensing and Imaging (COSI), Charlotte, N.C., 6–9 June 2005.

14. S. L. Wood, M. P. Christensen, and D. Rajan, "Reconstruction algorithms for compound eye images using lens diversity," presented at the Defense Applications of Signal Processing 2004 Workshop, Midway, Utah, 27 March–1 April 2005.

15. S. L. Wood, B. J. Smithson, D. Rajan, and M. P. Christensen, "Performance of a MVE algorithm for compound eye image reconstruction using lens diversity," in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'05) (IEEE, 2005), pp. 593–596.

16. S. L. Wood, D. Rajan, M. P. Christensen, S. C. Douglas, and B. J. Smithson, "Resolution improvement for compound eye images through lens diversity," in Digital Signal Processing Workshop 2004 and the Third IEEE Signal Processing Education Workshop (IEEE, 2004), pp. 151–155, doi:10.1109/DSPWS.2004.1437931.

17. H.-B. Lan, S. L. Wood, M. P. Christensen, and D. Rajan, "Benefits of optical system diversity for multiplexed image reconstruction," Appl. Opt. 45, 2859–2870 (2006).

18. J. W. Goodman, Introduction to Fourier Optics, 2nd ed. (McGraw-Hill, 1996), Chap. 6, pp. 146–151.

19. H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part I (Wiley, 1968).

20. A. W. Lohmann and W. T. Rhodes, "Two-pupil synthesis of optical transfer functions," Appl. Opt. 17, 1141–1151 (1978).

21. J. N. Mait and W. T. Rhodes, "Two-pupil synthesis of optical transfer functions. 2: Pupil function relationships," Appl. Opt. 25, 2003–2007 (1986).

22. A. Macovski, Medical Imaging Systems (Prentice-Hall, 1983).
