
International Journal of Computer Vision 70(1), 7–22, 2006

© 2006 Springer Science + Business Media, LLC. Manufactured in The Netherlands.

DOI: 10.1007/s11263-005-3102-6

Programmable Imaging: Towards a Flexible Camera

SHREE K. NAYAR AND VLAD BRANZOI
Department of Computer Science, 450 Mudd Hall, Columbia University, New York, N.Y. 10027

[email protected]

[email protected]

TERRY E. BOULT
Department of Computer Science, University of Colorado, Colorado Springs, Colorado 80933

[email protected]

Received February 23, 2005; Revised June 8, 2005; Accepted June 21, 2005

First online version published in June, 2006

Abstract. In this paper, we introduce the notion of a programmable imaging system. Such an imaging system provides a human user or a vision system significant control over the radiometric and geometric characteristics of the system. This flexibility is achieved using a programmable array of micro-mirrors. The orientations of the mirrors of the array can be controlled with high precision over space and time. This enables the system to select and modulate rays from the scene’s light field based on the needs of the application at hand.

We have implemented a programmable imaging system that uses a digital micro-mirror device (DMD), which is used in digital light processing. Although the mirrors of this device can only be positioned in one of two states, we show that our system can be used to implement a wide variety of imaging functions, including high dynamic range imaging, feature detection, and object recognition. We also describe how a micro-mirror array that allows full control over the orientations of its mirrors can be used to instantly change the field of view and resolution characteristics of the imaging system. We conclude with a discussion on the implications of programmable imaging for computer vision.

Keywords: programmable imaging, flexible imaging, micro-mirror array, digital micro-mirror device, MEMS, adaptive optics, high dynamic range imaging, optical processing, feature detection, object recognition, field of view, resolution, multi-viewpoint imaging, stereo, catadioptric imaging, wide-angle imaging, purposive camera.

1. A Flexible Approach to Imaging

In the past few decades, a wide variety of novel imaging systems have been proposed that have fundamentally changed the notion of a camera. These include high dynamic range, multispectral, omnidirectional, and multi-viewpoint imaging systems. The hardware and software of each of these devices are designed to accomplish a particular imaging function. This function cannot be altered without significant redesign of the system. It would clearly be beneficial to have a single imaging system whose functionality can be varied using software without making any hardware alterations.

In this paper, we introduce the notion of a programmable imaging system. Such a system gives a human user or a computer vision system significant control over the radiometric and geometric properties of the system. This flexibility is achieved by using a programmable array of micro-mirrors. The orientations of the mirrors of the array can be controlled with very high speed. This enables the system to select and modulate scene rays based on the needs of the application at hand.

The basic principle behind the proposed approach is illustrated in Fig. 1. The system observes the scene via a two-dimensional array of micro-mirrors, whose orientations can be controlled. The surface normal n_i of the i-th mirror determines the direction of the scene ray it reflects into the imaging system. If the normals of the mirrors can be arbitrarily chosen, each mirror can be programmed to select from a continuous cone of scene rays. As a result, the field of view and resolution characteristics of the imaging system can be chosen from a very wide space of possibilities. Moreover, since the mirror orientations can be changed instantly, the field of view and resolution characteristics can be varied from one state to another without any delay. In short, we have an imaging system whose geometric properties are instantly controllable.
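To make the geometry concrete, the following minimal sketch (not part of the original system; the vectors and tilt values are illustrative) applies the standard reflection law to show which scene direction a mirror with normal n_i sends toward a fixed camera direction. It also illustrates that tilting a mirror by θ rotates the imaged scene direction by 2θ, a property used throughout the paper.

```python
import numpy as np

def scene_direction(n, to_camera):
    """Unit direction from a mirror element toward the scene point it images,
    given the mirror's unit normal n and the unit direction from the mirror
    to the camera. Uses the reflection law v' = v - 2 (v . n) n."""
    incoming = to_camera - 2.0 * np.dot(to_camera, n) * n   # scene ray arriving at the mirror
    return -incoming                                         # point back toward the scene

to_cam = np.array([0.0, 0.0, 1.0])                 # camera straight above the mirror plane
for tilt_deg in (0.0, 5.0, 10.0):
    t = np.radians(tilt_deg)
    n_i = np.array([np.sin(t), 0.0, np.cos(t)])    # normal tilted by tilt_deg
    print(f"tilt {tilt_deg:4.1f} deg -> scene direction {scene_direction(n_i, to_cam).round(3)}")
```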

Now let us assume that each mirror can also be oriented with normal n_b such that it reflects a black surface (with zero radiance). Let the integration time of the image detector be T. If the mirror is made to point in the directions n_i and n_b for durations t and T − t, respectively, the scene ray is attenuated by t/T. As a result, each imaged scene ray can also be modulated, allowing the brightness measured at each pixel to be controlled with high precision. In other words, the radiometric properties of the imaging system are also controllable. As we shall show, the ability to instantly change the modulation of the light received by each pixel also enables us to perform simple but useful computations on scene radiance values prior to image capture.
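As a rough sketch of this pulse-width modulation (the variable names and timing values below are illustrative, not taken from the implementation), the measured irradiance is simply the scene irradiance scaled by the duty cycle t/T of the mirror:

```python
import numpy as np

def measured_irradiance(scene_irradiance, t_scene, T):
    """Detector irradiance when the mirror points at the scene for t_scene out
    of an integration time T and at the black surface for the remainder:
    the scene ray is attenuated by the duty cycle t_scene / T."""
    assert 0.0 <= t_scene <= T
    return scene_irradiance * (t_scene / T)

# An 8-bit DMD modulation value m in [0, 255] then corresponds to duty cycle m/255.
T = 1.0 / 30.0                    # illustrative 30 Hz integration time (seconds)
E = 1.0                           # normalized scene irradiance
for m in (0, 64, 128, 255):
    t = (m / 255.0) * T
    print(f"modulation {m:3d} -> attenuation {t / T:.3f} -> measured {measured_irradiance(E, t, T):.3f}")
```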

Since the micro-mirror array is programmable, the above geometric and radiometric manipulations can all be done using software. The end result is a single imaging system that can emulate the functionalities of several existing specialized systems as well as new ones. Such a flexible camera has two major benefits. First, a user is free to change the role of the camera based on his/her need. Second, it allows us to explore the notion of a purposive camera that can, as time progresses, always produce the visual information that is most pertinent to the task.

We have developed a prototype of the programmable imaging system that uses a commercially available digital micro-mirror device. The mirror elements of this device can only be positioned in one of two orientations. Using this system, we demonstrate several functions including high dynamic range imaging, optical feature detection, and object recognition using appearance matching. We also show how this micro-mirror device can be used to instantly rotate the field of view of the imaging system. We believe that in the future micro-mirror devices may become available that provide greater control over mirror orientations. We show how such a device would enable us to change the field of view, emulate camera rotation, and create multiple views for stereo. We conclude with a discussion on the implications of programmable imaging for computer vision.

Figure 1. The principle underlying programmable imaging using a micro-mirror array. If the orientations of the individual mirrors can be controlled with high precision and speed, scene rays can be selected and modulated in a variety of ways, each leading to a different imaging system. The end result is a single imaging system that can perform the functions of a wide range of specialized cameras.

2. Imaging with a Micromirror Device

Ideally, we would like to have full control over the orientations of our micro-mirrors. Such devices are being developed for adaptive optical processing in astronomy (Tyson, 1998). However, at this point in time, they do not have the physical properties and programmability that we need for our purpose. To implement our ideas, we use the digital micro-mirror device (DMD) that was introduced in the 1980s by Hornbeck at Texas Instruments (Hornbeck, 1988; Hornbeck, 1989). The DMD is a micro-electromechanical system (MEMS) that has evolved rapidly over the last decade and has found many applications (Dudley et al., 2003). It is the key enabling technology in many of today’s projection systems (Hornbeck, 1995). The latest generation of DMDs have more than a million mirrors, each mirror roughly 14 × 14 microns in size (see Fig. 2). From our perspective, the main limitation of current DMDs is that the mirrors can be oriented in only two directions: −10° or +10° about one of the mirror’s diagonal axes (see Fig. 2). However, the orientation of each mirror can be switched from one state to the other in a few microseconds, enabling modulation of incident light with very high precision.

Figure 2. Our implementation of programmable imaging uses a digital micro-mirror device (DMD). The most recent DMDs have more than a million mirrors, each mirror roughly 14 × 14 microns in size. The mirrors can be oriented with high precision and speed at +10 or −10 degrees.

Figure 3 shows the optical layout of the system we have developed using the DMD. The scene is first projected onto the DMD plane using an imaging lens. This means that the cone of light from each scene point received by the aperture of the imaging lens is focused onto a single micro-mirror. When all the mirrors are oriented at +10°, the light cones are reflected in the direction of a re-imaging lens which focuses the image received by the DMD onto a CCD image detector. Note that the DMD in this case behaves like a planar scene that is tilted by 20° with respect to the optical axis of the re-imaging lens. To produce a focused image of this tilted set of source points, one needs to tilt the image detector according to the well-known Scheimpflug condition (Smith, 1966).

Figure 3. Imaging using a DMD. The scene image is focused onto the DMD plane. The image reflected by the DMD is re-imaged onto a CCD. The programmable controller captures CCD images and outputs DMD (modulation) images.

3. Prototype System

It is only recently that developer kits have begun to appear that enable one to use DMDs in different applications. When we began implementing our system this option was not available. Hence, we chose to re-engineer an off-the-shelf DMD projector into an imaging system by reversing the path of light; the projector lens is used to form an image of the scene on the DMD rather than illuminate the scene via the DMD. Fig. 4(a) shows a partly disassembled Infocus LP 400 projector. This projector uses one of the early versions of the DMD with 800 × 600 mirrors, each 17 × 17 microns in size. The modulation function of the DMD is controlled by simply applying an 8-bit image (VGA signal) to the projector input. We had to make significant hardware changes to the projector. First, the projector lamp had to be blocked out of the optical path. Then, the chassis was modified so that the re-imaging lens and the camera could be attached to the system. Finally, a 2 degree-of-freedom manual stage was used to orient the detector behind the re-imaging lens so as to satisfy the Scheimpflug condition.

Figure 4. (a) A disassembled Infocus LP 400 projector that shows the exposed DMD. (b) In this re-engineered system, the projector lens is used as an imaging lens that focuses the scene on the DMD. The image reflected by the DMD is re-imaged by a CCD camera. Camera images are processed and DMD modulation images are generated using a PC.

The final system is shown in Fig. 4(b). It is important to note that this system is bulky only because we are using the electronics of the projector to drive the DMD. If instead we built the entire system from scratch, it would be very compact and not much larger than an off-the-shelf camera. The CCD camera used in the system is an 8-bit monochrome Sony XC-75 model with 640 × 480 pixels. The processing of the camera image and the control of the DMD image are done using a Dell workstation with a 2.5 GHz Pentium 4 processor.

DMDs have previously been used in imaging applications, but for very specific tasks such as recording celestial objects in astronomy. For instance, in Malbet et al. (1995) the DMD is used to mask out bright sources of light (like the sun) so that dimmer regions (the corona of the sun) can be imaged with higher dynamic range. In Kearney and Ninkov (1998), a DMD is used to mask out everything except a small number of stars. Light from these unmasked stars is directed towards a spectroscope to measure the spectral characteristics of the stars. In Christensen et al. (2002) and Castracane and Gutin (1999), the DMD is used to modulate brightness values at a pixel level for high dynamic range multispectral imaging and removal of image blooming, respectively. These works address rather specific imaging needs. In contrast, we are interested in a flexible imaging system that can perform a wide range of functions.

An extensive off-line calibration of the geometric and radiometric properties of the system was conducted. The geometric calibration involves determining the mapping between DMD and CCD pixels. This mapping is critical to controlling the DMD and interpreting the images captured by the CCD. The geometric calibration was done by using a bright scene with more or less uniform brightness. Then, a large number of square patches were used as input to the DMD and recorded using the CCD, as shown in Fig. 5. Note that a dark patch in the DMD image produces a dark patch in the CCD image and a white patch in the DMD image produces a patch of the bright scene in the CCD image. In order to scan the entire set of patches efficiently, binary coding of the patches was done. The centroids of corresponding patches in the DMD and CCD images were fitted to a piecewise, first-order polynomial. The computed mapping was found to have an RMS error of 0.6 (CCD) pixels. This computed mapping as well as its inverse were resampled and stored as two look-up tables; one that maps CCD pixels to DMD pixels and another that maps DMD pixels to CCD pixels. Both of these are two-dimensional tables with two entries at each location and hence efficient to store and use.
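The sketch below illustrates this calibration step under simplifying assumptions: a single global first-order (affine) fit in place of the piecewise fit used in the actual system, with synthetic patch centroids standing in for the measured ones. The fitted map is then resampled into a per-pixel look-up table, as described above.

```python
import numpy as np

def fit_first_order_map(dmd_xy, ccd_xy):
    """Least-squares first-order (affine) map from DMD centroids to CCD
    centroids: [x', y'] = A @ [x, y, 1]. A single global fit is shown here
    for brevity; the system fits this mapping piecewise."""
    ones = np.ones((dmd_xy.shape[0], 1))
    X = np.hstack([dmd_xy, ones])                  # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, ccd_xy, rcond=None)
    return A.T                                     # (2, 3)

def build_lookup_table(A, dmd_shape):
    """Resample the fitted map into a per-pixel look-up table that stores the
    corresponding CCD coordinates for every DMD pixel."""
    h, w = dmd_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])   # (3, h*w)
    return (A @ pts).T.reshape(h, w, 2)

# Illustrative use with synthetic correspondences (patch centroids).
rng = np.random.default_rng(0)
dmd_pts = rng.uniform(0, 600, size=(50, 2))
true_A = np.array([[0.78, 0.02, 12.0], [-0.03, 0.80, 9.0]])
ccd_pts = (true_A @ np.hstack([dmd_pts, np.ones((50, 1))]).T).T
A = fit_first_order_map(dmd_pts, ccd_pts)
lut = build_lookup_table(A, dmd_shape=(600, 800))
print("fitted map:\n", np.round(A, 3), "\nLUT shape:", lut.shape)
```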

The radiometric calibration was done in two parts. First, the CCD camera was calibrated using a Macbeth reflectance chart to obtain a one-dimensional look-up table that maps image brightness to scene radiance. This was done prior to mounting the camera on the system. Once the camera was attached to the system, the radiometric response of the DMD modulation system (including the DMD chip, the projector electronics and the PC’s video card) was estimated using a scene with uniform brightness. A large number of uniform modulation images of different brightnesses were applied to the DMD and the corresponding (linearized) camera images were captured. A few of the camera pixels (chosen around the center of the camera image) were used to compute a function that relates DMD input and camera output. This function was again stored as a one-dimensional look-up table. One of the camera images was then used to compute the spatial brightness fall-off function (which includes vignetting and other effects) of the complete system.

Figure 5. Geometric calibration of the imaging system. The geometric mapping between the DMD and the CCD images is determined by showing the system a scene of uniform brightness, applying patterned images to the DMD, and capturing the corresponding CCD images. The centroids of corresponding patches in the DMD and CCD images are then used to compute the forward and inverse transformations between the DMD and CCD planes.

Figure 6. Examples that show how image irradiance is modulated with high resolution using the DMD.
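A minimal sketch of the radiometric response calibration described above is given below, assuming a monotonic modulation response and using synthetic measurements in place of the captured uniform-scene images; the function names are illustrative. It builds the one-dimensional look-up table relating DMD input to (linearized) camera output, along with its inverse for choosing a DMD input that produces a desired brightness.

```python
import numpy as np

def fit_response_lut(dmd_inputs, camera_outputs, n_levels=256):
    """1-D look-up table mapping an 8-bit DMD modulation value to the
    (linearized) camera brightness it produces, obtained by interpolating
    the measured input/output pairs. A monotonic response is assumed."""
    order = np.argsort(dmd_inputs)
    return np.interp(np.arange(n_levels),
                     np.asarray(dmd_inputs)[order],
                     np.asarray(camera_outputs)[order])

def invert_lut(lut):
    """Inverse table: desired relative brightness -> DMD input to apply."""
    levels = np.arange(lut.size)
    targets = np.linspace(lut.min(), lut.max(), lut.size)
    return np.interp(targets, lut, levels)

# Illustrative measurements (a mildly non-linear modulation response).
dmd_in = np.linspace(0, 255, 32)
cam_out = 255.0 * (dmd_in / 255.0) ** 1.2
lut = fit_response_lut(dmd_in, cam_out)
inv = invert_lut(lut)
print("modulation 128 ->", lut[128].round(1),
      "; to reach half brightness apply ~", inv[len(inv) // 2].round(1))
```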

Figure 6 shows two simple examples that illustrate the modulation of scene images using the DMD. One can see that after modulation some of the scene regions that were previously saturated produce useful brightness values. Note that the captured CCD image is skewed with respect to the DMD modulation image. This skewing is due to the required tilt of the CCD discussed above and is corrected using the calibration results. In our system, the modulation image can be controlled with 8 bits of precision and the captured CCD images have 8 bits of accuracy.

4. High Dynamic Range Imaging

The ability to program the modulation of the image at a pixel level provides us with a flexible means to implement several previously proposed methods for enhancing the dynamic range. In this section, we will describe three different implementations of high dynamic range imaging.

4.1. Temporal Exposure Variation

We begin with the simplest implementation, where the global exposure of the scene is varied as a function of time. In this case, the control image applied to the DMD is spatially constant but changes periodically with time. An example of a video sequence acquired in this manner is shown in Fig. 7, where 4 modulation levels are cycled over time. It has been shown in previous work that an image sequence acquired in this manner can be used to compute high dynamic range video when the motions of scene points between subsequent frames are small (Ginosar et al., 1992; Kang et al., 2003). Alternatively, the captured video can be subsampled in time to produce multiple video streams with a lower frame-rate, each with a different fixed exposure. Such data can improve the robustness of tasks such as face recognition, where a face missed at one exposure may be better visible and hence detected at another exposure.

Figure 7. Spatially uniform but temporally varying DMD inputs can be used to generate a video with varying exposure (e). Using a DMD in this case produces high quality data compared to changing the exposure time or the camera gain.

Videos of the type shown in Fig. 7 can also be obtained by changing the integration time of the detector or the gain of the camera. Due to the various forms of camera noise, changing integration time or gain compromises the quality of the acquired data. In our case, since the DMD can be controlled with 8 bits of accuracy and the CCD camera produces 8-bit images, the captured sequence can be controlled with 16 bits of precision. However, it must be noted that this is not equivalent to using a 16-bit detector. The additional 8 bits of control provided by the DMD allow us to use 256 different exposure settings. The end result is 16 bits of control over the measured irradiance, but the quantization levels are not uniformly spaced as in the case of a 16-bit detector.
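The following sketch shows one generic way such a cycled sequence could be fused into a relative radiance map: each frame is divided by its known attenuation and the results are averaged, with saturated and near-black pixels down-weighted. This is a simplified stand-in, not the specific algorithms of the cited works, and the exposure values and thresholds are illustrative.

```python
import numpy as np

EXPOSURES = [1.0, 0.5, 0.25, 0.125]          # duty cycles cycled over time

def fuse_exposure_cycle(frames, exposures=EXPOSURES, saturation=250):
    """Fuse one cycle of linearized 8-bit frames, each captured with a
    different global DMD attenuation, into a relative radiance map using a
    generic weighted average (shown only to illustrate the idea)."""
    frames = np.asarray(frames, dtype=np.float64)           # (k, H, W)
    est = np.zeros(frames.shape[1:])
    wsum = np.zeros(frames.shape[1:])
    for img, e in zip(frames, exposures):
        w = np.where((img < saturation) & (img > 5), 1.0, 1e-3)
        est += w * img / e                                   # undo the attenuation
        wsum += w
    return est / wsum

# Illustrative use with synthetic frames of a high dynamic range scene.
radiance = np.logspace(0, 3, 256).reshape(16, 16)            # three decades of radiance
frames = [np.clip(radiance * e, 0, 255) for e in EXPOSURES]
print(fuse_exposure_cycle(frames).round(1)[0, :4])
```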


4.2. Spatio-Temporal Exposure Variation

In Nayar and Mitsunaga (2000), the concept of spatially varying pixel exposures was proposed, where an image is acquired with a detector with a mosaic of neutral density filters. The captured image can be reconstructed to obtain a high dynamic range image with a slight loss in spatial resolution. Our programmable system allows us to capture an image with spatially varying exposures by simply applying a fixed (checkerboard-like) pattern to the DMD. In Nayar and Narasimhan (2002), it was shown that a variety of exposure patterns can be used, each trading off dynamic range and spatial resolution in different ways. Such trade-offs are easy to explore using our system.

It turns out that spatially varying exposures can also be used to generate video streams that have higher dynamic range for a human observer, without post-processing each acquired image as was done in Nayar and Mitsunaga (2000). If one uses a fixed pattern, the pattern produces a very visible modulation that would be distracting to the observer. However, if the pattern is varied with time, the eye becomes less sensitive to the pattern and a video with a larger range of brightnesses is perceived by the observer. Fig. 8(a) shows the image of a scene taken without modulation. It is clear that the scene has a wide dynamic range and an 8-bit camera cannot capture this range. Fig. 8(b) shows four consecutive frames captured with spatially varying exposures. The exposure pattern uses 4 different exposures (e1, e2, e3, e4) within each 2 × 2 neighborhood of pixels. The relative positions of the 4 exposures are changed over time using a cyclic permutation. In the images shown in Fig. 8(b), one sees the spatial patterns introduced by the exposures (see insets). However, when this sequence is viewed at 30 Hz, the pattern is more or less invisible (the eye integrates over the changes) and a wider range of brightnesses is visible.
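A small sketch of this spatio-temporal exposure pattern is given below (the four exposure values are illustrative): the 2 × 2 cell is tiled over the DMD and cyclically permuted from frame to frame, so that every pixel cycles through all four exposures.

```python
import numpy as np

BASE = np.array([[1.0, 0.5],
                 [0.25, 0.125]])     # four exposures (e1..e4) in a 2 x 2 cell

def exposure_pattern(shape, frame_idx):
    """DMD modulation image for one frame: the 2 x 2 exposure cell is tiled
    over the detector and its entries are cyclically permuted from frame to
    frame, so each pixel sees every exposure once every 4 frames."""
    cell = np.roll(BASE.ravel(), frame_idx % 4).reshape(2, 2)
    h, w = shape
    return np.tile(cell, (h // 2 + 1, w // 2 + 1))[:h, :w]

# Over 4 frames, every pixel is exposed with e1, e2, e3 and e4 exactly once.
stack = np.stack([exposure_pattern((480, 640), t) for t in range(4)])
print(np.sort(stack[:, 100, 200]))   # -> [0.125 0.25 0.5 1.0]
```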

4.3. Adaptive Dynamic Range

Recently, the method of adaptive dynamic range was introduced in Nayar and Branzoi (2003), where the exposure of each pixel is controlled based on the scene radiance measured at the pixel. A prototype device was implemented using an LCD attenuator attached to the front of the imaging lens of a camera. This implementation suffers from three limitations. First, since the LCD attenuator uses polarization filters, it allows only 50% of the light from the scene to enter the imaging system even when the attenuation is set to zero. Second, the attenuation function is optically defocused by the imaging system and hence pixel-level attenuation could not be achieved. Finally, the LCD attenuator cells produce diffraction effects that cause the captured images to be slightly blurred.

The DMD-based system enables us to implement adaptive dynamic range imaging without any of the above limitations. Since the image of the scene is first focused on the DMD and then re-imaged onto the image detector, we are able to achieve pixel-level control. In addition, the fill-factor of the DMD is very high compared to an LCD array and hence the optical efficiency of the modulation is closer to 90%. Because of the high fill-factor, the blurring/diffraction effects are minimal.

In Christensen et al. (2002) and Castracane and Gutin (1999), a DMD has been used to implement adaptive dynamic range. However, these previous works do not adequately address the real-time spatio-temporal control issues that arise in the case of dynamic scenes. We have implemented a control algorithm very similar to the one in Nayar and Branzoi (2003) for computing the DMD modulation function based on each captured image. Results from this system are shown in Fig. 9. The first row shows a person under harsh lighting imaged without modulation (conventional camera). The second row shows the output of the programmable system and the third row shows the corresponding modulation (attenuation) images applied to the DMD. As described in Nayar and Branzoi (2003), the output and modulation images can be used to compute a video stream that has an effective dynamic range of 16 bits, although without uniform quantization.
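The sketch below gives a simplified per-pixel exposure controller in the spirit of this approach; it is not the algorithm of Nayar and Branzoi (2003), and the target level, saturation threshold, and damping gain are illustrative. Each iteration estimates the scene brightness from the current frame and attenuation, then moves the attenuation toward the value that would bring the pixel to a mid-level target.

```python
import numpy as np

def update_attenuation(attenuation, frame, target=128.0, sat=250, gain=0.5):
    """One step of a simplified per-pixel exposure controller: the DMD
    attenuation (duty cycle in (0, 1]) is adjusted so the measured 8-bit
    pixel value moves toward a mid-level target; saturated pixels are
    attenuated aggressively. A sketch, not the published algorithm."""
    frame = frame.astype(np.float64)
    # Estimated scene brightness where the pixel is not saturated.
    estimate = np.where(frame < sat, frame / np.maximum(attenuation, 1e-3), np.inf)
    desired = np.clip(target / estimate, 1e-3, 1.0)         # attenuation hitting the target
    new_att = (1.0 - gain) * attenuation + gain * desired   # damped update for stability
    return np.clip(new_att, 1e-3, 1.0)

# Illustrative loop: a static, very bright region converges to low attenuation.
att = np.ones((4, 4))
scene = np.full((4, 4), 2000.0)                             # radiance in arbitrary units
for _ in range(10):
    frame = np.clip(scene * att, 0, 255)
    att = update_attenuation(att, frame)
print(att.round(3), np.clip(scene * att, 0, 255).round(1), sep="\n")
```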

5. Intra-Pixel Optical Feature Detection

The field of optical computing has developed very efficient and powerful ways to apply image processing algorithms such as convolution and correlation (Goodman, 1968). A major disadvantage of optical computing is that it requires the use of coherent light to represent the images. This has proven cumbersome, bulky, and expensive. It turns out that programmable modulation can be used to apply a limited class of image processing operations directly to the incoherent optical image formed by the imaging lens, without the use of coherent sources. In particular, one can apply convolution at an intra-pixel level very efficiently. By intra-pixel we mean that the convolution mask is applied to the distribution of light energy within a single pixel rather than a neighborhood of pixels. Intra-pixel optical processing leads to very efficient algorithms for finding features such as edges, lines, and corners.

Figure 8. (a) A scene with a wide range of brightnesses captured using an 8-bit (low dynamic range) camera. (b) Four frames of the same scene (with moving objects) captured with spatio-temporal exposure modulation using the DMD. When such a video is viewed at frame-rate, the observer perceives a wider dynamic range without noticing the exposure changes.

Consider the convolution f ∗ g of a continuous optical image f with a kernel g whose span (width) is less than, or equal to, a pixel on the image detector. We can rewrite the convolution as f ∗ (g+ − g−), where g+ is made up of only the positive elements of g and g− has only the absolute values of the negative elements of g. We use this decomposition since incoherent light cannot be negatively modulated (the modulation image cannot have negative values). An example of such a decomposition for the case of a first-derivative operator is shown in Fig. 10(a). As shown in the figure, let each CCD pixel correspond to 3 × 3 DMD pixels; i.e. the DMD has three times the linear resolution of the CCD. Then, the two components of the convolution (due to g+ and g−) are directly obtained by capturing two images with the modulation images shown in Fig. 10(b). The difference between these images gives the final result (f ∗ g).

Figure 10(c) shows the four optically processed images of a scene obtained for the case of the Sobel edge operator. The computed edge map is shown in Fig. 10(d). Since our DMD has only 800 × 600 elements, the edge map is of lower resolution with about 200 × 150 pixels. Although four images are needed in this case, the method can be applied to a scene with slowly moving objects, where each new image is only used to update one of the four component filter outputs in the edge computation. Note that all the multiplications involved in the convolutions are done in the optical domain (at the speed of light).

Figure 9. (a) Video of a person taken under harsh lighting using a conventional (8-bit) camera. (b) The raw output of the programmable system when the DMD is used to achieve adaptive dynamic range. (c) The modulation images applied to the DMD. The raw camera output and the DMD modulation can be used to compute a video with very high dynamic range.
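The following sketch simulates this two-capture scheme numerically; it is a software stand-in for the optical computation, with an illustrative synthetic image. The kernel is split into its non-negative components, each capture integrates the optical image against one component within every 3 × 3 block of DMD mirrors, and the two captures are subtracted.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def split_kernel(g):
    """Decompose a kernel into non-negative parts: g = g_plus - g_minus."""
    return np.maximum(g, 0.0), np.maximum(-g, 0.0)

def capture(optical_image, modulation_cell):
    """Simulate one CCD capture: every CCD pixel integrates the light over its
    3 x 3 block of DMD mirrors, each mirror weighted by the modulation value
    assigned to it."""
    h, w = optical_image.shape
    H, W = h // 3, w // 3
    blocks = optical_image[:H * 3, :W * 3].reshape(H, 3, W, 3).transpose(0, 2, 1, 3)
    return np.einsum("ijkl,kl->ij", blocks, modulation_cell)

def intra_pixel_convolution(optical_image, kernel):
    """Two captures (positive and negative kernel halves) and a subtraction
    give the intra-pixel convolution of the optical image with the kernel."""
    g_plus, g_minus = split_kernel(kernel)
    return capture(optical_image, g_plus) - capture(optical_image, g_minus)

# Illustrative check on a synthetic high-resolution optical image with a
# vertical edge falling inside one block: only that CCD column responds.
img = np.zeros((60, 60))
img[:, 31:] = 1.0
edges = intra_pixel_convolution(img, SOBEL_X)
print(np.nonzero(edges.sum(axis=0))[0])   # CCD columns responding to the edge
```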

6. Optical Appearance Matching

In the past decade, appearance matching using subspace methods has become a popular approach to object recognition (Turk and Pentland, 1991; Murase and Nayar, 1995). Most of these algorithms are based on projecting input images to a precomputed linear subspace and then finding the closest database point that lies in the subspace. The projection of an input image requires finding its dot product with a number of vectors. In the case of principal component analysis, the vectors are the eigenvectors of a correlation or covariance matrix computed using images in the training set.

It turns out that optical modulation can be used to perform all the required multiplications in the optical domain, leaving only additions to be done computationally. Let the input image be m and the eigenvectors of the subspace be e1, e2, . . . , ek. The eigenvectors are concatenated to obtain a larger (tiled) vector B = [e1, e2, . . . , ek] and k copies of the input image are concatenated to obtain the (tiled) vector A = [m, m, . . . , m]. If the vector A is “shown” as the scene to our imaging system and the vector B is used as the modulation image, the image captured by the camera is a vector C = A .∗ B, where .∗ denotes an element-by-element product of the two vectors. Then, the image C is raster scanned to sum up its k tiles to obtain the k coefficients that correspond to the subspace projection of the input image. This coefficient vector is compared with stored vectors and the closest match reveals the identity of the object in the image.

Figure 10. (a) Decomposition of a convolution kernel into two positive component kernels. (b) When the resolution of the DMD is higher than that of the CCD, intra-pixel convolution is done by using just two modulation images and subtracting the resulting CCD images. (c) Four images that result from applying the four component kernels of a Sobel edge operator. (d) The edge map computed from the four images in (c).
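A small numerical sketch of this tiled projection follows, with random arrays standing in for face images and the PCA basis; it ignores practical details such as mean subtraction and the fact that the modulation image cannot be negative (which in practice requires offsetting and scaling the eigenvectors). All names are illustrative.

```python
import numpy as np

def optical_projection(image, eigenvectors):
    """Simulate the optical subspace projection: the tiled input (k copies of
    the image) is multiplied element-wise by the tiled eigenvectors (the DMD
    modulation), and each tile of the product is summed to give one subspace
    coefficient. Only the summations would be done digitally."""
    A = np.concatenate([image.ravel() for _ in eigenvectors])   # tiled input
    B = np.concatenate([e.ravel() for e in eigenvectors])       # tiled modulation
    C = A * B                                                   # element-wise, "in optics"
    n = image.size
    return np.array([C[i * n:(i + 1) * n].sum() for i in range(len(eigenvectors))])

def nearest_neighbor(coeffs, database):
    """Return the label of the closest stored coefficient vector."""
    labels, vectors = zip(*database)
    dists = np.linalg.norm(np.asarray(vectors) - coeffs, axis=1)
    return labels[int(np.argmin(dists))]

# Illustrative use with random data standing in for faces and a PCA basis.
rng = np.random.default_rng(1)
eigvecs = [rng.standard_normal((32, 32)) for _ in range(6)]
db = [(f"person_{i}", optical_projection(rng.standard_normal((32, 32)), eigvecs))
      for i in range(6)]
query = rng.standard_normal((32, 32))
print(nearest_neighbor(optical_projection(query, eigvecs), db))
```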

We have used our system to implement this idea and develop a real-time face recognition system. Fig. 11(a) shows the 6 people in our database; 30 poses (images) of each person were captured to obtain a total of 180 training images. PCA was applied and the 6 most prominent eigenvectors are tiled as shown in Fig. 11(b) and used as the DMD modulation image. During recognition, the output of the video camera is also tiled in the same way as the eigenvectors and displayed on a screen that sits in front of the imaging system, as shown in Fig. 11(c). The 6 parts of the captured image are summed to obtain the 6 coefficients. A simple nearest-neighbor algorithm is applied to these coefficients to recognize the person in the input image.

Figure 11. (a) People used in the database of the recognition system (30 different poses of each person are included). (b) The 6 most prominent eigenvectors computed from the training set, tiled to form the modulation image. (c) A tiling of the input (novel) image is “shown” to the imaging system by using an LCD display. Simple summation of brightness values in the captured image yields the coefficients needed for recognition.

7. Programmable Imaging Geometry

Thus far, we have mainly exploited the radiometric flexibility made possible by the use of a programmable micro-mirror array. Such an array also allows us to very quickly alter the field of view and resolution characteristics of an imaging system.¹ Quite simply, a planar array of mirrors can be used to emulate a deformable mirror whose shape can be changed almost instantaneously.

To illustrate this idea, we do not use the imaging system in Fig. 4, as its optics would have to be substantially altered to facilitate field of view manipulation. Instead, we consider the case where the micro-mirror array does not have an imaging lens that focuses the scene onto it but instead directly reflects the scene into the camera optics. This scenario is illustrated in Fig. 12, where the array is aligned with the horizontal axis and the viewpoint of the camera is located at the point P at height h from the array.

Figure 12. The field of view of an imaging system can be controlled almost instantly by using a micro-mirror array. The scene is being reflected directly by the array into the viewpoint P of the camera. When all the mirrors are tilted by the same angle (θ), the effective field of view of the system is the same as that of the camera but is rotated by 2θ. In this case of parallel micro-mirrors, the system has a locus of viewpoints (one for each mirror) that lies on a straight line.

If all the mirrors are parallel to the horizontal axis, the array behaves like a planar mirror and the viewpoint of the system is simply the reflection P′ of the camera’s viewpoint P. The field of view in this case is the field of view of the camera itself (only reflected) as long as the mirror array fills the field of view of the camera. Now consider the mirror located at distance d from the origin to have tilt θ with the horizontal axis, as shown in Fig. 12. Then, the angle of the scene ray imaged by this mirror is φ = 2θ + α, where α = tan⁻¹(d/h). It is also easy to show that the viewpoint of the system corresponding to this particular mirror element is the point Q with coordinates Qx(d) = d − √(h² + d²) cos β and Qy(d) = −√(h² + d²) sin β, where β = (π/2) − φ. If all the micro-mirrors have the same tilt angle θ, then the field of view of the system is rotated by 2θ. In this case the system has a locus of viewpoints (caustic) that is a segment of the line that passes through P and Q.
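The sketch below evaluates these expressions for a row of mirrors sharing a common tilt (the values of h and θ are illustrative) and verifies that the resulting viewpoints are collinear, i.e. that the viewpoint locus is a straight line as stated above.

```python
import numpy as np

def mirror_viewpoint(d, h, theta):
    """Virtual viewpoint Q and scene-ray angle phi for a mirror at distance d
    from the origin, tilted by theta, with the camera viewpoint P at height h
    above the array (geometry of Fig. 12)."""
    alpha = np.arctan2(d, h)
    phi = 2.0 * theta + alpha                   # angle of the imaged scene ray
    beta = np.pi / 2.0 - phi
    r = np.hypot(h, d)
    Q = np.array([d - r * np.cos(beta), -r * np.sin(beta)])
    return phi, Q

h, theta = 1.0, np.radians(10.0)
points = np.array([mirror_viewpoint(d, h, theta)[1] for d in np.linspace(-0.5, 0.5, 5)])
# For a common tilt the viewpoints are collinear (a linear viewpoint locus).
dirs = np.diff(points, axis=0)
cross_z = dirs[:-1, 0] * dirs[1:, 1] - dirs[:-1, 1] * dirs[1:, 0]
print(points.round(3))
print("collinear:", np.allclose(cross_z, 0.0))
```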

Figure 13 shows how the imaging system can be used to capture multiple views of the same scene to implement stereo. In this example, the mirrors on the left half and the right half of the array are oriented at angles θ and −θ, respectively. A single image captured by the camera includes two views of the scene that are rotated with respect to each other by 4θ. Note that stereo does not require the two views to be captured from exactly two centers of projection. The only requirement is that each point in the scene is captured from two different viewpoints, which is satisfied in our case.

If the mirrors of the array can be controlled to have any orientation within a continuous range, one can see that the field of view of the imaging system can be varied over a wide range. In Fig. 14 the mirrors at the two end-points of the array have orientations −θ and θ, and the orientations of mirrors in between vary smoothly between these two values. In this case, the field of view of the camera is enhanced by 4θ.

As we mentioned, the DMDs that are currently available can have only one of two mirror orientations (+10 or −10 degrees) in their active (powered) state. Therefore, if all the mirrors are initially inactive (0 degrees) and then powered and oriented at 10 degrees, the field of view remains the same but its orientation changes by 20 degrees as described earlier. This very case is shown in Fig. 15, where the left image shows one view of a printed sheet of paper and the right one shows the other (rotated) view of the same.

Figure 13. Here, the mirrors on the left half and the right half of the array are oriented at angles θ and −θ, respectively. The result is a pair of stereo views of the same scene captured within a single camera image. Each view has a linear viewpoint locus and the two views are rotated with respect to each other by 4θ.

One can see that both the images are blurred. This is because we are imaging the scene directly through a DMD without using a re-imaging lens and hence many mirrors lie within the light cone that is imaged by a single pixel. Since the mirrors are tilted, the surface discontinuities at the edges of the mirrors cause diffraction effects. These effects become negligible when the individual micro-mirrors are larger.

8. Discussion

We have shown that programmable imaging using a micro-mirror array is a general and flexible approach to imaging. It enables one to significantly alter the geometric and radiometric characteristics of an imaging system using software. We now conclude with a few observations related to the proposed approach.

• Programmable Raxels: Recently, a general imaging model was proposed in Grossberg and Nayar (2005), which allows one to represent an imaging system as a discrete set of “raxels”. A raxel is a combination of a ray and a pixel. It was shown in Grossberg and Nayar (2005) that virtually any imaging system can be represented as a three-dimensional distribution of raxels. The imaging system we have described in this paper may be viewed as a distribution of programmable raxels, where the geometric and radiometric properties of each raxel can be controlled via software independent of all other raxels. This does not imply, however, that any imaging system can be emulated using our approach. For instance, while we have significant control over the orientations of the raxels (ray directions), it is not possible to use a single mirror-array to control the positions of the raxels independent of their orientations. Even so, the space of raxel distributions one can emulate is large.

• Optical Computations for Vision: As we have shown, our approach can be used to perform some simple image processing tasks, such as image multiplications and intra-pixel convolutions, in the optical domain. While optical image processing was not the goal of our work, the ability to perform these computations in the optical domain happens to be an inherent feature of programmable imaging using a micro-mirror array. There are two major advantages to processing visual data in the optical domain. The first is that any image processing or computer vision system is resource limited. Therefore, if any computations can be moved to the optical domain and done at the speed of light, it naturally frees up resources for other (perhaps higher) levels of processing. The second benefit is that the optical processing is done while the image is formed. That is, the computations are applied to the signal when it is in its purest form—light. This has the advantage that the signal has not yet been corrupted by the various forms of noise that occur between image detection and image digitization.

Figure 14. A planar array of planar mirrors can be used to emulate curved mirrors. Here the orientations of the mirrors vary gradually from −θ to θ. The image captured by the camera has a field of view that is 4θ greater than the field of view of the camera itself. Again, the system has a locus of viewpoints which in this case lies on a curve. Such a system can be used to instantly switch between a wide range of image projection models.

Figure 15. Two images of the same scene taken by pointing a camera directly at a DMD. The image on the left was taken with all the mirrors at 0 degrees (inactive DMD) and the image on the right with all the mirrors at 10 degrees. The fields of view corresponding to the two images are the same in terms of their solid angles but they are rotated by 20 degrees with respect to each other. Both images are blurred as the DMD is a very small and dense array of mirrors that is not appropriate for capturing direct reflections of a scene.

• Towards a Purposive Camera: Any imaging system is limited in terms of its resources. At a broad level, one may view these resources as being the number of discrete pixels and the number of brightness levels (bits) each of these pixels can measure. Different specialized cameras (omnidirectional, high dynamic range, multispectral, etc.) can each be viewed as a specific assignment of pixels and bits to the scene of interest. From this viewpoint, programmable imaging provides a means for dynamically changing the assignment of pixels and bits to the scene. We can now begin to explore the notion of a purposive camera: one that has the intelligence to automatically control the assignment of pixels and bits so that it always produces the visual information that is most pertinent to the task.

• Future Implementation: We are currently pursuing the implementation of the next prototype of the programmable imaging system. There are two limitations to the existing system that we are interested in addressing. The first is its physical packaging. We intend to redesign the hardware of the system from scratch rather than re-engineer a projector. Recently, DMD kits have become available and by using only the required components we believe we can make the system very compact. The second problem we wish to address relates to optical performance. In the current system, we have used the projector lens as the imaging lens and an inexpensive off-the-shelf lens for the re-imaging lens. Higher optical resolution can be achieved by using lenses that match the properties of the DMD and the CCD.

Finally, the full flexibility of programmable imaging will become possible only when mirror arrays provide greater control over mirror orientation. Significant advances are being made in MEMS technology as well as adaptive optics that we hope will address this limitation. When micro-mirror arrays allow greater control over the orientations of their mirrors, programmable imaging will have the potential to impact imaging applications in several fields of science and engineering.

Acknowledgments

This work was done at the Columbia Center for Vision and Graphics. It was supported by an ONR contract (N00014-03-1-0023).

Note

1. This approach to controlling field of view using a mirror array is also being explored by Andrew Hicks at Drexel University, Hicks (2003).

References

Castracane, J. and Gutin, M.A. 1999. DMD-based bloom control for intensified imaging systems. In Diffractive and Holographic Tech., Syst., and Spatial Light Modulators VI, vol. 3633, pp. 234–242. SPIE.
Christensen, M.P., Euliss, G.W., McFadden, M.J., Coyle, K.M., Milojkovic, P., Haney, M.W., Gracht, J. van der, and Athale, R.A. October 2002. Active-eyes: An adaptive pixel-by-pixel image-segmentation sensor architecture for high-dynamic-range hyperspectral imaging. Applied Optics, 41(29):6093–6103.
Dudley, D., Duncan, W., and Slaughter, J. February 2003. Emerging digital micromirror device (DMD) applications. White paper, Texas Instruments.
Ginosar, R., Hilsenrath, O., and Zeevi, Y. September 1992. Wide dynamic range camera. U.S. Patent 5,144,442.
Goodman, J.W. 1968. Introduction to Fourier Optics. McGraw-Hill, New York.
Grossberg, M.D. and Nayar, S.K. 2005. The raxel imaging model and ray-based calibration. IJCV, 61(2):119–137.
Hicks, R.A. 2003. Personal Communication.
Hornbeck, L.J. 1988. Bistable deformable mirror device. In Spat. Light Mod. and Apps., vol. 8. OSA.
Hornbeck, L.J. August 1989. Deformable-mirror spatial light modulators. In Projection Displays III, vol. 1150, pp. 86–102. SPIE.
Hornbeck, L.J. 1995. Projection displays and MEMS: Timely convergence for a bright future. In Micromachined Devices and Components, vol. 2642. SPIE.
Kang, S.B., Uyttendaele, M., Winder, S., and Szeliski, R. 2003. High dynamic range video. ACM Trans. on Graph. (Proc. of SIGGRAPH 2003), 22(3):319–325.
Kearney, K.J. and Ninkov, Z. 1998. Characterization of digital micromirror device for use as an optical mask in imaging and spectroscopy. In Spatial Light Modulators, vol. 3292, pp. 81–92. SPIE.
Malbet, F., Yu, J., and Shao, M. 1995. High dynamic range imaging using a deformable mirror for space coronography. Public. of the Astro. Soc. of the Pacific, 107:386.
Murase, H. and Nayar, S.K. 1995. Visual learning and recognition of 3D objects from appearance. IJCV, 14(1):5–24.
Nayar, S.K. and Branzoi, V. 2003. Adaptive dynamic range imaging: Optical control of pixel exposures over space and time. In Proc. of Inter. Conf. on Computer Vision (ICCV), pp. 1168–1175.
Nayar, S.K. and Mitsunaga, T. 2000. High dynamic range imaging: Spatially varying pixel exposures. In Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, vol. 1, pp. 472–479.
Nayar, S.K. and Narasimhan, S.G. 2002. Assorted pixels: Multi-sampled imaging with structural models. In Proc. of Euro. Conf. on Comp. Vis. (ECCV), vol. 4, pp. 636–652.
Smith, W.J. 1966. Modern Optical Engineering. McGraw-Hill.
Turk, M. and Pentland, A.P. 1991. Face recognition using eigenfaces. In Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 586–591.
Tyson, R.K. 1998. Principles of Adaptive Optics. Academic Press.