
Lehrstuhl für Bildverarbeitung

Institute of Imaging & Computer Vision

Multispectral High Dynamic Range Imaging

Johannes Brauers and Nils Schulte and André Alexander Bell and Til Aach

Institute of Imaging and Computer Vision

RWTH Aachen University, 52056 Aachen, Germany
tel: +49 241 80 27860, fax: +49 241 80 22200

web: www.lfb.rwth-aachen.de

in: IS&T/SPIE Electronic Imaging. See also BibTeX entry below.

BibTeX:

@inproceedings{BRA08a,

author = {Johannes Brauers and Nils Schulte and Andr\’{e} Alexander Bell and Til Aach},

title = {Multispectral High Dynamic Range Imaging},

booktitle = {IS\&T/SPIE Electronic Imaging},

year = {2008},

address = {San Jose, California, USA},

month = {Jan},

pages = {680704-1--680704-12}

}

© 2008 Society of Photo-Optical Instrumentation Engineers. This paper was published in IS&T/SPIE Electronic Imaging and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

document created on: November 11, 2008
created from file: paper.tex

cover page automatically created with CoverPage.sty

(available at your favourite CTAN mirror)


Multispectral High Dynamic Range Imaging

Johannes Brauers, Nils Schulte, André A. Bell, Til Aach

Institute of Imaging and Computer Vision, RWTH Aachen University
Templergraben 55, 52056 Aachen, Germany

Keywords: multispectral imaging, high dynamic range imaging, calibration, camera transfer function, noise reduction

ABSTRACT

Capturing natural scenes with high dynamic range content using conventional RGB cameras generally results in saturated and underexposed, and therefore compromised, image areas. Furthermore, the image lacks color accuracy due to a systematic color error of the RGB color filters. The problem of the limited dynamic range of the camera has been addressed by high dynamic range imaging1,2 (HDRI): Several RGB images of different exposures are combined into one image with greater dynamic range. Color accuracy, on the other hand, can be greatly improved using multispectral cameras,3 which more accurately sample the electromagnetic spectrum. We present a promising combination of both technologies: a high dynamic range multispectral camera featuring a higher color accuracy, an improved signal to noise ratio and a greater dynamic range compared to a similar low dynamic range camera.

1. INTRODUCTION

Color accuracy in terms of exact reproduction of the electromagnetic spectrum is rarely the decisive factor for consumers when purchasing a camera. Common digital RGB cameras provide a satisfying color reproduction and – perhaps more importantly – are cheap and robust due to their mass production, and are therefore established in devices ranging from mobile phones to professional cameras. It is rarely known that they exhibit a systematic color error4 as they violate the Luther rule,5 which states that the camera’s spectral sensitivities have to be a linear combination of those of the CIE observer. This makes them unsuitable for the accurate measurement of color, e.g., of art paintings.6

Another drawback of the most common camera type, namely the 1-chip RGB camera, is its need for spatial interpolation of color information: Since 1-chip color cameras are equipped with a color filter array (CFA) mapping three spectral sensitivities to the spatial domain, this results in a downsampled image for each color channel. Though the separated color channels can be recombined into one color image by using an interpolation algorithm,7 interpolation artifacts remain. Another disadvantage of the CFA is the induced shift variance,8 which causes the image statistics to be dependent on the image shift.

Figure 1. Our multispectral camera (left) and a sketch of its internal configuration (right).

Further author information: (Send correspondence to Johannes Brauers.)
Johannes Brauers: E-mail: [email protected], Telephone: +49 (241) 80 27866


Our multispectral camera shown in Fig. 1 offers a greatly improved color accuracy compared to 1-chip RGB cameras since the complete spatial information is acquired for each of seven color channels: This is achieved by placing a computer-controlled filter wheel with optical bandpass filters between lens and gray level imaging sensor. For each filter wheel position, gray level images are acquired separately and are combined into a multispectral color image afterwards. This enables a rough sampling of the electromagnetic spectrum, and the Luther rule mentioned above is mostly fulfilled. However, only static scenes can be acquired by this technique.

A problem concerning all conventional cameras – including gray level cameras – is their limited dynamic range. This means that the acquisition of scenes with a high dynamic range (HDR) results in saturated as well as underexposed areas. The latter furthermore contain more noise, i.e., a lower signal to noise ratio. Since our multispectral camera internally uses an industrial gray level camera, it is affected by the problem as well. We extend the dynamic range by using high dynamic range imaging:1,2 For each filter wheel position, we acquire several images with different exposures by varying the exposure time. The images are then combined into a multispectral high dynamic range (MHDR) image.

An interesting approach9 of multispectral HDR imaging using two 3CCD RGB cameras is accomplished by use of a half mirror to present the same view to both cameras, which are arranged perpendicularly. The ability to acquire multispectral images is given by using different interference filters which split the original passbands each into two halves. By doing so, e.g., the lower half of the red spectral sensitivity is acquired by the first camera and the higher half by the second camera. High dynamic range imaging is enabled by placing an additional neutral density filter in front of one camera. The outcome of this is that three passband halves are acquired with full sensitivity, while the other ones are acquired with a reduced sensitivity due to the neutral density filter. Therefore, the innovative concept in [9] offers only the limited number of two exposure levels, which may not suffice for many HDR applications. Another limitation occurs when one half of a passband is saturated: In this case, there is no estimation for that specific wavelength range at all, since the passband halves are shared by both cameras. In contrast, our concept is not limited to a certain number of exposure levels, acquires the full spectral range for each exposure level and thus provides a greater dynamic range.

We start by deriving a model of the image acquisition chain in section 2, with both a continuous and a discrete representation, and describe the estimation of spectra. We explain the procedure of image acquisition including calibration, address data export and describe our experiments in section 3 before we present our results in section 4. We conclude with section 5.

2. MODEL

Fig. 2 shows our acquisition model for high dynamic range imaging. The variable symbols are chosen according to [10, 11] where applicable. The object β(λ)∗ is illuminated by a light source, and the reflected light passes through the optics and aperture and is filtered by one of the – in our case – seven spectral bandpass filters which are arranged in a computer-controlled filter wheel. The final camera value qi,j is further influenced by the camera spectral characteristics, sensor size, exposure time and the camera transfer function (CTF), which describes the opto-electronic conversion of the radiant energy to a camera value. By modeling and inverting this optical chain, we derive an estimation of the object reflectance β(λ) by evaluating the camera output values qi,j.

2.1 Mathematical model of the imaging chain

The light source emits light with the spectral irradiance S(λ) (unit: W/(m²·nm)). It is reflected by the spectral reflectance β(λ), passes through the optics o(λ), which is normally assumed to be o(λ) = 1, and the aperture, which reduces the spectral irradiance by a constant a ≤ 1†. The next optical element is our filter wheel, which contains I optical bandpass filters, where each filter has a specific spectral characteristic curve τi(λ), i = 1 . . . I (see Fig. 3a). Each optical bandpass filter passes only a part of the spectrum and enables us to acquire the spectral color component for that specific wavelength range. The spectral irradiance reaching the sensor surface

∗In fact, our object is acquired with a spatial resolution βx,y(λ). Without loss of generality, our model does not incorporate position-dependence and we use the simplified notation β(λ).
†In fact, the solid angle of the object with respect to the light source and camera has an additional influence on the optical path, but is not considered here. It may be modeled by implicit integration into the aperture constant a.


[Figure 2 shows a block diagram of the internal camera chain: light source S(λ), reflectance β(λ), optics o(λ), aperture a, bandpass filter τi(λ), sensor spectral sensitivity R(λ), sensor area A, integration over the spectrum, exposure time Ti,j and CTF f(·), with intermediate quantities Ei(λ), φi, Qi,j and output qi,j.]

Figure 2. Diagram of our physical model using continuous variables; bandpass filters τi(λ) are responsible for spectral resolution, exposure times Ti,j for the resolution in the range of values. Abbreviations: Hi(λ) = τi(λ)R(λ), k = aA.

(a) Joint spectral characteristics Hi(λ) of optical bandpass filters τi(λ) and sensor R(λ), for channels i = 1 . . . 7 over the wavelength range 350–750 nm.

(b) Camera transfer function f of our internal gray level camera Sony XCD-SX900, plotted as 8-bit sensor value over relative irradiance; solid lines denote measurements, the dotted line is a line fit, and the dashed line marks the black level.

Figure 3. Spectral and radiometric characteristic curves of our multispectral camera.

is then derived by

Ei(λ) = S(λ) β(λ) o(λ) a τi(λ) , (1)

where the physical unit of Ei(λ) is W/(m²·nm), since the optical elements β(λ), o(λ), a and τi(λ) are unit-less. By multiplication of (1) with the spectral characteristic curve R(λ) of the sensor and the sensor area A, the spectral radiant power (unit: W/nm) is computed by

φ(λ) = Ei(λ) R(λ) A . (2)

Integrating over the whole wavelength range yields the radiant power

φi = ∫λ φ(λ) dλ = ∫λ S(λ) β(λ) o(λ) a τi(λ) R(λ) A dλ (3)

for filter i with the physical unit W (Watts). φi represents a spectral sample, corresponding to a certain wavelength range of our object, and we also term it spectral channel i. To take different exposure times Ti,j with a total number of J exposure levels into account, we compute the radiant energy

Qi,j = φi Ti,j (4)


with the physical unit Ws = J (Joule). We implicitly assume that φi is not a function of time, i.e., we have a still scene during the acquisition. The final camera value is derived by

qi,j = f(Qi,j) , (5)

where f is the camera transfer function (CTF), which describes the opto-electronic conversion of the radiant energy Qi,j to a camera value qi,j. The CTF of our multispectral camera is depicted in Fig. 3b and covers non-linearities of the camera’s internal analog/digital converter. It can be precisely measured by the procedure described in section 3.1. Equations (1) to (5) can now be combined into

qi,j = f ( Ti,j ∫ S(λ) β(λ) o(λ) a τi(λ) R(λ) A dλ ) . (6)

To simplify (6), we set o(λ) = 1, combine the sensor spectral sensitivity R(λ) and the filter spectral characteristic τi(λ) into Hi(λ) = τi(λ) R(λ), set k = aA and obtain

qi,j = f ( Ti,j ∫ k Hi(λ) S(λ) β(λ) dλ ) , (7)

which describes the complete acquisition procedure in the noiseless case. It depends on the selected bandpass filter i and the exposure channel j.

2.2 Discrete representation of the imaging chain

Because typical reflectance spectra are smooth, it is sufficient and common3,9,12 to use sampled spectra, e.g., with a number of N = 61 spectral samples, which cover a wavelength range from λ1 = 400 nm to λN = 700 nm. Towards this end, we write Eq. (4)-(5) in matrix notation

qj = f (Tjφ) . (8)

Inserting

φ = kHSβ , (9)

yields the complete model

qj = f (TjkHSβ) (10)

from Eq. (7). Note that the integration in (7) is now implicitly realized by a matrix multiplication according to

qi,j = f ( Ti,j Σ_{n=1..N} k Hi(λn) S(λn) β(λn) ) . (11)

The bold symbols denote vectors or matrices defined by

qj = (q1,j . . . qI,j)T (I × 1)
φ = (φ1 . . . φI)T (I × 1)
Tj = diag(T1,j . . . TI,j) (I × I)
Hi = (Hi(λ1) . . . Hi(λN))T (N × 1)
H = (H1 . . . HI)T (I × N)
S = diag(S(λ1) . . . S(λN)) (N × N)
β = (β(λ1) . . . β(λN))T (N × 1) ,

and correspond to their continuous variables. The operator diag(·) derives a diagonal matrix from a vector, (·)T denotes a transpose operation and I is the number of bandpass filters of the multispectral camera.
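The discrete forward model of Eqs. (8)-(11) can be sketched numerically. The following is a minimal sketch, not the authors' implementation: the Gaussian filter characteristics, flat illuminant and linear clipping CTF are hypothetical stand-ins for the measured quantities.

```python
import numpy as np

# Dimensions from the text: I = 7 filters, N = 61 spectral samples (400-700 nm).
I, N = 7, 61
lam = np.linspace(400.0, 700.0, N)

# Hypothetical stand-ins for the measured characteristics (assumptions):
H = np.stack([np.exp(-0.5 * ((lam - c) / 15.0) ** 2)   # H = (H_1 ... H_I)^T, (I x N)
              for c in np.linspace(400.0, 700.0, I)])
S = np.eye(N)                                          # flat illuminant, S = diag(S(lam_n))
beta = 0.2 + 0.6 * (lam - 400.0) / 300.0               # smooth test reflectance beta
k = 1.0                                                # aperture/sensor-area constant k = aA
Tj = np.diag(np.full(I, 1.0 / 90.0))                   # exposure times T_j = diag(T_1j ... T_Ij)

def f(Q):
    """Toy linear CTF with 8-bit clipping (assumption; the real CTF is measured)."""
    return np.clip(255.0 * Q / 0.1, 0.0, 255.0)

phi = k * H @ S @ beta   # Eq. (9): radiant power per spectral channel, (I x 1)
qj = f(Tj @ phi)         # Eq. (8)/(10): camera values q_j = f(T_j k H S beta)
```

Each entry of `qj` is one gray level per filter-wheel position; saturating the CTF clip illustrates the limited dynamic range the paper addresses.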


2.3 Estimation

Given several camera responses qi,j for different spectral channels i = 1 . . . I and exposure channels j = 1 . . . J, our aim is to estimate the originating spectrum β(λ) and its discrete variable β, respectively, i.e. the inversion of Eq. (10). We split the solution into the HDRI part – namely the inversion of (8) – and the spectral estimation, which corresponds to an inversion of (9).

In the ideal case, without quantization and camera noise and without saturation effects of our camera, we could directly invert Eq. (8), yielding the correct values φ. But in reality, we have to use a weighted averaging

φi = [ Σ_{j=1..J} (f−1(qi,j) / Ti,j) w(qi,j) ] / [ Σ_{j=1..J} w(qi,j) ] = [ Σ_{j=1..J} (Qi,j / Ti,j) w(qi,j) ] / [ Σ_{j=1..J} w(qi,j) ] , (12)

of sensor values qj. The inverse camera transfer function (ICTF) is denoted by f−1 and the weighting function by w(·). The ICTF is the inversion of the CTF depicted in Fig. 3b and compensates camera-specific nonlinearities as well as the black level of the camera. The weighting function w(·) favors values in the center of the range of values over the ones at the borders, suppressing saturated values. Several weighting functions are suggested in the literature; we actually use a Tukey window (see Fig. 4)

w(d) = 1 for 0 ≤ |d| ≤ αD/2 ,
w(d) = ½ (1 + cos( π (|d| − αD/2) / ((1 − α) D/2) )) for αD/2 ≤ |d| ≤ D/2 (13)

with a length D + 1 = 256 and a taper ratio of α = 0.5, because it suppresses the unreliable values in the upper and lower range of values: in the measured CTF in Fig. 3b, especially the border areas diverge from the best-fit straight line, and the upper area exhibits a rougher quantization, since fewer sensor values are assigned to a fixed irradiance interval. An alternative weighting function is the derivative of the CTF f′(·),1,13 which is similar to our weighting function. However, since our CTF is rather linear, the critical border areas would have a too strong influence, and so we use the more stringent weighting function in (13) instead.

In practice, Eq. (12) corresponds to a combination of camera images with different exposures but the same spectral channel into single grayscale HDR images. These images exhibit a greater dynamic range than the single images, and each one represents a different wavelength range because it was acquired through a specific optical bandpass filter i.
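The per-pixel merging of Eq. (12) with the Tukey weighting of Eq. (13) can be sketched as follows; the linear ICTF and the sample camera values are assumptions for illustration, not measured data.

```python
import numpy as np

D = 255        # window length D + 1 = 256 discrete camera values (8 bit)
ALPHA = 0.5    # taper ratio

def tukey_weight(q):
    """Tukey window of Eq. (13), applied to the distance d of the camera
    value q from the center of the 8-bit range; zero outside |d| <= D/2."""
    d = np.abs(np.asarray(q, dtype=float) - D / 2.0)
    taper = 0.5 * (1.0 + np.cos(np.pi * (d - ALPHA * D / 2) / ((1 - ALPHA) * D / 2)))
    return np.where(d <= ALPHA * D / 2, 1.0, np.where(d <= D / 2, taper, 0.0))

def merge_exposures(q, T, ictf):
    """Weighted average of Eq. (12): q holds J exposures of one pixel in one
    spectral channel, T the exposure times, ictf the inverse CTF."""
    w = tukey_weight(q)
    return np.sum(ictf(q) / T * w) / np.sum(w)

# Toy usage with a linear ICTF (assumption; the real ICTF is measured, Sec. 3.1):
ictf = lambda q: np.asarray(q) / 255.0
q = np.array([4.0, 32.0, 255.0])     # three exposures of one pixel; last one saturated
T = np.array([1/706, 1/90, 1/11])    # exposure times in seconds
phi_hat = merge_exposures(q, T, ictf)
```

The saturated value 255 receives weight zero and drops out of the average, while the two consistent exposures both contribute to the estimated radiant power.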

Figure 4. Windowing function w(·): Tukey window with length D + 1 = 256 and taper ratio α = 0.5; abscissa: camera value, ordinate: weighting.

In most cases, the spectral resolution N (e.g. N = 61) is higher than the number of acquired spectral channels I (e.g. I = 7). Therefore, Eq. (9) cannot be directly inverted either; the reflectance vector β has to be interpolated from the radiant power φ. In other words, equation (9) cannot be inverted because the matrix H maps an (N × 1) vector to an (I × 1) vector. In addition, the light source S is normally unknown and is estimated from a reference target in our case. We furthermore approximate (9) by introduction of an (I × I) diagonal light source matrix S′ and derive

φ = kS′Hβ . (14)

Since the real spectral resolution is limited to only I spectral channels, the approximation does not introduce any limitations concerning accuracy, but allows for further simplification in (17).

To determine the spectrum of the light source S(λ) and its discrete approximated variable S′, respectively, we acquire a white balance reference card with a known spectrum βref. With knowledge of the camera response φref, we compute

S′ref = diag(φref ÷ (kHβref)) , (15)

where the operation “÷” denotes an element-wise division.

By insertion of (15) into (14), we derive

φ = k diag (φref ÷ (kHβref)) Hβ (16)

and by further simplification

(φ÷ φref) ◦ (Hβref) = Hβ , (17)

where the symbol “◦” denotes an element-wise multiplication. The division of φ by φref can be interpreted as a multispectral white balance and cancels out the camera-specific factor k. The validity of (17) can be checked by inserting the white reference card β = βref: φ then becomes φref and the equality holds.

Since H is non-invertible because of its rectangular shape, we have to use an approximation

β = Hinv ((φ÷ φref) ◦ (Hβref)) (18)

by using the weighted pseudoinverse

Hinv = Rxx HT (H Rxx HT)−1 . (19)

The matrix

Rxx = (ρ^|m−n|), m, n = 1 . . . N , (20)

i.e., a symmetric Toeplitz matrix with ones on the diagonal and entries ρ, ρ², . . . , ρ^(N−1) on the off-diagonals, models the reflectance β(λ) as being smooth,3 i.e., neighboring values are assumed to be correlated by a factor ρ. By using the pseudoinverse without weighting, a strong oscillation of the solution might result. A typical value for ρ is 0.98.
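The spectral estimation of Eqs. (18)-(20) can be sketched in a few lines of NumPy. This is a sketch under stated assumptions: the Gaussian characteristics H, the white spectrum βref and the test reflectance are hypothetical stand-ins for the measured data, and k and S are omitted since they cancel through the white balance.

```python
import numpy as np

I, N, RHO = 7, 61, 0.98
lam = np.linspace(400.0, 700.0, N)

# Eq. (20): correlation matrix with entries rho^|m-n|, modeling smooth spectra.
idx = np.arange(N)
Rxx = RHO ** np.abs(idx[:, None] - idx[None, :])

# Hypothetical Gaussian stand-ins for the measured H_i(lambda) of Fig. 3a:
H = np.stack([np.exp(-0.5 * ((lam - c) / 15.0) ** 2)
              for c in np.linspace(400.0, 700.0, I)])

# Eq. (19): weighted (smoothed) pseudoinverse.
Hinv = Rxx @ H.T @ np.linalg.inv(H @ Rxx @ H.T)

# Eq. (18) on simulated, already linearized camera responses:
beta_ref = np.full(N, 0.9)                     # known white-reference spectrum (assumption)
beta_true = 0.2 + 0.6 * (lam - 400.0) / 300.0  # smooth test reflectance
phi, phi_ref = H @ beta_true, H @ beta_ref     # toy camera responses (k, S cancel)
beta_hat = Hinv @ ((phi / phi_ref) * (H @ beta_ref))
```

Since H Hinv equals the identity, the estimate exactly reproduces the measured channel responses (H @ beta_hat == phi); the correlation matrix Rxx only shapes how the N-point spectrum is interpolated between the I measured channels.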

3. PRACTICAL CONSIDERATIONS

We used our multispectral camera shown in Fig. 1 for our experiments. The camera features I = 7 optical bandpass filters in the range from 400 nm to 700 nm in steps of 50 nm, which can be controlled via the IEEE-1394 bus by the PC. The camera uses the internal grayscale camera Sony XCD-SX900 with a chip size of 6.4 mm × 4.8 mm and a resolution of 1280 × 960 pixels. We use a Nikkor AF-S DX 18-70mm lens on the external F-mount, while the internal camera features a C-mount.

This section discusses four issues: The relative radiometric calibration of our camera, the practical acquisition procedure, the multispectral data export and the tests we performed to measure the improvement of our MHDR technique compared to conventional LDR multispectral imaging.


3.1 Measuring the camera transfer function (CTF)

The combination of camera values qi,j from J different exposures into a single estimated radiant power φi as described by Eq. (12) represents a key issue of multispectral high dynamic range imaging. An important condition is the linearization of the camera values, since the direct combination of (non-linear) camera values would introduce severe errors on the final measurement. The linearization requires the precise measurement of the CTF in Eq. (5) and the application of the inverse CTF

Qi,j = f−1(qi,j) . (21)

To obtain the CTF or its inverse, several estimation methods have been proposed. These estimates rely on a series of different exposures of the same scene. By assuming the exposure ratios to be known and introducing a model for the CTF, e.g. a gamma curve, the parameters of the CTF model can be estimated from the joint histograms of exposure pairs.14 Alternatively, a smoothness constraint has been incorporated instead of a model to estimate the CTF.2 Other models investigated in the literature include a polynomial approximation,15 parametric functions,1,16,17 or a constrained piecewise linear model.18,19 However, in our case, we use a more precise method, namely a relative radiometric measurement with our calibration stand shown in Fig. 5, which allows a more exact measurement20 of the CTF than, e.g., chart-based measurement methods.21 We exploit the relation

E ∼ (r² / (r² + x²)) L (22)

between the radiance L [W/(m²·sr)] of the integrating sphere with opening diameter r and the distance x between light source and camera sensor to vary the irradiance E [W/m²] impinging on the sensor of the camera, which influences the sensor value qi,j as well. In practice, we position the camera at various distances x, which implicitly correspond to specific irradiances E, and we capture the sensor values qi,j. The result of such a measurement is shown in Fig. 3b: the abscissa denotes the irradiance and the ordinate the sensor value. Since the distances x are measured with 50 µm precision by our distance sensor, the measurement is very accurate.
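A sketch of how such a distance series can be turned into a tabulated inverse CTF via Eq. (22); the sphere radius, the distance range and the toy CTF with black level are assumptions for illustration, not the measured stand geometry.

```python
import numpy as np

# Eq. (22): vary the irradiance E via the camera-sphere distance x (L set to 1).
r = 0.05                                  # sphere opening radius in m (assumption)
x = np.linspace(0.2, 2.0, 40)             # measured camera positions in m
E = r**2 / (r**2 + x**2)
E = E / E.max()                           # relative irradiance, cf. Fig. 3b

# Toy CTF with black level and mild nonlinearity (assumption, cf. Fig. 3b):
q = 10.0 + 240.0 * E**0.95

# Tabulated ICTF: swap the axes of the measurement and interpolate.
def ictf(value):
    order = np.argsort(q)
    return np.interp(value, q[order], E[order])
```

Applying `ictf` to raw camera values linearizes them before the exposure merging of Eq. (12); between measured points the curve is interpolated piecewise linearly.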

Figure 5. Our camera transfer function (CTF) measurement stand: The integrating sphere (left) illuminates the camera (right); the distance can be modified by shifting the camera on a sliding carriage (bottom), while the exact position is measured by a distance sensor (not shown).

3.2 Acquisition procedure

Fig. 6 shows all images being acquired when using J = 3 different exposure times. The object being captured is a GretagMacbeth ColorChecker with 24 color patches and has been illuminated non-uniformly to produce a high dynamic range scene. In the figure, different exposures have been placed on the vertical axis, while different spectral channels are arranged on the horizontal axis. The reconstruction of the multispectral HDR image is done as follows: First, the geometric distortions between the spectral channels are corrected by an algorithm described in [22]. Towards this end, one channel is defined as the reference channel and a model fit is performed to geometrically adapt the other channels to the reference channel.


Figure 6. All images taken for an acquisition with seven spectral channels I = 7 (left to right) and three exposure levels J = 3 (top to bottom).

After that, the inversion of the camera transfer function f(·) and the combination into a high dynamic range grayscale image are performed by applying Eq. (12). This can be interpreted as a merging of the images in Fig. 6 along the vertical axis. Fig. 7 visualizes the influence of each image on the HDR image for a series of J = 7 exposures: Each histogram represents a single image from the exposure time series, where image values have been normalized with their corresponding exposure time. Images with small exposure times (top of figure) contribute to the reproduction of bright areas of the image, since they occupy a large range of values – yet with a limiting quantization. The rough quantization, or rather the sensor’s limited resolution, causes noise in the dark areas. Images which have been acquired with longer exposure times (bottom of figure) are saturated in the bright areas, as the peak on the right indicates. But since they focus their dynamic range on the dark areas, the fine resolution in those areas contributes to the reproduction of dark areas.

To adapt the acquisition to the current light source, a calibration capture of a white reference patch has to be made shortly before the acquisition. The linearized camera response φref is extracted by cropping and averaging the white balance patch. The calibration counterpart, namely the spectrum βref, can be measured with a spectral photometer.
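The extraction of φref and the light-source estimate of Eq. (15) can be sketched as follows; the image size, the patch coordinates and the flat stand-ins for k, H and βref are hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

# Hypothetical linearized HDR channel stack (I = 7 spectral channels), e.g.
# the per-pixel output of Eq. (12):
rng = np.random.default_rng(1)
hdr = rng.uniform(0.4, 0.6, size=(7, 64, 64))

# Crop an area inside the white-reference patch and average it per channel:
y0, y1, x0, x1 = 20, 40, 20, 40                    # patch region (assumption)
phi_ref = hdr[:, y0:y1, x0:x1].mean(axis=(1, 2))   # linearized response, shape (I,)

# Eq. (15) with flat placeholder values for k, H and beta_ref (assumptions):
k, N = 1.0, 61
H = np.full((7, N), 1.0 / N)
beta_ref = np.full(N, 0.9)
S_ref = np.diag(phi_ref / (k * (H @ beta_ref)))    # S'_ref = diag(phi_ref ./ (k H beta_ref))
```

Averaging over the patch suppresses pixel noise in the calibration; the diagonal S_ref then enters the estimation of Eq. (18) through the white balance term.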

The final estimation of spectra is then carried out by Eq. (18). The previously estimated white reference data serves as calibration data for the estimation: The term (φ ÷ φref) of Eq. (18) represents a multispectral white balance, and (φ ÷ φref) ◦ (Hβref) adapts the acquired values to the spectrum βref of the white reference patch. Finally, the matrix Hinv interpolates the I-channel camera data to N spectral sample points.

3.3 Multispectral data exchange

We use the .aix file format23 for storing multispectral HDR images, since it provides structures for multichannel images and also copes with floating point data. The floating point data is produced when combining gray scale images into one HDR image as shown in Eq. (12). Reducing the data to, e.g., 8 bit would destroy the details in dark and bright regions. An advantage of the .aix file format is the ability to store the reconstruction matrix Hinv (see Eq. (19)) within the file. It is therefore not necessary to save the complete reconstructed spectra β with, e.g., N = 61 values each, but only I = 7 channels, resulting in a considerable reduction of file size. More precisely, we store the values

(φ÷ φref) ◦ (Hβref) (23)

from Eq. (18) in the Frame table23 and the reconstruction matrix Hinv in the Samples2Spectrum table. The spectrum β is then reconstructed by using the stored matrix Hinv.

3.4 Experiments

For evaluation of the spectral and signal to noise ratio (SNR) performance, we utilized a GretagMacbeth Munsell Digital ColorChecker SG (see Fig. 8a) featuring 140 color patches with an overall size of 21.6 cm × 27.9 cm. An intentionally generated shading introduced by halogen lamps causes the scene to exhibit a high dynamic range.


[Figure 7 shows one histogram per exposure time T = 1/706 s, 1/358 s, 1/180 s, 1/90 s, 1/45 s, 1/22 s and 1/11 s, plotted over normalized camera values.]

Figure 7. Histograms of normalized images with different exposure times.

We placed the camera perpendicularly to the ColorChecker and acquired J = 7 different exposures for the I = 7 spectral channels, resulting in a total number of 49 acquisitions.

Because the chosen target also features white patches, we used the patch E5 for white balancing as described in Eq. (15). The reconstruction is then carried out by the equations in section 2. For an exact evaluation of each color patch, we crop an area inside each patch. To compare our HDR acquisition to an LDR one, we acquire two multispectral images: The first one is an HDR image, where we acquire several exposure levels j = 1 . . . 7. The second one is acquired by using only one exposure level (J = 1).

4. RESULTS

Fig. 8 visualizes our analysis of the SNR improvement between LDR and HDR acquisition: We measured the peak signal to noise ratio for each patch in both images and computed the differences between LDR patch and HDR patch. We skipped saturated patches in the LDR image since they would seem to have no noise due to the saturation, i.e., the mapping to a single value.

Fig. 8a shows the difference between HDR and LDR patches in terms of SNR; positive numbers denote improvements, negative numbers degradations. The latter are due to the introduction of new gray levels in the HDR image: While the resolution in the LDR image is limited and several gray levels are combined into a single level due to quantization, the HDR image provides a greater variety of gray levels. When computing the PSNR, there seems to be more noise in those regions. In general, we derive from Fig. 8a that especially dark patches benefit from HDR imaging. Fig. 8b gives a more statistical view of the SNR improvement and demonstrates the reduction of noise for nearly all color patches. We computed the mean improvement to be 5.06 dB for our test image.

Based on the acquired LDR and HDR images, we also investigated the color accuracy. The reference spectral data for comparison with the acquired images was collected by scanning the color patches with a GretagMacbeth EyeOne spectral photometer. We measured the color difference between acquisition and reference with the ∆E00 formula.24 To compensate for the shading of our acquisitions, and because we are mainly interested in the color components, we scaled the intensity of each image patch to match the original brightness. The results are depicted in Fig. 9, where each point (∆E00,LDR, ∆E00,HDR) corresponds to one color patch. Points with ∆E00,LDR > ∆E00,HDR are marked in blue and represent an improvement in terms of color accuracy; the others denote a degradation. The chart therefore gives a direct impression of the color accuracy improvement of HDR imaging, since most of the points are below the diagonal line. More precisely, the color accuracy of 104/140 color patches is improved.


(a) Chart overview of the per-patch ∆PSNR values in dB. In general, dark patches benefit more from HDR than light patches; saturated fields in the LDR image are left blank.

(b) Bar plot of ∆PSNR (dB) over ColorChecker patch number. The dotted line denotes the mean improvement of 5.06 dB; saturated fields in the LDR image have been skipped.

Figure 8. Comparison of the signal-to-noise ratio of LDR/HDR acquisitions of a ColorChecker with 140 color patches, which has been illuminated non-uniformly to simulate a high dynamic range scene. Negative ∆PSNR values are due to the introduction of new gray levels in the HDR image.

(Scatter plot of ∆E00,HDR over ∆E00,LDR; points below the diagonal indicate "HDR better," points above "LDR better.")

Figure 9. Comparison of the color error ∆E00: in 104/140 cases, the color accuracy is improved; reference: GretagMacbeth EyeOne spectral measurement; some data points of ∆E00,LDR have been skipped because they are saturated.

To prove practical applicability, we also acquired outdoor scenes with our portable multispectral camera (Fig. 10). Here, too, seven exposures have been taken for each of the seven spectral channels, resulting in a total of 49 acquisitions. The high image quality becomes apparent when examining detail crops of dark or bright image regions. Fig. 11a shows a crop of one dormer of the roof of Aachen's city hall. We artificially brightened the crops to improve their expressiveness. The LDR acquisition exhibits a large amount of noise because the dark regions are mapped onto only a few values in the image. Using HDR imaging, the dark regions are also captured with long exposure times, thus mapping the same range of values to more quantization levels (see Fig. 7). The precise information from those exposures for the dark regions is incorporated into the final multispectral HDR image and thus greatly improves image quality.
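The way the exposure series for one spectral channel is combined can be illustrated by a minimal merge sketch. This assumes a linear camera response and a simple binary saturation mask; function and parameter names are our own, and the weighting is a simplification of an actual HDR reconstruction:

```python
import numpy as np

def merge_exposures(images, times, sat_level=0.95):
    """Merge differently exposed frames of one spectral channel into a
    radiance map. `images` must be ordered from shortest to longest exposure.
    Saturated pixels are excluded, so dark regions draw on the long
    exposures and bright regions on the short ones."""
    images = np.asarray(images, dtype=float)
    weights = (images < sat_level).astype(float)   # mask out saturated pixels
    weights[0] = 1.0  # always keep the shortest exposure to avoid empty pixels
    radiance = images / np.asarray(times)[:, None, None]  # per-frame estimate
    return np.sum(weights * radiance, axis=0) / np.sum(weights, axis=0)

# two noiseless frames of the same scene, exposure ratio 4
frames = [np.full((2, 2), 0.2), np.full((2, 2), 0.8)]
print(merge_exposures(frames, [1.0, 4.0])[0, 0])  # recovers the radiance 0.2
```

In a bright region the long exposure clips and its weight drops to zero, so only the short exposure contributes; in a dark region the long exposure dominates the average with its finer quantization.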

The improvement of light areas, as depicted in Fig. 11b, is even clearer: While bright areas in the LDR image might be saturated due to the limited dynamic range, the HDR image also comprises acquisitions with short exposure


Figure 10. HDR image of Aachen’s city hall.

(a) Brightened crops of a dark region: The image noise is significantly reduced.

(b) Shaded crops of a bright region: The HDR image shows more details, which are lost in the saturated area of the LDR image.

Figure 11. Detail crops of Fig. 10 showing the improvements of HDR imaging; left images: LDR, right images: HDR.

times that cope with bright areas. In the figure, the HDR image therefore preserves the structure of the roof tiles, which is lost in the LDR image.

5. CONCLUSIONS

We have presented both theory and practice of multispectral high dynamic range (MHDR) imaging: We derived the mathematical model of an MHDR camera and described the spectral estimation from a series of exposures. We presented our camera and addressed several practical issues such as the relative radiometric calibration and the spectral calibration with a white patch. In a laboratory test, we compared our camera to a conventional low dynamic range camera and demonstrated the advantages in terms of color accuracy and signal-to-noise ratio. Additionally, we showed the applicability of our camera by taking outdoor images and emphasized the advantages with descriptive images.

REFERENCES

1. S. Mann, "Comparametric equations with practical applications in quantigraphic image processing," IEEE Transactions on Image Processing 9(8), pp. 1389–1406, 2000.
2. P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in SIGGRAPH 97, pp. 369–378, (Los Angeles, California, USA), Aug 1997.
3. S. Helling, E. Seidel, and W. Biehlig, "Algorithms for spectral color stimulus reconstruction with a seven-channel multispectral camera," in IS&T's Proc. 2nd European Conference on Color in Graphics, Imaging and Vision CGIV 2004, pp. 254–258, (Aachen, Germany), Apr 2004.
4. F. König and P. G. Herzog, "On the limitation of metameric imaging," Proc. IS&T's Image Processing, Image Quality, Image Capture, Systems Conference (PICS) 2, pp. 163–168, 1999.
5. R. Luther, "Aus dem Gebiet der Farbreizmetrik," Zeitschrift für technische Physik 8, pp. 540–558, 1927.
6. A. Ribes, F. Schmitt, R. Pillay, and C. Lahanier, "Calibration and spectral reconstruction for CRISATEL: An art painting multispectral acquisition system," Journal of Imaging Science and Technology 49, pp. 563–573, Dec 2005.
7. B. Gunturk, J. Glotzbach, Y. Altunbasak, R. Schafer, and R. Mersereau, "Demosaicking: Color filter array interpolation," IEEE Signal Processing Magazine 22, pp. 44–54, Jan 2005.
8. T. Aach, "Comparative analysis of shift variance and cyclostationarity in multirate filterbanks," IEEE Transactions on Circuits and Systems–I: Regular Papers 54(5), pp. 1077–1087, 2007.
9. H. Haneishi, S. Miyahara, and A. Yoshida, "Image acquisition technique for high dynamic range scenes using a multiband camera," Wiley's Color Research & Application 31(4), pp. 294–302, 2006.
10. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, Wiley, 1982.
11. D. C. O'Shea, Elements of Modern Optical Design, Wiley-Interscience, 1985.
12. R. Berns, L. Taplin, M. Nezamabadi, M. Mohammadi, and Y. Zhao, "Spectral imaging using a commercial color-filter array digital camera," in Proc. of the 14th Triennial ICOM-CC Meeting, pp. 743–750, (The Hague, The Netherlands), Sep 2005.
13. A. A. Bell, J. N. Kaftan, T. Aach, D. Meyer-Ebrecht, and A. Böcking, "High dynamic range images as a basis for detection of argyrophilic nucleolar organizer regions under varying stain intensities," in IEEE International Conference on Image Processing, ICIP 2006, pp. 2541–2544, 2006.
14. S. Mann and R. W. Picard, "Being 'undigital' with digital cameras: Extending dynamic range by combining differently exposed pictures," Tech. Rep. 323, M.I.T. Media Lab Perceptual Computing Section, Boston, Massachusetts, 1994.
15. T. Mitsunaga and S. K. Nayar, "Radiometric self calibration," in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 1999, 1, pp. 374–380, 1999.
16. Y. Tsin, V. Ramesh, and T. Kanade, "Statistical calibration of CCD imaging process," in IEEE International Conference on Computer Vision, ICCV 2001, 1, pp. 480–487, 2001.
17. D. Hasler and S. Süsstrunk, "Modeling the opto-electronic conversion function (OECF) for application in the stitching of panoramic images," in International Conference on Imaging Systems (ICIS), pp. 379–380, 2002.
18. A. F. Barros and F. M. Candocia, "Image registration in range using a constrained piecewise linear model," in IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2002, 4, pp. 3345–3348, 2002.
19. F. M. Candocia and D. A. Mandarino, "A semiparametric model for accurate camera response function modeling and exposure estimation from comparametric data," IEEE Transactions on Image Processing 14(8), pp. 1138–1150, 2005.
20. A. A. Bell, J. N. Kaftan, D. Meyer-Ebrecht, and T. Aach, "An evaluation framework for the accuracy of camera transfer functions estimated from differently exposed images," in IEEE Southwest Symposium on Image Analysis and Interpretation, pp. 168–172, (Denver, Colorado), Mar 2006.
21. Y. C. Chang and J. F. Reid, "RGB calibration for color image analysis in machine vision," IEEE Transactions on Image Processing 5(10), pp. 1414–1422, 1996.
22. J. Brauers, N. Schulte, and T. Aach, "Modeling and compensation of geometric distortions of multispectral cameras with optical bandpass filter wheels," in 15th European Signal Processing Conference, (Poznań, Poland), Sep 2007.
23. "Multispectral image file format AIX." http://spectral.joensuu.fi/multispectral/spectralimages/AIX-Format-1.6.0.pdf.
24. G. Sharma, W. Wu, and E. N. Dalal, "The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations," Wiley's Color Research & Application 30, pp. 21–30, Feb 2005.