
Scene-based nonuniformity correction technique for infrared focal-plane arrays

Yong-Jin Liu,1,* Hong Zhu,1,2 and Yi-Gong Zhao1

1Institute of Pattern Recognition and Intelligent Control, School of Electronic Engineering, Xidian University, Xi'an 710071, China

2School of Electromechanical Engineering, Xidian University, Xi’an 710071, China

*Corresponding author: [email protected]

Received 17 November 2008; revised 21 March 2009; accepted 23 March 2009; posted 26 March 2009 (Doc. ID 104094); published 15 April 2009

A scene-based nonuniformity correction algorithm is presented to compensate for the gain and bias nonuniformity in infrared focal-plane array sensors; the algorithm can be separated into three parts. First, an interframe-prediction method is used to estimate the true scene, since nonuniformity correction is a typical blind-estimation problem in which both scene values and detector parameters are unavailable. Second, the estimated scene, along with its corresponding observed data obtained by the detectors, is employed to update the gain and the bias by means of a line-fitting technique. Finally, with these nonuniformity parameters, the compensated output of each detector is obtained by computing a very simple formula. The advantages of the proposed algorithm lie in its low computational complexity and storage requirements and its ability to capture temporal drifts in the nonuniformity parameters. The performance of every module is demonstrated with simulated and real infrared image sequences. Experimental results indicate that the proposed algorithm exhibits a superior correction effect. © 2009 Optical Society of America

OCIS codes: 100.2000, 100.2550, 100.3020, 040.1240, 110.3080.

1. Introduction

Infrared focal-plane arrays (IRFPAs) have been widely used in infrared imaging systems in recent years. However, it is well known that each photodetector in the array has a different photoresponse as a result of detector-to-detector variability during the IRFPA's fabrication stage, which can cause spatial nonuniformity, namely, fixed pattern noise. Despite significant advances in detector technology, fixed pattern noise remains a serious problem in that the images obtained with an infrared imaging system are degraded by this kind of noise. Moreover, the spatial nonuniformity tends to drift slowly and randomly with time, so a one-time factory calibration will not provide a permanent solution to the problem.

In this case, the nonuniformity must be corrected repeatedly during the course of camera operation.

There are two main types of nonuniformity correction (NUC) techniques at present. One is calibration-based techniques, such as the two-point calibration algorithm [1] and the multipoint calibration algorithm [2], which employ blackbody radiation sources at various temperatures and calculate the gain and the bias of each detector in the IRFPA by using a linear (or higher-order) fitting procedure. Generally, this type of technique can obtain more accurate correction results. However, many additional devices (e.g., blackbody sources, electromechanical parts, etc.) are required, which may increase the size and the cost of infrared imaging systems. What is more, the normal operation of the camera must be halted during calibration, so this category of NUC methods cannot be applied in uninterrupted-operation situations. The other is scene-based techniques, which exploit only the information in the scenes being imaged and avoid the disadvantages of the calibration-based techniques.




In recent years, a large number of scene-based NUC algorithms have been proposed. Harris and Chiang [3] developed a constant-statistics algorithm based on the assumption that the temporal mean and variance of the irradiance are identical for each detector. Hayat et al. [4] further developed a statistical algorithm that assumes that the irradiance at each detector is a uniformly distributed random variable with a constant range. Hardie et al. [5] developed a registration-based algorithm that relies on the fact that detectors should have the same response when observing the same scene point at different times. An algebraic scene-based algorithm was first proposed by Ratliff et al. [6] that utilizes global one-dimensional subpixel motion to unify the biases of all detectors in the array to a common value. This technique was later extended by the authors to a radiometrically accurate form (called RASBA) that allows arbitrary two-dimensional translation [7]. However, this extension was achieved at the cost of partly using the calibration-based technique. Recently, a more generalized algebraic scene-based algorithm was presented by the same authors [8], which integrates RASBA with the algebraic scene-based algorithm of [6] and does not require the calibration procedure. Narayanan et al. [9] developed a NUC technique that exploits knowledge of the focal-plane array readout architecture. In this algorithm, two models, i.e., a detector-level model and a readout-amplifier model, are employed and corrected, respectively. Torres and Hayat [10] developed a Kalman-filtering approach, which utilizes a Gauss–Markov model to capture the drift in the nonuniformity parameters. Another multimodel Kalman-filtering algorithm was later proposed by Pezoa et al. [11]. This method uses a bank of Kalman filters to estimate the respective system states, and the final estimates of the state variables are generated by summing all the weighted estimates from the Kalman filters.

In this paper, we present a scene-based NUC algorithm that partly integrates the registration-based [5] and the algebraic [8] methods. First, an interframe-prediction technique is employed to estimate the true scene values of a given detector in the IRFPA; then, with these scene estimates and the corresponding values observed through the detector, a line-fitting procedure is used to estimate the response parameters of the individual detector. Finally, NUC can be achieved by using these parameters.

The remainder of this paper is organized as follows. In Section 2 the proposed scene-based algorithm is described in detail. In Section 3 a performance analysis using simulated and real infrared data is presented. Finally, some conclusions are given in Section 4.

2. Algorithm Description

In this section, we separate the proposed NUC algorithm into three parts and describe each; the parts are illustrated in Fig. 1.

First, an image-registration technique is performed to estimate the global motion parameters between adjacent frames. Here we employ a gradient-based method [12], for it can estimate subpixel translation accurately. With these motion parameters, a bilinear-interpolation technique [8] is used to predict the true scene of the next uncorrected frame from the current corrected one. Second, the observed data and the estimated scene are used to form an estimate of the nonuniformity parameters by means of the recursive mixed least-squares (RMLS) method [13], which suits the requirements of the proposed algorithm much better than the recursive least-squares (RLS) method mentioned in other literature [9,14]. Finally, these nonuniformity parameters can be used to correct the next frame with a very simple formula.

Consider an $M \times N$ IRFPA in which each detector has a differing photoresponse, but all photoresponses can be linearly approximated as follows:

$y_k(i,j) = a_k(i,j)\, x_k(i,j) + b_k(i,j), \qquad (1)$

where $a_k(i,j)$ and $b_k(i,j)$ are, respectively, the gain and the bias of the $(i,j)$th detector in frame $k$, and $x_k(i,j)$ denotes the true infrared radiation on detector $(i,j)$ at this time.
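For illustration, the observation model of Eq. (1) can be simulated directly. The following numpy sketch generates a nonuniformity-corrupted frame from a clean one; the function name and the default noise levels are illustrative assumptions, following the unity-mean gain and zero-mean bias convention used in the simulations of Section 3.

```python
import numpy as np

def simulate_observation(x, sigma_a=0.1, sigma_b=10.0, seed=0):
    """Apply the linear detector model of Eq. (1), y = a*x + b, to a clean frame x.

    Gains are drawn with unity mean and biases with zero mean; sigma_a and
    sigma_b are illustrative values, not prescribed by the algorithm itself.
    """
    rng = np.random.default_rng(seed)
    a = 1.0 + sigma_a * rng.standard_normal(x.shape)  # per-detector gain a(i,j)
    b = sigma_b * rng.standard_normal(x.shape)        # per-detector bias b(i,j)
    y = a * x + b                                      # observed (nonuniform) frame
    return y, a, b
```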

A. Interframe Prediction

Interframe prediction involves two aspects, motion estimation (registration) and bilinear interpolation, which are described in detail as follows.

1. Motion Estimation

Generally speaking, when still scenes are imaged by a moving camera at a relatively large distance, the image $x_{k+1}$ can be approximately obtained by translating its adjacent frame $x_k$. We therefore assume that the horizontal and the vertical relative displacements of the two images are denoted $h_k$ and $v_k$, respectively. Thus $x_{k+1}(i,j)$ can be expressed as

$x_{k+1}(i,j) = x_k(i + h_k,\, j + v_k). \qquad (2)$

Now we use the first three terms of the Taylor series expansion as an approximation for the right-hand side of Eq. (2). This yields

$x_{k+1}(i,j) \approx x_k(i,j) + h_k\, g^i_k(i,j) + v_k\, g^j_k(i,j), \qquad (3)$

where $g^i_k(i,j) = \partial x_k(i,j)/\partial i$ and $g^j_k(i,j) = \partial x_k(i,j)/\partial j$.

Fig. 1. Block diagram of the proposed NUC algorithm.



In light of the relationship expressed in Eq. (3), we define the least-squares estimates for the registration parameters as follows:

$(h_k, v_k) = \arg\min_{h_k, v_k} E_k(h_k, v_k), \qquad (4)$

where

$E_k(h_k, v_k) = \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ x_{k+1}(i,j) - x_k(i,j) - h_k\, g^i_k(i,j) - v_k\, g^j_k(i,j) \right]^2. \qquad (5)$

To solve the minimization problem in Eq. (4), we differentiate $E_k(h_k, v_k)$ with respect to $h_k$ and $v_k$ and set the derivatives equal to zero. Finally, the estimated registration parameters are calculated as follows:

$\begin{bmatrix} h_k \\ v_k \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{M}\sum_{j=1}^{N} [g^i_k(i,j)]^2 & \sum_{i=1}^{M}\sum_{j=1}^{N} g^i_k(i,j)\, g^j_k(i,j) \\ \sum_{i=1}^{M}\sum_{j=1}^{N} g^i_k(i,j)\, g^j_k(i,j) & \sum_{i=1}^{M}\sum_{j=1}^{N} [g^j_k(i,j)]^2 \end{bmatrix}^{-1} \begin{bmatrix} \sum_{i=1}^{M}\sum_{j=1}^{N} \Delta X_k(i,j)\, g^i_k(i,j) \\ \sum_{i=1}^{M}\sum_{j=1}^{N} \Delta X_k(i,j)\, g^j_k(i,j) \end{bmatrix}, \qquad (6)$

where

$\Delta X_k(i,j) = x_{k+1}(i,j) - x_k(i,j). \qquad (7)$

According to the properties of the Taylor series expansion, this technique is accurate only for small shifts. However, large motion parameters between adjacent frames are common. Thus an iterative method is developed, which repeatedly applies the gradient-based technique until the registration estimates become sufficiently small (less than one pixel). The detailed description is as follows (a code sketch of the full iterative procedure is given after the steps):

Step 1. Calculate the motion parameters $h_k$ and $v_k$ between adjacent frames $x_k$ and $x_{k+1}$ according to Eq. (6). If $|h_k| < 1$ and $|v_k| < 1$, output $h_k$ and $v_k$ directly. Otherwise, let TempH = round($h_k$) and TempV = round($v_k$) (where round(·) denotes the rounding operation); go to Step 2.

Step 2. Translate $x_{k+1}$ according to the values of TempH and TempV so as to more closely match $x_k$.

Step 3. The modified image is then registered to $x_k$. Compute $h_k$ and $v_k$ at this time by using Eq. (6). If $|h_k| < 1$ and $|v_k| < 1$, go to Step 4. Otherwise, let TempH = round($h_k$) + TempH and TempV = round($v_k$) + TempV; go to Step 2.

Step 4. Let $h_k = h_k$ + TempH and $v_k = v_k$ + TempV, and then output $h_k$ and $v_k$.
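As a concrete illustration, the following is a minimal numpy sketch of the gradient-based estimator of Eq. (6) together with the iterative Steps 1–4. The function names and the circular-shift border handling are assumptions made for brevity, not part of the original description.

```python
import numpy as np

def gradient_shift(xk, xk1):
    """Single gradient-based shift estimate between frames xk and xk1, Eqs. (3)-(7)."""
    gi, gj = np.gradient(xk)                    # partial derivatives along i (rows) and j (columns)
    dx = xk1 - xk                               # Delta X_k of Eq. (7)
    A = np.array([[np.sum(gi * gi), np.sum(gi * gj)],
                  [np.sum(gi * gj), np.sum(gj * gj)]])
    rhs = np.array([np.sum(dx * gi), np.sum(dx * gj)])
    return np.linalg.solve(A, rhs)              # [h_k, v_k] of Eq. (6)

def register_iterative(xk, xk1, max_iter=10):
    """Iterative registration of Steps 1-4: shift, re-estimate, repeat until |h|, |v| < 1."""
    temp_h = temp_v = 0
    shifted = xk1
    for _ in range(max_iter):
        h, v = gradient_shift(xk, shifted)
        if abs(h) < 1 and abs(v) < 1:
            break
        temp_h += int(round(h))
        temp_v += int(round(v))
        # integer translation of x_{k+1} toward x_k; a circular shift is used here
        # for brevity, so border rows/columns are not handled as in a real system
        shifted = np.roll(xk1, (temp_h, temp_v), axis=(0, 1))
    return h + temp_h, v + temp_v               # Step 4: total shift estimates
```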

2. Bilinear Interpolation

We assume that the temperature of the observed scenes does not change significantly during the time between consecutive frames. Thus, if two adjacent frames exhibit an arbitrary translational motion between them, we approximate the value of a given pixel in image $x_{k+1}$ as a bilinear interpolation of the values of the appropriate four pixels from image $x_k$. This is done as follows: first, rewrite the vertical and horizontal components of the displacement between $x_k$ and $x_{k+1}$ as the sum of their integer and fractional parts [8], i.e., $h_k = \lfloor h_k \rfloor + \Delta h_k$ and $v_k = \lfloor v_k \rfloor + \Delta v_k$, where $\lfloor \cdot \rfloor$ denotes the largest integer that is less than or equal to the shift. For convenience, we define $\gamma_{1,k} = \Delta h_k \Delta v_k$, $\gamma_{2,k} = (1 - \Delta h_k)\Delta v_k$, $\gamma_{3,k} = \Delta h_k (1 - \Delta v_k)$, and $\gamma_{4,k} = (1 - \Delta h_k)(1 - \Delta v_k)$. Thus, when $h_k > 0$ and $v_k > 0$ (assuming that the down-rightward direction is positive), the bilinear-interpolation approximation $\hat{x}_{k+1}$ of the image $x_{k+1}$ is expressed as

$\hat{x}_{k+1}(i,j) = \gamma_{1,k}\, x_k(i + \lfloor h_k \rfloor + 1,\; j + \lfloor v_k \rfloor + 1) + \gamma_{2,k}\, x_k(i + \lfloor h_k \rfloor + 1,\; j + \lfloor v_k \rfloor) + \gamma_{3,k}\, x_k(i + \lfloor h_k \rfloor,\; j + \lfloor v_k \rfloor + 1) + \gamma_{4,k}\, x_k(i + \lfloor h_k \rfloor,\; j + \lfloor v_k \rfloor), \qquad (8)$

and the other three situations have a form similar to Eq. (8).
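A minimal numpy sketch of the prediction in Eq. (8) for the case $h_k > 0$, $v_k > 0$ follows; the function name and the border handling are illustrative assumptions, and the other three sign cases are analogous.

```python
import numpy as np
from math import floor

def predict_next_frame(xk, hk, vk):
    """Bilinear-interpolation prediction of x_{k+1} from x_k, Eq. (8), for hk > 0, vk > 0.

    Border pixels whose source locations fall outside x_k are simply left at
    the value of x_k here; in practice they would be excluded from the fitting.
    """
    ih, iv = floor(hk), floor(vk)            # integer parts of the shift
    dh, dv = hk - ih, vk - iv                # fractional parts
    g1 = dh * dv
    g2 = (1 - dh) * dv
    g3 = dh * (1 - dv)
    g4 = (1 - dh) * (1 - dv)

    M, N = xk.shape
    pred = xk.copy()
    for i in range(M - ih - 1):
        for j in range(N - iv - 1):
            pred[i, j] = (g1 * xk[i + ih + 1, j + iv + 1]
                          + g2 * xk[i + ih + 1, j + iv]
                          + g3 * xk[i + ih, j + iv + 1]
                          + g4 * xk[i + ih, j + iv])
    return pred
```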

B. Linear Fitting

Since the proposed NUC technique is performed on a pixel-by-pixel basis, the pixel index $(i,j)$ is omitted for brevity of notation. Thus Eq. (1) can be rewritten as

$y_k = a_k x_k + b_k. \qquad (9)$

By employing the method of interframe prediction, we can estimate the true values of the next uncorrected image from the current corrected one. To obtain the optimum estimate of the nonuniformity parameters, a line-fitting procedure is believed to be a good choice.

1. Recursive Least-Squares Fitting

Torres et al. [14] used a recursive least-squares technique with the observed data and the estimated desired data to yield the nonuniformity parameters. Here we give a brief review of this method.



Let us assume that $\hat{x}_k$ is the estimate of the true infrared radiation $x_k$ and that $\hat{a}_k$ and $\hat{b}_k$ are the updated estimates of the nonuniformity parameters at time $k$. Thus the estimation error of $y_k$ can be expressed as

$\Delta y_k = y_k - \hat{y}_k = y_k - H_k \hat{\theta}_k, \qquad (10)$

where $\hat{\theta}_k = [\,\hat{a}_k \;\; \hat{b}_k\,]^T$ and $H_k = [\,\hat{x}_k \;\; 1\,]$. The matrix form of Eq. (10) is

$\Delta y(k) = y(k) - \hat{y}(k) = y(k) - H(k)\hat{\theta}_k, \qquad (11)$

where $y(k) = [\,y_1 \; y_2 \; \cdots \; y_k\,]^T$ and $H(k) = [\,H_1^T \; H_2^T \; \cdots \; H_k^T\,]^T$. For convenience of expression, we define the notation $\bullet(k)$, which denotes a column vector constituted of $\bullet_1, \bullet_2, \ldots, \bullet_k$, e.g., $\Delta y(k) = [\,\Delta y_1 \; \Delta y_2 \; \cdots \; \Delta y_k\,]^T$ and $\hat{y}(k) = [\,\hat{y}_1 \; \hat{y}_2 \; \cdots \; \hat{y}_k\,]^T$. For the first $k$ frames of output images, $\hat{\theta}_k$ must be chosen to minimize the quadratic form

$\varepsilon_k = \sum_{n=1}^{k} (y_n - H_n \hat{\theta}_k)^2. \qquad (12)$

Differentiating $\varepsilon_k$ with respect to $\hat{\theta}_k$ and setting the derivative equal to zero yields the following least-squares result:

$\hat{\theta}_k = [H^T(k) H(k)]^{-1} H^T(k)\, y(k) = P_k H^T(k)\, y(k), \qquad (13)$

where $P_k$ is the $2 \times 2$ covariance matrix. For a recursive update of the parameter $\hat{\theta}_k$, the RLS method is used, and all necessary equations to form the algorithm are listed below:

$\hat{\theta}_{k+1} = \hat{\theta}_k + K_{k+1}\,[\,y_{k+1} - H_{k+1}\hat{\theta}_k\,], \qquad (14)$

$K_{k+1} = P_{k+1} H_{k+1}^T, \qquad (15)$

$P_{k+1} = P_k - \dfrac{P_k H_{k+1}^T H_{k+1} P_k}{1 + H_{k+1} P_k H_{k+1}^T}, \qquad (16)$

where $K_{k+1}$ is the gain vector. Note that the recursive procedure is performed each time a new frame arrives.
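For reference, one per-detector RLS update of Eqs. (14)–(16) might be implemented as in the following numpy sketch; the function name and the scalar/vector layout are assumptions, and a typical (unspecified) initialization is $\hat{\theta}_0 = [1, 0]^T$ with $P_0$ a large multiple of the identity.

```python
import numpy as np

def rls_update(theta, P, x_hat, y):
    """One RLS update of the parameter vector theta = [a, b]^T, Eqs. (14)-(16).

    x_hat is the predicted scene value and y the observed detector output of
    the current frame; P is the 2x2 covariance matrix carried between frames.
    """
    H = np.array([x_hat, 1.0])                          # H_{k+1} = [x_hat, 1]
    P = P - np.outer(P @ H, H @ P) / (1.0 + H @ P @ H)  # Eq. (16)
    K = P @ H                                           # gain vector K_{k+1}, Eq. (15)
    theta = theta + K * (y - H @ theta)                 # Eq. (14)
    return theta, P
```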

2. Recursive Mixed Least-Squares Fitting

Actually, the RLS method assumes that the $H(k)$ matrix is noise free and that the error occurs only in the output vector $y(k)$. We know, however, that the first column of $H(k)$ is composed of $\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_k$, which are all estimates of true scene values. It is unavoidable that $H(k)$ contains errors. Therefore, in this situation RLS cannot be the optimal estimator of the nonuniformity parameters, which may result in a bias error.

The recursive total least-squares (RTLS) method [15] can solve the problem in which both the matrix $H(k)$ and the response output $y(k)$ are noisy. However, RTLS needs to assume that the noise disturbance is distributed evenly over all of the columns of $H(k)$ and $y(k)$ [13]. As a matter of fact, the second column of $H(k)$ is a vector whose components are all 1, which evidently contains no errors. Therefore, the RTLS method does not meet the requirements of the algorithmic model given in Eq. (11) either.

The RMLS method, mixing the RLS and RTLS methods above, is suitable for solving the line-fitting problem in this paper, because it assumes that only part of the data, here $y(k)$ and $\hat{x}(k)$, is noisy. Let the error of $\hat{x}(k)$ be denoted $\Delta x(k)$. The goal of the RMLS solution is to minimize the perturbation to the noisy portion of the system in a Frobenius sense [13], i.e.,

$\min \|[\,\Delta x(k) \;\; \Delta y(k)\,]\|_F$ subject to
$[W(k) + \Delta W(k)]\gamma_k = [\,1 \;\; y(k) - \Delta y(k) \;\; \hat{x}(k) - \Delta x(k)\,]\gamma_k = 0, \qquad (17)$

where $W(k) = [\,1 \;\; y(k) \;\; \hat{x}(k)\,]$, $\Delta W(k) = [\,0 \;\; -\Delta y(k) \;\; -\Delta x(k)\,]$, and $\gamma_k = [\,\hat{b}_k \;\; -1 \;\; \hat{a}_k\,]^T$.

Alternatively, the RMLS problem in Eq. (17) can be stated as: choose $\gamma_k$ to minimize the Rayleigh quotient

$\min_{\gamma_k} \left\{ \dfrac{\gamma_k^T R_{WW}(k) \gamma_k}{\gamma_k^{\prime T} \gamma'_k} \right\}, \qquad (18)$

where $\gamma'_k = [\,-1 \;\; \hat{a}_k\,]^T$ and $R_{WW}(k)$ is an extended sample autocorrelation matrix, defined by

$R_{WW}(k) = \frac{1}{k} W^T(k) W(k). \qquad (19)$

To solve this problem, a partial QR factorization of the matrix $W(k)$ is performed to give



$W(k) = Q(k)\,\bar{R}(k), \qquad (20)$

where $Q(k)$ is a $k \times k$ unitary matrix and $\bar{R}(k)$ is upper triangular. Note that $\bar{R}(k)$ is a $k \times 3$ tall matrix, with all but the first three rows zero. For compactness, we let $R(k)$ represent the first three rows of $\bar{R}(k)$.

For each new $W_{k+1} = [\,1 \;\; y_{k+1} \;\; \hat{x}_{k+1}\,]$, $R(k+1)$ is deduced from $R(k)$ via a transformation matrix $T(k+1)$, namely,

$\begin{bmatrix} R(k+1) \\ 0 \end{bmatrix} = T(k+1) \begin{bmatrix} R(k) \\ W_{k+1} \end{bmatrix}, \qquad (21)$

where $T(k+1)$, which is the accumulation of a series of Givens rotations, is also unitary. Therefore, it is very convenient to update the $R(k)$ matrix. Furthermore, $W(k)$ and $R(k)$ are equivalent in a mean-square sense, i.e.,

$W^T(k+1) W(k+1) = [\,W^T(k) \;\; W_{k+1}^T\,] \begin{bmatrix} W(k) \\ W_{k+1} \end{bmatrix} = \bar{R}^T(k)\bar{R}(k) + W_{k+1}^T W_{k+1} = R^T(k)R(k) + W_{k+1}^T W_{k+1} = R^T(k+1)R(k+1). \qquad (22)$

Thus, $R(k)$ can completely substitute for $W(k)$ to solve the problem in Eq. (18), and Eq. (18) is rewritten as

$\min_{\gamma_k} \left\{ \dfrac{\gamma_k^T R^T(k) R(k) \gamma_k}{\gamma_k^{\prime T} \gamma'_k} \right\}. \qquad (23)$

Let the $3 \times 3$ upper triangular matrix $R(k)$ be designated

$R(k) = \begin{bmatrix} R_{11}(k) & R_{1y}(k) & R_{12}(k) \\ 0 & R_{2y}(k) & R_{22}(k) \end{bmatrix}, \qquad (24)$

where both $R_{2y}(k)$ and $R_{22}(k)$ are $2 \times 1$ vectors. Since $W(k)$ and $R(k)$ are equivalent, the first column of $R(k)$ is noise free, and the other two columns are noisy. The RMLS algorithm solves the minimization problem in Eq. (23) in two steps. First, compute the RTLS solution $\hat{a}_k$ through the equation

$R_2(k)\gamma'_k = [\,R_{2y}(k) \;\; R_{22}(k)\,] \begin{bmatrix} -1 \\ \hat{a}_k \end{bmatrix} = 0, \qquad (25)$

for all columns of $R_2(k)$ are noisy, which meets the assumed condition of the RTLS method. Second, given $\hat{a}_k$, $\hat{b}_k$ is determined by solving the least-squares equation

$[\,R_{11}(k) \;\; R_{1y}(k) \;\; R_{12}(k)\,]\gamma_k = R_{11}(k)\hat{b}_k - R_{1y}(k) + R_{12}(k)\hat{a}_k = 0. \qquad (26)$

It is well known that the total least-squares solution $\gamma'_k$ is the right-hand eigenvector associated with the minimum eigenvalue of $R_2^T(k)R_2(k)$ [15]. A simple and accurate method to compute the eigenvector $\gamma'_k$ is the inverse-iteration technique [16]. Thus, the RMLS algorithm can be summarized as follows (a code sketch of the recursion is given after the steps):

Step 1. Set $R(0) = I$ (where $I$ is the $3 \times 3$ identity matrix) and $\hat{a}_0 = 1$.

Step 2. For a new incoming frame, we have $W_{k+1} = [\,1 \;\; y_{k+1} \;\; \hat{x}_{k+1}\,]$; update $R(k+1)$ based on Eq. (21).

Fig. 2. (a) Shifted infrared image. (b) Infrared image with simulated nonuniformity.

Fig. 3. ARE for various levels of gain and bias nonuniformity ($\sigma_a$ denotes gain standard deviation and $\sigma_b$ bias standard deviation).



Step 3. Let $\gamma'_{k+1} = [\,-1 \;\; \hat{a}_k\,]^T$; solve for $\xi$ in $R_2^T(k+1)R_2(k+1)\,\xi = \gamma'_{k+1}$.

Step 4. $\hat{a}_{k+1} = -\xi_2/\xi_1$ (for only $\xi$ divided by $-\xi_1$ has the same form as $\gamma'_{k+1}$), where $\xi = [\,\xi_1 \;\; \xi_2\,]^T$.

Step 5. $\hat{b}_{k+1} = (R_{1y}(k+1) - R_{12}(k+1)\hat{a}_{k+1})/R_{11}(k+1)$; set $k = k + 1$ and go to Step 2.
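A compact per-detector numpy sketch of this recursion follows. The function name is an assumption, and numpy's QR factorization is used in place of the explicit Givens rotations of Eq. (21), an implementation convenience that is equivalent in the mean-square sense of Eq. (22).

```python
import numpy as np

def rmls_update(R, a_prev, y_new, x_hat_new):
    """One RMLS update per detector (Steps 1-5 of Subsection 2.B.2).

    R is the 3x3 triangular factor R(k), a_prev the previous gain estimate,
    y_new the observed value, and x_hat_new the predicted scene value of the
    new frame.
    """
    w_new = np.array([1.0, y_new, x_hat_new])            # W_{k+1} = [1, y_{k+1}, x_hat_{k+1}]
    R = np.linalg.qr(np.vstack([R, w_new]), mode='r')    # R(k+1), cf. Eq. (21)

    # Step 3: one inverse-iteration step toward the eigenvector of R2^T R2
    # associated with its smallest eigenvalue, where R2 = [R_2y  R_22].
    R2 = R[1:, 1:]
    xi = np.linalg.solve(R2.T @ R2, np.array([-1.0, a_prev]))

    a = -xi[1] / xi[0]                                    # Step 4: gain estimate a_{k+1}
    b = (R[0, 1] - R[0, 2] * a) / R[0, 0]                 # Step 5: b_{k+1} = (R_1y - R_12 a) / R_11
    return R, a, b

# Initialization (Step 1): R = np.eye(3), a = 1.0; then call rmls_update once per frame.
```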

C. Nonuniformity Correction

From the descriptions in Subsections 2.A and 2.B, we can see that the nonuniformity parameters are updated when a new frame arrives. Assume that the estimates of the gain and the bias in frame $k$ are $\hat{a}_k$ and $\hat{b}_k$, respectively. The corrected $k$th frame is obtained by

$\tilde{x}_k = \dfrac{y_k - \hat{b}_k}{\hat{a}_k}. \qquad (27)$

Note that $\tilde{x}_k$ is the correction result, unlike $\hat{x}_k$, which is the estimate of the true infrared radiation $x_k$. As the frame number $k$ increases, $\tilde{x}_k$ converges to the true value $x_k$.
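In array form, the correction of Eq. (27) amounts to a single elementwise operation per frame; a minimal sketch (the function name is an assumption) is:

```python
import numpy as np

def correct_frame(y, a_hat, b_hat):
    """Apply the per-detector correction of Eq. (27): x_tilde = (y - b_hat) / a_hat.

    y, a_hat, and b_hat are M x N arrays holding the observed frame and the
    current gain and bias estimates.
    """
    return (y - b_hat) / a_hat
```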

3. Experimental Results

In this section, the performance of the proposed technique is demonstrated with both simulated and real data. We exhibit the effect of every key step of the algorithm in the following description.

A. Registration Accuracy with Nonuniformity

Since motion estimation plays an important part in the proposed algorithm, here we study the ability of the selected registration algorithm to perform in the presence of nonuniformity. For this purpose, a 310 × 250 8 bit gray-scale infrared image corrected by the two-point calibration method is first globally translated by a known displacement to form another image. The shifted frame, shown in Fig. 2(a), is then corrupted with simulated Gaussian gain and bias nonuniformity; unity-mean gain and zero-mean bias are always assumed. A typical corrupted image is shown in Fig. 2(b), where the gain standard deviation $\sigma_a$ is 0.1 and the bias standard deviation $\sigma_b$ is 10. Last, the original image and the corrupted one are registered by the iterative gradient-based technique described in Subsection 2.A.

To exhibit the registration accuracy, the experiment is repeated 100 times. For each experiment, the shifted image is corrupted with newly generated random noise of different levels. Thus, the average relative error (ARE) between the estimated and the true shifts can be calculated for various levels of nonuniformity. The results are shown in Fig. 3. Note that when $\sigma_a$ and $\sigma_b$ are, respectively, less than 0.4 and 40, the average relative error is less than 50%.

Fig. 4. (a) True image of frame 181. (b) Image estimated from frame 180.

Fig. 5. Performance of interframe prediction versus frame number.

Fig. 6. Relationships between input and output before and after corruption.



In other words, even when the predefined global shift is as great as two pixels, the absolute registration error is less than one pixel spacing. Therefore, we can conclude that the motion-estimation technique is fairly accurate when the level of nonuniformity is not very high.

If the noise is too severe to obtain good registration, it is necessary to employ other NUC methods first for preprocessing [5], such as the statistical algorithms [3,4] and the temporal high-pass filtering algorithm [17], since they do not rely on registration.

B. Performance Analysis of Interframe Prediction

Interframe prediction is used to estimate the true value of the next frame based on information from the current one, in preparation for the succeeding linear-fitting procedure. For the first frame specifically, there is no previous information to utilize. In this case, a priori knowledge of the detectors or some image-processing methods, such as median-filtering techniques [9], can be used to form rough nonuniformity parameters to correct frame 1 as a reasonable starting point.

In this subsection, a sequence of 181 frames of 312 × 256 8 bit gray-scale infrared images without any noise is employed to investigate the prediction accuracy. According to the interframe-prediction algorithm, we first utilize the previous frame to estimate the next one (Fig. 4 shows the true frame 181 and frame 181 as estimated from frame 180), and then we calculate the mean absolute error between the estimated and the true frames. The two steps above are repeated throughout the whole sequence. Thus, 180 mean absolute errors are obtained. Since the images are all 8 bit gray scale, the average relative error can be computed by dividing the mean absolute error by 255; the result is shown in Fig. 5. Note that the maximum ARE is less than 2.5%. Furthermore, from Fig. 4 we can see that it is difficult to distinguish between the true and the estimated frames with the naked eye, for the ARE here is only 1.8%.

C. Comparison between Recursive Least-Squares and Recursive Mixed Least-Squares Methods

From Subsection 2.B we know that RMLS is more suitable for the practical situation here than is RLS. To demonstrate this, we assume that Eq. (9) is specialized as

$y = 2x + 5, \qquad (28)$

namely, $a$ and $b$ are specified as 2 and 5, respectively.

First, a sequence of 200 numerical values is generated randomly to form the input $x$. Thus, the corresponding output $y$ can be calculated by Eq. (28). Then, $x$ and $y$ are each corrupted with Gaussian random noise of mean zero and variance one, and the relationship between them is shown in Fig. 6. Finally, the parameters $a$ and $b$ are estimated through linear fitting of the corrupted $x$ and $y$. This experiment is repeated 100 times, and the average line-fitting results obtained by RLS and RMLS are shown in Fig. 7. In these plots the dotted curves represent the estimated values $\hat{a}$ and $\hat{b}$ obtained by RLS as the frame index $k$ increases. It is observed that both estimates deviate from the original values 2 and 5. The estimated parameters obtained by RMLS are plotted against the number of frames (solid curves).

Fig. 7. Average line-fitting results against frame number. (a) Parameter $a$ estimated by RLS and RMLS. (b) Parameter $b$ estimated by RLS and RMLS.

Fig. 8. Correction performances of RLS and RMLS against frame number with real infrared data.



As more incoming frames are acquired, both solid curves gradually converge to their true values. Therefore, we can conclude that when both the input $x$ and the output $y$ are noisy, RMLS yields better line-fitting performance than RLS.

In fact, whether RMLS or RLS is used for NUC, the infrared images observed by the detectors are required to cover a relatively wide range of scene values [5]. For some detectors, if the motion is not sufficient, the estimated gain and bias will be far from the correct values, which may produce ghosting artifacts [18] in the output images. One way of solving this problem is to detect the relative motion between adjacent frames beforehand; the judgment criterion can be described as follows:

$\delta_k = \sum_{i=1}^{M} \sum_{j=1}^{N} |y_k(i,j) - y_{k-1}(i,j)|. \qquad (29)$

If $\delta_k$ is above some given threshold, the motion is evident and the frame $y_k$ can be used to update the gain and the bias by means of RMLS or RLS. Otherwise, this frame is not used for updating, and correction uses the nonuniformity parameters of frame $k - 1$.
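A minimal sketch of this check might look as follows; the function name is an assumption, and the threshold value is application dependent and not specified here.

```python
import numpy as np

def motion_sufficient(y_k, y_prev, threshold):
    """Evaluate the interframe-change criterion of Eq. (29).

    Returns True when delta_k exceeds the given threshold, i.e., when the
    motion is judged evident enough to update the gain and bias estimates.
    """
    delta_k = np.sum(np.abs(y_k - y_prev))
    return delta_k > threshold
```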

D. Nonuniformity Correction with Real Infrared Data

The performance of the proposed algorithm is studied by applying it to a set of 200 frames of real infrared data. These data were acquired by using a 320 × 240 IRFPA operating in the wavelength range of 8–14 μm. For comparison, NUC with RLS is also tested here.

To evaluate the algorithm performance, the roughness parameter $\rho$ is used, which is defined by [4]

$\rho(f) = \dfrac{\|h_1 * f\|_1 + \|h_2 * f\|_1}{\|f\|_1}, \qquad (30)$

where $f$ denotes the digital image under analysis, $h_1$ is a horizontal mask $[1, -1]$, $h_2 = h_1^T$ is a vertical mask, $\|f\|_1$ is the $\ell_1$ norm of $f$, and the asterisk represents discrete convolution. Note that $\rho$ does not require knowledge of the true image, so it can be used to evaluate the uniformity of the corrected infrared images. Clearly, a smaller value of $\rho$ indicates a better correction effect.
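A minimal sketch of Eq. (30) using scipy's two-dimensional convolution is given below; the function name and the boundary handling are implementation assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def roughness(f):
    """Roughness parameter rho of Eq. (30) for a 2-D image f.

    h1 = [1, -1] is the horizontal difference mask and h2 = h1^T the vertical
    one; a smaller rho indicates a spatially smoother (better corrected) image.
    """
    f = f.astype(float)
    h1 = np.array([[1.0, -1.0]])                    # horizontal mask
    h2 = h1.T                                       # vertical mask
    num = (np.abs(convolve2d(f, h1, mode='same')).sum()
           + np.abs(convolve2d(f, h2, mode='same')).sum())
    return num / np.abs(f).sum()
```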

After the original infrared image sequence was corrected by RLS (the proposed algorithm with RLS substituted for RMLS in the line-fitting step) and by RMLS (the proposed algorithm), the roughness parameter $\rho$ was measured for the two corrected sequences and also for the original images. The evaluated results are shown in Fig. 8, where the values of $\rho$ for the original images versus frames are plotted with a short-dashed curve. It is observed that this curve remains at a relatively high level and does not change significantly as the frame number increases. The solid and long-dashed curves indicate the correction performances of RMLS and RLS, respectively. Note that both curves have converged by frame 30. However, the solid curve lies at a lower level, which means that RMLS has a better correction performance than RLS.

To present an intuitive comparison, some images are displayed in Fig. 9, where consistent conclusions are obtained. Figure 9(a) shows the original infrared image of frame 200, where the striped pattern noise is so severe that the useful information can hardly be distinguished. The frame corrected by RLS is shown in Fig. 9(b). Although the spatial nonuniformity is largely removed, a considerable amount of ghosting appears in the corrected image. The main reason is that the RLS method introduces estimation errors that make the nonuniformity parameters inaccurate. Figure 9(c) shows the NUC frame obtained with RMLS. To the naked eye, this image, free of ghosting artifacts, exhibits a better result than Fig. 9(b).

In addition, the execution time of the proposed algorithm has been tested in MATLAB running on a PC (with a 2.0 GHz Celeron CPU and 768 MB of memory). The average correction time for an image is approximately 2.4 s. Therefore, this method holds promise for real-time hardware-based implementation.

4. Conclusions

We have presented a scene-based nonuniformity correction (NUC) algorithm that is separated into three steps.

Fig. 9. (a) Original image of frame 200. (b) Image corrected by RLS. (c) Image corrected by RMLS.



If the fixed-pattern noise level is not too high, accurate global registration can be achieved, which is demonstrated with simulated infrared images. Otherwise, some other NUC methods should be used first to reduce the noise intensity. With these registration parameters, a bilinear-interpolation technique is employed to yield a new scene estimate. In particular, the first frame, for which no a priori information is available, is estimated only by image-denoising methods. The estimated scene, along with its corresponding observed data, is used to update the gain and the bias by means of RMLS. Thus, the compensated output of a given detector is easily obtained by subtracting its bias from the readout value and then dividing by the corresponding gain.

Several experiments have been done to test the proposed algorithm. Simulation results indicate that the interframe-prediction accuracy is satisfactory. Since global translational motion is assumed, prediction errors may result from local motion, scene rotation, etc. The performance of the RMLS method was evaluated with simulated and real data, and the results show that RMLS gives a better line-fitting result than the RLS method developed in other literature. The main reason is that RMLS is better suited to the model presented here. Naturally, a large range of scene values is required to reduce ghosting artifacts.

We note that the proposed algorithm is executed recursively, which lowers the computational complexity and the storage requirements. In addition, updating the nonuniformity parameters frame by frame makes it possible to capture the temporal drift in the gain and bias of each detector.

This research was supported by the National Natural Science Foundation of China (NSFC) (60572151).

References

1. D. L. Perry and E. L. Dereniak, "Linear theory of nonuniformity correction in infrared staring sensors," Opt. Eng. 32, 1854–1859 (1993).
2. Y. Shi, T. X. Zhang, Z. G. Cao, and H. Li, "A feasible approach for nonuniformity correction in IRFPA with nonlinear response," Infrared Phys. Technol. 46, 329–337 (2005).
3. J. G. Harris and Y. M. Chiang, "Nonuniformity correction of infrared image sequences using the constant-statistics constraint," IEEE Trans. Image Process. 8, 1148–1151 (1999).
4. M. M. Hayat, S. N. Torres, E. Armstrong, S. C. Cain, and B. Yasuda, "Statistical algorithm for nonuniformity correction in focal-plane arrays," Appl. Opt. 38, 772–780 (1999).
5. R. C. Hardie, M. M. Hayat, E. Armstrong, and B. Yasuda, "Scene-based nonuniformity correction with video sequences and registration," Appl. Opt. 39, 1241–1250 (2000).
6. B. M. Ratliff, M. M. Hayat, and R. C. Hardie, "An algebraic algorithm for nonuniformity correction in focal-plane arrays," J. Opt. Soc. Am. A 19, 1737–1747 (2002).
7. B. M. Ratliff, M. M. Hayat, and J. S. Tyo, "Radiometrically accurate scene-based nonuniformity correction for array sensors," J. Opt. Soc. Am. A 20, 1890–1899 (2003).
8. B. M. Ratliff, M. M. Hayat, and J. S. Tyo, "Generalized algebraic scene-based nonuniformity correction algorithm," J. Opt. Soc. Am. A 22, 239–249 (2005).
9. B. Narayanan, R. C. Hardie, and R. A. Muse, "Scene-based nonuniformity correction technique that exploits knowledge of the focal-plane array readout architecture," Appl. Opt. 44, 3482–3491 (2005).
10. S. N. Torres and M. M. Hayat, "Kalman filtering for adaptive nonuniformity correction in infrared focal-plane arrays," J. Opt. Soc. Am. A 20, 470–480 (2003).
11. J. E. Pezoa, M. M. Hayat, S. N. Torres, and M. S. Rahman, "Multimodel Kalman filtering for adaptive nonuniformity correction in infrared sensors," J. Opt. Soc. Am. A 23, 1282–1291 (2006).
12. R. C. Hardie, K. J. Barnard, J. G. Bognar, E. E. Armstrong, and E. A. Watson, "High resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system," Opt. Eng. 37, 247–260 (1998).
13. B. E. Dunne and G. A. Williamson, "QR-based TLS and mixed LS-TLS algorithms with applications to adaptive IIR filtering," IEEE Trans. Signal Process. 51, 386–394 (2003).
14. F. Torres, S. N. Torres, and C. San Martin, "A recursive least square adaptive filter for nonuniformity correction of infrared image sequences," in Progress in Pattern Recognition, Image Analysis and Applications, Vol. 3773 of Lecture Notes in Computer Science (Springer, 2005), pp. 540–546.
15. C. Davila, "An efficient recursive total least squares algorithm for FIR adaptive filtering," IEEE Trans. Signal Process. 42, 415–419 (1994).
16. G. Golub and C. Van Loan, Matrix Computations (Johns Hopkins U. Press, 1983).
17. D. A. Scribner, K. A. Sarkady, J. T. Caulfield, M. R. Kruer, G. Katz, and C. J. Gridley, "Nonuniformity correction for staring IR focal plane arrays using scene-based techniques," Proc. SPIE 1308, 224–233 (1990).
18. J. G. Harris and Y. M. Chiang, "Minimizing the 'ghosting' artifact in scene-based nonuniformity correction," Proc. SPIE 3377, 106–113 (1998).
