Multimodal and time-lapse skin registration


S. Madan1, K. J. Dana1 and G. O. Cula2

1Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ, USA and 2Consumer and Personal Product Division,Johnson & Johnson, Piscataway, NJ, USA

Background/purpose: Computational skin analysis is revolutionizing modern dermatology. Patterns extracted from image sequences enable algorithmic evaluation. Stacking multiple images to analyze pattern variation implicitly assumes that the images are aligned per-pixel. However, breathing and involuntary motion of the patient causes significant misalignment. Alignment algorithms designed for multimodal and time-lapse skin images can solve this problem. Sequences from multimodal imaging capture unique appearance features in each modality. Time-lapse image sequences capture skin appearance change over time.

Methods: Multimodal skin images have been acquired under five different modalities: three in reflectance (visible, parallel-polarized, and cross-polarized) and two in fluorescence mode (UVA and blue light excitation). For time-lapse imagery, 39 images of acne lesions over a 3-month period have been collected. The method detects micro-level features like pores, wrinkles, and other skin texture markings in the acquired images. Images are automatically registered to subpixel accuracy.

Results: The proposed registration approach precisely aligns multimodal and time-lapse images. Subsurface recovery from multimodal images has misregistration artefacts that can be eliminated using this approach. Registered time-lapse imaging captures the evolution of appearance of skin regions with time.

Conclusion: Misalignment in skin imaging has significant impact on any quantitative or qualitative image evaluation. Micro-level features can be used to obtain highly accurate registration. Multimodal images can be organized with maximal overlap for successful registration. The resulting point-to-point alignment improves the quality of skin image analysis.

Key words: multimodal registration – time-lapse registration – micro-level features – surface component recovery – appearance tracking

© 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd. Accepted for publication 20 September 2014.

COMPUTATIONAL SKIN analysis, including pattern recognition and change detection, is creating a foundation for modern quantitative dermatology. Pattern analysis can utilize multiple imaging modalities (multimodal) or imaging over time (time-lapse). Skin imaging modalities include blue fluorescence, ultraviolet fluorescence, visible light (unpolarized), and crossed/parallel polarization. For each of these five modalities, a distinct set of features is captured in the image. Stacking the images provides a five-dimensional appearance vector, but only if the images are aligned per-pixel. Because the images are high resolution (less than 1 mm per pixel), even breathing and involuntary motions of the patient cause significant misalignment. Alignment algorithms are based on common features, so registering images from modalities with complementary instead of common features is a challenge. We present a method of high-resolution multimodal skin registration that uses micro-features for alignment and a maximal overlap ordered sequence for multimodal alignment. Our results demonstrate highly accurate subpixel alignment in both multimodal and time-lapse image sequences. Interest in multimodal skin imaging is supported by observations of porphyrin in fluorescence modalities. In addition, polarized light imaging offers the key advantage that surface structures can be separated from subsurface structures (1–3). Time-lapse imaging captures the appearance of skin during the different stages of the pathology, and is used by dermatologists (4–6) to track the changes in skin appearance and evolution of disease over time.

Existing methods of face registration, e.g.,

active appearance models (7, 8) and the Lucas–Kanade registration algorithm (9), typically rely on macro-features like eyes, nose, and chin boundary to register face images.

Skin Research and Technology 2014; 0: 1–8. Printed in Singapore. All rights reserved. doi: 10.1111/srt.12195. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

Accurate alignment of high-resolution skin regions that

do not have these structures is a challenge. In our approach, micro-level features like pores, wrinkles, and other skin texture markings are used to register high-resolution multimodal face images acquired in dermatology clinical studies. We show that these micro-features are sufficient to achieve high accuracy registration. In particular, we register: (a) multimodal images acquired under polarized, fluorescence, and visible light within a one-second time interval, and (b) a sequence of time-lapse skin images with acne lesions acquired over a 3-month clinical study. We have developed a publicly available software toolbox to precisely register skin images using micro-level features (link provided in final publication).

Overview of registration techniques

Detailed surveys of registration methods can be found in Zitova and Flusser (10) and Szeliski (11). Feature-based methods generally extract features and match points to estimate a transformation between the two images (12–14). Our approach is also feature based, but we use micro-features to achieve subpixel, pore-level alignment. Mutual information-based registration methods (15–17) register images by maximizing the mutual information between them; this approach assumes that the intensities in the two images being registered are statistically dependent. Deformable model-based methods have the ability to handle non-rigid motion. For example, Rueckert et al. (18) use cubic B-splines to register breast MRI images, and Mattes et al. (19) use cubic B-splines to register PET and CT chest images. Active appearance models (20–22) are deformable models used in computer vision for registering and tracking faces. However, active appearance models rely on macro-level features like lips, face boundary, nose, and eyes instead of micro-level features.

Materials and Methods

Imaging modalities

We have acquired skin images under five different modalities. The reflectance modes are visible light unpolarized (VL), parallel-polarized (PL), and cross-polarized (CP). The fluorescence modes are UVA and blue light excitation (BL). The different modalities capture different

information about skin. The intensity I_v measured by the VL image sensor consists of two components, the surface reflectance and the subsurface reflectance (23). The surface component I_s is the part of the incident light reflected off the skin surface, and the subsurface component I_d is the part of the incident light traveling through the stratum corneum and the epidermis before it exits the skin. The VL sensor measures the sum of the surface and the subsurface components,

    I_v = I_s + I_d.    (1)

Polarized images are acquired by placing a linear polarizer in front of the light source and the camera (24–26). For the PL mode, the polarizer in front of the sensor is parallel to the polarizer in front of the light source. In this mode, the intensity I_p measured by the sensor is,

    I_p = I_s + (1/2) I_d.    (2)

The PL image enhances the surface component and surface features like raised borders of lesions, pore structure, and wrinkles (27). When the polarizer in front of the sensor is perpendicular to the polarizer in front of the light source for the CP image, the entire surface component gets blocked, and the sensor measures only half the subsurface component. The intensity I_x measured by the sensor is,

    I_x = (1/2) I_d.    (3)
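Equations (1)–(3) imply a simple consistency relation between the three reflectance modalities: the PL and CP measurements sum to the VL measurement, I_p + I_x = I_v. A minimal NumPy sketch (synthetic component values, purely illustrative) makes this concrete:

```python
import numpy as np

# Synthetic per-pixel surface and subsurface components (illustrative only).
rng = np.random.default_rng(0)
I_s = rng.uniform(0.1, 0.5, size=(4, 4))   # surface reflectance
I_d = rng.uniform(0.2, 0.6, size=(4, 4))   # subsurface reflectance

# The three reflectance modalities according to Eqs (1)-(3).
I_v = I_s + I_d          # visible light, unpolarized
I_p = I_s + 0.5 * I_d    # parallel-polarized
I_x = 0.5 * I_d          # cross-polarized

# Under this model, the PL and CP measurements sum to the VL measurement.
assert np.allclose(I_p + I_x, I_v)
```

This identity only holds per-pixel when the modality images are registered, which is precisely why misalignment corrupts any component analysis.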

The CP image brings out subsurface skin features like color variation due to melanin and erythema (28).

Fluorescence images are obtained with either ultraviolet (UVA) excitation at 360–400 nm or blue light excitation at 400–460 nm. Under UVA excitation (29, 30), the collagen cross-links fluoresce, and photo-damaged regions appear as dark spots produced by absorbance of epidermal pigmentation. Collagen cross-links increase with exposure and age. Under blue light excitation, elastin cross-links fluoresce. Blue light excitation enables detection of both melanin pigmentation and superficial vasculature, and sebum-producing pores appear as yellow-white spots.

Imaging setup

We acquire multimodal and time-lapse images using custom-built equipment, as shown in


Fig. 1. The imaging equipment consists of a light source, a sensor, and polarizer filters placed in front of the light source and the sensor. In each imaging session, multimodal images of the subject are acquired within a 1 s time interval. During the acquisition, the operation of the sensor and the orientation of the polarizers are synchronized using a computer. Figure 2 shows an example image from each modality. The eye region and all face images in the paper have been blacked out to preserve the identity of the subject. Figure 2a–c shows VL, PL, and CP images. Figure 2d and e shows UVA and BL images. Although a chinrest is used and the images are obtained in rapid succession, misregistration of the high-resolution images is evident.

For time-lapse imaging, we have imaged a subject with acne lesions at 39 different time points over a 3-month period. Figure 3 shows example images from the time-lapse set. The time-lapse images are misregistered, which makes it difficult to track the evolution of acne lesions over time, or to perform quantitative processing on them. Precise point-to-point registration of the temporal images allows visualization of the evolution of acne lesions over time.

Multimodal registration method

We use micro-level features to register multimodal images acquired in a single imaging session during a clinical study. Because we register a small face patch, the surface can be approximated as planar. Figure 4 shows image patches from multimodal images that are used for registration

Fig. 1. Imaging equipment to acquire multimodal images.


Fig. 2. Multimodal skin images. Reflectance images: unpolarized (a), parallel-polarized (b), and cross-polarized (c). Fluorescence images: UVA (d) and blue light (e) excitation.


and do not contain macro-structures like eyes, nose, and mouth. In traditional computer vision algorithms, such regions are treated as featureless regions. However, the patch images are abundant in micro-level features like pores, wrinkles, and skin texture, and we register the patch images by extracting and matching the micro-level features. By applying a local intensity transform for contrast enhancement, micro-features can be detected with the scale invariant feature transform (SIFT) (31) interest point detector. We increase the contrast of the patch images by stretching the intensities of the images in gray scale such that 1% of the intensities is saturated. Features are clearly brought out in the contrast-enhanced patch images. We use a common computer vision workflow for matching SIFT feature points to estimate a transformation between the patch images for alignment. The transformation can be modeled as quadratic, affine, or a homography, i.e., a 3 × 3 invertible matrix. According to the theory of multiview geometry, when a planar surface is imaged from two different viewpoints (32), the

Fig. 3. Sample images of a subject with acne lesions. The subject was imaged at 39 different time points over a 3-month period. Three time samples are shown.

Fig. 4. Multimodal alignment overview. The key innovation is a procedure to use micro-features for alignment. By ordering the sequence of multimodal images as illustrated, alignment pairs have maximal overlap in features. A concatenation of the alignment parameters enables alignment of the entire set.
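The contrast enhancement described in the method (stretching gray-scale intensities so that 1% of them saturate) can be sketched as follows. This is a minimal NumPy version; the paper does not specify its exact implementation, so the even split of the 1% between the two tails is our assumption:

```python
import numpy as np

def stretch_contrast(img, saturation=0.01):
    """Linearly stretch grayscale intensities so that `saturation`
    fraction of pixels (split evenly between both tails) saturates."""
    lo = np.percentile(img, 100 * saturation / 2)
    hi = np.percentile(img, 100 * (1 - saturation / 2))
    out = (img.astype(np.float64) - lo) / (hi - lo)
    return np.clip(out, 0.0, 1.0)

# Example: a low-contrast patch occupying only the range [0.4, 0.6].
patch = np.linspace(0.4, 0.6, 1000).reshape(10, 100)
enhanced = stretch_contrast(patch)
# After stretching, the patch spans the full [0, 1] range.
```

Feature detection on the enhanced patch can then use any SIFT implementation; only the contrast stretch is shown here.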


transformation between the two images can be modeled using a homography. By assuming planar patches, the transformation between patch points is a homography. In particular, there exists a matrix H such that,

    s [x_1; 1] = H [x_2; 1],    (4)

where x_1 = [x_1, y_1]^T is the pixel location of the point in one image, x_2 = [x_2, y_2]^T the pixel location in the other image, s a scale factor, and [x; 1] denotes the homogeneous coordinates of pixel x. Equation 4 can be rewritten as,

    [ x_2^T  1  0^T    0  -x_1 x_2^T  -x_1 ] [ h_1^T ]
    [ 0^T    0  x_2^T  1  -y_1 x_2^T  -y_1 ] [ h_2^T ] = 0,    (5)
                                             [ h_3^T ]

where h_1, h_2, and h_3 are the first, second, and third rows of the homography matrix. Each pair of point correspondences gives two constraints on the homography matrix. A total of n point correspondences gives 2n constraints, which can be written as,

    A h = 0,    (6)

where h = [h_1^T; h_2^T; h_3^T]^T, and A is a 2n × 9 matrix representing the constraints from the point correspondences. Estimating the homography by directly solving Eq. (6) in the presence of outliers results in an incorrect estimate. To estimate the homography, outliers are removed using the RANSAC (33) algorithm. In a typical example, approximately 100 feature points are retained from an original set of 1500 feature points. Once all the outliers are removed, the homography in Eq. (6) is re-estimated. The estimate is further refined using the Levenberg–Marquardt optimization algorithm (34). The final homography is used to warp the input patch image onto the reference patch image using bilinear interpolation.

For registering multimodal images, pairs of images are registered and then homographies are concatenated to register all five modes to a single coordinate frame, as shown in Fig. 4. The pairs for registration are chosen according to a maximal overlap ordering: a maximal overlap ordered sequence that enables registration is identified. Specifically, we have determined by observation that a specific sequence of image pairs enables registration because there exists a sufficiently large set of common features. Registration of the entire multimodal set is accomplished in a pairwise manner by aligning: (a) PL and VL images, (b) VL and CP images, (c) UVA fluorescence and CP images, and (d) blue-excited fluorescence and UVA fluorescence images (BL and UVA).
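The direct linear transform (DLT) solution of Eqs (5)–(6) can be sketched in a few lines of NumPy. This is a minimal version on exact synthetic correspondences; the RANSAC outlier-rejection and Levenberg–Marquardt refinement stages used in the actual pipeline are omitted:

```python
import numpy as np

def estimate_homography(pts1, pts2):
    """DLT: estimate H mapping pts2 -> pts1 from n >= 4 point
    correspondences (Eqs (5)-(6)). pts1, pts2: (n, 2) arrays."""
    rows = []
    for (x1, y1), (x2, y2) in zip(pts1, pts2):
        # Each correspondence contributes two rows of the constraint matrix A.
        rows.append([x2, y2, 1, 0, 0, 0, -x1 * x2, -x1 * y2, -x1])
        rows.append([0, 0, 0, x2, y2, 1, -y1 * x2, -y1 * y2, -y1])
    A = np.asarray(rows)            # 2n x 9 constraint matrix
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)        # null-space vector = smallest singular value
    return H / H[2, 2]              # fix the arbitrary scale

# Sanity check: recover a known homography from synthetic correspondences.
H_true = np.array([[1.02, 0.01, 5.0],
                   [-0.02, 0.98, -3.0],
                   [1e-4, -2e-4, 1.0]])
pts2 = np.array([[10, 10], [200, 30], [50, 180], [220, 210], [120, 90]], float)
homog = np.hstack([pts2, np.ones((5, 1))]) @ H_true.T
pts1 = homog[:, :2] / homog[:, 2:]
H_est = estimate_homography(pts1, pts2)
assert np.allclose(H_est, H_true, atol=1e-6)
```

Chaining the pairwise results is then matrix multiplication: if H_ab maps mode b to mode a and H_bc maps c to b, the product H_ab @ H_bc brings mode c into mode a's coordinate frame.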

Results

Multimodal registration

Figure 4 shows the images of a 1000 × 850 pixel skin patch acquired under PL, visible, CP, UVA excitation, and blue excitation modalities. The skin appearance is different under different modalities. The following pairs are registered: VL and PL, VL and CP, UVA and CP, and BL and UVA. Figure 4 shows the pixel locations of corresponding feature points in the misregistered pairs plotted together as white and black points. Note that the pixel locations do not overlap, indicating misregistration. The root mean square (RMS) values of the difference between the pixel locations in the misregistered images are 12.20, 9.54, 24.55, and 14.39 pixels, respectively. The images being registered are very high resolution; therefore, a small change in the location of a scene point results in a large difference between the pixel corresponding to the new scene location and the pixel corresponding to the original scene location. Figure 4 (bottom row) shows the pixel locations of the corresponding feature points in the registered images. Note that the pixel locations overlap, indicating precise registration. The RMS values of the difference between the pixel locations in the registered images are 0.80, 0.91, 1.46, and 0.90 pixels.

Time-lapse registration

The time-lapse set, consisting of 39 misregistered images of forehead acne lesions, is registered using the micro-level feature-based registration approach. In the first step, the images are globally registered using the Lucas–Kanade algorithm. Figure 5 shows the input images and the registration results; after global registration, alignment with micro-features removes the residual misregistration between the globally registered images. The RMS value of the difference between the pixel locations is 3.198 for the globally registered images, and



Fig. 6. Surface component recovery from images that are parallel-polarized (a) and cross-polarized (b). The results using unregistered input images (c, e) and registered input images (d, f).


Fig. 5. (a) Two of the 39 skin images with acne lesions acquired in the 3-month clinical study. (b) Pixel locations of corresponding feature points in the input images, shown in black and white for the two images; the feature points are clearly misaligned. (c) Pixel locations of corresponding feature points in the globally registered images. (d) Pixel locations of corresponding feature points in the final precisely registered forehead regions.


0.861 for the final micro-level feature-based registration approach.

Videos showing the entire set of 39 misregistered and registered time-lapse images are available; a link will be provided in the final version.
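The RMS misalignment figures quoted in this section are computed from corresponding feature locations in the two images. A short sketch (the paper does not state its exact formula; this assumes the RMS of the per-point Euclidean distances, and the point sets are hypothetical):

```python
import numpy as np

def rms_misalignment(pts_a, pts_b):
    """Root-mean-square Euclidean distance, in pixels, between
    corresponding feature locations in two images."""
    d = np.asarray(pts_a, float) - np.asarray(pts_b, float)
    return float(np.sqrt(np.mean(np.sum(d ** 2, axis=1))))

# Hypothetical matched feature locations before registration:
# per-point distances are 5 and 0, so RMS = sqrt((25 + 0) / 2).
before = rms_misalignment([[0, 0], [10, 0]], [[3, 4], [10, 0]])
```

After a successful registration, the same statistic computed on the warped feature locations drops to the subpixel values reported above.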

Recovering surface reflectance

The surface reflectance component can be obtained by subtracting the CP image from the PL image if they are registered. Figure 6a and b shows a pair of polarized images (parallel and crossed) acquired in succession. Figure 6c shows the surface component recovered using the unregistered input images. Figure 6d shows the surface reflectance using the registered images. Misregistration artefacts are clearly seen in Fig. 6c in the form of dark spots throughout the image. Artefacts are also seen in Fig. 6c around the bottom tip of the nose region. The close-up view of the nose region within the rectangular boundary is shown in Fig. 6e (unregistered input images) and Fig. 6f (registered input images). The surface component can be accurately recovered by registering the input polarized images up to the resolution of pore-level features using the micro-level feature-based registration algorithm before subtraction.
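Once the PL and CP images are registered, the recovery step itself is a per-pixel subtraction, I_s = I_p − I_x, following Eqs (2)–(3). A minimal sketch on synthetic data; the clipping of small negative values (which sensor noise would produce) is our addition, not specified in the paper:

```python
import numpy as np

def recover_surface(I_p, I_x):
    """Surface reflectance from registered parallel-polarized (I_p)
    and cross-polarized (I_x) images: I_s = I_p - I_x.
    Small negative values from noise are clipped to zero (assumption)."""
    return np.clip(I_p.astype(np.float64) - I_x, 0.0, None)

# Synthetic check against the imaging model of Eqs (1)-(3).
I_s_true = np.array([[0.30, 0.10], [0.20, 0.40]])
I_d      = np.array([[0.40, 0.60], [0.50, 0.20]])
I_p = I_s_true + 0.5 * I_d     # parallel-polarized measurement
I_x = 0.5 * I_d                # cross-polarized measurement
assert np.allclose(recover_surface(I_p, I_x), I_s_true)
```

Without registration, I_p and I_x at the same pixel index refer to slightly different skin locations, and the subtraction produces the dark-spot artefacts visible in the unregistered result.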

Discussion

The results show effective skin registration using a computational approach designed for generic computer vision tasks. The innovative contribution is the identification of a pairwise approach for registering multimodal images that have no common feature set. In addition, we have shown that micro-features such as pores in reflectance images and porphyrin in fluorescence images have sufficient visual structure for feature-based registration.

References

1. Bae EJ, Seo SH, Kye YC, Ahn HH. A quantitative assessment of the human skin surface using polarized light digital photography and its dermatologic significance. Skin Res Technol 2010; 16: 270–274.

2. Muccini A, Kollias N, Phillips SB, Anderson RR, Sober AJ, Stiller MJ, Drake LA. Polarized light photography in the evaluation of photoaging. J Am Acad Dermatol 1995; 33: 765–769.

3. Kollias N, Stamatas GN. Optical non-invasive approaches for diagnosis of skin diseases. J Invest Dermatol 2002; 7: 64–75.

4. Hongcharu W, Taylor CR, Chang Y, Aghassi D, Suthamjariya K, Anderson RR. Topical ALA-photodynamic therapy for the treatment of acne vulgaris. J Invest Dermatol 2000; 115: 183–192.

5. Tanzi EL, Alster TS. Comparison of a 1450-nm diode laser and a 1320-nm Nd:YAG laser in the treatment of atrophic facial scars: a prospective clinical and histologic study. Dermatol Surg 2004; 30: 152–157.

6. Rizova E, Kligman A. New photographic techniques for clinical evaluation of acne. J Eur Acad Dermatol Venereol 2001; 15: 13–18.

7. Cootes T, Edwards G, Taylor C. Active appearance models. IEEE Trans Pattern Anal Mach Intell (PAMI) 2001; 23: 681–685.

8. Matthews I, Baker S. Active appearance models revisited. Int J Comput Vis 2004; 60: 135–164.

9. Baker S, Matthews I. Lucas–Kanade 20 years on: a unifying framework. Int J Comput Vis 2004; 56: 221–255.

10. Zitova B, Flusser J. Image registration methods: a survey. Image Vis Comput 2003; 21: 977–1000.

11. Szeliski R. Image alignment and stitching: a tutorial. Found Trends Comput Graph Vis 2006; 2: 1–104.

12. Capel D, Zisserman A. Automated mosaicing with super-resolution zoom. IEEE conference on computer vision and pattern recognition, California, 1998, 885–891.

13. Ni D, Qu Y, Yang X, Chui YP, Wong TT, Ho SS, Heng PA. Volumetric ultrasound panorama based on 3D SIFT. International conference on medical image computing and computer-assisted intervention, New York, 2008, 52–60.

14. Yang G, Stewart CV, Sofka M, Tsai CL. Registration of challenging image pairs: initialization, estimation, and decision. IEEE Trans Pattern Anal Mach Intell (PAMI) 2007; 29: 1973–1989.

15. Pluim JPW, Maintz JBA, Viergever MA. Mutual information based registration of medical images: a survey. IEEE Trans Med Imaging 2003; 22: 986–1004.

16. Maes F, Collignon A, Vandermeulen D, Marchal G, Suetens P. Multimodality image registration by maximization of mutual information. IEEE Trans Med Imaging 1997; 16: 187–198.

17. Butz T, Thiran JP. Affine registration with feature space mutual information. International conference on medical image computing and computer-assisted intervention, Utrecht, 2001, 549–556.

18. Rueckert D, Sonoda LI, Hayes C, Hill DLG, Leach MO, Hawkes DJ. Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging 1999; 18: 712–721.

19. Mattes D, Haynor DR, Vesselle H, Lewellen TK, Eubank W. PET-CT image registration in the chest using free-form deformations. IEEE Trans Med Imaging 2003; 22: 120–128.

20. Saragih JM, Lucey S, Cohn JF. Deformable model fitting by regularized landmark mean-shift. Int J Comput Vis 2011; 91: 200–215.

21. Butz T, Thiran JP. Multi-view face alignment using direct appearance models. International conference on automatic face and gesture recognition, Washington, DC, 2002, 324–329.

22. Xiao J, Moriyama T, Kanade T, Cohn J. Robust full-motion recovery of head by dynamic templates and re-registration techniques. Int J Imag Syst Technol 2003; 13: 85–94.

23. Nayar S, Fang X, Boult T. Removal of specularities using color and polarization. IEEE conference on computer vision and pattern recognition, New York, 1993, 583–590.

24. Anderson R. Polarized light examination and photography of the skin. Arch Dermatol 1991; 127: 1000–1005.

25. Studinski RCN, Vitkin IA. Methodology for examining polarized light interactions with tissues and tissue-like media in the exact backscattering direction. J Biomed Opt 2000; 5: 330–337.

26. Jacques SL, Ramella-Roman JC, Lee K. Imaging superficial tissues with polarized light. Lasers Surg Med 2000; 26: 119–129.

27. Kollias N. Polarized light photography of human skin. In: Bioengineering of the skin. New York: CRC Press, 1997; 95–104.

28. Kollias N, Stamatas GN. Optical non-invasive approaches for diagnosis of skin diseases. J Invest Dermatol 2002; 17: 64–75.

29. Bae Y, Nelson JS, Jung B. Multimodal facial color imaging modality for objective analysis of skin lesions. J Biomed Opt 2008; 13: 064007.

30. Han B, Jung B, Nelson JS, Choi EH. Analysis of facial sebum distribution using a digital fluorescent imaging system. J Biomed Opt 2007; 12: 014006.

31. Lowe DG. Distinctive image features from scale invariant keypoints. Int J Comput Vis 2004; 60: 91–100.

32. Hartley R, Zisserman A. Multiple view geometry in computer vision. Cambridge: Cambridge University Press, 2000.

33. Fischler MA, Bolles RC. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM 1981; 24: 381–395.

34. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical recipes in C++. Cambridge: Cambridge University Press, 2002.

Address:
Siddharth Madan
Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ, USA
Tel: +1 848 445 5253
Fax: +1 732 445 2820
e-mail: [email protected]
