Optomechanical shape analysis using group theory





Jenny Magnes,1,* Margo Kinneberg,1 Rahul Khakurel,1 and Noureddine Melikechi2

1Physics and Astronomy Department, Vassar College, 124 Raymond Avenue, Poughkeepsie, New York 12604, USA

2Department of Physics and Pre-Engineering and the Center for Research and Education in Optical Sciences

and Applications, Delaware State University, 1200 North DuPont Highway, Dover, Delaware 19901, USA

*Corresponding author: [email protected]

Received 25 January 2010; revised 24 May 2010; accepted 27 May 2010; posted 30 June 2010 (Doc. ID 123016); published 27 July 2010

We describe an optomechanical technique using a knife-edge, which is scanned spatially across a beam of light to identify shape-based irradiance. Symmetry groups are identified through linear and rotational scanning signatures of illuminated shapes. The scanning signature is used to classify the shape into a symmetry group. To demonstrate the shape analysis technique, we have classified basic geometric shapes, which belong to the orthogonal and dihedral symmetry groups O2, D2, D3, and D6. © 2010 Optical Society of America

OCIS codes: 200.1130, 200.4740, 200.4880, 230.0230.

1. Introduction

The field of shape recognition is generally understood as the remote sensing and identification of shapes and objects through optical techniques. Researchers in the field have focused their efforts on edge detection [1,2], shadow techniques [3–5], shape classification [6–10], and shape matching [11,12], as well as spectroscopic techniques [13]. Edge detection is probably one of the most important components of shape recognition. This approach relies heavily on image processing techniques because shadows and obstructions can distort the ability to define edges. Shape matching typically relies on statistically matching a shape through overlap with a stored model shape. This method presents challenges because some objects may give poor statistical matches, whereas object identification may be performed more accurately through symmetry identification. Biological systems typically rely on symmetry-based shape recognition. In particular, Vetter et al. have demonstrated that the human visual system applies symmetry to combine a limited number of two-dimensional views to recognize an object in new views [14]. We propose a quantitative optomechanical symmetry recognition method. This method lends itself to applications in quality-control manufacturing as well as to other known symmetry recognition uses, such as surveillance and remote sensing. Moreover, the proposed optomechanical shape recognition system could work well as an element of a hybrid object recognition method.

Pixelated cameras, mostly charge-coupled device (CCD) cameras, are the primary devices currently used for data acquisition in object recognition. Using CCD arrays, one image or a series of images is recorded and then processed. Comparisons between optical intensities or wavelengths recorded by individual pixels are a common method for processing images. These methods have proven successful in edge detection, as reported by Medina-Carnicer et al. [2], who developed an automated method for outlining shapes in complex scenes. An alternative method for improving machine vision is the use of geons [15]. Geons fit a two-dimensional shape into various cross sections of a three-dimensional shape; the three-dimensional shape is then defined by the way in which the two-dimensional shape changes as it scans through the three-dimensional shape. The above methods often suffer from one major drawback: they require large amounts of computing power or time. In this paper,


4188 APPLIED OPTICS / Vol. 49, No. 22 / 1 August 2010


we present a complementary technique that can be used for shape analysis, is experimentally straightforward, and requires very little computing power. Our approach is based on the application of a knife-edge scanning method in which only the collective optical power passing by an object is recorded. No recording (photographing) of the image or shadow itself is necessary, because the total power is recorded as the knife-edge is translated or rotated across the shadow, as described in the next section. We then discuss ways to use the recorded optical power as a function of distance or angle to identify objects based on symmetry. A tabletop setup is used to detect and classify shapes in accordance with their symmetry characteristics through linear [16–18] and rotational optomechanical scanning. Optomechanical scanning signatures are mapped onto symmetry structures by applying group theory.

2. Experimental Setup

Choosing an object that is much larger than the wavelength of the light used to illuminate it minimizes diffraction effects. In general, visible light will work well for dimensions larger than 1 cm. The experimental setup used to record two-dimensional optical power scans of a given object is similar to the kind used to analyze Gaussian laser beams [19,20]. In this setup, however [see Fig. 1(a)], a light beam is expanded to cover the edges outlining an object, which is mounted on a rotational stage. The remaining light passes the knife-edge, which is mounted on a translational stage, and enters a photodetector. There are two scanning mechanisms:

1. The first scanning mechanism, shown in Fig. 1(b), consists of a series of linear scans. The linear scans are used to identify symmetry axes for reflections. In this case, the knife-edge is translated across the object, and as the object is uncovered, the power is recorded. If the derivative of the data is symmetrical, it follows that a symmetry axis exists for reflections, because the rate of change of the power with translation is then symmetrical. Repeating this process for various orientations determines whether other symmetry axes exist.

2. The second scanning mechanism, shown in Fig. 1(c), involves a rotational scan used to identify rotational symmetries. In this case, the razor blade remains fixed as the object is rotated. Data from the photodetector are recorded as a function of the angle of rotation. These data help to determine rotational symmetries, as described below.

Positive identification of an object may depend on linear or rotational scans or both. It is therefore advisable to build a library of shapes based on the application, so that the number of objects in the library determines how many scans are necessary. For example, linear scans of an elliptical disk and a circular disk will always indicate perfect reflection symmetry because the areas on each side are equal, regardless of the orientation of the ellipse or the circle. Also, if a circle or an ellipse is covered exactly halfway and then rotated [Fig. 1(c)], there will be no power fluctuations in the detector. By contrast, off-center rotational scans, i.e., covering the circle or ellipse less or more than halfway, would cause the power to fluctuate as the ellipse is rotated.
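The linear scanning mechanism can be sketched numerically. The following snippet is an illustrative simulation, not the authors' code; the unit disk, grid resolution, and step count are hypothetical choices made only to show the recorded-power curve:

```python
import numpy as np

def linear_scan(inside, lo, hi, n_steps=200, n_grid=400):
    """Simulate a knife-edge scan: at each edge position the recorded
    power is proportional to the number of illuminated grid points not
    blocked by the shape or by the knife-edge. `inside(x, y)` returns
    True where the opaque shape blocks the beam."""
    xs = np.linspace(lo, hi, n_grid)
    X, Y = np.meshgrid(xs, xs)
    blocked = inside(X, Y)
    edge_positions = np.linspace(lo, hi, n_steps)
    power = np.array([np.sum(~blocked & (X < e)) for e in edge_positions],
                     dtype=float)
    return edge_positions, power

# Example: scan a unit disk centered in a 4 x 4 beam
edges, P = linear_scan(lambda x, y: x**2 + y**2 < 1.0, -2.0, 2.0)
```

The recorded power grows monotonically as the edge uncovers the beam; it is the derivative of this curve, discussed in Section 3, that carries the symmetry information.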

3. Data Analysis and Discussion

The data analysis for the linear scans can be completed in several ways. Figure 2(a) shows the raw data, i.e., the exposed area as a function of edge translation, for two different orientations. While the two sets of raw data look strikingly similar, the derivatives, i.e., the change in area, differ in symmetry depending on the orientation, as shown in Fig. 2(b). A symmetrical change in area is an indicator of reflection symmetry:

$$\tilde{A}' = \sigma_y \tilde{A} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x - x_{\mathrm{peak}} \\ y \end{pmatrix} = \begin{pmatrix} -(x - x_{\mathrm{peak}}) \\ y \end{pmatrix}, \tag{1}$$

where σ_y is the reflection matrix with respect to the y axis, and Ã and Ã′ are mirrored position vectors originating at the intersection of the symmetry axis through the peak and the horizontal axis, ending at an arbitrary point on the figure. It can be seen in Fig. 2(b) that points A and A′ are fairly “good” mirror images of each other, indicating that this orientation contains a symmetry axis for reflections. Points B and B′, however, are clearly not symmetrical with respect to the peak of the tilted figure. Below we

Fig. 1. (Color online) (a) A laser was used in this optomechanical scanning setup. The laser light is expanded using a diffuser so that it envelops the object. The knife-edge is used to scan across the shadow. The light is then collected using a lens and directed onto a photodetector. (b) A knife-edge is displaced linearly. (c) The object is rotated about different points for rotational scanning.


discuss a method to positively distinguish “good” mirror images from nonsymmetric orientations.
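Equation (1) can be verified in a few lines of code. The sketch below uses illustrative values only (the peak location and point coordinates are hypothetical); it applies the reflection matrix σ_y to a point measured from the symmetry axis through x_peak and maps the result back to lab coordinates:

```python
import numpy as np

# Reflection about the vertical symmetry axis through x_peak, as in Eq. (1).
sigma_y = np.array([[-1, 0],
                    [ 0, 1]])

x_peak = 2.5                                         # hypothetical peak location
A = np.array([x_peak + 0.8, 1.3])                    # point A on the curve
A_prime = sigma_y @ np.array([A[0] - x_peak, A[1]])  # mirrored vector
A_mirror = A_prime + np.array([x_peak, 0.0])         # back to lab coordinates

# A_mirror has the same height as A and lies at equal distance on the
# other side of the peak, as required for a reflection-symmetric pair.
```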

An efficient evaluation of the symmetries of the two data sets in Fig. 2 is performed by calculating the skewness for each figure. It should be noted that a skewness (or third moment) of zero indicates perfect symmetry, while a positive skewness indicates a peak that is shifted toward the negative side. Skewness is typically used to verify the validity of statistical data sets:

$$\gamma = \frac{n}{(n-1)(n-2)} \sum_i \left( \frac{x_i - \bar{x}}{\sigma} \right)^3, \tag{2}$$

where n is the total number of statistical data points, x_i is the horizontal value of the data point, x̄ is the statistical average, and σ is the standard deviation. The above equation can be modified to work for analog data sets:

$$\gamma = \frac{n}{(n-1)(n-2)} \sum_{j=1}^{m} f(x_j) \left( \frac{x_j - \bar{x}}{\sigma} \right)^3, \tag{3}$$

where m is the number of data points and j is the index. Data sets were modified such that n ≫ m to ensure that the skewness obtained is independent of the light intensity in this experiment. Using this

method, the figure tilted by −19.5° (Fig. 2) yields a skewness of −0.374 ± 0.008, while the modeled skewness is −0.381 for this system. The agreement of the modeled and actual data is confirmed visually in Fig. 3. The nontilted figure yields a skewness of −0.083 ± 0.008. This value is not exactly zero because the background intensity has a gradient, as confirmed by the modeled skewness value of −0.087, clearly closer to zero and, therefore, indicating a symmetry axis at no tilt. The data confirm that skewness is a valid measure of symmetry.
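The modified skewness of Eq. (3) can be implemented compactly. The sketch below is a hypothetical implementation, not the authors' code; it assumes the signal values f(x_j) act as count weights, rescaled to an effective total count n ≫ m as described above, so the result is independent of the absolute light intensity:

```python
import numpy as np

def modified_skewness(x, f, n=1_000_000):
    """Skewness of an analog profile f(x) following Eq. (3): the signal
    values act as count weights, rescaled to an effective total count
    n >> m (m = number of samples)."""
    x = np.asarray(x, dtype=float)
    f = np.asarray(f, dtype=float)
    w = f / f.sum()                                 # normalized weights
    mean = np.sum(w * x)                            # intensity-weighted mean
    sigma = np.sqrt(np.sum(w * (x - mean) ** 2))    # weighted std deviation
    z3 = np.sum(n * w * ((x - mean) / sigma) ** 3)  # sum_j f(x_j) z_j^3
    return n / ((n - 1) * (n - 2)) * z3

# A symmetric profile yields a skewness near zero:
x = np.linspace(-3, 3, 101)
gamma_sym = modified_skewness(x, np.exp(-x**2))
```

A tilted or asymmetric profile yields a clearly nonzero value, which is the signature used above to reject orientations that lack a reflection axis.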

It should be noted here that differentiation directly reproduces a shape with this method if the shape is a single-valued function. Convex shapes could be reproduced using optomechanical integration, even if the shape cannot be represented by a function in any orientation. One example is illustrated using a diamond (Fig. 4). Details of this method are described in Ref. [16]. Convex shapes can be determined using rotational scanning (Fig. 5).
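The direct reconstruction by differentiation can be illustrated with a short simulation. Assuming a shadow described by a single-valued height profile h(x) inside a beam of height H, the power derivative is dP/dx ∝ H − h(x), so differentiating the recorded scan recovers the profile. The semicircular shadow and all numbers below are hypothetical:

```python
import numpy as np

# Simulated knife-edge scan of a semicircular shadow in a beam of height H:
# the exposed power P(x) is the running integral of the unblocked column
# height H - h(x).
x = np.linspace(-1.5, 1.5, 301)
H = 2.0
h = np.where(np.abs(x) <= 1, np.sqrt(np.clip(1 - x**2, 0.0, None)), 0.0)
P = np.cumsum(H - h) * (x[1] - x[0])   # exposed power (arbitrary units)

# Differentiating the recorded power recovers the shadow profile:
h_rec = H - np.gradient(P, x)
```

Away from the steep edges of the shadow, the recovered profile agrees with the original to within the discretization error of the scan.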

The data analysis for the rotational scans can be understood by comparing the raw data (Fig. 5) for three shapes: a circle, a square, and an equilateral triangle. The rotational symmetries can be identified by recording the amount of light passing the object and the knife-edge, which covers part of the object off center. The amount of light passing a circular disk does not change as the disk rotates. For any other object, the amount of light changes as long as the knife-edge does not pass through the center of inversion. The amount of light changes proportionally to the area exceeding the largest circular disk fitted inside the object (Fig. 5). The rotational symmetries can then be read off the data curves. The number k of equally large peaks reflects the rotational symmetries by fixing the rotational operator for each shape:

Fig. 2. (Color online) (a) Raw data: area as a function of knife-edge translation for two different orientations. (b) Differentiated area for two different orientations.

Fig. 3. (Color online) Experimental and simulated data for a tilted house show good agreement.


$$C_k^{(z)} = \begin{pmatrix} \cos\frac{2\pi}{k} & \sin\frac{2\pi}{k} & 0 \\ -\sin\frac{2\pi}{k} & \cos\frac{2\pi}{k} & 0 \\ 0 & 0 & 1 \end{pmatrix}. \tag{4}$$

For the circular disk, a flat line indicates an infinite number of rotational symmetries, while four and three equally large peaks indicate the appropriate number of rotational symmetries in a square and an equilateral triangle, respectively.
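As a sanity check on Eq. (4), the k-fold rotation operator applied k times must return the identity, which fixes its order as a symmetry operation. A minimal sketch:

```python
import numpy as np

def C_k(k):
    """Rotation operator about the z axis for k-fold symmetry, as in Eq. (4)."""
    c, s = np.cos(2 * np.pi / k), np.sin(2 * np.pi / k)
    return np.array([[ c, s, 0],
                     [-s, c, 0],
                     [ 0, 0, 1]])

# C_k^k = identity for every k-fold rotational symmetry:
for k in (2, 3, 4, 6):
    assert np.allclose(np.linalg.matrix_power(C_k(k), k), np.eye(3))
```

Applying C_4 twice, for instance, gives the twofold rotation C_2, consistent with the closure of the group operations used in Table 1.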

4. Conclusions

We have shown that the results from linear and rotational scans can be used to classify objects using group multiplication tables, as was done with the equilateral triangle (Table 1). As an example, we could identify the circle as a member of the O2(R) group, while the square and triangle are classified in the dihedral D4 and D3 groups, respectively. This is a system that can easily be automated to build a shape library of optomechanical signatures. Immediate applications lie in quality control in production lines [21] and shadow techniques [3]. Moreover, the use of modified skewness for image analysis is exciting and may inspire new theoretical considerations.

We thank Daniel Lawrence for discussions on skewness. We thank Peter Pappas for discussions on group theory and Kenneth Livingston for discussions on shape recognition. This work was supported by the Vassar College Undergraduate Research Summer Institute and the National Science Foundation Center for Research Excellence in Science and Technology (NSF-CREST) award 0630388.

References

1. Z. He, X. You, and Y. Yuan, “Texture image retrieval based on non-tensor product wavelet filter banks,” Signal Process. 89, 1501–1510 (2009).

2. R. Medina-Carnicer, F. J. Madrid-Cuevas, A. Carmona-Poyato, and R. Muñoz-Salinas, “On candidates selection for hysteresis thresholds in edge detection,” Pattern Recogn. 42, 1284–1296 (2009).

3. H. Kawasaki and R. Furukawa, “Shape reconstruction and camera self-calibration using cast shadows and scene geometries,” Int. J. Comput. Vision 83, 135–148 (2009).

4. G. D. Finlayson, M. S. Drew, and C. Lu, “Entropy minimization for shadow removal,” Int. J. Comput. Vision 85, 35–37 (2009).

5. W. Zhou, G. Huang, A. Troy, and M. L. Cadenasso, “Object-based land cover classification of shaded areas in high spatial resolution imagery of urban areas: a comparison study,” Remote Sens. Environ. 113, 1769–1777 (2009).

Fig. 4. (Color online) Shapes represented by a non-single-valued function can be reproduced in sections using optomechanical scanning, as long as the exposed part represents a function.

Fig. 5. (Color online) Rotational scanning reveals k-fold rotational symmetry through the optomechanical signature. Each peak represents a symmetry axis.

Table 1. Group Multiplication Table for the Dihedral Group D3

D3 | σ0  σ1  σ2  C0  C1  C2
---+------------------------
σ0 | σ0  σ1  σ2  C0  C2  C1
σ1 | σ1  σ2  σ0  C1  C2  C0
σ2 | σ2  σ0  σ1  C2  C0  C1
C0 | C0  C2  C1  σ0  σ2  σ1
C1 | C1  C0  C2  σ1  σ0  σ2
C2 | C2  C1  C0  σ2  σ1  σ0


6. A. Ecker and S. Ullman, “A hierarchical non-parametric method for capturing non-rigid deformations,” Image Vision Comput. 27, 87–98 (2009).

7. D. Levi and S. Ullman, “Learning to classify by ongoing feature selection,” Image Vision Comput. 28, 715–723 (2010).

8. E. Borenstein and S. Ullman, “Combined top-down/bottom-up segmentation,” IEEE Trans. Pattern Anal. Mach. Intell. 30, 2109–2125 (2008).

9. S. J. Dickinson, R. Bergevin, I. Biederman, J.-O. Eklundh, R. Munck-Fairwood, A. K. Jain, and A. Pentland, “Panel report: the potential of geons for generic 3-D object recognition,” Image Vision Comput. 15, 277–292 (1997).

10. F. de Vieilleville and J.-O. Lachaud, “Comparison and improvement of tangent estimators on digital curves,” Pattern Recogn. 42, 1693–1707 (2009).

11. I. Cervantes, R. Baumung, A. Molina, T. Druml, J. P. Gutiérrez, J. Sölkner, and M. Valera, “Size and shape analysis of morphofunctional traits in the Spanish Arab horse,” Livest. Sci. 125, 43–49 (2009).

12. T. Martin, E. Cohen, and R. M. Kirby, “Volumetric parameterization and trivariate B-spline fitting using harmonic functions,” Comput. Aided Geom. Des. 26, 648–664 (2009).

13. R. M. Bolle, J. H. Connell, N. Haas, R. Mohan, and G. Taubin, “VeggieVision: a produce recognition system,” in Proceedings of the 3rd IEEE Workshop on Applications of Computer Vision (IEEE, 1996), pp. 244–251.

14. T. Vetter, T. Poggio, and H. H. Bülthoff, “The importance of symmetry and virtual views in three-dimensional object recognition,” Curr. Biol. 4, 18–23 (1994).

15. S. J. Dickinson, R. Bergevin, I. Biederman, J.-O. Eklundh, R. Munck-Fairwood, A. K. Jain, and A. Pentland, “Panel report: the potential of geons for generic 3-D object recognition,” Image Vision Comput. 15, 277–292 (1997).

16. J. Magnes, G. Schwarz, J. Hartke, D. Burt, and N. Melikechi, “Optomechanical integration method for finite integrals,” Appl. Opt. 46, 6918–6922 (2007).

17. D. Burt, J. Magnes, G. Schwarz, and J. Hartke, “Teaching integration through a physical phenomenon,” Primus 18, 283 (2008).

18. J. Magnes, T. David, R. Khakurel, M. Kinneberg, D. Olson, and N. Melikechi, “Shape recognition through opto-mechanical scanning,” in Frontiers in Optics, OSA Technical Digest (CD) (Optical Society of America, 2008), paper FMJ5.

19. J. M. Khosrofian and B. A. Garetz, “Measurement of a Gaussian laser beam diameter through the direct inversion of knife-edge data,” Appl. Opt. 22, 3406–3410 (1983).

20. R. L. McCally, “Measurement of Gaussian beam parameters,” Appl. Opt. 23, 2227 (1984).

21. O. Wolfgang, “Application of optical shape measurement for the nondestructive evaluation of complex objects,” Opt. Eng. 39, 232–243 (2000).
