Automated Face Detection and Recognition
A Survey
Waldir
[email protected]
Universidade do Minho, Mestrado em Informática
MI-STAR 2010
Face Detection
Locating generic faces in images
©2009 Angelo State University
Face Detection: applications
• Web cams that track the user
• Cameras that shoot automatically when they detect smiles
• Blurring of faces in public image databases
©2009 Google
• Counting of people in a room (e.g. for temperature adjustment)
Face Recognition
Distinguishing a specific face from other faces
©2009 TotallyLooksLike.com
Face Recognition: applications
• Biometrics / access control
"Minority Report" ©2002 20th Century Fox
"Superbad" ©2007 Columbia Pictures
• Searching mugshot databases
• Tagging photo albums
• Detecting fake ID cards
• As a biometric:
  o no action required
  o scan many people at once
  o places: airports, banks, safes
  o data: laptops, medical info
Humans vs. Computers
Humans:
• "Built-in" face detection / recognition ability
• detection & recognition in different areas of the brain
• can be fooled by look-alikes
© SingularityHub.com
Computers:
• Algorithms must be built from scratch
• Virtually perfect memory
• Can work 24/7 without degrading performance
• Can apply stricter matching criteria
Computer representation of faces
• Faces vary across many attributes — they're multidimensional
• Plotted in spaces with more than 3 dimensions
  o in fact, it's commonly one dimension per pixel
  o on a 20×20px image, that's 400 dimensions!
• Humans can't visualize or compute distances intuitively in >3D space. Computers can. But...
• It is computationally intensive, so dimensionality reduction is applied to improve efficiency
PCA: Principal component analysis
• Data is projected into a lower-dimensional space
  o preserving the directions that are most significant
  o the new axes are generally not aligned with the original ones!
cc-by Lydia E. Kavraki <cnx.org/content/m11461/>
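As an illustration of the projection step, here is a minimal PCA sketch in NumPy. The data is random, standing in for real flattened face images; the dimensions (10 samples of 400 pixels, reduced to 5) are chosen only for the example.

```python
import numpy as np

# Toy "images": 10 samples of 400-dimensional vectors (20x20 px, flattened).
# Random data stands in for real face images here.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 400))

# Center the data, then take the SVD; the rows of Vt are the
# principal directions, ordered by significance.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the top k components: 400 dimensions -> k.
k = 5
W = Vt[:k]            # (k, 400) projection basis
Y = Xc @ W.T          # (10, k) low-dimensional coordinates

print(Y.shape)        # (10, 5)
```

Distances between faces can then be computed on the rows of `Y` instead of the full 400-dimensional vectors.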
What defines a "match"?
• Ideally, distance in "facespace" should be:
  o zero, for a specific match in face recognition
  o small, for a generic face
  o large, otherwise
• But there are variations due to:
  o facial expressions
  o illumination variance
  o pose (orientation)
  o dimensionality reduction
The distance threshold
• Faces closer to each other than a given limit (threshold) are considered matches.
• A looser threshold can be used for face detection.
© 1991 M. Turk and A. Pentland
The ROC curve
• Too low a threshold = more false negatives
• Too high a threshold = more false positives
• EER = Equal Error Rate
© 2007 Y. Du and C.-I. Chang
"Handbook of Fingerprint Recognition" © 2004 D. Maltoni et al.
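The trade-off above can be sketched numerically: sweep the threshold, compute both error rates, and find where they cross. The distance scores below are made up for illustration; genuine pairs (same person) should have small distances, impostor pairs large ones.

```python
import numpy as np

# Hypothetical distance scores, for illustration only.
genuine  = np.array([0.2, 0.3, 0.35, 0.4, 0.5])
impostor = np.array([0.45, 0.6, 0.7, 0.8, 0.9])

thresholds = np.linspace(0.0, 1.0, 101)
# False negative rate: genuine pairs rejected (distance above threshold).
fnr = np.array([(genuine > t).mean() for t in thresholds])
# False positive rate: impostor pairs accepted (distance at or below it).
fpr = np.array([(impostor <= t).mean() for t in thresholds])

# The EER sits where the two error rates cross.
i = np.argmin(np.abs(fnr - fpr))
print(f"EER threshold ~ {thresholds[i]:.2f}, error rate ~ {fnr[i]:.2f}")
```

Plotting `fpr` against `1 - fnr` over the swept thresholds gives exactly the ROC curve of the slide.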
Some history...
Francis Galton (1888)
Designed a biometric system for description and identification of faces
© 2007 University of Texas at Austin
Public Domain
Woody Bledsoe (1964)
First implementation of automatic facial recognition in a mug shot database.
• Michael D. Kelly (1970)
  o Visual identification of people by computer
• Takeo Kanade (1973)
  o Computer recognition of human faces
Classification
Zhao et al., 2003: “[The facial recognition problem has] attracted researchers from very diverse backgrounds: psychology, pattern recognition, neural networks, computer vision, and computer graphics.”
• geometric (feature based) × photometric (image based)
• detection × recognition
• pre-processing
• 3D
• Video
Pre-processing
• Face location / normalization
• Later processing doesn't need to scan the whole image
• Morphological operators (very fast)
• Rough operators to detect heads
• Finer confirmation operators to detect prominent features
© Brunelli and Poggio 1993 © Reisfeld et al., 1995
Eigenfaces
• Sirovich and Kirby 1987; Turk and Pentland 1991
• Uses PCA to discover principal components (eigenvectors)
• Each face is described as a linear combination of the main eigenvectors
• Image-based approach (features might not be intuitive)
• Eigenvectors can be translated back to the original pixel-based representation, many producing face-like images (hence the name eigenfaces)
© AT&T Laboratories
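A minimal eigenfaces sketch, in the spirit of Turk and Pentland: compute the eigenvectors of the training set, express each face as a weight vector, and recognize a probe as its nearest neighbour in facespace. Random vectors stand in for real grayscale images, and the sizes (8 faces, 4 components) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
faces = rng.normal(size=(8, 400))          # 8 training "faces", 20x20 px each

mean_face = faces.mean(axis=0)
A = faces - mean_face
# Eigenvectors of the covariance matrix, via SVD: the eigenfaces.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
eigenfaces = Vt[:4]                         # keep 4 principal components

# Each training face becomes a 4-number weight vector ("facespace" coords).
train_weights = A @ eigenfaces.T

# Recognition: project a probe and find the nearest training face.
probe = faces[3] + 0.01 * rng.normal(size=400)   # noisy copy of face 3
w = (probe - mean_face) @ eigenfaces.T
distances = np.linalg.norm(train_weights - w, axis=1)
print(distances.argmin())                   # nearest face (should be 3)
```

Applying the distance threshold from the earlier slide to `distances.min()` turns this nearest-neighbour step into an accept/reject decision.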
Fisherfaces
• Instead of PCA, it uses Linear Discriminant Analysis (LDA), developed by Robert Fisher in 1936
• Variation can be greater due to lighting than due to different faces (Moses et al. 1994)
©1997 Belhumeur et al.
• Shashua [1994] demonstrated that images of the same face under different illumination conditions lie close to each other in the high-dimensional facespace
• LDA can capture these similarities better than PCA, which makes Fisherfaces more illumination-independent than eigenfaces
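The PCA-vs-LDA point can be shown on a two-class toy problem: synthetic 2-D data where one axis separates the two "people" and the other carries a large shared "illumination" spread. This is a sketch of Fisher's two-class criterion, not a full Fisherfaces pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
# Axis 0 separates the classes (identity); axis 1 has large shared
# variance (standing in for illumination).
a = rng.normal(loc=[0.0, 0.0], scale=[0.3, 2.0], size=(50, 2))  # person A
b = rng.normal(loc=[2.0, 0.0], scale=[0.3, 2.0], size=(50, 2))  # person B

ma, mb = a.mean(axis=0), b.mean(axis=0)
# Within-class scatter: sum of the per-class covariances.
Sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)
# Fisher direction: maximizes between-class over within-class scatter.
w = np.linalg.solve(Sw, mb - ma)
w /= np.linalg.norm(w)

print(np.round(np.abs(w), 2))   # ~[1.0, 0.0]: the identity axis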
Neural networks• Based on the natural brain structure
of simple, interconnected neurons• Good at approximating complex prob-
lems without deterministic solutions• Each pixel of the face image is mapped to an input
neuron • The intermediate (hidden-layer) neurons are as many
as thenumber of reduced dimensions that are intended.
• The network “learns” what patterns are likely faces or not
• Initially promising, but Cottrell and Fleming [1990] showed that they can at best match an eigenface approach.
cc-by-sa Cburnett <commons.wikimedia.org>
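The mapping described above can be sketched as a single forward pass: 400 input neurons (one per pixel of a 20×20 image), a small hidden layer playing the role of the reduced dimensions, and one "face" output. The weights here are random placeholders; in practice they would be learned, e.g. by backpropagation, which this sketch omits.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
W1 = rng.normal(scale=0.1, size=(400, 16))   # input -> hidden (untrained)
W2 = rng.normal(scale=0.1, size=(16, 1))     # hidden -> output (untrained)

def face_score(image):
    x = image.reshape(-1)          # flatten 20x20 -> 400 input activations
    h = sigmoid(x @ W1)            # hidden layer: the "bottleneck"
    return float(sigmoid(h @ W2))  # score in (0, 1): face-likeness

score = face_score(rng.normal(size=(20, 20)))
print(score)
```

The hidden-layer width (16) is an arbitrary choice standing in for "the number of reduced dimensions intended".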
Gabor wavelets
• First proposed in 1968 by Dennis Gabor
• Analogous to Fourier series: images are decomposed into a series of wavelets applied at different points
• Further developed into flexible models: elastic grid matching.
GFDL Wikimedia Commons
© Wiskott et al. 1997
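A Gabor wavelet is a sinusoid under a Gaussian window; a small sketch of generating one such kernel (the parameter values are illustrative, not taken from any particular system):

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, wavelength=8.0, phase=0.0):
    """Real part of a 2-D Gabor wavelet: a sinusoid under a Gaussian window."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the sinusoid runs along orientation `theta`.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# A filter bank samples several orientations (and, in practice, scales);
# responses at chosen facial points form the feature vectors ("jets")
# used in elastic grid matching.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(bank[0].shape)   # (21, 21)
```

Convolving an image with each kernel in the bank and sampling the responses at landmark positions gives the per-point features that elastic grid matching compares.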
Active Shape/Appearance Models
• Original concept by Kass et al., 1987: “snakes”, deformable curves that adjust to edges
• Yuille [1987] extended the concept to flexible sets of geometrically related points (not necessarily on a curve)
• Cootes [2001] applies statistical analysis to model and restrict the variation (flexibility) of model points
©2001 Cootes et al.
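The statistical restriction step can be sketched as a point-distribution model in the spirit of Cootes: PCA on aligned landmark sets, with each mode weight clamped to ±3 standard deviations (a commonly cited convention, assumed here). The shapes are synthetic 5-point contours; real input would be annotated, Procrustes-aligned face landmarks.

```python
import numpy as np

rng = np.random.default_rng(4)
base = np.array([0., 0., 1., 0., 2., 1., 1., 2., 0., 2.])   # 5 (x, y) points
shapes = base + 0.1 * rng.normal(size=(30, 10))             # 30 noisy examples

mean_shape = shapes.mean(axis=0)
_, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
eigvals = S**2 / (len(shapes) - 1)          # variance along each mode

def synthesize(b):
    """New shape from mode weights b, clamped to the model's valid range."""
    limit = 3.0 * np.sqrt(eigvals[: len(b)])
    b = np.clip(b, -limit, limit)
    return mean_shape + b @ Vt[: len(b)]

shape = synthesize(np.array([10.0, -10.0]))  # wild weights get clamped
print(shape.shape)                           # (10,)
```

Clamping is what keeps a fitted model "face-shaped": no combination of mode weights can produce a shape far outside the variation seen in training.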
3D
• 2D approaches deal poorly with varying poses (orientation) of the head
• Many have attempted to compensate by storing several views per face
  o obviously resource-consuming
• 3D attempts to solve this issue, using:
©2006 Bowyer et al.
• active range sensors (laser scanners, ultrasound)
• passive sensors (structured light: a grid projected on the face)
• New poses can be matched by deforming the 3D model
Video
• Lower-quality images (frames), due to compression. Reconstructed models will have low accuracy.
• Advantage: temporal coherence, optical flow
• Simplest approach: use frame difference to detect moving foreground objects and match their shapes (blobs) to heads
• Locate faces, then track them
• Reconstruct 3D shape from the relative movement of tracked points. This is called Structure from Motion (SfM)
©2010 Christian Rakete <http://www.dorfpunks.de>
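The frame-difference approach can be sketched in a few lines: pixels that changed between consecutive frames are marked as moving foreground, whose blob shapes would then be matched against head-like outlines. The frames here are synthetic grayscale arrays with an artificial "moving" region; the noise threshold of 30 is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(5)
prev_frame = rng.integers(0, 256, size=(120, 160)).astype(np.int16)
next_frame = prev_frame.copy()
next_frame[40:80, 60:100] = 255              # a "moving" bright region

# Absolute per-pixel difference; threshold out sensor/compression noise.
diff = np.abs(next_frame - prev_frame)
mask = diff > 30

# Bounding box of the detected motion (a crude stand-in for blob analysis).
ys, xs = np.nonzero(mask)
print(ys.min(), ys.max(), xs.min(), xs.max())
```

Real systems would clean the mask with morphological operators (as in the pre-processing slide) before matching blob shapes to heads.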
Comparison
Standard tests are needed for valid comparison of results
Databases: FERET, MIT, Yale, and many smaller ones
Evaluations:
• Face Recognition Vendor Test (FRVT)
• Face Recognition Grand Challenge
• XM2VTS
Conferences:
• International Conference on Audio- and Video-Based Person Authentication (AVBPA)
• International Conference on Automatic Face and Gesture Recognition (AFGR)
Questions?