Lecture 2
Mei-Chen Yeh, 03/09/2010
Outline
• Demos
• Image representation and feature extraction
– Global features
– Local features: SIFT
• Assignment #2 (due: 03/16)
Demos
• Augmented Reality
– http://www.youtube.com/watch?v=P9KPJlA5yds
– http://www.youtube.com/watch?v=U2uH-jrsSxs
• Tracking
– Traffic
– Counting people
• Image search
– MyFinder: http://128.111.56.44/myFinder/
– Simplicity: http://wang14.ist.psu.edu/cgi-bin/zwang/regionsearch_show.cgi
• Image annotation– ALIPR: http://alipr.com/
• Embedded face detection and recognition
• Tiling slide show
• Pivot: http://www.technologyreview.com/video/?vid=533
Multimedia Systems: A Multidisciplinary Subject
• Signal processing
• Data mining
• Machine learning
• Pattern recognition
• Networking
• … and more!
Topics (1)
• Image/video processing
– Feature extraction
– Video syntax analysis
– Compression
Topics (2)
• Content-based image/video retrieval
– Copy detection
– Region-based retrieval
– Multi-dimensional indexing
Topics (3)
• Multimodal systems
– Audio processing
– Multimodality analysis
Topics (4)
• Semantic concept detection
– Object detection
– Object recognition
Topics (5)
• Tracking
– Motion features
– Models
– Single- and multiple-object tracking
Topics (6)
• Quality of Service/Experience
– QoE framework
– VoIP system evaluation
– Imaging system evaluation
Resources for the readings
• ACM International Conference on Multimedia
– The premier annual event on multimedia research, technology, and art
– Held annually since 1993
– >400 attendees
– Program: Content, Systems, Applications, and HC tracks
– Full papers (16% acceptance), short papers (28%)
– Technical demonstrations, open-source software competition, the doctoral symposium, tutorials (6), workshops (11), a brave-new-topic session, panels (2), the Multimedia Grand Challenge
• IEEE Transactions on Multimedia
Image Representations
Multimedia file formats
• A list of some formats used in the popular product “Macromedia Director”
• These formats differ mainly in how data are compressed.
• Features are normally extracted from raw data.
1-bit images
• Each pixel is stored as a single bit (0 or 1), so it is also referred to as a binary image.
• Also called a 1-bit monochrome image, since it carries no color information.
8-bit gray-level images
• Each pixel has a gray-value between 0 and 255. (0=>black, 255=>white)
• Image resolution refers to the number of pixels in a digital image
• A 640 × 480 grayscale image requires how much storage?
– One byte per pixel: 640 × 480 = 307,200 bytes ≈ 300 kB
24-bit color images
• Each pixel is represented by three bytes, usually one each for R, G, and B.
• This format supports 256 × 256 × 256 = 16,777,216 possible colors.
• A 640 × 480 24-bit color image would require 921.6 kB!
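The storage figures above follow directly from pixel count times bit depth. A quick sketch (plain Python; the helper name is my own, not part of any library):

```python
# Back-of-the-envelope storage for uncompressed images at different bit depths.
def image_bytes(width, height, bits_per_pixel):
    """Raw size in bytes of an uncompressed width x height image."""
    return width * height * bits_per_pixel // 8

# 1-bit monochrome, 8-bit grayscale, and 24-bit RGB at 640 x 480:
print(image_bytes(640, 480, 1))    # 38,400 bytes
print(image_bytes(640, 480, 8))    # 307,200 bytes (~300 kB)
print(image_bytes(640, 480, 24))   # 921,600 bytes (921.6 kB)
```

Real file sizes differ, of course, once a compressed format (JPEG, PNG, …) is used.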
(Images: Lena, 1972; Lena, 1997)
Image Features
Feature types
• Global features
– Color
– Shape
– Texture
• Local features
– SIFT
– SURF
– Self-similarity descriptor
– Shape context descriptor
– …
A fixed-length feature vector
Color histogram
• A color histogram counts the pixels having each given value in Red, Green, and Blue (RGB).
• An example histogram with 256³ bins, for 24-bit color images:
Color histogram (cont.)
• Quantization
Color histogram (cont.)
• Problems of such a representation: three very different images (Cases 1–3) can produce exactly the SAME color histogram, because the histogram discards all spatial information.
Search by color histograms
Regional color
• Divide the image into regions
• Extract a color histogram for each region
• Put together those color histograms into a long feature vector
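The three steps above can be sketched in a few lines of numpy; the 4-levels-per-channel quantization and the 2×2 grid are illustrative choices, not prescribed values:

```python
import numpy as np

def regional_color_histogram(img, bins_per_channel=4, grid=2):
    """img: H x W x 3 uint8 RGB array. Quantize each channel, then
    concatenate one color histogram per grid cell into one long vector."""
    h, w, _ = img.shape
    # Uniform quantization: 256 levels -> bins_per_channel levels per channel.
    q = (img.astype(np.int64) * bins_per_channel) // 256
    # Combine the three quantized channels into a single bin index.
    idx = (q[..., 0] * bins_per_channel + q[..., 1]) * bins_per_channel + q[..., 2]
    n_bins = bins_per_channel ** 3
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = idx[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            hist = np.bincount(cell.ravel(), minlength=n_bins).astype(float)
            feats.append(hist / max(cell.size, 1))  # normalize per cell
    return np.concatenate(feats)  # length = grid*grid*n_bins

img = np.zeros((8, 8, 3), dtype=np.uint8)  # all-black toy image
vec = regional_color_histogram(img)
print(vec.shape)  # (256,) = 2*2 cells x 4^3 bins each
```

Because each cell keeps its own histogram, the vector now encodes coarse spatial layout, which the global histogram alone cannot.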
Textures
• Many natural and man-made objects are distinguished by their texture.
• Man-made textures– Walls, clothes, rugs…
• Natural textures– Water, clouds, sand, grass, …
What is this?
Examples
More: http://www.ux.uis.no/~tranden/brodatz.html
Texture features
• Structural
– Describe the arrangement of texture elements
– E.g., the “texton model”, the “texel model”
• Statistical
– Characterize texture in terms of statistics
– E.g., co-occurrence matrix, Markov random field
• Spectral
– Analyze in the spatial-frequency domain
– E.g., Fourier transform, Gabor filters, wavelets
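As an illustration of the statistical approach, here is a minimal numpy sketch of a gray-level co-occurrence matrix plus one derived statistic (contrast); the quantization to 8 levels and the (1, 0) displacement are arbitrary choices for the example:

```python
import numpy as np

def cooccurrence(gray, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for the displacement (dx, dy).
    gray: 2-D uint8 array; intensities are first quantized to `levels`."""
    q = (gray.astype(np.int64) * levels) // 256
    h, w = q.shape
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()  # joint probability of gray-level pairs

def contrast(m):
    """High for busy textures, zero for flat regions."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

flat = np.full((16, 16), 100, dtype=np.uint8)
checker = (np.indices((16, 16)).sum(0) % 2 * 255).astype(np.uint8)
print(contrast(cooccurrence(flat)), contrast(cooccurrence(checker)))
```

A flat patch gives contrast 0; the checkerboard, where every horizontal neighbor pair jumps between extremes, gives a large value.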
Textural Properties
• Coarseness: coarse vs. fine
• Contrast: high vs. low
• Orientation: directional vs. non-directional
• Edge: line-like vs. blob-like
• Regularity: regular vs. random
• Roughness: rough vs. smooth
Shape
• Boundary-based features
– Use only the outer boundary of the shape
– E.g., Fourier descriptor, shape context descriptor
• Region-based features
– Use the entire shape region
– E.g., local descriptors
Shape: Fourier descriptor
Properties
• Invariant to translation, scale, and rotation
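A minimal numpy sketch of how these invariances can be obtained from the Fourier coefficients of the boundary. It assumes the boundary is given as complex samples x + iy and that the first harmonic is nonzero; this is one common normalization scheme, not the only one:

```python
import numpy as np

def fourier_descriptor(boundary, n_coeffs=8):
    """boundary: complex array x + iy of closed-contour samples.
    Returns normalized magnitudes of low-frequency coefficients."""
    coeffs = np.fft.fft(boundary)
    coeffs[0] = 0                 # drop DC term -> translation invariance
    mags = np.abs(coeffs)         # drop phase -> rotation/start-point invariance
    mags = mags / mags[1]         # divide by first harmonic -> scale invariance
    return mags[1:1 + n_coeffs]

# A circle sampled at 64 points: scaling, rotating, and translating it
# leaves the descriptor unchanged.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.cos(t) + 1j * np.sin(t)
d1 = fourier_descriptor(circle)
d2 = fourier_descriptor(3.0 * np.exp(1j * 0.7) * circle + (5 + 2j))
print(np.allclose(d1, d2))  # True
```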
Feature types
• Global features
– Color
– Shape
– Texture
• Local features
– SIFT
– SURF
– Self-similarity descriptor
– Shape context descriptor
– …
A fixed-length feature vector
David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints, IJCV, 2004
What is SIFT?
• The Scale-Invariant Feature Transform (SIFT) is an approach for detecting and extracting local feature descriptors from an image.
• SIFT feature descriptors are reasonably invariant to
– scaling
– rotation
– image noise
– changes in illumination
– small changes in viewpoint
Types of invariance
(Figure: example image pairs showing illumination, scale, rotation, and viewing-angle changes)
(Example SIFT output file: the first line gives the number of keypoints (621) and the feature dimension (128); each keypoint then appears as its location, scale, and orientation, followed by its 128 descriptor values.)
Matching two images
• Densely cover the image (a 500×500-pixel image yields about 2000 feature vectors)
• Distinctive
• Invariant to image scale and rotation, and partially invariant to changes in viewpoint and illumination
• Performs best among local descriptors
– K. Mikolajczyk and C. Schmid, “A performance evaluation of local descriptors,” PAMI 2005.
Simple test (scale and rotation)
• Scale to 60% and rotate by 30 degrees
693 keypoints
349 keypoints
214 matches!
Simple test (illumination)
693 keypoints
633 keypoints
467 matches!
Simple test (different appearance)
693 keypoints
728 keypoints
25 matches!

Simple test (different appearance)
693 keypoints
832 keypoints
1 match!
Simple Test (different appearance with occlusion)
693 keypoints
1124 keypoints
0 matches!
About SIFT…
• How to generate SIFT feature descriptors?
• How to use SIFT feature descriptors (for object recognition, image retrieval, etc.)?
SIFT: Overview
• Major stages of SIFT computation
1. Scale-space extrema detection: identify potential interest points (location, scale)
2. Keypoint localization: localize candidate keypoints, keeping a reduced set of (location, scale)
3. Orientation assignment: identify the dominant orientations, giving (location, scale, orientation)
4. Keypoint descriptor: build a descriptor based on histograms of gradients in a local neighborhood
• Input: an image; output: feature vectors (128-d)
• Stages 1–3 form the interest point detector; stage 4 computes the descriptor
Step 1: Scale-space extrema detection
• How do we detect locations that are invariant to scale change of the image?
• Detecting extrema in scale space
– For a given image I(x, y), its linear scale-space representation is

  L(x, y, σ) = G(x, y, σ) * I(x, y),  where  G(x, y, σ) = 1/(2πσ²) · exp(−(x² + y²)/(2σ²))

– This can be implemented efficiently by searching for local peaks in a series of DoG (difference-of-Gaussian) images:

  D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ)
Step 1: Scale-space extrema detection
(Figure: a stack of Gaussian images at scales σ, kσ, k²σ, …, with DoG images computed between adjacent pairs.)
Step 1: Scale-space extrema detection (cont.)

Each sample X in a DoG image is compared with its 26 neighbors: 8 in the same DoG image and 9 in each of the DoG images one scale above and below. If X is larger or smaller than all of its neighbors, X is selected as a keypoint candidate.
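A minimal numpy/SciPy sketch of this step, using a flat scale stack rather than Lowe's octave-based pyramid; σ = 1.6, k = √2, and the 0.03 contrast gate follow the values quoted later in this lecture, and the test blob's width (≈2.7 pixels) is chosen so the middle DoG responds most strongly:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_extrema(img, sigma=1.6, k=2 ** 0.5, n_scales=4):
    """Build a small Gaussian stack, take DoG differences, and mark samples
    that are the max or min among their 26 neighbors across adjacent DoGs."""
    gaussians = [gaussian_filter(img.astype(float), sigma * k ** i)
                 for i in range(n_scales)]
    dogs = np.stack([g2 - g1 for g1, g2 in zip(gaussians, gaussians[1:])])
    # 3x3x3 neighborhood max/min across (scale, y, x), plus a contrast gate
    # so flat regions do not trivially qualify.
    local_max = maximum_filter(dogs, size=3) == dogs
    local_min = minimum_filter(dogs, size=3) == dogs
    extrema = (local_max | local_min) & (np.abs(dogs) > 0.03)
    extrema[0] = extrema[-1] = False   # border scales lack a full neighborhood
    return dogs, extrema

y, x = np.mgrid[0:33, 0:33]
img = 255 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2 * 2.7 ** 2))  # bright blob
dogs, ext = dog_extrema(img)
print(bool(ext[1, 16, 16]))  # True: the blob center is an extremum at the matching scale
```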
Why DoG?
• An efficient function to compute
• A close approximation to the scale-normalized Laplacian of Gaussian, σ²∇²G
– Lindeberg showed that normalizing the Laplacian with the factor σ² is required for true scale invariance (1994)
– Mikolajczyk found that the maxima and minima of σ²∇²G produce the most stable image features (2002)
• DoG vs. σ²∇²G: since ∂G/∂σ = σ∇²G,

  G(x, y, kσ) − G(x, y, σ) ≈ (k − 1) σ²∇²G
Output of Step 1
~2000 keypoints in a 500×500 image. Too many keypoints!
Step 2: Accurate keypoint localization
• Reject points that have low contrast or are poorly localized along an edge
(Example: a 233×189 image yields 832 extrema; 729 remain after discarding low-contrast points, and 536 after eliminating edge responses.)
Step 2: Accurate keypoint localization
• Another example
(Figure panels: extrema of DoG across scales; after removal of low-contrast points; after removal of edge responses.)
Step 2: Accurate keypoint localization
• Simple method (Lowe, ICCV 1999)
– Use gradient magnitudes: discard keypoints with M(i, j) < 0.1 · max(M)
• More sophisticated method (Brown and Lowe, BMVC 2002)
– Use the Taylor expansion of the scale-space function, and compare the function value at the extremum to a threshold (0.03)
– Use the ratio of eigenvalues of a 2×2 Hessian matrix; eliminate keypoints with a ratio greater than 10
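The eigenvalue-ratio test can be done without computing eigenvalues explicitly, using the trace and determinant of the 2×2 Hessian (as in Lowe's paper: Tr(H)²/Det(H) < (r+1)²/r). A minimal numpy sketch with finite-difference derivatives; the toy patches are my own examples:

```python
import numpy as np

def passes_edge_test(dog, y, x, r=10.0):
    """Reject keypoints whose 2x2 Hessian of the DoG image has an
    eigenvalue ratio greater than r (i.e. edge-like points)."""
    d = dog.astype(float)
    # Finite-difference second derivatives at (y, x).
    dxx = d[y, x + 1] - 2 * d[y, x] + d[y, x - 1]
    dyy = d[y + 1, x] - 2 * d[y, x] + d[y - 1, x]
    dxy = (d[y + 1, x + 1] - d[y + 1, x - 1]
           - d[y - 1, x + 1] + d[y - 1, x - 1]) / 4.0
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:                       # curvatures of opposite sign: reject
        return False
    return tr * tr / det < (r + 1) ** 2 / r

# A blob-like patch passes; a straight step edge fails.
y, x = np.mgrid[0:9, 0:9]
blob = np.exp(-((x - 4.0) ** 2 + (y - 4.0) ** 2) / 4.0)
edge = (x > 4).astype(float)
print(passes_edge_test(blob, 4, 4), passes_edge_test(edge, 4, 4))  # True False
```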
Step 3: Orientation assignment
• To achieve invariance to rotation
• Compute the gradient magnitude and orientation for each image sample L(x, y, σ)
• Form an orientation histogram from the gradient orientations of sample points within a region around the keypoint, weighting each sample by its gradient magnitude and a Gaussian window
• Detect the highest peak
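A minimal numpy sketch of the orientation histogram. Lowe uses 36 bins and a Gaussian with σ equal to 1.5 times the keypoint scale; the window radius here and the 1.5 × radius width are illustrative simplifications:

```python
import numpy as np

def dominant_orientation(L, y, x, radius=4, n_bins=36):
    """Histogram of gradient orientations around (y, x), weighted by
    gradient magnitude and a Gaussian; returns the peak angle in degrees."""
    patch = L[y - radius - 1:y + radius + 2,
              x - radius - 1:x + radius + 2].astype(float)
    dy_, dx_ = np.gradient(patch)
    mag = np.hypot(dx_, dy_)
    theta = np.arctan2(dy_, dx_) % (2 * np.pi)
    # Gaussian weight centered on the keypoint.
    yy, xx = np.mgrid[-radius - 1:radius + 2, -radius - 1:radius + 2]
    w = np.exp(-(xx ** 2 + yy ** 2) / (2 * (1.5 * radius) ** 2))
    hist = np.zeros(n_bins)
    bins = (theta / (2 * np.pi) * n_bins).astype(int) % n_bins
    np.add.at(hist, bins.ravel(), (mag * w).ravel())
    return hist.argmax() * 360.0 / n_bins

# A horizontal intensity ramp: the gradient points along +x, so the peak is 0 degrees.
L = np.tile(np.arange(20, dtype=float), (20, 1))
print(dominant_orientation(L, 10, 10))  # 0.0
```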
Step 4: Local image descriptor
• A 4×4 grid of orientation histograms computed from a 16×16 sample array
• 128-d = 4 × 4 × 8 (orientations)
• Example in the figure: a 2×2 grid on an 8×8 sample array
Step 4: Local image descriptor
• Fairly compact (128 values)
Results
Summary
Scale-space extrema detection → Keypoint localization → Orientation assignment → Keypoint descriptor

An image goes in; feature vectors come out, invariant to scale, rotation, illumination change, and viewpoint change.
Discussions
• Do local features solve the object recognition problem?
• How do we deal with the false positives outside the object?
• How do we reduce the complexity of matching two sets of local features?
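On matching: candidates are usually accepted with Lowe's nearest-neighbor distance-ratio test, sketched below with brute-force search (the 0.8 ratio is the value from the paper; in practice the linear scan is replaced by an approximate kd-tree/BBF search to reduce complexity). The toy descriptors are synthetic:

```python
import numpy as np

def match_ratio_test(desc1, desc2, ratio=0.8):
    """Accept a match only when the closest descriptor in desc2 is much
    closer than the second closest (distinctive nearest neighbor)."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

rng = np.random.default_rng(0)
desc2 = rng.random((50, 128))
desc1 = desc2[:5] + 0.01 * rng.random((5, 128))  # near-duplicates of 5 descriptors
print(match_ratio_test(desc1, desc2))  # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```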
Assignment #2
• Download the SIFT demo program
– http://www.cs.ubc.ca/~lowe/keypoints/
– Or http://www.csie.ntnu.edu.tw/~myeh/courses/s10_ms/Assignments/siftDemoV4.zip
• Prepare at least two pairs of images which you think are similar
– 1st set: SIFT can match well
– 2nd set: SIFT cannot match well
• Email the TA ([email protected]) a report that includes
– Your experimental results
– Your observations