A Performance Analysis of Various Feature Detectors and their Descriptors for
Panorama Image Stitching
Israa Hadi Ali(1) and Sarmad Salman(2)
(1) IT College, Babylon University, Iraq, [email protected]
(2) MSc student, IT College, Babylon University, Iraq
Abstract
Feature detectors and descriptors play an essential role in computer vision applications such as
image registration, object recognition, and image classification and retrieval. This paper presents an
analysis of the performance of multiple feature detectors and descriptors, namely SIFT, SURF, ORB,
BRIEF, BRISK, and FREAK. They are analyzed in terms of the number of features, the number of
matching points in overlapping regions between images, and the subjective accuracy of stitching.
Extracting different kinds of features from an image increases the chance of obtaining reliable
matches among a variety of scene views. One result of the experiments shows that the AGAST, FAST,
and BRISK detectors yield the highest number of detected key points, whereas STAR, AKAZE, and
MSER yield a lower number. Moreover, the speed of each algorithm is recorded.
Keywords: Feature Detection, Feature Description, Panorama Image Stitching
International Journal of Pure and Applied Mathematics, Volume 119, No. 15, 2018, pp. 147-161. ISSN: 1314-3395 (on-line version). url: http://www.acadpubl.eu/hub/ (Special Issue)
1- Introduction
Feature detection is the process of extracting image information by searching at every point
to see whether there is an image feature of one of the existing feature types, such as a point,
line, corner, or blob. Feature detection methods and algorithms make a considerable contribution to
many applications in computer vision, such as feature matching, 3D scene
reconstruction, image retrieval, and object categorization. Salient features are entities that are
desirable and extend over the entire image [7]. A feature should be highly distinctive and exclusive, i.e. it
should have the same representation for each occurrence in the images. An optimal feature detection
method must be robust to image transformations and distortions such as translation, rotation, scaling,
and changes in intensity, and resilient to noise and compression [19]. Feature description is the process of
describing the interest points and creating a vector which includes all the information concerning
magnitude and direction. Feature descriptors can depend on one or more measures such as
second-order statistics, image transform coefficients, and parametric models [12]. The size of the
descriptor determines the computational complexity: if it is large, it incurs a long computation
time, but if it is small, some useful information may be discarded [9].
Investigating robust feature detectors and descriptors (SIFT, SURF, ORB, BRISK,
BRIEF, etc.) and understanding the differences between them is important, given that
there are tradeoffs in speed, execution time, and accuracy among them.
Many comparative studies of feature detectors and descriptors have elaborated on the
differences between them. Furthermore, the experiments presented provide details about
specific characteristics of each detector and descriptor, such as the number of features detected, the
time complexity, and the degree of resistance to noise, changes in illumination, and a variety of
transformations. In [1], the authors evaluated the performance of the FREAK, SURF, and BRISK
descriptors and many detectors against diverse types of constraints, specifically rotation, scaling, and
illumination. In [5], the authors compared different contemporary key point detectors and
descriptors; their experiments calculated the mean and standard deviation of distances among feature
descriptors under low-light conditions. Other assessments [2], [3], and [4] presented results aimed
at identifying preferred combinations, or at identifying the detector-descriptor pairs that offer
better performance in a specific field or application.
In this paper, the performance of multiple combinations of feature detection
and description techniques is evaluated. The analysis includes comparing the number of
features extracted by each detector with all possible descriptors, comparing the matching points
between sets of images, and recording the feature detection time.
The rest of this paper is organized as follows: Section two presents a brief discussion of
feature detectors and descriptors. Section three is concerned with the analysis of the performance of
these techniques. Finally, Section four presents the conclusions drawn from the study.
2- Feature Detection and Description
The features that can be extracted from images are of two types: global and local
features [12]. Global features describe the image as a whole, for example an attribute such as the
color or texture computed over all pixels. On the other hand, local features involve key points or
interest regions of the image, including corners, blobs, and edges; these are detected and described by
local feature detectors and descriptors. A robust feature detector should be invariant to most
image transformations, such as affine transformations, and resistant to noise and blurring as well.
It should extract any point or region with its corresponding location in order to
achieve optimal matching of the same scene across multiple images. Different feature
detectors and descriptors are described in the following:
1- SIFT Detectors and Descriptors
SIFT, the abbreviation of Scale Invariant Feature Transform, was proposed by Lowe. It
solved many limitations of feature detection and description such as scaling, rotation, affine
transformation, intensity, noise, and viewpoint changes. SIFT consists of four main steps:
scale-space extrema detection, key-point localization, orientation assignment, and key-point
description [2]. In the first step, SIFT uses the Difference of Gaussians (DoG) to estimate scale-space
extrema. The DoG is computed for different octaves and for several layers per octave of the image, figure
(1). An extremum is selected if it is larger or smaller than all of its neighbours in a 3×3
neighbourhood at the current, previous, and next scales, figure (2). The second step is key-point localization, where
interest points are localized and refined by rejecting insignificant and low-contrast points with
respect to a known threshold. In the third step, an orientation is assigned to each key point by
choosing a neighbourhood around the interest point location that depends on scale, and computing the gradient
magnitude and direction for this region. After that, a histogram is
constructed and the highest bin is selected as the representative orientation. The last step is the
key-point descriptor, where the feature vector is generated. SIFT takes a 16×16 block centred on the key
point and splits it into 16 sub-regions of size 4×4. Each sub-region contributes an orientation histogram
with 8 bins, so the length of the vector is 4×4×8 = 128 elements. The descriptor vector must be robust to
changes in illumination and viewpoint and to rotation, and should be compact and highly distinctive [1,6].
Figure 1: Computing the DoG
Figure 2: Comparing a pixel with its scale-space neighbours for extremum detection
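The scale-space extrema step can be sketched in simplified form. The following is a hypothetical 1-D toy, not Lowe's full multi-octave pipeline: blur a signal with two Gaussian widths, subtract to obtain a DoG response, and keep samples that are larger or smaller than both of their neighbours.

```python
# Toy 1-D Difference-of-Gaussians extrema detection (illustrative sketch only).
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Same-length convolution with edge clamping."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_extrema(signal, sigma1=1.0, sigma2=1.6, radius=4):
    """DoG response and the indices of its local extrema."""
    g1 = convolve(signal, gaussian_kernel(sigma1, radius))
    g2 = convolve(signal, gaussian_kernel(sigma2, radius))
    dog = [a - b for a, b in zip(g1, g2)]
    extrema = [i for i in range(1, len(dog) - 1)
               if (dog[i] > dog[i - 1] and dog[i] > dog[i + 1])
               or (dog[i] < dog[i - 1] and dog[i] < dog[i + 1])]
    return dog, extrema

# A step edge produces DoG extrema near the transition.
signal = [0.0] * 10 + [1.0] * 10
_, peaks = dog_extrema(signal)
```

In the OpenCV/Python setup used later in this paper, the full pipeline is available in recent OpenCV versions as `cv2.SIFT_create().detectAndCompute(img, None)`.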
2- SURF Detectors and Descriptors
Speeded-Up Robust Features, abbreviated as SURF, was proposed by Herbert Bay et al. It is a local
feature detector and descriptor used for registration, 3D reconstruction, and object recognition. The SURF
detector is inspired by SIFT [2]; SURF is computationally faster than SIFT and is
robust against image transformations and noise. It consists of two major steps: (1) key point detection and
(2) key point description. In the first step, SURF uses a blob detector relying on the Hessian matrix
to extract maxima as interest points [8]. Orientation is assigned by
using Haar wavelet responses calculated in the local region around each key point with
appropriate Gaussian weighting. In the key point description step, the integral image
in conjunction with Haar wavelets is used to encode the distribution of pixel intensity values. The
neighbourhood around each key point is a circular region. It is divided into 4×4 sub-regions; for each
sub-region, Haar wavelet responses are computed, contributing four values to the
descriptor. The feature descriptor thus has 4×4×4 = 64 dimensions for each key point [5].
3- ORB Detectors and Descriptors
ORB, short for Oriented FAST and Rotated BRIEF, was proposed by Rublee et al. It is
built on the FAST key point detector and the BRIEF descriptor, with significant modifications to
improve performance [13]. It is a fast descriptor, invariant to rotation and resistant to noise. The
key point detection step is based on a modified FAST detector applied over a scale
pyramid of the image. First, interest points are detected; they are then sorted by the Harris corner
measure, and the top N points are selected based on a threshold. FAST does not compute an orientation, so
ORB computes one from first-order moments to achieve rotation invariance. The standard BRIEF
descriptor performs poorly under rotation; to manage this problem, ORB computes a rotation
matrix using the patch orientation, which steers the BRIEF descriptor according to that orientation
[7,10].
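The intensity-centroid orientation that ORB adds to FAST can be sketched as follows (a hypothetical toy patch, not the full ORB implementation): the patch angle is atan2(m01, m10), where m10 and m01 are the first-order moments of intensity about the patch centre.

```python
# Toy intensity-centroid orientation, as used by ORB (illustrative sketch only).
import math

def intensity_centroid_angle(patch):
    """Orientation of a square intensity patch, measured from its centre."""
    n = len(patch)
    c = (n - 1) / 2.0  # geometric centre of the patch
    m10 = sum((x - c) * patch[y][x] for y in range(n) for x in range(n))
    m01 = sum((y - c) * patch[y][x] for y in range(n) for x in range(n))
    return math.atan2(m01, m10)

# A patch that is brighter on its right-hand side points along +x (angle ~ 0).
patch = [[0, 0, 1],
         [0, 0, 1],
         [0, 0, 1]]
angle = intensity_centroid_angle(patch)
```

In OpenCV this computation is handled internally by `cv2.ORB_create()`.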
4- BRISK Detectors and Descriptors
Binary Robust Invariant Scalable Keypoints (BRISK) was proposed by Stefan Leutenegger et
al. It is a key point detector and descriptor inspired by AGAST and BRIEF [5], developed to
overcome the drawbacks of SIFT and SURF, such as their large resource and processing-power
requirements. BRISK uses AGAST, an improvement on FAST that is faster while preserving similar
detection performance, to detect key points, and its descriptor employs the
Hamming distance to achieve low computation time. BRISK consists of three steps: (1) the sampling
pattern, (2) orientation compensation, and (3) sampling pairs [2]. In the first step, the sampling
pattern is a set of points on concentric circles spread around the interest point, which is classified
as a corner or not by the FAST or AGAST detector. Gaussian smoothing is
performed at each sample point to get its value. The sampled point pairs are then grouped into three
sets: (1) short-distance pairs, (2) long-distance pairs, and (3) unused pairs (pairs that belong to
neither of the other two sets) [8]. The second step is orientation compensation, which provides rotation
invariance: the summed gradient over the long-distance pairs determines the
direction of the interest point. Finally, the sampling-pairs step compares the intensity values
of the first and second points of the short-distance pairs. The result is a 512-bit binary descriptor; for
matching key points, BRISK uses the Hamming distance, computed by counting the set bits of the XOR
of two binary descriptors [2].
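The Hamming-distance matching just described can be sketched directly: XOR the two bit strings and count the set bits. The descriptor values below are illustrative small integers standing in for 512-bit BRISK descriptors.

```python
# Toy binary-descriptor matching with the Hamming distance (illustrative sketch).
def hamming(a, b):
    """Number of differing bits between two binary descriptors."""
    return bin(a ^ b).count("1")

def match(query, train):
    """Nearest-neighbour match: for each query descriptor, the closest train index."""
    return [min(range(len(train)), key=lambda j: hamming(q, train[j]))
            for q in query]

query = [0b101100, 0b000011]
train = [0b101110, 0b111111, 0b000001]
matches = match(query, train)  # -> [0, 2]
```

OpenCV exposes the same matching strategy via `cv2.BFMatcher(cv2.NORM_HAMMING)`.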
5- BRIEF Descriptor
BRIEF, the abbreviation of Binary Robust Independent Elementary Features, is a local binary-
string descriptor proposed by Michael Calonder et al. [20]. It offers efficient performance and
good accuracy in robotics applications [9]. BRIEF builds a binary descriptor by comparing the
intensity values of randomly selected pairs of points in an image patch centred on the described
feature. Simply put, each bit in the feature descriptor is set if the
intensity value of the first point of the compared pair is higher than that of the second point,
and reset otherwise. The BRIEF descriptor is sensitive to noise, so initial smoothing is
performed with a 9×9 pixel filter. The Hamming distance is used in lieu of the Euclidean distance to
compute descriptor similarity [12]; for this reason, BRIEF matching is
computationally faster than that of other descriptors. The descriptor is 256 bits long and is computed
over a 31×31-pixel patch. It is robust to contrast and brightness changes, but not to rotation.
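The BRIEF binary test can be sketched as follows. This is a hypothetical toy with 8 sampling pairs on a 5×5 patch, far shorter than the real 256-bit descriptor over a smoothed 31×31 patch: each bit is 1 when the first point of a pre-drawn pair is brighter than the second.

```python
# Toy BRIEF-style binary descriptor (illustrative sketch only).
import random

def brief_descriptor(patch, pairs):
    """Binary descriptor: one intensity comparison per sampling pair."""
    bits = 0
    for (y1, x1), (y2, x2) in pairs:
        bits = (bits << 1) | (1 if patch[y1][x1] > patch[y2][x2] else 0)
    return bits

rng = random.Random(0)  # fixed seed: the same random pairs on every run
size = 5
pairs = [((rng.randrange(size), rng.randrange(size)),
          (rng.randrange(size), rng.randrange(size))) for _ in range(8)]

patch = [[x + y for x in range(size)] for y in range(size)]  # brightness ramp
desc = brief_descriptor(patch, pairs)  # an 8-bit integer descriptor
```

The key property is that the sampling pairs are drawn once and reused for every patch, so two patches of the same feature yield nearly identical bit strings.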
6- FREAK Descriptor
The FREAK (Fast Retina Keypoint) descriptor was proposed by Alexandre Alahi et al. [5].
It is biologically inspired by the human visual system, particularly the retina. It uses a circular
retinal sampling grid in which the density of points is higher near the centre; the sampling density
decreases exponentially with distance from the centre. As in BRISK, the image is smoothed with
Gaussian kernels of different sizes to make each sample point more resistant to noise.
The FREAK descriptor also encodes feature orientation by summing the local gradients over
selected point pairs, and it adopts a coarse-to-fine strategy for describing features. FREAK
produces a binary string descriptor, like BRIEF. A saccadic search starts with the first 16 bytes of the
descriptor and continues only while the distance is lower than a predefined threshold. This
approach rejects most non-matching candidate points early, so computation time is shortened and memory is
saved. The FREAK descriptor is robust to contrast changes, rotation, scaling, and blur [9].
7- FAST Detector
FAST (Features from Accelerated Segment Test) is a technique proposed by Rosten and
Drummond. FAST is a corner detector developed to be faster than many other detectors, so it is used in real-
time, frame-rate applications [11]. The FAST detector draws a Bresenham circle around each point and
evaluates the circle of pixels to extract features [1]. This circle consists of 16 pixels (labelled 1-
16) at a radius of 3. As shown in figure (3), a candidate point p is classified as a corner or not
in two stages. The first stage compares the candidate point with the intensity values of
pixels 1, 5, 9, and 13; if at least three of these pixels are brighter or darker than the candidate point by more than a
predefined threshold, the point is classified as a candidate feature and proceeds to the next stage;
otherwise it is rejected. In the second stage, reached when the three-pixel criterion is satisfied,
all 16 pixels are tested; if 12 contiguous pixels satisfy the brightness condition, then
the point is considered to be a feature. This process is repeated for all other points in the image.
Several techniques have been applied to overcome the limitations and obstacles encountered with
FAST, including machine learning with ID3 (a decision-tree classifier) and non-
maximal suppression [12].
Figure 3: Corner detection by the FAST algorithm
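The segment test itself can be sketched as below. This is a simplified, hypothetical version: it skips the four-pixel pre-test described above, and it uses a contiguity count of 9 (the widely used FAST-9 variant) rather than the 12 of the two-stage description, so that a 90° corner on a toy image passes the test.

```python
# Toy FAST segment test with the standard radius-3 Bresenham circle
# (simplified sketch: full 16-pixel stage only, FAST-9 contiguity count).
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_corner(img, y, x, t=10, n=9):
    """True when >= n contiguous circle pixels are all brighter, or all
    darker, than the centre pixel by more than threshold t."""
    ring = [img[y + dy][x + dx] for dy, dx in CIRCLE]
    centre = img[y][x]
    for sign in (1, -1):               # brighter pass, then darker pass
        flags = [sign * (p - centre) > t for p in ring]
        doubled = flags + flags        # doubling handles wrap-around runs
        run = 0
        for f in doubled:
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False

# A bright square on a dark background: the apex pixel passes the test,
# a pixel deep inside the flat bright region does not.
img = [[100 if (yy >= 5 and xx >= 5) else 0 for xx in range(12)] for yy in range(12)]
corner = fast_corner(img, 5, 5)   # apex of the square
flat = fast_corner(img, 8, 8)     # inside the flat region
```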
8- MSER Detector
The Maximally Stable Extremal Region (MSER) detector was proposed by Matas et al. It is
effective for object recognition and achieves good performance in stereo matching [9]. MSER is a
form of blob detection: it determines correspondences between image elements in two
images taken from different viewpoints. 'Extremal' indicates that every pixel in an
MSER region has lower or higher intensity (i.e. the region is darker or brighter) than all pixels on
the region's outer boundary [3]. In other words, an MSER region consists of connected pixels that remain
unchanged over a range of thresholds. MSER detection is analogous to a 'watershed' process described in [2]:
the set of pixels is divided into two sets, black and white, according to an intensity
threshold. A 'maximally stable' region is one that maintains its extent with only little change
over several selected intensity thresholds. MSER is invariant to affine transformations.
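The 'maximally stable' criterion can be sketched with a 1-D toy (this is not the full MSER algorithm, and the signal and threshold grid are hypothetical): sweep an intensity threshold, measure the area of the dark extremal region at each level, and pick the threshold whose area changes least over neighbouring levels.

```python
# Toy 1-D illustration of MSER's stability criterion (illustrative sketch).
def region_area(values, t):
    """Area of the 'dark' extremal region: pixels with intensity <= t."""
    return sum(1 for v in values if v <= t)

def most_stable_threshold(values, thresholds, delta=1):
    """Index of the threshold minimising relative area change over +/- delta levels."""
    def stability(i):
        a = region_area(values, thresholds[i])
        return (region_area(values, thresholds[i + delta])
                - region_area(values, thresholds[i - delta])) / max(a, 1)
    return min(range(delta, len(thresholds) - delta), key=stability)

# A dark blob (intensity 10) on a bright ramp: the region area is flat for
# thresholds between blob and background, so those thresholds are most stable.
values = [10] * 6 + [200, 210, 220, 230, 240, 250]
thresholds = list(range(0, 256, 16))
stable_t = thresholds[most_stable_threshold(values, thresholds)]
```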
9- Other Feature Detectors and Descriptors
- The STAR key point detector is derived from the CenSurE feature detector [5]. It uses a bi-level
approximation of the LoG (Laplacian of Gaussian) filter.
- Harris-Laplace is a corner detector proposed by Mikolajczyk and Schmid [12]. It combines a
modified version of the Harris corner detector with a Gaussian scale-space representation.
Harris-Laplace retrieves a considerably smaller number of points than the LoG or DoG
detectors. It is invariant to scale, but not to affine transformations.
- GFTT, the abbreviation of Good Features To Track, was proposed by Shi and Tomasi [18]. It is a
feature detector based on examining the local autocorrelation function of the image
intensity.
- The KAZE algorithm was proposed by Alcantarilla et al. It detects and describes 2D features in a non-
linear scale space. KAZE uses non-linear diffusion filtering together with the Additive Operator
Splitting (AOS) method, which makes blurring adaptive to image features. It follows the same
four main steps as SIFT, with significant alterations [15].
- AKAZE (Accelerated KAZE) was created to alleviate the costly computation of KAZE [17]. It uses
FED (Fast Explicit Diffusion) to build the non-linear scale space, and employs a modified version
of the LDB descriptor as a binary descriptor to speed up key-point description.
- LUCID, an abbreviation of Locally Uniform Comparison Image Descriptor, was proposed by
Ziegler et al. [16]. It is based on the linear-time permutation distance between the RGB order
statistics of two image patches. LUCID uses the Hamming distance for comparison in the matching stage.
- LATCH, the abbreviation of Learned Arrangements of Three Patch Codes, was proposed by Levi
and Hassner [17]. It is a binary descriptor that compares 3×3 patches. LATCH selects triples of
patches, the first of which is called the anchor, computes the Frobenius distance between the
anchor and each of the other two patches, and compares the two distances; the best patch triples
are learned during training.
3- The Experiment
The current study evaluates the performance of a variety of combinations of detection
and description methods. A comparison is made between each detector and all descriptors
that can be applied to it. The results are recorded for the number of detected
features and the detection time, and the number of matching features between each detector-
descriptor pair is extracted.
The experiment is executed on a set of images taken from the Internet which have
sufficient overlap between them to give a good number of detected features. These images are of
size 384×512.
Figure 4: The images used in the experiment
The OpenCV library with the Python language is used under the Windows 7 environment to
run the tests. Execution was performed on a laptop with the following specifications: Intel
4th-generation Core i7-4600U 2.10 GHz (2.70 GHz boost) processor and 8 GB RAM.
The number of detected features plays an important role in image stitching and in producing
precise image panoramas: a greater number of detected features leads to more
matches between images, thereby providing a sufficient number of correspondences that are finally
used to calculate the homography matrix.
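The step from matched points to alignment can be sketched as follows. The real pipeline estimates a full 3×3 homography (in OpenCV, `cv2.findHomography` with RANSAC); as a simplified, hypothetical stand-in, this estimates a pure translation as the median displacement of the matches, which tolerates a few bad matches in the same spirit as RANSAC. The match coordinates are made up for illustration.

```python
# Toy alignment estimate from matched key points (illustrative sketch;
# a stand-in for homography estimation, not the paper's actual method).
import statistics

def estimate_translation(matches):
    """matches: list of ((x1, y1), (x2, y2)) matched point pairs."""
    dxs = [x2 - x1 for (x1, _y1), (x2, _y2) in matches]
    dys = [y2 - y1 for (_x1, y1), (_x2, y2) in matches]
    return statistics.median(dxs), statistics.median(dys)

# Three good matches with offset (100, 5) and one outlier.
matches = [((10, 20), (110, 25)),
           ((30, 40), (130, 45)),
           ((50, 15), (150, 20)),
           ((12, 12), (400, 300))]  # bad match: suppressed by the median
dx, dy = estimate_translation(matches)  # dx == 100, dy == 5
```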
The experiment studies the behaviour of each feature detector against a set of
descriptors and records the number of features detected in each combination. As shown in Tables 1, 2, 3,
and 4, AGAST, FAST, and BRISK give the largest number of detected features
with all descriptors, while STAR, MSER, and AKAZE give the lowest number. It is worth noting
that some detectors offer a high detection rate but lose the greater
part of their features in the matching phase. Although SIFT, SURF, and KAZE give a moderate number of
features, they give the highest stitching accuracy. Tables 5 and 6 show the number of matching
features for the combinations shown in Tables 1, 3 and 2, 4 respectively.
Table 1: Detected features for image 1
Detector \ Descriptor: SIFT SURF ORB BRISK BRIEF LUCID DAISY LATCH FREAK
SIFT 956 956 956 827 692 956 956 701 693
SURF 1943 1943 1943 1509 1703 1943 1943 1717 1047
ORB/1000 983 983 983 623 983 983 983 983 150
BRISK 2218 2218 2218 2218 2059 2218 2218 2074 1668
KAZE 986 986 986 835 821 986 986 726 679
AKAZE 507 507 507 507 507 507 507 507 507
Table 2: Detected features for image 1 (continued)
Detector \ Descriptor: BRISK BRIEF LUCID DAISY LATCH FREAK
FAST 3960 3312 4531 4531 3339 3492
STAR 166 166 166 166 166 202
HAR.LAP* 730 718 857 857 725 607
MSER 180 197 282 282 199 85
AGAST 4157 3436 4740 4740 3472 3640
GFTT 840 793 1000 1000 698 816
* HARRIS LAPLACE
Table 3: Detected features for image 2
Detector \ Descriptor: SIFT SURF ORB BRISK BRIEF LUCID DAISY LATCH FREAK
SIFT 941 941 941 857 762 941 941 768 760
SURF 2105 2105 2105 1609 1840 2105 2105 1860 940
ORB/1000 989 989 989 582 989 989 989 989 107
BRISK 2591 2591 2591 2591 2426 2591 2591 2447 1891
KAZE 1063 1063 1063 911 716 1063 1063 826 783
AKAZE 582 582 582 582 582 582 582 582 582
Table 4: Detected features for image 2 (continued)
Detector \ Descriptor: BRISK BRIEF LUCID DAISY LATCH FREAK
FAST 4517 3821 4990 4990 3870 4042
STAR 213 213 213 213 213 160
HAR. LAP. 773 716 998 998 734 587
MSER 126 139 218 218 140 85
AGAST 4640 3880 5147 5147 3930 4104
GFTT 910 692 1000 1000 797 724
Figure (5) shows that algorithms with an abundance of matching key points, as in Tables 5
and 6, produce high-precision results. Otherwise, when the number of matching key points is
significantly small, the results look like figure (6).
Figure 5: Left image is created by KAZE detector and SIFT descriptor. Right image is created by
AGAST detector and BRISK descriptor.
Figure 6: Left image is created by HARRIS LAPLACE detector and DAISY descriptor. Right image
is created by ORB detector and LATCH descriptor.
Table 5: Matching features between image 1 and image 2 (Tables 1 & 3)
Detector \ Descriptor: SIFT SURF ORB BRISK BRIEF LUCID DAISY LATCH FREAK
SIFT 219 219 219 91 73 205 172 73 61
SURF 441 441 441 191 215 440 467 138 56
ORB/1000 81 81 81 72 89 154 73 64 no
BRISK 228 228 228 228 316 429 427 150 117
KAZE 277 277 277 197 132 260 260 91 90
AKAZE 100 100 100 88 102 157 150 64 60
Table 6: Matching features between image 1 and image 2 (Tables 2 & 4)
Detector \ Descriptor: BRISK BRIEF LUCID DAISY LATCH FREAK
FAST 407 486 821 834 364 240
STAR 11 23 69 24 12 10
HAR.LAP. 5 18 292 68 7 no
MSER 25 27 69 29 19 10
AGAST 379 469 837 747 312 235
GFTT 162 157 201 286 142 97
* HARRIS LAPLACE
Experiments show that using the KAZE and AKAZE algorithms as descriptors combined
with all detectors gives results largely consistent with the behaviour of the SIFT and SURF descriptors, in
terms of the number of detected features, the number of matching points, and the accuracy of
stitching; their results have therefore been omitted from the tables. Furthermore, LUCID and DAISY give
the same results as SIFT with all detectors. The only cases in which no matching
points were obtained were the FREAK descriptor with each of the ORB and Harris-Laplace detectors.
Finally, Table 7 reports the time spent detecting features for all detectors with
respect to each of the BRISK and FREAK descriptors.
Table 7: Feature detection time, in milliseconds
Detectors SIFT SURF ORB BRISK FAST STAR KAZE AKAZE HARRIS MSER AGAST GFTT
BRISK
Img1 0.053 0.043 0.013 0.033 0.001 0.005 0.237 0.028 0.390 0.109 0.005 0.007
Img2 0.055 0.052 0.006 0.037 0.002 0.006 0.246 0.029 0.406 0.093 0.006 0.008
FREAK
Img1 0.052 0.053 0.013 0.032 0.001 0.005 0.226 0.027 0.247 0.115 0.005 0.008
Img2 0.055 0.048 0.006 0.040 0.002 0.005 0.228 0.029 0.247 0.114 0.006 0.007
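Per-detector timings like those in Table 7 can be collected with a simple wall-clock harness. The sketch below uses a hypothetical stand-in for the detection call; in the paper's OpenCV setup the timed call would be a detector's `detect` method (e.g. `cv2.FastFeatureDetector_create().detect(img)`).

```python
# Minimal wall-clock timing harness (illustrative sketch; the "detector"
# below is a hypothetical stand-in, not one of the paper's detectors).
import time

def time_detector(detect, image, repeats=5):
    """Best-of-N wall-clock time for one detection call, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        detect(image)
        best = min(best, time.perf_counter() - start)
    return best

# Stand-in "detector": threshold every pixel of a toy image.
toy_image = [[(x * y) % 255 for x in range(64)] for y in range(64)]
elapsed = time_detector(lambda img: [p for row in img for p in row if p > 128],
                        toy_image)
```

Taking the best of several repeats reduces the influence of transient system load on the measurement.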
4- Conclusion
Many combinations of image feature detectors and descriptors were evaluated in order to compare the
outcomes they produce. The comparisons cover the main properties, strengths, and weaknesses of
these techniques. In this study, all feature detectors were compared with all applicable
descriptors, so that the results of feature detection under each descriptor are summarised
concisely. The experiment shows that BRISK, BRIEF, and LATCH
affect the number of features detected by the SIFT, SURF, ORB, BRISK, KAZE, and
AKAZE detectors, whereas the other descriptors do not; in other words, the latter give the
same result under all detectors. Moreover, only the DAISY and LUCID descriptors give similar
results with the FAST, MSER, Harris-Laplace, AGAST, and GFTT detectors. Some detectors lose
most of their features during matching, although they provide a high detection rate.
References
[1] A. Madbouly, M. Wafy, and M. Mostafa, "Performance Assessment of Feature Detector-Descriptor Combination". International Journal of Computer Science Issues, Vol. 12, Issue 5, 2015.
[2] S. Işık and K. Özkan, "A Comparative Evaluation of Well-known Feature Detectors and Descriptors". International Journal of Applied Mathematics, Electronics and Computers, ISSN: 2147-8228, 2014.
[3] Tejas S Patel,"A Study on Feature Extraction Techniques for Image Mosaicing System".
IJARIIE-ISSN(O)-2395-4396, Vol. 2, Issue 3, 2016.
[4] S. Shaikh, and B. Patankar,"Multiple Feature Extraction Techniques in Image Stitching".
International Journal of Computer Applications (0975-8887) Vol. 123, No.15, 2015.
[5] Akash Patel, D. R. Kasat, S. Jain, and V. M. Thakare, "Performance Analysis of Various Feature Detector and Descriptor for Real Time Video based Face Tracking". International Journal of Computer Applications (0975-8887), Vol. 93, No. 1, 2014.
[6] E. Salahat, and M. Qasaimeh. "Recent Advances in Features Extraction and Description
Algorithms: A Comprehensive Survey". 2017 IEEE International Conference on Industrial
Technology (ICIT). pp.1059-1063, 2017.
[7] R. Menaka."A Methodical Review on Image Stitching and Video Stitching Techniques".
International Journal of Applied Engineering Research ISSN 0973-4562 Vol. 11, pp 3442- 3448, No.
5, 2016.
[8] S. Veni,” Image Processing Edge Detection Improvements And Its Applications”, International
Journal Of Innovations In Scientific And Engineering Research (IJISER), Vol 3 Issue 6 Jun 2016,
Pp.51-54.
[9] N. Kaushik, R. Rawat, and A. Bhalla, "A Brief Study of Different Feature Detector and Descriptor". International Journal of Advanced Research in Computer and Communication Engineering, Vol. 5, Issue 4, 2016.
[10] Ch. Mohanbhai, and H. Bhaidasna."A Survey On Image Mosaicing Using Feature Based
Approach". International Journal of Engineering Development and Research, Vol. 5, Issue 1, ISSN:
2321-9939, 2017.
[11] E. Adel, M. Elmogy, and H. Elbakry."Image Stitching based on Feature Extraction Techniques:
A Survey". International Journal of Computer Applications (0975-8887) Vol. 99, No. 6, 2014.
[12] M. Hassaballah, A. Abdelmgeid, and H. Alshazly."Image Features Detection, Description and
Matching". Springer International Publishing Switzerland, 2016.
[13] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: an efficient alternative to SIFT or SURF". Proc. of the IEEE International Conference on Computer Vision, pp. 2564-2571, 2011.
[14] K. Mikolajczyk and C. Schmid, "Scale & affine invariant interest point detectors". International Journal of Computer Vision, 60(1), pp. 63-86, 2004.
[15] Pablo F. Alcantarilla, Adrien Bartoli, and Andrew J. Davison, "KAZE Features", Université d'Auvergne, Clermont-Ferrand, France, 2012.
[16] Andrew Ziegler, E. Christiansen,"Locally uniform comparison image descriptor". Neural
Information Processing Systems (NIPS): pp. 19, 2012.
[17] G. Levi and T. Hassner, LATCH: Learned Arrangements of Three Patch Codes, arXiv preprint
arXiv:1501.03719, 2015.
[18] A. Schmidt, M. Kraft, M. Fularz, and Z. Domagała, "Comparative Assessment of Point Feature Detectors and Descriptors in the Context of Robot Navigation". Journal of Automation, Mobile Robotics & Intelligent Systems, Vol. 7, 2013.
[19] Sh. Mistry and A. Patel, "Image Stitching using Harris Feature Detection". International Research Journal of Engineering and Technology (IRJET), Vol. 03, Issue 04, 2016.
[20] M. Calonder, V. Lepetit, C. Strecha, and P.Fua,"BRIEF: Binary Robust Independent Elementary
Features", in Proceedings of ECCV 2010, pp. 778-792. 2010.