The Theory of Variances


Astrit Rexhepi

Centre for Vision, Speech and Signal Processing
School of Electronics and Physical Sciences
University of Surrey, Guildford GU2 7XH, United Kingdom
[email protected]

http://www.surrey.ac.uk

Abstract. The variance is a well-known concept in mathematics; it measures the degree of fluctuation of a signal (function) about its mean value. Principal axis theory itself is completely based on this concept. In this paper I will show that even the theory of wavelets can be completely based on the concept of variances; moreover, it is only one dimension of a multidimensional concept of variances that leads us to a new, more general, multidimensional representative theory called The Theory of Variances. My motivation came from analyzing the shapes of closed contours using the wavelet transform and the possibility of incorporating the concept of the variance for their representation and description.

Key words: Wavelet theory, Principal axis theory, Shape representation and description


1 Introduction

We begin our discussion by first giving a brief introduction to shapes and their representation and description methods, concepts, and applications. Shape is an object property which has been carefully investigated in recent years, and many papers may be found dealing with numerous applications. Despite this variety, differences among many approaches are limited mostly to terminology. These common methods can be characterized from different points of view [17, 18, 4, 6, 5, 14, 11, 19, 9].

– Input representation form: Object description can be based on boundaries (contour-based, external) or on more complex knowledge of whole regions (region-based, internal).

– Object reconstruction ability: That is, whether an object's shape can or cannot be reconstructed from the description. Many varieties of shape-preserving methods exist; they differ in the degree of precision with respect to object reconstruction.

– Incomplete shape recognition ability: That is, to what extent an object's shape can be recognized from the description if the objects are occluded and only partial shape information is available.

– Local/global description character: Global descriptors can be used only if complete object data are available for analysis. Local descriptors describe local properties using partial information about the objects; thus, local descriptors can be used for the description of occluded objects.

– Mathematical and heuristic techniques: A typical mathematical technique is shape description based on the Fourier transform. A representative heuristic method may be elongatedness.

– Statistical or syntactic object description.

– Robustness of description to translation, rotation, and scale transformations: shape description properties at different resolutions.

Motion analysis is directly related to shape analysis, provided the object can be segmented correctly. A proper representation of the shape and a good description of its characteristics are steps that determine the success or failure of any vision analysis system. Many criteria should be taken into account when choosing one method of representation and description among others, and some of them have already been mentioned. One of the problems in shape analysis is object occlusion. However, the situation is easier to solve if pure occlusion is considered, not combined with orientation variations yielding changes in 2D projections; if this is the case, then only the visible parts of objects should be used for description. Here, the shape descriptor choice must be based on its ability to describe local object properties. If the descriptor gives only a global object description (e.g., object size, average boundary curvature, perimeter), such a description is useless when only a part of the object is visible. If a local descriptor is applied (e.g., description of local boundary changes), this information may be used to compare the visible part of the object to all objects which may appear in the image. Clearly, if object occlusion occurs, the local or global character of


the shape descriptor must be considered first. The second problem that we must take into account when choosing a descriptor is the robustness of the descriptor to translation, rotation, and scale transformations. If we are given only the boundary of an object, then in general we have two choices to represent and describe a shape: contour-based (external information) or region-based (internal information). We focus our attention mainly on contour-based shape representation and description.

1.1 Contour-based shape representation and description: A review

In this section we introduce generally applicable contour-based methods for representation and description. An object can be described by a sequence of unit-size line segments known as a chain code. A point on the contour is selected as the first element, and using an orientation, say clockwise, we trace the contour element by element. The process results in a sequence of numbers. This definition of the chain code is also known as Freeman's code [8]. Boundary length is another way of describing a contour, and it is simply derived from the chain code: since in the chain code vertical and horizontal steps have unit length, defining the perimeter of a boundary is a simple matter. The curvature scalar descriptor finds the ratio between the total number of boundary pixels and the number of pixels where the boundary direction changes significantly. Curvature itself is the rate of change of slope, whose maxima can be used as descriptors. The bending energy of a border (curve) may be understood as the energy necessary to bend a rod to the desired shape; it can be computed as the sum of squares of the border curvature over the border length, and it can also be used as a shape descriptor. Other techniques include Fourier descriptors, which can be invariant to translation and rotation if the co-ordinate system is appropriately chosen [16, 12]; although they are a general technique, they have problems describing local information. A modified technique using a combined frequency-position space that deals better with local curve properties is described in [7]. Representation of a boundary using segments with specified properties is another option for boundary description. If the segment type is known for all segments, the boundary can be described as a chain of segment types. A polygonal representation approximates a region by a polygon, the region being represented by its vertices. There are many types of straight-segment boundary representations [16, 15, 10]; the problem lies in determining the location of the boundary vertices. Boundary segmentation into segments of constant curvature is another possibility for boundary representation. The boundary may also be split into segments which can be represented by polynomials, usually of second order, such as B-splines [22].
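As a concrete illustration of the chain code described at the start of this review, the following minimal sketch implements Freeman's 8-directional code [8] on a closed boundary; the direction table and the toy square contour are this sketch's own assumptions.

```python
import numpy as np

# Direction codes for the eight neighbour offsets (dx, dy), Freeman [8].
FREEMAN = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
           (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    # boundary: list of (x, y) pixels in tracing order, closed contour.
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(FREEMAN[(x1 - x0, y1 - y0)])
    return codes

square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
print(chain_code(square))   # [0, 0, 2, 2, 4, 4, 6, 6]
```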


Sensitivity of shape descriptors to scale (image resolution) is an undesirable feature of the majority of descriptors; in other words, shape description varies with scale, and different results are achieved at different resolutions. This problem is no less important if a curve is to be divided into segments: some curve segmentation points exist at one resolution and disappear at others without any direct correspondence. Considering this, a scale-space approach to curve segmentation that guarantees a continuously changing position of segmentation points is a significant achievement [3, 20, 21, 2, 1]. In this approach only new segmentation points can appear at higher resolutions, and no existing segmentation points can disappear. The technique is based on the application of a unique Gaussian smoothing kernel to a one-dimensional signal (e.g., a curvature function) over a range of sizes, and the result is differentiated twice. To determine the peaks of curvature, the zero-crossings of the second derivative are detected; the positions of the zero-crossings give the positions of the curve segmentation points. Different locations of segmentation points are obtained at varying resolutions (different Gaussian kernel sizes).

In this paper I present The Theory of Variances and its application to shape representation and description. Using this theory we are able to represent a shape and describe its characteristics in a very elegant way and very accurately. The theory itself consists of an unlimited set of rotation and translation invariant representations. I started my journey by considering the unusual case of the wavelet transform, more specifically, the case when the basic wavelet depends on the function under consideration. We start our discussion by first giving a brief introduction to the continuous wavelet transform in Section 2. The concept of the variance transform, which is our theory, is given in Section 3. In Section 3.1 we show how the variance transform responds when applied to structures like lines. The concept of the infinite variance transform and its uses is presented in Section 4. The generalized variance transform and its normalized version are given in Section 5. Momentums of the variance transform are given in Section 6. As the boundary shape in our case is a closed contour, in Section 7 we show the application of the variance transform to closed contours. Finally, in Section 8, we show the application of the variance transform to corner and line detection. Detailed experimental work is given in Section 9.

2 The continuous wavelet transform

Wavelets [23] are very important tools in signal and image processing. Imagine a signal (function) of period T whose first half period contains a sine (or cosine) function of frequency, say, w1, whereas the second half period contains the same function of frequency, say, w2. The Fourier transform of this signal will contain two components corresponding to the frequencies w1 and w2; thus it tells us about the frequency content of the signal, but it does not tell us anything about the positions of these frequencies in the time domain. In a word, the Fourier transform captures all spatial and frequency information, but there is no obvious way to answer the question: at each position, what are the local spatial frequencies? The wavelet


approach is to add a degree of freedom to the representation, which is defined as

\[ \Phi_f(a, b) = \int_{-\infty}^{\infty} f(x)\,\psi_{a,b}(x)\,dx \tag{1} \]

where \(\psi_{a,b}(x)\) is called the basic wavelet or the mother wavelet, defined as

\[ \psi_{a,b}(x) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{x-b}{a}\right). \tag{2} \]

Observe that the transform is a function of the scale a and the shift b. Thus, the wavelet transform of a function f is computed by the inner products of f with the basic wavelet at each of the possible values of a and b.
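The following minimal sketch illustrates Eqs. (1) and (2) numerically. The Mexican-hat mother wavelet, the sample grid, and the two-frequency test signal are illustrative assumptions of this sketch, since the text does not fix a particular psi.

```python
import numpy as np

def mexican_hat(t):
    # A common zero-sum mother wavelet (second derivative of a Gaussian).
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt(f, x, scales):
    # Phi_f(a, b) = integral of f(x) * psi_{a,b}(x) dx, Eqs. (1)-(2),
    # approximated by a Riemann sum on the sample grid x.
    dx = x[1] - x[0]
    out = np.empty((len(scales), len(x)))
    for i, a in enumerate(scales):
        for j, b in enumerate(x):
            psi = mexican_hat((x - b) / a) / np.sqrt(a)
            out[i, j] = np.sum(f * psi) * dx
    return out   # rows indexed by scale a, columns by shift b

x = np.linspace(0, 10, 400)
f = np.where(x < 5, np.sin(2 * np.pi * x), np.sin(6 * np.pi * x))  # w1, then w2
coeffs = cwt(f, x, scales=np.arange(1, 17))
```

Large coefficients appear at small scales only over the second half of the signal, answering the "local frequency" question the Fourier transform cannot.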

3 The variance transform

From the previous section it is clear that the wavelet transform is in fact a correlation of the function f with the mother wavelet for every possible scale a. In a word, the wavelet transform is a measure of proximity. Since the scale factor a is inversely proportional to the frequency (the higher the scale, the lower the frequency, and vice versa), the transform will yield significant values at those parts of the function f having the same frequency content as the mother wavelet. The mother wavelet as defined in (2) is a function that does not depend on f. In this section we will be dealing with a case where the mother wavelet depends on f and satisfies conditions such as finite support and zero sum (its integral should be equal to zero), which we define as:

\[ \psi_{x,s}(\gamma) = f(\gamma) - \frac{1}{2s} \int_{x-s}^{x+s} f(\tau)\,d\tau. \tag{3} \]

Now, the corresponding wavelet transform of the function f will be:

\[ \Phi_f(x, s) = \frac{1}{2s} \int_{x-s}^{x+s} f(\gamma)\,\psi_{x,s}(\gamma)\,d\gamma. \tag{4} \]

The wavelet transform as defined is a two-dimensional function of x and s, and it is similar in methodology to the transform previously defined in Eq. (1), but in our case the basic wavelet depends on f. After substituting (3) into (4) we have:

\[ \Phi_f(x, s) = \frac{1}{2s} \int_{x-s}^{x+s} f(\gamma) \left( f(\gamma) - \frac{1}{2s} \int_{x-s}^{x+s} f(\tau)\,d\tau \right) d\gamma, \tag{5} \]


and, after some rearrangement, it is:

\[ \Phi_f(x, s) = \frac{1}{2s} \int_{x-s}^{x+s} f^2(\gamma)\,d\gamma - \left( \frac{1}{2s} \int_{x-s}^{x+s} f(\gamma)\,d\gamma \right) \left( \frac{1}{2s} \int_{x-s}^{x+s} f(\tau)\,d\tau \right), \tag{6} \]

which is equal to

\[ \Phi_f(x, s) = \frac{1}{2s} \int_{x-s}^{x+s} f^2(\gamma)\,d\gamma - \left[ \frac{1}{2s} \int_{x-s}^{x+s} f(\gamma)\,d\gamma \right]^2. \tag{7} \]

The first term on the right-hand side of the last expression is the mean value of the power of f in the interval from x − s to x + s, whereas the second term represents the power of the mean of f over the same interval. With this in mind, expression (7) can be written as:

\[ \Phi_f(x, s) = \overline{f^2_{x,s}(\gamma)} - \overline{f_{x,s}(\gamma)}^{\,2}. \tag{8} \]

Let ξ be a random variable; from statistical theory [28, 27, 26] the variance of ξ is:

\[ \sigma^2 = \overline{\xi^2} - \overline{\xi}^{\,2}. \tag{9} \]

By analogy, expression (8) represents the variance of f(γ) on the interval of length 2s centered on x. From expressions (8) and (9) we can observe a fundamental difference: the first is a function describing variance fluctuations, whereas the second is a global variance, which is constant. We call it The variance transform and formally denote it as:

\[ \sigma^2_f(x, s) = \Phi_f(x, s) = \frac{1}{2s} \int_{x-s}^{x+s} f^2(\gamma)\,d\gamma - \left[ \frac{1}{2s} \int_{x-s}^{x+s} f(\gamma)\,d\gamma \right]^2. \tag{10} \]

In particular, when the basic wavelet depends on another function g,

\[ \psi_{x,s}(\gamma) = g(\gamma) - \frac{1}{2s} \int_{x-s}^{x+s} g(\tau)\,d\tau, \tag{11} \]

we get the Covariance transform, which will be the subject of a future paper.
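As an illustration, the following sketch computes a discrete VT-matrix directly from Eq. (10). The clipping of the window at the signal borders and the sample signal are assumptions of this sketch, not part of the definition.

```python
import numpy as np

def variance_transform(f, s_max):
    # vt[s-1, x] is the variance of f on the window of half-width s
    # centered on x (Eq. (10)); windows are clipped at the signal ends,
    # one of several possible border policies.
    n = len(f)
    vt = np.zeros((s_max, n))
    for s in range(1, s_max + 1):
        for x in range(n):
            w = f[max(0, x - s):min(n, x + s + 1)]
            vt[s - 1, x] = np.mean(w**2) - np.mean(w)**2
    return vt

t = np.linspace(0, 1, 100)
f = np.sin(2 * np.pi * 5 * t) + 0.5 * t
vt = variance_transform(f, s_max=50)
# As s grows, each row of vt levels off near the global variance of f.
```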

In Figure 1 we give an example of the variance transform applied to a synthetic signal, shown in Figure 1a, whose variance transform for s = 1 to s = 50 is shown in Figure 1b. Observe the signal's shape in Figure 1a: the distances between any two consecutive minima-maxima or maxima-minima projected on x are approximately equal to each other. The only difference between the parts of the signal contained between these extrema is their particular slope (the first derivative with respect to x). Figure 1b shows some characteristic extrema whose projection on (x, s)-space is given in Figure 1c. Observe the position of these extrema in Figure 1c: as we go bottom-up, the first maxima appear at approximately the same scale s; this is because (as mentioned earlier) the projected inter-distances between any two consecutive extrema are approximately equal. The magnitudes of these maxima are highest at about x = 50 and drop as we move away, corresponding to the particular frequencies (slopes) of the signal from Figure 1a. The projection of Figure 1b onto (v, s) is shown in Figure 1d. Observe that as s increases the variance transform tends to level off at about 20, which is the variance of the signal from Figure 1a. Before we go into further details, it is worth studying the behavior of the variance transform when applied to linear functions.

3.1 The response of the variance transform to line structures

Consider a linear function of the form f(x) = kx + b. The variance transform of this function at the origin x = 0, based on (10), will be:

\[ \sigma^2_f(0, s) = \frac{1}{2s} \int_{-s}^{s} f^2(\gamma)\,d\gamma - \left[ \frac{1}{2s} \int_{-s}^{s} f(\gamma)\,d\gamma \right]^2. \tag{12} \]

After substituting f(x) into (12) and replacing the second term of (12) with a constant \(m_c\) (because it represents the mean of the linear function about the origin, which is constant for any s), we have:

\[ \sigma^2_f(0, s) = \frac{1}{3} k^2 s^2 + b^2 - m_c^2. \tag{13} \]

Thus, it is a second-degree function of s. Since the slope k is constant and at the origin \(b = m_c\), the value of \(\sigma^2_f(0, s)\) increases as s increases. The first derivative with respect to s of (13),

\[ \frac{\partial}{\partial s} \left( \sigma^2_f(0, s) \right) = \frac{2}{3} k^2 s, \tag{14} \]

is a linear function of s. It increases linearly as s increases; it does not depend on b or \(m_c\), but it does depend on k. The second derivative with respect to s of (13),

\[ \frac{\partial^2}{\partial s^2} \left( \sigma^2_f(0, s) \right) = \frac{2}{3} k^2, \tag{15} \]

does not depend on s, but it does depend on k, which is the slope of the function.


As the slope is directly proportional to the frequency content, the function (15) is a measure of the frequency: it takes high values at those parts of the signal where the frequency content is high, and vice versa. This is an important property of this function. In order to reveal the above-mentioned properties for a real signal (function), all we need to do is take derivatives with respect to s of the signal's VT-matrix. In Figure 2a we show the top-view plot of the first derivative with respect to s applied to the VT-matrix of the signal from Figure 1a. As we can see, the first maxima have dropped down; they show more local behavior than the first maxima of Figure 1c. Their corresponding magnitudes are highest at about x = 50 and drop as we move away (similar to Figure 1c). Figure 2b shows the top-view plot of the second derivative with respect to s of the same signal's VT-matrix. The only formal difference now is that the scale at which the first maxima appear has dropped further than in Figure 2a, and they reveal much more local properties of the signal. As we know, expression (15), when applied to a linear function, does not depend on s; it depends only on k, the slope of the function. Since the parts of the signal from Figure 1a contained between any two consecutive extrema are approximately linear, their mid-points should correspond closely with the first maxima in Figure 2b. In Figure 2(c to f) we show plots of the third to sixth derivatives with respect to s. They also show some characteristic maxima and minima, as the other derivatives do, but these should depend mainly on the mean value of that part of the signal, because the third, fourth, and higher derivatives with respect to s, based on expression (13), do not depend on the slope. Observe in the plots of Figure 2 that the extrema are ordered diagonally, which indicates a sort of linearity. From equation (15) we know that the second-order derivative with respect to s of the variance transform of a linear function depends only on the slope (frequency) of the signal. In general, any function can be treated as a piecewise rectilinear function. Since the extrema of the second-order derivative with respect to s of the variance transform matrix show up at relatively low scales of s, we are confident in stating that these high extrema are mainly the result of the frequency content of particular segments of the signal. The extrema of the first-order derivative with respect to s of the variance transform, and the extrema of the variance transform alone, result from both the frequency content and the length of those segments of the signal. Finding the positions of these extrema is of particular interest. The first extrema show up first as we go bottom-up in the matrix and their values are high, whereas further up the values of the matrix elements quickly stabilize about a constant. Thus, it is of interest to know the sum with respect to s of the VT-matrix and its derivatives; we expect the sum of these matrices with respect to s to be mainly a result of these extrema. In Figure 3(a to f) we show plots of these sums for the variance transform matrix and its first to fifth derivatives respectively. Observe that the positions (along x) of the first maxima in Figure 2a and Figure 2b coincide with the maxima of Figure 3a and Figure 3b, whereas the maxima of these last two coincide with the points of inflection of the signal from Figure 1a. We also show sums with respect to s of the other derivatives, which represent other local behaviors of the signal. Of no less importance is the sum with respect to x of the VT-matrix and its derivatives with respect to s. In Figure 5(a to f) we show plots of the sum with respect to x of the VT-matrix and its first five derivatives with respect to s respectively. Observe that the first maximum (from the left) moves to the left after each derivation, which is in accordance with our earlier claim that the first maxima of the signal's VT-matrix and its derivatives with respect to s represent global-to-local behavior of the signal.
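A sketch of these feature maps, assuming the `variance_transform` helper and the `vt` matrix from the earlier sketch, and approximating the derivatives with respect to s by finite differences along the scale axis:

```python
import numpy as np

def vt_features(vt, order):
    # Approximate the order-th derivative of the VT-matrix with respect
    # to s by repeated finite differences along the scale axis (axis 0).
    d = vt.copy()
    for _ in range(order):
        d = np.diff(d, axis=0)
    sum_over_s = d.sum(axis=0)   # one value per position x (Figure 3 style)
    sum_over_x = d.sum(axis=1)   # one value per scale s (Figure 5 style)
    return d, sum_over_s, sum_over_x

# Usage with the VT-matrix from the earlier variance_transform sketch:
# d2, per_x, per_s = vt_features(vt, order=2)
# Local maxima of per_x flag positions where, via Eq. (15), the local
# slope (frequency content) of the signal is high.
```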

4 The infinite-variance transform

In the last section we presented the main concepts of the variance transform and its use in finding interest points of a given signal (function). We showed how the sum of the variance transform matrix and its corresponding derivatives may help in finding these characteristic points. In this section we will try to find the analytic expression for the sum with respect to s (vertical sum) of the variance transform matrix. That is, we integrate expression (10) with respect to s:

\[ \int_{-\infty}^{\infty} \sigma^2_f(x, s)\,ds. \tag{16} \]

The above integral would be much easier to treat if we represent it first in thefrequency domain.

Let G(ω) and F(ω) be the Fourier transforms of f²(γ) and f(γ) respectively. Substituting them into (10) for x = 0, we have:

\[ \sigma^2_f(0, s) = \frac{1}{2s} \int_{-s}^{s} \frac{1}{2\pi} \int_{-\infty}^{\infty} G(\omega)\,e^{j\omega\gamma}\,d\omega\,d\gamma - \left[ \frac{1}{2s} \int_{-s}^{s} \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\,e^{j\omega\gamma}\,d\omega\,d\gamma \right]^2. \tag{17} \]

Exchanging the order of integration and rearranging, we have:

\[ \sigma^2_f(0, s) = \frac{1}{2\pi} \int_{-\infty}^{\infty} G(\omega)\,\frac{\sin(\omega s)}{\omega s}\,d\omega - \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\,\frac{\sin(\omega s)}{\omega s}\,d\omega \right]^2. \tag{18} \]

A shift in the time domain corresponds to multiplication of the Fourier transform by \(e^{j\omega x}\), so the variance transform (10) for any x will be:

\[ \sigma^2_f(x, s) = \frac{1}{2\pi} \int_{-\infty}^{\infty} G(\omega)\,\frac{\sin(\omega s)}{\omega s}\,e^{j\omega x}\,d\omega - \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\,\frac{\sin(\omega s)}{\omega s}\,e^{j\omega x}\,d\omega \right]^2. \tag{19} \]


The evaluation of the integral with respect to s of the last expression may lead to integrations (or summations) that are quite difficult or even impossible to solve, because the integral with respect to s of the second term of (19) is the integral of a product, which has no trivial solution. We focus our efforts on solving the second term first. For a chosen \(x = x_k\) the second term of (19) becomes a function of s, so let us denote it by \(\Gamma^2(s)\). If we denote by η(ς) the probability density function of Γ(s), then a good connection to the probabilistic domain comes from a well-known identity in the theory of statistics [26], which states that if the function Γ(s) is ergodic in the mean (stationary signals whose mean value in the time domain is equal to the statistical mean), then the following is valid:

\[ \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \Gamma(s)\,ds \doteq \int_{-\infty}^{\infty} \varsigma\,\eta_\varepsilon(\varsigma)\,d\varsigma, \tag{20} \]

or in a more compact form:

\[ \overline{\Gamma(s)} \doteq \overline{\varepsilon}. \tag{21} \]

The dot above the equals sign in (20) is there to make us aware that the equation is valid only when the function is ergodic. It states that under certain conditions the mean value with respect to s of Γ(s) is equal to its statistical mean. Giving a proof of whether a certain function is ergodic is beyond the scope of this paper, but it is important to say that almost every stationary positive function satisfies this condition; moreover, the identity is valid even if the function is only partially ergodic. Interested readers are referred to the textbook by Lukatela et al. [26]. If we denote by Ω an arbitrary function, equation (20) takes its general form:

\[ \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} \Omega[\Gamma(s)]\,ds \doteq \int_{-\infty}^{\infty} \Omega(\varsigma)\,\eta_\varepsilon(\varsigma)\,d\varsigma, \tag{22} \]

or simply:

\[ \overline{\Omega[\Gamma(s)]} \doteq \overline{\Omega(\varepsilon)}. \tag{23} \]

If Ω represents a power of the function, then for \(\Gamma^2(s)\) equation (21) becomes:

\[ \overline{\Gamma^2(s)} \doteq \overline{\varepsilon^2}. \tag{24} \]

To decompose the right-hand side of the last equation, we represent it as the mean value of two random processes:


\[ \overline{\varepsilon_1 \varepsilon_2} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \varsigma_1 \varsigma_2\,\eta_{\varepsilon_1 \varepsilon_2}(\varsigma_1, \varsigma_2)\,d\varsigma_1\,d\varsigma_2. \tag{25} \]

In the last expression we decomposed \(\varepsilon^2 = \varepsilon_1 \varepsilon_2 = \varepsilon\varepsilon\), whose joint pdf can be represented as the product of the respective marginal pdfs, \(\eta_{\varepsilon_1 \varepsilon_2}(\varsigma_1, \varsigma_2) = \eta_{\varepsilon_1}(\varsigma_1)\,\eta_{\varepsilon_2}(\varsigma_2)\), so that they can be treated as statistically independent random variables, and equation (25) becomes:

\[ \overline{\varepsilon_1 \varepsilon_2} = \int_{-\infty}^{\infty} \varsigma_1\,\eta_{\varepsilon_1}(\varsigma_1)\,d\varsigma_1 \int_{-\infty}^{\infty} \varsigma_2\,\eta_{\varepsilon_2}(\varsigma_2)\,d\varsigma_2 = \overline{\varepsilon_1} \cdot \overline{\varepsilon_2}. \tag{26} \]

Since \(\varepsilon_1 = \varepsilon_2 = \varepsilon\), equation (24) now becomes:

\[ \overline{\Gamma^2(s)} \doteq \overline{\varepsilon_1} \cdot \overline{\varepsilon_2} \doteq \overline{\varepsilon}^{\,2}. \tag{27} \]

Based on equations (20) and (27), we are confident in stating the following conditional identity:

\[ \overline{\Gamma^2(s)} \doteq \overline{\Gamma(s)} \cdot \overline{\Gamma(s)}. \tag{28} \]

The reader should not be confused by the last identity, because it holds whenever Γ(s) is ergodic. Let us point out that in our case Γ(s) has a sinc-like shape and is everywhere positive; its values start from zero and, as s increases, Γ(s) tends oscillatorily to the mean value of f(γ). Thus the variance of the mean value tends to zero as s tends to infinity, and hence Γ(s) can be treated as partially ergodic. Now, based on identity (28), the integral with respect to s of (10) for x = 0 will be:

\[ \int_{-\infty}^{\infty} \sigma^2_f(0, s)\,ds \doteq \int_{-\infty}^{\infty} \frac{G(\omega)}{|\omega|}\,d\omega - \left[ \int_{-\infty}^{\infty} \frac{F(\omega)}{|\omega|}\,d\omega \right]^2, \tag{29} \]

And for any x it is:

\[ \int_{-\infty}^{\infty} \sigma^2_f(x, s)\,ds \doteq \int_{-\infty}^{\infty} \frac{G(\omega)}{|\omega|}\,e^{j\omega x}\,d\omega - \left[ \int_{-\infty}^{\infty} \frac{F(\omega)}{|\omega|}\,e^{j\omega x}\,d\omega \right]^2. \tag{30} \]

The last expression states that the sum of all variances of f(γ) on the intervals (γ − s, γ + s) for s ∈ (0, ∞) is a function of x. The shape of this function as expressed in equation (30) is meaningful if for ω = 0 we set |ω| to one. This makes sense because in this way we remove the constant term of (30), whose value goes to infinity. We call it The infinite variance transform, since it represents the sum of all variances. Its extension to n-dimensional cases is straightforward.
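A discrete sketch of Eq. (30), substituting the FFT for the continuous Fourier transform (so normalization constants differ) and applying the ω = 0 convention described above:

```python
import numpy as np

def infinite_variance_transform(f):
    # FFT sketch of Eq. (30). Following the text, the 1/|omega| weight is
    # set to 1 at omega = 0 to drop the divergent constant term.
    m = len(f)
    w = np.abs(2 * np.pi * np.fft.fftfreq(m))
    w[0] = 1.0
    F, G = np.fft.fft(f), np.fft.fft(f**2)
    return np.fft.ifft(G / w).real - np.fft.ifft(F / w).real ** 2

t = np.linspace(0, 1, 256, endpoint=False)
f = np.sin(2 * np.pi * 5 * t) + 0.5 * t
ivt = infinite_variance_transform(f)   # one value per position x
```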


5 The generalized variance transform and its normalized version

Let us recall expression (19) from the previous section, where the variance transform was expressed in the frequency domain as:

\[ \sigma^2_f(x, s) = \frac{1}{2\pi} \int_{-\infty}^{\infty} G(\omega)\,\frac{\sin(\omega s)}{\omega s}\,e^{j\omega x}\,d\omega - \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\,\frac{\sin(\omega s)}{\omega s}\,e^{j\omega x}\,d\omega \right]^2. \tag{31} \]

The sinc term inside the integrals is a smoothing filter. In general we can use any filter Λ(ω, s), in which case expression (31) takes its general form:

\[ \sigma^2_f(x, s) = \frac{1}{2\pi} \int_{-\infty}^{\infty} G(\omega)\,\Lambda(\omega, s)\,e^{j\omega x}\,d\omega - \left[ \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\,\Lambda(\omega, s)\,e^{j\omega x}\,d\omega \right]^2. \tag{32} \]

Here we come to the concept of variances, because the above expression is not a usual variance (which has a singular meaning). We call it The generalized variance transform. The ratio

\[ \frac{\sigma^2_f(x, s)}{s^n} \tag{33} \]

we call The normalized version of the generalized variance transform.
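A sketch of Eq. (32) via the FFT; the Gaussian filter Λ(ω, s) = exp(−(ωs)²/2) is an illustrative choice of this sketch, and any other filter yields another member of the family:

```python
import numpy as np

def generalized_vt(f, scales):
    # Eq. (32) with an assumed Gaussian filter Lambda(omega, s); the sinc
    # of Eq. (31) would be recovered with lam = np.sinc(omega * s / np.pi).
    m = len(f)
    omega = 2 * np.pi * np.fft.fftfreq(m)
    F, G = np.fft.fft(f), np.fft.fft(f**2)
    out = np.empty((len(scales), m))
    for i, s in enumerate(scales):
        lam = np.exp(-(omega * s) ** 2 / 2)    # Lambda(omega, s)
        out[i] = np.fft.ifft(G * lam).real - np.fft.ifft(F * lam).real ** 2
    return out

t = np.linspace(0, 1, 256, endpoint=False)
f = np.sin(2 * np.pi * 5 * t) + 0.5 * t
gvt = generalized_vt(f, scales=range(1, 51))
norm_gvt = gvt / (np.arange(1, 51)[:, None] ** 2)   # Eq. (33) with n = 2
```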

6 Momentums of the variance transform

If we observe the right-hand side of equation (30), we can see that the term 1/|ω| inside the integral sign is a smoothing filter. In general it could have been any filter. In particular, it can be the nth power of itself, in which case it becomes

\[ \Xi(x, n) = \int_{-\infty}^{\infty} \frac{G(\omega)}{|\omega|^n}\,e^{j\omega x}\,d\omega - \left[ \int_{-\infty}^{\infty} \frac{F(\omega)}{|\omega|^n}\,e^{j\omega x}\,d\omega \right]^2, \tag{34} \]


which we call the momentums of the infinite variance transform. In Figure 4(a to c) we show plots of the function (34) for n = 1, n = 2, and n = 3 respectively when applied to the signal from Figure 1a. Observe the similarity of the VT-matrix sum with respect to s in Figure 3a and the sum using function (34) in Figure 4a, which we call the first momentum. The second and third momentums (Figure 4(b and c)), as we can see, are only smoothed versions of Figure 4a. Figure 4(d and e) represents the momentums for n = -1 and n = -2 respectively. Momentums are an interesting representation of a signal (function): for n equal to 2 or 3 we get a smoothed function whose extrema represent the symmetry of the shape and are much more representative than the Principal axis theory [24], and they play an important role when applied to closed contours (where principal axis theory is not practical), which is the subject of the next section.
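A sketch of the momentum function Ξ(x, n) of Eq. (34), again via the FFT and with the ω = 0 bin set to one, as in the infinite variance transform:

```python
import numpy as np

def momentum(f, n_power):
    # FFT sketch of Eq. (34); w[0] is set to 1 to drop the divergent
    # omega = 0 term. Negative n_power is allowed, as in Figure 4(d, e).
    m = len(f)
    w = np.abs(2 * np.pi * np.fft.fftfreq(m))
    w[0] = 1.0
    F, G = np.fft.fft(f), np.fft.fft(f**2)
    return np.fft.ifft(G / w**n_power).real - np.fft.ifft(F / w**n_power).real ** 2

t = np.linspace(0, 1, 256, endpoint=False)
f = np.sin(2 * np.pi * 5 * t) + 0.5 * t
xi_1, xi_2 = momentum(f, 1), momentum(f, 2)   # first and second momentums
xi_neg = momentum(f, -1)                      # negative momentum
```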

7 The application of the variance transform to closed contours

A closed contour is a function of two variables, x(n) and y(n), where n is the arc-length. Under this condition the following identity holds:

\[ \left( \frac{\partial x}{\partial n} \right)^2 + \left( \frac{\partial y}{\partial n} \right)^2 = 1. \tag{35} \]

We define the variance transform of a closed contour as the sum of the variance transforms of x(n) and y(n). For an infinitesimally small arc we can assume that the contour is linear, and the following is valid:

\[ \sigma^2_x(0, s) = \frac{1}{3} k^2 s^2 + b_x^2 - m_x^2, \tag{36} \]

\[ \sigma^2_y(0, s) = \frac{1}{3} (1 - k^2) s^2 + b_y^2 - m_y^2, \tag{37} \]

\[ \sigma^2_x(0, s) + \sigma^2_y(0, s) = \frac{1}{3} s^2. \tag{38} \]

Observe that the sum of (36) and (37) no longer depends on k. In general, we can shift the origin to any position, but their sum, as shown in (38), remains unchanged and depends only on s. This stands to reason, because the variance of a function is a measure of the fluctuations about the mean value and depends entirely on the geometry of the function; thus it does not depend on the choice of the coordinate system. A 2-D contour is represented with respect to its x and y components, which are functions of n, and since their sum, with respect to Eq. (38), does not depend on k for an infinitesimally small arc, it does not depend on k for the entire contour; hence it is both translation and rotation invariant. Moreover, it is important to point out that the above conclusions remain valid under the generalized variance transform. Now, we can imagine how many rotation and translation invariant representations we can get from it: definitely an infinite number. This is a very important issue for treating closed contours and, in general, shapes of any dimension. In the meantime, this is a justification of why the theory of variances is far more important than the wavelet theory, the principal axis theory, and other theories of shape representation and description.
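A minimal sketch of the contour variance transform follows; the circular (wrap-around) windows are assumed because the contour is closed, and the helper names are this sketch's own.

```python
import numpy as np

def variance_transform_closed(f, s_max):
    # Local variance of f with circular windows (closed contour).
    n = len(f)
    vt = np.zeros((s_max, n))
    for s in range(1, s_max + 1):
        for x in range(n):
            w = f[np.arange(x - s, x + s + 1) % n]   # wrap-around window
            vt[s - 1, x] = np.mean(w**2) - np.mean(w)**2
    return vt

def contour_vt(xs, ys, s_max):
    # Eq. (38): the sum of the x and y variance transforms is rotation
    # and translation invariant.
    return (variance_transform_closed(xs, s_max)
            + variance_transform_closed(ys, s_max))

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
xs, ys = 50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)
vt_c = contour_vt(xs, ys, s_max=40)   # flat rows: a circle has no corners
```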

In Figure 6a we show an example of a synthetic rectangle. The variance transform matrix and its top and side views are shown in Figure 6(b,c,d). The first six derivatives with respect to s of the VT-matrix are shown in Figure 7(a to f) respectively. Observe the first maxima in Figure 7(a and b), which are the first and second derivatives with respect to s of the VT-matrix. The first derivative shows more local properties of the rectangle, whereas the second derivative (Figure 7b) shows much more local properties: the positions of its first maxima and minima correspond exactly with the sides and corners of the rectangle. In Figure 8(a to f) we show the sum with respect to s of the VT-matrix and its first five derivatives with respect to s respectively. If we trace the contour of the rectangle counterclockwise, the maxima of Figure 8a correspond with the mid-points of the two long sides of the rectangle, whereas the minima correspond with the mid-points of the two short sides. The other derivatives in Figure 8(b to f) represent other local behaviors of the rectangle. In Figure 9(a and b) we show plots of the momentum function for n = 1 and n = 2. Compare Figure 9a with Figure 8a: they have approximately the same shape. The shape of Figure 9b, as we can see, is only a smoothed version of Figure 9a. Figure 9(c and d) shows the momentums for n = -1 and n = -2, which (as in Figure 8) represent some local properties of the rectangle. Figure 10(a to f) shows plots of the sum with respect to x of the VT-matrix and its first five derivatives with respect to s respectively. As in the case of the synthetic signal from Figure 1a, the first maximum (from the left) moves to the left.

8 The use of the variance transform for corner and line detection

In addition to its use for global-to-local feature detection, the variance transform can also be used as a tool for corner and line detection at any scale we want. As mentioned at the beginning of this section, the values of the variance transform matrix, as we go bottom-up, start from zero and keep increasing until they reach the first maximum; after that they level off, oscillating about the global variance, where the variance of these oscillations tends to zero as s tends to infinity. Thus, if we take the inverse of the variance transform matrix, its values will start from one and, as we go bottom-up, will drop to zero.


Hence, if we sum the first few rows, we get significant maxima and minima which correspond with the corners and lines of a given boundary. An example of this application is given in Figure 11. Figure 11a represents the sum of all rows of the inverse VT-matrix, Figure 11b the sum of the first ten rows, and Figure 11c the points of the contour (corners and lines) corresponding to the maxima and minima of Figure 11a. We could have used the extrema of Figure 11b instead, provided we smoothed them first. It is suggested that whenever possible we should use just the sum of the first few rows and smooth them slightly, because in this way we get consistent, unbiased positions of corners and lines. Observe that when taking the sum we are also inducing a sort of history for corners and lines, as can be seen in Figure 11a: low and high minima represent long and short sides respectively, whereas maxima represent corners, and since the corners of the rectangle have equal angles, the maxima in Figure 11a have the same height. To determine the approximate lengths of the sides of the rectangle, after we find their positions using the corner and line detector mentioned in this section, we take the columns of the VT-matrix corresponding to the sides of the rectangle, as shown in Figure 12(a,c,e), where (a) is the variance transform of the long side, (c) the variance transform of a corner, and (e) the variance transform of the short side. Their corresponding first derivatives are shown face-to-face in Figure 12(b,d,f); observe the first maxima of (b) and (f), corresponding to the long and short lines: the first maximum in (b) occurs at about s = 13, whereas in (f) it is at about s = 8. In Figure 13(a,b,c) we show plots of the corresponding second derivatives for a long side, a corner, and a short side respectively. The first maxima of (a) and (c) occur at about s = 8 and s = 4 respectively, which are proportional to the lengths of the long and short sides of the rectangle from our example. More examples concerning this matter are given in Section 9.3, Section 9.4, and Section 9.5.
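A sketch of the detector described above. The text does not spell out the exact inversion of the VT-matrix; the form 1/(1 + VT), which starts near one at small s and decays as s grows, is an assumption of this sketch.

```python
import numpy as np

def corner_line_profile(vt, n_rows=10):
    # Assumed "inverse" of the VT-matrix: 1 / (1 + VT). Summing the
    # first n_rows rows (small scales) gives a 1-D profile along the
    # contour whose extrema mark corners and lines.
    inv = 1.0 / (1.0 + vt)
    return inv[:n_rows].sum(axis=0)

# Usage with vt_c from the closed-contour sketch:
# profile = corner_line_profile(vt_c, n_rows=10)
# Maxima of `profile` tend to sit on corners, minima on straight sides.
```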


9 Experimental results

9.1 Experimental results: Synthetic signal

Fig. 1. (a) A synthetic signal. (b) Its corresponding variance transform (VT). (c) Top view of VT. (d) Side view of VT.

Fig. 2. ((a) to (f)): Derivatives with respect to s, from first to sixth respectively, of the synthetic signal's VT-matrix.

Fig. 3. ((a) to (f)): The sum with respect to s of the VT-matrix and of its derivatives from Figure 2.

Fig. 4. ((a) to (c)): Momentums one to three respectively of the synthetic signal. ((d) and (e)): Momentums for -1 and -2 respectively.

Fig. 5. ((a) to (f)): The sum with respect to x of the VT-matrix and its first five derivatives with respect to s respectively.


9.2 Experimental results: Synthetic rectangle

Fig. 6. (a) A synthetic rectangle. (b) Its corresponding variance transform (VT). (c) Top view of VT. (d) Side view of VT.

Fig. 7. ((a) to (f)): The first six derivatives with respect to s of the VT-matrix respectively.

Fig. 8. ((a) to (f)): The sum with respect to s of the VT-matrix and its first five derivatives with respect to s respectively.

Fig. 9. ((a) and (b)): Momentums for 1 and 2. ((c) and (d)): Momentums for -1 and -2.

Fig. 10. ((a) to (f)): The sum with respect to x of the VT-matrix and its first five derivatives with respect to s respectively.

Fig. 11. ((a) and (b)): The output of the corner detector for the total sum and for the sum from 1 to 10 respectively. (c): The plot of the corresponding minima and maxima from (a).

Fig. 12. First column: The variance transform of a corner point and two neighboring middle points respectively. Second column: Their first derivatives with respect to s respectively.

Fig. 13. Top-down: The second derivative with respect to s of the variance transform of a corner point and two neighboring middle points respectively.


9.3 Experimental results: A synthetic closed contour

In this section we present an example of a synthetic closed contour, Figure 14a, which is a typical example containing sides and corners of different lengths and angles. Figure 14(b,c,d) represents the variance transform and its top and side views respectively. In Figure 15(a to f) we show plots of the first six derivatives with respect to s respectively. Observe the scale of the extrema in (a) and (b). In Figure 16 and in Figure 17(a to f) we show plots of the sums with respect to s and with respect to x of the VT-matrix and its first five derivatives with respect to s respectively. Sums of the VT-matrix using momentums from n = 1 to n = 3 are shown in Figure 18(a,b,c) respectively. Compare Figure 18a with Figure 16a: they have approximately the same shape. Figure 18(b and c), as mentioned in previous examples, are only smoothed versions of Figure 18a. It is of interest to observe the magnitudes of the maxima and minima of Figure 18: they are not equal. Namely, the first maximum of Figure 18b is slightly higher than the second maximum, and the first minimum is slightly higher than the second minimum. Figure 18(d to f) shows the momentums for n = -1, n = -2, and n = -3, which represent some local features of the contour. The variance transform corner and line detector applied to this synthetic closed contour is shown in Figure 19. Figure 19a shows the output of the corner and line detector (total sum). Points of the contour corresponding to the maxima and minima of Figure 19a are shown in Figure 19b, where red dots correspond to minima and black dots to maxima. If we trace the contour of Figure 19b counterclockwise from the point with coordinates (150,100) and compare with the minima and maxima of Figure 19a (starting from the first point on the left), we can observe that high and low magnitudes of the minima of Figure 19a correspond with short and long sides of Figure 19b, whereas high and low magnitudes of the maxima correspond with small and large angles in Figure 19b. Keeping the same order of tracing, we have a total of 12 sides in Figure 19b, denoted with red dots. For each of them we calculated the variance transform. The first and second derivatives with respect to s are shown in Figure 20, Figure 21, and Figure 22, where the first and second columns represent the first and second derivatives respectively. Compare the positions of their first maxima with the lengths of the corresponding sides of Figure 19b: they are in proportion.

Fig. 14. (a) A synthetic closed contour. (b) Its corresponding variance transform (VT). (c) Top view of VT. (d) Side view of VT.

Fig. 15. ((a) to (f)): The first six derivatives with respect to s respectively of the synthetic closed contour's VT-matrix.

Fig. 16. ((a) to (f)): The sum with respect to s of the VT-matrix and its first five derivatives with respect to s respectively.

Fig. 17. ((a) to (f)): The sum with respect to x of the VT-matrix and its first five derivatives with respect to s respectively.

Fig. 18. ((a) to (c)): Momentums from 1 to 3 respectively. ((d) to (f)): Momentums from -1 to -3 respectively.

Fig. 19. (a): The output of the corner detector. (b): The corresponding points of the contour for minima (red dots) and maxima (black dots).

Fig. 20. First and second columns: The first and second derivatives with respect to s of the VT-matrix respectively, for the red points 1 to 4, starting from the red point with coordinates (150,100) and tracing the contour counterclockwise.

Fig. 21. First and second columns: The first and second derivatives with respect to s of the VT-matrix respectively, for the red points 5 to 8, starting from the red point with coordinates (150,100) and tracing the contour counterclockwise.

Fig. 22. First and second columns: The first and second derivatives with respect to s of the VT-matrix respectively, for the red points 9 to 12, starting from the red point with coordinates (150,100) and tracing the contour counterclockwise.


9.4 Experimental results: The speaker

In this section we show the application of the variance transform to a real example, shown in Figure 23a. Figure 23(b,c,d) shows plots of the variance transform and its top and side views respectively. The top view of the variance transform and its first five derivatives with respect to s is shown in Figure 24(a to f). As in the other examples, peak structures are evident. Figure 25 (first and second columns) shows the sums (with respect to s and with respect to x respectively) of the VT-matrix and its first three derivatives with respect to s. Momentums for n = 1 and n = 2 are shown in Figure 26(a and b). Compare the shapes of Figure 26a and Figure 25a: they are approximately the same. As in the other examples, the momentum for n = 2 is only a smoothed version of the momentum for n = 1. Observe the magnitudes of the maxima and minima of Figure 26b: they differ considerably. Momentums for n = -1 and n = -2 are shown in Figure 26(c and d). The output of the corner and line detector for the total sum and for the sum from 1 to 10 is shown in Figure 27a and Figure 27b respectively. Observe that, as a result of noise, the corner and line detector produces too many extrema. In order to keep only the significant extrema, we smoothed Figure 27b with an averaging filter of size 37; the result is shown in Figure 27c. The filter size was determined as half the distance of the first maximum from Figure 25d. Points of the contour corresponding to the extrema of Figure 27a are shown in Figure 28a, whereas points of the contour corresponding to the extrema of Figure 27c are shown in Figure 28b, where red and yellow circles represent maxima and minima respectively. Points of the contour that correspond with the extrema of the second momentum (Figure 26b) are shown in Figure 28c: red and green stars correspond to maxima, whereas red and green circles correspond to minima, and red stands for the highest value of the maxima or minima. From this last example, Figure 28c, we can see that the magnitudes of the extrema in Figure 26b are important clues for determining the symmetry of shapes.

Fig. 23. (a) The contour of a speaker. (b) Its corresponding variance transform (VT). (c) Top view of VT. (d) Side view of VT.

Fig. 24. ((a) to (f)): The top view of the variance transform and its first five derivatives with respect to s respectively.

Fig. 25. The sum with respect to s (first column) and the sum with respect to x (second column) of the VT-matrix and its first three derivatives with respect to s respectively.

Fig. 26. (a and b): Momentums for 1 and 2. (c and d): Momentums for -1 and -2.

Fig. 27. The output of the corner detector: (a) total sum; (b) the sum from 1 to 10; (c) a smoothed version of (b).

Fig. 28. The corresponding points of the corner detector: (a) total sum; (b) the sum from 1 to 10 (smoothed), where red and yellow circles represent maxima and minima respectively; (c) the corresponding points of the extrema of the second momentum of the VT (red and green stars correspond to maxima, whereas red and green circles correspond to minima; red stands for the highest value of the maxima or minima).


9.5 Experimental results: Vehicle

In this section we show the application of the variance transform to another real example: the shape of a vehicle, Figure 29a. Figure 29(b,c,d) shows the plot of the variance transform and the corresponding top and side views respectively. The top view of the variance transform and its first five derivatives with respect to s is shown in Figure 30(a to f). Observe that linear structures are evident, as in the other examples. The sum with respect to s of the VT-matrix and its first three derivatives with respect to s is shown in Figure 31 (first column). The sum with respect to x of the VT-matrix and its first three derivatives with respect to s is shown in Figure 31 (second column). Momentums for n = 1 and n = 2 are shown in Figure 32(a and b). Compare the shapes of Figure 32a and Figure 31a: they are approximately the same. Momentums for n = -1 and n = -2 are shown in Figure 32(c and d) respectively. The outputs of the corner and line detector for the total sum and the sum from 1 to 10 are shown in Figure 33(a and b) respectively. Points of the contour corresponding to the maxima of Figure 33b are shown in Figure 33c (red spots). In this case we did not use smoothing, because the smoothness term of the active contour used for delineating the boundaries was higher than in the previous example. Points corresponding to the maxima and minima of the second momentum (Figure 32b) are shown in the same figure, Figure 33c, where black and green circles represent the maxima and minima of Figure 32b.

Fig. 29. (a) The contour of a vehicle. (b) Its corresponding variance transform (VT). (c) Top view of VT. (d) Side view of VT.

Fig. 30. ((a) to (f)): The top view of the variance transform and its first five derivatives with respect to s respectively.

Fig. 31. The sum with respect to s (first column) and the sum with respect to x (second column) of the VT-matrix and its first three derivatives with respect to s respectively.

Fig. 32. (a and b): Momentums for 1 and 2. (c and d): Momentums for -1 and -2.

Fig. 33. (a and b): The output of the corner detector: (a) total sum, (b) the sum from 1 to 10. (c): (Red spots) points of the contour corresponding to the maxima of (b); black and green circles represent the maxima and minima of the second momentum.


References

1. F. Mokhtarian. Silhouette-based object recognition through curvature scale space. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17:539-544, 1995.

2. F. Mokhtarian and A. K. Mackworth. A theory of multiscale, curvature-based shape representation for planar curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14:789-805, 1992.

3. J. Babaud, A. P. Witkin, M. Baudin, and R. O. Duda. Uniqueness of the Gaussian kernel for scale-space filtering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8:26-33, 1986.

4. D. H. Ballard and C. M. Brown. Computer Vision. Prentice-Hall, Englewood Cliffs, NJ, 1982.

5. P. J. Besl. Geometric modelling and computer vision. Proceedings of the IEEE, 76:936-958, 1988.

6. M. Brady. Representing shape. In M. Brady, L. A. Gerhardt, and H. F. Davidson, editors, Robotics and Artificial Intelligence, pp. 279-300. Springer + NATO, Berlin, 1984.

7. G. Eichmann, C. Lu, M. Jankowski, and R. Tolimeiri. Shape representation by Gabor expansion. In Hybrid Image and Signal Processing II, Orlando, FL, pp. 86-94. Society for Optical Engineering, Bellingham, WA, 1990.

8. H. Freeman. On the encoding of arbitrary geometric configurations. IRE Transactions on Electronic Computers, EC-10(2):260-268, 1961.

9. D. C. Hogg. Shape in machine vision. Image and Vision Computing, 11:309-316, 1993.

10. Q. Ji and R. M. Haralick. Corner detection with covariance propagation. In Computer Vision and Pattern Recognition, pp. 362-367. IEEE Computer Society, Los Alamitos, CA, 1997.

11. J. J. Koenderink. Solid Shape. MIT Press, Cambridge, MA, 1990.

12. C. C. Lin and R. Chellappa. Classification of partial 2D shapes using Fourier descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(5):686-690, 1987.

13. P. Maragos. Pattern spectrum and multiscale shape representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 11:701-716, 1989.

14. S. Marshall. Review of shape coding techniques. Image and Vision Computing, 7(4):281-294, 1989.

15. J. Matas and J. Kittler. Junction detection using probabilistic relaxation. Image and Vision Computing, 11:197-202, 1993.

16. T. Pavlidis. Structural Pattern Recognition. Springer-Verlag, Berlin, 1977.

17. T. Pavlidis. A review of algorithms for shape analysis. Computer Graphics and Image Processing, 7:243-258, 1978.

18. T. Pavlidis. Algorithms for shape analysis of contours and waveforms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2(4):301-312, 1980.

19. R. J. Watt. Issues in shape perception. Image and Vision Computing, 11:389-394, 1993.

20. A. P. Witkin. Scale-space filtering. In A. P. Pentland, editor, From Pixels to Predicates, pp. 5-19. Ablex, Norwood, NJ, 1986.

21. A. L. Yuille and T. A. Poggio. Scaling theorems for zero-crossings. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(1):15-25, 1986.

22. A. Blake and M. Isard. Active Contours. Springer, Berlin, 1998.

23. K. Castleman. Digital Image Processing. Prentice-Hall, Englewood Cliffs, NJ, 1996.

24. H. Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24:417-441, 498-520, 1933.

25. R. Gonzalez and R. Woods. Digital Image Processing. Addison-Wesley, 1993.

26. G. Lukatela. Statistichka Teorija Telekomunikacija i Teorija Informacija, Vol. 1. Gradjevinska knjiga, Beograd, 1991.

27. J. E. Freund. Mathematical Statistics. Second edition. Prentice-Hall, Englewood Cliffs, NJ, 1971.

28. P. L. Meyer. Introductory Probability and Statistical Applications. Second edition. Addison-Wesley, 1970.