
Texture-Based Vanishing Point Voting

for Road Shape Estimation

Christopher Rasmussen

Dept. Computer & Information Sciences, University of Delaware
[email protected]

Abstract

Many rural roads lack sharp, smoothly curving edges and a homogeneous surface appearance, hampering traditional vision-based road-following methods. However, they often have strong texture cues parallel to the road direction in the form of ruts and tracks left by other vehicles. This paper describes an unsupervised algorithm for following ill-structured roads in which dominant texture orientations computed with Gabor wavelet filters vote for a consensus road vanishing point location. The technique is first described for estimating the direction of straight-road segments, then extended to curved and undulating roads by tracking the vanishing point indicated by a differential “strip” of voters moving up toward the nominal vanishing line. Finally, the vanishing point is used to constrain a search for the road boundaries by maximizing texture- and color-based region discriminant functions. Results are shown for a variety of road scenes including gravel roads, dirt trails, and highways.

1. Introduction

Many complementary strategies for visual road following have been developed based on certain assumptions about the characteristics of the road scene. For example, edge-based methods such as those described in [17, 16, 1] are often used to identify lane lines or road borders, which are fit to a model of the road curvature, width, and so on. These algorithms typically work best on well-engineered roads such as highways which are paved and/or painted, resulting in a wealth of high-contrast contours suited for edge detection. Another popular set of methods for road tracking are region-based [1, 3, 10, 20]. These approaches use characteristics such as color or texture measured over local neighborhoods in order to formulate and threshold on a likelihood that pixels belong to the road area vs. the background. When there is good contrast for the cue chosen, there is no need for the presence of sharp or unbroken edges, which tends to make these methods more appropriate for unpaved rural roads.

Most road images can be successfully interpreted using a variant of one of the two above approaches. Nonetheless, there are some scenes that possess neither strong edges nor contrasting local characteristics. Figure 1(a) shows one such road (the cross was added by our algorithm and is explained in Section 2.1). It is from

BMVC 2004 doi:10.5244/C.18.7



Figure 1: (a) Desert road from DARPA Grand Challenge example set (with vanishing point computed as in Section 2.1); (b) Votes for vanishing point candidates (top 3/4 of image)

a set of “course examples” made available to entrants in the 2004 U.S. DARPA Grand Challenge, an autonomous cross-country driving competition [4] [DARPA air-brushed part of the image near the horizon to obscure location-identifying features]. There is no color difference between the road surface and off-road areas and no strong edges delimiting it. The one characteristic that seems to define such roads is texture, but not so much in a locally measurable sense, because there are bumps, shadows, and stripes everywhere. Rather, one seems to apprehend the roads most easily because of their overall banding patterns. This banding, presumably due to ruts and tire tracks left by previous vehicles driven by humans who knew the way, is aligned with the road direction and thus most apparent because of the strong grouping cue imposed by its vanishing point.

A number of researchers have used vanishing points as global constraints for road following or identification of painted features on roads (such as so-called “zebra crossings,” or crosswalks) [14, 13, 15, 9, 2]. Broadly, the key to the approach is to use a voting procedure like a Hough transform on edge-detected line segments to find points where many intersect. Peaks in the voting function are good candidates for vanishing points. This is sometimes called a “cascaded” Hough transform [19] because the lines themselves may have first been identified via a Hough transform. Similar grouping strategies have also been investigated outside the context of roads, such as in urban and indoor environments rich in straight lines, in conjunction with a more general analysis of repeated elements and patterns viewed under perspective [18, 5].

All of the voting methods for localizing a road’s vanishing point that we have identified in the literature appear to be based on a prior step of finding line segments via edge detection. Moreover, with the exception of [9], vanishing-point-centric algorithms appear not to deal explicitly with the issue of road curvature and/or undulation, which removes the possibility of a unique vanishing point associated with the road direction. Both of these limitations are problematic if vanishing point methods are to be applied to bumpy back-country roads like the


desert scene discussed above.

In this paper we present a straightforward method for locating the road’s vanishing point in such difficult scenes through texture analysis. Specifically, we replace the edge-detection step, which does not work on many such images because the road bands are too low-frequency to be detected, with estimates of the dominant orientation at each location in the image. These suffice to conduct voting in a similar fashion and find a vanishing point.

We also add a second important step to deal with road curvature (both in- and out-of-plane), which is the notion of tracking the vanishing points associated with differential segments of the road as they are traced from the viewer into the distance. By integrating this sequence of directions, we can recover shape information about the road ahead to aid in driving control. The method is easily extended to temporal tracking of the vanishing point over sequences of images.

Finally, the estimated road curvature alone does not provide information about the vehicle’s lateral displacement that would allow centering; for this we need estimates of the left and right road boundaries. Vanishing point information provides a powerful constraint on where these edges might be, however, by defining a family of possible edge lines (for straight roads) and edge contours (for curved roads) radiating outward (below the horizon line) from a single image location. This allows the road segmentation task to be formulated as simply a 2-D search among these curves for left and right boundaries which maximize the difference of some visual discriminant function inside the road region vs. outside it. In this paper, we describe an approach to combining measures of texturedness and color to robustly locate these edges.

2. Methods

There are four significant components to the road following algorithm. First, a dominant texture orientation θ(p) (the direction that describes the strongest local parallel structure or texture flow) is computed at every image pixel p = (x, y). Second, assuming a straight, planar road, all dominant orientations in the image vote for a single best road vanishing point. Third, if the road curves or undulates, a series of vanishing points for tangent directions along the road must be estimated (for a sequence of road images, the deformation of this vanishing point contour from image to image must also be estimated, but due to space considerations it is not described in this paper). Finally, the left and right edges of the road are determined by optimizing a discriminant function relating visual characteristics inside and outside the road region. We describe the last three steps in the following subsections; our method of dominant orientation estimation using n = 72 Gabor wavelet filters [7] at equally spaced angles is explained in detail in [11].
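To make the dominant-orientation step concrete, the following is a minimal sketch of a Gabor-bank orientation estimator, assuming NumPy. The wavelength, envelope width, and kernel size here are illustrative placeholders, not the settings of [11], and the returned angle is rotated by 90° so that it points along the texture flow (the stripes) rather than across it.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(theta, lam=4.0, sigma=2.0, size=9):
    """Complex Gabor kernel whose carrier varies along direction theta.
    lam (wavelength) and sigma (envelope width) are illustrative values."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))  # Gaussian envelope
    return env * np.exp(1j * 2.0 * np.pi * xr / lam)

def dominant_orientation(img, n=72, size=9):
    """Per-pixel dominant orientation over the image interior: the angle of
    the filter with the largest quadrature (magnitude) response, rotated by
    90 degrees to give the direction of parallel structure (texture flow)."""
    windows = sliding_window_view(img, (size, size))  # (H-8, W-8, 9, 9)
    thetas = np.pi * np.arange(n) / n                 # n angles in [0, pi)
    mags = [np.abs(np.tensordot(windows, gabor_kernel(t, size=size),
                                axes=([2, 3], [0, 1])))
            for t in thetas]
    best = np.argmax(np.stack(mags), axis=0)
    return (thetas[best] + np.pi / 2.0) % np.pi
```

Using the magnitude of the complex (quadrature) response makes the estimate insensitive to the phase of the underlying banding, which is why the paper can reuse this quantity later as a texturedness cue.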

2.1. Vanishing Point Voting

For a straight road segment on planar ground, there is a unique vanishing point associated with the dominant orientations of the pixels belonging to the road. Curved segments induce a set of vanishing points (discussed below). Though road vanishing points may lie outside the field of view (FOV) of the road-following camera, it is reasonable to limit our search to the area of the image itself if the


following conditions are met: (1) The camera’s optical axis heading and tilt are aligned with the vehicle’s direction of travel and approximately level, respectively; (2) Its FOV is sufficiently wide to accommodate the maximum curvature of the road; and (3) The vehicle is roughly aligned with the road (i.e., road following is proceeding successfully).

Furthermore, we assert that for most road scenes, especially rural ones, the vanishing point due to the road is the only one in the image. In rural scenes, there is very little other coherent parallel structure besides that due to the road. The dominant orientations of much off-road texture such as vegetation, rocks, etc. are randomly and uniformly distributed with no strong points of convergence. Even in urban scenes with non-road parallel structure, such texture is predominantly horizontal and vertical, and hence the associated vanishing points are located well outside the image.

In the following subsections we describe methods for formulating an objective function votes(v) to evaluate the support of road vanishing point candidates v = (x, y) over a search region C roughly the size of the image itself, and how to efficiently find the global maximum of votes.

2.1.1 Straight Roads

The possible vanishing points for an image pixel p with dominant orientation θ(p) are all of the points (x, y) along the line defined by (p, θ(p)). Because our method of computing the dominant orientations has a finite angular resolution of π/n, uncertainty about the “true” θ(p) should spread this support over an angular interval. Thus, if the angle of the line joining an image pixel p and a vanishing point candidate v is α(p, v), we say that p votes for v if the difference between α(p, v) and θ(p) is within the dominant orientation estimator’s angular resolution. Formally, this defines a voting function as follows:

vote(p, v) = 1 if |α(p, v) − θ(p)| ≤ π/(2n), and 0 otherwise.   (1)

This leads to a straightforward objective function for a given vanishing point candidate v:

votes(v) = Σ_{p ∈ R(v)} vote(p, v)   (2)

where R(v) defines a voting region. For straight, flat roads, we set R(v) to be the entire image, minus edge pixels excluded from convolution by the kernel size, and minus pixels above the current candidate v. Only pixels below the vanishing line l implied by v are allowed to vote because support is only sought from features in the plane of the road (though in urban scenes out-of-plane building features, etc. may corroborate this decision [15]).
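Equations 1 and 2 can be sketched directly (a minimal version assuming NumPy; image y grows downward, and orientations are treated modulo π since they are undirected):

```python
import numpy as np

def vote(p, theta_p, v, n=72):
    """Eq. (1): pixel p supports candidate v if the line joining them agrees
    with p's dominant orientation to within half the resolution pi/n."""
    alpha = np.arctan2(v[1] - p[1], v[0] - p[0])  # angle of line p -> v
    d = abs(alpha - theta_p) % np.pi              # orientations are mod pi
    return 1 if min(d, np.pi - d) <= np.pi / (2 * n) else 0

def votes(v, pixels, thetas, n=72):
    """Eq. (2): total support for candidate v from voters below it
    (only features assumed to lie in the road plane may vote)."""
    return sum(vote(p, t, v, n)
               for p, t in zip(pixels, thetas)
               if p[1] > v[1])                    # below v in image coords
```

The modulo-π wraparound matters: a pixel to the lower-left and one to the lower-right of v can both vote for it even though the raw angles of their lines to v differ by nearly π.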

In this work, we assume that l is approximately horizontal. Other researchers have inferred l for road and urban scenes by computing a second vanishing point obtained with either another set of parallel lines [19, 13, 12] or the cross ratio of equally spaced lines [13]. The road scenes we are considering have insufficient structure to carry out such computations.


An example of votes computed at a 1-pixel resolution over C = the top three-fourths of the image in Figure 1(a) (where the maximum vote-getter vmax is indicated with a cross) is shown in Figure 1(b).

An exhaustive search over C for the global maximum of votes of the type necessary to generate Figure 1(b) is costly and unnecessary. Rather, a hierarchical scheme is indicated in order to limit the number of evaluations of votes and control the precision with which vmax is localized. Because it integrates easily with the tracking methods described in the next subsection, we perform a randomized search by carrying out a few iterations of a particle filter [6] initialized to a uniform, grid-like distribution over C (with spacing ∆x = ∆y = 10).
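The paper uses a particle filter for this search; as a simpler stand-in with the same coarse-to-fine flavor, one can score candidates on the 10-pixel grid and then refine at 1-pixel resolution around the coarse winner. Here votes_fn is any callable scoring a candidate (x, y), and the 3/4-height cutoff mirrors the search region C above:

```python
def find_vmax(votes_fn, width, height, spacing=10):
    """Coarse grid search over the candidate region C (top 3/4 of the
    image), then 1-pixel refinement around the coarse winner. A sketch
    standing in for the few particle-filter iterations in the text."""
    y_max = int(0.75 * height)
    best_v, best = None, float("-inf")
    for y in range(0, y_max, spacing):            # coarse pass
        for x in range(0, width, spacing):
            s = votes_fn((x, y))
            if s > best:
                best_v, best = (x, y), s
    x0, y0 = best_v
    for y in range(max(0, y0 - spacing), min(y_max, y0 + spacing + 1)):
        for x in range(max(0, x0 - spacing), min(width, x0 + spacing + 1)):
            s = votes_fn((x, y))                  # local refinement
            if s > best:
                best_v, best = (x, y), s
    return best_v, best
```

Unlike this deterministic sketch, the particle filter in the paper keeps a whole distribution of candidates alive, which is what makes the strip-to-strip tracking of the next subsection nearly free.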

2.1.2 Curved Roads

Road curvature and/or non-planarity result in a set of different apparent vanishing points associated with different tangents along the section of road seen by the camera. Therefore the procedure of the preceding subsection, which assumes a unique vanishing point, must be modified to estimate the sequence of vanishing points which correspond to differential segments of road at increasing distances from the camera. Furthermore, over a sequence of images gathered as the camera moves along the road, the vanishing point contour thus traced for a single image must itself be tracked from frame to frame.

Suppose the spine of the approaching road section that is visible to the camera is parametrically defined by a nonlinear space curve x(u), with increasing u indicating greater distance along the road. If x(u) lies entirely in one plane then the image of these vanishing points v(u) is a 1-D curve on the vanishing line l. If x(u) is not in the plane, then the vanishing line varies with distance according to l(u) (still assumed to be horizontal) and v(u) is a 2-D curve.

We cannot directly recover v(u) since x(u) is unknown. However, u is a monotonically increasing function of the image scanline s, where s = 0 is the bottom row of pixels, so we can attempt to estimate the closely related curve v(s). This implies modifying Equation 2 to

votes(v(s)) = Σ_{p ∈ R(v, s±∆)} vote(p, v)   (3)

where R(v, s±∆) is now a differential horizontal strip of voting pixels centered on scanline s. Smaller values of the height of the strip 2∆ yield a more accurate but less precise approximation of the road tangent (∆ ≈ 0.1h, where h is the image height, for the results in this paper). s is iterated from 0 until vmax(s) ∈ R(v, s±∆) (roughly the point where the strip crosses the vanishing line).
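A self-contained sketch of Equation 3 (assuming NumPy) simply filters the voters of Equations 1 and 2 by their distance from scanline s, with s = 0 taken as the bottom pixel row as in the text:

```python
import numpy as np

def strip_votes(v, s, pixels, thetas, delta, height, n=72):
    """Eq. (3): the vote of Eq. (2) restricted to a horizontal strip of
    voters within +/- delta scanlines of s (s = 0 is the bottom row)."""
    total = 0
    for (x, y), theta in zip(pixels, thetas):
        if abs((height - 1 - y) - s) > delta:
            continue                               # outside the strip
        if y <= v[1]:
            continue                               # only voters below v
        alpha = np.arctan2(v[1] - y, v[0] - x)     # line from pixel to v
        d = abs(alpha - theta) % np.pi
        if min(d, np.pi - d) <= np.pi / (2 * n):   # Eq. (1) threshold
            total += 1
    return total
```

Because each strip contains far fewer voters than the whole image, the per-candidate cost of Equation 3 is much lower than that of Equation 2, which is what makes sweeping s up the image affordable.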

A strip-based approach to vanishing point detection for curvature estimation was also used in [9] for edge-detected road boundaries and lane lines, but with only a few non-overlapping strips.

Furthermore, we do not simply estimate a best vanishing point fit vmax(s) for every s independently by rerunning the full randomized search over C as described in the previous subsection. Rather, we track the vanishing point by continuing to run the particle filter with weak dynamics p(v(s) | v(s − 1)) (e.g., a low-variance, circular Gaussian). This allows a more accurate estimate of vmax(s) because of


the concentration of particles already in the solution area, and reduces the chance of misidentification of the vanishing point due to a false peak somewhere else in the image.

The vanishing point of each strip s implies a tangent to the image of the road curve at s. By hypothesizing an arbitrary point on the road, we can integrate this tangent function over s (i.e., with Euler steps) and thereby trace a curve or “flow line” followed by the road point. The results of this process are illustrated in Figure 4 for two example road segments.
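The tangent integration reduces to a per-row Euler step: from a hypothesized road point on the bottom row, each strip's vanishing point dictates the direction of the next one-row step. In this sketch, vps is an assumed list mapping strip index s to its estimated vanishing point (vx, vy) in image coordinates:

```python
def trace_flow_line(x0, vps, height):
    """Trace a road "flow line" upward from image point (x0, height-1):
    at step s, move one row toward that strip's vanishing point v(s)."""
    pts = [(float(x0), float(height - 1))]
    for vx, vy in vps:
        x, y = pts[-1]
        if y <= vy:                      # reached the local vanishing line
            break
        dx = (vx - x) / (y - vy)         # horizontal change per one-row step
        pts.append((x + dx, y - 1.0))
    return pts
```

For a straight road (a single fixed vanishing point for all s) the Euler steps are exact and the traced points stay on the line from (x0, height−1) to the vanishing point; for curved roads the drift of v(s) bends the flow line accordingly.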

2.2. Constrained Road Segmentation

Searching for the road boundaries de novo would be onerous, but knowledge of the vanishing point location(s) (plural for curved roads) provides a powerful constraint on where they may be. Regardless of how curvy the road is, the flow lines induced by the vanishing point tracking process represent a 1-D family of possible road edges. Thus, the search for an optimal pair of left and right edges (l, r) is only 2-dimensional, and fairly tightly bounded by the range of possible road widths and the vehicle’s maximum expected lateral departure from the road midline.

The criterion Q(l, r) which we seek to optimize is the difference between the average values of some characteristic J within the image road region road(l, r) (as defined by the candidate left and right edge curves and the borders of the image) and that characteristic in the region outside the road, offroad(l, r) (see [8] for a related approach applied to autonomous harvesting). For MAP estimation, we combined this with a weak Gaussian prior on edge locations p(l, r) to obtain the following expression:

Q(l, r) = |J̄(road(l, r)) − J̄(offroad(l, r))| · p(l, r)   (4)

We used four region-based cues to distinguish the road: the color channels JR, JG, and JB; and a measure of texturedness JT. Because it was already computed to obtain dominant orientations, it was inexpensive to use the magnitude of the maximum complex response JT = Icomplex(x, y) at each pixel.¹

The objective functions QT for texturedness, QR for redness, QG for greenness, and QB for blueness are shown in the lower part of Figure 2 for an example straight road image (upper left) and its computed vanishing point. The possible angles θ(l) and θ(r) of the straight candidate road edges with respect to the horizontal vanishing line were limited to n = 72 discrete values. Thus the Q functions are plotted as 72×72 height maps, with each row representing a hypothetical left edge and each column a hypothetical right edge.
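For the straight-road case, evaluating Equation 4 at one candidate edge pair can be sketched as follows (assuming NumPy; J is a per-pixel cue image, the edges are rays from the vanishing point at angles theta_l and theta_r measured below the vanishing line, and the Gaussian prior is folded in as a precomputed scalar):

```python
import numpy as np

def Q(J, vp, theta_l, theta_r, prior=1.0):
    """Eq. (4): |mean of cue J inside the candidate road wedge - mean
    outside it|, weighted by a prior on the edge pair (l, r)."""
    h, w = J.shape
    ys, xs = np.mgrid[0:h, 0:w]
    below = ys > vp[1]                          # only the road plane counts
    ang = np.arctan2(ys - vp[1], xs - vp[0])    # in (0, pi) below vp
    lo, hi = min(theta_l, theta_r), max(theta_l, theta_r)
    road = below & (ang >= lo) & (ang <= hi)    # wedge between the edges
    off = below & ~road
    if road.sum() == 0 or off.sum() == 0:
        return 0.0
    return abs(J[road].mean() - J[off].mean()) * prior
```

Sweeping theta_l and theta_r over the 72 discrete angles and tabulating this value produces exactly the kind of 72×72 height map shown in Figure 2.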

The computed road edges for each cue corresponding to the maximum of its objective function, indicated with a cross in the height map, are plotted as lines on the input image. Each of the color channel cues localizes the left road edge well,

¹We have experimented with directly using the distribution of pixels JV that voted for vmax, because by our assumptions the density of such voters should be much heavier within the road than outside it. For heavily banded roads, this approach was successful, but it did not extend to lightly-textured roads with strong edges (like paved roads with painted lanes, for which our vanishing point localization method works quite well). The key problem is that voters for such roads are concentrated along their edges rather than distributed throughout the interior of the road region.

Page 7: Texture-Based Vanishing Point Voting for Road Shape Estimation · under perspective [18, 5]. All of the voting methods for localizing a road’s vanishing point that we have identified


Figure 2: Road segmentation example: (Top left): Image with best texture-based edges (inner cyan lines), color-based edges (outer red, blue, green); (Top right): Best left and right edge estimates from green cue; (Bottom): Underlying objective functions Q for texture and color channels.

while the right edge is found correctly only by the blue and green cues (the green line is overdrawn by the blue). The red cue has a bimodal distribution on the right edge and chooses the outer, incorrect option, while the texture cue’s estimate errs inside the road on both sides.

Empirically, we have found that the left and right road edges are well-localized over a diverse set of straight-road images in one of two ways: (1) One or more cues find both edges accurately; or (2) One or more cues find only one edge accurately, while another one or more cues localize the other edge well. Which cue or cues exhibit good performance depends on the characteristics of the road material, lighting conditions, and the distribution of flora and rocks in the off-road image regions. Situation (2) most often occurs because the background has different characteristics on the left and right sides of the road.

We have gotten promising preliminary results based on a simple measure of the peakedness of the Q function. Specifically, we rank every cue a’s left edge estimate l̂a and right edge estimate r̂a separately by calculating the kurtosis of the 1-D functions Qa(l, r̂a) and Qa(l̂a, r), respectively, with the highest winning. The intuition behind this measure is that peakedness is correlated with confidence in the solution, and all else being equal we want a left edge estimate that is clearly above the other choices given the right edge estimate, and vice versa. The best left and right edges in the example image were both found by the green cue using this procedure, as shown in the upper right of Figure 2.
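The ranking step can be sketched as follows (a minimal version assuming NumPy; each cue supplies its 72×72 Q height map, and only the left-edge ranking is shown since the right edge is handled symmetrically):

```python
import numpy as np

def kurtosis(q):
    """Sample kurtosis of a 1-D slice of Q; larger means more peaked."""
    q = np.asarray(q, dtype=float)
    m, s = q.mean(), q.std()
    return ((q - m) ** 4).mean() / s ** 4 if s > 0 else 0.0

def rank_left_edges(Q_by_cue):
    """Rank each cue's left-edge estimate by the kurtosis of the 1-D
    function Q_a(l, r_hat_a) -- the column of the height map through the
    cue's own maximum -- and return the winning cue and its estimate."""
    best_cue, best_k, best_l = None, -1.0, None
    for cue, Qm in Q_by_cue.items():
        l_hat, r_hat = np.unravel_index(np.argmax(Qm), Qm.shape)
        k = kurtosis(Qm[:, r_hat])     # vary l with r fixed at r_hat
        if k > best_k:
            best_cue, best_k, best_l = cue, k, l_hat
    return best_cue, best_l
```

A cue whose Q map has a single sharp spike thus outranks one whose map is broad or flat, matching the intuition that peakedness stands in for confidence.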


Figure 3: Computed vanishing points are shown as crosses, with Gaussian fits of a set of human responses marked with ellipses

3. Results

Straight roads. Vanishing point localization by dominant texture orientation voting worked robustly on predominantly straight roads with a wide variety of surface characteristics. Some results are shown in Figure 3. 16 illustrative images (the “Mojave” data) were chosen from a large set of high-resolution digital photographs taken on a scouting trip along a possible Grand Challenge route in the Southern California desert. The algorithm was run on resampled 160×120 versions of the images using particle-based search; the figure shows the computed vmax for 12 of the 16 images with a green cross. To assess the algorithm’s performance vs. human perception of the vanishing point location, we conducted a small web-based study: ∼30 people were given a short definition of road vanishing points, shown two different example images with the vanishing point marked, and asked to click where they thought the vanishing point was in 640×480 versions of each of the 16 Mojave images. 16 subjects completed the study; 11 of their 256 choices (4.3%) were manually removed as obvious misclick outliers. The figure indicates the distribution of human choices with purple 3σ error ellipses, most of which were fairly tight. The median positional difference at the 320×240 scale between our algorithm’s estimates and the human choices was 6.2 pixels horizontally and 4.3 pixels vertically.

Curved roads. Examples of tracking the vanishing point for curved, non-planar roads from a set of 720×480 DV camera images captured on a variety of roads at Fort Indiantown Gap, PA are shown in Figures 4(a) and (b). The yellow contour indicates the trace of the estimated vanishing point positions as the scanline s was incremented and the set of voters changed to points farther along the road. Horizontal movement of the vanishing point is of course proportional to left-right



Figure 4: Tracks of vanishing point (yellow) for curved road and induced road“flow lines” (green)

road curvature, and vertical movement indicates a rise or dip in the road. Thus we see that the road in Figure 4(a) is relatively level with a slight dip in the middle, which is correct. On the other hand, the vanishing point for the road in Figure 4(b) rises, indicating an approaching hill.

A more intuitive interpretation of the vanishing point motion can be obtained by simply integrating the implied tangent to the image of the road curve (the line joining any point on scanline s with its vanishing point v(s)) for a few sample points. These are the green contours, or “flow lines,” in the images in Figure 4. These curves are not meant to indicate the width of the road, for no segmentation has been carried out, but rather only to illustrate the structure of the terrain that the road ahead traverses.

Segmentation. The vanishing points found automatically in Figure 3 were used to constrain a search for the road edges in those images. The best left and right edges obtained using the method of Section 2.2 are overlaid on the input images, with the color of each side indicating which cue’s estimate was used (cyan = JT, with the other colors as expected). For efficiency, only n = 72 possible edge angles relative to the vanishing point location were tested, limiting the angular resolution of road boundary localization. Nonetheless, the performance was strong on a variety of low-contrast roads, a number of which have ambiguous borders even to a human (e.g., row 3, column 1, denoted (3, 1); or the railroad embankments in (1, 3) and (2, 2)). Overestimates of the road width, which are perhaps most dangerous to an autonomous vehicle, are relatively small, and never so large that the midline is off the road.

4. Conclusion

We have presented an algorithm for road following that at its most basic level relies on road texture to identify a global road vanishing point for steering control. By spatially partitioning the voting procedure into strips parallel to the vanishing line and tracking the results, we can estimate a sequence of vanishing points for curved roads and recover road curvature through integration. Finally, we have


leveraged the information provided by the vanishing point to make the problem of road boundary identification highly manageable even on road scenes with very little structure. With no training, the system is robust to a variety of road surface materials and geometries, and it runs quickly enough for real vehicle control.

Though our method for road segmentation performs satisfactorily on many scenes, the general issue of how to identify the best cue or cues for a given image is a difficult model selection problem, and needs to be examined in more detail.

5. Acknowledgments

Portions of this work were supported by a grant from the National Institute of Standards & Technology. The Mojave data was generously provided by Caltech’s Grand Challenge team.

References

[1] N. Apostoloff and A. Zelinsky. Robust vision based lane tracking using multiple cues and particle filtering. In Proc. IEEE Intelligent Vehicles Symposium, 2003.
[2] J. Coughlan and A. Yuille. Manhattan world: Orientation and outlier detection by Bayesian inference. Neural Computation, 15(5):1063–1088, 2003.
[3] J. Crisman and C. Thorpe. UNSCARF, a color vision system for the detection of unstructured roads. In Proc. Int. Conf. Robotics and Automation, pp. 2496–2501, 1991.
[4] Defense Advanced Research Projects Agency (DARPA). DARPA Grand Challenge. Available at http://www.darpa.mil/grandchallenge. Accessed July 22, 2003.
[5] F. Schaffalitzky and A. Zisserman. Planar grouping for automatic detection of vanishing lines and points. Image and Vision Computing, 18(9):647–658, 2000.
[6] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional density. In Proc. European Conf. Computer Vision, pp. 343–356, 1996.
[7] T. Lee. Image representation using 2D Gabor wavelets. IEEE Trans. Pattern Analysis and Machine Intelligence, 18(10):959–971, 1996.
[8] M. Ollis and A. Stentz. Vision-based perception for an autonomous harvester. In Proc. Int. Conf. Intelligent Robots and Systems, pp. 1838–1844, 1997.
[9] A. Polk and R. Jain. A parallel architecture for curvature-based road scene classification. In I. Masaki, editor, Vision-Based Vehicle Guidance, pp. 284–299. Springer, 1992.
[10] C. Rasmussen. Combining laser range, color, and texture cues for autonomous road following. In Proc. Int. Conf. Robotics and Automation, 2002.
[11] C. Rasmussen. Grouping dominant orientations for ill-structured road following. In Proc. Computer Vision and Pattern Recognition, 2004.
[12] E. Ribeiro and E. Hancock. Perspective pose from spectral voting. In Proc. Computer Vision and Pattern Recognition, 2000.
[13] S. Se. Zebra-crossing detection for the partially sighted. In Proc. Computer Vision and Pattern Recognition, pp. 211–217, 2000.
[14] S. Se and M. Brady. Vision-based detection of staircases. In Proc. Asian Conf. Computer Vision, pp. 535–540, 2000.
[15] N. Simond and P. Rives. Homography from a vanishing point in urban scenes. In Proc. Int. Conf. Intelligent Robots and Systems, 2003.
[16] B. Southall and C. Taylor. Stochastic road shape estimation. In Proc. Int. Conf. Computer Vision, pp. 206–212, 2001.
[17] C. Taylor, J. Malik, and J. Weber. A real-time approach to stereopsis and lane-finding. In Proc. IEEE Intelligent Vehicles Symposium, 1996.
[18] A. Turina, T. Tuytelaars, and L. Van Gool. Efficient grouping under perspective skew. In Proc. Computer Vision and Pattern Recognition, 2001.
[19] T. Tuytelaars, M. Proesmans, and L. Van Gool. The cascaded Hough transform. In Proc. IEEE Int. Conf. on Image Processing, 1998.
[20] J. Zhang and H. Nagel. Texture-based segmentation of road images. In Proc. IEEE Intelligent Vehicles Symposium, 1994.