Automotive Mapping, Localization, and Perception for Active Safety Applications
Ryan Eustice & James McBride, Ford - University of Michigan Innovation Alliance
3rd Annual "Focus on the Future" Automotive Research Conference, September 15, 2010
Generation 1 test vehicle at the California Speedway – September, 2005
Complete autonomous control at 50 mph
2005 DARPA "Desert Classic" Challenge, Mojave Desert - Primm, NV
• How does autonomous vehicle research benefit Ford Motor Company and its customers?
• Solving the complex challenges required to make a vehicle capable of autonomous driving enables and accelerates the development of technologies which will be found on future automotive safety systems throughout the industry.
DARPA “Desert Classic” Finals
2007 DARPA Urban Challenge, George Air Force Base - Victorville, CA
Route planning, navigation, hazard detection and avoidance, interaction with moving human and robotic traffic, etc.
XAV-250 at the 2007 DARPA Urban Challenge
Generation 2 test vehicle at George AFB – November, 2007
XAV-250 Sensor Configuration
DARPA Urban Challenge - Area A, "The Circles of Death"
Stopping, merging, and yielding across densely moving traffic.
120m
[Video clip Area A (1:27)]
Although we are sensing traffic signs, concrete barriers and lane markers, note that they can be inferred from a 3-D map and precise localization.
DARPA Urban Challenge - Area C, "The Belt Buckle"
Intersection precedence, blocked routes, U-turns, etc.
[Video clips Area C (0:54), Area C 3-Cam (1:16)]
Since there are no lane markings, the roadway is determined by sensed curbs and GPS map clues.
We Did Not Participate in the DARPA Grand Challenges Simply to Become Builders of Robots
We participated in the DARPA Grand Challenges to learn about rapidly evolving sensors and control algorithms, which will ultimately be applied to automotive active safety features.
1. Without precise lane-level localization (< 0.5 m) and 3-D maps, we will be unable to fully realize the next generation of automotive safety features. Consider lane keeping and collision mitigation safety features on a curved road segment…
• Am I exactly where I should be on the roadway?
• Are there lane markings, curbs, ditches, off-road hazards? Are they obscured from view? Can my sensors detect them?
• Is the oncoming traffic in their lane? Is the pedestrian in the roadway or on the sidewalk?
2. Although we could navigate the DARPA Urban Challenge solely with maps, GPS and LIDAR, we would not allow a production automotive safety system to assume interventional control without redundant sensor information - hence we have incorporated fused camera data into these studies.
3. There is a clear and viable pathway to production implementation - involving 3-D maps, advanced algorithms, and sensing via GPS, LIDAR and cameras.
In-path obstacle? Lane markings visible? Obstructed view?
Collision pathway possible or topologically prohibited?
Roadway edge detection and mapping - which is the more dangerous “highway”?
Bolivian “Death Road”
I-15 California
GPS Alone Will Not Suffice
Single-point positions collected for 24 hours on a rooftop using an expensive NovAtel OEM4 receiver. One-sigma uncertainties are on the order of 2 m, and in most cases would be larger if the receiver were in motion.
A minimum of 4 satellites is required to determine a 3-D position (x,y,z,t). Horizontal Dilution of Precision (HDOP) is a metric characterizing the quality of GPS satellite coverage. A smaller HDOP value represents a better solution. For a “reasonable” GPS solution, HDOP should be < 3.
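HDOP falls directly out of the satellite geometry: linearize the pseudorange equations, form the geometry matrix, and read the horizontal terms of its cofactor matrix. A minimal sketch of that computation (the unit line-of-sight vectors below are invented for illustration, not real ephemeris data):

```python
import numpy as np

# Hypothetical unit line-of-sight vectors (east, north, up) from receiver
# to 5 visible satellites -- invented values, roughly unit length.
los = np.array([
    [ 0.3,  0.5, 0.81],
    [-0.6,  0.2, 0.77],
    [ 0.1, -0.7, 0.71],
    [ 0.7, -0.1, 0.70],
    [-0.2, -0.4, 0.89],
])

# Linearized pseudorange geometry matrix: each row is [-e_x, -e_y, -e_z, 1],
# the final column accounting for the receiver clock bias (the "t" in x,y,z,t).
G = np.hstack([-los, np.ones((len(los), 1))])

# Cofactor matrix of the least-squares position solution.
Q = np.linalg.inv(G.T @ G)

hdop = np.sqrt(Q[0, 0] + Q[1, 1])            # horizontal dilution of precision
pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])  # 3-D (position) DOP
print(f"HDOP = {hdop:.2f}, PDOP = {pdop:.2f}")
```

With fewer than 4 usable rows, GᵀG is singular and no fix is possible, which is the 4-satellite requirement noted above; poorly spread satellites inflate Q and hence HDOP.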
[Scatter plot: Easting error (m) vs. Northing error (m). Region with HDOP < 3: 2σR = 13 m; region with HDOP < 1.5: 2σR = 6.5 m.]

GPS Availability - Real-World Driving Data (95% Confidence Interval)

Region          HDOP < 3   HDOP < 1.5   95% of outages do not exceed
U.S.A.          85%        63%          28 seconds
Texas           99%        94%          12 seconds
New York City   65%        31%          36 seconds
NAVTEQ “True” - January 8, 2010
NAVTEQ has introduced 'True' – the company's new mapping collection system, which is being rolled out on its US fleet of field-collection vehicles, thereby enhancing its ability to deliver a growing range of high-quality digital map data. The True system represents a new era in map data collection, and uses technologies unique in the industry in both the scale and quality with which collection can take place. Through a combination of LIDAR, panoramic and high-resolution cameras, and GPS and IMU positioning, all of the collected data is geo-referenced, making it possible to superimpose imagery and 3-D data points together to create a more highly detailed digital representation. The distinctive manner in which data is captured with this technology provides the critical platform to move digital maps from 2-D to 3-D representation.
Tele Atlas ADAS - January 7, 2010
A new navigation product for Advanced Driver Assistance Systems (ADAS) has been unveiled at the Consumer Electronics Show in Las Vegas. Tele Atlas ADAS includes gradient, road curvature and ADAS-quality geometry that can be used to create a variety of ADAS-level applications, including eco-routing, adaptive cruise control, energy management, headlight steering, road preview and curve warning.
Google is Everywhere
Google Earth, 20,700 Oakwood Boulevard, Dearborn, MI
Ladybug vs. Google Street View, 20,700 Oakwood Boulevard, Dearborn, MI
Simultaneous Localization and Mapping (SLAM)
• Note the Retrograde Motion of Some of the Feature Points
• Note the Challenge of Moving Objects
• Pitch Angle Alone is as Good as GPS for Longitudinal Localization
Potential Side Benefits Include:
•Energy Optimization for Hybrid Vehicles
• Active Suspension Algorithms
• Off-Road Incident Intervention
[Figure: range-bearing measurement geometry - labeled landmarks (a–h) along the roadway, with a landmark at range r and bearing θ from the vehicle; x is the lateral coordinate, y the longitudinal.]

x = r sin θ,   y = r cos θ

σ_x² = sin²θ · σ_r² + r² cos²θ · σ_θ²

θ → 90°:  σ_x ≈ σ_r
θ → 0°:   σ_x ≈ r σ_θ
Even with a single good landmark* our lateral in-lane position is accurate to < 0.5m.
* With production typical laser beam divergence of 3mrad and range accuracy of 2 to 5cm.
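That claim is easy to check numerically. First-order error propagation for the lateral coordinate x = r sin θ gives σ_x² = sin²θ σ_r² + r² cos²θ σ_θ²; plugging in the production-typical σ_θ = 3 mrad and the 5 cm upper bound on range accuracy quoted above (a hypothetical sanity check, not the authors' computation):

```python
import math

def lateral_sigma(r, theta, sigma_r, sigma_theta):
    """First-order propagation of range/bearing noise into x = r*sin(theta)."""
    return math.sqrt((math.sin(theta) * sigma_r) ** 2
                     + (r * math.cos(theta) * sigma_theta) ** 2)

sigma_r = 0.05      # 5 cm range accuracy (slide's upper bound)
sigma_theta = 0.003 # 3 mrad beam divergence
for r in (25.0, 50.0, 100.0):
    # theta = 0: landmark straight ahead, worst case for lateral error (r*sigma_theta)
    s = lateral_sigma(r, theta=0.0, sigma_r=sigma_r, sigma_theta=sigma_theta)
    print(f"r = {r:5.1f} m -> lateral sigma = {s:.3f} m")
```

Even at 100 m range the bearing-dominated lateral uncertainty is about 0.3 m, consistent with the < 0.5 m figure.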
Navigation in Harsh Environments: Mapping the RMS Titanic, 3.7 km down
(USBL acoustic positioning, DVL Doppler velocity log, VAN visually augmented navigation)
RMS Titanic Results: Camera-Generated Map and Localization
3-D Mapping on the XAV-250
• Ladybug3 spherical camera
• Velodyne HDL-64E LIDAR
• Riegl LMS-Q120 LIDAR
• Delphi ACC3 radar
• Applanix POS-LV 420 INS

Velodyne HDL-64E:
• 64 beams, 360° FOV
• +2.5° to −24° elevation
• 10 Hz refresh rate
• >1,000,000 points/s
• 120 m range, 2 cm accuracy

Ladybug3:
• Six 1.125″ Sony CCD imagers
• 12 MP resolution (6×1600×1200)
• 15 frames/s at full resolution
• 800 Mb/s 1394b interface
• Embedded JPEG compression
Crosswalk in front of RIC: LIDAR vs. Camera View
Fused LIDAR and Camera Imagery(LIDAR Color-coded by Height Above Ground Plane)
Crosswalk in front of RIC
Downtown Dearborn
Camera-Textured LIDAR Imagery
Ladybug vs. Google Street View, 20,700 Oakwood Boulevard, Dearborn, MI
Iterative Closest Point (ICP) Algorithm
(Input: top-down view of a 3-D LIDAR point cloud at two instances in time, t = t₀ and t = t₀ + δt, plus random noise)
ICP Algorithm
Features, aside from the truck and other moving obstacles, are re-aligned.
[Images: before and after registration; scale bar ~10 m]
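The ICP loop is short enough to sketch end to end: associate each point with its nearest neighbor in the other scan, solve for the best rigid transform in closed form, apply it, and repeat. A minimal 2-D, numpy-only sketch with synthetic points standing in for LIDAR returns (real implementations use k-d trees and outlier rejection rather than the brute-force nearest-neighbor search shown):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50):
    """Basic point-to-point ICP: associate by nearest neighbor, then re-align."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]      # nearest neighbor in the other scan
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
dst = rng.uniform(0, 10, size=(200, 2))      # stand-in "map" scan
theta = 0.05                                 # small rotation + translation offset
R0 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
src = dst @ R0.T + np.array([0.3, -0.2])     # misaligned second scan
aligned = icp(src, dst)
err = np.linalg.norm(aligned - dst, axis=1).mean()
print(f"mean alignment error: {err:.4f} m")
```

For small initial misalignments the nearest-neighbor associations are mostly correct and the loop converges; large offsets are where a good seed (see the camera-fused approach below) matters.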
Camera+ICP Fused Algorithm: Assign SIFT descriptors to the 3-D LIDAR points for robust point-cloud matching
LIDAR projected into imagery(ground-plane not rendered for visual clarity)
SIFT camera feature detections(coincident with LIDAR data)
Camera+ICP Fused Algorithm (2): A RANSAC algorithm uses SIFT matches to seed the ICP registration, resulting in faster and more robust point-cloud registration.
Frame-to-frame SIFT matching: this algorithm generates more robust results with less spatial data.
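The seeding step can be sketched as standard RANSAC over putative 3-D feature matches: sample minimal 3-point correspondences, fit a rigid transform in closed form, and keep the hypothesis with the most inliers. The data below are synthetic stand-ins for SIFT-associated LIDAR points, not the XAV-250 pipeline:

```python
import numpy as np

def rigid_from_3pts(src, dst):
    """Closed-form rigid transform from a minimal 3-point sample (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=200, tol=0.1, rng=None):
    """RANSAC over putative 3-D feature matches; returns (R, t, inlier mask)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = (None, None, np.zeros(len(src), dtype=bool))
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        R, t = rigid_from_3pts(src[idx], dst[idx])
        inliers = np.linalg.norm(src @ R.T + t - dst, axis=1) < tol
        if inliers.sum() > best[2].sum():
            best = (R, t, inliers)
    return best

# Synthetic "SIFT matches": 80% correct under a known transform, 20% outliers.
rng = np.random.default_rng(1)
pts = rng.uniform(-20, 20, size=(100, 3))
angle = 0.2
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, -2.0, 0.1])
dst = pts @ R_true.T + t_true
dst[80:] += rng.uniform(-5, 5, size=(20, 3))   # corrupted (mismatched) features
R, t, inliers = ransac_rigid(pts, dst, rng=rng)
print(f"inliers: {inliers.sum()} / {len(pts)}")
```

The winning transform is then used as the starting point for ICP, so ICP only has to refine a nearly correct alignment instead of recovering a large offset on its own.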
[Plot: registration error (m) vs. difference in scans]
SLAM GPS-Denied Demonstration (Omni-Star HP GPS ground truth and Applanix POS-LV IMU)
1.6 km loop around Dearborn
Dead Reckoning: No GPS Reception – Velocity Integration of MEMS-Based XSENS MTi-G IMU Data
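Why velocity integration drifts is easy to see in one dimension: any bias in the measured velocity integrates into a position error that grows linearly with time, on top of a random-walk term from the noise. The noise and bias figures below are invented for illustration, not XSENS MTi-G specifications:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                               # 100 Hz sensor rate (assumed)
t = np.arange(0, 60, dt)                # one minute of driving
v_true = np.full_like(t, 10.0)          # constant 10 m/s ground truth
# Measured velocity: white noise (0.1 m/s) plus a small uncalibrated bias.
v_meas = v_true + rng.normal(0, 0.1, t.shape) + 0.02

x_true = np.cumsum(v_true) * dt
x_dr = np.cumsum(v_meas) * dt           # dead-reckoned position
err = np.abs(x_dr - x_true)
print(f"position error after 60 s: {err[-1]:.2f} m")
```

Even a 2 cm/s bias costs over a meter per minute, which is why an unaided MEMS IMU cannot hold lane-level (< 0.5 m) accuracy through extended GPS outages and must be corrected by SLAM or map-relative localization.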
SLAM Demonstration: Ground Truth vs. Laser Odometry
No GPS Reception – Intra-frame Motion Compensated by XSENS MTi-G IMU
Camera Pathway to Production
•Wikitude Travel: more than 370,000 world-wide points of interest (POI) can be identified on the real-time camera view of an iPhone as you hold up the device.
•Wikitude Drive: provides turn-by-turn navigational directions accurately overlaid on the video screen of an iPhone as you drive.
November 4, 2009: iPhone 3GS – 3 MP still, VGA video ($5). Third-party apps top the 100,000 mark.
More Rapidly Evolving Technology: Widgets ($2.99). The iPhone CPU is comparable to a 2000-era desktop computer…
Augmented Driving by imaGinyze 2010.http://www.imaginyze.com/Site/Q%26A.html
Scanning LIDAR Pathway to Production
• Research and survey grade: Velodyne HDL-64E and Riegl LMS-Q120i, $10,000 – $75,000
• Automotive production grade: Hella IDIS and Ibeo LUX, $500 – $1,000
• Consumer electronics grade: Neato XV-11 robotic vacuum employing SLAM, $25 (LIDAR components)
Flash LIDAR is Coming… (Advanced Scientific Concepts – an iRobot spinoff)
[Video frame: solid-state 128×128 Flash LIDAR mounted on a moving vehicle, shown with color-coded range (scale ticks from 3 to 75 meters)]
[Video clip ASC Flash LIDAR (1:06)]
Flash LADAR Sensor Cost Model
[Chart: cost of goods (CoGs, $0 – $35,000) vs. production quantity (Q = 10 to Q = 100,000), broken down by component: APD array, ROIC, hybridization, PCA, optics, laser, housing, labor, other]
Thanks and Questions
•LIDAR (Laser Imaging Detection and Ranging) is an optical remote sensing technology which measures properties of scattered light to determine range and/or other information about a distant target.
• As with radar, which uses radio waves, the range to an object is determined by measuring the time delay between transmission of a pulse and detection of the reflected signal.
•The primary difference is that with LIDAR, much shorter wavelengths of the electromagnetic spectrum are used, typically in the ultraviolet, visible or near infrared, making it possible to image much smaller features (1000x).
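The time-of-flight principle above reduces to one line of arithmetic: range = c·Δt/2, with the factor of two accounting for the round trip. A quick sketch using the Velodyne's 120 m maximum range:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_tof(delay_s):
    """Round-trip time of flight -> one-way range (same principle as radar)."""
    return C * delay_s / 2.0

# A 120 m return (the Velodyne's max range) comes back in under a microsecond:
delay = 2 * 120.0 / C
print(f"delay = {delay * 1e6:.3f} us, range = {range_from_tof(delay):.1f} m")
```

The sub-microsecond round trip is why million-point-per-second scan rates are feasible: each individual range measurement completes almost instantly.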
LIDAR Primer
The LIDAR suite on the XAV-250:
• 2 high-power, fine-angular-resolution, single-line scanners (Riegl LMS-Q120)
• 1 360° FOV, 64-beam rotating scanner (Velodyne HDL-64E)
Potential Automotive Safety Features Enabled by Autonomous Research
• Anti-lock brake systems (ABS), imminent collision warning, panic brake assist, collision mitigation by braking (CMbB)
• Cruise control, adaptive cruise control (ACC), ACC + stop-and-go capability, urban cruise control (UCC – recognizes stop signs and traffic signals)
• Lane departure warning (LDW), lane keeping assistance (LKA), electronic power assist steering (EPAS), adaptive front steering (AFS), active steer (EPAS + AFS), emergency lane assist (ELA)
• Traction control, electronic stability control (ESC), roll stability control (RSC), active suspension, next-generation stability control
• Integration of pre-crash sensing with occupant protection equipment (airbags, seat belts, pre-tensioners)
• Collision mitigation by integrated brake, throttle, steering and vehicle dynamics control
• Vehicle to vehicle and infrastructure integration (V2V / V2I), intelligent vehicle highway systems (IVHS)
• Blind spot detection, pedestrian detection, parking assistance, night vision
• Driver drowsiness and distraction monitoring
• Total accident avoidance / autonomous vehicle control