Anchoring AI via Robots and ROS

Transcript of Anchoring AI via Robots and ROS

Anchoring AI via Robots and ROS
A. Dobke ’14, D. Greene ’13, D. Hernandez ’15, C. Hunt ’14, M. McDermott ’14, L. Reed ’14, V. Wehner ’14, A. Wilby ’14, and Z. Dodds

Navigation and Planning

Platforms/Tasks

Amid the increase in exemplary online lectures, assignments, and communities in CS and AI, bricks-and-mortar institutions will increasingly assert their value through the labs and situated experiences they provide. This work highlights inexpensive robots that, along with ROS, we have used to scaffold both CS and AI, spanning from CS1 to open-ended investigations at three institutions.

Multirobot Coordination

Acknowledgments

We use the Kinect's depth images to implement a corridor follower capable of wandering the Libra Complex freely, without external guidance. The state machine delegates high-level decisions to the map (below) while tracking the type of surroundings the robot currently faces.
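As a rough sketch of that controller (not the poster's actual code): a minimal rospy node that averages three regions of the Kinect depth image and switches between following and turning states. The topic names, thresholds, and gains here are all assumptions.

# Minimal corridor-follower sketch: split the depth image into left/center/
# right regions and steer toward open space. Thresholds are illustrative.
import rospy
import numpy as np
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
from cv_bridge import CvBridge

class CorridorFollower(object):
    FOLLOW, TURN = 'FOLLOW', 'TURN'          # controller states

    def __init__(self):
        self.bridge = CvBridge()
        self.state = self.FOLLOW
        self.cmd = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('/camera/depth/image', Image, self.on_depth)

    def on_depth(self, msg):
        depth = self.bridge.imgmsg_to_cv2(msg)           # float32 meters
        h, w = depth.shape[:2]
        band = depth[h // 3: 2 * h // 3]                 # middle rows only
        left = np.nanmean(band[:, : w // 3])             # NaNs = no reading
        center = np.nanmean(band[:, w // 3: 2 * w // 3])
        right = np.nanmean(band[:, 2 * w // 3:])

        twist = Twist()
        if center < 0.8:                                 # wall ahead: turn
            self.state = self.TURN
            twist.angular.z = 0.6 if left > right else -0.6
        else:                                            # drift toward open side
            self.state = self.FOLLOW
            twist.linear.x = 0.2
            twist.angular.z = 0.3 * (left - right)
        self.cmd.publish(twist)

if __name__ == '__main__':
    rospy.init_node('corridor_follower')
    CorridorFollower()
    rospy.spin()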

We gratefully acknowledge funds from The Rose Hills Foundation, the NSF projects REU CNS-1063169 & CPATH 0939149, and HMC.

Scalable labs: Robots + ROS

Localization and Convoys

ROS’s flexible scaffolding

Corridor-following snapshots and corresponding controller states.

In addition to the MuddBot and drones, ROS enables the use of other platforms. We have implemented panorama-based localization and control on a Nerf launcher using OpenCV SURF features.
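A minimal sketch of the matching step, assuming an OpenCV build that includes the contrib xfeatures2d module (where SURF lives); the bearing-from-median-x logic is illustrative, not our exact pipeline.

# Estimate a bearing within a panorama by matching SURF features from a
# novel camera view against the panorama image.
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def localize(novel_img, panorama):
    kp1, des1 = surf.detectAndCompute(novel_img, None)
    kp2, des2 = surf.detectAndCompute(panorama, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    # Lowe's ratio test keeps only distinctive matches
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

    # Median x of the matched panorama keypoints ~ bearing in the panorama
    xs = [kp2[m.trainIdx].pt[0] for m in good]
    bearing_deg = 360.0 * np.median(xs) / panorama.shape[1]
    return bearing_deg, len(good)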

At left, a three-robot convoy demonstrates a follow-the-leader task. Message-passing allows the convoy to handle failures (the obstacle at right) by starting recovery routines when the team is disrupted.
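One hedged sketch of the message-passing idea: the leader publishes a heartbeat, and each follower starts a recovery routine when the heartbeat goes quiet. The topic name and timeout are assumptions, and the recovery body is left as a stub.

# Detect a disrupted convoy by watching the leader's heartbeat topic.
import rospy
from std_msgs.msg import Header

LEADER_TIMEOUT = 1.0   # seconds of silence before declaring the team disrupted

class ConvoyFollower(object):
    def __init__(self):
        self.last_beat = rospy.Time.now()
        rospy.Subscriber('/leader/heartbeat', Header, self.on_beat)
        rospy.Timer(rospy.Duration(0.2), self.check_leader)

    def on_beat(self, msg):
        self.last_beat = msg.stamp

    def check_leader(self, event):
        if (rospy.Time.now() - self.last_beat).to_sec() > LEADER_TIMEOUT:
            rospy.logwarn('leader lost -- starting recovery routine')
            self.recover()

    def recover(self):
        pass   # e.g., stop, back up, and visually re-acquire the robot ahead

if __name__ == '__main__':
    rospy.init_node('convoy_follower')
    ConvoyFollower()
    rospy.spin()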

This project’s resources are flexible enough to support a variety of research and educational goals. Western State and Glendale colleges have adapted some of the curriculum, hardware, and software for their CS and AI courses.

Corridor-following

We use OpenCV to draw a map representing the network of corridors known as the “Libra Complex” at Mudd. With odometry and estimated landmarks, we can localize one or two robots in the map while the system simultaneously plans paths for them.
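A toy version of the drawing step, treating the corridors as a graph of named nodes (the coordinates and graph below are made up; the real Libra Complex layout differs):

# Draw a corridor network as a map image with OpenCV.
import cv2
import numpy as np

NODES = {'Olin': (80, 60), 'Beckman': (200, 60),
         'Parsons': (80, 180), 'Keck': (200, 180)}
EDGES = [('Olin', 'Beckman'), ('Olin', 'Parsons'),
         ('Parsons', 'Keck'), ('Beckman', 'Keck')]

img = np.full((240, 280, 3), 255, np.uint8)               # white background
for a, b in EDGES:
    cv2.line(img, NODES[a], NODES[b], (200, 120, 0), 8)   # corridors
for name, pt in NODES.items():
    cv2.circle(img, pt, 5, (0, 0, 255), -1)               # junctions
    cv2.putText(img, name, (pt[0] + 8, pt[1] - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 0, 0), 1)
cv2.imwrite('libra_map.png', img)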

Map Management

The map computes the best path, shows it, and guides robot turns.
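Computing the “best path” on a corridor network reduces to shortest-path search in a graph; here is a breadth-first-search sketch (the poster's actual planner may differ, and the graph is the made-up one above):

# Unweighted shortest path between corridor junctions via BFS.
from collections import deque

def shortest_path(graph, start, goal):
    """graph: dict mapping node -> list of neighboring nodes."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:        # walk back to the start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)
    return None                            # no route exists

GRAPH = {'Olin': ['Beckman', 'Parsons'], 'Beckman': ['Olin', 'Keck'],
         'Parsons': ['Olin', 'Keck'], 'Keck': ['Beckman', 'Parsons']}
print(shortest_path(GRAPH, 'Parsons', 'Beckman'))   # ['Parsons', 'Olin', 'Beckman']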

The map also assists localization by simulating range sensors’ values.
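Simulating a range sensor's value can be as simple as ray-casting through the map image until a wall pixel is hit; a sketch, with the occupancy convention, step size, and maximum range all assumed:

# Simulate a range reading by walking a ray through an occupancy image.
import math

def simulate_range(occupancy, x, y, heading, max_range=200):
    """occupancy: 2D array, nonzero = wall; returns distance in pixels."""
    dx, dy = math.cos(heading), math.sin(heading)
    for r in range(max_range):
        px, py = int(x + r * dx), int(y + r * dy)
        if not (0 <= py < occupancy.shape[0] and 0 <= px < occupancy.shape[1]):
            return r                       # ray left the map
        if occupancy[py, px]:              # ray hit a wall cell
            return r
    return max_range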

MuddBot:
• iRobot Create
• Kinect on a pan and/or tilt mount

AR.Drone 1 and 2:
• $300 wi-fi device
• two-camera vision + accelerometer

ROS exposes our platforms’ physical capabilities. On one hand, ROS allows us to hide details: in CS1 and CS2 we treat the sensors and actuators as black boxes. For AI work, however, we can immediately access its many libraries:
• Line-following
• Odometric mapping
• Visual servoing
• Landmark-based navigation
• Color and SURF-based vision
• Kinect-based control
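For instance, the black-box view a CS1 student might see wraps a ROS publisher in a single call; the names below are illustrative, not our actual course library.

# Hide the ROS plumbing behind one "drive" call for CS1 students.
import rospy
from geometry_msgs.msg import Twist

rospy.init_node('cs1_blackbox')
_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
rospy.sleep(1.0)                     # give the publisher time to connect

def drive(forward, spin, seconds):
    """Move the robot: forward in m/s, spin in rad/s, for `seconds`."""
    t = Twist()
    t.linear.x, t.angular.z = forward, spin
    end = rospy.Time.now() + rospy.Duration(seconds)
    rate = rospy.Rate(10)
    while rospy.Time.now() < end and not rospy.is_shutdown():
        _pub.publish(t)
        rate.sleep()
    _pub.publish(Twist())            # stop

drive(0.2, 0.0, 2.0)                 # forward at 0.2 m/s for two seconds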

Map labels: Galileo Foyer, Olin, Beckman, Parsons, Keck, and Jacobs; legend: blue ~ carpet, gray ~ cement, white ~ tile.

A gmapped Libra Complex; a 3D rendering from gmapping.

The Create’s encoders and FOVIS visual odometry have complementary strengths: encoders, for example, keep working in textureless corridors, while visual odometry is unaffected by wheel slip. Below, maps from a MuddBot and ROS.
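One simple way to combine the two is to blend per-step pose deltas, weighting visual odometry by its confidence; a sketch, not the poster's actual fusion:

# Blend wheel-encoder and visual-odometry pose deltas (dx, dy, dtheta).
def fuse_deltas(enc_delta, vo_delta, vo_confidence):
    """vo_confidence in [0, 1], e.g. derived from FOVIS's inlier count."""
    w = vo_confidence
    return tuple(w * v + (1.0 - w) * e for e, v in zip(enc_delta, vo_delta))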

At top is our Nerf launcher and webcamera. Below are, first, a panorama map and an example of SURF-based localization within it, along with a desired view (green diamond); second is the result after image-based navigation.

Novel image, panorama map, SURF matches, and localization result.

Photo labels: MuddBot, flips, hoop-jumping.