Renaud Detry TRS

  • TRS: An Open-source Recipe for Teaching/Learning Robotics with a Simulator.

    Learning Robotics with a Simulator: Set Up Your Laptop in 5 min, Code a Control, Navigation, Vision, or Manipulation Project in 100 Lines.

    Renaud Detry, University of Liège, Belgium

  • Simulation VS Real Robot

    Joint Observation of Object Pose and Tactile Imprints for Online Grasp Stability Assessment

    Yasemin Bekiroglu, Renaud Detry, Danica Kragic

    Abstract—This paper studies the viability of concurrent object pose tracking and tactile sensing for assessing grasp stability on a physical robotic platform. We present a kernel-logistic-regression model of pose- and touch-conditional grasp success probability. Models are trained on grasp data which consist of (1) the pose of the gripper relative to the object, (2) a tactile description of the contacts between the object and the fully-closed gripper, and (3) a binary description of grasp feasibility, which indicates whether the grasp can be used to rigidly control the object. The data is collected by executing grasps demonstrated by a human on a robotic platform composed of an industrial arm, a three-finger gripper equipped with tactile sensing arrays, and a vision-based object pose tracking system. The robot is able to track the pose of an object while it is grasping it, and it can acquire grasp tactile imprints via pressure sensor arrays mounted on its gripper's fingers. We consider models defined on several subspaces of our input data, using tactile perceptions or gripper poses only. Models are optimized and evaluated with f-fold cross-validation. Our preliminary results show that stability assessments based on both tactile and pose data can provide better rates than assessments based on tactile data alone.
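
    The model described in the abstract is concrete enough to sketch in code. The Matlab fragment below is a minimal, illustrative version of a kernel-logistic-regression stability model, not the authors' implementation: it assumes each grasp is summarized by a row vector that concatenates a gripper-pose descriptor and a flattened tactile imprint (matrix X, one grasp per row) with binary stability labels y, fits the kernel expansion by plain gradient descent, and uses fixed hyperparameters where the paper selects them by f-fold cross-validation.

    sigma = 1.0; lambda = 1e-2; rate = 0.1;         % hyperparameters; the paper tunes these by cross-validation
    D2 = bsxfun(@plus, sum(X.^2, 2), sum(X.^2, 2)') - 2 * (X * X');
    K = exp(-D2 / (2 * sigma^2));                   % Gaussian kernel matrix between training grasps
    n = size(X, 1); alpha = zeros(n, 1); b = 0;
    for it = 1:2000                                 % gradient descent on the penalized negative log-likelihood
        p = 1 ./ (1 + exp(-(K * alpha + b)));       % P(stable | pose, touch) for each training grasp
        alpha = alpha - rate * (K * (p - y) + lambda * (K * alpha)) / n;
        b = b - rate * sum(p - y) / n;
    end
    % Success probability of a new grasp descriptor xnew (a 1-by-d row vector):
    k = exp(-sum(bsxfun(@minus, X, xnew).^2, 2) / (2 * sigma^2));
    pnew = 1 / (1 + exp(-(k' * alpha + b)));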

    I. INTRODUCTION

    Object grasping and manipulation in real-world environments are, from a robotics viewpoint, uncertain processes. Despite efforts to improve autonomous grasp planners, either by learning or by building sophisticated visuomotor programs into agents, one cannot assume that a grasp will work exactly as planned. One obvious reason for this, among many others, is that the perceptual observations on which the planner bases its reasoning are always noisy. It is thus unlikely that the robot's fingers will come in contact with the object at the exact intended points. The object will generally move while the fingers are being closed, and the final object-gripper configuration, even if geometrically similar to the intended one, may present a prohibitively different force configuration. For this reason, executing grasping actions in an open-loop system is unlikely to prove viable in real-world environments. Such environments will often require a closed-loop system in which perceptual feedback is constantly monitored and triggers plan corrections.

    Among the many available sensors, vision and touch seem particularly relevant for grasping. Vision-driven grasping and manipulation have been extensively studied [1], [2]. Vision has typically been used to plan grasping actions, and to update action parameters as objects move. Touch-based grasp controllers have also been studied, either for controlling finger forces to avoid slippage and to prevent crushing objects [3], [4], [5], or for assessing grasp stability [6], [7].

    In this paper, we study the joint impact of visual and tactile sensing on grasp stability assessment. Considering vision and touch separately brings valuable information on grasp stability. However, in many situations one modality can substantially help disambiguate the readings obtained from the other. For instance, it is conceivable that for some object, two grasps approaching from different directions would yield similar tactile readings, but one would allow for robustly moving the object while the other would let the object slip away. Such situations may occur, e.g., because one of the grasps benefits from an extra gripper-object contact point in an area that is not covered by tactile sensors, or because of a different relative configuration of the grasp with respect to the center of mass of the object. Considering both modalities jointly should intuitively lead to more robust assessments. We present a platform equipped with hardware and software components which allow it to obtain the 6D object pose (3D position and 3D orientation) and tactile imprint information during a grasping action, and we suggest means of using these data to learn a model of grasp stability.

    Our robotic platform is composed of an industrial arm, a three-finger gripper equipped with tactile sensing arrays, and a vision-based object pose tracking system (see Figure 1). The robot is able to track the pose of an object while it is grasping it, despite object occlusions, and it can acquire grasp tactile imprints via pressure sensor arrays mounted on its gripper's fingers.

    Fig. 1. Experimental robotic platform, composed of an industrial arm, a three-finger gripper equipped with tactile sensing arrays, and a camera (on the right).

    Y. Bekiroglu, R. Detry, and D. Kragic are with the Centre for Autonomous Systems, CSC, KTH, Stockholm, Sweden. yaseminb, detryr, [email protected].

    This work was supported by the EU through the projects CogX (FP7-IP-027657) and GRASP (FP7-IP-215821), and by the Swedish Foundation for Strategic Research.

  • V-REP: An Easy Robot Simulator

    Trivial installation: Linux, Mac, Win
    Remote API for C++, Python, Matlab, ROS
    Stable, fast

  • V-REP + Peter Corke's Matlab Robotics Toolbox

    Code control, vision, and navigation in 100 lines of code!
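
    To illustrate how the simulator and the toolbox meet in a control loop, here is a hedged sketch (not code from TRS): V-REP streams the robot pose over the remote API, and the toolbox's angdiff wraps the heading error for a simple proportional go-to-goal controller. The handles (id, robotHandle, leftWheel, rightWheel), the goal coordinates, and the differential-drive wheel mapping are illustrative assumptions; the TRS youBot is in fact omnidirectional.

    goal = [2.0, 1.5];                              % target x, y in the scene frame [m]
    while true
        [~, pos] = vrep.simxGetObjectPosition(id, robotHandle, -1, vrep.simx_opmode_oneshot_wait);
        [~, eul] = vrep.simxGetObjectOrientation(id, robotHandle, -1, vrep.simx_opmode_oneshot_wait);
        if norm(goal - pos(1:2)) < 0.05, break; end % close enough to the goal
        bearing = atan2(goal(2) - pos(2), goal(1) - pos(1));
        err = angdiff(bearing - eul(3));            % Robotics Toolbox: wrap the heading error to [-pi, pi)
        v = 2; w = 4 * err;                         % forward speed and proportional turn rate
        vrep.simxSetJointTargetVelocity(id, leftWheel,  v - w, vrep.simx_opmode_oneshot);
        vrep.simxSetJointTargetVelocity(id, rightWheel, v + w, vrep.simx_opmode_oneshot);
    end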

  • What You Get

    Recipe for organizing a Master-level robotics project. The project: pick up groceries from a table and store them in baskets. It involves control, navigation, mapping, vision, and manipulation. A sketch of one possible mission loop follows below.

    http://teaching-robotics.org/trs
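
    One common way to organize such a project (a skeleton for illustration, not the TRS solution) is a main loop that switches between high-level mission states; every helper called here (exploreMap, locateLandmarks, driveTo, graspNextItem, releaseItem) is a placeholder that students implement with the control, navigation, vision, and manipulation pieces above.

    state = 'explore';
    while ~strcmp(state, 'done')
        switch state
            case 'explore'                          % build an occupancy map of the room
                if exploreMap(), state = 'locateTable'; end
            case 'locateTable'                      % find the table and the baskets in the map
                [table, baskets] = locateLandmarks();
                state = 'fetch';
            case 'fetch'                            % drive to the table and grasp the next grocery item
                driveTo(table);
                if graspNextItem()
                    state = 'store';
                else
                    state = 'done';                 % nothing left on the table
                end
            case 'store'                            % carry the item to a basket and drop it
                driveTo(baskets(1));
                releaseItem();
                state = 'fetch';
        end
    end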

  • Practice Navigation, Grasping, Vision in 5 Minutes

    % Load the remote API bindings and connect to a running V-REP instance.
    vrep = remApi('remoteApi', 'extApi.h');
    id = vrep.simxStart('127.0.0.1', 57689, true, true, 2000, 5);

    % Read an RGB image from a vision sensor (cam is a scene-object handle).
    [res, resolution, image] = vrep.simxGetVisionSensorImage2(id, cam, 0, vrep.simx_opmode_oneshot_wait);

    % Drive a wheel joint (wheel1 is a joint handle) at 10 rad/s.
    res = vrep.simxSetJointTargetVelocity(id, wheel1, 10, vrep.simx_opmode_oneshot_wait);
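
    For completeness: cam and wheel1 above are scene-object handles, obtained once by name through simxGetObjectHandle. The object names used here are illustrative placeholders, not the actual names in the TRS scene.

    [res, cam]    = vrep.simxGetObjectHandle(id, 'Vision_sensor', vrep.simx_opmode_oneshot_wait);
    [res, wheel1] = vrep.simxGetObjectHandle(id, 'Wheel_joint', vrep.simx_opmode_oneshot_wait);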

  • How To Use

    Freely distributed via GitHub: https://github.com/ULgRobotics/trs.git
    master branch: code & V-REP models (GPL)
    gh-pages branch: doc & project details (CC)

    http://teaching-robotics.org/trs

  • Teaching-robotics.org

    http://teaching-robotics.org/

  • TRS Tutorial at IROS

  • Learning Robotics with a Simulator: Set Up Your Laptop in 5 min,

    Code a Control, Navigation, Vision, or Manipulation Project in 100 Lines.

    http://teaching-robotics.org/trs