Vision-Based Reach-To-Grasp Movements: From the Human Example to an Autonomous Robotic System (Alexa Hauck)
[Page 1]
Vision-Based Reach-To-Grasp Movements
From the Human Example to an Autonomous Robotic System
Alexa Hauck
[Page 2]
Context
Special Research Program “Sensorimotor”
C1: Human and Robotic Hand-Eye Coordination
• Neurological Clinic (Großhadern), LMU München
• Institute for Real-Time Computer Systems, TU München
MODEL of Hand-Eye Coordination
ANALYSIS of human reaching movements
SYNTHESIS of a robotic system
[Page 3]
The Question is ...
How to use which visual information for motion control?
(Diagram: control strategy, representation, catching, reaching)
[Page 4]
State-of-the-art Robotics

Look-then-move (visual feedforward control): plan a complete trajectory x(t), ẋ(t), ẍ(t) to the estimated target pose x_T, then execute it open-loop.
+ easy integration with path planning
+ only little visual information needed
– sensitive to model errors

Visual servoing (visual feedback control): continuously command the motion from the measured error, x_T − x(t) in pose (position-based) or f_T − f(t) in image features (image-based).
+ model errors can be compensated
– convergence not assured
– high-rate vision needed

Impressive results ... but nowhere near human performance!
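The trade-off between the two strategies can be sketched in a toy 1-D simulation (all numbers are hypothetical, not from the slides): look-then-move executes a pre-planned motion open-loop, so any error in the target estimate survives to the end, while visual servoing feeds the measured error back and converges despite it.

```python
# Toy 1-D comparison of the two control strategies (numbers are made up).
x_target = 1.0   # true target position
x_est = 0.9      # target estimate corrupted by a model/vision error

# Look-then-move: execute a pre-planned path to the *estimated* target,
# without looking again; the estimation error survives untouched.
x = 0.0
for _ in range(100):
    x += x_est / 100            # open-loop steps along the planned path
ff_error = abs(x_target - x)    # residual error = estimation error

# Visual servoing: proportional feedback on the *measured* error;
# the loop converges even though the initial estimate was wrong.
x = 0.0
for _ in range(100):
    x += 0.2 * (x_target - x)   # closed loop drives the error to zero
fb_error = abs(x_target - x)
```

The sketch also shows the stated drawbacks: the feedback loop needs a fresh measurement of x_target every iteration (high-rate vision), while the feedforward path needs only the single initial estimate.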
[Page 5]
The Human Example
Separately controlled hand transport:
• almost straight path
• bell-shaped velocity profile

Experiments with target jumps:
• smooth on-line correction of the trajectory

Experiments with prism glasses:
• on-line correction using visual feedback
• off-line recalibration of internal models

⇒ Use of visual information in a spatial representation
⇒ Combination of visual feedforward and feedback
... but how?
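The almost straight path with bell-shaped velocity profile is commonly modeled by a minimum-jerk trajectory; a minimal sketch of that standard profile (an illustration, not taken from the slides):

```python
# Minimum-jerk profile between x0 and xT over duration T:
#   x(t) = x0 + (xT - x0) * (10 s^3 - 15 s^4 + 6 s^5),  s = t/T.
# Its velocity is bell-shaped and vanishes at both endpoints.
def min_jerk(x0, xT, T, t):
    s = t / T
    return x0 + (xT - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def min_jerk_vel(x0, xT, T, t):
    s = t / T
    return (xT - x0) / T * (30 * s**2 - 60 * s**3 + 30 * s**4)
```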
[Page 6]
New Control Strategy

x(t) = g_n(t) x_T(x_n(t)) + Σ_{i=1}^{n−1} (g_i(t) − g_{i+1}(t)) D e_i(t)

The commanded hand motion blends the feedforward trajectory towards the current (n-th) target with feedback corrections of the residual errors e_i(t) from earlier targets; the weighting functions g_i(t) fade the individual contributions smoothly in and out.
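Under the assumption that the g_i are smooth blending weights that fade submovements in and out, the strategy's hallmark behaviour, smooth on-line correction after a target jump, can be sketched as follows (gains, ramp time and targets are illustrative, not the thesis values):

```python
# Toy 1-D sketch: after a target jump, the drive towards the old target
# is faded out while the drive towards the new one is faded in.
def blend_weight(t, t_jump, ramp=0.2):
    """Weight ramping from 0 to 1 over `ramp` seconds after the jump."""
    return min(max((t - t_jump) / ramp, 0.0), 1.0)

dt, t_jump = 0.01, 0.5
target_old, target_new = 1.0, 1.5
x, t, xs = 0.0, 0.0, []
while t < 1.5:
    g = blend_weight(t, t_jump)
    # superposition of two proportional drives, weighted (1 - g) and g;
    # the result is a single smooth trajectory without a kink at the jump
    v = (1 - g) * 4.0 * (target_old - x) + g * 4.0 * (target_new - x)
    x += v * dt
    xs.append(x)
    t += dt
```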
[Page 7]
Example: Point-to-point
[Pages 8–10]
Example: Target Jump
[Pages 11–12]
Example: Multiple Jumps
[Page 13]
Example: Double Jump
[Page 14]
Hand-Eye System

Robot → images → Image Processing → features → Image Interpretation → positions of target & hand → Motion Planning → trajectory → Robot Control → commands → Robot

Models of the hand-eye system & objects supply the modules: object model, sensor model, arm model.
[Page 15]
The Robot: MinERVA
manipulator with 6 joints
CCD cameras
pan-tilt head
[Page 16]
Robot Vision
(Diagram: corresponding points of target and hand → binocular stereo → 3-D positions)
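For a rectified camera pair, binocular stereo reduces to depth from the disparity between corresponding points; a minimal sketch (focal length and baseline are illustrative assumptions, not MinERVA's calibration):

```python
def triangulate(xl, xr, y, f, b):
    """3-D point from a rectified stereo pair.

    xl, xr: horizontal image coordinates of corresponding points in the
    left/right image (principal point at 0); y: common vertical
    coordinate after rectification; f: focal length in pixels;
    b: baseline in metres.
    """
    disparity = xl - xr          # shrinks as the point moves away
    Z = f * b / disparity        # depth
    X = xl * Z / f               # lateral offset, left-camera frame
    Y = y * Z / f
    return X, Y, Z
```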
[Pages 17–19]
Example: Reaching
[Page 20]
Model Parameters and Calibration

• Arm: geometry, kinematics; 3 parameters (calibration: manufacturer data)
• Arm–Head relation: coordinate transformation; 3 parameters (calibration: measuring tape)
• Head–Camera relations: coordinate transformations; 4 parameters (calibration: HALCON)
• Cameras: pinhole camera model; 4 parameters, plus radial distortion (calibration: HALCON)
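The pinhole camera model named above maps a 3-D point in the camera frame to pixel coordinates with exactly four parameters (two focal lengths and the principal point); a minimal sketch:

```python
def project(X, Y, Z, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame point (X, Y, Z) to pixels.

    fx, fy: focal lengths in pixels; (cx, cy): principal point.
    The optional radial-distortion parameter is omitted here.
    """
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v
```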
[Page 21]
Use of Visual Feedback

correction rate | mean error | max error
none (0)        | 8.9 cm     | 20 cm
1 Hz            | 0.4 cm     | 1 cm
[Page 22]
Example: Vergence Error
[Page 23]
Example: Compensation
[Page 24]
Summary
• New control strategy for hand-eye coordination
• Extension of a biological model
• Unification of look-then-move & visual servoing
• Flexible, economical use of visual information
• Validation in simulation
• Implementation on a real hand-eye system