The Use of Multisensor Data for Robotic Applications

M. A. Abidi and R. C. Gonzalez
Abstract-Space applications can be greatly enhanced by the use of robotics and automation in activities such as orbital inspection and maintenance. Partial or full autonomy capabilities in space systems will enhance safety, reliability, productivity, adaptability, and will reduce overall cost. At the core of a robotic system is the ability to acquire, fuse, and interpret multisensory data to generate appropriate actions in the performance of a given task. The feasibility of realistic autonomous space manipulation tasks using multisensory information is presented. This is shown through two experiments involving a fluid interchange system and a module interchange system. In both cases, autonomous location of the mating element, autonomous location of a guiding light target, mating, and demating of the system are performed. Specifically, vision-driven techniques were implemented that determine the arbitrary two-dimensional position and orientation of the mating elements as well as the arbitrary three-dimensional position and orientation of the light targets. The robotic system is also equipped with a force/torque sensor that continuously monitors the six components of force and torque exerted on the end effector. Using vision, force, torque, proximity, and touch sensors, the fluid interchange system and the module interchange system experiments were accomplished autonomously and successfully.
I. INTRODUCTION
WITH the recent advances in the areas of vision and sensing, robots have become a major element of today's industrial world [1], [2]. They have been beneficial in replacing
humans not only in tasks at which robots are more efficient
but also in those that humans find undesirable because they
are strenuous, boring, difficult, or hazardous. Robotic systems
are being used currently in hazardous environments such as
those encountered in chemical, nuclear, military, underwater,
mining, and space applications. Examples of space applica-
tions are found in areas such as servicing and repair facilities
for spacecraft and manufacturing and assembling of space lab-
oratories [3]. Space applications can be greatly enhanced by the use of
robotics and automation. Activities such as collision-free nav-
igation, rendezvous and docking, satellite retrieval and ser-
vicing, orbital inspection, maintenance, refurbishment and
repair, and navigation of remotely piloted vehicles now re-
quire human assistance, often in environments that are hostile.
Presently, because of the current state of the art of artificial intelligence, robotics, and automation, these space activities are usually performed in manual mode, sometimes in teleoperated mode, but seldom in autonomous mode. This philosophy in-
creases the level of involvement of astronauts, further raising
their exposure to danger.
Autonomous operation should be the norm in hazardous
environments. Total or partial autonomy of space systems will
enhance safety, reliability, productivity, and adaptability and
will reduce the overall cost of space missions. Each of these
factors alone justifies automation [3]-[5].
1) Safety: Although a human-machine mix addresses many
of the safety issues, specifically those associated with the
safety of equipment, this strategy is hazardous to humans when
placed in space in either intravehicular or extravehicular ac-
tivities. Additional risk is run during extravehicular activities,
particularly those performed in high orbits where the levels
of radiation are harmful to humans. Other kinds of risks are associated with maneuvering large payloads. Consequently, optimal safety is achieved when humans are used exclusively as a last resort for dealing with unforeseen circumstances that are beyond the capabilities of current autonomous technologies.
2 ) Reliability: When a definite line of dichotomy can be
established between “normal” and “abnormal” situations,
machine-driven decisions can be more reliable than human
decisions. The latter can be affected negatively by omissions
and illusions that are often induced by stringent conditions
in manual operations or a lack of awareness in teleoperated
activities. Therefore, when circumstances allow for full task
automation, reliability is expected to be greatly enhanced.
3) Adaptability: Autonomous systems can and should be designed in a modular fashion to accommodate future growth
and to allow subsystems to monitor their own operation as
well as the operation of other subsystems. This leads to safer
and more reliable overall systems requiring less reliance on
human and ground support.
4) Cost: Improvements in safety, reliability, and adaptabil-
ity result in higher efficiency, hence a reduction in the overall
cost of space missions. It is noted, however, that automation
does not preclude the implementation of alternative strategies for manual override by humans, which would provide smooth passage of control from one to the other with the implications associated with a robust and flexible human-machine interface.

(Manuscript received December 21, 1988; revised July 24, 1989. This work was supported by the DOE University Program in Robotics for Advanced Reactors (Universities of Florida, Michigan, Tennessee, Texas, and the Oak Ridge National Laboratory) under Grant DOE-DE-FG02-86NE37968. The FIS and MIS experiments were carried out under Contract NAG-8.630-BO1 199962 with NASA, Marshall Space Flight Center. M. A. Abidi is with the Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996-2100. R. C. Gonzalez is with Perceptics Corporation, Knoxville, TN 37933-4991, and the Department of Electrical and Computer Engineering, University of Tennessee, Knoxville, TN 37996-2100. IEEE Log Number 8933037.)

Safety, reliability, and low cost are the main advantages to be gained from autonomous space operations [3], [6]. In order to achieve autonomous systems with these features to be implemented at low cost, the human presence or telepresence should be replaced/extended to some degree
8/2/2019 Abidi, Gonzalez - 1990 - The Use of Multi Sensor Data for Robotic Applications
http://slidepdf.com/reader/full/abidi-gonzalez-1990-the-use-of-multi-sensor-data-for-robotic-applications 2/19
160 IEEE T R A N S A C T I O N S O N R O B O T I C S A N D A U T O M A T I O N . VOL 6. NO 2. A P R I L 1990
by an array of sensors capable of mapping the environment
in which they operate. This sensory system should be able
to acquire, fuse, and interpret multisensory data to generate
actions based on sensed data. The need for several sensors
is evident from the complexity of the inspection and manip-
ulation tasks necessary to carry out space missions. The use
of sensors to perform inspection and manipulation tasks in a
variably structured or unstructured workspace is often based
on the following three steps:
1) On the basis of collected sensory data, a model of the workspace is built and continuously updated thereafter. (This model integrates information about the scene collected by various sensors. Knowledge in this model relates to position, orientation, size, and other features of the objects in the workspace.)
2) Using the model built in 1) along with a list of tasks to be performed, the system negotiates a list of feasible tasks and generates moves to implement each of them.
3) Each of the tasks selected in 2) is implemented after planning and verification.
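As a concrete reading of these three steps, the following Python sketch shows the sense-model-plan-act control flow they describe; the sensor, model, and planner objects are hypothetical placeholders and do not correspond to the authors' actual software.

```python
# Hypothetical sketch of the three-step sensing paradigm described above.
# None of these classes correspond to the actual system; they only
# illustrate the control flow: build/update a world model from sensors,
# negotiate feasible tasks, then plan and verify each selected task.

def run_mission(sensors, world_model, planner, requested_tasks):
    # Step 1: build and continuously update a model of the workspace.
    for sensor in sensors:
        world_model.update(sensor.read())

    # Step 2: negotiate the list of feasible tasks and generate moves.
    feasible = [t for t in requested_tasks if world_model.is_feasible(t)]
    plans = {t: planner.generate_moves(t, world_model) for t in feasible}

    # Step 3: implement each selected task after planning and verification.
    results = []
    for task in feasible:
        if planner.verify(plans[task], world_model):
            results.append(planner.execute(plans[task]))
    return results
```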
The experiments described in this paper are performed
within the scope of full task autonomy. In a teleoperation
mode, the human operator sets up the robotic system to per-
form the desired task. Using its sensory cues, the system maps the workspace and performs its operations in a fully autonomous mode. Finally, the system reports back to the human
operator on the success or failure of this task and resumes its
teleoperation mode.
II. OBJECTIVE
The intent here is to demonstrate the feasibility of realistic
autonomous robotic manipulations using multisensory infor-
mation cues. The environment is variably structured. Only the
relative position of the manipulator/end effector with respect
to the workpieces being manipulated is unknown when data
are being sensed. This does not preclude relative motion of
objects once an experiment has started. This task is performed using integrated information from vision, force/torque, prox-
imity, and touch sensing. The cooperative use of sensors is
illustrated through the following two experiments, which are
of basic importance in numerous NASA operations.
1) Fluid Interchange System (FIS): The purpose of this
experiment is to autonomously mate and demate a fluid in-
terchange system using multisensory data. A mock-up of the
FIS has been designed and built. It is composed of a mating
nozzle element, a receptacle, and a number of light targets
(Fig. 1).
2) Module Interchange System (MIS): The purp ose of this
experiment is to autonomously identify, remove, and replace
a faulty or spent module (from an orbiter) by a new module
(from a servicer). A mock-up of the MIS system was designed and built (Fig. 2). It is composed of a mating module element, a mounting rack, and light targets. Here, again, the system is manipulated using cues from the vision, force/torque, prox-
imity, and touch sensors.
Because of the similarity between the FIS and MI S exper-
iments, in this paper, we will describe the FIS operation in
Fig. 1. Fluid interchange system (FIS) setup: nozzle, receptacle, and light target.

Fig. 2. Module interchange system (MIS) setup: exchange module, mounting rack, and light target.
detail and mention the results of the MIS experiment only
briefly.
A . Experimental Setup
The mock-up of the FIS is composed of an align-
ment/locking mechanism that consists of a nozzle and a re-
ceptacle mounted in a module rack (Fig. 1). This module also
holds the four-light guiding target used by the vision system.
The nozzle and receptacle are essentially a large male and female BNC-like connector pair. The nozzle is cylindrical and has two stainless steel pins located 180° apart, perpendicular
to its outer surface, extending 0.50 in. It has a flat, padded,
parallelepipedic end that serves as an attachment point to the
robot end effector. The nozzle is 4.75 in long. It consists of
a cylindrical portion, which is 2.25 in in height. Its inner and
outer diameters are 1.25 and 1.75 in, respectively. The at-
tachment handle is 2.50 x 1.75 x 1.25 in and is centered on
the opposite end of the cylindrical portion from the nozzle
opening.
The receptacle is mounted on a rectangular plate of
6.75 x 8.75 in, which is in turn mounted on a module of
1.25 x 1.25 x 7.50 in in dimension. The receptacle module
system is 12.75 in deep. It is mounted in a standard nuclear in-
strumentation module (NIM) rack. The receptacle has a flared rim and inner and outer diameters of 2.00 and 2.90 in, respec-
tively. At a depth of 1.40 in, a spring-loaded pressure plate is
mounted; its travel distance is 0.25 in. The receptacle also has
two V notches located 180° apart in its rim leading into two
grooves that lock the nozzle into position once it is inserted.
Th e receptacle also holds the four-light guiding target, which
is used by the vision system to locate the position and orien-
tation of the receptacle. The positions of the light target, with
respect to the lower left-hand corner of the receptacle, are in
inches: (6.25, 7.50), (6.25, 1.20), (0.50, 1.20), and (7.50,
0.50). The first dimension is the distance across the rack, and
the second is the height.
The rack is 8.50 x 20.00 x 17.25 in. The inner dimensions
are 7.75 x 17.00 x 10.60 in. Modules are guided into the rack
by 12 sets of equally spaced (0.35-in) strips placed along the
inner top and bottom surfaces of the rack.
The initial two-dimensional position and orientation of the
nozzle are unknown. Its height, however, is known. The three-
dimensional position and orientation of the receptacle holding
the target are unknown. The only constraint in this experi-
ment is that the FIS (nozzle, receptacle, and FIS target) be
positioned in the field of view of the camera and be within
reach of the end effector. The world coordinate system (Fig.
3) is defined with respect to the robotic workstation table and
is divided into two areas: the mating element area and the
target area. This means that in both experiments, the robotic
system is expected to find the mating element anywhere in the mating area and locate the light target anywhere in the target area.

All the dimensions mentioned above, along with the infor-
mation corresponding to the relative positions of the objects
being manipulated in this experiment, are stored in a database
that is easily updatable by the operator and directly accessible
by each sensor module during the running of the operation.
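To make the idea concrete, the following sketch shows one way such a database could be represented; the dictionary keys and layout are invented for illustration (only the numerical dimensions come from the text above), and this is not the authors' actual database.

```python
# Hypothetical workspace database holding FIS dimensions (in inches) quoted
# in the text; each sensor module would read from a structure like this one.
FIS_DATABASE = {
    "nozzle": {"length": 4.75, "inner_diameter": 1.25, "outer_diameter": 1.75,
               "pin_extension": 0.50, "handle": (2.50, 1.75, 1.25)},
    "receptacle": {"inner_diameter": 2.00, "outer_diameter": 2.90,
                   "pressure_plate_depth": 1.40, "plate_travel": 0.25},
    "light_target": {  # positions relative to lower left corner of receptacle
        "positions": [(6.25, 7.50), (6.25, 1.20), (0.50, 1.20), (7.50, 0.50)]},
    "rack": {"outer": (8.50, 20.00, 17.25), "inner": (7.75, 17.00, 10.60),
             "guide_strip_spacing": 0.35},
}

def lookup(path):
    """Fetch a dimension, e.g. lookup(("nozzle", "length")) -> 4.75."""
    entry = FIS_DATABASE
    for key in path:
        entry = entry[key]
    return entry
```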
The FIS and MIS experiments illustrating the cooperative
use of multisensory information for autonomous operations
are performed using (see Figs. 4 and 5) the following:
A Cincinnati Milacron T3-726 six-degree-of-freedom industrial robot
Several external sensors, including vision, range, proximity, touch, force/torque, and sound (the range and sound sensors are not used in this experiment because they both use ambient air as a medium of transduction; hence, they are not appropriate for space applications)
A PERCEPTICS-9200E image processor
A VAX 11/785-VMS computer.
B. Description of the FIS and MIS Experiments
The FIS and MIS experiments can be summarized as fol-
lows. A mating element appears anywhere in the mating area.
Its two-dimensional position and orientation on the table are
unknown. The mating element is within the field of view of
a camera rigidly mounted on the arm and within reach of the
robot gripper. The light targets are within the field of view of the camera when the robot is in an arbitrary position within a
predetermined area. The paradigm described in the previous
section is followed here for the execution of the experiment.
The robot is moved by the human operator such that the light
target is visible; then, the robot is put in autonomous mode.
The robot determines the approximate position of the mount-
ing rack using the vision sensor. This measurement is later
refined once the robot knows the approximate position of the
target. Afterward, the robot positions itself so that the mating
I"
T ar ge t s M a t i ng E l em en tsA r ea A r ea
Fig. 3 . World coordinate system with respect to robotic workstation.
Fig. 4. Robotic system and its sensors used for FIS and MIS experiments.1-vision; 2-range and sound; 3-proximity; 4-touch; and 5-force andtorque.
Fig. 5. Interface functions for autonomous robotic manipulation. Only the vision, force/torque, proximity, and touch sensors are used in this experiment. The range and sound sensors are not included because they cannot be supported in space.
area is within the field of view of the camera and uses its vi-
sion sensor to determine (approximately) the two-dimensional
position of the mating element. This measurement is later re-
fined by moving the robot arm closer to the mating element.
The robot then picks up the mating element after checking
for its presence by using its proximity and touch sensors. The robot then proceeds to insert the nozzle (FIS) or the exchange module (MIS). During the entire experiment, the force/torque
sensor is active.
In Section III, the vision aspects of this system are inves-
tigated. Specifically, we cover in detail the two-dimensional
mating element positions and the three-dimensional target po-
sitions. In Sections IV-VI, the state of the art and the properties of the sensors implemented in this work for force/torque, proximity, and touch, respectively, are discussed. In each of these sections, we briefly discuss previously published work by others that relates to the topic of that section. Section VII is devoted to the experimental results.
III. VISION SENSING FOR THE INTERCHANGE SYSTEMS
A . Background
By vision sensing, one usually refers to the two-dimensional
optical information provided by a vision sensor or camera.
There are virtually no vision sensors built specifically for
robotics applications. Most of the vision sensors used in
robotics are borrowed from the television industry; hence,
they are generally not sized properly, rugged enough, or sen-
sitive enough. Two of the most common types of cameras
mounted on a robot for vision sensing are vidicons and solid-
state arrays, and both types are commonly available in reso-
lutions varying from 32 x 32 to 512 x 512 sensing elements
per image. In general, these sensors are rigidly fixed to the
manipulator in order to simplify the coordin ate transformation
between world system and camera system. Because solid-state
cameras are more rugged and of smaller size, they are a better choice for robotics applications. However, vidicons are generally cheaper and present fewer problems in interfacing to some image-processing systems; therefore, they may be preferred for fixed mounting in the work space (such as over-
head). Of all the sensors, vision has the distinction of having
the widest potential range of applications. Vision may be used
as a substitute for, or in conjunction with, many of the other
types of sensors for the purposes of objec t detection, recogni-
tion, location, and inspection. In addition, it provides a means
for monitoring the environment for the presence of obstacles
and for the occurrence of abnormal events [7].
Object detection, recognition, and location are tasks to
which vision is ideally suited. Using vision, such tasks may be
performed in a natural and efficient manner prior to contact,
thus allowing the robot to plan its actions before any movement is made. This is contrasted with other sensors, which
may require extensive robot motion to accomplish these tasks.
Because processing a visual image involves a large amount
of variable data, the algorithms required are generally of
far greater complexity than those used for other sensors.
There are a number of contributing factors, which tend to
complicate the situation. One major problem results from the
difficulty of deducing three-dimensional locations from a two-
dimensional image because there is a loss of information in
the projection process. If the object is restricted to movement
in a plane, such as on a table or conveyor belt, then deducing
location may be fairly simple. However, if motion is unre-
stricted, then three-dimensional location may be much more
difficult, and approaches such as structured lighting, stereo, and other multiple imaging techniques may be necessary.
A great burden can also be placed on im age-processing al-
gorithms as a result of lighting conditions in the work space.
Among these are inadequate or nonuniform light, shadows, re-
flectance variations, and occlusion of objects. This latter prob-
lem may result from atmospheric occlusions, such as smoke,
from other objects, as would be the case in picking a part
from a bin, or from the robot hand itself if it is not possible to
position the camera in an ideal location such as on the gripper
itself. Although vision sensing is capable of performing many
of the tasks required of an intelligent rob ot, its high complexity
and cost limit its use [2], [7]-[9].
B. Vision Subsystem
During the FIS and MIS experiments, the vision system
performs two principal tasks: 1) identification of two-
dimensional position and orientation of the FIS and MIS mat-
ing elements and 2) identification of three-dimensional posi-
tion and orientation of the mounting elements for the FIS and
MIS. In the following, we highlight the two methods used in
our vision system. At the same time, we mention some similar
existing systems that were reported in the literature.
1) Two-Dimensional Position and Orientation of Mat-
ing Elements: Here, we describe a method for determining
the arbitrary two-dimensional position and orientation of the
mating elements of the FIS and MIS using a vision sensor.
Many of the vision-based systems used for path planning for
mobile robots, automated inspection, and mechanical assem-
bly use similar techniques. A summary of these techniquesmay be found in [101 an d [113. In this experiment, several
preprocessing steps are performed on the image depicting themating element. First, the grey-scale image I ' is transformed
to a binary image I using thresholding [l], [2]. The charac-
teristic function for this image f ( x , ) s equal to 0 for image
points on the background and 1 for points on the object.
We choose a point that represents the position of a two-dimensional object. In this case, the center of the area of the object was selected. To calculate the center of area $(\bar{x}, \bar{y})$, we find the area $A$ and the first moments $M_x$ and $M_y$ of the object using the following expressions [12]:
$$A = \int\!\!\int f(X, Y)\, dX\, dY \qquad (1)$$

$$M_x = \int\!\!\int X f(X, Y)\, dX\, dY \qquad (2)$$

$$M_y = \int\!\!\int Y f(X, Y)\, dX\, dY. \qquad (3)$$

The coordinates of the center of area $\bar{x}$ and $\bar{y}$ are computed as

$$\bar{x} = M_x / A \qquad (4)$$

$$\bar{y} = M_y / A. \qquad (5)$$

For an $N \times N$ discrete image

$$\bar{x} = \sum_{x=1}^{N}\sum_{y=1}^{N} x\, f(x, y) \Big/ \sum_{x=1}^{N}\sum_{y=1}^{N} f(x, y) \qquad (6)$$

$$\bar{y} = \sum_{x=1}^{N}\sum_{y=1}^{N} y\, f(x, y) \Big/ \sum_{x=1}^{N}\sum_{y=1}^{N} f(x, y). \qquad (7)$$
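The discrete forms (6) and (7), together with the least-second-moment orientation introduced next in (9)-(12), are illustrated by the short NumPy sketch below. It is a reimplementation for illustration only (the threshold value and the synthetic image are assumptions), not the code that ran on the PERCEPTICS-9200E.

```python
import numpy as np

def center_and_orientation(grey, threshold):
    """Centroid (6)-(7) and least-second-moment orientation of a binary object."""
    f = (grey > threshold).astype(float)      # characteristic function f(x, y)
    area = f.sum()                            # discrete version of (1)
    ys, xs = np.indices(f.shape)              # row index ~ y, column index ~ x
    x_bar = (xs * f).sum() / area             # (6)
    y_bar = (ys * f).sum() / area             # (7)
    # Central second moments, as in (10)-(12).
    mxx = ((xs - x_bar) ** 2 * f).sum()
    mxy = ((xs - x_bar) * (ys - y_bar) * f).sum()
    myy = ((ys - y_bar) ** 2 * f).sum()
    # Axis of least second moment, as in (9).
    theta = 0.5 * np.arctan2(2.0 * mxy, mxx - myy)
    return x_bar, y_bar, np.degrees(theta)

# Tiny synthetic example: a bright axis-aligned rectangle on a dark background.
img = np.zeros((128, 128))
img[40:60, 30:90] = 255
print(center_and_orientation(img, 128))
```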
The usual practice to determine the two-dimensional orien-
tation of an object is to compute the axis of the least second moment, or moment of inertia [12]. To accomplish this, we find the line for which the integral of the squared distance to points in the object is minimum, that is, we minimize with respect to $\theta$

$$E = \int\!\!\int r^2(x, y) f(x, y)\, dx\, dy \qquad (8)$$

where $r(x, y)$ is the minimum distance from the point $(x, y)$ to the line being sought. To represent a line, we use the two parameters $\rho$ and $\theta$ (where $\rho$ is the perpendicular distance from the origin to the line, and $\theta$ is the angle between the $x$ axis and the line measured counterclockwise) (Fig. 6). The minimization of $E$ yields the following orientation angle:

$$\theta = \frac{1}{2}\tan^{-1}\!\left[\frac{2 M_{xy}}{M_{xx} - M_{yy}}\right] \qquad (9)$$
where
$$M_{xx} = \sum_{x=1}^{N}\sum_{y=1}^{N} (x - \bar{x})^2 f(x, y) \qquad (10)$$

$$M_{xy} = \sum_{x=1}^{N}\sum_{y=1}^{N} (x - \bar{x})(y - \bar{y}) f(x, y) \qquad (11)$$

$$M_{yy} = \sum_{x=1}^{N}\sum_{y=1}^{N} (y - \bar{y})^2 f(x, y) \qquad (12)$$
are the second moments of the image. The outcome of this
operation is the two-dimensional position and orientation of
the mating element $(\bar{x}, \bar{y}, \theta)$.

2) Three-Dimensional Position and Orientation of the
Mounting Element: Here, we describe a method that deter-
mines the three-dimensional position and orientation of four-
light guiding targets, which are utilized to locate the FIS andMIS mounting elements. With reference to Fig. 7 , the objec-
tive is to recover the three-dimensional position (with respect
to a coordinate system fixed to the robot) of each of the point targets $P_0$, $P_1$, $P_2$, and $P_3$, knowing their respective images $p_0$, $p_1$, $p_2$, and $p_3$ as well as the size of the target [$P_i = (X_i, Y_i, Z_i)$ and $p_i = (x_i, y_i)$]. Since the position and
the orientation of the camera are known with respect to the
robot, recovering the position of the target points in a camera
reference frame yields the position and the orientation of the
targets in a coordinate system fixed to the robot.

The need for accurate, fast, and efficient positioning of
a robot arm with respect to a given target has been ad-
dressed in many ways using painted lines, infrared beacons,
and much more sophisticated three-dimensional vision techniques [13]-[17]. A summary of most of these techniques can
be found in [18]. In this section, we show that the position of
the target can be determined up to the desired accuracy using
a four-point light target. The solution is direct and therefore
superior to most of the other iterative schemes.
Fig. 6. $\rho$-$\theta$ representation of a line.

Fig. 7. Target points and their images in a pinhole camera model.
In an augmented coordinate system, the image coordinates $(x, y)$ of an image point are related to the world coordinates $(X, Y, Z)$ by

$$x = \frac{ch_1}{ch_4}, \qquad y = \frac{ch_2}{ch_4} \qquad (13)$$

where $ch_1$, $ch_2$, $ch_3$, and $ch_4$ represent the elements of the homogeneous coordinates $(ch_1, ch_2, ch_3, ch_4)'$ of the point [1]. In this paper, we use the pinhole camera model for our vision sensor (Fig. 7). The image coordinates $(x, y)$ and the world coordinates $(X, Y, Z)$ are related as follows [1], [2], [19]:

$$W^c = AW \qquad (18)$$

where

$$W = (X, Y, Z, 1)' \qquad (19)$$

$$W^c = (X^c, Y^c, Z^c, 1)'. \qquad (20)$$
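A numerical illustration of (13) and (18)-(20) is sketched below: a homogeneous matrix is applied to an augmented world point and the image coordinates are obtained by dividing by the fourth homogeneous component. The particular matrix used here (a perspective step with focal length f composed with a camera translation) is an assumption for illustration; in the paper, A is recovered from the target images as described in the remainder of this section.

```python
import numpy as np

def project(A, world_point):
    """Homogeneous projection: ch = A (X, Y, Z, 1)', then x = ch1/ch4, y = ch2/ch4."""
    W = np.append(np.asarray(world_point, dtype=float), 1.0)   # (X, Y, Z, 1)'
    ch = A @ W                                                  # homogeneous coordinates
    return ch[0] / ch[3], ch[1] / ch[3]

# Hypothetical pinhole camera: perspective division by focal length f (inches).
f = 1.0
perspective = np.array([[1, 0, 0,        0],
                        [0, 1, 0,        0],
                        [0, 0, 1,        0],
                        [0, 0, -1.0 / f, 1]])
camera_pose = np.eye(4)
camera_pose[2, 3] = -30.0          # shift the scene 30 in along -Z (assumed placement)
A = perspective @ camera_pose

print(project(A, (14.02, 24.14, 4.75)))   # nozzle-like world point from the experiment
```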
$$R_\theta = \begin{bmatrix} \cos\theta & \sin\theta & 0 & 0 \\ -\sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (25)$$

$$R_T = R_\alpha\, R_\beta\, R_\theta. \qquad (28)$$

The transformation $R_T$ denotes the total movement. Equations (25)-(27) introduce implicitly the idea that this transformation preserves distance. Hence, for $R_T$ the following holds true:

$$a_1 a_2 + a_4 a_5 + a_7 a_8 = 0 \qquad (36)$$

$$a_1 a_3 + a_4 a_6 + a_7 a_9 = 0 \qquad (37)$$

$$a_2 a_3 + a_5 a_6 + a_8 a_9 = 0. \qquad (38)$$

Note that not all these equations are independent. In the case where the translation vector is not zero, the global movement can be described by a matrix similar to $R_T$, which includes the coefficients $a_{10}$, $a_{11}$, and $a_{12}$ characterizing the translation. Obviously, (30)-(38) still hold true:

$$A = \begin{bmatrix} a_1 & a_2 & a_3 & a_{10} \\ a_4 & a_5 & a_6 & a_{11} \\ a_7 & a_8 & a_9 & a_{12} \\ 0 & 0 & 0 & 1 \end{bmatrix}. \qquad (39)$$
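The structure of $R_T$ and of the matrix $A$ in (39) can be checked numerically: composing elementary rotations yields a 3 x 3 block whose rows and columns satisfy relations such as (36)-(38), and appending a translation fills the coefficients a10, a11, a12. The sketch below does this; the angle conventions used for the tilt and rotation matrices are assumptions, since only R_theta is reproduced above.

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1.0]])

def rot_y(beta):
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c, 0, -s, 0], [0, 1, 0, 0], [s, 0, c, 0], [0, 0, 0, 1.0]])

def rot_x(alpha):
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1.0]])

def translation(tx, ty, tz):
    T = np.eye(4)
    T[:3, 3] = (tx, ty, tz)
    return T

# Total movement R_T = R_alpha R_beta R_theta, then a translation (a10, a11, a12).
RT = rot_x(0.3) @ rot_y(-0.2) @ rot_z(0.7)
A = translation(5.0, -2.0, 1.5) @ RT

R = A[:3, :3]   # the a1..a9 block of (39)
# Column orthogonality, e.g. a1*a2 + a4*a5 + a7*a8 = 0 as in (36)-(38).
print(np.allclose(R.T @ R, np.eye(3)))   # True: the transformation preserves distance
print(A[:3, 3])                           # translation coefficients a10, a11, a12
```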
Fig. 8. Transformation of world coordinate system into standard position $T$, transformation of camera coordinate system into ideal position $S$, and relationship between the two transformations. $W_i$: original world coordinates; $W_i^\alpha$: standard position coordinates; $W_i^\gamma$: ideal position coordinates; $W_i^c$: original camera coordinates.
Three preliminary transformations allow some simplifying assumptions about the problem geometry and also indicate the adjustment to be applied to the solution to compensate for these steps. Throughout this paper, we use the vector $W_i$ to represent the augmented vector $(X_i, Y_i, Z_i, 1)'$, where $(X_i, Y_i, Z_i)$ are the $(X, Y, Z)$ coordinates of point $P_i$. The three target points are denoted by $P_0$, $P_1$, and $P_2$. As before, any superscript on $W$ indicates a corresponding superscript on $X$, $Y$, and $Z$ and also refers to the coordinates of the point using a system other than the fixed world system. These transformations are shown diagrammatically in Fig. 8. They are briefly summarized below. Details of each transformation and its effect on the overall process in the recovery of the relative position of the light targets are given in Appendices I-III.

a) The first step transforms this general target positioning
problem into an intermediate coordinate system in which the three target points are in what we shall define as standard position. The standard position is characterized by $P_0$ at the origin, $P_1$ on the positive $X$ axis, and $P_2$ in the first quadrant of the $XY$ plane (or on the positive $Y$ axis). This transformation will be denoted by $T$; its effect on the recovery of the target's image is detailed in Appendix I.
b) The second preprocessing step must be performed on each image to be processed. It involves another transformation, which describes how the image would look to a camera that has its lens center in the same location as the real camera but is looking directly at the origin of a given $\alpha$ system and has its $x$ axis in the same plane as the $X$ axis of the $\alpha$ system. We will refer to this "fictitious" camera as being in ideal position and will call the image seen by such a camera an ideal image. Such an image is characterized by the image of $P_0$ ($p_0$) at the origin and the image of $P_1$ ($p_1$) on the positive $x$ axis. This transformation will be denoted by $S$; its effect on the recovery of the target position is examined in Appendix II.
c) The transformation relating the coordinates of a given point from the standard position to the ideal position is described by matrix $C$. The computation of $C$ and the combined effect of $T$ and $S$ are given in Appendix III. (The transformation $B$ is defined as an intermediary step in the computation of $A$.)

For implementation purposes, the three previous operations are summarized as follows:
Step 1: Compute the transformation matrix $T$ using (58).
Step 2: Compute the transformation matrix $S^{-1}$ using (77).
Step 3: Compute the transformation matrix $C$ using (102).
Step 4: Compute the transformation matrix $A$ using $A = S^{-1}CT$.
The unknown elements of $C$ are gathered in the vector $C' = (c_{11}, c_{12}, c_{22}, c_{31}, c_{32}, c_{34})'$ and satisfy a linear system whose $6 \times 6$ coefficient matrix $M$ is built from the standard-position coordinates of the target points and their ideal-image coordinates (41), so that

$$MC' = C_0. \qquad (42)$$
Note that the first row of (41) is superfluous; it was added to make $M$ a square matrix. Since we have six unknowns and five equations, we need to add one more equation. Using the fact that $(C_{11}, C_{21}, C_{31})'$ is orthonormal and that $C_{21} = 0$, we may write

$$C_{11}^2 + C_{31}^2 = 1. \qquad (43)$$
Because of the nature of (42), all the unknown terms can be written as a function of

$$C_{11} = \pm\frac{1}{\sqrt{1 + r^2}}. \qquad (44)$$

Without inverting $M$, we can compute $r = C_{31}/C_{11}$ by combining the equations given in the preceding system; $r$ is obtained in (45) as a ratio of three quantities $B_1$, $B_2$, and $B_3$
that are combinations of the standard-position coordinates and the ideal-image coordinates of the target points.
Although (44) suggests that $C_{11}$ can take two distinct values, in reality this cannot happen at the same time (46). The case where $C_{11} = 0$ leads to an infinite value of $r$ or to $P_1$, $P_2$, and $P_3$ being collinear, which contradicts our first assumption about the relative position of these points.
Once $C_{11}$ is computed, the remaining parameters are readily computed. Using (50), the matrix $A$ is computed, and the three-dimensional coordinates of $P_0$, $P_1$, $P_2$, and $P_3$ are each determined by solving (13) for each target.
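Steps 1-4 and the final back-substitution are plain matrix operations once T, S^{-1}, and C are available. The sketch below assumes these matrices are already known (the values shown are hypothetical stand-ins) and only illustrates the composition A = S^{-1}CT and the mapping of an augmented point; it does not reproduce the derivation of C from (41)-(45).

```python
import numpy as np

def compose_A(S_inv, C, T):
    """Step 4: A = S^{-1} C T."""
    return S_inv @ C @ T

def to_camera(A, P):
    """Map an augmented world point W = (X, Y, Z, 1)' through A, as in (18)."""
    W = np.append(np.asarray(P, dtype=float), 1.0)
    Wc = A @ W
    return Wc[:3] / Wc[3]

# Hypothetical stand-ins for the matrices computed in Steps 1-3.
T = np.eye(4)
T[:3, 3] = (-1.0, -2.0, 0.0)                         # moves this P0 to the origin
C = np.eye(4)                                        # standard -> ideal position
S_inv = np.eye(4)
S_inv[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]   # an arbitrary rotation

A = compose_A(S_inv, C, T)
print(to_camera(A, (1.0, 2.0, 0.0)))                 # this example point maps to the origin
```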
IV. FORCE AND TORQUE SENSING FOR THE INTERCHANGE SYSTEMS
Active and passive forcehorque sensors are capable of en-
hancing the manipulation capabilities of a robot in a variety of
tasks, such as contact detection, contour following, tool inser-
tion, and mechanical assembly. Force and torque sensors pro-
vide a safety device that can interrupt the robot motion when
an unexpected obstacle is encountered. Though often mounted
at the end effector level, these sensors protect practically the entire arm because they are capable of sensing any relative
motion between the end effector and the rest of the robot arm
as well. This is a major difference between these sensors and
tactile sensors, which provide only local information. Equally
important is the use of force and torque sensors in manipulation tasks. In dynamically constrained operations, excessive levels of local force and torque are very common. These sen-
sors can be used as feedback devices that clip or eliminate
excessive forces and/or torques. The peg-in-the-hole problem is a classical example where active and/or passive sensing can be applied.

A. Background
Force and torque sensing generally refer to the measurement of the global forces and torques applied on a workpiece by a robot manipulator. In force and torque measurement, we
distinguish between active and passive sensing.
Active sensing is performed through either joint or wrist
sensing. Early techniques for determining force and torque
were based on measuring the individual forces acting on each
manipulator joint. These individual forces can be inferred
from the fluid back pressure or current in the armature for
hydraulic robots or electric robots, respectively. These tech-
niques are highly sensitive to the weight and motion of the
robot itself. This problem is easily eliminated by using a wrist
sensor since the force and torque components are measured
in a more direct manner. Wrist sensors are composed of a
number of sensing elements arranged in some configuration.
Strain gauges are the most commonly used sensing elements
because of their reliability, durability, and low cost, although many other elements are suitable for this purpose. Most wrist
sensors produce eight raw readings that need to be converted
into the six Cartesian components of force and torque along
the major axes using a resolved force matrix. This is done
through computer software or the hardware processor that is
usually provided by the manufacturer of off-the-shelf sensors.
Wrist sensors are usually mounted at the base of the robot
hand and are capable of performing a single-point contact
measurement [1].
Passive sensing refers to the added kinematic design of a
force/torque sensor that allows the annihilation of excessive forces without the need for active servo feedback. The instrumented remote center compliance (IRCC) system typifies this concept. The IRCC is a multiaxis compliant interface between the robot arm and the end effector. It continuously monitors the location of the compliance center along the end-effector axis as well as the lateral and angular stiffness of the system taken at the center of compliance. The key elements of the IRCC system are three elastomer shear pads, which respond by deflecting laterally, angularly, or both whenever the end effector or a gripped workpiece experiences a force or torque [22], [23].
B. Experimental Setup
The force/torque sensor used in this work is the Lord FT-
30/100. The sensor is designed for wrist mounting. It uses
eight piezoresistive strain gauges that provide real-time feed-
back of the location, magnitude, and orientation of forces and
moments. The system is equipped with a processor that re-
solves the eight raw pieces of data into six Cartesian forces
and torques at a rate of 83 Hz. The converted force and torque
measurements can be referenced with respect to any element
in the arm, including the sensor itself. This allows the can-
cellation of the effect of the load of the sensor, end effector
or tool, or workpiece, which would otherwise degrade the
accuracy of the measurements. This system has a maximum
force and torque capacity of 3 0 lb and 100 in.lb, respectively.
The system resolution is about 1 oz and 1 in.oz for force and
torque, respectively. A one-chip Z8 microcomputer has been added to this system for communication between the sensor and the main processor (VAX 11/785-VMS) for sensor bias-
ing, threshold setting, and interface to other sensors.
During the entire experiment, including camera positioning
and activation, data acquisition and processing, nozzle recog-
nition and manipulation, and mating and demating of the FIS, the force/torque sensor is active. Its role is to continuously monitor each of the three force components $F_x$, $F_y$, and $F_z$ along the three axes $X$, $Y$, and $Z$, respectively, relative to the robot end effector. It also monitors the three components of torque $T_x$, $T_y$, and $T_z$. At a rate of 83 Hz, the actual six components
are read and compared against a preset set of values

$$V^{\max} = (|F_x|^{\max}, |F_y|^{\max}, |F_z|^{\max}, |T_x|^{\max}, |T_y|^{\max}, |T_z|^{\max})'.$$
The robot motion is immediately halted every time any com-
ponent of force or torque exceeds its preset threshold. This
means that, at all times, each of the six measured components remains below its corresponding threshold. In this experiment, when the robot encounters excessive force (torque), it stops, waits for a preset time (5 s), then proceeds with the task where it left off. If these interrupts persist, the system attempts indefinitely to resume its task until it is reset manually. It is worth noting that the threshold vector $V^{\max}$ is a dynamic variable that can be changed at any moment of the execution time of a given task. For the FIS experiment, $V^{\max}$ was set to

$$V^{\max} = (15, 15, 15, 50, 50, 50)'$$

during motion (no collision is expected) and double this value during manipulation (collision is expected). The forces are in ounces; the torques are in ounce-inches.
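The monitoring rule just described amounts to a componentwise comparison at 83 Hz, with a 5-s pause after each violation. The sketch below renders that logic; read_wrench() and halt_robot() are hypothetical callables standing in for the Lord FT-30/100 readout and the robot controller, not real driver calls.

```python
# Hypothetical force/torque watchdog: compare each of the six components
# (Fx, Fy, Fz in ounces; Tx, Ty, Tz in ounce-inches) against the threshold
# vector V_max and halt the robot when any component exceeds its limit.
import time

V_MAX_MOTION = (15, 15, 15, 50, 50, 50)                    # thresholds during free motion
V_MAX_MANIPULATION = tuple(2 * v for v in V_MAX_MOTION)    # doubled when contact is expected

def exceeds(wrench, v_max):
    return any(abs(w) > limit for w, limit in zip(wrench, v_max))

def monitor(read_wrench, halt_robot, v_max, period=1.0 / 83.0):
    """Poll the sensor at ~83 Hz; halt and pause 5 s whenever a threshold is exceeded."""
    while True:
        if exceeds(read_wrench(), v_max):
            halt_robot()
            time.sleep(5.0)    # wait, then the task resumes where it left off
        time.sleep(period)
```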
V. PROXIMITY SENSING OF THE INTERCHANGE SYSTEMS
A . Background
Proximity sensing usually involves the detection of a distur-
bance brought to a monitored signal by the presence/absence
of an object within a specific volume around the sensing ele-
ment. As opposed to range sensors, which provide the actual distance between the targeted object and the sensor, proximity sensors provide only a binary output, indicating if this distance
is below or above a predetermined threshold distance. Most
proximity sensors are either inductive, magnetic, capacitive,
ultrasonic, or optical.
Inductive proximity sensors detect the variation in a current
resulting from the deformation of the flux lines of a permanent
magnet caused by putting the sensor close to a ferromagnetic
material. Magnetic proximity sensors are based either on the
Hall effect or on Reed switches. The Hall effect is manifested
when a ferromagnetic material is placed close to a semicon-
ductor magnet (the sensing element in the proximity sensor).
This causes an increase in the strength of the magnetic field,
hence a corresponding increase in magnitude of the Lorentz
force applied on the magnet and the voltage across the semi-
conductor. Magnetic proximity sensors that are based on Reed
switches have two Reed magnets in a glass tube. When the
sensor is brought in the vicinity of a magnetic material, the
resulting magnetic field causes the contact to close, which
turns on the sensor. Capacitive proximity sensors detect the
beginning or end of oscillations due to a change in capacitance
of a resonant circuit caused by the change in the permittivity
of the dielectric material in the sensing capacitor. This change
is induced by the close proximity of the sensed object. Optical
proximity sensors detect the presence of an object by sensing
the echo of a light beam generated by the sensor. A typical
setup includes an infrared light emitter such as a solid-state
light-emitting diode (L ED ) and an infrared light receiver such
as a photodiode. Ultrasonic proximity sensors are in principle identical to optical sensors. An electroacoustic transducer
(generally piezoelectric) emits a narrow acoustic beam, which
gets reflected back if the sensor is placed in close vicinity of
an object [2], [24].
This variety in sensing modalities yields an equal variety
in characteristics. Some sensors are capable of detecting the
presence of objects from great distances averaging several
feet (ultrasonic and optical); others are capable of sensing
only within a few millimeters (inductive and magnetic). Ca-
pacitive, ultrasonic, and optical sensors are capable of detect-
ing most materials but with significantly varying sensitivity,
which is highly dependent on surface reflection properties and
orientation. On the other hand, inductive and magnetic sen-
sors are sensitive only to metals or, more specifically, ferrous
metals. Proximity sensing has a wide range of applications,
such as collision avoidance, object location and acquisition,
and object tracking [7], [22], [25].
B . Experimental Setup
The proximity sensor used in this experiment is optical;
it is made of two solid-state infrared light emitters coupled
with two solid-state light receivers arranged in the configura-
tion shown in Fig. 9 [7]. Both the emitter and the receiver are
placed such that the light bouncing off an object in the sensing
volume comes directly into the receiver. When the light inten-
sity is greater than a threshold value, the sensor is activated.
It is designed to detect the presence of an object between the
parallel jaws of the end effector when the beam is interrupted.
Conversely, it detects the presence of an object in front of the end effector when a return beam is sensed. In both the FIS
and the MIS manipulation experiments, the proximity sensor performs checks on the position of the mating element. The manipulation experiment is halted whenever the actual reading
is different from that which is expected.
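Since the proximity reading is binary, the check reduces to comparing the observed bit with the value expected at the current stage of the task and halting on a mismatch. A minimal sketch of that logic follows; the sensor and halt callables are hypothetical.

```python
# Hypothetical binary-sensor check: halt the manipulation when the actual
# proximity (or touch) reading differs from the value expected at this step.
def check_expected(read_sensor, expected, halt_experiment, label):
    actual = bool(read_sensor())
    if actual != expected:
        halt_experiment(f"{label}: expected {expected}, read {actual}")
        return False
    return True

# Example use before grasping the nozzle: the beam between the jaws should be
# interrupted (object present) while the touch switches are not yet closed.
# check_expected(proximity_between_jaws, True, halt, "proximity")
# check_expected(touch_switches_closed, False, halt, "touch")
```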
VI. TOUCH SENSING FOR THE INTERCHANGE SYSTEMS
A . Background
A touch sensor can provide a robot with valuable informa-
tion that can be crucial in many applications requiring recog-
nition, acquisition, and manipulation of complex workpieces. Touch sensors presently used in industry are often simple and
crude devices with limited capability. As with many other
sensing modalities, the state of the art in touch sensing is
still very primitive. Up until the 1970's, microswitches, pneu-
matic jets, and ON/OFF pressure-sensitive pads were the only
tactile sensors in existence. They were primarily used as limit
switches for collision detection. Since the 1980's, however,
considerable effort has been devoted to the development of
more sophisticated touch sensors and to the integration of these
sensors in robotic systems [8], [26].
The historical lack of interest in touch and many other
contact sensors may be attributed to an early belief that vi-
sion sensors could capture texture and shape information with
such a fidelity that little or no additional information would
be gained from touch sensing. Obviously, for most general-
purpose robotic systems operating in semistructured or un-
structured environments, the state of the art of vision sensing and scene understanding points toward a need for added sensory capability. With touch sensing, there are no counterparts to unfavorable optical conditions such as shadows, occlusions, and variations in illumination, reflectivity, and color, which
are sometimes so severe that they simply preclude the use
of vision. The amount of processed data and the complexity
Fig. 9. Proximity sensor configuration [7].
of the interpretation methods are much higher in vision than
in touch. Intrinsic shape and texture information is readily
present in raw touch data, whereas complex and approximate
algorithms are required to extract these features from vision
data. Touch sensors as presently used in robotics vary from
single-point ON/OFF microswitches to arrays of gray-scale-
responsive elements, which measure the magnitudes of forces
applied at each point. Tactile information is often inferred
from force measurements [26].
Binary touch sensors, the simplest type, indicate only the
presence or absence of an object. This is sometimes referred
to as simple contact sensing or simple touch. Despite its sim-
plicity, binary sensing provides valuable information. The interface of such sensors to a processor and the interpretation of the information are both relatively easy. For instance, a simple-contact element on the inside of each finger of a robot
end effector determines the presence or absence of a work-
piece between the fingers. Mounted on the outside surfaces of
the gripper, a simple-contact touch sensor can also be used to stop the motion of a robot arm upon collision with an unexpected workpiece to avoid any damage to the workpiece or the
gripper itself. This is possible only when the relative speed
between wo rkpiece and end effector is limited.
The second type of sensor is gray-scale responsive to the
force applied to each point. Devices that provide continuous
sensing of forces in array form are often referred to
as artificial skins. In these sensors, a number of sensitive ele-
ments arranged in an array are mounted on the fingertip of
a sensing probe or inside the robot's fingers. Current array sizes vary from 5 x 5 to 16 x 16 elements per fingertip. The spatial resolution is often on the order of 0.1 in. These
sensors offer a complexity and resolution comparable to the
human touch. These arrays are often used for the recognition
of small workpieces by extracting simple shape and texture
features. Often, the techniques used for touch recognition are
directly inspired from computer vision work. However, pro-
cessing of tactile data is generally less complicated because it
is not affected by the many variables that affect vision, such as
shadows, reflections, and variations in illumination [7], [27].
All the systems developed thus far suffer from a combination of the following problems: 1) low sensitivity, 2) low dynamic range, 3) limited bandwidth, 4) lack of dc response, 5) hysteresis and nonlinearity, 6) temperature and pressure stability, 7) fragility and lack of flexibility, and 8) susceptibility to the external world [22], [26].
B . Experimental Setup
In this work, two microswitches are mounted inside the
end effector's textured compliant surfaces. They are used pri-
marily for inspecting the presence or absence of a work-
piece between the end effector jaws. The manipulation ex-
periment is halted whenever the actual reading is different
from that which is expected. (The robot is also equipped with
16 x 10 and 80 x 80 gray-scale-responsive array touch sen-
sors, which are not used in the FIS and MIS experiments.)
VII. EXPERIMENTAL RESULTS
In this section, we describe the autonomous mating and
demating of the FIS and the M IS using the concepts presented
in the previous sections.
A. The FIS: Setup and Experiment

The setup used in this experiment was shown in Section II
(Fig. 4). The FIS experiment involves determining the two-
dimensional position and orientation of the nozzle as well as
the three-dimensional position and orientation of the light tar-
get, then performing mating and demating of the interchange system. It is important to note that the experiment is fully au-
tonomous from the moment the light targets are placed within
the field of view to the end of the m ating/dem ating operation.
First, the robot moves the camera to a predefined position
high above the mating elements area (Fig. 10), then turns on its own lights to determine the two-dimensional position of the nozzle. An image is acquired, histogram equalized, and thresholded [2] to extract the object (nozzle) from the background (table) (Fig. 11(a) and (b)). The image acquired here is 128 x 128 in size. Using (6) and (7), the center of the area of the nozzle is computed. In the experiment described here, the center coordinates were found to be $(i_1, j_1) = (72, 55)$. The parameters $i_1$ and $j_1$ are the row and column pixel co-
ordinates, respectively. Using the camera geometry, the cor-
responding coordinates with respect to the world coordinate
system are $(X_1, Y_1, Z_1) = (14.02, 24.14, 4.75)$. The $Z_1$ coordinate, which is the height of the nozzle, is known. The
dimensions are measured in inches with respect to the world
coordinate system. Usually, this measurement is inaccurate
and thus cannot be relied upon to pick up the nozzle.
Next, the robot moves the camera closer, directly above the nozzle, since its approximate position is known (Fig. 12), and acquires another image of the nozzle to determine its position and orientation more accurately. This image is similarly processed (Fig. 11(c) and (d)), and the pixel coordinates of the center of the area are computed. In this case, they are $(i_2, j_2) = (76, 63)$.

Using the camera geometry, the corresponding coordi-
nates with respect to the world coordinate system are
$(X_2, Y_2, Z_2) = (13.23, 24.18, 4.75)$. The $Z_2$ coordinate, which is the height of the nozzle, is known. Using (9), the orientation is calculated to be 47.1° with respect to the $X$ axis. Once these values are known, the robot checks for the presence of the nozzle using the proximity and touch sensors. In this experiment, the use of the proximity and touch sensors is limited to comparing the sensor readings to the expected
values in the vicinity of the nozzle. After these checks, the
robot picks up the nozzle (Fig. 13) and proceeds to locate the
receptacle.
To locate the receptacle, the robot moves the camera to a
Fig. 10. Vision system acquires an image of the mating element area to determine the two-dimensional position of the nozzle.

Fig. 14. Robot acquires an image of the FIS target.
Fig. 11. Top view of mating element area: (a) original image; (b) thresholded image; bottom, close-up of nozzle: (c) original image; (d) thresholded image.
Fig. 12. Vision system acquires an image of the nozzle to determine its two-dimensional position and orientation.

Fig. 13. Robot proceeds to pick up the nozzle after proximity and touch sensing checks are passed.
Fig. 15. Image of FIS target: (a) with lights on; (b) with lights off; (c) difference image.
predefined position so that the target area is within the field
of view of the camera (Fig. 14). It turns on the four indicator
lights on the rack containing the receptacle, acquires an image
of the scene (Fig. 15(a)), then acquires another image (Fig. 15(b)) after it turns off the FIS target. The two images are subtracted (Fig. 15(c)), and the resulting patterns are thinned [1] to find the position of the four lights (target) in the im-
age. Their respective pixel coordinates and sensor readings in inches are as follows:

Point   i    j    x        y
P0      46   28   +0.067   +0.135
P1      82   29   -0.036   +0.032
P2      81   63   -0.033   +0.037
P3      45   61   +0.070   +0.044
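The lights-on/lights-off procedure can be mimicked with simple array operations: subtract the two frames, threshold the difference, and locate the four resulting blobs. The sketch below does this with NumPy and SciPy, using connected-component centroids as a stand-in for the thinning step of [1]; the threshold value and the synthetic frames are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def locate_targets(img_lights_on, img_lights_off, threshold=50, n_targets=4):
    """Return (row, col) centroids of the brightest blobs in the difference image."""
    diff = img_lights_on.astype(int) - img_lights_off.astype(int)
    mask = diff > threshold
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=list(range(1, n + 1)))
    keep = 1 + np.argsort(sizes)[-n_targets:]          # the n_targets largest blobs
    return ndimage.center_of_mass(mask, labels, index=keep)

# Synthetic example: four bright spots appear only in the lights-on frame.
off = np.zeros((128, 128), dtype=np.uint8)
on = off.copy()
for r, c in [(46, 28), (82, 29), (81, 63), (45, 61)]:   # pixel coordinates from the table
    on[r - 1:r + 2, c - 1:c + 2] = 200
print(locate_targets(on, off))
```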
Using the algorithm presented in Section III, we find the three-dimensional position and orientation of the receptacle. The resulting position vector of the lower left corner of the target in this experiment is

$$(X_0^1, Y_0^1, Z_0^1, \theta_1, \beta_1, \alpha_1) = (23.49, -19.74, 9.76, 62.4, 94.3, 90.9).$$
Fig. 16. Robot acquires a close-up image of the FIS target.

Fig. 17. Close-up image of light target: (a) with lights on; (b) with lights off; (c) difference image.

Fig. 18. Robot proceeds to insert the nozzle in the receptacle with the force/torque sensor active during all operations.
The parameters $X_0^1$, $Y_0^1$, and $Z_0^1$ are in inches; $\theta_1$, $\beta_1$, and $\alpha_1$ are in degrees. At this step, the robot has an approximate idea
regarding the position of the mating rack. To refine its mea-
surement, the robot moves the camera closer to and directly
in front of the receptacle (Fig. 16) then repeats the measure-
ment of the three-dimensional position and orientation (Fig.
17). The resulting pixel coordinates for the four light targets
are as follows:

Point   i     j     x        y
P0      29    103   +0.116   -0.072
P1      29    27    +0.116   +0.138
P2      110   28    -0.115   +0.135
P3      110   104   -0.115   -0.075
The corresponding position and orientation of the receptacle are calculated as

$$(X_0^2, Y_0^2, Z_0^2, \theta_2, \beta_2, \alpha_2) = (23.21, -19.46, 9.43, 65.9, 94.3, 88.3).$$
Note the low disparity between the two measurements for $X_0$, $Y_0$, and $Z_0$. By contrast, the correction in the angles is more significant. This was typical of this experiment.
Finally, the robot proceeds to insert the nozzle (Fig. 18).
The flared rim of the receptacle guides the nozzle into the
receptacle during the FIS matingldemating operations by cre-
ating forces on the nozzle. As the nozzle is inserted, the
stainless-steel pins enter the V notches, which further enhance
the alignment of the nozzle. Once the nozzle is inserted, the
pins reach the bottom of the V notches. The robot then rotates the nozzle clockwise by 15°. While the nozzle is rotating, the pins enter the grooves in the receptacle, and the nozzle is
then released. Under the effect of the forces exerted on it by
the spring-loaded pressure plate, the nozzle locks into place
and rests in the notches, which are located at the end of the
grooves. The pressure plate holds the pins locked into the
notches and holds the nozzle in place. The mating of the noz-
zle into the receptacle is completed (Fig. 19). The demating
operation of the FIS is performed using the same steps as
those used to determine the position of the receptacle. The
demating part is simply the reverse action.
In the future, the FIS will include a fluid-tight connector,
which will consist of a male connector that is inside the flared
rim, a pressure-plate assembly, and a female connector (the
inner surface of the nozzle). The male connector will include
an O-ring that will achieve the fluid-tight connection of the
mated FIS.
B. The MIS: Setup and Experiment
The goal here is to autonomously identify, remove, and re-
place a faulty or spent module by a new module from a tool
rack. A mock-up of the MIS has been designed and built. The
system is vision driven with the force/torque, proximity, and
touch sensors used as safety devices. The MI S hardware con-
sists of a rack (which would be part of the space vehicle to be
serviced) and an exchange module, which includes connectors
for electronics, as well as a mechanical interface. To perform
the module exchange, we utilize a standard nuclear instrumen-
tation module (NIM) system. The MIS experiment involves
determining the two-dimensional position arid orientation ofthe exchange module as well as the three-dimensional posi-
tion and orientation of the light target attached to the mounting
rack then performing insertion and removal of the interchange
module. The experimental setup was shown in Fig. 2.
Fig. 19. Fluid interchange system (FIS). Robot completes the insertion of the nozzle into the receptacle.

Fig. 20. Module interchange system (MIS). Robot completes the insertion of the module into the mating rack.
Using procedures similar to those described in the FIS
experiment, the MIS mating and demating is performed au-
tonomously and successfully. In Fig. 20, the robot is shown
completing the insertion of the module in the mating rack.
A more complete description of this system may be found in [28], [29].
VIII. CONCLUSIONS
In this paper, we described a robotic system that au-
tonomously performs realistic manipulation operations suit-
able for space applications. This system is capable of acquir-
ing, integrating, and interpreting multisensory data to locate,
mate, and demate a fluid interchange system and a module
interchange system. In both experiments, autonomous loca-
tion of a guiding light target, mating, and demating of the
system are performed. We implemented vision-driven tech-
niques that determine the arbitrary two-dimensional position
and orientation of the mating elements as well as the arbi-
trary three-dimensional position and orientation of the light
targets attached to the mounting elements. The system is also
equipped with force/torque, proximity, and touch sensors.
Both experiments were successfully accomplished on mock-
ups built for this purpose, regardless of the initial position of the end effector, mating element, and light targets. This
method is also robust with respect to variations in the ambi-
ent light. This is particularly important because of the 90-min
day-night shift in space.
APPENDIX I
EFFECT OF THE TRANSFORMATION T

In Section III, the recovery of the matrix A was decomposed into steps involving transformations T, S, and C. Appendix I deals with the effect of the first transformation T.
This transforms a general problem to an intermediate coordinate system in which the three world points are in what we shall define as standard position. Standard position is characterized by P_0 at the origin, P_1 on the positive X axis, and P_2 in the first quadrant of the XY plane (or on the positive Y axis). Before finding the matrix T, which transforms our three points to standard position, we will examine the effect of this step on the solution. Using the superscript α to refer to our intermediate system, we may write W^α = T W^w to describe the transformation of points in world coordinates to intermediate coordinates and W^w = T^-1 W^α to describe the inverse transformation. Because our goal is to solve for the transformation matrix A, which will transform a point W^w in world coordinates to W^c in camera coordinates, we write

    W^c = A W^w.                                                  (49)
Making the substitution W^w = T^-1 W^α yields W^c = A T^-1 W^α, or

    W^c = B W^α                                                   (50)

where B = A T^-1. Now, the problem is to solve (50) for the matrix B, which transforms points from intermediate coordinates to camera coordinates. This requires the use of the original image coordinates and the new α-world coordinates. Afterward, we solve for A = BT to get the solution to the original problem.
For any three noncolinear points, a linear transformation
T always exists, which will put the points in standard posi-
tion. The three points form a triangle that lies in the new
X Y plane, and this triangle must contain at least two interior
angles, which are acute or right. One of the corresponding
points is chosen arbitrarily as P_0; the others are labeled P_1 and P_2.
Because of the use of augmented spaces, the matrix T may
be determined by using a concatenation of intermediate matrix
transformations. These steps will require the following:
1) Translate the coordinate system so that P_0 is at the origin.
2) Pan about the Z axis so that the Y coordinate of P_1 is zero and the X coordinate is positive using R_θ.
3) Tilt about the Y axis so that the Z coordinate of P_1 is zero using R_β.
4) Rotate about the X axis so that the Z coordinate of P_2 is zero and the X and Y coordinates are nonnegative using R_α.
We therefore write T = R_α R_β R_θ T_0 and subsequently will determine these matrices.
The first transformation translates P_0 to the origin using the matrix T_0. We use W^0 = (X^0, Y^0, Z^0, 1)' = T_0 W^w to refer to the points after this translation. The matrix that describes this translation is

    T_0 = | 1  0  0  -X_0 |
          | 0  1  0  -Y_0 |
          | 0  0  1  -Z_0 |
          | 0  0  0    1  |                                       (51)

where X_0, Y_0, and Z_0 are the coordinates of P_0 in the world system.

After a pan through an angle θ about the Z^0 axis, the points are located at (X^θ, Y^θ, Z^θ), and we may write W^θ = R_θ W^0, where R_θ is given by

    R_θ = |  cos θ   sin θ   0   0 |
          | -sin θ   cos θ   0   0 |
          |    0       0     1   0 |
          |    0       0     0   1 |

In standard position, we require the point P_1 to be on the positive X axis; consequently, Y^θ_1 = -X^0_1 sin θ + Y^0_1 cos θ = 0, or

    θ = tan^-1(Y^0_1 / X^0_1).                                    (52)

We add π to this value of θ if X^θ_1 is less than zero so that X^θ_1 is positive. In addition, if X^0_1 = Y^0_1 = 0, we may arbitrarily let θ be equal to zero.

Next, a tilt of β about the Y^θ axis will result in W^β = (X^β, Y^β, Z^β, 1)' = R_β W^θ, where

    R_β = | cos β   0   -sin β   0 |
          |   0     1      0     0 |
          | sin β   0    cos β   0 |
          |   0     0      0     1 |

In standard position, the point P_1 is also required to be on the XY plane; consequently, Z^β_1 = X^θ_1 sin β + Z^θ_1 cos β = 0, or

    β = tan^-1(-Z^θ_1 / X^θ_1).                                   (53)

Finally, a rotation about the X^β axis by an angle α will result in W^α = (X^α, Y^α, Z^α, 1)' = R_α W^β, where

    R_α = | 1     0        0      0 |
          | 0   cos α    sin α    0 |
          | 0  -sin α    cos α    0 |
          | 0     0        0      1 |

In standard position, the point P_2 is required to be on the XY plane with nonnegative X and Y coordinates; consequently, Z^α_2 = -Y^β_2 sin α + Z^β_2 cos α = 0, or

    α = tan^-1(Z^β_2 / Y^β_2).                                    (54)

We add π to α if the denominator is less than zero. Combining these operations into a single matrix, T = R_α R_β R_θ T_0, which is given as follows:

    T = | cos β cos θ                        cos β sin θ                        -sin β        -X_0 cos β cos θ - Y_0 cos β sin θ + Z_0 sin β                                                 |
        | -cos α sin θ + sin α sin β cos θ   cos α cos θ + sin α sin β sin θ    sin α cos β   -X_0(-cos α sin θ + sin α sin β cos θ) - Y_0(cos α cos θ + sin α sin β sin θ) - Z_0 sin α cos β |
        | sin α sin θ + cos α sin β cos θ    -sin α cos θ + cos α sin β sin θ   cos α cos β   -X_0(sin α sin θ + cos α sin β cos θ) - Y_0(-sin α cos θ + cos α sin β sin θ) - Z_0 cos α cos β |
        | 0                                  0                                  0             1                                                                                              |   (55)
We now have the coordinates (X^α_i, Y^α_i, Z^α_i) for each of the points P_i in the α intermediate system and have thereby determined the transformation completely through the matrix T. Note that for a given problem, this step needs to be applied only once since the results are not dependent on camera position.
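As a concrete illustration of the steps above, the following sketch assembles T = R_α R_β R_θ T_0 numerically with NumPy. It follows (51)-(55) as reconstructed here; the function name and the use of arctan2 to fold in the stated π corrections are our choices for illustration, not the authors' implementation.

    import numpy as np

    def standard_position_transform(P0, P1, P2):
        """Return the 4x4 matrix T = R_alpha R_beta R_theta T0 that moves the three
        world points into standard position (P0 at origin, P1 on +X, P2 in the XY plane)."""
        P0, P1, P2 = (np.asarray(p, dtype=float) for p in (P0, P1, P2))

        T0 = np.eye(4)
        T0[:3, 3] = -P0                                   # (51): translate P0 to the origin

        def rot_z(t):                                      # pan about Z
            c, s = np.cos(t), np.sin(t)
            return np.array([[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

        def rot_y(b):                                      # tilt about Y
            c, s = np.cos(b), np.sin(b)
            return np.array([[c, 0, -s, 0], [0, 1, 0, 0], [s, 0, c, 0], [0, 0, 0, 1]])

        def rot_x(a):                                      # rotate about X
            c, s = np.cos(a), np.sin(a)
            return np.array([[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]])

        def h(p):                                          # homogeneous column vector
            return np.append(p, 1.0)

        p1 = T0 @ h(P1)
        theta = np.arctan2(p1[1], p1[0])                   # (52), pi correction folded into arctan2
        p1 = rot_z(theta) @ p1
        beta = np.arctan2(-p1[2], p1[0])                   # (53)
        Rb_Rt_T0 = rot_y(beta) @ rot_z(theta) @ T0
        p2 = Rb_Rt_T0 @ h(P2)
        alpha = np.arctan2(p2[2], p2[1])                   # (54), pi correction folded into arctan2
        return rot_x(alpha) @ Rb_Rt_T0

With T in hand, the calibration problem is solved in the intermediate system, as in (50), and the final answer is recovered from A = BT.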
APPENDIX II
EFFECT OF THE TRANSFORMATION S

As mentioned in Appendix I, the recovery of the matrix A was decomposed into steps involving transformations T, S, and C. Appendix II deals with the effect of the second transformation S.
This preprocessing step must be performed on each image taken of the target. It involves another transformation, which describes how the image would look to a camera that has its lens center at the same location as the real camera but is looking directly at the origin (in the α system) and has its x axis in the same plane as the X axis of the α system. We will refer to this "fictitious" camera as being in ideal position and will call the image seen by such a camera an ideal image. Such an image is characterized by p_0 at the origin and p_1 on the positive x axis.
As was true in the first preprocessing step, the transformation is composed of several intermediate matrices concatenated to form a single transformation. Each transformation consists of a rotation of the camera about the lens center. A rotation about this point will have a computable effect on the image. Assuming initially that the fictitious and the real cameras are at the same position, we will use the following sequence of steps to move the fictitious camera into ideal position:
1) Shift the camera axes to the lens center using T_λ.
2) Pan the camera about the lens center so that p_0 is on the image y axis using R_φ.
3) Tilt the camera about the lens center so that p_0 falls on the origin using R_ω.
4) Roll the camera about its z axis so that p_1 falls on the positive x axis of the image using R_ρ.
The complete transformation may be written as S = R_ρ R_ω R_φ T_λ. The first transformation moves the origin to the lens center:

    T_λ = | 1  0  0   0 |
          | 0  1  0   0 |
          | 0  0  1  -λ |
          | 0  0  0   1 |                                         (59)

We use W^λ_i = (X^λ_i, Y^λ_i, Z^λ_i, 1)' = T_λ W^c_i to refer to the position of the points after this translation. Note that only the Z coordinates are changed, and the projective equations become

    x_i = -λ X^λ_i / Z^λ_i                                        (60)
    y_i = -λ Y^λ_i / Z^λ_i                                        (61)

where x_i and y_i are the image coordinates of P_i, namely p_i.

Next, we pan the camera with an angle φ about the new Y axis using the transformation

    R_φ = | cos φ  0  -sin φ  0 |
          |   0    1     0    0 |
          | sin φ  0   cos φ  0 |
          |   0    0     0    1 |                                 (62)

The three points are then located at W^φ_i = (X^φ_i, Y^φ_i, Z^φ_i, 1)' = R_φ W^λ_i, and their images are at x^φ_i = -λX^φ_i/Z^φ_i and y^φ_i = -λY^φ_i/Z^φ_i. We select φ so that x^φ_0 = 0. Therefore, we require X^φ_0 = X^λ_0 cos φ - Z^λ_0 sin φ = 0, or

    φ = tan^-1(X^λ_0 / Z^λ_0).                                    (63)

The camera is then tilted with an angle ω about its new x axis, bringing p_0 to the image origin. We may write W^ω_i = (X^ω_i, Y^ω_i, Z^ω_i, 1)' = R_ω W^φ_i to compute the new location of P_i after the tilt transformation, which is described by

    R_ω = | 1    0       0     0 |
          | 0  cos ω   sin ω   0 |
          | 0  -sin ω  cos ω   0 |
          | 0    0       0     1 |                                (64)

After applying this transformation, our image points move to x^ω_i = -λX^ω_i/Z^ω_i and y^ω_i = -λY^ω_i/Z^ω_i. Since p_0 will now move to the origin, we know that y^ω_0 and therefore Y^ω_0 are equal to zero. We have Y^ω_0 = Y^φ_0 cos ω + Z^φ_0 sin ω = 0, or

    tan ω = -Y^φ_0/Z^φ_0 = (-Y^λ_0/Z^λ_0) cos φ = y_0 cos φ / λ

    ω = tan^-1(y_0 cos φ / λ),    0 ≤ |ω| < π/2.                  (65)

After the pan and tilt, a roll about the camera z axis with an angle ρ is performed to move p_1 to the positive x axis. This is accomplished using the transformation

    R_ρ = |  cos ρ   sin ρ  0  0 |
          | -sin ρ   cos ρ  0  0 |
          |    0       0    1  0 |
          |    0       0    0  1 |                                (66)
The new coordinates of P_i following this transformation are given by W^ρ_i = (X^ρ_i, Y^ρ_i, Z^ρ_i, 1)' = R_ρ W^ω_i. The image points are then described by

    x^ρ_i = -λ X^ρ_i / Z^ρ_i
    y^ρ_i = -λ Y^ρ_i / Z^ρ_i.                                     (67)

Because we require that P_1 now move to the positive x axis, we know y^ρ_1 = 0; therefore, Y^ρ_1 = -X^ω_1 sin ρ + Y^ω_1 cos ρ = 0. Substituting, we have

    tan ρ = Y^ω_1 / X^ω_1
          = [Y^λ_1 cos ω + (X^λ_1 sin φ + Z^λ_1 cos φ) sin ω] / [X^λ_1 cos φ - Z^λ_1 sin φ].   (68)

We will add π to ρ in the event that the sign of x^ρ_1 is negative so that p_1 moves to the positive x axis. We can show that the sign of x^ρ_1 is the same as the sign of

    (x_1 - x_0)(x_0 x_1 + y_0 y_1 + λ²).                          (69)

This sequence of steps gives a means of computing the image of three points as seen by a camera in ideal position. It also provides us with the matrix S = R_ρ R_ω R_φ T_λ, which describes the transformation involved in moving the real camera into ideal position.

A point in the real camera system would therefore transform to W^ρ = S W^c in the ideal camera system. We know that the coordinates of a point in the ideal image are related to its coordinates in standard position by

    x^ρ_i = -λ X^ρ_i / Z^ρ_i                                      (70)
    y^ρ_i = -λ Y^ρ_i / Z^ρ_i.                                     (71)

The new image coordinates are related to the original image coordinates by

    x^ρ_i = -λ (s_11 x_i + s_12 y_i - s_13 λ) / (s_31 x_i + s_32 y_i - s_33 λ)   (72)
    y^ρ_i = -λ (s_21 x_i + s_22 y_i - s_23 λ) / (s_31 x_i + s_32 y_i - s_33 λ)   (73)

where the s_jk (j, k = 1, 2, 3) denote the elements of S. In summary, the three angles defining the transformation S are

    φ = tan^-1(-x_0 / λ),    0 ≤ |φ| < π/2                        (74)
    ω = tan^-1(y_0 cos φ / λ),    0 ≤ |ω| < π/2                   (75)
    ρ = tan^-1[(y_1 cos ω + x_1 sin φ sin ω - λ cos φ sin ω) / (x_1 cos φ + λ sin φ)].   (76)

For later processing, S^-1 instead of S is needed for the final solution. We find S^-1 by writing S^-1 = T_λ^-1 R_φ^-1 R_ω^-1 R_ρ^-1. Substituting in the appropriate matrices and combining yields the following matrix for S^-1:

    S^-1 = | cos φ cos ρ + sin φ sin ω sin ρ    -cos φ sin ρ + sin φ sin ω cos ρ    sin φ cos ω   0 |
           | cos ω sin ρ                         cos ω cos ρ                        -sin ω        0 |
           | -sin φ cos ρ + cos φ sin ω sin ρ    sin φ sin ρ + cos φ sin ω cos ρ    cos φ cos ω   λ |
           | 0                                   0                                   0            1 |   (77)

Before we performed this step, our goal was to solve for the matrix B, where W^c = B W^α and the image corresponded to the real camera system. Substituting in W^c = S^-1 W^ρ yields S^-1 W^ρ = B W^α. Therefore

    W^ρ = S B W^α = C W^α                                         (78)

where C = SB, or B = S^-1 C. We may now solve (78) for C using the ideal image coordinates of each point, then solve for B using B = S^-1 C, and then solve for A using A = BT.
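The construction of S can likewise be written down directly from the measured image coordinates of p_0 and p_1 and the focal length λ. The sketch below is our reading of (59), (69), and (74)-(76) as reconstructed here, not the authors' code; lam denotes λ, and the quadrant correction for ρ uses the sign test of (69).

    import numpy as np

    def ideal_position_transform(p0, p1, lam):
        """Assemble S = R_rho R_omega R_phi T_lambda from the image points p0, p1
        of P0 and P1 and the focal length lam, following (59) and (74)-(76)."""
        x0, y0 = p0
        x1, y1 = p1

        phi = np.arctan(-x0 / lam)                        # (74), |phi| < pi/2
        omega = np.arctan(y0 * np.cos(phi) / lam)         # (75), |omega| < pi/2
        num = y1 * np.cos(omega) + x1 * np.sin(phi) * np.sin(omega) - lam * np.cos(phi) * np.sin(omega)
        den = x1 * np.cos(phi) + lam * np.sin(phi)
        rho = np.arctan(num / den)                        # (76), principal value
        if (x1 - x0) * (x0 * x1 + y0 * y1 + lam ** 2) < 0:
            rho += np.pi                                  # (69): p1 would otherwise land on the -x axis

        def rot_y(a):                                     # pan (R_phi)
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, 0, -s, 0], [0, 1, 0, 0], [s, 0, c, 0], [0, 0, 0, 1]])

        def rot_x(a):                                     # tilt (R_omega)
            c, s = np.cos(a), np.sin(a)
            return np.array([[1, 0, 0, 0], [0, c, s, 0], [0, -s, c, 0], [0, 0, 0, 1]])

        def rot_z(a):                                     # roll (R_rho)
            c, s = np.cos(a), np.sin(a)
            return np.array([[c, s, 0, 0], [-s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

        T_lam = np.eye(4)
        T_lam[2, 3] = -lam                                # (59)
        return rot_z(rho) @ rot_x(omega) @ rot_y(phi) @ T_lam

The inverse S^-1 appearing in (77) and in B = S^-1 C can then be obtained numerically with np.linalg.inv(S).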
APPENDIX III
SOLVING FOR THE MATRIX C

As mentioned in Appendices I and II, the computation of the matrix A was decomposed into steps involving transformations
T, S, and C. Appendix III deals with the computation of the transformation C.
After having applied these two preprocessing steps (transformations T and S), we are left with a situation in which the camera is looking at the standard origin with its x axis in the same plane as the standard X axis. This was termed the ideal position. In addition, the world points are located at the origin, on the X axis, and in the XY plane. This was termed standard position. We may now solve for the elements of the matrix C by writing (70) and (71) in terms of X^α_i, Y^α_i, and Z^α_i through the use of (78). Because the transformation T_λ moved the center of the coordinate system to the center of the camera lens, our projective equations no longer have the λ term in the denominator. With this in mind, writing (23) and (24) for each point and using the fact that we now have X^α_0, Y^α_0, Z^α_0, Y^α_1, Z^α_1, Z^α_2, x^ρ_0, y^ρ_0, and y^ρ_1 all equal to zero yields the following set of equations:

    λ C_14 = 0                                                                          (79)
    λ C_24 = 0                                                                          (80)
    λ X^α_1 C_11 + x^ρ_1 X^α_1 C_31 + λ C_14 + x^ρ_1 C_34 = 0                           (81)
    λ X^α_1 C_21 + λ C_24 = 0                                                           (82)
    λ X^α_2 C_11 + λ Y^α_2 C_12 + x^ρ_2 X^α_2 C_31 + x^ρ_2 Y^α_2 C_32 + λ C_14 + x^ρ_2 C_34 = 0    (83)
    λ X^α_2 C_21 + λ Y^α_2 C_22 + y^ρ_2 X^α_2 C_31 + y^ρ_2 Y^α_2 C_32 + λ C_24 + y^ρ_2 C_34 = 0.   (84)

From this system of equations, we may obtain simpler expressions. Substituting the second equation into the fourth equation yields

    C_21 = 0.                                                                           (85)

Dividing (81) by x^ρ_1 and subtracting (83) divided by x^ρ_2, we obtain (86). Finally, dividing (83) by x^ρ_2 and subtracting (84) divided by y^ρ_2 yields (87). Equations (86) and (87) represent two linear equations with five unknowns. Using (30)-(35), we may write three nonlinear equations involving the C_jk:

    C_11² + C_31² = 1                                                                   (88)
    C_12² + C_22² + C_32² = 1                                                           (89)
    C_11 C_12 + C_31 C_32 = 0.                                                          (90)

A closed-form solution to this system of equations in its general form could not be found. In the following discussion, we therefore make the simplifying assumption that the three points form a right angle at P_0, which results in a closed-form solution to this problem. This means that X^α_2 is equal to zero, and (86) and (87), respectively, become the simpler linear relations (91) and (92).

Combining (88)-(90) and (92), we may write each of the remaining variables in terms of C_11, as in (93)-(95), where s_1 and s_2 each has a value of ±1. Substituting into (91) and squaring both sides yields (96), whose coefficients a_1, ..., a_6 are known combinations of λ, X^α_1, Y^α_2, x^ρ_1, x^ρ_2, and y^ρ_2. Equation (96) may be further expanded by carrying out the squares, multiplying both sides by (a_5 + a_6 C_11²), and collecting terms:

    b_1 + b_2 C_11² + b_3 C_11⁴ = C_11 (b_4 C_11² + b_5) s_1 sqrt(1 - C_11²)             (97)

where the b_j are combinations of the a_j (for example, b_4 = 2 a_1 a_2 a_6 and b_5 = 2 a_5 a_1 a_2 + 2 a_3 a_4). Squaring each side again and collecting terms, we obtain

    c_1 + c_2 C_11² + c_3 C_11⁴ + c_4 C_11⁶ + c_5 C_11⁸ = 0                              (98)

where c_1 = b_1², c_2 = 2 b_1 b_2 - b_4², and the remaining c_j are analogous quadratic combinations of the b_j. Since (98) contains only even powers of C_11, it is a fourth-degree polynomial in C_11² and can be solved in closed form for C_11. The remaining elements of C, with C_21 = 0, then follow in closed form in terms of s_1, s_2, λ, and the ideal image coordinates (102), with |s_i| = 1. This completes the computation of the C matrix relating ideal position to standard position.
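Once C has been obtained, the remaining work of Appendices I-III is matrix bookkeeping: B = S^-1 C from (78) and A = BT. A minimal sketch, assuming the matrices T, S, and C have already been computed (for instance with the helper functions sketched after Appendices I and II):

    import numpy as np

    def recover_camera_matrix(T, S, C):
        """Compose the final calibration matrix: B = S^-1 C maps the alpha system to the
        real camera system, per (78), and A = B T maps world coordinates to camera coordinates."""
        B = np.linalg.inv(S) @ C
        return B @ T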
ACKNOWLEDGMENT

Our Robotics facilities are supported in part by the DOE University Program in Robotics for Advanced Reactors (Universities of Florida, Michigan, Tennessee, Texas, and the Oak Ridge National Laboratory) under grant DOE-DE-FG02-86NE37968. The FIS and MIS experiments were carried out under contract number NAG8-630-B01199962 with NASA, Marshall Space Flight Center.
The efforts of Dr. R. O. Eason and B. Bernhard in developing the robotic workstation and documenting the sensor acquisition and interpretation modules are sincerely acknowledged. T. Chandra and J. Spears have programmed the FIS and MIS experiments; their contributions are also sincerely acknowledged. Finally, sincere thanks are due to Dr. W. L. Green, F. Nola, J. Turner, and T. Bryan for their support in this work. Mrs. Janet Smith has typed several versions of this manuscript; her efforts are appreciated.
REFERENCES

[1] K. S. Fu, R. C. Gonzalez, and C. S. G. Lee, Introduction to Robotics: Control, Sensing, Vision, and Intelligence. New York: McGraw-Hill, 1987.
[2] R. C. Gonzalez and P. Wintz, Digital Image Processing, 2nd ed. Reading, MA: Addison-Wesley, 1987.
[3] A. Cohen and J. D. Erickson, "Future use of machine intelligence and robotics for the space station and implications for the U.S. economy," IEEE J. Robotics Automat., vol. RA-1, pp. 117-123, Sept. 1985.
[4] M. A. Bronez, "Vision concepts for space-based telerobotics," in Proc. Remote Syst. Robotics Hostile Envir. Conf. (Pasco, WA), Mar. 1987, pp. 150-157.
[5] R. E. Olsen and A. Quinn, "Telerobotic orbital servicing technology development," in Proc. Remote Syst. Robotics Hostile Envir. Conf. (Pasco, WA), Mar. 1987, pp. 158-161.
[6] A. K. Bejczy, "Sensors, controls, and man-machine interface for advanced teleoperation," Science, vol. 208, pp. 1327-1335, 1980.
[7] R. O. Eason and R. C. Gonzalez, "Design and implementation of a multisensor system for an industrial robot," Tech. Rep. TR-ECE-86-23, Univ. Tennessee, Final Rep. Martin Marietta Energy Syst., Contract 37B-07685C-07, Feb. 1986.
[8] A. C. Kak and J. S. Albus, Sensors for Intelligent Robots (S. Y. Nof, Ed.). New York: Wiley, 1985, ch. 13, pp. 214-230.
[9] G. Beni and S. Hackwood (Eds.), Recent Advances in Robotics. New York: Wiley, 1985.
[10] A. Pugh (Ed.), Robot Sensors: Vision, vol. 1. New York: Springer, 1986.
[11] A. Ikonomopoulos, N. Ghani, G. Doemens, E. Kutzer, and N. Roth, "Image processing and analysis in multisensory systems," IEEE Trans. Circuits Syst., vol. CAS-34, pp. 1417-1431, Nov. 1987.
[12] B. K. P. Horn, Robot Vision. New York: McGraw-Hill, 1986.
[13] R. M. Haralick, "Using perspective transformation in scene analysis," Comput. Graphics Image Process., vol. 13, pp. 191-221, 1980.
[14] I. Fukui, "TV image processing to determine the position of a robot vehicle," Patt. Recogn., vol. 14, pp. 101-109, 1981.
[15] M. J. Magee and J. K. Aggarwal, "Determining motion parameters using intensity guided range sensing," in Proc. 7th Int. Conf. Patt. Recogn. (Montreal, Canada), July 1984, pp. 538-541.
[16] E. S. McVey, K. C. Drake, and R. M. Inigo, "Range measurement by a mobile robot using a navigation line," IEEE Trans. Patt. Anal. Mach. Intell., vol. PAMI-8, pp. 105-109, Jan. 1986.
[17] M. R. Kabuka and A. E. Arenas, "Position verification of a mobile robot using standard pattern," IEEE J. Robotics Automat., vol. RA-3, pp. 505-516, Dec. 1987.
[18] R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE J. Robotics Automat., vol. RA-3, pp. 323-344, Aug. 1987.
[19] E. L. Hall, Computer Image Processing and Recognition. New York: Academic, 1979.
[20] R. O. Eason, M. A. Abidi, and R. C. Gonzalez, "A method for camera calibration using three world points," in Proc. Int. Conf. Syst. Man Cybern. (Halifax, Canada), Oct. 1984, pp. 280-289.
[21] M. A. Abidi, R. O. Eason, and R. C. Gonzalez, "Camera calibration in robot vision," in Proc. 4th Scand. Conf. Image Anal. (Trondheim, Norway), June 1985, pp. 471-478.
[22] A. Pugh (Ed.), Robot Sensors: Tactile and Non-Vision, vol. 2. New York: Springer, 1986.
[23] O. Khatib, "A unified approach for motion and force control of robot manipulators: The operational space formulation," IEEE J. Robotics Automat., vol. RA-3, pp. 43-53, Feb. 1987.
[24] A. C. Staugaard, Robotics and AI: An Introduction to Applied Machine Intelligence. Englewood Cliffs, NJ: Prentice-Hall, 1987.
[25] W. G. Holzbock, Robotic Technology: Principles and Practice. New York: Van Nostrand Reinhold, 1986.
[26] L. D. Harmon, Tactile Sensing for Robots (G. Beni and S. Hackwood, Eds.). New York: Wiley, 1986, ch. 10, pp. 389-424.
[27] P. Allen and R. Bajcsy, "Object recognition using vision and touch," in Proc. 9th Int. Joint Conf. Artificial Intell. (Los Angeles, CA), Aug. 1985, pp. 1131-1137.
[28] M. A. Abidi, T. Chandra, R. C. Gonzalez, W. L. Green, and J. Spears, "Multisensor system for autonomous robotic space manipulations," Tech. Rep. TR-ECE-87-24, Univ. Tennessee, Final Rep. NASA, Contract NAG8-630-B01199962, Nov. 1987.
[29] M. A. Abidi, W. L. Green, T. Chandra, and J. Spears, "Multisensor robotic system for autonomous space maintenance and repair," in Proc. SPIE Conf. Space Automat. IV (Cambridge, MA), Nov. 1988, pp. 104-114.
M. A. Abidi (S'81-M'87) attended the Ecole Nationale d'Ingenieur de Tunis, Tunisia, from 1975 to 1981 and received the Principal Engineer Diploma and the First Presidential Engineer Award in 1981. He received the M.S. degree in 1985 and the Ph.D. degree in 1987 from the University of Tennessee, Knoxville, both in electrical engineering.
In 1987, he joined the Department of Electrical and Computer Engineering at the University of Tennessee. His research and teaching interests include digital signal processing, image processing, and robot sensing. He has published over 30 papers in these areas.
Dr. Abidi is a member of several honorary and professional societies, including Tau Beta Pi, Phi Kappa Phi, and Eta Kappa Nu.
R. C. Gonzalez (S'65-M'70-SM'75-F'83) received the B.S.E.E. degree from the University of Miami in 1965 and the M.E. and Ph.D. degrees in electrical engineering from the University of Florida, Gainesville, in 1967 and 1970, respectively.
He was affiliated with the GT&E Corporation and the Center for Information Research at the University of Florida, NASA, and is presently President of Perceptics Corporation and Distinguished Service Professor of Electrical and Computer Engineering at the University of Tennessee, Knoxville. He is a frequent consultant to industry and government in the areas of pattern recognition, image processing, and machine learning. He received the 1978 UTK Chancellor's Research Scholar Award, the 1980 Magnavox Engineering Professor Award, and the 1980 M.E. Brooks Distinguished Professor Award for his work in these fields. He was named Alumni Distinguished Service Professor at the University of Tennessee in 1984. He was awarded the University of Miami's Distinguished Alumnus Award in 1985, the 1987 IEEE Outstanding Engineer Award for Commercial Development in Tennessee, and the 1988 Albert Rose National Award for Excellence in Commercial Image Processing. He is also the recipient of the 1989 B. Otto Wheeley Award for Excellence in Technology Transfer and the 1989 Coopers and Lybrand Entrepreneur of the Year Award. He is coauthor of Pattern Recognition Principles, Digital Image Processing, and Syntactic Pattern Recognition: An Introduction (Addison-Wesley), and of Robotics: Control, Sensing, Vision and Intelligence (McGraw-Hill).
Dr. Gonzalez is an Associate Editor for the IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS and the International Journal of Computer and Information Sciences and is a member of several professional and honorary societies, including Tau Beta Pi, Phi Kappa Phi, Eta Kappa Nu, and Sigma Xi.