
  • 636 IEEE/ASME TRANSACTIONS ON MECHATRONICS, VOL. 15, NO. 4, AUGUST 2010

A Haptic Interaction Method Using Visual Information and Physically Based Modeling

    Jungsik Kim, Farrokh Janabi-Sharifi, Senior Member, IEEE, and Jung Kim, Member, IEEE

Abstract—Haptic feedback can be used to sense a physical environment at a remote site in order to overcome spatial or scale barriers in telemanipulation. The aim of this paper is to develop a haptic interaction method for a deformable object manipulation system by means of image processing and physically based modeling techniques. The interaction forces between the instrument driven by a haptic device and a deformable object are inferred in real time based on the visual information from a slave environment, without a force sensor. A physically based model of the deformable object is constructed by integrating the geometric information from vision, a priori knowledge of the object's mechanical properties, and a predefined coordinate system of the slave environment. The forces are then derived from the model, while a boundary condition is updated based on the images (tool-tip position tracking). In order to demonstrate the applicability and effectiveness of the proposed algorithm, macro- and microscale experimental systems were built and equipped with a telemanipulation system and a commercial haptic display. The proposed method was verified using silicone (macroscale) and zebrafish embryos (microscale).

Index Terms—Force estimation, haptic feedback, image processing, physically based modeling, telemanipulation.

    I. INTRODUCTION

HAPTIC feedback offers the potential to increase the quality and capability of human–machine interactions, as well as the ability to skillfully manipulate objects by exploiting the sense of touch [1]. In telemanipulation, haptic feedback has been studied in the fields of robotic guidance and obstacle avoidance [2], robotic surgery [3]–[5], micro-/nanomanipulation [6]–[9], and biomanipulation [10], [11]. For example, in nano- or biomanipulation applications, where the operator manipulates a micro-/nanoscale object with limited vision feedback through a microscope, haptic assistance can be used to compensate for visual errors [8] and to improve the final quality of the operator's manipulation (e.g., operation time and efficiency) [10], [11].

Sensing the force information of a slave environment is a prerequisite for displaying force feedback to a user during manipulation tasks. Sensing the force information and its reflection to

Manuscript received October 15, 2008; revised May 23, 2009; accepted August 13, 2009. Date of publication September 25, 2009; date of current version July 28, 2010. Recommended by Technical Editor G. Morel. This work was supported by the Ministry of Knowledge Economy, Korea, under the Information Technology Research Center Support Program supervised by the Institute for Information Technology Advancement under Grant IITA-2009-C1090-0902-0008.

J. Kim and J. Kim are with the Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon 305-701, Korea (e-mail: [email protected]; [email protected]).

F. Janabi-Sharifi is with the Department of Mechanical and Industrial Engineering, Ryerson University, Toronto, ON M5B 2K3, Canada (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

    Digital Object Identifier 10.1109/TMECH.2009.2031240

a user still constitutes a challenging issue because of problems associated with sensor design and force rendering [3], [12]. The realization of force feedback in telemanipulation has thus far mainly been achieved by integrating force sensors into a slave site to measure the reaction forces between a slave robot and the environment. The measured force signals are then filtered to guarantee the stability of the haptic device and to offer an improved quality of force feedback. The force sensor, however, has a low SNR for force feedback, and can be damaged through physical contact with the environment or by exposure to biological and chemical materials. Although the use of a strain-gauge sensor or a commercial six-axis force/torque sensor in teleoperated robotic surgery has been examined [4], [13], current commercial surgery robots hardly provide adequate haptic feedback due to safety and effectiveness issues, partially associated with the reliability of the force sensor in a noisy environment. Very small-scale force sensing for micromanipulation is even more difficult because the design of small force sensors needs to meet challenging requirements for such applications, including microsensing for multiple degrees of freedom (DOF) with high resolution and accuracy while maintaining a high SNR. In addition, sufficient reliability and repeatability of the force sensor must be preserved. In particular, microscale measurements for biomanipulation are subject to severe disturbances due to liquid surface tension (e.g., when cells are in a medium) and adhesion forces [12], [14]. Therefore, new methods capable of avoiding the use of force sensors have recently become prevalent.

This paper presents a new method to estimate the interaction forces of a slave environment based on visual information rather than on direct force measurements using a force sensor. The visual information measured from optical devices is transformed into haptic information by modeling the slave environment. The interaction forces are rendered from this environment using a mechanical model representing the relationship between the object deformation and the applied forces. Therefore, it is not necessary to use force sensors. Originally, the term "haptic rendering" was defined as the process of computing and generating forces in response to a user's interaction with virtual objects [15], including collision detection, force response, and control algorithms [16]. The proposed algorithm also incorporates these components in order to compute and generate forces due to the user's interaction with the visually modeled slave environment.

The interaction-force prediction algorithm is investigated using image processing and physically based modeling techniques. The geometry (boundary) information of a deformable object is obtained from the images of a slave site in the preprocess for the modeling, and the kinematic information of a slave tool tip

1083-4435/$26.00 © 2009 IEEE

  • KIM et al.: HAPTIC INTERACTION METHOD USING VISUAL INFORMATION AND PHYSICALLY BASED MODELING 637

can be obtained using a fast image processing algorithm as the input of the physically based model to estimate the interaction forces. In this paper, the boundary element (BE) method (BEM) is used as the physically based modeling technique, while a priori knowledge of the material properties is assumed. During the interactions, the boundary conditions are updated using a real-time motion analysis of the slave environment. The interaction forces are then calculated based on the model and conveyed to the user through a haptic device. The proposed algorithm requires only the material properties and the object edge information. Thus, this algorithm is robust to topological changes of the model network. In addition, measuring the deformation of an entire object body and applying it to the model as nodal displacements can be a very time-consuming task. Therefore, the position update of a slave robot (tool tip) is used to recover the forces, which is similar to the haptic interaction point (HIP) in virtual reality (VR) applications [17]. Moreover, the proposed system addresses force sensing issues at both micro- and macroscales, so that a very small-scale or very large-scale slave environment can be rendered using the proposed algorithm.

This paper is organized as follows. Section II presents the previous work related to vision-based force estimation methods. Section III provides an overview of the proposed haptic feedback method, which is based on image processing and physically based modeling techniques. In order to demonstrate the effectiveness of the proposed method, macro- and microscale telemanipulation systems were developed. In Section IV, the experimental results of the developed telemanipulation systems are presented. Finally, conclusions and suggestions with regard to future work are given in Section V.

    II. PREVIOUS WORK

A large number of computer vision and image processing techniques have been investigated with regard to object recognition and tracking [18], the characterization of material properties [19], [20], collision detection [21], and the modeling of deformable objects [22]. In this context, force estimation from visual information has also received much attention. Forces are usually computed based on the geometric information of an object (or a manipulation tool) for known input displacements, for which the measured geometric information is applied to an offline force estimation algorithm. For instance, Wang et al. [23] computed the deformation gradients of elastic objects from images and estimated the external forces using the stress–strain relationships. Luo and Nelson [24] presented a method fusing force and vision feedback for deformable object manipulation, in which the measured deformation was applied to a finite-element (FE) model to obtain the force estimates. Greminger and Nelson [25] showed a force measurement through the boundary displacements of elastic objects using a Dirichlet-to-Neumann map. Nelson et al. [26] measured the applied forces for biological cells with a point-load model for cell deformation. DiMaio and Salcudean [27] measured the tissue phantom deformation to estimate the applied force distribution during the insertion of a needle. Anis et al. [28] used the force–displacement relationship of a microgripper in a microassembly

process. Liu et al. [29] measured the contact forces of a biological single cell using the deflection of a polydimethylsiloxane (PDMS) post in a cell holding device.

A few researchers have studied real-time force estimation algorithms for haptic rendering based on visual information. Owaki et al. [30] introduced a concept in which the visual data of real objects were used as haptic data to simulate the virtual touching of an object, but not for telemanipulation tasks. They used a high-speed active-vision system that allows visual data to be obtained at 200 Hz. Ammi et al. [10] used microscopic images to provide haptic feedback in a cell injection system. A nonlinear mass–spring model of the cell was used to compute the interaction forces for haptic rendering. However, mass–spring models offer limited accuracy [31]. Other significant disadvantages of their method include its weak connection to biomechanics. For example, there was no mechanically relevant relationship between the model parameters and the object material properties. Moreover, the parameters were calculated from offline FE method (FEM) simulations; this required extra FE modeling efforts, and the results were influenced by the network topology. Kennedy and Desai [32] proposed a vision-based haptic feedback system in the case of robot-assisted surgery. A rubber membrane was modeled using an FE model, and a grid located on the rubber membrane was visually tracked in order to measure its displacement. The FE model then reflected the interaction forces using the displacement values as boundary conditions. With this method, however, it was necessary to stamp a grid pattern on the object to generate the internal meshes and to track each node for the FE model, which made this method inconvenient and impractical for biological and microscale objects. In addition, real-time solution of the FEM is usually not feasible [33].

In conclusion, the mass–spring and FEM models in the aforementioned studies present severe shortcomings, often requiring additional efforts. FEM models were not efficient enough to be used in real-time applications. Finally, in many of the previous systems, the FEM required a controlled slave environment to model the membrane. The mass–spring model was usually unrealistic and highly sensitive to the tuning of the model, such as the spring constants of the mesh, through additional experiments. To circumvent the issues related to the use of FEM and mass–spring models, the present paper uses BEM as an alternative approach to estimate the forces required for the haptic feedback. BEM is a numerical technique for solving the differential equations representing an object model that computes the unknowns on the model boundary instead of over its entire body. The proposed method uses the object edge information and known material properties, which makes it highly adaptive to network topology changes by reducing the amount of additional effort required in previous systems.

    III. VISION-BASED HAPTIC INTERACTION METHOD

A. Overview

Fig. 1 represents the coordinates of the developed system.

A master interface has a master space with frame {M}, in which the position of the haptic stylus is given by the 3-D vector p^M. The physical interactions between a manipulator and a


    Fig. 1. Coordinate frames of the telemanipulation system.

Fig. 2. Overall system block diagram (F_h is the force exerted on the haptic device by the human operator).

deformable object are introduced in the slave space {S}. The shape of an object can be expressed by q^S, and the position of the manipulator p^S is related to p^M by the transform T_p. The interactions in the slave space are mapped to the image space {I} to measure the positions p^I and q^I, and to estimate the interaction force F^S = f(q^S, p^S), where f(·) represents the continuum mechanics method. The interaction force F^S is then transformed into F^M = T_F F^S using the transform T_F. The transforms T_p and T_F contain scaling factors between the master and slave spaces. If a position scaling factor in T_p is set to scale down (or up), the forces are scaled up (or down) by a force scaling factor in T_F.
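The reciprocal position/force scaling described above can be sketched as follows. This is a minimal illustration; the function names and the numeric scale factor are our own assumptions, not values from the paper:

```python
import numpy as np

# Positions are scaled down going master -> slave, so forces are scaled
# up coming back slave -> master (reciprocal factors, as in T_p and T_F).
POSITION_SCALE = 0.01                    # assumed: 1 cm of stylus motion -> 0.1 mm at the slave
FORCE_SCALE = 1.0 / POSITION_SCALE      # reciprocal scaling for force reflection

def master_to_slave(p_master: np.ndarray) -> np.ndarray:
    """Map a haptic-stylus position into the slave workspace (role of T_p)."""
    return POSITION_SCALE * p_master

def slave_to_master_force(f_slave: np.ndarray) -> np.ndarray:
    """Map an estimated slave-side force back to the haptic device (role of T_F)."""
    return FORCE_SCALE * f_slave
```

With this choice, a tiny slave-side force of 1 mN is rendered as 0.1 N at the master, which is the magnification effect the scaling factors are meant to provide.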

The algorithm consists of two parts (see Fig. 2): the construction of a deformable object model (preprocess) and the interaction force update for each frame (run-time process). In the preprocess phase, the edge information of the object is obtained using image processing techniques, and a boundary mesh is constructed based on the edge information. The BE model is then created with the object mesh and known material properties. Using this model, the system of equations is built and precomputed; it is used for a fast update of the system matrix in the run-time process.

In the run-time phase, the position of the haptic device is applied to the robotic manipulator as position commands, and the interaction force at the slave space is commanded to the haptic device for force feedback. The displacement u_c at the contact point of a deformable object is estimated based on the visually tracked tool-tip position, and is applied to the precomputed BE model as a boundary condition to compute the reaction force. For good-fidelity haptic feedback, the forces are deduced at a high rate (1 kHz) from forces computed at a low rate (60 Hz) using a linear force interpolation method.

The key parts of the algorithm consist of the geometry extraction from images, the object modeling, and the real-time computation of the interaction forces. The remainder of this section concretely explains each part of the algorithm.

B. Geometry Extraction

Fast and accurate motion tracking and edge detection techniques are important for modeling a deformable object. The edge (I_q) of the object along with the tool-tip position (I_p) of a slave manipulator is extracted and tracked using the following methods.

Template matching is used to track the tool-tip position; this is a process that determines the location of a template by measuring the degree of similarity between an image and the template. Although there are several methods that can measure the degree of similarity, such as the sum of squared differences (SSD), a normalized cross-correlation coefficient was implemented to reduce the sensitivity to contrast changes in the template and video image [34]. The correlation between the pixels of the template (w × h) and every pixel in the entire image is given in (1), shown at the bottom of the page, where Ĩ(I_x + ΔI_x, I_y + ΔI_y) = I(I_x + ΔI_x, I_y + ΔI_y) − Ī(I_x, I_y) and T̃(ΔI_x, ΔI_y) = T(ΔI_x, ΔI_y) − T̄. I(I_x, I_y) and T(ΔI_x, ΔI_y) are the corresponding pixel values of the image and template, respectively, and Ī(I_x, I_y) and T̄ are the average pixel value of the image under the template window and the average pixel value of the template, respectively. In order to reduce the computational load of the pixel-by-pixel operation (1), a moving region-of-interest (ROI) is adopted. As the movement of the tool tip is very small in sequential frames,

$$C(I_x, I_y) = \frac{\displaystyle\sum_{\Delta I_y=0}^{h-1}\sum_{\Delta I_x=0}^{w-1} \tilde{T}(\Delta I_x, \Delta I_y)\,\tilde{I}(I_x+\Delta I_x,\, I_y+\Delta I_y)}{\left(\displaystyle\sum_{\Delta I_y=0}^{h-1}\sum_{\Delta I_x=0}^{w-1} \tilde{T}(\Delta I_x, \Delta I_y)^2 \sum_{\Delta I_y=0}^{h-1}\sum_{\Delta I_x=0}^{w-1} \tilde{I}(I_x+\Delta I_x,\, I_y+\Delta I_y)^2\right)^{1/2}} \tag{1}$$


the ROI is determined around the position identified via template matching. The template matching is then performed within the ROI to obtain the new position.
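The normalized cross-correlation search with a moving ROI can be sketched as follows, assuming grayscale images stored as NumPy arrays (function names and the ROI radius are our own, not from the paper):

```python
import numpy as np

def ncc(image_patch, template):
    """Normalized cross-correlation coefficient of one patch, as in (1)."""
    i = image_patch - image_patch.mean()   # zero-mean image patch
    t = template - template.mean()         # zero-mean template
    denom = np.sqrt((t * t).sum() * (i * i).sum())
    return (t * i).sum() / denom if denom > 0 else 0.0

def track_tool_tip(image, template, prev_xy, roi_radius=20):
    """Search only a moving ROI around the previous tool-tip position."""
    h, w = template.shape
    x0, y0 = prev_xy
    best, best_xy = -1.0, prev_xy
    for y in range(max(0, y0 - roi_radius), min(image.shape[0] - h, y0 + roi_radius)):
        for x in range(max(0, x0 - roi_radius), min(image.shape[1] - w, x0 + roi_radius)):
            c = ncc(image[y:y + h, x:x + w], template)
            if c > best:
                best, best_xy = c, (x, y)
    return best_xy, best
```

In practice a library routine such as OpenCV's normalized-correlation template matching would replace the inner loops; the sketch only makes the ROI restriction explicit.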

To represent the geometry (q) of a deformable object, the object boundary (I_q) is extracted using the active contour model (snake) developed by Kass et al. [35]. The contour, defined by a set of control points, is initially placed manually near the edge of interest. The energy function (2) defined around each control point is then computed, and the contour is drawn to the edge of the image where the energy has a local minimum. In this paper, a fast greedy algorithm [36] is used for energy minimization, and the energy function E_snake is defined by

$$E_{\text{snake}} = \int \left(\alpha(s)E_{\text{cont}} + \beta(s)E_{\text{curv}} + \gamma(s)E_{\text{image}}\right) ds \tag{2}$$

where s is the arc length along the snake's contour taken as a parameter; the continuity energy E_cont minimizes the distance between control points and prevents all control points from moving toward the previous control point; E_curv represents the curvature energy and is responsible for the curvature of the contour corner; the image energy E_image indicates the normalized edge strength; and the values of α, β, and γ weight each energy term. The edge of the object is finally represented by the positions of the control points, which are used to mesh the boundary of the object for the BE model.
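One greedy-minimization pass over the control points might be sketched as follows. The discrete forms of the energy terms and the weights follow common implementations of the greedy snake, not necessarily the paper's exact choices:

```python
import numpy as np

def greedy_snake_step(points, edge_strength, alpha=1.0, beta=1.0, gamma=1.2):
    """One greedy pass: each control point moves to the 3x3 neighbor that
    minimizes weighted continuity + curvature + image energy (cf. (2))."""
    pts = points.copy()
    n = len(pts)
    # Mean spacing between control points (closed contour).
    diffs = np.diff(pts, axis=0, append=pts[:1])
    d_mean = np.mean(np.linalg.norm(diffs, axis=1))
    for i in range(n):
        prev_pt, next_pt = pts[i - 1], pts[(i + 1) % n]
        best_e, best = np.inf, pts[i]
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                cand = pts[i] + np.array([dx, dy], dtype=float)
                # Continuity: keep spacing close to the contour's mean spacing.
                e_cont = abs(d_mean - np.linalg.norm(cand - prev_pt))
                # Curvature: discrete second difference along the contour.
                e_curv = np.linalg.norm(prev_pt - 2.0 * cand + next_pt)
                row = int(cand[1]) % edge_strength.shape[0]
                col = int(cand[0]) % edge_strength.shape[1]
                e_img = -edge_strength[row, col]   # strong edges lower the energy
                e = alpha * e_cont + beta * e_curv + gamma * e_img
                if e < best_e:
                    best_e, best = e, cand
        pts[i] = best
    return pts
```

The pass is repeated until few points move, after which the control points are used as boundary nodes for the BE mesh.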

C. Continuum Mechanics Model

For realistic and plausible force estimation, the continuum mechanics modeling of deformable objects has been widely studied and developed in haptic applications [37]. In continuum mechanics, differential equations for the stress or strain equilibrium have to be solved, and numerical methods such as the FEM and BEM are usually used with a discretization of the object into a number of elements.

The BEM directly uses mechanical parameters and handles various interactions between the tools and the objects. Due to its physically based nature and computational advantages over the FEM, it has been used in computer animation and haptic applications. According to [50], for real-time object simulation, BEM requires less computation time than FEM, and FEM has difficulties in constructing and modifying the element mesh. In this paper, as the obtained geometry information of a deformable object is the edge information, BEM, which uses a boundary mesh, is highly suitable compared to FEM, which requires solid elements of the object. James and Pai [38] successfully applied BEM to the simulation of a deformable object with haptic feedback. The reaction force and deformation were computed based on precomputed reference boundary value problems known as Green's functions (GFs) and a capacitance matrix algorithm (CMA).

In this paper, the BE model of a deformable object was built using the object edge information extracted via the control points of an active contour model and the related material properties (Young's modulus E and Poisson's ratio ν). In linear elasticity theory [39], the equation of equilibrium for the displacement u_i, called the Navier equation, can be expressed by

$$\mu \frac{\partial^2 u_i}{\partial x_j \partial x_j} + (\lambda + \mu)\frac{\partial^2 u_j}{\partial x_i \partial x_j} + b_i = 0 \tag{3}$$

Fig. 3. BEs with boundary conditions in (a) the preprocess and (b) the run-time process.

where the Lamé constants μ and λ are μ = E/(2(1 + ν)) and λ = Eν/((1 + ν)(1 − 2ν)), b_i denotes the body force components, and x_i denotes the position of a point of the body. The Navier equation can be converted into the form of an integral equation over the boundary (Γ) using Green's theorem. The boundary integral equation is given by

$$C_{ik}(P)\,u_k(P) + \int_\Gamma u_k\, T^*_{ik}(P,Q)\, d\Gamma = \int_\Gamma t_k\, U^*_{ik}(P,Q)\, d\Gamma$$

$$T^*_{ik} = \frac{-1}{4\pi(1-\nu)}\left[\frac{1}{r}\frac{\partial r}{\partial n}\left\{(1-2\nu)\delta_{ik} + 2\frac{\partial r}{\partial x_i}\frac{\partial r}{\partial x_k}\right\} - \frac{(1-2\nu)}{r}\left(\frac{\partial r}{\partial x_i}n_k - \frac{\partial r}{\partial x_k}n_i\right)\right]$$

$$U^*_{ik} = \frac{1}{8\pi\mu(1-\nu)}\left[(3-4\nu)\ln\!\left(\frac{1}{r}\right)\delta_{ik} + \frac{\partial r}{\partial x_i}\frac{\partial r}{\partial x_k}\right] \tag{4}$$

where the point P is called the load or source point and the point Q is called the field point. The body force term is omitted in (4) to simplify the computation. The variable t_k is the traction on the boundary, C_ik is a coefficient depending on the smoothness of the boundary, T*_ik and U*_ik are the traction and displacement components of the fundamental solution of the Navier equation [39], n is the outward unit normal vector at the load point, r is the distance between the load point and the field point, and δ_ik is the Kronecker delta.

In order to numerically solve (4), the boundary is discretized into N elements (see Fig. 3). The points representing the unknown values, tractions and displacements, are defined as nodes. In the present study, we have selected constant elements for simplicity; namely, the nodes are assumed to be in the middle of each element and the unknowns have a constant value over each element. Equation (4) can be written in matrix form and discretized as

$$c_i u_i + \int_\Gamma u\, t^*\, d\Gamma = \int_\Gamma t\, u^*\, d\Gamma$$

$$c_i u_i + \sum_{j=1}^{N}\left(\int_{\Gamma_j} t^*\, d\Gamma\right) u_j = \sum_{j=1}^{N}\left(\int_{\Gamma_j} u^*\, d\Gamma\right) t_j. \tag{5}$$

The integrals in (5) are numerically solved using Gaussian quadrature. Finally, the resulting system of equations is given


by

$$\sum_{j=1}^{N} H_{ij} u_j = \sum_{j=1}^{N} G_{ij} t_j, \qquad HU = GT \tag{6}$$

where H_ij = c_i δ_ij + ∫_{Γ_j} T* dΓ and G_ij = ∫_{Γ_j} U* dΓ; the H and G matrices are 2N × 2N dense matrices in the case of 2-D problems, and U and T are the displacement and traction vectors, respectively. The boundary conditions, displacements (ū) or tractions (t̄), are applied at each node to solve these algebraic equations. When the displacement value is given on a node, the traction value can be obtained, and vice versa. Equation (6) can be rearranged as

$$A Y + \bar{A}\bar{Y} = 0 \quad\Rightarrow\quad Y = -A^{-1}(\bar{A}\bar{Y}) \tag{7}$$

where Y is the unknown vector consisting of the unknown boundary nodal values, Ȳ represents the known boundary conditions, and A and Ā consist of the columns of the H and G matrices according to the indexes of Y and Ȳ, respectively. Y can be obtained by solving (7).
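The column rearrangement of (6) into (7) can be sketched numerically as follows, using a toy dense system rather than a real BE discretization (all names are ours):

```python
import numpy as np

def solve_mixed_bem(H, G, disp_known, u_bar, t_bar):
    """Rearrange HU = GT into A Y = -Abar Ybar and solve, as in (7).

    disp_known[k] is True where the displacement DOF k is prescribed
    (so its traction is unknown), False where the traction is prescribed.
    """
    n = H.shape[0]
    A = np.empty_like(H)
    A_bar = np.empty_like(H)
    y_bar = np.empty(n)
    # H u - G t = 0: move each known column to Abar, keep unknowns in A.
    for k in range(n):
        if disp_known[k]:            # u_k prescribed -> traction t_k unknown
            A[:, k] = -G[:, k]       # unknown-traction column
            A_bar[:, k] = H[:, k]    # known-displacement column
            y_bar[k] = u_bar[k]
        else:                        # t_k prescribed -> displacement u_k unknown
            A[:, k] = H[:, k]
            A_bar[:, k] = -G[:, k]
            y_bar[k] = t_bar[k]
    y = np.linalg.solve(A, -A_bar @ y_bar)   # Y = -A^{-1}(Abar Ybar)
    u = np.where(disp_known, u_bar, y)       # recover full displacement vector
    t = np.where(disp_known, y, t_bar)       # recover full traction vector
    return u, t
```

Because A's columns change whenever contact changes which DOFs are prescribed, solving this directly at every frame is what the capacitance matrix algorithm below avoids.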

When the object is deformed, the boundary conditions at the collision nodes change. Therefore, (6) and (7) must be rewritten to take the new boundary conditions into account, and they must be solved in real time.

D. Real-Time Force Computation

For a real-time and realistic haptic interaction, it is necessary to provide haptic feedback with update rates greater than 500 Hz [40]. In other words, the interaction forces must be computed within 2 ms. In order to solve the linear matrix system (7) in real time, a CMA is used [38].

In the preprocess, the reference boundary value problem is defined with the boundary conditions as in Fig. 3(a), i.e., A_0 Y_0 + Ā_0 Ȳ_0 = 0. In the run-time process, the displacement boundary condition is applied at the contact points between the object and the tool [see Fig. 3(b)]. If S boundary conditions change, the A matrix for the new set of boundary conditions can be related to the precomputed A_0 matrix by swapping S block columns. Using the Sherman–Morrison–Woodbury formula, the relationship between A and A_0 can be obtained as follows:

$$A^{-1} = A_0^{-1} - A_0^{-1}(\bar{A}_0 - A_0)\, I_S\, C^{-1} I_S^T A_0^{-1}. \tag{8}$$

Equation (7) can then be represented by

$$Y = -A^{-1}(\bar{A}\bar{Y}) = Y^{(0)} - (I_S + \Xi I_S)\, C^{-1} I_S^T Y^{(0)} \tag{9}$$

where I_S is a 2N × 2S submatrix of the identity matrix, Y^{(0)} = [Ξ(I − I_S I_S^T) − I_S I_S^T]Ȳ, C is known as the capacitance matrix (2S × 2S), C = I_S^T Ξ I_S, and the GFs are computed as Ξ = −A_0^{-1} Ā_0. Equation (9), known as the capacitance matrix formula, can then be implemented to reduce the amount of recomputation. The solution Y for the tractions and displacements over the entire boundary can be obtained with the computational complexity of the inverse of the smaller capacitance matrix.
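The low-rank column-swap update behind (8) can be checked numerically with the textbook Sherman–Morrison–Woodbury identity. Note the small matrix C below is the Woodbury form I + I_S^T A_0^{-1}(Ā_0 − A_0)I_S, which differs in form from the paper's GF-based capacitance matrix; the names and toy sizes are ours:

```python
import numpy as np

def woodbury_inverse(A0_inv, A0, A0bar, swapped):
    """Inverse of A0 with columns `swapped` replaced by the columns of A0bar,
    obtained from the precomputed A0_inv plus a small dense solve."""
    n = A0.shape[0]
    Is = np.eye(n)[:, swapped]                     # 2N x 2S identity submatrix
    U = (A0bar - A0) @ Is                          # rank-2S column update
    C = np.eye(len(swapped)) + Is.T @ A0_inv @ U   # small (2S x 2S) system
    return A0_inv - A0_inv @ U @ np.linalg.solve(C, Is.T @ A0_inv)
```

Only the small 2S × 2S system is factored per contact change, which is the source of the run-time speedup reported in Table I.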

TABLE I
COMPUTATION TIME ACCORDING TO THE NUMBER OF NODES

It is not necessary to compute the global deformation (no graphic rendering) because the visual feedback is provided through real-time video images rather than computer-generated graphic images. The contact force can be easily computed using the capacitance matrix [38]. From (9),

$$I_S^T Y = I_S^T\left[Y^{(0)} - (I_S + \Xi I_S)\, C^{-1} I_S^T Y^{(0)}\right] = -C^{-1} I_S^T Y^{(0)}. \tag{10}$$

Given the nonzero displacement boundary conditions at the $S$ contact nodes in the run-time process, the resulting contact force can be computed by

$$F = a_E\, I_S^T Y = a_E\, C^{-1} I_S^T Y^{(0)} = -a_E\, C^{-1} I_S^T \bar{Y} = -a_E\, C^{-1} \bar{Y}_S \qquad (11)$$

where $a_E$ is the effective area. It consists of the nodal area and a scaling factor for different-scale manipulation tasks, which magnifies (or reduces) the contact force while providing haptic feedback to the user.

Table I shows the computation times for the preprocess and the run-time process on a 2.40-GHz Pentium-IV PC. The time required for preprocessing increased with the number of nodes. As shown in the third column of Table I, it is difficult to achieve real-time haptic rendering using (7). A fast run-time computation speed can be obtained using (11), regardless of the number of nodes.

Although the contact forces are rapidly computed using locally updated boundary conditions, the forces are obtained only at the visual update rate (approximately 60 Hz) because the boundary conditions are updated from the images. Given the reaction force $F_n$ at time $t_n$ at the visual update rate, $F(t) = F_n$ for $t_n \le t < t_{n+1}$, which is insufficient for high-fidelity haptic feedback. Therefore, a force interpolation method [41] is used to derive the forces at a high rate (1 kHz). The force interpolating the samples at $t_{n-1}$ and $t_n$ is exerted between $t_n$ and $t_{n+1}$, following

$$F(t) = \frac{F_n - F_{n-1}}{t_n - t_{n-1}}\,(t - t_{n-1}) + F_{n-1}, \qquad t_n \le t < t_{n+1}. \qquad (12)$$
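The interpolation of (12) can be sketched as a small helper in the 1-kHz haptic loop. This is a minimal illustration with made-up sample values; the function name and the example timestamps are not from the paper.

```python
def interpolated_force(F_prev, F_curr, t_prev, t_curr, t):
    """Eq. (12): between the latest visual update at t_curr and the next one,
    continue along the slope of the last measured segment [t_prev, t_curr]
    so the 1-kHz haptic loop never outputs a stale step."""
    slope = (F_curr - F_prev) / (t_curr - t_prev)
    return slope * (t - t_prev) + F_prev

# Visual samples arrive at ~60 Hz; the haptic loop runs at 1 kHz.
F_prev, F_curr = 0.10, 0.16          # forces (N) at the last two visual updates
t_prev, t_curr = 0.000, 0.0167       # their timestamps (s)

# At t = t_curr the formula returns the latest measured force ...
assert abs(interpolated_force(F_prev, F_curr, t_prev, t_curr, t_curr) - F_curr) < 1e-12
# ... and between updates it continues along the last slope instead of holding.
assert interpolated_force(F_prev, F_curr, t_prev, t_curr, 0.020) > F_curr
```

Note that (12) extrapolates the previous segment's slope over the next interval, trading a small force lag for continuity at the haptic rate.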

E. Collision Detection

Collision detection is achieved using hierarchical bounding boxes and a neighborhood watch algorithm [42]. The BE model is hierarchically represented as oriented-bounding-box trees and stored in a preprocess phase. If the line segment between the previous and current tool-tip positions is inside a bounding box, potential collisions are sequentially checked along the tree. When the last bounding box for a line element collides with the line segment, the ideal haptic interface point is constrained at the collision node. The distance between the tool tip and the collision node is used as the displacement boundary condition of that node. During interactions, the collision nodes are rapidly updated using the neighborhood watch algorithm, which is based on a predefined linkage between the nodes.
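The hierarchical traversal above can be sketched as follows. For brevity this assumed sketch uses axis-aligned boxes and a standard slab test, whereas the paper uses oriented bounding boxes; the class and function names are illustrative, not from the authors' code.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class BoxNode:
    lo: Tuple[float, float]            # box min corner
    hi: Tuple[float, float]            # box max corner
    children: tuple = ()               # child BoxNodes; empty => leaf
    node_id: Optional[int] = None      # BE node stored at a leaf

def segment_hits_box(p, q, lo, hi):
    """Slab test: does the segment p->q intersect the 2-D box [lo, hi]?"""
    t0, t1 = 0.0, 1.0
    for k in range(2):
        d = q[k] - p[k]
        if abs(d) < 1e-12:
            if p[k] < lo[k] or p[k] > hi[k]:
                return False
        else:
            a, b = (lo[k] - p[k]) / d, (hi[k] - p[k]) / d
            t0, t1 = max(t0, min(a, b)), min(t1, max(a, b))
            if t0 > t1:
                return False
    return True

def find_collision(node, p, q):
    """Walk the hierarchy; return the BE node whose leaf box the
    tool-tip segment p->q reaches, or None if there is no contact."""
    if not segment_hits_box(p, q, node.lo, node.hi):
        return None
    if not node.children:
        return node.node_id
    for child in node.children:
        hit = find_collision(child, p, q)
        if hit is not None:
            return hit
    return None

# Two leaf boxes under one root; the segment crosses only the left one.
tree = BoxNode((0, 0), (4, 2), (
    BoxNode((0, 0), (2, 2), node_id=7),
    BoxNode((2, 0), (4, 2), node_id=8),
))
assert find_collision(tree, (1.0, 3.0), (1.0, 1.0)) == 7
```

Using the segment between successive tool-tip positions, rather than the current point alone, prevents the tool from tunneling through a thin boundary between two image frames.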

IV. CASE STUDIES AND RESULTS

The developed algorithm was evaluated for the manipulation of elastic materials at different scales. Two experiments were conducted to demonstrate the effectiveness of the algorithm in macro- and microtelemanipulation tasks. In both systems, the deformation of the objects and the motion of the slave robot were captured by a charge-coupled device (CCD) camera (SVS340MUCP, SVS-Vistek, Seefeld, Germany; 640 × 480 pixel resolution, maximum 250 f/s), and the images were transmitted to a computer (Pentium-IV, 2.40 GHz). The 2-D geometric information was obtained through image processing techniques using OpenCV [43]. A commercial haptic device (PHANToM Omni, SensAble Technologies, USA) was used for force feedback, and a priori knowledge of the material properties was obtained through experiments and from the literature. The behavior of the model during manipulation was compared with that of the real deformable object. Table II summarizes the two experiments and shows the experimental setup and parameters.

A. Experiment 1: Macroscale Telemanipulation System

The macroscale manipulation system consists of an inanimate deformable object and a planar manipulator with an indenter tip as the slave robot. Fig. 4 shows the experimental platform. A 3-DOF planar manipulator (500 mm × 500 mm) performs indentation tasks on a rectangular block made from silicone gel (TSE3062, GE, USA). The images obtained using the CCD camera have a size of 640 × 480 pixels and a resolution of 0.35 mm/pixel. In addition, the indentation force is measured using a one-axis force sensor (SCMM2-1K, Senstech, Korea), which is used to validate the force estimated from the visual information.

First, the Young's modulus of the silicone block was measured to construct the BE model. A silicone specimen (42 mm × 40 mm × 9 mm) was molded, and its stress–strain relationship was obtained from a compression test [27]. Although a nonlinear relationship between stress and strain was observed (see Fig. 5), because the BE model is based on linear elasticity, this relationship was linearized in the initial deformation region using small-deformation linear theory. Strains up to 5%–10% were considered to be in the linear region, and the Young's modulus was approximated as 127 kPa in this region.
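The small-strain linearization can be sketched as a zero-intercept least-squares fit over the linear region. The stress–strain samples below are fabricated for illustration (the paper reports only the resulting modulus of about 127 kPa), so the numbers are assumptions, not measured data.

```python
import numpy as np

# Hypothetical compression-test samples (strain, stress in Pa); the true
# silicone response is nonlinear, so only the small-strain region is fitted.
strain = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10, 0.20, 0.30])
stress = np.array([0.0, 2.5e3, 5.1e3, 7.6e3, 1.02e4, 1.27e4, 3.2e4, 6.0e4])

# Restrict the fit to strains below 10% and force a zero intercept,
# consistent with small-deformation linear elasticity (sigma = E * eps).
mask = strain <= 0.10
E = float(strain[mask] @ stress[mask] / (strain[mask] @ strain[mask]))

assert 1.2e5 < E < 1.35e5   # on this fabricated data, E is near 127 kPa
```

Restricting the mask to the initial region is the whole point: including the large-strain samples would bias the modulus upward and break the linear-elastic assumption of the BE model.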

The geometry of the rectangular block was represented using 60 control points along the active contour. Hence, the BE model consisted of 60 line elements with 60 nodes. As

TABLE II
MACRO- AND MICROMANIPULATION SYSTEMS

Fig. 4. Experimental setup of the slave part in the macroscale telemanipulation system.

one side of the block was fixed to the platform, zero-displacement boundary conditions (u = 0) were applied on this side. When the indenter deformed the block, the resulting contact force was computed based on the proposed method. Simultaneously, the actual contact force along the indenter insertion axis was measured by the force sensor.

The model prediction was compared with the block response. The interaction forces at the contact point are shown in Fig. 6. The results show a reasonable match between the measured and estimated force values. As the local strain increased, the


Fig. 5. Silicone block stress–strain plot from the compression test.

    Fig. 6. Measured surface forces and nodal forces from BEM.

difference between the values increased due to the linear approximation of the silicone block's nonlinearities. A bias (about 0.06 N) was also observed due to errors arising from buckling of the object in the direction perpendicular to the plane and from measurement errors in the image analysis (e.g., edge-detection noise and minor illumination changes). The bias could be overcome using a scaling factor, as in the micromanipulation system, where the scaled-up reaction force must be reflected to the user. Fig. 7 compares the actual block deformation with the global deformation of the BE model for different indentation locations. The dotted line represents the nodes of the BE model, determined as a result of the input displacement at the contact point. Each nodal displacement of the BE model is in good agreement with the deformation of the object. Fig. 8 shows the master position, the visually tracked slave position, and the feedback force to the user along the indentation direction. The force estimation performance is satisfactory for force feedback.

B. Experiment 2: Cellular Manipulation System

In this experiment, an application to cellular manipulation is presented. Cellular manipulations such as microinjection are increasingly used in transgenics and in biomedical and

Fig. 7. Deformation of the silicone block and the computed global deformation of the BE model (dotted line).

Fig. 8. Position and force profiles for the macroscale telemanipulation system.

pharmaceutical research. Examples include the creation of transgenic mice by injecting cloned deoxyribonucleic acid (DNA) into fertilized mouse eggs, and intracytoplasmic sperm injection (ICSI) with a micropipette. However, most cellular manipulation systems to date have focused primarily on visual information in conjunction with a dial-based console system. The operator needs extensive training to perform these tasks, and even an experienced operator can have low success rates and poor reproducibility due to the nature of the tasks [44], [45].

The developed cell injection system is shown in Fig. 9. It consists of an inverted microscope (Motic, AE31, China) and two 3-DOF micromanipulators (Sutter, MP225, USA) to guide the cell-holding and injection units. An injection micropipette (Humagen, MIC-9m-45, USA) is connected to one micromanipulator, whereas a glass capillary with an air pump (Eppendorf, CellTram Air, Germany) is connected to the other micromanipulator to hold the cell. Each micromanipulator has a resolution


    Fig. 9. Developed cellular manipulation system.

of 0.0625 μm along each axis and a travel distance of 25 mm. Images were captured at 40× magnification. The obtained images have a size of 640 × 480 pixels and a resolution of 2 μm/pixel.

Zebrafish embryos were used as the deformable object in the experiments. Zebrafish have been widely used as a model in developmental genetic and embryological research due to the similarity of their gene structure to that of humans [46]. In this paper, zebrafish embryos were obtained from the Laboratory of Molecular and Developmental Genetics, Chungnam National University, Korea. The embryos were considered a linear elastic material within small-deformation linear theory. It has been reported that the Young's modulus of the chorion of the zebrafish embryo is approximately 1.51 MPa with a standard deviation of 0.07 MPa, and that the Poisson's ratio is equal to 0.5 [47]. These properties were used in the BE model of the cell.

Conventionally, the cell injection procedure involves: 1) guiding the injection pipette; 2) puncturing the membrane; and 3) depositing the materials. In this paper, the task was to puncture the chorion of a zebrafish embryo and guide the injection pipette to a targeted position. The location of the targeted position was randomly chosen and changed for every test. Unlike the macroscale experiment on the silicone block (an indentation experiment), the cell membrane was punctured by the injection pipette. Therefore, it was necessary to provide the user with a puncturing cue. As the BEM cannot compute the membrane puncturing, the overshoot of the injection pipette after the breaking of the membrane was measured instead. Published work revealed that the penetration force decreases significantly after puncturing [48]. Accordingly, when the position overshoot occurred, the magnitude of the reaction force was set to zero.
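The overshoot-based puncture cue can be sketched as a simple gate on the reflected force. This is a minimal sketch under assumptions: the function name, the tolerance value, and the 1-D positions are hypothetical, and the real system tracks the tip and membrane boundary in image coordinates.

```python
def feedback_force(estimated_force, tip_pos, boundary_pos, overshoot_tol=5.0):
    """Puncture cue sketch: the BE model cannot represent membrane rupture,
    so when the tracked pipette tip overshoots past the tracked membrane
    boundary by more than a tolerance (a free parameter here, in image
    units), the reflected force magnitude is dropped to zero."""
    punctured = (tip_pos - boundary_pos) > overshoot_tol
    return 0.0 if punctured else estimated_force

# Before puncture, the estimated contact force is passed through ...
assert feedback_force(0.8, tip_pos=100.0, boundary_pos=98.0) == 0.8
# ... and after the overshoot is detected, the reflected force is zeroed.
assert feedback_force(0.8, tip_pos=110.0, boundary_pos=98.0) == 0.0
```

The sudden drop to zero is what the user perceives as the puncture, mirroring the measured force collapse reported after chorion rupture [48].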

Fig. 10 shows the deformation and puncturing of a zebrafish embryo during the injection procedure. As can be seen, the cell membrane is punctured at a small displacement, and the tracking performance of the BE model is satisfactory in this deformation region. For the BE model, the nodes of the membrane attached to the holding pipette (a glass capillary) have zero-displacement boundary conditions (u = 0). Fig. 11 shows the estimated force response for the deformation created by the injection pipette. The membrane was punctured when the deformation length ranged approximately between 50 and 200 μm.

Fig. 10. Procedure of a cellular injection task for a zebrafish embryo and the computed global deformation of the BE model (dotted line).

Fig. 11. Estimated force–deformation curve of a zebrafish embryo.

Fig. 12. Force–deformation curve of a zebrafish embryo (adapted from [47]).

According to previously published work (see Fig. 12), the measured force–deformation relationship for a zebrafish embryo is characterized by nonlinear behavior that can be approximated as linear for small deformations (up to 100 μm). This allows


Fig. 13. Amplified cell injection and puncturing force computed using the BE model; the arrows indicate the puncturing moment.

us to use the proposed linear elastic model for small deformations. Different force scales are exhibited in Figs. 11 and 12 due to different sizes and types of pipette and different injection velocities. In order to display the force response to a user, the microscale contact forces need to be magnified. Specifying and varying an appropriate force scaling factor has been an issue in micromanipulation [12], [49]. The scaling factor was experimentally chosen within the maximum applicable force of the haptic device (3.3 N). Fig. 13 shows the scaled forces over time for force feedback to a user. The feedback forces increase during the insertion of the micropipette, and the user can feel the puncturing force when puncturing occurs.
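The force-scaling step can be sketched as a gain followed by saturation at the device limit. The gain value below is a placeholder (the paper states only that the factor was chosen experimentally), and the function name is illustrative; only the 3.3-N limit comes from the text.

```python
F_MAX = 3.3   # maximum applicable force of the haptic device (N)

def scaled_feedback(micro_force, scale):
    """Magnify a micro-scale contact force for display, clamped so the
    command never exceeds what the haptic device can render. The scale
    value is an experimentally chosen free parameter."""
    return max(-F_MAX, min(F_MAX, scale * micro_force))

# A 2-mN estimated contact force scaled by a placeholder gain of 1000 ...
assert scaled_feedback(2.0e-3, 1.0e3) == 2.0
# ... while larger forces saturate at the device limit instead of clipping hardware.
assert scaled_feedback(5.0e-3, 1.0e3) == 3.3
```

Clamping in software keeps the commanded force inside the device's renderable range, so the puncture transient is felt as a sharp but bounded event.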

V. CONCLUSION

In this paper, a haptic interaction method for deformable objects was investigated, in which the force information of a slave environment is inferred from visual information without a force sensor. The method is based on image processing techniques (an active contour model and template matching) and on a continuum mechanics model (the BE model).

The BE model of the slave environment requires only edge information and a priori knowledge of the object's material properties. These modest requirements make the algorithm robust to potential topological changes of the model network and do not require a controlled slave environment. Accordingly, the same algorithm can be used not only in micro- (or nano-) scale but also in macroscale environments without any revision. The cellular manipulation of zebrafish embryos and the macroscale telemanipulation experiment on a silicone block showed the potential of the proposed method at different scales. Experimental results showed that the algorithm allows users to feel reaction forces in real time during indentation and injection (with puncturing) tasks by means of haptic devices. The proposed algorithm is particularly well suited for micromanipulation because of the difficulties associated with reliable microforce sensing.

In future work, experiments with various biological materials will be performed for specific applications, in which the BE model and the image processing techniques will be adjusted for unmodeled behaviors and environmental conditions, such as the nonlinear behavior of tissues, tool-tip deformation in cellular squeezing, and the adhesion forces and liquid environments in micromanipulation. In addition, the integration of additional haptic feedback modalities, such as torque feedback, will be realized.

    ACKNOWLEDGMENT

The authors would like to thank Prof. C. Kim of Chungnam National University, Korea, for culturing the cells used in this study, and Prof. D. Lee and Prof. J. H. Shin of the Korea Advanced Institute of Science and Technology for their valuable comments on this research.

    REFERENCES

[1] M. Lin and K. Salisbury, "Haptic rendering: Beyond visual computing," IEEE Comput. Graph. Appl., vol. 24, no. 2, pp. 22–23, Mar./Apr. 2004.

[2] I. Hassanzadeh, F. Janabi-Sharifi, R. Akhavan, and X. Yang, "Teleoperation of mobile robots by shared impedance control: A pilot study," in Proc. IEEE Int. Conf. Control Appl., 2005, pp. 346–351.

[3] D. Reintsema, C. Preusche, T. Ortmaier, and G. Hirzinger, "Toward high-fidelity telepresence in space and surgery robotics," Presence: Teleoper. Virtual Environ., vol. 13, no. 1, pp. 77–98, 2004.

[4] H. Mayer, I. Nagy, A. Knoll, E. Braun, R. Bauernschmitt, and R. Lange, "Haptic feedback in a telepresence system for endoscopic heart surgery," Presence: Teleoper. Virtual Environ., vol. 16, no. 5, pp. 459–470, 2007.

[5] M. Mitsuishi, N. Sugita, and P. Pitakwatchara, "Force-feedback augmentation modes in the laparoscopic minimally invasive telesurgical system," IEEE/ASME Trans. Mechatronics, vol. 12, no. 4, pp. 447–454, Aug. 2007.

[6] N. Ando, P. Korondi, and H. Hashimoto, "Development of micromanipulator and haptic interface for networked micromanipulation," IEEE/ASME Trans. Mechatronics, vol. 6, no. 4, pp. 417–427, Dec. 2001.

[7] I. Bukusoglu, C. Basdogan, A. Kiraz, and A. Kurt, "Haptic manipulation of microspheres using optical tweezers," in Proc. 14th Symp. Haptic Interfaces Virtual Environ. Teleoper. Syst., 2006, pp. 361–365.

[8] M. Sitti and H. Hashimoto, "Teleoperated touch feedback from the surfaces at the nanoscale: Modeling and experiments," IEEE/ASME Trans. Mechatronics, vol. 8, no. 2, pp. 287–298, Jun. 2003.

[9] L. Liu, Y. Luo, N. Xi, Y. Wang, J. Zhang, and G. Li, "Sensor referenced real-time videolization of atomic force microscopy for nanomanipulations," IEEE/ASME Trans. Mechatronics, vol. 13, no. 1, pp. 76–85, Feb. 2008.

[10] M. Ammi, H. Ladjal, and A. Ferreira, "Evaluation of 3D pseudo-haptic rendering using vision for cell micromanipulation," in Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., Beijing, China, 2006, pp. 2115–2120.

[11] A. Pillarisetti, M. Pekarev, A. D. Brooks, and J. P. Desai, "Evaluating the effect of force feedback in cell injection," IEEE Trans. Autom. Sci. Eng., vol. 4, no. 3, pp. 322–331, Jul. 2007.

[12] Z. Lu, P. C. Y. Chen, and W. Lin, "Force sensing and control in micromanipulation," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 36, no. 6, pp. 713–724, Nov. 2006.

[13] C. R. Wagner, N. Stylopoulos, P. G. Jackson, and R. D. Howe, "The benefit of force feedback in surgery: Examination of blunt dissection," Presence: Teleoper. Virtual Environ., vol. 16, no. 3, pp. 252–262, 2007.

[14] M. Gauthier and M. Nourine, "Capillary force disturbances on a partially submerged cylindrical micromanipulator," IEEE Trans. Robot., vol. 23, no. 3, pp. 600–604, Jun. 2007.

[15] K. Salisbury, D. Brock, T. Massie, N. Swarup, and C. Zilles, "Haptic rendering: Programming touch interaction with virtual objects," in Proc. 1995 Symp. Interactive 3D Graph., pp. 123–130.

[16] K. Salisbury, F. Conti, and F. Barbagli, "Haptic rendering: Introductory concepts," IEEE Comput. Graph. Appl., vol. 24, no. 2, pp. 24–32, Mar./Apr. 2004.

[17] T. H. Massie and J. K. Salisbury, "The PHANToM haptic interface: A device for probing virtual objects," in Proc. ASME Dyn. Syst. Control Div., 1994, vol. 55, no. 1, pp. 295–301.


[18] N. Ogawa, H. Oku, K. Hashimoto, and M. Ishikawa, "Microrobotic visual control of motile cells using high-speed tracking system," IEEE Trans. Robot., vol. 21, no. 4, pp. 704–712, Aug. 2005.

[19] L. V. Tsap, D. B. Goldgof, S. Sarkar, and P. S. Powers, "A method for increasing precision and reliability of elasticity analysis in complicated burn scar cases," Int. J. Pattern Recognit. Artif. Intell., vol. 14, no. 2, pp. 189–211, 2000.

[20] X. Liu, Y. Wang, and Y. Sun, "Real-time high-accuracy micropipette aspiration for characterizing mechanical properties of biological cells," in Proc. IEEE Int. Conf. Robot. Autom., Rome, Italy, 2007, pp. 1930–1935.

[21] W. H. Wang, X. Y. Liu, and Y. Sun, "Contact detection in microrobotic manipulation," Int. J. Robot. Res., vol. 26, no. 8, pp. 821–828, 2007.

[22] D. N. Metaxas and I. A. Kakadiaris, "Elastically adaptive deformable models," IEEE Trans. Pattern Anal. Mach. Intell., vol. 24, no. 10, pp. 1310–1321, Oct. 2002.

[23] X. Wang, G. K. Ananthasuresh, and J. Ostrowski, "Vision-based sensing of forces in elastic objects," Sens. Actuators A, vol. 94, no. 3, pp. 142–156, 2001.

[24] Y. Luo and B. J. Nelson, "Fusing force and vision feedback for manipulating deformable objects," J. Robot. Syst., vol. 18, no. 3, pp. 103–117, 2001.

[25] M. A. Greminger and B. J. Nelson, "Vision-based force measurement," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 3, pp. 290–298, Mar. 2004.

[26] B. J. Nelson, Y. Sun, and M. A. Greminger, "Microrobotics for molecular biology: Manipulating deformable objects at the microscale," Springer Tracts Adv. Robot., vol. 15, pp. 115–124, 2005.

[27] S. P. DiMaio and S. E. Salcudean, "Needle insertion modeling and simulation," IEEE Trans. Robot. Autom., vol. 19, no. 5, pp. 864–875, Oct. 2003.

[28] Y. H. Anis, J. K. Mills, and W. L. Cleghorn, "Vision-based measurement of microassembly forces," J. Micromech. Microeng., vol. 16, no. 8, pp. 1639–1652, 2006.

[29] X. Liu, Y. Sun, W. Wang, and B. M. Lansdorp, "Vision-based cellular force measurement using an elastic microfabricated device," J. Micromech. Microeng., vol. 17, no. 7, pp. 1281–1288, 2007.

[30] T. Owaki, Y. Nakabo, A. Namiki, I. Ishii, and M. Ishikawa, "Real-time system for virtually touching objects in the real world using modality transformation from images to haptic information," Syst. Comput. Jpn., vol. 30, no. 9, pp. 17–24, 1999.

[31] A. E. Kerdok, S. M. Cotin, M. P. Ottensmeyer, A. M. Galea, R. D. Howe, and S. L. Dawson, "Truth cube: Establishing physical standards for soft tissue simulation," Med. Image Anal., vol. 7, no. 3, pp. 283–291, 2003.

[32] C. W. Kennedy and J. P. Desai, "A vision-based approach for estimating contact forces: Applications to robot-assisted surgery," Appl. Bionics Biomech., vol. 2, no. 1, pp. 53–60, 2005.

[33] H. Delingette, "Towards realistic soft tissue modeling in medical simulation," Proc. IEEE, vol. 86, no. 3, pp. 512–523, Mar. 1998.

[34] J. K. Aggarwal, L. S. Davis, and W. N. Martin, "Correspondence processes in dynamic scene analysis," Proc. IEEE, vol. 69, no. 5, pp. 562–572, May 1981.

[35] M. Kass, A. Witkin, and D. Terzopoulos, "Snakes: Active contour models," Int. J. Comput. Vis., vol. 1, no. 4, pp. 321–331, 1988.

[36] D. J. Williams and M. Shah, "A fast algorithm for active contours and curvature estimation," CVGIP: Image Understanding, vol. 55, no. 1, pp. 14–26, 1992.

[37] U. Meier, O. Lopez, C. Monserrat, M. C. Juan, and M. Alcaniz, "Real-time deformable models for surgery simulation: A survey," Comput. Methods Progr. Biomed., vol. 77, no. 3, pp. 183–197, 2005.

[38] D. L. James and D. K. Pai, "Multiresolution Green's function methods for interactive simulation of large-scale elastostatic objects," ACM Trans. Graph., vol. 22, no. 1, pp. 47–82, 2003.

[39] M. H. Aliabadi, The Boundary Element Method, Volume 2: Applications in Solids and Structures. New York: Wiley, 2002.

[40] E. Chen and B. Marcus, "Force feedback for surgical simulation," Proc. IEEE, vol. 86, no. 3, pp. 524–530, Mar. 1998.

[41] Y. Zhuang and J. Canny, "Haptic interaction with global deformations," in Proc. IEEE Int. Conf. Robot. Autom., San Francisco, CA, 2000, pp. 2428–2433.

[42] C. H. Ho, C. Basdogan, and M. A. Srinivasan, "Efficient point-based rendering techniques for haptic display of virtual objects," Presence: Teleoper. Virtual Environ., vol. 8, no. 5, pp. 477–491, 1999.

[43] OpenCV Library. (2006). [Online]. Available: http://www.intel.com/technology/computing/opencv/index.htm

[44] P. Kallio and J. Kuncova, "Manipulation of living biological cells: Challenges in automation," presented at the Workshop Microrobotics Biomanipulation (IROS 2003), Las Vegas, NV.

[45] Y. Sun and B. J. Nelson, "Biological cell injection using an autonomous microrobotic system," Int. J. Robot. Res., vol. 21, no. 10/11, pp. 861–868, 2002.

[46] D. Y. R. Stainier, "Zebrafish genetics and vertebrate heart formation," Nature Rev. Genetics, vol. 2, no. 1, pp. 39–48, 2001.

[47] D. H. Kim, C. N. Hwang, Y. Sun, S. H. Lee, B. Kim, and B. J. Nelson, "Mechanical analysis of chorion softening in prehatching stages of zebrafish embryos," IEEE Trans. Nanobiosci., vol. 5, no. 2, pp. 89–94, Jun. 2006.

[48] D. H. Kim, Y. Sun, S. Yun, S. H. Lee, and B. Kim, "Investigating chorion softening of zebrafish embryos with a microrobotic force sensing system," J. Biomech., vol. 38, pp. 1359–1363, 2005.

[49] A. Menciassi, A. Eisinberg, I. Izzo, and P. Dario, "From macro to micro manipulation: Models and experiments," IEEE/ASME Trans. Mechatronics, vol. 9, no. 2, pp. 311–320, Jun. 2004.

[50] Y. M. Tang, A. F. Zhou, and K. C. Hui, "Comparison of FEM and BEM for interactive object simulation," Comput.-Aided Des., vol. 38, pp. 874–886, 2006.

Jungsik Kim received the B.S. degree in mechanical engineering in 2006 from Inha University, Incheon, Korea, and the M.S. degree in mechanical engineering in 2008 from the Korea Advanced Institute of Science and Technology, Daejeon, Korea, where he is currently working toward the Ph.D. degree in mechanical engineering.

His current research interests include biomicromanipulation, haptic rendering, microforce sensing, bioinstrumentation, and visual servoing.

Farrokh Janabi-Sharifi (S'91–M'95–SM'02) received the B.Sc. degree in mechanical engineering from the Middle East Technical University, Ankara, Turkey, the M.A.Sc. degree in mechanical engineering from the University of Toronto, Toronto, ON, Canada, and the Ph.D. degree in electrical and computer engineering in 1995 from the University of Waterloo, Waterloo, ON, Canada.

Between 1995 and 1997, he was a Natural Sciences and Engineering Research Council Postdoctoral Fellow at McGill University, where he was also

an Instructor in the Center for Intelligent Machines and the Department of Electrical and Computer Engineering. In 1997, he joined the Department of Mechanical and Industrial Engineering, Ryerson University, Toronto, where he is currently an Associate Professor of mechanical and industrial engineering and the Director of the Robotics, Mechatronics and Manufacturing Automation Laboratory. His research interests include optomechatronic systems, with a focus on image-guided control and planning. He has been a Visiting Researcher at the Korea Advanced Institute of Science and Technology, Daejeon, Korea; the Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA)/Institut National de Recherche en Informatique et en Automatique (INRIA), Rennes, France; and the Technische Universität München, Munich, Germany. He is currently an Associate Editor of the International Journal of Optomechatronics and an Editorial Board Member of the Journal of Robotics and The Open Cybernetics and Systematics Journal.

Dr. Janabi-Sharifi has also been an organizer or co-organizer of several international conferences on optomechatronic systems control.

Jung Kim (M'05) received the Ph.D. degree from the Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge.

He is an Associate Professor in the Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea. His current research interests include haptic interactions in virtual environments, soft-tissue modeling, medical robotics, medical simulation, and biological cellular manipulation.