
A Co-manipulation Robotic System for Brain Biopsies

Pedro Miguel Pires Roios

Thesis to obtain the Master of Science Degree in

Mechanical Engineering

Supervisors: Prof. Jorge Manuel Mateus Martins

Dr. Manuel Herculano Carvalho

Examination Committee

Chairperson: Prof. João Rogério Caldas Pinto

Supervisor: Prof. Jorge Manuel Mateus Martins

Member of the Committee: Prof. Miguel Afonso Dias de Ayala Botto

June 2016


“Primeiro diz o que queres ser, depois faz o que tens de fazer.” (“First say what you want to be, then do what you have to do.”)


Acknowledgments

First, I would like to thank my advisor, Professor Jorge Martins, for giving me the opportunity to work in the field of Surgical Robotics. Thank you for the guidance and the ideas provided during this thesis, but in particular for challenging me to constantly surpass myself.

Second, I would like to thank Dr. Manuel Herculano and Dr. Pedro Baptista for their willingness and availability to show me the real world of neurosurgery, and its main difficulties, from a hands-on point of view.

I would like to give special thanks to Pedro Teodoro for his advice on my first day of this thesis: “Paciência, Persistência e Perseverança” (“Patience, Persistence and Perseverance”), advice that I followed every day. Another special thanks to my colleague Inês Machado, who shared with me her knowledge about image registration and who accompanied me during the late working nights at the Hospital Santa Maria in Lisbon.

This work would also not have been possible without the help of my colleagues from the Surgical Robotics Laboratory, in particular Rui Coelho, who was always available to listen to my ideas and share coffee capsules, and Vanessa Cunha, who was always ready to cheer me up when something went wrong.

To all my friends Afonso, Raposo, Igor, Rodrigo, Hugo, Daniela, Cabral, Clarinha, Cascão, Daniel, Inês, Rita, Simões, Joana, Catarina and my colleagues at CTT: thank you for your friendship, support, encouragement and occasional moments of distraction, which are also important.

Last but not least, I would like to thank my mother and my sister for all the love, unconditional support and encouragement that accompanied me throughout this thesis.


Abstract

Brain diseases affect a substantial number of people worldwide, and brain surgery is in many cases the most effective, though complex, therapeutic procedure to treat them. The evolution of robotics, especially in the field of human-machine interaction, has enabled more precise, less invasive procedures, reducing human error and overcoming certain limitations of conventional brain surgery. In this project, a robotic solution involving the KUKA LWR 4+ and the Polaris Spectra optical tracking system was implemented to perform brain surgeries requiring precise targeting of structures within the brain. The robot's positioning task is performed in a co-manipulation setup between the surgeon and the robot, through a virtual impedance environment. Two simulated brain biopsies were performed using a phantom specifically designed for the purpose. One was performed at Hospital Santa Maria in Lisbon, following the current medical procedures and using the clinical neuronavigation instrumentation. The other was performed at the Surgical Robotics Lab of Instituto Superior Técnico, using the robotic neuronavigation system. The targeting errors of both approaches were measured and compared. The errors obtained with the clinical approach, from medical image acquisition to biopsy execution, were 3.74 mm ± 1.56 mm and 3.77° ± 2.23° for target position and trajectory orientation, respectively. The errors obtained with the robotic neuronavigation system were 2.46 mm ± 1.04 mm and 3.61° ± 2.75°, respectively. These results represent a 34% decrease in position error and a 4% decrease in orientation error with the robotic system.

Keywords

Robotic Brain Biopsy, Co-manipulation, Virtual Impedance Control, Light Weight Robotics, Image Registration


Resumo

As doenças cerebrais afectam um número considerável de pessoas a nível mundial, e a cirurgia ao cérebro é em muitos casos o procedimento terapêutico mais eficaz, embora complexo. A evolução da robótica no campo da interação homem-máquina tem criado ferramentas mais precisas, menos invasivas e que diminuem o erro humano, superando certas limitações das cirurgias cerebrais convencionais. Nesta tese, uma solução robótica envolvendo o KUKA LWR 4+ e o sistema óptico Polaris Spectra foi implementada, com o objectivo de realizar cirurgias cerebrais que envolvam o posicionamento de uma ferramenta cirúrgica no interior do cérebro. A tarefa de posicionamento é realizada por co-manipulação entre o cirurgião e o robot, através de um ambiente de impedância virtual. Duas simulações de biópsia ao cérebro foram realizadas com base num fantoma projectado para o efeito. Uma das biópsias foi realizada no Hospital Santa Maria, em Lisboa, seguindo o procedimento cirúrgico actual com base no equipamento de neuronavegação. A outra biópsia foi realizada no Laboratório de Cirurgia Robótica do Instituto Superior Técnico, utilizando o sistema de neuronavegação robótico desenvolvido. Os erros de posicionamento de ambos os ensaios foram comparados. Os erros obtidos no ensaio clínico, desde a aquisição da imagem médica até à realização da biópsia, foram 3.74 mm ± 1.56 mm e 3.77° ± 2.23°, para posição e orientação respectivamente. Os erros obtidos com o sistema desenvolvido foram de 2.46 mm ± 1.04 mm e 3.61° ± 2.75°, para posição e orientação respectivamente. Os resultados deste projecto revelam uma diminuição de 34% no erro de posição e de 4% no erro de orientação com o sistema robótico.

Palavras-chave

Biópsia Cerebral Robotizada, Co-manipulação, Controlo de Impedância Virtual, Robótica de Construção Leve,

Registo de Imagem


Contents

CHAPTER 1 - INTRODUCTION 1
1 Surgical robots 1
1.1 A brief history of surgical robots 1
1.2 Types of operations robotic surgery may be used in 2
2 Needle interventions in the brain 3
2.1 Brain biopsy 3
Stereotactic Biopsy 3
Neuronavigation assisted brain biopsy 5
Difficulties of Brain Surgery 6
3 State of the art of robotic neurosurgery 7
3.1 ROSA robot 7
Patient’s registration 7
ROSA functionalities 8
3.2 Neuromate robot 8
3.3 NeuroArm robot 9
3.4 RONNA 11
4 Previous Projects at the Surgical Robotics Lab 11
4.1 Hip Resurfacing Surgical Procedure 11
4.2 Robot assisted needle interventions for brain surgery 12
5 Proposed project 13

CHAPTER 2 - DEVICES 15
1 KUKA lightweight robot 4+ 15
1.1 KUKA Robot Language 16
Joint Position Control 16
Joint Impedance Control 16
Cartesian Impedance Control 16
1.2 KUKA Fast Research Interface 17
Joint Position Control 18
Cartesian Impedance Control 18
Joint Impedance Control 18
2 Polaris Spectra tracking system 19
3 Devices’ Communication 19
3.1 FRI-xPC server 20
FRI receive 21
FRI transmit 21
Main controller 23
Brain biopsy 23
4 Communication between the Polaris Spectra and the FRI-xPC server 24


CHAPTER 3 - CASE STUDY: BRAIN BIOPSY 27
1 Phantom 27
2 Clinical Neuronavigation Procedure 28
3 Procedure performed at IST 30
3.1 Patient’s registration - localization of the target points in the robotic arm’s reference frame 34
3.2 Surgical co-manipulation mode 38
Impedance control law 42

CHAPTER 4 - RESULTS 49
1 Procedure performed at IST 49
1.1 Needle’s calibration 49
Robotic arm 49
Polaris Spectra 51
1.2 Phantom’s registration 51
1.3 Biopsy 53
2 Procedure performed at Hospital Santa Maria 54
2.1 Image registration error 54
2.2 Phantom registration 57
2.3 Biopsies 57

CHAPTER 5 - DISCUSSION 59

CHAPTER 6 - CONCLUSIONS AND FUTURE WORK 63
1 Conclusion 63
2 Future work 64

BIBLIOGRAPHY 65

APPENDIX A 69
1 Flags to select the control mode of the robotic arm 69
2 Minimum and maximum stiffness and damping coefficient values for the Joint Impedance Control of the KUKA manipulator 69
3 Minimum and maximum stiffness and damping coefficient values for the Cartesian Impedance Control of the KUKA manipulator 69


Table of figures

Figure 1 - PUMA 560 [6]. 1
Figure 2 - PROBOT [6]. 1
Figure 3 - ROBODOC [6]. 1
Figure 4 - da Vinci Xi patient-side cart. One arm holds the camera and is driven by foot pedals, while the three other arms are controlled by the surgeon's hand controls [7]. 2
Figure 5 - The da Vinci Xi surgical console is located near the cart and consists of the inside binocular viewer, finger-held controllers, foot control pedals, and the control computer [7]. 2
Figure 6 - The inside viewing system allows two separate, complete images to be delivered to the surgeon, one image to each eye, resulting in a true 3D image perception [2]. 2
Figure 7 - The stereotactic frame with the MRI localizer box [18]. 4
Figure 8 - 3D reconstruction [19]. 4
Figure 9 - Needle's trajectory planning - the surgeon can select the target point and the entry point [21]. 4
Figure 10 - Stereotactic setup [22]. 5
Figure 11 - Stereotactic setup during a surgery [23]. 5
Figure 12 - Set-up in the operating room. The patient reference frame is attached to the Mayfield frame. Planning based on preoperative MRI is performed using a pointer [27]. 5
Figure 13 - Mayfield frame with reference tool attached to it. The biopsy needle is positioned to perform the surgery [16]. 6
Figure 14 - Acquiring face's points using a pointer [28]. 6
Figure 15 - Target point localization. The surgeon uses a needle with two infrared reflective markers attached to find the right orientation for the biopsy [29]. 6
Figure 16 - ROSA robot [34]. 7
Figure 17 - Demonstration of the registration procedure with an ultra-precise optical distance sensor [36]. 8
Figure 18 - Renishaw's Neuromate robot - the robot is mechanically linked to a stereotactic frame [38]. 9
Figure 19 - Ultrasound based registration method. On the left side of the image, the ultrasonic emitter is attached to the robot's arm. On the right side of the image, the ultrasonic receptor is attached to an insert in the skull of the patient [39]. 9
Figure 20 - Two seven-DOF robotic manipulators in the surgical block [43]. 10
Figure 21 - Workstation [43]. 10
Figure 22 - NeuroArm inside an MRI machine [46]. 10
Figure 23 - RONNA robotic system in the Laboratory of the Faculty of Mechanical Engineering and Naval Architecture of University of Zagreb [48]. 11
Figure 24 - Co-manipulation robotic solution for hip resurfacing surgery [13]. 12
Figure 25 - Photograph of the robot correctly positioning the needle in the desired target [12]. 13
Figure 26 - Surgeon performing the biopsy [12]. 13
Figure 27 - KUKA LWR 4+ axes and joints' numbering [52]. 15
Figure 28 - States and events diagram of the FRI communication protocol between a Host and the KRC [53]. 17
Figure 29 - Polaris system. 19
Figure 30 - Polaris tool with markers. 19
Figure 31 - Polaris Spectra Passive Markers and Drill Experimental Setup [13]. 19
Figure 32 - System components and communication rates. 20
Figure 33 - FRI-xPC server block diagram. 21
Figure 34 - FRI-xPC server state machine. 23
Figure 35 - Communication between the Polaris Spectra and the FRI-xPC server. 25
Figure 36 - Phantom. 27
Figure 37 - Phantom - top view. 27
Figure 38 - Phantom - side view. 27
Figure 39 - Cut sphere - target point and desired orientation at IST. 27
Figure 40 - Cut sphere - target point and desired orientation at Hospital Santa Maria. 27
Figure 41 - StealthStation Treon, composed of a computer and an optical tracking device [59]. 28
Figure 42 - Phantom fixed to the surgery bed. 29
Figure 43 - Acquisition of points for the phantom's registration. 29


Figure 44 - Planned target points and trajectories. Each line represents the trajectory planned for each one of the target points. The surgeon can see if the trajectory will intersect any important structure of the brain. 29
Figure 45 - Phantom with the reference tool attached on the left, and the Vertek probe with a fixed pose supported by the Vertek articulated arm. 30
Figure 46 - Surgeon aligning the Vertek probe with the plan on the computer screen. 30
Figure 47 - Needle in the support of the articulated arm reaching a target point. 30
Figure 48 - Needle in the support of the articulated arm (closer view). 30
Figure 49 - Setup of the system at the Surgical Robotics Lab. 31
Figure 50 - Setup of the system: KUKA robotic arm with the needle with reflector markers at the end-effector, the phantom with a reference frame attached, and the Polaris Spectra. 31
Figure 51 - Phantom's point cloud. 31
Figure 52 - Computed spheres based on the point clouds obtained with the Polaris Spectra. The centre of each sphere is a red dot. 32
Figure 53 - Sphere point cloud - isolation of the internal curved cone. 32
Figure 54 - Points of the cone in black and the centroids of the slices in blue. 33
Figure 55 - Centre point and desired orientation for one sphere. 33
Figure 56 - Target points and desired orientations for all the spheres. 33
Figure 57 - Reference tool with reflective markers attached to the needle's support. 34
Figure 58 - Examples of trials. 34
Figure 59 - Reference frames of the Polaris Spectra (𝑂𝑥𝑝𝑜𝑙𝑦𝑝𝑜𝑙𝑧𝑝𝑜𝑙), needle tip (𝑂𝑥𝑛𝑦𝑛𝑧𝑛), phantom (𝑂𝑥𝑠1𝑦𝑠1𝑧𝑠1), needle holder (𝑂𝑥𝑠2𝑦𝑠2𝑧𝑠2), and KUKA robotic arm (𝑂𝑥𝑘𝑦𝑘𝑧𝑘). 35
Figure 60 - Reference frames after the pivoting of the reference tool s2 in the Polaris Spectra. 35
Figure 61 - Illustration of the environment. 38
Figure 62 - Impedance environment reference frames. Cone's vertex position, 𝐩ck, target point's position, 𝐩tk, and needle's position, 𝐩nk, with respect to the robotic arm's base reference frame, and the needle's position, 𝐩nc, with respect to the cone's reference frame. 40
Figure 63 - Virtual impedance environment. 40
Figure 64 - Points expressed in the cone's reference frame. Needle's position, 𝒑𝒏𝒄, the projected point, 𝒑𝒍𝒄, and the contact point, 𝒑𝒄𝒐𝒏𝒕𝒂𝒄𝒕𝒄. 42
Figure 65 - Correcting the needle's position. 43
Figure 66 - Correcting the needle's orientation. 46
Figure 67 - Robotic arm with a fixed pose of the end-effector and null stiffness in the null-space. 50
Figure 68 - Errors obtained by changing the null-space. 50
Figure 69 - Needle's calibration on the Polaris Spectra - translation of the origin of the coordinate system. 51
Figure 70 - Needle's pivoting on the Polaris Spectra. 51
Figure 71 - Desirable points to collect for the phantom's registration. 52
Figure 72 - Points collected during the registration with the first hypothesis (left) and with the second hypothesis (right). Green - points under 1 mm error; red - points above 1 mm error. 52
Figure 73 - Biopsy errors - procedure performed at the Surgical Robotics Lab. 53
Figure 74 - Successful trials. 54
Figure 75 - Unsuccessful trials. 54
Figure 76 - Procedure to obtain the centres of the target spheres from the medical images. The steps for the CT images are shown on the left, and the steps for the MRI are shown on the right. 55
Figure 77 - Biopsy errors of the procedure performed at Hospital Santa Maria. Error between the planned target points and the points reached. 58


List of tables

Table 1 - Computed centres and radii of the spheres based on the point clouds obtained with the Polaris Spectra. 32
Table 2 - 𝑲𝒄𝒂𝒓𝒕 components. 48
Table 3 - Needle's calibration on the robotic arm. 49
Table 4 - Mean errors of the tip obtained by varying the null-space. 50
Table 5 - Needle's calibration on the Polaris Spectra. 51
Table 6 - Phantom's registration - % of points under the threshold. 52
Table 7 - Biopsy errors - procedure performed at the Surgical Robotics Lab. 53
Table 8 - Registration error between MRI and CT-scan. 56
Table 9 - Registration error between the medical images and the phantom. 56
Table 10 - Capacity of the surgeon to select the desired target point and orientation. 57
Table 11 - Biopsy errors of the procedure performed at Hospital Santa Maria. Error between the planned target points and the points reached. 58
Table 12 - Summary of the image registration results. 59
Table 13 - Summary of the biopsy errors. 61
Table 14 - Flags to select the control mode of the robotic arm. 69


Abbreviations

CT Computed Tomography

DBS Deep Brain Stimulation

DOF Degrees of Freedom

FRI Fast Research Interface

KRL KUKA Robot Language

LWR Lightweight Robot

MRI Magnetic Resonance Imaging

SVD Singular Value Decomposition

TCP Tool Centre Point

3D Three-dimensional


List of Symbols

In the remainder of this work, the following notation is used:

𝑎 for scalars

𝒂 for vectors

𝑨 for matrices

The main symbols used throughout the text are listed below:

𝜀 Damping Coefficient

𝜏 Torque

𝑲 Stiffness matrix

𝒒 Joint angles vector

𝑫 Damping matrix

𝑱 Jacobian matrix

𝑭 Cartesian force

𝒑𝐵𝐴 Position of reference frame B expressed in reference frame A

𝑹𝐵𝐴 Rotation matrix of reference frame B expressed in reference frame A

𝑼 Left-singular vectors matrix from singular value decomposition

𝑺 Singular values diagonal matrix from singular value decomposition

𝑽 Right-singular vectors matrix from singular value decomposition

𝑯 General correlation matrix
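The SVD-related symbols above (𝑯, 𝑼, 𝑺, 𝑽) are used in this work for rigid point-cloud registration, e.g. during the phantom's registration. As a minimal illustrative sketch (not the thesis implementation; the function name and array shapes are assumptions), the standard least-squares rigid registration via SVD can be written with NumPy as follows:

```python
import numpy as np

def rigid_register(A, B):
    """Estimate the rotation R and translation t that best map point set A
    onto point set B in a least-squares sense (SVD-based rigid registration).
    A, B: (N, 3) arrays of corresponding 3D points."""
    a_mean, b_mean = A.mean(axis=0), B.mean(axis=0)
    # H is the correlation matrix of the centred point sets
    H = (A - a_mean).T @ (B - b_mean)
    # H = U * diag(S) * Vt, matching the U, S, V symbols in the list above
    U, S, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # Guard against a reflection solution (det(R) = -1)
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = b_mean - R @ a_mean
    return R, t
```

Given exact correspondences, the returned pair satisfies B ≈ R·A + t for every point, which is the transformation needed to express points acquired in one device's reference frame in another's.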


Chapter 1 - Introduction

In this chapter, a brief history of robot-assisted surgery is presented, followed by a description of the surgical procedures studied at the Mechanical Engineering Department of Instituto Superior Técnico (IDMEC). Some of the state-of-the-art robots used in surgeries related to the described procedures are also presented. The chapter concludes with the most relevant IDMEC projects in this field, as well as the aim of this thesis.

1 SURGICAL ROBOTS

1.1 A BRIEF HISTORY OF SURGICAL ROBOTS

The goals of using robots in surgery are to provide improved diagnostic and therapeutic capabilities. Robots allow a less invasive experience compared with an open surgery for the same purpose, while at the same time providing the means to perform more precise interventions.

The first documented robot-assisted surgical procedure took place on the 11th of April 1985 at the Memorial Medical Centre in Long Beach, U.S.A. The robot was a PUMA 560, a standard industrial robotic arm, used to place a needle for a brain biopsy under computerized axial tomography (CT-scan) guidance [1]. However, the PUMA 560 (Figure 1) was not approved for surgical purposes: as an industrial robot designed to operate without any contact with people, it was considered unsafe.

The 1985 procedure opened a new chapter in medicine. In 1988, the first laparoscopic procedure involving a robotic system, a cholecystectomy, was performed [2]. In April 1991, the PROBOT (Figure 2) [3], developed at Imperial College in London, was used to perform prostatic surgery. In 1992, the ROBODOC [3, 4, 5] (Figure 3) from Integrated Surgical Systems made history by being the first robot to assist in a human total hip arthroplasty.

Figure 1 - PUMA 560 [6].

Figure 2 - PROBOT [6].

Figure 3 - ROBODOC [6].

In 2000, the da Vinci Robotic Surgery System, one of the current state-of-the-art systems, was approved by the Food and Drug Administration (FDA) in the U.S. [3, 2]. The da Vinci is a telemanipulation system for minimally invasive surgery, composed of two parts: the patient-side cart and the surgeon-side console. The patient-side cart comprises four robotic arms (Figure 4), one for the camera and three for the surgical instruments. The surgeon-side console (Figure 5) may be located up to 3 meters away from the patient. From the console, the surgeon operates finger-held controllers (Figure 6), and the end-effector tools mimic the surgeon's movements [2].

Figure 4 - da Vinci Xi patient-side cart. One arm holds the camera and is driven by foot pedals, while the three other arms are controlled by the surgeon's hand controls [7].

Figure 5 - The da Vinci Xi surgical console is located near the cart and consists of the inside binocular viewer, finger-held controllers, foot control pedals, and the control computer [7].

Figure 6 - The inside viewing system allows two separate, complete images to be delivered to the surgeon, one image to each eye, resulting in a true 3D image perception [2].

In Portugal, there are 3 da Vinci systems in use. The first one was installed at Hospital da Luz [8] and the others at SANFIL [9] and, more recently, at the Champalimaud Foundation [10]. The first procedure with this system at Hospital da Luz was a radical prostatectomy on a 50-year-old patient [8].

1.2 TYPES OF OPERATIONS IN WHICH ROBOTIC SURGERY MAY BE USED

Robots were initially used in gallbladder removal surgeries but have since been used in diverse types of surgical operations [11], as listed below:

Orthopaedic surgery
• Total hip arthroplasty: femur preparation
• Total hip arthroplasty: acetabular cup placement
• Knee surgery
• Spine surgery

Neurosurgery
• Complement to image-guided surgery
• Radiosurgery

Gynaecologic surgery
• Tubal re-anastomosis
• Hysterectomies
• Ovary resection

Cardiothoracic surgery
• Mammary artery harvest
• Mitral valve repair

Urology
• Radical prostatectomy
• Ureter repair
• Nephrectomy

General surgery
• Cholecystectomy
• Heller myotomy
• Gastric bypass
• Adrenalectomy
• Bowel resection
• Esophagectomy

In this thesis, we focus on image-guided surgery, more specifically on needle interventions in the brain.


2 NEEDLE INTERVENTIONS IN THE BRAIN

In recent years, research on robotic surgery systems at the Surgical Robotics Lab of IST has focused on brain needle interventions [12], building on previous work developed for the hip resurfacing surgical procedure [13].

2.1 BRAIN BIOPSY

A biopsy is a medical procedure in which samples of a particular living tissue are removed to be analysed, in order to determine the existence or nature of a particular disease [14]. In the case of a brain biopsy, the damaged tissue can be found not only at the surface of the brain, near the skull, but also deeply embedded. Because the brain is a very delicate organ, this procedure can lead to the loss of cognitive and basic functions if specific regions are affected. It is a highly invasive surgery: in order to access the brain, the skull must first be breached, and only then may a sample of the targeted area be collected.

Essentially, there are two types of brain biopsy, open and closed [15]:

Open Biopsy

The tissue sample is taken during an operation where the tumour is exposed through a craniotomy, i.e., the

temporary removal of bone from the skull.

Closed Biopsy

A small hole, called a burr hole, is drilled in the skull. Then, a needle is inserted through the hole to reach and

remove a sample of tissue. A computer-assisted planning system uses information from CT and magnetic resonance imaging (MRI) scans to provide precise information about the tumour's location. Then, a stereotactic frame may be used to accurately place the needle into the burr hole, or, alternatively, a computer-assisted guidance system (for example, the StealthStation from Medtronic [16]) may be used to navigate the needle.

Stereotactic Biopsy

A stereotactic biopsy has 3 main steps:
• Pre-operative image acquisition
• Needle's path planning
• Assembly of the stereotactic equipment

Preoperative image acquisition

In this step, a CT-scan and an MRI of the patient's head are acquired. In order to define a coordinate system and a workspace volume for the patient's brain, a stereotactic frame with a localization box is attached to the patient's head (Figure 7) [17]. This localization box provides a reference frame for the brain in both the 3D volume of the CT, where the skull is visible, and the 3D volume of the MRI, where the brain structures are visible. Image fusion is then performed to relate the two coordinate frames. Image reconstruction may also be performed, providing a 3D visualization of the patient's head (Figure 8).


Figure 7 - The stereotactic frame with the MRI localizer box [18].

Figure 8 - 3D reconstruction [19].

Needle’s path planning

Once the surgeon has the 3D reconstruction of the patient's head, he can plan the needle's trajectory. This trajectory must be planned very carefully in order to avoid any structure important to the proper functioning of the brain. Usually, this planning is performed with software that allows the surgeon to select the target point and the entry point on the skull, and to check that the trajectory avoids the important structures (Figure 9). This type of software outputs the 3D coordinates of the target point, expressed in the reference coordinate system defined by the stereotactic frame, and two angles that define the trajectory's orientation to that target [20]. This information is used in the next step to assemble the stereotactic equipment.

Figure 9 - Needle's trajectory planning: the surgeon can select the target point and the entry point [21].
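As an illustrative sketch of how a target point and two orientation angles determine the needle trajectory, the snippet below (in Python, for illustration; the angle convention shown is an assumption, since the actual convention depends on the planning software and stereotactic frame) converts the two angles into a unit direction vector and derives the entry point a given depth back from the target:

```python
import math

def trajectory_direction(alpha_deg, beta_deg):
    """Unit vector of the planned trajectory from two orientation angles.

    Assumes a spherical-coordinate convention (polar angle alpha from the
    z-axis, azimuth beta in the x-y plane) -- a hypothetical convention,
    chosen only for illustration.
    """
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    return (math.sin(a) * math.cos(b),
            math.sin(a) * math.sin(b),
            math.cos(a))

def entry_point(target, direction, depth):
    """Entry point on the skull, `depth` mm back from the target
    along the trajectory direction."""
    return tuple(t - depth * d for t, d in zip(target, direction))

# Example: target in frame coordinates (mm), trajectory 30 deg from vertical
d = trajectory_direction(30.0, 45.0)
p_entry = entry_point((10.0, 20.0, 40.0), d, 70.0)
```

The planner would report the target coordinates and the two angles; the entry point then follows from the chosen insertion depth.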

Assembly of the stereotactic equipment

Once the patient arrives in the operating room, the stereotactic head frame is fixed to the operating table to prevent movements during the surgery. After that, an arc-shaped device with a needle holder is attached to the frame (Figure 10).


Figure 10 - Stereotactic setup [22].

Figure 11 - Stereotactic setup during a surgery [23].

With this stereotactic device, it is possible to align the needle with the orientation defined during trajectory planning by sliding the needle holder along the arc. Once the needle has the correct orientation, each degree of freedom of the stereotactic device is locked to maintain that orientation throughout the surgery (Figure 11) [24].

Neuronavigation-assisted brain biopsy

With the neuronavigation approach, the stereotactic frame and localization box are no longer needed. A surgery assisted by a neuronavigation system still follows the same first two steps as the stereotactic biopsy: preoperative image acquisition (without the frame) and needle path planning. Image fusion is then used to relate the CT to the MRI. After that, patient-image registration is performed, and the procedure is optically navigated to the target point as described below:

Patient-image registration with the neuronavigation system

In order to reference the patient in the coordinates of the neuronavigation system, patient-image registration is performed. This step is crucial for the accuracy of the surgery, as it heavily influences the estimate of the tumour's position inside the patient's skull. To do this, a Mayfield frame with an arch must be attached to the patient's head, acting as a reference system rigidly attached to the skull (Figure 12) [25]. The arch attached to the Mayfield frame (Figure 13) must never change position, since it serves as the reference frame throughout the entire operation. With this setup, points on the face of the patient are then acquired using an infrared optically tracked pointer that is recognized by the neuronavigation system (Figure 14) [26].

Figure 12 - Set-up in the operating room. The patient reference frame is attached to the Mayfield frame. Planning based on preoperative MRI is performed using a pointer [27].


Figure 13 - Mayfield frame with reference tool attached to it. The biopsy needle is positioned to perform the surgery [16].

Figure 14 - Acquiring the face's points using a pointer [28].
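One common way to compute such a patient-image registration, once corresponding point pairs in image space and patient (tracker) space are available, is a least-squares rigid fit via SVD (the Kabsch algorithm). The sketch below (in Python with NumPy) is illustrative only; commercial neuronavigation systems typically combine landmark pairing with surface matching, whose details are proprietary:

```python
import numpy as np

def rigid_registration(image_pts, patient_pts):
    """Least-squares rigid transform (R, t) mapping image-space points
    onto patient-space points acquired with the tracked pointer.

    Paired-point Kabsch/SVD solution: Q_i ~ R @ P_i + t.
    """
    P = np.asarray(image_pts, float)
    Q = np.asarray(patient_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # sign correction avoids returning a reflection
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                        # optimal rotation
    t = cq - R @ cp                           # optimal translation
    return R, t
```

Given at least three non-collinear point pairs, the recovered (R, t) lets the system express any preoperative image coordinate in the patient reference frame attached to the skull.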

Localization of the target point

In this step, the surgeon inserts the needle into the brain. The needle has two infrared reflective markers attached to it (Figure 15). The neuronavigation system, with its stereo infrared cameras, detects the markers and calculates the needle's orientation and tip position. These coordinates are given with respect to a reference system rigidly attached to the skull, also seen by the stereo cameras, and linked to the preoperative images in real time. The surgeon is then able to observe the needle's location on a screen and compare it with the trajectory planned before the operation.

Figure 15 - Target point localization. The surgeon uses a needle with two infrared reflective markers attached to find the

right orientation for the biopsy [29].

Difficulties of Brain Surgery

Apart from the fact that the brain is very delicate, and any accident, even a small one, can have serious

consequences for the patient, one problem inherent to all the procedures mentioned is brain shift: the displacement of the brain during surgical procedures. When the hole is drilled in the skull, the intracranial pressure drops, causing the brain to move out of its original place. This phenomenon is also influenced by the patient's head position and the loss of cerebrospinal fluid [29, 30]. In open surgeries, displacements may range from a considerable 10 mm up to 24 mm, making this phenomenon an important problem to take into consideration [31, 32].

All the procedures mentioned are based on preoperative MRI and CT-scans of the patient, and therefore they only take into account the position of the brain at the moment these images were acquired. One way to


compensate for brain shift is to use intraoperative ultrasound imaging to update the deformation of the brain, thus giving the surgeon more accurate positioning information.

3 STATE OF THE ART OF ROBOTIC NEUROSURGERY

In recent years, a number of robotic systems have been developed to improve neurosurgical procedures. In this section, surgical robots developed for neurosurgery, especially those focusing on brain biopsy, are presented.

3.1 ROSA ROBOT

The ROSA robot (Figure 16) is a technologically advanced solution for neurosurgery. It was designed to guide a deep brain stimulation (DBS) electrode or a needle to a specific target point chosen by the surgeon with high accuracy. This robot can operate in a stereotactic environment, as in DBS, or with a frameless setup, for example when performing a brain biopsy. Performing this procedure with ROSA minimizes the positioning errors introduced by the surgeon, as documented in [33].

ROSA was developed to work in a very simple way. Setting it up requires acquiring preoperative images by MRI or CT-scan and performing the patient's registration. To plan the surgical instrument's trajectory, the same method used in neuronavigation or stereotactic procedures applies. Once this is done, the robot is ready to perform the surgery. It can point the surgical instrument automatically or by co-manipulation.

Figure 16 - ROSA robot [34].

Patient’s registration

The patient's registration is very similar to the method used in the neuronavigation-assisted brain biopsy. Before the registration, ROSA must be connected to the head support. Once connected, the system is able to acquire points of the face contour, which are used to obtain the 3D head model. To perform the patient's registration, an ultra-precise optical distance sensor can be coupled to the ROSA arm, allowing a quick procedure with low errors [35].


Figure 17 - Demonstration of the registration procedure with an ultra-precise optical distance sensor [36].

ROSA functionalities

There are two possible behaviours for ROSA during the surgery. It can position the surgical instrument automatically according to the pre-planned trajectory, or it can be co-manipulated. Co-manipulation allows the surgeon to move the robot manually while constraining the movement to the programmed trajectory [35].

Regarding safety mechanisms, ROSA defines a safety zone that confines all the robot's motions to a preoperatively planned cone defined along the trajectory. This characteristic guarantees that the surgical instrument stays aligned with the planned trajectory during the entire procedure.

The main advantages of using robots in neurosurgery are increased safety, reduced operating time, and the ability to hold poses that are uncomfortable or even impossible for the surgeon to maintain [36].

Studies show that using ROSA increases the precision of the procedure. The targeting error of the ROSA robot is reported to be approximately 3.3 mm, whereas the error associated with the conventional surgical procedure is approximately 5.8 mm [33].

3.2 NEUROMATE ROBOT

Another system developed for stereotactic brain surgery is the Neuromate robot. This robot can work in a frame-based or frameless mode. The Neuromate's working principle is very similar to ROSA's; the main differences reside in the patient registration process and in some additional utilities.

Like the ROSA robot, this robotic system has a two-step setup process: trajectory planning and patient registration. The first step is performed in the same manner as with ROSA. The second step, whose purpose is to identify the position of the patient's head expressed in the robot coordinate system, can be performed in two different ways. The first is to mechanically link the robot to a stereotactic frame (Figure 18) in order to calibrate the robotic system [37].


Figure 18 - Renishaw's Neuromate robot: the robot is mechanically linked to a stereotactic frame [38].

The second way to register the patient's head is with an ultrasound probe. For that, ultrasound receivers must be attached to a temporary insert in the patient's skull when acquiring the MRI and CT images. Then, in the operating room, an ultrasound emitter is mounted on the Neuromate's arm. With this method no frame is used (Figure 19) [39]. In both methods it is assumed that the patient's head remains fixed to the robotic system during the entire surgery, either through the base of the stereotactic frame or through a head holder.

Figure 19 - Ultrasound-based registration method. On the left side of the image, the ultrasonic emitter is attached to the robot's arm. On the right side, the ultrasonic receiver is attached to an insert in the patient's skull [39].

The Neuromate robot offers different approaches to performing the surgery. Taking a biopsy as an example, the surgeon is offered two ways to command the robot. One is to use endoscopic navigation to insert the needle; this method allows the surgeon to tele-manipulate while observing the inside of the brain [40] [41]. The other approach is to command the robot to position the needle automatically. In both methods, a burr hole must be drilled in the skull before inserting the biopsy needle. A laser pointer mounted on the robot's arm is used to correctly position the burr hole.

The advantages of using this system are similar to those described for ROSA [40] [42].

3.3 NEUROARM ROBOT

One of the most sophisticated systems used in brain surgery is the NeuroArm [41]. This robotic system is composed of:


• Two seven-degree-of-freedom (DOF) robotic arms (Figure 20)

• The main system controller

• A workstation, located outside the operating room, with one high-definition medical monitor and two six-DOF haptic devices (Figure 21)

Figure 20 - Two seven-DOF robotic manipulators

in the surgical block [43].

Figure 21 – Workstation [43].

From the workstation, the surgeon is able to control the robot interactively [44] [45]. This setup allows not only stereotactic procedures but also microsurgical procedures.

Two features that distinguish this system from others are that it is fully MRI-compatible and that it can be tele-operated from a workstation outside the operating room.

MRI compatibility means that the system can operate inside an MRI scanner and thereby acquire images of the brain that allow a real-time update of the brain's position, permitting correction of brain shift (Figure 22) [46].

Figure 22 - NeuroArm inside an MRI machine [46].

The workstation includes one high-definition medical monitor and two six-DOF haptic devices that offer, respectively, visual and tactile feedback of the robot's work environment [47]. The medical monitor receives images from two high-definition cameras inside the operating room, which give the surgeon full depth perception and spatial orientation, therefore granting true 3D display capabilities. The six-DOF haptic devices add many functionalities to the robot, including filtering of the surgeon's hand tremor, motion scaling (the robotic arms can move at a different scale than the surgeon's hand movements) and force feedback.


3.4 RONNA

The Robotic Neurosurgical Navigation system (RONNA) is a dual-arm robotic neurosurgical system developed since 2011 by a research team of the Faculty of Mechanical Engineering and Naval Architecture of the University of Zagreb, in collaboration with the Clinical Hospital of Zagreb and the Institute for Brain of Zagreb [48].

The RONNA system (Figure 23) consists of navigation and planning software, two robots (master and assistant), a stereovision system, a localization feature (marker), operating tools (grippers, guides, etc.) and context-aware control software. The master robot is a KUKA KR6 and the assistant is a KUKA LWR 4+ [49].

Figure 23 - RONNA robotic system in the Laboratory of the Faculty of Mechanical Engineering and Naval Architecture of the University of Zagreb [48].

Patient registration with RONNA is based on the localization of a plate with markers attached to the patient's head. The RONNA project focuses on using robots instead of the usual stereotactic method, thereby reducing the manual adjustment of devices and human error.

4 PREVIOUS PROJECTS AT THE SURGICAL ROBOTICS LAB

In recent years, several projects merging engineering with medicine have been developed at the Surgical Robotics Lab of IST. These projects have aimed at developing software to assist surgical planning, as well as hardware, such as surgical robots, to facilitate surgical procedures. In this section, the two projects most relevant to the current work are described.

4.1 HIP RESURFACING SURGICAL PROCEDURE

This project was the case study of Pedro Pires' PhD thesis [13]. The goal of this case study was to integrate a robotic system to assist orthopaedic surgeons in performing hip resurfacing prosthesis surgery with high accuracy and precision (Figure 24), by providing haptic guidance during the bone drilling stage of the procedure.


To accomplish that, a co-manipulation robotic solution, based on variable impedance control for physical surgeon-robot interaction, was developed. This solution also includes an ultrasound image sensor for non-invasive real-time bone tracking, complementing the co-manipulation robotic solution [50].

Figure 24 - Co-manipulation robotic solution for hip resurfacing surgery [13].

The co-manipulation setting of this robotic solution is worth highlighting. Co-manipulation means that there is physical human-robot interaction, unlike the typical scenario of industrial robotic arms: here, the surgeon is inside the robot's workspace. The robot guides the surgeon along a predetermined path, increasing the surgeon's precision and accuracy. In other words, co-manipulation brings together the high geometric accuracy and precision of the robot manipulator with the superior sensibility and decision-making capability of the surgeon when applying force.

4.2 ROBOT ASSISTED NEEDLE INTERVENTIONS FOR BRAIN SURGERY

This project was the topic of Tiago Salgueiro's Master's thesis [12]. Its goal was to adapt the co-manipulation method developed in [13] for the hip resurfacing surgical procedure in order to perform a brain biopsy.

The main idea of this project was for the robot to guide the surgeon along a predetermined path and ensure that the biopsy needle enters a determined point of the skull with a determined orientation (Figure 25). Both the entry point and the orientation (pose) of the needle are determined in the preoperative phase. Once in this pose, the surgeon is able to move the needle into the brain until the target point is reached, while the robot manipulator keeps the needle positioner fixed (Figure 26).


Figure 25 - Photograph of the robot correctly positioning the needle in the desired target [12].

Figure 26 - Surgeon performing the biopsy [12].

5 PROPOSED PROJECT

The current project builds on the previous projects and is divided into two parts. The first part is to build a Simulink Real-Time model connecting the system devices used in the procedure (chapters 2 and 3). The second part is to create a robot-assisted surgical environment to perform a brain biopsy (chapter 4), and to validate it against the standard procedure on a cranial phantom (chapter 5).

The robotic environment consists of a KUKA LWR 4+ robotic arm and the Polaris Spectra optical tracking system, connected and synchronized with each other. The approach followed in this thesis differs from the ones followed in [13, 12], as it is entirely Matlab/Simulink based, facilitating future developments, whereas the past projects were developed in Visual Studio using the C++ language over Windows.

Another significant difference is the communication rate with the KUKA LWR 4+ robotic arm. In the previous projects, the communication rate was limited to 500 Hz. With the approach used in this project, based on the Simulink Real-Time toolbox, the communication runs at its maximum rate of 1 kHz.

In the case study of this project, the robot-assisted surgical environment is tested by simulating a brain biopsy on a phantom. All the concepts of the robotic environment are based on the current procedure performed at Hospital Santa Maria, also described in this thesis.

To conclude this project, the accuracy results of the brain biopsies performed with the assistance of a robotic arm are compared with those achieved with the optical neuronavigation system.


Chapter 2 - Devices

In this chapter, the devices used in this thesis are introduced. The KUKA lightweight robot 4+, the robotic arm

used in this work, is presented and its control laws explained. The Fast Research Interface (FRI) protocol is

also addressed in this chapter. The Polaris Spectra optical tracking device is then introduced, and the

description of the communications between devices finalizes the chapter.

1 KUKA LIGHTWEIGHT ROBOT 4+

The KUKA lightweight robot (LWR) 4+ (Figure 27) is one of the most innovative robotic manipulators, a product of a research collaboration between the Institute of Robotics and Mechatronics at the German Aerospace Centre (DLR) and the company KUKA Roboter. The motivation for its creation was the need, in 1991, to send a robotic manipulator to space; such a manipulator should be lightweight, for training purposes on Earth, and have a low cost of operation [51].

Figure 27 - KUKA LWR 4+ axis and joints’ numbering [52].

The main features that make the KUKA LWR 4+ a state-of-the-art robotic manipulator are the fact that it can support its own weight (it has a 1:1 load-to-weight ratio), its kinematic redundancy, which makes it resemble the human arm, and the torque sensors in each of its joints. The combination of position and torque sensors makes it possible to implement not only position control but also impedance control.

These properties make the KUKA LWR 4+ one of the most advanced robots in the field of human-robot interaction.

The KUKA LWR 4+ can be interfaced in two distinct manners: directly on the controller console using the KUKA Robot Language (KRL), or through the KUKA Fast Research Interface (FRI) API. In this project, the FRI was used to communicate with the KUKA LWR controller.


1.1 KUKA ROBOT LANGUAGE

KUKA Robot Language (KRL) programs are .src files with the standard functions of KUKA industrial robots. In these programs, several robot parameters can be specified, such as the control methodology (Joint Impedance Control, Joint Position Control and Cartesian Impedance Control), along with points and paths to be followed by the robot end-effector. It is also possible to specify forces and the stiffness applied at the joints. This programming language lacks the flexibility to change the robot control methods and their parameters in real time (1 ms). For that reason, KRL is mostly used for repetitive processes in controlled environments.

Joint Position Control

$\boldsymbol{q}_{target\ point} = \boldsymbol{q}_{actual\ point}$   (eq. 1)

The basic principle of Joint Position Control is that only one set of variables may be commanded: the angle of each joint. The joints of the robot are controlled with a high-stiffness controller, with zero steady-state error.

Joint Impedance Control

The law that defines the Joint Impedance Control is:

$\boldsymbol{\tau}_{cmd} = \boldsymbol{K}_{Joint}(\boldsymbol{q}_{target\ point} - \boldsymbol{q}_{actual\ point}) + \boldsymbol{D}(\varepsilon_j) + \boldsymbol{f}_{dynamics} + \boldsymbol{\tau}_{dsr}$   (eq. 2)

In this control law, the commanded torque $\boldsymbol{\tau}_{cmd}$ is a function of four components:

• a virtual spring between the desired joint angles $\boldsymbol{q}_{target\ point}$ and the measured joint angles $\boldsymbol{q}_{actual\ point}$, given by $\boldsymbol{K}_{Joint}(\boldsymbol{q}_{target\ point} - \boldsymbol{q}_{actual\ point})$

• a damping term $\boldsymbol{D}(\varepsilon_j)$

• a nonlinear dynamic model compensation term $\boldsymbol{f}_{dynamics}$

• a desired torque $\boldsymbol{\tau}_{dsr}$
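The structure of eq. 2 can be sketched numerically as follows (in Python, for illustration; the real KRC computes the damping term and the dynamic model compensation internally from the robot model, so here the damping is approximated as a hypothetical linear term on joint velocity and the other terms are supplied precomputed):

```python
import numpy as np

def joint_impedance_torque(q_target, q_actual, qd_actual,
                           K_joint, D_joint, f_dynamics, tau_dsr):
    """Commanded joint torque following the structure of eq. 2.

    Illustrative sketch: the virtual spring pulls the joints toward the
    target angles, a velocity-proportional term stands in for D(eps_j),
    and dynamics compensation plus a desired torque are added on top.
    """
    spring = K_joint @ (q_target - q_actual)   # virtual joint spring
    damping = -D_joint @ qd_actual             # stand-in for D(eps_j)
    return spring + damping + f_dynamics + tau_dsr
```

With the robot at rest and one joint displaced, the output reduces to the spring term alone, which is the behaviour the virtual-spring interpretation predicts.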

Cartesian Impedance Control

The law that defines the Cartesian Impedance Control is:

$\boldsymbol{\tau}_{cmd} = \boldsymbol{J}^T\left(\boldsymbol{K}_{Cart}(\boldsymbol{P}^{base}_{target\ point} - \boldsymbol{P}^{base}_{actual\ point}) + \boldsymbol{F}_{cmd}\right) + \boldsymbol{D}(\varepsilon) + \boldsymbol{f}_{dynamics}$   (eq. 3)

This Cartesian law for the commanded joint torque is defined by the product of the diagonal Cartesian stiffness matrix $\boldsymbol{K}_{Cart}$ with the Cartesian pose error between the desired and actual poses of the robot, $\boldsymbol{P}^{base}_{target\ point} - \boldsymbol{P}^{base}_{actual\ point}$, plus a commanded Cartesian force $\boldsymbol{F}_{cmd}$. A virtual spring between the desired pose and the measured pose is represented by the term $\boldsymbol{K}_{Cart}(\boldsymbol{P}^{base}_{target\ point} - \boldsymbol{P}^{base}_{actual\ point})$. The


orientation in the Cartesian representation is defined by three Euler rotation angles (A, B, C), referring to rotations about $z$, $y$, and $x$, respectively [53].

These terms are multiplied by the transposed Jacobian matrix $\boldsymbol{J}^T$ to obtain the corresponding joint torques. A damping term $\boldsymbol{D}(\varepsilon)$, a function of a specified damping coefficient, and a dynamic model compensation term $\boldsymbol{f}_{dynamics}$ are also added to complete the model of the robot.
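The mapping from Cartesian wrench to joint torques in eq. 3 can be sketched as follows (in Python, for illustration; as in the joint case, the real controller derives the damping and dynamics terms internally, so they are passed in precomputed here):

```python
import numpy as np

def cartesian_impedance_torque(J, K_cart, p_err, F_cmd,
                               D_eps, f_dynamics):
    """Commanded joint torque following the structure of eq. 3.

    The virtual Cartesian spring K_cart @ p_err plus the commanded
    force F_cmd form a task-space wrench, which J^T maps to joint
    torques; damping and dynamics compensation are then added.
    """
    wrench = K_cart @ p_err + F_cmd        # virtual spring + commanded force
    return J.T @ wrench + D_eps + f_dynamics
```

For a redundant 7-DOF arm, `J` is 6x7 and `J.T @ wrench` yields a 7-vector of joint torques; the null-space of `J` is exactly where the simultaneous joint controller discussed below can act.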

1.2 KUKA FAST RESEARCH INTERFACE

The Fast Research Interface (FRI) protocol is a low-level communication interface that gives researchers access to the KUKA LWR 4+ controller and its variables from an external computer in real time, at a sampling frequency of up to 1 kHz. The communication between the host computer and the KRC controller is based on the UDP protocol [53]. Figure 28 shows the states and events of the FRI protocol.

Figure 28 - States and events diagram of the FRI communication protocol between a Host and the KRC [53].

The procedure for communication initialization follows these steps:

1. Open the FRI protocol with the friOpen(ts) command, where the variable ts represents the communication sample time in milliseconds. This function initializes a UDP socket, and a Host-KRC connection is established to evaluate the quality of the communications.

2. Once the communication quality is sufficient and there is no error in the KRC, the friStart command can be executed to change the FRI state from Monitor Mode to Command Mode. Only in Command Mode can the user send commands to the robotic arm.

3. The friStop event returns the system to Monitor Mode.

4. Once in Monitor Mode, the FRI communication bus can be terminated with the friClose event.
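The steps above form a small state machine. The sketch below mirrors it in Python purely for illustration (the real FRI is a C++ library communicating with the KRC over UDP; the class and method names here are hypothetical stand-ins for the friOpen/friStart/friStop/friClose events):

```python
class FRISession:
    """Toy state machine mirroring the FRI states and events:
    closed --friOpen--> monitor --friStart--> command
    command --friStop--> monitor --friClose--> closed
    """

    def __init__(self):
        self.state = "closed"

    def fri_open(self, ts_ms):
        # opens the UDP socket and enters Monitor Mode
        assert self.state == "closed"
        self.sample_time_ms = ts_ms
        self.state = "monitor"

    def fri_start(self):
        # Monitor Mode -> Command Mode (only here may commands be sent)
        assert self.state == "monitor"
        self.state = "command"

    def fri_stop(self):
        # Command Mode -> Monitor Mode
        assert self.state == "command"
        self.state = "monitor"

    def fri_close(self):
        # closing is only allowed from Monitor Mode
        assert self.state == "monitor"
        self.state = "closed"
```

The assertions encode the constraint that, for example, the bus can only be closed from Monitor Mode, matching the event diagram of Figure 28.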

To use the FRI protocol, it is necessary to create a KRL program on the KRC controller that runs at the same time as a client program running on the real-time computer, in our case the Simulink Real-Time target. This is necessary because some configurations can only be changed locally on the KRC, for example:


• Changing the KUKA LWR control strategy in real time (Cartesian Impedance Control, Joint Impedance Control and Joint Position Control).

• Changing the KRC FRI mode (Monitor Mode and Command Mode).

• Setting the FRI communication sampling time.

• Setting the tool and base reference frames.

The control laws used with FRI are presented below.

Joint Position Control

Joint position control is the same as specified for KRL (eq. 1). Nevertheless, if the communication sample time is greater than 1 ms, the desired joint position command must be interpolated.

Cartesian Impedance Control

The Cartesian Impedance Control uses the same control law defined in eq. 3. The variables that can be

changed in real time by the FRI are:

Cartesian stiffness K_Cart.

A damping coefficient ξ.

A desired cartesian external force, F_cmd, defined in the same reference frame as the stiffness matrix.

Because the KUKA LWR has seven degrees of freedom, the cartesian control can work simultaneously with the joint controller. When this happens, the joint controller acts in the robot's null-space.

For example, if we define the joint stiffness matrix K_Joint and the joint damping coefficient ξ_J at the minimum values allowed, 0.01 Nm/rad and 0.1 Nm·s/rad respectively, for a fixed value of cartesian stiffness and damping, the robot will work in gravity compensation mode in the null-space while maintaining the tool centre point (TCP) at the desired pose with the prescribed Cartesian stiffness.

Joint Impedance Control

The joint impedance control is as defined in eq. 2. The variables that can be changed in real time by the FRI are:

Joint stiffness K_Joint.

A damping coefficient ξ_J.

A desired external joint torque, τ_dsr, for every joint.

With these control schemes it is possible to use the KUKA LWR in a co-manipulation environment. This consists of a variable impedance control method that allows the robot to behave like a non-linear spring-damper system. The KUKA FRI module allows updating the joint stiffness K_Joint and the cartesian stiffness K_Cart in real time (up to a 1 ms update rate), leading to a soft and safe human-robot interaction.


2 POLARIS SPECTRA TRACKING SYSTEM

The Polaris Spectra is a tracking device (Figure 29). It can track the 3D position and orientation of passive markers attached to tools (Figure 30) with high accuracy. It emits infrared light to wirelessly detect and track a tool's position and orientation in all six degrees of freedom. The position and orientation of each tool are tracked and displayed within the simulation software in real time.

Figure 29 - Polaris system.

Figure 30 - Polaris tool with markers.

The Polaris Spectra hardware acquires the pose of passive markers in relation to the Polaris base reference

frame with a 60Hz frame rate and an accuracy of 0.3mm.

This kind of device is used in different areas of medicine, such as medical simulation and surgical training, movement measurement in motor control research, and fatigue analysis in orthopaedics research, for example [54]. In [13] the Polaris Spectra device was added to track the bone position in relation to the robotic arm position (Figure 31).

Figure 31 - Polaris Spectra Passive Markers and Drill Experimental Setup [13].

3 DEVICES’ COMMUNICATION

One of the main goals of this thesis is to interact with the KUKA LWR 4+ through Matlab and, to that end, the FRI communication protocol is used. Figure 32 shows the hardware involved and the communication rates between the different components.


Figure 32 - System components and communication rates.

This configuration reduces the possibility of system failure because all device communications are separated from each other, making it easier to detect a failure in a specific part. This configuration also makes the system scalable, as it is simple to add new components.

The FRI communications are initiated in KUKA's controller via a UDP socket, and to achieve the desired sampling rate a computer with a real-time operating system that responds to the UDP packets at that rate is necessary. Once the communication is established, its quality is permanently monitored by KUKA's control system; if any delay or failure is detected, the controller stops the communication and the robotic arm [55].

The interaction with other devices, like the Omega6 haptic device, the Polaris, or another computer (clients), with different sampling rates may lead to a timing variation larger than the 2 ms allowed by the FRI communication protocol. So, in this case, to make the interaction with different clients possible, a FRI-xPC server was developed [56].

The FRI-xPC server was implemented with Simulink Real-Time on the host PC (Figure 32) and then, after being compiled into C code, is downloaded as a standalone application to a real-time system with the real-time kernel provided by MathWorks.

3.1 FRI-XPC SERVER

The FRI is an interface that allows the user to communicate with KUKA's controller; in other words, it receives information on the current state of the robotic arm and sends the information needed to control the behaviour of the KUKA [55].

In [56] the FRI-xPC server was developed to perform the main low-level actions and present the user with only the most relevant information about the robotic arm (joint positions, tool-centre-point's pose, communication quality, etc.), in order to make this interaction simpler.

The most important tasks of the FRI-xPC server are [56]:

communication with the FRI, fulfilling all the requirements of the FRI UDP socket communication

communication with all clients via UDP sockets at an approximately constant rate

interpolation of the input signals sent by the clients' computers when the sampling frequencies are not equal

assuring the proper change of robot control modes via FRI

In this thesis, all tasks are intended to be performed in the FRI-xPC server at the maximum sampling rate.

In this work, the FRI-xPC server developed in [56] was adapted into 4 different groups (Figure 33):

FRI Receive

FRI Transmit

Main controller

Brain Biopsy

Figure 33 - FRI-xPC server groups diagram.

FRI receive

In this block, all the information received from the FRI via UDP (tFriMsrData [55]) is unpacked and, every time a new packet is received, a trigger signal is sent to activate the FRI transmit, Main controller and Brain Biopsy blocks.

FRI transmit

The FRI transmit block is where the data to send to the FRI via UDP is prepared and packed (tFriCmdData [55]). In that packet, all the information about the joint positions and the TCP's pose must be sent. If that information came from a client (the Omega6, for example) with a lower sample frequency than the one needed for the FRI communication, it would be necessary to interpolate. In this thesis, because the surgeon works in co-manipulation with the robotic arm and that information is processed in the FRI-xPC server, no interpolation is needed.

Joint positions interpolation

To interpolate the joint positions, Euler integration is used.

Let q_FRI(t_{k-1}) be the joint positions at the instant t = t_{k-1} and q_desired(t_k) be the desired joint positions at the instant t = t_k. Then, q_FRI(t_k) can be calculated as:

q_FRI(t_k) = q_FRI(t_{k-1}) + q̇(t_{k-1}) Δt

eq. 4

and

q̇(t_{k-1}) = K_p (q_desired(t_k) − q_FRI(t_{k-1})) + q̇_desired(t_{k-1})

eq. 5

where K_p is a proportional gain.
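As a minimal sketch (not the thesis's Simulink implementation), eqs. 4 and 5 collapse into a single Euler step; the function name, the gain value and the sample times below are our own illustrative choices:

```python
def interpolate_joints(q_fri, q_desired, qdot_desired, kp, dt):
    """One Euler-integration step of eqs. 4-5: drive q_fri toward q_desired."""
    # eq. 5: proportional correction plus the desired joint velocity feed-forward
    qdot = [kp * (qd - q) + qdd
            for q, qd, qdd in zip(q_fri, q_desired, qdot_desired)]
    # eq. 4: Euler integration over one FRI sample period dt
    return [q + v * dt for q, v in zip(q_fri, qdot)]

# Example: a 1 kHz FRI loop tracking a slower client set-point for 1 s.
q = [0.0, 0.5]
target = [0.1, 0.4]
for _ in range(1000):                       # 1000 steps at dt = 1 ms
    q = interpolate_joints(q, target, [0.0, 0.0], kp=20.0, dt=0.001)
```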

TCP cartesian pose interpolation

To interpolate the TCP position, the same method is used as for the joint position interpolation.

Let x_FRI(t_{k-1}) be the cartesian position at the instant t = t_{k-1} and x_desired(t_k) be the desired cartesian position at the instant t = t_k. Then, x_FRI(t_k) can be calculated as:

x_FRI(t_k) = x_FRI(t_{k-1}) + ẋ(t_{k-1}) Δt

eq. 6

and

ẋ(t_{k-1}) = K_p (x_desired(t_k) − x_FRI(t_{k-1})) + ẋ_desired(t_{k-1})

eq. 7

where K_p is a proportional gain.

To interpolate the TCP orientation, the SLERP (Spherical Linear intERPolation) algorithm is used [57, 58]:

SLERP(p_0, p_1, λ) = [sin((1 − λ)Ω) / sin(Ω)] p_0 + [sin(λΩ) / sin(Ω)] p_1

eq. 8

cos(Ω) = p_0 · p_1

eq. 9

where:

p_0 is the quaternion of the actual rotation matrix of the robot

p_1 is the quaternion of the desired rotation matrix for the robot

and λ is defined as:

λ = (t − t_k) / t_s

eq. 10

where t_s is the sampling time of the client computer.
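A minimal SLERP sketch, assuming unit quaternions stored as 4-tuples (w, x, y, z); the dot-product clamping and the near-zero guard are our additions for numerical safety:

```python
import math

def slerp(p0, p1, lam):
    """Spherical linear interpolation between unit quaternions (eq. 8)."""
    dot = sum(a * b for a, b in zip(p0, p1))
    dot = max(-1.0, min(1.0, dot))          # clamp against rounding errors
    omega = math.acos(dot)                  # eq. 9
    if omega < 1e-8:                        # nearly identical orientations
        return p0
    s = math.sin(omega)
    w0 = math.sin((1 - lam) * omega) / s    # weight of the current orientation
    w1 = math.sin(lam * omega) / s          # weight of the desired orientation
    return tuple(w0 * a + w1 * b for a, b in zip(p0, p1))
```

For example, interpolating halfway (λ = 0.5) between the identity and a 90° z-rotation yields the quaternion of a 45° z-rotation.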


Main controller

In order to control all the actions of the FRI-xPC server, a state machine is used [56]. The state machine has two main states:

Init state: in this state all variables are initialized.

Connected state: the state when the communication with the FRI is established with the required quality.

The Connected state has two sub-states that correspond to the FRI states:

MonitorMode: this sub-state does not allow any motion of the robot.

CommandMode: this sub-state allows motion of the robot.

In CommandMode there are two additional states:

Hold: the robot's position cannot be changed.

RobotActive: the position of the robot can be changed to the desired one. The positions and other motion-related signals of the robot are active only in the RobotActive state. Outside of it, the positions and orientations keep their values and the torques and forces are 0.

Figure 34 summarizes the different states:

Figure 34 - FRI-xPC server state machine.

The FRI states are also controlled by the KUKA controller. In the robot's controller a KRL program is running, where the data sent over FRI controls the execution of the program, allowing us to switch between different control modes and to change variable values like impedance parameters or force values [56].
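The states described above can be sketched as a transition table. Note that Hold and RobotActive are sub-states of CommandMode in the real server; the flattening and the event names below are illustrative assumptions, not the actual Simulink signals:

```python
# Sketch of the FRI-xPC server state machine described in the text.
# State names follow the text; events and flattening are simplified assumptions.
TRANSITIONS = {
    ("Init", "connected"): "MonitorMode",
    ("MonitorMode", "friStart"): "CommandMode",
    ("CommandMode", "friStop"): "MonitorMode",
    ("CommandMode", "activate"): "RobotActive",
    ("CommandMode", "hold"): "Hold",
    ("Hold", "activate"): "RobotActive",
    ("RobotActive", "hold"): "Hold",
}

def step(state, event):
    """Return the next state, staying put on unknown events."""
    return TRANSITIONS.get((state, event), state)
```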

Brain biopsy

The brain biopsy block is responsible for computing the positions and orientations of the movements, setting all the stiffness, force and damping parameters to be sent to the robotic arm, and setting the desired control modes via FRI. This is the block to be changed according to the task to be performed with the robotic arm. Independently of the task, there are 3 main sections in this block:

Control mode

In this section, the user may choose the control mode to operate the robot: Joint Position control, Joint

Impedance control or Cartesian Impedance control.


This part consists in setting up 3 flags: cmdMode, cmdFlag and cmdRun. cmdMode and cmdFlag are dependent on each other and set the control mode, while the cmdRun flag defines the velocity law to be used [56]. A list of the possible flag values is presented in Table 14 in appendix A.

TCP Pose and joint angles to be sent to the robotic arm

In this section, the user must send a cartesian pose or a set of joint angles (according to the control mode) to the robotic arm. If the user wants the robotic arm to perform a trajectory, it must be implemented in this section. In this thesis a rectilinear trajectory was implemented in the cartesian and joint spaces, as explained in the next chapter.

Stiffness, damping and force parameters

Depending on the action the user intends to perform with the robotic arm, a set of stiffness, damping and external force parameters can be adjusted. According to the desired control mode, the possible values for stiffness and damping are shown in the tables in appendix A.

In the case study presented in this thesis, all these sections were adapted for the desired task: a brain biopsy.

4 COMMUNICATION BETWEEN THE POLARIS SPECTRA WITH THE FRI-XPC SERVER

The optical tracking system Polaris Spectra is used to detect the relative pose between reference tools. In this

thesis, the Polaris Spectra was used to compute the pose of the phantom in the robot’s workspace. To achieve

this, it is necessary to know, continuously in time, the robotic arm and the phantom poses expressed in the

Polaris Spectra’s reference frame.

The Polaris Spectra cannot be connected to Simulink Real-Time, as it is a USB device, and it has a much lower frequency (60 Hz) than the FRI (1 kHz). The solution to synchronize both devices consists of connecting the FRI-xPC server (which runs in real time) via UDP to a Simulink model running in simulation time that communicates with the Polaris Spectra. The goal is that the pose acquired at a certain instant by the Polaris Spectra is synchronized with the robotic arm's pose received by the FRI.

Let us consider model 1 to be the FRI-xPC server and model 2 the one running the Polaris Spectra.

Model 2 sends a pulse to model 1 via UDP. Model 1 detects whether it received a UDP message and checks its content: if it is 1, it responds to model 2 by sending the robotic arm's pose; otherwise it sends an impossible value for the pose (10000, for example).

Once the message from model 1 is received by model 2, its content is checked: if it is 10000, nothing is done; otherwise the poses of the tools are acquired from the Polaris Spectra.
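The request/reply logic of this exchange can be sketched as pure message handling. The 10000 sentinel comes from the text; the little-endian double packing and the 6-value pose are our assumptions about the wire format:

```python
import struct

SENTINEL = 10000.0   # "impossible pose" value used when no request was received

def model1_reply(request_byte, robot_pose):
    """FRI-xPC server side (model 1): answer a pose request from model 2.
    Sends the real pose only when the request content is 1."""
    values = robot_pose if request_byte == 1 else [SENTINEL] * len(robot_pose)
    return struct.pack("<%dd" % len(values), *values)

def model2_handle(payload, n=6):
    """Polaris side (model 2): decode the reply; None means 'skip this sample'."""
    values = struct.unpack("<%dd" % n, payload)
    if values[0] == SENTINEL:
        return None                 # no valid robot pose for this instant
    return list(values)             # acquire the Polaris tool poses next
```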


Figure 35 - Communication between the Polaris Spectra and the FRI-xPC server.

The message from model 1 also carries the real-time instant at which the message was sent. Knowing this, we are able to compare the time between messages and, if the time interval is larger than a user-defined limit, the values are discarded. The limit was defined as 0.05 s, as it was the mean achievable value after communication testing in the lab.


Chapter 3 - Case study: Brain biopsy

This chapter presents the case study of this thesis, a brain biopsy in a phantom. In this case study, the goal is

to compare the accuracy of the current biopsy procedure performed with neuronavigation, with the accuracy

obtained in a simulated biopsy performed with the assistance of the robotic arm. The clinical neuronavigation

procedure was performed at Hospital Santa Maria with the Neurosurgery team, and the robot assisted

procedure was performed at the Surgical Robotics Lab of IST.

1 PHANTOM

The test subject in this case study was a phantom that simulates the human head (Figure 36). The phantom

was designed in SolidWorks and 3D printed in ABS thermoplastic on a Stratasys Dimension 3D printer. The phantom has 10 target spheres, and the centre point of each sphere is only accessible through

a hole at a specific orientation. Each sphere has its own orientation in order to simulate different situations that

the surgeon can be challenged with (Figure 37 and Figure 38).

Figure 36 - Phantom.

Figure 37 - Phantom - top view.

Figure 38 - Phantom - side view.

In each sphere, the target point was defined as the centre of the top plane, and the desired orientation was

the vector that describes the central axis of each access cone (Figure 39). At Hospital Santa Maria the defined target point was the centre of the cone inside the sphere, with the same orientation as before (Figure 40).

Figure 39 - Cut sphere - target point and desired

orientation at IST.

Figure 40 - Cut sphere - target point and desired

orientation at Hospital Santa Maria.

The reason for having two different target point concepts lies in the fact that the one used at Hospital Santa Maria will guide the needle to the target point. If this concept were used at the Surgical Robotics Lab, we would not be able to know whether the target point was reached by the robotic arm or by the guidance of the cone. Using the target point in Figure 39 facilitates the full visualization of the procedure. Nevertheless, whichever target point is used, its position is measured with the Polaris tracking system, and therefore there is no direct influence on the accuracy results.

2 CLINICAL NEURONAVIGATION PROCEDURE

In this section the steps performed by the Surgeons at Hospital Santa Maria to perform a brain biopsy with the

aid of neuronavigation are described.

After the pre-operative stage, where the CT-scan and MRI of the phantom were acquired and the target points and respective trajectories were manually defined, the surgeon uses equipment such as the StealthStation Treon (Figure 41) to perform the brain biopsy. This equipment has an optical tracking system attached to it.

Figure 41 - StealthStation Treon, composed of a computer and an optical tracking device [59].

The first step is to fix the phantom to the surgery table to guarantee that it will not move during the surgery (Figure 42). Once it is fixed, the surgeon must perform the registration of the phantom in the StealthStation Treon system. The objective of this step is to match the physical points of the phantom with the ones in the pre-operative images, in order to know where the planned target points and trajectories are in the real phantom (Figure 43). In this case study, the target points were the centres of the spheres inside the phantom (Figure 40).


Figure 42 - Phantom fixed to the surgery bed.

Figure 43 - Acquisition of points for the phantom’s registration.

Once the registration is done, the surgeon can see in the software where the planned target points and trajectories are, as shown in Figure 44:

Figure 44 - Planned target points and trajectories. Each line represents the trajectory planned for each one of the target

points. The surgeon can see if the trajectory will intersect any important structure of the brain.

Once the registration stage is concluded, with the help of a tracking device similar to the Polaris Spectra (Figure 41), the surgeon can align the needle with the planned pose, assisted by the Vertek system [60]. The Vertek system is composed of a reference tool, a probe and an articulated arm. The surgeon attaches the reference tool to the phantom; its reflective markers are detected by the tracking device in the same way as the probe's (Figure 45). After setting up the system, the surgeon aligns the probe with the plan on the screen as well as possible, fixing the probe with the articulated arm (Figure 46).


Figure 45 - Phantom with the reference tool attached on

the left and the Vertek probe with a fixed pose supported

by the Vertek articulated arm.

Figure 46 - Surgeon aligning the Vertek probe with the plan

on the computer screen.

Once the Surgeon is satisfied with the alignment reached, he inserts the needle in the support of the articulated

arm and simulates the biopsy (Figure 47 and Figure 48).

Figure 47 - Needle in the support of the articulated arm

reaching a target point.

Figure 48 - Needle in the support of the articulated arm

(closer view).

3 PROCEDURE PERFORMED AT IST

The procedure performed at IST was planned to replicate, as closely as possible, the real procedure performed

at Hospital Santa Maria. The optical tracking method used in this thesis was based on the Polaris Spectra

system. The following system was set up (Figure 49 and Figure 50):


Figure 49 - Setup of the system at the Surgical Robotics Lab.

Figure 50 - Setup of the system: KUKA robotic arm with the needle with reflector markers at the end-effector, the phantom

with a reference frame attached and the Polaris Spectra.

The pre-operative stage is also different. Instead of using the CT-scan and MRI of the phantom, we used a 3D point cloud obtained previously with the Polaris Spectra; this is our ground truth (Figure 51).

Figure 51 - Phantom’s point cloud.

After acquiring the point cloud of each target sphere, it is necessary to estimate not only its centre but also the

central axis of the cone.


To obtain the centre of each sphere expressed in the reference frame attached to the phantom, the following

minimization problem is solved:

min Σ_{i=1}^{n} ((x_i − x_c)² + (y_i − y_c)² + (z_i − z_c)² − r²)²

eq. 11

where (x_i, y_i, z_i) is the dataset of each sphere, (x_c, y_c, z_c) are the sphere's centre coordinates, r is the radius of the sphere and n is the size of the dataset. The results obtained are shown in Table 1 and Figure 52:

Figure 52 - Computed spheres based on the point clouds obtained with the Polaris Spectra. The centre of each sphere is a red dot.

Spheres' centres and radii

Sphere    X (mm)    Y (mm)    Z (mm)    r (mm)
1        -123.12    10.20     96.95     7.48
2        -105.78    -1.14     54.07     7.77
3        -136.09     9.03     10.55     7.85
4        -117.27     7.99     -3.59     7.78
5         -74.70     9.76     -3.09     7.40
6         -75.31     4.76     22.29     7.46
7         -80.32     9.64     96.02     7.74
8         -62.08    10.18     82.79     7.40
9         -48.96     8.95     63.87     7.59
10        -48.12     9.77     22.51     6.91

Table 1 - Computed centres and radii of the spheres based on the point clouds obtained with the Polaris Spectra.
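The thesis states eq. 11 as a nonlinear least-squares problem; a common linearized variant (an assumption on our part, not necessarily the author's solver) recovers the centre and radius in closed form by expanding |p − c|² = r² into a linear system:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit: rewrite |p - c|^2 = r^2 as
    2 p . c + (r^2 - |c|^2) = |p|^2 and solve for c and r."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])   # unknowns: [cx, cy, cz, r^2 - |c|^2]
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius
```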

Knowing the centre, it is necessary to estimate the central axis of the cone. Our approach begins by isolating the points of the internal curved cone. To do this, we select the points inside a smaller-radius sphere with the same centre (Figure 53).

Figure 53 - Sphere point cloud - isolation of the internal curved cone.

With this smaller dataset, d_cone, we need to slice the curved cone in parts and calculate the centroid of each slice. To know in which direction to slice the curved cone, we fit a line by Orthogonal Distance Regression.

Let c be the mean of the dataset d_cone:

c = [ (1/n) Σ_{i=1}^{n} x_i    (1/n) Σ_{i=1}^{n} y_i    (1/n) Σ_{i=1}^{n} z_i ]

eq. 12

and A the matrix of dimension n×3:

A = [ x_1 − c_x   y_1 − c_y   z_1 − c_z
      …           …           …
      x_n − c_x   y_n − c_y   z_n − c_z ]

eq. 13

Applying the Singular Value Decomposition (SVD) to matrix A, we choose the largest singular value in S and the corresponding column of V to obtain the direction of the axis of the cone.
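The two steps above (eqs. 12-13 plus the SVD) amount to a principal-axis line fit; a NumPy sketch, where the function name is ours:

```python
import numpy as np

def cone_axis_direction(points):
    """Fit a 3D line through a point cloud by orthogonal distance regression:
    centre the data (eq. 12), stack it (eq. 13), and take the right singular
    vector associated with the largest singular value."""
    d = np.asarray(points, dtype=float)
    c = d.mean(axis=0)                       # eq. 12: centroid of the cloud
    A = d - c                                # eq. 13: centred data matrix
    _, s, vt = np.linalg.svd(A, full_matrices=False)
    return c, vt[0]                          # point on the line, unit direction
```

The direction is defined up to sign, so downstream code should fix the sign (e.g. by pointing it outward from the sphere's centre).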

Knowing the direction, we slice the cone into 20 slices along this axis and compute the centroid of each slice, obtaining the blue points shown below:

Figure 54 - Points of the cone in black and the centroid of the slices in blue.

Considering the new dataset of blue points (the centroids of each slice) and repeating the process explained before using eq. 12 and eq. 13, a new orientation can be computed. This is the direction considered as the true direction of the cone's central axis. This second step provides improved results, as it has a filtering effect on the outlier points. Figure 55 shows the result for one sphere and Figure 56 the result for the whole set.

Figure 55 - Centre point and desired orientation for one

sphere.

Figure 56 - Target points and desired orientations for all

the spheres.

Once the centre coordinates and respective orientation vectors are computed, we define the target points for the robotic experiments by a translation with a magnitude of 7 mm along the orientation vector (Figure 39).

Once the target points and respective orientations are defined, a reference tool with reflective markers was attached to the needle's support on the robotic arm (Figure 57).


Figure 57 - Reference tool with reflective markers attached to the needle's support.

With this setup the Polaris Spectra can detect both reference tools and relate the position of the needle and

the position of the phantom.

As in the actual procedure performed at Hospital Santa Maria, the registration of the phantom must be done to know where the phantom is located in the workspace of the robotic arm. Once the registration is done, the surgeon just needs to grab the end-effector of the robotic arm and pull it, slowly, to the target point (Figure 58). The robot will haptically guide the surgeon to the planned pose for that target.

Figure 58 - Examples of trials.

3.1 PATIENT'S REGISTRATION - LOCALIZATION OF THE TARGET POINTS IN THE ROBOTIC ARM'S REFERENCE FRAME

In Figure 59 the setup of the system used at IST is represented. In order to perform the biopsy, the robotic arm must know where the target point is located in its reference frame throughout the procedure. To detect the position of the phantom, a reference tool (s1) of the Polaris Spectra is attached to it, and another reference tool (s2) is attached to the needle's support. With this configuration, the Polaris Spectra can detect the poses of the tools s1 and s2 expressed in its reference frame, T_s1^pol and T_s2^pol respectively. From the FRI we know the pose of the needle's reference frame in the robotic arm's reference frame, T_needle^kuka.


Figure 59 - Reference frames of the Polaris Spectra (O x_pol y_pol z_pol), needle tip (O x_n y_n z_n), phantom (O x_s1 y_s1 z_s1), needle holder (O x_s2 y_s2 z_s2), and KUKA robotic arm (O x_k y_k z_k).

The Polaris Spectra system allows us to perform a pivoting motion of a reference tool about a fixed point. In other words, it allows us to translate the tool reference frame to the pivoting point. In this case, the reference frame of the reference tool s2 was moved to the tip of the needle, as shown in Figure 60:

Figure 60 - Reference frames after the pivoting of the reference tool s2 in the Polaris Spectra.

The pose of the reference tool s2 expressed in reference frame s1, T_s2^s1, is given by:

T_s2^s1 = (T_s1^pol)^{-1} T_s2^pol

eq. 14

Once the pivoting is done, the position of the reference tool s2 expressed in reference frame s1 is the same

as the needle’s position expressed in reference frame s1:

T_s2^s1 [0 0 0 1]^T = T_n^s1 [0 0 0 1]^T

eq. 15
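Eq. 14 is a plain composition of homogeneous transforms; a NumPy sketch (the helper names are ours):

```python
import numpy as np

def relative_pose(T_s1_pol, T_s2_pol):
    """Pose of tool s2 expressed in frame s1 (eq. 14):
    T_s2^s1 = (T_s1^pol)^-1 @ T_s2^pol, with 4x4 homogeneous matrices."""
    return np.linalg.inv(T_s1_pol) @ T_s2_pol

def make_T(R, p):
    """Build a 4x4 homogeneous transform from a rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T
```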

To calculate the pose of reference tool s1 expressed in the robotic arm's reference frame, T_s1^k, a rigid body transformation must be formulated, as described below.

We have two corresponding datasets: the position of the needle expressed in the robotic arm's reference frame (from the FRI), p_n^k, and the position of the needle expressed in reference frame s1 (from the Polaris), p_n^s1, which are related by:

p_n^k = R_s1^k p_n^s1 + p_s1^k

eq. 16

where R_s1^k is a 3×3 rotation matrix and p_s1^k is the 3×1 position vector of reference frame s1 expressed in the robotic arm's reference frame. To calculate the optimal transformation, a least squares minimization problem is solved, formulated as:

min Σ_{i=1}^{n} ‖(p_n^k)_i − R_s1^k (p_n^s1)_i − p_s1^k‖²

eq. 17

where n is the number of points in the datasets.

In order to remove the translation vector from eq. 17, both datasets must be centred. To centre the data, the mean of each dataset, p̄_n^k and p̄_n^s1, is subtracted from the corresponding dataset:

p̄_n^k = (1/n) Σ_{i=1}^{n} (p_n^k)_i

eq. 18

p̄_n^s1 = (1/n) Σ_{i=1}^{n} (p_n^s1)_i

eq. 19

And the centred datasets are:

(p_n^k)_centred,i = (p_n^k)_i − p̄_n^k

eq. 20

(p_n^s1)_centred,i = (p_n^s1)_i − p̄_n^s1

eq. 21

So the problem in (eq. 17) can now be rewritten as:

min Σ_{i=1}^{n} ‖(p_n^k)_centred,i − R_s1^k (p_n^s1)_centred,i‖²

eq. 22

Since R_s1^k (R_s1^k)^T = I, where I is the identity matrix, and det(R_s1^k) = 1, eq. 22 simplifies to:

min Σ_{i=1}^{n} ( ((p_n^k)_centred,i)^T (p_n^k)_centred,i + ((p_n^s1)_centred,i)^T (p_n^s1)_centred,i − 2 ((p_n^k)_centred,i)^T R_s1^k (p_n^s1)_centred,i )

eq. 23

The minimization of eq. 23 is equivalent to maximizing its last term, i.e. maximizing trace(R_s1^k H), where H is the correlation matrix between the two datasets:

H = Σ_{i=1}^{n} (p_n^s1)_centred,i ((p_n^k)_centred,i)^T

eq. 24

The optimal rotation matrix is calculated resorting to the singular value decomposition [61]:

SVD(H) = U S V^T

eq. 25

R_s1^k = V diag(1, 1, det(V U^T)) U^T

eq. 26

Knowing R_s1^k, the optimal translation vector between the two datasets is:

p_s1^k = p̄_n^k − R_s1^k p̄_n^s1

eq. 27

Therefore, having the estimated pose of reference frame s1 attached to the phantom expressed in the robotic arm's reference frame (R_s1^k and p_s1^k), having the needle's pose expressed in the robotic arm's reference frame (R_n^k and p_n^k), and having the target point position and orientation expressed in reference frame s1 from the pre-operative stage (p_t^s1 and n_t^s1), it is possible to obtain the target point position, p_t^k, and orientation vector, n_t^k, expressed in the robotic arm's reference frame:

p_t^k = R_s1^k p_t^s1 + p_s1^k

eq. 28

n_t^k = R_s1^k n_t^s1

eq. 29
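The whole derivation (eqs. 18 through 27) condenses into a short registration routine; this NumPy sketch follows the SVD-based rigid-registration construction cited in [61], with function and variable names of our own choosing:

```python
import numpy as np

def rigid_registration(p_s1, p_k):
    """Least-squares rigid transform mapping points in frame s1 onto the same
    points in the robot frame k: p_k ~ R @ p_s1 + p (eqs. 16-27)."""
    p_s1 = np.asarray(p_s1, dtype=float)
    p_k = np.asarray(p_k, dtype=float)
    mean_s1, mean_k = p_s1.mean(axis=0), p_k.mean(axis=0)    # eqs. 18-19
    H = (p_s1 - mean_s1).T @ (p_k - mean_k)                  # eq. 24
    U, _, Vt = np.linalg.svd(H)                              # eq. 25
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])       # reflection guard
    R = Vt.T @ D @ U.T                                       # eq. 26
    p = mean_k - R @ mean_s1                                 # eq. 27
    return R, p
```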


3.2 SURGICAL CO-MANIPULATION MODE

In this thesis, a co-manipulation mode was developed that allows the surgeon to perform the simulated biopsy. While the surgeon executes the biopsy, the robot constrains the surgeon's movement to a cone-shaped zone, guiding the needle to the desired target point with the desired orientation planned preoperatively (Figure 61).

Figure 61 - Illustration of the environment.

After registration, the following variables are known during the surgery:

𝒑_t^k – Position of the target point expressed in the robotic arm’s base reference frame (obtained from the patient’s registration)

𝒑_n^k – Position of the tip of the needle expressed in the robotic arm’s base reference frame (obtained from the FRI)

𝑹_n^k – Rotation matrix of the needle’s reference frame expressed in the robotic arm’s base reference frame (obtained from the FRI)

h_cone – Height of the virtual impedance cone (defined by the user)

r_max – Maximum radius of the virtual impedance cone (defined by the user)

The orientation vector 𝒏_t^k is also known from the patient’s registration. It is also necessary to calculate the rotation matrix of the target’s reference frame expressed in the robotic arm’s base reference frame, 𝑹_t^k.

Defining 𝒏_t^k as the 𝑧 axis of 𝑹_t^k, it is possible to calculate the angle between 𝒏_t^k and the 𝑧 axis of the robotic arm’s base reference frame:

𝜃 = cos⁻¹(𝒏_t^k ∙ [0 0 1]ᵀ)

eq. 30

and the rotation vector from the target to the robotic arm:

𝒗 = 𝒏_t^k × [0 0 1]ᵀ

eq. 31


Knowing the angle and the vector, the rotation matrix 𝑹_t^k can be computed according to the angle-axis representation:

𝑹_t^k =
[ v_x²(1−cos θ) + cos θ          v_x v_y(1−cos θ) − v_z sin θ    v_x v_z(1−cos θ) + v_y sin θ ]
[ v_x v_y(1−cos θ) + v_z sin θ   v_y²(1−cos θ) + cos θ           v_y v_z(1−cos θ) − v_x sin θ ]
[ v_x v_z(1−cos θ) − v_y sin θ   v_y v_z(1−cos θ) + v_x sin θ    v_z²(1−cos θ) + cos θ ]

eq. 32
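The angle-axis construction (eqs. 30–32) can be sketched as follows; this is an illustrative NumPy version with a hypothetical function name. Rodrigues’ formula requires a unit axis, so the cross product is normalized first, and the axis sign is chosen here so that the resulting matrix maps the base 𝑧 axis onto 𝒏_t (sign conventions may differ from eq. 31).

```python
import numpy as np

def rotation_from_target_normal(n_t):
    """Build a rotation R_t^k whose z axis is n_t (eqs. 30-32)."""
    z = np.array([0.0, 0.0, 1.0])
    n_t = n_t / np.linalg.norm(n_t)
    theta = np.arccos(np.clip(n_t @ z, -1.0, 1.0))  # angle between n_t and z (eq. 30)
    v = np.cross(z, n_t)                            # rotation axis (cf. eq. 31)
    if np.linalg.norm(v) < 1e-12:                   # n_t parallel to z: no unique axis
        return np.eye(3) if n_t[2] > 0 else np.diag([1.0, -1.0, -1.0])
    v = v / np.linalg.norm(v)                       # Rodrigues needs a unit axis
    vx, vy, vz = v
    K = np.array([[0.0, -vz, vy], [vz, 0.0, -vx], [-vy, vx, 0.0]])
    # Rodrigues' formula, algebraically equivalent to the matrix in eq. 32
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K
```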

It is now possible to define the impedance cone’s vertex, expressed in the target’s reference frame, 𝒑_c^t, and in the robotic arm’s base reference frame, 𝒑_c^k:

𝒑_c^t = [0 0 dist_ct]ᵀ

eq. 33

𝒑_c^k = 𝒑_t^k + 𝑹_t^k 𝒑_c^t

eq. 34

where dist_ct is the distance between the target point and the cone’s vertex. The orientation of the cone’s reference frame expressed in the robotic arm’s base reference frame, 𝑹_c^k, is the same as that of the target point, 𝑹_t^k:

𝑹_c^k = 𝑹_t^k

eq. 35

Once the pose of the cone’s vertex is defined, the needle’s position, 𝒑_n^c, and orientation, 𝑹_n^c, expressed in the cone’s reference frame are:

𝒑_n^c = (𝑹_c^k)ᵀ (𝒑_n^k − 𝒑_c^k)

eq. 36

𝑹_n^c = (𝑹_c^k)ᵀ 𝑹_n^k

eq. 37
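Eqs. 33–37 amount to defining the cone frame and re-expressing the needle pose in it; a minimal sketch (hypothetical function name and values):

```python
import numpy as np

def needle_in_cone_frame(p_t_k, R_t_k, p_n_k, R_n_k, dist_ct):
    """Express the needle pose in the cone's reference frame (eqs. 33-37)."""
    p_c_t = np.array([0.0, 0.0, dist_ct])  # vertex above the target (eq. 33)
    p_c_k = p_t_k + R_t_k @ p_c_t          # vertex in the base frame (eq. 34)
    R_c_k = R_t_k                          # cone frame = target frame (eq. 35)
    p_n_c = R_c_k.T @ (p_n_k - p_c_k)      # needle position, cone frame (eq. 36)
    R_n_c = R_c_k.T @ R_n_k                # needle orientation, cone frame (eq. 37)
    return p_n_c, R_n_c
```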

All these poses are shown in Figure 62:


Figure 62 - Impedance environment reference frames. Cone’s vertex position, 𝒑_c^k, target point’s position, 𝒑_t^k, and needle’s position, 𝒑_n^k, with respect to the robotic arm’s base reference frame, and needle’s position, 𝒑_n^c, with respect to the cone’s reference frame.

The virtual impedance environment can be separated into 6 areas (Figure 63). The desired behaviour of the robot in each area is:

Figure 63 - Virtual impedance environment.

Area 1: Gravitational compensation; the user can move the robot freely.

Area 2: The needle tends to go to the central axis of the cone, without any movement being imposed along that axis. As the user guides the needle towards the vertex, the orientation of the needle is forced to the desired one.

Area 3: The user may try to go in the direction of the target point from area 1, but from outside the cone. In this area, a force pulling the needle back to area 1 must be applied.


Area 4: Once inside the cone, the user may try to leave the cone through its surface and reach area 4. In this area, a force pulling the needle back to area 2 must be applied, without imposing any movement along the central axis of the cone.

Area 5: The needle must be aligned with the desired orientation and tends to go to the central axis of the cone, without any movement being imposed along that axis. The robot must impose a strong constraint to prevent the user from leaving the central axis.

Area 6: Once the needle reaches the target point, a constraint must be imposed so that the needle does not go beyond the target point. The only movement allowed is backwards.

Let’s define the distance from the needle to the impedance cone’s axis, dist_ort, as:

dist_ort = √({𝒑_n^c}_x² + {𝒑_n^c}_y²)

eq. 38

and the radius of the cone’s section at a distance {𝒑_n^c}_z from the cone’s vertex, r_section({𝒑_n^c}_z), as:

r_section({𝒑_n^c}_z) = {𝒑_n^c}_z · r_max / h_cone

eq. 39

So, with eq. 38 and eq. 39 in mind, it is possible to know at any time where the needle is with respect to the cone, expressed in the cone’s reference frame. The conditions that determine in which area the needle is are the following:

Area 1: {𝒑_n^c}_z > h_cone

eq. 40

Area 2: 0 < {𝒑_n^c}_z ≤ h_cone and dist_ort − r_section < 0

eq. 41

Area 3: 0 < {𝒑_n^c}_z ≤ h_cone and dist_ort ≥ r_max

eq. 42

Area 4: 0 < {𝒑_n^c}_z ≤ h_cone and dist_ort − r_section ≥ 0 and dist_ort < r_max

eq. 43

Area 5: −dist_ct < {𝒑_n^c}_z ≤ 0

eq. 44

Area 6: {𝒑_n^c}_z ≤ −dist_ct

eq. 45
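The six conditions (eqs. 40–45) translate directly into a classification function; a sketch with a hypothetical function name, taking the needle tip position expressed in the cone’s frame:

```python
import numpy as np

def cone_area(p_n_c, h_cone, r_max, dist_ct):
    """Return the impedance area (1-6) for a needle tip position
    expressed in the cone's reference frame (eqs. 38-45)."""
    x, y, z = p_n_c
    dist_ort = np.hypot(x, y)              # distance to the cone axis (eq. 38)
    if z > h_cone:                         # above the cone (eq. 40)
        return 1
    if 0 < z <= h_cone:
        r_section = z * r_max / h_cone     # section radius at height z (eq. 39)
        if dist_ort < r_section:           # inside the cone (eq. 41)
            return 2
        if dist_ort >= r_max:              # outside, beyond max radius (eq. 42)
            return 3
        return 4                           # between surface and r_max (eq. 43)
    if -dist_ct < z <= 0:                  # past the vertex, above target (eq. 44)
        return 5
    return 6                               # at or beyond the target (eq. 45)
```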


In areas 2, 5 and 6 it is desired that the robot guides the needle to the central axis of the cone. To do that, the needle’s position must be projected onto that axis, expressed in the cone’s reference frame, 𝒑_l^c, and in the needle’s reference frame, 𝒑_l^n, and the offset between the needle and the projected point, 𝒑_n^l, must also be defined:

𝒑_l^c = [0 0 {𝒑_n^c}_z]ᵀ

eq. 46

𝒑_n^l = 𝒑_n^c − 𝒑_l^c

eq. 47

𝒑_l^n = (𝑹_n^c)ᵀ (𝒑_l^c − 𝒑_n^c)

eq. 48

Once the needle goes from area 2 to area 4, the normal of the cone at {𝒑_n^c}_z is needed to know the direction of the contact point. The contact point, 𝒑_contact^c, can be defined as:

𝒑_contact^c = 𝒑_n^l · r_section / dist_ort + 𝒑_l^c

eq. 49
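Eqs. 46–49 can be sketched as follows (hypothetical function name; r_section is assumed to have been evaluated at the needle’s height via eq. 39):

```python
import numpy as np

def axis_projection_and_contact(p_n_c, r_section):
    """Project the needle tip onto the cone axis and find the surface
    contact point at the same height (eqs. 46-49)."""
    p_l_c = np.array([0.0, 0.0, p_n_c[2]])   # projection onto the axis (eq. 46)
    p_n_l = p_n_c - p_l_c                    # radial offset vector (eq. 47)
    dist_ort = np.linalg.norm(p_n_l)         # same as eq. 38 (z-component is 0)
    # scale the radial offset out to the section radius (eq. 49)
    p_contact_c = p_n_l * r_section / dist_ort + p_l_c
    return p_l_c, p_n_l, p_contact_c
```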

All the positions expressed in the cone’s reference frame are illustrated in Figure 64:

Figure 64 - Points expressed in the cone’s reference frame. Needle’s position, 𝒑_n^c, projected point, 𝒑_l^c, and contact point, 𝒑_contact^c.

Impedance control law

The impedance control law in Cartesian space is:


𝝉_cmd = 𝑱ᵀ(𝑲_cart(𝒑𝒐𝒔𝒆_target^k − 𝒑𝒐𝒔𝒆_actual^k) + 𝑭_cmd) + 𝑫(𝜀) + 𝒇_dynamics

eq. 50

To correct the needle’s pose we update the robot’s stiffness, 𝑲_cart. The goal is to change 𝑲_cart in real-time so that the robot’s behaviour is the one desired for the area (Figure 63) in which the needle is. It is assumed that there are no external forces applied to the robot, that is, 𝑭_cmd is null.

𝑲_cart is a 6 × 6 diagonal matrix:

𝑲_cart = diag([𝑲_P spring 𝑲_T spring])

eq. 51

where 𝑲_P spring contains the stiffness parameters related to the position and 𝑲_T spring contains the stiffness parameters related to the orientation.

Position

Once we know where the needle is with respect to the cone, it is possible to correct its position. To do that, we use a virtual spring (Figure 65) defined in the Cartesian impedance law (eq. 50):

Figure 65 - Correcting the needle’s position.

The magnitude of the force applied by the spring, F_spring, is defined as:

F_spring = K_spring_in · d

eq. 52

where K_spring_in is the stiffness of the spring and d is the distance from the current to the desired position. It is desired that K_spring_in changes proportionally to the distance to the 𝑧 axis of the cone’s reference frame, dist_ort. It is also desirable that K_spring_in changes with the distance {𝒑_n^c}_z. Therefore, the magnitude of K_spring_in is defined as:

K_spring_in = K_spring_in_max · (1 − ({𝒑_n^c}_z / h_cone)^(1/4)) · (dist_ort / r_section)^4

eq. 53

𝑲_spring_in must be expressed in the needle’s reference frame, and it is desirable to pull the needle not to the vertex but to the 𝑧 axis of the cone’s reference frame (Figure 65). The director vector, 𝒗_spring_in, is defined as:

𝒗_spring_in = |𝒑_l^n| / ‖𝒑_l^n‖

eq. 54

So, the vector 𝑲_spring_in is defined as:

𝑲_spring_in = 𝒗_spring_in · K_spring_in

eq. 55

This is valid only when the needle has not reached the cone’s vertex (areas 2 and 4, Figure 63). When the needle goes beyond that point (area 5, Figure 63), the stiffness is set to maximum in the directions perpendicular to the needle and to zero along the predefined needle trajectory direction:

𝑲_spring_in = [0 K_spring_in_max K_spring_in_max]

eq. 56

When the needle is in areas 1 and 3:

𝑲_spring_in = [0 0 0]

eq. 57

and when the needle is in area 6:

𝑲_spring_in = [K_spring_in_max K_spring_in_max K_spring_in_max]

eq. 58
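The per-area selection of the translational stiffness (eqs. 53–58) can be sketched as below; an illustrative version with hypothetical names, where the area index is assumed to come from the conditions in eqs. 40–45:

```python
import numpy as np

def radial_stiffness(area, p_n_c, p_l_n, h_cone, r_section, dist_ort, k_max):
    """Position stiffness vector K_spring_in per area (eqs. 53-58).

    p_l_n is the axis projection expressed in the needle frame (eq. 48);
    k_max is K_spring_in_max. Names follow the text.
    """
    if area in (1, 3):                          # free motion (eq. 57)
        return np.zeros(3)
    if area == 5:                               # locked off-axis, free along it (eq. 56)
        return np.array([0.0, k_max, k_max])
    if area == 6:                               # fully locked at the target (eq. 58)
        return np.array([k_max, k_max, k_max])
    # areas 2 and 4: stiffness grows towards the vertex and the surface (eq. 53)
    k = k_max * (1 - (p_n_c[2] / h_cone) ** 0.25) * (dist_ort / r_section) ** 4
    v = np.abs(p_l_n) / np.linalg.norm(p_l_n)   # director vector (eq. 54)
    return v * k                                # eq. 55
```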

Virtual surface impedance contact law

When inside the cone, the robot’s stiffness changes in order to guide the needle to the central axis. When the needle reaches the cone’s surface, an additional constraint on the movement is felt, like a virtual barrier. When the needle is in contact with the cone’s surface, the stiffness values along the 𝑥 and 𝑦 axes of the cone’s reference frame are increased according to:

K_spring = K_spring_in + K_contact

eq. 59


where K_contact is:

K_contact = K_wall · ((dist_ort − r_section) / r_section)^(1/4)

eq. 60

and K_wall is the maximum additional magnitude that the spring’s stiffness can have.

K_contact must be expressed in the needle’s reference frame, and it is desirable to pull the needle to the nearest point inside the cone. The director vector, 𝒗_contact, is defined as:

𝒗_contact = |(𝑹_n^c)ᵀ 𝒏_contact| / ‖(𝑹_n^c)ᵀ 𝒏_contact‖

eq. 61

where 𝒏_contact is the normal of the surface where the contact occurred, expressed in the cone’s reference frame, defined as:

𝒏_contact = [−2x −2y 2z]ᵀ / ‖[−2x −2y 2z]‖

eq. 62

and (x, y, z) are the coordinates of 𝒑_contact^c (eq. 49).

Therefore, the vector 𝑲_contact is defined as:

𝑲_contact = 𝒗_contact · K_contact

eq. 63

This is valid only when the needle has not reached the cone’s vertex (area 4, Figure 63). When the needle goes beyond that point but has not reached the target point (area 5, Figure 63), 𝑲_contact is null because the radial stiffness of the robot is already at its maximum (eq. 56). The same happens when the needle is in area 6 (eq. 58). When the needle is in area 1, 𝑲_contact is also null because free movement is desired (eq. 57).

When the needle is in area 3, eq. 60 is not valid. In that area it is desired to force the needle to go in the positive direction of the 𝑧 axis of the cone, expressed in the needle’s reference frame. In that case, K_contact is defined as:

K_contact = K_wall · (1 − ({𝒑_n^c}_z / h_cone)^8)

eq. 64

And the director vector is:

𝒗_contact = |(𝑹_n^b)ᵀ 𝒏_top wall| / ‖(𝑹_n^b)ᵀ 𝒏_top wall‖

eq. 65

where 𝒏_top wall is:

𝒏_top wall = {𝑹_c^b}_z

eq. 66

So, the vector 𝑲_contact for area 3 is defined as:

𝑲_contact = 𝒗_contact · K_contact

eq. 67

Orientation

Not only the needle’s position must be corrected but also its orientation. Once we know where the needle is with respect to the cone, it is possible to correct it. To do that, we use a torque spring (Figure 66) defined in the Cartesian impedance law (eq. 50):

Figure 66 - Correcting the needle’s orientation.

The magnitude of the torque applied by the spring, T_spring, is defined as:

T_spring = K_T_spring · 𝜃

eq. 68

where K_T_spring is the stiffness of the torque spring and 𝜃 is the angle between the 𝑧 axis of the cone’s reference frame and the 𝑥 axis of the needle’s reference frame (Figure 66). As defined before for the position, it is desired that K_T_spring changes proportionally to the distance to the 𝑧 axis of the cone’s reference frame. It is also desirable that K_T_spring changes with 𝜃. So the magnitude K_T_spring is defined as:

K_T_spring = K_T_spring_max · (|𝜃| / (𝜋/2)) + K_T_spring_max · (1 − ({𝒑_n^c}_z / h_cone)^(1/4))

eq. 69

The angle 𝜃 is:

𝜃 = cos⁻¹({𝑹_n^c}_x ⋅ [0 0 −1]ᵀ)

eq. 70

And the vector of rotation in the cone’s reference frame, 𝒗_rot cone, is:

𝒗_rot cone = [0 0 −1]ᵀ × {𝑹_n^c}_x

eq. 71

This vector must be expressed in the needle’s reference frame, 𝒗_rot needle, defined as:

𝒗_rot needle = (𝑹_n^c)ᵀ 𝒗_rot cone

eq. 72

So, the vector 𝑲_T spring is defined as:

𝑲_T spring = 𝒗_rot needle · K_T_spring

eq. 73

This is only valid when the needle’s tip has not reached the cone’s vertex (areas 2 and 4, Figure 63). When the needle goes beyond that point (areas 5 and 6, Figure 63), the rotational stiffness is set to maximum except along the needle’s axis:

𝑲_T spring = [K_T_spring_max K_T_spring_max 0]

eq. 74

When the needle is in areas 1 and 3:

𝑲_T spring = [0 0 0]

eq. 75

Finally, the vector sent to the robotic arm is:

𝑲_cart = [𝑲_spring 𝑲_T spring] = [{𝑲_spring}_x {𝑲_spring}_y {𝑲_spring}_z {𝑲_T spring}_z {𝑲_T spring}_y {𝑲_T spring}_x]

eq. 76

The possible stiffness values are summarized in the following table:

Table 2 - 𝑲_cart components.
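The final stacking (eq. 76) is a simple concatenation with the rotational components reversed (z, y, x); a minimal sketch with a hypothetical function name:

```python
import numpy as np

def assemble_k_cart(K_spring, K_T_spring):
    """Stack translational and rotational stiffness into the 6-vector
    sent to the robot (eq. 76). The rotational components are reversed,
    i.e. the expected order is [x, y, z, rot-z, rot-y, rot-x]."""
    return np.concatenate([K_spring, K_T_spring[::-1]])
```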


Chapter 4 - Results

In this chapter, all the procedure’s steps performed at the Surgical Robotics Lab are described,

from the needle’s calibration with the optical tracker to the biopsy itself. The results and errors in

each step are presented.

1 PROCEDURE PERFORMED AT IST

1.1 NEEDLE’S CALIBRATION

One of the most important steps in this procedure is the needle’s calibration, not only on the robotic arm but also on the Polaris Spectra. The better the calibration, the better the results.

Robotic arm

The robotic arm kinematics provides its end-effector position with respect to its base reference

frame. When a tool is attached to the end-effector, it is necessary to calibrate it.

This calibration defines the tool coordinate system which has its origin at a user-defined point

called the tool centre point (TCP); in this case, the tip of the needle. The procedures included in the KUKA software were used: the XYZ 4-Point procedure to define the TCP and the ABC 2-Point procedure to set the tool reference frame.

The calibration was performed with an associated TCP error of 0.191 mm. The position and

orientation obtained by these two procedures are shown in the table below:

Position Orientation

𝑿 (𝒎𝒎) 𝑌 (𝑚𝑚) 𝑍 (𝑚𝑚) 𝐴 (𝑑𝑒𝑔) 𝐵 (𝑑𝑒𝑔) 𝐶 (𝑑𝑒𝑔)

-173.386 -0.052 134.072 -180 -13 0

Table 3- Needle's calibration on the robotic arm

In order to assess the precision of the robotic arm after calibration, we performed trajectories in the null-space of the arm with maximum Cartesian stiffness of the end-effector. We continuously measured the pose of the needle and registered the variations around the initial point, both via the FRI and with the Polaris Spectra (Figure 67).


Figure 67 - Robotic arm with a fixed pose of the end-effector and null stiffness in the null-space.

The results obtained from this movement are shown in Table 4 and the evolution of the error along the movement is illustrated in Figure 68:

Mean error due to null-space motion
                   Position error        Orientation error
Polaris Spectra    1.92 mm ± 0.96 mm     2.26° ± 0.15°
KUKA LWR 4+        1.62 mm ± 0.80 mm     0.54° ± 0.24°

Table 4 – Mean errors of the tip obtained by varying the null-space

Figure 68 - Errors obtained by changing the null-space


As we can observe in Figure 68, changing the null-space of the robotic arm changes the end-effector’s pose. After the movement, the final errors were 0.33 mm and 2.23°, measured by the Polaris Spectra, and 0.40 mm and 0.15°, measured by the KUKA LWR 4+.

Polaris Spectra

Once the needle was calibrated for the robotic arm, the calibration for the Polaris Spectra followed. The optical tracking device detects the reference tool attached to the needle’s support. The reference frame of the tool must be translated by an offset to match the tip of the needle, as shown in Figure 69:

Figure 69 - Needle's calibration on the Polaris Spectra - translation of the origin of the coordinate system.

Figure 70 - Needle's pivoting on the Polaris Spectra.

To perform the translation, the algorithm included in the Polaris Spectra software allows the pivoting to be performed as illustrated in Figure 70. This implies that the tip of the needle must be fixed, allowing only rotations around that point. The resulting translation, expressed in the tool’s reference frame, is:

X (mm)    Y (mm)    Z (mm)
−97.39    166.60    111.58

Table 5 - Needle's calibration on the Polaris Spectra

The calibration was performed with an associated error of 0.64 mm.

1.2 PHANTOM’S REGISTRATION

Once the needle is calibrated on the robotic arm and on the Polaris Spectra, it is possible to perform the phantom’s registration. This step is fundamental because it highly influences the results, since it allows the robotic arm to know where the target point is in its workspace.

When collecting points for the registration, it is imperative to collect points from zones with significant features of the phantom (Figure 71).


Figure 71 – Desirable points to collect for the phantom's registration

For this thesis, two approaches to select the set of points for the registration were followed. In the first approach, points were collected from the same facial areas as chosen at Hospital Santa Maria; in the second approach, points were collected from the cone of one of the spheres (Figure 72).

Figure 72 - Points collected during the registration with the first hypothesis (left) and with the second hypothesis (right). Green - points under 1 mm error; red - points above 1 mm error.

Figure 72 shows the points collected and Table 6 shows the percentage of points under a threshold error value, obtained after the transformation from the Polaris Spectra reference frame to the robotic arm’s reference frame:

% of points under the threshold
Threshold    1st Hypothesis    2nd Hypothesis
≤ 3 mm       99.95             100
≤ 2 mm       98.14             100
≤ 1.5 mm     90.59             100
≤ 1 mm       61.96             100

Table 6 - Phantom's registration - % of points under the threshold

A total of 29266 points were collected for the first hypothesis and 2141 points for the second

hypothesis. The mean position error for the 1st hypothesis was 1.127mm ± 0.332mm and for the

2nd was 0.236mm ± 0.092mm.


1.3 BIOPSY

Once the calibrations and the phantom’s registration were performed, the biopsies were carried out. The target points and desired orientation vectors, expressed in the reference frame of the phantom, were defined as explained in Figure 39.

For each sphere, once the robotic arm reached its target, the pose of the tip of the needle, expressed in the phantom reference frame, was acquired by the Polaris Spectra.

Comparing the desired poses with the ones reached, we can compute the error associated with each sphere. The errors are presented in the following table and in Figure 73:

                       Position error (mm)       Orientation error (°)
Target sphere number   1st Hyp.    2nd Hyp.      1st Hyp.    2nd Hyp.
1                      2.23        1.73          1.30        1.08
2                      0.43        2.18          1.87        1.27
3                      4.09        1.26          1.52        1.56
4                      1.27        2.06          6.43        5.36
5                      3.95        3.21          3.82        1.93
6                      2.22        3.02          10.79       7.32
7                      1.80        1.85          8.78        8.99
8                      5.57        4.91          2.20        1.83
9                      1.90        1.89          2.32        3.18
Mean error             2.61        2.46          4.34        3.61
Standard deviation     1.52        1.04          3.30        2.75

Table 7 – Simulated biopsy errors - procedure performed at the Surgical Robotics Lab

Figure 73 - Simulated biopsy errors - procedure performed at the Surgical Robotics Lab


Figure 74 and Figure 75 show two of the trials:

Figure 74 - Successful trials

Figure 75 - Unsuccessful trials

2 PROCEDURE PERFORMED AT HOSPITAL SANTA MARIA

2.1 IMAGE REGISTRATION ERROR

When analysing the results of the simulated biopsy procedure at Hospital Santa Maria, it is necessary to separate the procedure into 2 phases: the preoperative phase and the intraoperative phase.

The first step in the pre-operative phase is to quantify the error associated with image acquisition, in other words, how much the MRI and the CT-scan distort the real object. To do this, the images were converted to a point cloud and the target spheres were segmented. Having only the point cloud that represents the spheres, the target points were obtained by optimization as in eq. 11. The scheme of this procedure can be observed in Figure 76:


Figure 76 - Procedure to obtain the centres of the target spheres from the medical images. The steps for the

CT images are shown on the left, and the steps for the MRI are shown on the right.

To compute the orientation vector, matrix 𝑨 (eq. 13) was computed and the SVD was applied.

Having the centre of the spheres in each image modality, these were registered to each other, and the registration error for each sphere is shown in Table 8.


Target sphere number   Position error (mm)   Orientation error (°)
1                      0.3227                 4.6084
2                      0.6981                10.3568
3                      0.3238                 2.8332
4                      0.4510                 8.2543
5                      0.9185                 4.9925
6                      0.5779                19.6981
7                      0.2515                 2.0065
8                      0.0934                 4.2906
9                      0.4563                10.3427
Mean error             0.455                  7.487
Standard deviation     0.249                  5.506

Table 8 - Registration error between MRI and CT-scan.

The second step is to quantify the error between the MRI and the phantom and the error between the CT-scan and the phantom. The procedure is the same as before; the target points were registered and the registration errors are presented in Table 9.

                       CT-scan                            MRI
Target sphere number   Position      Orientation          Position      Orientation
                       error (mm)    error (°)            error (mm)    error (°)
1                      1.3449        3.6556               1.3023        1.8185
2                      1.5144        6.3715               0.8700        11.5040
3                      1.2008        3.7402               0.9443        2.5838
4                      0.8947        4.8009               1.2138        6.8907
5                      0.7983        1.6134               0.9553        5.0057
6                      2.1081        14.2331              1.7734        5.6813
7                      0.6836        6.2664               0.6333        4.9875
8                      1.1168        9.1980               1.0822        5.0677
9                      1.1568        6.2587               1.2226        9.2238
Mean error             1.202         6.238                1.111         5.863
Standard deviation     0.429         3.692                0.324         3.030

Table 9 - Registration error between the medical images and the phantom.

It is important to note that the errors presented in Table 8 and Table 9 are independent of the surgeon; they only depend on the imaging equipment.

The next step in the pre-operative phase is to assess the capacity of the surgeon to select the desired targets and orientations. The surgeon selects the target points and the desired trajectory, as explained in Figure 40, in the StealthStation Treon software.


After performing the rigid body transformation to map the points selected from the image coordinate system to the phantom coordinate system, it is possible to identify the error associated with this step:

Target sphere number   Position error (mm)   Orientation error (°)
1                      0.74                  9.43
2                      1.37                  3.63
3                      1.37                  2.63
4                      1.02                  6.53
5                      1.47                  3.40
6                      1.65                  8.14
7                      0.51                  0.66
8                      0.56                  8.99
9                      1.18                  6.55
Mean error             1.10                  5.55
Standard deviation     0.39                  2.91

Table 10 - Capacity of the surgeon to select the desired target point and orientation.

2.2 PHANTOM REGISTRATION

The registration performed at Hospital Santa Maria is different from the one performed at the Surgical Robotics Lab. At the Surgical Robotics Lab, the phantom-to-robotic-arm registration is a point-to-point match, where the points were collected with the same measurement system, the Polaris Spectra, and the datasets have the same number of points. At Hospital Santa Maria, it is a point-to-surface match based on the iterative closest point method, where a point cloud obtained with an optical tracking device is registered with images obtained from MRI and CT. This registration procedure was computed by the StealthStation Treon software and its error is not available to the user.

2.3 BIOPSIES

Once the phantom’s registration was done, the biopsies were performed. The target points and the desired orientation vectors, expressed in the phantom reference frame, were defined as explained in Figure 40.

For each sphere, once the needle reached its target, its tip position and orientation were acquired by the Polaris Spectra system and expressed in the phantom reference frame.

Comparing the values, we can compute the error at each sphere. These are presented in the following table and in Figure 77:


Target sphere number   Position error (mm)   Orientation error (°)
1                      2.33                  6.29
2                      3.18                  3.84
3                      3.57                  1.58
4                      3.15                  1.20
5                      2.80                  4.55
6                      1.94                  8.00
7                      7.34                  2.83
8                      4.21                  4.57
9                      5.10                  1.11
Mean error             3.74                  3.77
Standard deviation     1.56                  2.23

Table 11 - Simulated biopsy errors of the procedure performed at Hospital Santa Maria. Error between the planned target points and the points reached.

Figure 77 - Simulated biopsy errors of the procedure performed at Hospital Santa Maria. Error between the planned target points and the points reached.


Chapter 5 - Discussion

The first stage of a brain biopsy consists of acquiring the CT-scan and the MRI. These neuroimaging modalities play a critical role in diagnosis, preoperative assessment, surgical planning, and post-therapeutic monitoring of the neurosurgical patient. Optimal use of the different MRI and CT techniques can be a substantial determinant of surgical safety, success and quality of care, and the surgeon must trust these medical images.

As shown in Table 9, for the CT-scan-to-phantom registration the mean position error is 1.202 mm ± 0.429 mm and the mean orientation error is 6.238° ± 3.692°, and for the MRI-to-phantom registration the mean position error is 1.111 mm ± 0.324 mm and the mean orientation error is 5.863° ± 3.030°. These values show that both imaging techniques represent reality with considerable precision.

Another important aspect related to the images is to know how they are related, in other words, whether they represent the same object under study. From Table 8, for the CT-scan-to-MRI registration the mean position error is 0.455 mm ± 0.249 mm and the mean orientation error is 7.487° ± 5.506°. This demonstrates that both images of the same object are complementary, allowing the surgeon to take advantage of the benefits of each one.

A summary of the image errors is presented in the following table:

                               Position error          Orientation error
CT-scan-to-phantom registration   1.202 mm ± 0.429 mm     6.238° ± 3.692°
MRI-to-phantom registration       1.111 mm ± 0.324 mm     5.863° ± 3.030°
CT-scan-to-MRI registration       0.455 mm ± 0.249 mm     7.487° ± 5.506°

Table 12 - Summary of the image registration results.

The image errors may be associated with the parameters defined in the capture of the medical

images, such as the slice thickness of cross-sectional imaging [62].

At Hospital Santa Maria, the surgeon marked the target points with a mean position error of 1.10 mm ± 0.39 mm and a mean orientation error of 5.55° ± 2.91° (Table 10). These values are very similar to the errors obtained from the CT-scan-to-phantom and MRI-to-phantom registrations, which indicates that the surgeon was quite precise during the planning stage.

The final step of the results obtained at Hospital Santa Maria is the difference between the planned target points and the points reached. The surgeon achieved a mean position error of 3.74 mm ± 1.56 mm and a mean orientation error of 3.77° ± 2.23° (Table 11). These values carry the error associated with the planning stage shown before and with the image-to-phantom registration performed with the neuronavigation system StealthStation Treon, whose error was not accessible in the case-study.

When analysing the results of the procedure performed at IST, the crucial stage is the calibration

of the equipment. Considering the calibration of the biopsy needle on the Polaris Spectra by the pivoting technique included in the software, there is an associated error of ±0.64 mm (Table 5). If the tool tip becomes bent during pivoting, which is likely to happen due to the low stiffness of the needle, the tool tip is incorrectly reported.

Another important feature of the robotic arm is that when a pose is fixed, for instance when the needle reaches the tumour, the surgeon is able to change the null-space configuration. On the robotic arm’s side, after the needle calibration, when changing the null-space configuration a mean position error of 1.62 mm ± 0.8 mm and a mean orientation error of 0.54° ± 0.24°, measured by the robotic arm itself, were obtained (Table 4). These values are similar to the ones reached in previous works [63]. When the user executes the XYZ 4-Point procedure to calibrate the offset of the needle relative to the wrist of the robot, he must take 4 different poses of the robot while ensuring that the tip of the needle remains in the same position, which can make this task very difficult to execute. Regarding the orientation, although the ABC 2-Point procedure is easier to perform, the cylindrical shape of the needle’s holder does not facilitate it.

As shown in Table 4, when changing the null-space, the tip of the needle does not maintain its pose, and the error differs when measured by the Polaris Spectra and by the KUKA LWR 4+. The two main reasons for this are a poor calibration of the devices and a poor configuration of the dynamic model of the robotic arm. Nevertheless, both methods show that, after changing the null-space configuration, an error is added to the pose that the robot cannot correct.

The lower the error associated with the calibration phase, the better the results, since these errors propagate throughout the whole process.

The best registration was obtained using only one cone of a target sphere (Table 6). When performing the registration with points similar to those used in the procedure at Hospital Santa Maria (the 1st hypothesis), the results were slightly worse: only approximately 62% of the points had a position error under 1 mm, compared with 100% for the 2nd hypothesis.

One reason for the difference in registration performance is that, in the 2nd hypothesis, the motion of the robotic arm has a smaller amplitude, so the effects of the kinematic miscalibration of the robot are less noticeable.
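The point-based registration itself follows the least-squares rigid motion formulation of [61]: centre both point sets, take the SVD of their cross-covariance matrix, and guard against reflections. A minimal NumPy sketch (variable names are illustrative):

```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (R, t) minimising ||R @ p_i + t - q_i||,
    computed via SVD of the cross-covariance matrix as in [61].
    P, Q: (n, 3) arrays of corresponding points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # flip last axis if det < 0 (reflection)
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Because the fit minimises the residual over the whole workspace spanned by the points, registrations acquired over a larger arm motion expose more of the kinematic miscalibration, consistent with the observation above.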

When performing the phantom-to-robotic-arm registration, besides the error associated with calibration, there is an error associated with the synchronization between the Polaris Spectra and the robotic arm. There is always a delay between the capture of the tool pose and the position acquisition of the robotic arm, so the two datasets are not perfectly synchronized in time. The delay varies from 0.02 s to 0.08 s and is linked to the UDP communication between the model running the Polaris Spectra in soft real-time and the FRI-xPC server.
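One way to mitigate this (a sketch only, not the implementation used in this work) is to timestamp both streams and pair each robot sample with the nearest Polaris sample in time, discarding pairs whose offset exceeds the worst observed delay:

```python
import numpy as np

def align_by_timestamp(t_robot, t_polaris, max_skew=0.08):
    """Pair each robot sample with the nearest Polaris sample in time.
    t_robot, t_polaris: sorted 1-D arrays of timestamps in seconds.
    Returns the matched Polaris indices and a keep-mask over t_robot;
    pairs further apart than max_skew (worst observed delay) are dropped."""
    idx = np.clip(np.searchsorted(t_polaris, t_robot), 1, len(t_polaris) - 1)
    left = t_robot - t_polaris[idx - 1]      # gap to previous Polaris sample
    right = t_polaris[idx] - t_robot         # gap to next Polaris sample
    nearest = np.where(left <= right, idx - 1, idx)
    keep = np.abs(t_polaris[nearest] - t_robot) <= max_skew
    return nearest[keep], keep
```

Interpolating the robot trajectory at the Polaris timestamps would be a further refinement of the same idea.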


The simulated biopsy results obtained at IST have a mean position error of 2.46 mm ± 1.04 mm and a mean orientation error of 3.61° ± 2.75°, and were obtained using the registration based on the 2nd hypothesis (Table 7).

Although the 2nd hypothesis presents a better result, the difference is not significant in terms of position, with only a 5.7% (0.15 mm) improvement. In terms of orientation, the difference is relatively larger: the 2nd hypothesis presents an improvement of 16.8% (0.73°).

When comparing the final results of the surgeon, the robotic arm and the ones obtained in [12]

(Table 13) it is possible to conclude that, in general, the robotic arm achieves better results:

                              Position error       Orientation error

Surgeon                       3.74 mm ± 1.56 mm    3.77° ± 2.23°

Robotic arm – 1st hypothesis  2.61 mm ± 1.52 mm    4.34° ± 3.30°

Robotic arm – 2nd hypothesis  2.46 mm ± 1.04 mm    3.61° ± 2.75°

Salgueiro’s thesis [12]       6.3 mm ± 0.5 mm      Not evaluated

Table 13 - Summary of the simulated biopsy errors.

A straightforward comparison with [12] is not possible because both the test subject and the error measurement technique are different. Nevertheless, the position error obtained is about 61% smaller. Since no orientation error was measured in [12], no comparison is possible for orientation.

When compared with the surgeon, the results are not substantially different in orientation, only a 4% difference, but for position the robotic arm achieved results 34% better, with a mean position error 1.28 mm smaller.
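The percentages above follow directly from the mean errors in Table 13; the arithmetic can be checked as:

```python
# Mean errors from Table 13 (2nd-hypothesis robot results).
surgeon_pos, robot_pos = 3.74, 2.46    # mm
surgeon_ori, robot_ori = 3.77, 3.61    # degrees
salgueiro_pos = 6.3                    # mm, from [12]

pos_gain_vs_surgeon = (surgeon_pos - robot_pos) / surgeon_pos        # ~34%
ori_gain_vs_surgeon = (surgeon_ori - robot_ori) / surgeon_ori        # ~4%
pos_gain_vs_salgueiro = (salgueiro_pos - robot_pos) / salgueiro_pos  # ~61%
```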

It is also important to highlight the procedure time. The surgeon took between 10 and 40 minutes to perform the biopsy for each sphere, depending on the orientation of the sphere in question. The robotic arm took a mean time of 5 minutes per sphere, independently of the desired orientation.


Chapter 6 - Conclusions and Future Work

1 CONCLUSION

The presented work focused on developing a complete robotic solution that can successfully position a needle at a desired target inside the brain. Using a robotic system in this type of surgery brings several advantages over conventional brain surgery: it removes the tremor of the human hand, achieves a higher level of accuracy, allows a smaller burr hole, and decreases the duration of the procedure.

Another important objective of this work was to test the capabilities of the KUKA LWR 4+ in a co-manipulative surgery. Although the robotic system was tested on a brain biopsy, it can be used in similar procedures that involve some kind of positioning, such as the placement of an electrode in DBS.

The first phase of this work was to create a real-time interface with the KUKA LWR 4+ using MATLAB. That interface not only takes full advantage of FRI at its highest communication frequency, 1 kHz, but can also easily be adapted to future works using the robotic arm at the Surgical Robotics Lab. The interface also allows the user to integrate other devices into the same system, such as an optical tracking device, a force sensor or the omega.6 haptic device.

The ultimate product of this thesis is the robotic system developed for brain surgeries. This system comprises not only the KUKA LWR 4+ but also the optical tracking device Polaris Spectra. The developed virtual impedance control solution is a safe and helpful way for the surgeon to guide the robot to the desired target position. The robotic system provides a safe environment for use in surgery, since a security zone was implemented that prevents the robot from entering the no-go zone even if the surgeon forces it.
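The security zone can be understood as a virtual wall in the impedance controller: outside the zone no extra force is applied, while inside it a stiff virtual spring pushes the tool back out. A minimal sketch, assuming a spherical no-go zone and an illustrative stiffness (the actual geometry and gains used in this work may differ):

```python
import numpy as np

def safety_force(tip, zone_center, zone_radius, k_wall=5000.0):
    """Repulsive virtual-wall force for a spherical no-go zone:
    zero outside the zone, a spring force proportional to the
    penetration depth (pointing outward) inside it."""
    d = tip - zone_center
    dist = np.linalg.norm(d)
    if dist >= zone_radius or dist == 0.0:
        return np.zeros(3)               # outside the zone (or degenerate)
    penetration = zone_radius - dist     # how far past the wall we are
    return k_wall * penetration * (d / dist)   # N, pushing back out
```

Because the wall force grows with penetration, the surgeon feels an increasingly strong resistance rather than a hard stop, which is the usual haptic-rendering choice for co-manipulation.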

The robotic solution was tested in a procedure that simulated a brain biopsy as realistically as possible. For this, a phantom that simulates the human skull was used as the patient. The procedure was first performed at Hospital Santa Maria with the current methods used in many hospitals, and then simulated at the Surgical Robotics Lab. The brain biopsy simulated at the Surgical Robotics Lab consisted of three main steps: first, choosing the desired target position and needle trajectory via image processing; second, registering the phantom; and third, guiding the needle to the target point.

All the steps of the real procedure at Hospital Santa Maria and of the simulated procedure at the Surgical Robotics Lab were analysed and their errors quantified. At Hospital Santa Maria, the position error of the entire procedure was 3.74 mm ± 1.56 mm and the orientation error was 3.77° ± 2.23°. These


results are positive because this error includes the errors of many steps, such as the registration error.

At the Surgical Robotics Lab, the position error of the entire procedure was 2.46 mm ± 1.04 mm and the orientation error was 3.61° ± 2.75°. These are positive results when compared with the current procedure at Hospital Santa Maria and with previous projects.

The three main sources of error that influenced the final needle position at the Surgical Robotics Lab were the registration method, the needle positioning and the tool calibration.

The mean position error of the registration process was 0.236 mm ± 0.092 mm, which can be explained by displacement of the table and/or the phantom during the procedure. The tool calibration error on the Polaris Spectra was 0.64 mm and the calibration error on the robotic arm was 0.191 mm. From these values, one can conclude that the main source of error is in fact the calibration process. This is a good sign for the project, since it is the easiest to correct: a calibration system could be employed to reduce this error as much as possible. To reduce the error inherent to the registration procedure, the synchronization method between the robotic arm and the Polaris Spectra should be improved.

To conclude, the final outcome of this project is a substantial improvement over previous works, and its positive results constitute a good foundation for further development and enhancement of this robotic surgical system. The main advantage of the developed system is that it is an elegant and compact solution that provides a fast and easy way to perform brain surgeries.

2 FUTURE WORK

My suggestions and recommendations for the continuation of this project, so as to improve the developed robotic solution, are:

- Implement a 3D graphical environment;

- Implement a calibration kit to help in the calibration of the needle on the robotic arm;

- Develop a specific reference tool to attach to the needle for the Polaris Spectra;

- Improve the Polaris Spectra – robotic arm synchronization;

- Implement the same procedure based on the Joint Impedance Control mode.


Bibliography

[1] Y. S. Kwoh, J. Hou, E. A. Jonckheere and S. Hayati, “A robot with improved absolute

positioning accuracy for CT guided stereotactic brain surgery,” IEEE Transaction on

Biomedical Engineering, vol. 35, no. 2, pp. 153-160, February 1988.

[2] C. K. Curley, “An overview of the current state and use of surgical robots,” Oper. Tech. Gen.

[3] P. Gomes, “Surgical robotics: Reviewing the past, analysing the present, imagining the

future,” Robotics and Computer-Integrated Manufacturing, vol. 27, no. 2, pp. 261-266, 2011.

[4] R. H. Taylor, A. H. Paul, B. D. Mittelstadt, E. Glassman, B. L. Musits and W. L. Bargar, “A

robotic system for cementless total hip replacement surgery in dogs,” Images Twenty-First

Century. Proc. Annu. Int. Eng. Med. Biol. Soc., vol. 93, no. 4, pp. 887-889, 1999.

[5] S. Kalan, S. Chauhan, R. F. Coelho, M. A. Orvieto, I. R. Camacho, K. J. Palmer and V.

2010.

[6] “All About Robotic Surgery - The Official Medical Robotics News Center for AVRA Surgical

Robotics Inc.,” [Online]. Available: http://allaboutroboticsurgery.com. [Accessed 27

February 2016].

[7] “Intuitive Surgical,” [Online]. Available:

http://www.intuitivesurgical.com/company/media/images/da-vinci-xi/. [Accessed 27

February 2016].

[8] P. Jesus, “Robô capaz de operar é usado pela primeira vez em Portugal,” Diário de noticias,

25 June 2010.

[9] A. Alves, “Sanfil será o melhor hospital do Mundo em algumas áreas cirúrgicas,” Diário as

beiras, 1 August 2013.

[10] C. Foundation, “European Academy for Robotic Colorectal Surgery based at the

Champalimaud Foundation,” 13 February 2015. [Online]. Available:

http://www.fchampalimaud.org/en/newsroom/earcs-based-champalimaud-foundation-

view/. [Accessed 27 February 2016].

[11] A. R. Lanfranco, A. E. Castellanos, J. P. Desai and W. C. Meyers, “Robotic Surgery, a

current perspective.,” Ann. Surg., vol. 239, no. 1, pp. 14-21, 2004.

[12] T. Salgueiro, “Robot assisted needle interventions for brain surgery,” 2014.

[13] P. Pires, “Position and Force Control of Lightweight Robot Manipulators for Orthopaedic

Surgery,” Lisboa, 2010.

[14] D. Zerbino, Biopsy: its history, current and future outlook, Ukrainy: Likars’ ka

sprava/Ministerstvo okhorony zdorov’ia Ukrainy, 1994, p. 1.

[15] “American Brain Tumor Association,” [Online]. Available: http://www.abta.org/brain-tumor-

information/diagnosis/biopsy-procedure.html. [Accessed 23 February 2016].

[16] “Offical web site of Medtronic,” [Online]. Available: http://www.medtronic.com/for-

healthcare-professionals/products-therapies/neurological/surgical-navigation-and-


imaging/neurosurgery-imaging-and-surgical-navigation/surgical-procedures/. [Accessed 23

February 2016].

[17] C. Daumas-Duport, V. Monsaingeon, L. Szenthe and G. Szikla, “Serial stereotactic biopsies:

a double histological code of gliomas according to malignancy and 3-d configuration, as an

aid to therapeutic decision and assessment of results,” in Stereotactic and Functional

Neurosurgery, 1982.

[18] [Online]. Available: https://functionalneurosurgery.net/parkinsonsurgery.htm.

[19] C. Alafaci, . A. Conti and F. Tomasello, “Navigated Brain Stimulation (NBS) for Pre-Surgical

Planning of Brain Lesion in Critical Areas: Basic Principles and Early Experience,” Messina,

Italy.

[20] P.-F. D’Haese, E. Cetinkaya, P. E. Konrad and C. Kao, “Computer-aided placement of deep

brain stimulators: from planning to intraoperative guidance,” in Medical Imaging, 2005.

[21] “Desert Spine and Neurosurgical Institute,” [Online]. Available: http://www.desert-

spine.com/Brainstemtumors.htm.

[22] “Neurological Surgery Blog,” [Online]. Available: http://www.neurologicalsurgery.in/blog/.

[23] [Online]. Available: http://ogles.sourceforge.net/Ogles/ogles-doc/.

[24] L. D. Lunsford and D. Kondziolka, “Leksell stereotactic apparatus,” in Stereotactic and

Functional Neurosurgery, Springer, 2009, pp. 469-485.

[25] M. Ryan, R. K. Erickson, D. N. Levin, C. A. Pelizzari, R. L. Macdonald and G. J. Dohrmann,

“Frameless stereotaxy with real-time tracking of patient head movement and retrospective

patient-image registration,” J. Neurosurg., vol. 85, no. 2, pp. 287-292, 1996.

[26] N. Haberland, K. Ebmeier, R. Hliscs, J. Grunewald, J. Silbermann, J. Steenbeck, H. Nowak

and R. Kalff, “Neuronavigation in surgery of intracranial and spinal tumors,” J. Cancer Res.

Clin. Oncol, vol. 126, no. 9, pp. 529-541, 2000.

[27] T. A. N. Hernes, F. Lindseth, T. Selbekk, A. Wollf, O. V. Solberg, E. Harg, O. M. Rygh, G.

A. Tangen, I. Rasmussen, S. Augdal, F. Couweleers and G. Unsgaard, “Computer-assisted

3D ultrasound-guided neurosurgery: technological contributions, including multimodal

registration and advanced display, demonstrating future perspectives,” Int. J. Med. Robot.

Comput. Assist. Surg., vol. 2, no. 1, pp. 45-59, 2006.

[28] “Stealth Station AxiEM - Medtronic,” [Online]. Available:

http://www.stealthstationaxiem.com/. [Accessed 2 March 2016].

[29] P. M. Black, T. Moriarty, A. E., P. Stieg, E. J. Woodard, P. L. Gleason, C. H. Martin, R. B.

Kikinis, R. B. Schwartz and F. A. Jolesz, “Development and implementation of intraoperative

magnetic resonance imaging and its neurosurgical applications,” Neurosurgery, vol. 41, no.

4, pp. 831-845, 1997.

[30] C. Nimsky, O. Ganslandt, P. Hastreiter and R. Fahlbusch, “Intraoperative compensation for

brain shift,” Surg. Neurol., vol. 56, no. 6, pp. 357-364, 2001.

[31] D. L. G. Hill, C. R. Maurer, R. J. Maciunas, J. A. Barwise, J. M. Fitzpatrick and M. Y. Wang,

“Measurement of intraoperative brain surface deformation under a craniotomy,”

Neurosurgery, vol. 43, no. 3, pp. 514-526, 1998.

[32] Y. Miyagi, F. Shima and T. Sasaki, “Brain shift: an error factor during implantation of deep

brain stimulation electrodes,” J. Neurosurg., vol. 107, no. 5, p. 989–97, 2007.


[33] C. E. Chatillon, K. Mok, J. A. Hall and A. Olivier, “Comparative study of manual versus robot-

assisted frameless stereotaxy for intracranial electrode implantation,” Epilepsy Curr., vol.

12, no. 1, 2011.

[34] “French robots are becoming essential,” France in Canada - Consulate General of France

in Vancouver, 2012.

[35] M. Lefranc, C. Capel, A.-S. Pruvot-Occean, A. D. C. Fichten, P. Tousaint, D. Le Gars and

J. Peltier, “Frameless robotic stereotactic biopsies: a consecutive series of 100 cases,” J.

Neurosurg., vol. 122, no. February, pp. 342-352, 2015.

[36] S. Ferrand-sorbets, D. Taussing, M. Fohlen, C. Bulteau, G. Dorfmuller and O. Delalande,

“Frameless Stereotactic Robot-guided Placement of Depth Electrodes for

Stereoelectroencephalography in the Presurgical Evaluation of Children with Drug-resistant

Focal Epilepsy,” 2006.

[37] T. R. K. Varma, P. R. Eldridge, A. Forster, S. Fox, N. Fletcher, M. Steiger, P. Littlechild, P.

Byrne, A. Sinnott, K. Tyler and S. Flintham, “Use of the neuromate stereotactic robot in a

frameless mode for movement disorder surgery,” Stereotact. Funct. Neurosurg., vol. 80, no.

1-4, pp. 132-135, 2003.

[38] “Renishaw,” [Online]. Available: www.renishaw.com. [Accessed 10 December 2015].

[39] M. S. Eljamel, “Robotic applications in neurosurgery,” Med. Robot., pp. 41-64, 2008.

[40] J. Rosen, B. Hannaford and R. M. Satava, Surgical Robotics: Systems Applications and

Visions, Springer, 2011.

[41] J. Pridgeon, “Commentary: Virtual Reality and Robotics in Neurosurgery,” vol. 72, pp. 1-6,

January 2013.

[42] W. C. Hong, J. C. Tsai, S. D. Chang and J. M. Sorger, “Robotic skull base surgery via

supraorbital keyhole approach: a cadaveric study,” Neurosurgery, vol. 72, no. SUPPL.1,

January 2013.

[43] “Neuroarm,” [Online]. Available: http://www.neuroarm.org. [Accessed 20 December 2015].

[44] G. R. Sutherland, S. L. S. Wolfsberger and K. Zarei-Nia, “The evolution of NeuroArm,”

Neurosurgery, vol. 72, no. SUPPL.1, pp. 27-32, January 2013.

[45] M. J. Lang, A. D. Greer and G. R. Sutherland, “Intra-operative robotics: NeuroArm,” Acta

Neurochir. Suppl., vol. 109, pp. 231-236, 2011.

[46] G. R. Sutherland, I. Latour and A. D. Greer, “Integrating an image-guided robot with

intraoperative MRI,” IEEE Eng. Med. Biol. Mag., vol. 27, no. 3, pp. 59-65, 2008.

[47] A. D. Greer, P. M. Newhook and G. R. Sutherland, “Human–machine interface for robotic

surgery and stereotaxy,” IEEE/ASME Trans. Mechatronics,, vol. 13, no. 3, pp. 355-361,

2008.

[48] G. Nikolic, B. Jerbic and D. Chudy, “Robotic Applications in Surgery with Special Emphasis

on Applications in Neurosurgery,” pp. 149-172, 2013.

[49] B. Jerbic, G. Nikolic and D. Chudy, “Robotic Application in Neurosurgery Using Intelligent

Visual and Haptic Interaction,” Int. J. Simul. Model., vol. 14, pp. 71-84, 2015.

[50] P. Teodoro, “Robust surface reconstruction and US bone registration for hip resurfacing

surgery,” Lisboa, 2015.


[51] R. Bischoff, J. Kurth, G. Schreiber, R. Koeppe, A. Albu-Schaffer, A. Beyer, O. Eiberger, S.

Haddadin, A. Stemmer, G. Grunwald and G. Hirzinger, “The KUKA-DLR lightweight robot

arm - a new reference platform for robotics research and manufacturing,” Robot. (ISR), 2010

41st Int. Symp. 2010 6th Ger. Conf. Robot., pp. 1-8, 2010.

[52] KUKA System Technology, KUKA system software 5.6 lr - operating and programming

instructions for system integrators, 2010.

[53] KUKA System Technology, KUKA Fast Research Interface 1.0 – For KUKA System

Software 5.6 lr, 2011.

[54] “Official website of NDI,” [Online]. Available: http://www.ndigital.com/msci/applications/.

[55] Kuka Roboter GmbH, KUKA.FastResearchInterface 1.0 - For KUKA System Software 5.6

lr, 2011.

[56] L. Žlajpah and T. Petrič, “FRI-xPC server for KUKA FRI controller,” Ljubljana, Slovenia,

February 2014.

[57] K. Shoemake, “Animating Rotation with Quaternion Curves,” in ACM SIGGRAPH Comput.

Graph., San Francisco, 1985.

[58] V. Kremer, “Quaternions and SLERP,” 2008.

[59] “International Neurosurgical Center,” [Online]. Available:

http://www.neurosurgery.com.ua/index_en.php?p=32. [Accessed 20 March 2016].

[60] “Medtronic,” [Online]. Available: http://www.medtronic.com/for-healthcare-

professionals/products-therapies/neurological/surgical-navigation-and-

imaging/neurosurgery-imaging-and-surgical-navigation/surgical-procedures/#tab2.

[Accessed 20 March 2016].

[61] O. Sorkine, “Least-Squares Rigid Motion Using SVD”.

[62] U. Spetzger, U. Hubbe, T. Struffert, M. H. T. Reinges, T. Krings, G. A. Krombach, J. Zentner,

J. M. Gilsbach and H. S. Stiehl, “Error analysis in cranial neuronavigation,” Minim. Invasive

Neurosurg., vol. 45, no. 1, pp. 6-10, 2002.

[63] H. Monnich, D. Stein, J. Raczkowsky and H. Worn, “System for Laser Osteotomy in Surgery

with the KUKA Lightweight Robot - First Experimental Results”.


Appendix A

1 FLAGS TO SELECT THE CONTROL MODE OF THE ROBOTIC ARM

Control mode                  cmdMode   cmdFlag   cmdRun

Monitor Mode                     0          0        0

Joint Position control           1          1        1

Joint Impedance control          3         53        1

Cartesian Impedance control      2      13617        5

Table 14 - Flags to select the control mode of the robotic arm
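In a client program these triplets can be kept in a small lookup table, for example as follows (a sketch only; the field names follow Table 14, while the surrounding FRI client code is hypothetical):

```python
# Flag triplets (cmdMode, cmdFlag, cmdRun) from Table 14, keyed by
# a readable name for use in a hypothetical FRI client program.
CONTROL_MODES = {
    "monitor":             (0, 0,     0),
    "joint_position":      (1, 1,     1),
    "joint_impedance":     (3, 53,    1),
    "cartesian_impedance": (2, 13617, 5),
}

def command_words(mode):
    """Return the (cmdMode, cmdFlag, cmdRun) triplet for a control mode."""
    return CONTROL_MODES[mode]
```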

2 MINIMUM AND MAXIMUM STIFFNESS AND DAMPING COEFFICIENT VALUES FOR THE JOINT IMPEDANCE CONTROL OF THE KUKA MANIPULATOR

Joint   Spring stiffness (Nm/rad)   Damping coefficient

        Min       Max               Min      Max

A6      0.01      2000              0.1      1

A5      0.01      2000              0.1      1

A4      0.01      2000              0.1      1

A3      0.01      2000              0.1      1

E1      0.01      2000              0.1      1

A2      0.01      2000              0.1      1

A1      0.01      2000              0.1      1

3 MINIMUM AND MAXIMUM STIFFNESS AND DAMPING COEFFICIENT VALUES FOR THE CARTESIAN IMPEDANCE CONTROL OF THE KUKA MANIPULATOR

Axis   Spring stiffness               Damping coefficient

       Min             Max           Min      Max

X      0.01 (N/m)      5000 (N/m)    0.1      1

Y      0.01 (N/m)      5000 (N/m)    0.1      1

Z      0.01 (N/m)      5000 (N/m)    0.1      1

A      0.01 (Nm/rad)   300 (Nm/rad)  0.1      1

B      0.01 (Nm/rad)   300 (Nm/rad)  0.1      1

C      0.01 (Nm/rad)   300 (Nm/rad)  0.1      1