POLITECNICO DI MILANO
School of Industrial and Information Engineering
Laurea Magistrale in Mechanical Engineering – Production
Systems
Dynamic HRA for Surgery applications: development of
a Dynamic Event Tree simulation tool for Robotic
Radical Prostatectomy
Supervisor:
Prof. Paolo TRUCCO
Co-supervisor:
Eng. Rossella ONOFRIO
Master Thesis of:
Eleonora Paola TOFFOLO
Matr. 837522
Academic Year 2016/2017
TABLE OF CONTENTS
TABLE OF CONTENTS .......................................................................................I
TABLE OF CONTENT: FIGURES ................................................................... IV
TABLE OF CONTENT: TABLES ..................................................................... VI
ABSTRACT .......................................................................................................... 1
EXECUTIVE SUMMARY ................................................................................... 3
INTRODUCTION ............................................................................................... 17
CHAPTER 1: HUMAN RELIABILITY AND RECOVERY ANALYSIS IN
INDUSTRIAL AND HEALTHCARE SECTORS ............................................. 22
1.1 Human Reliability Analysis: from industrial to healthcare sector ....... 22
1.1.1 What is Human Reliability Analysis about? ...................................... 22
1.1.2 Role of human cognition in HRA ...................................................... 23
1.1.3 The definition of Performance Shaping Factors ................................ 25
1.1.4 Surgical environment peculiarities and current state of HRA application
..................................................................................................................... 26
1.1.5 Strengths and flaws of HEART technique ....................................... 28
1.2 Recovery analysis as a development of HRA second generation ............. 34
1.2.1 The concept of Recovery in System Safety Engineering ................... 34
1.2.2 How to model recovery: IFs and Dependency ................................... 36
1.2.3 The relevance of recovery paths in Surgery ....................................... 39
1.2.4 Applications of recovery analysis in literature ................................... 40
1.2.5 Current gaps in literature .................................................................... 44
1.2.6 Further developments in the Healthcare sector .................................. 45
CHAPTER 2: DYNAMIC RISK ASSESSMENT AND DYNAMIC EVENT
TREES ................................................................................................................. 47
2.1 Dynamic generation HRA ......................................................................... 47
2.1.1 From static to dynamic analysis ......................................................... 47
2.1.2 Historical Evolution of dynamic HRA in Industry ............................ 48
2.1.3 Simulation tools: benefits and challenges .......................................... 53
2.2 The crucial role of PSFs: properties and behaviour over time .................. 57
2.3 Dynamic Event Trees as a tool to formalize system/procedure evolution 60
2.3.1 Introduction ........................................................................................ 60
2.3.2 The five characteristics of DET ......................................................... 60
2.3.3 Industrial applications of DET ........................................................... 62
2.3.4 Gaps in literature ................................................................................ 67
2.3.5 Further developments in the Healthcare sector .................................. 68
2.4 Study objectives ........................................................................................ 72
CHAPTER 3: THE EMPIRICAL SETTING ..................................................... 74
3.1 Introduction ............................................................................................... 74
3.2 Minimally Invasive Surgery ..................................................................... 75
3.3.1 DaVinci Robot ................................................................................... 78
3.3 Robotic Surgery ........................................................................................ 82
3.3.2 Benefits and limitations ................................................................ 84
3.3.3 Robot applications ............................................................................. 86
CHAPTER 4: STUDY METHODOLOGY ........................................................ 91
4.1 Introduction ............................................................................................... 91
4.2 Dynamic Risk Assessment - preliminary phases ...................................... 92
4.2.1 Task flow diagram and recovery paths .............................................. 93
4.2.2 IFs and IFs’ impact definition ............................................................ 93
4.2.3 Modified HEART and integration with the DET framework .......... 103
4.3 Dynamic risk assessment implementation .............................................. 113
4.3.1 DET as a tool to integrate nominal probabilities procedures and paths
................................................................................................................... 113
4.3.2 Critical tasks identification .............................................................. 115
4.4 Illustration of the simulation procedure .................................................. 116
4.5 Factor Analysis ........................................................................................ 118
CHAPTER 5: CASE STUDY ........................................................................... 120
5.1 Introduction ............................................................................................. 120
5.2 Surgical Technique ............................................................................. 121
5.3 Application of the proposed Dynamic HEART Methodology ........... 124
5.3.1 Application of HEART technique ............................................... 126
CHAPTER 6: RESULTS .................................................................................. 128
6.1 Numerical analysis of the simulation results ........................................... 129
6.2 Probability Density Functions of Patient Grade Outcomes .................... 140
CHAPTER 7: CONCLUSIONS........................................................................ 145
7.1 Theoretical implications and future research .......................................... 147
7.2 Implications and relevance for practitioners ........................................... 150
REFERENCES .................................................................................................. 153
WEBSITE REFERENCES ............................................................................... 157
ACKNOWLEDGEMENTS
APPENDIX 1: Tools used for RARP procedure .............................................. 158
APPENDIX 2: Validated Task Analysis of BA-RARP procedure ................... 160
APPENDIX 3: Validated Task Analysis-Parallelism between tasks performed at
console and those at the table ............................................................................ 163
APPENDIX 4: Contributing factor classifications in the human factors
classification framework for patient safety (Mitchell et al. 2016) .................... 168
APPENDIX 5: Simulation Tool’s Script (Matlab®) ........................................ 172
APPENDIX 6: Matlab® functions ..................................................................... 178
APPENDIX 7: Questionnaire Results ............................................................... 180
TABLE OF CONTENT: FIGURES
Figure 1: Ranking by yearly death (Makary et al., 2016) ................................... 17
Figure 2: List of GTT developed for NARA ...................................................... 31
Figure 3: List of EPC developed for NARA ....................................................... 32
Figure 4: List of quantified GTTs developed for NARA ................................... 32
Figure 5: List of EPC developed for CARA ....................................................... 33
Figure 6: The uses of simulation and modelling in HRA ................................... 55
Figure 7: Proportion of use of MIS, Robotics and Open procedure in different
setting .................................................................................................................. 78
Figure 8: Typical set-up of robot system in operating room (a) sketch (b) real-life
............................................................................................................................. 80
Figure 9: International increase of DaVinci surgical procedures ....................... 87
Figure 10: Increase of DaVinci speciality surgeries in recent years ................... 89
Figure 11: Plots of the triangular pdf of IFs in surgery ...................................... 97
Figure 12: Flowchart representing main steps of traditional HEART methodology
........................................................................................................................... 104
Figure 13: Pdf distributions for the "homogenous" case .................................. 114
Figure 14: Phases for the Critical task identification ........................................ 116
Figure 15: Sequence of the procedure simulated by the tool ............................ 125
Figure 16: The probability of a Grade 0 outcome for the 0.95 percentile of patients
........................................................................................................... 133
Figure 17: The probability of a Grade 3 outcome for the 0.95 percentile of patients
........................................................................................................... 134
Figure 18: The probability of a Grade 3 outcome for the 0.05 percentile of patients
........................................................................................................... 135
Figure 19: Grades’ PDF for the complete set of simulation runs ..................... 141
Figure 20: Grades’ PDF for the "only IF 1" set simulation run ........................ 141
Figure 21: Grades’ PDF for the "only IF 5" set simulation run ........................ 142
Figure 22: Grades’ PDF for the "only IF 7" set simulation run ........................ 142
Figure 23: Grades’ PDF for the "only IF 9" set simulation run ........................ 143
Figure 24: Grades’ PDF for the "only IF 10" set simulation run ...................... 144
Figure 25: Grades’ PDF for the "No IF" set simulation run ............................ 144
TABLE OF CONTENT: TABLES
Table 1: Taxonomy for the IFs in Surgery- high technology content (Onofrio et
al. 2015) .............................................................................................................. 27
Table 2: Recovery influencing factors (RIFs) (Subotic et al. 2007) ................... 38
Table 3: Literature review of dynamic HRA applications ................................... 49
Table 4: DaVinci surgical procedures ................................................................. 88
Table 5: Validated surgical taxonomy of Influencing Factors ........................... 94
Table 6: Comparison between HEART, NARA, and CARA multipliers ........... 98
Table 7: Comparison between modified HEART multipliers and new ones .... 102
Table 8: Generic Task Types (GTTs) and relative Nominal Human Unreliability
(NHU) ............................................................................................................... 105
Table 9: HEART 38- Error-Producing Conditions (Williams, 1986) ............... 106
Table 10: Flowchart representing main steps of traditional HEART methodology
........................................................................................................................... 106
Table 11: Comparison between IFs’ taxonomy and traditional EPC one .......... 107
Table 12: Benefits of robotic prostatectomy over open and laparoscopic surgery
(http://roboticprostatesurgeryindia.com/) ......................................................... 123
Table 13: Outcomes following robotic radical prostatectomy in the select reported
studies ............................................................................................................... 123
Table 14: EMs’ probability range definition .................................................... 130
Table 15: EMs' grade range definition .............................................................. 130
Table 16: Probability of having the 95% of patients respectively with the
minimum and maximum grade possible ........................................................... 131
Table 19: Analysis of IF clusters' impact: probability of Grade 0 for the 0.95
percentile of patients and of Grade 3 for the 0.05 percentile of patients .......... 137
Table 17: Clavien-Dindo grading system for the classification of surgical
complications .................................................................................................... 181
ABSTRACT
Patient safety and the prevention of harm caused by medical errors, whether
diagnostic or therapeutic, have always been among the priority issues in
healthcare; the phenomenon is even more pronounced today, given the growing
level of information and awareness of patients, who demand, with ever greater
force, more protection and certainty.
Data reported by several studies, including the most recent one carried out at the
Johns Hopkins University School of Medicine (Makary et al., 2016), confirm that
death due to medical error ranks third among the causes of death in the United
States, and there is reason to believe that this result can readily be extended to
other advanced countries worldwide.
The concept of “medical error” has undergone several interpretations over the
centuries; it can be defined as “a medical treatment that shifts the level of risk
outside the margins of acceptable failure suggested by medical practice, causing
harm to the patient”.
In a perspective of continuous evolution towards better care and safety in
healthcare, the need is confirmed to apply risk analysis techniques, and in
particular techniques for assessing the risk tied to the human component (Human
Reliability Analysis, HRA), so as to implement corrective and/or preventive
actions and to reduce the vulnerability of the clinical process, addressing the
related risk management with a systemic approach.
The present study aims to develop and test a human reliability simulation tool
specifically designed for medical applications, and in particular for surgical
procedures.
The integration of the quantitative HEART technique, suitably modified for
medical applications, with the Dynamic Event Tree (DET) framework allowed us
to develop a tool for the dynamic simulation of a surgical procedure subject to
surgeon error, from which probability estimates for different levels of patient
outcome can be obtained. The method and tool were tested in a robotic surgery
context on a specific surgical procedure, the BA-RARP. The study allowed us to
draw relevant conclusions about the factors that most influence its successful
outcome.
The quantitative analysis showed that the condition that most degrades the
surgeon’s performance in the operating room is background noise caused by
interactions among the staff, or between the staff and the instrumentation itself,
unrelated to the procedure being performed. The analysis by factor category
(Personal, Team, and Organizational) identified team dynamics as the most
critical aspect, thus placing emphasis on the coordination, cooperation, and
communication skills of the staff involved.
This work has contributed to reducing the gap observed in the literature regarding
the diffusion of human reliability analysis techniques in healthcare, confirming in
particular the potential of the HEART technique in areas other than industry; it is
hoped, finally, that this study may support the evolution of the training of future
robotic surgeons and the design of safer surgical procedures, as well as of
checklists and simulation scenarios for learning.
To conclude and complete the analysis, further insights into the results obtained
are reported, together with some proposals for improving work organization and
optimizing resources in the medical field; the development opportunities of this
line of study are also illustrated, together with several suggestions for future work.
EXECUTIVE SUMMARY
I. Introduction
The entire line of study regarding Human Error Probability rests on Alexander
Pope’s quote “To err is human”. This statement encapsulates the main pillars of
Human Reliability Analysis and of Safety Engineering in general: the harmfulness
and futility of blame culture, since errors are inevitable; and the need to relate
human errors to the mental processes lying behind them.
The importance of the role of humans is easily recognised in the design,
implementation, control, and maintenance of any safety-critical system; and
complex systems, like modern hospitals, raise major safety concerns because of
their potential for accidents with fatal consequences.
From these key points stems the growing interest, in many sectors including
healthcare, in a systematic approach to the analysis of human actions and the
assessment of human reliability. Several formal Human Reliability Analysis
(HRA) methods have been proposed over the last 40 years, with numerous
applications in the nuclear, transport, and process industries.
Anyone who has ever dealt with Safety Engineering knows that the most dreadful
scenario, in terms of event severity, is the one involving human loss; it is
therefore natural to regard Healthcare, and specifically Surgery, as a proper field
of application for this kind of analysis.
In transferring HRA techniques to the Healthcare sector, we must consider all the
customizable aspects of such techniques in order to select the one that best fits
our case and to calibrate its variants according to the application under study.
To achieve a quantitative estimate of the HEPs, many HRA methods utilize
Performance Shaping Factors (PSFs), which characterize significant facets of
human error and provide a numerical basis for modifying default or nominal HEP
levels (Boring 2006). Consequently, after having selected the most suitable HRA
technique, it was fundamental to determine the set of PSFs to be involved in this
kind of environment through the definition of an ad hoc taxonomy, which required
a thorough investigation of the pre-existing literature, from industrial to medical
and surgical sources, together with validation-oriented work based on surgeons’
interviews and judgement elicitation.
This line of research started at Politecnico di Milano a couple of years ago, and
some preliminary work had already been carried out in previous studies.
Two specific studies have already been produced on the topic of HRA adaptation
for Surgery. The first one (Onofrio et al. 2015) was more related to the taxonomic
aspect of the problem, while the second one (Trucco et al. 2017) proposed an
empirical application of a quantitative technique derived from an adjusted version
of HEART, together with the task analysis development and the taxonomy
validation for the specific case.
The aim of this work was to take a step forward by introducing the possibility of
quantifying recovery probabilities and paths; we therefore further adapted the
approach presented in previous studies, introducing this concept with the support
of experts, who validated the recovery paths, hypotheses, and data coming into
play, and helped estimate the related probabilities.
Searching for new developments of HEART, we came across two main updates
of this technique, whose objective was to re-actualize and specialize the general,
and in some respects obsolete, tool for different fields of application, such as
Nuclear Power Plants (NPP) and Air Traffic Control (ATC).
In fact, whilst this technique has served well, it was developed many years ago
and it has remained principally the same technique, based on the same original
data set (Kirwan et al. 2016). It was therefore felt that a redefinition of the Error
Producing Conditions (EPCs) involved, and of the relative multipliers, could be
developed based on more recent and relevant data; since these updates represent
the guidelines for future research, we opted for the adoption of their more recent
taxonomies and GTT definitions, re-adapted for surgical application.
Reliability and performance management look at HRA databases and techniques
almost exclusively as tools to prevent human errors and failures. However, if we
take a closer look and think of what exactly we want to prevent, it is the
consequences of a failure rather than the occurrence of the failure itself (Jang,
Jung, et al. 2016a). Coherently, the recovery of human errors is as important as
the prevention of human errors and failures. This consideration actually paves the
way to a complementary field of study concerning the fostering and the
investigation of recovery processes.
The integration of time dimension in human behaviour analysis is the logical
consequence of the investigation of human mental processes, and of the fact that
many of the so-called influencing factors are implicitly related to the timeline of
the process/system they describe. In this sense, dynamic risk assessment allows
more detailed analyses and an in-depth mapping of performance measures.
Nowadays, it is recognized that a number of Dynamic Event Trees and direct
simulation software packages for treating operator cognition and plant behaviour
during accident scenarios are being developed or are already available (National
& Falls 1996); in particular, simulation-based HRA techniques differ from their
antecedents in that they are dynamic modelling systems that reproduce human
decisions and actions as the basis for performance estimation.
The possibility to use simulation tools to run an unlimited number of scenarios
(virtually without actual humans once the configuration is initiated), and to obtain
almost instantaneous results, dramatically reduces the costs. Hence, the
opportunity to perform and analyse a wider spectrum of scenarios in a generally
easier and more cost-effective way is the principal benefit of this type of
technique.
When an individual encounters an abnormal event, the natural reaction often
includes physical, cognitive, and emotional responses (Chang & Mosleh 2007).
These three types of response also influence each other; and there is ample
evidence that they also affect problem-solving behaviour; it is thus evident that,
in the dynamic analysis case too, PSFs play a crucial role in the estimation of
human behaviour and error probability.
In addition to these internal PIFs influencing cognitive processes and decision-
making attitude, there are external PIFs (e.g., organizational factors) affecting
individuals’ behaviour both directly and indirectly; and each of these, both related
to personal and environmental domains, can potentially evolve over time.
To cope with this new focus, different types of PSF adjustments have been
proposed and analysed; but, in order to take sound decisions and to make the
result compatible with different applications, it is important to understand the
fundamental role that the scenario involved plays in the process.
This point is backed by numerous studies confirming that the first step in a
Dynamic Risk Assessment is to identify the accident scenarios of interest; indeed,
the interface and interaction between the plant and its operators, together with
crew cognition, constitute an inherently critical dynamic process.
As regards our study, we limit ourselves to the implementation of an already
validated taxonomy, considering the impact of the factors as constant but
changing the set of IFs considered depending on the task involved, thus allowing
discrete changes of the IFs over time.
Among the dynamic HRA tools encountered during our literature review,
Dynamic Event Trees proved of particular interest to us, owing to their extreme
flexibility and their ability to analyse scenario dynamics under the combined
effects of stochastic events.
A Dynamic Event Tree is defined by five key characteristics:
1. The branching set (level of detail);
2. The set of variables defining the system state;
3. The branching rules (to determine when a branching should take place);
4. The sequence expansion rules (to limit the tree expansion);
5. The quantification tools to compute the deterministic state variables (e.g.,
process variables).
For each branching point, the quantification process involves four steps:
1. Evaluation of crew's cognitive state and of the nature/quality of the
information regarding the plant available to the team;
2. Qualitative evaluation of the conditional likelihood of each branch;
3. Initial determination of the conditional probability for each branch;
4. Comparison of the conditional probabilities for similar situations in
different parts of the tree, and adjustment.
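The four quantification steps above can be sketched as a sampling routine over a node's branches. The structure and names below are our own illustration (written in Python, whereas the thesis tool is a Matlab® script), not the actual implementation:

```python
import random

def choose_branch(branches, rng=random):
    """Sample one branch at a DET node from its conditional probabilities
    (step 3 above); probabilities are renormalized first, mirroring the
    adjustment performed in step 4."""
    total = sum(p for _, p in branches)
    r = rng.random() * total
    cum = 0.0
    for label, p in branches:
        cum += p
        if r <= cum:
            return label
    return branches[-1][0]  # guard against floating-point round-off

# Hypothetical node: nominal success versus two error modes (EM1, EM2)
node = [("success", 0.90), ("EM1", 0.07), ("EM2", 0.03)]
```

Repeated calls with a shared random source trace one stochastic path through the tree, which is the basic move a Dynamic Event Tree simulation iterates.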
When it comes to constructing an event tree, a probability value must be
determined at each branch of the tree. This value can derive from expert
judgement, as in our case, or from data collected in databases and adapted to the
situation of interest.
Clearly, this kind of technique has the same drawbacks attributed to all studies
making extensive use of expert judgement. However, the last of the steps
mentioned above (i.e., comparing similar branches) greatly facilitates the
assessment, because it enables the analyst to use information concerning the
relative likelihoods of scenarios and to perform a double check on the results
obtained and proposed.
II. Study methodology and results
The chapter illustrating the study methodology provides the scientific evidence,
an illustration of the various boundary conditions involved in our work, and the
methodology through which we quantitatively evaluated our model.
Its scope is to prove the consistency of our methodology; indeed, the most
important aspect of this part of the work was the adoption of a systematic
approach, which we applied in tackling every aspect of the case study.
The steps we addressed in order to justify our analysis were:
- The estimation of the Proportion of Affect (PoA) of the Influencing
Factors (IFs);
- The identification of the Error Modes (EMs) and the estimation of their
relative probabilities;
- The identification of the Generic Task Types (GTTs) involved in the
procedure according to HEART;
- The development of the algorithm for the calculation of the DET;
- The definition of the Patient Outcome classification.
Before starting our work, we had to go through several preliminary phases, since
the elements needed to implement a study of this kind are numerous; the major
issue we encountered in dealing with the Healthcare sector was the lack of
reliable data, which made the extensive use of experts’ estimates the only viable
choice.
The modified version of HEART proposed here is the result of a series of
considerations and adaptations of the original version, made to render it more
suitable for surgical application; and, as mentioned before, the innovative aspect
of this work, with respect to the versions presented in previous studies, consisted
in the introduction of the 9 Error Modes (EMs) stemming from the most critical
tasks (previously identified as “Isolation of lateral peduncles and of posterior
prostate surface”; “Santorini detachment from the anterior surface of the
prostate”; and “Anastomosis”), together with the association of Patient outcome
grades, according to the Clavien-Dindo classification for Patient outcome (the
most widely accredited classification in the surgical sector), to each of the
recovery branches considered.
In order to identify the 9 most relevant recovery paths associated with these tasks,
we collected the opinions of three surgeons through standardized, ad-hoc
interviews. Experts’ judgements were also employed for the evaluation of the
PoA, for which we referred to the IFs’ triangular distributions produced by the
PhD work of Rossella Onofrio at Politecnico di Milano, specifically aimed at
creating a statistical ground for the definition of HEART’s weights in Healthcare.
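A triangular PoA distribution of this kind can be sampled directly with the standard library; the bounds and mode used below are purely illustrative, not values from the cited work:

```python
import random

def sample_poa(low, mode, high, rng=random):
    """Draw one Assessed Proportion of Affect value from a triangular pdf.
    The (low, mode, high) parameters would come from the expert-elicited
    distribution of each IF; the figures passed below are illustrative only.
    Note that random.triangular takes its arguments as (low, high, mode)."""
    return rng.triangular(low, high, mode)

# Hypothetical IF whose most likely PoA is 0.4, bounded in [0.1, 0.8]
poa = sample_poa(0.1, 0.4, 0.8)
```

Drawing the PoA from a distribution, rather than fixing it, is what lets the simulation propagate expert uncertainty into the final outcome probabilities.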
As noted above, in our study, as in Trucco et al. (2017), surgeons were
responsible for the more “judgemental and structured” steps: selecting the
appropriate Nominal Human Unreliability (NHU) category; selecting the
Influencing Factors (IFs) from the validated surgical taxonomy and their
corresponding Assessed Proportion of Affect (PoA); and defining the Error
Modes possible for each critical task together with their relative probabilities
(alpha).
Since the results of a survey significantly depend on the assessor’s knowledge of
the task and his personal opinion, the three surgeons involved in the study were
all well experienced, well trained, and aware of the steps and order of the
procedure.
Integrating the set of formulas of the modified HEART technique for Surgery
with a DET structure, we set up a tool able to randomly generate probable paths
for the procedure; the resulting Matlab® code can be ideally divided into three
main parts:
- Initialization of data;
- Quantitative evaluation of paths (iterative part);
- Grade’s probability distribution evaluation.
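This three-part division maps naturally onto a simulation skeleton. The following Python sketch mirrors it (the thesis tool itself is a Matlab® script; the function names, toy path model, and grade values here are illustrative only):

```python
import random
from collections import Counter

def run_simulation(n_runs, simulate_path, rng=None):
    """Covers part 2 (iterative path evaluation) and part 3 (grade pdf
    estimation); part 1, data initialization, is assumed done by the caller."""
    rng = rng or random.Random()
    counts = Counter()
    for _ in range(n_runs):
        grade = simulate_path(rng)  # one random walk through the DET
        counts[grade] += 1
    # Empirical probability distribution over Patient Outcome Grades
    return {grade: n / n_runs for grade, n in sorted(counts.items())}

# Toy path model: Grade 0 (no deviation) with probability 0.9, else Grade 3
toy_path = lambda rng: 0 if rng.random() < 0.9 else 3
pdf = run_simulation(20_000, toy_path, random.Random(42))
```

With 20,000 independent runs, as in the study, the empirical grade frequencies converge on the underlying probabilities.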
The surgical procedure to which we applied our model is the BA-RARP, a
revolutionary version of the traditional Robot-Assisted Radical Prostatectomy
(RARP) whose only point of access is through the pouch of Douglas, thus
without opening the anterior compartment and the endopelvic fascia, and without
the need to dissect the Santorini plexus (Galfano et al., 2010).
The following graph represents the structure of the DET we actually worked
with; it highlights the sequence of the procedure together with the final Patient
Outcome grade associated with each deviation.
In the quantitative phase of the work, surgeon’s unreliability for the sequence of
Critical Tasks has been estimated by applying the modified dynamic HEART
technique in the evaluation of the DET’s nodes. Specifically, the following issues
have been addressed:
- Initialization of the Assessed Proportion of Affect, which gives a measure
of each EPC/IF effect magnitude;
- Initialization of the Assessed Nominal Likelihood of Unreliability
(ANLU) for the Critical Tasks “Isolation of lateral peduncles and of
posterior prostate surface”;” Santorini detachment from the anterior
surface of the prostate”, and “Anastomosis”;
- Identification of the Error Modes (EMs) encountered in each simulated
path, and evaluation of the branches’ probabilities through the
adoption of a linear additive model and the modified HEART’s set of
formulas;
- Identification of the final Patient Grade Outcome, according to Clavien-
Dindo classification;
- Calculation of the probability distribution of each Patient Outcome Grade
for the selected procedure, holding the Central Limit Theorem.
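For reference, the canonical (unmodified) HEART quantification of a single node scales the task’s nominal unreliability by each EPC’s maximum multiplier weighted by its Assessed Proportion of Affect. A minimal sketch with purely illustrative numbers (the thesis adopts a modified, Surgery-specific set of formulas presented in the Study Methodology chapter):

```python
def heart_hep(anlu, epc_effects):
    """Classic HEART node evaluation.

    anlu        -- Assessed Nominal Likelihood of Unreliability of the task
    epc_effects -- list of (max_multiplier, apoa) pairs, one per active
                   EPC/IF, where apoa is the Assessed Proportion of Affect
    """
    hep = anlu
    for max_multiplier, apoa in epc_effects:
        # each active condition contributes (max_multiplier - 1) * apoa + 1
        hep *= (max_multiplier - 1.0) * apoa + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

# e.g. ANLU = 0.003 with two active conditions (x10 at 50%, x3 at 20%):
# 0.003 * 5.5 * 1.4 = 0.0231
```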
Once the probabilities for the different grades had been obtained, we performed a factor analysis to investigate the effect of the various IFs considered in the calculation on the probability of success of the surgery, and in particular on the health and recovery of the patient. The simulation tool made it possible to select all the variables, and hence the paths, in a completely independent and random manner over 20,000 iterations, so that, by the Central Limit Theorem, the resulting probabilities have general validity.
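The quantile-based factor analysis can be sketched as a Monte Carlo loop in which the IF magnitudes are redrawn at each iteration and an empirical quantile of the Grade 0 probability is read off the sorted results. Every number below (nominal unreliability, multipliers, uniform sampling of the effect magnitudes) is a hypothetical placeholder, not thesis data:

```python
import random

def grade0_quantile(q=0.95, n_patients=2000, seed=1):
    """Empirical q-quantile of the per-patient Grade 0 probability."""
    random.seed(seed)
    nominal_hep = 0.001                          # hypothetical task unreliability
    if_multipliers = [10.0, 4.0, 3.0, 2.0, 1.5]  # hypothetical EPC maxima
    p_grade0 = []
    for _ in range(n_patients):
        hep = nominal_hep
        for m in if_multipliers:
            apoa = random.random()               # sampled effect magnitude
            hep *= (m - 1.0) * apoa + 1.0        # HEART-style adjustment
        hep = min(hep, 1.0)
        # Grade 0 requires success in all three Critical Tasks
        p_grade0.append((1.0 - hep) ** 3)
    p_grade0.sort()
    return p_grade0[int(q * n_patients) - 1]
```

Removing an entry from `if_multipliers` and re-running reproduces the logic of the “No IF x” scenarios compared in the analysis.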
According to the questionnaires collected, the worst possible scenario for a patient undergoing this type of surgery (i.e. BA-RARP) is a Grade 3 outcome (i.e. “requiring surgical, endoscopic or radiological intervention”). The results obtained for the quantiles (q=0.95) of the optimum outcome, i.e. no deviation from the standard procedure (Grade 0), and of the maximum expected degradation of the patient outcome (Grade 3) show that the factor with the greatest impact on the surgeon’s performance in the operating theatre is, by far, IF 1 (Noise and ambient talk), followed by IF 5 (Poor management of errors and threats to patient safety) tied with IF 10 (Poor or lacking coordination), then IF 7 (Rude talk and disrespectful behaviours) and IF 9 (Unclear or failed communication).
IF 1 was to be expected as the factor most heavily impacting the surgeon’s performance in terms of the Grade 0 quantile (−3.54%), since it was considered to describe all three Critical Tasks under examination. Although background noise is a well-known and highly relevant disturbing factor, the effect produced by IF 1 on Grade 0 is also amplified by the way the software evaluates the final grade of the procedure: to obtain the no-deviation case, every task involved must itself show no deviation; otherwise, the highest grade encountered is selected as the resulting one.
The same considerations apply, on a different scale, to IFs 5 and 10, which are taken into account only in CTs 1 and 3 and share the same order of magnitude (around 99.0%), and to IF 7 and IF 9, considered in only one of the three tasks (around 99.4%). The graph below displays the probability of a Grade 0 outcome for the 0.95 percentile of patients.
Probability of a Grade 0 outcome for the 0.95 percentile of patients
[Figure data, Grade 0 (q=0.95): No IF 99.71%; IF 7 99.48%; IF 9 99.39%; IF 10 99.03%; IF 5 98.72%; IF 1 96.46%; Complete 93.47%]
Analysing the probability of a Grade 3 outcome for the 0.95 percentile of patients, the a priori consideration was that the only task that can end with this severity level is Task 1; hence, only the factors affecting the first CT (IF 1, IF 5, and IF 10) were expected to have an impact on this KPI.
This was confirmed by the simulation results: considering only the factors not involved in the CT1 evaluation (IF 7 and IF 9), we obtained a Grade 3 probability of around 0.001% (i.e. the same as in the No IF case), whereas very similar results were observed for IF 1, IF 5, and IF 10, all of which are involved in the CT1 evaluation.
In order to provide clearer and sounder figures, we also evaluated the probability of a Grade 3 outcome for the 0.05 percentile of the patients, i.e. the probability for 5% of the patients to end up with a Grade 3 outcome; the results of this investigation show that this probability is around 0.03%. The relative results are shown in the histogram below.
Probability of a Grade 3 outcome for the .05 percentile of patients
Finally, a scenario analysis was developed in order to reason about the relative importance, for the BA-RARP, of different categories of Influencing Factors, namely Team, Organizational, and Personal factors.
Analysis of IF clusters' impact: probability of Grade 0 for the 0.95 and Grade 3 for the 0.05 percentile of the patients

Patient Outcome | Complete | Team (IF 1, 7, 9, 10) | Organisational (IF 5) | Personal (IF 9, 10)
Grade 0         | 93.47 %  | 94.58 %               | 98.72 %               | 98.38 %
Grade 3         | 0.0324 % | 0.0221 %              | 0.0196 %              | 0.0109 %
[Figure data, Grade 3 (q=0.05): No IF 0.0030%; IF 7 0.0031%; IF 9 0.0042%; IF 10 0.0056%; IF 5 0.0073%; IF 1 0.0196%; Complete 0.0324%]
As can be appreciated from the table above, resulting from the scenario analysis, the category with the greatest impact on the end result of the surgery is the one related to Team and teamwork conditions, followed by the Organizational one and, finally, by the one concerning Personal factors. Another interesting point is that the “Complete” scenario is much closer to the “Team” scenario than to the “Organizational” and “Personal” ones, which means that the first category best describes, and thus most heavily influences, the outcome of the realistic case.
II. Conclusions
This study allowed the development, testing and validation of a simulation tool based on Dynamic Event Tree theory and structure, adopting a modified HEART methodology for application in the Healthcare sector; by running the simulation of a simplified version of the procedure, we were able to validate the correct behaviour of the tool designed.
Our attention was directed to the analysis of the surgeon’s unreliability in robotic surgery, an innovative sector where Minimally Invasive Surgery enables greater precision, faster recovery, and a potential reduction of human errors. Still, since for now and the near future the robot does not replace the surgeon but only supports him or her in close cooperation and interaction, the analysis and management of human error and the application of HRA techniques remain fundamental and necessary.
The state-of-the-art review underscored, firstly, the importance of HRA techniques in the few surgery applications developed so far and, secondly, the need to reduce the applicability gap between the Industrial and Healthcare sectors.
Even though the first steps have been taken in this direction, the majority of the efforts in the socio-technical complex system of healthcare organizations are characterized by reactive approaches, strongly focused on the retrospective analysis of adverse events, such as incident data analysis; it would certainly be more valuable to develop the branch of the HRA discipline concerned with anticipatory analyses, which would represent a new direction in Healthcare, helping to predict, and hopefully eliminate, the system’s vulnerabilities without requiring the failures themselves to occur.
The introduction of a DET structure allowed the inclusion of a procedural timeline, although without yet considering the influence of the passage of time; meanwhile, the update of the multipliers used in the Healthcare-specific HEART methodology marked a step forward in terms of database quality and, consequently, of the accuracy of the results.
There is still much work to do to obtain a specific and wide-ranging database produced directly by experts and experiences from the Healthcare sector; nevertheless, through specific assumptions we managed to benefit from the developments achieved in contexts more advanced in terms of safety studies.
To develop and improve this study, it is important that other procedures and surgical settings experiment with this modified methodology and proactive simulation approach, enhancing its diffusion, so that this work does not remain a mere academic exercise.
This investigation represents a first step for the inclusion of dynamics in HRA
techniques for surgery applications and a few suggestions for future developments
could be:
• The description of the evolution over time of the Influencing Factors involved;
• The dependencies existing between the tasks composing the procedures’ sequence and between the IFs/EPCs themselves;
• The investigation of the cognitive models underlying surgeons’ behaviour, in order to develop high-performance simulation tools;
• The investigation of recovery paths and of factors specifically tailored to the peculiarities of recovery scenarios.
A better modelling of the aspects mentioned above would constitute a valuable consolidation of our study. In this way, quantitative assessments of the goodness of recovery strategies could be formulated, refining educational tools and packages, and the whole hospital system would benefit from this line of research.
The introduction of MIS has marked the beginning of a true revolution in the surgical sector; we hope that this work will support the future training of robotic surgeons and the design of new procedures and checklists and, most of all, that the immediacy of use of simulation tools will foster the evolution of the operating room’s environment and organization.
The turning point represented by the kind of technology we employed consists in the possibility of manipulating the factors that actively, or passively, influence human behaviour and of relating them to the probability of success of the surgery and to its probable outcomes.
Still, the factor most hampering the development of HRA techniques in Healthcare is the lack of reliable data; we expect that continuous theoretical development and the increasing ease of use and effectiveness of this kind of tool will attract the attention of the surgical, and more generally the medical, world.
The study highlights the major factors, or classes of factors, influencing surgeons’ performance. It is therefore important to take that information into account and to try to reduce their effect by raising surgeons’ awareness of error-promoting conditions and by implementing improvement actions.
This work represents a useful contribution for technology providers, paving the way for the introduction of dependencies and of recovery-path evaluation in HRA applications in surgery. Thanks to the tool developed and tested in the present study, performing a reliable and efficient simulation is more affordable than ever, and the refinement and enlargement of the data involved would provide even more precise and effective analyses, facilitating the optimization and improvement of the operating room environment.
What is most fascinating about the HEART and DET techniques is their flexibility of application to the most disparate fields of interest; their adaptation from the NPP to the surgical environment is proof that Safety Engineering is nowadays a transversally valuable discipline for maximizing systems’ performance, which ultimately results in an improvement of work quality from the point of view of both the worker and the client/patient.
As specifically regards robotic surgery, it has not yet expressed its full potential, and we expect future studies to introduce the elements and strategies already tested in industrial sectors (e.g. NPP, ATC), producing a more comprehensive description of the phenomena occurring along the procedure and a more accurate analysis of the probabilities, in the hope of seeing these methodologies spread and awareness among potential users grow.
INTRODUCTION
Human error is the main cause of adverse events in Healthcare as demonstrated
by several epidemiological studies in the last two decades (Wilson, 1995; Schioler
et al., 2001; Vincent, 2001; Kable, 2002; Davis, 2002; Baker, 2004; Aranaz-
Andrés et al., 2008; Soop et al., 2009).
As shown in the picture below, Johns Hopkins University researchers have estimated that medical error is now the third leading cause of death in the United States, after heart disease and cancer (Makary et al., 2016), and it is reasonable to think that the same trend holds on a global scale.
Human performance involves a complex interaction of factors, including the
inseparable tie between individuals, their equipment, and their general working
environment (Van Beuzekom et al. 2010).
Figure 1: Ranking by yearly death (Makary et al., 2016)
This criticality is emphasized in applications such as Surgery because of the socio-technical complexity involved in this field. In terms of complexity, the operating room is certainly the most challenging environment in Healthcare, because of the increasing difficulty of the procedures; the highly interdependent multi-professional staff; the sophisticated technological equipment; the time constraints; the stressful environment; and the occurrence of unexpected situations due to the unstable and critical conditions of the patients.
Operating rooms are among the most complex areas in healthcare, with reported rates of adverse events ranging from 47.7% to 50.3%; this conclusion is supported by numerous statistics showing a higher number of adverse events reported in surgery than in other clinical specialties (Brennan, 1991; Anderson et al., 2013) and by the pace of recent developments, which suggests that the practice is becoming both more complex and more tightly coupled (Perrow, 1999). A study conducted in 2011 at the Acibadem University School of Health Sciences, in Turkey, involving OR staff (physicians, nurses, anaesthesia technicians and perfusion technicians), reported that 65.2% of health professionals witness a patient-safety-threatening event in the OR at some point in their professional life (Ugur et al., 2016).
Although the best measure of safety performance in Healthcare, other than patient outcomes, has yet to be defined, it is clear that, given the extreme complexity of the problem, a safety score cannot be expressed as a single figure (Van Beuzekom et al., 2010).
The analysis of influencing factors has been considered one of the most important dimensions of HRA in Surgery (Joice et al., 1998). In recent decades, many efforts have been devoted to the detection and analysis of the factors influencing tasks’ outcomes and incidents’ occurrence, through systematic analyses of databases and comparisons between different organizational settings; first and foremost within the industrial framework, but lately many investigations have also been carried out in Healthcare settings. On the other hand, the attempt to transfer one, or more, of the many HRA techniques developed in industry to a non-automated, yet extremely technical, application such as Surgery can be very demanding and cumbersome, because of the operative discrepancies between the two worlds and the need for a dialogue between experts with different backgrounds.
The first step in integrating HRA techniques into this new field of application consisted in determining an ad-hoc taxonomy for Safety in Surgery, whose development represented a turning point for specialized studies, providing a solid basis from which more practical investigations could stem.
After this initial effort, other important steps forward have been made, for instance in the determination of distribution curves for the estimation of PSFs’ effects through expert judgment, in order to account for the variability of the influencing factors’ perceived impact, thus paving the way for quantitative and simulation-based analyses, which are nowadays the front line of innovation in risk analysis.
HRA techniques are responsible for identifying the weak points of a system and for classifying and ranking them, in order to allow a proper distribution of efforts and resources. The latest generation of techniques in this discipline, the so-called third-generation HRA, proposes to integrate traditional and/or new techniques with simulation tools, to better model and express the interaction between the system, the procedures, and the operators involved.
In general, this kind of technique is more complex and, since it aims at capturing and modelling as many peculiarities of the system as possible, it tends to be less flexible; this lack of flexibility creates the need to develop the methodology specifically for each application.
As far as the Healthcare sector itself is concerned, we are forced to operate in conditions very far from ideal. The majority of the problems arise from the paucity of past data, which makes it impossible to compare the outcomes of old and new techniques. But this is just one of the drawbacks of the blame culture, still very strong in this kind of environment, which has generally led to a dearth of comprehensive and transparent accident reports.
Nevertheless, it is noteworthy that in recent years the implementation of a “just” culture has begun. For example, in the ATC sector air traffic controllers are not punished for actions, omissions or decisions that lead to a safety-relevant occurrence, as long as these occurrences are reported via an appropriate occurrence-reporting scheme; hopefully, this initiative will create a more trustworthy environment in which controllers are willing to report any safety-related incident without fear of disciplinary action (Subotic et al., 2007).
It is easy to understand that the lack of proper data hampers the optimization process, making the foundations of the model, and of the consequent calculations, fragile; but since changing the mindset of a significant portion of the population is beyond the scope of this work and will require years, we focus instead on how to improve the methodological approach to the matter.
To this end, it was necessary to work around the problem by devising new strategies for gathering experts’ opinions, and thus finding a way to make those judgments, and their aggregation, as objective as possible. In particular, our idea was to question a group of surgeons about safety, safety procedures, and affecting factors in a precise and standardised manner (which will be presented in the Study Methodology chapter) and then to translate their opinions numerically through a well-established procedure.
As mentioned above, the literature review showed a recent and growing interest in Safety Engineering methodologies; HRA is gaining more and more credibility and relevance in the medical setting because the revolution it would bring to this field is becoming undeniable.
Nowadays, performing HRA has become a mandatory requirement and a fundamental element for the continuous improvement of safety in hospitals; it is adopted worldwide, although many details change depending on the country’s legislation and on the department concerned.
The benefits of transferring to Healthcare services the most important proactive risk analysis methods already implemented in industry are fully recognized in the patient safety literature (Vincent, 2001; Lyons, 2004; Verbano et al., 2010; Cagliano et al., 2011). On the other hand, as mentioned before, the higher variability, and sometimes complexity, of Healthcare operations compared to industrial ones represents a major obstacle to the implementation of HRA techniques in Healthcare.
The issue of the applicability of HRA in Healthcare is widely discussed in the literature, and the large majority of studies try to modify and adapt existing techniques to the clinical setting of interest by producing specific templates, procedures and flow charts; the biggest challenge in this sense is to make the methodologies as system-based as possible, so that users are more inclined to adopt them.
As we expected, nothing was found in the literature concerning quantitative dynamic probability analysis for Surgery or Healthcare applications, so the objective of the study was to bring together all the information and results achieved so far and to propose a simulation tool able to integrate the structure of a Dynamic Event Tree with the flexibility of a quantitative technique such as HEART, thus preserving the fit with the specific application while, at the same time, introducing more sophisticated analysis methods.
The manuscript is divided into seven chapters, as follows. Chapter 1 provides an introductory view of the area of investigation and its relevance from both practical and scientific perspectives. Chapter 2 illustrates the main findings in the Dynamic HRA literature, not only those concerning this study but also those looking at future developments. Chapter 3 describes the empirical setting we operate in, providing an overview of Robotic Surgery, its technology and its applications. Chapter 4 introduces the study methodology adopted for our work: the sequence of steps we went through, the assumptions made, the tools and classifications employed, and the backbone of the quantitative device developed. Chapter 5 illustrates the customization of the resolution structure described in the preceding chapter to the specific case study, initializing the variables and the quantitative data on which the evaluation is based. Chapter 6 presents and comments on the results obtained from running the simulation tool.
Finally, in Chapter 7 the main conclusions are drawn along with suggestions for
future research endeavours.
CHAPTER 1: HUMAN RELIABILITY AND
RECOVERY ANALYSIS IN INDUSTRIAL
AND HEALTHCARE SECTORS
1.1 Human Reliability Analysis: from industrial to healthcare sector
1.1.1 What is Human Reliability Analysis about?
The study of safety as an attribute of a system is a relatively recent interest. It began roughly in the 1980s, and this is no coincidence: it was in those years that, owing to the increasing complexity of systems and the furious rate of innovation, the most destructive accidents took place.
Disasters like the Three Mile Island (TMI) accident of March 28, 1979 made the need for a structural role of safety in industry more evident than ever, fostering the development of standards and risk evaluation techniques until then confined to the military field.
From that moment on, Safety Engineering became a field of study in its own right, particularly fostered by the Nuclear and Oil & Gas industries, where the complexity of the systems, from both a physical and a technical point of view, greatly affects performance.
Specifically, we can distinguish two main types of failure, with utterly different roots and thus requiring very different treatments: those due to random breakage of the instrumentation and those due to human errors.
For the first type, we can adopt various strategies concerning maintenance, redundancy of specific critical nodes, quality improvement of single components, and many others.
The kind of strategy to be adopted has to be justified through a proper risk assessment, according to specific standards and documentation. But this is the easy part: the most challenging aspect of risk assessment concerns the errors arising from human-system interaction; in fact, in order to be comprehensive, a safety assessment must take into account all the elements of a system, including human factors, and the corresponding failure probabilities (Subotic et al., 2007).
The entire line of study regarding Human Error Probability is built on Alexander Pope’s line “To err is human”. This statement encapsulates the main pillars of Human Reliability Analysis and of Safety Engineering in general: the harmfulness and futility of blame culture, since errors are inevitable; and the need to relate human errors to the mental processes lying behind them.
The importance of the role of humans is easily recognised in the design,
implementation, control, and maintenance of any safety-critical system; and
complex systems, like modern hospitals, present major safety concerns because
of their potential for accidents with fatal consequences.
It is from these key points that the need arises for a methodical and systematic approach able to describe and analyse human actions, to identify behavioural patterns, and to provide specific calculation tools.
Applying a scientific approach to a considerable amount of data and observations has produced increasingly sophisticated instruments able to predict the probability, or at least provide a measure, of human failure in performing a certain series of tasks in a certain environment.
1.1.2 Role of human cognition in HRA
Patel et al. (2004) showed that in medicine experts tend to follow a top-down reasoning strategy, which seems anomalous when compared to other domains, wherein experts tend first to gather data and then to assemble hypotheses; this is an important finding from the perspective of studying errors in this sector, since it gives us a taste of how problems are approached, and hence of the cognitive procedures underlying errors and recovery.
Some HRA techniques try to adopt a cognitive approach taking into consideration
the operator, the system and their interactions. The use of the resulting cognitive
models can help in studying human mental processes leading to errors and thus
increases the possibility of successfully coping with the underlying causes of the
final outcome.
Unfortunately, cognitive approaches are often tailored to the specific applications they refer to, and a proper analysis of the cognitive process is a very demanding job that would require a thesis in itself. This is why we will not focus on the cognitive models involved, making some strong, but reasonable, assumptions in order to concentrate on the main topic of our work: proposing a risk assessment technique able to integrate the dynamics of the procedure with the personal and environmental factors affecting the probabilities and outcomes of the different recovery paths.
In order to understand procedural anomalies, it is first of all crucial to bear in mind the standardised main procedure and the relative recovery paths; here too there is not much in the literature, which leads us to conclude that, when a failure occurs, experience is the only resource a surgeon can rely on.
Based on experience, it is possible to predict the probable outcome of an action, enabling surgeons to provide preventive or supportive inputs; this means that more expert surgeons have a higher probability of success than beginners.
The aim of our HRA application is to predict human erroneous actions in a given context and to provide, on statistical grounds, guidelines regarding the safer choices to be made when a specific deviation from the standard procedure occurs.
Although training has proved to be the most effective way to counteract failures, many cognitive errors can also be counteracted by system design aimed at reducing complexity, or by proper communication at every level, as will be shown later on.
Therefore, we can say that our goal is to reduce as much as possible the variance
of failure probability between different performers by generating well-known
alternative, and/or recovery, procedures.
1.1.3 The definition of Performance Shaping Factors
Numerous formal HRA methods exist that are able to identify potential sources of human error, incorporate them into overall risk models, and quantify the corresponding Human Error Probabilities (HEPs). To achieve a quantitative estimate of the HEPs, many HRA methods use Performance Shaping Factors (PSFs), which characterize significant facets of human error and provide a numerical basis for modifying default or nominal HEP levels (Boring, 2006).
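As an illustration of this mechanism, multiplier-based methods (e.g. SPAR-H; this is not the formulation adopted in the present thesis) adjust a nominal HEP by the product of the applicable PSF multipliers:

$$\mathrm{HEP} = \mathrm{NHEP} \cdot \prod_{i} \mathrm{PSF}_{i}$$

so that, for example, a nominal value of $10^{-3}$ under two aggravating PSFs with multipliers 5 and 2 would yield an adjusted HEP of $10^{-2}$.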
Nowadays, the need to adopt an exhaustive, meaningful, and hierarchical classification and taxonomy of all the factors influencing and shaping our behaviour and its outcome is clearer than ever, since in many circumstances the study of factors contributing to active failures is hampered by the lack of consistent terminology.
In this sense, the Human Factors Classification Framework (HFCF) for patient safety presented by Mitchell et al. (2016) is certainly a major improvement in healthcare taxonomy, thanks to the cognitive approach adopted; the list of contributing factors in the HFCF for patient safety is provided in APPENDIX 4: Contributing factor classifications in the human factors classification framework for patient safety (Mitchell et al., 2016).
The particularity of this framework is that it provides a hierarchical classification system able to identify the multiple causation factors involved in the occurrence of adverse clinical incidents, and that it accommodates temporal relationships between factors (Mitchell et al., 2016).
The HFCF for patient safety is able to identify patterns of causation for clinical
incidents, and to highlight the need for targeted preventive approaches based on
understanding how and why incidents occur (Mitchell et al. 2016).
A taxonomy is required to diagnose why accidents occur and to support the prioritization of remedial actions. The choice of a particular taxonomic structure (e.g. job-related or cognitive) is driven by the need to capture all types of potential causes, together with the need to identify where remedial actions can be put in place; this is why it must be closely related to the field of relevance.
One of the developments we want to test is the validation of the grouping of influencing factors already proposed in previous studies. In this way we will be able to reason about behavioural patterns and about classes of importance of the PSFs, helping the research to outline a more detailed and complete scenario.
1.1.4 Surgical environment peculiarities and current state of HRA
application
When it comes to linking such a complex and little-explored world as the human mind with a comparably complex world such as Surgery, thousands of possible considerations could be made.
In trying to transpose HRA techniques to the Healthcare sector, we must consider all the customizable aspects of such techniques, in order to select the one that best fits our case and to calibrate its variants according to the application under study.
First of all, it is fundamental to determine the set of PSFs involved in this kind of environment through the definition of an ad hoc taxonomy. This requires a deep investigation of the pre-existing literature, from the industrial to the medical and surgical; then, validation work based on surgeons’ interviews and on the evaluation of their judgements must be carried out.
This line of research is not new at the Politecnico di Milano; the study started a couple of years ago, and two other theses have already been produced on the topic of adapting HRA for Surgery. The first (Onofrio et al., 2015) was more related to the taxonomic aspect of the problem, while the second (Trucco et al., 2017) proposed an empirical application of a quantitative technique derived from an adjusted version of HEART, together with the development of the task analysis and the validation of the taxonomy for the specific case.
Table 1: Taxonomy for the IFs in Surgery- high technology content (Onofrio et al. 2015)
Influencing Factors
Standardization & Formalization
Training
Equipment & HMI
Distractions
Lighting
Safety Climate
Safety Culture
Staffing
Temperature & Humidity
Space Design
Workload
Circadian Rhythm & Sleep Loss
Communication
Cooperation
Coordination
Experience & Knowledge
Fatigue
Leadership
Physical characteristics & Health
Soft Skills
Stability & Familiarity among team members
As said earlier, the most important aspect of customizing the techniques concerns the PSFs. According to several studies, communication errors are key factors in medical settings. Lingard demonstrated that 36% of the errors occurring in the operating room are mainly caused by communication issues, provoking waste of resources, inefficiency, list delays, patient inconvenience, and an increased rate of procedural errors (Lingard et al., 2002); communication is indeed a crucial aspect of modern medical practice and an organizational issue.
Another peculiar aspect of Surgery applications, deduced from previous studies' results, is that surgeons must be considered "ideal" performers, since they are supposed to have a deep knowledge of the subject and to be well trained in the procedures; in this sense the influencing factors assume a crucial role, since failures are mainly related to non-technical skills.
Hence, when dealing with medical personnel, the knowledge background, except for experience, must be assumed homogeneous and of a high level, so the decision-making procedure can be standardised and, in general, presumed to be the best possible under the selected conditions.
1.1.5 Strengths and flaws of HEART technique
Previous studies concluded that the best way to evaluate the tasks involved in a surgery is to develop a modified version of the HEART technique suitable for Healthcare applications; this implied adopting the taxonomy presented in the previous paragraph, together with the relative weights attributed through experts' judgements.
Since our scope is to take a step forward by introducing the possibility of quantifying recovery probabilities and paths, we have to further alter the approach presented in previous studies, relying on experts to validate the recovery paths, the hypotheses and the data coming into play, and to calculate the related probabilities.
Since no other study has been carried out on the topic, except for the Politecnico di Milano ones, there is no evidence of techniques more suitable than HEART for the Healthcare framework. Besides, by preserving the HEART-like approach proposed in previous theses on the subject, we can give continuity to the work developed for this application and make use of the results obtained; this choice will be better explained and justified in the Study Methodology Chapter.
Searching for new developments of HEART, we came across two main updates of the technique, both aiming to re-actualize and specialize the general, and in some sense obsolete, tool for specific fields of application, namely Nuclear Power Plants (NPP) and Air Traffic Management (ATM).
The need for these new tools stems from the fact that HEART, the most popular technique in the UK for the quantification of human interactions, was developed many years ago (Williams, 1986) and remained in use without any significant modification (while the HEP database, i.e. CORE-DATA, the Computerised Operator Reliability and Error Database (Taylor-Adams et al., 1995; Gibson et al., 1999), has been under development since 1992) and without any customization for the different sectors analysed.
Despite the recognition of HEART as a flexible and resource-efficient tool, its extensive usage has also revealed several areas for improvement, including (Kirwan et al. 2016):
- Underpinning of the tool by more recent data;
- A clearer understanding of how the data are used to generate the GTTs and EPC factors;
- Improvements in the consistency of usage of HEART;
- Guidance on the usage of GTTs, EPCs and APoAs;
- More focus on NPP human error and recovery contexts;
- Provision of explicit examples or benchmarks for NPP HRA assessors.
The net result of these findings was the evidence that a new approach was
desirable; and the existence of a human error database made such a new approach
possible.
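To make the GTT/EPC/APoA terminology in the list above concrete, here is a brief sketch of the standard HEART quantification scheme: a Generic Task Type supplies a nominal HEP, and each selected Error Producing Condition scales it by ((max_effect − 1) · APoA + 1), where the APoA is the analyst's Assessed Proportion of Affect. All numeric values below are illustrative, not taken from any of the cited studies.

```python
# Hedged sketch of the standard HEART quantification scheme.
# Each EPC is described by its maximum effect (the multiplier tabulated
# by the method) and the APoA judged by the analyst (0..1).

def heart_hep(gtt_nominal, epcs):
    """epcs: list of (max_effect, apoa) pairs for the selected EPCs."""
    hep = gtt_nominal
    for max_effect, apoa in epcs:
        hep *= (max_effect - 1.0) * apoa + 1.0   # assessed effect of one EPC
    return min(hep, 1.0)                         # a probability cannot exceed 1

# e.g. a nominal HEP of 0.003 with EPCs of maximum effect x11 (APoA 0.4)
# and x3 (APoA 0.2) gives 0.003 * 5.0 * 1.4 = 0.021.
```

The multiplicative structure is what makes the choice of GTTs, EPC anchors and APoA guidance so consequential for the consistency concerns listed above.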
The new tool for NPP applications was called NARA (Nuclear Action Reliability Assessment); it was basically developed along the same lines as HEART, but based on more recent and relevant data, and tailored to the needs of UK NPP PSAs and HRAs (Kirwan et al. 2016).
The first step in its development consisted in a contextual adaptation of the tool, producing new lists of GTTs (Generic Task Types) and EPCs. As regards the former, the final outcome turned out to be partly a sub-set of the original HEART GTTs and partly a further refinement of the GTTs' definitions, so as to more accurately encompass the actions considered in the PSAs; the new list of GTTs was then used as the basis for reviewing the HEP data available prior to GTT re-quantification (Kirwan et al. 2016).
For the selection of EPCs, instead, many sets of EPCs used in the UK NPP PSAs were reviewed in order to identify overlaps and mismatches, while other EPCs were generated taking a cue from contemporary human error identification approaches.
On the other hand, to quantify human performance in the ATM context, the Controller Action Reliability Assessment (CARA) was developed in the wake of the results obtained by adapting HEART to different domains, such as the Railway (Cullen et al., 2004; Kim et al., 2006) and Nuclear ones (Kirwan et al. 2016). In this case too, as for NARA, the key modifications applied to the original technique concern the GTT definition (a new set of GTTs, specific to the ATM environment, was developed for CARA) and the set of EPCs to be involved in the investigation; the same considerations as for NARA, regarding the use of the database and the validation of the final sets, apply. The CARA and NARA GTTs are illustrated in Figures 2 and 3 respectively, while the two lists of EPCs are presented in Figures 4 and 5 (for the CARA case, the EPCs shaded grey are those whose maximum values are supported by weak validation and should therefore be treated with caution).
Figure 2: List of quantified GTTs developed for CARA
Figure 4: List of quantified GTTs developed for NARA
Figure 3: List of EPCs developed for NARA
In the paper “Application of the CARA HRA tool to Air Traffic Management
safety cases” (Kirwan 2017) we also found a short review of the differences in
applying HEART and CARA to three safety cases in ATC; in particular, the three
cases were related to:
1. Aircraft landing guidance system (www.eurocontrol.fr);
2. A position/identity display for the air traffic control (ATC) aerodrome
environment (EUROCONTROL, 2005);
3. An aerodrome procedure for low visibility conditions using future ATC systems.
The main findings related to the application of CARA with respect to HEART
were:
- The effectiveness of the GTTs' redefinition, which allowed many more facets to be included (e.g. the number of applicable GTTs rose from two to six) and implied that fewer EPCs were required for CARA;
- The fact that CARA's application led to new insights concerning display features and their impacts on human reliability (e.g. via the provision of a dedicated audible and visual alarm).

Figure 5: List of EPCs developed for CARA
Such insights were based on sensitivity to human factors not previously evident
in the analysis, and would enable the system design team to determine precisely
how to maximise human reliability and controller response to an alarm in the
control tower. In particular, this result shows that CARA can be useful not only
for quantification in safety cases, but also for determining how to improve Human
Factors in a safety-critical system.
As said before, the two developments of HEART presented above respond to the immediate need for a technique allowing human factors and human reliability to be considered within a specific safety case area. Indeed, there is a pragmatic requirement for human factors to enter the safety case dialogue and, for that dialogue to be meaningful, it must take place in a quantified and well-defined context.
This represents the future of HRA techniques and, in our small way, with this work we hope to foster the development of techniques specifically designed and validated for Surgery and, more generally, Healthcare applications.
1.2 Recovery analysis as a development of HRA second generation
1.2.1 The concept of Recovery in System Safety Engineering
Reliability and performance management look at HRA databases and techniques almost exclusively as tools to prevent human errors and failures; but if we take a closer look and think about what exactly we want to prevent, it is the consequences of a failure rather than the occurrence of the failure itself (Jang, Jung, et al. 2016a).
This conclusion, namely that the recovery of human errors is as important as the prevention of human errors and failures, paves the way to a complementary field of study concerning the promotion and investigation of how recovery processes work.
Generally, recovery promotion involves the entire sequence from error detection to the actual recovery; many studies have categorized the recovery process into three phases: the detection of the problem and its situation, the explanation of the causes of the problem or of the countermeasures against it, and the actual completion of the recovery (Bagnara et al., 1988; Bove and Andersen, 2001; Francis, 1998; Frese et al., 1990; Frese, 1991; Johannson, 1988; Kontogiannis, 1997, 1999; Rizzo et al., 1995; Van der Schaaf, 1995; Zapf and Reason, 1994). Since the focus of recovery promotion has so far been on categorizing recovery phases and modelling the recovery process, studies on the recovery failure probabilities of human operators are very few and cannot constitute a reliable source of data. This is confirmed by the fact that our literature research found just one line of research coping with recovery probabilities.
The first part of that study was published in 2014 (Jang et al. 2014) and dealt with basic Human Error Probability; the second part was released in 2015 (Jang, Ryum, et al. 2016) and was related to Recovery Failure Probability; while the third and, for now, last part (Jang, Jung, et al. 2016a) reported the results gathered from new experiments aimed at determining Nominal Human Error Probability and Recovery Failure Probability. The content of these studies will be discussed in detail in Section 1.2.4.
As mentioned before, the understanding of human cognition and cognitive modes represents a crucial node of HRA, especially when dealing with complex procedures and environments, because it strongly affects the way a series of tasks is performed and, of course, the ability to recover from errors if any take place.
Indeed, when considering full procedures we cannot neglect the fact that the operator, specifically the surgeon, can adopt many recovery strategies to cope with an occurring failure; thus, being a topic not yet investigated, recovery analysis will cover the greater part of this discussion.
According to several researches, in order to obtain a proper recovery modelling and evaluation, the recovery process requires three main steps:
1. Detection;
2. Identification of the causes and formulation of possible countermeasures;
3. Actual recovery.
However, there is another phase which is fundamental to promote effective recovery: the iterative check of the outcome. In fact, it is not sufficient to implement recovery measures when necessary; the operator must also check the effectiveness of his choices and actions, in an iterative fashion, until an acceptable outcome is reached.
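The sequence described above (detection, diagnosis and countermeasure, recovery, iterative outcome check) can be sketched as a simple control loop. The toy "state" (a numeric deviation from the target condition) and the helper functions below are hypothetical placeholders, not part of any cited recovery model:

```python
# Minimal control-loop sketch of the recovery process: detect, apply a
# countermeasure, then iteratively check the outcome until acceptable.
ACCEPTABLE_DEVIATION = 0.05

def detect_problem(deviation):
    """Detection phase: is the outcome still unacceptable?"""
    return deviation > ACCEPTABLE_DEVIATION

def apply_countermeasure(deviation):
    # Illustrative assumption: each recovery attempt halves the residual deviation.
    return deviation / 2.0

def recovery_loop(deviation, max_attempts=10):
    """Iterate detection, recovery and outcome checking until acceptable."""
    for _ in range(max_attempts):
        if not detect_problem(deviation):   # iterative check of the outcome
            return deviation                # acceptable: recovery complete
        deviation = apply_countermeasure(deviation)
    return deviation                        # best effort after max_attempts

# e.g. recovery_loop(0.8) converges to 0.05 after four halvings.
```

The point of the sketch is the loop itself: recovery is not a single corrective action but a cycle that terminates only when the outcome check passes.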
What permeates the whole concept of recovery, and is indispensable to produce a meaningful study, is the need for a deep understanding of the factors influencing recovery and of the evolution of human performance and critical judgement along the chain of events composing the process; this will therefore be the focus of the following paragraphs.
1.2.2 How to model recovery: IFs and Dependency
One of the most popular shortcomings adopted while modelling the influence of
Performance Shaping Factors (PSFs) in conventional HRA methods is to
implicitly assume them independent; but this is definitely not true neither for ideal
nor for real cases.
There are two types of dependency that should be taken into account in order to model phenomena effectively: dependencies between tasks and dependencies between PSFs. While the first type has been largely investigated in the literature, as an aspect heavily influencing the precision of HEP quantification and the proper understanding of the event itself, the second kind has received little treatment, so it would be an interesting topic for further HRA studies and developments.
It is quite straightforward that, when dependencies are taken into account, a significant modification of the influence of PSFs on operator performance takes place, especially in complex systems. In fact, the more complex the system, the more easily links and synergies between sequential actions arise, due to the increasing interconnection of the influencing factors involved.
If we think about the dependencies between tasks, we can easily conclude that it is not a task itself that influences the outcome of the following tasks, but the fact that these tasks, or rather their end results, affect the Global Influencing Factors; e.g. by triggering a steep increase in stress level, thus modifying the scenario in which the process evolves.
Therefore, while still being different concepts, the dependency between tasks and that between PSFs are strictly intertwined, given that tasks related to the same factors have a stronger reciprocal dependency than others.
The case study carried out in the Air Traffic Control domain using PROCOS by De Ambroggi (2010) demonstrated a significant modification of the PSFs' influence on operator performance when dependencies are taken into account, highlighting the importance of considering the mutual dependencies existing between PSFs when analysing human performance, especially in complex systems.
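To see why the independence assumption can bias results, consider a toy two-PSF example (our own illustration, not De Ambroggi's PROCOS model): each active PSF multiplies a nominal HEP by a fixed factor, and we compare the mean HEP when the PSFs activate independently against the case where they tend to co-occur, keeping the marginal activation probabilities fixed. All numbers are illustrative.

```python
# Toy illustration of PSF dependency: two binary PSFs, each multiplying
# a nominal HEP by 3 when active, with marginal activation probability 0.3.
NOMINAL_HEP = 0.001
MULTIPLIER = 3.0
P_ACTIVE = 0.3                  # marginal activation probability of each PSF

def mean_hep(p_both):
    """Mean HEP given the joint probability that both PSFs are active."""
    p_only = P_ACTIVE - p_both                  # exactly one PSF active (each side)
    p_none = 1.0 - 2.0 * P_ACTIVE + p_both
    return NOMINAL_HEP * (MULTIPLIER ** 2 * p_both
                          + MULTIPLIER * 2.0 * p_only
                          + p_none)

independent = mean_hep(P_ACTIVE * P_ACTIVE)     # joint = 0.09 -> mean HEP 0.00256
correlated = mean_hep(0.25)                     # strong co-occurrence -> 0.0032
```

Although the marginals are identical in both cases, the correlated scenario yields a noticeably higher mean HEP, which is the kind of distortion that ignoring PSF dependencies hides.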
Over the years, HRA researchers have shown the important role played by the context in which human errors take place, and thus by contextual factors; this is true for recovery probability as well, but assuming that the factors influencing error probability are the same affecting recovery probability, and with the same impact, may lead to misleading results.
When talking about "alternative/recovery paths", we should consider that there could be a change in the impact of the different factors on the probabilities and, perhaps, a change in the influencing factors themselves with respect to the error producing conditions; in any case, this step would require a study of its own, so, as we will see in the next sections, we will preserve the already validated taxonomy instead of disproportionately refining the taxonomy issue.
The only article identified during the literature review that specifically covers the issue of recovery influencing conditions is that of Subotic et al. (2007), where an ad hoc recovery taxonomy was proposed.
In particular, in this document the authors validated a list of the relevant contextual factors affecting the process of controller recovery from equipment failures in ATC (Air Traffic Control), and suggested a definition of recovery factors: "recovery factors are those factors aiming at preventing or reducing the negative consequences of error or failure".
Since an important aspect of the recovery process is having a deep understanding of the influencing factors, the first step in this direction was to review all the contextual factors identified in the most relevant current HRA methodologies; the conclusion was that the various techniques identify and emphasise several, and sometimes different, groups of contextual factors.
This study showed not only the crucial relevance of investigating the factors influencing recovery, and the differences between the latter and those simply related to failure, but also the breadth of facets captured by the different methodologies, thus promoting future research in this direction.
As the final result of this work, the following ad hoc taxonomy of Recovery Influencing Factors (RIFs), with their descriptions, was proposed and then validated through experts' interviews.
Table 2: Recovery influencing factors (RIFs) (Subotic et al. 2007)
1.2.3 The relevance of recovery paths in Surgery
The discussion in the last paragraph emphasises the need for terminology and taxonomy specific to the field of application in which we want to operate, Robotic Surgery in our case.
Since we also want to consider non-conventional paths, i.e. recovery ones, we first have to define and properly describe the paths of interest, generally the more probable ones, and their results, i.e. the final conditions they lead to.
Starting from the assumption that we are dealing with knowledge-based cognitive processing, we could say that, before choosing the most suitable action, the surgeon has identified and evaluated all possible recovery paths; but this deduction would imply that every recovery path ends in a "complete recovery state", which is not always the case, given the numerous and wide-ranging evaluations he/she would be supposed to carry out in a few seconds.
Since the Surgery literature only illustrates the standard procedures, it is first necessary to identify and formalise the most successful, or at least the most frequently adopted, strategies that surgeons implement to cope with the occurrence of errors. Every time a failure takes place, many variables are affected and many influencing factors come into play, impacting the time required to select a countermeasure as well as the subjectivity and quality of the choice itself.
In order to maximize the efficiency and effectiveness of the procedures, it is essential to model and study how these mechanisms, which are unavoidable since "to err is human", develop and work. To do this, we have to investigate those strategies and the relative influencing factors (both personal and contextual) leading to sub-optimal performances, produce a simulation tool able to generate IF-related probabilities, and eventually propose measures to reduce the probability of fatality.
1.2.4 Applications of recovery analysis in literature
A literature search on error recovery, paying particular attention to medical applications, was conducted in order to catch up with the latest results in this field.
The keywords used to filter the material from the Scopus, Web of Science, and Pubmed platforms were «Recovery Procedures»; «Error Recovery» AND Surgery; «Error Recovery» AND HRA.
Most of the literature found with these search filters proposes modelling techniques for the human recovery process, since this is the first step towards integrating the concept into HRA methodologies; many works also highlight the importance of the link between error, recovery success, and cognitive models.
In this regard, there are also studies addressing how cognitive types can influence the recovery process, trying to understand and formalise the steps behind the procedures; although, as mentioned before, in our case study the assumption that all surgeons think, medically speaking, in the same way is not far from reality, since the theoretical background is homogeneous, or at least is supposed to be.
The conclusion drawn from the analysis of several retrospective recovery studies is that the error recovery process involves both execution and evaluation stages, and that this cyclical process can be modelled using Norman's model of interaction (Patel et al. 2011).
Norman's model has been used intensively to analyse the process of error recovery and, on account of its generic nature, has also been used to model a broad range of cognitive interactions in which interpretation and action are tightly connected.
This process incorporates the stages of triggering, diagnosis and correction, and presents them as part of a decision and action cycle that includes additional aspects of clinical decision making, such as risk mitigation and cultural barriers to the detection and correction of errors.
The evaluation of the impact of several factors (such as ease of recovery, severity, and detectability of the error) on recovery probability is suggested in (Su et al. 2000); some influencing factors were also identified in (Patel et al. 2011), e.g. expertise, complexity of the paths chosen, and completeness of the available information.
Kontogiannis' work (2011) provides a proposal for modelling recovery steps from the cognitive point of view, together with a scheme for state transition representation. This study presents a research framework for error recovery strategies, providing hypotheses for empirical research and, again, pointing out several influencing factors (e.g. conflicting goals, cooperation and communication among team members).
We can say that the conclusions and results derived from the literature research are fairly coherent, even when the fields of interest differ, which is certainly a meaningful achievement as well as a good starting point; but, beyond this introductory investigation, we note a scarcity of quantitative validation.
In fact, modelling the human recovery process is not sufficient to apply HRA, since its implementation also requires human error and recovery probabilities, i.e. a quantitative approach.
As mentioned before, Jang's studies are the only ones found in our research that practically evaluate the probability of recovery paths.
The work was developed by studying a digital Human System Interface (HSI) in Nuclear Power Plants (NPP) and was structured in three blocks: task analysis and identification of human error modes through SHERPA; analysis and modelling of dependencies between error modes through the THERP model; and statistical analysis of the experimental results through a Bayesian approach.
An adequate taxonomy for human errors was defined during the process; in
particular it was made up of eight categories: Operation selection omission;
Operation execution omission; Wrong screen selection; Wrong device selection;
Wrong operation; Mode confusion; Inadequate operation, and Delayed operation.
In order to simplify the discussion, a shortcut was taken in treating the main factors affecting human error and recovery: recovery failure probabilities were considered static, and the scenario was characterised by constant parameters: no time urgency, no supervision, and a high level of human machine interface throughout the procedure.
By fixing the values of these three external factors, the authors effectively defined a static scenario, which dramatically affects the validity of the failure and recovery probabilities, leading to highly constrained results that do not cover the full range of variables involved.
One or more error types were associated with each task/subtask, and simulations of the accident scenario, which required cognitive action, were performed.
A statistical analysis was then carried out on the results in order to obtain human error and recovery failure probabilities (a statistical method based on Bayesian analysis was used, adopting the 5% and 95% quantiles).
Further investigations of this application showed that influencing factors such as task dependency are not negligible (Jang, Jung, et al. 2016b); indeed, the failure or success of one subtask may affect the failure or success of the next subtask if the two are not mutually independent.
Hence, the reason why dependency among subtasks should be considered is that the HEP of a task could be overestimated or underestimated as the number of subtasks, i.e. the level of detail of the model, increases (Jang, Jung, et al. 2016b).
To assess the degree of dependency, the THERP dependency model was adopted in this study; this choice was justified by the lack of data on conditional probabilities and by the validity of the model's development process (Jang, Jung, et al. 2016b).
Pr [failure of dependent step] =
= Pr [failure of initial step] * Pr [failure of dependent step | initial step failure]

The formula above represents the basic definition of the failure probability of a subtask that depends on the one preceding it, given that the latter has failed. According to THERP, the conditional term can also be expressed through the following formula:
Pr [B|A] = a + b * Pr [B]

where A and B represent the subtasks, while a and b are positive numbers obtained from judgements, not from data, their values being fixed by the assessed level of dependency.
The THERP approach to dependency assessment uses several parameters to
determine the level of dependency between events, including same or different
crew, time, location, and cues (Swain and Guttmann, 1983); the general guidelines
in assessing the levels of dependence are provided by the THERP handbook.
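The relationship Pr[B|A] = a + b · Pr[B] can be sketched as follows; the (a, b) pairs below are the standard THERP values for the five dependency levels (Swain and Guttmann, 1983), while the probability used in the example is illustrative:

```python
# THERP dependency model: conditional failure probability of subtask B
# given the failure of subtask A, for each assessed dependency level.
THERP_LEVELS = {
    "zero":     (0.0,  1.0),   # Pr[B|A] = P            (no dependency)
    "low":      (0.05, 0.95),  # Pr[B|A] = (1 + 19P)/20
    "moderate": (1/7,  6/7),   # Pr[B|A] = (1 + 6P)/7
    "high":     (0.5,  0.5),   # Pr[B|A] = (1 + P)/2
    "complete": (1.0,  0.0),   # Pr[B|A] = 1
}

def conditional_failure_prob(p_b, level):
    """Failure probability of subtask B, given that subtask A has failed."""
    a, b = THERP_LEVELS[level]
    return a + b * p_b

# e.g. a subtask with nominal HEP 0.01 rises to about 0.505 under high
# dependency, and to 1.0 under complete dependency.
```

Note how even "low" dependency dominates a small nominal HEP: the constant term a, not the nominal probability, drives the conditional value.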
In this particular case, the different classes of dependency were identified by
considering different factors: Similarity of control devices, Separation between
control devices (closeness), Repeated action steps, and Group soft control.
Finally, a tree diagram with a binary outcome (Y/N) for each of the mentioned criteria was developed, so that the final twelve branches were divided among the five levels of dependency: Complete (all Y), High, Moderate, Low and Zero (all N).
Adopting this classification, it is possible to formulate, according to THERP, the probability of occurrence of an event B given an event A.
The evaluation of HEPs was also performed using THERP; it is important to notice, however, that neither RIFs nor PSFs were taken into account, since their weights were considered unitary and constant.
The procedure for the quantitative evaluation of HEPs was:
1. Deduce error and recovery probabilities (E and R) from the experimental results;
2. Obtain dependency level estimates (k) from experts' interviews;
3. Calculate the HEP related to each task/subtask adopting the following expression:

HEP = 1 − (1 − R0E0) · ∏(i≠0) [(1 + k(1 − Σ RiEi)) / (1 + k)]
Thus, the values obtained for the various probabilities, both failure and recovery ones, were derived empirically, by observing the outcomes of a number of experiments under fixed boundary conditions and focusing only on discrete tasks selected in advance.
1.2.5 Current gaps in literature
The findings regarding recovery failure probabilities for the Industrial sector
cannot be imposed directly to Surgery applications since the context in which
error recovery is supposed to take place and the factors involved are, or at least
could be, totally different.
As mentioned in the paragraph on influencing factors, the specificity of the taxonomy is a crucial aspect for conducting reliable and meaningful investigations. The validation of such terminology is also essential to preserve a scientific approach to the subject, and this aspect is completely missing in previous works.
Moreover, even in those few studies where influencing factors (PSFs/RIFs) are discussed, as in Jang's, they are only described in a static and reductive fashion.
It is easy to see that, by treating time urgency, supervision, and the level of the human machine interface as constants, the authors made a strong approximation that prevented a reliable and complete evaluation of the system; this is even less acceptable considering that the challenge for the future is to develop a dynamic analysis.
As for dependency, even though the issue is treated in the literature, a standardised and universal definition of the concepts of dependency and dependency level is still missing.
Finally, regarding quantitative methodologies for studying recovery probabilities, only the statistical approach has been adopted so far; this gap is a huge obstacle to the transposition of this kind of analysis to Healthcare, due to the lack of available and reliable data.
1.2.6 Further developments in the Healthcare sector
As far as the Medical sector is concerned, no technical and in-depth studies have been carried out to systematically analyse and integrate factors and procedures with human risk assessment techniques.
This wide research gap is due to the fact that the two scientific fields, Engineering and Medicine, have few commonalities, so it is difficult to instil the idea that adapting industrial safety strategies is a way to improve performance and optimize resources in non-industrial environments.
Starting from the latest developments in HRA for Surgery, most of which are not yet validated, it would be worthwhile to invest increasing effort in the creation of specifically modified versions of the standard tools presented in the literature and designed for general industrial or nuclear applications, in order to produce powerful techniques able to greatly impact the quality of life.
Previous studies (Trucco et al. 2017; Onofrio et al. 2015) have produced specific
taxonomies for Surgery and task flow analysis for the BA-RARP surgery which,
together with the modified HEART developed for the surgery environment,
constitute the basis for our discussion.
A first objective for future studies could be to propose and validate a taxonomy of Recovery Influencing Factors (RIFs). This could differ from the one developed for Error Producing Conditions since, for a recovery path to be initiated, a failure must already have taken place; thus, the effect of circumstantial factors such as high stress, lack of time or difficult communication can differ from that observed in fault-free situations; moreover, completely new aspects may arise from this mismatch. A proper definition of RIFs would also make it possible to investigate the factors influencing the detectability of errors and the judgement capacity of the professionals involved in the procedures under analysis, thus giving an insight into the operator's cognition model without introducing complicated models in the quantitative simulation tools.
Besides the terminology aspect, it is fundamental both to validate and to improve the understanding and description of the functional, spatial and temporal relations involved in the given setting (i.e. operating room layout, procedural sequence, personnel and equipment involved, management policies, etc.), in order to justify the dependencies considered and to evaluate their impact.
Another key aspect, together with the definition of the various correlations, is to define the ideal level of detail to adopt, or rather, the level of detail that is relevant and suitable for investigating Surgery applications; in fact, in order to optimize the effort, it is desirable to consider only those elements/tasks whose impact on the procedure outcome is tangible and not negligible. This "resolution aspect" will emerge from the recovery modelling phase, and an interesting objective for future studies could be to create and validate recovery models with different degrees of resolution, so as to investigate the properties of the various combinations in "any" situation.
CHAPTER 2: DYNAMIC RISK
ASSESSMENT AND DYNAMIC EVENT
TREES
2.1 Dynamic generation HRA
2.1.1 From static to dynamic analysis
Most HRA models are designed to capture human performance at a particular point in time. These can be considered static HRA models, in that they do not explicate how a change in one PSF affects other PSFs and the event progression downstream. On the other hand, most HRA methods do account for dependency, i.e. the effect of related events on HEP calculation (Boring 2006).
Dependency, however, is typically based on overall HEPs and does not systematically model the progression of PSF levels across events, whereas dynamic HRA needs to account for the evolution of PSFs and their consequences for the outcome of events (Boring 2006).
The need to integrate the time dimension into human behaviour analysis is the logical consequence of the investigation of human mental processes, and of the fact that many of the so-called influencing factors are implicitly related to the timeline of the process/system they describe. In this sense, dynamic risk assessment allows more detailed analysis and in-depth mapping of performance measures.
Going back to the basics, we can individuate three main families of HRA: first
generation, second generation, and dynamic HRA.
The main shortcomings identified in first-generation methods are: the lack of
distinction between, and identification of, omission and commission errors;
statistically weak results due to scarce underlying data; insufficient structure
(which makes analyses unrepeatable); and the absence of a causal picture (which
prevents the development and implementation of effective countermeasures).
The crucial difference between these first two generations of methods is that,
while the first largely fails to consider the context in which human failures occur,
neglecting human cognitive processes, the second carefully considers and models
the influence of context on human behaviour and on failure occurrence.
Finally, what distinguishes the so-called third HRA generation from the previous
two is the fact that it provides a dynamic framework for HRA modelling and
quantification. In order to get a feeling for the direction scientific innovation
is taking, we now focus our discussion on those literature findings specifically
dealing with this last category of HRA methods, whose objective is to deliver
tools able to simulate the real-time behaviour of a system.
2.1.2 Historical Evolution of dynamic HRA in Industry
Nowadays, it is recognized that a number of Dynamic Event Tree and direct-simulation
software packages for treating operator cognition and plant behaviour
during accident scenarios are being developed or are already available (National
& Falls 1996).
Simulation-based HRA techniques differ from their antecedents in that they are
dynamic modelling systems that reproduce human decisions and actions as the
basis for performance estimation.
To conclude this introduction about dynamic HRA, and its evolution over the
years, a summary table of the applications found in literature is provided. It is
evident that almost all of the applications refer to the nuclear sector (NPP), and
none of them makes reference to the surgical one, which will make our job even
more challenging.
Table 3: Literature review of dynamic HRA applications

(National & Falls 1996), “Representing context, cognition, and crew performance in a shutdown risk assessment”
Objective: demonstrate how the DET analysis method (DETAM) can be used in a realistic analysis to treat context, cognition, and crew performance.
Field of application: NPP.
Results: it demonstrates how quantitative risk predictions are affected by the treatment of dynamics.

(Boring 2006), “Modelling Human Reliability Analysis Using MIDAS”
Objective: point out the key considerations for creating a dynamic HRA framework (including event dependency and granularity).
Field of application: nuclear power plant (NPP) control room operations.
Results: division of the eight starting factors into the three types of PSF modifications.

(Trucco & Leva 2007), “A probabilistic cognitive simulator for HRA studies (PROCOS)”
Objective: develop a simulator (PROCOS) for approaching human errors in complex operational frameworks.
Field of application: Air Traffic Control (ATC).
Results: the comparison between the results of the proposed approach and those of traditional HRA methods shows the capability of the simulator to provide coherent and accurate analysis.

(Chang & Mosleh 2007), “Cognitive modelling and dynamic probabilistic simulation of operating crew response to complex system accidents”
Objective: discuss the Information, Decision and Action in Crew context (IDAC) model for HRA.
Field of application: NPP.
Results: an overview of the IDAC architecture and principles of implementation as an HRA model.

(Rao et al. 2009), “Dynamic fault tree analysis using Monte Carlo simulation in probabilistic safety assessment”
Objective: validation of Monte Carlo simulation to solve dynamic gates.
Field of application: NPP regulation system.
Results: Monte Carlo has proven to be a reliable tool to solve DFTs.

(Gil et al. 2011), “A code for simulation of human failure events in nuclear power plants: SIMPROC”
Objective: demonstrate the validity of the SIMPROC tool.
Field of application: NPP.
Results: SIMPROC is an adequate tool to incorporate into the simulation of plant dynamics the effects of actions performed by operators while following the operating procedures.

(Ge et al. 2015), “Quantitative analysis of dynamic fault trees using improved Sequential Binary Decision Diagrams”
Objective: confirm the applicability and merits of SBDD for generating DFTs.
Field of application: highly coupled DFTs of non-repairable mechanical systems.
Results: compared with Markov methods, SBDD overcomes the notorious “state space explosion” problem and is also applicable to DFTs modelling systems with arbitrarily distributed component times-to-failure.

(Rao et al. 2015), “A Dynamic Event Tree informed approach to probabilistic accident sequence modelling: Dynamics and variabilities in medium LOCA”
Objective: develop alternative Dynamic Event Trees and quantify damage frequency.
Field of application: NPP.
Results: risk sensitivity to numerous assumptions and the benefits that DETs provide in terms of characterizing scenario dynamics were pointed out.

(Gyung et al. 2016), “Development of a systematic sequence tree model for feed-and-bleed operation under a combined accident”
Objective: validate the adoption of sequence trees to systematically analyse accident sequences and plant conditions.
Field of application: NPP.
Results: eleven possible accident sequences under a combined accident (TLOFW accident) were identified and systematically categorized.
The literature review has pointed out several approaches to the topic of dynamics;
one of the presented methodologies is the Dynamic Fault Tree (DFT), which
opens the door to the computational issues that come with simulation tools.
Dynamic Fault Trees extend traditional Fault Trees by defining additional gates,
called dynamic gates, to model complex interactions; these are generally solved
through Markov models. However, when the number of gate inputs increases, the
state space becomes too large for calculation with Markov models. Moreover,
Markov models are applicable only to exponential failure and repair distributions;
to address these difficulties, a Monte Carlo simulation-based approach is
commonly adopted.
Monte Carlo (MC) simulation methods have been broadly used to evaluate
complex DFTs modelling industrial systems with arbitrarily distributed
components, and they are often regarded as benchmarks in the validation of
newly proposed approaches.
This simulation method treats the problem as a series of real experiments
conducted in a simulated time, and it estimates probability, as well as other
indices, by counting the number of times an event occurs during the simulated
time (Rao et al. 2009). It represents one of the latest improvements in the field of
complex systems’ simulations.
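As a minimal illustration of why simulation is resorted to, the sketch below estimates the failure probability of a priority-AND (PAND) dynamic gate, which fires only if component A fails before component B, using Weibull times-to-failure that a Markov model could not accommodate. All distribution parameters are invented for illustration.

```python
# Monte Carlo sketch for a PAND dynamic gate with non-exponential
# distributions. The probability is estimated exactly as described above:
# by counting how many simulated "experiments" produce the event.

import random

def pand_failure_probability(mission_time, n_trials=100_000, seed=1):
    rng = random.Random(seed)  # fixed seed for reproducible estimates
    failures = 0
    for _ in range(n_trials):
        # Weibull times-to-failure (scale, shape chosen arbitrarily here);
        # a Markov solution would require exponential distributions instead.
        t_a = rng.weibullvariate(500.0, 1.5)
        t_b = rng.weibullvariate(800.0, 2.0)
        # The gate fires only if A fails before B, both within the mission.
        if t_a < t_b <= mission_time:
            failures += 1
    return failures / n_trials  # frequentist estimate of the gate probability

print(pand_failure_probability(1000.0))
```

The same counting scheme generalizes to any dynamic gate whose firing condition can be written as a predicate on the sampled failure times.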
2.1.3 Simulation tools: benefits and challenges
Cacciabue and Hollnagel (1995) are credited with being the first to provide
a formal and comprehensive definition of cognitive simulation:
“The simulation of cognition can be defined as the replication, by means
of computer programs, of the performance of a person (or a group of
persons) in a selected set of situations. The simulation must stipulate, in
a pre-defined mode of representation, the way in which the person (or
persons) will respond to given events. The minimum requirement to the
simulation is that it produces the response the person would give. In
addition, the simulation way may also produce a trace of the changing
internal mental states of the person”.
Simulation-based HRA methods provide a new direction for the development of
advanced methodologies in order to study the effect of operators’ actions during
procedures.
Human performance simulation tools utilise virtual scenarios, virtual
environments, and virtual humans to mimic the performance of humans in actual
scenarios and environments.
Simulations may be used to produce estimates of Performance Shaping Factors
(PSFs) and to quantify Human Error Probabilities (HEPs) in dynamic frameworks,
thus better representing reality. Hence, the main challenge of such an approach is
to replicate the stream of consciousness of the human component of the system,
considering both influencing factors and cognitive type, in order to provide a
dependable analysis not only of risks and outcomes in general, but also of their
root causes.
An example of performance-measure mapping can be found in Boring (2006). In
his work, Boring wanted to demonstrate that, through task/procedure iterations,
it is possible to systematically explore the range of human performance and
obtain an estimate of failure (or success) frequency, which can finally be used as
a frequentist approximation of the HEP.
We can say that simulation tools address the dynamic nature of human
performance in a way not found in other HRA methods. Tools of this kind
represent one of the pillars of dynamic HRA's evolution: functioning as data
sources and serving as a privileged observatory at the same time, they constitute
the basis for this approach.
The possibility of dealing with a virtual and ideal reality drastically reduces the
complexities of the dynamic environment, fostering the creation of new ad hoc
techniques and the adaptation of old ones to new requirements (e.g.: integrating
the dynamic progression of human behaviour throughout the task and the failure
itself).
The paper “Dynamic Human reliability analysis: benefits and challenges of
simulating human performance” (Boring 2007) reviews the differences between
first, second, and dynamic generation HRA, outlining potential benefits and
challenges of this last approach.
As suggested before, simulation-based HRA differs from its antecedents in that it
is a dynamic modelling system that reproduces human decisions and actions in
order to constitute a sample on which to base performance estimates, thus
providing the fundamental grounding for dynamic HRA modelling.
In particular, the article points out the fact that a generic simulation tool might be
used in several ways, as we can see represented in the picture below.
Figure 6: The uses of simulation and modelling in HRA
As we mentioned before, the most interesting uptake of dynamic methods from
our point of view is the generation of PSFs’ estimates which can consequently be
used for the quantification of the HEPs through specific HRA techniques.
Simulation-based HRA may also augment previous HRA methods by
dynamically computing PSF levels, thus making it possible to derive a
point-specific HEP for any given moment in time.
This could be done by integrating the procedures with the simulation of the plants,
and adding operators’ actions as boundary conditions. Adopting a frequentist
approach for calculating HEPs (where a variety of human behaviours is modelled
through a series of Monte Carlo style replications) would then enable the
production of an error rate over a denominator of repeated trials.
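The frequentist approach just described can be sketched as follows: a virtual operator is replicated many times, and the error rate over the number of trials is taken as the HEP estimate. The step-level error probability and the stress feedback rule below are assumptions for illustration only, not any cited model.

```python
# Hedged sketch of Monte Carlo style replications of a virtual operator:
# the failure frequency over repeated trials serves as a frequentist HEP.

import random

def run_trial(rng, n_steps=10, base_error=0.01):
    """One replication of a procedure; returns True if completed error-free."""
    stress = 1.0
    for _ in range(n_steps):
        if rng.random() < min(base_error * stress, 1.0):
            return False          # an unrecovered error fails the procedure
        stress *= 1.02            # assumed mild stress build-up per step
    return True

def estimate_hep(n_trials=50_000, seed=7):
    rng = random.Random(seed)
    errors = sum(1 for _ in range(n_trials) if not run_trial(rng))
    return errors / n_trials      # error rate over a denominator of trials

print(estimate_hep())
```

With more elaborate operator models, the same loop yields not only an overall HEP but a record of the PSF states at the moment each simulated error occurred.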
In this regard, Gil et al. (2011) presented a work where the SIMPROC tool was
implemented to simulate human failure events in a Nuclear Power Plant (NPP).
This tool generates Dynamic Event Trees (DETs) stemming from given initiating
events which, taking into account the different factors related to the tasks
involved, efficiently simulate all the branches that may affect the dynamic
plant behaviour in each sequence.
It is now important to point out a key distinction between simulation and
simulator data. While simulations utilize virtual environments and virtual
performers to model the tasks of interest, simulators utilize virtual environments
with human performers (Bye et al., 2006). Given the above, by using actual
people simulators can capture the full spectrum of human PSFs for a given task,
whereas simulations must rely on those PSFs for which virtual modelling is
possible.
Nonetheless, the possibility of using simulation tools to run an unlimited number
of scenarios (virtually, without actual humans once the configuration is initiated)
and to obtain almost instantaneous results while dramatically reducing costs is the
principal benefit of this type of technique. The opportunity to perform and
analyse a wider spectrum of scenarios, in a generally easier and more cost-effective
way, makes simulation the more commonly used of the two technologies.
On the other hand, one of the arguments raised against the use of these tools is
that the predictive ability of simulation is hampered by epistemic and random
uncertainty, that is, by mismatches and shortcomings attributable, respectively, to
an incomplete understanding of the modelling parameters and to random variance.
Most of the HRA methods follow general task analysis guidelines for event
decomposition, but there is significant variability in the level of decomposition
adopted across analyses and analysts. While one analysis may focus on a detailed
step-by-step breakdown of human actions and intentions, another may cluster
human actions at a higher level according to resultant errors; and this
inconsistency is particularly problematic in making headway on dynamic HRA
(Boring 2007).
In conclusion, human performance simulations surely have revealed important
new data sources and possibilities for exploring human reliability; though, there
are still significant challenges to be addressed, particularly with regard to the
dynamic nature of HRA vs. the mostly static nature of conventional first and
second generation HRA methods (Boring 2007).
2.2 The crucial role of PSFs: properties and behaviour over time
When an individual encounters an abnormal event, the natural reaction often
includes physical, cognitive, and emotional responses (Chang & Mosleh 2007).
These three types of response also influence each other; and there is ample
evidence that they also affect problem-solving behaviour. In addition to these
internal PIFs, there are external PIFs (e.g., organizational factors) affecting
individuals’ behaviour both directly and indirectly.
Different types of PSF adjustments have been proposed and analysed; but, in
order to take proper decisions and to make the result compatible with different
applications, it is important to understand the fundamental role that the scenario
involved plays in the process.
This point is backed by numerous studies confirming that the first step in a
dynamic risk assessment is to identify the accident scenarios of interest; indeed,
the interface and interaction between the plant and its operators is inherently a
critical dynamic process, intertwined with crew cognition.
As regards the present study, it was limited to the implementation of an already
validated taxonomy, considering the impact of the factors as constant but
changing the set of relevant IFs depending on the task involved. In other words,
the study only considered discrete changes of IFs over time, as a first attempt at
introducing HRA simulation tools in healthcare. The possibility of investigating
the evolution of PSF estimation and impact will be one of the suggestions for
future studies; nonetheless, we want to provide a quick insight into the topic.
When addressing a dynamic framework, it is important not only to analyse the
behaviour and performance of the simulated operators, but also to capture and
manipulate the PSFs that affect those outcomes.
A realistic simulation comprises not only the normal random span of human
behaviour for a given situation, but also the range of PSFs that influence the
result of the simulation, together with their evolution throughout the procedure.
In particular, Boring (2007) defined three types of possible modifications to PSFs:
1. Static Condition;
2. Dynamic Progression;
3. Dynamic Initiator.
In a “Static Condition”, the PSFs remain constant across the events or tasks in a
scenario (e.g. the educational background of a surgeon can be considered static
during a surgery). A “Dynamic Progression” describes both positive and negative
evolution of a PSF's impact on performance (e.g. the stress level can affect a
surgery outcome either positively or negatively). Finally, a “Dynamic Initiator”
is defined as a sudden change in the scenario that adversely affects the general
outcome.
An important aspect of the transition from a static to a dynamic tool lies in the
need for a coherent transposition of the PSFs into the second domain. This is the
topic investigated by Boring in his work “Modelling Human Reliability Analysis
Using MIDAS”, where the initial effort to model the SPAR-H PSFs in MIDAS is
addressed.
MIDAS is a simulation tool permitting Monte Carlo style multiple runs of
scenarios, enabling the adoption of a frequentist approach to HEP calculation
through which simulated errors may be mapped back to the PSF states at the time
the error occurred. In this way, it is also possible to calculate HRA dynamically
across scenarios (Boring 2006).
The three types of PSFs modification listed above were considered by Boring, and
it was agreed that the simulation tool must (Boring, 2006):
- Include the nominal effects of a PSF for static conditions;
- Feature the full range of PSF effects, from performance enhancing to
performance decreasing effects;
- Incorporate the natural cause-and-effect relationship of one task on
another in terms of the PSF progressions;
- Consider PSFs over time, in terms of diminishing effects (i.e., the natural
decay of an effect) and effect proliferation (i.e., the natural increase of a
PSF over time, even if it begins as a latent effect)
- Reconfigure PSFs in the face of changing scenarios while retaining PSF
latency and momentum states from the scenario forerunner for a suitable
refractory period.
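As a toy illustration of the last two requirements, the sketch below shows one PSF with a diminishing effect (stress relaxing after a dynamic initiator) and one with effect proliferation (a latent fatigue factor growing toward saturation). The exponential forms and all rates are assumptions for illustration only.

```python
# Sketch of PSF decay and proliferation over time, as listed above.
# Functional forms and parameters are invented; a real model would be
# calibrated against observed performance data.

import math

def decaying_psf(t, peak=5.0, nominal=1.0, tau=30.0):
    """Stress spikes at t=0 and relaxes exponentially toward its nominal level."""
    return nominal + (peak - nominal) * math.exp(-t / tau)

def proliferating_psf(t, latent=1.0, cap=4.0, rate=0.05):
    """Fatigue starts as a latent effect and grows, saturating toward a cap."""
    return cap - (cap - latent) * math.exp(-rate * t)

for t in (0, 30, 120):
    print(t, round(decaying_psf(t), 2), round(proliferating_psf(t), 2))
```

Sampling such curves at the simulation's branching points is one simple way to feed time-varying PSF levels into an HEP calculation.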
The model developed in the work of De Ambroggi (2010) makes it possible to
distinguish two components of PSF influence: Direct Influence, i.e. the influence
that the considered PSF is able to express by itself, notwithstanding the presence
of other PSFs; and Indirect Influence, i.e. the incremental influence of the
considered PSF due to its dependence on other PSFs.
The results of this study showed the relevance of considering both the direct and
the indirect influence of the nine selected factors, as well as the predominance of
the acquired component in modifying the weights of the PSFs; not considering
the latter thus leads to a biased estimation (De Ambroggi 2010).
Another interesting point regarding Performance Influencing Factors was made in
(Chang & Mosleh 2007) where, in modelling the PIF groups entering a simulation
process (the fifty PIFs were classified into eleven hierarchically structured
groups), the interdependencies between the factors were also taken into account,
improving the accuracy of the results achieved by means of the IDAC model.
This paper provides detailed discussions of two important modelling elements of
IDAC: firstly, the identification of the set of relevant PIFs; and secondly, the
important topic of PIFs’ interdependencies providing a diagram of PIFs influence
linking externally observable inputs and outputs to internal PIFs (i.e.: PSFs’
grouping).
The paper also presents a complementary discussion of the methods for
assessing the states or values of the single PIFs; and, in order to facilitate the use
of these models in a dynamic PRA framework, the qualitative PIF dependencies
were transformed into an explicit and quantitative causal model. This sets a
foundation for the integration of further evidence and an orderly improvement of
the accuracy and completeness of the causal model (Chang & Mosleh 2007).
2.3 Dynamic Event Trees as a tool to formalize system/procedure
evolution
2.3.1 Introduction
In the previous paragraphs, we introduced Dynamic Fault Trees (DFTs) as
powerful tools for modelling systems with sequence- and function-dependent
failure behaviours. However, although DFTs are very effective in capturing the
dynamic behaviour of systems, their quantitative analysis is rather troublesome,
especially for large-scale and complex DFTs.
Even though Monte Carlo (MC) simulation methods have been broadly used to
evaluate complex DFTs modelling industrial systems with arbitrarily distributed
components, and are often adopted as benchmarks to validate newly proposed
approaches, another appealing alternative to DFTs is represented by Dynamic
Event Trees (DETs), which basically consist of Event Trees in which branching is
allowed to occur at discrete points in time, and where the definition of system
states is left to the analyst.
We can say that the DET framework is extremely flexible (National & Falls 1996);
moreover, it provides a means to analyse scenario dynamics under the combined
effects of stochastic events, which makes it of particular interest to us.
2.3.2 The five characteristics of DET
A Dynamic Event Tree is defined by five key characteristics:
1. The branching set (level of detail);
2. The set of variables defining the system state;
3. The branching rules (to determine when a branching should take place);
4. The sequence expansion rules (to limit the tree expansion);
5. The quantification tools to compute the deterministic state variables (e.g.,
process variables).
The branching set is probably the key characteristic of the DET technology, since
it determines the scope and level of detail treated by the Dynamic Event Tree
(National & Falls 1996).
Most HRA follows general task analysis guidelines for event decomposition, but
the level of decomposition adopted varies significantly, since it depends on the
analyst performing the modelling, and this generates problems in the quantitative
evaluation phase; in particular (Boring 2006):
- Most simulation systems offer a highly-detailed level of task
decomposition that may be incompatible with certain HRA approaches;
- Adjustments to HEPs for dependency based on human action clusters may
be artificially inflated when used with a highly-detailed level of task
decomposition, because there is no granularity adjustment on dependency
calculations;
- No current HRA method offers guidance on the treatment of continuous
time-sliced HEP calculation as is afforded by dynamic HRA.
This variability is due to the degree of freedom left by the possibility to arbitrarily
discretize the different variables (e.g. hardware state, crew diagnosis state, crew
quality state, and crew planning state); indeed, the larger the number of states
allowed for the system variables, the more detailed the event tree will be.
As for the branching rules, even though branching is allowed at discrete time
intervals, it is not performed at every interval, because of the size the Dynamic
Event Tree would reach; instead, branching is performed only when at least one of
these two conditions arises:
- A hardware system is demanded to function (e.g. an alarm);
- A critical time point is reached in the scenario (e.g. a failure occurs).
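A minimal sketch of such an expansion is given below: branching occurs only at a predefined set of condition points (a system demand or a critical time point), and a probability cut-off plays the role of the sequence expansion rule. All branch points and probabilities are hypothetical.

```python
# Minimal Dynamic Event Tree expansion sketch. Branching happens only at
# the condition points listed, not at every time step, and sequences whose
# probability falls below a cut-off are truncated (the expansion rule).

PROB_CUTOFF = 1e-4  # sequence expansion rule: truncate negligible sequences

# (time, label, success probability): hypothetical branch points only.
BRANCH_POINTS = [
    (10.0, "alarm demanded", 0.99),
    (25.0, "operator diagnosis", 0.90),
    (40.0, "safety system actuation", 0.95),
]

def expand(index=0, prob=1.0, path=()):
    """Depth-first expansion of the tree; returns terminal sequences."""
    if prob < PROB_CUTOFF:
        return [(path + ("truncated",), prob)]
    if index == len(BRANCH_POINTS):
        return [(path, prob)]
    _, label, p_ok = BRANCH_POINTS[index]
    ok = expand(index + 1, prob * p_ok, path + ((label, "success"),))
    ko = expand(index + 1, prob * (1 - p_ok), path + ((label, "failure"),))
    return ok + ko

sequences = expand()
for path, prob in sequences:
    print(round(prob, 6), path)
```

Note that truncated sequences retain their probability, so the tree still conserves total probability; a real tool would also carry the process-variable state along each branch.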
As for quantification tools, two types are typically used in a Dynamic Event
Tree analysis: the first type includes the tools used to predict the dynamic
behaviour of plant process variables for each scenario in the tree, while the
second type includes those necessary to develop the conditional branching
(National & Falls 1996).
For each branching point, the quantification process involves four steps:
- Evaluation of crew's cognitive state and of the nature/quality of the
information regarding the plant available to the team;
- Qualitative evaluation of the conditional likelihood of each branch;
- Initial determination of the conditional probability for each branch;
- Comparison of the conditional probabilities for similar situations in
different parts of the tree, and adjustment.
When it comes to constructing an event tree, a probability value must be
determined at each branch of the tree. This value can derive from expert
judgment or from data collected in databases adaptable to the situation of
interest; but, as long as the mental processes followed in decision-making or in
the actual performance of the task are not considered, important sources of
information may be lost.
Clearly, this kind of analysis has the same drawbacks attributed to all studies
making extensive use of expert judgment. However, the last of the steps
mentioned above (i.e. comparing similar branches) greatly facilitates the
assessment, because it enables the analyst to use information concerning the
relative likelihoods of scenarios and to double-check the results obtained and
proposed.
2.3.3 Industrial applications of DET
The applications of Dynamic Event Trees have spread largely after the birth and
improvement of simulation tools taking advantage of the various peculiarities of
this technology, such as flexibility and completeness.
One of the first studies in this direction was performed by Gertman in 1996, who
demonstrated that the DET analysis method, specifically DETAM (the Dynamic
Event Tree Analysis Method), can be used in a realistic analysis to treat context,
cognition, and crew performance. This work is particularly interesting for us
because it introduces the Dynamic Event Tree property that allows one to bypass
the problem of resource limitation, an issue that we may instead encounter in
Cognitive Event Trees.
This last category of Event Trees is useful for presenting potential crew decisions
and actions, and is quantified in the same way as THERP Event Trees, without
explicitly representing current plant conditions or modelling potential stochastic
variations in certain PSFs (e.g., stress) (National & Falls 1996).
One common argument raised against the use of DETAM and related methods
concerns the complexity of the approach; that is why one of the aims of the
mentioned paper was to demonstrate that the method can be practically applied
to a realistic accident scenario.
This approach, in which the dynamic evolution of possible scenarios is modelled
explicitly, allows the treatment of various crew states and their interaction with
the plant, as well as the treatment of various Performance Shaping Factors
within the physical, cognitive, and psychological context of the evolving scenario
(National & Falls 1996).
The study concluded that DETAM is a useful technique for realistically modelling
crew/plant interactions during complex accident scenarios, enabling the treatment
of cognitively based errors of commission and omission; it can also deal with
situations where cognition is not a key factor, although it is less efficient than
conventional action-based models for such applications.
Finally, although a manual implementation of DETAM is possible, a software
implementation was suggested in order to improve the efficiency of the analysis
and to gain in terms of both reliability and consistency.
The paper “A probabilistic cognitive simulator for HRA studies (PROCOS)”
(Trucco & Leva 2007) described the development of a simulator for approaching
human errors in complex operational frameworks, integrating the quantification
capabilities of the so-called ‘first-generation’ human reliability assessment
methods with a cognitive evaluation of the operator.
A scenario analysis was also performed, setting the PSF values through the
judgments of commissioning personnel regarding three selected standard
situations: the optimal-case scenario, the nominal-conditions scenario, and the
worst-case scenario.
The proposed simulator (PROCOS) allowed the analysis of both error prevention
and error recovery; the comparison between the results obtained through the
proposed approach and those of traditional HRA techniques pointed out the
capability of the simulator to provide coherent and accurate analysis.
As we mentioned before in our discussion, there is a fundamental difference
between simulation and simulator data. In particular, we want to recall the fact
that simulators utilize virtual environments with human performers (Bye et al.,
2006); so, since simulators employ real humans, they are able to capture the full
spectrum of human PSFs for a given task, whereas simulations must rely on those
PSFs for which a virtual modelling is possible.
Speaking specifically about PROCOS's structure, the simulator is made up of the
following components (Trucco & Leva 2007):
- The operator module, which comprises the cognitive flow charts for
action execution and the recovery phase, plus the error types/error modes
matrix. The critical underlying feature of this module is the
mathematical model for the decision-block criteria of the flow charts;
- The task execution module, based on the event tree referred to the
procedure that has to be simulated;
- The human–machine interface module, made up of tables regarding
the hardware state and its connection with the operator actions (task
executed or error modes committed).
The inputs required for the simulation process are:
- Set of PSFs affecting the task to be simulated;
- Hardware involved in the execution of the task and its possible states;
- Steps of the task (task analysis);
- Set of error modes to be considered.
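The four inputs listed above can be gathered into a single configuration object; the structure below is a hypothetical sketch of such an input set, with field names invented for illustration, not PROCOS's actual interface.

```python
# Hypothetical data structure mirroring the four simulation inputs listed
# above (PSF set, hardware and its states, task steps, error modes).
# Field names are assumptions, not part of any tool's real interface.

from dataclasses import dataclass

@dataclass
class SimulationInput:
    psfs: dict[str, float]                 # PSF name -> assessed level
    hardware_states: dict[str, list[str]]  # hardware item -> possible states
    task_steps: list[str]                  # ordered steps from task analysis
    error_modes: list[str]                 # error modes to be considered

case = SimulationInput(
    psfs={"stress": 2.0, "time_pressure": 1.5},
    hardware_states={"alarm": ["working", "failed"]},
    task_steps=["read indication", "diagnose", "actuate valve"],
    error_modes=["omission", "commission"],
)
print(len(case.task_steps), "steps,", len(case.error_modes), "error modes")
```

Collecting the inputs in one typed object makes a simulation run reproducible: the same object can be stored, compared across scenarios, and fed to the operator, task-execution, and interface modules.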
Only a few years after this study, the Universidad Politécnica de Madrid, in
collaboration with the Consejo de Seguridad Nuclear, developed the so-called
SIMulator of PROCedures (SIMPROC).
This tool aims at simulating events related to human actions and is able to
interact with plant simulation models; moreover, SIMPROC helps the analyst
quantify the importance of human actions in the final plant state (Gil et al.
2011).
This software tool was coupled with a software package (SCAIS), a simulation
system developed to support the practical application of Integrated Safety
Analysis (ISA) methodologies, able to generate Dynamic Event Trees stemming
from an initiating event, based on a technique that efficiently simulates all the
branches, taking into account the different factors related to headers which may
affect the dynamic plant behaviour in each sequence (Gil et al. 2011).
The paper also proposes a methodology to computerize an Emergency Operating
Procedure (EOP):
1. Obtain a task flow diagram from the procedure, considering the procedure
instruction of interest for the simulation;
2. Identify, one by one, each task action with a computerized instruction;
define the variables needed to manage the information related to the
instructions; and identify the plant systems, components or physical
parameters involved;
3. Computerization.
The computerization phase can be carried out through the following steps:
- Specify the modelling detail level for plant systems and components;
- Clarify the plant physical parameters relevant to the procedure;
- Conduct the computerization of actions over components, to control
physical parameters within a range.
The computerized procedure obtained should have the same functionality as the
hardcopy procedure; this can be checked by comparing the original task flow
diagram with the computerized task diagram (Expósito et al. 2008).
Ultimately, this work demonstrated that SIMPROC is an adequate tool to integrate
the simulation of the plant dynamics with the effects of actions performed by
operators while following the operating procedures.
During the literature review, another variant of the DET for dealing with dynamic
modelling emerged: the Sequence Tree.
A Sequence Tree is a type of branch model that categorizes the plant condition by
considering the plant dynamics. Using the Sequence Tree model, all possible
scenarios requiring a specific safety action to prevent core damage can be
highlighted, and the success conditions of the safety actions performed during a
complicated situation, such as a combined accident, can also be identified. In
short, we can say that Sequence Trees are the qualitative version of DETs; in fact,
if the initiating event frequency under a combined accident can be quantified, the
sequence tree model can be translated into a Dynamic Event Tree model based on
the sampling analysis results (Gyung et al. 2016).
Finally, the study demonstrated that, through the utilisation of Sequence Tree
models, all the theoretically possible undesirable sequences under a specific
combined accident situation were identified and systematically categorized using
the modelling tool suggested in the paper.
In “A Dynamic Event Tree informed approach to probabilistic accident sequence
modelling: Dynamics and variabilities in medium LOCA“, the discussion goes
back to the quantitative dimension. Here the Human Error Probabilities are
calculated as the sum of the Diagnosis Error Probability and the Execution Error
Probability; the Diagnosis Error Probability, in turn, is obtained by multiplying
the basic HEP of a diagnosis error by the relative weighting factor.
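As a small worked illustration of this decomposition (the numeric values below are placeholders of our own, not figures from the paper), the calculation can be sketched in Python:

```python
# HEP decomposition described above: the total HEP is the sum of the
# Diagnosis Error Probability and the Execution Error Probability, where
# the diagnosis term is the basic diagnosis-error HEP scaled by its
# weighting factor. All numeric values are illustrative placeholders.
def human_error_probability(basic_diag_hep, weighting_factor, exec_ep):
    diagnosis_ep = basic_diag_hep * weighting_factor
    return diagnosis_ep + exec_ep

hep = human_error_probability(basic_diag_hep=0.001, weighting_factor=2.0,
                              exec_ep=0.003)
# hep ≈ 0.005
```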
In the work of Rao et al. (2015), the Dynamic Event Tree framework is applied in
order to support the definition of success criteria; the following PSFs were
considered: Man-machine interface (e.g. alarm), Decision burden, Procedure, and
Education/training.
In particular, the aim was to assess the impact of variabilities and scenario
dynamics on success criteria, and ultimately on the results of the PSA model of
the scenario, i.e. on the core damage frequency (CDF) and dominant contributors
(Rao et al. 2015). This was made possible by the fact that DET framework
constitutes a valuable means to analyse scenario dynamics under the combined
effects of stochastic events.
Adopting this kind of approach, it became evident that a few assumptions
regarding the operators' path choice are necessary; those assumptions depend on
the safety systems' state and on plant parameter values and, in some cases, can
even be forced by the process dynamics.
This work highlighted not only the fact that the complexity of an accident scenario
dynamics arises from the interactions between the plant and operator responses,
but also that the variants of the scenario could have very different dynamics
depending on the scenario's characteristics, which is in line with the
conclusions of the other papers examined.
Indeed, the analysis outcomes proved consistent with those found in the
literature and those obtained with analytical approaches, confirming the
validity of the method.
To conclude the discussion of dynamic tools, it is important to mention that,
even though Markov models are generally adopted to solve dynamic gates in
Dynamic Fault Trees, Monte Carlo simulation becomes more suitable as the
system's complexity increases; during the literature research, a validation of
this statement, and of the quality of the resulting estimates, was found in
“Quantitative analysis of dynamic fault trees using improved Sequential Binary
Decision Diagrams” (Rao et al. 2009).
2.3.4 Gaps in literature
The review of the extant literature revealed that no specific simulation
software tools have been developed for Dynamic HRA in healthcare so far;
therefore, the basic guidelines found in the literature were adopted in order to
properly model an ad hoc tool for our application.
Moreover, models and values documented in the literature to support the
different phases of a Dynamic HRA were assumed as a starting point, but they
required critical evaluation and selection in order to be transferred to
healthcare from their domain of origin. For example, existing knowledge on
recovery failure probabilities, obtained from industrial applications, could not
be adopted directly, since the context of error recovery is totally different.
Another aspect missing from previously documented studies is the dynamic
quantification of recovery probabilities; in fact, even in those studies where
recovery was taken into consideration (Jang et al. 2014; Jang, Jung, et al.
2016c; Jang, Jung, et al. 2016a) and analysed through PSFs/RIFs (such as Time
urgency, No supervision, and High level of human-machine interface), the latter
were considered only statically, hindering reliable and, most of all, complete
evaluations of the evolving scenario.
Furthermore, the results of the aforementioned studies strongly depend on the
statistical basis on which the whole discussion was built; this requirement
strongly hampers the implementation of these techniques, especially when it
comes to introducing additional data regarding recovery, since producing a
reliable database always requires time and, in some cases, is an out-of-reach
requirement.
2.3.5 Further developments in the Healthcare sector
According to National & Falls (1996) and Gil et al. (2011), the step-wise
procedure to perform a dynamic risk analysis is:
1. Identify the scenario (mental models, interfaces SHELL, IFs, …);
2. Generate a task flow diagram including recovery paths;
3. Define the tasks/subtasks and the variables involved (also
hardware, diagnosis, quality and planning state can be identified
for each node);
4. Choose the HRA method for evaluating the HEP of each
task/subtask;
5. Carry out the simulation, specifying the system's level of
detail.
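The five steps above can be sketched as a minimal software skeleton; all class and function names below are our own illustrative assumptions, not an established tool or API:

```python
# Hypothetical skeleton of the five-step dynamic risk analysis procedure.
import random
from dataclasses import dataclass, field

@dataclass
class Subtask:
    name: str
    hep: float                                    # step 4: HEP from the chosen HRA method
    recovery_paths: list = field(default_factory=list)  # step 2: recovery branches

@dataclass
class Scenario:
    name: str                                     # step 1: scenario definition
    tasks: list = field(default_factory=list)     # steps 2-3: task flow and variables

def simulate(scenario, runs=10_000, seed=0):
    # Step 5: Monte Carlo run of the task flow at the chosen level of detail.
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(runs)
        if any(rng.random() < t.hep for t in scenario.tasks)
    )
    return failures / runs
```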
For almost every one of these points there is still a lot of work to do.
Regarding the first point, it could be interesting to start developing a set of
widely applicable scenario templates, together with the relative modelling and
computational software. The standardization, under proper hypotheses, of well-
defined scenarios would pave the way to wider-scope tools while still preserving
flexibility. In this way we would privilege the customization of the
computational part of the structure (e.g. the DET ramification and the task
descriptions), preserving that of the generic tool. It would thus be possible to
foster a reduction in the number of existing models, each covering a wide range
within a single field of application, instead of following the current trend of
generating ad hoc solutions for each possible system/case.
Regarding the second point, not much can be done to automate the generation of
trees, but it would be of great benefit to step up efforts to create
standardized trees for all operations subject to this kind of study, as we are
proposing for Surgery, in order to facilitate the analysis. This would help
promote safe procedures and the corresponding safety measures in a coherent and
effective way.
As mentioned before, in the paper «Modelling human reliability analysis using
MIDAS» (Boring, 2006) the initial efforts to translate HRA to human
performance simulation were investigated.
This process has required a rethinking of several fundamentals of HRA, ranging
from dependency to the characterization of PSFs; the key point to consider when
approaching dynamic HRA certainly concerns the role of PSFs.
While static HRA models do not consider that a change in one PSF may affect
other PSFs as the event evolves, dynamic HRA must account for PSF latency (i.e.
once activated, a PSF retains some activation across tasks) and momentum (i.e.
the propensity of the antecedent PSF to change).
This whole topic has much to do with the third of the suggested steps for Dynamic
HRA. In fact, if we want to investigate the evolution over time of a process we
cannot neglect the way PSFs change over time. In this sense, it is fundamental to
proceed in a systematic manner and it would be also useful to standardise the types
of PSF updates to be considered: for instance, the three behaviours identified by
Boring (2006) and cited in previous paragraphs could be adopted: Static
Conditions, Dynamic Progression, and Dynamic Initiator.
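A minimal sketch of how these three update behaviours could be encoded follows; the latency and drift parameters are purely illustrative assumptions, not values from Boring (2006):

```python
# Illustrative encoding of the three PSF update behaviours: Static
# Conditions, Dynamic Progression, and Dynamic Initiator. The latency and
# drift parameters are assumptions chosen for demonstration only.
def update_psf(value, behaviour, task_step, latency=0.8, drift=0.1):
    if behaviour == "static":
        # Static Conditions: the PSF keeps its initial level across tasks
        return value
    if behaviour == "progression":
        # Dynamic Progression: the PSF drifts gradually as the scenario evolves
        return value + drift * task_step
    if behaviour == "initiator":
        # Dynamic Initiator: an activated PSF decays task by task but
        # retains some activation (latency)
        return value * latency ** task_step
    raise ValueError(f"unknown behaviour: {behaviour}")
```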
The introduction of the time domain and of the system's dynamics into HRA is the
main challenge in terms of innovation of safety evaluation techniques. There is
still much work to do in this sense, both in the formalization of dependencies
and links between tasks and PSFs, and in quantitative estimation, i.e. the
readjustment of the algorithms involved in the different HRA techniques.
The choice of the HRA technique is a very delicate point; obviously, no single
technique prevails over the others in all respects, but each has its own pros
and cons.
An interesting development for describing the tight coupling of events in real
life would be an extensive investigation of the dependencies relating tasks and
PSFs.
During our literature search four aspects were identified as relevant in order to
determine the level of dependency between tasks: Spatial closeness, Checking
systems’ similarity, Similarity of the gesture, and Time coupling.
The concept of “Spatial closeness” is easily understandable: it generally refers
to the area where the robotic arms are operating and, we could say, to the
relative level of precision required in order not to affect the work already
done or coming next.
The notion of “Similarity of the checking systems” refers to the similarity of
the clinical (or other) parameters able to influence a critical patient
condition or the occurrence of a failure. When this kind of dependence is
present, it can be difficult to identify the failing subtask, and hence the
causes of the failure, if the error is not immediately detected.
The “Similarity of the gestures” involved in two consecutive subtasks can lead
to a double error, due to an increase in performance anxiety or, possibly, to
slips, since having just performed the same action could induce the operator to
take an easy-going approach.
“Time coupling”, instead, covers the length of the time interval that can elapse
between two tasks. If the tasks are considered coupled, no delay, or only a very
short one, is allowed between the end of the first action and the beginning of
the second; under this condition, the failure of one of the two is made more
probable by the fact that they are in sequence.
After having validated that these, and only these, are the meaningful aspects to
be considered in a dependency analysis for a surgical procedure, experts could
be asked, for each of these parameters, to express how the outcome and the
presence of a certain subtask can affect the probability of occurrence and/or
failure of another subtask, so as to create a dependency tree and allow the
distinction between Complete, High, Moderate, and Low dependency.
Having in mind the specific meaning of the dependency factors and the Recovery
Influencing Factors (RIFs), we can think about reflecting this result in the HEP
formulation by introducing a dependency coefficient “k”, according to the model
adopted in THERP for treating dependency. This can be done starting from the
discussion by Jang et al. (2016).
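The THERP dependency model referred to here can be sketched as follows; the conditional-HEP formulas are the standard THERP ones, while the wrapper function is our own illustration:

```python
# THERP conditional HEP given the dependency level, written in the generic
# form (1 + k*N) / (1 + k), where N is the nominal HEP. The k values
# reproduce the standard THERP equations: Low (1+19N)/20,
# Moderate (1+6N)/7, High (1+N)/2, Complete 1.0.
THERP_K = {"low": 19, "moderate": 6, "high": 1, "complete": 0}

def conditional_hep(nominal_hep, level):
    if level == "zero":
        return nominal_hep  # zero dependency: the HEP is unchanged
    k = THERP_K[level]
    return (1 + k * nominal_hep) / (1 + k)
```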
In the following lines, we present a set of formulas combining THERP's approach
to dependency (through the introduction of “k”) with the extremely flexible and
widely used HEART algorithm. This combination is of particular interest from our
point of view since the latter, i.e. HEART, is the methodology we selected for
our study, and in general for the application of HRA to surgery, as will be
illustrated later on; the following formulas are suggested for performing the
risk analysis:
RFP = (1 − D0) · ∏(i≠0) [ (1 + k·(1 − Ri·Ei)) / (1 + k) ]   (1)
Ri = 1 − ANLUi = 1 − (NHU · ∏(j=1..n) Assessed_RIF_Affect_j)   (2)
Assessed_RIF_Affect_i = [(RIF_Multiplier_i − 1) · PoA_i] + 1   (3)
Equation (1) gives the Recovery Path Probability (RFP), where the subscript i
denotes the task under evaluation, “R” and “E” are, respectively, the recovery
and error probabilities associated with that task, while “D0” represents the
detectability of the failure from which the recovery path starts in the first
place. In particular, “E” can be evaluated through the Assessed Nominal
Likelihood of Unreliability (ANLU), as prescribed by HEART.
Finally, for what specifically regards the application of HRA techniques to
Healthcare, it is evident from the literature research presented that all the
previous aspects remain unexplored; moreover, there is still a need for
validated databases, suitable techniques, well-defined procedures, and a
taxonomy.
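Based on our reading of formulas (1)-(3) above, the calculation can be sketched in Python; the numeric inputs in the usage lines are illustrative placeholders, not elicited values:

```python
# Sketch of the HEART/THERP-based recovery-path evaluation, following our
# reading of formulas (1)-(3): Eq. (3) assesses each RIF's affect,
# Eq. (2) derives the recovery probability R from the ANLU, and Eq. (1)
# combines the per-task terms with the dependency coefficient k.
from math import prod

def assessed_rif_affect(rif_multiplier, poa):
    # Eq. (3): [(RIF_Multiplier - 1) * PoA] + 1
    return (rif_multiplier - 1.0) * poa + 1.0

def recovery_probability(nhu, rifs):
    # Eq. (2): R = 1 - ANLU, with ANLU = NHU * product of assessed affects
    anlu = nhu * prod(assessed_rif_affect(m, p) for m, p in rifs)
    return 1.0 - min(anlu, 1.0)

def recovery_path_probability(d0, tasks, k):
    # Eq. (1): RFP = (1 - D0) * prod over tasks of (1 + k*(1 - R*E))/(1 + k)
    return (1.0 - d0) * prod(
        (1.0 + k * (1.0 - r * e)) / (1.0 + k) for r, e in tasks
    )

# Illustrative inputs (placeholders): NHU and RIF multipliers per HEART,
# k = 6 (moderate dependency per THERP), D0 = 0.2.
r = recovery_probability(nhu=0.003, rifs=[(17, 0.4), (3, 0.2)])
rfp = recovery_path_probability(d0=0.2, tasks=[(r, 0.05), (r, 0.1)], k=6)
```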
2.4 Study objectives
The primary objective of this study was to create a prototype for a quantitative
risk assessment methodology able to take into account the main recovery paths
relative to a specific surgical procedure, trying to combine some of the
developments proposed from both a recovery analysis and a dynamic risk
assessment point of view.
Given that only standard procedures are available in the medical literature, and
that inexperienced surgeons can only rely on good sense when a failure occurs in
the operating theatre, we first outlined the most efficient and frequent
recovery paths through expert interviews, limited to those tasks that previous
studies had already identified as the most critical ones.
Even though it would be of great interest to have a DET considering the dynamics
of the scenario involved through dependencies and PSFs’ evolution for the
procedure under analysis, this work only addressed the evaluation of a traditional
DET due to the complications involved in the introduction of such characteristics.
The final probability is relative to the specific recovery path and is assessed
through an ad hoc modified HEART technique. The assumptions and the algorithm
involved are commented on in the Study Methodology chapter, and the full script
is available in Appendix 5.
Due to the high complexity and heterogeneity of the surgical process, we were
forced to rely on qualitative and subjective judgements elicited from a sample
of expert surgeons, without the possibility of validating these data through
statistical analysis of real observations, as suggested in the literature.
The dynamic aspect of our investigation is covered by a specific simulation
tool; in fact, we provide a model able to reproduce the evolution of the
procedure along the recovery paths by launching Monte Carlo simulations, making
the PSFs and probabilities range between lower and upper limit values according
to triangular or rectangular distributions, and analysing their impact on the
recovery branches' probability and outcome grade.
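A minimal sketch of this Monte Carlo sampling scheme follows; all bounds and distribution parameters are placeholder assumptions, not the values used in the actual tool:

```python
# Illustrative Monte Carlo sampling of the kind described above: a PSF
# multiplier is drawn from a triangular distribution and a nominal error
# probability from a rectangular (uniform) one, both between assumed
# lower and upper bounds; the branch probability distribution is then
# summarised. All numeric bounds are placeholders.
import random

def sample_branch_probability(rng):
    psf = rng.triangular(1.0, 6.0, 3.0)        # low, high, mode (assumed)
    nominal_hep = rng.uniform(0.001, 0.01)     # rectangular bounds (assumed)
    return min(nominal_hep * psf, 1.0)

rng = random.Random(42)
samples = [sample_branch_probability(rng) for _ in range(10_000)]
mean_p = sum(samples) / len(samples)
```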
Another interesting element was the inclusion of a path-clustering step,
considering that each path is characterized by a certain «level of success»,
which depends on the patient's final condition (outcome). To this end, we asked
experts to identify the level of Patient Outcome according to the Clavien-Dindo
classification (Dindo et al., 2017; Mitropoulos et al., 2013).
To test the proposed methodology and tool in a real case, and to demonstrate how
they should be implemented, they were applied to a specific Robotic Radical
Prostatectomy procedure (BA-RARP), which could also serve as a reference for
future validations.
CHAPTER 3: THE EMPIRICAL SETTING
3.1 Introduction
In the last decades, interest in patient care and safety has been growing in the
face of the enormous progress of medicine and the increasing awareness of the
possible drawbacks of mismanaging preventable accidents.
Together with this technological growth we have seen the birth of new surgical
techniques such as Minimally Invasive Surgery (MIS).
MIS is mainly characterized by the increasing support of technology for the
surgeon; fascinated by the promise of less invalidating surgeries, with all the
attendant benefits, the scientific world has put much effort into developing the
interface between equipment and surgeons in the most effective way.
This evolution not only concerns physical and software modifications but also
requires a change in the organisational aspects of the operating room: from team
management to policies.
Modern surgery not only seeks to treat the patient, but at the same time tries to
minimize the possible consequences from both the patient and the hospital point
of view.
The main objective is to minimize the repercussions on the patient by being as
little invasive as possible, which also implies shorter hospital stays and fewer
lawsuits, resulting in significant savings.
From this last consideration, we see that the improvement of patients'
conditions goes hand in hand with the optimization of resources, and the results
obtained in recent years definitely justify the interest in the subject, even
though there are conflicting opinions on the size of the improvement.
In particular, the MIS market has been catalysed by the DaVinci robotic system,
which is nowadays a worldwide accredited excellence in the Advanced Healthcare
Systems sector. Italy too has ridden the wave of robotic surgery, with
considerably good results, and is now one of the leading countries in Europe. In
the following sections we will discuss the aforementioned topics in detail,
starting from the big picture and ending with the specific application of our
interest.
3.2 Minimally Invasive Surgery
Since the nineteenth century, technological evolution has had an incredible
impact and a wide range of applications in the medical-surgical sector. From its
birth, Endoscopy has radically changed its role in the healthcare landscape,
moving from a purely diagnostic tool to a fully-fledged surgical technique.
The advent of this technique represented a real revolution in the history of
surgery and, together with the huge evolution of instrumentation and
anaesthetics, plus the discovery of X-rays, constituted a big step forward in
terms of patient care and, most of all, produced a drastic reduction of risk in
surgery.
As a consequence, traditional open surgery has become an obsolete procedure,
adopted only in very extreme conditions and not advisable in most cases, giving
way to less invasive and safer techniques.
Minimally Invasive Surgery includes endoscopic, laparoscopic and more recently
robotic surgery, which requires a separate discussion.
The first application of Endoscopy as a surgical technique dates back to 1987,
when, in Lyon, Philippe Mouret performed the first successful laparoscopic
cholecystectomy on humans; from that moment on, the evolution of MIS has
proceeded relentlessly, and its development continues to be fostered by the
abundant scientific evidence of the benefits of this discipline, especially in
Oncology.
Laparoscopy is mainly used to operate on organs contained in the abdominal and
pelvic cavities; thoracoscopy, on organs contained in the thoracic cavity; while
interventions within hollow organs include transanal, transoesophageal and
transgastric surgeries.
The key to this kind of surgery is minimizing the interference between the
instrumentation and the organism: the organs of interest are accessed through
small incisions by means of specific instruments and video systems, thus
minimising the number and severity of surgical traumas on the patient.
In any case, the significant advantages supporting the spread of this method are
related not only to the minor surgical interference with the body, which results
in a quicker and less painful postoperative course, lower exposure to infections
and faster rehabilitation of the patient, but also to the aesthetic aspect,
which is gaining more and more importance in helping patients psychologically
overcome the experience.
As for the costs of the operation, all things considered they are not much of a
constraint. In fact, for MIS in general, the expense, in terms of time and money
invested in the training necessary to use these technologies, is definitely
offset by the considerable economic savings due to the shorter surgery duration
and hospital stay, and to the reduction of complications both during and after
the procedure.
Since Robotic Surgery also implies the purchase of a robot, a more in-depth
discussion is proposed in the dedicated Section 3.3.
The general benefits of MIS can thus be summarised as:
- Small incisions;
- Less mental and physical impact for the patient;
- Less risk of infection;
- Less wound surgery complications;
- Shorter hospital stays;
- Shorter surgery duration;
- Less trauma for the patient;
- Less pain;
- Less blood loss;
- Smaller scars.
However, Minimally Invasive Surgery is not a totally risk-free practice:
intraoperative complications, even very serious ones, are possible, mostly due
to surgeons' initial lack of experience in using complex technological devices,
poor coordination between team members, or inadequate equipment and workspace
ergonomics.
The ergonomic aspect has proven to be very important for this technology:
several studies show that the majority of laparoscopic surgeons report neck,
back, shoulder or hand pain, and it has been reported that 87% of them regularly
experience musculoskeletal pain during or after laparoscopy (Zihni et al.,
2014).
Not least, the use of minimally invasive instruments (e.g. trocars) denies
surgeons the tactile feedback of the operating gesture (Cao & Rogers, 2006).
These limitations can be overcome through training activities involving the use
of PC simulators, box trainers, educational videos, etc. (Guzzo & Gonzalgo,
2009). The development of these simulation supports aims to reduce, as much as
possible, the gap between more experienced surgeons and beginners, besides
minimizing the intra-operative complications associated with MIS in general.
Nevertheless, surgeons often agree in defining laparoscopic surgery, and MIS in
general, as more stressful than open surgery, due to the visual and instrumental
obstacles, the higher level of concentration required and the demanding, as well
as necessary, training program (Guzzo & Gonzalgo, 2009; Berquer et al., 2002).
In conclusion, the most significant disadvantages of this technology can be
summarized by the following aspects:
- The surgeon needs to move the instruments watching his gestures through
a monitor;
- Expensive and special equipment is required;
- Maximum hand-eye coordination is required, worsened by the fact that the
laparoscope is usually operated by an assistant;
- The coordination of the operating surgeon is incredibly disturbed by
external factors;
- The freedom of movement is strongly limited;
- The tactile sensation is gravely undermined or nullified;
- Ergonomic problems;
- MIS requires additional safety concerns and precision requirements
compared to traditional open surgery;
- Only 2D visual feedback is available for Laparoscopic applications.
In view of those considerations, it is necessary that trainees and surgeons, before
approaching MIS for the first time, acquire skills in performing surgical
procedures involving a minimally invasive access and get familiar with handling
the dedicated instrumentation, which is totally different from the one used in
traditional open surgery (Hamad, 2010). In Figure 7 we can see that the
proportions of use of MIS, Robotics and Open procedures in different settings
vary considerably, and we can also appreciate the fact that Prostate Surgery is
the case in which Robotics has its widest field of application.
3.3.1 DaVinci Robot
The DaVinci Robot enables surgeons to operate with enhanced vision, precision,
dexterity and control compared with open, laparoscopic or endoscopic surgery.
This is mostly due to its 3D high-definition vision system, with magnification
up to 15x, and to special wristed instruments that bend and rotate far more than
the human wrist. This system incorporates the patented EndoWrist technology,
which reproduces the degrees of freedom of the surgeon's forearm and wrist
during the operation, providing up to 7 degrees of freedom (Ficarra et al.,
2010).
Figure 7: Proportion of use of MIS, Robotics and Open procedures in different settings
The DaVinci System allows great versatility of movement, providing access to
narrow and deep anatomical spaces (not always possible with laparoscopy), and
offers a level of surgical accuracy that other techniques cannot match. In
addition, the 3D visualisation, the freedom of instrument movement and the
intuitiveness of the surgical motion are able to restore hand-eye coordination,
which is usually lost in laparoscopic surgery (Al-Naami et al., 2013).
The DaVinci robot is the most advanced Minimally Invasive Surgery system on the
market, and it is available in two versions:
- Da Vinci Si: arrived on the market in 1999 and considered the gold
standard for medium complexity procedures in urology, gynaecology and
general surgery in a single quadrant;
- Da Vinci Xi: an innovative system, introduced in Italy in 2014; it is the
ideal tool for highly complex surgery and multi-quadrant surgical fields,
allowing extreme freedom of movement. These features make it suitable
for operations in the field of urology, gynaecology and general complex
surgery, maximizing anatomical access and guaranteeing a 3D-HD vision.
The Da Vinci surgical robotic system is a master-slave remote-controlled system,
consisting of a console from which the operating surgeon (master) directs the
robotic surgical arms (slave) (Ficarra et al., 2010).
One of the robotic arms holds the video scope, which provides binocular vision of
the operative field, while the others hold instrument adapters to which specialised
robotic instruments are attached. All instruments have articulated elbow and wrist
joints, enabling a range of movement which mimics the natural motions of open
surgery.
The surgeon directs the robotic arms using master handles, located below the
video console, which transmit the exact motions of the surgeon's hands to the
robotic arms, filtering hand/arm tremor and providing feedback.
Additional video monitors can be positioned inside the operating room to
facilitate the work of the rest of the staff at the operating table.
Figure 8: Typical set-up of robot system in operating room (a) sketch (b) real-life
The main components of the Da Vinci robotic system are the following:
➢CONSOLE
The console provides the computer interface between the surgeon and the surgical
robotic arms. It is positioned outside the sterile field and represents the
control centre of the system, where the surgeon controls the 3D endoscope and
the EndoWrist instrumentation by means of two manipulators (masters) and the
pedals.
As mentioned before, the surgeon's hand movements are digitised and transmitted
to the robotic arms, which perform identical movements in the operative field.
For safety reasons, the robotic arms are automatically deactivated whenever the
surgeon's eyes leave the display. The pedals, on the other hand, are used to
activate the electrocautery and ultrasonic devices, and to relocate the master
handles when necessary.
The surgeon can also choose to switch from the full-screen view to a multi-image
mode, which shows the 3D image of the surgical field together with two other
images (ultrasound and ECG), providing auxiliary inputs.
➢ROBOTIC ARM CART
The robotic arm cart is placed beside the patient at the operating table,
holding the robotic arms on a central tower; one arm holds the video scope and
the others serve as supports for the instruments, which are applied to the
robotic arms' ends through reusable trocars.
The DaVinci system makes use of the remote centre technology, defining a fixed
point in space around which the robotic arms move (Tooher & Pham, 2014). This
technology allows the system to manipulate instruments and endoscopes within
the surgical site while minimizing the force exerted on the patient's body. It
is also possible to perform manual positioning, in terms of height (relative to
the base), advancement, and rotation of the group of arms up to a maximum of
270°.
➢VISION CART
It contains the central image-processing unit and includes a 24-inch
touchscreen, an ERBE VIO dV electrosurgical unit for delivering unipolar and
bipolar energy, and adjustable shelves for optional auxiliary surgical
equipment, such as insufflators (the DaVinci Xi system also includes full-HD
video).
➢SURGICAL INSTRUMENTS AND ENDOWRIST™
The EndoWrist® devices of DaVinci Xi have a diameter of 8mm and a length of
about 60cm. They are equipped with a wrist that allows a freedom of movement
on 7 axes and a rotation of almost 360°, mimicking the natural motions available
in open surgery.
There is a range of different tools available: needle holders, graspers,
scissors, small clip appliers, micro-forceps, long-tip forceps, ultrasonic
shears, cautery with spatula, scalpel cautery, bipolar dissectors of different
types and so on (Tooher & Pham, 2014); each of these can be used up to ten times
before being replaced.
3.3 Robotic Surgery
Robotic surgery represents the latest frontier of Minimally Invasive Surgery.
Thanks to robotics it is possible to overcome many of the limitations observed in
the laparoscopic case, extending the benefits of Minimally Invasive approach to
extremely complex surgery procedures.
The first surgical robot prototype was developed in the 1980s by the American
Army and NASA, but only in 1995 was it further developed by two American
companies, Computer Motion (Goleta, CA) and Intuitive Surgical Inc. (Mountain
View, CA).
These two companies merged in 2003, forming Intuitive Surgical Inc., which
cornered the market thanks to the DaVinci® system.
The DaVinci robotic system received FDA approval in 2000 and was rapidly adopted
by hospitals all over the United States and Europe for the treatment of a wide
range of conditions. Up to June 30, 2014, 3,102 robotic systems had been
installed worldwide: 2,153 in the United States, 499 in Europe, 183 in Japan,
and 267 in the rest of the world.
The extent of robotic surgery practice varies widely due to the variety of
factors implicated (e.g. physician training, equipment availability and cultural
factors).
Over the years several applications have been developed in oncology,
gynaecology, orthopaedics, maxillofacial, thoracic, paediatric, ophthalmology
and cardiac surgery.
Robotic surgery, or robot-assisted surgery, allows doctors to perform, with more
precision, flexibility and control, many types of complex procedures that may be
difficult, or impossible, with other methods (Al-Naami et al., 2013).
As already mentioned, robotic surgery aims to overcome the limitations of
laparoscopic surgery: for instance, flat two-dimensional vision, inconsistencies
in instrument movements, unnatural surgeon positions, dissociation between
vision and instrument control, and the inability to carry out micro-sutures.
Thanks to a computer and a remote handling system, the surgeon is able to
reproduce the movements of the human hand in the surgical field (Al-Naami et
al., 2013).
The most widely used clinical robotic surgical system is composed of one camera
arm and several mechanical arms with surgical instruments attached to their
ends. The centre of action, so to speak, is the desk of the first surgeon, the
one responsible for operating the robot: a computer console, detached from the
operating table, from which he/she controls the robotic arms relying on the
high-definition, magnified, 3-D view provided by the cameras. Indeed, it is from
this location that the surgeon leads the rest of the team members who assist the
surgery at the operating table (Binder et al., 2004; Al-Naami et al., 2013).
One of the most important, and probably the most appealing, aspects of the
adoption of this technique is certainly the cost saving it can generate.
Robotic surgery requires a large initial investment (in the order of US$1
million to US$2 million), ongoing annual maintenance (approximately US$250,000)
and disposable or limited-use instruments (e.g. shears, needle drivers,
graspers, forceps, with an average cost of approximately US$2,000 per
instrument), which are replaced every 10 surgeries versus the mostly reusable
instruments of open surgery. Despite this, many reports have shown that overall
hospital costs were significantly lower for robotics compared with traditional
surgery and that, in some cases, a hospital could break even on its robotic
investment after as few as 90 surgeries.
In fact, not only is Robotic Surgery already cost-effective for insurance
companies and hospitals, and a better option for patient recovery, but as
robotic technology expands and improves, as is the case with most other
technologies, costs will decrease further; it is only a matter of time before
those savings are passed on to 'consumers'.
The evidence supporting these statements is abundant and varied; a list of the
independent articles and studies showing the cost-efficiencies and positive
impact of robotic surgery is proposed in a dedicated section of the Bibliography.
3.1.1 Benefits and limitations
The Da Vinci robotic system offers several advantages over open and laparoscopic
surgery, for both operators and patients (Tooher & Pham, 2014), as shown in the
table below.
Table 5: Major clinical and patient's advantages with DaVinci system (Ab Medica website)

Major clinical advantages:
- Ease of access to difficult anatomies;
- Excellent visualization of anatomical landmarks;
- More detailed view of the cleavage planes;
- Greater precision in the procedure;
- Greater accuracy;
- Ability to configure the accuracy of motion surgery.

Major patient advantages:
- Small incisions with mild bleeding;
- Less need for blood transfusions;
- Less postoperative pain;
- Reduced hospitalization time;
- Reduced recovery times;
- Faster recovery of normal activities.
The DaVinci system has several safety devices: for instance, when the camera is
moved and repositioned the tools remain stationary; the system automatically
enters "standby mode" when the surgeon removes his/her head from the console;
and tools can be stopped during the repositioning of the robotic arms.
This does not mean that the robot replaces the surgeon, but that it becomes
his/her extension and reinforcement, constituting an important technological aid;
in fact, experience keeps its fundamental role in the assessment and selection of
information and in the execution of the various tasks.
In order to get the best from robotics it is important to properly assess the
status and condition of the patient, his/her disease and the "risk class" it
belongs to; in fact, for some patients/cases robotic surgery is definitely not
suitable, unnecessarily expensive and perhaps even riskier than the traditional
approach.
Aside from the various benefits that robotic surgery offers over conventional
laparoscopic or open techniques, there is a significant learning curve and a
substantial investment involved (Binder et al., 2004; Al-Naami et al., 2013).
In fact, surgeries performed by means of the Da Vinci may encounter severe
complications, as any other surgery, which may require prolonged and/or
unexpected hospitalization and/or reoperation. Examples of serious or
life-threatening complications are: injury to tissues/organs, bleeding, infection
and internal scarring that can cause long-lasting dysfunctions and pain.
The major issues of this technology are related to the fact that hardware and
software updates are required, as with any computer-based equipment; but
additional limitations of the DaVinci robotic surgery can be identified in the
robotic set-up (mainly time related) and equipment size; in familiarisation with
the robotic system (primarily related to the learning curve and lack of experience);
and in communication problems between the operating surgeon and the rest of the
surgical team, particularly the surgical assistant (Cao & Rogers, 2006).
Robotic surgery undoubtedly disrupts the existing workflow and modifies the role
of every team member and the teamwork itself, since it is based on a new way of
conducting surgery (Lai & Entin, 2005). Other technical difficulties that may be
encountered are related to malfunctions of the system, collisions of the robotic
arms with the patient, the surgeon or each other, or instrumentation issues
(Binder et al., 2004).
Despite the improvements, some problems typical of minimally invasive surgery
remain unresolved: the assistants at the table, for example, remain confined to
a two-dimensional view, and the high number of cables and wires inside the room,
necessary to connect the various components of the system, can be dangerous both
for the staff members and for the surgery itself, which can be compromised with
a negative effect on the patient.
The main surgery risks can be attributed to equipment failure and human error;
as reported by Intuitive Surgical, specific risks include the following
conditions: temporary pain/nerve injury associated with positioning; temporary
pain/discomfort from the use of air or gas during the procedure; longer operation
and time under anaesthesia due to the possible conversion to another surgical
technique; additional or larger incisions and/or increased complications.
3.3.3 Robot applications
Reports from abmedica®, Italy's leading company in the production and
distribution of medical technologies, inform that over the last decade the
DaVinci System has brought Minimally Invasive Surgery to over 2 million patients
worldwide.
In 2014, 570,000 robotic surgeries were performed in the world, an increase of
9% compared to 2013, and the surgical robot device market, estimated to be
around $3.2 billion in 2014, is anticipated to reach $20 billion by 2021.
Gynaecology and General Surgery have driven the growth especially in the US,
while Urology supported the robotics activities at international level. During
2015, in Italy, there were more than 13,200 robotic procedures, 66% of which
concerned urological diseases, certifying the growing interest in, and credit
given to, this technology.
This point is also demonstrated by the increasing number of installations on the
Italian territory, which now counts more than 70 hospitals proposing this
technology to their patients.
The graphs below show the increase in the number of procedures in the world over
the last seven years and the distribution of DaVinci system installations in
Italy.
Since its introduction on the market, the DaVinci Surgical System has been
successfully adopted in thousands of procedures; its safety, effectiveness and
superiority in terms of clinical results are proved by hundreds of scientific papers.
Figure 9: International increase of DaVinci surgical procedures
The DaVinci surgical procedures are routinely performed in the specialties of:
- General and Vascular Surgery;
- Uro-Gynecological Surgery;
- Thoracic Surgery;
- Cardiac Surgery;
- Paediatric Surgery;
- Otorhinolaryngology.
The table below shows the main operations performed with the DaVinci robot,
while the graph displays world installations of the DaVinci system.
Table 4: DaVinci surgical procedures
In particular, the worldwide series is divided by specialty as shown in the chart:
Robotic Surgery has proved to be the best technique for surgical treatment of
prostate cancer, and nowadays, in the US, over 80% of prostatectomies are
performed with the aid of the DaVinci Surgical System.
Figure 10: Increase of DaVinci speciality surgeries in recent years
The immediate advantages of this technology are better and faster post-operative
urinary continence and optimal sparing of the neurovascular bundles, with net
benefits on erectile/sexual functions (more patients return to pre-surgery
erectile function at the 12-month check-up). Moreover, the use of the DaVinci
robot in prostatectomy allows a more precise removal of the cancerous mass, less
chance of nerve and rectum injuries, less risk of deep vein thrombosis, lower
risk of complications and shorter operating time (Rashid et al., 2006).
The introduction of robotics can offer the patient radical removal of the cancer
with low impact on the quality of life and an earlier return to normal
activities, thus improving the overall outcome of the procedure and the
satisfaction of the patient.
The chart above shows the global growth of the robotic presence in the Healthcare
sector; in addition, the DaVinci system being the leading product in this field,
the trend represented gives us an idea of its diffusion.
CHAPTER 4: STUDY METHODOLOGY
4.1 Introduction
As Kirwan & Gibson (2007) stated, if a system engineer can identify that a system
component will fail with a certain frequency, the human factor community needs
to be able to state whether the human component will be more or less reliable.
The aim of this chapter is to provide scientific evidence and an illustration of
the various boundary conditions involved in our case study, and of the
methodology through which we quantitatively evaluate our model.
As pointed out many times in our discussion, the first thing to do is to
highlight the main characteristics of the scenario we are addressing through
extensive and specific use of PSFs. The key aspect of this chapter will be to
prove the consistency of our methodology; indeed, the most important aspect of
this part of the work is the adoption of a systematic approach, which we will
have to apply in tackling every aspect of the case study.
The points we have to address in order to justify our analysis are:
- The estimation of the Proportion of Affect (PoA) of the Influencing
Factors (IFs);
- The identification of the Error Modes (EMs) and the estimation of their
relative probabilities;
- The identification of the Generic Task Type (GTT) involved in the
procedure according to HEART;
- The development of the algorithm for the calculation of the DET;
- The definition of the Patient Outcome classification.
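Among the points above, the DET calculation lends itself to a brief illustration. The sketch below enumerates the end states of a dynamic event tree over a sequence of critical tasks, branching at each node into success, recovered failure, and unrecovered failure; the task names and all probabilities are illustrative placeholders, not the values elicited in this study.

```python
# Minimal dynamic-event-tree expansion sketch. Task names, failure
# probabilities and recovery probabilities are illustrative placeholders.

def expand_det(tasks, path=(), prob=1.0, results=None):
    """Enumerate the end states of a sequence of tasks, branching on
    success/failure at each task; a failure may open a recovery branch."""
    if results is None:
        results = []
    if not tasks:
        results.append((path, prob))
        return results
    task, p_fail, p_recover = tasks[0]
    rest = tasks[1:]
    # Success branch: continue the procedure with the remaining tasks.
    expand_det(rest, path + ((task, "ok"),), prob * (1 - p_fail), results)
    # Failure followed by successful recovery: continue the procedure.
    expand_det(rest, path + ((task, "recovered"),), prob * p_fail * p_recover, results)
    # Unrecovered failure: terminal end state for this path.
    results.append((path + ((task, "failed"),), prob * p_fail * (1 - p_recover)))
    return results

tree = expand_det([("isolation", 0.10, 0.8), ("anastomosis", 0.05, 0.7)])
total = sum(p for _, p in tree)  # end-state probabilities sum to 1
```

Because the three branches at every node partition the probability mass, the leaf probabilities always sum to one, which is a useful sanity check on any DET implementation.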
4.2 Dynamic Risk Assessment - preliminary phases
Of course, for starting our work we had to undergo several preliminary phases
since the elements needed to implement a study like the one we are approaching
to are numerous.
The first issue we will cope with is the one regarding the identification of the type
of data we will base our analysis on.
The main types of data sources available for HRA are experiments, experts'
estimates, empirical acquisition from real-life or accident experience, and
simulation studies. The major issue in the Healthcare sector is that reliable
data are missing, so opting for experts' estimates is the only choice.
Several reasons for this lack of useful information have been identified in the
literature, and all researchers agree that serious measures must be taken to
change the blaming attitude that makes personnel reluctant to objectively report
accidents, and to increase awareness of the effectiveness of HRA tools, from the
higher organizational levels down to the single actor.
As said before, in this kind of environment the best thing to do is to rely on
experts' judgements, since taking into account past data could lead to misleading
conclusions for the aforementioned reasons. Second, it is necessary to identify
the level of detail of the procedure task analysis, but also of its branching.
It is only at this point that we are able to associate the various failure
probabilities with the specific tasks of the process.
In our discussion, we will try to adopt a linear approach and to simplify as much
as possible the level of resolution of the problem, preserving the description of
the standardised procedure already used in previous studies and keeping the
recovery path size reasonably small, thus combining the analysis of expert
surgeons with the simplicity required for the quantitative evaluation.
To perform effective risk assessment, aside from a properly defined task
analysis, it is also necessary to have at one's disposal a taxonomy specifically
defined for, or at least adaptable to, the context under evaluation.
All these issues will be addressed in the following paragraphs, keeping the
discussion general as regards Surgery applications; the application to the
specific case (BA-RARP) will be illustrated in Chapter 5.
4.2.1 Task flow diagram and recovery paths
The task flow diagram of the main procedure was taken from Trucco et al. (2017)
and is available for consultation in Appendices 2 and 3; the step forward made in
this work was to add recovery branches stemming from the most critical tasks.
The most critical tasks were identified in previous studies as "Isolation of
lateral peduncles and of posterior prostate surface"; "Santorini detachment from
the anterior surface of the prostate"; and "Anastomosis".
In order to identify the most relevant recovery paths associated with these
tasks, we collected the opinion of three surgeons through standardized, ad-hoc
interviews, whose text is also available in Appendix 7.
4.2.2 IFs and IFs’ impact definition
The PSFs to be considered were elicited in three steps: a literature review of
PSF taxonomies with particular focus on domains with high complexity and human
engagement; a comparison of the different proposed taxonomies; and the collection
of experts' judgements.
As anticipated, the taxonomy of the Error Producing Conditions (i.e. the IFs, as
we call them) will be the same outlined in Trucco et al. (2017) and presented
below.
Table 5: Validated surgical taxonomy of Influencing Factors
SURGICAL INFLUENCING FACTORS
1 Noise and ambient talk
Continuous or sudden noise; team members talking in the background or coming
and going and moving around in a noisy way.
2 Music
Presence of background music in operating room.
3 Noisy use of social media
Team members talking about and obtrusively sharing social media content.
4 Verbal interruptions
Verbal Interruptions that are either untimely or not patient relevant.
5 Poor management of errors and threats to patient safety
Failure to share information promptly and openly about errors and threats to
patient safety.
6 Poor guidelines, procedures or checklists
Guidelines, procedures or checklists are inadequate: lacking, too complex, or not
at right level.
7 Rude talk and disrespectful behaviours
Derogatory remarks, behaviours showing lack of respect of OR team members,
shouting and harsh tones of voice.
8 Improper use of procedures and checklists
The improper use, or non-use, of the WHO checklist (or similar), protocols and
procedures.
9 Unclear or failed communication
Communication that should have been given wasn’t or was inadequate or was
misunderstood and not corrected.
10 Poor or lacking coordination
Failure in coordinating team activities; failure to anticipate the needs of the lead
surgeon or lead anaesthetist (surgeon at the console in robotic surgery).
11 Poor decision making
Failure to consider, select and communicate options; inadequacy or delay in
implementing and reviewing decisions.
12 Poor situation awareness
Failure to gather and/or to integrate information or failure to use information to
anticipate future tasks, problems and states of the operation.
13a Lack of experience of surgical team colleagues
Lack of experience within the surgical team, with the surgical procedure or
technology.
13b Lack of experience of anaesthetic team colleagues
Lack of experience within the anaesthetic team, with the anaesthetic procedure
or technology.
14 Fatigue
Mental fatigue or physical fatigue.
15 Time pressure
Psychological stress resulting from experiencing a need to get things done in less
time than is required or desired.
16 Poor leadership
Failure to set and maintain standards or to support others in coping with pressure.
17 Team member familiarity
Team members unfamiliar with each other and each other’s competencies.
18 Poor use of technology
Lack of ability to use relevant technology.
19 Inadequate ergonomics of equipment and work place
Equipment and workplace not designed to optimize usability and reduce operator
fatigue and discomfort.
20 Preoperative emotional stress
Stress caused by factors not directly related to the team or to the
characteristics and evolution of the surgery, such as responsibility for the
budget and other business objectives, organizational problems of the department,
other critical patients or legal cases.
The surgical validated taxonomy was previously obtained through the following
phases (Trucco et al. 2017):
- Literature research of Human Factors in laparoscopic and robotic surgery;
- Identification of factors to place into macro categories;
- Observational activity of different laparoscopic and robotic surgeries
(face validity): all the elements found in literature were observed in the
surgical context too;
- Surveys and focus group with surgeons (in the Italian and Danish context):
discussion and confrontation with surgeons regarding meanings,
definitions and wording;
- Determination of the final taxonomy and validation from surgeons.
During the literature review of previous chapters, we did not mention the work
carried out by a PhD candidate of Politecnico di Milano, Rossella Onofrio,
specifically oriented towards creating a statistical ground for the definition
of HEART's weights in Healthcare. The main result of this study, from the point
of view of the work presented here, is the construction of triangular
probability density functions, one for each of the 20 IFs, through a
national-scale survey specifically regarding surgery applications.
For this study the list shown in Table 5 was presented to the surgeons involved
in the survey. They were asked to choose as many of these factors as they
considered meaningful in the evaluation of recovery probability, and to rate
their extent on a range from 0 to 100 in order to allow higher resolution with
respect to the traditional 1-to-10 scale. The research involved more than 200
surgeons, and the resulting distributions are shown in the picture below.
Figure 11: Plots of the triangular pdf of IFs in surgery
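Each of these survey-derived distributions is a triangular pdf over the 0-100 rating scale. A minimal sketch of defining and sampling one such distribution follows; the (low, mode, high) parameters are illustrative, not the fitted survey values.

```python
# Sketch of one IF weight distribution as a triangular pdf on the 0-100
# survey scale. The (low, mode, high) parameters are hypothetical, not the
# values fitted from the national survey.
import random

low, mode, high = 10.0, 60.0, 95.0  # hypothetical survey bounds and peak

def triangular_pdf(x, a=low, c=mode, b=high):
    """Density of the triangular distribution Tri(a, c, b) at x."""
    if x < a or x > b:
        return 0.0
    if x <= c:
        return 2 * (x - a) / ((b - a) * (c - a))
    return 2 * (b - x) / ((b - a) * (b - c))

random.seed(0)
samples = [random.triangular(low, high, mode) for _ in range(10_000)]
mean = sum(samples) / len(samples)  # approaches (a + b + c) / 3 = 55
```

The triangular form is convenient here precisely because it is fully determined by the three values a survey can elicit directly: the minimum, the most frequent, and the maximum rating.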
As said before, the reason why we opted for HEART is that in the past decades it
has been the principal tool used to quantify the reliability of human
interactions (Williams, 1986).
Whilst this technique has served well, it was developed many years ago and has
remained principally the same, based on the same original data (Kirwan et al.
2016). As suggested in the literature review, it did not always "fit" the
assessed tasks very well, for example in the NPP and ATC cases. It was therefore
felt that a redefinition of the Error Producing Conditions (EPCs) involved, and
of the relative multipliers, could be developed based on more recent and relevant
data.
We have shown the lists of the modifications proposed by the NARA and CARA tools
in the previous chapter; since these are the guidelines for future research, we
will provide an example of the adoption of these more recent suggestions,
readapted for the Surgery application, in the sections regarding the quantitative
evaluation and the comment on the results.
In particular, the following table shows the discrepancies between the multipliers
involved in the different proposals: HEART, NARA and CARA.
Table 6: Comparison between HEART, NARA, and CARA multipliers
(values reported between the NARA and CARA columns belong to only one of the two sets)

EPC   HEART   NARA   CARA
 1    17      20     20
 2    11      11     11
 3    10        10
 4    9          9
 5    8
 6    8
 7    8           9
 8    6       6      6
 9    6       24     24
10    5.5
11    5
12    4
13    4       4      5
14    4
15    3       8      8
16    3       3      5
17    3
18    2.5        2.5
19    2.5         3
20    2
21    2           2
22    1.8
23    1.6        1.6
24    1.6
25    1.6
26    1.4         2
27    1.4
28    1.4
29    1.3     2      5
30    1.2
31    1.2         2
32    1.2
33    1.15    8      8
34    1.1
35    1.1
36    1.06
37    1.03
38    1.02
The cells highlighted in purple represent the EPCs involved in the Surgery
taxonomy developed by Onofrio et al. (2015), the one adopted in the present
study. The cells highlighted in blue represent the EPCs for which an updated
multiplier is suggested by NARA or CARA but not both, i.e. the EPC is considered
in just one of the two classifications. Finally, the orange ones are the EPCs
for which different multipliers are defined by the two tools, i.e. the EPC is
considered in both sets but with different importance. This last group of EPCs
requires the identification of proper selection criteria, in order to adopt a
unique and comprehensive set of EPCs to be implemented in the quantitative
analysis.
Not being allowed to make considerations about the numbers themselves, we must
justify our choices for the definition of a new set of multipliers according to
the similarities between our field of application and the ones for which the
database has been updated and upgraded.
In the literature, the similarity between the ATC and Surgery working
environments has been largely addressed, and we can say that, from an industrial
psychologist's perspective, anaesthesia has much in common with the aviation,
air traffic control and nuclear power generation industries. In fact, all of
these high-reliability domains share safety as a prime goal and rely on
well-designed workplaces, equipment and systems, as well as safety-focused
organizational climates. Personnel must be suitably skilled to ensure they can
deal with the demands of their complex work environments; this usually involves
maintaining awareness of dynamic situations involving multiple players, and
being able to deal with critical events in stressful, time-pressured situations
characterized by ill-structured problems, shifting goals, and incomplete
feedback (Fletcher et al. 2002).
Situation awareness is a vital non-technical skill and it strongly depends on the
interface systems putting the operator in contact with the object of his/her
operations. Even though the differences between the various applications are
numerous (for example, in Surgery we do not have a rigid, well-defined and
unique procedure to be followed), our choice of assimilating the surgical
taxonomy to the ATC one (i.e. CARA's) relies on the fact that these two
environments have many more commonalities than Surgery and NPP contexts have.
These similarities basically concern: workplace ergonomics; the centrality of
the operator in the execution of the procedure with respect to technology; the
absence of actual technological barriers preventing accidents (e.g. in NPP,
instrumentation monitors and filters human behaviour and corrects dangerous
states of the system in a completely autonomous way, while this is not the case
in Surgery and ATC); and the possibility of directly, and most of the time
personally, verifying the empirical state of the system through visual
inspection.
The first "orange EPC" is the thirteenth (i.e. "Poor, ambiguous or ill-matched
system feedback"), which is involved in the evaluation of the seventh IF with a
relative weight of 30%, according to the Onofrio et al. (2015) evaluation. From
a multiplier of 4 in the HEART technique, we have values of 4 and 5 respectively
in the NARA and CARA cases. Considering the previous remarks and the description
of the operating room environment, it was considered appropriate to align the
new set with the CARA classification since, unlike in NPP, in both cases there
is the possibility of visually checking the response of the object of the
procedure, i.e. the patient in the surgery case.
The second "orange EPC" is the sixteenth (i.e. "An impoverished quality of
information conveyed by procedures & person interaction"), which completely
defines the seventeenth IF and represents 88% of IF eight. This EPC describes
the difficulty in keeping the sequence of steps straight in mind along the
evolution of the action; on this specific topic we do not have an evident
overlap between the different applications, but to be consistent with the
considerations made in previous paragraphs we will keep the CARA value.
The third, and final, "orange EPC" is the twenty-ninth (i.e. "High level
emotional stress"), which covers 70% of the twentieth IF. This is the EPC for
which the difference in maximum effect magnitude, especially between the
traditional HEART multiplier and the CARA one, is most evident, which is
reasonable since the contexts are completely different. It mainly depends on the
different role played by technology in the two cases: while in NPP human
behaviour, and occasionally errors, are mediated by the instrumentation, this is
not possible for ATC applications. All this considered, it is clear that the
emotional and personal aspects have much more impact on the success probability
for ATC, and that the environment closer to the surgical one is the one
described by CARA.
For all the other EPCs there was no need to choose between the two techniques
since, as said before, the suggested multipliers were equal for NARA and CARA,
or constituted the only choice because those EPCs were considered in just one of
the two taxonomies.
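The selection rule just described can be summarised in a few lines; the sample multipliers below are taken from Table 6 and the preceding discussion, and the dictionary layout is merely illustrative.

```python
# Sketch of the multiplier-selection rule described above: where NARA and
# CARA disagree ("orange" EPCs) take the CARA value; where only one updated
# set defines the EPC take that value; otherwise keep the HEART multiplier.
# Only a few sample EPCs from Table 6 are shown.

heart = {13: 4, 15: 3, 16: 3, 22: 1.8, 29: 1.3}
nara  = {13: 4, 15: 8, 16: 3, 29: 2}
cara  = {13: 5, 15: 8, 16: 5, 29: 5}

def select_multiplier(epc):
    if epc in cara:        # CARA preferred on conflicts (ATC analogy)
        return cara[epc]
    if epc in nara:        # single updated proposal: the only choice
        return nara[epc]
    return heart[epc]      # no update: keep the original HEART value

merged = {epc: select_multiplier(epc) for epc in heart}
# merged -> {13: 5, 15: 8, 16: 5, 22: 1.8, 29: 5}
```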
Finally, the new set of multipliers we worked with is reported in Table 7, where
the cells highlighted in yellow show the IFs for which the multipliers have been
modified:
Table 7: Comparison between modified HEART multipliers and new ones

IF    Old multipliers   New multipliers
 1    10                10
 2    10                10
 3    9.8               9.8
 4    1.048             1.048
 5    9.725             9.775
 6    1.4               2
 7    3.3               5
 8    3.101             5.7606
 9    6.8               6.8
10    4.525             4.525
11    1.95              1.95
12    17                20
13    3                 8
14    1.31              1.31
15    11                11
16    1.6               1.6
17    3                 5
18    6.3               6.4
19    1.285             6.08
20    1.45              4.04
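The updated IF multipliers in Table 7 are consistent with a PoA-weighted average of the mapped EPC multipliers, as the worked case of IF 20 shows (EPC 29 weighted at 70%, per the text, with the remaining 30% falling on EPC 22 according to Table 11); the averaging rule itself is an inference from the reported numbers, not a formula stated explicitly in the text.

```python
# Hedged reconstruction: the IF-level multipliers appear to be weighted
# averages of the mapped EPC multipliers. Worked here for IF 20; the 70%
# weight on EPC 29 is from the text, the 30% residual on EPC 22 and the
# averaging rule itself are inferences.

def if_multiplier(epc_multipliers, weights):
    """Combine EPC multipliers into one IF multiplier by weighted average."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(m * w for m, w in zip(epc_multipliers, weights))

old_if20 = if_multiplier([1.3, 1.8], [0.7, 0.3])  # HEART EPC 29 and 22 -> 1.45
new_if20 = if_multiplier([5.0, 1.8], [0.7, 0.3])  # CARA EPC 29 -> 4.04
```

Both results match the "1.45" and "4.04" entries for IF 20 in Table 7, which supports the weighted-average reading.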
4.2.3 Modified HEART and integration with the DET framework
The modified version of HEART proposed in this section is the result of a series
of considerations and adaptations of the original one, in order to make it more
suitable for the Surgery application.
The adjusting phase was developed by Cordioli (Trucco et al. 2017); even though
it would have been interesting to introduce the concepts of task dependency and
PSF evolution as additional modifications of the original technique, we decided
to limit ourselves to the implementation of HEART as the quantitative method to
evaluate the single node probability, without including the interrelations
between the several branches of the tree.
The original version and the modified version for the Surgery application will
now be illustrated, in order to point out the several actors coming into play
during the estimation of the probabilities, and the adjustments made.
The traditional HEART method flow can be divided into main functional steps, as
presented in the diagram below:
The first step consists in the identification of the task under analysis
(Step 1), which in our case means the identification of the critical tasks.
There are eight Generic Task Types (GTTs) described in the HEART method; to each
of them is associated a range for the Nominal Human Unreliability (NHU), from
which the value to be assigned to the specific task is selected; this is done
according to the HEART generic categories reported in the table below (Step 2).
Figure 12: Flowchart representing main steps of traditional HEART methodology
Table 8: Generic Task Types (GTTs) and relative Nominal Human Unreliability (NHU)

If none of these eight task descriptions fits the type of task under analysis,
then the following values can be considered as reference points:

Generic Task: (M) Miscellaneous task for which no description can be found
Proposed Nominal Human Unreliability (5th-95th percentile bounds): 0.03 (0.008-0.11)
The assessor chooses the relevant EPCs that mainly influence the operator's task
performance (Step 3), paying attention not to double-count EPCs by overlaying
them on generic tasks. Subsequently, the assessor determines the Assessed
Proportion of Affect (PoA) (Step 5). Thanks to this value, rated on a scale from
zero to one in the original version, it is possible to give a measure of each
EPC's effect magnitude. The multiplier factor associated with each EPC is
defined by Williams as the "maximum predicted nominal amount by which
unreliability might change going from good conditions to bad" (Williams, 1986).
If an analyst perceives a multitude of applicable EPCs, then the model will tend
towards further unreliability (pessimism) (Williams, 1986).
The list of EPCs selected for the primary version of HEART is here provided:
The set of general formulae used to evaluate the error probability at each
critical task (covering steps 6 to 9 of the procedure) is the following:
Assessed EPC Affect_i = [(EPC Multiplier_i − 1) × PoA_i] + 1                    (4)

ANLU = NHU × ∏(i=1..n) Assessed EPC Affect_i                                    (5)

%CU_i = Assessed EPC Affect_i / (NHU + ∑(i=1..n) Assessed EPC Affect_i)         (6)
They respectively calculate: Equation (4) the Assessed Affect of the i-th EPC;
Equation (5) the Assessed Nominal Likelihood of Unreliability (ANLU); and
Equation (6) the Percentage Contribution to Unreliability (%CU) of each EPC.
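A minimal sketch of these equations follows; the NHU, multipliers and PoA values are illustrative. Note that in the modified tool the PoA is elicited on a 0-100 scale, so it would be divided by 100 to recover the 0-1 form used here.

```python
# Sketch of HEART Equations (4)-(6). NHU, multipliers and PoAs below are
# illustrative values, not those elicited in this study. PoAs elicited on
# the 0-100 scale of the modified tool would first be divided by 100.
from math import prod

def assessed_affect(multiplier, poa):
    """Equation (4): Assessed EPC Affect (PoA on a 0-1 scale)."""
    return (multiplier - 1.0) * poa + 1.0

def anlu(nhu, epcs):
    """Equation (5): Assessed Nominal Likelihood of Unreliability."""
    return nhu * prod(assessed_affect(m, p) for m, p in epcs)

def pct_cu(affect_i, nhu, all_affects):
    """Equation (6): Percentage Contribution to Unreliability of one EPC."""
    return affect_i / (nhu + sum(all_affects))

# Example: a task with NHU = 0.003 and two EPCs (multiplier, PoA).
epcs = [(11.0, 0.4), (5.0, 0.2)]
affects = [assessed_affect(m, p) for m, p in epcs]  # [5.0, 1.8]
likelihood = anlu(0.003, epcs)                      # 0.003 * 5.0 * 1.8 = 0.027
```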
The problem of lexical precision is crucial at this point, since the whole EPC
classification, and thus the quantitative assessment, strictly depends on it.
The literature also reports difficulties due to the fact that HEART was written
in an industrial language that is not easy to translate for Healthcare
applications: if Williams' taxonomy is applied as-is, misunderstandings of
background and of the use of the generic error categories and EPCs may arise
during the analysis.

Table 9: HEART 38 Error-Producing Conditions (Williams, 1986)

Together with the cases of new ad-hoc tool development presented before,
specifically NARA and CARA, a prototype tailor-made EPC classification was also
proposed for the Healthcare setting. The following table presents the link
between the validated surgical taxonomy and the relative EPCs.
Table 11: Comparison between IFs' taxonomy and traditional EPC one

1  Noise and ambient talk
   EPC 3:  A low signal-to-noise ratio.
2  Music
   EPC 3:  A low signal-to-noise ratio.
3  Noisy use of social media
   EPC 3:  A low signal-to-noise ratio.
   EPC 4:  A means of suppressing or overriding information or features which is
           too easily accessible.
4  Verbal interruptions
   EPC 36: Task pacing caused by intervention of others.
   EPC 37: Additional team members over and above those necessary to perform task.
5  Poor management of errors and threats to patient safety
   EPC 2:  A shortage of time available for error detection & correction.
   EPC 7:  No obvious means of reversing an unintended action.
   EPC 12: A mismatch between perceived & real risk.
   EPC 18: A conflict between immediate and long-term objectives.
6  Poor guidelines, procedures or checklists
   EPC 26: No obvious way to keep track of progress during an activity.
7  Rude talk and disrespectful behaviours
   EPC 16: An impoverished quality of information conveyed by procedures &
           person-person interaction.
   EPC 13: Poor, ambiguous or ill-matched system feedback.
8  Improper use of procedures and checklists
   EPC 16: An impoverished quality of information conveyed by procedures &
           person-person interaction.
   EPC 32: Inconsistency of meaning of displays and procedures.
   EPC 11: Ambiguity in the required performance standards.
   EPC 9:  A need to unlearn a technique & apply one which requires the
           application of an opposing philosophy.
   EPC 21: An incentive to use other more dangerous procedures.
   EPC 14: No clear, direct & timely confirmation of an intended action from the
           portion of the system over which control is to be exerted.
9  Unclear or failed communication
   EPC 8:  A channel capacity overload, particularly one caused by simultaneous
           presentation of non-redundant information.
   EPC 5:  No means of conveying spatial & functional information to operators
           in a form which they can readily assimilate.
10 Poor or lacking coordination
   EPC 10: The need to transfer specific knowledge from task to task without loss.
   EPC 25: Unclear allocation of function and responsibility.
11 Poor decision making
   EPC 25: Unclear allocation of function and responsibility.
   EPC 17: Little or no independent checking or testing of output.
12 Poor situation awareness
   EPC 1:  Unfamiliarity with a situation which is potentially important.
13 Lack of experience
   EPC 15: Operator inexperience.
14 Fatigue
   EPC 35: Disruption of normal work sleep cycles.
   EPC 22: Little opportunity to exercise mind and body outside the immediate
           confines of a job.
15 Time pressure
   EPC 2:  Time shortage (from Williams' description).
16 Poor leadership
   EPC 24: A need for absolute judgements which are beyond the capabilities or
           experience of an operator.
17 Team member familiarity
   EPC 16: An impoverished quality of information conveyed by procedures &
           person-person interaction.
18 Poor use of technology
   EPC 6:  Poor system/human user interface.
   EPC 20: A mismatch between the educational achievement level of an individual
           and the requirements of the task.
   EPC 19: No diversity of information input for veracity checks.
19 Inadequate ergonomics of equipment and work place
   EPC 33: A poor or hostile environment.
   EPC 23: Unreliable instrumentation.
20 Emotional perioperative stress
   EPC 29: High level emotional stress.
   EPC 22: Little opportunity to exercise mind and body outside the immediate
           confines of a job.
In our study, as in that of Trucco et al. (2017), surgeons were responsible for the more judgemental and structured steps: selecting the appropriate Nominal Human Unreliability (NHU) category; associating Influencing Factors (IFs) from the validated surgical taxonomy and their corresponding Assessed Proportion of Affect (PoA); and defining the possible Error Modes for each critical task.
Since the results depend significantly on the assessor's knowledge of the task and on personal opinion, the three surgeons involved in the study were all experienced, well trained, and aware of the steps of the procedure as well as of the order in which they should be applied.
The PoA, used to determine the extent to which each identified EPC affects operators' performance, was rated on a scale from zero to one hundred, unlike traditional HEART, where the PoA is a value ranging from zero to one; this choice was made in order to obtain greater precision in the values. This is just one of the modifications the traditional tool has undergone to fit the surgical application; the table below summarises the main differences between the traditional HEART algorithm and the one we will work with.
Proposed modifications of HEART vs. traditional HEART:

- Proposed: Observational data capture based on video recording of the operations and direct observational experience in the operating room.
  Traditional: Data collection and comparison with similar applications; availability of standardized procedures.
  Rationale: Lack of accurate quantitative human reliability data and poor data audit from healthcare HRA applications.

- Proposed: Specific taxonomy for the surgical context: 20 Influencing Factors.
  Traditional: 38-EPC taxonomy for industrial practice.
  Rationale: Useful list of context-sensitive Influencing Factors, tailored to clinical/surgical practice.

- Proposed: Assessor team composed of three people.
  Traditional: Single assessor.
  Rationale: Reduces the subjectivity heavily tied to the experience of a single assessor.

- Proposed: Group of experts on the subject: surgeons.
  Traditional: External expert assessor.
  Rationale: Experts highly specialised in the medical domain, tasks, and processes.

- Proposed: Rating scale from 0 to 100 used to obtain the PoA values for each EPC.
  Traditional: PoA rated on a scale from zero to one.
  Rationale: Takes into account more precisely the uncertainties of the EPC factors; averaging the PoA values for each EPC yields a balanced result.

- Proposed: The assessor team is asked to assess the amount of PoA (PoA*) attributed to the already established EPC that best matches the examined IF (EPC*).
  Traditional: Not present.
  Rationale: Makes a weighted analysis possible.

- Proposed: Component tasks are not always easily separable; it is necessary to identify the dimension and complexity of each task.
  Traditional: Easier task analysis, characterized by repetitive routine operations.
  Rationale: Hazard zones consisting of a series of interrelated tasks need to be identified. For example, in the anastomosis the outcome does not depend on a single task (suturing or stapling) but also on the preparation of the bowel end, on ensuring a good blood supply, on an anastomosis without any tension, etc.

The main modifications introduced in this work to the already modified version of HEART are the introduction of the Error Modes (EMs) and the association of a Patient Outcome grade to each branch of the tree.
The Error Modes represent the ways in which the failure of a task may occur; in this way, many paths stem from a specific task, for each of which it is possible to define different outcomes and probabilities.
The same three surgeons interviewed for the IFs' selection were then asked to define a set of EMs and their relative probabilities (alphas) for each critical task, thus keeping the judgements consistent; the most meaningful EMs were then included in the simulation following two guidelines: keep the number of branches reasonably low, and comprehensively describe the scenario under study.
Finally, in order to properly define the outcome of each task, we selected the Clavien-Dindo classification for patient outcome, the most widely accredited one in the surgical sector. In its standard form it distinguishes: Grade I (any deviation from the normal postoperative course, not requiring pharmacological, surgical, endoscopic or radiological intervention); Grade II (requiring pharmacological treatment); Grade III (requiring surgical, endoscopic or radiological intervention); Grade IV (life-threatening complication requiring intensive care management); and Grade V (death of the patient).
4.3 Dynamic risk assessment implementation
4.3.1 DET as a tool to integrate nominal probabilities procedures and
paths
When it comes to creating an interface between a little-explored and hardly quantifiable world such as the human mind and a highly complex one such as surgery, thousands of possible considerations can be made; introducing the simulation tool, we will also illustrate all the hypotheses on which the model is based. The interested reader is referred to the full text of the Matlab® simulation code available in Appendix 5.
Integrating the formulas illustrated in the last paragraph with a DET structure, a tool able to randomly generate probable paths of the procedure was set up. The numerical data we started from are: the extremes of the NHU ranges; the pdfs of the IF multipliers defined by Trucco et al. (2017); the experts' judgements regarding the relative probabilities of the EMs (alphas); the IFs involved in each critical task; and the grades associated to the different EMs.
The first step consisted in evaluating a proper number of trials for which the simulations had to run in order to obtain reliable results. This was done by arbitrarily setting the probabilities of the various EMs as constant, thus implicitly imposing the final probability of all the grades as constant as well, and increasing the number of iterations until the best-fitting Gaussian for each of the grades remained the same (µ=0.12; σ=0.15) with a reasonably good degree of approximation, which led us to opt for 20,000 iterations. The graphs representing the curves obtained for the six different grades are shown in the picture below.
Figure 13: Pdf distributions for the "homogeneous" case
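The iteration-doubling stopping rule described above can be sketched as follows. This is a Python illustration standing in for the Matlab® tool; `grade_prob_sample` is a hypothetical stand-in for one simulation run, and the tolerance value is an assumption of this sketch:

```python
import random
import statistics

def grade_prob_sample(rng):
    # Stand-in for one run returning the probability mass assigned to a
    # given grade (the real tool derives this from the DET evaluation).
    return min(max(rng.gauss(0.12, 0.15), 0.0), 1.0)

def converged_iteration_count(tol=0.005, seed=1):
    """Double the iteration count until the fitted (mu, sigma) of the
    grade distribution stabilise within the tolerance."""
    rng = random.Random(seed)
    n, prev = 1000, None
    while True:
        sample = [grade_prob_sample(rng) for _ in range(n)]
        fit = (statistics.mean(sample), statistics.pstdev(sample))
        if prev and abs(fit[0] - prev[0]) < tol and abs(fit[1] - prev[1]) < tol:
            return n
        prev, n = fit, n * 2
```

With the thesis data the analogous procedure stabilised at 20,000 iterations; the toy distribution above converges much earlier.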
As regards the initialization of the data coming from the survey: for the alphas, we considered continuous ranges of values, again delimited by the top and bottom values assigned by the surgeons; while for the Patient Outcome grades we kept a range described by a discrete rectangular PDF, with the lowest and the highest judgements assigned as extremes.
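A minimal sketch of this initialization step, in Python rather than the Matlab® of the actual tool; the numeric ranges passed in the example are purely illustrative:

```python
import random

rng = random.Random(42)

def draw_alpha(alpha_min, alpha_max):
    # Continuous uniform between the extreme surgeon judgements.
    return rng.uniform(alpha_min, alpha_max)

def draw_grade(grade_min, grade_max):
    # Discrete rectangular PDF over the integer grades assigned.
    return rng.randint(grade_min, grade_max)

alpha = draw_alpha(0.39, 0.425)   # illustrative range for one Error Mode
grade = draw_grade(1, 3)          # illustrative patient-outcome grade range
```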
4.3.3 Critical tasks identification
The HEART method application requires the identification of one or more critical tasks on which to perform the quantitative analysis.
The criticality of a task may be attributed to different features, according to the assessor's opinion and the context. A task may be considered critical because it requires significant additional time compared to others, or needs to be redone and adjusted several times, or can have serious consequences for the completion of the procedure, for the performers (serious injuries or death) or for the system (damage or permanent compromise), and so on. Critical tasks may be totally unfamiliar and performed with no real idea of the consequences or, on the contrary, completely familiar, highly practised or even routine; they may be fairly simple tasks requiring a low level of skill and attention, or very complicated ones, requiring high levels of comprehension and skill.
Generally, two or three tasks are chosen as the most critical ones of the procedure
and, after validation from the performer of the tasks, a specific risk analysis is
executed.
In the work of Trucco et al. (2017), a critical task identification process was already performed, following the phases illustrated below:
Figure 14: Phases for the Critical task identification
Starting from literature research, it was possible to find studies on laparoscopic and robotic prostatectomy in which the most critical, dangerous or complex stages of the surgical procedure clearly emerged. The literature shows several studies regarding robotics training and, from these data, it was possible to deduce which are the most critical tasks that consequently need more training (Trucco et al. 2017).
Subsequently, surgeons' opinion was sought in order to compare it with the results obtained from the literature; the resulting set of critical tasks for the BA-RARP procedure was the following:
- Isolation of lateral peduncles and of posterior prostate surface;
- Santorini detachment from the anterior surface of the prostate;
- Anastomosis.
4.4 Illustration of the simulation procedure
The Matlab® code proposed can be ideally divided in three main parts:
- Initialization of data;
- Quantitative evaluation of paths (iterative part);
- Grade’s probability distribution evaluation.
The first section of the Matlab® code was already discussed in Section 4.3.1, while in the second one we find the computational part, defined as a for-cycle performing the necessary number of iterations.
At this point, the first thing to do is to extract random values from the distributions initialized during the first step, i.e. the PoA of the IFs involved and the NHU value; these values can then be introduced in the formulas of the modified version of HEART for Surgery (cf. Section 4.2.3).
Since we are working with a linear and additive model, we can assume that no relation is involved in the random selection of PoA values, so different random inputs are selected for each IF; still, each IF's PoA value is fixed within a single run.
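The per-run computation just described can be sketched as follows. This is a Python illustration (the thesis tool is written in Matlab®); the function names are ours, and the NHU range, multipliers and PoA ranges are purely illustrative. The weighting follows the generic HEART formula, with the 0-100 PoA judgements rescaled to proportions:

```python
import random

def heart_failure_probability(nhu, epc_poa_pairs):
    """Generic HEART weighting: HEP = NHU * prod[(EPC_i - 1) * PoA_i + 1],
    with the PoA judgements given on the 0-100 scale adopted in this work."""
    hep = nhu
    for multiplier, poa_percent in epc_poa_pairs:
        hep *= (multiplier - 1.0) * (poa_percent / 100.0) + 1.0
    return min(hep, 1.0)   # cap at 1, as for any probability

def run_sample(rng, nhu_range, if_data):
    """One iteration: draw the NHU and an independent PoA for each IF,
    all fixed for the whole run."""
    nhu = rng.uniform(*nhu_range)
    pairs = [(mult, rng.uniform(*poa_range)) for mult, poa_range in if_data]
    return heart_failure_probability(nhu, pairs)

rng = random.Random(7)
# Illustrative inputs: NHU in [0.002, 0.004]; two IFs with multipliers
# 11 and 3, and PoA ranges (in %) chosen only as an example.
p_fail = run_sample(rng, (0.002, 0.004), [(11, (30, 50)), (3, (20, 30))])
p_success = 1.0 - p_fail
```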
In this way, adopting the modified HEART set of formulas, for each run and critical task we obtain a failure probability (and, consequently, a success probability); the former is additionally decomposed, according to the alphas randomly selected from the ranges described by the surgeons' judgements, into the probabilities of the different EMs.
Hence, we end up with a probability vector (Prob_ME) whose size equals the total number of Error Modes of the procedure plus three positions representing the success probabilities of the single Critical Tasks.
The resulting probability of the chosen path is defined as the product of the elements of this vector; it is therefore initialized as a unit vector, and only the cells corresponding to the randomly selected events are filled with the specifically evaluated probability values.
When choosing the path according to the alpha values, the simulation tool also selects the Patient Outcome Grade to be associated with the Critical Task performance; so, at the end of each iteration we end up with three potentially different outcome grades for the three tasks. Since in our study we did not treat the dependency aspect in a quantitative and formal way, at this point we made the conservative and strong, but still reasonable, assumption that the final patient outcome related to a single iteration path corresponds to the most severe of the three.
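The path bookkeeping and the conservative grade-aggregation rule described above can be sketched as follows (a Python stand-in for the Matlab® code; the vector size and the probabilities used are illustrative):

```python
import math

def path_probability(prob_me, selected):
    """prob_me  -- vector sized (total EMs + 3 success slots), unit-initialised
       selected -- {index: probability} for the events drawn on this path"""
    vector = list(prob_me)
    for idx, p in selected.items():
        vector[idx] = p
    # Untouched cells stay at 1.0, so the product only reflects the
    # events actually traversed on this path.
    return math.prod(vector)

def final_patient_grade(grades):
    # Conservative assumption: the iteration outcome is the most severe
    # of the three Critical Task grades.
    return max(grades)

unit = [1.0] * 12   # hypothetical sizing: 9 EMs + 3 success slots
p = path_probability(unit, {0: 0.02, 5: 0.9, 10: 0.95})
worst = final_patient_grade([0, 2, 1])
```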
Finally, the last step of the code consists in assembling the Grade Probability vectors (Grade_final_i, with i ranging from zero to five), showing the probability distribution of the various Patient Outcome grades. They are plotted as histograms, and to each of them a best-fit Gaussian is associated. Some of the plots obtained by running the simulations will be shown in the Results chapter.
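Assuming the best-fit Gaussian is obtained by the method of moments (the text does not state the fitting procedure), the per-grade fit can be sketched as:

```python
import statistics

def best_fit_gaussian(grade_probs):
    """Fit a Gaussian to a Grade_final_i vector via sample moments."""
    mu = statistics.mean(grade_probs)
    sigma = statistics.pstdev(grade_probs)   # population standard deviation
    return mu, sigma

# Illustrative probabilities of one grade across a few runs.
mu, sigma = best_fit_gaussian([0.10, 0.12, 0.14, 0.12])
```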
4.5 Factor Analysis
There is evidence in the literature that, according to the personal and environmental factors describing the scenario in which the procedure takes place, the influence of the different EPCs and the relative probabilities of the different end results can differ significantly.
This is the reason why we decided to compare the results obtained through the surgeons' assessment with those deriving from the decomposition of the scenario itself, in order to define a hierarchy of IFs for the selected procedure and application; to do this, a Factor Analysis was performed.
Factor analysis is a method for explaining the structure of data by highlighting the correlations between observed variables, grouped into latent factors; it is a useful tool for investigating variable relationships for complex concepts.
The purpose of Factor Analysis is to analyse patterns of response as a way of uncovering the underlying factors influencing the phenomenon; it also allows the use of weighted item responses to create what are called factor scores.
We can say that Factor Analysis is a way to take a mass of data and shrink it into a smaller data set that is more manageable and more understandable; in other words, a way to find hidden patterns, and to show how those patterns overlap and what characterises them.
Basically, there are two kinds of FA: Exploratory and Confirmatory. Exploratory Factor Analysis is to be adopted when one has no idea of the structure of the dataset and/or of how many dimensions there are in a set of variables, while Confirmatory Factor Analysis is used for verification, provided the user has a specific idea about the kind of dataset he/she is dealing with.
In our case, we will opt for a simple version of a Confirmatory FA, since we have a clear idea of the kind of result we are going to obtain.
We will hence perform a one-by-one factor analysis aimed at confirming the correctness of the simulation code through a priori, theoretical reasoning about the data. Still, we will draw some conclusions from the results obtained, in terms of describing the factors that most heavily influence the performance of our system; and a very simple scenario analysis, obtained by grouping the Influencing Factors into classes, will be performed.
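The scenario enumeration behind this one-by-one analysis can be sketched as below; for the five IFs of the case study it yields exactly the 12 simulation runs reported in the Results chapter. A Python sketch (the actual tool is in Matlab®):

```python
def factor_scenarios(ifs):
    """Enumerate the simulation scenarios of the one-by-one analysis:
    no IF, the complete set, each IF alone, and each IF left out."""
    scenarios = {"NO IF": [], "COMPLETE": list(ifs)}
    for f in ifs:
        scenarios[f"IF {f}"] = [f]                       # single IF active
        scenarios[f"NO IF {f}"] = [g for g in ifs if g != f]  # all but one
    return scenarios

runs = factor_scenarios([1, 5, 7, 9, 10])
```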
CHAPTER 5: CASE STUDY
5.1 Introduction
The aim of this chapter is to provide an example of application of the methodology proposed for the analysis of recovery paths, and a validation of the variations applied to the modified HEART approach for a surgical procedure. Indeed, identifying patterns and highlighting the crucial relationships between IFs and operating performance opens many opportunities for future research and will certainly foster the improvement of surgical training and teaching methods.
In the previous chapter, the modified HEART technique, specifically designed for the study of error recovery in surgery applications, was presented. Now, starting from the evaluation of the surgical context and analysing a specific Robotic Radical Prostatectomy procedure, the methodology is applied in order to evaluate the probabilities of success of recovery paths in robot-assisted Minimally Invasive Surgery. In this way, we will gain a better understanding of the dynamics of the problem and will thus be able to propose more efficient and effective improvements.
Today, prostate cancer, especially if intercepted in the early stages of the disease, can be completely removed, ensuring a high probability of recovery and, even more importantly, of complete cure, thanks to increasingly sophisticated surgical techniques and to the use of the DaVinci robot.
Despite that, radical prostatectomy can in some cases lead to serious implications regarding urinary incontinence and impotence. These forms are often reversible with time, but in some cases they permanently affect patients' quality of life.
Nowadays, the robot-assisted radical prostatectomy (RARP) technique has
become the surgical option of choice for clinically localized prostate cancer.
Additionally, the innovative approach named after the surgeon Bocciardi, hence the name BA-RARP, passes through the Douglas space following a completely intra-fascial plane, without any dissection of the anterior compartment (which contains the neurovascular bundles, the Aphrodite's veil, the endopelvic fascia, the Santorini plexus, and the pubourethral ligaments); this makes it possible to preserve many important nerves and therefore plays an important role in the maintenance of continence and potency (Bocciardi, 2014; Galfano et al., 2010).
The young age of robotic surgery, its high technology content, and the remarkable success of the DaVinci robot in prostatectomy justify the expenditure of resources implied by the implementation of risk assessment techniques.
Obviously, the assessment of the values necessary for the application of modified HEART, as well as the validation of the task analysis, the choice of the critical tasks to be analysed, and the validation and description of the taxonomy together with the Error Modes definition, required the opinion of experts, i.e. robotic surgeons.
Additionally, through the observation of robot-assisted prostatectomy surgeries at Niguarda Ca' Granda Hospital in Milan, it was possible to directly experience the operating room environment, and to record and identify the factors influencing human performance during the various stages of the surgery, together with their evolution over time.
5.2 Surgical Technique
In recent years, robot-assisted radical prostatectomy (RARP) has gained increasing importance, changing the general approach to, and understanding of, the surgical anatomy of the prostate. It has become very popular in the United States and Europe, and it has been estimated that more than 75% of radical prostatectomies are performed using the DaVinci platform (Tanimoto et al., 2015).
Professor Francesco Rocco, Urology Director of the IRCCS Foundation at Ca' Granda Ospedale Maggiore Policlinico in Milan, underlines that robotic prostatectomy is a gold standard in Italy too, thanks to the three-dimensional view (as opposed to the 2D vision of laparoscopy) and to the precision of the instruments, which reduces the possibility of complications to a minimum (Rocco, 2014).
As mentioned before, in 2010 a new access to the prostate for robot-assisted radical prostatectomy was presented: the "Bocciardi approach" (BA-RARP), which uses only the access through the Douglas space, without opening the anterior compartment and the endopelvic fascia, and without the need to dissect the Santorini plexus (Galfano et al., 2010).
Briefly, the originality of this technique is to use a fully posterior approach, without opening the Retzius space and passing through the Douglas, not only for the isolation of the seminal vesicles (as in the Montsouris technique), but for the whole isolation of the prostate and for the anastomosis phase. The BA-RARP technique uses an access to the prostate that is unusual for the urologist. However, despite the initial apparent complexity of the technique, it makes it possible to obtain excellent results from both the oncological and the functional point of view (Trucco et al. 2017).
By analysing the results of the first 200 patients operated on with this approach at Niguarda Ca' Granda Hospital in Milan, with a one-year minimum follow-up, it is possible to conclude that the oncological results improved after a learning curve of 100 patients (Galfano et al., 2013).
The great strength of the "Retzius-sparing" technique seems to be the immediate recovery of continence. Indeed, just a week after catheter removal, more than 91% of patients regain continence; and the positive margins are consistent with those described in the literature for series of patients treated with the anterior technique (Galfano et al., 2013).
Thanks to robot technology it is possible to limit bleeding, avoiding transfusions, and to shorten the hospital stay (2½ days on average), thus allowing the patient to face the surgery with more serenity.
Of course, all this is possible only in the early stages of the disease; so an early diagnosis remains a fundamental condition in order to permanently solve the oncological problem, which is of course the priority, and to recover a full and unrestricted daily emotional, social and working life (Bocciardi, 2014).
In conclusion, numerous studies have been conducted in the past few years to measure the effectiveness of robot-assisted prostate surgery and to compare it with the results observed in open surgery; almost all researchers agree that Robot-Assisted Radical Prostatectomy (RARP) has improved outcomes in the longer term when compared with open surgery. Some of the aforementioned studies' results are reported in the tables below.
Table 12: Benefits of robotic prostatectomy over open and laparoscopic surgery
(http://roboticprostatesurgeryindia.com/)
Table 13: Outcomes following robotic radical prostatectomy in the select reported studies
5.3 Application of the proposed Dynamic HEART Methodology
First of all, experts must be selected in the area of HRA, together with operators with professional experience or knowledge in the application domain (Embrey et al., 1984; Seaver & Stillwell, 1983). In our case, we questioned surgeons with past experience of this kind of practice; after gathering the information from the submitted questionnaires, it was possible to complete the data initialization phase of the simulation tool.
As anticipated, the tool consists of a DET branching at three different points, i.e. the three Critical Tasks, and for each of these a set of Error Modes is defined.
At this stage, we are able to depict the overall scheme of the DET we will deal with; in particular, we can define a graph highlighting the sequence of the procedure and emphasising the final Patient Outcome grade.
5.3.1 Application of HEART technique
As said many times in this work, the basic technique we are going to adopt to compute the probability of the different paths is HEART; the fundamental requirements for applying this methodology, and its "dynamic" version, consist in the quantification of the Proportion of Affect (PoA) and of the number of paths to be considered.
This information was obtained through the questionnaires presented in Appendix 7 (their result, in terms of DET, is presented in Figure 15), which were submitted to three surgeons of Niguarda Ca' Granda Hospital, considered fully trained in the procedure.
In particular, referring separately to the first and third critical tasks, they had to identify which Influencing Factors are considered major influencers of the performance of human operations; and, in this case also including the second CT, all the possible Error Modes stemming from each task, with the relative Patient Outcome grade (referring to the Clavien-Dindo taxonomy).
As regards the IFs to be considered in the analysis of the second critical task (i.e. Detachment of the Santorini plexus from the prostate's anterior surface), we referred to the results obtained by Cordioli's survey, plus those deduced from the literature review. The final sets of IFs involved in the evaluation of the different probabilities are:
Critical Task 1:
- Noise and ambient talk (IF 1)
- Poor management of errors (IF 5)
- Poor coordination (IF 10)

Critical Task 2:
- Noise and ambient talk (IF 1)
- Rude talk and disrespectful behaviour (IF 7)

Critical Task 3:
- Noise and ambient talk (IF 1)
- Poor management of errors (IF 5)
- Poor communication (IF 9)
- Poor coordination (IF 10)
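For the initialization step, the three IF sets above can be captured in a simple lookup structure; a Python sketch (the real tool is in Matlab®, and the variable name is ours):

```python
# IF sets per Critical Task, as listed above (IF numbers from the
# surgical taxonomy; labels abbreviated in the comments).
IFS_PER_CRITICAL_TASK = {
    1: [1, 5, 10],      # noise, poor error management, poor coordination
    2: [1, 7],          # noise, rude/disrespectful behaviour
    3: [1, 5, 9, 10],   # as CT 1, plus poor communication
}
```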
CHAPTER 6: RESULTS
This chapter illustrates the results obtained from the simulation campaign based on the data gathered through surgeons' interviews and questionnaires. It analyses such outcomes in terms of factors' impact, probability distributions, and system reliability; moreover, some improvement measures are suggested for limiting the negative effect of the different Influencing Factors on the surgeon's performance.
Through an empirical approach, previous works investigated which Influencing Factors (IFs) of the surgical taxonomy are encountered when performing the single critical tasks of the case procedure, by comparing those identified in the operating room during the observational phase with the ones directly selected by surgeons. This information has been exploited to initialize our DET, whose branches have been defined on the basis of surgeons' judgements (the completed questionnaires are available in Appendix 7).
In the quantitative phase of the work, the surgeon's unreliability for a fixed sequence of Critical Tasks was estimated by applying the modified dynamic HEART technique in the evaluation of the DET's nodes. Specifically, the following issues have been addressed:
- Initialization of the Assessed Proportion of Affect, which gives a measure of the effect magnitude of each EPC/IF;
- Initialization of the Assessed Nominal Likelihood of Unreliability (ANLU) for the Critical Tasks "Isolation of lateral peduncles and of posterior prostate surface", "Santorini detachment from the anterior surface of the prostate", and "Anastomosis";
- Identification of the Error Modes (EMs) undergone in each simulation, i.e. path, and evaluation of the branches' probabilities through the adoption of a linear additive model and the modified HEART set of formulas;
- Identification of the final Patient Outcome Grade, according to the Clavien-Dindo classification;
- Calculation of the probability distribution of each Patient Outcome Grade for the selected procedure, relying on the Central Limit Theorem.
Once the probabilities for the different grades were obtained, we performed a factor and scenario analysis to investigate the effect of the various IFs considered in the calculation on the probability of success of the surgery, and in particular on the health and recovery of the patient.
6.1 Numerical analysis of the simulation results
In the Study Methodology chapter we already mentioned that, while for the first and third Critical Tasks we considered as influencing only those factors commonly identified by all the surgeons in previous studies, no data were available for the second Critical Task, i.e. "Santorini detachment from the anterior surface of the prostate".
The choice regarding the factors acting on the reliability of this task was based on the results of different studies, showing that two of the most frequently named influencing factors in surgical practice are "Noise and ambient talk" (IF 1) and "Rude talk and disrespectful behaviour" (IF 7).
We recall that the simulation tool selects all the variables, and thus the paths, in a completely independent and random manner over 20,000 iterations so that, by the Central Limit Theorem, the resulting probabilities have general validity.
The probability ranges for the different EMs are represented in the table below; the extremes of the various ranges were defined by identifying the minimum and maximum values assigned to the EMs by the experts, and the same criterion was adopted for the definition of the grade ranges. For the second critical task, having a single identified Error Mode, the range describing the possible values of alpha corresponds to a unit conditional probability.
Table 14: EMs' probability range definition (α)

Critical Task    EM CT-1         EM CT-2         EM CT-3         EM CT-4
                 min     max     min     max     min     max     min     max
CT 1             0.39    0.425   0.1     0.6     0.01    0.05    0.425   0.58
CT 2             1       1       -       -       -       -       -       -
CT 3             0.28    0.5     0.1     0.5     0.2     0.6     0.3     0.57

Table 15: EMs' grade range definition

Critical Task    EM CT-1         EM CT-2         EM CT-3         EM CT-4
                 min     max     min     max     min     max     min     max
CT 1             1       1       1       2       2       3       1       2
CT 2             1       2       -       -       -       -       -       -
CT 3             1       2       1       2       1       1       1       1
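Sampling of the alphas from the ranges of Table 14 can be sketched as below. Note that the renormalisation to a sum of one is an assumption of this sketch (the text only states that the alphas are drawn uniformly within their ranges), and the Python code stands in for the Matlab® tool:

```python
import random

# Alpha ranges (min, max) per Error Mode, from Table 14; CT 2 has a
# single EM with unit conditional probability.
ALPHA_RANGES = {
    1: [(0.39, 0.425), (0.1, 0.6), (0.01, 0.05), (0.425, 0.58)],
    2: [(1.0, 1.0)],
    3: [(0.28, 0.5), (0.1, 0.5), (0.2, 0.6), (0.3, 0.57)],
}

def draw_alphas(ct, rng):
    """Draw one alpha per EM of the given Critical Task, then renormalise
    so the conditional probabilities sum to one (assumption of this sketch)."""
    raw = [rng.uniform(lo, hi) for lo, hi in ALPHA_RANGES[ct]]
    total = sum(raw)
    return [a / total for a in raw]

alphas_ct1 = draw_alphas(1, random.Random(0))
```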
As already mentioned, according to the questionnaires collected and thus to our analysis, the worst possible scenario for a patient undergoing this type of surgery (i.e. BA-RARP) is the Grade 3 outcome (i.e. "Requiring surgical, endoscopic or radiological intervention"); this, however, is not entirely true in real practice.
For example, possible interference with the iliac artery can lead to much more serious outcomes, and can even provoke the death of the patient; in any case, this is quite a remote possibility, since only approximately 0.2% of patients die due to surgery complications.
Even though the scope of our investigation was to model a tool able to replicate the behaviour of a surgeon in the operating room, being a first attempt it was meaningful to take into account only those complications considered the most frequent in common practice, so as not to unreasonably complicate the tool validation process. In Table 16 we can see all the results obtained for the evaluation of the quantiles (q=0.95) of the optimum outcome, i.e. no deviation from the standard procedure (Grade 0), and of the maximum expected degradation of patient outcome (Grade 3). The simulation was run 12 times in order to cover the following cases: no IF considered, all IFs considered, only one IF considered per simulation run (i.e. IF-i), and all but one IF (i.e. NO IF-i) considered per simulation run.
Table 16: Probability of having 95% of patients with the minimum and the maximum possible grade, respectively

Scenario      Grade 0      Grade 3
COMPLETE      93.47 %      3.17 %
IF 10         99.03 %      0.10 %
NO IF 10      94.58 %      0.6 %
IF 9          99.39 %      0.0042 %
NO IF 9       94.63 %      3.03 %
IF 7          99.48 %      0.0031 %
NO IF 7       95.97 %      2.98 %
IF 5          98.72 %      0.13 %
NO IF 5       94.50 %      0.53 %
IF 1          96.46 %      0.15 %
NO IF 1       97.86 %      0.6 %
NO IF         99.71 %      0.003 %
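The quantile evaluation behind Table 16 can be sketched with a nearest-rank empirical quantile. Whether the q=0.95 value is taken from the upper or the lower tail of the simulated grade-probability distribution is not stated in the text, so the direction here is an assumption, and the sample values are illustrative:

```python
import math

def quantile(values, q):
    """Empirical q-quantile (nearest-rank definition) of the simulated
    distribution of a grade's probability across the runs."""
    ordered = sorted(values)
    rank = max(1, math.ceil(q * len(ordered)))   # nearest-rank index
    return ordered[rank - 1]

# Illustrative Grade 0 probabilities from a handful of runs.
grade0_runs = [0.92, 0.93, 0.935, 0.94, 0.95]
q95 = quantile(grade0_runs, 0.95)
```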
As already mentioned, the first and the seventh factors were arbitrarily attributed to the second Critical Task for the following reasons, respectively:
- As a result of a national-scale survey, IF 7 proved to be one of the most relevant factors acting in surgery, having recorded a mode value of 8 on a 0-to-10 scale assessing the IF's maximum potential negative impact on the performance of surgical operations, which is actually the highest mode value observed;
- IF 1 was selected by all three surgeons as an important influencer of both the other two tasks, and its mode value for maximum potential negative impact on surgical operations was evaluated as 4, which is actually the second-highest mode registered.
In any case, since our decision was based on personal speculation, we recommend the interested reader, and future researchers, not to take the results presented in these pages literally, and to consider the numbers as the outcome of a mere, even if reasonable, exercise aimed more at validating the proper behaviour of the simulation tool than at the precision of the numbers themselves.
As shown in Table 16 and in the following histograms, an analysis of the IFs' impact was performed: first considering, and then removing, a single IF per simulation, in order to better appreciate their impact on the resulting system behaviour.
Focusing now on the upper part of the chart, we can refer to the graphs below to better appreciate the relative impact of the IFs on the achievement of the "extreme" outcomes. In the first one, showing the probability of a Grade 0 outcome for the 0.95 percentile of patients, we see the progressive decrease in the probability of Grade 0 from left to right; in particular, we can say that the factor with the greatest impact on this Key Performance Indicator is, by far, IF 1, followed by IF 5 and IF 10 (nearly tied), and then by IF 7 and IF 9.
Figure 16: The probability of a Grade 0 outcome for the 0.95 percentile of patients
Reasoning about this result, we could have expected IF 1 to be the factor most heavily impacting the surgeon's performance in terms of the Grade 0 quantile (-3.54%), since it was considered to describe all three Critical Tasks under examination.
Even though it is well known that background noise is a very relevant disturbing factor, the effect produced by IF 1 on Grade 0 is also amplified by the way in which the software evaluates the final grade of the procedure: to obtain the no-deviation case, all the tasks involved must end without deviation; otherwise, the highest grade encountered is selected as the resulting one.
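This aggregation rule can be sketched as follows (a hypothetical illustration in Python; the actual tool may be implemented differently):

```python
# Minimal sketch of the grade-aggregation rule described above (our own
# illustration, not the tool's actual code): the final grade of a simulated
# run is the worst, i.e. highest, grade encountered among the tasks, so
# Grade 0 results only if every task ends with no deviation.

def procedure_grade(task_grades):
    """Final grade of a simulated procedure run: the worst task grade."""
    return max(task_grades)
```

For example, a run with task grades (0, 2, 1) yields a final Grade 2, while Grade 0 requires (0, 0, 0).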
The same considerations apply, on a different scale since they are taken into account only in CTs 1 and 3, to IFs 5 and 10, which share the same order of value (around 99.0%); and to IF 7 and IF 9, each considered in only one of the three tasks (around 99.4%).
The small differences in the affected percentages can have several reasons. Regarding specifically the evaluation of the Grade 0 probability, we can say that, aside from the approximations introduced by setting a finite number of runs (20,000), the discrepancies between factors appearing an equal number of times in the evaluation can be attributed to the differences in the mode values and EM probabilities introduced for the relative critical tasks.

[Figure 16 data, Grade 0 (q = 0.95): No IF 99.71; IF 7 99.48; IF 9 99.39; IF 10 99.03; IF 5 98.72; IF 1 96.46; Complete 93.47]
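The finite-sample effect just mentioned can be made concrete with a hedged sketch (the per-task deviation probability below is invented for illustration; the study's actual error-mode structure is richer):

```python
import random

# Hedged sketch of the finite-sample effect: with a finite number of Monte
# Carlo runs (20,000 in the study), two factors with the same true influence
# can yield slightly different Grade 0 estimates. Parameters are illustrative.

def estimate_grade0(p_deviation, n_tasks=3, runs=20_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        # Grade 0 requires no deviation in any of the tasks.
        if all(rng.random() >= p_deviation for _ in range(n_tasks)):
            hits += 1
    return hits / runs

p_hat = estimate_grade0(0.002)
# The standard error of such an estimate is sqrt(p * (1 - p) / runs), here
# roughly 0.0005: differences of a few hundredths of a percentage point
# between otherwise equivalent scenarios are pure sampling noise.
```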
Analysing now the second histogram, describing the probability of a Grade 3 outcome for the 0.95 percentile of patients, the a priori consideration we can make is that the only task that can end with this severity level is Task 1; hence only the factors affecting the first CT (IF 1, IF 5, and IF 10) are expected to have an impact on this KPI.
Figure 17: The probability of a Grade 3 outcome for the 0.95 percentile of patients
This is well illustrated by the figure above, from which we can appreciate that, considering only the factors not involved in the CT1 evaluation (IF 7 and 9), we end up with a probability of around 0.001% for Grade 3 (i.e. the same result obtained in the No IF case), while we obtain very similar results for IFs 1, 5, and 10, all involved in the CT1 evaluation.
Aside from the comments made before to justify the discrepancies between results that would be expected to be equal for the Grade 0 quantiles (also visible in the chart according to the colours associated with the different cells), regarding the Grade 3 evaluation we must also remember that the alphas associated with the only Error Modes leading to Grade 3, i.e. EM 1-3, range from 1-5%, resulting in a very small sample and, consequently, imprecise figures.
In order to provide clearer and sounder figures, we also decided to evaluate the probability of a Grade 3 outcome for the 0.05 percentile of patients.

[Figure 17 data, Grade 3 (q = 0.95): No IF 0.003; IF 7 0.0031; IF 9 0.0042; IF 10 0.10; IF 5 0.13; IF 1 0.15; Complete 3.17]

This actually corresponds to the complementary result with respect to the 0.95 percentile calculated before; in this way, we can appreciate the probability of Grade 3 being the outcome attributed to 5% of the patients. The results obtained from this calculation are shown in Figure 18.
Figure 18: The probability of a Grade 3 outcome for the 0.05 percentile of patients
We can see that the same considerations made for the 0.95 percentile case still hold; in this case, however, the impact of IF 1 is more than double that of IF 5 and IF 10, a fact probably attributable to the difference in their maximum-impact mode values: 4 out of 10 for IF 1 and 0 out of 10 for IF 5 and IF 10.
It is anyway worth noting that the probability of a Grade 3 outcome for 5% of the patients undergoing a BA-RARP is around 0.03%, two orders of magnitude less than the one evaluated with a 95% confidence level.
To be complete, we implemented a complementary analysis: starting from the simulation of the complete scenario and removing one IF in each simulation (results shown in Table 14).
Also in this case the coherence of the results is demonstrated, since for the factors not influencing CT1 (IF 7 and 9) the results for the Grade 3 quantile remain unchanged with respect to the full-set case.
[Figure 18 data, Grade 3 (q = 0.05): No IF 0.003; IF 7 0.0031; IF 9 0.0042; IF 10 0.0056; IF 5 0.0073; IF 1 0.0196; Complete 0.0324]
At the same time, comparable results are obtained for the Grade 3 quantiles when respectively removing IF 1, 5, and 10 from the simulation; in fact, aside from their PoA mode values, they affect the reliability of Task 1 in the same way.
According to the considerations made before regarding the number of times a certain factor appears in the model, we are not surprised that the Grade 0 quantiles for the different trials, with the exception of the "without IF 1" case, are very similar; for this last case, we see that removing IF 1 sensibly increases the probability of no deviation.
Even though IF 7 has been identified as a very relevant factor in surgical practice, this does not emerge from our results (aside from the fact that in the "without IF 7" case we have the second highest quantile), and it was not identified by the surgeons as a major factor in the analysis of the three critical tasks of the BA-RARP; it would therefore no doubt be interesting to verify its impact in those Healthcare applications where it is held in high consideration.
Before showing the plots resulting from the software runs, we want to reflect on the relative importance in BA-RARP of different categories of Influencing Factors, specifically: Team, Organizational, and Personal factors. The various IFs were grouped as follows:
INFLUENCING FACTOR | CATEGORY
1 Noise and ambient talk | Team
5 Poor management of errors | Organizational
7 Rude talk and disrespectful behaviour | Team
9 Unclear communication | Team/Personal
10 Poor coordination | Team/Personal
The relative results in terms of quantiles are shown in Table 17 (Analysis of IF clusters' impact), from which we can appreciate that the category most influencing the outcome of the surgery is certainly the one related to Team and Teamwork conditions, followed by the Organizational one and, finally, by the one concerning Personal factors. Indeed, from left to right we see the probability of Grade 0 increasing and that of Grade 3 decreasing at the same time.
As expected, and as shown in Table 17, the simulation run with the full set of IFs has the minimum probability for Grade 0 and the maximum for Grade 3. Another interesting point is that the "Complete" scenario is much more similar to the "Team" one than to the "Organizational" and "Personal" ones, which means that the first category is the one that best describes, and mainly affects, the outcome of the realistic case.
Table 17: Analysis of IF clusters' impact: probability of Grade 0 for the 0.95 percentile of patients and of Grade 3 for the 0.05 percentile of patients

PATIENT OUTCOME (UPDATED MULTIPLIERS) | COMPLETE | TEAM (IF 1, 7, 9, 10) | ORGANISATIONAL (IF 5) | PERSONAL (IF 9, 10)
GRADE 0 | 93.47% | 94.58% | 98.72% | 98.38%
GRADE 3 | 0.0324% | 0.0221% | 0.0196% | 0.0109%
In the following, a short qualitative description of the three scenarios, and of the
key features obtained from direct observation in the operating room, is provided.
Team Category
The first category is the one concerning the influence of Team related
factors. Coordination and communication can occur either explicitly or implicitly. Team members can intentionally communicate, or they can anticipate, assist and adjust without verbal instructions, relying on a shared understanding of tasks and situations; they are continuously involved in a reciprocal process of sending and receiving information that forms and re-forms a team's attitudes, behaviours, and cognitions.
In this kind of scenario, it is recommended to train team members to develop open, adaptable, accurate and concise communication.
Moreover, inter-professional education should help provide guidance on how to implement information-exchange protocols; indeed, a team-based approach to improving quality of care requires inter-professional education, training sessions, and meetings involving the whole operating team, in order to instil the advanced knowledge, skills, and attitudes required for optimal teamwork (Trucco et al. 2017).
Another fundamental element is team stabilization: it is important to keep the surgical team as unchanged as possible for similar surgeries that require analogous knowledge and skills.
Familiarity between team members is a crucial factor which contributes to improving communication and coordination. Research has shown that the longer a team stays together, the better its results, also in terms of good communication (Lingard et al., 2004). For these reasons, the scheduling of work shifts should take these issues into account and, above all, avoid changes of shift during the execution of a surgical procedure, in order to preserve silence and concentration.
Finally, to avoid problems related to communication and coordination, it is also
recommended to only use equipment or personnel that are strictly required.
Organisational Category
Although this is not a deeply explored topic in Healthcare applications, it has been demonstrated that organisational issues have a very powerful impact on the performance of Healthcare operators and procedures, as shown by the results obtained in (Trucco et al. 2017), where IF 5 was proven to be the most impactful factor on ANLU when the single tasks were considered.
Nowadays, the importance of procedures being clear, complete, understandable, updated, well known and followed, as well as of having emergency procedures identified for the possible deviation scenarios, is well established.
One of the factors that can affect good error management is the surgeon's experience. It is fundamental for the surgeon to always be aware of the situation and of its possible, and/or probable, consequences.
What is even more remarkable, from an HRA perspective, is that to maximise error management, and hence surgical performance, it is important to recognize the value and potential of clinical documentation for clinical risk prevention and for the analysis of the related events; to this end, the use of checklists to count the instruments used during surgery (threads, needles, etc.) and to verify their final number is also recommended.
These kinds of practices are gaining an increasingly central role in the medical sector, as their benefits become undeniable, and we hope that this trend will encourage and foster the spread of Human Risk Assessment and of a Safety culture.
Personal Category
The third, and last, category considered is related to personal factors that
come into play during the execution of a surgery.
Unclear communication and poor coordination can be associated with the personal aspect because they can be related to the individual temperament of the surgeon, although there are clear links with team aspects too; indeed, IF 9 and IF 10 may be associated with both categories and benefit from common improvement actions.
Lingard and colleagues found that 31% of all communications could be categorised as unsuccessful (i.e. failures), whether because the information was missing, the timing was poor, or the issues raised were not resolved (Lingard et al., 2004).
With the aim of improving this aspect of operating-theatre practice, it would certainly be advisable to increase the number and quality of training sessions, in order to make surgeons, and medical staff in general, more comfortable with stressful situations, and to promote the standardization of technical communications during procedures, so as to avoid misunderstandings leading to potential failures.
A standardization through communication codes would be even more valuable in a Robotic Surgery context, since it would compensate for the lack of visual feedback arising from the introduction of the robot.
6.2 Probability Density Functions of Patient Grade Outcomes
Besides the Factor Analysis illustrated in the previous paragraph, the final outcome of the simulation tool consisted of the PDF plots obtained for each of the scenarios listed above.
In particular, the plots of the Grades' PDFs, together with the relative best-fitting Gaussian, are shown for each set of runs in the following figures (from Figure 19 to Figure 24).
We see that Grade 0, i.e. the no-deviation case, is the most probable grade in all the computations, and that the probability of the grades (µ and frequency) decreases as the severity grade increases; this is a good sign, medically speaking, since it means that the worst-case surgical outcome is also the least probable.
Except for the fact that the plot of Grade 3 is not available for IF 7 and 9, due to the lack of data (for the reasons already explained in the previous paragraph), for all grades, and for all single-factor analyses, the plots show a good, or at least acceptable, fit with the associated Gaussian; even though the data relative to Grade 3 are always quite scattered and sparse, due to the paucity of samples collected. As for Grade 0, all of the factors' PDFs turn out to be squeezed into the region [0.8; 1].
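The best-fitting Gaussian step can be sketched as follows (a hedged illustration with synthetic placeholder data, not the study's samples; the fitting is done with the maximum-likelihood estimates, i.e. the sample mean and standard deviation):

```python
import math
import random
import statistics

# Hedged sketch of fitting a Gaussian to a batch of simulated grade
# probabilities. The data below are synthetic placeholders generated in
# the [0.8, 1] region mentioned in the text.

rng = random.Random(0)
grade0_probs = [min(1.0, max(0.8, rng.gauss(0.93, 0.01))) for _ in range(20_000)]

# Maximum-likelihood Gaussian fit: sample mean and (population) std. dev.
mu = statistics.fmean(grade0_probs)
sigma = statistics.pstdev(grade0_probs)

def gaussian_pdf(x, mu, sigma):
    """Density of the fitted normal, to overlay on the histogram."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
```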
By running the simulation of the simplified version of the procedure, we were able to validate the correct behaviour of the simulation tool designed for this study; however, more challenging and interesting results and comments could be obtained by applying this software to a refined and more complete schematization of the surgical procedure itself, including the recovery paths suggested in this work.
Figure 19: Grades’ PDF for the complete set of simulation runs
Figure 20: Grades’ PDF for the "only IF 1" set simulation run
Figure 21: Grades’ PDF for the "only IF 5" set simulation run
Figure 22: Grades’ PDF for the "only IF 7" set simulation run
Figure 23: Grades’ PDF for the "only IF 9" set simulation run
Figure 24: Grades’ PDF for the "only IF 10" set simulation run
CHAPTER 7: CONCLUSIONS
This study allowed the development, testing and validation of a simulation tool based on Dynamic Event Tree theory and structure, adopting a modified HEART methodology for application in the Healthcare sector.
Attention was directed to the analysis of the surgeon's unreliability in robotic surgery, an innovative sector where Minimally Invasive Surgery enables optimized precision, faster recovery, and a potential reduction of human errors.
The importance of robotic surgery and the clear investment in its future development highlight the need to carry out studies and substantial research in the field of Human Reliability Analysis. Since, for now and the near future, the robot does not replace the surgeon but only supports him in close cooperation and interaction, the analysis and management of human error, and the application of HRA techniques, are fundamental and necessary.
The state-of-the-art review underscored, firstly, the important results obtained by HRA techniques in the few surgical applications developed and, secondly, the need to reduce the applicability gap between the Industrial and Healthcare sectors. Even though the first steps have been taken in this direction, the majority of the efforts in the socio-technical complex system of healthcare organizations is characterized by reactive approaches, strongly focused on the retrospective analysis of adverse events, such as incident data analysis; it would certainly be more interesting to develop the branch of the HRA discipline concerning anticipatory analyses, which would represent a new turn in Healthcare, helping to predict, and hopefully eliminate, the system's vulnerabilities without requiring the failures themselves to occur.
The first aim of this work was to develop a first prototype of a DET simulation
tool able to differentiate the various paths deriving from one, or more, failures
along a surgical procedure.
The introduction of a DET structure allows the inclusion of a procedural timeline, although it still does not consider the influence of the passage of time; meanwhile, the update of the multipliers used in the HEART methodology specifically designed for Healthcare marked a step forward in terms of database and, therefore, of results' accuracy.
There is still much work to do in order to obtain a specific and wide-ranging database directly produced by experts and experience from the Healthcare sector; nevertheless, through specific assumptions we managed to benefit from the developments achieved in contexts that are more advanced in terms of safety studies.
The methodological steps developed to achieve the objectives were:
- Literature analysis of dynamic HRA techniques and their applications in
the industrial and Healthcare sector;
- Empirical observational activity of two robotic surgeries;
- Modelling of a simulation tool based on the modified version of HEART in Surgery and on DET principles: identification of the branches and of the factors to be considered in the analysis;
- Identification of the most significant recovery paths and relative
probabilities through the collection of experts’ data (i.e. three robotic
surgeons);
- Initialization of the simulation tool and application of the model to a
specific case study;
- Influencing Factor Analysis for the factors considered most impactful on the surgeon's performance.
The observational activities and the collaboration with a team of robotic surgeons allowed us to:
- Obtain a validation for the recovery paths’ task analysis of BA-RARP;
- Define the relative probability for the different Error Modes;
- Get surgeons’ opinions on the impact of Influencing Factors on two of the
three different Tasks of the selected surgical procedure.
Finally, an Influencing Factor Analysis was carried out to validate the model and to understand how the various factors affect the Human Unreliability rate, with a special focus on the variability of the extreme grade outcomes (i.e. Grade 0, no deviation, and Grade 3, deviations requiring surgical, endoscopic, or radiological intervention).
The quantitative analysis, carried out by means of the simulation tool, confirmed the quantitative relevance of certain factors (such as IF 1 and, in general, Team factors) over others, thereby identifying those requiring special care and, eventually, remedial measures aimed at limiting their negative influence on the execution of the procedure's sequence.
The quantitative analysis was limited by the small scale of the survey, which affects the objectivity of the data used; nevertheless, it was possible to show that the model developed produces reliable and coherent results.
7.1 Theoretical implications and future research
The Assessor Team's inexperience in HRA techniques was mitigated by the use of the Influencing Factors taxonomy validated for Surgery and by the adoption of minimal, simplified questionnaires. The surgeons involved in the work therefore faced a familiar and understandable list of factors and were allowed to describe the recovery tasks freely.
Despite this, there were some misunderstandings and difficulties in assigning the required values, probably because the surgeons had little experience in HRA and Statistics. This is one of the limits of this kind of technique, since it aims at involving and directly interacting with professionals with no, or limited, specific knowledge of HRA methodology and safety techniques.
The questionnaires were submitted to three surgeons of the Ca’ Granda Niguarda
Hospital of Milan and they were asked to identify and give estimates of the relative
probability of the most significant Error Modes that could stem from the three
Critical Tasks of the procedure already identified in previous studies.
The surgeons were also questioned about the different recovery steps to be followed in order to cope with the failures described, and an interesting fact was the almost complete overlap of the answers gathered; this is certainly a relevant indication, which will facilitate future attempts to define more complete DETs including the probabilities of the various recovery tasks.
Moreover, the fact that we obtained reasonably homogeneous results on the tree representation of the surgeons' procedure (i.e. Error Modes, recovery steps, and probabilities) demonstrates the validity of the approach. Indeed, even though there are no well-defined procedures and recovery paths in the literature for BA-RARP, our study shows that surgeons with the same level of experience are on the same page, especially regarding the branching of the procedure.
Looking at the fuller picture, it was observed that if new tasks and procedures were to be analysed, it would be necessary to acquire new, specific values for all the variables included in the evaluation of probabilities according to the HEART technique. In fact, the judgements included in the study are not only subjective, reflecting the professional opinion of the surgeons interviewed, but also strictly contingent on the selected phases of the specific procedure. Indeed, each operation is characterized by peculiarities and uniqueness that influence the choice of the most significant IFs, their quantitative impacts, and also their correspondence with Williams's EPCs.
As a step forward from previous analyses, we focused on the success probability of the full procedure, leaving behind the single-task performance analysis approach. However, the assumption regarding the GTT allocation to the different tasks was preserved by assigning the same Generic Task, G, to all three CTs, which implies the same Nominal Human Unreliability (NHU) range, even though they actually differ in technical complexity. This was done for the sake of continuity and simplicity, but it would be worthwhile to update the GTT description and evaluation according to the developments introduced by the NARA and CARA approaches also for surgical applications.
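For reference, the standard HEART quantification on which the modified methodology builds can be sketched as follows (the numeric values are illustrative, not the study's calibrated ones):

```python
# Hedged sketch of the standard HEART quantification: the Nominal Human
# Unreliability (NHU) of the generic task is scaled, for each active Error
# Producing Condition, by a factor interpolated between 1 and the EPC's
# maximum multiplier through the Assessed Proportion of Affect (APOA).
# The example values below are illustrative only.

def heart_hep(nhu, epc_apoa_pairs):
    """HEP = NHU * prod((EPC_i - 1) * APOA_i + 1), capped at 1."""
    hep = nhu
    for epc, apoa in epc_apoa_pairs:
        hep *= (epc - 1.0) * apoa + 1.0
    return min(hep, 1.0)  # probabilities cannot exceed 1

# Example: a generic task with NHU = 0.0004 and two active conditions.
hep = heart_hep(0.0004, [(17, 0.4), (2, 0.2)])  # 0.0004 * 7.4 * 1.2 = 0.003552
```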
For this study to develop and improve, it is important that other procedures and surgical settings experience this modified methodology and proactive simulation approach, enhancing its diffusion, so that this work does not remain a mere study exercise. On the other hand, it must be taken into account that applicability to complex areas requires a long time and readjustments.
This work represents a first step towards the inclusion of dynamics in HRA techniques for surgery applications; as suggested earlier in this discussion, future developments should explore:
• The description of the evolution over time of the Influencing Factors
involved;
• The dependencies existing between the tasks composing the sequence of the procedures and the IFs/EPCs themselves;
• The investigation of the cognitive models underlying surgeons’ behaviour
in order to develop high-performance simulating tools;
• The investigation of recovery paths and of factors specifically designed for the peculiarities of recovery scenarios.
To recall what was said after the state-of-the-art overview, we briefly summarise the starting points for future research and the main goals to be pursued.
The results obtained from the work of (Ambroggi & Trucco 2011) showed the relevance of considering both the direct and indirect influence of the factors involved in any HRA analysis, and the predominance of the acquired component in modifying the weights of the PSFs; not considering the latter leads to a biased estimation (De Ambroggi 2010). It would be interesting to review the interrelations between the various factors involved, in order to get a fuller picture and to allow experts to modify their assessment of the same task in different scenarios. In this way, the set of factors involved would influence the properties of the set itself, with the possibility of changing its components' impact, thus being more realistic.
The first step towards introducing IF dynamics is the integration of the factors' latency and momentum; a suggestion for the ways in which the factors could evolve along the procedure is the one proposed in Boring's study on parameter dynamics (Boring 2006), which differentiates their behaviour between Static Condition, Dynamic Progression, and Dynamic Initiator.
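These three behaviours can be sketched as time profiles for a factor's multiplier (our own illustration with invented numeric profiles, not Boring's formalisation):

```python
# Minimal sketch of the three dynamic behaviours cited from Boring (2006)
# for a factor's multiplier along a procedural timeline t; the numeric
# profiles below are invented for illustration.

def static_condition(t, level=1.5):
    """The factor's influence stays constant for the whole procedure."""
    return level

def dynamic_progression(t, start=1.0, rate=0.05):
    """The influence grows (or decays) gradually as the procedure unfolds."""
    return start + rate * t

def dynamic_initiator(t, onset=5, level=2.0):
    """The influence switches on only after a triggering event at `onset`."""
    return level if t >= onset else 1.0
```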
Many studies have demonstrated the importance of considering the effect of task dependencies in order to estimate probabilities correctly. To verify this statement from the literature on the specific case under analysis as well, the suggestion is to perform a dependency analysis, first on the most critical tasks of the BA-RARP, and later on the full procedure.
Regarding the formulation of a dedicated set of factors and multipliers referring to the evolution of recovery paths, an example is given in (Subotic et al. 2007), where a set of Recovery Influencing Factors (RIFs) was defined for ATC applications and where the logical differences deriving from dealing with a situation in which a failure has already occurred were underlined.
To transpose this approach to Surgery, it would first be necessary to modify the taxonomy in use so that the differences in scenario could be better appreciated; the same holds for the evaluation of the multipliers. This, together with the development of a proper database, would foster the optimization and reliability of future simulation tools.
In conclusion, a better modelling of all the aspects mentioned above would constitute a valuable consolidation of our study; in this way, quantitative assessments of the goodness of recovery strategies could be formulated to refine educational tools and packages, so that the whole Hospital system would benefit from this line of research.
7.2 Implications and relevance for practitioners
Prostate cancer tops the list in terms of incidence in the male population, and its numbers are relentlessly growing. With a view to prevention, the first step certainly consists in fostering the PSA exam which, by monitoring the medical situation over time, can point out the need to undergo more detailed exams, thus permitting the detection of the cancer in its early stages and increasing the chance of a cure without resorting to surgery.
The introduction of MIS has marked the beginning of a proper revolution in the
Surgical sector.
The shorter stay allows the patient to be weight-bearing within two days, but the gold standard of this technology is the preservation of functions; thus BA-RARP, besides removing the prostate cancer, is an efficient and effective solution to post-operative incontinence and impotence.
We hope that this work will support future training of robotic surgeons and the
design of new procedures and checklists; but most of all that the immediacy of
use of simulation tools will foster the evolution of operating room’s environment
and organization.
What is appealing about the kind of tool we developed is the possibility of manipulating the factors actively, or passively, influencing human behaviour, and of relating them to the probability of success of the surgery and to its probable outcomes.
As mentioned many times throughout the study, one of the factors most hampering the development of HRA techniques for Healthcare is the lack of reliable data; however, we expect that the continuous theoretical development, and the increasing ease of use and effectiveness of these tools, will attract the attention of the surgical, and in general the medical, world.
The study highlights the major factors, or classes of factors, influencing surgeons' performance. It is therefore important to take that information into account and to try to reduce their effect, by raising surgeons' awareness of error-promoting conditions and by implementing improvement actions, such as those proposed in the study.
Additionally, the work represents a useful contribution to technology providers,
paving the way to the introduction of dependencies and recovery paths’ evaluation
for HRA applications in surgery.
Thanks to the tool developed and tested in the present work, performing a reliable and efficient simulation is more affordable than ever, and the refinement and enlargement of the data involved would provide even more precise and effective analyses, facilitating the optimization and improvement of the operating-room environment.
What is most fascinating about this kind of technique is its flexibility of application to the most disparate fields of interest; its adaptation from the NPP to the Surgery environment is proof that Safety Engineering is nowadays a transversally valuable discipline for maximizing systems' performance, which ultimately results in an improvement of work quality both from the point of view of the worker/surgeon and from that of the client/patient.
Regarding Robotic Surgery specifically, it has not yet expressed its full potential, and we expect future studies to introduce all the elements and strategies already tested in the industrial sectors (e.g. NPP, ATC), producing a more comprehensive description of the phenomena occurring along the procedure and a more accurate analysis of the probabilities; with the hope of seeing these methodologies spread and risk awareness increase among potential users.
REFERENCES
De Ambroggi, M., 2010. The use of cognitive simulation to support dynamic risk modelling.
De Ambroggi, M. & Trucco, P., 2011. Modelling and assessment of dependent performance shaping factors through Analytic Network Process. Reliability Engineering and System Safety, 96(7), pp.849–860. Available at: http://dx.doi.org/10.1016/j.ress.2011.03.004.
Van Beuzekom, M. et al., 2010. Patient safety: Latent risk factors. British Journal of Anaesthesia,
105(1), pp.52–59.
Boring, R.L., 2007. Dynamic Human Reliability Analysis: Benefits and Challenges of Simulating Human Performance.
Boring, R.L., 2006. Modeling Human Reliability Analysis Using MIDAS. International Workshop on Future Control Station Designs and Human.
Chang, Y.H.J. & Mosleh, A., 2007. Cognitive modeling and dynamic probabilistic simulation of
operating crew response to complex system accidents. Part 2: IDAC performance
influencing factors model. Reliability Engineering and System Safety, 92(8), pp.1014–1040.
Cordioli, 2015. HRA for Surgery: methodological improvements of HEART technique with applications in Robotic Surgery.
Fletcher, G.C.L. et al., 2002. The role of non-technical skills in anaesthesia: a review of current literature. 88(3), pp.418–429.
Ge, D. et al., 2015. Quantitative analysis of dynamic fault trees using improved Sequential Binary
Decision Diagrams. Reliability Engineering and System Safety, 142, pp.289–299. Available
at: http://dx.doi.org/10.1016/j.ress.2015.06.001.
Gil, J. et al., 2011. A code for simulation of human failure events in nuclear power plants: SIMPROC. 241, pp.1097–1107.
Gyung, B., Joon, H. & Gook, H., 2016. Development of a systematic sequence tree model for feed-and-bleed operation under a combined accident. Annals of Nuclear Energy, 98, pp.200–210. Available at: http://dx.doi.org/10.1016/j.anucene.2016.08.006.
Jang, I. et al., 2014. An empirical study on the human error recovery failure probability when using soft controls in NPP advanced MCRs. Annals of Nuclear Energy, 73, pp.373–381. Available at: http://dx.doi.org/10.1016/j.anucene.2014.07.004.
Jang, I., Ryum, A., et al., 2016. Study on a new framework of Human Reliability Analysis to evaluate soft control execution error in advanced MCRs of NPPs. Annals of Nuclear Energy, 91, pp.92–104. Available at: http://dx.doi.org/10.1016/j.anucene.2016.01.007.
Jang, I., Jung, W. & Hyun, P., 2016a. Human error and the associated recovery probabilities for
soft control being used in the advanced MCRs of NPPs. Annals of Nuclear Energy, 87,
pp.290–298. Available at: http://dx.doi.org/10.1016/j.anucene.2015.09.011.
Kirwan, B., 2017. Application of the CARA HRA tool to Air Traffic Management safety cases.
(January).
Kirwan, B. et al., 2016. Nuclear action reliability assessment (NARA): a data-based HRA tool.
7353(September).
Kontogiannis, T., 2011. A systems perspective of managing error recovery and tactical re-planning
of operating teams in safety critical domains. Journal of Safety Research, 42(2), pp.73–85.
Available at: http://dx.doi.org/10.1016/j.jsr.2011.01.003.
Mitchell, R.J., Williamson, A. & Molesworth, B., 2016. Application of a human factors
classification framework for patient safety to identify precursor and contributing factors to
adverse clinical incidents in hospital. Applied Ergonomics, 52, pp.185–195. Available at:
http://linkinghub.elsevier.com/retrieve/pii/S0003687015300478.
National, I. & Falls, I., 1996. Representing context, cognition, and crew performance in a
shutdown risk assessment. Reliability Engineering and System Safety, 52, pp.261–278.
Onofrio, R., Trucco, P. & Torchio, A., 2015. Towards a taxonomy of influencing factors for
human reliability analysis (HRA) applications in surgery. Procedia Manufacturing, 3
(AHFE), pp.167–174. Available at: http://dx.doi.org/10.1016/j.promfg.2015.07.119.
Patel, V.L. et al., 2011. Recovery at the edge of error: Debunking the myth of the infallible expert.
Journal of Biomedical Informatics, 44(3), pp.413–424. Available at:
http://dx.doi.org/10.1016/j.jbi.2010.09.005.
Rao, D., Kim, T. & Dang, V.N., 2015. A dynamic event tree informed approach to probabilistic
accident sequence modeling: Dynamics and variabilities in medium LOCA. Reliability
Engineering and System Safety, 142, pp.78–91. Available at:
http://dx.doi.org/10.1016/j.ress.2015.04.011.
Rao, K.D. et al., 2009. Dynamic fault tree analysis using Monte Carlo simulation in probabilistic
safety assessment. Reliability Engineering and System Safety, 94, pp.872–883.
Su, K., Hwang, S. & Liu, T., 2000. Knowledge architecture and framework design for preventing
human error in maintenance tasks. , 19, pp.219–228.
Subotic, B., Ochieng, W.Y. & Straeter, O., 2007. Recovery from equipment failures in ATC:
Determination of contextual factors. Reliability Engineering and System Safety, 92,
pp.858–870.
Torchio, A., 2014. Affidabilità dell'operatore nella chirurgia mininvasiva: analisi tassonomica dei
fattori umani e organizzativi [Operator reliability in minimally invasive surgery: a
taxonomic analysis of human and organisational factors].
Trucco, P. & Leva, M.C., 2007. A probabilistic cognitive simulator for HRA studies (PROCOS).
Reliability Engineering and System Safety, 92, pp.1117–1130.
Trucco, P., Onofrio, R. & Galfano, A., 2017. Human Reliability Analysis (HRA) for Surgery: A
Modified HEART Application to Robotic Surgery. In V. G. Duffy & N. Lightner, eds.
Advances in Human Factors and Ergonomics in Healthcare: Proceedings of the AHFE 2016
International Conference on Human Factors and Ergonomics in Healthcare, July 27-31,
2016, Walt Disney World®, Florida, USA. Cham: Springer International Publishing, pp. 27–
37. Available at: http://dx.doi.org/10.1007/978-3-319-41652-6_3.
Ugur, E. et al., 2016. Medical errors and patient safety in the operating room. Journal of the
Pakistan Medical Association, 66(5), pp.593–597.
Bibliography of independent articles and studies showing the cost-efficiencies and positive
impact of robotic surgery:
• General Robotic
Epstein, A. J.; Groeneveld, P. W.; Harhay, M. O.; Yang, F.; Polsky, D. (2013). “Impact of Minimally
Invasive Surgery on Medical Spending and Employee Absenteeism.” JAMA Surg: 1-7.
• Gynecology
Bell, M. C. T., J.; Seshadri-Kreaden, U.; Suttle, A. W.; Hunt, S. (2008). “Comparison of outcomes
and cost for endometrial cancer staging via traditional laparotomy, standard laparoscopy
and robotic techniques.” Gynecologic Oncology 111(3): 407-411.
Halliday, D. L., S.; Vaknin, Z.; Deland, C.; Levental, M.; McNamara, E.; Gotlieb, R.; Kaufer, R.;
How, J.; Cohen, E.; Gotlieb, W. H. (2010). “Robotic radical hysterectomy: comparison
of outcomes and cost.” Journal of Robotic Surgery: 1-6.
Hoyte, L. R., R.; Mezzich, J.; Bassaly, R.; Downes, K. (2012). “Cost analysis of open versus
robotic-assisted sacrocolpopexy.” Female Pelvic Med Reconstr Surg 18(6): 335-339.
Landeen, L. B. B., M. C.; Hubert, H. B.; Bennis, L. Y.; Knutsen-Larson, S. S.; Seshadri-Kreaden,
U. (2011). “Clinical and cost comparisons for hysterectomy via abdominal, standard
laparoscopic, vaginal and robot-assisted approaches.” South Dakota Medicine 64(6):
197-199, 201, 203 passim.
Lau, S. V., Z.; Ramana-Kumar, A. V.; Halliday, D.; Franco, E. L.; Gotlieb, W. H. (2012).
“Outcomes and cost comparisons after introducing a robotics program for endometrial
cancer surgery.” Obstetrics and Gynecology 119(4): 717-724.
Reynisson, P. P., J. (2013). “Hospital costs for robot-assisted laparoscopic radical hysterectomy
and pelvic lymphadenectomy.” Gynecol Oncol.
• Thoracic
Park, B. J. F., R. M. (2008). “Cost Comparison of Robotic, Video-assisted Thoracic Surgery and
Thoracotomy Approaches to Pulmonary Lobectomy.” Thoracic Surgery Clinics 18(3):
297-300.
• Urology
Alemozaffar, M. C., S. L.; Kacker, R.; Sun, M.; Dewolff, W. C.; Wagner, A. A. (2012).
“Comparing Costs of Robotic, Laparoscopic, and Open Partial Nephrectomy.” J
Endourol.
Cooperberg, M. R. R., N. R.; Duff, S. B.; Hughes, K. E.; Sadownik, S.; Smith, J. A.; Tewari, A.
K. (2012). “Primary treatments for clinically localised prostate cancer: a comprehensive
lifetime cost-utility analysis.” BJU Int.
Hohwü, L. B., M.; Ehlers, L.; Venborg Pedersen, K. (2011). “A short-term cost-effectiveness
study comparing robot-assisted laparoscopic and open retropubic radical
prostatectomy.” Journal of Medical Economics 14(4): 403-409.
Martin, A. D. N., R. N.; Castle, E. P. (2011). “Robot-assisted radical cystectomy versus open
radical cystectomy: A complete cost analysis.” Urology 77(3): 621-625.
Morgan, J. A. T., B. A.; Peacock, J. C.; Hollingsworth, K. W.; Smith, C. R.; Oz, M. C.;
Argenziano, M. (2005). “Does robotic technology make minimally invasive cardiac
surgery too expensive? A hospital cost analysis of robotic and conventional techniques.”
J Card Surg 20(3): 246-251.
WEBSITE REFERENCES
Prostatectomia Radicale Robotica per Tumore alla Prostata con Tecnica Retrovescicale
[Robotic Radical Prostatectomy for Prostate Cancer with the Retrovesical Technique].
The posterior access allows the nerves to be spared, avoiding incontinence and
impotence; little bleeding and no transfusions, no bladder catheter, and shorter recovery
times. Dr. Aldo Massimo Bocciardi, Ospedale Niguarda, Milano (Bocciardi, 2014):
• http://www.medicinaeinformazione.com/-tumore-alla-prostata-chirurgiarobotica-ininvasiva-con-tecnica-retrovescicale-per-evitare-incontinenza-eimpotenza.html
• http://www.medicinaeinformazione.com/la-chirurgia-urologica-oggi-laparoscopia-3d-e-robotica-per-interventi-piugrave-precisi-e-conservativi
Abmedica website: http://www.abmedica.it/it/prodotti/da-vinci
• http://roboticprostatesurgeryindia.com/
• https://robotenomics.com/2014/06/05/the-cost-effectiveness-and-
advantages-of-robotic-surgery/
• http://www.centreforroboticsurgery.com/robotic-radical-prostatectomy/
Montebelli, M.R., 15.02.2014. DaVinci: un robot chirurgico geniale e cost-effective [DaVinci: an
ingenious and cost-effective surgical robot]. Scienza e Farmaci, Quotidiano online di
formazione sanitaria [online daily of healthcare education] (Giulianotti; Rocco, 2014):
• http://www.quotidianosanita.it/scienza-efarmaci/articolo.php?articolo_id=19725
• http://www.eurocontrol.fr/Newsletter/2002/November/GBAS/GBASv0.32.htm
• http://www.medicinaeinformazione.com
http://www.montallegro.it/images/pubblicazioni/pdf/per_saperne_di_piu/Prostata_Simonato
APPENDIX 1: Tools used for RARP
procedure
Tools used for the RARP procedure at Ca' Granda Niguarda Hospital (Milan):
Scalpel, no. 11 blade
Curved Kocher forceps, to retract the subcutaneous tissue
Backhaus clamp on the fascia, with upward tension
2 small Farabeuf retractors
2 forceps (anatomical, surgical or Durante)
3 robotic trocars
1 robotic 30° endoscope
1 AirSeal trocar
1 5-mm trocar
1 12-mm trocar
1 Johanne laparoscopic grasper
1 laparoscopic scissors
1 Bbraun DS M clip
1 Bbraun DS SM clip
1 Bbraun DS S clip
1 Aesculap clip, small only
1 set of 10-mm metal clips
1 Veress needle
2 Ethilon 2-0 sutures with straight needles
1 Vicryl Rapide 3-0 suture, HR 22 non-cutting needle
2 V-Loc sutures (15 cm and 23 cm, with half-circle non-cutting needle)
1 silk 1 suture with cutting needle, to secure the drain
1 Vicryl 0 suture, 5/8 non-cutting needle, for the fascia
2 Vicryl Rapide 2-0 sutures with cutting needle, for the skin
1 Elefant suction device, 45 cm
1 robotic curved monopolar scissors
1 robotic Cadiere forceps
1 robotic Maryland forceps
1 robotic needle driver
1 Dufour catheter, 18 Ch, Simplastic
1 cystostomy set with Foley balloon catheter, 14 Ch
1 tubular drain
APPENDIX 2: Validated Task Analysis of
BA-RARP procedure
Retzius-sparing Robotic Radical Prostatectomy technique
https://www.youtube.com/watch?t=13&v=DS7ddQltHRY (Retzius-sparing Approach for Robot-assisted Laparoscopic Radical Prostatectomy) (December 2013)
TASK ANALYSIS
1) PORT PLACEMENT
For this procedure the following are used: 4 robotic arms and 2 assistant trocars, placed in a standardised layout. The grasper is mounted on the second robotic arm and the bipolar forceps on the third, unlike in the anterior approach.
2) PERITONEAL INCISION AND ISOLATION OF THE SEMINAL VESICLES
The operation starts with a 5-7 cm incision in the pouch of Douglas, so as to isolate the seminal vesicles.
The first structure encountered is the vas deferens:
- The right vas deferens is isolated and divided.
- The right seminal vesicles are isolated with the help of clips (about 3 mm in size). The same manoeuvre is performed on the left:
- The left vas deferens is isolated and divided
- The left seminal vesicles are isolated using clips
3) SUSPENSION OF THE PERITONEUM
To enlarge the available working space, the assistant places two transabdominal stitches with straight needles, tangential to the prepubic area. These pass through the peritoneum (close to the bladder), creating two 'curtains', one on the right and one on the left.
- The seminal vesicles and the vas deferens are suspended to the two 'curtains'
4) ISOLATION OF THE POSTERIOR SURFACE OF THE PROSTATE AND OF THE LATERAL PEDICLES
- An intra- or extrafascial plane is opened, depending on the oncological grade of the tumour
- Isolation of the seminal vesicles with the help of clips
- Isolation of the right pedicle, using clips
- Division of the right pedicle, limiting the use of energy
- Isolation of the left pedicle, using clips
- Division of the left pedicle, limiting the use of energy
In this way the lateral space of the prostate is obtained.
5) ISOLATION OF THE BLADDER NECK
- The seminal vesicles are pulled downwards with the grasper, to obtain a better exposure of the bladder neck
- The vesico-prostatic junction is reached
On both the right and the left side of the bladder neck, the bladder lies above and the prostate below, unlike in the standard technique
- The vesico-prostatic junction is divided and the bladder neck is spared (if oncologically feasible)
- The muscle fibres can be coagulated following the plane separating the bladder from the prostate
- The Maryland forceps is passed behind the bladder neck, so that it embraces the catheter
- The posterior part of the bladder neck is incised with the monopolar scissors
- The catheter appears
- Two cardinal stitches are placed at the 6 and 12 o'clock positions, to ease identification of the bladder neck during the anastomosis and to avoid retraction of the neck mucosa
- Grasp the 6 o'clock marker (first stitch)
- The catheter is pulled downwards
- Release the 6 o'clock stitch with the Maryland
- Second stitch at 12 o'clock on the bladder neck; grasp the 12 o'clock marker
- Pull upwards
- Completion of the bladder neck incision: the anterior part is incised
6) ISOLATION OF THE ANTERIOR SURFACE OF THE PROSTATE AND OF THE PROSTATIC APEX
- The anterior and lateral parts of the prostate are isolated by blunt dissection
- Avoid entering the Santorini plexus: do not divide, ligate or open the vessels of the Santorini venous complex
- Dissection of the lateral fasciae, when possible
- The dissection continues towards the prostatic apex. The difference from the traditional method is that here the bladder lies above the prostate, not behind it
- The apex is thus isolated and the urethra is identified as well: its longitudinal fibres can be clearly seen
- Division of the urethra
- The catheter appears
- The dissection of the prostatic apex is completed with the incision of the posterior part of the urethra
7) PLACEMENT OF THE PROSTATE IN THE ENDOBAG
- The prostate is completely isolated
- The prostate is placed in a bag (Endobag)
- Removal of the prostate
- Irrigation of the prostatic fossa
- The fossa left by the prostate is checked for any bleeding, using clips
- The instruments are withdrawn for cleaning
8) ANASTOMOSIS
- Haemostatic agents can be used when needed
- Cleaning with a gauze
The anastomosis is performed first on the left side and then on the right, following a modified Van Velthoven technique. Two V-Loc sutures are used, starting from the left anterior quarter of the urethral margin, then the right anterior and posterior quarters, and finally the left posterior quarter. At the end of the anastomosis a leak test is performed.
- Left side (left anterior quarter): outside-in passes through the bladder, inside-out through the urethra
- The first stitch pulls the bladder neck downwards; after 3-4 passes (depending on the thickness of the bladder neck) the right side is started
- Right side (right anterior quarter): 5-7 outside-in passes through the bladder, inside-out through the urethra
The anterior plane is complete; the posterior plane is then performed.
- Right posterior quarter: outside-in through the bladder, inside-out through the urethra
- The left suture is used for another 2-3 passes to complete the plane
- The sutures are tensioned
- The catheter is passed through
- The sutures are cut
- The sutures of the two 'curtains' are cut
- The bladder is filled with saline, in order to test the anastomosis
- If there are no contraindications and the anastomosis is watertight, a suprapubic cystostomy is placed and the urethral catheter is removed
9) DRAINAGE
10) UNDOCKING OF THE ROBOT
11) REMOVAL OF THE PROSTATE AND OF THE PORTS
APPENDIX 3: Validated Task Analysis:
Parallelism between tasks performed at the
console and those at the table
APPENDIX 4: Contributing factor
classifications in the human factors
classification framework for patient safety
(Mitchell et al. 2016)
APPENDIX 5: Simulation Tool’s Script
(Matlab®)
%% DEFINITION OF CONSTANT INPUTS AND INITIALISATION
GT=8;
Bound_GT=[0.35 0.97; 0.14 0.42; 0.12 0.28; 0.06 0.13; 0.007 0.045; 0.0008 0.007; 8e-5 0.009; 0.000006 0.0009];
chir=3;
CT=3;
trial=20000;
ME=[4 1 4]; % dim [Crtask]
grade=5;
IF=20;
multiplier=[10 10 9.8 1.048 9.775 2 5 5.7606 6.8 4.525 1.95 20 8 1.31 11 1.6 5 6.4 6.08 4.04]; % new, dim [IF]
%multiplier=[10 10 9.8 1.048 9.725 1.4 3.3 3.101 6.8 4.525 1.95 17 3 1.31 11 1.6 3 6.3 1.285 1.45]'; % old, dim [IF]
NHU_int=zeros(GT,10);
min_tr=zeros(20,1); % dim [IF]
moda=[4 0 0 0 0 0 8 0 0 0 0 0 0 0 2 8 0 0 0 0]; % dim [IF]
max_tr=ones(20,1)*10; % dim [IF]
Prob_grade=zeros(grade,1);
%% Alpha ranges for the corresponding MEs
alpha_crtask1=[0.39 0.425; 0.1 0.60; 0.01 0.05; 0.425 0.58];
alpha_crtask2=[1 1];
alpha_crtask3=[0.28 0.5; 0.1 0.5; 0.2 0.6; 0.3 0.57];
alpha=[alpha_crtask1;alpha_crtask2;alpha_crtask3];
Prob_fail=zeros(CT,trial);
grade_chir=[1 1;1 2;2 3;1 2;1 2;1 2;1 2;1 1;1 1];
Prob_ME=ones(sum(ME)+3,trial);
Prob_grade=zeros(grade,trial);
Prob_ME_mean=zeros(CT+3,max(ME));
grade_jud=zeros(CT,trial);
Clavien_D=zeros(trial,1);
%% COMPUTATION OF THE PATH PROBABILITIES
for t=1:trial
    crtask=1;
    f_vect1=[1 5 10]; % link the relevant IFs to this critical task
    p=zeros(IF,1);
    for f=1:IF
        p(f)=(trirnd(min_tr(f),moda(f),max_tr(f)))./10;
    end
    p1=zeros(IF,1);
    for n=f_vect1
        p1(n)=p(n);
    end
    g=7;
    NHU_int = Bound_GT(g,1) + (Bound_GT(g,2)-Bound_GT(g,1)).*rand(1);
    prod_vect=[];
    if sum(p1)==0
        prod_vect=1;
    else
        for f=1:IF
            prod_vect(f)=((multiplier(f)-1)*p1(f))+1;
        end
    end
    Prob_fail(crtask,t)=NHU_int*prod(prod_vect);
    Prob_succ(crtask,t)=1-Prob_fail(crtask,t);
    a=zeros(ME(crtask),1);
    % Draw random alphas for the error modes (MEs)
    while ( a(end)<alpha(sum(ME(1:crtask)),1) ) || ( a(end)>alpha(sum(ME(1:crtask)),2) )
        for i=1:(length(a)-1)
            a(i)=alpha(sum(ME(1:crtask-1))+i,1) + (alpha(sum(ME(1:crtask-1))+i,2)-alpha(sum(ME(1:crtask-1))+i,1)).*rand(1);
        end
        a(end)=1-sum(a(1:(end-1)));
    end
    % Sample which error mode (if any) occurs: the thresholds are the
    % cumulative sums of the alpha shares of the failure probability
    scelta=rand(1);
    if scelta<=Prob_fail(crtask,t)*a(1)
        Prob_ME(sum(ME(1:crtask-1))+1,t)=Prob_fail(crtask,t).*a(1);
        grade_jud(crtask,t)=randi([grade_chir(sum(ME(1:crtask-1))+1,1),grade_chir(sum(ME(1:crtask-1))+1,2)],1);
    elseif scelta<=Prob_fail(crtask,t)*(a(1)+a(2))
        Prob_ME(sum(ME(1:crtask-1))+2,t)=Prob_fail(crtask,t).*a(2);
        grade_jud(crtask,t)=randi([grade_chir(sum(ME(1:crtask-1))+2,1),grade_chir(sum(ME(1:crtask-1))+2,2)],1);
    elseif scelta<=Prob_fail(crtask,t)*(a(1)+a(2)+a(3))
        Prob_ME(sum(ME(1:crtask-1))+3,t)=Prob_fail(crtask,t).*a(3);
        grade_jud(crtask,t)=randi([grade_chir(sum(ME(1:crtask-1))+3,1),grade_chir(sum(ME(1:crtask-1))+3,2)],1);
    elseif scelta<=Prob_fail(crtask,t)*(a(1)+a(2)+a(3)+a(4))
        Prob_ME(sum(ME(1:crtask-1))+4,t)=Prob_fail(crtask,t).*a(4);
        grade_jud(crtask,t)=randi([grade_chir(sum(ME(1:crtask-1))+4,1),grade_chir(sum(ME(1:crtask-1))+4,2)],1);
    else
        Prob_ME(sum(ME(1:crtask-1))+5,t)=Prob_succ(crtask,t);
        grade_jud(crtask,t)=0;
    end
    crtask=2;
    f_vect2=[1 7];
    p2=zeros(IF,1);
    for n=f_vect2
        p2(n)=p(n);
    end
    g=7;
    NHU_int = Bound_GT(g,1) + (Bound_GT(g,2)-Bound_GT(g,1)).*rand(1);
    prod_vect=[];
    if sum(p2)==0
        prod_vect=1;
    else
        for f=1:IF
            prod_vect(f)=((multiplier(f)-1)*p2(f))+1;
        end
    end
    Prob_fail(crtask,t)=NHU_int*prod(prod_vect);
    Prob_succ(crtask,t)=1-Prob_fail(crtask,t);
    a=zeros(ME(crtask),1);
    % Draw random alphas for the error modes (MEs)
    while ( a(end)<alpha(sum(ME(1:crtask)),1) ) || ( a(end)>alpha(sum(ME(1:crtask)),2) )
        for i=1:(length(a)-1)
            a(i)=alpha(sum(ME(1:crtask-1))+i,1) + (alpha(sum(ME(1:crtask-1))+i,2)-alpha(sum(ME(1:crtask-1))+i,1)).*rand(1);
        end
        a(end)=1-sum(a(1:(end-1)));
    end
    scelta=rand(1);
    if scelta<=Prob_fail(crtask,t)*a(1)
        Prob_ME(sum(ME(1:crtask-1))+2,t)=Prob_fail(crtask,t).*a(1);
        grade_jud(crtask,t)=randi([grade_chir(sum(ME(1:crtask-1))+1,1),grade_chir(sum(ME(1:crtask-1))+1,2)],1);
    else
        Prob_ME(sum(ME(1:crtask-1))+3,t)=Prob_succ(crtask,t);
        grade_jud(crtask,t)=0;
    end
    crtask=3;
    f_vect3=[1 5 9 10];
    p3=zeros(IF,1);
    for n=f_vect3
        p3(n)=p(n);
    end
    g=7;
    NHU_int = Bound_GT(g,1) + (Bound_GT(g,2)-Bound_GT(g,1)).*rand(1);
    prod_vect=[];
    if sum(p3)==0
        prod_vect=1;
    else
        for f=1:IF
            prod_vect(f)=((multiplier(f)-1)*p3(f))+1;
        end
    end
    Prob_fail(crtask,t)=NHU_int*prod(prod_vect);
    Prob_succ(crtask,t)=1-Prob_fail(crtask,t);
    a=zeros(ME(crtask),1);
    % Draw random alphas for the error modes (MEs)
    while ( a(end)<alpha(sum(ME(1:crtask)),1) ) || ( a(end)>alpha(sum(ME(1:crtask)),2) )
        for i=1:(length(a)-1)
            a(i)=alpha(sum(ME(1:crtask-1))+i,1) + (alpha(sum(ME(1:crtask-1))+i,2)-alpha(sum(ME(1:crtask-1))+i,1)).*rand(1);
        end
        a(end)=1-sum(a(1:(end-1)));
    end
    scelta=rand(1);
    if scelta<=Prob_fail(crtask,t)*a(1)
        Prob_ME(sum(ME(1:crtask-1))+3,t)=Prob_fail(crtask,t).*a(1);
        grade_jud(crtask,t)=randi([grade_chir(sum(ME(1:crtask-1))+1,1),grade_chir(sum(ME(1:crtask-1))+1,2)],1);
    elseif scelta<=Prob_fail(crtask,t)*(a(1)+a(2))
        Prob_ME(sum(ME(1:crtask-1))+4,t)=Prob_fail(crtask,t).*a(2);
        grade_jud(crtask,t)=randi([grade_chir(sum(ME(1:crtask-1))+2,1),grade_chir(sum(ME(1:crtask-1))+2,2)],1);
    elseif scelta<=Prob_fail(crtask,t)*(a(1)+a(2)+a(3))
        Prob_ME(sum(ME(1:crtask-1))+5,t)=Prob_fail(crtask,t).*a(3);
        grade_jud(crtask,t)=randi([grade_chir(sum(ME(1:crtask-1))+3,1),grade_chir(sum(ME(1:crtask-1))+3,2)],1);
    elseif scelta<=Prob_fail(crtask,t)*(a(1)+a(2)+a(3)+a(4))
        Prob_ME(sum(ME(1:crtask-1))+6,t)=Prob_fail(crtask,t).*a(4);
        grade_jud(crtask,t)=randi([grade_chir(sum(ME(1:crtask-1))+4,1),grade_chir(sum(ME(1:crtask-1))+4,2)],1);
    else
        Prob_ME(sum(ME(1:crtask-1))+7,t)=Prob_succ(crtask,t);
        grade_jud(crtask,t)=0;
    end
% Final run evaluation
Clavien_D(t)=max(grade_jud(:,t));
if Clavien_D(t)>0
Prob_grade(Clavien_D(t),t)=prod(Prob_ME(:,t));
Prob_grade0(t)=prod(Prob_succ(:,t));
else
Prob_grade0(t)=prod(Prob_succ(:,t))+prod(Prob_ME(:,t));
end
end
Prob_Grade_Final=[Prob_grade0;Prob_grade];
%% Group and sort the probabilities for each grade
Grade_final_0=[]; Grade_final_1=[]; Grade_final_2=[];
Grade_final_3=[]; Grade_final_5=[]; Grade_final_4=[];
for i=1:trial
    Grade=Prob_Grade_Final(:,i);
    val=find(Grade);
    for j=1:length(val)
        if val(j)==1
            if (Grade(val(j))>0) && (Grade(val(j))<1)
                Grade_final_0=[Grade_final_0 Grade(val(j))];
            end
        elseif val(j)==2
            Grade_final_1=[Grade_final_1 Grade(val(j))];
        elseif val(j)==3
            Grade_final_2=[Grade_final_2 Grade(val(j))];
        elseif val(j)==4
            Grade_final_3=[Grade_final_3 Grade(val(j))];
        elseif val(j)==5
            Grade_final_4=[Grade_final_4 Grade(val(j))];
        elseif val(j)==6
            Grade_final_5=[Grade_final_5 Grade(val(j))];
        end
    end
end
Grade_prob_vect_0=sort(Grade_final_0);
pd = fitdist(Grade_prob_vect_0','Normal');
xi=linspace(0,1,100);
y = pdf(pd,xi);
figure
histogram(Grade_prob_vect_0')
hold on
scale = 10/max(y);
plot((xi),(y.*scale))
hold off
title(['PDF of grade',num2str(0)]);
Grade_prob_vect_1=sort(Grade_final_1);
pd = fitdist(Grade_prob_vect_1','Normal');
xi=linspace(0,1,100);
y = pdf(pd,xi);
figure
histogram(Grade_prob_vect_1');
hold on
scale = 10/max(y);
plot(xi,y.*scale)
hold off
set(gca,'xlim',[0 max(Grade_prob_vect_1)])
title(['PDF of grade',num2str(1)]);
Grade_prob_vect_2=sort(Grade_final_2);
pd = fitdist(Grade_prob_vect_2','Normal');
xi=linspace(0,1,100);
y = pdf(pd,xi);
figure
histogram(Grade_prob_vect_2')
hold on
scale = 10/max(y);
plot(xi,y.*scale)
hold off
set(gca,'xlim',[0 max(Grade_prob_vect_2)])
title(['PDF of grade',num2str(2)]);
Grade_prob_vect_3=sort(Grade_final_3);
pd = fitdist(Grade_prob_vect_3','Normal');
xi=linspace(0,1,100);
y = pdf(pd,xi);
figure
histogram(Grade_prob_vect_3')
hold on
scale = 10/max(y);
plot(xi,y.*scale)
hold off
set(gca,'xlim',[0 max(Grade_prob_vect_3)])
title(['PDF of grade',num2str(3)]);
perc_g0=quantile(Grade_prob_vect_0,0.95);
perc_g3=quantile(Grade_prob_vect_3,0.95);
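For reference, the failure probability computed for each critical task in the script above follows the HEART scheme: a nominal human unreliability is sampled within the bounds of the generic task, then scaled by one factor per influencing factor. A minimal Python sketch of that computation; the function name and the numeric values are illustrative, not taken from the thesis data:

```python
def failure_probability(nhu, multipliers, proportions):
    """HEART-style human error probability: the nominal human unreliability
    is scaled, for every influencing factor, by (multiplier - 1) * p + 1,
    where p in [0, 1] is the assessed proportion of affect."""
    prob = nhu
    for m, p in zip(multipliers, proportions):
        prob *= (m - 1.0) * p + 1.0
    return prob

# Illustrative values only: NHU = 0.003 and three influencing factors.
hep = failure_probability(0.003, [10, 5, 1.6], [0.4, 0.0, 0.5])
# factors: (10-1)*0.4+1 = 4.6, (5-1)*0+1 = 1.0, (1.6-1)*0.5+1 = 1.3
```

This mirrors the script's line Prob_fail(crtask,t)=NHU_int*prod(prod_vect), where each p(f) is drawn from the triangular distribution and divided by 10.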
APPENDIX 6: Matlab® functions
function randomVector = trirnd(minVal, topVal, maxVal, varargin)
% TRIRND generates discrete random numbers from a triangular distribution.
%   randomValue = TRIRND(minVal, topVal, maxVal);
%   The distribution is defined by:
%   - a minimum and a maximum value
%   - a "top" value, with the highest probability
%   The distribution is defined with zero probability at minVal-1 and
%   maxVal+1, and with highest probability at topVal. Hence every value in
%   the range (including the maximum and minimum values) has a non-zero
%   probability of being drawn, whatever topVal is. The output is a random
%   integer.
%   randomMatrix = TRIRND(minVal, topVal, maxVal, nrows, ncols) returns a
%   (nrows x ncols) matrix of random integers.
%   NOTES:
%   * This is a numeric approximation, so use with care in "serious"
%     statistical applications!
%   * Two different algorithms are implemented. One is efficient for a
%     large number of random points within a small range (maxVal-minVal),
%     while the other is efficient for a large range with a reasonable
%     number of points. For large ranges there is an O(n^2) relation with
%     respect to the product range*number_of_points; when this product
%     reaches about a billion, the runtime reaches several minutes.
if nargin < 3
    error('Requires at least three input arguments.');
end
nrows = 1;
ncols = 1;
if nargin > 3
    if nargin > 4
        nrows = varargin{1};
        ncols = varargin{2};
    else
        error('Size information is inconsistent.');
    end
end
if topVal > maxVal || topVal < minVal || minVal > maxVal
    randomVector = ones(nrows, ncols).*NaN;
    return;
end
% go for the randomization
mxprob = maxVal-minVal+1;
if mxprob < 51 || (mxprob < 101 && nrows*ncols > 500) || (mxprob < 501 && nrows*ncols > 8000) || (mxprob < 1001 && nrows*ncols > 110000)
    vector = ones(1,mxprob).*topVal;
    j = (topVal-minVal+1);
    slope = 1/j;
    j = j - 1;
    for i = (topVal-1):-1:minVal
        vector = [vector ones(1,floor(mxprob*slope*j)).*i];
        j = j - 1;
    end
    j = (maxVal+1-topVal);
    slope = 1/j;
    j = j - 1;
    for i = (topVal+1):maxVal
        vector = [vector ones(1,floor(mxprob*slope*j)).*i];
        j = j - 1;
    end
    randomVector = vector(unidrnd(size(vector,2),nrows*ncols,1));
else
    probs = mxprob:-1*mxprob/(topVal-minVal+1):1;
    probs = [probs(end:-1:2) mxprob:-1*mxprob/(maxVal-topVal+1):1];
    probs = cumsum(probs./sum(probs));
    if nrows*ncols*mxprob > 1000000
        % dealing with large quantities of data, hard on memory
        randomVector = [];
        i = 1;
        while nrows*ncols*mxprob/i > 1000000
            i = i * 10;
        end
        probs = repmat(probs, ceil(nrows*ncols/i), 1);
        for j = 1:i
            rnd = repmat(unifrnd(0, 1, ceil(nrows*ncols/i), 1), 1, mxprob);
            % map each sampled index to a value in [minVal, maxVal]
            randomVector = [randomVector sum(probs < rnd, 2) + minVal];
        end
        randomVector = randomVector(1:nrows*ncols);
    else
        probs = repmat(probs, nrows*ncols, 1);
        rnd = repmat(unifrnd(0, 1, nrows*ncols, 1), 1, mxprob);
        % map each sampled index to a value in [minVal, maxVal]
        randomVector = sum(probs < rnd, 2) + minVal;
    end
end
% generate desired matrix:
randomVector = reshape(randomVector, nrows, ncols);
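A simplified Python analogue of the discrete triangular sampling above (a sketch, not a line-by-line port: it assumes only the linear weighting described in the help text, with zero probability at min_val-1 and max_val+1):

```python
import random

def tri_sample(min_val, top_val, max_val, rng=random):
    """Draw one integer from a discrete triangular distribution on
    [min_val, max_val] whose weight rises linearly up to the mode
    top_val and falls linearly after it."""
    values = list(range(min_val, max_val + 1))
    # distance from the zero-probability points min_val-1 and max_val+1
    weights = [v - (min_val - 1) if v <= top_val else (max_val + 1) - v
               for v in values]
    return rng.choices(values, weights=weights, k=1)[0]

# The simulation script calls trirnd(min_tr(f), moda(f), max_tr(f)) on [0, 10]:
sample = tri_sample(0, 4, 10)
```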
APPENDIX 7: Questionnaire Results
THREE SURGEONS are kindly asked to complete the following tables, individually.
The tables refer, respectively, to the tasks:
- Isolation of the lateral pedicles and of the posterior surface of the prostate;
- Detachment of the Santorini plexus from the anterior surface of the prostate;
- Anastomosis.
The three tables should be filled in adopting the same judgement criteria and the
following instructions.
In the first column, describe the possible Error Modes (EMs) that, starting from the
Critical Task the table refers to, lead to the failure of the task.
In the second column, indicate the share (α; percentage value from 0 to 100) that the
specific Error Mode represents with respect to the total number of expected failures
of the action.
For example:
Critical task: 3. Anastomosis
Error Modes: 3.1) Suture asymmetry;
3.2) Suture position;
3.3) Failure to approximate the flaps;
If, 4 times out of 10 that an error occurs while performing an anastomosis, the error
mode is "Suture asymmetry", then the value of α for that EM will be 40.
In the third column ("Recovery sequence"), indicate the sequence of actions, if any,
needed to recover the error; of course there may be several strategies to solve the
same problem, so please highlight all the most significant ones.
In the fourth column ("Re-entry point"), indicate whether, and at which point of the
standard procedure (described by the Task Flow provided), the sequence of recovery
actions rejoins the original one; if no action is foreseen to recover the error, leave the
third column empty and indicate in the fourth column the phase immediately
following the critical task considered.
In the fifth column ("Patient outcome"), indicate which of the outcomes described by
Clavien-Dindo (Mitropoulos, D., et al. (2013); Dindo, et al. (2004)) best describes the
patient's situation, assuming that: the EM of the row being filled in has occurred, the
error has been identified, and the recovery has been completed without further errors.
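The α shares elicited with this questionnaire split each critical task's failure probability across its error modes, as in the simulation tool's alpha vectors. A minimal Python sketch of that split; the function name and the numbers are illustrative:

```python
def split_by_alpha(p_fail, alphas):
    """Distribute a task's failure probability over its error modes
    according to the elicited alpha shares (percentages summing to 100);
    the complement 1 - p_fail is the success probability."""
    if abs(sum(alphas) - 100.0) > 1e-9:
        raise ValueError("alpha shares must sum to 100")
    return [p_fail * a / 100.0 for a in alphas], 1.0 - p_fail

# Example from the instructions: alpha = 40 for "suture asymmetry".
mode_probs, p_success = split_by_alpha(0.05, [40, 35, 25])
```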
Table 18: Clavien-Dindo grading system for the classification of surgical complications
(Mitropoulos, D., et al. (2013); Dindo et al., (2004))
Grades Definitions
Grade I
Any deviation from the normal postoperative course without the need for
pharmacological treatment or surgical, endoscopic and radiological
interventions. Acceptable therapeutic regimens are: drugs such as antiemetics,
antipyretics, analgesics, diuretics and electrolytes, and physiotherapy. This
grade also includes wound infections opened at the bedside.
Grade II
Requiring pharmacological treatment with drugs other than those allowed for
grade I complications. Blood transfusions and total parenteral nutrition are also
included.
Grade III Requiring surgical, endoscopic or radiological intervention
Grade IV Life-threatening complication
Grade V Death of a patient
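In the simulation, each run is classified with the worst (highest) Clavien-Dindo grade among the error modes that occurred, as in the script's Clavien_D(t)=max(grade_jud(:,t)). A minimal sketch of that classification; the label strings below paraphrase the table above:

```python
# Grade 0 is used by the simulation for runs with no complication.
CLAVIEN_DINDO = {
    0: "No complication",
    1: "Grade I: deviation not requiring pharmacological or surgical treatment",
    2: "Grade II: pharmacological treatment (incl. transfusions and TPN)",
    3: "Grade III: surgical, endoscopic or radiological intervention",
    4: "Grade IV: life-threatening complication",
    5: "Grade V: death of the patient",
}

def run_outcome(grades_per_task):
    """A run's outcome is the worst grade judged over all critical tasks."""
    worst = max(grades_per_task)
    return worst, CLAVIEN_DINDO[worst]
```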
References:
- Dindo, Daniel, Nicolas Demartines, and Pierre-Alain Clavien. “Classification of Surgical
Complications: A New Proposal With Evaluation in a Cohort of 6336 Patients and
Results of a Survey.” Annals of Surgery 240.2 (2004): 205–213. PMC. Web. 5 Jan. 2017.
- Mitropoulos, D., et al. "Complications after Urologic Surgical Procedures." (2013).
SURGEON ONE
TASK 1: Isolation of the lateral pedicles and of the posterior surface of the prostate

ME 1.1: Wrong surgical plane with rupture of the prostate
α: 39. Recovery sequence: go back to the beginning of the surgical step and resume the correct plane. Re-entry point: beginning of the pedicle isolation. Patient outcome: -

ME 1.2: Wrong surgical plane with lesion of the neurovascular bundle
α: 60. Recovery sequence: -. Re-entry point: -. Patient outcome: 2d (erectile dysfunction)

ME 1.3: Rectal injury
α: 1. Recovery sequence: identification of the lesion, suture and repair. Re-entry point: the procedure continues (if not recognised, it can lead to 3b-d). Patient outcome: 3b-d (colostomy)
TASK 2: Detachment of the Santorini plexus from the anterior surface of the prostate

ME 2.1: Partial or complete opening of the Santorini plexus with bleeding
α: 100. Recovery sequences (each with re-entry point at the beginning of the anastomosis; patient outcome: 2, transfusions):
- the table assistant compresses the bleeding vessel with the suction device and uses irrigation;
- increase of the pneumoperitoneum;
- suture of the bleeding vessels.
TASK 3: Anastomosis

ME 3.1: Anastomosis not watertight
α: 40. Recovery sequences:
- placement of additional stitches (re-entry point: placement of the cystostomy; patient outcome: -);
- redo of the anastomosis (re-entry point: placement of the cystostomy);
- no resolution (no cystostomy is placed; patient outcome: -).

ME 3.2: Urethral injury
α: 10. Recovery sequence: placement of additional stitches. Re-entry point: placement of the cystostomy. Patient outcome: -

ME 3.3: Catheter caught by the anastomosis stitches
α: 20. Recovery sequence: division of the suture and redo. Re-entry point: placement of the cystostomy.

ME 3.4: Suturing of the posterior bladder wall to the anterior one
α: 30. Recovery sequence: division of the suture and redo. Re-entry point: placement of the cystostomy.
SURGEON TWO
TASK 1: Isolation of the lateral pedicles and of the posterior surface of the prostate

ME 1.1: Failure to identify the correct surgical plane
α: 42.5. Recovery sequence: identification of a new surgical plane. Re-entry point: the sequence of actions allows full recovery. Patient outcome: Clavien-Dindo I

ME 1.2: Bleeding difficult to control
α: 42.5. Recovery sequence: application of clips or suture stitches. Re-entry point: the sequence of actions allows full recovery. Patient outcome: Clavien-Dindo I-II

ME 1.3: Rectal injury
α: 5. Recovery sequence: suture repair of the rectum / colostomy. Re-entry point: the sequence of actions allows full recovery. Patient outcome: Clavien-Dindo III

ME 1.4: Injury of the bladder wall
α: 10. Recovery sequence: suture repair of the bladder lesion. Re-entry point: the sequence of actions allows full recovery. Patient outcome: Clavien-Dindo I
TASK2: Detachment of the Santorini plexus from the anterior surface of the prostate

| Description of possible Error Modes (ME) for task 2 | α [0-100] | ME recovery sequence | "Re-entry point" | Patient outcome [grade] |
|---|---|---|---|---|
| ME 2.1: Opening of the Santorini plexus | 100 | Suturing of the Santorini plexus | The sequence of actions allows full recovery | Clavien-Dindo I |
TASK3: Anastomosis

| Description of possible Error Modes (ME) for task 3 | α [0-100] | ME recovery sequence | "Re-entry point" | Patient outcome [grade] |
|---|---|---|---|---|
| ME 3.1: Failed approximation of the flaps, with urinary fistula | 50 | Re-anastomosis / application of additional suture stitches | The sequence of actions allows full recovery | Clavien-Dindo II |
| ME 3.2: Laceration of the urethra and of the bladder neck | 50 | Re-anastomosis / application of additional suture stitches | The sequence of actions allows full recovery | Clavien-Dindo II |
SURGEON THREE
TASK1: Isolation of the lateral pedicles and the posterior surface of the prostate

| Description of possible Error Modes (ME) for task 1 | α [0-100] | ME recovery sequence | "Re-entry point" | Patient outcome [grade] |
|---|---|---|---|---|
| ME 1.1: Presence of adhesions and difficult identification of the surgical plane | 40 | Identification of the correct plane | Recovery of the correct plane | Grade I |
| ME 1.2: Bleeding of the pedicles | 58 | Application of metal clips or coagulation | Recovery | Grade I |
| ME 1.3: Lesion of the rectum | 2 | Identification of the lesion and suturing | Recovery through the above actions | Grade II-III |
TASK2: Detachment of the Santorini plexus from the anterior surface of the prostate

| Description of possible Error Modes (ME) for task 2 | α [0-100] | ME recovery sequence | "Re-entry point" | Patient outcome [grade] |
|---|---|---|---|---|
| ME 2.1: Bleeding | 100 | Closure of the Santorini plexus ("plugged" with the suction device) or low-pressure irrigation | Recovery always possible | Grade I |
TASK3: Anastomosis

| Description of possible Error Modes (ME) for task 3 | α [0-100] | ME recovery sequence | "Re-entry point" | Patient outcome [grade] |
|---|---|---|---|---|
| ME 3.1: Clamping of the catheter | 57 | Removal/cutting of the stitch from the catheter | Always possible | Grade I |
| ME 3.2: Anastomosis not watertight | 28 | Placement of additional stitches | Always possible | Grade I |
| ME 3.3: Closure of the anterior and posterior bladder walls with a stitch, so as to prevent passage of the catheter | 6 | Partial removal of the anastomosis stitches, or execution of a new anastomosis | Always possible | Grade I |
| ME 3.4: Laceration of the urethral margin | 9 | Placement of a new suture stitch | Recovery possible | Grade I |
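The α weights elicited above can be read as a conditional distribution over error modes: given that an error occurs in a task, each ME is drawn with probability α/100. A minimal Python sketch of this sampling step, using Surgeon Three's TASK3 weights as example data; the data structure and function names (`error_modes`, `sample_error_mode`) are illustrative assumptions, not the actual implementation of the thesis simulation tool:

```python
import random

# Illustrative encoding of one error-mode table (Surgeon Three, TASK3).
# "alpha" is the expert-elicited relative weight [0-100] of each Error
# Mode (ME), conditional on an error occurring in this task; the
# weights of one task sum to 100.
error_modes = [
    {"me": "3.1", "description": "clamping of the catheter",          "alpha": 57, "outcome": "Grade I"},
    {"me": "3.2", "description": "anastomosis not watertight",        "alpha": 28, "outcome": "Grade I"},
    {"me": "3.3", "description": "bladder walls closed by a stitch",  "alpha": 6,  "outcome": "Grade I"},
    {"me": "3.4", "description": "laceration of the urethral margin", "alpha": 9,  "outcome": "Grade I"},
]

def sample_error_mode(modes, rng=random):
    """Draw one ME with probability proportional to its alpha weight."""
    weights = [m["alpha"] for m in modes]
    return rng.choices(modes, weights=weights, k=1)[0]

# Sanity check: the elicited weights form a valid distribution.
assert sum(m["alpha"] for m in error_modes) == 100
```

In a Dynamic Event Tree run, this draw would select which branch (ME and its recovery sequence) the simulation follows after an error event in the task.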