Monte Carlo model of the Brainlab Novalis Classic 6 MV linac using the GATE simulation platform

The Vault: Electronic Theses and Dissertations, Graduate Studies
2014-09-29

Wiebe, Jared

Wiebe, J. (2014). Monte Carlo model of the Brainlab Novalis Classic 6 MV linac using the GATE simulation platform (Unpublished master's thesis). University of Calgary, Calgary, AB. doi:10.11575/PRISM/27004
http://hdl.handle.net/11023/1845

University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.

Downloaded from PRISM: https://prism.ucalgary.ca
UNIVERSITY OF CALGARY

Monte Carlo model of the Brainlab Novalis Classic 6 MV linac using the GATE simulation platform

by

Jared Wiebe

IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

DEPARTMENT OF PHYSICS AND ASTRONOMY

CALGARY, ALBERTA

SEPTEMBER, 2014
Abstract
Monte Carlo (MC) simulations are known to be the most accurate method to model radiation transport and absorbed dose. Geant4 Application for Tomographic Emission (GATE) is a Monte Carlo simulation platform built on the Geant4 toolkit and is relatively new to modeling radiotherapy systems. An MC model of the Brainlab Novalis Classic 6 MV linac was developed using GATE to investigate GATE's capabilities when applied to a stereotactic radiosurgery (SRS) dedicated linear accelerator. Depth dose distributions and dose profiles measured with an ion chamber, a diode detector, and radiochromic film are used as reference measurements for the model. The model agrees with the measured data, with greater than 90% of points passing a 3%/1mm gamma comparison, with the exception of the penumbra region at smaller field sizes, which still agrees within 3%/3mm. This model demonstrates GATE's ability to model linacs and can be used in the future for research and clinical comparisons.
Acknowledgements
I would like to acknowledge the University of Calgary, Tom Baker
Cancer Centre, and NSERC
for funding and financial support. I would also like to thank my
supervisor Dr. Nicolas Ploquin
and co-supervisor Dr. Rao Khan for the opportunity to work on this
project and their
understanding and guidance throughout my Master’s program. I would
also like to thank Edward
Pranoto, a summer student who created the CAD drawings seen in this
thesis.
Table of Contents

List of Tables
List of Figures and Illustrations
List of Symbols, Abbreviations and Nomenclature

CHAPTER ONE: INTRODUCTION
1.1 Radiation therapy
3.1 Overview of method
3.2 Particle simulation
CHAPTER FOUR: EXISTING MONTE CARLO CODES IN RADIATION THERAPY
4.1 EGSnrc
4.2 MCNP
4.4.1 Application of GATE to radiation therapy
4.5 Motivation for using GATE as the MC package for this project
CHAPTER FIVE: MATERIALS AND METHODS
5.1 Overview
7.2 Future Work
7.3 Closing comments
APPENDIX B: GATE MACRO
List of Tables

Table 4.1: Table summarizing the similarities and differences between EGSnrc, MCNP, and Geant4 for modeling photon and electron transport. The table is adapted from Verhaegen and Seuntjens [38].

Table 4.2: Summary of each parameter for the multiple scattering algorithm, ionisation, and production cuts. The user must specify each of these parameters in Geant4.

Table 5.1: Table of the in-house measurements of the MLC on the Novalis. The table displays each leaf width type (as coloured in Figure 5.4) and shows the width and length of the sub-volume for each leaf corresponding to Figure 5.5. All measurements are in mm.

Table 5.2: Recommended physics settings in GATE for radiotherapy applications involving EM interactions. Table adapted from the OpenGATE collaboration website [51].

Table 6.1: The columns display the results of the mean dose difference, mean gamma, and percentage of points passing gamma criteria of 3%/1mm and 3%/3mm. Results shown are for dose profiles and PDDs.
List of Figures and Illustrations

Figure 2.1: A plot of an x-ray spectrum produced from a keV electron beam. The unfiltered spectrum is shown by a straight line, reaching a maximum energy of the electrons used to produce the x-rays. The spectrum is filtered through the target causing the lower energy photons to be absorbed. Both axes are in arbitrary units and use a linear scale.

Figure 2.2: Diagram of the main linac components.

Figure 2.3: Diagram of the geometric penumbra due to the spot size. A narrower spot size will leave a smaller penumbra region, while a large spot size will cause a large penumbra.

Figure 2.4: Diagram of the main linac components and a water phantom. The electron beam (red arrow) travels from the top to the bottom and produces photons (green lines) via Bremsstrahlung on the Tungsten and Copper target. The cone extending to the water phantom represents the edge of the x-ray beam.

Figure 2.5: A plot showing the angular distribution of x-ray production via bremsstrahlung. The measure of production is given in barns per steradian. The radial line indicates the number of barns per steradian at a particular angle from the direction of the incident electrons. This trend continues when looking at higher electron energies. The plot is adapted from Attix, F. H. [2].

Figure 2.6: Image shows an example of a Copper flattening filter (one specific to the Novalis linac was not available). The total conical shape is made up of smaller conical sections of varying heights and angles. The filter is designed to produce a flat dose profile at treatment depth. The filter will also harden the beam – having a greater effect along the central axis.

Figure 2.7: Leaf end view of one bank of the micro MLC system on the Novalis. The width of each leaf varies along its height to produce a tongue and groove. Each leaf can move separately from the other in and out of the field (in and out of the page). At this viewing angle the x-ray beam would be incident from the bottom. Looking at the vertical in the image, the leaves are at an angle to match the divergence of the incident radiation beam.

Figure 2.8: Plot of a percent depth dose curve for a 6 MV linac in a 98x98 mm² square field size. The measurement is performed in a water phantom with 100 cm SSD using a CC13 ion chamber. The plot is normalised to the maximum dose value. The curve characterizes the dose deposition with depth along the central axis.

Figure 3.1: A flowchart of the MC simulation of photon transport. The initial starting particle or primary is picked off the stack. If the particle is ever below a cutoff threshold, it will be terminated and the energy deposited locally. The distance to interaction, type of interaction, and resulting properties of the particles due to the interaction all require random sampling. The resulting particle properties (of both primary and daughter particles) are stored in the stack. The process is repeated until no more particles are in the stack. Adapted from D. W. O. Rogers and A. F. Bielajew [11].

Figure 3.2: A diagram showing an incident electron passing by an atom. The impact parameter b is the distance from the nucleus to the trajectory of the approaching electron, and a is the classical atomic radius.

Figure 3.3: A sketch of the hypothetical paths of an electron using single scattering and a condensed history approach. The path for single scattering is not continuous as shown but is still made of discrete events. There are many interactions with single scattering and it would require a large amount of time. To approximate, the many small scattering events are grouped in large scattering events (the vertices) by using a multiple scattering theory for a condensed history. The large numbers of small interactions are elastic or semi-elastic and are approximated in the condensed history technique by a continuous energy loss along the electron's path; this energy is deposited locally about the track.

Figure 3.4: Diagram of a phase space. When the particle (green line) passes through the circular plane (blue), the energy, the position (radius and angle), and the direction (given by two angles) are recorded.

Figure 5.1: Novalis Classic 6 MV linac at the Tom Baker Cancer Centre.

Figure 5.2: Diagram of the MLC leaves on the Novalis. The outer leaves, in green, on both banks have a width of 5.5 mm at isocenter, the red leaves are thinner, with 4.5 mm width at isocenter, and the yellow leaves are 3 mm wide at isocenter. Each leaf can move in and out of the field (left-right in the diagram).

Figure 5.3: Two banks of MLCs on the Novalis Classic 6 MV. The leaves are all in a closed position. The view is looking up towards the source.

Figure 5.4: Cross-sectional area of one of the MLC banks on the Novalis linac. The tongue-and-groove design can be seen for each leaf. Red, green, and yellow indicate a projected leaf width of 5.5, 4.5, and 3 mm. A triangle of a given colour is the circled leaf of that colour flipped 180° (or vice versa).

Figure 5.5: Image of one leaf end. The white lines section each part of the leaf according to width. Each of the leaf widths was measured and used in the MC model. The widths can be found in Table 5.1.

Figure 5.6: Image of one leaf on its side (the leaf would move in and out of the field horizontally). The leaf end in the radiation field is the left hand side. Note the leaf end has three different straight angles to mimic a curve. The yellow lines indicate the separation of these straight sections on the left hand side.

Figure 5.7: Wellhöfer Blue Phantom system. The tank is filled with water and the detector is placed on the black holder as indicated in the image. The position of the detector can be controlled remotely and moved in predetermined paths.

Figure 5.8: Left – Diagram of the positions of the MLC for one of the measured fields (from the iPlan commissioning report). Three rectangular openings are made by closing two leaves in the upper middle portion and three leaves in the lower middle. Right – Image of the same MLC positions rendered in GATE. The point of view is looking up the x-ray beam, into the linac.

Figure 5.9: CAD model of the linac components modeled in GATE.

Figure 5.10: Components from the CAD model rendered in GATE. The green lines represent photons and the red lines electrons.

Figure 5.11: Diagram of the flattening filter used in the simulation. It consists of two conical sections to form an overall cone shape.

Figure 6.1: Plots of depth dose distributions for square field sizes of 98x98, 60x60, 30x30, 18x18, and 6x6 mm². Plots display measured data (black line), MC data (blue dots), and the gamma at each point (green) with the gamma value boundary of 1 (red line). Dose is normalised to maximum. Some error bars are too small to be seen.

Figure 6.2: Plots of the dose profiles for square field sizes of 98x98, 60x60, 30x30, 18x18, and 6x6 mm² for depths of 15 mm and 100 mm, with the exception of the 98x98 mm² field size, which is at a depth of 16 mm instead of 15 mm and includes a depth of 200 mm. The plots display the measured data (black line), MC data (blue dots), and gamma comparison of each point (green) with the gamma boundary of one (red line). The gamma criteria is 3%/1mm. The dose is normalised to the central axis. Some error bars are too small to see in the plots.

Figure 6.3: Diagonal profiles at depths of 16 mm and 100 mm (top and bottom, respectively) at a field size of 98x98 mm². The blue dots are MC data and the black line is measured data. The green triangles are the gamma values for each point with a 3%/1mm criteria.

Figure 6.4: Plots comparing dose profiles at a depth of 15 mm (top) and 100 mm (bottom), in the transverse irregular field shape. Blue dots are MC data, the black line is measured data, and green is the gamma comparison (3%/1mm criteria).

Figure 6.5: Comparison of relative output factors from MC and measured data. The MC dose decreases compared to the measured data with decreasing field size. The maximum discrepancy, 17%, is seen at the 6x6 mm² field size. The measured data is taken from the CAX of the profile measurements.

Figure 6.6: Plot of the MC calculated linac spectrum of the Novalis compared to a MC spectrum of the Elekta Precise [17]. The vertical axis is in arbitrary units and the curve is normalised.
Symbol Definition
CAX Central axis
DBS Directional bremsstrahlung splitting
EGS Electron gamma shower
Geant4 Geometry and Tracking 4
IMRT Intensity modulated radiation therapy
Linac Linear accelerator
MC Monte Carlo
SRS Stereotactic radiosurgery
SSD Source-to-surface distance
Chapter One: Introduction
This research project will use the GATE Monte Carlo (MC) simulation
platform to model the
Novalis Classic 6 MV linear accelerator (linac). GATE is an acronym
for Geant4 Application for
Tomographic Emission. The GATE software is built on the Geant4
toolkit. Only a handful
of linacs have been modeled with GATE since its release in 2006.
This research will investigate
GATE’s capabilities for small field applications and provide a
validated linac model to extend to
multiple research projects. Also, the validated model will have the
potential to provide an
independent dose calculation for evaluating treatment plans. The
first few chapters provide
background material on radiation therapy, basic linac components
and x-ray production, and
commonly used MC codes with a focus on Geant4 and GATE.
1.1 Radiation therapy
Ionizing radiation has the ability to damage cells in living tissue
which can lead to cell death.
This attribute is exploited in clinical treatments called radiation
therapy or radiotherapy. The
most common use is in radiation oncology where the objective is to
use ionizing radiation to
damage, and ultimately kill cancerous cells in the patient.
Approximately 50% of all cases of
cancer treatment should undergo radiation therapy [1].
One type of radiation therapy is called external beam radiation
therapy (EBRT). In EBRT the
radiation source is external to the patient and is most often
created and delivered by a medical
linac. In order to reach the intended treatment site in the
patient, the radiation must pass through
and irradiate healthy normal tissues. In many situations EBRT
planning seeks to maximize the
dose to cancer cells while minimizing the dose to healthy normal
tissue.
For an EBRT treatment, the dose is usually divided into fractions.
The amount of dose and the
fractionation scheme are different depending on the patient, tumor
size, site, and diagnosis. A
Gray (Gy) is a unit of dose equal to energy (joules) per unit mass (kilograms). An example of a radiation prescription would be 60 Gy over 30 fractions (that is, 2 Gy per treatment and 30 treatments total) to a target volume defined by the radiation oncologist (RO), treating once a day.
important to accurately deliver dose to the target, small errors in
patient setup will affect the
accuracy of the treatment. These setup errors are compensated for by adding a margin around the target volume to ensure adequate dose coverage of the target.
However, the wider margin will
increase the dose to surrounding healthy normal tissue. Also, when
the treatment is delivered
over many fractions, these errors tend to average out from the
variations in each setup. A special
technique of radiation therapy is Stereotactic Radiosurgery (SRS):
SRS is a high dose single
fraction treatment to a small lesion, most commonly used for
tumours in the brain. Given such a
high dose in one fraction, accurate tumour localization and patient setup are absolutely required, as there are smaller margins for random error. Another type of stereotactic treatment is stereotactic body radiation therapy (SBRT); typically this is a high dose in three to five fractions, delivered to sites outside the central nervous system/brain.
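The fractionation arithmetic above is simple enough to sketch in a few lines. This is an illustrative helper only; the function names and the assumption of equal fractions are mine, not from the thesis:

```python
def dose_per_fraction(total_dose_gy: float, n_fractions: int) -> float:
    """Dose delivered in each (equal) fraction, in Gy."""
    return total_dose_gy / n_fractions

def absorbed_dose_gy(energy_j: float, mass_kg: float) -> float:
    """Gray = joules of deposited energy per kilogram of matter."""
    return energy_j / mass_kg

# The example prescription from the text: 60 Gy in 30 fractions.
print(dose_per_fraction(60.0, 30))  # 2.0 Gy per treatment
```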
Therapeutic radiation planning requires an accurate method to
predict the dose delivered to the
patient. One of the most accurate methods used is Monte Carlo
simulations of radiation
transport. This research used the GATE (Geant4 application for
tomographic emission)
simulation platform to model the Novalis linac – a dedicated SRS
radiation treatment system.
Background details on linear accelerators, MC simulation, and MC
codes will be given followed
by information on previous research in radiation therapy using GATE
and the methods used for
this work.
2.1 X-ray production
A linear accelerator (linac) is the most common way to produce
x-rays for external beam
radiation therapy. It produces x-rays through bremsstrahlung.
Electrons are accelerated into a
high atomic number target causing the electrons to collide with the
atoms and as a result,
produce photons. When the electrons undergo a high acceleration (or
deceleration in this case),
energy is radiated as light, more specifically x-rays for the
accelerating energies utilised. The x-
ray beam produced via bremsstrahlung is polyenergetic. An
unfiltered spectrum in a vacuum
would be a straight line with a negative slope, from a maximum at zero (or near zero) energy down to zero at the maximum energy of the incident electrons (see Figure 2.1). The unfiltered
minimum at the maximum energy of the incident electron (see Figure
2.1). The unfiltered
spectrum would also contain characteristic x-rays from fluorescence
of the target material. The
photons produced cannot have an energy greater than the electrons
that produce them.
Practically, the spectrum is filtered through the target
(self-attenuation). The bremsstrahlung
photons created must traverse the remaining target material causing
lower energy photons to be
attenuated (the lower the energy of the photon, the higher
probability of attenuation). This leads
to a spectrum in the shape of a curve that is peaked at
approximately 1/3 of the incident energy
of the electron beam for keV electrons and peaked approximately at
1/6 of the incident energy
for MeV electrons. In practice, the x-ray spectrum is rarely characterised directly; instead, dosimetric measurements are used to characterise the beam indirectly – this is discussed in section 2.3.
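The qualitative behaviour described above can be illustrated with a toy numerical model: a Kramers-like linear unfiltered spectrum multiplied by a photoelectric-like attenuation that removes low-energy photons. All constants here are arbitrary assumptions chosen only to show the shape change, not physical data:

```python
import numpy as np

E_max = 100.0                      # keV electron beam (illustrative)
E = np.linspace(1.0, E_max, 500)   # photon energies, keV

# Unfiltered Kramers-like spectrum: straight line with negative slope,
# maximum at (near) zero energy, zero at the incident electron energy.
unfiltered = E_max - E

# Self-attenuation in the target: low-energy photons are preferentially
# absorbed. The 1/E^3 dependence mimics photoelectric absorption; the
# numerator lumps together an arbitrary coefficient and path length.
attenuation = np.exp(-2.0e4 / E**3)
filtered = unfiltered * attenuation

# Filtering moves the spectral peak from (near) zero energy up to an
# intermediate energy, giving the curved shape described in the text.
peak_energy = E[np.argmax(filtered)]
print(peak_energy)
```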
Figure 2.1: A plot of an x-ray spectrum produced from a keV
electron beam. The
unfiltered spectrum is shown by a straight line, reaching a maximum
energy of the
electrons used to produce the x-rays. The spectrum is filtered
through the target causing
the lower energy photons to be absorbed. Both axes are in arbitrary
units and use a linear
scale.
2.2 Linear accelerator design
In a MC simulation of a linac, there are key components to be
modeled in order to develop an
accurate model. These key components will be discussed here. An
electron gun is required to
produce free electrons and then these electrons are accelerated
into a waveguide, which as a
whole is called the accelerating structure. The electrons are
accelerated to energies ranging from
6 MeV to 20 MeV. Since the energy from the x-ray beam is a
continuous spectrum and not
mono-energetic, the energy is reported as 6 MV for a 6 MeV electron
beam striking the target.
This nomenclature is adopted from kV x-rays where the beam is
reported based on the potential
used to accelerate the electrons. A magnetron is used to produce the microwaves in the waveguide. Modern linacs will instead use an RF driver and a klystron to amplify the EM waves produced by the RF driver, achieving a higher energy.
these main components of a linac
is shown in Figure 2.2. For some linacs, a bending magnet is used
to change the direction
(usually through 270°) and focus the electron beam on the high Z target. Some linacs will house the entire electron gun and waveguide vertically above the
isocenter with no bending magnet.
The entire gantry can rotate about the isocenter and the linac
head, where the x-ray target is, can
rotate as well. The area the electron beam strikes on the target is
called the spot size. The
electron spot size will affect the penumbra of the dose profile the
most. The penumbra is broken
into two parts, the geometric penumbra and the radiation penumbra.
The geometric penumbra is
due to the spot size as shown in Figure 2.3.
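The geometric penumbra follows from similar triangles between the finite spot and the collimator edge. A minimal sketch, using the standard similar-triangles relation and illustrative distances (not Novalis specifications):

```python
def geometric_penumbra_mm(spot_mm: float, scd_mm: float,
                          ssd_mm: float, depth_mm: float) -> float:
    """Geometric penumbra width at a plane depth_mm below the surface.

    spot_mm : width of the electron spot on the target (the 'source')
    scd_mm  : source-to-collimator distance
    ssd_mm  : source-to-surface distance
    Similar triangles give: penumbra = spot * (SSD + d - SCD) / SCD.
    """
    return spot_mm * (ssd_mm + depth_mm - scd_mm) / scd_mm

# A narrower spot gives a sharper penumbra, as Figure 2.3 describes.
for spot in (1.0, 2.0, 4.0):
    width = geometric_penumbra_mm(spot, 300.0, 1000.0, 15.0)
    print(spot, "mm spot ->", round(width, 2), "mm penumbra")
```

The penumbra scales linearly with the spot size, which is why the electron spot size is the dominant influence on the penumbra in the MC model.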
Figure 2.2: Diagram of the main linac components.
Figure 2.3: Diagram of the geometric penumbra due to the spot size. A narrower spot size will leave a smaller penumbra region, while a large spot size will cause a large penumbra.
Figure 2.4: Diagram of the main linac components and a water
phantom. The electron
beam (red arrow) travels from the top to the bottom and produces
photons (green lines) via
Bremsstrahlung on the Tungsten and Copper target. The cone
extending to the water
phantom represents the edge of the x-ray beam.
Figure 2.4 shows a cross-sectional diagram of the linac components.
The components seen in this
figure are critical to model correctly for our simulation: the
target, primary collimator, flattening
filter, monitor chamber, secondary collimators, and the multileaf
collimator (MLC). The target
usually consists of a combination of Tungsten and Copper. The
Copper is included to dissipate
heat from the target and to filter out low energy x-rays, in particular the characteristic x-rays, which are well below the desired beam energy. After the target, the primary collimator
shapes the beam into a cone,
absorbing photons that will reach beyond the field size limits at
treatment depth. The term
collimator in this context refers to an absorbing material that
will block the beam.
The x-ray production via Bremsstrahlung at these energies is
radially symmetric and forward
directed. More photons are produced along θ = 0, where θ is the angle from the axis along the direction of the incident electron beam. There is a sharp falloff in the number of photons as θ increases. Looking through a plane that is perpendicular
to the direction of the
beam, this forward peak creates a high fluence along the central
axis that decreases radially
outward. A plot of bremsstrahlung production from a few electron
energies and their angular
distributions is shown in Figure 2.5. This forward peaking makes the energy fluence over the field area inhomogeneous, which complicates treatment planning. For conventional 3D radiation
conventional 3D radiation
therapy, it is more desirable to have a flat energy fluence across
the whole field at the depth of
the treatment target resulting in a flat dose profile. To produce
such a beam, a flattening filter is
introduced after the primary collimator. The flattening filter is
an approximately conical piece of
metal placed in the path of the beam and aligned with the central
axis. Figure 2.6 shows an
example of a typical flattening filter – the one in the Novalis
linac being modeled was not
available. The central axis will have a larger path of material to
attenuate the beam, which
decreases radially outward. More photons are attenuated along the
central axis compared to the
outer edges to produce a uniform energy fluence across the whole
field. A side effect of the
flattening filter is the beam hardening effect and decrease in the
dose rate. This effect is the
increase of the average energy of the beam due to preferential
attenuation of low energy photons.
A lower energy photon has a higher probability of being stopped in
a specific length of a
material. The flattening filter geometry is important to have
correctly modeled, however if the
exact geometry is not known, a simple cone or combination of a few
conical shapes can be used
and fitted to match measurements, which will be shown in section
5.6.
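The flattening action can be sketched numerically: a forward-peaked fluence is attenuated through a cone whose thickness decreases with radius. The Gaussian fluence, cone dimensions, and attenuation coefficient below are all illustrative assumptions, not the Novalis filter geometry:

```python
import numpy as np

r = np.linspace(0.0, 50.0, 200)        # radial distance from CAX, mm
incident = np.exp(-(r / 40.0) ** 2)    # forward-peaked fluence (toy)

# Conical filter: thickest on the central axis, tapering to zero at its
# outer radius, so attenuation decreases radially outward.
t0, R = 20.0, 50.0                     # cone height and radius, mm
thickness = np.clip(t0 * (1.0 - r / R), 0.0, None)
mu = 0.05                              # attenuation coefficient, 1/mm (toy)
transmitted = incident * np.exp(-mu * thickness)

def relative_spread(f):
    """Fractional variation of a profile across the field."""
    return (f.max() - f.min()) / f.max()

# The transmitted fluence varies less across the field than the
# incident fluence: the filter has flattened the beam.
print(relative_spread(incident), relative_spread(transmitted))
```

Fitting the cone dimensions against measured profiles, as the thesis does in section 5.6, amounts to adjusting parameters like t0 and R until the simulated profile matches measurement.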
Figure 2.5: A plot showing the angular distribution of x-ray
production via
bremsstrahlung. The measure of production is given in barns per
steradian. The radial line
indicates the number of barns per steradian at a particular angle from the direction of the incident electrons. This trend continues when looking at higher electron energies. The plot is adapted from Attix, F. H. [2].
Figure 2.6: Image shows an example of a Copper flattening filter
(one specific to the
Novalis linac was not available). The total conical shape is made
up of smaller conical
sections of varying heights and angles. The filter is designed to
produce a flat dose profile
at treatment depth. The filter will also harden the beam – having a
greater effect along the
central axis.
The output of the linac is monitored by two ionization chambers, placed one after the other, known as monitor chambers. The chambers are placed in the beam path after the flattening filter; the pair is redundant so that one serves as a backup in case the other fails.
composition and size of these
chambers vary depending on the manufacturer. The attenuation from
the monitor chambers is
very small compared to the attenuation of the flattening
filter.
In order to improve conformality to the treatment target, photon
beams are shaped to match the
target. Jaws are pieces of Tungsten alloy, placed in the path of
the beam to produce rectangular
field sizes. The Novalis Classic linac modeled in this thesis has
four jaws: two for each direction
labelled X1, X2, Y1, and Y2. These jaws are made of Tungsten alloy
with 95% W, 3.5% Ni, and
1.5% Cu. To ensure there is uniform attenuation of the radiation
beam outside the treatment
field, the faces of the jaw are aligned to match the divergence of
the radiation beam. This will
create a sharper penumbra, which will improve dose conformality to
the target. The upper jaws
closest to the target are the Y jaws. The divergence is matched by
moving the jaws along an arc.
The X jaws move linearly in and out of the field and rotate to have
the face match the
divergence. With the X and Y jaws the beam can be collimated to any
rectangular field size.
In addition to the jaws, the MLC (seen in Figure 2.7) is used to
collimate the beam further to be
able to create a variety of shapes other than rectangles. It
consists of many thin rectangular slabs
of Tungsten alloy adjacent to one another. Each slab is called a
leaf and can slide in and out of
the field to an accuracy of 1 to 2 mm. Leaf width is typically
given as the projected width at
isocenter. For example, the Novalis linac has widths of 5.5, 4.5,
and 3 mm for the MLC system.
The leaf positioning accuracy for the MLC system on the Novalis
linac is 0.5 mm instead of the
standard 1 to 2 mm. A highly conformal radiation beam can be
delivered to the target using
many leaves. Leakage between leaves is reduced by a
tongue-and-groove design to ensure there
is some material between adjacent leaves. Each manufacturer has a
different design for their
MLC.
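Since leaf widths are quoted as projected to the isocenter plane, the physical width at the leaf plane follows from similar triangles. The source-to-leaf and source-to-axis distances below are illustrative assumptions, not Novalis specifications; only the projected widths (5.5, 4.5, 3 mm) come from the text:

```python
SAD_MM = 1000.0            # source-to-axis (isocenter) distance, typical
SOURCE_TO_LEAF_MM = 400.0  # assumed distance from target to leaf plane

def physical_leaf_width_mm(projected_mm: float) -> float:
    """Physical width at the leaf plane from the width projected to
    isocenter, by similar triangles from the source."""
    return projected_mm * SOURCE_TO_LEAF_MM / SAD_MM

# Projected widths quoted in the text for the Novalis MLC.
for w in (5.5, 4.5, 3.0):
    print(w, "mm at isocenter ->", physical_leaf_width_mm(w), "mm at the leaf plane")
```

This scaling is why the MC model needs the measured physical leaf dimensions (Table 5.1) rather than the projected widths alone.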
Figure 2.7: Leaf end view of one bank of the micro MLC system on
the Novalis. The width
of each leaf varies along its height to produce a tongue and
groove. Each leaf can move
separately from the other in and out of the field (in and out of
the page). At this viewing
angle the x-ray beam would be incident from the bottom. Looking at
the vertical in the
image, the leaves are at an angle to match the divergence of the
incident radiation beam.
All of the main components mentioned in this section must be
accurately modeled in a Monte
Carlo simulation. Clinically, the most important information about
a linac is the dosimetric
behaviour of the radiation beam. Different types of measurements
are used to characterise this
behaviour and it is important to use this data to create a model
that matches these measurements.
2.3 Radiological Measurements
A variety of dosimeters are available to measure radiation dose.
Two of the most common ones
used to measure the output of linacs are ion chambers and diodes.
Ion chambers measure the
ionizations created from radiation passing through a volume of gas.
As ions are produced they
are accelerated toward a cathode or anode due to a potential. The
charge collected can be related
to the number of ionizations and in turn to the dose. Diode
detectors are semiconductors that
detect charge carriers created from radiation that passes through
the detector.
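The charge-to-dose relation behind ion chamber dosimetry can be sketched as follows. The chamber volume and collected charge are illustrative, and real clinical dosimetry applies correction factors (temperature, pressure, recombination, calibration) that are omitted here:

```python
W_OVER_E = 33.97              # J/C: mean energy per unit charge to ionize dry air
AIR_DENSITY_G_CM3 = 1.205e-3  # dry air near 20 C and 101.3 kPa

def dose_to_air_gy(charge_c: float, volume_cm3: float) -> float:
    """Dose to the air in the cavity: (charge / air mass) * (W/e)."""
    mass_kg = AIR_DENSITY_G_CM3 * volume_cm3 * 1e-3  # g -> kg
    return (charge_c / mass_kg) * W_OVER_E

# e.g. 5 nC collected in a 0.13 cm^3 cavity (a CC13-sized volume)
print(dose_to_air_gy(5e-9, 0.13), "Gy to air")
```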
The x-ray beam of a linac is never fully characterised. The output
rate is too high for any direct
measurement method used for spectroscopy, although some research
has been done in the past to
measure and/or reconstruct the spectrum of a linac [3,4,5,6]. Since
these spectral measurements have proven difficult and, clinically speaking, only the dose deposition is important, the linac can instead be characterised by how dose is deposited in a specific material. Water is chosen as a standard in radiological measurements because it is abundant and easy to obtain, and its radiological properties are close to those of most of the human body, specifically soft tissue. A clinical standard is
a large rectangular tank filled with distilled water placed such
that the surface of the water is 100
cm from the target or source. Such a setup is known as 100 cm
source-to-surface distance or
SSD.
Two types of relative measurements are particularly important: how
the dose changes with depth
along the central axis and how the dose changes along a line, in a
plane perpendicular to the
central axis. These two measurements are called a depth dose curve,
referred to as a percent
depth dose curve (PDD), and a dose profile, respectively. The PDD
is usually normalised to the
maximum dose along the central axis (CAX). Figure 2.8 shows a plot
of a PDD. The depth at which the maximum dose occurs is called $d_{max}$. Dose profiles are broken into inline, crossline, or diagonal (a line along x or y or diagonally across the field) and are always reported
at a specific depth in the water phantom. The dose profile is
usually normalised to the dose on
the CAX. Since the dose profile varies with depth, measurements must be taken at various depths in the phantom. The energy spectrum changes with depth, becoming harder (the average energy of the beam increases) and, as a result,
the relative dose profiles change shape with depth. The field size
affects the amount of scattered
radiation that contributes to dose, increasing with field size, and
therefore causing surface dose to
increase for PDDs. Both of these measurements vary depending on the
field size used. PDD and
dose profiles are the two types of measurements needed to
characterize the beam. These types of
measurements are used in this work to ensure the MC model matches
the linac.
Another factor is the relative output factor (ROF). This is the
ratio of the dose along the CAX, at
a specific depth, for a given square field size, over the dose
under the same conditions in a
reference field size, normally taken to be 10×10 cm². The relative output factor gives a simple and quantitative method to compare the effects of field size on the
dose output. The ROF changes
with field size due to the scattering in the phantom and scattering
within the linac head from the
collimators and flattening filter.
Figure 2.8: Plot of a percent depth dose curve for a 6 MV linac in a 98×98 mm² square field
size. The measurement is performed in a water phantom with 100 cm
SSD using a CC13
ion chamber. The plot is normalised to the maximum dose value. The
curve characterizes
the dose deposition with depth along the central axis.
Finally, the dosimetric characteristics of the MLC need to be
measured. The leaf transmission is
a measure of how much of the radiation beam has travelled through a
leaf. MLC systems should
have <5% transmission through a leaf [7]. Despite the tongue and
groove design, there is still
less material in between leaves, allowing some radiation through;
this is called the interleaf
transmission. Another quantity of interest is the leaf end
transmission, which is the radiation that
leaks through the end where leaves of opposite banks are abutted
against one another. These can
be measured by looking at the dose deposited from a field shape
created with the MLC. A 2D
dose map can provide all the information, although 1D profiles
acquired along the 2D dose map
are useful as well.
All of these measurements are important to understand the behaviour
of the x-ray beam. These
measurements are used to plan treatments by extrapolating from the
measured doses to patient-specific circumstances. Monte Carlo simulation of particle
transport is one method used to
calculate dose using dosimetric measurements as a reference to
ensure the MC simulation
matches the real dose.
3.1 Overview of method
Monte Carlo is known to be the most accurate method to predict dose
deposition. MC calculation does not apply corrections to existing measurement data but instead develops the dose deposition from knowledge and data of the principal interactions of particles, such as attenuation coefficients and cross-section data. Very few MC calculation engines are employed in a clinical setting because of the prohibitive calculation time. This was a greater issue in the past due to the limited processing power of computers at the time. However, considering how rapidly computer technology has advanced and continues to advance, it is reasonable to predict that MC treatment planning will become the standard in clinical centres in the near future.
The Monte Carlo method uses random sampling of probability
distributions to solve problems.
Two main components required to utilize the Monte Carlo method are:
i) the ability to produce
random numbers or pseudo-random numbers that pass sufficient
randomness tests and ii) a
stochastic model of the problem in question. The problem that must
be solved can be inherently
stochastic (such as particle transport) or if it is deterministic,
a suitable stochastic model must be
constructed. Problems are not always one or the other but can be
both stochastic and
deterministic. A complete simulation of a naturally occurring
stochastic problem is known as an
analog MC simulation. Analog simulations can be partly non-analog
if desired, usually to reduce
the computation time.
True random numbers can be produced from natural phenomena such as radioactive decay and electronic noise, but these methods are cumbersome and undesirable. It is useful to be able to reproduce the same results or sequence of events in order to find any errors, which could be very difficult if the numbers were truly random. Instead, the preferred method is pseudo-random numbers. These numbers are produced by an algorithm and are reproducible, and they pass randomness tests sufficiently well to act as random numbers.
Extensive literature can be found on
computer implemented random number generators. The Mersenne Twister
is the random number
generator used in the GATE MC software, which is used for this
project [10].
The simplest MC case to look at is discrete probabilities. Let there exist probabilities $p_i$, $i = 1, \ldots, n$, such that $\sum_{i=1}^{n} p_i = 1$, where each probability represents an event. If a random number $r \in [0,1)$ lies in the interval $[\sum_{j=1}^{i-1} p_j, \sum_{j=1}^{i} p_j)$, then the event $i$ occurs. The key is the cumulative probability.
The method is more complicated for continuous probabilities. There exists a probability density function $f(x)$ over a range $[a, b]$, where $a < b$, which has the following properties:

i) $f(x) \geq 0$ for all $x \in [a, b]$;

ii) $\int_a^b f(x)\,\mathrm{d}x = 1$. (4.1)
Define the cumulative distribution $F(x) = \int_a^x f(x')\,\mathrm{d}x'$; note that $F(a) = 0$ and $F(b) = 1$. Since $F(x) \in [0,1]$, a random number $r \in [0,1]$ can be used to sample values of $x$ by equating $r$ with $F(x)$. The value of interest, though, is the value of $x$ that gives $F(x) = r$. This is the fundamental concept behind MC methods. How to sample the probability function for a value of $x$ differs depending on the functions $f$ and $F$. The most direct method is to invert the cumulative probability distribution so that

$x = F^{-1}(r)$. (4.2)
If inverting the distribution is too complicated, the rejection method can also be used to sample a distribution. If the probability distribution is finite over its range and has a global maximum, a new function is made by normalizing to the global maximum: $g(x) = f(x)/f(x_{\max})$, where $x_{\max}$ is the point where $f$ is maximal. The rejection method then requires two random numbers: the first, $r_1$, is used to find an $x$ value with $x = a + r_1(b - a)$; the second, $r_2$, tests the value found using $r_1$ by rejection: if $r_2 \leq g(x)$ then $x$ is accepted as the sampled value, otherwise it is rejected. The rejection method is not as efficient as the direct inversion method because two random numbers are required for one sample. However, if direct inversion is too complicated, it may be faster to use the rejection method. Both direct inversion, either through analytical or numerical methods, and the rejection method can be used together by factoring the probability distribution, assuming it equals the product of two distributions. With this technique, the more complicated characteristics of the function can be factored out, so that the rejection method is applied to that component and the direct inversion method to the other function.
3.2 Particle simulation
The stochastic nature of particle interactions makes them ideal to
model with Monte Carlo
methods. The random numbers generated are used to sample data
points from probability
distributions, such as cross sections and differential cross
sections that are derived from
measured particle data, which have been collected over the years
from many experiments. A key
component in particle simulation not seen in other applications of
MC methods is a system to
track geometries and particle position. This is also referred to as
ray tracing.
3.2.1 Photons
For a MC simulation of a photon there are four main steps where
random sampling occurs. i)
The first step is to determine the step size or the length of the
path travelled by a photon of a
given starting energy and direction. This is to determine how far
the photon travels before an
interaction occurs. ii) Once a step size is determined, the
particle is moved to the new position
(the step size in a straight line from the photon’s original
direction) and then the type of
interaction that occurs is determined. For a photon, the competing
interactions are Rayleigh
scattering, photoelectric effect, Compton scattering, and pair
production (both triplet and
double). Rayleigh scattering is an elastic scattering of photons
interacting with molecules or
atoms. Rayleigh scattering is not an important process in the
operating energy range of a 6 MV
linac and is dominated by the other processes. The photoelectric
effect is an interaction with the
electrons in molecules or atoms. All of the photon's energy is given
to an electron that is liberated
from the atom. The energy of the photon must be greater than the
binding energy of the electron
in the atom, allowing the electron to escape the atom. The
electrons in the K-shell have a higher
probability of being liberated. This leaves a hole in the
electronic structure of the atom
and electrons from higher energy levels will fill the hole (the
point of lowest energy). This
cascade is called atomic relaxation and can result in multiple
photons being emitted, known as x-
ray fluorescence, or electrons to be emitted, known as Auger
electrons. Compton scattering is a
process where the incident photon gives a significant amount of
energy to an electron in the atom
causing both the photon to be scattered, leaving with a different
energy, and an electron to be
emitted from the atom. Pair production is a process where a high
energy photon, greater than
1.022 MeV, interacts with the electric field of the nucleus of an
atom to produce an electron and
positron (or interact with the electric field of an electron,
ejecting the electron and producing the
positron and electron pair). The electron and positron share the
energy of the photon with 511
keV needed for each particle (the rest energy) and the remaining
energy used as kinetic energy.
A cross section can be interpreted as a probability of interaction
and is given in units of area. It
can be thought of as the effective target area for the incident
particle; the bigger the target, the
more likely the interaction is to occur. The cross section for each
type of interaction depends on
the energy of the photon and material where the interaction occurs.
The cross sections are
weighted by the total cross section (which is the sum of cross
sections from all possible
interactions) giving each a statistical likelihood compared to one
another. iii) After a type of
interaction is determined, the energy and angle of deflection of
the original photon and every
daughter particle must be determined. iv) The original particle is followed in this manner until it loses all of its energy or leaves the volume of interest, and then the
daughter particles are modeled in the
same fashion. Once the parent and daughter particles are all
finished another starting photon is
modeled. This process is repeated for millions of starting or
primary particles, also called
histories. Figure 3.1 shows a simple flowchart for a MC simulation
of photon transport.
Figure 3.1: A flowchart of the MC simulation of photon transport.
The initial starting
particle or primary is picked off the stack. If the particle is
ever below a cutoff threshold,
it will be terminated and the energy deposited locally. The
distance to interaction, type of
interaction, and resulting properties of the particles due to the
interaction all require
random sampling. The resulting particle properties (of both primary
and daughter
particles) are stored in the stack. The process is repeated until
no more particles are in the
stack. Adapted from D. W. O. Rogers and A. F. Bielajew [11].
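The flowchart logic can be mirrored in a toy stack-driven transport loop. Every number below (attenuation coefficient, interaction probabilities, the energy split, the cutoff) is invented for illustration and is not real cross-section or dosimetric data:

```python
import math
import random

MU = 0.07          # linear attenuation coefficient, 1/mm (invented value)
CUTOFF_KEV = 10.0  # particles below this energy are terminated, energy deposited locally
DEPTH_MM = 300.0   # extent of the volume of interest

def transport(primaries):
    """Follow every particle on the stack until all have been terminated,
    absorbed, or have left the volume; return the total deposited energy."""
    stack = list(primaries)  # each entry: (energy_keV, depth_mm)
    deposited = 0.0
    while stack:
        energy, z = stack.pop()
        if energy < CUTOFF_KEV:                      # below cutoff: deposit locally
            deposited += energy
            continue
        z += -math.log(1.0 - random.random()) / MU   # sample distance to next interaction
        if z > DEPTH_MM:                             # particle left the volume of interest
            continue
        if random.random() < 0.7:                    # "scatter-like" branch (invented probability)
            lost = energy * random.random()
            deposited += 0.5 * lost                  # toy split: half the lost energy is local dose
            stack.append((0.5 * lost, z))            # the rest goes to a daughter particle
            stack.append((energy - lost, z))         # the scattered photon continues
        else:                                        # "photoelectric-like" branch: full absorption
            deposited += energy
    return deposited
```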
To determine the step size of an interaction, data about the linear attenuation coefficient is used. The attenuation of photons in a given material, for a given energy, follows

$N = N_0 e^{-\mu x}$, (4.3)

where $N$ is the number of particles after travelling a distance $x$ and $N_0$ is the number of particles before attenuation. $\mu$ is the linear attenuation coefficient, which has units of inverse distance. The equation can be rearranged to

$x = -\frac{1}{\mu} \ln\!\left(\frac{N}{N_0}\right)$. (4.4)

Note that the ratio $N/N_0$ will always be between 0 and 1. The quantity $1/\mu$ is known as the mean free path. The mean free path is the average distance a photon travels in a material between interactions.
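Equation (4.4) can be sampled directly by replacing the ratio N/N0 with a uniform random number; averaging many sampled step lengths should then recover the mean free path 1/µ. The value of µ below is illustrative only:

```python
import math
import random

def sample_step(mu):
    """Sample a photon path length from eq. (4.4): x = -(1/mu) ln(N/N0),
    with the ratio N/N0 replaced by a uniform random number in (0, 1].
    The mean of the sampled lengths is the mean free path 1/mu."""
    return -math.log(1.0 - random.random()) / mu

mu = 0.0707  # illustrative attenuation coefficient in 1/mm
steps = [sample_step(mu) for _ in range(100_000)]
mean_free_path = sum(steps) / len(steps)  # should be close to 1/mu, about 14.1 mm
```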
Once the photon has been moved by the determined step size to the interaction site, the type of interaction that occurs must be determined. The total cross section for a photon interaction in a specific material, at a given energy, is

$\sigma_{tot} = \sigma_R + \sigma_C + \sigma_{PE} + \sigma_{PP}$, (4.5)

where $\sigma_R$, $\sigma_C$, $\sigma_{PE}$, and $\sigma_{PP}$ are the Rayleigh, Compton, photoelectric effect, and pair production cross sections, respectively. The probability of each interaction is found by dividing its cross section by the total:

$p_R = \sigma_R/\sigma_{tot}, \quad p_C = \sigma_C/\sigma_{tot}, \quad p_{PE} = \sigma_{PE}/\sigma_{tot}, \quad p_{PP} = \sigma_{PP}/\sigma_{tot}$, (4.6)

where $p_R$, $p_C$, $p_{PE}$, and $p_{PP}$ are the Rayleigh, Compton, photoelectric effect, and pair production probabilities, respectively. If these probabilities are arranged into different intervals between 0 and 1 we get

$[0, p_R),\; [p_R, p_R + p_C),\; [p_R + p_C, p_R + p_C + p_{PE}),\; [p_R + p_C + p_{PE}, 1]$. (4.7)

Whichever interval a random number $r \in [0,1]$ lies in, that interaction is chosen to occur.
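The interval scheme of equations (4.5) to (4.7) amounts to the following sketch; the cross-section values are invented placeholders, not tabulated data:

```python
import random

def choose_interaction(cross_sections):
    """Choose an interaction type with probability sigma_i / sigma_tot,
    i.e. by placing each cross section in an interval of [0, sigma_tot)."""
    total = sum(cross_sections.values())
    r = random.random() * total
    cumulative = 0.0
    for name, sigma in cross_sections.items():
        cumulative += sigma
        if r < cumulative:
            return name
    return name  # floating-point guard: fall back to the last interaction

# Invented relative cross sections (placeholders, not evaluated data).
sigmas = {"rayleigh": 0.01, "compton": 0.95, "photoelectric": 0.02, "pair": 0.02}
```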
Once the type of interaction is chosen, the probability
distributions for resulting energy and
scattering angle are sampled to determine the primary particle and
any daughter particles and
their properties. Sampling methods come from first-principles equations (e.g. the Klein-Nishina formula) or
from evaluated data sets, or a mix of both. The data used is either
parameterised or can be used
directly with interpolation schemes. In most codes the particles
will repeat the whole process
until their energy becomes lower than a predefined cutoff energy,
resulting in the remaining
energy deposited locally as dose. Some codes will follow particles
down to zero range (or zero
energy).
3.2.2 Electrons
MC simulations of electron interactions are more complicated than those of photons. A charged particle
is far more likely to interact in a material when compared to a
neutral particle due to their
electromagnetic field. A charged particle undergoes near constant
interactions because of
Coulomb interactions from atoms. Additionally, electrons have
little mass causing them to
scatter more frequently at larger angles. Another issue is the
approximate continuous energy loss
of electrons; although interactions are stochastic, electrons
interact frequently and can be
approximated by a continuous interaction, with the exception of
large or catastrophic events.
Electron interactions can be broken into three categories: soft
collisions, hard collisions (or
knock-on), and nuclear Coulomb field interactions. Approximately
50% of an electron’s energy
loss is due to soft collisions. Soft collisions happen when the electron passes an atom at a distance large compared with the size of the atom. Consider the approximate radius of the atom to be $a$ (the classical radius, with the origin at the nucleus) and an incoming charged particle that passes at a distance $b$ (known as the impact parameter) from the nucleus (see Figure 3.2). If $b \gg a$, then the collision is considered soft. Soft collisions are the most probable interactions to occur.
Figure 3.2: A diagram showing an incident electron passing by an atom. The impact parameter $b$ is the distance from the nucleus to the trajectory of the approaching electron and $a$ is the classical radius.
Hard or knock-on collisions occur when $b \approx a$. Hard collisions are
considered to interact with an
individual electron in the atom, which can liberate an electron
from the atom; this secondary electron is sometimes referred to as a delta ray. The secondary
electron travels through the
matter in the same fashion as the primary. Although soft collisions
are capable of ionizing atoms,
only the outer electrons can be ejected. For hard collisions, the
inner shell electrons typically get
ejected from the atom. The atom is left in an excited state and
will relax into a lower energy
state. The lower energy state is achieved by electrons in the outer
shells “falling” into the “hole”
left by the liberated electron. When the atom relaxes, energy is
released via two competing
processes: x-ray fluorescence (energy released as photons) or the
Auger effect where an electron
is released from the atom instead of a photon.
Lastly, when $b \ll a$, the electron interacts with the field of the nucleus. The impinging electron is
either elastically scattered approximately 98% of the time or
undergoes inelastic scattering for
the remaining two percent. The inelastic scattering process
radiates energy via x-rays and is
known as Bremsstrahlung.
The most accurate way to model electron transport in a Monte Carlo
simulation is by modeling
each electron scattering event individually, no matter how small.
This is time consuming and
most small events do not have a large effect on the electron's trajectory. To speed up the calculation, most MC codes use a condensed history approach, treating many small interactions as one larger event.
Berger [12] in 1963 developed a classification scheme for how a
condensed history is
implemented; these are class I and class II. Class I algorithms
will treat all types of electron
collisions the same and transport the electron along a
predetermined energy-loss grid. The
secondary particles produced are accounted for afterwards. This allows
the calculation of the
probability distributions for each grid point to be done before
simulating the electron. In class II
algorithms, the distribution is calculated during the electron
simulation process. The soft
collisions are grouped together like the Class I scheme but the
hard collisions are simulated
explicitly. In other cases, during a grouped phase where the
electron is transported to the next
interaction point, the energy loss from soft collisions are
accounted for using the continuous
slowing down approximation (CSDA). The CSDA assumes that, since
electrons interact
frequently over even a small range, the energy loss can be modeled
as a deterministic quantity
rather than a stochastic one. Figure 3.3 illustrates the difference
between a single scattering and
the condensed history technique. The code uses a multiple
scattering theory for electrons in the
condensed history technique. The theory used differs depending on
the model. There are three commonly used multiple-scattering (MS) theories for electrons: Molière [13], Goudsmit-Saunderson [14], and Lewis [15]. Some implementations in software
will combine aspects from
each or add small corrections as well. Each of the three, or
variants of them, can be found in use
today for MC simulations. Most of the inaccuracies in MC
simulations are a result of the
approximations made in the MS approach.
Figure 3.3: A sketch of the hypothetical paths of an electron using
single scattering and a
condensed history approach. The path for single scattering is not
continuous as shown but
is still made of discrete events. There are many interactions with
single scattering and it
would require a large amount of time. To approximate, the many
small scattering events
are grouped in large scattering events (the vertices) by using a
multiple scattering theory
for a condensed history. The large numbers of small interactions
are elastic or semi-elastic
and are approximated in the condensed history technique by a
continuous energy loss
along the electron's path; this energy is deposited locally about the track.
3.2.3 General user parameters in a MC simulation
Most MC calculations will allow the user to set some general
parameters. There are some that
are common to all codes, including the one used in this thesis. The
first is a production threshold:
the user can set an energy for a primary particle that will
prohibit secondary particles from being
produced if the primary particle is below this energy. For example,
a 100 keV production
threshold set for photons will prohibit any secondary electrons from being produced once the photon's
energy is below 100 keV. Instead of producing a secondary particle,
the energy that would be
lost is deposited locally. These thresholds are set for each type
of particle and typically allow the
user to set different energies in different regions, for example in
the target and in the water
phantom. More details can be found in sections 4.3.1 for Geant4 and 5.4.1 for the settings used in GATE for this work.
Another parameter is an energy cutoff for the life of the particle.
This is set for a type of particle
and the region it is in. Once a particle’s energy is reduced to
this cut-off, the particle is
terminated and all the energy is deposited locally. This is
sometimes referred to as a range cut-
off as well. The user may also specify the maximum distance a
particle is allowed to travel
without interacting. This would be useful for looking at electron
dose in a very small region
relative to the range of the electron. Geant4, and therefore GATE,
tracks all particles down to zero energy. Details can be found in sections 4.3.1 and 5.4.1.
3.2.4 Variance reduction techniques
Variance reduction techniques (VRT) are methods to lower the
overall variance of the
simulation, usually with the goal of reducing the simulation time
while still achieving the same
results, without biasing the results of the simulation. Some VRTs
used in MC simulations of
photon/electron radiation therapy will be discussed here.
Particle splitting is the technique of artificially creating
multiple secondary particles from an
interaction where typically only one would be produced. An example
is Bremsstrahlung splitting.
For a single bremsstrahlung interaction from an electron striking a target, a single photon is produced. This photon would have a statistical weight of 1. If the splitting technique is applied with a splitting value of $N_{split}$, then $N_{split}$ photons are produced instead of one, but are sampled from the same distribution. The statistical weight of each new photon will be equally $1/N_{split}$.
Another technique is called Russian roulette. In Russian roulette, a survival probability $p_{surv}$ is assigned to a particle type (photons, electrons, etc.) and if a particle of this type is created during the simulation, a random number $r$ is generated to determine whether the particle survives ($r < p_{surv}$) or is killed. The surviving particle's statistical weight is adjusted by $1/p_{surv}$ so as not to bias the simulation. The user may want to eliminate particles that will not reach the region of interest or contribute to the simulation rather than simulate them all. The Russian roulette technique is usually combined with particle splitting.
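Both techniques can be sketched together; the particle representation and the numeric values below are illustrative only:

```python
import random

def split(particle, n_split):
    """Bremsstrahlung-style splitting: n_split copies, each with weight
    w / n_split, so the total statistical weight is unchanged."""
    return [{**particle, "weight": particle["weight"] / n_split} for _ in range(n_split)]

def russian_roulette(particle, p_survive):
    """Keep the particle with probability p_survive; survivors are
    re-weighted by 1 / p_survive so the simulation stays unbiased."""
    if random.random() < p_survive:
        survivor = dict(particle)
        survivor["weight"] = particle["weight"] / p_survive
        return survivor
    return None  # particle killed

photon = {"energy": 1.2, "weight": 1.0}
copies = split(photon, 100)  # 100 photons, each with weight 0.01
```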
Directional Bremsstrahlung splitting (DBS) is a combination of the splitting technique and Russian roulette. DBS uses a constant splitting factor of $N_{split}$ applied to Bremsstrahlung photons. If the photons are directed towards the region of interest, then they are transported as normal. If they are not, Russian roulette is applied to the particles with a survival probability of $1/N_{split}$. The photons directed towards the region of interest will have a weight of $1/N_{split}$ and surviving photons directed away will have a weight of 1. Other VRTs exist as well, such as range rejection, cross-section enhancement, interaction forcing, Woodcock tracking, correlated sampling, and more [16]. GATE allows users to use the DBS VRT, and it has been shown to shorten calculation time without biasing results [17].
3.2.5 Phase space
A phase space is a method to record particle properties in a
simulation. A phase space is a user-defined plane or volume in the simulation which records the
properties of all particles that pass
through the plane or volume. Some codes will allow the user to
specify the type of particle to be recorded as well as a range of energies. Typically a plane is chosen whose normal aligns with the beam direction along the CAX. When a particle crosses this plane, the particle type, energy, position, and direction (usually given as two angles) are stored. Figure 3.4 is a diagram depicting a particle's properties as it travels through a phase space. A phase space can be used as a particle source in a simulation. The target, flattening filter, primary collimator, and monitor chambers are independent of patient treatments, so a phase space placed below these structures but above the jaws can save time in future simulations. A phase space file was not used for this work and would be the next step taken to shorten the simulation time and increase the efficiency.
Once the parameters are adjusted to ensure the MC data matches
measurements, a phase space
file can be created for subsequent simulations.
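As a sketch only, a phase-space record can be thought of as a table with one row per particle crossing the scoring plane. The field names below are invented for illustration; real phase-space formats (such as the IAEA format) define their own binary layouts:

```python
import csv
import io

# Minimal phase-space record: one row per particle crossing the plane.
FIELDS = ["particle", "energy_MeV", "x_mm", "y_mm", "dir_theta", "dir_phi"]

buffer = io.StringIO()  # stands in for a file on disk
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({"particle": "photon", "energy_MeV": 1.17,
                 "x_mm": -2.3, "y_mm": 0.8, "dir_theta": 0.02, "dir_phi": 1.6})

# Re-reading the records lets the file act as a particle source later.
buffer.seek(0)
records = list(csv.DictReader(buffer))
```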
Figure 3.4: Diagram of a phase space. When the particle (green line) passes through the circular scoring plane (blue), its energy, its position in the plane (given as a radius and an angle), and its direction (given as two angles) are recorded.
3.2.6 Statistics in MC simulation
A MC simulation follows each primary particle individually until
the primary particle and its
daughter particles deposit all of their energy (the primary
particle can be referred to as the parent
particle in this context). One event alone will not be
representative of reality but only one of the
possible outcomes for a single particle. To produce results that are consistent with reality, a large number of primary particles is needed. Monte Carlo methods use the law of large numbers to produce results that converge to the real expected value. In the limit where $N \to \infty$, where $N$ is the number of particles simulated, the calculated result converges to the expected value. It is not necessary to simulate an infinite number; a sufficiently large number will approach the expected value. Regarding dose calculation in a medium, the MC simulation accumulates the dose deposited from the many particles simulated, but only a fraction of the total number of particles simulated may actually reach the target.
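The convergence described above can be demonstrated with a toy "dose per history" distribution (an exponential with true mean 1.0, chosen purely for illustration): the estimate approaches the expected value as the number of histories grows, with the statistical uncertainty shrinking roughly as 1/√N:

```python
import math
import random

def estimate_mean_dose(n):
    """Average n samples of a toy dose-per-history distribution
    (exponential, true mean 1.0). Illustrates the law of large
    numbers; the distribution itself is invented for this sketch."""
    total = sum(-math.log(1.0 - random.random()) for _ in range(n))
    return total / n

# The estimate tightens around 1.0 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, estimate_mean_dose(n))
```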
4.1 EGSnrc
The EGSnrc code, developed by I. Kawrakow at the National Research Council of Canada (NRC) [18], is an improved version of the EGS4 code. EGS is an acronym for electron gamma shower, and the code was designed to simulate electron and photon interactions. All EGS codes have been written in FORTRAN and/or FORTRAN-based languages. The development of the EGS code started in 1974 by Ford et al. [19]. Version three, EGS3, was publicly released in 1978 and received widespread use in physics research [19]. EGS3 was designed for simulating 1 keV to 100 GeV photons and 1.5 MeV to 100 GeV charged particles. A notable change in EGS4 from EGS3 was the lower limit in particle energy: charged particles down to 10 keV were allowed.
Two extensions of EGSnrc are BEAMnrc and DOSXYZnrc [20]. BEAMnrc is a user code based on EGSnrc designed to model the linac head in radiation
therapy. BEAMnrc comes with
prebuilt geometries common to a linac such as the target, primary
collimator, flattening filter,
secondary collimators (jaws), and MLC. DOSXYZnrc is included in
BEAMnrc to score dose in
voxelized geometries, such as a voxelized CT image from a patient.
EGSnrc has been tested and
used extensively in radiation therapy research. It is often
considered the gold standard in Monte
Carlo simulations of radiation therapy systems. Details and history of the EGS code are extensively discussed in the EGS4 [21] and EGSnrc [22] documents.
4.2 MCNP
The Monte Carlo N-Particle (MCNP) code has gone through a number of
iterations. MCNP can
be traced directly to the original development of MC methods in
neutron transport done by Von
Neumann and Ulam during the Manhattan Project [23]. The code was
developed at the Los
Alamos National Laboratory to be able to model radiation transport
in nuclear reactors. Despite
the original focus on nuclear reactors, it is a general-purpose code and has been applied to many problems. MCNP5 deals solely with nuclear processes, modeling neutrons, photons, and electrons, whereas MCNPX (where the X stands for extended) is capable of dealing with more types of particles and interactions beyond nuclear processes. MCNP6 is the latest software and is a merger of MCNP5 and MCNPX. MCNP5 was shown to
have some differences
up to 20% when modeling electron dose transport, in a comparative
study with EGS and
measurement [24]. MCNP is known to be computationally intensive
compared to the EGS code.
4.3 Geant4
Geant4 (GEometry ANd Tracking) is an open source C++ toolkit
developed by CERN [25]. The
purpose was to develop a general purpose Monte Carlo code that is
continuously supported by
the research community. While the code is general purpose, the
focus was on high energy
particle physics rather than medical physics applications.
Currently it has been applied to high
energy physics, nuclear and accelerator physics, space science, and
medical physics. Many years
of research and validation of Geant4’s physics models make it a
very robust toolkit for particle
modeling. A notable project, the Large Hadron Collider, takes
advantage of Geant4 to design
detectors for experiments [26]. The original release of Geant4 was
in 2003 and there have been
many updates since then [27]. The most current version is Geant4
10.0 as of June 13, 2014, but
the release relevant to this research is 9.5 (released in December
2011) [28].
Geant4 has some advantages over the other Monte Carlo codes previously discussed in sections 4.1 and 4.2. It has the ability to model all types of particles in a
wide range of energies, down to eV
range and up to TeV. Extensive research has shown Geant4 to be well
validated for a wide range
of particles and particle energies [29,30,31,32,33]. Geant4 has the
most advanced geometry
modeling allowing complex geometries to be used in the simulation
and is written with the
object-oriented C++, whereas most other codes are written in
FORTRAN. The Geant4 user
manual and guides provide more details about Geant4’s capabilities
[28]. More details on
Geant4’s physics models are discussed next.
4.3.1 Geant4 physics models and settings
Geant4 is considered a Class II algorithm under Berger’s
classification scheme (section 3.2.2)
and it employs a variant of the Lewis MS theory, the Urban95 model
[34], for multiple scattering
of all charged particles with all of the EM physics models. The
electromagnetic processes are of
particular interest for the scope of this research. Geant4 contains
three main options for EM
physics models: i) Standard, ii) Low Energy, and iii) Penelope. The
Standard is said to be
applicable from 10 keV up to 1 TeV, the Low Energy and Penelope
options extend the range
down to 250 eV. Details of the physics models used for each process can be found in the Geant4 physics reference manual [35]. The similarities and differences between EGSnrc, MCNP, and Geant4 for photon and electron transport are summarized briefly in Table 4.1.
The user has a number of parameters to specify for the simulation. A production threshold for secondary particles is specified by the user in terms of energy: if a secondary particle would be produced below this threshold, its energy is deposited locally instead of the particle being transported. The cutoff is specified as a range, which Geant4 converts to an energy depending on the material; this allows a single parameter to be given for all geometries. The user can also set a cut for the maximum step size allowed in a given region for a given particle type.
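The range-to-energy conversion described above can be illustrated with a toy sketch. This is not Geant4's actual implementation (which inverts tabulated range tables per material); the power-law range function and the bisection bracket below are assumptions for illustration only.

```python
def energy_from_range_cut(range_cut_mm, range_of_energy):
    """Invert a monotonic range-energy relation R(E) by bisection to find
    the kinetic energy (MeV) whose range equals the user's range cut (mm).
    Geant4 performs an analogous conversion per material from its range
    tables; this toy version only assumes R(E) is monotonically increasing."""
    lo, hi = 1e-6, 1e3  # assumed energy search bracket in MeV
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if range_of_energy(mid) < range_cut_mm:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical power-law range for electrons in water: R ~ 5 * E^1.7 mm
def toy_range_water(e_mev):
    return 5.0 * e_mev ** 1.7
```

With this toy relation, a 5 mm range cut converts to about 1 MeV; in a denser material (smaller R(E) for the same energy) the same range cut converts to a higher energy, which is why one range parameter can serve all geometries at once.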
There are two notable parameters in Geant4 related to electron
ionization: dRoverRange and
finalRange. The dRoverRange parameter is the fractional range: the maximum fraction of the electron's stopping-power range that it may travel in one step. The dRoverRange setting ensures that the electron's step size s satisfies s ≤ dRoverRange × R, where R is the remaining range of the electron of a given starting energy in a given medium. The electron is transported until its remaining range falls below the finalRange parameter, at which point the last step is taken and the electron's remaining energy is deposited locally. Another restriction on electron ionization is the linear loss limit: the user specifies the maximum fraction of its energy the particle is allowed to lose in a given step.
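The interplay of dRoverRange and finalRange can be sketched as follows. This is a minimal illustration assuming the smooth step-limit form given in the Geant4 EM documentation; the default parameter values used here are arbitrary.

```python
def electron_step_limit(remaining_range, dr_over_range=0.2, final_range=1.0):
    """Step limit for electron ionisation (all lengths in mm).
    For a large remaining range R the step is capped near dRoverRange * R;
    as R approaches finalRange the limit relaxes smoothly toward R, and
    once R <= finalRange the last step takes the full remaining range."""
    R, a, rho = remaining_range, dr_over_range, final_range
    if R <= rho:
        return R  # final step: remaining energy deposited locally
    return a * R + rho * (1.0 - a) * (2.0 - rho / R)
```

For example, with dRoverRange = 0.2 an electron with 100 mm of remaining range is stepped in roughly 20 mm increments at first, while an electron within 1 mm of the end of its range is finished in a single step.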
The user can specify some parameters relating to the multiple
scattering algorithm used for the
condensed history technique. The range factor limits the step size in a region so that it never exceeds a fraction (the range factor) of the larger of the particle's remaining range or mean free path. The geometric factor also limits the step size, ensuring that a minimum number of steps occurs within a given volume so that a particle cannot traverse the volume without taking a step inside it. The skin setting determines the thickness of a layer around a volume boundary in which, once a charged particle enters, a single scattering model is used instead of the multiple scattering algorithm. The setting choices and a brief description are summarized in Table 4.2.
[Table 4.1 rows not recoverable from this copy; surviving fragments cover sampling, gamma conversion (pair and triplet production), scattering, and keV-scale transport.]
Table 4.1: Table summarizing the similarities and differences between EGSnrc, MCNP, and Geant4 for modeling photon and electron transport. The table is adapted from Verhaegen and Seuntjens [38].
Multiple scattering algorithm:
- Range factor: limits the step size to a fraction of the larger of the particle's remaining range or mean free path.
- Geometric factor: limits the step size to ensure a minimum number of steps occurs in a volume.
- Skin: thickness around a volume boundary within which single scattering occurs at the boundary instead of multiple scattering.

Ionisation:
- dRoverRange: step size cannot be such that step/range > dRoverRange.
- finalRange: remaining range at which point the last step is the remaining range of the particle.
- Linear loss limit: maximum fraction of its initial energy a particle may lose in a given step.

Cuts:
- Production threshold: range (converted to an energy per material) below which secondary particles are not produced.
- Step limitation: sets a maximum step size allowed in a region for a particle type.

Table 4.2: Summary of each parameter for the multiple scattering algorithm, ionisation, and production cuts. The user must specify each of these parameters in Geant4.
Validation of Geant4 for electromagnetic processes has been
investigated. A study in 2005 by
Poon et al [39] showed there were some deficiencies with the
electron transport with respect to
calculating dose in clinically significant conditions. Since then, updates to Geant4 have addressed these issues, improving the electron lateral displacement and overall electron transport [35].
Geant4 has been used for radiation therapy applications. In 2003,
Rodrigues et al [40] used
Geant4 to calculate the dose in anthropomorphic phantoms and
achieved agreement within 2.4% for PDDs and within 2.6% for dose profiles in comparison to doses measured using ion chambers and TLDs. The study used a phase space file of a 6 MV
photon beam from a Siemens
Mevatron KD2 linac provided by Chaves et al [41]. The next year in
2004, Carrier et al [42]
looked at the validation of Geant4 for medical physics by looking
at the dose from electron and
photon beams in homogeneous phantoms and multi-slab phantoms of
different materials and
compared to MCNP, EGS4, EGSnrc, and experimental data. For
monoenergetic photon beams
the dose difference in a multi-slab heterogeneous phantom was
approximately 5% between MC
codes and for simulated linac x-ray beams the dose difference was
2.5%. The study concluded
that Geant4 produced comparable results to MCNP, EGSnrc, and EGS4.
Foppiano et al [43] in
2004 modeled an IMRT prostate treatment with Geant4 and compared to
ion chamber and film
measurements. A full linac model was developed and validated
compared with ion chamber
measurements of PDDs and dose profiles for various field sizes. The
study used a Kolmogorov-
Smirnov test for the comparisons and found the two distributions to
be equal within statistical
uncertainties. Prostate IMRT fields were modeled and qualitatively
compared to radiographic
film measurements. In 2005, Poon and Verhaegen [36] investigated
the three Geant4 (version
4.6.1) EM models (Standard, Low Energy, and Penelope) in the
application of radiotherapy. The
study looked at the cross sections and sampling algorithms and
compared them to EGSnrc and
the XCOM [44], ICRU 37 [45], and PEGS4 data [21]. The comparisons
of incident photon
beams, both monoenergetic and clinical beams, showed an agreement
within 2% of EGSnrc and
the databases for all three models. When looking at an incident
electron beam, the study found
some issues with the electron transport algorithms and concluded
that a more thorough
investigation was required. Again in 2005, a Fano cavity study was
conducted with Geant4 4.6.1
by Poon et al [46] to investigate the electron transport algorithms. By adjusting the maximum step size, the step function (which imposes a fractional range via the dRoverRange and finalRange parameters), and the parameter that determines the step length after an electron is transported away from a boundary, they demonstrated dependencies on the step function and on the boundary-step parameter that can lead to erroneous results. These two studies demonstrated issues with the electron transport in Geant4; however, clinical photon beams were modeled accurately within 2%.
In 2005, Amako et al [47] compared the newer Geant4 (version 6.2) electromagnetic models to National Institute of Standards and Technology (NIST) reference data using a goodness-of-fit test. Geant4 was found to be equivalent to the NIST reference data at a 0.05 significance level.
Faddegon et al [48] modeled a Siemens Primus linac with Geant4 9.0
and investigated dose in a water phantom for electron beams with energies from 6 to 21 MeV, for a 40x40 cm² field size.
MC calculated PDDs and profiles were compared to measured data,
using diodes and ion
chambers, and were found to be within 2% for most areas with some
exception in the build-up
region. No significant differences were found between the Standard
and Low Energy EM physics
models.
In 2011, Cornelius et al [49] modeled a 6 MV photon beam on the
Varian iX Clinac using
Geant4. The linac model was validated against PDD and profile
measurements in a water
phantom, as well as radiochromic film measurements for the modeled
MLC. Good agreement
was seen, with 98% of points passing a 3%/3mm dose difference/distance-to-agreement criterion. The same year, Constantin et al [50] used Geant4 to model the Varian TrueBeam linac. The 6 MV beam was modeled using dose profiles and PDDs at field sizes ranging from 4x4 to 40x40 cm² and at depths ranging from 2.5 to 30 cm. The model agreed with experimental measurements, with 98% of points within 2% of experimental data.
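The dose difference/distance-to-agreement (gamma) criterion used in these comparisons can be sketched in one dimension. This is a simplified, globally normalised version of the gamma index for illustration; positions are in mm and doses in arbitrary units.

```python
import math

def gamma_pass_rate(ref, eval_, dd=0.03, dta=3.0):
    """ref and eval_ are lists of (position_mm, dose) points.
    For each reference point, take the minimum combined dose/distance
    metric over the evaluated distribution; a point passes if gamma <= 1."""
    d_crit = dd * max(d for _, d in ref)  # global dose criterion
    n_pass = 0
    for rp, rd in ref:
        gamma = min(
            math.sqrt(((ed - rd) / d_crit) ** 2 + ((ep - rp) / dta) ** 2)
            for ep, ed in eval_
        )
        n_pass += gamma <= 1.0
    return 100.0 * n_pass / len(ref)
```

An identical pair of distributions passes at 100%; a uniform 2% dose offset still passes a 3%/3mm test, while a 5% offset in a flat region fails every point.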
In all cases, Geant4 was shown to be a capable and accurate tool. Geant4 does have limitations for electron dose deposition on very small scales (5-90 micrometers). However, the cubic voxels used to score dose in MC simulations typically have sides of 1 to 5 mm, and on these scales Geant4 is capable of accurate dose calculation. Geant4 has been shown to accurately model clinical photon beams. It is a robust toolkit that has shown significant improvement over the past decade and is continuously being validated against measurements and improved with frequent releases that address issues and discrepancies.
4.4 Geant4 Application for Tomographic Emission (GATE)
Geant4 Application for Tomographic Emission or GATE is a
macro-structured software built
from the Geant4 toolkit by the OpenGATE collaboration [51]. The
software is a community
effort from researchers around the world and is open source; users are encouraged to offer suggestions and edit the source code to expand GATE's utility. The software was initially designed for nuclear medicine PET and SPECT imaging and has been validated for these applications [49,50,51], but has also been extended to dosimetry. The goal was
to develop a MC tool for
researchers in medicine that was as robust as Geant4 but easier to
learn and use. Geant4 is a
toolkit in C++ where the user must write the application using the
toolkit whereas GATE is
macro structured software with predefined commands. Users skilled
in C++ can add new
commands and libraries or modify existing libraries of GATE or
Geant4. The first release of
GATE was in 2003, accompanied by papers from Santin et al [55] and Strul et al [56]; the following year, a 2004 publication by Jan et al [57] introduced GATE.
Many studies have applied
GATE to SPECT and PET imaging but this review will focus on the
publications regarding
dosimetry and radiation therapy. The software comes with many
different examples for common
medical research scenarios that help new users learn GATE and can
be reworked to fit the user’s
needs.
The software inherits the robust capabilities of Geant4, notably
well-validated physics models,
geometry modeling tools, and visualization and three-dimensional
rendering. An important
feature of GATE is the ability to model time-dependent events.
Time-dependence is incorporated
into all steps, including dynamic sources, source decays, and
geometry motion. The program
uses a synchronised virtual clock to keep track of all time-dependent events coherently. A recent
update to GATE, GATE V6, facilitates modeling of CT and
radiotherapy systems [58]. The
version of GATE used for this research is v6.2. This version was
released in September of 2012
and is validated for use with Geant4 9.5p01. The new version
brought some extra key features:
new options to speed up simulations, components to use for dose
calculation, and optical physics
models for modeling detector response.
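GATE's macro-driven structure and virtual clock can be illustrated with a short macro sketch. This is illustrative only: the volume name "gantry" and all parameter values are assumptions, and exact command availability varies between GATE versions.

```
# Define the simulated time structure (virtual clock)
/gate/application/setTimeSlice   0.1 s
/gate/application/setTimeStart   0.0 s
/gate/application/setTimeStop    10.0 s

# Attach a synchronised rotation to a hypothetical 'gantry' volume
/gate/gantry/moves/insert        rotation
/gate/gantry/rotation/setAxis    0 1 0
/gate/gantry/rotation/setSpeed   6 deg/s
```

The geometry is re-evaluated at each time slice, so the rotation above advances 0.6 degrees per slice while sources and decays follow the same clock.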
4.4.1 Application of GATE to radiation therapy
Most of the work done with GATE was with respect to PET and SPECT
imaging until version 6
was released in 2011. Before version 6, a study by Visvikis et al [59] in 2006 investigated the use of GATE for dosimetry applications. They compared GATE to MCNPX2 and EGSnrc for two depth dose curves, from an 18 MV x-ray beam and a 20 MeV electron beam incident on slabs of different thicknesses of water, aluminum, and lung. They found the GATE results were in good agreement (approximately 2.3%) with EGSnrc. The GATE calculations had slower computational times and some deficiencies in the boundary crossing models and multiple scattering algorithms used in Geant4. It is worth noting that since this study, the Geant4 version underlying GATE has been updated to address the electron transport issues and GATE has been updated to include more variance reduction techniques to improve
the efficiency. In 2008 Thiam
et al [60] published their results on evaluating the low-energy
photon dose calculated by GATE
v3.0.0. They looked at doses from ¹²⁵I brachytherapy sources from Best, Symmetra, and Amersham and compared them to the TG-43 formalism. For the radial dose function, anisotropy function, and dose rate constants, all differences were approximately 1-3.5%, with the exception of the radial dose function of the Amersham source, which differed by ~15%. Grevillot
et al [61] in 2010 investigated the free parameters in Geant4
through GATE for the purpose of
modeling proton therapy. All of the radiotherapy work described above was done with GATE versions before 6.
Version 6 of GATE was released in 2011, which included updates to
facilitate modeling
radiotherapy systems. The first linac modeled with GATE v6 was a 6
MV Elekta Precise by
Grevillot et al [17] in 2011. Dose profiles and depth dose curves
were measured and compared to
MC calculated values from the 6 MV beam delivered to a water
phantom for field sizes from 5x5 to 30x30 cm². Good agreement was achieved, with 90% of all points passing a 3%/3mm gamma comparison. The next two studies calculated electron and photon dose kernels using GATE.
In 2011, Maigne et al [62] compared electron dose calculations from
GATE with results from
EGSnrc and MCNP for energies between 15 keV and 20 MeV. The
comparison used the
Standard EM option for the physics models to calculate dose point
kernels and pencil beam
kernels. The dose point kernels were calculated at energies from 15
keV to 4 MeV. Energies
from 50 keV to 4 MeV showed less than 3% difference between GATE and EGSnrc, while at 15 keV the difference was greater than 8%, perhaps indicating the Standard EM physics models were no longer
appropriate. The pencil beam kernels were calculated from 15 keV to
20 MeV and the results
showed that GATE and EGSnrc differed by less than 4%. The authors
concluded that GATE is
well suited for calculating electron dose distributions for
energies greater than 50 keV. In 2012
Papadimitroulas et al [63] developed a dose point kernel database
for nuclear medicine
applications using GATE. Dose point kernels were calculated with
monoenergetic electron
beams and various radioisotopes. GATE calculated values were
compared to previously
published results. Electron dose point kernels had a difference of
less than 5% for energies
greater than 50 keV and approximately 6% for less than 50 keV. Beta
radionuclides had a mean
difference of 4% and photon values 2%. These studies have shown
that GATE is adequate for
photon dose calculations. Electron dose calculations using GATE are
feasible at energies greater
than 50 keV. The studies discussed next are in clinical
settings.
A clinical IMRT (intensity modulated radiation therapy) treatment
planning system was
evaluated using GATE by Benhalouche et al [64] in 2013. They
modeled the Siemens Oncor 6
MV linac with a 160-leaf MLC. The model was validated with measurements of profiles at 15 mm depth and depth dose curves for field sizes of 4x4, 5x5, 10x10, 15x15, 20x20, 25x25, 30x30, and 40x40 cm². The MC PDD and profile data were within 1.472% ± 0.3% and 4.827% ± 1.3% of measured data, respectively. Seven IMRT head-and-neck plans were reproduced with the MC model and compared to the treatment planning system and phantom measurements using a 2D gamma comparison. Across the seven plans, 90.8% of all points passed a 5%/4mm dose difference/distance-to-agreement gamma criterion. Later in 2013,
Sadoughi et al [65] compared GATE with MCNPX applied to a linac. A
model of the Elekta
Compact 6 MV linac was developed for both GATE 6.1 and MCNPX 2.6.
GATE results agreed
with MCNPX calculations, with 100% of points passing a 2%/2mm dose difference/distance-to-agreement gamma criterion. The authors concluded that the Standard and Low Energy EM physics models are appropriate for modeling a 6 MV linac, but not the Penelope EM model. A
Monte Carlo model of the Elekta Synergy 6 MV linac was developed by
Tayalati et al [66] in
2013 using GATE v6.2. Depth dose curves and dose profiles for
square field sizes of 5x5, 10x10,
20x20, and 30x30 cm² were calculated and compared to measured
values. Greater than 98% of
points passed a 3%/3mm gamma comparison for the depth dose curves
and 100% of points
passed for the dose profiles. All of the studies discussed so far dealt with static treatments or, at most, dynamic leaf motion for IMRT. In the following study, GATE was applied to a treatment with dynamic geometry.
GATE has also been used to model more complex treatment techniques such as volumetric modulated arc therapy (VMAT). VMAT is a technique in which the gantry continuously rotates around the patient while the beam is delivered; throughout the rotation, the gantry rotation speed, MLC positions, and dose rate are modulated. A study by Boylan et al [67] in 2013 used GATE to model an Elekta Synergy 6 MV linac performing VMAT treatments. The model was intended as a quality assurance tool for patient plans. The static model (no rotation or MLC motion) was matched against depth dose curves and dose profiles with field sizes from 4x4 to 30x30 cm². All field sizes had over 95% of points pass a 2%/2mm dose difference/distance-to-agreement gamma comparison. Two VMAT plans were examined: a prostate plan and a head & neck plan. The MC calculated distributions were compared to film measurements: 99.8% of points passed a 3%/3mm gamma comparison for the prostate plan and 98.4% of points passed a 4%/4mm gamma comparison for the head & neck plan. This study demonstrates GATE's time-dependent geometry capabilities applied to complex radiotherapy techniques.
Lastly, a review paper on GATE’s capabilities, regarding
radiotherapy and dosimetry, was
published in 2014 by Sarrut et al [68]. The paper highlights GATE’s
capabilities demonstrated
by previous research and provides insight into the future of GATE
for modeling radiotherapy
systems and dosimetry studies. While there is still room for
improvement in GATE’s features
and accuracy, previous research has demonstrated that it is capable
of accurately modeling
radiotherapy dosimetry.
4.5 Motivation for using GATE as the MC package for this
project
While GATE is closely connected with Geant4, other MC particle simulation codes differ in their calculation methods, most often in the method used to model electron transport. Many of the studies previously mentioned compare multiple MC codes with one another, and EGSnrc is often considered a gold standard due to its widespread use and validation in the medical physics field. Within the statistical uncertainties and parameters of the experiments, these codes are found to be in agreement for linac modeling. Major advantages of GATE over the other codes are its robust geometry capabilities (built on Geant4), object-oriented language, and well-validated physics models for all particle types. A key difference from Geant4 is GATE's time-dependent modeling capability: GATE allows users to specify motion of any and all geometries in one simulation, synchronising all motion together. Another advantage is that GATE's user-friendly macro structure makes it a more attractive simulation package to use.
This research project will use GATE to model the Novalis Classic 6 MV linac. The Novalis Classic linac has not previously been modeled with GATE or any other MC software. This research will investigate GATE's capabilities for small field applications and provide a validated linac model that can be extended to multiple research projects. The validated model will also have the potential to provide an independent dose calculation for evaluating treatment plans.
5.1 Overview
This chapter will outline the materials and methods used for the
development and validation of
the Monte Carlo model of the Novalis Classic linac. First, an overview of the linac and its capabilities is given, then the reference data used, followed by the GATE simulation settings and the methods used to analyze the data.
The strategy used for modeling the linac is as follows: first
appropriate computational power
needs to be assessed and acquired, whether a local machine or
cluster,