Efficient Fourier transformation of unstructured meshes
and application to MRI simulation
by
Alejandro Martinez
A thesis submitted in conformity with the requirements for the degree of Master of Mechanical Engineering
Graduate Department of Mechanical and Industrial Engineering
University of Toronto
Copyright © 2014 by Alejandro Martinez
Abstract
Efficient Fourier transformation of unstructured meshes and application to MRI
simulation
Alejandro Martinez
Master of Mechanical Engineering
Graduate Department of Mechanical and Industrial Engineering
University of Toronto
2014
This thesis demonstrates different ways to compute the MRI signal in finite element
magnetic resonance imaging or FE-MRI. For linear elements the MRI signal can be found
analytically. The error metric for linear elements to calculate the signal with a fixed error
in k-space using an adaptive Gaussian quadrature is derived using finite element function
derivatives and is then extended for use in quadratic elements. The feasibility of using
the numerical steepest descent (NSD) method, which is designed to compute oscillatory
integrals like the MRI signal equation, was also studied. The method was found to
compute the MRI signal faster than adaptive Gauss quadrature for quadratic surface
elements using typical MRI imaging parameters. The feasibility of NSD is discussed for
volumetric quadratic elements and for different mesh geometries.
Dedication
For my love, Helena Violet Kelly, with gratitude for your love and support
Acknowledgements
I would like to thank Prof. Steinman and Prof. Huybrechs, and the other members of the examination committee, Prof. Nachman and Prof. Mandelis, for their support and discussions. I would also like to thank my family for all their love and support.
Contents
1 Introduction 1
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Introduction to MRI theory and simulation . . . . . . . . . . . . . . . . . 1
1.2.1 Signal generation . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.2 Slice selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.2.3 Spatial encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2.4 Signal acquisition and reconstruction . . . . . . . . . . . . . . . . 10
1.2.5 Image resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2.6 Image artefacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.3 Literature review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.1 Analytic solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.3.2 Simulators using magnetization maps . . . . . . . . . . . . . . . . 20
1.3.3 Simulators using T1, T2 and proton density maps . . . . . . . . . 20
1.3.4 Finite element based simulators . . . . . . . . . . . . . . . . . . . 21
1.3.5 Bloch equation simulators . . . . . . . . . . . . . . . . . . . . . . 23
1.4 MRI acquisition within the FE-MRI library . . . . . . . . . . . . . . . . 23
1.5 Research objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2 Adaptive Gauss quadrature for FE-MRI elements 26
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2 Gauss quadrature to compute oscillatory integrals . . . . . . . . . . . . . 29
2.3 Computing oscillatory integrals in higher dimensions . . . . . . . . . . . 30
2.4 Error estimation for tri3 elements . . . . . . . . . . . . . . . . . . . . . . 31
2.5 Error estimation for tri6 elements . . . . . . . . . . . . . . . . . . . . . . 36
2.6 Error estimation for tet4 elements . . . . . . . . . . . . . . . . . . . . . . 38
2.7 Error estimation for tet10 elements . . . . . . . . . . . . . . . . . . . . . 40
3 Algorithm details and implementation 41
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.2 FE-MRI software architecture . . . . . . . . . . . . . . . . . . . . . . . . 42
3.3 K-space acquisition algorithm . . . . . . . . . . . . . . . . . . . . . . . . 43
3.4 Cycles per element quadrature table . . . . . . . . . . . . . . . . . . . . . 46
3.5 Calculation of quadrature rules . . . . . . . . . . . . . . . . . . . . . . . 48
3.6 Slice profile simulation for volumetric meshes . . . . . . . . . . . . . . . . 48
3.7 Additions and improvements to current FE-MRI functions . . . . . . . . 49
4 Results and discussion 52
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.2 Validation of optimal algorithm for tri3 elements . . . . . . . . . . . . . . 52
4.3 Validation of optimal algorithm for tri6 elements . . . . . . . . . . . . . . 56
4.4 Validation of optimal algorithm for tet4 elements . . . . . . . . . . . . . 56
4.5 Validation of optimal algorithm for tet10 elements . . . . . . . . . . . . . 58
4.6 Computation time comparison between linear and quadratic surface discretizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
4.7 Simulation of MRI acquisition of a carotid artery . . . . . . . . . . . . . 61
5 Feasibility of NSD for faster signal calculation 62
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.1.1 Approximate evaluation of oscillatory integrals . . . . . . . . . . . 62
5.1.2 Adaptive quadrature . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.1.3 Filon method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.1.4 Levin method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.1.5 Numerical steepest descent . . . . . . . . . . . . . . . . . . . . . . 65
5.2 Experiment using NSD for quadratic surface elements . . . . . . . . . . . 66
5.2.1 NSD decomposition for quadratic surface elements . . . . . . . . . 68
5.2.2 Algorithm details . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.3 Results and discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6 Conclusions 75
6.1 Summary of work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
6.2 Future directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
A Generation of Gauss quadratures 77
A.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
A.2 The Golub-Welsch algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 82
A.3 Sample C++ quadrature code . . . . . . . . . . . . . . . . . . . . . . . . 83
B Mesh generation with same discretization error 88
B.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
B.2 Discretization error when using quadratic elements . . . . . . . . . . . . 89
B.3 Discretization error when using linear elements . . . . . . . . . . . . . . . 91
B.4 MATLAB implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 92
C Implementation of NSD algorithm 97
C.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
C.2 G1 decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
C.3 Steepest descent paths for G . . . . . . . . . . . . . . . . . . . . . . . . . 105
C.4 Steepest descent paths for F . . . . . . . . . . . . . . . . . . . . . . . . . 106
Bibliography 108
List of Tables
1.1 Values used for fig. 1.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2 Finite element types used in this thesis . . . . . . . . . . . . . . . . . . . 25
4.1 K-space error maps and histograms for all the element types implemented. Notice that the x-axis for the histograms is in terms of n, where error = 10^n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.2 Test meshes for each element type and their simulated MRI images. For the meshes with quadratic elements, one of the elements is selected to show that the blue edges shown are between element nodes and do not delineate a quadratic element . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.1 Total computation times in seconds and time ratios for different k-space extents for an error threshold of 10^-2 . . . . . . . . . . . . . . . . . . . 72
5.2 Total computation times in seconds and time ratios for different k-space extents for an error threshold of 10^-3 . . . . . . . . . . . . . . . . . . . 72
A.1 Fejer quadrature points and weights . . . . . . . . . . . . . . . . . . . . 81
List of Figures
1.1 Measured (top) and simulated (bottom) magnetization and T2 maps of an
artery wall . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.1 Transformation of a plane wave in local coordinates in the standard trian-
gle to the unit square . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2 tri3 domain s and vertices in global and local coordinate systems with a
given K space vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.3 The plane formed by the gradient of g with respect to ξ1 and the location
of its possible maximum absolute values in domain s. In this case the
maximum absolute value of the gradient is at ξ1 = ξ2 = 0 . . . . . . . . . 37
3.1 Flowchart of the main FE-MRI algorithm . . . . . . . . . . . . . . . . . 44
4.1 K-space error as a function of k-space location for a tet4 unit cube with a
quadrature order of 14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Number of k-space points as a function of n, where error = 10^n, for a tet4 unit cube with a quadrature order of 14 . . . . . . . . . . . . . . . . . . 58
4.3 Mesh of segmented artery (a) and its MRI reconstruction (b) . . . . . . . 61
5.1 K-space computation time maps in milliseconds for different adaptive algorithms and error thresholds. The time ratios between the algorithms are also shown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
B.1 Discretization of a circle using quadratic curve segments. The quantity to
be minimized is proportional to the area between the lines . . . . . . . . 89
B.2 Discretization of a circle using line segments. The quantity to be mini-
mized is proportional to the area between the lines . . . . . . . . . . . . 91
Chapter 1
Introduction
1.1 Motivation
MRI simulators are primarily used to determine the origin of acquisition errors and to
design and optimize new pulse sequences. For example, in [1] an MRI simulator is used to design and optimize an MRI pulse sequence for the characterization of atherosclerotic plaque. They are also used for education and training [2]. The motivation of this thesis
is to design an adaptive Gauss quadrature to evaluate the MRI signal equation for un-
structured meshes and to design and test an algorithm using numerical steepest descent
(NSD) [3], [4], [5], [6]. Using NSD one can compute the Fourier transform of a mesh
faster than other approaches like numeric quadrature [7]. The algorithm was tested us-
ing parameters needed for simulating an MRI acquisition. These results are then used
to implement and improve upon new functions in the FE-MRI library [7] which can be
used to simulate MRI acquisition.
1.2 Introduction to MRI theory and simulation
In this section, an introduction to magnetic resonance imaging (MRI) will be given with
emphasis on signal acquisition, its Fourier transform approximation (k-space) and image
artefacts. The introduction will have an emphasis on imaging and reconstruction methods
that are based on the Fourier transform and for 2D imaging. This introduction will also
aid the reader in understanding how, in some cases, an MRI acquisition of an object can be simulated by the Fourier transform and inversion of a set of points or elements weighted by the actual material properties of those elements, as in the FE-MRI library.
The MRI acquisition process can be separated into the following steps:
1. Signal generation: generating a net magnetization in an object
2. Slice selection: selectively exciting a plane with a given thickness if a 2D acquisition
is desired
3. Spatial encoding: spatially encoding the signal in that plane
4. Signal acquisition and reconstruction: acquiring a radio frequency signal and re-
constructing the image
These steps will be described in detail in the subsequent subsections. This introduc-
tion is based on the following sources: [8], [9], [10]. An overview is given next. MRI
works by aligning nuclei in an object along their spin axes more in one direction than
another. This spin is a magnetic and quantum effect but it can be described in the same
way as a spinning top. This is done by a very strong fixed magnetic field B0. Then
another moving magnetic field that is perpendicular to the fixed field called B1 tips the
spins and they precess around the original fixed field. The amount of tipping depends on
how long the B1 field is applied. The speed of precession is dependent on the strength
of B0. The B1 field can be chosen in a way that allows for selective excitation of a slice
perpendicular to the z axis. Another smaller fixed gradient field Gx (i.e. B = B0 + Gx x)
is superimposed on B0 to cause a gradient in the total fixed field. The precessing spins
emit radio waves that can be picked up by a coil. The signal’s frequency and magnitude
are related to the spatial location of the spins along the x axis and the amount of spins
in that given location. Hence an image of the spin distributions can be reconstructed by
taking the inverse Fourier transform of the signal. To obtain a 2D distribution of spins,
the locations along the y-axis can be encoded by using a gradient field Gy applied before the measurement takes place, so that the phase of each spin depends on the location
along the y-axis. The readout coils which do the measurement can measure the phase,
magnitude and frequency of the signal which means the 2D distribution of spins in a slice
can be determined.
1.2.1 Signal generation
The coordinate axes that will be used in this and subsequent sections are only used for
convenience and consistency with MRI literature. Some nuclei in living things, like in
hydrogen atoms (1H), can have nuclear spin angular momentum. These types of nuclei
will be referred to as spins. The most common spin used for MRI is 1H since it is a
component of water and the human body is mostly water. Spins can be described as
small bar magnets oriented randomly in a given material. When the spins are exposed
to a static magnetic field, some spins will align with the magnetic field and others align
in the opposite direction to it and in both orientations they will rotate around the z axis
with an angular frequency given by the Larmor equation:
ω = γB0 (1.1)
Where γ is the gyromagnetic ratio which for a particle is the ratio between its magnetic
dipole moment and its angular momentum. For 1H, γ/2π = 42.48 MHz/T [10]. A group
of spins in an object with the same γ value are commonly referred to as isochromats. B0
is the applied magnetic field referred to as the static field and it points in the positive
z-axis. Slightly more spins will align in the same direction as B0 because a spin is more
likely to take the lower energy state than a higher energy state. The net magnetization
is simply the spins oriented towards the positive z-axis minus the ones oriented towards
the negative z-axis and it is given by the following equation:
$$M_{0z} = |\vec{M}| = \frac{\gamma^2 \hbar^2 B_0 N_s}{4 K T_s} \qquad (1.2)$$
Where $\hbar$ is Planck's constant divided by $2\pi$, $N_s$ is the number of spins, $K$ is the Boltzmann constant and $T_s$ is the absolute temperature of the spins. The derivation of equation 1.2
is similar to the one used in [11] to determine the properties of a simple paramagnetic
solid. More information is given in [10] and [11]. Equation 1.2 only applies to so called
spin-1/2 systems like 1H isochromats. The spins also can be made to rotate with other
magnetic fields with respect to any line in the x-y plane but only by discrete angles;
it is a quantum mechanical effect. However since the net magnetization is a sum of
billions of spins in the whole object, the angle of rotation for the magnetization can be
assumed to be continuous and a classical description analogous to a precessing spinning
top can be used. After applying the static field, the net magnetization will point in the
same direction. The equation that describes the time-dependent behaviour of ~M in the
presence of an applied magnetic field is called the Bloch equation and it is given by:
$$\frac{d\vec{M}}{dt} = \gamma \vec{M} \times \vec{B} - \frac{M_x\hat{i} + M_y\hat{j}}{T_2} - \frac{(M_z - M_0)\hat{k}}{T_1} \qquad (1.3)$$
Where $\vec{B}$ is the total applied magnetic field, $M_x$, $M_y$, $M_z$ are the components of $\vec{M}$, and $T_1$ and $T_2$ are the longitudinal and transverse relaxation times respectively. The range of values for the relaxation times for 1H is $T_1 \in [100, 1500]$ ms and $T_2 \in [100, 1500]$ ms [10]. Let
us introduce a transverse magnetic field that oscillates at the Larmor frequency:
$$\vec{B}_1(t) = 2 B_1^e(t) \cos(\omega t + \varphi)\,\hat{i} \qquad (1.4)$$
Where $B_1^e(t)$ is called an envelope function, which for example can have a value of $B_1$ from time 0 to some time $\tau_p$ and 0 otherwise. $\tau_p$ is simply the amount of time the $B_1$ field is applied; it is also referred to as the pulse width. It will be shown later in this section that only the integral of $B_1^e(t)$ from 0 to $\tau_p$ matters, not its shape. The value $\varphi$ is an initial phase angle. This transverse magnetic field can be decomposed into fields rotating in opposite directions about the z-axis. One of the terms rotates in the opposite direction to the precessing spins and has a negligible effect. Therefore the effective field that needs to be considered is:
$$\vec{B}_1(t) = B_1^e(t)\left[\cos(\omega t + \varphi)\,\hat{i} - \sin(\omega t + \varphi)\,\hat{j}\right] \qquad (1.5)$$
The magnetic field 1.5 can be transformed into a vector in the Larmor-rotating frame with the following coordinate transformations:

$$\hat{i}' \equiv \cos(\omega t)\,\hat{i} - \sin(\omega t)\,\hat{j}, \qquad \hat{j}' \equiv \sin(\omega t)\,\hat{i} + \cos(\omega t)\,\hat{j}, \qquad \hat{k}' \equiv \hat{k} \qquad (1.6)$$
Using the transformation 1.6, the magnetic field 1.5 becomes:
$$\vec{B}_{1,\mathrm{rot}}(t) = B_1^e(t)\,\hat{i}' \qquad (1.7)$$
Which is expected, since the rotating basis vector $\hat{i}'$ spins around the z-axis in the same direction and at the same angular speed as the original $\vec{B}_1(t)$ vector. If the pulse width is small compared to
both relaxation times, the effect of ~B1(t) on the magnetization ~M is described by the first
term of the Bloch equation. As will be shown below, it is also advantageous to transform
that term into the Larmor-rotating frame. The transformed Bloch equation for a short excitation using the transformation 1.6 is:

$$\frac{\partial \vec{M}_{\mathrm{rot}}}{\partial t} = \gamma \vec{M}_{\mathrm{rot}} \times \vec{B}_{\mathrm{eff}} \qquad (1.8)$$
Where $\vec{B}_{\mathrm{eff}} = \vec{B}_{\mathrm{rot}} + \vec{\omega}/\gamma$ is referred to as the effective magnetic field. Since the total rotating field is $B_0\hat{k}' + B_1^e(t)\,\hat{i}'$, the motion of the bulk magnetization vector will be described by:

$$\frac{\partial \vec{M}_{\mathrm{rot}}}{\partial t} = \gamma \vec{M}_{\mathrm{rot}} \times B_1^e(t)\,\hat{i}' \qquad (1.9)$$
Notice that equation 1.9 will be equal to zero (i.e. the bulk magnetization does not precess) if the only field present is the original $B_0\hat{k}'$ field. The solution to equation 1.9 up to the pulse width time $\tau_p$ with the initial condition $\vec{M}(0) = M_{0z}\hat{k}'$ is the following:

$$M_{x'}(t) = 0 \qquad (1.10)$$

$$M_{y'}(t) = M_{0z}\sin(\alpha) \qquad (1.11)$$

$$M_{z'}(t) = M_{0z}\cos(\alpha) \qquad (1.12)$$

Where $\alpha$ is referred to as the flip angle and is given by:

$$\alpha = \int_0^{t} \gamma B_1^e(u)\,du \qquad (1.13)$$
If the envelope function is rectangular then $\alpha = \gamma B_1 t$. So in this case the bulk magnetization tips away from the $z'$ axis about $x'$, or precesses (since it is tipping but also rotating about $z$ in the laboratory frame), at an angular velocity of $-\gamma B_1$. Also notice that as long as the area under $B_1^e(t)$ is the same, the bulk magnetization will tip to the same final angle.
1.2.2 Slice selection
Since the envelope function $B_1^e(t)$ can have any shape and still produce the desired tip angle as long as its area is constant, it can be used to selectively excite a slice of the object being imaged. For simplicity we will assume the slice is perpendicular to the z axis. If we apply a gradient to the $B_0$ field as a function of the z component of location while the spins are being tipped, their positions will be spatially encoded by their resonant frequencies with the following frequency selection function:

$$p(f) = \begin{cases} 1, & |f - f_c| < \Delta f \\ 0, & \text{otherwise} \end{cases} \qquad (1.14)$$
Where:

$$f_c = f_0 + \frac{\gamma}{2\pi} G_z z_0 \qquad (1.15)$$

$$\Delta f = \frac{\gamma}{2\pi} G_z \Delta z \qquad (1.16)$$
To obtain the time signal needed to produce the corresponding frequency spectrum, the Fourier transform can be applied to equation 1.14:

$$B_1(t) \propto \int_{-\infty}^{\infty} p(f)\, e^{-2\pi i f t}\, df = \Delta f\, \mathrm{sinc}(\pi \Delta f t)\, e^{-2\pi i f_c t} \qquad (1.17)$$

From equation 1.17, we get the envelope function:

$$B_1^e(t) = A\, \mathrm{sinc}(\pi \Delta f t) \qquad (1.18)$$

Where the constant $A$ depends on the desired spin tip angle. This pulse is not possible to apply in practice, so it is truncated from time 0 to some time $\tau_p$, with the centre of the sinc function at $\tau_p/2$.
1.2.3 Spatial encoding
One can use the static field B0 to encode spatial information in the final MRI signal. A
simple way of achieving this is with a constant gradient field such as:
ω(x) = ω0 + γGxx (1.19)
So now the field B0 pointing along the z-axis changes magnitude along x. After tipping
the spins uniformly along a slice in the z-axis to 90◦ for example, this gradient can be
used to spatially encode a signal using both phase and frequency. This is needed because
the image signal in 2D acquisition is:
$$s(t) = \iint m(x, y)\, e^{-i\gamma(G_x x + G_y y)t}\, dx\, dy \qquad (1.20)$$
Which is the Fourier transform of the object’s magnetization, m(x, y), on the particular
slice being imaged:
$$M(k_x, k_y) = \iint m(x, y)\, e^{-2\pi i (k_x x + k_y y)}\, dx\, dy \qquad (1.21)$$
Where the Fourier coordinates for general G gradients are given by:
$$k_x = \frac{\gamma}{2\pi} \int_0^t G_x(u)\, du \qquad (1.22)$$

$$k_y = \frac{\gamma}{2\pi} \int_0^t G_y(u)\, du \qquad (1.23)$$
In the MRI literature Fourier space is referred to as k-space. Phase encoding can be
achieved as follows: first tip all spins 90◦ in the same direction using the B1(t) field
discussed in Section 1.2.1, then apply a gradient field which happens along a line in
the x-y plane (the magnetic field is still pointing in the z-axis) for a small time Tpe
which stands for phase encoding time. After Tpe seconds, the spins will have a phase
proportional to their location. For example, if the gradient along the y-axis, $G_y$, is constant, then the phase of the signal as a function of y is:

$$\phi(y) = -\gamma G_y y T_{pe} \qquad (1.24)$$
If the signal is acquired right after Tpe for a very short time interval, we would obtain
the sum of the magnetizations for each line that contains the same phase. However, if we
apply yet another gradient field such as Gx while recording the signal every ∆t seconds,
we will also obtain the magnetization along those lines as a function of the frequency,
i.e. the spatial information along the line has been frequency encoded. This procedure is referred to as a pulse sequence. The sequence described above is referred to as spin-warp imaging and it is one of the most commonly used sequences because it results in equally spaced points in k-space on a grid, which can be easily transformed into an image using the fast Fourier transform.
1.2.4 Signal acquisition and reconstruction
Equation 1.20 for the MRI signal is obtained from radio waves given off by the object
being imaged which are measured by coils in the MRI machine. The signal from the
coils is then converted to a digital signal and its phase and frequency components are
determined. If the signal is sampled in k-space at uniform intervals such as in spin-warp
imaging, the signal can be obtained applying the inverse discrete Fourier transform of
the discrete version of the signal equation. This method can take O(N2) operations.
The same transform can be performed in O(Nlog2N) operations using the fast Fourier
transform or FFT. FFT is the reconstruction method most likely used in MRI equipment.
Other transforms are available for k-space acquisition that is sampled radially such as
the Radon transform. For more information refer to [9].
1.2.5 Image resolution
When the MRI signal is being acquired, k-space is recorded at discrete time intervals.
Using equations 1.23 and 1.22 the k-space spacing is:
$$\Delta k_y = \frac{\gamma}{2\pi} \Delta G_y t_y \qquad (1.25)$$

$$\Delta k_x = \frac{\gamma}{2\pi} G_{x,\max} \Delta t \qquad (1.26)$$
Because of the Fourier sampling theorem [9], the field of view of the image should be the
following:
$$\mathrm{FOV}_x = \frac{1}{\Delta k_x} \qquad (1.27)$$
Which also applies for the y and z axes. The spatial resolution of the image in the x axis
is the field of view divided by the number of readout samples:
$$\Delta x = \frac{\mathrm{FOV}_x}{N_{\mathrm{read}}} = \frac{1}{\Delta k_x N_{\mathrm{read}}} \qquad (1.28)$$
While the spatial resolution in the y axis is the field of view divided by the number of
phase encoding steps:
$$\Delta y = \frac{\mathrm{FOV}_y}{N_{pe}} = \frac{1}{\Delta k_y N_{pe}} \qquad (1.29)$$
Notice the above relations apply only to spin-warp imaging. They do not apply to other
pulse sequences that sample k-space non-uniformly.
1.2.6 Image artefacts
Image artefacts arise in MRI acquisition either due to insufficient or inaccurate data or
both. The motivation of this thesis is to provide a tool to simulate MRI such that the
cause of image errors can be understood, and also to design and test protocols, for example how the resolution affects the measurement accuracy of some anatomic feature, as in [12]. With this in mind, the following image errors are described together with their causes and how they can be simulated, in general or in the FE-MRI library.
Gibbs or truncation artefact
The Gibbs or truncation artefact is seen as spurious ringing around sharp discontinuities
in the reconstructed image. If the image is a smooth function, the inverse transform of
the signal equation uniformly converges to the true image when the number of samples
N → ∞ for a given FOV . However if the function is discontinuous, the inverse transform
will have ringing with an overshoot and an undershoot of about 9% of the difference at
the discontinuity. Surprisingly the overshoot and undershoot percent value is constant
regardless of N . However the period of the ringing will decrease with higher N values.
If the signal were not truncated to a finite number of samples N, the ringing period would be zero and the exact image would be reconstructed, hence the name truncation artefact. This artefact can be made to appear in any simulated MRI acquisition simply
by using a low enough number of N samples.
Aliasing
Aliasing happens when k-space is under-sampled. The artefact is a wraparound of the
image in the direction that is being under-sampled. This can be clearly understood by
the relationship between the true image function and its Fourier series reconstruction [9]:
$$\hat{I}(x) = \sum_{n=-\infty}^{\infty} I\!\left(x - \frac{n}{\Delta k_x}\right) \qquad (1.30)$$
Where the term $1/\Delta k_x = \mathrm{FOV}_x$. This error can be readily simulated by using an FOV value that is smaller than the object size along the same direction, and eliminated by choosing an FOV value larger than the object being imaged.
Chemical shift artefacts
Chemical shift artefacts happen when an object with different isochromats is being im-
aged. Different isochromats will have a different γ value. Since spatial position is fre-
quency encoded in the readout direction when an MRI signal is being obtained, signals
coming from two different isochromats in the same location such as water and fat will
end up in different locations along the frequency-encoded signal. The artefact makes a region of the object, such as fat, appear shifted compared to the rest of the image features. The frequency shift due to the different
isochromat is given by [9]:
∆ωc = γδB0 (1.31)
The frequency bandwidth of a pixel in k-space is:
∆ωx = γG∆x (1.32)
Since the pixel bandwidth is proportional to the pixel length, the spatial displacement
in units of a pixel is:
$$\delta_x = \frac{\Delta\omega_c}{\Delta\omega_x} = \frac{\delta B_0}{G \Delta x} \qquad (1.33)$$
And the spatial shift will be:
$$\Delta x_c = \delta_x \Delta x = \frac{\delta B_0}{G} \qquad (1.34)$$
Therefore an effective way to reduce the chemical shift is to use a strong readout gradient. In practice this will also reduce the signal-to-noise ratio of the acquired signal, so an optimal value for G has to be determined. This error can be simulated simply by obtaining the k-space of each isochromat separately and then adding the signals.
Motion artefacts
A type of very obvious motion artefact that happens when the whole object being scanned
is in motion is called ‘ghosting’. It can be clearly identified by copies of the object being
imaged along the direction of motion and with a lower contrast than the actual image.
This error happens when the object moves along the phase-encoding direction. For
example, assuming the object moves at a constant speed vx along the x-direction and
there is a constant phase encoding gradient Gx applied for some time interval τ starting
from time tn, the phase shift is the following:
$$\phi = \gamma \int_{t_n}^{t_n+\tau} G_n (x + v_x t)\, dt = \gamma G_n x \tau + \gamma G_n v_x \tau\,(t_n + \tau/2) = 2\pi k_n x + 2\pi k_n v_x (t_n + \tau/2) \qquad (1.35)$$
So the motion induced phase shift is:
$$\Delta\phi_m = 2\pi k_n v_x t_n + 2\pi k_n v_x \tau/2 \qquad (1.36)$$
Since the time τ is much smaller than the total acquisition time up to that point, the
second term of equation 1.36 can in most cases be omitted. Equation 1.36 without the
second term can be obtained directly by applying the Fourier shift theorem to the MRI
signal equation. For constant vx, Gx and a negligible τ , the signal equation is:
$$s_m(k, t) = e^{-2\pi i k_n v_x t}\, s(k, t) \qquad (1.37)$$
Equation 1.37 has the same phase shift as the one given in equation 1.36. To simulate
this error, either equation 1.37 can be applied directly to the simulated k-space map for
a constant speed vx or the object can be translated a certain distance based on vx in the
x direction and then its corresponding k-space points calculated to simulate the error. A
further discussion on other types of motion artefacts is given in [9].
Flow artefacts
Flow artefacts are caused by the motion of fluids such as blood in the living tissue being
imaged. The main feature of these errors is a loss or gain of signal intensity where the
flow is taking place like inside an artery [13]. The only simulation method that has been
used successfully for these types of error is to either measure the flow directly using other
imaging methods or to simulate the flow using computational fluid dynamics (CFD).
Then a certain number of streamlines are extracted from the flow field. Afterwards the
individual effect of spins moving along the streamlines is computed to obtain an MRI
signal. References [13] and [14] use this method of simulation. Since a set of discrete spins is used to simulate a continuous fluid, the number of spins needed and the related computation time are high.
1.3 Literature review
The different MRI simulation methods can be grouped into the following types:
1. Analytic solutions
2. Simulators using magnetization maps
3. Simulators using T1, T2 and proton density maps
4. Finite element simulators
5. Bloch equation simulators
The order of the list reflects increasing physical accuracy of the models, in most cases at the expense of computation time. Except for some Bloch equation simulators, these types use the Fourier transform of a prescribed signal; therefore they technically are not simulators in the sense that they are not simulating a full MRI machine. A full simulation would require modelling both the object, using the Bloch equation, and the MRI components such as the coils and the B0 magnet. However, the methods described below will be referred to as simulations since they do still crudely simulate the object and parts of the MRI machine. Each type is described in the following sections.
1.3.1 Analytic solutions
In some situations, an analytic solution for k-space can be found. The author co-authored
a paper, [12], where MRI acquisition of an artery was performed to determine how the
total arterial wall thickness obtained from the segmentation of the MRI image depends
on the T2 values of the arterial layers (intima, media and adventitia) and TE . To test
if this is due to a partial volume effect (a low resolution) or because of the different
T2 values of each layer, an analytic k-space was constructed for each of the 4 acquired
images by adding the k-spaces of 4 concentric circles. Then the obtained images were
segmented. The values used are given in the following table:
Layer           ro (mm)   p(TE=17ms)   p(TE=34ms)   p(TE=51ms)   p(TE=68ms)
arterial blood  3.7       0.0000       0.0000       0.0000       0.0000
intima          3.82      1.0242       0.8800       0.7561       0.6496
media           4.35      1.0000       0.7699       0.5927       0.4563
adventitia      4.42      0.7063       0.3539       0.1773       0.0888
background      N/A       0.0000       0.0000       0.0000       0.0000

Table 1.1: Values used for fig. 1.1
Where ro in Table 1.1 stands for the outer radius and p for the image intensity. The image
intensity is in arbitrary units; the actual value depends on the specific MRI scanner. The
thickness changes as a function of TE closely match the experimental results. TE is called
the echo time and is simply the time over which the spins are allowed to relax after an RF
field is applied. Then a mono-exponential fit was performed (p = e^(−TE/T2), where p is the
image intensity at a pixel) on the 4 simulated images to get a T2 map; it also closely
matches the T2 map obtained from the actual MRI images. For the mono-exponential
fit, if a pixel had p values lower than 1.5 × 10−5 on all the 4 images, it was not taken
into consideration and was given an arbitrary value of 0. The measured and simulated
magnetization and T2 maps are shown in Figure 1.1.
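The mono-exponential fit described above can be sketched as a log-linear least-squares problem. The function below is a hypothetical illustration (not part of the FE-MRI library), using the intensity threshold given in the text:

```python
import numpy as np

def fit_t2(te, p, threshold=1.5e-5):
    """Mono-exponential T2 fit, p = exp(-TE/T2), via log-linear least squares.

    te: echo times (ms); p: image intensities at one pixel across the echoes.
    Pixels with p below `threshold` at all echoes get the arbitrary value 0.
    """
    te = np.asarray(te, dtype=float)
    p = np.asarray(p, dtype=float)
    if np.all(p < threshold):
        return 0.0
    # ln p = -TE/T2, so the slope of ln(p) versus TE estimates -1/T2
    slope = np.polyfit(te, np.log(p), 1)[0]
    return -1.0 / slope

# The media-layer intensities from Table 1.1 decay with a T2 of about 65 ms
te = [17.0, 34.0, 51.0, 68.0]
print(round(fit_t2(te, [1.0000, 0.7699, 0.5927, 0.4563]), 1))
```

The fit is done per pixel on the 4 simulated images; pixels that fall below the threshold on all echoes are excluded, as described above.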
The magnetization image shown has a magnetization normalized to 1. This image,
weighted with different magnetization values based on the echo time and the T2 decay, was
Figure 1.1: Measured (top) and simulated (bottom) magnetization and T2 maps of an artery wall
used in the paper. To obtain the k-space corresponding to a simplified artery, 4 circle
k-spaces were added; this can be done since the Fourier transform is a linear transform [9].
If the artery is long compared to its radius, a 2D Fourier transform can be used instead
of the Fourier transform for a cylinder. If image and k-space coordinates are changed to
cylindrical coordinates, r = √((∆x)² + (∆y)²), θ = tan⁻¹(∆x/∆y), kr = √(kx² + ky²) and
φ = tan⁻¹(kx/ky), then:

kx x + ky y = kr r (cos φ cos θ + sin φ sin θ) = kr r cos(θ − φ)   (1.38)

Then the Fourier transform of a circle with unit radius and height is given in
cylindrical coordinates as follows:

Fcirclesimple(kr) = ∫_0^1 r [ ∫_0^2π e^(2πi kr r cos(θ−φ)) dθ ] dr   (1.39)
The integral can be simplified in cylindrical coordinates since the function for a circle
is circularly symmetric. The solution of equation 1.39 is given in terms of the first order
Bessel function of the first kind J1 [9] and is the following:

Fcirclesimple(kr) = J1(2π kr) / kr   (1.40)
In the case of kr = 0, a Taylor series can be taken of equation 1.40 about kr = 0, and
simplified to yield:
Fcirclesimple(0) = π (1.41)
Applying the scaling and linearity properties of the Fourier transform [9] to equations
1.40 and 1.41 yields the following equations for an arbitrary image intensity (h) and radius (r):

Fcircle(kr) = h r J1(2π r kr) / kr   (1.42)

Fcircle(0) = h π r²   (1.43)
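Equations 1.42 and 1.43, together with the linearity used for the concentric layers, can be sketched as follows. `circle_kspace` is a hypothetical helper (not the FE-MRI implementation) that uses SciPy's Bessel function J1:

```python
import numpy as np
from scipy.special import j1  # first order Bessel function of the first kind

def circle_kspace(kr, radius, intensity):
    """Analytic k-space of a filled circle (eqs. 1.42 and 1.43)."""
    kr = np.asarray(kr, dtype=float)
    safe = np.where(kr == 0.0, 1.0, kr)  # avoid 0/0; DC value patched below
    out = intensity * radius * j1(2.0 * np.pi * radius * kr) / safe
    return np.where(kr == 0.0, intensity * np.pi * radius**2, out)

# By linearity, an annulus of intensity p between r_in and r_out is the sum of
# a circle of intensity p at r_out and a circle of intensity -p at r_in, so a
# layered artery k-space is a signed sum of circle k-spaces (radii from
# Table 1.1 used for illustration).
kr = np.linspace(0.0, 2.0, 9)
annulus = circle_kspace(kr, 4.42, 0.7063) - circle_kspace(kr, 4.35, 0.7063)
```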
This k-space is implemented in the FE-MRI library as a python script. An example
of its use in Unix is given below:
femri2dcylinderkspace -fov 12.0 12.0 1.0 -fovcenter 0.0 0.0 0.0
-matrix 32 32 1 -mvalue 1.0 -slicethickness 0.0 -radius 3.0
-center 0.0 0.0 0.0 --pipe femrikspacetoimage -dimensionality 2
-ofile circleImage.vti
In both the magnetization maps derived from measurements and the analytic model,
it was found that the mean wall thickness obtained by segmentation decreases with echo
time because of the small T2 time of the adventitia. Therefore a short echo time should be
included to properly image it. This layer plays an important role in plaque development
[12].
1.3.2 Simulators using magnetization maps
In [15], artefacts due to relaxation, linear flow and motion after an RF pulse are added
to the k-space of the magnetization, also called the proton density map. A limitation
of this method is that it cannot simulate errors due to non-linear gradients or motion
during an RF pulse, since RF pulses are considered to be momentary.
1.3.3 Simulators using T1, T2 and proton density maps
In [2], the T1, T2 and proton density maps are Fourier transformed and changed according
to limited properties of MRI pulse sequences. Although the method is computationally
fast and can simulate wraparound and simple motion artefacts accurately, by design it
cannot simulate errors related to signal generation. This type of simulation is used in [1]
to optimize an MRI pulse sequence for the characterization of atherosclerotic plaque.
1.3.4 Finite element based simulators
Finite element based simulators also use a magnetization map and can include T1 and T2
maps; however, they use a finite element discretization, which allows the magnetization
to vary continuously within the object. The elements used for this thesis are shown
in Table 1.2. The names given in the table will be used in the rest of the thesis.
Both types of simulators have to transform the magnetization map into the Fourier
domain. The Fourier transformation is then done in two different ways: discretizing the
object into a set of points and then performing an FFT, or discretizing the object into
elements with continuous values inside them as stated above and then computing exactly
or approximating to a given error the Fourier transform for each element. The total
acquired signal using the finite element discretization is the following:
Stot(k) = Σ_m [ fm(T1, T2, ρ) Σ_e se(k) ]   (1.44)
Where m stands for materials with different T1, T2, or ρ values, and e stands for all
the elements representing the object. The advantage of this method is that if the error
incurred by the finite element discretization is low enough, the error between the actual
k-space values of the object and the simulated values can be either zero or can be made
as low as is required (depending on the allowable computation time) depending on the
element type. Also this approach can be faster than Fourier transform simulators if one
starts with a mesh of a complex object, ideally an unstructured mesh to minimize
anti-aliasing or “stair-step” effects, and takes into account the interpolation step to obtain
a structured grid. This last operation scales as O(E) where E is the number of finite
elements which also scales with the mesh volume [16]. Publications using this approach
are [17] and [18]. In [17], the linear elements tri3 and tet4 are integrated analytically.
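The assembly in equation 1.44 can be sketched as a nested sum; the names below are illustrative stand-ins, not the FE-MRI API:

```python
# Minimal sketch of eq. 1.44: the total signal is, for each material m, a
# weight f_m(T1, T2, rho) times the sum of per-element signals s_e(k).
def total_signal(k, materials):
    """materials: iterable of (weight, element_signal_fns) pairs, where
    weight plays the role of f_m(T1, T2, rho) evaluated for material m."""
    return sum(
        weight * sum(s_e(k) for s_e in element_signals)
        for weight, element_signals in materials
    )

# Toy use: two materials with constant per-element signals
materials = [
    (2.0, [lambda k: 1.0, lambda k: 3.0]),   # material 1: f_m = 2
    (0.5, [lambda k: 4.0]),                  # material 2: f_m = 0.5
]
print(total_signal(0.0, materials))   # 2*(1+3) + 0.5*4 = 10.0
```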
If the magnetization inside a certain region of an object can be assumed to be constant
Gauss’ theorem can be used to transform the signal equation into a surface integral [19]:
se(k) = ∫∫∫_element m e^(−2πi k·x) dx
      = m ∫∫∫_element (i / (2π k·k)) ∇ · (k e^(−2πi k·x)) dx
      = (im / (2π k·k)) ∫∫_surface k · n(x) e^(−2πi k·x) dS   (1.45)
Where dS is an infinitesimal surface area and n(x) is the unit normal pointing outwards
from the closed surface. The normal is constant for a given linear element but varies
with the global coordinates for quadratic elements. As explained in [19], this method is
more efficient than volumetric integration of the signal equation in the following three
ways. First, no volumetric meshing step, in which volume elements are constructed from
the surface mesh, is needed. Second, the number of elements over which integration is
performed is reduced and scales with the surface area of the shape and not its volume.
Third, fewer integration points are needed for surface elements than for volume elements
at the same integration order. The analytic solution for tri3 in [17] can also be adapted to solve equation 1.45
exactly. This method also reduces the storage requirements for the phantom compared
to Fourier transform simulation methods that require a structured grid with a resolution
greater than the nominal element size of the mesh [19]. The drawback of this method is
that one has to decide in how many parts an object should be segmented so that the
difference between material properties within each part is kept to a minimum.
The advantage of FE-MRI over voxel based methods is that when the imaging resolution
approaches the voxel size, voxel based methods produce “stair-step” errors, also
referred to as partial volume effects. This does not happen if an unstructured mesh
with continuous changes in material properties within each element is used. This
property makes this simulation method ideally suited to study how changing MRI imaging
parameters affects the image quality. With voxel based methods the image quality might
show errors that are due to the simulation itself.
1.3.5 Bloch equation simulators
Bloch equation simulators such as [20] and [21] allow the computation of close to all
errors seen in MRI, such as the errors due to static field inhomogeneity. However, the
excessive time needed to compute one voxel means that not enough voxels can be used,
so that there will be voxel discretization artefacts [20].
1.4 MRI acquisition within the FE-MRI library
The volumetric and surface integration steps described in Subsection 1.3.4 have been
implemented in the FE-MRI library for all the elements in Table 1.2. The main advantage
compared to simulators using a structured grid of T1, T2 and proton density values is
that the interpolation step is not needed if one has an unstructured mesh. If one only has
a surface mesh and can assume these values are constant, the Fourier transform of the
surface mesh can be computed directly from it as explained in Subsection 1.3.4. This is a
fair assumption since organs with the same tissue types will have the same isochromats.
Also the Fourier transform can be calculated exactly for linear elements such as tri3 and
tet4, or with a desired image error for quadratic elements tri6 and tet10.
1.5 Research objectives
In previous work, [18], an error metric for linear elements was derived and used for an
adaptive Gauss quadrature to compute equations 1.44 and 1.45. The error metric is an estimate
of the error between the calculated and exact signal for a given element and k-space
value. The error metric is used to estimate the number of points for an adaptive Gauss
quadrature that integrates the MRI signal for each element in an unstructured grid.
This was implemented by prof. Steinman and Luca Antiga in the FE-MRI library. The
contributions of this thesis are as follows:
1. A derivation of the error metric found in [18] using the element’s shape functions
and k-space values and derivations of error metrics for three more element types
including quadratic elements
2. An addition to FE-MRI that can estimate errors of Gauss quadratures in many
dimensions from the error obtained in one dimension using a result in [22] and ad-
ditions to FE-MRI that allow it to compute the MRI signal using the new elements
3. A test to see if the most promising of the methods to compute highly oscillatory
integrals (NSD) could be used instead of adaptive Gauss quadrature which is cur-
rently used in FE-MRI
The following chapters explain each of the above contributions in detail.
Name                   Nodes   Shorthand
Linear triangle        3       tri3
Quadratic triangle     6       tri6
Linear tetrahedron     4       tet4
Quadratic tetrahedron  10      tet10

Table 1.2: Finite element types used in this thesis
Chapter 2

Adaptive Gauss quadrature for FE-MRI elements
2.1 Introduction
To obtain a signal with a uniform error in k-space, an adaptive Gauss quadrature scheme
can be used. Gauss quadrature is a method of numerical integration where the function
to be integrated is evaluated at a set of points, called quadrature points, and each value
is multiplied by a corresponding weight, called a quadrature weight. The sum of all these
products is an estimate of the integral:
∫_{−1}^{1} f(x) W(x) dx = Σ_{i=1}^{N} wi f(xi) + E   (2.1)
Where f(x) is the function to be evaluated, xi are the quadrature points and wi the
quadrature weights. W(x) is another function that has a fixed form. A Gauss quadrature is
a special quadrature for numerical integration that allows the exact integration of f(x)
for a given W(x), such as 1 or e^(−x), if f(x) is a polynomial of degree 2N − 1 or less. If the
function is ‘smooth’ in the sense that it can be approximated by a polynomial (such as a
sinusoid), the integration error will be low. Gaussian quadrature can be readily extended
for higher dimensions. These quadratures are referred to as product rules. Although
these are easily derived from 1D rules, they require more function evaluations than is
necessary to integrate over higher dimensional domains. Optimal Gauss weights and
point locations exist for linear tetrahedral domains, but are only available for low
integration orders and are not easily derived for higher orders.
Gauss points and weights than optimal rules but they are easier to extend to arbitrary
integration orders. The parametric representation of finite elements allows them to be
integrated in a standard local tetrahedral domain. One can do a coordinate
transformation referred to as a Duffy transformation [23] that maps the quadrature points and
weights of the quadrature product rule to the same local standard tetrahedral domain.
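The product-rule construction described above can be sketched with NumPy's 1D Gauss-Legendre rule; this is an illustration, not the FE-MRI code:

```python
import numpy as np

# 2D Gauss-Legendre product rule on [-1, 1]^2 built from the 1D rule.
n = 4
x, w = np.polynomial.legendre.leggauss(n)  # 1D points and weights
X, Y = np.meshgrid(x, x)                   # n*n product-rule points
W = np.outer(w, w)                         # matching product weights

# Exact for polynomials up to degree 2n-1 per axis; here x^2 * y^2,
# whose integral over the square is (2/3)*(2/3) = 4/9.
approx = np.sum(W * X**2 * Y**2)
```

Note the cost: the 2D rule uses n² function evaluations, which is the "more evaluations than necessary" drawback mentioned above.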
One of the integrals we would like to compute is the MRI signal equation for volumetric
finite elements which is the following:
se(k) = ∫∫∫_element m(x) e^(−2πi k·x) dx
      = ∫_0^1 ∫_0^(1−ξ1) ∫_0^(1−ξ1−ξ2) |J| m(ξ) e^(−2πi k·x) dξ   (2.2)
The term J is the element’s Jacobian which is given by:
J = det | ∂x/∂ξ1  ∂y/∂ξ1  ∂z/∂ξ1 |
        | ∂x/∂ξ2  ∂y/∂ξ2  ∂z/∂ξ2 |
        | ∂x/∂ξ3  ∂y/∂ξ3  ∂z/∂ξ3 |   (2.3)
As can be seen from 2.2, the higher the magnitude of the k-space vector, the more
oscillatory the integral is which can only be approximated by a higher order polynomial
for a given approximation error. Therefore on average more quadrature points will be
needed to compute the signal with a fixed error when the magnitude of the k-space vector is higher.
The global coordinates in terms of local coordinates are:
xj = Σ_{i=1}^{n} Ni(ξ) xij   (2.4)
And the magnetization in local coordinates is given by:
m(ξ) = Σ_{i=1}^{n} Ni(ξ) mi   (2.5)
The terms Ni(ξ) are called the element’s shape functions, which are polynomials, and n is
the number of element nodes, shown in Table 1.2. The term xij is the
j component of node i. So the integral is a 3D polynomial multiplied by a sinusoid. The
MRI signal equation for surface elements as derived in Subsection 1.3.4 is the following:
se(k) = (im / (2π k·k)) ∫∫_surface k · n(x) e^(−2πi k·x) dS
      = (im / (2π k·k)) ∫_0^1 ∫_0^(1−ξ1) |J| k · n(x) e^(−2πi k·x) dξ   (2.6)
      = (im / (2π k·k)) ∫_0^1 ∫_0^(1−ξ1) k · ñ(x) e^(−2πi k·x) dξ   (2.7)
The vector n(x) is the unit normal pointing outwards from the element which is part
of a closed surface. The unit normal is given by:
n(x) = ñ(x) / |ñ(x)| = (∂x/∂ξ1 × ∂x/∂ξ2) / |∂x/∂ξ1 × ∂x/∂ξ2|   (2.8)
The simplification from equation 2.6 to equation 2.7 can be made since the denominator
in equation 2.8 is also the element’s Jacobian for surface elements. Again the integral
is a polynomial multiplied by a sinusoid.
By changing variables to change the integration domain to the standard tetrahedron,
the same product quadrature rules can be used each time. Each element signal is then
weighted with the appropriate material properties and added to get the full MRI signal
as shown in equation 1.44. To find out the number of quadrature points needed for both
integrals, an estimate of the error for a given number of quadrature points is needed.
Since the integration is carried out in the same standard domain with a fixed size, the
error is only related to the maximum local frequency.
2.2 Gauss quadrature to compute oscillatory integrals
The maximum local oscillation of a function determines the minimum distance between
quadrature points such that the Nyquist criterion is satisfied and the function can
be approximated properly in that area. We can then use a Gauss quadrature with enough
points to guarantee a given accuracy at that point and over the rest of the function, since the
oscillation everywhere else will be lower. Gauss quadrature also has the advantage of
integrating the magnetization term exactly since it is described by a polynomial in finite
elements. A way of estimating the maximum local oscillation is as follows: Consider the
following 1D function:
Chapter 2. Adaptive Gauss quadrature for FE-MRI elements 30
y = e^(i g(x))   (2.9)
The equation can be approximated at x0 by taking the Taylor series of g(x) at x0 and
substituting the result back into equation 2.9:
y = e^(i g(x0)) e^(i g′(x0)(x−x0)) e^(i (1/2) g″(x0)(x−x0)²) · · ·   (2.10)
The non-constant term that will usually have the highest oscillation frequency
is e^(i g′(x0)(x−x0)). This term produces a frequency of g′(x0). To determine the
maximum local frequency of y, in order to perform a numerical integration over a standard
domain such as 0 to 1, one should therefore determine the following:

max freq = max |g′(x)|,  x ∈ [0, 1]   (2.11)
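As a sketch: for g(x) = 2πf x the estimate of equation 2.11 is constant, and a Gauss-Legendre rule sized from it integrates the oscillatory integrand accurately. The point-count heuristic below is illustrative only, not the FE-MRI design table:

```python
import numpy as np

# For y = exp(i*g(x)) with g(x) = 2*pi*f*x on [0, 1], eq. 2.11 gives
# max|g'(x)| = 2*pi*f, i.e. f cycles over the domain.
f = 3.7
max_gprime = 2.0 * np.pi * f

# Heuristic: a few points beyond the phase-change estimate (illustration only)
n = int(np.ceil(max_gprime)) + 5

x, w = np.polynomial.legendre.leggauss(n)
x, w = 0.5 * (x + 1.0), 0.5 * w             # map [-1, 1] -> [0, 1]
approx = np.sum(w * np.exp(1j * max_gprime * x))
exact = (np.exp(2j * np.pi * f) - 1.0) / (2j * np.pi * f)
print(abs(approx - exact))                  # tiny quadrature error
```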
2.3 Computing oscillatory integrals in higher dimensions
In 2D, if a product rule using Gauss quadrature is used with the same number of points
along both coordinate axes to integrate a function in the unit square, the error will be
dependent on the maximum local frequency in the unit square along the directions of the
coordinate axes. Therefore this value will be given by:
max freq = max( |∂g/∂η1|, |∂g/∂η2| ),  x ∈ s   (2.12)
Where η1 and η2 are the unit square coordinates and s is the boundary of the unit
square.
2.4 Error estimation for tri3 elements
The integration in local coordinates for triangular elements in the finite element method
and FE-MRI is done using a Gauss quadrature product rule for the unit square where
the point coordinates are transformed using the Duffy coordinate transform [23] to a
unit triangle. This is the same as transforming g in the triangle to the unit square
and integrating it with a Gauss quadrature product rule in the unit square. So to
find the maximum oscillation in the standard triangle one should find how the maximum
absolute values of the gradients of the function g in the unit square relate to the maximum
absolute values of the gradients of g in the unit triangle. Although there are symmetric
quadrature rules for triangles that are optimal in the Gauss-quadrature sense such as
[24], their computation requires finding a non-linear least square solution for a system of
polynomial equations which in [24] could only be done numerically up to 20 points per
dimension due to round-off error. Also the estimation of the quadrature error is very
non-trivial [25]. For a survey of different cubature (quadrature rules for 2 and higher
dimensions) formulas over triangles refer to [25].
To find this relationship, let’s look more closely at what happens to a plane wave
(produced by the MRI signal equation with a given k-space vector, for example) in local
coordinates in the standard triangle when it is transformed to the standard square. Such
a transformation is shown in Figure 2.1.
In this case the gradient of g from the plane wave is constant in local coordinates
along both coordinate axes. This gradient is also proportional to the number of cycles in
the direction of each coordinate axes. The number of cycles is also proportional to the
Figure 2.1: Transformation of a plane wave in local coordinates in the standard triangle to the unit square
number of red ‘crests’ or blue ‘troughs’ for a given length along a given direction. We
can see that the magnitude of the gradient along η1 remains the same as the gradient
along ξ1 in the unit square at ξ2 = 0 and decreases to 0 when ξ2 increases to 1. We can
also see that the magnitude of the gradient along η2 appears to remain the same as the
gradient along ξ2 in the unit square at ξ1 = 0. However in this case when ξ1 increases
from 0 to 1, the gradient decreases to 0 and increases again to what appears to be the
gradient along the edge from (1, 0) to (0, 1) of the standard triangle. These observations
can be made precise as follows. Based on the derivation in [23] for the standard square
with (−1, 1) bounds, the Duffy coordinate transform between the standard triangle and
square used in FE-MRI is the following:
ξ1 = η1(1− η2) (2.13)
ξ2 = η2 (2.14)
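A sketch of this mapping: the Duffy transform of equations 2.13 and 2.14 has area Jacobian (1 − η2), so a product rule on the unit square pushed through it integrates correctly over the unit triangle:

```python
import numpy as np

n = 3
t, w = np.polynomial.legendre.leggauss(n)
t, w = 0.5 * (t + 1.0), 0.5 * w          # 1D rule on [0, 1]

E1, E2 = np.meshgrid(t, t)               # (eta1, eta2) product points
W = np.outer(w, w)

xi1 = E1 * (1.0 - E2)                    # eq. 2.13
xi2 = E2                                 # eq. 2.14
jac = 1.0 - E2                           # area Jacobian of the Duffy map

area = np.sum(W * jac)                   # integral of 1 over the triangle: 1/2
first_moment = np.sum(W * jac * xi1)     # integral of xi1 over the triangle: 1/6
```

Both checks are polynomials of low degree, so the 3-point rule reproduces them exactly.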
Using the chain rule for a multivariable function we can relate the gradients of g in
the unit square to the ones in the unit triangle as follows:

∂g/∂η1 = (∂g/∂ξ1)(∂ξ1/∂η1) + (∂g/∂ξ2)(∂ξ2/∂η1) = (∂g/∂ξ1)(1 − η2)   (2.15)

∂g/∂η2 = (∂g/∂ξ1)(∂ξ1/∂η2) + (∂g/∂ξ2)(∂ξ2/∂η2) = ∂g/∂ξ2 − (∂g/∂ξ1) η1   (2.16)
The maximum absolute value of equation 2.15 will happen when η2 is 0, and the
maximum absolute value of equation 2.16 will happen when η1 is either 0 or 1. When
η1 = 1 the gradient is along the same direction as the side from (1, 0) to (0, 1) in the
standard triangle.
In the MRI signal equation g is kxx + kyy or K · x. In finite element analysis, for
triangular linear elements in 2D, the global coordinates are related to local coordinates
in the following way:
x = Σ_{i=1}^{n} Ni xi,    y = Σ_{i=1}^{n} Ni yi   (2.17)
Where n is the number of nodes in the element, xi, yi are the global coordinates of node
i and Ni are the element’s shape functions. For the tri3 element they are given by:
N1 = ξ1, N2 = ξ2, N3 = 1− ξ1 − ξ2 (2.18)
Using the element’s shape functions the maximum oscillation in the ξ1 and ξ2 direc-
tions can be determined by:
∂g
∂ξ1= kx
∂x
∂ξ1+ ky
∂y
∂ξ1= K · e31 (2.19)
∂g
∂ξ2= kx
∂x
∂ξ2+ ky
∂y
∂ξ2= K · e32 (2.20)
Where eij is a vector from element vertices i to j in global coordinates. Using the
above two results yields:
∂g/∂ξ2 − ∂g/∂ξ1 = K · e32 − K · e31 = K · e12   (2.21)
Note that these values are constant in local coordinates since the partial derivatives
of the shape functions for linear finite elements are constant, therefore checking their
maximum value in domain s is not necessary. These are the same error metrics used
in [18]. In [18] the error metric is arrived at by observing that it should be invariant
with respect to element size and shape and the k-space vector. These results can be
interpreted as the dot product of the k space vector in local coordinates with each of the
standard triangle side vectors expressed in terms of ξ1 and ξ2 which give the oscillation
frequency per unit length in the local coordinate axes. The domain s in global and local
coordinates is given in Figure 2.2.
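This metric for a single tri3 element can be sketched as follows; `tri3_max_cycles` is a hypothetical name, not the FE-MRI function:

```python
import numpy as np

def tri3_max_cycles(K, v1, v2, v3):
    """Maximum oscillation estimate for tri3 (eqs. 2.19-2.21):
    the largest |K . e_ij| over the element edges e31, e32 and e12."""
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
    edges = (v1 - v3, v2 - v3, v2 - v1)   # e31, e32, e12
    return max(abs(float(np.dot(K, e))) for e in edges)

# Unit right triangle with K along x: at most one cycle per unit length
print(tri3_max_cycles(np.array([1.0, 0.0]), (0, 0), (1, 0), (0, 1)))   # 1.0
```

Because the edge vectors are constant for a linear element, no search over the domain boundary is needed, as noted above.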
Since the k space vector in global coordinates is the following:
Figure 2.2: tri3 domain s and vertices in global and local coordinate systems with a given K space vector
K = (kx, ky)^T = (∂g/∂x, ∂g/∂y)^T   (2.22)
The k space vector in local coordinates is given by:
Kl = (∂g/∂ξ1, ∂g/∂ξ2)^T = [ ∂x/∂ξ1  ∂y/∂ξ1 ; ∂x/∂ξ2  ∂y/∂ξ2 ] (∂g/∂x, ∂g/∂y)^T = JK   (2.23)
Where J is the element’s Jacobian. Using the above result, the maximum oscillation
estimates can be written as follows:
∂g/∂ξ1 = Kl · (1, 0) = Kl · ξ1   (2.24)

∂g/∂ξ2 = Kl · (0, 1) = Kl · ξ2   (2.25)

∂g/∂ξ2 − ∂g/∂ξ1 = Kl · (−1, 1) = Kl · (ξ2 − ξ1)   (2.26)
Where ξ1, ξ2 and ξ2 − ξ1 are the standard triangle side vectors. For the tri3 surface
element the same results hold, except that the element sides eij and the k-space vector
are in three dimensions. In this case the local k-space vector in the still 2D local domain
is a projection of the original k-space vector.
2.5 Error estimation for tri6 elements
For quadratic elements equation 2.12 still applies, however the values of the gradients of
g with respect to ξ1 and ξ2 in local coordinates will vary linearly with ξ1 and ξ2. The
boundary s in local coordinates for tri6 is the same as the one for the tri3 element. Since
the gradient of g for a given local coordinate, g′(ξ1, ξ2), is a plane in the space formed by
ξ1 and ξ2 and the possible values of g′(ξ1, ξ2), the maximum absolute value of
g′(ξ1, ξ2) will be at one of the vertices of the standard triangular boundary. Therefore
equation 2.12 will have to be computed 3 times for each gradient, for a total of 3 × 3 =
9 gradients. A plane formed by the gradient of g in one of the 3 coordinates and the
possible maximum absolute values are shown in Figure 2.3.
The number of gradients that need to be computed is the number of gradient types
times the number of element vertices, which is 9. The computation time will be lower if the
gradients with respect to ξ1 and ξ2 are computed first and the results are then used to
compute the gradient given by equation 2.16 with η1 = 1.
These gradients can be computed even faster by storing and reusing some of the
intermediate results. The needed computations for the tri6 element are as follows. The
Figure 2.3: The plane formed by the gradient of g with respect to ξ1 and the location of its possible maximum absolute values in domain s. In this case the maximum absolute value of the gradient is at ξ1 = ξ2 = 0
element’s shape functions are as follows:
N1 = ζ(2ζ − 1)
N2 = ξ(2ξ − 1)
N3 = η(2η − 1)
N4 = 4ξζ
N5 = 4ξη
N6 = 4ηζ (2.27)
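These shape functions can be checked numerically. The sketch below assumes ξ = ξ1, η = ξ2 and ζ = 1 − ξ1 − ξ2 (the usual area-coordinate convention, which equation 2.27 does not state explicitly):

```python
import numpy as np

def tri6_shape(xi, eta):
    """tri6 shape functions of eq. 2.27 with zeta = 1 - xi - eta (assumed)."""
    zeta = 1.0 - xi - eta
    return np.array([
        zeta * (2.0 * zeta - 1.0),   # N1
        xi * (2.0 * xi - 1.0),       # N2
        eta * (2.0 * eta - 1.0),     # N3
        4.0 * xi * zeta,             # N4
        4.0 * xi * eta,              # N5
        4.0 * eta * zeta,            # N6
    ])

# Partition of unity at an arbitrary local point, and the nodal property at
# the corner node (xi, eta) = (0, 0), where only N1 equals 1.
print(tri6_shape(0.2, 0.3).sum())
```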
For the gradient in the ξ1 direction:
∂g/∂ξ1 = kx ∂x/∂ξ1 + ky ∂y/∂ξ1   (2.28)
       = kx Σ_{i=1}^{6} (∂Ni/∂ξ1) xi + ky Σ_{i=1}^{6} (∂Ni/∂ξ1) yi   (2.29)
       = K [x y]^T ∂N/∂ξ1   (2.30)
Where K is a 1 by 2 matrix, [x y]T is a 2 by 6 matrix with the element’s node
coordinates and ∂N/∂ξ1 is a 6 by 1 matrix. The same result applies for the gradient in
the ξ2 direction. The matrices ∂N/∂ξ1 and ∂N/∂ξ2 can be precomputed for all elements
by evaluating them at the local domain vertices (0,0),(1,0),(0,1). Half of the entries in
these matrices are 0 therefore it is more useful to calculate the product [x y]T∂N/∂ξ1
directly.
2.6 Error estimation for tet4 elements
For linear tetrahedral elements, to estimate the quadrature error, one has to estimate
the maximum oscillation frequency in the standard cube given by equation 2.31.
max freq = max( |∂g/∂η1|, |∂g/∂η2|, |∂g/∂η3| ),  x ∈ s   (2.31)
Where η1−3 are the unit cube coordinates and s is the boundary of the unit cube.
The Duffy transformation for the unit cube, again based on the derivation in [23] for the
standard cube with (−1, 1) bounds is the following:
ξ1 = η1(1− η3)[1− η2(1− η3)] (2.32)
ξ2 = η2(1− η3) (2.33)
ξ3 = η3 (2.34)
The derivation can be understood as using the 2D transformation recursively in two
of the coordinate axes to ‘collapse’ the standard cube into the standard tetrahedron.
Using equations 2.32, 2.33 and 2.34, the gradients of g with respect to the standard cube
coordinates are the following:
∂g/∂η1 = (∂g/∂ξ1)(1 − η3)[1 − η2(1 − η3)]   (2.35)

∂g/∂η2 = −(∂g/∂ξ1) η1(1 − η3)² + (∂g/∂ξ2)(1 − η3)   (2.36)

∂g/∂η3 = (∂g/∂ξ1) η1[2η2(1 − η3) − 1] − (∂g/∂ξ2) η2 + ∂g/∂ξ3   (2.37)
Since the values of ξ1−3 can only vary from 0 to 1, the maximum absolute value of
the following gradients will determine the maximum oscillation frequency of g:
∂g/∂ξ1,  ∂g/∂ξ2,  ∂g/∂ξ3   (2.38)

∂g/∂ξ2 − ∂g/∂ξ1,  ∂g/∂ξ3 − ∂g/∂ξ2,  ∂g/∂ξ3 − ∂g/∂ξ1   (2.39)

∂g/∂ξ3 − ∂g/∂ξ2 − ∂g/∂ξ1,  ∂g/∂ξ3 − ∂g/∂ξ2 + ∂g/∂ξ1   (2.40)
The gradients shown in 2.38 and 2.39 are gradients along all the side edges of the
standard tetrahedron, as shown in the 2D case for the triangle. However, the gradients
shown in 2.40, which are derived from equation 2.37, are not related to any side of the
standard tetrahedron, and would not have been taken into account in the maximum
frequency calculation if the result for 2D elements (that the gradients along
all the element side edges determine the maximum oscillation frequency) were extended
directly to 3D elements.
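The eight candidate gradients of 2.38-2.40 can be sketched as below; this is an illustrative helper, not the FE-MRI code:

```python
def tet4_max_frequency(g1, g2, g3):
    """Maximum-oscillation estimate for tet4 from dg/dxi1..dg/dxi3,
    which are constant over a linear tetrahedron."""
    candidates = (
        g1, g2, g3,                    # 2.38: along the xi axes
        g2 - g1, g3 - g2, g3 - g1,     # 2.39: along the remaining edges
        g3 - g2 - g1, g3 - g2 + g1,    # 2.40: extra combinations from eq. 2.37
    )
    return max(abs(c) for c in candidates)

print(tet4_max_frequency(1.0, 2.0, 3.0))   # 3.0
```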
2.7 Error estimation for tet10 elements
For quadratic tetrahedral elements equation 2.31 still applies, however the gradient values
will vary linearly in ξ1−3, as in the tri6 element. The boundary s for tet10 is the same as
the one for the tet4 element. To estimate the maximum frequency, gradients 2.38, 2.39 and
2.40 should be calculated at the standard tetrahedron vertices. The gradient calculations
can be performed faster if all the components of the gradients are computed beforehand
and then used for gradients 2.39 and 2.40. Also the values of some of the intermediate
steps can be stored and reused, as in the case of the tri6 element.
Chapter 3

Algorithm details and implementation
3.1 Introduction
The FE-MRI algorithm simulates either a 2D or 3D MRI signal acquisition. If a 2D
signal is required the program computes a given number of k-space planes depending
on the slice thickness and the needed resolution. Otherwise the program computes a given
k-space volume. If using an acquisition with a fixed number of quadrature points, the
program computes the needed Gauss quadrature and uses that quadrature for all elements
and k-space values. If the adaptive quadrature is used, the program computes all the
Gauss quadratures requested, then determines the ones needed for each element/k-space
point combination and uses the appropriate ones in order to keep the k-space
image error constant. If more quadrature points are needed in some elements the user
is warned. Then the user can either use the result knowing some points at the edge of
k-space might have a slightly higher image error or he/she can run the function again
with more quadrature points. This is a useful feature if the computation time even using
the adaptive algorithm with all the needed quadratures is prohibitive.
3.2 FE-MRI software architecture
The FE-MRI library is implemented as a set of python functions that can be used through
a command line interface by typing a script in Linux or Unix. An example of a script
that can be typed in the Unix command line after installing the FE-MRI library is given
below:
femritrisurfmeshkspace -ifile modifiedQuadSphere.vtu -dimensionality 2
-fov 30 30 30 -fovcenter 0.0 0.0 0.0 -matrix 32 32 1 -useoptimal 1
-error 0.001 -qorder 59 -sliceprofile quadratic -slicethickness 6
--pipe femrikspacezeropadding -padding 64 64 64
--pipe femrikspacetoimage --pipe vmtkimageviewer
The above script reads an unstructured mesh of a sphere-like closed surface composed
of tri6 elements. Then a 2D acquisition is performed with a field-of-view of 30, a 32 by
32 k-space matrix and a quadratic slice profile in k-space. This is done using the optimal
algorithm, which can use a Gauss quadrature of up to order 59, i.e. 59 points per dimension. Then
the k-space is zero-padded and transformed back into an image which is displayed on the
screen. Since the size of the shape is 40 by 40 by 40 units, the resulting image displays
an aliasing error. In FE-MRI, the output of one function can be passed to the next
function without writing files by using the --pipe command. Some of the python
functions call C++ functions to perform the needed computations. Both languages are
used together so the user can rely on fast and efficient C++ functions while still being
able to easily customize either the script or the python functions. The C++ functions can
also be modified if needed, however that is usually harder than modifying a python script.
Both the python and C++ functions use data structures and visualization functions
from the visualization toolkit or VTK1 and the vascular modelling toolkit or VMTK2
1 Visualization toolkit website: https://www.vtk.org/
2 Vascular modelling toolkit website: https://www.vmtk.org/
libraries. The mesh used in the example script above is in VTK’s .vtu format. The
FE-MRI acquisition algorithm and its related C++ functions are shown in Figure 3.1.
The algorithm shown in Figure 3.1 is implemented in the vtkfemriTetVolNumericKSpaceGenerator C++ class for meshes composed of tet4 and tet10 elements. The algorithm can use a mesh with both element types present. The same holds for a surface mesh used by its C++ class, vtkfemriTriSurfNumericKSpaceGenerator: it can contain both tri3 and tri6 elements. The class contains a set of data structures, such as the k-space, and functions that operate on them. This class is used to declare an object (an instance of a class) that can then call other objects declared using the classes vtkfemriVolTetQuadratureOrderCalculator and vtkfemriGaussArbQuadrature, which respectively compute a list of the cycles per element as described in Chapter 2 with their corresponding numbers of quadrature points to obtain a desired image error, and the needed quadratures.
3.3 K-space acquisition algorithm
The k-space signal acquisition in the flowchart in Figure 3.1 is done using Algorithm 1. The algorithm starts by calculating the maximum cycles/element over all k-space point and element combinations. This is done using a design table generated for a user-specified error. The design table used by the optimal algorithm lists, for a given error, each number of quadrature points and its corresponding cycles per element. If the optimal algorithm is to be used, the algorithm then computes and stores all quadratures from order 1 up to the order needed for the maximum cycles/element to reach the required error. The quadratures are calculated by the vtkfemriGaussArbQuadrature C++ class. The algorithm then loops through all the needed k-space values and all the elements in the mesh and uses the selected Gauss quadrature to compute the MRI signal. One thing to take into account in the algorithm
[Flowchart: read mesh and MRI acquisition parameters; if the acquisition is 2D, find the location and boundary of the slices and set the slice magnetization profile; perform the k-space acquisition; repeat while more slices remain; finally output the k-space file and/or send the data to the next python function.]

Figure 3.1: Flowchart of the main FE-MRI algorithm
Chapter 3. Algorithm details and implementation 45
is to make sure the magnetization and the Jacobian are evaluated exactly. If the optimal algorithm is used, for k-space vectors with a low magnitude the number of quadrature points needed will be lower than the number needed to integrate the polynomial formed by the Jacobian multiplied by the magnetization. This is because, even if the sinusoidal term is close to a straight line at low k-space vector magnitudes and needs only order 1 for example, one still needs enough quadrature order to integrate the polynomial formed by the Jacobian multiplied by the magnetization.
Algorithm 1 K-space acquisition algorithm
if optimal algorithm then
    make design table relating cycles/element to quadrature order at desired image error
    compute Gauss quadratures from order 1 to specified order
end if
for each needed k-space location k do
    set S(k) = 0
    for each element e do
        if optimal algorithm then
            calculate cycles/element from k and e
            set Gauss quadrature to get desired error for calculated cycles/element using design table
        else
            set to user-defined Gauss quadrature
        end if
        if order of Gauss quadrature < order of magnetization and Jacobian then
            set Gauss quadrature to same order as magnetization and Jacobian
        end if
        obtain Gauss points p
        obtain corresponding Gauss weights w
        set s(k) = 0
        for each p do
            evaluate s(k) = s(k) + w m(p) exp(k, p)
        end for
        evaluate S(k) = S(k) + f(T1, T2, ρ) s(k)
    end for
end for
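To make the inner loops of Algorithm 1 concrete, the element signal s(k) for a single flat tri3 element with unit magnetization can be sketched in python. The function name, the Duffy-style collapse of the unit square onto the triangle, and the fixed quadrature order are illustrative assumptions, not the library's actual implementation.

```python
import numpy as np

def element_signal(nodes, k, n=4):
    """Hypothetical sketch of the inner loop of Algorithm 1 for one flat tri3
    element with unit magnetization: integrate exp(-2*pi*i k.x) over the
    element using an n x n Gauss product rule on the unit square, mapped to
    the triangle by a Duffy-style collapse with Jacobian factor (1 - u)."""
    g, w = np.polynomial.legendre.leggauss(n)
    g = 0.5 * (g + 1.0)                   # map points from [-1, 1] to [0, 1]
    w = 0.5 * w
    v0, v1, v2 = [np.asarray(v, dtype=float) for v in nodes]
    e1, e2 = v1 - v0, v2 - v0
    jac = abs(e1[0] * e2[1] - e1[1] * e2[0])   # 2 * element area
    s = 0.0 + 0.0j
    for iu, u in enumerate(g):
        for iv, v in enumerate(g):
            r, t = u, v * (1.0 - u)       # collapsed simplex coordinates
            x = v0 + r * e1 + t * e2      # physical quadrature point
            s += w[iu] * w[iv] * (1.0 - u) * np.exp(-2j * np.pi * np.dot(k, x))
    return jac * s
```

For k = 0 this returns the element area; for the unit right triangle and k = (1, 0) it converges to the analytic value -i/(2π).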
3.4 Cycles per element quadrature table
The algorithm to find the design table is shown in Algorithm 2.
Algorithm 2 Design table algorithm
compute max cycles/element (max cpe) for all k-space points and elements
for 1 to max number of quadrature points specified, n do
    set point list, pointList(n) = n
    set start cpe to 0, cpe = 0
    set error to 0
    while error < error threshold do
        add cpe interval to cpe, cpe = cpe + dcpe
        get exact integral, Ie(cpe)
        obtain Gauss points p
        obtain corresponding Gauss weights w
        set Ic = 0
        for each p do
            evaluate Ic = Ic + w exp(cpe, p)
        end for
        evaluate error = |mag(Ie) − mag(Ic)| · dim(e)
    end while
    set cpe list, cpeList(n) = cpe
end for
if max cpe > cpeList(max n) then
    display warning
end if
The algorithm is carried out in the vtkfemriQuadratureOrderCalculator C++ class. It first calculates the maximum cycles/element from the mesh and the MRI acquisition parameters. It then loops over all the quadratures from 1 point to a number of points specified by the user, storing each number in pointList. Within that loop it steps through increasing cycles/element values until the error rises above the user-defined threshold, and then stores the cycles/element in the corresponding entry of cpeList. The integral being computed is akin to a 1D MRI signal equation:
being computed is akin to a 1D MRI signal equation:
I = \int_0^1 e^{-2\pi i k x} \, dx    (3.1)
Where k is the cycles/element. The exact solution of its magnitude is:
mag(I_e) = \frac{1}{2\pi k} \sqrt{2 - 2\cos(2\pi k)}    (3.2)
The error is defined as follows:
E = |mag(I_e) - mag(I_c)| \cdot dim(e)    (3.3)
Where I_c is an approximation of equation 3.1 calculated using Gauss quadrature and dim(e) is the dimensionality of the elements the mesh is composed of. Note that the algorithm could have looped over the cycles/element first. However, this could have caused some integrals computed with the quadrature rule to have a lower error simply because they are not being 'sampled' properly by the points; the Nyquist criterion is not satisfied with that loop ordering. With the current loop ordering the oscillatory integral becomes more oscillatory while the number of quadrature points is kept the same, making sure that once the error exceeds the threshold there were also enough points to sample the function. The only difference when computing the table for 2D or 3D elements is that the 1D error is multiplied by 2 or 3 respectively. This is because the error of a product rule that uses the same number of quadrature points in all dimensions can be estimated to first order as the error in one dimension times the number of dimensions of the product rule [22]. So the algorithm finds the error in one dimension and multiplies it by the number of dimensions to get an error estimate for the product rule, which is then used to evaluate the element signal s(k) in Algorithm 1.
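Algorithm 2 can be sketched in python using the exact magnitude from equation 3.2 and the error definition of equation 3.3. The function and variable names are hypothetical, and the step size dcpe is an arbitrary illustrative choice, not a value from the thesis.

```python
import numpy as np

def design_table(max_points, err_threshold, dcpe=0.05, dim=1):
    """Sketch of Algorithm 2: for each quadrature point count n, find the
    largest cycles/element (cpe) that n Gauss points integrate to within
    err_threshold, using the 1D model integral of equation 3.1."""
    cpe_list = []
    for n in range(1, max_points + 1):
        x, w = np.polynomial.legendre.leggauss(n)
        x = 0.5 * (x + 1.0)               # map the rule from [-1, 1] to [0, 1]
        w = 0.5 * w
        cpe, err = 0.0, 0.0
        while err < err_threshold:
            cpe += dcpe
            # exact magnitude of int_0^1 exp(-2*pi*i*cpe*x) dx (equation 3.2)
            mag_exact = np.sqrt(2.0 - 2.0 * np.cos(2 * np.pi * cpe)) / (2 * np.pi * cpe)
            Ic = np.sum(w * np.exp(-2j * np.pi * cpe * x))
            err = abs(mag_exact - abs(Ic)) * dim   # equation 3.3
        cpe_list.append(cpe)
    return cpe_list
```

The returned list grows with the point count, mirroring the design curves: more quadrature points sustain more cycles/element at the same error.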
3.5 Calculation of quadrature rules
The quadrature rules are computed for the design table and the k-space acquisition calculations done in the C++ classes vtkfemriVolTetQuadratureOrderCalculator and vtkfemriTriSurfNumericKSpaceGenerator. Both classes instantiate an object of the class vtkfemriQuadratureOrderCalculator. The class can produce Gauss-Laguerre and Gauss-Jacobi quadratures, and it also computes all the product rules needed by all the classes. These quadratures have different weighting functions W(x), as shown in equation 2.1. This allows one to make a product rule that integrates the Jacobian from the Duffy transformation exactly, so the only error comes from the transformed coordinates in the oscillatory term. That is why the error metrics derived in Chapter 2 take only that term into account. The details of how the quadratures are computed are given in Appendix A. The author has added a simpler design table calculation, all the error metrics in Chapter 2, and a new class that can compute an arbitrary number of quadrature points and weights for the relevant Gauss quadratures.
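As a sketch of how a non-unit weighting function absorbs the Duffy Jacobian, a 1D Gauss-Jacobi rule on [0, 1] with weight (1 − u) can be built with scipy; this illustrates the general idea only and is not the library's actual quadrature code.

```python
import numpy as np
from scipy.special import roots_jacobi

def duffy_rule_1d(n):
    """Gauss-Jacobi rule on [0, 1] absorbing the Duffy factor (1 - u):
    sum_k w_k f(u_k) approximates int_0^1 (1 - u) f(u) du, and is exact
    when f is a polynomial of degree <= 2n - 1."""
    x, w = roots_jacobi(n, 1.0, 0.0)   # weight (1 - x)^1 on [-1, 1]
    u = 0.5 * (x + 1.0)                # map nodes to [0, 1]
    return u, w / 4.0                  # (1 - u) du = ((1 - x)/2) (dx/2)
```

Because the Jacobian lives in the weight, the quadrature error then comes only from the remaining oscillatory part of the integrand, which is the situation the Chapter 2 error metrics assume.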
3.6 Slice profile simulation for volumetric meshes
A method to simulate an arbitrary slice profile could be to adaptively subdivide the
elements inside the slice to linearly approximate the slice. This is referred to in the
finite element literature as h-refinement. Another approach is to use a higher polynomial
order for the elements inside the slice to adaptively get a more accurate simulation.
This method is referred to as p-refinement. While those methods will give an accurate
solution, they are computationally expensive. A more practical method that does not
require modifying the mesh or the elements is to weigh the s(k) signal in Algorithm 1
after it is evaluated with a quadrature point, by using the location of the quadrature
point and the required slice profile. For example, if one requires a quadratic slice profile
along the z-direction, the weighting based on the location of the quadrature point within
the slice is the following:
m = 1 - \frac{4}{t^2} (p_z - z_0)^2    (3.4)
Where m is the weighting, t is the slice thickness, z_0 is the slice origin location and p_z is the z-component of the quadrature point. Everywhere else the weighting is 0. An ideal slice profile is simulated simply by making the weighting 1 within the slice and 0 otherwise. Notice this method does not compromise the accuracy of the result given by the quadrature, since the weighting multiplied by the magnetization distribution forms a new function that the quadrature integrates. The ideal, quadratic and trapezoidal slice simulations are already implemented in the C++ class vtkfemriTetVolNumericKSpaceGenerator of the FE-MRI library.
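A minimal python sketch of this weighting, equation 3.4, with hypothetical function and argument names:

```python
def slice_weight(pz, z0, t, profile="quadratic"):
    """Weighting m applied to a quadrature point with z-coordinate pz for a
    slice of thickness t centred at z0 (equation 3.4); 0 outside the slice."""
    if abs(pz - z0) > t / 2.0:
        return 0.0
    if profile == "ideal":
        return 1.0
    # quadratic profile: 1 at the slice centre, falling to 0 at the edges
    return 1.0 - (4.0 / t**2) * (pz - z0) ** 2
```

In Algorithm 1 the element signal contribution at each quadrature point would simply be multiplied by this weight.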
3.7 Additions and improvements to current FE-MRI
functions
The new error metrics in Chapter 2 have been implemented as C++ functions in the FE-
MRI library. The code has been improved by removing the normalization and Jacobian
calculation step for surface elements which are the same value for a given quadrature
point and cancel each other. The algorithm to find the design table shown in Algorithm
2 was rewritten using dynamically allocated C++ arrays instead of vtk arrays to make
its execution faster and to aid in readability. Also instead of having a lookup table for
the quadrature weights up to a certain number of points, the user can specify how many
points can be used. The points are calculated from first principles once and stored at the
beginning of the calculation. The details of the quadrature point and weights calculation
can be found in Appendix A. If the program needs more points than are available to calculate a certain k-space point, a warning is shown that recommends using more points and running the function again; otherwise the program uses the maximum number of points available.
The memory to store the points is dynamically allocated and the points are stored in a
contiguous memory block for faster access. This is done in C++ as follows:
// Allocate a jagged array backed by one contiguous block: the 2D product
// rule with i+1 points per dimension needs (i+1)^2 entries, so the total
// storage is the sum of squares 1^2 + ... + numPs^2 = n(n+1)(2n+1)/6.
inline double** newQuadratureMatrix(int numPs)
{
    double *matrix = new double[numPs*(numPs+1)*(2*numPs+1)/6];
    double **m = new double*[numPs];
    for(int i = 0; i < numPs; i++)
    {
        // Rule i+1 starts after the first i rules: offset = 1^2 + ... + i^2
        m[i] = &matrix[i*(i+1)*(2*i+1)/6];
    }
    return m;
}
The above function returns in m an array of pointers into the memory block named matrix, which contains all the rules from 1 to numPs points. The actual number of stored values is the sum of the squares of the series 1 to numPs, since we are storing Gauss quadrature product rules, which in this case are for 2D surface elements. The series needed and its result are shown below:
\sum_{x=1}^{n} x^2 = \frac{n(n+1)(2n+1)}{6}    (3.5)
This type of sum can be found using finite calculus [26]. Also, the computation time has been cut almost in half by computing only half of k-space using the property [9]:
s_e(-k) = s_e^*(k)    (3.6)
Where s_e^*(k) is the complex conjugate of the original MRI signal. Equation 3.6 applies to the Fourier transform of a real signal. In actual MRI acquisition this constraint cannot be exploited because of object motion and magnetic field inhomogeneities [9]. In the case of a simulated signal, object motion can be added to the simulated signal afterwards, for example by adding a motion-dependent phase to the computed k-space.
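For a zero-centred k-space grid, equation 3.6 lets the second half be filled from the computed half, as in the python sketch below; the odd grid size and the function name are simplifying assumptions.

```python
import numpy as np

def fill_by_symmetry(kspace):
    """Fill the rows with index < centre of a zero-centred, odd-sized 2D
    k-space using S(-k) = S*(k) (equation 3.6), assuming the other rows
    have already been computed."""
    n, m = kspace.shape
    c = n // 2                      # row index of the k = 0 sample
    for i in range(c):
        for j in range(m):
            # mirror the index about the centre in both dimensions
            kspace[i, j] = np.conj(kspace[n - 1 - i, m - 1 - j])
    return kspace
```

This roughly halves the acquisition cost; as noted above, it is only valid for simulated signals free of motion and field inhomogeneity.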
Chapter 4
Results and discussion
4.1 Introduction
This chapter shows the k-space error maps and error histograms for meshes with each of the element types discussed in this thesis. This is done to confirm the efficacy of the algorithms in Chapter 3. The MRI acquisition of an anatomically realistic carotid artery was also simulated to show the potential uses of the surface FE-MRI algorithm. The desired error threshold for all elements tested was 10^-3.
4.2 Validation of optimal algorithm for tri3 elements
To validate the optimal algorithm for tri3 elements, the k-space error map and histogram of the errors shown in Table 4.1 were first computed with the adaptive algorithm for a deformed unit cube. The deformed cube mesh and its simulated MRI image are shown in Table 4.2. The histogram shows a sharp peak at an error of 10^-4. A sharp peak is expected since the oscillation due to the k-space value is constant within each element. The lower-than-requested error might be due to the error estimate for product rules given in [22] being conservative. The shape was chosen because it was easy to generate in the Paraview program starting with a meshed cube. Paraview is
based on VTK and can produce files that can be read by the FE-MRI library. Then
the shape was made asymmetric by moving the nodes. This was done to highlight any
possible errors in the algorithm which might not be evident by imaging a symmetric
object. The imaging parameters are as follows: FOV = 2.5 x 2.5, image size = 32 x 32
pixels. The magnetization value is 1 throughout. The exact image was computed using
the exact algorithm available for surface linear elements in FE-MRI. As an example, the
exact k-space using the analytic solution for the tri3 element was obtained by typing the
following line in Unix:
vmtkmeshtosurface -ifile cubesurf.vtu --pipe vmtksurfacenormals
-cellnormals 1 --pipe femrinumericsurfacekspace -useexact 1
-dimensionality 2 -fov 2.5 2.5 2.5 -fovcenter 0.5 0.5 0.5
-matrix 32 32 1 -okspacefile exactkspace.vti
Notice that the python function also requires the outward normals. The adaptive algorithm result for tri3 elements was obtained using the following line:
femritrisurfmeshkspace -ifile cubesurf.vtu -dimensionality 2
-fov 2.5 2.5 2.5 -fovcenter 0.5 0.5 0.5 -matrix 32 32 1
-useoptimal 1 -qorder 59 -error 0.001 -okspacefile voltetkspace.vti
[Table 4.1 contains, for each element type (tri3, tri6, tet4, tet10), a k-space error map (log10 of the absolute k-space error versus x and y k-space location) and a histogram of the fraction of k-space points versus absolute error.]

Table 4.1: K-space error maps and histograms for all the element types implemented. Notice that the x-axis for the histograms is in terms of n, where error = 10^n
[Table 4.2 shows, for each element type (tri3, tri6, tet4, tet10), the test mesh and its simulated MRI image.]

Table 4.2: Test meshes for each element type and their simulated MRI images. For the meshes with quadratic elements one of the elements is selected to show that the blue edges shown are between element nodes and do not delineate a quadratic element
4.3 Validation of optimal algorithm for tri6 elements
To validate the optimal algorithm for tri6 elements, as with the tri3 element, the k-space error map and histogram of the errors shown in Table 4.1 were first computed with the adaptive algorithm for a deformed sphere mesh. The histogram shows a peak at an error of 10^-4 and another at an error of 10^-5. This can be explained by the oscillation due to the k-space value calculated using the error metric. This value is the maximum one within an element, so other parts of the element might have a lower oscillation, and the signal in those regions will be computed with more quadrature points than needed to reach the desired error threshold. The deformed sphere mesh and its simulated MRI image are shown in Table 4.2. The shape was chosen because it was easy to generate in the Paraview program starting with a meshed sphere. The shape was then made asymmetric by moving the nodes of two of the elements. This was done to highlight any possible errors in the algorithm which might not be evident by imaging a symmetric object. The imaging parameters are as follows: FOV = 30 x 30, image size = 32 x 32 pixels. The magnetization value is 1 throughout. The exact image was computed by calculating the k-space with a quadrature order of 40.
4.4 Validation of optimal algorithm for tet4 elements
To illustrate the increasing error as the k-space vector magnitude increases, which is very
noticeable for volumetric elements, first the k-space error map and a histogram of the
errors for a tet4 unit cube are computed for the algorithm using the same number of
quadrature points in all k-space locations. The unit cube mesh and its simulated MRI image are shown in Table 4.2. The shape chosen is a cube since it has a simple analytic
Fourier transform and it can be exactly discretized by linear tetrahedral elements. The
imaging parameters are as follows: FOV = 2.5 x 2.5, image size = 32 x 32 pixels. The
magnetization value is 1 throughout and the quadrature order used is 14. As expected
the error increases when the k-space vector magnitude is high if the same quadrature
order is used for all k-space points as shown in Figure 4.1. As can be seen from the error
histogram in Figure 4.2, the number of k-space points with a given error goes up linearly
with the logarithm of the error. This can be explained by the fact that the design curves
calculated by Algorithm 2 seem to approximate a line at large cycles/element values. Also
the number of points in k-space with a given k-space vector magnitude scales linearly
with that magnitude.
[Figure: map of the logarithm of the absolute k-space error versus x and y k-space location for the mesh of the unit cube.]

Figure 4.1: K-space error as a function of k-space location for a tet4 unit cube with a quadrature order of 14
To validate the optimal algorithm for tet4 elements the same mesh was run with the
same imaging parameters with the adaptive algorithm. As can be seen from the k-space error map in Table 4.1, the error is 10^-4 or lower for most of k-space. Its histogram in Table 4.1 also displays a clear peak at an error of 10^-4.
[Figure: histogram of the fraction of k-space points versus absolute error.]

Figure 4.2: Number of k-space points as a function of n, where error = 10^n, for a tet4 unit cube with a quadrature order of 14
4.5 Validation of optimal algorithm for tet10 ele-
ments
To validate the optimal algorithm for tet10 elements the shape chosen is a cube for the
same reasons as with the tet4 element. The cube can also be discretized exactly with
tet10 elements. However in this case to test the adaptive algorithm with curved elements
the middle nodes of the elements were shifted in the z-direction from 0.5 to 0.7. So
although all the elements have a curved side, they still form a cube. The curved sides are
all within the cube and the sides which are also edges of the cube are straight. The tet10
cube mesh and its simulated MRI image are shown in Table 4.2. Note that the edges seen
in the mesh are between the element nodes and do not delineate an element. This is how
Paraview displays quadratic elements. The same mesh was run with the same imaging
parameters as for the tet4 element test with the adaptive algorithm. As can be seen from the k-space error map in Table 4.1, the error is 10^-2 or lower for most of k-space. Its histogram in Table 4.1 displays a peak at an error of 10^-3 and another at an error of 10^-4. This higher spread of the histogram towards lower errors can be explained in the same way as for the tri6 elements.
4.6 Computation time comparison between linear and
quadratic surface discretizations
So far the errors calculated are those due to integrating the MRI signal equation. However, error is also introduced when the original 'object', such as a voxelized shape, is discretized using finite elements. To compare the algorithms for linear and quadratic elements, discretizations using linear and quadratic elements can be found with the same discretization error; the k-space is then computed to the same k-space image error with each adaptive algorithm, and the computation times are compared. The discretization error between an object with an arbitrary shape and magnetization and its discretization can be computed as:
F = \int_V \left[ m_{discrete}(\vec{x}) - m_{actual}(\vec{x}) \right]^2 dV    (4.1)
If the object has a constant thickness and magnetization, eq. 4.1 can be written to
account just for the difference in cross sectional shape:
F = \int_A \left[ m_{discrete} - m_{actual} \right]^2 dA    (4.2)
For a simple shape such as a cylinder, equation 4.2 and its discretization into linear
and quadratic finite elements can be computed exactly using cylindrical coordinates:
F = \int_0^{2\pi} \left[ r_{discrete}(\theta) - r_{actual}(\theta) \right]^2 d\theta    (4.3)
Where the integral is one dimensional since the shape can be described by a curve in
cylindrical coordinates. We would like a discretization that minimizes the ‘area’ difference
between the actual and discretized shapes. The cross sectional area of the cylinder (a
circle) can be cut into ‘pie’ finite elements and a discretization for linear and quadratic
elements with the same discretization error can be found. The details of this calculation
are given in Appendix B. As an example, for a cylinder with a radius of 1 and discretized
into 4 quadratic pie slices such that the discretization error of its cross section is minimum,
the number of linear pie slices that produces the same discretization error is 22. This
result compares favourably with [27], which graphs the maximum MRI image intensity
error for both linear and quadratic ’pie’ slices using a large fixed number of quadrature
points. This is essentially the same as comparing both discretizations to the original
shape being imaged. The number of linear slices needed to get the same error as 4
quadratic slices is 26, which is close to 22 and slightly higher since the slice edges are
made to intercept the circle’s edges and not chosen to minimize the discretization error,
therefore more elements are needed to get the same error. The shape imaged in the paper
is also a cylinder with a uniform magnetization. The radius of both cylinders is 0.5 but
the number of slices should be the same regardless of the size of the cylinder since the
error scales with the cylinder’s size.
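The discretization error of equation 4.3 for inscribed linear 'pie' slices (chords meeting the circle, as in the comparison with [27]) can be evaluated numerically; the following python sketch uses an illustrative function name and sampling density.

```python
import numpy as np

def pie_slice_error(n_slices, radius=1.0, n_samples=20001):
    """Equation 4.3 for a circle of the given radius cut into n_slices linear
    'pie' elements whose chords intersect the circle: within one slice
    spanning theta in [-a, a], the discretized boundary in polar form is
    r(theta) = radius * cos(a) / cos(theta)."""
    a = np.pi / n_slices                         # half-angle of one slice
    theta = np.linspace(-a, a, n_samples)
    r_disc = radius * np.cos(a) / np.cos(theta)
    integrand = (r_disc - radius) ** 2
    # trapezoidal rule over one slice, multiplied by the number of slices
    h = theta[1] - theta[0]
    return n_slices * h * (np.sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
```

The error falls rapidly with the slice count, which is why many more linear slices are needed to match a few quadratic ones.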
4.7 Simulation of MRI acquisition of a carotid artery
A simulation of MRI acquisition and reconstruction of a carotid artery was carried out
using the FE-MRI algorithm for tri6 elements. The imaging parameters used were the
following: field of view: [30 30 60] and number of sampling points: [1 32 64]. The number
of quadratic triangular surface elements is 1160. The number of quadrature points per
dimension used for all elements is 4. The mesh of the segmented artery, generated using vmtk, and the simulated MRI reconstruction are shown in Figure 4.3.
Figure 4.3: Mesh of segmented artery (a) and its MRI reconstruction (b)
The final image was zero-padded with [0 32 64] zeros. The reconstructed image shows some pixelation and some ringing artefacts.
Chapter 5
Feasibility of NSD for faster signal
calculation
5.1 Introduction
After developing an algorithm in Chapter 2 that uses an adaptive quadrature to evaluate FE-MRI elements, other methods were investigated to calculate an element's k-space value (an oscillatory integral) with fewer quadrature evaluations. There is a family of methods that allow one to calculate an oscillatory integral to a given error with fewer quadrature points as the minimum local oscillation increases. The method most suitable for FE-MRI elements is numerical steepest descent or NSD. An adaptive algorithm using this method was designed and simulated. The results were then compared to the algorithm described in Chapter 2 for tri6 elements. The algorithm was tested using parameters needed for simulating an MRI acquisition.
5.1.1 Approximate evaluation of oscillatory integrals
An oscillatory integral has the following form:
I = \int_a^b f(x) e^{i\omega g(x)} \, dx    (5.1)
In the first section the limitations of Gaussian quadrature are explained. The subsequent sections describe the family of methods that, in principle, require fewer quadrature points for a fixed error as ω is increased.
5.1.2 Adaptive quadrature
When uniform, Simpson's or Gauss quadrature is used, the number of points needed depends on the minimum local frequency of the oscillatory term within the integration domain, due to the Nyquist criterion. Gauss quadrature is the most desirable of these methods since it integrates the term f in equation 5.1 exactly and approximates the oscillatory term g with a low error. However, the error depends both on the Nyquist criterion and on how closely a sinusoid can be approximated by a polynomial. For all of the quadrature methods above, the number of quadrature points needed increases with increasing ω for a fixed error.
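This growth in cost with ω can be demonstrated numerically; the helper below is a hypothetical sketch, not part of the FE-MRI library, and uses \int_0^1 e^{i\omega x} dx as the test integral.

```python
import numpy as np

def points_needed(omega, tol=1e-3, max_n=60):
    """Smallest number of Gauss-Legendre points that integrates
    int_0^1 exp(i*omega*x) dx to within tol of the analytic value."""
    exact = (np.exp(1j * omega) - 1.0) / (1j * omega)
    for n in range(1, max_n + 1):
        x, w = np.polynomial.legendre.leggauss(n)
        # map the rule from [-1, 1] to [0, 1] and evaluate the oscillator
        approx = np.sum(0.5 * w * np.exp(1j * omega * 0.5 * (x + 1.0)))
        if abs(approx - exact) < tol:
            return n
    return max_n
```

For a fixed tolerance the required point count climbs with ω, consistent with the Nyquist argument above.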
5.1.3 Filon method
A simple method described in [28] is to approximate f(x) in equation 5.1 by a polynomial and find the integral by pre-computing the moments:
m = \int_a^b x^m e^{i\omega g(x)} \, dx    (5.2)
This method is very effective when f(x) is a smooth function and the moments can
be easily found. Although the method could provide the exact solution for the integrals
for the tri6 element, the oscillator g(x) will be different for every element and k-space
value. So computing the moments is as hard as the original problem. Also, there are no known analytic solutions for all the needed moments when g(x) is quadratic, as in the MRI signal equation for quadratic elements in local coordinates.
5.1.4 Levin method
Another method by Levin described in [29] and [30] is as follows. Suppose f(x) can be
expressed as follows:
f(x) = i\omega g'(x) p(x) + p'(x), \quad a \le x \le b    (5.3)
Then equation 5.1 can be found directly as:
I = \int_a^b \left( i\omega g'(x) p(x) + p'(x) \right) e^{i\omega g(x)} \, dx
  = \int_a^b \frac{d}{dx}\left( p(x) e^{i\omega g(x)} \right) dx
  = p(b) e^{i\omega g(b)} - p(a) e^{i\omega g(a)}    (5.4)
Although the general solution for p(x) has the following form:

p(x) = e^{-i\omega g(x)} \left[ \int_a^x f(t) e^{i\omega g(t)} \, dt + c \right]    (5.5)
which is also a highly oscillatory integral, it is shown in [29] that there is a slowly oscillatory solution that can be found by the collocation method without any initial conditions for equation 5.3. This method was not considered for further investigation since it requires solving a linear system to find the collocation weights in 2D and can only be used for rectangular domains.
5.1.5 Numerical steepest descent
Numerical steepest descent or NSD [5], [4] is a method to evaluate integrals of the same form as 5.1. NSD evaluates an oscillatory integral by changing the integration path in the complex plane using a change of variables; this yields the same integration value because of Cauchy's integral theorem. A good introduction to Cauchy's theorem and its applications can be found in [31]. The integral then becomes a sum of complex constant terms multiplied by rapidly decaying non-oscillatory integrals that can be evaluated using Gauss quadrature. The quadrature error of the new integrals, evaluated with the same number of quadrature points, actually decreases with increasing ω. For example, if g(x) = x the integral can be decomposed as follows:
\int_a^b f(x) e^{i\omega x} \, dx = i e^{i\omega a} \int_0^\infty f(a+ip) e^{-\omega p} \, dp - i e^{i\omega b} \int_0^\infty f(b+ip) e^{-\omega p} \, dp    (5.6)
Instead of integrating from a to b along the real line, the integration is carried out from a towards positive imaginary infinity, across to above b, and then back down to b.
The general decomposition is shown below:
g(h_a(p)) = g(a) + ip, \quad p \in [0,\infty), \quad h_a(0) = a    (5.7)
\int_0^\infty f(h_a(p)) h_a'(p) e^{i\omega(g(a)+ip)} \, dp = e^{i\omega g(a)} \int_0^\infty f(h_a(p)) h_a'(p) e^{-\omega p} \, dp    (5.8)
Where h_a'(p) is the partial derivative of h_a(p) with respect to p. As the conditions in 5.7 show, the paths in the complex plane have to be chosen such that the path parametrized by p, when substituted into the oscillator term g(x), equals g(a) + ip. The paths also have to equal the original integration limits when p = 0, and have to pass through any stationary points, i.e. points where g'(x) = 0. The method can be used recursively for integration in n dimensions [6].
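For the linear oscillator g(x) = x, the decomposition in equation 5.6 pairs naturally with Gauss-Laguerre quadrature, whose weight e^{-p} matches the decaying factor after rescaling p by ω. The python sketch below is illustrative and assumes f can be evaluated in the complex plane (here, a polynomial).

```python
import numpy as np

def nsd_linear_oscillator(f, a, b, omega, n=10):
    """Sketch of equation 5.6: evaluate int_a^b f(x) exp(i*omega*x) dx by
    numerical steepest descent, using n-point Gauss-Laguerre quadrature
    (weight exp(-p)) on each semi-infinite endpoint integral."""
    p, w = np.polynomial.laguerre.laggauss(n)
    # substitute p -> p/omega so exp(-omega*p) becomes the Laguerre weight
    Ia = np.sum(w * f(a + 1j * p / omega)) / omega
    Ib = np.sum(w * f(b + 1j * p / omega)) / omega
    return 1j * np.exp(1j * omega * a) * Ia - 1j * np.exp(1j * omega * b) * Ib
```

Because the transformed integrands are non-oscillatory, the point count can stay fixed as ω grows, which is the property exploited in the comparison with adaptive Gauss quadrature.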
5.2 Experiment using NSD for quadratic surface el-
ements
Using NSD, a general algorithm to integrate oscillatory integrals in the standard simplex
was developed. Then the algorithm was simulated and compared to the one developed in
Chapter 2. The simulation was carried out as follows: first, the exact integral was calculated for every element and k-space point. The same integrals were then calculated using adaptive Gauss quadrature, recording the time needed for each, over a range of numbers of quadrature points. The same integrals were then calculated with NSD, recording the integration time and, as with the adaptive Gauss quadrature algorithm, using the same number of quadrature points in all dimensions. The exact integration time of the Gauss quadrature algorithm was then obtained by adding the times of each integral whose error between the calculated and exact values is the desired one. The NSD computation times were used to simulate and calculate the time needed for Algorithm 3 to compute the full k-space signal.
Where rtime is the ratio between the time it takes to evaluate one quadrature point after carrying out the NSD decomposition and the time it takes to evaluate one point with Gauss quadrature. This value will be greater than 1 since we are evaluating the function
Algorithm 3 NSD algorithm
if optimal algorithm then
    make design table for desired image error for Gauss and NSD
    compute Gauss quadratures from order 1 to specified order
end if
for each needed k-space location k do
    set S(k) = 0
    for each element e do
        if optimal algorithm then
            if num points for Gauss quad > rtime × num points for NSD then
                use NSD quadrature
            else
                use Gauss quadrature
            end if
        end if
        if order of Gauss quadrature < order of magnetization and Jacobian then
            set Gauss quadrature to same order as magnetization and Jacobian
        end if
        obtain Gauss points p
        obtain corresponding Gauss weights w
        set s(k) = 0
        for each p do
            evaluate s(k) = s(k) + w m(p) exp(k, p)
        end for
        evaluate S(k) = S(k) + f(T1, T2, ρ) s(k)
    end for
end for
f(x) (another 2D polynomial) in the complex plane, and the decomposition produces
a sum of integrals that each need to be evaluated. Notice that the actual adaptive Gauss
quadrature algorithm was not used, since it is assumed that estimating the errors for each
element and k-space location takes a negligible amount of time. By simulating the NSD
algorithm, its speed relative to the adaptive Gauss algorithm could be determined (to
decide whether programming the actual NSD algorithm is worth pursuing) without having
to derive and program the error estimates for NSD.
5.2.1 NSD decomposition for quadratic surface elements
Using equations 2.17 and 2.27 in the MRI signal equation, the oscillator g(x) in 5.1 will
be a 2D quadratic function:
g(x, y) = a_1 x^2 + a_2 y^2 + a_3 xy + a_4 x + a_5 y + a_6    (5.9)
where a_1 to a_6 are constants that depend on the element geometry and the k-space value.
Therefore a suitable NSD decomposition has to be found for all possible values of a_1 to a_6,
including when some of them are close to 0. The full algorithm will be described in the
next section; in this section the different cases on their own and the extension to the 2D
standard simplex are discussed. The first thing to consider is the sign of the oscillator,
which changes the direction of the steepest descent paths. For example, if g(x) = −x
then the decomposition is the following:
∫_a^b f(x) e^{-iωx} dx = -i e^{-iωa} ∫_0^∞ f(a - ip) e^{-ωp} dp + i e^{-iωb} ∫_0^∞ f(b - ip) e^{-ωp} dp    (5.10)
which is the same as the one shown in equation 5.6 but with the paths going to and
coming back from infinity in the direction of the negative imaginary axis. If g(x) is a
quadratic, there might be a stationary point within the integration domain. In that
case the steepest descent path has to go through it. For example, if g(x) = x2 then the
stationary point is at the origin and the decomposition is:
∫_{-1}^{1} f(x) e^{iωx^2} dx = e^{iω} ∫_0^∞ f(h_{-1,1}(p)) e^{-ωp} h'_{-1,1}(p) dp - ∫_0^∞ f(h_{0,1}(p)) e^{-ωp} h'_{0,1}(p) dp
+ ∫_0^∞ f(h_{0,2}(p)) e^{-ωp} h'_{0,2}(p) dp - e^{iω} ∫_0^∞ f(h_{1,2}(p)) e^{-ωp} h'_{1,2}(p) dp    (5.11)
With the following steepest descent paths:
h_{-1,1}(p) = -√(1 + ip)
h_{0,1}(p) = -√(ip)
h_{0,2}(p) = √(ip)
h_{1,2}(p) = √(1 + ip)    (5.12)
where h_{a,b}(p) is the path starting at a in the direction given by one of the two solutions
to the quadratic equation obtained by solving g(h_a(p)) = g(a) + ip for h_a(p). The value
b specifies which solution is used. The equation has two solutions that can give a path
towards either the positive or negative imaginary axis, and it is evaluated at -1, 0 or 1 to
yield the 4 paths needed to form a closed loop as required by Cauchy's theorem. This
example is easily extended to a general quadratic function.
The outer terms of 5.11 can be evaluated using Gauss-Laguerre quadrature with the
simple substitution u = ωp. For the inner terms another quadrature has to be used since
it has a weak singularity. This can be seen when the third term in 5.11 is written in full:
∫_0^∞ f(h_{0,2}(p)) e^{-ωp} h'_{0,2}(p) dp = (i/2) ∫_0^∞ f(√(ip)) e^{-ωp} (1/√(ip)) dp    (5.13)
By doing the substitution ωp = u^2, the term becomes:
√(i/ω) ∫_0^∞ f(√(i/ω) u) e^{-u^2} du    (5.14)
which can be integrated using a half-range Gauss-Hermite quadrature. Both of the
quadratures discussed can be calculated for any number of points, as shown in Appendix A.
The method can be used recursively by decomposing the integral with respect to
one dimension and then with respect to the next, so one is integrating along a
manifold of steepest descent [6]. For the standard simplex the integral is the following:
S = ∫_0^1 ∫_0^{1-x} f(x, y) e^{iωg(x,y)} dy dx    (5.15)
If there are no stationary points along y then the integral is equal to:
S = ∫_0^1 G(x, 0) - G(x, 1 - x) dx    (5.16)
With:
G(x, y) = e^{iωg(x,y)} ∫_0^∞ f(x, v_y(x, q)) (∂v_y/∂q)(x, q) e^{-ωq} dq    (5.17)
with v_y(x, q) being the solution to g(x, v_y(x, q)) = g(x, y) + iq. Applying the decomposition
again if there are no stationary points along x will yield:
S = [F_1(0, 0) - F_1(1, 0)] - [F_2(0, 0) - F_2(1, 1)]    (5.18)
With:
F(x, y) = e^{iωg(x,y)} ∫_0^∞ ∫_0^∞ f(u_x(p), v_y(u_x(p), q)) (∂u_x/∂p) (∂v_y/∂q) e^{-ω(q+p)} dq dp    (5.19)
All the above functions can be evaluated with Gauss product rules composed of
Gauss-Laguerre and half-range Gauss-Hermite quadratures. If the oscillation is very
low, the NSD algorithm instead uses product rules that also include the standard
Gauss-Legendre quadrature.
5.2.2 Algorithm details
A naive implementation of an algorithm checking for all the cases outlined in the previous
section could use symbolic computation in C++, for example with the GiNaC library.
However, the computation time could be prohibitive. Instead, the algorithm uses
the constants a_1 to a_6 to determine what decompositions need to be made in each dimension
and then passes the relevant a constants and calculated values to the next sub-function.
All the possible combinations are included in the program. In each dimension, the program
checks whether: the oscillator has a low oscillation on the 0 to 1 interval of the standard
simplex (then Gauss-Legendre quadrature is used); the oscillator is linear rather than
quadratic (then the paths in equation 5.6 are used); the oscillator is quadratic (then
quadratic paths are used); and, if it is quadratic, whether the oscillator has a stationary
point within the integration interval (then the paths in equation 5.11 are used). The
program also checks whether the path directions should be switched depending on the signs
of the a constants. The decompositions are evaluated using functions for each type of path.
Each path function also has an input variable to evaluate it in either of the decided
directions in the complex plane. The relevant parts of the code, with comments, are in
Appendix C.
5.3 Results and discussion
The results of the simulation applied to the deformed sphere mesh used in Section 4.3 are
shown in Figure 5.1. The extent of the mesh is -1 to 1 in all dimensions. The total times
taken for both the adaptive Gauss quadrature and NSD for different errors are shown in
Tables 5.1 and 5.2.
                              K-space extent (x and y)
Algorithm                     -2 to 2   -4 to 4   -6 to 6   -8 to 8
Gauss quadrature                0.132     0.815     2.354     4.815
Gauss quadrature with NSD       0.129     0.731     1.858     3.362
Time ratio                      1.021     1.112     1.246     1.384

Table 5.1: Total computation times in seconds and time ratios for different k-space extents for an error threshold of 10^-2
                              K-space extent (x and y)
Algorithm                     -2 to 2   -4 to 4   -6 to 6   -8 to 8
Gauss quadrature                0.220     1.671     5.898    14.570
Gauss quadrature with NSD       0.209     1.259     3.583     7.716
Time ratio                      1.050     1.320     1.609     1.834

Table 5.2: Total computation times in seconds and time ratios for different k-space extents for an error threshold of 10^-3
As can be seen from Tables 5.1 and 5.2, the NSD algorithm is faster for higher
resolutions and for lower error thresholds. However, the difference in speed between the
two algorithms for physically relevant MRI acquisition simulations is not large. Since
typical simulations like the one shown in Section 4.3 take less than half an hour, a 28% (1 − 1/1.384 ≈ 0.28)
decrease in computation time for a uniform k-space error of 1% is not significant. This
can be understood more clearly with the following equation:
λ/l_e = α (2δ/l_e)    (5.20)
where λ is the maximum k-space wavelength, l_e is a characteristic element size, δ is
the resolution and α is a constant close to 1 dependent on the element geometry. The
inverse of the value of equation 5.20 is the cycles-per-element value described in [7]. For
MRI simulation the resolution, or imaging pixel size, will be on the order of, or greater
than, the element size, so the cycles-per-element value will be low. Therefore, although
the element signal equation is an oscillatory integral and the examples shown in [6] show
NSD to be much more efficient than Gauss quadrature even at low oscillation values,
the adaptive Gauss quadrature algorithm described in Chapter 2 is the most appropriate
algorithm to compute the element signal equation in FE-MRI. This conclusion also holds
for NSD with volumetric elements, since the number of terms in the decomposition can be 4
times higher and finding the stationary points in a standard tetrahedron is more difficult
than finding them in the standard triangle.
Figure 5.1: K-space computation time maps in milliseconds for different adaptive algorithms and error thresholds. The time ratios between the algorithms are also shown
Chapter 6
Conclusions
6.1 Summary of work
In this thesis, Simedrea's error metric [18] was derived from an estimate of
the maximum oscillation frequency produced by the oscillatory term of the MRI signal
equation. The metric was then extended to triangular and tetrahedral quadratic elements,
and used to design an algorithm that obtains a k-space with a uniform
image error for linear and quadratic triangular and tetrahedral elements. The algorithm
for both volumetric and surface meshes was implemented in the FE-MRI library. Changes
were also made to the algorithms in the library to make them faster and to allow an
arbitrary number of quadrature rules to be generated. Another algorithm, using NSD to
calculate a k-space with a uniform error for tri6 elements, was simulated and shown
to offer a modest performance increase for typically used MRI acquisition parameters.
6.2 Future directions
The maximum frequency estimate found in this thesis could be extended to rectangular
and prismatic finite elements. Since the local domains of integration are a square and
a cube, the extra gradients that need to be calculated to take the Duffy transformation
into account are not necessary. Error metrics for triangular finite elements were the only
ones considered in this thesis because triangular elements are the preferred element
type when segmenting a medical image or performing a CFD simulation of the flow in an
artery, for example. Also, segmentation and meshing programs for rectangular and
prismatic element types are not as common as triangular or tetrahedral element based
programs. The
surface MRI signal algorithm could also be adapted to a surface composed of Non-Uniform
Rational B-Splines or NURBS, which is the representation used in CAD models.
By using the same surface discretization as the original CAD model, the meshing step
to approximate the surface is not needed. NURBS surfaces have already been used
in finite element simulations such as in [32], where a scattering problem is simulated
using a volumetric mesh with finite elements in the interior of the problem domain and
NURBS surface elements on its surface. The use of NURBS surfaces in finite element
analysis is also referred to as isogeometric analysis [33]. The finite element approach
for MRI simulation could be extended to simulate the signal acquisition process itself,
by implementing a Bloch equation solver using finite elements. Also motion artefacts
might be simulated more accurately if the motion of the object is represented by finite
elements with moving vertices. The NSD algorithm could be a viable alternative to one
using an adaptive Gauss quadrature if extended for higher order and volumetric elements
or NURBS surfaces, however the number of stationary points that need to be located
also increase with element order. The FE-MRI library will be released as open-source
software on GitHub1 so that anyone working on MRI research will be able to use it, add
other algorithms to simulate MRI signal acquisition and add algorithms to modify the
computed k-space to simulate an acquisition error of interest.
1GitHub website: https://github.com/
Appendix A
Generation of Gauss quadratures
A.1 Introduction
To generate an arbitrary number of quadrature points and weights in FE-MRI and for
NSD, different Gaussian quadratures need to be calculated, some of them standard and
others non-standard. The outline of this appendix is as follows: first, all the types of
integrals that need quadrature rules are presented; then a general outline for finding the
quadrature rules is given; finally, the C++ code that solves integrals of type A.7, which
is used in some functions of the FE-MRI library, is presented. The integrals that need
to be integrated are of the following types:
∫_0^∞ f(x) e^{-x^{2s}} dx    (A.1)

∫_{-∞}^∞ f(x) e^{-x^{2s}} dx    (A.2)

∫_0^∞ f(x) e^{-x^{2s+1}} dx    (A.3)

∫_{-∞}^∞ f(x) e^{-x^{2s+1}} dx    (A.4)

∫_0^∞ f(x) e^{-x} dx    (A.5)

∫_0^∞ f(x) x^α e^{-x} dx    (A.6)
∫_{-1}^{1} f(x) (1 - x)^α (1 + x)^β dx    (A.7)
Integral types A.1 and A.3 are needed for the methods in [5] to be useful, while types
A.2 and A.4 are used to reduce the amount of computation required for the integrals
related to the stationary points. Type A.5 is just a special case of A.3. Its quadrature
is called Gauss-Laguerre. It is written as a separate type since its recurrence coefficients
are explicitly known and there might be faster methods to find its related quadrature;
the calculation can be performed more efficiently if the related quadrature is found for
the specific case of A.5 instead of A.3. Type A.6's quadrature rule is called generalized
Gauss-Laguerre. It is only used to deal with some types of singularities that occur when
the methods in [5] are used. Type A.7 is the Gauss-Jacobi rule, which is used to obtain
arbitrary 3D product rules for the adaptive Gauss quadrature scheme used for tet10
elements in the FE-MRI library.
The appropriate quadrature points and weights are found in the following steps:

1. Find the coefficients of the two-coefficient recurrence equation corresponding to the
orthogonal polynomials that correspond to integral types A.1 to A.5

2. Find the eigenvalues of the related Jacobian matrix, which are equal to the quadrature
points

3. Solve the related linear system to find the eigenvectors and use them to find the
corresponding weights
The recurrence coefficients of the recurrence equation corresponding to the orthogonal
polynomials can be found using one of the following methods: deriving the recurrence
equation analytically from the three-coefficient recurrence equation and/or other
recurrence equations for orthogonal polynomials (such as Rodrigues' formula), using
Stieltjes' procedure, or using Lanczos' algorithm. Lanczos' algorithm is unstable except
in a few cases [34]. The detail one has to be aware of is what kind of recurrence equation
is needed for the next step (finding the eigenvalues of the related Jacobian matrix). One
of the main results of Gaussian quadrature theory is that the zeros of the corresponding
orthogonal polynomials are the quadrature points. Using that result, it is shown in [35]
that the coefficients of the two-term recurrence relation for the corresponding monic
polynomials can be used to obtain the quadrature points and weights. Unfortunately,
most recurrence equations in the literature have three coefficients. For example, the
recurrence relation that produces the Laguerre polynomials corresponding to equation
A.5 is usually given as:
the Laguerre polynomials corresponding to equation A.5 usually given is:
L_0(x) = 1
L_1(x) = 1 - x
L_{n+1}(x) = (1/(n+1)) ((2n + 1 - x) L_n(x) - n L_{n-1}(x))    (A.8)
which can also be written in the form:

x L_n(x) = a_{n+1} L_{n+1}(x) + b_n L_n(x) + a_n L_{n-1}(x) for n ≥ 1    (A.9)
What is needed for the related eigenvalue problem is a two coefficient relation for
the related monic orthogonal polynomials (polynomials with a leading coefficient of 1).
These are also the polynomials produced by Stieltjes’ algorithm. For example, the first
three monic orthogonal polynomials related to the Laguerre polynomials above are:
l_0(x) = 1
l_1(x) = x - 1
l_2(x) = x^2 - 4x + 2    (A.10)
Their recurrence relation is the following:
l_{n+1}(x) = (x - α_n) l_n(x) - β_n l_{n-1}(x) for n ≥ 1    (A.11)
Since Laguerre polynomials are also orthonormal ([36]):
α_0 = 1
β_0 = any number (not used)
α_n = b_n = 2n + 1
β_n = a_n^2 = n^2 for n > 0    (A.12)
As mentioned before, the coefficients can be either derived analytically using the
three-coefficient recurrence equation and/or other recurrence equations for orthogonal
polynomials (such as Rodrigues' formula), or found using Stieltjes' algorithm. The
coefficient values from A.12 can be used to check an implementation that finds a
quadrature for integral A.5. Stieltjes' algorithm (described in [34]) was used to check
the coefficients from A.12. For evaluating the integrals needed in Stieltjes' algorithm,
a Fejer-type quadrature was used since it is recommended and described in [37]. The
authors of [5] also used the same quadrature. They used the following MATLAB codes
from the companion website to [37]: fejer.m, stieltjes.m and quadgp.m. This algorithm
works for integral types A.1
to A.5, but it is computationally intensive and finding its error is not trivial. A better
option could be to find the recurrence coefficients using known functions (such as the
gamma function) that can be evaluated to a certain number of decimal places; this issue
is discussed later. The Fejer points and weights for a 6-point quadrature are given in
Table A.1.
Fejer point    Fejer weight
-0.96592583    0.11866102
-0.70710678    0.37777778
-0.25881905    0.50356120
 0.25881905    0.50356120
 0.70710678    0.37777778
 0.96592583    0.11866102

Table A.1: Fejer quadrature points and weights
Stieltjes' algorithm needs to carry out integrations over the same ranges as integral
types A.1 to A.6. For example, for integral type A.5, in [37] there is a rational transform
that allows one to evaluate the following integral:
∫_0^∞ f(x) dx    (A.13)
as this integral:

∫_{-1}^{1} f(φ(τ)) φ′(τ) dτ    (A.14)
using the following transform:

φ(τ) = (1 + τ)/(1 - τ)    (A.15)
A.2 The Golub-Welsch algorithm

Step 2 is done using a result found by Golub and Welsch and described in [35]. The use
of this result to calculate a quadrature is called the Golub-Welsch algorithm. The result
states that the zeros of the orthogonal polynomials, corresponding to the quadrature
points, are the eigenvalues of a symmetric tridiagonal matrix with the first coefficient
in A.12, evaluated from k = 0 to n, on the main diagonal, and the square root of the
second coefficient in A.12, evaluated from k = 1 to n, on the side diagonals, where n is
the number of quadrature points/weights needed. This matrix is also called a Jacobian
matrix. The eigenvalues can then be found fairly efficiently with the QR algorithm, an
iterative procedure that produces new tridiagonal matrices converging on a diagonal
matrix, whose eigenvalues are its diagonal entries. There are various methods to
accelerate this convergence, the most common being explicit and implicit shifting,
deflation and subdivision. Unfortunately, the QR method does not work for integral
type A.2, for example, since α_n = 0 for all n. The quadrature points and weights for
these integrals were calculated beforehand in MATLAB using the function eig() and
only needed to be included in the FE-MRI function that estimated the NSD computation
time to simulate an MRI signal.
After finding the eigenvalues of the Jacobian matrix, the eigenvectors can be found
and then used to find the weights for each quadrature point, as done in [35]. If the
eigenvector A.16 corresponds to eigenvalue/quadrature point i:

x⃗_i = [x_1, x_2, ..., x_n]^T    (A.16)

then the quadrature weight is given by equation A.17:
w_i = (x_1^2 / ‖x⃗_i‖^2) ∫_0^∞ e^{-x^r} dx    (A.17)
This completes step 3. It is mentioned in [35] that, as the Jacobian matrix is iterated,
the matrix Q converges to the values of its normalized eigenvectors in its rows. Extracting
the eigenvectors from Q directly may not be successful, possibly due to round-off error.
Another solution is to set the first entry of A.16 equal to 1; it can be any value, since
A.16 will be normalized to find the weights. Then the system (since it is only based on a
tridiagonal matrix) can be solved by direct row-by-row substitution. Also, A.17 reduces
to:
w_i = (1 / (1 + x_2^2 + ... + x_n^2)) ∫_0^∞ e^{-x^r} dx    (A.18)
The related quadrature for integral type A.7 is calculated for an arbitrary number of
points and weights (also using the MATLAB functions on the companion website to [37])
in C++ using VTK functions, and it is included in the FE-MRI library.
A.3 Sample C++ quadrature code
The code to find any number of quadrature points and weights for the integral type A.7
with β = 0 is given below.
//This function finds the quadrature points and weights for
//Gauss-Jacobi quadrature with beta = 0
//Interval [0,1]. Transformation done using a change of variables
//For more information consult Gautschi’s book:
//Numerical Analysis: An Introduction, section 2.4
void vtkfemriGaussArbQuadrature::Initialize1DJacobi(int alpha)
{
if (this->QuadraturePoints)
{
this->QuadraturePoints->Delete();
this->QuadraturePoints = NULL;
}
this->QuadraturePoints = vtkDoubleArray::New();
if (this->QuadratureWeights)
{
this->QuadratureWeights->Delete();
this->QuadratureWeights = NULL;
}
this->QuadratureWeights = vtkDoubleArray::New();
//Added Golub's method to get an arbitrary number of gauss points
int numPs; //Number of points
int i; //Variable used for for loops
//Zeroth moment of weight function
//(just the integral of the weight function on quad interval)
double m_0 = 1.0/(alpha+1.0);
double lengthSquared; //Variable used to calculate weight
//Calculate max number of points needed based on max quad order
if(this->Order % 2 == 0)
numPs = (this->Order + 2)/2;
else
numPs = (this->Order + 1)/2;
this->QuadraturePoints ->SetNumberOfTuples(numPs);
this->QuadratureWeights->SetNumberOfTuples(numPs);
//Fill points and weights using the GolubWelsch algorithm
double* points = this->QuadraturePoints->GetPointer(0);
double* weights = this->QuadratureWeights->GetPointer(0);
//Make 0 matrix to apply Golub-Welsch algorithm
//using the JacobiN function in vtkMath.h
double **A = vtkNewMatrix(numPs,numPs);
vtkZeroMatrix(A,numPs,numPs);
//Include in A the square root of 2 term recurrence
//coefficients in the lower and upper diagonals
for(i = 0; i < numPs; i++)
{
A[i][i] = -alpha*alpha/(2.0*(i+1.0)+alpha)/(2.0*(i+1.0)-2.0+alpha);
if(i < numPs-1)
A[i+1][i] = A[i][i+1] =
sqrt( 4.0*(i+1.0)*(i+1.0)*(i+1.0+alpha)*(i+1.0+alpha) /
((2.0*(i+1.0)+alpha)*(2.0*(i+1.0)+alpha)*(2.0*(i+1.0) +
alpha+1.0)*(2.0*(i+1.0)+alpha-1.0)));
}
//Find points using JacobiN function
double *qPoints = new double[numPs];
double **eigenVectors = vtkNewMatrix(numPs,numPs);
vtkMath::JacobiN(A, numPs, qPoints, eigenVectors);
//Put found points (from 0 to 1) in point array taking into account
//they are ordered from positive to negative
for(i = 0; i < numPs; i++)
points[i] = (qPoints[numPs-1 - i] + 1.0)/2.0;
//Calculate weights
//(first component of normalized eVect^2 times first moment of W)
for(i = 0; i < numPs; i++)
weights[i] = m_0*eigenVectors[0][numPs-1-i]*eigenVectors[0][numPs-1-i];
//Delete matrices when done using them
vtkDeleteMatrix(A);
delete [] qPoints;
vtkDeleteMatrix(eigenVectors);
}
The first part of the code calculates the two-term recurrence coefficients, given by
simple functions, and uses them to build the tridiagonal matrix A. Then the eigenvalues
and eigenvectors of A are calculated using the VTK function vtkMath::JacobiN() and
the eigenvalues are ordered and stored as the quadrature points. This function uses the
Jacobi eigenvalue algorithm, an alternative to the QR method that is available in the
VTK library. Then equation A.18, corresponding to integral type A.7 with β = 0, is used
to calculate the quadrature weights from the calculated eigenvectors. The function uses
three small inline functions, defined above it in the same file and taken from the file
vtkThinPlateSplineTransform.cxx within the VTK C++ library: vtkNewMatrix(),
vtkDeleteMatrix() and vtkZeroMatrix(). They make it easy to use matrices within C++.
Appendix B
Mesh generation with same
discretization error
B.1 Introduction
In order to compare the computational expense of calculating the MRI signal of two
different discretizations, like the linear and quadratic tetrahedral ones in this thesis, the
discretization error should be equal. Although this is not possible for a typical mesh, it
can be achieved for a simple geometry with curved features such as a cylinder. The points
and connectivity matrices for these two meshes with the same discretization error were
calculated as follows: first, the number of slices with quadratic tetrahedral elements was
chosen; then the points on the curved side were chosen such that the difference in volume
in a radial least-squares sense is minimized; finally, the number of linear slices and their
points that produce that same difference were calculated.
B.2 Discretization error when using quadratic elements
To simplify the analysis, one should start with half of a slice, which has one side on
the x axis, the other side in the positive x-y quadrant and its corner at the origin. The
angle between the two sides is simply θ_N = 2π/(2N), where N is the number of slices. For
the quadratic elements there will be points along the lines made by the two sides. The
quadratic element will have a curve in the x-y plane given by x = a + by^2, where a is
positive and b is negative. The arc and its approximation are shown in Figure B.1.
Figure B.1: Discretization of a circle using quadratic curve segments. The quantity to be minimized is proportional to the area between the lines
This curve in radial coordinates is given by:

r_q(θ, a, b) = (cos θ + √(cos^2 θ - 4ab sin^2 θ)) / (2b sin^2 θ)    (B.1)
One wants the best discretization possible, which means that the least-squares area
difference between an arc with the same radius as the cylinder and the quadratic curve
in radial coordinates between 0 and θ_N should be minimized:

min F = ∫_0^{θ_N} [r_q(θ, a, b) - R]^2 dθ    (B.2)
This is done by choosing the right a and b values. The usual procedure of differentiating
the integrand of B.2 with respect to a and b, integrating and solving for the variables
is cumbersome, if not impossible, to do analytically: the resulting equations involve
elliptic integrals with no closed-form solution, and two non-linear equations must be
solved to find a and b. It is much simpler to use a simple derivative-free minimization
algorithm to minimize F and to integrate it numerically to find approximate values for
a and b. The calculations were done in MATLAB. Since the integrand has a weak
singularity at θ = 0 due to sin^2 θ, a simple adaptive two-point Gaussian quadrature
algorithm by Brian Bradie was used for the integration.1 Classic Gaussian quadrature
rules do not require evaluating the integrand at the interval limits. The adaptive
integration method in the function quad.m available in MATLAB will not work since it
uses a recursive adaptive Simpson quadrature, which needs to evaluate the integrand at
the limits. The initial guess for a is a value close to R, such as a = 1.01R; for b, equation
B.1 can be solved for b with r = R and θ = θ_N. Because of the accurate initial guess
and the low dimensionality of the problem, a simple pattern search method called
compass search [38] was used to minimize the integral.2 After the a and b values are
found, they are used to find the radii at θ = 0 and θ = θ_N, which are a (by definition)
and equation B.1 evaluated at θ = θ_N, respectively. The value of F is also stored.
1The MATLAB function can be found at: http://www.pcs.cnu.edu/~bbradie/mquadrature.html
2The MATLAB function can be found at: http://people.sc.fsu.edu/~jburkardt/m_src/compass_search/compass_search.html
B.3 Discretization error when using linear elements
The same derivation can be carried out for linear elements. Starting again with half a slice
in the same location in the x-y plane, one side of the linear element should approximate
an arc with the same radius as the cylinder. The arc and its approximation are shown in
Figure B.2.
Figure B.2: Discretization of a circle using line segments. The quantity to be minimized is proportional to the area between the lines
Because of symmetry, the line is a vertical line in the x-y plane and it is given by:

r_l(θ, a) = a / cos θ    (B.3)
To get the best possible fit, the function to be minimized is again:

min F = ∫_0^{θ_N} [r_l(θ, a) - R]^2 dθ    (B.4)
Integrating F, taking the partial derivative of the result with respect to a, equating it
to 0 and solving for a yields:

a = θ_N R / ln(sec θ_N + tan θ_N)    (B.5)
The value of F can be found analytically:

F = a^2 tan θ_N - 2aR ln(sec θ_N + tan θ_N) + R^2 θ_N    (B.6)
This allows one to compare the discretization error of a cylinder composed of slices
with linear elements to the error when having a given number of quadratic element
slices. The comparison is made by computing a and F for linear elements with increasing
values of N until N_l F_l < N_q F_q. For the number of quadratic slices and the dimensions
used in Section 4.6 this happens when N = 22, or about 6 times the number of elements
compared to the quadratic mesh. After the a value that produces the same discretization
error is found, it is used to find the radius at θ = θ_N, which is equation B.3 evaluated
at θ = θ_N.
B.4 MATLAB implementation
The MATLAB function that finds the optimal a and b values for the linear and quadratic
meshes for the values in Section 4.6 is shown below.
function pieLeastSquares
N = 4; %Number of slices
theta = 2*pi/N/2; %Angle for least squares calculation in radians
num_vars = 2;
R = 0.5;
disp(sprintf(’\n Number of quadratic slices: %d’, N))
disp(sprintf(’\n Angle in degrees: %f’, 360/N))
disp(sprintf(’\n Radius of cylinder: %f’, R))
%New Rs for quadratic pie slice
a = R*1.01;
b = (R*cos(theta) - a)/R/R/sin(theta)^2;
a0 = [a b]; % Make a starting guess at the solution
fx = myfun(num_vars, a0, theta, R);
disp(sprintf(’\n\n Starting a and b values: %f %f’, a0(1), a0(2)))
disp(sprintf(’\n Starting area difference: %f’, fx*2*N))
x = theta/100:theta/100:theta;
for j = 1:length(x)
y(j) = quadraticCurve(x(j), a0(1),a0(2));
end
figure;
polar(x,y)
axis([0 R*1.1 0 1.25*theta]);
%Call solver
delta = 1e-2;
delta_tol = 1e-10;
k_max = 1000;
[a0,fx,k]=compass_search(@myfun,num_vars,a0,theta,R,delta_tol,delta,k_max);
finalAdiff = fx*2*N;
disp(sprintf(’\n Final a and b values: %f %f’, a0(1), a0(2)))
disp(sprintf(’\n Final area difference: %f’, finalAdiff))
disp(sprintf(’\n Number of iterations: %d’, k))
ra = a0(1);
rb = quadraticCurve(theta, a0(1),a0(2));
disp(sprintf(’\n ra and rb values: %f %f’, ra, rb))
x = theta/100:theta/100:theta;
for j = 1:length(x)
y(j) = quadraticCurve(x(j), a0(1),a0(2));
end
figure;
polar(x,y)
axis([0 R*1.1 0 1.25*theta]);
%Get equivalent number of slices to get same area difference
N = 2;
aDiff = 4*R*R; %Just some high value proportional to R^2
while aDiff > finalAdiff
N = N+1;
theta = 2*pi/N/2;
a = R*theta/log(sec(theta) + tan(theta));
%Put a in least squares integral solution
%Check if area difference is smaller than the one with quadratic slices
area = a*a*tan(theta) - 2*R*a*log(sec(theta) + tan(theta)) + R*R*theta;
aDiff = area*2*N;
end
disp(sprintf('\n\n Number of linear slices: %d', N))
ra = linearCurve(theta, a);
disp(sprintf('\n ra value: %f', ra))
disp(sprintf('\n\n'))
x = theta/100:theta/100:theta;
for j = 1:length(x)
y(j) = linearCurve(x(j), a);
end
figure;
polar(x,y)
axis([0 R*1.1 0 1.25*theta]);
function F = myfun(m, a, theta, R)
F = adapt_gq2(@(x) leastSquaresIntegral(x, a(1),a(2), R), 0.0,theta,1e-12);
function I = leastSquaresIntegral(x, a,b, R)
I = (quadraticCurve(x, a,b) - R)^2;
function r = linearCurve(x, a)
r = a/cos(x);
function r = quadraticCurve(x, a, b)
r = (cos(x) - sqrt(((cos(x)^2) - 4*a*b*(sin(x)^2))))/2/b/sin(x)^2;
The optimal a and b values are then used in a MATLAB function that produces the optimal meshes and writes them in ASCII .vtk format. Figure B.1 suggests there might be another minimum for the quadratic error. This was checked by changing the starting value of a to a = 0.99R; however, the algorithm converged to the same final a and b values.
Appendix C
Implementation of NSD algorithm
C.1 Introduction
Part of the NSD algorithm was implemented in the C++ class vtkfemriUnstructuredGridNSDTime. When run, it outputs a text file containing the signal for each element and k-space value, together with the associated computation time. The same is done for the tri6 Gauss quadrature. A MATLAB program then uses these data to simulate the full NSD algorithm. The parts of the NSD algorithm implemented in C++ are shown below.
inline bool vtkfemriUnstructuredGridNSDTime::decompNSD(vtkCell* cell,
double* cellValue, double* ga, double a, double b, int numberOfCellPoints,
double* freq)
{
int dirY;
bool nsdSuccess = true;
comp fVal, G1,G4;
//Choose direction in y dimension
if (ga[4] < 0.0) dirY = -1;
else dirY = 1;
//Get x value of stationary point in 'y' boundaries
double spa = -ga[4]/ga[2];
double spb = (2*ga[1]+ga[4])/(2*ga[1]-ga[2]);
//If sp is outside [a,b], don't use it
if ( ((spa < a) || (spa > b)) && ((spb < a) || (spb > b)) )
{
nsdSuccess = true;
G1 = innerDecompA(cell, dirY, ga, a,b, freq);
G4 = innerDecompB(cell, dirY, ga, a,b, freq);
fVal = G1 - G4;
}
else
{
nsdSuccess = false;
fVal = 0.0;
}
cellValue[0] = fVal.real();
cellValue[1] = fVal.imag();
return nsdSuccess;
}
The function starts by choosing the direction of the paths and then decomposes the integral into two parts, as in equation 5.17.
C.2 G1 decomposition
The first inner decomposition G1 is shown below.
inline comp vtkfemriUnstructuredGridNSDTime::innerDecompA(vtkCell* cell,
int dirY, double* ga, double a, double b, double *freq)
{
double gax[3];
gax[0] = ga[0];
gax[1] = ga[3];
gax[2] = ga[5];
double a1 = gax[0];
double a2 = gax[1];
double sp = -a2/(2.0*a1); //Stationary point
int q;
//If oscillator can be considered nonoscillatory in interval integrate using
//standard quadrature
if ((fabs(a1) < 0.1) && (fabs(a2) < 0.1))
{
comp G = 0.0;
comp f;
comp g;
double quadPoint[2];
double quadWeight;
int numQuadPs = this->qGab->GetNumberOfQuadraturePoints();
for (q=0; q < numQuadPs; q++)
{
this->qGab->GetQuadraturePoint(q,quadPoint);
quadWeight = this->qGab->GetQuadratureWeight(q);
f = fx(quadPoint[0], h_ay(dirY, quadPoint[1], quadPoint[0],ga), cell, freq);
g = gx(quadPoint[0], h_ay(dirY, quadPoint[1], quadPoint[0],ga), ga );
G = G + quadWeight*f*exp(comp(0.0,1.0)*g) * dh_ay(dirY, quadPoint[1],
quadPoint[0], ga);
}
return G;
}
if (fabs(a1) < 100.0*DBL_EPSILON) //If a1 is 0, oscillator is linear
{
comp F1 = 0.0;
comp F2 = 0.0;
comp u, f, g;
double quadPoint[2];
double quadWeight;
int numQuadPs = this->qLab->GetNumberOfQuadraturePoints();
for (q=0; q < numQuadPs; q++)
{
this->qLab->GetQuadraturePoint(q,quadPoint);
quadWeight = this->qLab->GetQuadratureWeight(q);
//CALCULATE F1
u = hl(a, 1, quadPoint[0], a2);
f = fx(u, h_ay(dirY, quadPoint[1], u,ga), cell, freq);
F1 = F1 + quadWeight*f * dh_ay(dirY, quadPoint[1], u,ga)*dhl(1, a2);
//CALCULATE F2
u = hl(b, 1, quadPoint[0], a2);
f = fx(u, h_ay(dirY, quadPoint[1], u,ga), cell, freq);
F2 = F2 + quadWeight*f * dh_ay(dirY, quadPoint[1], u,ga)*dhl(1, a2);
}
return exp(comp(0.0,1.0)*polyval(gax,a))*F1
- exp(comp(0.0,1.0)*polyval(gax,b))*F2;
}
else //If a1 is not 0, it's quadratic
{
//Switch path direction based on direction of h on imaginary axis
int dir;
if (2.0*a*a1+a2 < 0.0) dir = -1; else dir = 1;
if ((sp < a) || (sp > b)) //If sp is outside [a,b], don't use it
{
comp F1 = 0.0;
comp F2 = 0.0;
comp u, f, g;
double quadPoint[2];
double quadWeight;
int numQuadPs = this->qLab->GetNumberOfQuadraturePoints();
for (q=0; q < numQuadPs; q++)
{
this->qLab->GetQuadraturePoint(q,quadPoint);
quadWeight = this->qLab->GetQuadratureWeight(q);
//CALCULATE F1
u = hab( dir, quadPoint[0], a,a1,a2);
f = fx(u, h_ay(dirY, quadPoint[1], u,ga), cell, freq);
F1 = F1 + quadWeight*f * dh_ay(dirY, quadPoint[1], u,ga)
*dhab( dir, quadPoint[0], a,a1,a2);
//CALCULATE F2
u = hab( dir, quadPoint[0], b,a1,a2);
f = fx(u, h_ay(dirY, quadPoint[1], u,ga), cell, freq);
F2 = F2 + quadWeight*f * dh_ay(dirY, quadPoint[1], u,ga)
*dhab( dir, quadPoint[0], b,a1,a2);
}
return exp(comp(0.0,1.0)*polyval(gax,a))*F1
- exp(comp(0.0,1.0)*polyval(gax,b))*F2;
}
else //If sp is inside [a,b], calculate NSD integral with sp
{
comp F1 = 0.0; comp F2 = 0.0;
comp F3 = 0.0; comp F4 = 0.0;
comp u, f, g;
double quadPointL[2];
double quadWeightL;
double quadPointH[2];
double quadWeightH;
int numQuadPs = this->qLab->GetNumberOfQuadraturePoints();
for (q=0; q < numQuadPs; q++)
{
this->qLab->GetQuadraturePoint(q,quadPointL);
quadWeightL = this->qLab->GetQuadratureWeight(q);
this->qHab->GetQuadraturePoint(q,quadPointH);
quadWeightH = this->qHab->GetQuadratureWeight(q);
//CALCULATE F1
u = hab( dir, quadPointL[0], a,a1,a2);
f = fx(u, h_ay(dirY, quadPointL[1], u,ga), cell, freq);
F1 = F1 + quadWeightL*f * dh_ay(dirY, quadPointL[1], u,ga)
*dhab( dir, quadPointL[0], a,a1,a2);
//CALCULATE F2 and F3
u = hsp( dir, quadPointH[0], a1,a2);
f = fx(u, h_ay(dirY, quadPointH[1], u,ga), cell, freq);
F2 = F2 + quadWeightH*f * dh_ay(dirY, quadPointH[1], u,ga)*dhsp( dir,a1);
u = hsp(-dir, quadPointH[0], a1,a2);
f = fx(u, h_ay(dirY, quadPointH[1], u,ga), cell, freq);
F3 = F3 + quadWeightH*f * dh_ay(dirY, quadPointH[1], u,ga)*dhsp(-dir,a1);
//CALCULATE F4
u = hab(-dir, quadPointL[0], b,a1,a2);
f = fx(u, h_ay(dirY, quadPointL[1], u,ga), cell, freq);
F4 = F4 + quadWeightL*f * dh_ay(dirY, quadPointL[1], u,ga)
*dhab(-dir, quadPointL[0], b,a1,a2);
}
return exp(comp(0.0,1.0)*polyval(gax,a ))*F1
- exp(comp(0.0,1.0)*polyval(gax,sp))*F2 +
exp(comp(0.0,1.0)*polyval(gax,sp))*F3
- exp(comp(0.0,1.0)*polyval(gax,b ))*F4;
}
}
}
The only difference between G1 and G2 is that G1 uses the 0 to 1 boundary in the y-axis, whereas G2 uses the 1 − x boundary; G2 therefore does not need to be shown to explain the algorithm. The first part of the decomposition checks whether the oscillator is only weakly oscillatory on the 0 to 1 interval of the standard simplex; if so, it is integrated with ordinary Gauss-Legendre quadrature. The second part checks whether the oscillator is linear rather than quadratic; if so, the paths in equation 5.6 are used and their directions are computed. The third part handles the quadratic oscillator and decides whether the next decomposition should include or exclude the computed stationary point, producing either 2 or 4 F decompositions. The maximum is therefore 4 F terms per G, for a maximum of 8 in total. If the stationary point is needed for the path calculation, the same type of paths as in equation 5.11 is used.
C.3 Steepest descent paths for G
The paths for G are shown below.
inline comp h_ay(int dir, double p, comp x, double* ga)
{
comp term = ga[2]*x+ga[4];
return ( -(ga[2]*x+ga[4]) + ((double) dir)*sqrt(term*term
+ comp(0.0, 4.0*ga[1]*p)) ) / (2.0*ga[1]);
}
inline comp dh_ay(int dir, double p, comp x, double* ga)
{
comp term = ga[2]*x+ga[4];
return ((double) dir)*comp(0.0,1.0) / sqrt(term*term
+ comp(0.0, 4.0*ga[1]*p));
}
inline comp h_by(int dir, double p, comp x, double* ga)
{
comp term = (ga[2]-2.0*ga[1])*x + 2.0*ga[1] + ga[4];
return ( -(ga[2]*x+ga[4]) + ((double) dir)*sqrt(term*term
+ comp(0.0, 4.0*ga[1]*p)) ) / (2.0*ga[1]);
}
inline comp dh_by(int dir, double p, comp x, double* ga)
{
comp term = (ga[2]-2.0*ga[1])*x+2.0*ga[1]+ga[4];
return ((double) dir)*comp(0.0,1.0) / sqrt(term*term
+ comp(0.0, 4.0*ga[1]*p));
}
C.4 Steepest descent paths for F
The paths for F are shown below.
inline comp hl(double x, int dir, double p, double a2)
{
return x + comp(0.0, ((double) dir)*p/a2);
}
inline comp dhl(int dir, double a2)
{
return comp(0.0, ((double) dir)/a2);
}
inline comp hab(int dir, double p, double x, double a1, double a2)
{
double term = 2.0*a1*x+a2;
return (-a2 + ((double) dir)*sqrt(term*term
+ comp(0.0, 4.0*a1*p)))/(2.0*a1);
}
inline comp dhab(int dir, double p, double x, double a1, double a2)
{
double term = 2.0*a1*x+a2;
return ((double) dir)*comp(0.0,1.0)/sqrt(term*term
+ comp(0.0, 4.0*a1*p));
}
inline comp hsp(int dir, double q, double a1, double a2)
{
return (-a2 + ((double) dir)*q*sqrt(comp(0.0, 4.0*a1)) ) / (2.0*a1);
}
inline comp dhsp(int dir, double a1)
{
return ((double) dir)*comp(0.0,1.0) / sqrt(comp(0.0,a1));
}
Note that the direction variable is cast to a double to ensure the correct type is used throughout the complex arithmetic.
Bibliography
[1] Michael Schär. Optimization of MR Pulse Sequences for the Characterization of Atherosclerotic Plaque. Diploma thesis, ETH Zurich, 2001.
[2] Thomas Hackländer and Heinrich Mertens. Virtual MRI: A PC-based simulation of a clinical MR scanner. Academic Radiology, 12(1):85–96, 2005.
[3] Sheehan Olver. Numerical Approximation of Highly Oscillatory Integrals. Ph.D.
Thesis, University of Cambridge, 2006.
[4] Daan Huybrechs and Sheehan Olver. Highly Oscillatory Problems, chapter 2, pages 25–50. Cambridge University Press, 2009.
[5] Daan Huybrechs and Stefan Vandewalle. On the Evaluation of Highly Oscillatory In-
tegrals by Analytic Continuation. SIAM Journal on Numerical Analysis, 44(3):1026–
1048, 2006.
[6] Daan Huybrechs and Stefan Vandewalle. The Construction of cubature rules for
multivariate highly oscillatory integrals. Mathematics of Computation, 76:1955–
1980, 2007.
[7] Paul Simedrea. Numerical Simulation of MRI Using Unstructured Meshes. Master's thesis, University of Western Ontario, 2006.
[8] Mark E. Haacke, Robert W. Brown, Michael R. Thompson, and Ramesh Venkatesan.
Magnetic Resonance Imaging: Physical Principles and Sequence Design. Wiley-Liss,
1999.
[9] Zhi-Pei Liang and Paul C. Lauterbur. Principles of Magnetic Resonance Imaging:
A Signal Processing Perspective. Wiley-IEEE Press, 1999.
[10] D. G. Nishimura. Principles of Magnetic Resonance Imaging. Stanford University,
1996.
[11] Franz Mandl. Statistical Physics. John Wiley & Sons Ltd., 2nd edition, 1988.
[12] Ye Qiao, David A. Steinman, Maryam Etesami, Alex Martinez, Edward G. Lakatta, and Bruce A. Wasserman. Impact of T2 decay on carotid artery wall thickness measurements. Journal of Magnetic Resonance Imaging, 37(6):1493–1498, 2013.
[13] Rem van Tyen, David Saloner, Liang-Der Jou, and Stanley Berger. MR imaging of flow through tortuous vessels: A numerical simulation. Magnetic Resonance in Medicine, 31(2):184–195, 1994.
[14] David A. Steinman, C. Ross Ethier, and Brian K. Rutt. Combined analysis of spatial
and velocity displacement artifacts in phase contrast measurements of complex flows.
Journal of Magnetic Resonance Imaging, 7(2):339–346, 1997.
[15] J.S. Petersson, J.O. Christoffersson, and K. Golman. MRI simulation using the k-space formalism. Magnetic Resonance Imaging, 11:557–568, 1993.
[16] Gustavo H.C. Silva, Rodolphe Le Riche, Jérôme Molimard, and Alain Vautrin. Exact and efficient interpolation using finite elements shape functions. European Journal of Computational Mechanics/Revue Européenne de Mécanique Numérique, 18(3-4):307–331, 2009.
[17] Kent J. Truscott and Michael H. Buonocore. Simulation of Tagged MR Images
With Linear Tetrahedral Solid Elements. Journal of Magnetic Resonance Imaging,
14:336–340, 2001.
[18] P Simedrea, L Antiga, and DA Steinman. Towards a new framework for simulating
magnetic resonance imaging. In First Canadian student conference on biomedical
computing, 2006.
[19] L. Antiga and D.A. Steinman. Efficient MRI simulation via integration of the signal equation over triangulated surfaces. In Proceedings of the International Society for Magnetic Resonance in Medicine, Toronto, Canada, 2008.
[20] Duane A Yoder, Yansong Zhao, Cynthia B Paschal, and J Michael Fitzpatrick. MRI simulator with object-specific field map calculations. Magnetic Resonance Imaging, 22(3):315–328, 2004.
[21] H Benoit-Cattin, G Collewet, B Belaroussi, H Saint-Jalmes, and C Odet. The SIMRI project: A versatile and interactive MRI simulator. Journal of Magnetic Resonance, 173(1):97–115, 2005.
[22] Charles Schwartz. Numerical integration in many dimensions. II. Journal of Mathematical Physics, 26(5):955–957, 1985.
[23] George E. Karniadakis and Spencer J. Sherwin. Spectral/hp Element Methods for
CFD. Oxford University Press, USA, 1st edition, 1999.
[24] Linbo Zhang, Tao Cui, and Hui Liu. A set of symmetric quadrature rules on triangles
and tetrahedra. Journal of Computational Mathematics, 27(1), 2009.
[25] James N Lyness and Ronald Cools. A survey of numerical cubature over triangles.
1994.
[26] Ronald L. Graham, Donald E. Knuth, and Oren Patashnik. Concrete Mathematics:
A Foundation for Computer Science. Addison-Wesley Longman Publishing Co., Inc.,
Boston, MA, USA, 2nd edition, 1994.
[27] P Simedrea, L Antiga, and DA Steinman. FE-MRI: Simulation of MRI using arbitrary finite elements. In Proc Int Soc Magn Reson Med, volume 14, page 2946, 2006.
[28] EA Flinn. A modification of Filon's method of numerical integration. Journal of the ACM (JACM), 7(2):181–184, 1960.
[29] David Levin. Procedures for computing one- and two-dimensional integrals of functions with rapid irregular oscillations. Mathematics of Computation, 38(158):531–538, 1982.
[30] David Levin. Analysis of a collocation method for integrating rapidly oscillatory
functions. Journal of Computational and Applied Mathematics, 78(1):131–138, 1997.
[31] Tristan Needham. Visual Complex Analysis. Oxford University Press, USA, 1999.
[32] Ruben Sevilla, Sonia Fernández-Méndez, and Antonio Huerta. NURBS-enhanced finite element method (NEFEM). Archives of Computational Methods in Engineering, 18(4):441–484, 2011.
[33] Y Bazilevs, L Beirão da Veiga, JA Cottrell, TJR Hughes, and G Sangalli. Isogeometric analysis: Approximation, stability and error estimates for h-refined meshes. Mathematical Models and Methods in Applied Sciences, 16(07):1031–1090, 2006.
[34] Walter Gautschi. Orthogonal Polynomials (in MATLAB). Journal of Computational
and Applied Mathematics, 178:215–234, 2005.
[35] Gene H. Golub and John H. Welsch. Calculation of Gauss Quadrature Rules. Tech-
nical report, Stanford University, Computer Science Department, November 1967.
[36] Walter Van Assche and Mama Foupouagnigni. Analysis of Non-Linear Recurrence
Relations for the Recurrence Coefficients of Generalized Charlier Polynomials. Jour-
nal of Nonlinear Mathematical Physics, 10(2):231–237, 2003.
[37] Walter Gautschi. Orthogonal polynomials: computation and approximation. Oxford
University Press, 1st edition, 2004.
[38] Robert Hooke and T. A. Jeeves. “Direct search” solution of numerical and statistical problems. J. ACM, 8(2):212–229, April 1961.