Page 1:

CT Problems, Filtered Back Projection, SVD Analysis

Per Christian Hansen

DTU Compute, Department of Applied Mathematics and Computer Science

Technical University of Denmark

Per Christian Hansen SVD Analysis 1 / 48

Page 2:

Plan for Today – and Tomorrow Morning

1. A bit of motivation.
2. Singular values and functions.
3. Picard condition and “range condition.”
4. The singular value decomposition (SVD) of a matrix.
5. SVD analysis.

Points to take home. Singular values/functions provide insight into the Radon transform:

- High-frequency noise has a larger effect on the reconstruction.
- Errors on the sinogram edges are especially troublesome.
- Filtering is necessary to reduce the influence of the noise.
- When the problem is discretized, we use the SVD to obtain insight.
- The SVD can always be used to study more general CT scenarios.

Page 3:

Some Notation

Vectors                                          Functions

Norm:         ‖x‖₂² = ∑_{i=1}^n |x_i|² = x·x     ‖f‖₂² = ∫_a^b |f(t)|² dt = ⟨f, f⟩

Inner prod.:  x·y = ∑_{i=1}^n x_i y_i = xᵀy      ⟨f, g⟩ = ∫_a^b f(t) g(t) dt

Orthonormal:  v_i · v_j = δ_ij                   ⟨v_i, v_j⟩ = δ_ij

Expansion:    x = ∑_{i=1}^n c_i v_i              f(t) = ∑_{i=0}^∞ c_i v_i(t), which means
                = [v_1 … v_n] c  (matrix form)   ∑_{i=0}^n c_i v_i(t) → f(t) for n → ∞
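The vector-side identities can be checked numerically. A minimal sketch in NumPy, where an orthonormal basis is built from a QR factorization of a random matrix (the size and the matrix are arbitrary choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Orthonormal basis v_1, ..., v_n: the columns of Q from a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

x = rng.standard_normal(n)
c = Q.T @ x           # coefficients c_i = v_i · x
x_expanded = Q @ c    # x = [v_1 ... v_n] c = sum_i c_i v_i

assert np.allclose(x_expanded, x)              # the expansion reproduces x
assert np.isclose(np.dot(x, x), np.sum(c**2))  # Parseval: ||x||^2 = sum_i c_i^2
```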

Page 4:

Wanted: More Insight

We have studied an efficient algorithm – filtered back projection (FBP) – for computing the CT reconstruction.

And we have also seen that the reconstruction is somewhat sensitive to noise in the data.

- How can we further study this sensitivity to noise?
- How can we possibly reduce the influence of the noise?
- What consequence does that have for the reconstruction?

We need a mathematical tool that lets us perform a detailed study of these aspects: the singular value decomposition/expansion.

But before going into these details, we will start with a simple example from signal processing, to explain the basic idea.

Page 5:

Signal Restoration

[Diagram: Input ⇒ System ⇒ Output]

Assume that we know the characteristics of the system, and that we have measured the noisy output signal g(t). Now we want to reconstruct the input signal f(t).

The mathematical (forward) model, assuming 2π-periodic signals:

    g(t) = ∫_{−π}^{π} h(τ − t) f(τ) dτ    or    g = h ∗ f    (convolution).

Here, the function h(t) (called the “impulse response”) defines the system.
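On a uniform grid, the periodic convolution g = h ∗ f can be sketched with the FFT, which computes exactly this kind of circular convolution (the grid size and the two signals below are arbitrary choices, not from the slides):

```python
import numpy as np

N = 256
t = np.linspace(-np.pi, np.pi, N, endpoint=False)
dt = 2 * np.pi / N

f = np.sign(np.sin(3 * t))   # some 2π-periodic input signal
h = np.exp(-5 * np.abs(t))   # an even impulse response (so h(τ−t) = h(t−τ))

# Discretized periodic convolution g = h * f via the FFT;
# the factor dt approximates the integral's dτ.
g = np.fft.ifft(np.fft.fft(h) * np.fft.fft(f)).real * dt
assert g.shape == f.shape
```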

Page 6:

Deconvolution

In this example, the input f(t) is white noise, and the output g(t) is filtered noise. The corresponding Fourier transforms are f̂(ω) and ĝ(ω).

Deconvolution: reconstruct the input f from the output g = h ∗ f.

Page 7:

Fourier Series of Periodic Functions

Our tool is the Fourier series of a 2π-periodic function f:

    f(t) = ∑_{n=−∞}^{∞} c_n e^{i n t},    i = √−1,

with the Fourier coefficients

    c_n = (1/2π) ∫_{−π}^{π} e^{−i n t} f(t) dt = (1/2π) ⟨ψ_n, f⟩,    ψ_n = e^{i n t}.

We can think of the functions ψ_n as a convenient basis for analysing the behavior of periodic functions. Similarly:

    g(t) = ∑_{n=−∞}^{∞} d_n ψ_n,    d_n = (1/2π) ⟨ψ_n, g⟩.

Page 8:

Convolution and Deconvolution in Fourier Domain

Due to the linearity, we have

    g = h ∗ f = h ∗ ( ∑_{n=−∞}^{∞} c_n ψ_n ) = ∑_{n=−∞}^{∞} c_n (h ∗ ψ_n).

Hence, all we need to know is the system’s response to each basis function ψ_n = e^{i n t}.

For the periodic systems we consider here, the convolution of h with ψ_n produces a scaled version of ψ_n:

    h ∗ ψ_n = µ_n ψ_n,    for all n,

where µ_n = ⟨ψ_n, h⟩ (we do not prove this). Hence,

    g = ∑_{n=−∞}^{∞} c_n µ_n ψ_n = ∑_{n=−∞}^{∞} d_n ψ_n    ⇔    f = ∑_{n=−∞}^{∞} (d_n / µ_n) ψ_n.

Deconvolution is transformed to a simple algebraic operation: division.
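The discrete analogue of h ∗ ψ_n = µ_n ψ_n is the FFT's diagonalization of circular convolution, and deconvolution becomes an elementwise division. A sketch (the signals are arbitrary choices, and the µ_n here are unnormalized DFT stand-ins for the inner products):

```python
import numpy as np

N = 128
dt = 2 * np.pi / N
t = np.linspace(-np.pi, np.pi, N, endpoint=False)

h = np.exp(-np.abs(t))                   # impulse response (even, real)
f = np.cos(2 * t) + 0.5 * np.sin(5 * t)  # input signal

mu = np.fft.fft(h) * dt  # discrete analogue of mu_n = <psi_n, h>
c = np.fft.fft(f)        # (unnormalized) Fourier coefficients of f
g = np.fft.ifft(mu * c).real             # convolution: d_n = mu_n c_n

# Deconvolution = division in the Fourier domain:
f_rec = np.fft.ifft(np.fft.fft(g) / mu).real
assert np.allclose(f_rec, f)
```

With noise-free data the division recovers f exactly; the next slides show what goes wrong when noise is added.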

Page 9:

Naive Reconstruction from Noisy Data

- Top left: the input f(t) and the noisy output g(t) (noise not visible).
- Bottom left: the corresponding Fourier coefficients; note the “noise floor.”
- Bottom right: the reconstructed Fourier coefficients d_n/µ_n are dominated by the noise for n > 100, and a naive reconstruction is useless.
- Top right: the naive and useless reconstruction.

Page 10:

Filtered/Truncated Reconstruction from Noisy Data

- Left: same as previous slide.
- Bottom right: let us keep the first ±100 coefficients only.
- Top right: comparison of f(t) and the truncated reconstruction using ±100 terms in the Fourier expansion. It captures the general shape of f(t).

Page 11:

What We Have Learned So Far

- With the right choice of basis functions, we can turn a complicated problem into a simpler one. Here: the basis functions are the complex exponentials; deconvolution → division in the Fourier domain.
- Inspection of the expansion coefficients reveals how and when the noise enters the reconstruction. Here: the noise dominates the output’s Fourier coefficients at higher frequencies, while the low-frequency coefficients are OK.
- We can avoid most of the noise (but not all) by means of filtering, at the cost of losing some details. Here: we simply truncate the Fourier expansion for the reconstruction.
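The whole experiment – naive division blowing up the noise, truncation recovering the shape – can be sketched in a few lines (the impulse response, noise level, and truncation cutoff are arbitrary illustrative choices, not the ones used in the slides' figures):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 512
dt = 2 * np.pi / N
t = np.linspace(-np.pi, np.pi, N, endpoint=False)

f = (np.abs(t) < 1.5).astype(float)    # input: a box function
h = np.exp(-3 * np.abs(t))             # smoothing impulse response
mu = np.fft.fft(h) * dt

g = np.fft.ifft(mu * np.fft.fft(f)).real     # exact output
g_noisy = g + 1e-3 * rng.standard_normal(N)  # noisy measured output

d = np.fft.fft(g_noisy)
f_naive = np.fft.ifft(d / mu).real           # naive: divide everything

# Truncated reconstruction: keep only the harmonics |n| <= 20.
phi = (np.abs(np.fft.fftfreq(N, d=1.0 / N)) <= 20).astype(float)
f_trunc = np.fft.ifft(phi * d / mu).real

err_naive = np.linalg.norm(f_naive - f)
err_trunc = np.linalg.norm(f_trunc - f)
assert err_trunc < err_naive   # filtering reduces the reconstruction error
```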

Let us apply the same idea to parallel-beam CT reconstruction!

Page 12:

The Radon Transform

M. Bertero & P. Boccacci, Introduction to Inverse Problems in Imaging,CRC Press, 1998.

The Radon transform g = R f. In the previous slides, g was called p_θ(s).

The image: f(ξ) with ξ = (x, y) ∈ D, the disk with radius 1.
The sinogram (the data): g(s, θ) with s ∈ [−1, 1] and θ ∈ [0°, 360°].

Page 13:

Singular Values and Singular Functions

There exist scalars µ_mk and functions u_mk(s, θ) and v_mk(ξ) such that

    R v_mk = µ_mk u_mk,    m = 0, 1, 2, 3, …,    k = 0, 1, 2, …, m.

The scalars are called the singular values:

    µ_mk = 2√π / (m + 1),    with multiplicity m + 1.

The corresponding left and right singular functions are

    u_mk(s, θ) ∝ √(1 − s²) · U_m(s) · e^{−i (m−k) θ},
    v_mk(ξ) ∝ r^{|m−2k|} · P_ν^{0,|m−2k|}(2r² − 1) · e^{−i (m−k) φ},

where r = ‖ξ‖₂, U_m is the Chebyshev polynomial of the second kind, P_ν^{0,|m−2k|} is the Jacobi polynomial, and ν = (m + |m−2k|)/2.

(The word “singular” is used in the sense “special” or “unique.”)

Page 14:

About the Singular Functions

From the previous slide:

    R v_mk = µ_mk u_mk,    m = 0, 1, 2, 3, …,    k = 0, 1, 2, …, m.

- The functions u_mk are an orthonormal basis for [−1, 1] × [0°, 360°].
- The functions v_mk are an orthonormal basis for the unit disk D.
- Both sets of functions are complex – similar to the Fourier basis.

The expansions of f and g take the form

    f(ξ) = ∑_{m,k} ⟨v_mk, f⟩ v_mk(ξ),    g(s, θ) = ∑_{m,k} ⟨u_mk, g⟩ u_mk(s, θ),

with the coefficients

    ⟨v_mk, f⟩ = ∫_D v_mk(ξ) f(ξ) dξ,

    ⟨u_mk, g⟩ = ∫_{−1}^{1} ∫_{0°}^{360°} u_mk(s, θ) g(s, θ) dθ ds.

Page 15:

Some Left Singular Functions for m = 0, 1, 2, 3, 4

Page 16:

Some Right Singular Functions for m = 0, 1, 2, 3, 4

Page 17:

Singular Values, and Some Comments

- All singular values are positive (no zeros); they decay rather slowly.
- If µ_mk = µ_j with j = m(m+1)/2 + k + 1, then µ_j ∝ 1/√j for large j.
- Singular functions with higher index j have higher frequencies.
- The higher the frequency, the more the damping in R v_mk = µ_mk u_mk.
- Hence the Radon transform g = R f is a “smoothing” operation…
- …and the reverse operation f = R⁻¹g amplifies higher frequencies!

These are intrinsic properties of the mathematical problem itself.
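The linear ordering j = m(m+1)/2 + k + 1 and the decay rate can be checked as pure bookkeeping, assuming the reading µ_mk = 2√π/(m+1) with multiplicity m+1 from the slide:

```python
import numpy as np

# Flatten mu_mk into a single decreasing sequence mu_j, j = 1, 2, ...
mu = []
for m in range(200):
    mu.extend([2 * np.sqrt(np.pi) / (m + 1)] * (m + 1))  # multiplicity m+1
mu = np.array(mu)
j = np.arange(1, len(mu) + 1)

# If mu_j ~ 1/sqrt(j), then mu_j * sqrt(j) approaches a constant.
ratio = mu * np.sqrt(j)
assert 0.9 < ratio[-1] / ratio[len(ratio) // 2] < 1.1
```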

Page 18:

The Coefficients for the Sinogram

These are the coefficients ⟨u_mk, g⟩ for the sinogram corresponding to the Shepp-Logan phantom – ordered according to increasing index m.

They decay, as expected. The specific behavior for k = 0, …, m is due to the symmetry of the phantom.

Page 19:

The Inverse Radon Transform is Unbounded

From linear algebra we know that if b = Ax ⇔ x = A⁻¹b, then

    ‖b‖₂ ≤ ‖A‖ ‖x‖₂    and    ‖x‖₂ ≤ ‖A⁻¹‖ ‖b‖₂,    with ‖A‖² = ∑_{i,j} a_ij².

Hence, a perturbation ∆b of b produces a reconstruction perturbation ∆x = A⁻¹∆b with ‖∆x‖₂ ≤ ‖A⁻¹‖ ‖∆b‖₂.

For the Radon transform g = R f ⇔ f = R⁻¹g we have

    ‖g‖₂ ≤ ‖R‖ ‖f‖₂    with    ‖R‖² = ∑_{m,k} µ_mk²,

    ‖f‖₂ ≤ ‖R⁻¹‖ ‖g‖₂    with    ‖R⁻¹‖² = ∑_{m,k} 1/µ_mk².

Trouble: ‖R−1‖ =∞. An unperturbed g = R f satisfies the Picardcondition. But a noisy g does not, and noisy reconstruction is unbounded!

Page 20:

Left Singular Functions and a “Range Condition”

Due to the factor √(1 − s²), all the left singular functions satisfy

    u_mk(s, θ) → 0    for s → ±1.

This reflects the fact that rays through D that almost graze the edge of the disk contribute very little to the sinogram.

This puts a restriction on sinograms g(s, θ) that admit a reconstruction:
- The sinogram g = R f is a sum of the singular functions u_mk.
- Hence, the sinogram inherits the property g(s, θ) → 0 for s → ±1.
- A perturbation ∆g of g that does not have this property may not produce a bounded perturbation R⁻¹∆g of f.

Page 21:

Illustration of the “Range Condition”

When the noise violates the “range condition” g(s, θ) → 0 for s → ±1:

- Top row: the noise in the sinogram maps to large errors in the corners of the reconstructed image.
- Note that this phenomenon is not restricted to FBP and disk domains!
- To fix the problem: add damping to the sinogram data near s = ±1.

Page 22:

Let’s Reconstruct

In terms of the singular values and functions, the inverse Radon transform takes the form

    f(ξ) = ∑_{m,k} ( ⟨u_mk, g⟩ / µ_mk ) v_mk(ξ).

Since the image f(ξ) has finite norm (finite energy), we conclude that the magnitude of the coefficients ⟨u_mk, g⟩ / µ_mk must decay “sufficiently fast.”

The Picard Condition. The expansion coefficients ⟨u_mk, g⟩ for g(s, θ) must decay sufficiently faster than the singular values µ_mk, such that

    ∑_{m,k} | ⟨u_mk, g⟩ / µ_mk |² < ∞.

When noise is present in the sinogram g(s, θ), this condition is not satisfied for large m (cf. the signal restoration example from before). This calls for some kind of filtering.

Page 23:

Let’s Introduce Filters

A simple remedy for the noise magnification caused by the division with µ_mk is to introduce filtering:

    f(ξ) = ∑_{m,k} ϕ_mk ( ⟨u_mk, g⟩ / µ_mk ) v_mk(ξ).

The filter factors ϕ_mk must decay fast enough that, for large m, they can counteract the factor µ_mk⁻¹. More on this later in the course.

We can think of the filter factors as modifiers of the expansion coefficients〈umk , g〉 for the sinogram.

In other words, they ensure that the filtered coefficients ϕmk 〈umk , g〉 decayfast enough to satisfy the Picard condition from the previous slide.

The filtering inevitably dampens the higher frequencies associated with thesmall µmk , and hence some details and edges in the image are lost.

Page 24:

Connection to Filtered Back Projection

Recall the filtered back projection (FBP) algorithm:

1. For fixed θ compute the Fourier transform ĝ(ω, θ) = F( g(s, θ) ).
2. Apply the ramp filter |ω| and compute the inverse Fourier transform g_filt(s, θ) = F⁻¹( |ω| ĝ(ω, θ) ).
3. Do the above for all θ ∈ [0°, 360°].
4. Then compute f(ξ) = ∫_{0°}^{360°} g_filt(x cos θ + y sin θ, θ) dθ,    ξ = (x, y).

It is the ramp filter |ω| in step 2 that magnifies the higher frequencies in the sinogram g(s, θ).

This amplification is equivalent to the division by the singular values µ_mk in the above analysis.

Page 25:

Filtered Back Projection, now with Low-Pass Filtering

How the filtered back projection algorithm (FBP) is really implemented:

1. Choose a low-pass filter ϕ_LP(ω).
2. For every θ compute the Fourier transform ĝ(ω, θ) = F( g(s, θ) ).
3. Apply the combined ramp & low-pass filter, and compute the inverse Fourier transform g_filt(s, θ) = F⁻¹( |ω| ϕ_LP(ω) ĝ(ω, θ) ).
4. Then f_rec(ξ) = ∫_{0°}^{360°} g_filt(x cos θ + y sin θ, θ) dθ.

The low-pass filter ϕ_LP(ω) counteracts the ramp filter |ω| for large ω. It is equivalent to the filter factors ϕ_mk introduced on slide 23.
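The filtering step for one projection can be sketched with the FFT; here a Hann-type window stands in for ϕ_LP, and the projection itself is a hypothetical stand-in (none of these choices come from the slides):

```python
import numpy as np

Ns = 256
s = np.linspace(-1, 1, Ns, endpoint=False)
proj = np.maximum(0.0, 1 - 4 * s**2)   # one hypothetical projection g(s, θ)

omega = np.fft.fftfreq(Ns, d=s[1] - s[0])  # frequency grid for the FFT
ramp = np.abs(omega)                        # the ramp filter |ω|
phi_LP = 0.5 * (1 + np.cos(np.pi * omega / omega.max()))  # Hann-type low-pass

# Combined ramp & low-pass filtering in the Fourier domain.
g_hat = np.fft.fft(proj)
g_filt = np.fft.ifft(ramp * phi_LP * g_hat).real

assert g_filt.shape == proj.shape
```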

Page 26:

Discretization

A couple of interesting quotes:

“Theorem 4.2. A finite set of radiographs tells nothing at all.”

K.T. Smith, D.C. Solmon & S.L. Wagner, Practical and mathematical aspects of the problem of reconstructing objects from radiographs, Bull. AMS, 83 (6), pp. 1227–1270, 1977.

“This is contrary […] to the experience obtained in practice, where CT scanners produce reliable pictures […] although only a finite number of projections are used.”

A.K. Louis & F. Natterer, Mathematical problems of computerizedtomography, Proc. IEEE, 71 (3), pp. 379–389, 1983.

Page 27:

Vectors and Images/Sinograms

Note that in the discretized problem Ax = b, the image and the sinogram are represented by the vectors x and b, respectively.

While this is a convenient notation when we need the language of linear algebra, they are really 2D arrays and they should be visualized as such.

In Matlab notation:

    imagesc( reshape(x,N,N) ), axis image
    imagesc( reshape(b,Ns,Ntheta) ), axis image

where Ns = number of detector pixels, and Ntheta = number of projections.

Going from an image X to a vector x is simple: just write x = X(:).
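The same bookkeeping in NumPy, for readers not using Matlab. Note that Matlab's reshape and X(:) are column-major, so `order="F"` is the faithful translation (the sizes below are arbitrary):

```python
import numpy as np

N, Ns, Ntheta = 4, 6, 3
X = np.arange(N * N, dtype=float).reshape(N, N)

x = X.flatten(order="F")             # image -> vector, like x = X(:) in Matlab
X_back = x.reshape(N, N, order="F")  # vector -> image, like reshape(x,N,N)
assert np.array_equal(X_back, X)

b = np.arange(Ns * Ntheta, dtype=float)  # a sinogram stored as a vector
B = b.reshape(Ns, Ntheta, order="F")     # vector -> sinogram array
assert B.shape == (Ns, Ntheta)
```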

Page 28:

Matrix Notation and Interpretation

Notation: write A in terms of its columns c_1, …, c_n or its rows r_1, …, r_m:

    A = [ c_1  c_2  ⋯  c_n ] = [ r_1 ; r_2 ; … ; r_m ].

The matrix A maps the discretized absorption coefficients (the vector x) to the data in the detector pixels (the elements of the vector b) via:

    b = Ax = x_1 c_1 + x_2 c_2 + ⋯ + x_n c_n    (a linear combination of the columns),

or, elementwise,

    b_i = r_i · x,    i = 1, …, m    (inner products with the rows).

See next slides for examples of the column and row interpretations.
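Both readings of b = Ax can be verified on any matrix; a quick check with a random A standing in for a CT matrix (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 7, 4
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)

b = A @ x

# Column interpretation: b is a linear combination of the columns of A.
b_cols = sum(x[j] * A[:, j] for j in range(n))
assert np.allclose(b_cols, b)

# Row interpretation: each b_i is the inner product of row i with x.
b_rows = np.array([A[i, :] @ x for i in range(m)])
assert np.allclose(b_rows, b)
```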

Page 29:

Example of Column Interpretation

A 32 × 32 image has four nonzero pixels with intensities 1, 0.8, 0.6, 0.4. In the vector x, these four pixels correspond to entries 468, 618, 206, 793. Hence the sinogram, represented as a vector b, takes the form

    b = 0.6 c_206 + 1.0 c_468 + 0.8 c_618 + 0.4 c_793.

Note that each pixel is mapped to a single sinusoidal curve in the sinogram.

Page 30:

Example of Row Interpretation

The ith row of A maps x to detector element i via the ith ray:

    b_i = r_i · x = ∑_{j=1}^n a_ij x_j,    i = 1, 2, …, m.

This inner product approximates the line integral along ray i in the Radon transform.

A small example with a 3 × 3 image (n = 9), where a_ij = length of ray i in pixel j:

    r_i = [ a_i1  a_i2  0  a_i4  0  0  a_i7  0  0 ]
    b_i = r_i · x = a_i1 x_1 + a_i2 x_2 + a_i4 x_4 + a_i7 x_7

Page 31:

Back Projection and the Matrix Transpose

Recall the back projection,

    f(ξ) = ∫_{0}^{2π} g(x cos θ + y sin θ, θ) dθ,

where we integrate g along a sinusoidal curve in the sinogram.

Multiplication with the matrix transpose performs this operation:

    x = Aᵀb = [ c_1 ⋯ c_n ]ᵀ b = ( c_1 · b, …, c_n · b )ᵀ,

where each inner product c_j · b corresponds to the above integration.

Page 32:

Back Projection – Alternative Interpretation

Another formula for back projection, now in terms of the rows of A:

    x = Aᵀb = [ r_1 ; … ; r_m ]ᵀ b = [ r_1 ⋯ r_m ] b.

The interpretation here is that each data element b_i is “distributed back” along the ith ray to the pixels, with weight a_ij:

    x = ∑_{i=1}^m b_i r_i = ∑_{i=1}^m ( b_i a_i1, …, b_i a_in )ᵀ.

The small example from before:

    b_i r_i = [ b_i a_i1  b_i a_i2  0  b_i a_i4  0  0  b_i a_i7  0  0 ]
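The two interpretations of x = Aᵀb – column inner products, and "distributing b_i back along ray i" – agree; a quick check with a random A standing in for a CT matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 7, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x_bp = A.T @ b

# Interpretation 1: inner products of the columns of A with b.
x_cols = np.array([A[:, j] @ b for j in range(n)])
assert np.allclose(x_cols, x_bp)

# Interpretation 2: sum of the rows, each weighted by its data value b_i.
x_rows = sum(b[i] * A[i, :] for i in range(m))
assert np.allclose(x_rows, x_bp)
```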

Page 33:

And Now: The Singular Value Decomposition

Assume that A is m × n and, for simplicity, also that m ≥ n:

    A = U Σ Vᵀ = ∑_{i=1}^n u_i σ_i v_iᵀ.

Here, Σ is a diagonal matrix with the singular values, satisfying

    Σ = diag(σ_1, …, σ_n),    σ_1 ≥ σ_2 ≥ ⋯ ≥ σ_n ≥ 0.

The matrices U and V consist of the singular vectors,

    U = (u_1, …, u_n),    V = (v_1, …, v_n),

and both matrices have orthonormal columns: UᵀU = VᵀV = I_n.

Then ‖A‖₂ = σ_1, ‖A⁻¹‖₂ = ‖V Σ⁻¹ Uᵀ‖₂ = σ_n⁻¹, and

    cond(A) = ‖A‖₂ ‖A⁻¹‖₂ = σ_1 / σ_n.

Page 34:

Relation to Eigenproblems

The matrix AᵀA is symmetric, and hence 1) its eigenvalues are real, and 2) its eigenvectors are real and orthonormal (standard linear algebra stuff):

    AᵀA v_i = σ_i² v_i,    i = 1, 2, …, n.

I.e., the right singular vectors v_i are the eigenvectors of AᵀA, and the squared singular values are the corresponding eigenvalues.

This is not how the SVD should be computed – use only good numerical software. In Matlab:

- Use [U,S,V] = svd(A) or [U,S,V] = svd(A,0) to compute the full or “economy-size” SVD.
- Use [U,S,V] = svds(A) to efficiently compute a partial SVD (the largest singular values and corresponding singular vectors).

Page 35:

Important SVD Relations

Relations similar to the analysis of the Radon transform:

    A v_i = σ_i u_i,    ‖A v_i‖₂ = σ_i,    i = 1, …, n.

Also, if A is nonsingular, then

    A⁻¹ u_i = σ_i⁻¹ v_i,    ‖A⁻¹ u_i‖₂ = σ_i⁻¹,    i = 1, …, n.

These equations are related to the solution:

    x = ∑_{i=1}^n (v_i · x) v_i,        Ax = ∑_{i=1}^n σ_i (v_i · x) u_i,

    b = ∑_{i=1}^n (u_i · b) u_i,        A⁻¹b = ∑_{i=1}^n ( (u_i · b) / σ_i ) v_i.
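These relations can all be checked with numpy.linalg.svd (which returns Vᵀ as its third output; the random test matrix is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

U, sigma, Vh = np.linalg.svd(A)  # note: numpy returns V transposed
V = Vh.T

# A v_i = sigma_i u_i for every i:
for i in range(n):
    assert np.allclose(A @ V[:, i], sigma[i] * U[:, i])

# Solution expansion: A^{-1} b = sum_i (u_i . b)/sigma_i * v_i.
x_svd = sum((U[:, i] @ b) / sigma[i] * V[:, i] for i in range(n))
assert np.allclose(x_svd, np.linalg.solve(A, b))

# cond(A) = sigma_1 / sigma_n:
assert np.isclose(np.linalg.cond(A), sigma[0] / sigma[-1])
```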

Page 36:

What the SVD Looks Like

The singular values decay to zero, with no gap in the spectrum. The decay rate determines how difficult the problem is to solve. Discretization of the Radon transform gives a mildly ill-posed problem.

Page 37:

What the SVD Looks Like

The singular vectors (1D problem) have increasing frequency as i increases.

Page 38:

“Picard Plot” – Without and With Noise

Left: no noise. The singular values σ_i and the right-hand-side coefficients |u_i · b| both level off at the machine precision.

Right: with noise. Now the coefficients |u_i · b| level off at the noise level, and only ≈ 18 SVD components are reliable.

Page 39:

The “Naive” Solution

From now on, the coefficient matrix A is allowed to have more rows than columns, i.e.,

    A ∈ R^{m×n} with m ≥ n.

For m > n it is natural to consider the least squares problem

    min_x ‖Ax − b‖₂.

When we say “the naive solution” x_naive, we mean either the solution A⁻¹b (when m = n) or the least squares solution (when m > n).

We emphasize the convenient fact that the naive solution has precisely the same SVD expansion in both cases:

    x_naive = ∑_{i=1}^n ( (u_i · b) / σ_i ) v_i.
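For m > n the SVD expansion does reproduce the least squares solution; a check on a random overdetermined system:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 8, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

U, sigma, Vh = np.linalg.svd(A, full_matrices=False)  # economy-size SVD
V = Vh.T

# SVD expansion of the naive solution.
x_naive = sum((U[:, i] @ b) / sigma[i] * V[:, i] for i in range(n))

# Least squares solution computed directly.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

assert np.allclose(x_naive, x_lstsq)
```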

Page 40:

Ill-Conditioned Problems

Discrete ill-posed problems are characterized by having coefficient matrices with a large condition number. The solution is very sensitive to errors in the data.

Specifically, assume that the exact and perturbed solutions x̄ and x satisfy

    A x̄ = b̄,    A x = b = b̄ + e,

where e denotes the perturbation (the errors and noise). Then classical perturbation theory leads to the bound

    ‖x̄ − x‖₂ / ‖x̄‖₂ ≤ cond(A) · ‖e‖₂ / ‖b̄‖₂.

Since cond(A) = σ_1/σ_n is large, this implies that x_naive = A⁻¹b can be very far from x̄.
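The bound can be watched in action on a classically ill-conditioned matrix; here a Hilbert matrix stands in for a discrete ill-posed problem (size, exact solution, and perturbation level are arbitrary choices):

```python
import numpy as np

n = 10
# Hilbert matrix: a standard example of an ill-conditioned matrix.
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1)

x_bar = np.ones(n)      # exact solution
b_bar = H @ x_bar       # exact data

rng = np.random.default_rng(6)
e = 1e-10 * rng.standard_normal(n)   # a tiny perturbation of the data
x = np.linalg.solve(H, b_bar + e)    # perturbed solution

rel_err = np.linalg.norm(x - x_bar) / np.linalg.norm(x_bar)
bound = np.linalg.cond(H) * np.linalg.norm(e) / np.linalg.norm(b_bar)

assert rel_err <= bound * (1 + 1e-6)  # the perturbation bound holds...
# ...and the tiny data error is magnified enormously:
assert rel_err > 1e2 * np.linalg.norm(e) / np.linalg.norm(b_bar)
```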

Page 41:

The Geometry of Ill-Conditioned Problems

[Diagram: in R^n = span{v_1, …, v_n} the exact solution x̄ maps to b̄ = A x̄ in R^m = span{u_1, …, u_m}; the noisy data b = b̄ + e maps back to the naive solution x_naive, far from x̄.]

Page 42:

The Need for Filtering

Recall that the (least squares) solution is given by

    x_naive = ∑_{i=1}^n ( (u_i · b) / σ_i ) v_i.

When noise is present in the data b = b̄ + e, then

    u_i · b = u_i · b̄ + u_i · e ≈ u_i · b̄  if |u_i · b̄| > |u_i · e|,
                                  u_i · e  if |u_i · b̄| < |u_i · e|.

We note that:
- Due to the Picard condition, the noise-free coefficients |u_i · b̄| decay.
- The “noisy” components are those for which |u_i · b̄| is small, and they correspond to the smaller singular values σ_i.

Page 43:

Spectral Filtering

Many of the noise-reducing methods treated in this course produce solutions which can be expressed as a filtered SVD expansion of the form

    x_filt = ∑_{i=1}^n ϕ_i ( (u_i · b) / σ_i ) v_i,

where the ϕ_i are the filter factors associated with the method.

These methods are called spectral filtering methods, because the SVD basis can be considered a spectral basis.

A simple approach is to discard the SVD coefficients corresponding to the smallest singular values:

    ϕ_i^TSVD = 1 for i = 1, 2, …, k,    ϕ_i^TSVD = 0 otherwise.

More sophisticated methods will be discussed in the third week.

Page 44:

Truncated SVD

Define the truncated SVD (TSVD) solution as

    x_k = ∑_{i=1}^n ϕ_i^TSVD ( (u_i · b) / σ_i ) v_i = ∑_{i=1}^k ( (u_i · b) / σ_i ) v_i,    k < n.

We can show that if Cov(b) = η² I_m, then

    Cov(x_k) = η² ∑_{i=1}^k (1/σ_i²) v_i v_iᵀ,

and therefore

    ‖x_k‖₂ ≪ ‖x_naive‖₂    and    ‖Cov(x_k)‖₂ ≪ ‖Cov(x_naive)‖₂.

The price we pay for the smaller covariance is bias: E(x_k) ≠ E(x_naive).
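A minimal TSVD solver and the norm comparison can be sketched on a synthetic ill-conditioned problem (the singular value spectrum, noise level, and truncation index below are arbitrary illustrative choices):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """TSVD solution x_k = sum_{i<=k} (u_i . b)/sigma_i * v_i."""
    U, sigma, Vh = np.linalg.svd(A, full_matrices=False)
    coef = (U[:, :k].T @ b) / sigma[:k]
    return Vh[:k, :].T @ coef

rng = np.random.default_rng(7)
n = 20
# Build a test matrix with rapidly decaying singular values.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 10.0 ** (-np.arange(n, dtype=float))   # 1, 1e-1, ..., 1e-19
A = U @ np.diag(sigma) @ V.T

x_bar = rng.standard_normal(n)
b = A @ x_bar + 1e-8 * rng.standard_normal(n)  # noisy data

x_naive = tsvd_solve(A, b, n)  # k = n gives the naive solution
x_k = tsvd_solve(A, b, 8)      # truncate where sigma_i reaches the noise level

assert np.linalg.norm(x_k) < np.linalg.norm(x_naive)
assert np.linalg.norm(x_k - x_bar) < np.linalg.norm(x_naive - x_bar)
```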

Page 45:

TSVD Perturbation Bound

Theorem. Let b = b̄ + e, and let x̄_k and x_k denote the TSVD solutions computed from b̄ and b with the same k. Then

    ‖x̄_k − x_k‖₂ / ‖x̄_k‖₂ ≤ (σ_1 / σ_k) · ‖e‖₂ / ‖A x̄_k‖₂.

We see that the condition number for the TSVD solution is

    κ_k = σ_1 / σ_k,

and it can be much smaller than cond(A) = σ_1/σ_n.

Page 46:

TSVD Solutions

As we increase the truncation parameter k in the TSVD solution x_k, we include more SVD components and also more noise. At some point the noise becomes visible, and then it starts to dominate x_k.

Page 47:

The Truncation Parameter

Note: the truncation parameter k in

    x_k = ∑_{i=1}^k ( (u_i · b) / σ_i ) v_i

is dictated by the coefficients u_iᵀb, not by the singular values.

Basically, we should choose k as the index i where the |u_i · b| start to “level off” due to the noise.

The TSVD solution and residual norms vary monotonically with k:

    ‖x_k‖₂² = ∑_{i=1}^k ( (u_i · b) / σ_i )² ≤ ‖x_{k+1}‖₂²,

    ‖A x_k − b‖₂² = ∑_{i=k+1}^n (u_i · b)² ≥ ‖A x_{k+1} − b‖₂².
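The monotonicity of the two norms is easy to confirm numerically on a random problem (sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
m, n = 12, 8
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

U, sigma, Vh = np.linalg.svd(A, full_matrices=False)

sol_norms, res_norms = [], []
for k in range(1, n + 1):
    coef = (U[:, :k].T @ b) / sigma[:k]
    x_k = Vh[:k, :].T @ coef
    sol_norms.append(np.linalg.norm(x_k))
    res_norms.append(np.linalg.norm(A @ x_k - b))

# ||x_k|| grows and ||A x_k - b|| shrinks monotonically with k.
assert all(a <= c + 1e-12 for a, c in zip(sol_norms, sol_norms[1:]))
assert all(a >= c - 1e-12 for a, c in zip(res_norms, res_norms[1:]))
```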

Page 48:

Where TSVD Fits in the Picture

[Diagram: the same picture as slide 41, with the TSVD solution x_k added – it lies much closer to the exact solution x̄ than the naive solution x_naive.]