Numerical Optimization of Up-conversion Using NanoParticles
February 2016
Author:Nikolaj Knudsen (09925)
Supervisor:Søren P. Madsen
Aarhus University, Department of Engineering
Contents
1 Abstract
2 Introduction to the Thesis
2.1 The Goal of this Thesis
2.1.1 The Setup
3 Nano Fabrication
3.1 Manufacturing at the Nano Scale
4 Introductory Theory of Light, Waves and Optics
4.1 Light
4.2 Electromagnetic Waves
4.3 Refraction and Reflection
4.4 Interference
4.5 Intensity
4.6 Scattering
5 Calculation of Background Field Analytically
5.1 The setup
5.2 The Matrix Transfer Method
5.3 Analytical Solution of Background Field
5.4 Numerical Solution of Background Field
5.4.1 The Setup
5.5 Validation of Background Field
6 FEM model in ComSol
6.1 Perfectly Matched Layers (PML)
6.2 Boundary Conditions
6.3 Applying Symmetry Conditions
6.4 Object Function
6.5 Meshing
6.5.1 Skin Depth
7 Optimization
7.1 Optimum Design Theory
7.2 Global Optimization
7.3 Simulated Annealing
7.3.1 Derivative Free Simulated Annealing Algorithm (DFSA)
7.3.2 Test of DFSA
7.4 Genetic Algorithm
7.4.1 Test of Genetic Algorithm
7.4.2 Rough Domain Sweeps
8 Results
8.1 Parameter Tests
8.1.1 Derivative Free Simulated Annealing
8.1.2 Genetic Algorithm
8.2 Algorithms Applied to Multiple Geometries and Materials
8.2.1 Derivative Free Simulated Annealing
8.2.2 Genetic Algorithm
8.3 Including Pitch as Variable
8.3.1 DFSA Including Pitch
8.3.2 GA Including Pitch
8.4 The Best Designs
9 Conclusion and Final Remarks
10 Appendix
10.1 Appendix 1: Mesh Convergence Studies
10.1.1 Cone - Silver
10.1.2 Cone - Gold
10.1.3 Square - Silver
10.1.4 Square - Gold
10.1.5 Ring - Silver
10.1.6 Ring - Gold
10.2 Appendix 2: Domain Sweeps for All Studies
10.2.1 Cone - Silver
10.2.2 Cone - Gold
10.2.3 Square - Silver
10.2.4 Square - Gold
10.2.5 Ring - Silver
10.2.6 Ring - Gold
10.3 Appendix 3: Refractive Indexes
10.3.1 Silver
10.3.2 Gold
Bibliography
Chapter 1. Abstract
Frequency conversion of sunlight can be used for boosting the efficiency of solar cells by
up-conversion of low energy photons to higher energy photons, which are then used for
current generation in Si solar cells. To increase the efficiency of the up-conversion process, the
light must be concentrated inside the up-converting medium. This thesis investigates the focusing capabilities of nano particles by modeling them with the finite element method. In addition, numerical optimization methods will be used to determine the optimal
size, shape, arrangement and material content of the nano particles. The project is a part of the
“SunTune” project and has included some collaboration with other parties doing numerical and
experimental work on similar systems.
Chapter 2. Introduction to the Thesis
2.1 The Goal of this Thesis
As a continuation of the work done by Johannsen et al.¹, the main objective of this master's thesis is to determine the up-conversion enhancement obtained by including nano particles in the up-converting layer of a solar cell. This enhancement will be quantified by applying optimization algorithms to the problem and identifying the most favorable designs of these nano particles in terms of
enhanced up-conversion. In the pursuit of an optimum design for these nano particles, different
geometries and materials will be investigated and discussed.
2.1.1 The Setup
The modeling of the setup in the solar cells will go as follows: the first step is to determine the electric field contributed by the incident light without any particles in the domain; this electric field will be termed the background field from now on. This will be done analytically in order to save computational time. Afterwards, the influence of the nano particles will be included by letting the multi-physics finite element based program ComSol calculate the scattering of the incident light on these nano particles. The general setup can be seen in figure 2.1.
¹ Johannsen 2015.
Figure 2.1. Left: background electric field in the up-converting layer. Right: scattering of this background field on a nano particle.
The general idea is to get as much of the light, and hence energy, as possible into the up-converting layer, since this layer is doped with erbium; with a stronger field the erbium ions decay faster, thereby releasing more energy and up-converting the light to visible wavelengths applicable to the solar cell. In the figure only one nano particle is shown, but naturally these particles should cover the entire surface of the up-converting layer (ucl from now on). In order to keep the computational model as simple as possible, the geometry and spacing are assumed identical over the whole domain. This assumption makes it possible to do all the necessary calculations on just one particle due to the symmetries in the geometry and spacing. It is, however, necessary to include periodic boundary conditions in order to simulate the presence of the neighboring nano particles. Such a highly regular cell is also termed a unit cell. The most important property of the unit cell is that it can be translated periodically through all of the domain and still represent the problem exactly.
Chapter 3. Nano Fabrication
If the actual manufacturing is not considered in the optimization design calculations, the result could be a geometry looking more like a miniature horse than something more conservative and actually producible. This is often not desired. So, when optimizing the geometry of nano particles for whatever purpose, it is crucial that the result is possible to manufacture in order to make the calculations meaningful, and preferably cheap as well. With this in mind, an investigation is made of what is possible and feasible to produce and implement in real solar cells. In this chapter the basic principles of nano fabrication are introduced in order to set the stage for fabricating nano particles. At the end of this chapter some simple and fairly symmetrical geometries will be introduced for further application in the thesis.
3.1 Manufacturing at the Nano Scale
In general there are two approaches in nano/micro fabrication, the bottom-up and the top-down approach. The first adds material to a silicon wafer and the latter removes material from it. They are also referred to as surface micro-machining and bulk micro-machining respectively. In this case it is most relevant to build nano particles on the up-converting layer (TiO2Er), and hence adding material is a logical approach. The basic idea in the bottom-up approach is:
1. Add material (thin film) to the wafer/substrate
2. Create a mask for patterning (spin-coat photo resist)
3. Pattern the resist by photo lithography
4. Remove or add material in the specified pattern
5. Go back to step 1 if the structure is not completed
The process is illustrated in figure 3.1².
Figure 3.1. Making an oxide pattern: a) deposition of SiO2; b) adding photo resist; c) UV exposure of the photo resist through a mask; d) developing the photo resist image; e) etching (removing) the exposed part of the oxide (SiO2); f) removal of the remaining photo resist.
This nano fabrication is highly governed by the photo lithography process and how one can exploit it in the most appropriate way. In figure 3.1 the photo resist is sketched with straight walls, but this is rarely the case, since diffraction of the UV light tends to focus it in the center and thus creates slightly inclined walls, as exaggerated in figure 3.2³; this proves an important point.
Figure 3.2. Exaggerated illustration of the application of positive and negative resist to the left and right respectively.
If the negative resist is applied, then a process termed liftoff can be exploited to create almost vertical walls of e.g. metallic nano particles. This can be done because metals are most often applied to a surface by some kind of sputtering, i.e. heating up a target and letting it evaporate and diffuse onto the desired surface until the desired thickness is obtained. This
² Franssila 2010, figure 9.1.
³ Franssila 2010, figure 9.10.
process naturally cannot move around corners since the diffusion process is of a very directional
nature, hence shadow areas will be created if the surface is not completely even. The liftoff
process is illustrated in figure 3.3⁴.
Figure 3.3. Exaggerated cross-sectional view of the application of positive and negative resist to the left and right respectively.
Positive resist is weakened when exposed to UV light, while negative resist is made stronger due to a cross-linking effect. This means that the two illustrated scenarios can be created from the same mask.
The edges in figure 3.3 are not entirely straight, since the diffusion cannot be entirely controlled to the desired purpose. Using only this principle, many simple and, if desired, symmetric geometries can be constructed. With this in mind, a couple of geometries which are simple to implement as highly symmetric FEM models can be constructed. A simple cone and e.g. a square shape will look exactly the same in a cross-sectional view, which often is the important view when having to construct something using the thin film deposition method described. A hollow cylinder or ring shape would likewise require almost exactly the same process to construct. The three mentioned geometries (cone, square and ring) will be the ones investigated in this thesis, as they are very simple, highly symmetric and suit the purpose. The people associated with the clean room facility in the iNano building at Aarhus University have supplied approximate production tolerances on e.g. a radius and a height:
• Δr = ±2.5 nm
• Δh = ±1 nm
• Δpitch = ±2.5 nm
Furthermore, an inclination angle of β = 15° is assumed from now on.
⁴ Franssila 2010, figure 23.15.
Chapter 4. Introductory Theory of Light, Waves and Optics
To ease the reader into the problem handled in this thesis, some introduction to the underlying theory is necessary. This chapter will begin by explaining what light is and how we describe it in the form of waves. Afterwards, the concept of interference will be introduced, immediately followed by an explanation of the Matrix Transfer Method, which describes thin film interactions. Following that, scattering due to the nano particle will be introduced and the concept of skin depth will be elaborated. Avoiding unphysical backscattering from the surroundings is a must when calculating the scattering effect; this can be ensured by a FEM technique called Perfectly Matched Layers. The theory introduced in this chapter is from Modern Electrodynamics by Andrew Zangwill (2013) and Introduction to Modern Optics by Grant R. Fowles (2012).
4.1 Light
Light rays, by which visible light is often meant, constitute a small part of the wide electromagnetic spectrum. As the wavelength increases the frequency decreases. This tendency can be confirmed by inspecting the velocity as a function of these two parameters:
v = λ f
Where v is the wave velocity in the given medium, λ is the wavelength and f is the frequency.
Here it is clear that if one of the two increases, the other has to decrease for the given wave
velocity.
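As a small numeric illustration of this relation (a hedged sketch; the 808 nm wavelength is simply a value used later in this thesis, and c is the vacuum speed of light quoted below in this chapter):

```python
# Illustration of v = lambda * f for light in vacuum (v = c).
c = 299.8e6  # speed of light in vacuum [m/s]

wavelength = 808e-9         # [m], example wavelength used later in the thesis
frequency = c / wavelength  # f = v / lambda, about 3.71e14 Hz

# If the wavelength doubles, the frequency halves (fixed wave velocity).
assert abs(c / (2 * wavelength) - frequency / 2) < 1e-3 * frequency
```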
The sun irradiates solar rays in many different frequencies, termed the solar radiation spectrum.
In this spectrum the human eye can only perceive a part of the solar rays and in general just a
tiny fraction of the whole electromagnetic spectrum. Fortunately the solar radiation spectrum
is not difficult to model as it behaves much like the well known blackbody spectrum.
4.2 Electromagnetic Waves
The theory in this section is mainly drawn from Zangwill (2013). Light rays are electromagnetic radiation and can be described as waves propagating through some medium (or vacuum as in space). Waves can propagate in some direction even though there is no overall movement in the medium. Waves carry energy or information, and the wave amplitude indicates directly the given physical quantity. Electromagnetic waves are different compared to density waves (water and sound waves etc.) because they do not need a medium to propagate in. This is why the solar rays can reach the earth through the vast empty space in between. The reason for this phenomenon is well described by James Clerk Maxwell and his Maxwell's equations. Just below, the differential form of the equations in simple matter is displayed.
∇·D = ρ_f  (Gauss' law)          ∇·B = 0  (2. law)
∇×E = −∂B/∂t  (Faraday's law)    ∇×H = J_f + ∂D/∂t  (Ampère's law)
Where E is the electric field, B is the magnetic field, t is time, ρ_f is the free charge density and J_f is the free current density. It can be seen from Maxwell's equations that a change in the electric field induces a magnetic field and vice versa. This means that electromagnetic waves are self-sustaining and hence need no medium to propagate in. For non-dispersive, simple matter the constitutive relations are:

D = εE   and   B = µH

Where D is termed the electric displacement, ε is the permittivity, µ is the permeability and H is simply termed 'the H field'. The constitutive relations collapse the four vector fields to only two, which is an important feature. An example of an ideal electromagnetic wave can be seen in figure 4.1⁵.
⁵ Zangwill 2013, figure 16.4.
Figure 4.1. Electromagnetic wave composed of the real parts of the electric field (E) and magnetic field (B). k is the propagation direction.
In figure 4.1 the perpendicular nature of the k, E and B fields is clearly illustrated. The wavelength, λ, and the phase/wave velocity, c, are also displayed. In general the frequency of a given wave is proportional to the energy in the wave. This is formulated as:
E = h f ,   h = 6.626×10⁻³⁴ J·s
Where E is the energy, h is the Planck constant and f is the frequency. This is the
reason that very high frequency waves like γ-rays are the most powerful, because they carry
the most energy and hence release the most energy at impact. The simplest and most ideal
waves are plane waves. This means that the wave fronts are plane surfaces in the 3-D case,
like electromagnetic waves in air or vacuum, and they are perpendicular to the propagation
direction. An example of plane waves can be seen on figure 4.2⁶.
Figure 4.2. Plane waves propagating in the direction of k
Where r_i are position vectors, k is the propagation direction (wave vector) and φ is the phase at different positions, which is constant on a given plane. The E and B amplitudes are constant on these planes. To describe such waves' movement, a second order partial differential equation with space and time dependence is used, termed a wave equation. The generic
⁶ Zangwill 2013, figure 16.3.
wave equation looks like this:

(∇² − (1/c²) ∂²/∂t²) u(r, t) = 0    (4.1)
Where ∇ is the nabla operator, c is the speed of light and u(r, t) is an arbitrary function of space, r(x, y, z), and time, t. Often it is assumed that the waves are monochromatic, which means that every point on the wave is oscillating at the same frequency. By making this assumption it is possible to describe the wave's movement by spatial variation only, using a Helmholtz wave equation. This is a valid step because one can factor out the sinusoidal oscillation in time. A Helmholtz wave equation in 3-D is a common second order partial differential equation and looks like this:
(∇² + ω²/c²) u(r, ω) = 0 ,    u(r, ω) = ∫_{−∞}^{∞} u(r, t) e^{iωt} dt
Where u(r, t) is an arbitrary function of space, r(x, y, z), and time, t, and ω is the angular frequency. The Fourier method utilizes the linear relations from the Maxwell equations to concentrate on a single frequency component like u(r,ω)e^{−iωt}. Solving this Helmholtz equation makes u(r, t) a general solution when substituted back in the integral below:
u(r, t) = (1/2π) ∫_{−∞}^{∞} u(r, ω) e^{−iωt} dω
To get a particular solution, initial and boundary conditions have to be applied as well. A great feature of such a wave equation is that its solution is of the very simple form:

E(r, t) = E e^{i(k·r − ωt)}
Here E(r, t) is the electric field, E is the amplitude of the electric field and i is the
imaginary number.
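As a quick numerical sanity check, a plane wave of this form can be substituted into the wave equation (4.1) using central finite differences. This is a minimal sketch in dimensionless units (c = k = 1, so ω = ck = 1), not part of the thesis' own derivation:

```python
import cmath

# Check that u(x, t) = exp(i(kx - wt)) satisfies the 1-D wave equation
# u_xx - (1/c^2) u_tt = 0 when w = c k (dimensionless units).
c, k = 1.0, 1.0
w = c * k

def u(x, t):
    return cmath.exp(1j * (k * x - w * t))

h = 1e-3  # step size for the central second differences
x0, t0 = 0.3, 0.7
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2

residual = u_xx - u_tt / c**2
assert abs(residual) < 1e-6  # vanishes up to discretization/rounding error
```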
4.3 Refraction and Reflection
When a plane wave hits an interface between two different materials, it will generally be split into a reflected wave and a transmitted wave. If the wavelength is small compared to the curvature of the interface boundary, then the solution to Maxwell's equations can in this case be expressed simply as a wave amplitude equilibrium at the interface:

E_I(r, t) = E_T(r, t) + E_R(r, t) = E_T e^{i(k_T·r − ω_T t)} + E_R e^{i(k_R·r − ω_R t)}
Where subscripts I, T and R denote the incident, transmitted and reflected waves respectively. As one can imagine, the speed of light depends on which medium it is propagating in. This is because different materials have different molecular structures, making it more or less difficult to propagate through. The consequence of this change in velocity when light passes, for instance, from air to water is the phenomenon of refraction, which physically is a redirection of the light. This is illustrated in figure 4.3.
Figure 4.3. Incident light is partly reflected and partly refracted over an interface
In figure 4.3 it is shown that incident light is partly reflected and partly refracted. In general the refracted angle θ2 will be smaller than the incident angle θ1 if the light moves from a faster medium to a slower one, and larger than the incident angle if it moves from a slower medium to a faster one.
The index of refraction is described as the ratio between the speed of light in vacuum and the speed of light in a given medium, and is expressed like this:

n = c/v = c √(µε) > 1 ,   c = 299.8×10⁶ m/s
Where n is the index of refraction for the given medium, c is the speed of light in vacuum, v is the speed of light in the medium, µ is the permeability of the medium and ε is the permittivity of the medium. Since nothing travels faster than the speed of light in vacuum, the index of refraction will in general be greater than one. The reflective angles are determined by the law of reflection, which simply states that the reflected angle is equal to the incident angle.
θ_incident = θ_reflected
The refractive angles can easily be found using Snell's law, which relates the angles to the traveling light's properties in the given media:

sin θ1 / sin θ2 = v1/v2 = λ1/λ2 = n2/n1 ,   or more commonly:   n1 sin θ1 = n2 sin θ2

Where θ1,2 are the angles, v1,2 the velocities, λ1,2 the wavelengths and n1,2 the refractive indexes. As the figure attempts to illustrate, not everything is refracted. Some
fraction of the incident light is also reflected, and that fraction depends on the material. As the incident angle grows, a larger fraction of the light intensity will be reflected. If the incident angle exceeds the critical angle, then all the light will be reflected; this is called total internal reflection. The specific intensities are determined with the help of Augustin-Jean Fresnel and his equations, the Fresnel equations; these, together with the matching conditions introduced by Hertz, will be introduced in the following.
The s-polarized Fresnel equations are basically ratios of the reflected and transmitted electric field amplitudes to the incident electric field amplitude. The reflection and transmission coefficients for s-polarization are defined as:

r_s = [E_R / E_I]_s = (Z2 cos θ1 − Z1 cos θ2) / (Z2 cos θ1 + Z1 cos θ2)
t_s = [E_T / E_I]_s = 2 Z2 cos θ1 / (Z2 cos θ1 + Z1 cos θ2)
And the reflection and transmission coefficients for p-polarization are defined as:

r_p = [E_R / E_I]_p = (Z1 cos θ1 − Z2 cos θ2) / (Z1 cos θ1 + Z2 cos θ2)
t_p = [E_T / E_I]_p = 2 Z2 cos θ1 / (Z1 cos θ1 + Z2 cos θ2)
In these initial investigations only the s-polarization will be used, as the p-polarization is basically the same calculation twice. Furthermore, at normal incidence the two polarizations are identical. From these coefficients the reflectance and transmittance can be calculated:
R_tot = |r_tot|² = |E_0^B / E_0^F|²   and   T_tot = (n3 cos θ3)/(n1 cos θ1) |t_tot|² = (n3 cos θ3)/(n1 cos θ1) |E_5^F / E_0^F|²
To check for energy conservation the reflectance and transmittance should sum to
unity.
Rtot +Ttot ≡ 1
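The s-polarized coefficients and this energy balance can be checked numerically for a single interface. This is a minimal sketch assuming non-magnetic media, so the impedances reduce to Z_i = Z0/n_i (Z0 cancels in the ratios); the index pair and the 30° angle match values used later in the thesis:

```python
import math

# Single-interface check of the s-polarized Fresnel coefficients and
# energy conservation R + T = 1 (lossless, non-magnetic media assumed).
n1, n2 = 1.0, 2.35           # refractive indexes (air -> TiO2Er)
theta1 = math.radians(30.0)  # incident angle

# Snell's law: n1 sin(theta1) = n2 sin(theta2)
theta2 = math.asin(n1 * math.sin(theta1) / n2)

# Impedances Z_i = Z0 / n_i; Z0 cancels in the ratios, so set it to 1.
Z1, Z2 = 1.0 / n1, 1.0 / n2

rs = (Z2 * math.cos(theta1) - Z1 * math.cos(theta2)) / \
     (Z2 * math.cos(theta1) + Z1 * math.cos(theta2))
ts = (2 * Z2 * math.cos(theta1)) / \
     (Z2 * math.cos(theta1) + Z1 * math.cos(theta2))

R = rs**2
T = (n2 * math.cos(theta2)) / (n1 * math.cos(theta1)) * ts**2
assert abs(R + T - 1.0) < 1e-9  # energy conservation at the interface
```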
The matching conditions read:

n · [D1 − D2] = 0      n · [B1 − B2] = 0
n × [E1 − E2] = 0      n × [H1 − H2] = 0
Where n is the unit normal vector on the interface of interest. These matching conditions, or interface conditions, basically dictate that all of the fields must be continuous over interfaces. The calculation takes advantage of the fact that any plane wave can be decomposed into two orthogonal waves. Normally s- and p-polarized waves are used, where E is strictly perpendicular and strictly parallel to the plane of incidence respectively, as seen on figure 4.4⁷ below.
⁷ Zangwill 2013, figure 17.4.
Figure 4.4. The standard orthogonal polarizations. X-axis represents an interface
4.4 Interference
When there are multiple layers of interest, the transmitted waves become the 'incident' waves at the next interface and are thus partly reflected and refracted there. The reflected wave will then bounce back at the first interface and be partly reflected and transmitted at that interface, and so on. This means that there will be more than one wave in each layer, letting them interact with each other. This phenomenon is called interference and can be either an amplification (constructive interference) or a cancellation (destructive interference) of the waves. The multiple waves in each layer are sketched in figure 4.5.
Figure 4.5. S-polarized monochromatic plane wave reflected and refracted in multiple layers of matter.
In figure 4.5 the first and final layers (air and SiO2) are considered infinite, hence there is no reflection except in the middle layer, the thin film in this case. Interference is best understood by inspecting droplets hitting a water surface. The droplets will create rings in the water, and after a while the outermost rings will be very wide and significantly weaker. Due to the large circumference the waves can be seen as plane waves, as discussed earlier in this chapter. If two identical droplets hit the surface a certain distance apart, interference of the two monochromatic waves of the same frequency will occur. This scenario is illustrated in figure 4.6⁸.
⁸ Miller 2008, figure B10.
The figure shows three snapshots taken from a place in between the two droplets' release points.
Figure 4.6. Top: single droplet observed creating a wave moving downward. Middle: single droplet released at a spot 'below' the other droplet's release point, creating a wave moving slightly upwards. Bottom: two droplets observed at a point in between the dropping spots, creating interference.
In the bottom of the figure, the destructive interference is indicated by the blurred regions with dotted lines, and the constructive interference is indicated by the clearer or sharper regions exactly in between the destructive regions.
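The two interference extremes can be sketched by superposing two equal-amplitude monochromatic waves with a relative phase δ (an illustrative toy calculation, not tied to the droplet geometry):

```python
import cmath

# Amplitude of the superposition of two unit-amplitude monochromatic waves
# differing only by a phase delta.
def combined_amplitude(delta):
    return abs(cmath.exp(1j * 0.0) + cmath.exp(1j * delta))

assert abs(combined_amplitude(0.0) - 2.0) < 1e-12   # constructive: amplitudes add
assert combined_amplitude(cmath.pi) < 1e-12          # destructive: waves cancel
```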
4.5 Intensity
The position-dependent intensity of a general electromagnetic wave is defined as the magnitude of the time average of the Poynting vector over a time T that is much larger than any characteristic time scale. For a monochromatic plane wave, though, it is sufficient to choose T as the period of the wave, and the intensity reduces to the cycle average:

I = |⟨S⟩| = (1/2) ε0 c |E_⊥|²
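A small numeric sketch of the cycle-averaged intensity (the 1 V/m field amplitude is an arbitrary illustration value):

```python
# Cycle-averaged intensity of a monochromatic plane wave in vacuum,
# I = (1/2) * eps0 * c * |E_perp|^2.
eps0 = 8.854e-12  # vacuum permittivity [F/m]
c = 2.998e8       # speed of light in vacuum [m/s]
E_amp = 1.0       # electric field amplitude [V/m], arbitrary example

I = 0.5 * eps0 * c * E_amp**2
print(f"I = {I:.3e} W/m^2")  # about 1.33e-3 W/m^2
```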
4.6 Scattering
If the assumption of the wavelength of an incident monochromatic wave being small compared to the curvature of a material is not valid, then the Fresnel theory breaks down and the wave is said to scatter or diffract instead. From the Fresnel perspective this is a case where many different reflected and refracted waves interfere with each other, producing many waves in many different directions. Physically it is the same effect that fostered the Fresnel equations: a wave sets the charged particles into motion, and these movements induce a retarded field which is felt everywhere. The sum of all these induced fields is termed the scattered field, and the total field is expressed like this:

E = E_inc + E_scatt
Chapter 5. Calculation of Background Field Analytically
In this chapter the reader will be introduced to the problem at hand and how to compose an analytical solution for it, which is compared to a numerical ComSol model of the same setup. In the last section a comparison is made in order to validate the analytical solution against the FEM calculation in ComSol. The theory applied in this chapter is supplied by Fowles (2012).
5.1 The setup
The problem at hand is a multilayer structure subject to a TE(s)-polarized, time harmonic, monochromatic plane wave. The scenario can be seen on figure 5.1, though it is not to scale.

Figure 5.1. Waves traveling through three layers: Air, TiO2Er and SiO2.

The E0 notation simply indicates the incident forward and backward traveling waves, E1 the E-field amplitudes just before crossing the first interface, and so forth. An angle of incidence of θ1 = 30° has been chosen to illustrate that the method works for all angles of incidence. The wavelength has initially been set to λ1 = 808 nm. The thicknesses of the three layers are d_Air = 500 nm, d_TiO2Er = 300 nm and d_SiO2 = 500 nm. The refractive indexes are: n_Air = 1, n_TiO2Er = 2.35 and n_SiO2 = 1.46. These specific values are not important when the goal simply is
to validate the approach, but they are inspired by the ones used by Madsen et al. (2015) for a similar problem. In this project multiple wavelengths will be relevant, and since the refractive index depends strongly on the wavelength, it should be adjusted accordingly. For the coming calculations where the refractive index is involved, data from Appendix 3 will be used.
5.2 The Matrix Transfer Method
When encountering a multilayered structure like the one in figure 5.1, or more explicitly in figure 4.5, one solution could be to add up all the reflected and refracted waves, but this would be cumbersome. Instead one can describe the structure as an optical system with one inlet port and one outlet port and thus express everything in between using a transfer matrix, M_tot, taking into account all the individual layers' reflections and transmissions. This transfer matrix consists of an assembly of local transfer matrices multiplied together, one from each layer and each interface. The idea is to describe everything in one layer at a time using a forward traveling wave and a backward traveling wave, which each represent all the refractions and reflections going in one direction, positive and negative respectively with respect to the direction of propagation. This allows the electric field in each layer to be expressed as a superposition of these forward and backward traveling wave amplitudes:
E(x, y, z) = A_F e^{−i(k_x x + k_zi z)} + A_B e^{−i(k_x x − k_zi z)}
Where i is the imaginary number, k_x is the wave vector amplitude in the x-direction, A is the electric field amplitude, and the F and B subscripts indicate forward and backward traveling wave amplitudes respectively. The flipped sign on the k_zi term in the second, backward going term indicates that it is moving against the propagation direction. The wave vector amplitude is dependent on the refractive index and hence the medium:
k = nω/c
where n is the refractive index, ω is the angular frequency and c is the speed of light in vacuum. The individual transfer matrices are, as mentioned, divided into two groups: those which describe the complex-exponential evolution of the wave amplitude through a layer, and those which describe the wave amplitude across an interface according to the Fresnel coefficients. The first type is denoted T_i, where i indicates the given layer. The latter is termed F_{i-(i+1)}, where the notation indicates that the wave begins in layer i and crosses the interface to layer (i+1).
The transfer matrix in layer one would e.g. look like this:

T_1 = \begin{pmatrix} e^{i k_{z1} d_1} & 0 \\ 0 & e^{-i k_{z1} d_1} \end{pmatrix}
And the transfer matrix crossing from layer one to two looks like this:

F_{12} = \frac{1}{t_{12}} \begin{pmatrix} 1 & r_{12} \\ r_{12} & 1 \end{pmatrix}
where t_{12} and r_{12} are the Fresnel coefficients. Using this principle, a linear system of equations can describe the amplitudes everywhere in the domain. To describe the electric field in the whole domain, one sets up the total transfer matrix and first calculates the outermost amplitudes:

\begin{pmatrix} E_0^F \\ E_0^B \end{pmatrix} = \left[ M_{tot} \right] \begin{pmatrix} E_5^F \\ E_5^B \end{pmatrix}
where subscripts 0 and 5 refer to the incident wave amplitudes and the wave amplitudes at the bottom of the system respectively. This is useful because the incident amplitude is known, and the fact that nothing is reflected back in the last layer can be exploited, i.e. E_0^F = 1 and E_5^B = 0. Using this yields the two remaining amplitudes directly. Knowing both amplitudes at either end of the system makes it possible to calculate the adjacent amplitudes in the same manner:

\begin{pmatrix} E_0^F \\ E_0^B \end{pmatrix} = \left[ M_1 \right] \begin{pmatrix} E_1^F \\ E_1^B \end{pmatrix}, \qquad \left[ M_1 \right] = \left[ T_1 \right] = \begin{pmatrix} e^{i k_{z1} d_1} & 0 \\ 0 & e^{-i k_{z1} d_1} \end{pmatrix}
This procedure can be carried out for all the layers and interfaces to determine the amplitudes required to describe the electric fields in the different layers. Finally, these partial electric fields can be stitched together to form the entire electric field for the whole multilayer structure:

E_{y,tot}(x, y, z) = \begin{cases} E_{Air}(x, y, z) & \text{if } 0 \le z < Int_2 \\ E_{TiO2:Er}(x, y, z) & \text{if } Int_2 \le z \le Int_1 \\ E_{SiO2}(x, y, z) & \text{if } Int_1 < z \le L \end{cases}
5.3 Analytical Solution of Background Field
The analytical solution emerges from the Matrix Transfer Method, which takes into account the numerous reflections and refractions that occur when a multilayer structure is exposed to a light wave at a given angle of incidence. First the amplitudes of the E-fields at both ends of the multilayer structure are determined through a series of transfer matrices and these three auxiliary quantities:
A = i k_{z1} d_1, \qquad B = i k_{z2} d_2, \qquad C = i k_{z3} d_3
\begin{pmatrix} E_0^F \\ E_0^B \end{pmatrix}
= \underbrace{\begin{pmatrix} e^{A} & 0 \\ 0 & e^{-A} \end{pmatrix}}_{T_1}
\underbrace{\frac{1}{t_{12}} \begin{pmatrix} 1 & r_{12} \\ r_{12} & 1 \end{pmatrix}}_{F_{12}}
\underbrace{\begin{pmatrix} e^{B} & 0 \\ 0 & e^{-B} \end{pmatrix}}_{T_2}
\underbrace{\frac{1}{t_{23}} \begin{pmatrix} 1 & r_{23} \\ r_{23} & 1 \end{pmatrix}}_{F_{23}}
\underbrace{\begin{pmatrix} e^{C} & 0 \\ 0 & e^{-C} \end{pmatrix}}_{T_3}
\begin{pmatrix} E_5^F \\ E_5^B \end{pmatrix}

= \frac{1}{t_{12}} \begin{pmatrix} e^{A} & r_{12} e^{A} \\ r_{12} e^{-A} & e^{-A} \end{pmatrix}
\frac{1}{t_{23}} \begin{pmatrix} e^{B} & r_{23} e^{B} \\ r_{23} e^{-B} & e^{-B} \end{pmatrix}
\begin{pmatrix} e^{C} & 0 \\ 0 & e^{-C} \end{pmatrix}
\begin{pmatrix} E_5^F \\ E_5^B \end{pmatrix}

= \frac{1}{t_{12} t_{23}} \begin{pmatrix} e^{A+B} + r_{12} r_{23} e^{A-B} & r_{23} e^{A+B} + r_{12} e^{A-B} \\ r_{12} e^{-A+B} + r_{23} e^{-A-B} & r_{12} r_{23} e^{-A+B} + e^{-A-B} \end{pmatrix}
\begin{pmatrix} e^{C} & 0 \\ 0 & e^{-C} \end{pmatrix}
\begin{pmatrix} E_5^F \\ E_5^B \end{pmatrix}

= \underbrace{\frac{1}{t_{12} t_{23}} \begin{pmatrix} e^{A+B+C} + r_{12} r_{23} e^{A-B+C} & r_{23} e^{A+B-C} + r_{12} e^{A-B-C} \\ r_{12} e^{-A+B+C} + r_{23} e^{-A-B+C} & r_{12} r_{23} e^{-A+B-C} + e^{-A-B-C} \end{pmatrix}}_{M_{tot}}
\begin{pmatrix} E_5^F \\ E_5^B \end{pmatrix}

\Leftrightarrow
\begin{pmatrix} E_0^F \\ E_0^B \end{pmatrix}
= \frac{1}{t_{12} t_{23}} \begin{pmatrix}
\left( e^{A+B+C} + r_{12} r_{23} e^{A-B+C} \right) E_5^F + \left( r_{23} e^{A+B-C} + r_{12} e^{A-B-C} \right) E_5^B \\
\left( r_{12} e^{-A+B+C} + r_{23} e^{-A-B+C} \right) E_5^F + \left( r_{12} r_{23} e^{-A+B-C} + e^{-A-B-C} \right) E_5^B
\end{pmatrix}
Solving the two equations with the two unknowns, applying the assumption of no reflection from the final layer, i.e. E_5^B = 0, and simply setting E_0^F = 1 for this calculation results in:

E_0^B = -0.395 + 0.390i \qquad \text{and} \qquad E_5^F = 0.122 - 0.650i
Now that all parts of the incoming electric field (the forward and backward traveling parts) are known, it is possible to calculate the parts hitting the first surface and use the Matrix Transfer Method again, this time using only the T_1 matrix to calculate the forward and backward traveling electric field components E_1^F and E_1^B. The system of equations is:

\begin{pmatrix} E_0^F \\ E_0^B \end{pmatrix} = \underbrace{\begin{pmatrix} e^{A} & 0 \\ 0 & e^{-A} \end{pmatrix}}_{T_1 = M_1} \begin{pmatrix} E_1^F \\ E_1^B \end{pmatrix}
\begin{pmatrix} E_0^F \\ E_0^B \end{pmatrix} = \begin{pmatrix} e^{A} E_1^F \\ e^{-A} E_1^B \end{pmatrix}
\quad \Leftrightarrow \quad
\begin{pmatrix} E_1^F \\ E_1^B \end{pmatrix} = \begin{pmatrix} e^{-A} E_0^F \\ e^{A} E_0^B \end{pmatrix}

Rewriting in terms of the actual numbers yields:

E_1^B = 0.472 - 0.292i \qquad \text{and} \qquad E_1^F = -0.975 + 0.224i
The same procedure can be applied across the first interface according to the Fresnel equations implied in the F_{12} transfer matrix:

\begin{pmatrix} E_1^F \\ E_1^B \end{pmatrix} = \underbrace{\frac{1}{t_{12}} \begin{pmatrix} 1 & r_{12} \\ r_{12} & 1 \end{pmatrix}}_{F_{12} = M_2} \begin{pmatrix} E_2^F \\ E_2^B \end{pmatrix} = \frac{1}{t_{12}} \begin{pmatrix} E_2^F + r_{12} E_2^B \\ r_{12} E_2^F + E_2^B \end{pmatrix}

Solving this system of equations for the forward and backward traveling waves at location 2, i.e. just past the first interface, yields the amplitudes:

E_2^B = 0.021 - 0.131i \qquad \text{and} \qquad E_2^F = -0.524 + 0.063i
One can continue like this and calculate all the components of the forward and backward traveling waves. The resulting E-field amplitudes are:

E_3^B = -0.092 - 0.096i \qquad \text{and} \qquad E_3^F = -0.365 - 0.381i

E_4^B = 0 \qquad \text{and} \qquad E_4^F = -0.457 - 0.477i

The circle is completed by calculating the final amplitudes and comparing them to the initially calculated values:

E_5^B = 0 \qquad \text{and} \qquad E_5^F = 0.122 - 0.650i
They are exactly the same, as they should be. It is also good practice to calculate the total reflectance and transmittance and check whether conservation of energy is fulfilled. This is done via the reflection and transmission coefficients: the ratio of the initial backward to the initial forward traveling wave amplitude, and the ratio of the final forward to the initial forward traveling wave amplitude, respectively.

r_{tot} = \frac{E_0^B}{E_0^F} = -0.395 + 0.390i \qquad \text{and} \qquad t_{tot} = \frac{E_5^F}{E_0^F} = 0.122 - 0.650i
The total reflectance and transmittance are calculated from these coefficients:

R_{tot} = |r_{tot}|^2 = 0.308 \qquad \text{and} \qquad T_{tot} = \frac{n_3 \cos\theta_3}{n_1 \cos\theta_1} |t_{tot}|^2 = 0.692

These two numbers should sum to one, as no loss in terms of Joule heating etc. is included in this model:

R_{tot} + T_{tot} = 0.308 + 0.692 = 1
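As a cross-check, the matrix chain above can be carried out numerically. The sketch below assumes normal incidence and the layer data from section 5.1; phase conventions may differ from those used in the thesis, so the complex amplitudes need not reproduce the numbers above, but the energy conservation check R_tot + T_tot = 1 must hold regardless:

```python
import numpy as np

# Transfer-matrix sketch at normal incidence (TE) for Air / TiO2:Er / SiO2
# at lambda = 808 nm. The invariant to check is R + T = 1 (lossless media).
lam = 808.0                              # wavelength [nm]
n = [1.0, 2.35, 1.46]                    # refractive indices
d = [500.0, 300.0, 500.0]                # layer thicknesses [nm]
k = [2 * np.pi * ni / lam for ni in n]   # wave numbers, k = n*omega/c

def T(ki, di):
    """Propagation matrix through one layer (A = i*k*d)."""
    A = 1j * ki * di
    return np.array([[np.exp(A), 0], [0, np.exp(-A)]])

def F(ki, kj):
    """Interface matrix from the Fresnel coefficients (TE, normal incidence)."""
    r = (ki - kj) / (ki + kj)
    t = 2 * ki / (ki + kj)
    return np.array([[1, r], [r, 1]]) / t

Mtot = T(k[0], d[0]) @ F(k[0], k[1]) @ T(k[1], d[1]) @ F(k[1], k[2]) @ T(k[2], d[2])

# Incident amplitude E0F = 1, no reflection from below (E5B = 0):
r_tot = Mtot[1, 0] / Mtot[0, 0]          # E0B / E0F
t_tot = 1.0 / Mtot[0, 0]                 # E5F / E0F

R = abs(r_tot) ** 2
Tr = (n[2] / n[0]) * abs(t_tot) ** 2     # n3*cos(th3)/(n1*cos(th1)) = n3/n1 here
```

Since all refractive indices are real, any deviation of R + Tr from one would indicate a bug in the matrix chain.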
Fortunately this is the case. The calculated electric field amplitudes can now be used to describe the whole electric field in each layer by superposition of the forward and backward traveling waves, since the fields are linear through the layers. The boundary conditions have to be fulfilled, and for TE(s)-polarization the electric field has to be continuous across each interface. This is taken care of by offsetting each partial electric field such that they all 'begin from zero' independent of the actual z-position in the multilayer domain.
The three layers' E-fields, including the boundary conditions, look like this:
E_{Air}(x, z) = E_0^F e^{-i(k_{x1} x + k_{z1}(z - L))} + E_0^B e^{-i(k_{x1} x - k_{z1}(z - L))}

E_{TiO2:Er}(x, z) = E_2^F e^{-i(k_{x2} x + k_{z2}(z - z_{Int1}))} + E_2^B e^{-i(k_{x2} x - k_{z2}(z - z_{Int1}))}

E_{SiO2}(x, z) = E_4^F e^{-i(k_{x3} x + k_{z3}(z - z_{Int2}))} + E_4^B e^{-i(k_{x3} x - k_{z3}(z - z_{Int2}))}
As can be seen, all the waves are expressed as decaying exponentials in the direction of propagation. This is why the backward traveling waves have a negative k_z-component: by definition they propagate opposite to the forward traveling waves. The total electric field for the multilayer structure can be expressed as a superposition of these three partial electric fields:

E_{y,tot}(x, y, z) = \begin{cases} E_{Air}(x, y, z) & \text{if } 0 \le z < Int_2 \\ E_{TiO2:Er}(x, y, z) & \text{if } Int_2 \le z \le Int_1 \\ E_{SiO2}(x, y, z) & \text{if } Int_1 < z \le L \end{cases}
where Int_1 and Int_2 refer to interfaces one and two, and L is the total length/height of the entire system.
5.4 Numerical Solution of Background Field
The numerical calculation is set up in ComSol using the physics interface called Electromagnetic Waves, Frequency Domain, which solves the wave equation at the specified initial frequency:

\nabla \times (\nabla \times \mathbf{E}) - k_0^2 \varepsilon_r \mathbf{E} = 0, \qquad f_1 = \frac{c}{\lambda_1}
The multilayer structure in ComSol can be seen in figure 5.2.
Figure 5.2. ComSol geometry for the multilayer structure
5.4.1 The Setup
The incident light is simulated using ports, where the normal intensity is specified in port one. TE(s)-polarization is ensured by inserting e^{-i k_{x1} x} in the y-direction when specifying the electric field; the propagation constant is β = |k_{z1}|. In port two, TE(s)-polarization is ensured by inserting e^{-i k_{x3} x} in the y-direction, with propagation constant β = |k_{z3}|. A Periodic Condition is used in the x-direction and a Perfect Electric Conductor condition is applied in the y-direction. The mesh consists of tetrahedral elements, and the coarseness in the three regions is adjusted by the wavelength in the given medium such that approximately five mesh elements cover one wavelength.
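The five-elements-per-wavelength rule translates directly into a maximum element size per region. A numeric sketch using the layer data from section 5.1 (values in nm):

```python
# Meshing rule sketch: the maximum element size in each region is chosen so
# that roughly five elements span one wavelength in that medium,
# i.e. h_max = lambda / (5 * n). All lengths in nm for lambda = 808 nm.
lam = 808.0
h_max = {name: lam / (5 * n)
         for name, n in [("Air", 1.0), ("TiO2:Er", 2.35), ("SiO2", 1.46)]}
# The optically dense TiO2:Er layer gets the finest mesh, air the coarsest.
```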
5.5 Validation of Background Field
By running a parametric sweep over four different incident angles and plotting both the analytical and the numerical solution of the electric field amplitudes along a line segment down through the multilayer structure, a match should be apparent. In the plots in figure 5.3, the electric field amplitude is on the vertical axis and the z-coordinate on the horizontal axis.
Figure 5.3. Line segment of the E-field amplitudes down the multilayer structure. This line segment is from the center of the structure.
And just to check, another line segment in a corner of the structure is shown in figure 5.4.
Figure 5.4. Line segment of the E-field amplitudes down the multilayer structure. This line segment is from a corner of the structure.
Both figures seem to agree everywhere. Another visual check can be done by comparing the electric fields in a plane (slice) of the multilayer structure at different angles. Normal incidence and 55° have been chosen here.
Figure 5.5. E-field viewed in a vertical plane at normal incidence. Analytical to the left and numerical to the right.
And at 55° it looks like this:
Figure 5.6. E-field viewed in a vertical plane at 55° incidence. Analytical to the left and numerical to the right.
Again, everything seems to agree, which is considered a reasonably solid validation of the analytical solution. This makes a great stepping stone for implementing a nanoparticle in the TiO2:Er layer using the analytical solution, instead of spending numerical power on recalculating the background field again and again in a future iterative optimization procedure. As a spot check of the method, a sweep over many wavelengths has been calculated as well; it is illustrated in figure 5.7.
Figure 5.7. Evaluation of the E-field, analytically and numerically, at a probe point in the domain, plotted over many wavelengths.
Chapter 6. FEM model in ComSol
6.1 Perfectly Matched Layers (PML)
The top and the bottom of the domain consist of perfectly matched layers, in the sense that they imitate a large absorbing medium with no impedance difference across its surface. A wave therefore crosses into the layer without any reflection of the electric field, just as if it passed from a medium of air into another medium of the same air: nothing happens. This behavior is desirable, as reflections from the domain boundaries would produce an unphysical situation in the FEM model. A simple check of the effect has been done by calculating the variance of the E-field in the SiO2 layer with and without the PMLs. It can be seen in figure 6.1.
Figure 6.1. Variance of analytical and numerical E-field in the SiO2 layer. Left: With the PMLs on. Right: PMLs turned off.
The effect is clear: the analytical field is the same in both cases, while the numerical field is almost completely removed when the PMLs are off.
6.2 Boundary Conditions
In general, three types of boundary conditions are applied in the models used in this project:
• Periodic boundary conditions
• Perfect electric conductor
• Perfect magnetic conductor
Periodic boundary conditions are applied to the boundaries in the direction of the polarization. This ensures that, for non-normally incident light, the introduced phase shift is conserved as it should be. This is also called a Bloch-Floquet condition and can by its nature only be applied to a full unit cell.
The perfect electric conductor condition is used in the y-direction for the TE(s)-polarized waves and can be used to simulate mirror symmetry. It can thus only be used for normally incident light, since the phase shift would not be preserved for angled incident light. The defining equation is:

n \times \mathbf{E} = 0
The perfect magnetic conductor condition is applied instead of the periodic condition in the
x-direction for TE-polarized waves and the equation to satisfy is:
n×H = 0
6.3 Applying Symmetry Conditions
In order to keep the computational time down, a smaller domain is a very good idea. As a rule of thumb, the CPU time scales as a power of the number of degrees of freedom (dof):

t_{cpu} \propto (\#dof)^{\beta}

This implies that half a domain contains half the number of degrees of freedom, and hence the CPU time decreases drastically. This can be exploited by applying symmetry conditions to the model, thereby reducing the domain to a fraction of its original size. In this particular case the ComSol model can be reduced to a quarter unit cell, with the limitation that the nanoparticle geometry must possess the same symmetries as the domain. It should be noted that it is the E-field which needs to be symmetric, not exclusively the geometry.
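The gain from the quarter unit cell can be quantified under an assumed exponent. A one-function sketch, where β = 1.5 is an illustrative value rather than a measured one:

```python
# Scaling-argument sketch for t_cpu ∝ (#dof)^beta: reducing the domain to a
# quarter unit cell cuts the dofs to a quarter, so the runtime drops by a
# factor 4**beta. beta = 1.5 is an assumed illustrative exponent, not measured.
def runtime_factor(dof_fraction, beta=1.5):
    """Relative runtime when only dof_fraction of the original dofs remain."""
    return dof_fraction ** beta

quarter = runtime_factor(0.25)   # (1/4)**1.5 = 1/8, i.e. an 8x speed-up
```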
6.4 Object Function
Since the up-conversion is to be enhanced, an appropriate cost function should be defined which takes this enhancement into account. The up-conversion process is thought to be proportional to the light intensity in the up-conversion layer (ucl), raised to some exponent which depends on the intensity in the ucl. Madsen et al. suggest an expression which integrates the intensity over the whole volume of the ucl:

L = \frac{\int_{UCL} I \, dV}{\int_{UCL} I_0 \, dV}
By raising this expression to the exponent n, a suitable and very responsive object function is defined. The exponent is set to n = 1.5, since the up-conversion goes as I^2 for low intensities and more like I for high intensities. Furthermore, when optimizing it is customary to minimize a function; hence, when something is to be maximized, the negative value is minimized instead. Thus the objective function applied in this thesis will be the following:

L = -\left( \frac{\int_{UCL} I \, dV}{\int_{UCL} I_0 \, dV} \right)^{1.5}
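Discretized on a mesh, the objective is a ratio of two weighted sums. A minimal sketch, assuming a uniform mesh (so a single volume per sample) purely for illustration:

```python
import numpy as np

# Object function sketch: L = -( int_UCL I dV / int_UCL I0 dV )**n with n = 1.5.
# I and I0 are intensity samples on the same mesh of the ucl; dV is the volume
# per sample (uniform mesh assumed here for simplicity).
def objective(I, I0, dV=1.0, n=1.5):
    ratio = np.sum(I * dV) / np.sum(I0 * dV)
    return -ratio ** n

# No enhancement (I == I0) gives ratio 1 and L = -1; any enhancement pushes L
# below -1, which a minimizer rewards.
L_base = objective(np.ones(100), np.ones(100))
L_enh = objective(2 * np.ones(100), np.ones(100))
```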
6.5 Meshing
6.5.1 Skin Depth
The skin effect is when an alternating current is mainly distributed near the surface of a conductor rather than evenly over the whole conducting area. It occurs due to the opposing eddy currents induced by the alternating current, and is a combination of Faraday's law and Ohm's law. The skin depth is the characteristic length scale for quasi-static field penetration into a conductor and is defined by this expression:
\delta(\omega) = \sqrt{\frac{2}{\mu \sigma \omega}}
where μ is the permeability, σ is the conductivity and ω is the angular frequency. As the expression dictates, the skin depth decreases with increasing frequency, which leads to an increasing effective resistance due to the eddy currents. In figure 6.2 the electric field is essentially damped away due to the increased resistance near the surface.
Figure 6.2. Skin depth in a good conductor. Z is the distance from the surface
Furthermore, for a good conductor, Snell's law implies that the refracted wave propagates essentially normal to the interface, independent of the angle of incidence.
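The quasi-static skin depth can be put into numbers. A sketch at the working wavelength of 808 nm; the silver conductivity is an assumed DC value for illustration only, and at optical frequencies the quasi-static formula is a rough estimate at best:

```python
import math

# Quasi-static skin depth delta = sqrt(2 / (mu * sigma * omega)).
# sigma below is an assumed DC conductivity for silver (illustration only).
mu0 = 4 * math.pi * 1e-7        # vacuum permeability [H/m]
sigma_ag = 6.3e7                # assumed DC conductivity of silver [S/m]
c = 299_792_458.0               # speed of light in vacuum [m/s]
lam = 808e-9                    # wavelength [m]
omega = 2 * math.pi * c / lam   # angular frequency [rad/s]

def skin_depth(omega, sigma, mu=mu0):
    return math.sqrt(2 / (mu * sigma * omega))

delta = skin_depth(omega, sigma_ag)          # a few nanometres
shallower = skin_depth(2 * omega, sigma_ag)  # doubling omega shrinks delta
```

The second evaluation illustrates the frequency dependence: δ scales as 1/√ω, so higher frequencies confine the field closer to the surface.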
Mesh convergence has been investigated for all the geometries and materials introduced. Initially, mesh refinement in the ucl alone was investigated, obtaining no real gain while the computational time quickly exploded. Secondly, the maximum allowable mesh size in the nanoparticle was examined. An example of this for the square geometry, for both silver and gold, is shown in figure 6.3.
Figure 6.3. Mesh convergence studies for the square geometry. Left: Silver particle. Right: Gold particle. The green line is the object function. The blue line is the intensity ratio. The red is CPU time and the magenta is the change in intensity between the current intensity and the one from the last calculation.
As can be seen, in both cases no real mesh convergence has been reached, and it does not seem like it will be at all. The remaining mesh convergence studies for the other geometries have been appended in Appendix 1; all of them show the same pattern. This is not a problem, though, since the goal is simply to locate the good designs for the nanoparticles, and a fair indication of a good design is enough to run the optimization algorithms successfully. The CPU time is seen to rise fast as the mesh factor goes towards an infinitely fine mesh. A mesh factor of 1.5 has been used for all of the studies. An example of the chosen final mesh, for e.g. the cone geometry, can be seen in figure 6.4.
Figure 6.4. Mesh example of the symmetric cone geometry with a mesh factor of 1.5. Left: Full domain. Right: Focused on the nanoparticle.
Chapter 7. Optimization
In this chapter the reader will first be introduced to the concept of optimum design and the general terminology. Following this, global optimization will be introduced and a couple of optimization algorithms will be discussed, leading up to the choice of approach for optimizing the wave-optics problem at hand. The theory in this chapter is mainly supplied by Jasbir Arora 2012 (Introduction to Optimum Design).
7.1 Optimum Design Theory
It has always been one of the grandest challenges for engineers to come up with the most cost- and design-efficient solution to whatever problem they may face. This is why optimization algorithms are so useful compared to the conventional method of trial and error, which is highly dependent on the given engineer's background, experience and funding in terms of attempts. The flowcharts of these two methods are outlined in figure 7.1.⁹
For both processes multiple steps are identical, and both are in fact iterative. The charts are self-explanatory, but the main difference is that for the optimum design method a cost function is defined as a means of evaluating how good the design is. The cost function is generally minimized and defined as:

f(x) = f(x_1, x_2, ..., x_n)

This cost function can be subject to multiple constraints depending on the problem, e.g. p equality constraints:

h_j(x) = h_j(x_1, x_2, ..., x_n) = 0, \qquad j = 1 \text{ to } p

⁹ Arora (2012), figure 1.2.
Figure 7.1. Comparison of (a) the conventional method and (b) the optimum design method.
and m inequality constraints:

g_i(x) = g_i(x_1, x_2, ..., x_n) \le 0, \qquad i = 1 \text{ to } m
Usually the iterative process is stopped when a satisfactory design is reached, in both cases. For the optimum design method, a satisfactory design usually means that the design cannot be improved anymore, or that the algorithm has been running for a very long time. This is also termed a convergence criterion or stop criterion. How much and how fast an algorithm stalls is problem specific and has a great influence on the computational time. The convergence/stop criterion is usually defined as:

| f(x*) − f(x) | ≤ ε

where x* is the existing minimum point and x is the present trial or candidate point. The conventional method might suffer the fate of being stopped before the real gain is achieved, because no real merit (in terms of a cost function) is measured and trend information is often not calculated, losing valuable insights.
A feasible point is a point which obeys all the constraints and thus is located in the feasible region of the design space. All the feasible points together represent the feasible set:

S = { x | h_j(x) = 0, j = 1 to p; g_i(x) ≤ 0, i = 1 to m }

A minimum point is a feasible point from which it is not possible to move in any direction in the design space without increasing the cost function value, or more formally:

f(x*) ≤ f(x)
If this statement is satisfied in all of the design space, the point is called a global minimum. If it is only satisfied in a small neighborhood of the feasible point x*, it is termed a local minimum. Generally, for both types of minima, if the inequality is strictly satisfied then it is a strong (strict) minimum, otherwise it is a weak minimum. These different definitions are illustrated in figure 7.2.¹⁰
Figure 7.2. Representation of optimum points in a bounded domain.
Point A would be a local maximum, since the function cannot increase in a small neighborhood of this point, but not a global maximum, since greater values are achieved at multiple points (C and F). Point B is a local minimum, since the function cannot decrease in a small neighborhood of this point, but again not a global extremum. C is a local maximum for the same reasons as point A. Point D is, like point B, a local minimum. Because this function is bounded, points E and F are the global extrema, the global minimum and maximum respectively. Had it been an unbounded problem, points D and C would be the global extrema instead. The imposed constraints make points E and F strict global extrema. In the unbounded scenario, points D and E would be weak global extrema.
Generically there are two methodologies of optimization, namely the optimality criteria methods and the search methods. The former specify optimality conditions the given function should satisfy at its minimum point, and typically locate a minimum point through numerical methods incorporating these optimality criteria. They are also termed indirect methods; they often focus on a small neighborhood of the given point and thus will probably locate a local minimum. The latter search the design space from an initial point and iteratively improve the existing minimum point until the convergence criterion is satisfied. Compared to the optimality criteria methods, these search methods often do not investigate the small neighborhood around the existing minimum point greedily, since they always compare against the best point found, and they may have a high degree of random search in the design space. This also implies that a better point could very well be located close by. These search methods are also termed direct methods, can be viewed as global search algorithms, and will be applied heavily to the problem of this thesis.

¹⁰ Arora (2012), figure 4.2.
In the case one has to maximize instead of minimize a cost function, a simple sign change of the cost function is carried out, but attention should be paid to the inequality constraints as well, since they should be flipped accordingly due to the sign change. If the problem only consists of two variables and some constraints, then simply plotting the contours is a very illustrative tool and a fully valid optimization method. An example of this graphical method is shown in figure 7.3. The principle of this method will in the coming sections also be applied to the problem of this thesis, in terms of locating a suitable domain based on contours from rough domain sweeps.
Figure 7.3. Graphical method. The contour bounded by four inequality constraints is shown, and the cost function is shown in the title of the figure. All the dots are possible solutions. Red dots are infeasible points, light blue are feasible points, dark blue are local minima and the green is the best design (point) found. The colorbar indicates the cost function value.
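The dot-sampling idea of figure 7.3 can be mimicked by a brute-force grid sweep. A toy sketch, where the cost function and the single inequality constraint are made-up stand-ins, not the thesis problem:

```python
import itertools

# Graphical/enumeration sketch: sweep a two-variable grid, discard infeasible
# points, keep the best feasible design. Toy cost and constraint (assumptions).
def cost(x, y):
    return (x - 1) ** 2 + (y - 2) ** 2

def feasible(x, y):
    return x + y <= 2.5            # a single inequality constraint g(x) <= 0

grid = [i / 10 for i in range(31)]  # 0.0, 0.1, ..., 3.0
best_cost, best_x, best_y = min(
    (cost(x, y), x, y)
    for x, y in itertools.product(grid, grid)
    if feasible(x, y)
)
# The unconstrained optimum (1, 2) is infeasible, so the best feasible grid
# point lands on the constraint boundary near (0.75, 1.75).
```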
In this particular problem, illustrated in figure 7.3, the analytic expression for the cost function is known, continuous and smooth, i.e. easy and fast to solve. Furthermore, it only consists of two variables. Often more variables are needed, and the cost function might be a lot more complex or non-linear, as it often has to represent a real physical system of interest. If a 'nice' analytical expression is readily available, though, gradient based search algorithms will do very well in terms of determining the optimum design, since these methods utilize the local gradient information. The most famous might be the steepest descent algorithm, which simply takes steps in the direction of steepest descent, as the name suggests; consecutive steps therefore form characteristic 90° angles. There are many ways to determine the step length in a smart and efficient way, and even other auxiliary algorithms to estimate the gradient numerically, e.g. the Newton methods. An even smarter gradient based algorithm is the conjugate gradient method, which builds on the steepest descent algorithm but 'cuts corners' in the sense that it combines the current steepest descent direction with the previous search direction instead of using the current one alone. This feature greatly decreases the computational time in comparison to the steepest descent algorithm.
For the problem illustrated in figure 7.3 the graphical method is well suited, since it only has two variables and a continuous design space. This allows for a fairly fast computation, since the number of possible combinations is limited:
N_c = \prod_{i=1}^{n_d} q_i
where n_d is the number of variables and q_i is the number of allowable discrete values for each variable. Going from two to three variables with e.g. 100 possible values each, the number of possible combinations grows from 100² = 10,000 to 100³ = 1,000,000, and so on for an increasing number of variables. This makes full enumeration time consuming for the indirect methods, which rely on it to a high degree, compared to the direct methods, which try to reduce the search to a partial list of the possible combinations by using heuristics and sometimes multiple clever strategies.
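The combination count above in executable form, a one-liner over the per-variable grid sizes:

```python
from math import prod

# N_c = q_1 * q_2 * ... * q_nd: the grid size explodes as variables are added.
def n_combinations(q_per_variable):
    return prod(q_per_variable)

two_vars = n_combinations([100, 100])          # 10 000 grid points
three_vars = n_combinations([100, 100, 100])   # 1 000 000 grid points
```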
7.2 Global Optimization
As mentioned in the previous section, the direct search methods are very suitable for global optimization, since they require no derivatives of the given cost function and generally search the entire design space without getting stuck in a local minimum. This independence of derivatives allows optimization of problems for which derivatives are unavailable or computationally too expensive to calculate. Furthermore, convergence of these methods can be proved if the functions are assumed to be differentiable and continuous. Two of the most common global optimization algorithms are the Simulated Annealing algorithm and the Genetic algorithm, which both can solve a wide range of optimization problems, though they might be computationally heavy due to this versatility.¹¹
7.3 Simulated Annealing
The Simulated Annealing (SA) algorithm is a stochastic approach which, as the name suggests, emulates the annealing process in metallurgy: heating of a material followed by slow cooling, letting the crystal structure reduce its defects. At high temperatures the atoms in the crystal structure are able to move freely and randomly around, potentially through states of higher internal energy, to possibly reach an absolute minimum energy. In this process it is crucial that the cooling is slow and controlled, spending enough time at each temperature level to avoid getting stuck in a local minimum. The idea of the method is to create random points in the neighborhood of the current best point and evaluate the cost function. If the cost function is decreased, the point is accepted and the best cost function value is updated accordingly. If the cost function is not lowered, there is still some probability of the point being accepted, depending on the emulated temperature of the system. The acceptance is based on the probability density of the Boltzmann-Gibbs distribution: if this probability density has a greater value than a certain random number, the point is accepted and the best cost function value is updated even though the value worsens. To calculate this probability density, a parameter termed the temperature is defined. The temperature is initially set high and is then reduced as the system emulates cooling; this is termed the cooling schedule of the algorithm. As mentioned, the acceptance of trial points is linked to this temperature, and as it is lowered, the probability of accepting a bad point decreases toward zero. This strategy helps the algorithm avoid getting stuck in a local minimum. Since the method only requires evaluation of the cost function and the corresponding constraints, the cost function need not be continuous or differentiable, unlike in many other methods. The main deficiency of the algorithm is the unknown number of evaluations required, which affects the computational time. Since almost no information about the function is needed, none is given either, and it is therefore hard to determine when the global minimum has been reached.

¹¹ Arora (2012), table 15.4.
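The acceptance rule described above can be sketched as a short Metropolis-style test; a minimal illustration of the principle, not the exact formulation implemented later in this chapter:

```python
import math
import random

def sa_accept(f_trial, f_current, T, rng=random.random):
    """Simulated-annealing acceptance test: always accept an improvement;
    accept a worsening move with Boltzmann-Gibbs probability
    exp(-(f_trial - f_current)/T), which tends to zero as T is lowered."""
    if f_trial <= f_current:
        return True
    return rng() < math.exp(-(f_trial - f_current) / T)
```

At high temperature almost any trial point passes; as T goes toward zero only improvements survive, which is exactly the cooling behavior described above.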
7.3.1 Derivative Free Simulated Annealing Algorithm (DFSA)
The regular Simulated Annealing algorithm consists of a global search phase and a local search phase, where the local search phase could be a gradient based method such as the Conjugate Gradient method or the derivative free Nelder-Mead method. In the variant implemented here, called Derivative Free Simulated Annealing¹² (DFSA), the local search phase is replaced by a trial point generation mechanism. This generation mechanism either picks a completely random point in the design space or locates a point in a small neighborhood of
¹² Ali (2012).
the current minimum point. The generation mechanism is defined as:

g_{xy} = \begin{cases} 1_m(\Omega) & \text{if } \omega \le 0.75 \\ RD(x) & \text{if } \omega > 0.75 \end{cases}

where 1_m(Ω) indicates a completely random point in the design domain and ω is a random number between zero and one. RD(x) (Random Direction) is the direct substitute for the local search phase; the biggest difference is that RD(x) evaluates the cost function only once per trial point generation, not multiple times as in a local search method. The following will present the algorithm and elaborate on the different steps afterwards. Some of the steps have been modified slightly in order to adapt to the problem at hand, namely the different nanoparticle geometries.
The DFSA algorithm:¹³

Here x is the initial design point, t is a counter, T_0 is the initial temperature, Δ_SA0 is the initial step length for RD(x), and L_t is the length of the Markov chain, i.e. the number of trial points generated at each temperature, all independent of the previously generated points. Theoretical convergence is assumed as T → 0, t → ∞ and L_t → ∞. The implementation of this algorithm has been done as prescribed, except for 'Remark 1'¹⁴, which addresses the issue of a point being perturbed out of the design space due to the random nature of the generation mechanism:

y_i = x_i + \omega (u_i - x_i) \qquad \text{or} \qquad y_i = l_i + \omega (x_i - l_i)
¹³ ¹⁴ Ali (2012).
where u and l are the upper and lower boundaries respectively. The modification is necessary since the chosen geometries introduce constraints in order to obey the prescribed incline angle for all three geometry configurations. One more constraint is introduced for the ring geometry, since the inner radius has a certain minimum angle. The reason the article does not take such incidents into account is that it recommends using the Augmented Lagrangian algorithm to handle constraints, penalizing the cost function if constraints are violated. This has not been done here. The modification has been tested in order to ensure the integrity of the implementation. In figure 7.4 it can be seen that the modification is not necessary for the chosen domain, since the inclination angle has been locked at 15° for all three geometries. Nevertheless, it works and will not be removed, in case another domain is wanted for future calculations. The modification simply picks a random in-domain value for either the specified height or the radius, depending on which one violates the domain.
Figure 7.4. Test of the modified generation mechanism at different inclination angles.
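As an illustration only (a Python sketch rather than the thesis's Matlab implementation; the function name is hypothetical), the generation mechanism with the in-domain resampling fallback described above can be written as:

```python
import random

def trial_point(x, lower, upper):
    """Generate a trial point coordinate-wise: move a random fraction of the
    way toward the upper or lower bound, i.e. y_i = x_i + w(u_i - x_i) or
    y_i = l_i + w(x_i - l_i). If a coordinate ends up outside [l_i, u_i]
    (e.g. pushed out by an extra geometric constraint), resample it
    uniformly in-domain, mirroring the modification described above."""
    y = []
    for xi, li, ui in zip(x, lower, upper):
        w = random.random()
        yi = xi + w * (ui - xi) if random.random() < 0.5 else li + w * (xi - li)
        if not (li <= yi <= ui):        # fallback: random in-domain value
            yi = random.uniform(li, ui)
        y.append(yi)
    return y
```

Each coordinate stays inside its box by construction here; the fallback only becomes relevant once additional geometry constraints modify the trial point.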
The initial step length is calculated as:
∆SA0 = ζ × max{ui − li | i = 1, ..., n},  where 0 < ζ < 1
The initial temperature T0 is calculated by running a Markov chain before entering the algorithm,
in order to probe the design space and set the temperature accordingly:
T0 = ∆f⁺ (ln(m2 / (m2χ0 − m1(1 − χ0))))⁻¹
Where m1 and m2 are the numbers of trial points that decrease and increase the cost function
value, respectively, and χ0 is the desired initial acceptance ratio of the first Markov chain; it
is set to 0.9 to ensure that 90% of the points are accepted initially. ∆f⁺ is the average cost
increase ∆f over the cost-increasing moves. The temperature is afterwards adjusted using this
expression:
Tt+1 = Tt (1 + Tt ln(1 + δ) / (3σ(Tt)))⁻¹
Where δ is a parameter controlling the cooling speed and σ is the standard deviation of the cost
function evaluations from the present Markov chain. The stopping criterion in the article is set to:
Tt ≤ ε,  where ε = min(10⁻³, 10⁻³T0)
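The three ingredients above (initial temperature, cooling update and stopping criterion) can be sketched together as follows. This is a Python illustration, not the thesis's Matlab code; following the convention in Ali et al. [4], m1 and m2 count the cost-decreasing and cost-increasing trial moves of the probing chain, avg_df_plus is the average cost increase over the increasing moves, and sigma_of_chain stands for the standard deviation of the cost values in the current Markov chain.

```python
import math

def initial_temperature(avg_df_plus, m1, m2, chi0=0.9):
    """T0 = avg_df_plus * (ln(m2 / (m2*chi0 - m1*(1 - chi0))))^-1."""
    return avg_df_plus / math.log(m2 / (m2 * chi0 - m1 * (1 - chi0)))

def cool(T, sigma, delta):
    """One cooling step: T_{t+1} = T * (1 + T*ln(1 + delta) / (3*sigma))^-1."""
    return T / (1.0 + T * math.log(1.0 + delta) / (3.0 * sigma))

def temperature_schedule(T0, sigma_of_chain, delta=0.1):
    """Yield temperatures until the stopping criterion T <= min(1e-3, 1e-3*T0)."""
    eps = min(1e-3, 1e-3 * T0)
    T = T0
    while T > eps:
        yield T
        T = cool(T, sigma_of_chain(T), delta)
```

Note that the update can be rewritten as 1/Tt+1 = 1/Tt + ln(1+δ)/(3σ), so for constant σ the reciprocal temperature grows by a fixed amount per chain and the schedule always terminates.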
7.3.2 Test of DFSA
The implemented DFSA has been tested on four suitable test functions to check how well it
performs and whether it is able to locate the global minimum point. The test functions are
plotted in figure 7.5 for better visualization.
Figure 7.5. Test function plots. The two upper functions are considered fairly easy to solve due to their simple structure, but for the two lower ones the global minimum is much tougher to locate. The color bars indicate the cost function value.
The actual test results for the most relevant function, the Drop Wave function, can be seen
in table 7.1; it is illustrated in the bottom right of figure 7.5.
Table 7.1. Drop Wave function calculation carried out by the DFSA algorithm. The global minimum point is located at (0, 0) and the minimum cost function value (LowestEnergy) is −1. The problem has been solved 15 times to give a more statistical result. cpuMIN is the computational time in minutes.
From these results it is clear that a lot of function evaluations are needed in order to
fulfill the stopping criterion for this problem. Several of the ensembles, or runs, of the algorithm
show a significantly smaller number of function evaluations, indicating that the initial probing of
the domain located something close to the global minimum value; the algorithm thus began
with a very low T0, making almost no moves possible. These particular cases also reveal that
they did not actually locate the global minimum, unlike almost all the other runs. This
confirms that the algorithm gets stuck in a local minimum if the annealing is too fast.
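For reference, the Drop Wave benchmark discussed above has a simple closed form (quoting the commonly used definition, with the global minimum of −1 at the origin):

```python
import math

def drop_wave(x, y):
    """Drop-Wave test function: highly multimodal, global minimum -1 at (0, 0)."""
    r2 = x * x + y * y
    return -(1.0 + math.cos(12.0 * math.sqrt(r2))) / (0.5 * r2 + 2.0)
```

The dense rings of local minima around the origin are what make a too-fast annealing schedule fail on this function.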
7.4 Genetic Algorithm
As a point of comparison for the DFSA algorithm, the Genetic Algorithm (GA), another versatile
global search algorithm, will be tested and, if successful, applied to the problem of this thesis.
Matlab has such an algorithm built into its optimization app, and it will be exploited in the
following optimization analysis. Generally, the Genetic Algorithm emulates the evolution of a
population (designs) through generations (iterations) of mating (local information), mutation
(random search) and elitism (preserving the most fit members). The idea is that each new
generation is built from the current population through three different rules:
• Selection rules select the individuals, termed parents, that contribute to the population
at the next generation
• Crossover rules combine two parents to form children for the next generation
• Mutation rules apply random changes to individual parents to form children
The algorithm can be tuned by adjusting the number of generations, population size and
different rules of selection, crossover and mutation.
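The three rules can be sketched as a minimal generation update (an illustration only; Matlab's built-in ga uses different default operators and options, and the function below is hypothetical):

```python
import random

def next_generation(pop, cost, bounds, elite=1, mut_rate=0.1):
    """Build a new population from the current one (minimization):
    elitism keeps the best members, selection picks fitter parents via
    2-way tournaments, crossover blends two parents, and mutation
    resamples a random gene uniformly within its bounds."""
    ranked = sorted(pop, key=cost)
    new_pop = [list(ind) for ind in ranked[:elite]]          # elitism
    def select():                                            # tournament selection
        a, b = random.choice(pop), random.choice(pop)
        return a if cost(a) <= cost(b) else b
    while len(new_pop) < len(pop):
        p1, p2 = select(), select()
        w = random.random()                                  # blend crossover
        child = [w * g1 + (1 - w) * g2 for g1, g2 in zip(p1, p2)]
        if random.random() < mut_rate:                       # mutation
            i = random.randrange(len(child))
            child[i] = random.uniform(*bounds[i])
        new_pop.append(child)
    return new_pop
```

Because elitism copies the best member unchanged, the best cost in the population can never worsen from one generation to the next.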
7.4.1 Test of Genetic Algorithm
To test and make an initial comparison of the two algorithms, the genetic algorithm has been
applied to the same four test functions and the results are presented in table 7.2.
Table 7.2. Genetic algorithm applied to four test functions.
As can be seen from the table, not all of the initial tests located the global minimum
cost function value, but they came fairly close; the global minimum is 0 for Rosenbrock's Valley
and −1 for the Drop Wave function. It should be noted that only 2000 evaluations have been
carried out in all four cases, much fewer than the 40,000 for DFSA on the Drop Wave problem,
which naturally implies a much lower cpu-time in comparison to DFSA.
7.4.2 Rough Domain Sweeps
In order to make the two algorithms run as efficiently as possible on the FEM problem
that is the subject of this thesis, some initial parameter studies have been done. But first a
suitable design space has to be determined, and this has been carried out for three different
geometries (Cone, Square and Ring) and two different materials (Ag and Au). The choice is
based on the findings of Madsen et al. [5] for a similar problem exposed to light of 808 nm
wavelength, i.e. approximately half the wavelength of the problem at hand. The domain has to
be at least the same size as theirs, which is a radius in the interval [30; 60] nm and a height in
[5; 25] nm. The size of the domain sweep has been chosen to be:
• Radius = [35; 250] nm
• Height = [6; 70] nm
An example of one of these rough domain sweeps is shown in figure 7.6.
Figure 7.6. Rough domain sweep for a silver nano particle formed as a cone. The resolution of the sweep is 7 nm. The left plot shows the 3D surface of the cost function, L = Iratio^1.5, and the color bar indicates this cost function value. The right plot is a contour plot of the same sweep.
The plots indicate that the swept area was unnecessarily large, and it is in fact fairly
similar to the five other studies; the remaining domain sweeps can be seen in Appendix 2. A
rescaling of the domain and a refinement of the sweep resolution have been carried out, as
seen in figure 7.7.
Figure 7.7. Rough domain sweep for a silver nano particle formed as a cone. The resolution of the sweep is 2 nm. The left plot shows the 3D surface of the cost function, L = Iratio^1.5, and the color bar indicates this cost function value. The right plot is a contour plot of the same sweep.
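Conceptually such a rough sweep is just a brute-force grid evaluation (a sketch; in the thesis each cost evaluation calls the ComSol model, so the grid step is what controls the total run time):

```python
def domain_sweep(cost, r_lo, r_hi, h_lo, h_hi, step):
    """Evaluate cost(radius, height) on a regular grid and return the
    best (lowest-cost) point together with its cost value."""
    best = None
    r = r_lo
    while r <= r_hi:
        h = h_lo
        while h <= h_hi:
            c = cost(r, h)
            if best is None or c < best[0]:
                best = (c, r, h)
            h += step
        r += step
    return best
```

Halving the step quadruples the number of model evaluations, which is why the first sweep was run coarsely at 7 nm before refining to 2 nm on the rescaled domain.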
CHAPTER 8
Results
In this chapter the actual findings of this thesis are presented: the most efficient
parameters for the two global search algorithms, the determination of the best nano particle
designs for two and three variables (the latter including the pitch as a variable), and the
corresponding visualization of these optimum designs.
8.1 Parameter Tests
Many tests tuning the efficiency of each of the two applied algorithms have been conducted and
analyzed. The tests have in general been carried out for the cone geometry made of silver, and
are compared to a base calculation with the parameters suggested by the respective authors.
Hopefully this will not bias the shape and material question toward cone-shaped silver nano
particles as the best answer, since any such bias could be due to this tuning of the parameters;
ideally, the parameter tunings would have been carried out for every geometry and material
configuration. The parameters have been chosen such that the numbers of function evaluations
of the two algorithms match approximately, in order to ease the comparison. This has been
achieved by defining a maximum number of iterations for DFSA, 700 per run in this case.
8.1.1 Derivative Free Simulated Annealing
For the DFSA, 8 parameter tests have been conducted (9 if the base calculation is counted) and
the results can be seen in table 8.1.
Table 8.1. DFSA parameter study. Test 1 is considered to be the reference, as it consists of the parameters recommended by M.M. Ali et al. [4]. T0 is the initial heat bath temperature. stopT is the stopping criterion. GMech is the trial point generation mechanism. alpha is a parameter controlling the step length of the local search feature. delta controls the annealing speed. Fevals is the total number of function evaluations. cpuMIN is the total cpu-time in minutes. Iratio is the best obtained ratio of the scattered electric field intensity and the background electric field intensity in the up-converting layer. L is the best up-conversion enhancement found, L = −Iratio^1.5. AvgL is the average up-conversion enhancement found over the four runs. Radius and Height are the corresponding best design found from the four runs.
Based on table 8.1 the most efficient parameters for the DFSA algorithm have been
determined:
• stopT = min(10⁻², 10⁻²T0)
• GMech = GMII
• alpha = 0.1
• delta = 0.2
It is assumed that the adjustments of these parameters do not affect each other in a
negative way and that they readily represent the other geometries and material as well.
8.1.2 Genetic Algorithm
For GA, 5 parameter tests have been conducted (6 if the base calculation is counted) and the
results can be seen in table 8.2.
Table 8.2. GA parameter study. Test 1 is considered to be the reference, as it consists of the parameters recommended by the Matlab documentation. PopSize is the population size, i.e. the number of design configurations in the current generation. Generations is the number of iterations the population can evolve over. Fevals is the total number of function evaluations. cpuMIN is the total cpu-time in minutes. Iratio is the best ratio of the scattered electric field intensity and the background electric field intensity in the up-converting layer. L is the best up-conversion enhancement found, L = −Iratio^1.5. AvgL is the average up-conversion enhancement found over the four runs. Radius and Height are the corresponding best design found from the four runs.
Based on table 8.2 the most efficient parameters for the GA algorithm have been
determined:
• PopulationSize = 25
• Generations = 15
8.2 Algorithms Applied to Multiple Geometries and Materials
Now having determined the most efficient parameters, the actual geometry and material
investigations can be carried out in a meaningful way.
8.2.1 Derivative Free Simulated Annealing
For DFSA with the two-variable problem, consisting of radius and height, and the tuned
parameters, the results can be seen in table 8.3.
In this study it can be seen that every configuration made of silver is considerably
better than the gold version of the same geometry. The ring geometry yields remarkably
better up-converting potential, as for silver it reaches almost twice as much as the competing
configurations. All of the studies have reached the max function evaluation limit, as they have
all reached the exact same value. Judging from the geometry features, it stands out that almost
identical geometrical sizes are reached in the studies of the same shape, indicating that the
material does not affect how large a cone should be, although it has a significant effect on the
up-converting potential. Interestingly, the cone and square shapes seem to be very comparable.
Table 8.3. DFSA for two variables and a max iteration limit of 700 per run.
8.2.2 Genetic Algorithm
For GA with the two-variable problem, consisting of radius and height, and the tuned
parameters, the results can be seen in table 8.4. Unfortunately, in this study the results for the
ring configuration have been lost.
Table 8.4. GA for two variables
In this study the silver configurations dominate, just as in the corresponding DFSA
study. The square shape has a small advantage, similar to the DFSA results, indicating that both
algorithms do well and are comparable in both quality and cpu-time.
8.3 Including Pitch as Variable
Now let us have a look at the effect of including the pitch as a variable, so that it is no longer
locked at 200 nm. In this section all the same configurations as in the previous section will be
carried out, with the one exception of including the pitch in order to estimate its effect.
Both algorithms should still do well despite the extra dimension of the design space.
The pitch variable has been included and investigated in the range:
20 nm ≤ pitch ≤ 700 nm
8.3.1 DFSA Including Pitch
Table 8.5. DFSA for three variables, including the pitch variable.
From the table it is clear that the pitch has a big influence on the up-converting
potential; a maximum enhancement of almost 1700 has been reached for a cone-shaped silver
nano particle. A shift in the optimum height of the particle is also observed, now around 20 nm
compared to approximately 7 nm previously, and the same tendency applies to the radius.
There might be a relation to the volume-to-surface ratio, since particle dimensions rise along
with the increasing distance between the nano particles. Including the pitch has improved all
the configurations, but the ring shape, previously the best design shape, is now the worst
candidate. This might also be due to the mentioned volume-to-surface ratio, as the ring has a
hole and is thus not as sensitive to increasing dimensions, due to the way it has been defined
initially.
8.3.2 GA Including Pitch
Table 8.6. GA for three variables, including the pitch variable.
Almost the same tendencies can be seen in this study as for the DFSA, despite the
fact that DFSA uses almost twice as many function evaluations. It should be noted, though, that
DFSA reaches more stable averages than GA, likely because of the larger number of evaluations.
In this case the square and cone shapes have switched rank in terms of a smaller cost function.
8.4 The Best Designs
In this section the best designs are presented by plotting the intensity enhancement
in the up-converting layer. The marginally best optimum design discovered for a nano
particle whose function is to enhance the intensity in this up-converting layer is a silver nano
particle of the square geometry. It obtained a significant enhancement of a factor of 1761
compared to the intensity with no particle present. It was obtained with an inclination angle of
β = 15° and the optimized geometry is:
• Radius (or more accurately, half side length) = 100 nm
• Height = 25 nm
• Pitch = 699 nm
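For orientation, the reported enhancement relates to the intensity ratio through L = Iratio^1.5 (the optimizer minimizes −L), so the best square design's L = 1761 corresponds to an intensity ratio of roughly 1761^(2/3) ≈ 146. This is a back-of-the-envelope check, not a value quoted from the tables:

```python
def enhancement(i_ratio):
    """Up-conversion enhancement model used in the thesis: L = I_ratio**1.5."""
    return i_ratio ** 1.5

def intensity_ratio(L):
    """Inverse relation: I_ratio = L**(2/3)."""
    return L ** (2.0 / 3.0)
```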
And the intensity enhancement is visualized in figure 8.1.
Figure 8.1. Best design, found to be a square silver nano particle. Left is the y-z plane and right the x-z plane. The color bar indicates the enhancement strength; completely white indicates more than a factor 10 in enhancement due to the nano particle.
The best design found through the optimization of the cone shape was also made of
silver. It yields an enhancement of L = 1691 and looks like this:
• Radius = 114 nm
• Height = 23.5 nm
• Pitch = 699 nm
And the intensity enhancement is visualized in figure 8.2.
Figure 8.2. Next best design, found to be a cone-shaped silver nano particle. Left is the y-z plane and right the x-z plane. The color bar indicates the enhancement strength; completely white indicates more than a factor 10 in enhancement due to the nano particle.
The best design for the ring shape was, once again, obtained for silver. It reached an
enhancement of L = 621 and has the geometry:
• Radius = 63 nm
• Height = 22.5 nm
• Pitch = 693 nm
And the intensity enhancement is visualized in figure 8.3.
Figure 8.3. Best design found for a ring-shaped nano particle, made of silver. Left is the y-z plane and right the x-z plane. The color bar indicates the enhancement strength; completely white indicates more than a factor 10 in enhancement due to the nano particle.
It should be mentioned once again that the mesh convergence studies showed that
no final truth should be concluded from the models, as there was no indication of convergence.
With this in mind it makes no sense to deduce which of the closely rated cone and square
geometries was the absolute best; the L-expression should be viewed as a rough scaling
parameter in these studies.
CHAPTER 9
Conclusion and Final Remarks
As the final step of this investigation of up-conversion enhancement by introducing nano
particles in the up-converting layer, this chapter will sum up the findings of this project and the
limitations of the results acquired. Furthermore, additional investigations will be suggested.
Notable enhancements have been achieved by the chosen optimization algorithms for all six
models (three geometries in two materials each). It has been a close race in determining the
most favorable algorithm, since both located some of the best designs achieved, and both
obtained good averages over multiple runs, indicating great stability; both thus seem suitable
for solving the problem, as expected of them. However, more algorithms could be applied to
the problem to confirm this statement. In the final investigations DFSA used twice as many
function evaluations as GA but did not, overall, locate better optimum designs, although the
averages indicate that DFSA located better designs on average. To ease the comparison of the
algorithms, the number of cost function evaluations should have been controlled more strictly.
The cpu-time has been proportional to the number of function evaluations, indicating that
solving the FEM problem in ComSol has been the greatest contributor to the time consumption
of the calculations. This could be reduced by using a coarser mesh, but that could on the other
hand make it harder for ComSol to converge due to too great inaccuracies, as the maximum
mesh size is already fairly rough. Another, probably better, way to reduce computational time
would be to parallelize the algorithms. Superficial attempts at this were made, but without
success, due to problems in sharing the vast amount of data from the ComSol models and the
corresponding LiveLink to Matlab.
In this project emphasis has been on determining the optimum design and not on the exact
value of the up-conversion enhancement, and thus a simplified cost function has been applied
to model the up-conversion. A more truthful, and probably more complex, expression of this
physical phenomenon could therefore be used in future investigations.
Regarding the geometries examined in this project, the upper limit of the pitch parameter was
set to 700 nm, and the findings indicated that the optimum value was very close to this boundary.
This suggests the upper limit should have been considerably larger in order to investigate the
effect of this parameter properly. The findings also show that almost identical dimensions do
not necessarily yield the same value of L, indicating that small changes have a considerable
effect. This is not good news, as production tolerances of a couple of nanometers imply
great variations in the enhancement, but on average the desired effect should be somewhat
measurable. This should, of course, be determined experimentally.
More interesting variables could also be investigated. The inclination angle could probably be
produced with some accuracy in a small range around the 15° chosen in these studies. For
the ring geometry, the design possibilities have been somewhat limited by the way it has
been defined, so loosening these limitations, or even making the inner radius a variable as
well, could be an interesting study in itself. Of course the three chosen geometries are not the
only possibilities, and less symmetrical shapes like stars could be explored, at the expense of
requiring a full unit cell instead. In this project it has been assumed that the nano particles
should be placed on top of the up-converting layer, but an interesting approach could be to
investigate the effect of embedding the particles in this layer instead of putting them on top of
it, or even doping the layer with some sort of particles like the Erbium atoms. Another exciting
investigation could concern self-assembling particles. A layer of a desired particle material
could be deposited on top of the up-converting layer and heated to a temperature close to the
melting point, letting the atoms move more freely and likely form clusters in a kind of randomly
sized droplet shapes. If these shapes could somehow be simplified into an evaluable input to
ComSol, then the up-conversion enhancement could be measured as well and thus be
comparable to other particles.
CHAPTER 10
Appendix
10.1 Appendix 1: Mesh Convergence studies
10.1.1 Cone - Silver
Figure 10.1. Mesh convergence study
10.1.2 Cone - Gold
Figure 10.2. Mesh convergence study
10.1.3 Square - Silver
Figure 10.3. Mesh convergence study
10.1.4 Square - Gold
Figure 10.4. Mesh convergence study
10.1.5 Ring - Silver
Figure 10.5. Mesh convergence study
10.1.6 Ring - Gold
Figure 10.6. Mesh convergence study
10.2 Appendix 2: Domain Sweeps for all studies
10.2.1 Cone - Silver
Figure 10.7. Domain sweep with a resolution of 7nm.
10.2.2 Cone - Gold
Figure 10.8. Domain sweep with a resolution of 7nm.
10.2.3 Square - Silver
Figure 10.9. Domain sweep with a resolution of 7nm.
10.2.4 Square - Gold
Figure 10.10. Domain sweep with a resolution of 7nm.
10.2.5 Ring - Silver
Figure 10.11. Domain sweep with a resolution of 7nm.
10.2.6 Ring - Gold
Figure 10.12. Domain sweep with a resolution of 7nm.
10.3 Appendix 3: Refractive Indexes
10.3.1 Silver
Figure 10.13. Refractive index data for silver.
10.3.2 Gold
Figure 10.14. Refractive index data for gold.
Bibliography
[1] Andrew Zangwill, Modern Electrodynamics, 2013, Cambridge University Press
[2] Sami Franssila, Introduction to Microfabrication, 2010, John Wiley & Sons
[3] Jasbir S. Arora, Introduction to Optimum Design, 2012, Academic Press
[4] M.M. Ali et al., A derivative-free variant called DFSA of Dekkers and Aarts' continuous
simulated annealing algorithm, 2012, Elsevier
[5] Søren P. Madsen et al., Optimizing Plasmonically Enhanced Upconversion, 2015, Elsevier
[6] David AB Miller, Quantum mechanics for scientists and engineers, 2008, Cambridge
University Press
[7] Sabrina Rostgaard Johannsen et al., Upconversion of near-infrared light through Er-doped
TiO2, and the effects of plasmonics and co-doping with Yb, 2015, Aarhus University
[8] Grant R. Fowles, Introduction to Modern Optics (thin films), Courier Corporation