
    Constraints on Universal Extra-Dimensional Dark Matter

    from Direct Detection Results

    by

    Trevor Torpin

    A THESIS

Submitted to the faculty of the Graduate School of Creighton University in partial fulfillment of the requirements for the degree of Master of Science in the

    Department of Physics

    Omaha, NE (August 15, 2011)


    Abstract

    Detection of dark matter is one of the most challenging and important problems in

    astro-particle physics. One theory that produces a viable particle dark matter can-

    didate is Universal Extra Dimensions (UED), in which the existence of a 4th spatial

dimension is theorized. The extra dimension is not seen because it is compactified on

    a circular orbifold whose radius is too small to be observed with current technology.

What distinguishes this theory from other Kaluza-Klein-type theories is that UED allows

    all standard model particles and fields to propagate in the extra dimension. The dark

    matter candidate in UED theories is a stable particle known as the Lightest Kaluza-

    Klein Particle or LKP, and the LKP can exist with sufficient relic density to serve

    as the dark matter. This work will present bounds on UED model parameters from

direct dark matter searches such as the XENON100 experiment.


    Acknowledgements

    I would like to thank the faculty and staff of the Physics Department of Creighton

University. I'm proud of the education I have received here at Creighton and I believe

    it has helped prepare me for my future. I would especially like to thank my advisor

    Dr. Gintaras Duda for his support and help.

    I would also like to acknowledge the friendship and camaraderie of my fellow

    graduate students who supported me throughout my time here. Finally, I also would

    like to thank my family and friends for their unwavering faith and support over my

    time here.


    Dedication

    This work is dedicated to my family and friends who have supported me through thick

    and thin throughout my life. Their encouragement has helped more than they know.


    Contents

Abstract
Acknowledgements
Dedication
Table of Contents
List of Figures
List of Tables

1 Dark Matter
1.1 Evidence for Dark Matter
1.1.1 Cosmological Evidence
1.2 Potential Candidates for Dark Matter
1.3 Detection Methods
1.3.1 Direct Detection
1.3.2 Indirect Detection
1.4 Conclusion

2 Kaluza-Klein Theory


2.1 Kaluza's Theory
2.2 Kaluza-Klein Theory

3 Universal Extra Dimensions Theory

4 The XENON Experiments
4.1 The Detector
4.2 Results

5 Results and Conclusions
5.1 The Direct Detection Program
5.2 Finding R and \Lambda
5.3 Results

6 Conclusions

Bibliography

Appendix A: Program Code

Appendix B: Maple Worksheet


    List of Figures

1.1 Galactic Rotation Curve
1.2 A Composite Image of the Bullet Cluster
1.3 CMB Anisotropy Power Spectrum for various values of \Omega_b and \Omega_{dm}
3.1 Mass Spectrum of the UED Particles with One-Loop Corrections for Mass
3.2 Feynman Diagrams for B(1)-Quark Elastic Scattering
4.1 Rate of Detection by Elements vs Recoil Energy
4.2 XENON100 Detector
4.3 Event Distribution Using a Discrimination Parameter
4.4 Event Distribution within the Target Volume
4.5 XENON100 Limits on WIMP-Nucleon Cross Section vs WIMP Mass
5.1 XENON100 Limits: WIMP-Nucleon Cross Section vs WIMP Mass
5.2 WIMP-Nucleon Cross Section vs WIMP Mass
5.3 Value of n vs WIMP Mass
5.4 Value of R vs WIMP Mass


    List of Tables

5.1 Limitations on Mass Data Table
5.2 Summary of Results


    Chapter 1

    Dark Matter

    Dark matter is one of the most challenging and important problems in both astronomy

    and particle physics today. Dark matter is a form of matter that does not emit or

    absorb electromagnetic radiation; therefore, it is non-luminous. There is substantial

    evidence that dark matter makes up the majority of the mass in the universe. In fact,

    it is believed that dark matter makes up 22% of the universe, with 4% of the universe

being composed of normal matter, and the rest of the universe being composed of
something called dark energy. The composition of dark matter is unknown, but we do
know that either it exists, revealing itself through its gravitational effects on the matter
we can see, such as the galaxies in the universe, or else our fundamental understanding of gravity
is wrong. There is substantial observational evidence for the existence of dark matter

    dating as far back as the 1930s.

    1.1 Evidence for Dark Matter

There are two main categories of evidence that imply the existence of dark

    matter. The first category is astrophysical evidence and includes the motion of stars


in galaxies and the motions of galaxies in clusters, and the phenomenon of gravitational

    lensing. J. H. Oort began studying the motion of stars in our Milky Way galaxy in

    the early 1930s. By measuring the Doppler shift of these stars, astronomers can

    estimate the velocity of the stars and from these velocities we can estimate the total

    mass to account for these orbits. Oort was surprised to find that in order to account

for these observed orbits, there must be more mass than calculated from standard mass
to luminosity ratios. Standard mass-to-luminosity ratios work by setting the ratio
of our sun's mass to its luminosity equal to one. Then by measuring the light

    output of an object we can get an estimate of its mass.

    The same essential method was used by F. Zwicky except that rather than looking

at stars, he examined the motion of galaxies in clusters. Zwicky applied the Virial
Theorem to find the gravitational potential energy and, through that, the mass of the
cluster. The Virial Theorem is given by:

K = -\frac{1}{2} U, (1.1)

    where K is the average kinetic energy of the galaxy, and U is the average potential

    energy.

    By using the Doppler shift of these galaxies Zwicky was able to find their velocities

    and through the Virial Theorem an estimate of the mass of the cluster. He also found

    that there must be more mass than previously thought from mass to luminosity ratios,

    which are well understood and roughly constant for a galaxy. Zwicky found that with

the velocities at which individual galaxies were moving, the cluster should have drifted

    apart. Because of this, Zwicky came to the conclusion that there must be more

    mass than what was visible through mass to luminosity ratios. This observation has

    been repeated many times for different galaxy clusters and the results have all been


    consistent that there must be more mass than what would be observed. [1]
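To make the scale of Zwicky's argument concrete, a rough order-of-magnitude sketch can combine the Virial Theorem with the textbook approximations K \approx (1/2) M \langle v^2 \rangle and U \approx -GM^2/R, giving M \approx \langle v^2 \rangle R / G. The numbers below are placeholder values of roughly galaxy-cluster size, not figures taken from this thesis.

# Illustrative virial mass estimate; the inputs are placeholder values,
# not numbers from this thesis.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
sigma_v = 1.0e6          # assumed cluster velocity dispersion, m/s
R = 3.0e22               # assumed cluster radius, roughly 1 Mpc in meters
# Virial theorem K = -U/2 with K = (1/2) M <v^2> and U ~ -G M^2 / R
# gives M ~ <v^2> R / G.
M = sigma_v**2 * R / G
print(M / 2.0e30)        # mass in solar masses, ~2e14 for these inputs

A mass of order 10^14 solar masses is far more than the luminous mass inferred from mass-to-luminosity ratios, which is exactly the discrepancy Zwicky pointed out.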

    The next evidence for dark matter was found in a study done on galactic rotation

    curves. In this study, Vera Rubin and other collaborating astronomers measured

    the velocity of stars as a function of their distance to the galactic center[2]. The

collaboration expected that the galaxy would exhibit the same behavior as our solar
system and that the rotational velocity would be given by:

v(r) = \sqrt{\frac{G\, m(r)}{r}}, (1.2)

where v(r) is the rotation speed of an object at radius r, G is the

    gravitational constant, and m(r) is the total mass contained within the radius r.

    What they expected was that as the distance that a star orbits the galactic center

increases, the star's velocity will decrease approximately as v(r) \propto 1/\sqrt{r}. Instead they found that the velocity of the stars was relatively constant no matter the distance

    from the galactic center as shown in Figure 1.1. Here A is the expected velocity as a

    function of distance and B is what was found. In order for B to occur, there must be

    substantially more mass than what is expected from the mass-luminosity ratios and

    in a different distribution.

    Figure 1.1: Galactic Rotation Curve
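The contrast between curves A and B in Figure 1.1 can be sketched numerically. The snippet below compares the Keplerian expectation of Eq. (1.2), assuming essentially all the luminous mass is enclosed at small radius, with a flat curve; the mass, radii, and 220 km/s value are illustrative placeholders, not data from the figure.

import numpy as np

G = 4.30e-6                           # gravitational constant in kpc (km/s)^2 / Msun
M_lum = 1.0e11                        # assumed luminous mass in solar masses
r = np.linspace(2.0, 30.0, 50)        # galactocentric radius in kpc
v_keplerian = np.sqrt(G*M_lum/r)      # Eq. (1.2): falls off as 1/sqrt(r) (curve A)
v_observed = np.full_like(r, 220.0)   # roughly flat curve, as observed (curve B)
# Inverting Eq. (1.2) for the flat curve gives the mass needed within radius r:
M_needed = v_observed**2 * r / G      # grows linearly with r, i.e. a dark halo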


    The last major astrophysical evidence is a phenomenon called gravitational lens-

    ing. Gravitational lensing was first realized through Einsteins theory of general

    relativity. This phenomenon occurs when light from a distant object such as a galaxy

    or a quasar is bent around a closer object such as a cluster of galaxies. From the

    amount of angular deflection we can estimate the mass of the lensing object. Perhaps

    the most direct evidence for dark matter comes from here, in the form of the Bullet

Cluster. In Figure 1.2, the visible-light image is from the Hubble telescope, red is
the X-ray image from the Chandra telescope, and blue is the gravitational lensing map [3].

    Figure 1.2: A Composite Image of the Bullet Cluster.

Scientists have been studying this particular system, which is actually two colliding
galaxy clusters, for quite some time. The astronomers found that as the clusters

    collided, the interstellar medium of dust was stripped from the cluster centers and

    heated up in the central region between the two clusters and emitted x-rays that were

detected by the Chandra Telescope. Since in a galaxy cluster the
majority of the luminous mass is made up of the interstellar medium, it was assumed
that the central region would also be where the center of gravity would be. Scientists

    then mapped the gravitational potential within the cluster to find the location of the

majority of the cluster's mass through the use of gravitational lensing. They
found that instead of one central region of gravity, there were actually two, one on


either side of the central region. This led astronomers to conclude that as the

    galaxies collided, the centers passed through each other and that the baryonic matter

    interacted and heated up in the central region, while the dark matter passed right

    through without interacting[3]. This provided evidence that the majority of the mass

    of a galaxy was actually not luminous and was in fact dark matter.

    1.1.1 Cosmological Evidence

    The next category of evidence implying the existence of dark matter comes from

cosmology. The first example of evidence comes from Big Bang Nucleosynthesis.
This is a time period that began a few seconds after the Big Bang and only lasted
a few minutes. During this brief time period, deuterium, helium, and other light
elements were formed from the combination of protons and neutrons. Currently,
any deuterium being formed is essentially only produced in stellar interiors, where
it is fused almost immediately into helium, so we cannot observe it being made.
Because of this, any deuterium that we see can be taken to be left over from the
Big Bang, and we can use this as a lower limit for the amount of deuterium created
in the Big Bang. Using this lower limit we can calculate the theoretical elemental
abundances, and measuring the deuterium/hydrogen ratio gives a good estimate of
the overall baryon abundance. Using these abundances we can estimate the total
baryonic mass density and compare it to the total mass density. The dimensionless
parameter \Omega represents the ratio of the energy density to the critical energy density
and determines the curvature of the universe. The calculated baryonic density
is given as \Omega_b h^2 = 0.0229 \pm 0.0013 or \Omega_b h^2 = 0.0216^{+0.0020}_{-0.0021} [4], depending on the
measurement of deuterium used. The total mass density is \Omega_m h^2 = 0.1334^{+0.0056}_{-0.0055} [5].

    Clearly baryonic matter is not the only type of matter in the universe.
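A quick subtraction of the central values quoted above makes the point explicit (an illustrative check, not a result from this thesis):

omega_b_h2 = 0.0229                    # baryon density from BBN (central value above)
omega_m_h2 = 0.1334                    # total matter density (central value above)
print(omega_m_h2 - omega_b_h2)         # ~0.11: most of the matter is non-baryonic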

    The next cosmological evidence that we will consider is from the cosmic microwave


    background. The cosmic microwave background is radiation leftover from approxi-

    mately 380,000 years after the Big Bang. From new experiments, such as the WMAP

    survey, we can see small fluctuations in the temperature in this background. These

    fluctuations are dependent on the amount of baryons in the universe at the time.

    Scientists have realized that these fluctuations are too small to account for the com-

    plete structure formation as we have observed it. This is because baryonic matter

does not become neutral until the epoch of recombination. In order to fit
this microwave background with the appropriate time scale, scientists have found that
there must be matter in the universe that is weakly interacting, and thus dark matter

    comes into play to match these observations. The Cosmic Microwave Background

    also limits the amount of baryonic dark matter because matter and dark matter will

    interact differently with the CMB. When scientists plot the CMB anisotropy power

spectrum, it shows that there must be less baryonic dark matter than ordinary matter,
as shown in Figure 1.3.

Figure 1.3: CMB Anisotropy Power Spectrum for various values of \Omega_b and \Omega_{dm}.

    The last cosmological evidence that we will examine here is in Large Scale Struc-

ture Formation simulations. Scientists have developed models that mimic how we
expect the universe to form its observed structure. One such simulation was the

Millennium-II simulation, which followed more than 10 billion particles and how they


    interacted in an effort to study dark matter halo structure and formation[6]. Each

particle represented 6.89 \times 10^6\, h^{-1} M_{\odot} in a volume of (100\, h^{-1}\,\mathrm{Mpc})^3. In these simulations,
scientists have found that if we assume there is no dark matter,
these structures take too long to form or do not form the filament and void-type
structures, and hence they do not match well with our observations of our universe.

    In order to run a simulation that will form structure on the appropriate time scale,

    scientists have to include more matter. Another very important conclusion from these

simulations is that this dark matter must be non-relativistic, meaning that it does not
move at a significant fraction of the speed of light. If scientists run a simulation where

    dark matter is relativistic we find that the universe does not form as we would expect

    from our observations.

    1.2 Potential Candidates for Dark Matter

Dark matter candidates come in two major categories: baryonic dark mat-

    ter and nonbaryonic dark matter. Baryonic matter is matter that is made up of

    quarks and this is the sort of matter that we observe all around us. Some possi-

    ble candidates of baryonic dark matter include brown dwarfs, neutron stars, black

    holes, and unassociated planets. These candidates are all classified as Massive Com-

    pact Halo Objects or MACHOs for short. There have been several collaborations

    that have searched for these objects through the use of gravitational microlensing.

Gravitational microlensing is the changing brightness of a distant object due to the
lensing effect of another nearby object that passes between the distant object and the observer.

    The two major collaborations are the MACHO collaboration and the EROS-2 Sur-

vey. The MACHO collaboration claims to have found between 13 and 17 microlensing
events, which was substantially more than expected[7]. This would imply that there


    could be enough 0.5 Solar Mass MACHOs to account for about 20% of the dark mat-

    ter in the galaxy. However, the EROS-2 Survey did not substantiate the signal claims

by the MACHO group. They did not observe enough microlensing events despite hav-

    ing a higher sensitivity than the MACHO group[8]. Both these collaborations found

    very few possibilities for candidates implying that most dark matter cannot exist in

    the form of baryonic astrophysical objects.

    From these various studies, we conclude that baryonic dark matter must not

    make up the majority of dark matter in our universe. Thus it must be in the form of

    nonbaryonic dark matter. Nonbaryonic dark matter candidates are generally known

    as WIMPs or Weakly Interacting Massive Particles. These WIMPs are generally

    very massive particles that are electrically neutral and do not interact strongly with

    matter. They generally only interact via the gravitational force and the weak force.

    The Standard Model of particle physics is a theory in which three of the four

    fundamental forces in nature are described. The three forces described in the Standard

    Model are electromagnetism, the weak nuclear force, and the strong nuclear force. The

    force that is not included is the force of gravity. The Standard Model predicts 16

    types of particles and of these, there is only one potential candidate for dark matter.

    The neutrino is the only stable, electrically neutral, and weakly interacting particle

    in the Standard Model that fits the parameters for dark matter. This candidate has

    the advantage over other potential particle candidates because it is actually known

    to exist. Unfortunately the neutrino cannot account for the majority of dark matter.

    This is because neutrinos are relativistic and would inhibit structure formation; this

implies a top-down formation of the universe, which is inconsistent with observations.
Another argument against the neutrino is that neutrinos are very light. Current
estimates place an upper limit on the mass of 0.58 electron-volts[5]. This
implies that to account for the estimated value of dark matter in the universe, neutrinos


    would have to have a larger energy density than what is calculated from the Cosmic

    Microwave Background. Therefore we can conclude that neutrinos could account for

    a portion of dark matter but not all of it.

    To find more possible dark matter candidates, scientists have theorized various

    extensions to the Standard Model. The most prominent extension to the standard

model is known as supersymmetry. The exact specifics vary with the model proposed.

    Essentially all supersymmetries create a symmetry between bosons and fermions.

    It relates elementary particles of one spin to other particles that are different by a

half unit of spin. These other particles are called superpartners and must be very
massive, as otherwise they would have been spotted at particle accelerators before now. This
implies that supersymmetry must be a broken symmetry in nature. Supersymme-

    try effectively doubles the number of particles in the standard model. Thus using

    supersymmetry, there are now several possible candidates for a dark matter particle.

    The most likely predicted particles that are neutral and also weakly interacting, and

thus ideal WIMP candidates, are the neutralino, the sneutrino, and the gravitino.

    However, there are several arguments against some of these candidates.

    The sneutrino particle is the superpartner of the neutrino and is relatively light.

    Sneutrinos annihilate very rapidly in the early universe and their relic density is

    too low today to make the sneutrino a viable candidate or at least not a significant

    portion of dark matter[9]. We can also rule out the gravitino as a possible candidate.

    The gravitino is the superpartner to the graviton which is the theorized force carrier

    particle of gravity. The problem with the gravitino as a dark matter candidate is

that the particle would be relativistic. As we saw earlier, relativistic dark matter will inhibit structure formation; hence, we can conclude that the gravitino is not a viable

    dark matter candidate.

    The most promising candidate by far is the neutralino. The neutralino is the


    superposition of the neutral superpartners of the Higgs and gauge bosons. The neu-

    tralino composition is given by:

\chi = a_1 \tilde{B} + a_2 \tilde{W}^{(3)} + a_3 \tilde{H}_1 + a_4 \tilde{H}_2, (1.3)

where \chi is the neutralino, \tilde{B} is the bino component, \tilde{W}^{(3)} is the wino component, and
\tilde{H}_{1,2} are the two higgsino components. The neutralino is a viable candidate due to

what is known as R-parity. R-parity is a symmetry under which couplings that violate
baryon number and lepton number are forbidden in the theory, and because of this, the light-
est supersymmetric particle cannot decay. In most theories of supersymmetry, the

    lightest supersymmetric particle is the neutralino. We know that the relic abundance

of neutralinos should be sizeable and thus cosmologically significant. Most importantly, the de-
tection rates are predicted to be high enough to be observed in a laboratory experiment but not
so high that the candidate has already been ruled out by experiment.

    There are other more exotic particles that are theorized that could be a dark mat-

ter candidate such as the axino[10], Q-balls[11], mirror particles[12], WIMPzillas[13],

    branons[14] among others. However the neutralino remains by far the most promising

    candidate motivated by theory. But how could it be detected?

    1.3 Detection Methods

If we assume that the dark matter in our galaxy consists of WIMPs, then there must be a

    large number of dark matter particles passing through the Earth each second. There

    are many experiments that are being performed that are attempting to discover these

    WIMPs. These experiments can be divided into two classes: direct detection ex-

    periments, which search for the scattering of dark matter particles off atomic nuclei

within a detector, and indirect detection, which looks for the products of WIMP anni-


    hilations. There is another possibility of detection in which supersymmetric particles

    are produced in accelerators. These have not been found yet, but hopes are high that

    they will be observed when the Large Hadron Collider becomes fully operational. The

Large Hadron Collider is the world's largest and highest-energy particle accelerator

    located near Geneva, Switzerland and operated by CERN (European Organization

    for Nuclear Research).

    1.3.1 Direct Detection

    Direct detection experiments attempt to observe dark matter in a detector that is

extremely sensitive to nuclear recoils. The detector attempts to measure the nuclear recoils when dark matter particles collide with a nucleus within the detector. These

    events can manifest themselves in three ways. There is a Phonon/Thermal interaction

    in which the nuclear recoil causes a vibration in the detector that causes a slight rise in

    temperature in the detector. The next type of interaction is an Ionization interaction

    in which the event causes a charge to move across an applied electric field in the

    detector. The final type of event interaction is scintillation in which the event causes

    an electron to go to a higher energy state that emits a photon when it decays. A

WIMP signal should have several distinct characteristics. The events should be uniformly
distributed throughout the detector because the particles are so weakly interacting that
they should pass through the detector surface easily. They should also be single-site events,
because interactions are rare enough that it is highly unlikely for a WIMP to scatter
more than once. The signal should also vary at different times throughout
the year because of the Earth's velocity relative to the dark matter in our galaxy.

In general, a detector measures two of these interaction channels, and the events are then
analyzed; based on the timing and other characteristics of these interactions,
potential background events are discarded.


    There are many collaborations that are performing direct detection experiments.

The majority of direct detection experiments use one of two detector technologies:
cryogenic detectors, operating at temperatures below 100 mK, which detect the heat pro-
duced when a particle hits an atom in a crystal absorber, or noble liquid detectors,
which detect the flash of scintillation light from the particle collision. Direct detection

    experiments operate in deep underground laboratories to reduce the background from

cosmic rays. Some examples are the CDMS, CRESST, CoGeNT, and XENON experi-

    ments. One direct detection experiment, the DAMA experiment, claimed to have

    observed an annual modulation in the event rate, which they claim is due to dark

matter particles' velocity relative to the Earth as it orbits around the sun. So far this
claim has not been confirmed by other collaborations.

    Most direct detection experiments set limits on WIMP-proton or neutralino-

proton cross sections. Let us work through an example of how they calculate these

    limits. The spin-independent neutralino-nucleus elastic cross section with a pointlike

nucleus of Z protons and A - Z neutrons is given by:

\sigma^{SI}_i = \frac{4\mu_i^2}{\pi} \left| Z G^p_s + (A - Z) G^n_s \right|^2, (1.4)

where G^p_s and G^n_s are the scalar four-fermion couplings of a WIMP with point-like
protons and neutrons and \mu_i = m M/(m + M) is the WIMP-nucleus reduced mass,
where m is the neutralino mass and M is the nucleus mass. Typically, it is assumed
that G^p_s = G^n_s. This allows us to rewrite the spin-independent WIMP-nucleus cross
section in terms of the WIMP-proton cross section \sigma_p as

\sigma^{SI}_i = \sigma_p A^2 \frac{\mu_i^2}{\mu_p^2}. (1.5)
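As a small illustration of the coherent A^2 enhancement in Eq. (1.5), the sketch below scales an assumed WIMP-proton cross section up to a Xe-131 nucleus; the WIMP mass and \sigma_p value are placeholder assumptions, not inputs used later in this thesis.

# Illustrative only: scale a WIMP-proton cross section to a Xe-131 nucleus
# via Eq. (1.5). All input values are assumptions for the example.
A = 131                      # mass number of xenon
m_chi = 500.0                # assumed WIMP mass in GeV
m_p, m_N = 0.938, 0.931*A    # proton and (approximate) nuclear mass in GeV
mu_p = m_chi*m_p/(m_chi + m_p)
mu_N = m_chi*m_N/(m_chi + m_N)
sigma_p = 1.0e-45            # assumed WIMP-proton cross section in cm^2
sigma_N = sigma_p * A**2 * (mu_N/mu_p)**2
print(sigma_N)               # coherent enhancement of roughly A^4 for heavy WIMPs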

    In detectors, the expected number of events with recoil energy in the range (E1, E2)


    is the sum over the nuclear species in the detector given by

N_{[E_1,E_2]} = \sum_i \int_{E_1}^{E_2} \frac{dR_i}{dE}\, \varepsilon_i(E)\, dE, (1.6)

where \varepsilon_i(E) is the effective exposure of each nuclear species in the detector and
dR_i/dE is the expected recoil rate per unit mass of species i per unit nucleus recoil
energy and per unit time. The effective exposure is given by the function

\varepsilon_i(E) = M_i T_i \eta_i(E), (1.7)

where T_i is the active time of the detector, M_i is the mass of the nuclear species i
exposed to the signal, and \eta_i(E) is the counting efficiency of the detector for nuclear

    recoils of energy E. The differential rate dRi/dE is given by

\frac{dR_i}{dE} = \frac{\rho\, \sigma^{SI}_i\, |F(q)|^2}{2 m \mu_i^2} \int_{v > q/2\mu_i} \frac{f(v, t)}{v}\, d^3v, (1.8)

where E is the energy of the recoiling nucleus, \rho is the local halo WIMP density, f(v, t)
is the WIMP velocity distribution function in the frame of the detector (generally
assumed to be a Maxwell-Boltzmann distribution[?]), \sigma^{SI}_i is the spin-independent
WIMP-nucleus elastic cross section, and |F(q)|^2 is a nuclear form factor, where q is the recoil
momentum given by q = \sqrt{2ME}. The cross section is found
by comparing the expected number of events to the observed number of events.
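To make the bookkeeping of Eqs. (1.6)-(1.8) concrete, here is a schematic numerical sketch. It assumes a simple Maxwell-Boltzmann halo with dispersion parameter v0 = 220 km/s and no escape-velocity cutoff, omits the overall constant \rho/(2 m \mu_i^2), and takes the exposure and form factor as user-supplied functions; it illustrates the structure of the calculation, not the actual program used in this thesis.

import numpy as np
from scipy.integrate import quad

V0 = 220.0e3        # m/s, assumed Maxwell-Boltzmann dispersion parameter

def eta(v_min):
    """Velocity integral of f(v)/v for an isotropic Maxwell-Boltzmann halo."""
    integrand = lambda v: (4*np.pi*v**2)*np.exp(-(v/V0)**2)/(np.pi**1.5*V0**3)/v
    return quad(integrand, v_min, 10*V0)[0]

def dR_dE(E, m_chi, M_nucleus, sigma_SI, form_factor):
    """Eq. (1.8) up to the constant rho/(2 m mu^2); E in J, masses in kg."""
    mu = m_chi*M_nucleus/(m_chi + M_nucleus)      # WIMP-nucleus reduced mass
    v_min = np.sqrt(M_nucleus*E/2.0)/mu           # q/(2 mu) with q = sqrt(2 M E)
    return sigma_SI*form_factor(E)**2*eta(v_min)

def expected_events(E1, E2, exposure, m_chi, M_nucleus, sigma_SI, form_factor):
    """Eq. (1.6): integrate dR/dE times the effective exposure over [E1, E2]."""
    integrand = lambda E: dR_dE(E, m_chi, M_nucleus, sigma_SI, form_factor)*exposure(E)
    return quad(integrand, E1, E2)[0]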

    The nuclear form factor, F(q), is the Fourier transform of a spherically symmetric

    ground state mass distribution normalized such that F(0) = 1:

F(q) = \frac{1}{M} \int \rho_{mass}(r)\, e^{i\vec{q}\cdot\vec{r}}\, d^3r. (1.9)

We can perform part of the integration, leaving


F(q) = \frac{1}{M} \int_0^{\infty} \rho_{mass}(r)\, \frac{\sin qr}{qr}\, 4\pi r^2\, dr. (1.10)

    Mass and charge densities are generally assumed to be related because it is difficult

    to determine how the mass is arranged in the nucleus. The densities are related by

\rho_{mass}(r) = \frac{M}{Ze}\, \rho_{charge}(r). (1.11)

    This is done because it is fairly simple to measure the charge densities through the

    use of elastic electron scattering. It is worth noting that because the nuclear form

    factor is normalized, this implies:

F_{mass}(q) = F_{charge}(q). (1.12)

    The most basic charge density for a nucleus is known as the uniform model[16]. In

    this model, the charge density of the nucleus is a constant value until a cutoff radius

    R. Therefore the charge density is given by:

\rho_U(r) = \begin{cases} \frac{3Ze}{4\pi R^3}, & r < R \\ 0, & r > R \end{cases} (1.13)

    Now, the total charge of the nucleus is Ze due to the fact that the charge density has

been normalized. Now let us derive the form factor. Using Eq. 1.11, we can write
the mass density as

\rho_{mass}(r) = \begin{cases} \frac{3M}{4\pi R^3}, & r < R \\ 0, & r > R. \end{cases} (1.14)

    The form factor can be written as


F(q) = \frac{1}{M} \int_0^{\infty} \rho_{mass}(r)\, \frac{\sin qr}{qr}\, 4\pi r^2\, dr.

    Plugging in our mass density we find

F(q) = \frac{1}{M} \int_0^{R} \frac{3M}{4\pi R^3}\, \frac{\sin qr}{qr}\, 4\pi r^2\, dr. (1.15)

    Pulling out the constants we are left with

F(q) = \frac{3}{R^3} \int_0^{R} r^2\, \frac{\sin qr}{qr}\, dr. (1.16)

    Performing the integration, we obtain

F(q) = \frac{3}{R^3 q^3}\left( \sin(qR) - qR\cos(qR) \right). (1.17)

    We can rewrite this as

F(q) = \frac{3}{qR}\left( \frac{\sin(qR)}{q^2R^2} - \frac{\cos(qR)}{qR} \right). (1.18)

    Now, we can use a spherical Bessel function of the first order, namely:

j_1(x) = \frac{\sin(x)}{x^2} - \frac{\cos(x)}{x}. (1.19)

Using this we can write the nuclear form factor for the uniform model as

F(q) = \frac{3}{qR}\, j_1(qR). (1.20)

    The uniform model is just an idealization of a nucleus. A nucleus cannot have

such an abrupt cutoff in the charge distribution. One solution of this problem is to

    make use of the Helm charge density[16]. The Helm charge distribution is given by:


\rho_H(r) = \int \rho_U(r')\, \rho_G(r - r')\, d^3r', (1.21)

where \rho_G(r) is given by

\rho_G(r) = \frac{1}{(2\pi g^2)^{3/2}}\, e^{-r^2/2g^2}, (1.22)

    and g is a parameter dealing with the radius of the Gaussian smearing surface density.

    The Helm density has a simple form factor found using the convolution theorem to

be just the product of the form factors of \rho_U and \rho_G:

F(q) = F_U(q)\, F_G(q) = \frac{3}{qR}\, j_1(qR)\, e^{-g^2 q^2/2}. (1.23)
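For use in rate calculations like Eq. (1.8), the Helm form factor of Eq. (1.23) is straightforward to code up. This is a minimal sketch; the effective nuclear radius R and smearing parameter g are nucleus-dependent and must be supplied in units consistent with q, and the values in the usage line are placeholders only.

import numpy as np
from scipy.special import spherical_jn

def helm_form_factor(q, R, g):
    """F(q) = 3 j1(qR)/(qR) * exp(-g^2 q^2 / 2), normalized so that F(0) = 1."""
    x = q*R
    j1 = spherical_jn(1, x)
    # use the x -> 0 limit 3 j1(x)/x -> 1 to avoid division by zero
    ratio = np.where(x > 1e-8, 3.0*j1/np.maximum(x, 1e-8), 1.0)
    return ratio*np.exp(-(g*q)**2/2.0)

print(helm_form_factor(0.5, R=4.8, g=0.9))   # q in fm^-1, R and g in fm (placeholders)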

There are three main charge density distributions that are used in dark matter calculations[16].

    The first one is known as the Woods-Saxon distribution. This is given by the density:

\rho(r) = \frac{\rho_c}{e^{(r-c)/a} + 1}, (1.24)

where c is the half-density radius, \rho_c is the density at r = c, and the parameter a

    is related to the surface thickness t by t = (4ln 3)a. The problem with the Woods-

    Saxon distribution is that the Fourier transform cannot be computed analytically.

The second main distribution is known as the sum of Gaussians expansion. In this

    distribution, the charge density is modeled as a series of Gaussians. It can be written

    as

\rho(r) = \sum_{i=1}^{N} A_i \left( e^{-[(r-R_i)/\gamma]^2} + e^{-[(r+R_i)/\gamma]^2} \right), (1.25)


where \gamma is the width of the Gaussians and A_i is given by

A_i = \frac{Z e Q_i}{2\pi^{3/2}\gamma^3\left(1 + 2R_i^2/\gamma^2\right)}. (1.26)

    Here Qi stands for the fractional charge in the ith Gaussian. If we assume spherical

    symmetry, the form factor for the sum of Gaussians expansion can be written as:

F(q) = e^{-q^2\gamma^2/4} \sum_{i=1}^{N} \frac{Q_i}{1 + 2R_i^2/\gamma^2} \left( \cos(qR_i) + \frac{2R_i^2}{\gamma^2}\, \frac{\sin(qR_i)}{qR_i} \right). (1.27)

    Now the third and final commonly used distribution is known as the Fourier-Bessel

    expansion. This charge density is modeled as a sum of Bessel functions up to a cutoff

    radius R. The density can be written as:

\rho(r) = \begin{cases} \sum_{\nu=1}^{N} a_\nu\, j_0(\nu\pi r/R), & r \le R \\ 0, & r \ge R, \end{cases} (1.28)

    where j0(x) = sin x/x is the zeroth-order spherical Bessel function. If we assume

spherical symmetry, we can write the nuclear form factor as

F(q) = \frac{\sin(qR)}{qR}\, \frac{\sum_{\nu=1}^{N} (-1)^{\nu}\, a_\nu/(\nu^2\pi^2 - q^2R^2)}{\sum_{\nu=1}^{N} (-1)^{\nu}\, a_\nu/\nu^2\pi^2}. (1.29)

    These form factors play a role in the calculation of the cross section and can affect

its accuracy. However, this effect is typically small and is not considered here as

    a major factor in my calculation.
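As one more example, the Fourier-Bessel form factor of Eq. (1.29) reduces to a short ratio of sums once the coefficients a_\nu are tabulated (for example from elastic electron scattering fits). The coefficients and cutoff radius below are placeholders for illustration, not values for any particular nucleus.

import numpy as np

def fourier_bessel_form_factor(q, a, R):
    """Eq. (1.29): F(q) for Fourier-Bessel coefficients a[0..N-1] and cutoff radius R."""
    nu = np.arange(1, len(a) + 1)
    signs = (-1.0)**nu
    numerator = np.sum(signs*a/(nu**2*np.pi**2 - (q*R)**2))
    denominator = np.sum(signs*a/(nu**2*np.pi**2))
    return np.sinc(q*R/np.pi)*numerator/denominator   # np.sinc(x) = sin(pi x)/(pi x)

# placeholder coefficients and radius, for illustration only
print(fourier_bessel_form_factor(0.3, a=np.array([0.05, -0.02, 0.01]), R=5.0))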

    1.3.2 Indirect Detection

    Indirect detection experiments search for the products of WIMP annihilation. Some

    current experiments include EGRET, the Fermi Gamma-ray Space Telescope, and


PAMELA, which is a satellite that is studying cosmic radiation. Neutralinos are
Majorana particles, meaning that they are their own antiparticle, and hence they

    annihilate with each other. These annihilations generate products which we can

    detect, such as gamma-rays, neutrinos and antimatter. These products are typically

    searched for in the sun, earth, and in the galactic center. This is because neutralinos

    are expected to be drawn to large objects due to gravity where they will annihilate.

Gamma-rays from neutralino annihilation are believed to come mostly from the galac-
tic center. There are two main processes in which gamma-rays are produced. They are
produced when the annihilation creates a quark and anti-quark, which create particle
jets in which a spectrum of gamma-rays is released from the decay of \pi^0 particles
and which is proportional to the mass of the WIMP. Neutralino annihilation also

    yields neutrinos which occur through many different annihilation processes. Detec-

tion of neutrinos depends on the WIMP mass, annihilation rate, and density, as well as

    several other factors. It is difficult to detect neutrinos because they are very weakly

    interacting so detectors must be very large in order to detect a significant signal.

    Neutralino annihilations can also yield antimatter. The antimatter can be antipro-

    tons from quark and antiquark pairs or it can be positrons created from secondary

    products of the annihilation. Antimatter products are charged particles. This means

    that they can be affected by magnetic fields in space and they can lose energy due to

    inverse Compton (photon gains energy on interaction with matter) and synchrotron

processes[17]. This is important because, due to these effects, we cannot determine where
the annihilations occurred. However, the observation of any of these products would
not be proof of dark matter, as the backgrounds from other sources are not yet fully understood.


    1.4 Conclusion

    Detection of dark matter is one of the most challenging and important problems in

    both astronomy and particle physics today. This chapter was aimed at providing a

    basic introduction into the subject of dark matter, particularly its potential candi-

    dates and experimental detection methods. There are many experiments that are

    being performed that are constraining potential dark matter candidates. Because of

    this, and with new results that are being released regularly it is a very exciting time

    to be in this field!


    Chapter 2

    Kaluza-Klein Theory

    Kaluza-Klein Theory is an interesting example of a unification theory in theoretical

physics that came about in 1921. It resulted from a large push for unification theories
in physics throughout the early 1900s. Unification theories, of course, are theories that

    attempted to combine the forces of gravity and electromagnetism, the only forces that

    physicists were aware of at the time. One of the largest proponents of unification

theories was the esteemed Albert Einstein. Now, Kaluza-Klein Theory was one
such attempt to unify gravity with electromagnetism, but what made this theory so

    interesting and intriguing was that the attempt at unification was done through the

    use of an extra dimension.

2.1 Kaluza's Theory

    Theodor Kaluza was a German mathematician/physicist who was the main architect

behind this theory, but there were other attempts at unification, most notably by Her-
mann Weyl, who published his theory three years before Kaluza. For various reasons,

    all the earlier ones were discredited. Kaluza first came up with his theory around


1919, though, due to some reservations by Einstein, it was not published until 1921 in
a paper entitled "On the Unity Problem in Physics"[18]. Kaluza's approach was based

    on the idea of a single universal tensor which is the source of both the gravitational

    and electromagnetic fields.

Kaluza began by considering gravity in a five-dimensional space, where the 5D
Riemannian line element is given by

d\sigma^2 = \gamma_{ik}\, dx^i dx^k, (2.1)

where \gamma_{ik} are the covariant components of a 5D symmetric tensor and the x's are the
5 coordinates of space, with x^5 representing the 5th dimension. This can be written
in a more modern form in this way:

ds^2 = g_{\mu\nu}\, dx^{\mu} dx^{\nu}. (2.2)

    Throughout this thesis we will use Einstein summation notation, meaning that

    we will drop explicit summation signs and assume that repeated indices imply sum-

    mation. We must be careful and note that indices that use the Latin alphabet sum

    from one to four and the indices that use the Greek alphabet sum from one to five.

    Kaluza started here because Riemann proved that one can consider force a con-

    sequence of geometry, which in this case is determined by the metric. Kaluza had

to make several assumptions. He assumed that x^1, x^2, x^3, x^4 characterize the usual
space-time. Now, Kaluza introduced what he called the cylinder condition, which

    forces the metric tensor to depend only on the observable space-time coordinates.

    We can do this by setting the derivative of the metric with respect to the 5th dimen-

    sion equal to zero:


\frac{\partial \gamma_{ik}}{\partial x^5} = 0. (2.3)

This sets the structure of the 5D space to be a cylinder world whose axis is the 5th
dimension and is preferred. This condition is what Einstein wasn't convinced of, and it

    took two years before he agreed with what Kaluza did.

    The cylinder condition implies that coordinate transformations must be of the

    form:

x^5 = x^{5'} + \psi^0(x^{1'}, x^{2'}, x^{3'}, x^{4'}),
x^i = \psi^i(x^{1'}, x^{2'}, x^{3'}, x^{4'}). (2.4)

    That is, transforming the 5th dimension allows the terms to mix with regular space-

    time coordinates but when transforming the regular coordinates, these are not allowed

    to mix with the 5th dimension coordinate because if that were to occur, the fifth di-

mension would be observable to us. Under this set of transformations, \gamma_{55} is invariant.
This allows us to set this element of the metric to a constant:

\gamma_{55} = \text{constant}. (2.5)

    Also invariant under the transformation are the differential quantities

d\gamma = dx^5 + \frac{\gamma_{5i}}{\gamma_{55}}\, dx^i, (2.6)

ds^2 = \left( \gamma_{ik} - \frac{\gamma_{5i}\gamma_{5k}}{\gamma_{55}} \right) dx^i dx^k, (2.7)

where d\gamma represents a point in the 5th dimension and ds^2 represents a line element


    in the regular four dimensions. We can connect these two quantities by another line

    element:

d\sigma^2 = \gamma_{55}\, d\gamma^2 + ds^2, (2.8)

which is invariant as well due to the fact that d\gamma and ds^2 are invariant themselves.
From the properties of the transformations and the fact that d\gamma and \gamma_{55} are invariant, we
eventually find that \gamma_{5i} transforms the same as a 4D vector, which we may call \phi_i.

Next, Kaluza began to define his metric for his 5D space-time. We may let \gamma_{5i} =
\alpha \phi_i, where \alpha is a constant and \phi_i is defined as an arbitrary vector. We can substitute
this back into our line element so that we have

d\gamma = dx^5 + \alpha \phi_i\, dx^i. (2.9)

Kaluza's next step was to write the line element ds^2 as the line element used in
Einstein's highly successful Theory of General Relativity. To do this, Kaluza set his
metric equal to

\gamma_{ik} = g_{ik} + \alpha^2 \phi_i \phi_k, (2.10)

where g_{ik} is the usual four dimensional metric that Einstein used. This metric, g_{ik},
is usually chosen such that in Cartesian coordinates

ds^2 = dx^2 + dy^2 + dz^2 - c^2 dt^2, (2.11)

which is the typical space-time interval. Therefore Kaluza's metric is given by


\gamma_{ik} =
\begin{pmatrix}
g_{11} & g_{12} & g_{13} & g_{14} & \alpha\phi_1 \\
g_{21} & g_{22} & g_{23} & g_{24} & \alpha\phi_2 \\
g_{31} & g_{32} & g_{33} & g_{34} & \alpha\phi_3 \\
g_{41} & g_{42} & g_{43} & g_{44} & \alpha\phi_4 \\
\alpha\phi_1 & \alpha\phi_2 & \alpha\phi_3 & \alpha\phi_4 & \gamma_{55}
\end{pmatrix}. (2.12)

    By examining the metric, we can see that it is composed of the usual 4D metric along

    with a vector.

The next thing that Kaluza derived was the equations of motion in this five

    dimensional geometry. He began by defining an invariant scalar

P = \gamma^{ik}\left( \frac{\partial \{^{\rho}_{i\rho}\}}{\partial x^k} - \frac{\partial \{^{\rho}_{ik}\}}{\partial x^{\rho}} + \{^{\sigma}_{i\rho}\}\{^{\rho}_{k\sigma}\} - \{^{\rho}_{ik}\}\{^{\sigma}_{\rho\sigma}\} \right), (2.13)

where \gamma^{ik} are the contravariant components of the metric, and \{^{i}_{rs}\} represents the
Christoffel symbols. Christoffel symbols relate vectors in the tangent spaces of nearby
points and are defined as

\{^{i}_{rs}\} = \frac{1}{2}\gamma^{i\kappa}\left( \frac{\partial \gamma_{\kappa r}}{\partial x^s} + \frac{\partial \gamma_{\kappa s}}{\partial x^r} - \frac{\partial \gamma_{rs}}{\partial x^{\kappa}} \right). (2.14)

    In this interpretation, P is the five dimensional Ricci Scalar. The Ricci Scalar is

    written in modern terms as

R = g^{\mu\nu} R_{\mu\nu} = R^{\mu}{}_{\mu} = g^{\mu\nu}\left( \partial_{\rho}\Gamma^{\rho}_{\mu\nu} - \partial_{\nu}\Gamma^{\rho}_{\mu\rho} + \Gamma^{\rho}_{\rho\lambda}\Gamma^{\lambda}_{\mu\nu} - \Gamma^{\rho}_{\nu\lambda}\Gamma^{\lambda}_{\mu\rho} \right), (2.15)

where \Gamma^{\rho}_{\mu\nu} represents the Christoffel symbols, which are defined as

\Gamma^{\sigma}_{\mu\nu} = \frac{1}{2} g^{\sigma\rho}\left( \partial_{\mu} g_{\nu\rho} + \partial_{\nu} g_{\rho\mu} - \partial_{\rho} g_{\mu\nu} \right). (2.16)


Now in R, it is assumed that the metric is independent of x^5 and that \gamma_{55} is constant. This must be

    done because otherwise the fifth dimension would be observable.

    Next, Kaluza considered the action of the system; this action is known as the

    Einstein-Hilbert Action, and works exactly as in general relativity except in five

    dimensions. This is given by

J = \int R\, \sqrt{-\gamma}\; dx^1 dx^2 dx^3 dx^4 dx^5, (2.17)

where \gamma represents the determinant of the metric \gamma_{ik}. Next, Kaluza used the calculus
of variations. He then formed \delta J by varying \gamma^{ik} and \partial\gamma^{ik}/\partial x^l while keeping the boundary
values fixed and keeping \gamma_{55} constant. Then, using the Principle of Least Action, he
set \delta J = 0, which will give the extrema of the motion.

By setting \delta J = 0, Kaluza found[19]:

R^{ik} - \frac{1}{2} g^{ik} R + \frac{\alpha^2}{2} S^{ik} = 0, (2.18)

\frac{\partial(\sqrt{-g}\, F^{ik})}{\partial x^k} = 0. (2.19)

Next Kaluza set \alpha^2/2 = \kappa, where \kappa is the gravitational constant used by Einstein in
general relativity. This gives

R^{ik} - \frac{1}{2} g^{ik} R + \kappa S^{ik} = 0, (2.20)

\frac{\partial(\sqrt{-g}\, F^{ik})}{\partial x^k} = 0. (2.21)

    Eq. (2.20) is the Einstein equation for the metric, which gives four dimensional

    general relativity. But what is Eq. (2.21)? It looks like an equation of motion for a


    vector field. In modern terms, we can write Eq. (2.21) as

\partial_{\mu} F^{\mu\nu} = 0. (2.22)

Switching gears a little bit, Maxwell's equations in empty space are given by

\nabla \cdot \vec{E} = 0, (2.23)

\nabla \cdot \vec{B} = 0, (2.24)

\nabla \times \vec{B} = \mu_0 \epsilon_0 \frac{\partial \vec{E}}{\partial t}, (2.25)

\nabla \times \vec{E} = -\frac{\partial \vec{B}}{\partial t}, (2.26)

where \vec{E} represents the electric field, \vec{B} is the magnetic field, \mu_0 is the permeability of
free space, and \epsilon_0 is the permittivity of free space. This allows a tensor to be defined

    as

F^{\mu\nu} =
\begin{pmatrix}
0 & E_1 & E_2 & E_3 \\
-E_1 & 0 & B_3 & -B_2 \\
-E_2 & -B_3 & 0 & B_1 \\
-E_3 & B_2 & -B_1 & 0
\end{pmatrix}. (2.27)

Then if we take \partial_{\mu} F^{\mu\nu} and set it equal to zero, we find that it reproduces Maxwell's
equations through the use of the calculus of variations. Classical electrodynamics comes
from the action


S = \int \mathcal{L}\, d^4x (2.28)

in the case of no sources. The Lagrangian for this theory is defined as

\mathcal{L} = -\frac{1}{4} F_{\mu\nu} F^{\mu\nu}, (2.29)

where F_{\mu\nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}. Working through the Euler-Lagrange equations will
yield Maxwell's equations.

Going back to our arbitrary vector \phi_i from when we defined the metric, we can build
F^{\mu\nu} out of \phi_i such that in Cartesian coordinates:

(\phi_x, \phi_y, \phi_z) = \vec{A}, \qquad \phi_t = cV, (2.30)

where \vec{A} is the usual electromagnetic vector potential, V is the usual scalar potential,
and c is the speed of light. All of this leads to the conclusion that \partial(\sqrt{-g}\, F^{ik})/\partial x^k = 0 is

    equivalent to the equation of motion for an electromagnetic field.

    Our equations of motion are therefore given by

R^{ik} - \frac{1}{2} g^{ik} R + \kappa S^{ik} = 0,

\frac{\partial(\sqrt{-g}\, F^{ik})}{\partial x^k} = 0,

where the elements of the above equations are as follows:

    R is the Ricci Scalar


R^{ik} are the contravariant components of Einstein's Ricci tensor,
g^{ik} are the contravariant components of Einstein's metric tensor,
S^{ik} are the contravariant components of the electromagnetic energy-momentum tensor,
g is the determinant of g_{ik}, and
F^{ik} are the contravariant components of the electromagnetic field strength tensor.

We essentially recover the gravitational field equations of general relativity along
with the generalized Maxwell equations! Thus Kaluza proved that gravity in five

    dimensions is mathematically equivalent to just four dimensional gravity along with

    electromagnetism! This was a very interesting and amazing result. However, there

    are some problems with this theory that ultimately led to it being discredited and

    being considered a mathematical oddity.

One such problem is that we still have the \gamma_{55} scalar field, which Kaluza left alone.
This is ultimately what led to the downfall of this theory. A scalar field will act as
another force, which would be a fifth force that we have not observed. Now there are
some theories that postulate a fifth force, but if there is such a fifth force, it would
have to be tightly constrained. Interestingly enough, there are some theories that
suggest that this \gamma_{55} scalar field could be what accounts for dark matter.

    2.2 Kaluza-Klein Theory

Someone who studied Kaluza's theory and made some improvements to it was the
Swedish physicist Oskar Klein. Oskar Klein was a theoretical physicist who is


    much better known for his work on quantum theory. Early on in his career, he worked

on Kaluza's theory and ended up publishing some additions in 1926 in the paper "The
Atomicity of Electricity as a Quantum Theory Law"[19]. His major addition to the

    theory was linking it to the new theory of quantum mechanics.

    Essentially, what Klein ended up doing was curling up or compactifying the fifth

    dimension. Basically this compactification is why the fifth dimension is not observed.

Klein's condition was based on the Bohr-Sommerfeld quantization rule given by

\oint p\, dr = N h. (2.31)

The specific quantization rule that Klein used was

p_5 = \frac{N h}{l}, (2.32)

where p_5 is the particle's momentum in the fifth dimension, N is the quantum number,
l is the period or the circumference of the fifth dimension, and h is Planck's constant.

    Because this is quantized, the motion in the fifth dimension will manifest as stand-

    ing waves at different excitation levels. These excitations will form a tower of particles

which will have ever-increasing masses. This could be what gives dark matter mass.
This is because a particle that is at rest in our normal space-time but
is moving in the fifth dimension will actually have kinetic energy, which would
appear to us as additional mass of the particle.

    Klein also calculated the size of the fifth dimension. He did this through

l = \frac{hc\sqrt{2\kappa}}{e}, (2.33)

where e is the electronic charge. The size of the dimension came out to be about
0.8 \times 10^{-30} cm, which is about the size of the Planck length. This turns out to be


about 10^{20} times smaller than the diameter of the nucleus of an atom. Now, there
are still some problems with what is now known as Kaluza-Klein theory. The largest, of
course, is that Klein's modifications still have the flaw of the scalar field that Kaluza's
theory had, so ultimately it fell out of favor as well.
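As a quick numerical check of Klein's size estimate above, Eq. (2.33) can be evaluated in Gaussian cgs units; the identification \kappa = 8\pi G/c^4 is an assumption, chosen because it reproduces the ~0.8 \times 10^{-30} cm quoted above.

import math

# Gaussian cgs constants
h = 6.626e-27        # Planck's constant, erg s
c = 2.998e10         # speed of light, cm/s
e = 4.803e-10        # electron charge, esu
G = 6.674e-8         # gravitational constant, cm^3 g^-1 s^-2
kappa = 8*math.pi*G/c**4          # Einstein's gravitational constant (assumed form)
l = h*c*math.sqrt(2*kappa)/e      # Eq. (2.33)
print(l)                          # ~0.8e-30 cm, matching Klein's estimate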


    Chapter 3

    Universal Extra Dimensions

    Theory

    Universal Extra Dimensions Theory (UED Theory) is a modern Kaluza-Klein theory.

UED theory was first proposed in 2001 by Appelquist, Cheng, and Dobrescu and
it differs from other modern theories because it allows all standard model fields to
propagate in the extra dimensions[20]. These particles gain mass in the same way as in
Kaluza's theory, through the excitation of standing waves in the extra dimension. In
UED theory the extra dimension is compactified on an S^1/Z_2 orbifold. This creates a
tower of higher-dimensional excited particles, where the masses of these particles scale
according to 1/R, where R is the size of the extra dimension. The Kaluza-Klein number
(KK-number) is essentially a measure of the momentum of a particle in the extra
dimension. KK-parity preserves the evenness or oddness of this number, which keeps

    the lightest Kaluza-Klein particle (LKP) stable, which can be a natural dark matter

    candidate with the appropriate relic density[20] [21]. A more detailed review can be

    found in [22].

    UED theory can be adapted to any number of extra dimensions, but the most


common number of dimensions used is five: four normal dimensions and one extra
dimension that has a radius of R. The Lagrangian for the UED model in 5 dimensions
is given by[23]:

\mathcal{L}_{5D} = -\frac{1}{4} G^A_{MN} G^{A\,MN} - \frac{1}{4} W^I_{MN} W^{I\,MN} - \frac{1}{4} B_{MN} B^{MN}
+ (D_M H)^{\dagger} D^M H + \mu^2 H^{\dagger} H - \frac{\lambda}{2}\left(H^{\dagger} H\right)^2
+ i\,\bar{\Psi}\,\Gamma^M D_M \Psi
+ \left( \lambda_E\, \bar{L} E H + \lambda_U\, \bar{Q} U H + \lambda_D\, \bar{Q} D H + \mathrm{h.c.} \right) + \ldots, (3.1)

where G_{MN}, W_{MN}, and B_{MN} are the 5D SU(3)_C \times SU(2)_W \times U(1)_Y gauge field strengths.
The covariant derivatives are defined as D_M = \partial_M + i g_3 G^A_M T^A + i g_2 W^I_M T^I + i g_1 Y B_M,
where the g_i are the 5D gauge couplings with engineering dimension m^{-1/2}. The ellipsis
represents higher-order terms which are not relevant to us, and h.c. represents the

    Hermitian conjugate of the previous terms.

    The next step in the theory is to specify the compactification of the extra-dimensions.

The simplest choice would be to compactify the extra dimension onto the circular S^1
manifold, but unfortunately this causes an issue with the chirality of fermions. What
works for this 5D theory is the S^1/Z_2 orbifold. This orbifold can be pictured by
thinking of the normal four dimensions as a straight line and the fifth dimension
as a circle attached to the line at two points. Next we need to specify how the fields on the orbifold
will transform. When the Fourier expansion of the gauge fields is separated into the A_\mu
and A_5 components, this compactification requires that A_\mu (where \mu represents
the normal four dimensions) is even under a y \to -y transformation and
that A_5 is odd under a y \to -y transformation, where the y coordinate represents
the 5th dimension. We can visualize what occurs in this transformation as that of
a particle moving in a circle being in the same state before and after one


revolution around the circle, i.e. a cyclic boundary condition.

The Fourier expansion will give us the components of the gauge and scalar fields

    in KK modes[22]:

(H, A_{\mu})(x, y) = \frac{1}{\sqrt{\pi R}}\left[ (H_0, A_{\mu,0})(x) + \sqrt{2}\sum_{n=1}^{\infty} (H_n, A_{\mu,n})(x)\, \cos\frac{ny}{R} \right], (3.2)

A_5(x, y) = \sqrt{\frac{2}{\pi R}}\, \sum_{n=1}^{\infty} A_{5,n}(x)\, \sin\frac{ny}{R}, (3.3)

    where H represents the standard model Higgs Boson and will be even under this

orbifold transformation. A_5 is needed for chirality. When n = 0 those terms are
the zero-mode excitation particles and are in fact just the normal standard model
particles. However, there exists a tower of higher dimensional excitations for these
standard model particles in the 5th dimension. Another important thing to note
about this orbifold is that the boundary conditions are given at y = 0, \pi R so that:

A = \frac{\partial^2 A}{\partial y^2} = 0 \;\text{ for odd fields}, \qquad \frac{\partial A}{\partial y} = 0 \;\text{ for even fields}. (3.4)

    The next step is to transform this model into a 4-dimensional effective theory.

    This is done by inputting the expanded fields back into the Lagrangian and then

integrating over the 5th dimension using the limits of y = 0 to \pi R for even fields and
y = -\pi R to \pi R for the odd fields. Doing the integration yields a solution in the
form of


K_0\, A_{\mu,0} + \sum_{n=1}^{\infty} K_n\, F\!\left(A_{\mu,n},\, 1/R\right). (3.5)

The K's are constants that will include R, and F is a function of
A_{\mu,n} and 1/R.

    This expansion should have observable consequences that should help determine what

exactly dark matter is if this theory pans out. We should see a set of zero-
mode particles (the Standard Model particles) and then a tower of other particles whose
masses scale with 1/R.

    R must be very small because otherwise extra dimensional excitations would al-

    ready be observable. This means that these other particles will be massive. This

    could potentially pose a problem for this theory. This is because in particle physics,

    the heavier a particle is, the more unstable it is and the more likely it will be to

    decay into lighter, stabler particles. This implies that these massive particles would

    have long since decayed as they would have been created in the Big Bang. Thus they

    would not be able to be a viable dark matter candidate.

However, this turns out to not be a problem due to an inherent property of the
compactification, known as KK-parity. The integer n is what is known
as the Kaluza-Klein number (KK-number). The KK-number is essentially a mea-
sure of the particle's momentum in the extra dimension, and if the theory were

    compactified on the S1 surface then the KK-number would be conserved. Compact-

    ifying on the S1/Z2 orbifold breaks this symmetry and calls for KK-parity to be

    conserved. KK-parity means that the evenness or oddness of the KK-number will be

    conserved in an interaction. This means that the lightest KK-particle cannot decay

(n = 1 cannot go to n = 0). Thus all the heavier KK particles will by now have decayed into
the LKP, which would still exist and could be dark matter.

    Cheng, Matchev and Schmaltz investigated what particle could be the LKP and


    in so doing developed a particle spectrum shown in Figure 3.1[24]. The mass of the

particles is proportional to 1/R at the tree level, but quantum corrections induce a
splitting between the particles.

Figure 3.1: Mass Spectrum of the UED Particles with One-Loop Corrections for Mass

    They found that the first excitation of the photon is the LKP, which is called the B(1).

Their mass spectrum depends on two parameters: R, which is the size of the extra
dimension, and \Lambda, which is the cut-off scale of the theory. Generally, \Lambda R measures

    how many excitation modes can be counted before the theory breaks down.

    The Standard Model fields appear as towers of Kaluza-Klein states at the tree

    level, where the mass is given by

m^2_{X^{(n)}} = \frac{n^2}{R^2} + m^2_{X^{(0)}}, (3.6)

where X^{(n)} is the nth Kaluza-Klein excitation of the Standard Model field, R is the
size of the extra dimension (R \sim \mathrm{TeV}^{-1} due to the fact that \hbar = c = 1), and X^{(0)} stands
for the ordinary Standard Model particle. Corrections to the KK masses are given by

    loop diagrams traversing around the extra dimension and by brane-localized kinetic

    terms at the orbifold boundaries. The corrections for the B(n) are given by:


\delta m^2_{B^{(n)}} = \frac{g'^2}{16\pi^2 R^2}\left( -\frac{39}{2}\,\frac{\zeta(3)}{\pi^2} - \frac{n^2}{3}\ln \Lambda R \right). (3.7)

    The corrections for the quarks Q(n) are given by

\delta m_{Q^{(n)}} = \frac{n}{16\pi^2 R}\left( 6 g_3^2 + \frac{27}{8} g^2 + \frac{1}{8} g'^2 \right)\ln \Lambda R. (3.8)
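A short numerical sketch of Eqs. (3.6)-(3.8) shows how the corrections split the KK spectrum. The gauge-coupling values and the choice 1/R = 500 GeV, \Lambda R = 20 below are illustrative assumptions, not the parameter values constrained later in this thesis.

import numpy as np
from scipy.special import zeta

def kk_masses(n, R_inv, LambdaR, g1=0.36, g2=0.65, g3=1.2):
    """Return (m_B(n), m_Q(n)) in the units of R_inv = 1/R (e.g. GeV).

    g1, g2, g3 stand in for g', g, g_3; rough electroweak-scale values assumed."""
    R = 1.0/R_inv
    lnLR = np.log(LambdaR)
    # Eq. (3.7): one-loop correction to the KK hypercharge boson mass squared
    dm2_B = g1**2/(16*np.pi**2*R**2)*(-39*zeta(3)/(2*np.pi**2) - n**2/3.0*lnLR)
    m_B = np.sqrt((n*R_inv)**2 + dm2_B)          # tree-level m_B(0) taken ~ 0
    # Eq. (3.8): one-loop correction to the KK quark doublet mass
    dm_Q = n/(16*np.pi**2*R)*(6*g3**2 + 27.0/8*g2**2 + 1.0/8*g1**2)*lnLR
    m_Q = n*R_inv + dm_Q
    return m_B, m_Q

print(kk_masses(1, 500.0, 20.0))   # e.g. 1/R = 500 GeV, Lambda*R = 20

For these inputs the B(1) sits just below 1/R while the KK quarks come out roughly 20% heavier, which is the qualitative pattern shown in Figure 3.1.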

The goal of my project is to constrain R and \Lambda in UED theory using the latest
data from the XENON100 direct detection experiment. In order to constrain R and
\Lambda we will have to analyze cross sections. A cross section is a measure of the likelihood of an

    interaction between particles. In this case, the elastic scattering of the LKP from a

    nucleus in a detector. The detectors used at XENON100 were made of Xe-131 and

    hence the dominant factor is the elastic scattering of the LKP off of quarks. The

leading Feynman diagrams for B(1)-quark elastic scattering are given by[25]:

    Figure 3.2: Feynman Diagrams for B(1)-Quark Elastic Scattering

    Going through the cross section calculation and performing some numerical calcula-

    tions, one finds that the cross section is given by[22]

\sigma^{B^{(1)}n}_{SI} \simeq 1.2\times 10^{-10}\ \mathrm{pb} \left[ \left(\frac{1\ \mathrm{TeV}}{m_{B^{(1)}}}\right)^2 \left(\frac{100\ \mathrm{GeV}}{m_h}\right)^2 + 0.09 \left(\frac{1\ \mathrm{TeV}}{m_{B^{(1)}}}\right)^2 \left(\frac{0.1}{\Delta}\right)^2 \right]^2. (3.9)
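Evaluating Eq. (3.9) numerically is then a one-liner. The Higgs mass and mass splitting \Delta used below are placeholder assumptions for illustration, not the values adopted in the analysis of later chapters.

def sigma_SI_B1(m_B1_TeV, m_h_GeV=120.0, delta=0.1):
    """Spin-independent B(1)-nucleon cross section of Eq. (3.9), in pb."""
    term_h = (1.0/m_B1_TeV)**2 * (100.0/m_h_GeV)**2    # Higgs-exchange term
    term_q = 0.09*(1.0/m_B1_TeV)**2 * (0.1/delta)**2   # KK-quark-exchange term
    return 1.2e-10*(term_h + term_q)**2

print(sigma_SI_B1(0.5))   # e.g. m_B(1) = 500 GeV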


    Chapter 4

    The XENON Experiments

    The XENON Experiments are a series of direct detection experiments that are cur-

    rently under way. The experiments are located at the Gran Sasso National Laboratory

    located under the Gran Sasso Mountain in Italy, which is the largest underground

    particle physics laboratory in the world. The first phase of the project, known as

    XENON10, took data from March of 2006 through October of 2007. The second

    phase is currently running and the third and final phase of the project, XENON1T,

    is in the design phase.

    4.1 The Detector

    The XENON experiments use very pure, liquid xenon as the detection medium.

    Xenon has an atomic mass of 131 which implies that there should be a high rate

    for spin-independent interactions between dark matter and xenon. This is because

    the cross section is proportional to the square of the atomic mass. Xenon is an ef-

    fective target material due to the fact that xenon is self-shielding and has a high

stopping power. Xenon has a large atomic number and high density (\approx 3 g/cm^3).


    Background gamma rays are stopped around the edges of the detector and so the

    central region will have a low background[26]. Another reason that xenon is used

is that xenon works well as a scintillator and an ionizer. It has the highest scintillation yield
among the noble liquids[27], which allows for easier detection. Xenon is also
radiologically pure, as it has no long-lived radioactive daughters except for krypton,
but krypton can be separated from the xenon through well-established methods.
Xenon is also relatively easy to cool down to cryogenic temperatures to help reduce
the background signal. Xenon should also be more sensitive to lower-energy recoils
than the materials used in various other dark matter searches, as shown in Figure

    4.1[27].

    Figure 4.1: Rate of Detection by Elements vs Recoil Energy

    The XENON100 detector itself consists of a position sensitive Xenon Time Pro-

    jection Chamber (XeTPC) shown here in Figure 4.2[27]. The position sensitivity of

    the detector plays a key role in reducing the background because it is able to localize

    events with millimeter precision in all spatial dimensions so researchers can select the

    volume in which the background is at a minimum[28]. Inside the chamber there is 161


    Figure 4.2: XENON100 Detector

kg of liquid xenon, of which 99 kg are used as a scintillator veto, and the remaining 62 kg
serves as the active target and is optically separated from the rest in a cylinder of
height 30 cm and with a radius of 15 cm. The way the XENON detector discriminates
the background is through simultaneously measuring the charge and light within the
detector through the use of 242 photomultiplier tubes[28]. Through the use of these

    simultaneous measurements, more than 99.5% of the background is rejected[29].

    The XENON10 detector was basically a smaller version of the XENON100 detec-

tor and was able to limit the cross section of the WIMP to be \sigma_{SI} = 8.8\times 10^{-44}\ \mathrm{cm}^2 [30],
while XENON100 is projected to limit the cross section to \sigma_{SI} = 2\times 10^{-45}\ \mathrm{cm}^2,
and the planned XENON1T experiment is hoping to limit the cross section to \sigma_{SI}