
Chapter 5 Big Bang Parameters

This chapter describes the advent of observational cosmology and the search for the main parameters of the big bang model. We then consider the emergence of several big bang puzzles and the problem of fine tuning.

By the 1970s, cosmology had finally become a major research area in physics. There were now

four main planks of evidence for the big bang model: the recession of the galaxies (Hubble’s

law), the abundance of the lightest elements, the distribution of the radio galaxies and the cosmic

microwave background. However, all of this evidence was rather qualitative and a new

generation of physicists took on the task of establishing quantitative parameters of the model.

The search for two numbers

One problem with the big bang model is that the relativistic models of Friedmann and Lemaître

describe evolving universes in general; they do not specify which type of universe we live in.

Key parameters are not predicted by the model, but are determined by observation. In particular,

the two most important variables - the rate of expansion of the universe and the density of matter

- are not specified. We recall from chapter 2 that both the fate and the geometry of the universe

are decided by a competition between these two parameters1. If the initial density of matter in the

universe is below a certain critical value, the expansion will overcome the pull of gravity and

the universe will expand forever (with the open geometry shown in figure 5); if the matter density is

above this value, gravity eventually overcomes the expansion (and the universe will exhibit

closed geometry). In between these cases lies the special case of a universe with an exact balance

between expansion and gravity (flat geometry). This is one reason one talks about models of the


expanding universe; it was realised early on that these two parameters could only be determined

by observation2.

Fig 5 Three possibilities for the universe depending on the initial values of the rate of expansion versus the density

of matter

The search for the Hubble constant

Considering the rate of expansion first, it can in principle be estimated directly from the Hubble

constant H0 , i.e. from the slope of the velocity-distance graph of the galaxies (figure 5).
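For readers who like to see the arithmetic, the estimate amounts to fitting a straight line through the origin of the velocity-distance graph. The sketch below uses invented galaxy data and an assumed slope of 70 km/s per Mpc; it is an illustration, not a reconstruction of any historical measurement.

```python
# Illustrative sketch only: estimating the Hubble constant H0 as the
# least-squares slope (through the origin) of a velocity-distance graph,
# v = H0 * d. The galaxy data below are invented for demonstration.

def hubble_slope(distances_mpc, velocities_kms):
    """Least-squares slope through the origin: H0 = sum(v*d) / sum(d*d)."""
    num = sum(v * d for d, v in zip(distances_mpc, velocities_kms))
    den = sum(d * d for d in distances_mpc)
    return num / den

# Hypothetical galaxies receding according to Hubble's law with H0 = 70
distances = [10, 25, 40, 60, 80, 100]        # Mpc
velocities = [70 * d for d in distances]     # km/s (noise-free toy data)

h0 = hubble_slope(distances, velocities)     # km/s per Mpc
```

With real data the scatter is large, which is precisely why the value of H0 was disputed for so long.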

Unfortunately, this parameter is very sensitive to errors in our estimates of the absolute distances

to the stars, as we saw in chapter 4. From the 1970s onwards, astronomers devoted a great deal of

time and energy to the determination of a reliable scale for stellar distance3. A key player in this

program was Allan Sandage, Baade’s successor at the Mount Wilson observatory. Working with

the 200-inch Palomar telescope, Sandage approached the problem using a variety of methods,

putting particular emphasis on obtaining extremely accurate distance measurements to the nearest galaxy

clusters. However, a serious dispute arose between his estimate of H0 and that of a group led by


Gerard de Vaucouleurs at the McDonald observatory of the University of Texas. This debate

lasted right up until the 1990s, with the Hubble Space Telescope providing a definitive answer

midway between the two estimates4.

The search for the density of matter

As regards the density of matter in the universe, we recall first that it is specified in the

Friedmann models in terms of the parameter Ω, the ratio of the actual density to the critical

density required to close the universe5. The critical density can be calculated from theory

(although it depends on the Hubble constant, as you might expect), but how does one determine

the actual density of matter at any epoch?
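The critical density mentioned above follows from the Friedmann equation as ρ_c = 3H₀²/8πG. A minimal sketch, assuming a round value of H₀ = 70 km/s/Mpc (an assumption for illustration, not a figure from the text):

```python
import math

# Sketch: the critical density from the Friedmann equation,
#     rho_c = 3 * H0**2 / (8 * pi * G),
# using an assumed round value H0 = 70 km/s/Mpc (not a figure from the text).

G = 6.674e-11              # Newton's constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.086e22        # one megaparsec in metres

H0 = 70 * 1000 / MPC_IN_M  # 70 km/s/Mpc converted to s^-1
rho_c = 3 * H0**2 / (8 * math.pi * G)   # roughly 9e-27 kg/m^3

# The density parameter Omega is the measured density divided by rho_c;
# e.g. an assumed measured density of 2e-27 kg/m^3 gives Omega ~ 0.2:
omega = 2e-27 / rho_c
```

The critical density works out to only a few hydrogen atoms per cubic metre, which gives a feel for how empty a "closed" universe would still be.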

A first guess comes from calculations of primordial nucleosynthesis (chapter 3). In order to

accurately predict the observed abundances of the lightest elements, the calculations must

assume a value for the matter density that is very low (Ω ~ 0.05) relative to the critical value (Ω =

1). This value implies a universe that easily overcomes the pull of gravity. However, this method

is very limited as it considers only baryonic matter i.e. particles that take part in nuclear reactions

(protons and neutrons). Nucleosynthesis puts no limit on the amount of non-baryonic matter in

the universe and such matter could contribute significantly to the mass of the universe.

A second method is to estimate the mass of all the observable galaxies and galaxy clusters; this is

done by measuring their gravitational effects on other bodies6. These measurements offered

support for a surprising idea that had been around for some time; there appears to be a great deal

of matter in the universe that is observable only by its gravitational effect. For example,

measurements of the speed of rotation of certain galaxies strongly suggest the presence of nearby


matter that is not detectable through telescopes at any wavelength. Such matter is called dark

matter and its existence was first mooted by the Swiss astronomer Fritz Zwicky in the 1930s.

Although dark matter was seen as something of a fudge for some years, the phenomenon is now

accepted by the vast majority of cosmologists7. That said, the nature of the particles that make up

dark matter remains unknown8. We shall return to this topic many times - for the moment, we

note that, even including the contribution of dark matter, the ‘gravitational method’ suggested a

density value of about Ω = 0.2 for today’s universe, again indicating an expanding universe that

overcomes the pull of gravity.

A third method is to use very distant objects to measure the curvature of space (recall that this

curvature is directly related to mass, according to relativity). Such methods also suggested a

value of Ω less than 1, but they turned out to be rather error-prone. Yet another method is to

measure the luminosity of galaxies and convert this to a measurement of mass density using an

estimate of the mass-to-light ratio of matter. This is a rather complicated method9 and we simply

note that such measurements suggested a value of Ω = 0.25. All in all, the various observational

methods seemed to indicate a value of Ω between 0.1 and 0.3, indicative of an open universe.

But where did this value come from? What property of the early universe resulted in a value of

Ω ~ 0.3?

The problem of structure

Another question soon emerged; how did large-scale structures such as galaxies and galaxy

clusters form? The Friedmann/Lemaître models presume a universe that is both isotropic and

homogeneous on the largest scales. As astronomy progressed in the 1960s and 70s, this principle

seemed to be supported more and more by observation. On the other hand, the density of matter


in a particular galaxy is approximately one million times the density in the cosmos at large. How

did this happen?

The obvious approach in the study of the formation of the galaxies is to assume that natural

infinitesimal fluctuations in the density of the early universe were amplified by the force of

gravity, becoming the structures we see today. However, early calculations by Lemaître, Richard

Tolman and Evgenii Lifshitz showed that, in an expanding universe, such fluctuations are too

small to give rise to the large-scale structures observed today 10. Hence, one must assume that the

universe was ‘born’ with irregularities of just the right size to give rise to the galaxies we see

today.

In the 1960s, Yakov Zeldovich and Igor Novikov of Moscow University, and Jim Peebles of

Princeton, re-examined this question in the context of measurements of the cosmic microwave

background. As we saw earlier, the background radiation is a snapshot of the universe at the time

when free particles coalesced into atoms (the epoch of recombination, long before the formation

of galaxies). Fluctuations in density existing at that time were imprinted on the cosmic

background radiation, measurable today as tiny fluctuations in the temperature of the radiation.

From these considerations, two competing models of galaxy formation arose – a bottom-up

process where the initial perturbations in the primordial plasma simply grew larger and larger to

form galaxies (Peebles et al.) and a top-down process, where the initial perturbations clumped to

form large growths of matter which later fragmented into galaxies (Zeldovich et al.). However,

neither model was very successful11.

Another approach concerned neutrinos. These, the lightest known particles, travel almost at

the speed of light and are extremely weakly interacting. The number of neutrinos in the universe


is not constrained by nucleosynthesis calculations because they do not partake in nuclear

reactions (see above). Hence a universe filled with enough neutrinos could give rise to the

density fluctuations necessary for the formation of galaxies; this theory became known as the hot

dark matter model of galaxy formation. However, the model fell out of favour when

measurements in particle accelerators showed that neutrinos are simply too light to play a major

role in galaxy formation12.

A final possibility was cold dark matter. In this scenario, dark matter particles that are heavier

and slower than neutrinos might provide a mechanism for the formation of galaxies. In 1982,

Jim Peebles showed that this hypothesis could indeed produce the measured fluctuations in the

cosmic microwave background13. To this day, the hypothesis of cold dark matter predicts a

spectrum for the microwave background that is in good accord with measurements from satellite

telescopes, as we shall see in chapter seven. Unfortunately, the model does not specify the

nature of the particles making up the cold dark matter!

The problem of baryon number

Another puzzle concerned the number of baryons in the universe (recall that a baryon is the

name given to particles that partake in nuclear reactions i.e. protons and neutrons). In the big

bang model, one expects the total number of baryons in the universe to remain constant as the

universe expands14. One also expects the number of photons (the particles that make up

radiation) to remain constant. Hence the ratio of baryons to photons is a characteristic parameter

that was fixed early in our universe – and in fact is measured as about 1 billion photons for every

baryon. But what determined this ratio? Was the universe born with this characteristic?


The problem of fine tuning

All of the parameters above share an important feature; they are not predicted by the Friedmann-

Lemaître models, but have to be assumed. This is known as the problem of initial conditions or

the fine tuning problem. Why should the Hubble constant have the value it does? What

determined the initial value of the density of matter? What determined the ratio of photons to

baryons? What caused the fluctuations in matter density that gave rise to today’s galaxies? While

the four planks of evidence for the big bang model stood on solid ground, it was unsatisfactory

that so many parameters of the model had to be assumed, rather than be predicted by the theory.

This led some to wonder whether the model was incomplete; and an old puzzle lent weight to

this idea.

The problem of the singularity

A final big bang puzzle was the old conundrum of the singularity. As we saw in chapter 3,

backtracking along the Friedmann-Lemaître graphs brings one to a universe that is infinitely

small and infinitely dense, which seems rather unreasonable. The big bang universe does not

make sense at ‘time zero’!

This problem was sidelined for some years – after all, it is not unusual for perfectly good theories

to break down at some point (note that Newtonian gravity also contains a singularity14). It is

interesting that Einstein himself warned of the dangers of extrapolating relativistic models back

to a universe of atomic dimensions. However, in the 1970s, Stephen Hawking and Roger Penrose

published a number of theorems suggesting that an expanding universe must begin in a


singularity, assuming only very general conditions15. This development brought the problem of

the singularity once more to the fore.

How does one resolve the puzzle of the singularity? The key undoubtedly lies in the realm of

quantum physics. The Friedmann-Lemaître model is rooted in general relativity, a theory that

takes no account of quantum effects – yet one can certainly expect quantum effects to become

important in a universe of atomic dimensions16. Hence we cannot expect to have a reliable

description of the origin of the universe until we have a version of general relativity that

incorporates quantum physics. Much effort has been devoted to achieving this synthesis but it

has proved elusive. It is a remarkable fact that the two great pillars of modern physics, general

relativity and quantum physics, have so far proved irreconcilable. One consequence of this failed

marriage is that the big bang model is an effective model, not a complete one. We note once more

that the moniker ‘big bang’ is a terrible misnomer as the model breaks down long before a ‘bang’

is reached. That said, we shall see in the next chapter that a radical new version of the theory was

to cast some light on this great puzzle…


Chapter 6 The inflationary universe

In this chapter, we encounter two major puzzles associated with the big bang model, the so-called horizon and

flatness problems. We see how a new version of the big bang model, the theory of cosmic inflation, addresses these

problems and offers a new insight into the formation of the galaxies.

We saw in the last chapter that despite its great successes, a shortcoming of the big bang model

is that many parameters are not specified by the model, and one must assume the universe was

‘born’ with certain characteristics such as the Hubble constant, the initial density of matter and

the baryon-to-photon ratio. As theorists analysed the model further, some puzzles of a more

fundamental nature emerged, notably the horizon and flatness problems.

The horizon problem

One outstanding puzzle concerned the homogeneity of the universe. We recall that the

Friedmann-Lemaître models assume that the universe is both isotropic and homogeneous on the

largest scales (the cosmological principle). As astronomy progressed in the 1960s and 70s, this

assumption was increasingly supported by observation. The Hubble expansion appeared to be the

same in every direction. Galaxy surveys also revealed a large-scale uniformity; for any given

epoch, the density of galaxies and galaxy clusters appeared to be approximately constant. Most

tellingly, there seemed to be no detectable variation in the intensity of the cosmic background

radiation, suggesting a high degree of homogeneity in the universe at the time of the formation of

atoms. As studies of the cosmic microwave background progressed, it became increasingly clear


that the radiation was extremely smooth, indicating an extremely homogeneous universe at least

at the time of recombination.1

Unfortunately, it was not clear why the universe should be so homogeneous. In nature, such

equilibrium is only achieved by objects coming into thermal contact with one another and

exchanging energy until any inhomogeneities are balanced out (just as a hot cup of tea eventually

cools to the temperature of its environment). However, calculations show that the most distant

regions of our universe could not have been in such causal contact; there simply hasn’t been

enough time for light to travel from one such region to another during the lifetime of the

universe. The limit of influence of any region of space is set by the finite speed of light and is

called its horizon; hence this paradox is known as the horizon problem. Simply put, the

smoothness of the microwave background suggests that regions of the universe separated by

distances far greater than their respective horizons have nonetheless been in thermal contact.

You might argue that no such paradox should apply in a universe that originated in a minute

volume of space; surely all regions were originally in thermal contact? In a way this is correct.

The problem is one of backtracking, as we try to reconcile the homogeneity of today’s universe

with both its scale and its age. One way of thinking about the horizon problem is that the size of

the universe, as measured from the Hubble graph, doesn’t seem to match its contents.

The flatness problem

Another puzzle concerned the geometry of the universe. As we saw earlier, relativity predicts

that the curvature of the universe is determined by the density of matter within it; in an

expanding universe, the density of matter decreases over time and hence the density parameter Ω


also evolves (recall that Ω is the ratio of the actual density of matter to the critical value required

to close the universe). A careful analysis of the behaviour of this parameter over time led the

Princeton physicist Robert Dicke to a startling prediction; if the density of matter diverged from

the critical value (Ω = 1) by even a minute amount in the first fractions of a second, this

divergence would accelerate rapidly with time, resulting in either a runaway closed or a runaway

open universe1 (see figure 8). Since observations suggest that we live in neither of these, Dicke’s

analysis forces us to conclude that the density of matter in the infant universe must have been

extremely close to the critical value. However, this conclusion was problematic. First, how could

it be reconciled with the observational value of Ω ~ 0.3 (see last chapter)? Second, it seems

extraordinary that the infant universe should be so delicately balanced between the energy of

gravity and the energy of expansion. This conundrum became known as the flatness problem.
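Dicke’s amplification can be illustrated with a toy calculation. In a matter-dominated Friedmann universe the deviation |1/Ω − 1| grows in proportion to the scale factor a; the sketch below simply scales an assumed early deviation forward, and all the numbers are illustrative only.

```python
# Toy illustration (not a full Friedmann solver): in a matter-dominated
# universe the deviation |1/Omega - 1| grows in proportion to the scale
# factor a, so any early imbalance is rapidly amplified. All numbers
# below are assumed for illustration.

def deviation(a, dev_initial, a_initial):
    """|1/Omega - 1| at scale factor a, given its value at a_initial."""
    return dev_initial * (a / a_initial)

# An imbalance of one part in 10^15 when the universe was a billion times
# smaller (a = 1e-9) has grown a billionfold by today (a = 1):
early = deviation(1e-9, 1e-15, 1e-9)   # still 1e-15
today = deviation(1.0, 1e-15, 1e-9)    # about 1e-6
```

Run the logic in reverse and the problem is stark: a roughly flat universe today demands an almost unimaginably precise balance at early times.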

Figure 8; The flatness problem; calculations show that the slightest deviation from flatness in a universe 1

nanosecond old quickly amplifies, resulting in a runaway closed or runaway open universe.


The theory of inflation

In the early 1980s, a new version of the big bang model was proposed. This model arose from

considerations in particle physics and marked the beginning of an extremely fruitful alliance

between the fields of sub-atomic physics (the world of the extremely small) and cosmology (the

world of the extremely large). The core of the proposal was simple but startling; what if, during

the first fractions of a second, the infant universe underwent a dramatic, exponential expansion

of space, after which it relaxed to the slower expansion we measure today?2 This idea is

associated with the American particle physicist Alan Guth and he named his proposal the

inflationary universe.
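The size of such an exponential expansion is conveniently expressed in e-folds, N = ln(a_end/a_start). Taking an illustrative expansion factor of 10⁵⁰ over 10⁻³⁰ seconds (the orders of magnitude quoted later in the chapter), a short sketch:

```python
import math

# Sketch: an exponential expansion a(t) = a0 * exp(H * t) is usually
# quantified in e-folds, N = ln(a_end / a_start). The expansion factor
# 10**50 and the duration 10**-30 s are illustrative orders of magnitude.

expansion_factor = 1e50
duration_s = 1e-30

n_efolds = math.log(expansion_factor)   # about 115 e-folds
H = n_efolds / duration_s               # implied (enormous) expansion rate, s^-1
```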

What could cause the infant universe to undergo a rapid expansion in the first fractions of an

instant? Guth postulated that, as the nascent universe cooled, it may have been trapped in an

energy state known as a false vacuum; the latter is a ‘metastable’ state of high energy density that

is temporarily prevented from relaxing to the natural or ‘ground’ state of lowest energy. (A good

analogy is a marble sitting on top of an inverted bowl: the marble can rest there, but will easily fall to

a lower energy state if nudged). An important property of the false vacuum is that it exerts a

negative pressure on its surroundings, not unlike suction. Now pressure, according to general

relativity, has an associated gravitational field; in particular negative pressure creates a repulsive

gravitational field. Hence, a false vacuum state could create a significant force of repulsion.

The idea of a universe that expands exponentially was not entirely new (it arises in the context of

the de Sitter model and more modern versions were proposed by Andrei Linde, Yakov

Zeldovich, Alexei Starobinsky and others4); however, Guth’s key insight was that an exponential


expansion of the infant universe could offer a simultaneous solution to the horizon and flatness

problems5.

Inflation and the horizon problem

For the case of the horizon problem, we imagine a minute6 patch of space before inflation begins;

a uniform, homogeneous state is easily established in such a region because all points are in

thermal contact. When inflation occurs, this region of space is stretched exponentially;

neighbouring points are driven apart to distances so large they cannot communicate even by light

signals. In the language of physics, the components of the region are swept far beyond their

particle horizons by inflation (recall that the horizon of each component is set by the finite speed

of light). The homogeneity of the region is preserved, although it will appear quite mysterious to

an observer! (see figure 10).

Figure 10 If a minute, homogeneous region of space is stretched to a size larger than today’s observable universe, its

homogeneity is preserved

But surely the rate of expansion of the early universe is constrained by the speed of light? In fact,

relativity specifies the speed of light in vacuum as a limiting speed for any object in spacetime;

it sets no limits on the behaviour of space itself. Thus, it is possible in principle for the expansion


of space to be arbitrarily large. Indeed, according to inflation, a small region of space could have

been inflated to dimensions larger than the universe we observe today. This aspect of inflation

forces one to consider that the universe we measure may simply be the observable universe - a

small patch of a much larger entity!

Inflation and the flatness problem

The concept of inflation also offered a neat solution to Dicke’s flatness problem. If a region of

space is inflated in the first fractions of an instant to a size larger than the universe we observe,

its geometry will inevitably appear flat to us, just as the surface of an enormous balloon appears

flat to an insect on it. This solves the flatness paradox beautifully; instead of deviations from

flatness leading quickly to either a runaway open or runaway closed universe, inflation posits a

universe that is driven towards flatness (see figure 11). Hence, the theory makes a very clear

prediction: the geometry of the observable universe should be exactly flat, to a high degree of

accuracy (more on this in chapter 7).

Figure 11 Inflation drives the infant universe towards flatness


In sum, inflation pushes the universe into a remarkably simple state, since all inhomogeneities

and local curvatures of space are smoothed out by the enormous expansion; such a universe is

nowadays called a no-hair universe7.

New inflation

Guth’s paper on inflation was ‘a shot heard round the world’, not least because he emphasized how

the model offered an intriguing solution to both the horizon and flatness problems. However, it

was clear from the outset that it contained a significant flaw. The theory could not describe how

inflation ends, relaxing to the familiar Hubble expansion (Guth initially postulated that the

universe got out of its metastable state by a process of quantum tunnelling,8 but calculations

showed that this process gives rise to huge inhomogeneities not observed in today’s universe).

This problem became known as the graceful exit problem.

The problem was overcome in 1982, when the Russian theorist Andrei Linde and the American

physicists Paul Steinhardt and Andreas Albrecht independently published new versions of the

inflationary universe9. In these models, the false vacuum state is less severe than that of Guth and

the phase transition to the state of lowest energy is a much gentler process (figure 12). These

models solved the graceful exit problem and became known as new inflation. It’s worth noting

that new inflation did not specify a particular quantum field, simply that the field should have an

extremely flat potential-energy curve and a slow transition to the true vacuum10.

New inflation also gave a good description of the important phenomenon of reheating. During

the gargantuan expansion, one can expect that the universe underwent a tremendous cooling. The

new inflation models described an exit from the false vacuum state that resulted in a huge release


of energy in the form of incredibly hot radiation and particles, exactly the ‘initial state’ required

by traditional big bang models. But the best was yet to come...

Figure 12: New and old models of inflation

A mechanism for galaxy formation

Many cosmologists were struck by the way inflation simultaneously addressed both the horizon

and flatness problems. With the problem of how inflation ends cleared up, they

became interested in inflation as a potential mechanism for galaxy formation.

As we saw earlier, quantum physics predicts minute variations in the density of matter in the

infant universe11. However, detailed studies of galaxy formation had long suggested that such

perturbations were simply too small to give rise to the large-scale structures of today. Hence, one

was forced to assume that the universe was ‘born’ with certain inhomogeneities. Inflation

breathed new life into this question; with the aid of an exponential expansion during the first


fractions of a second, could natural fluctuations in density have given rise to the galaxies after

all?

A feverish amount of calculation followed, with analyses by Guth, Linde, Steinhardt, Hawking,

Michael Turner and many others. (A key stepping stone was to see whether quantum

perturbations in an inflationary universe could give rise to the inhomogeneities in the cosmic

background radiation necessary for the seeding of the galaxies). After several false starts, a great

deal of thought and three weeks of hard calculation at a workshop in Cambridge University12, a

stunning result was announced - natural fluctuations in density in an inflationary universe could

indeed give rise to the perturbations responsible for today’s large-scale structures13!

This was an exciting advance – a theory that was posited to address the horizon and flatness

problems had given the first working explanation for the seeding of the galaxies. Best of all, the

explanation arose from fundamental considerations of quantum physics, opening up a new area

of research – the synthesis of quantum theory, particle physics and cosmology, a thriving field

now known as particle cosmology. As we shall see in the next chapter, increasingly precise

measurements of the cosmic microwave background were to offer further support for the

analysis.

Inflation and the philosophy of science

It is often stated that the theory of inflation constituted a new paradigm in cosmology. However,

it is probably more accurate to say that it is an intriguing addition to the existing big bang

paradigm. After all, the theory simply superimposes an extremely brief period of hyper-expansion on traditional models of the expanding universe. Inflation is therefore a variant on the


big bang model, a version of the theory that provides a natural explanation for several

‘coincidences’ that are otherwise hard to explain – the homogeneity of the universe, its geometry

and its large-scale structure (we shall see in chapter 7 that more accurate measurements of all

three of these parameters offer further support for the theory). These are no mean

accomplishments - scientists are always impressed by a theory that can explain apparently

special conditions in terms of general considerations.

However, it should be noted that some physicists find the theory of inflation rather contrived. Is

it reasonable to be talking about unimaginably large expansions of space (of the order of 10⁵⁰)

occurring over unimaginably short timespans (of the order of 10⁻³⁰ s)? It seems rather speculative

and divorced from reality. How could such a theory ever be tested directly?

A more specific problem concerns the mechanism of inflation. To this day, it is not known what

type of physical field could give rise to the phenomenon. A great many models of inflation have

emerged and it is not clear how to decide between them – this is not a situation physicists enjoy.

There is also a new problem of fine tuning; although inflation neatly avoids many of the special

initial conditions required by traditional big bang models, the ‘no-hair’ inflationary universe

requires a few initial conditions of its own – namely a certain type of quantum field and a certain

type of phase transition. Hence it can be argued that inflation has simply replaced one set of

initial conditions with another.

These are the technical drawbacks of the model of an inflationary universe. However, it is the

philosophical implications of the theory that are most disturbing. Recall that inflation posits that

a small region of space could have been inflated to our observable universe – this immediately

raises the possibility that the universe we observe is just a fraction of a much larger,


unobservable ensemble. Worse, one has to consider the possibility that other regions of space

were inflated. As such regions would lie far beyond our horizon, they would effectively become

parallel universes. This idea, that we live in one of a multitude of parallel universes (the

multiverse), is not at all attractive to empirically minded physicists, but it has proved hard to rule

out.

Further, the theorist Andrei Linde has shown that it is unlikely that the inflation field was exactly

the same everywhere; hence we can expect other universes to have different properties to our

own (this theory is known as chaotic inflation). Some claim that chaotic inflation offers an

intriguing solution to the fine tuning problem – in a multitude of universes with vastly different

properties, it is not so unlikely that at least one universe should have exactly the right conditions

for life to emerge. However, most physicists dislike this sort of speculation. The problem was

neatly summarized by the Archbishop of Canterbury a few years ago, when he stated “I find it

disappointing that, in order to explain the properties of one observable universe, scientists are

now postulating the existence of an infinite number of unobservable ones”! Touché.

In conclusion, it is important to note that, far from being a contrived put-up job, the theory of

inflation arose from fundamental considerations of particle physics14. Yet the theory provides a

natural explanation for several aspects of our universe that are otherwise hard to explain – its

homogeneity, its geometry and its large-scale structure. In particular, the theory predicts that the

universe should exhibit a flat geometry. This prediction seemed in glaring conflict with

observational data at the time, but that was soon to change…


Chapter 7

Dark energy and the accelerating universe

In this chapter, we will see how modern measurements of the cosmic microwave background gave convincing support for both the big bang model and the theory of inflation. We shall also see how a new method of estimating astronomical distance led to the discovery of the accelerating universe. These data combine to form today’s model of a flat, accelerating universe that contains dark matter and dark energy.

The discovery of the cosmic microwave background (CMB) heralded a new era in cosmology.

However, the ensuing study of the background radiation led to several puzzles, which in turn led

to the development of the theory of cosmic inflation (chapter 6). Inflation caused something of a

divide between theorists and experimentalists; many theoreticians now believed in an

inflationary universe of flat geometry1, while astronomers measured a universe with a low mass

density and thus open geometry. Which was the real universe?

A second disjunction of theory and experiment gradually emerged. It was realized from the first

that the seeds of today’s galaxies and galaxy clusters should be detectable as small perturbations

in the temperature of the cosmic background radiation. Yet non-uniformities in the radiation

were not detected throughout the 1980s, despite the use of sensitive detectors on balloons flown

high above the atmosphere2; the radiation appeared stubbornly smooth.

The COBE mission

In 1989, the Cosmic Background Explorer (COBE) was launched into orbit 900 km above the

earth (figure 10). This was the world’s first satellite measurement of the cosmic microwave

background radiation; the project had been planned since the 1970s, but had suffered a number of mishaps and delays.3 The mission had two main aims; to measure the shape of the full

spectrum of the background radiation and to search for minute fluctuations in the radiation. The

spectrum was mapped by an instrument known as FIRAS while the fluctuations were measured

by a Differential Microwave Radiometer (DMR)4.

In 1990, the FIRAS results were announced; the measured spectrum of the background radiation

fitted the theoretical spectrum of a black body with remarkable precision (figure 11). This was a result of

great significance as it constituted evidence that the radiation was of primeval origin, effectively

ruling out many alternative models5. The data were presented at a meeting of the American

Astronomical Society in 1990 and led to a standing ovation for John Mather, the instrument’s

chief designer.

Figure 10a; the COBE satellite at an altitude of 900 km above earth


Figure 10b; spectrum of the CMB as measured by the FIRAS instrument aboard the COBE satellite. The data

(squares) fit the theoretical spectrum of a perfect blackbody (solid curve) precisely

In April 1992, the data from COBE’s DMR experiment were announced; small variations in the

temperature of the radiation had at last been observed! However, the fluctuations were tiny

indeed, of the order of one part in 100,000 or 0.001%. Small wonder that they had not been

observed earlier.

The COBE measurements provided a significant boost to the big bang model; the spectrum of

the microwave background was exactly as predicted and perturbations in the radiation

corresponding to the seeds of structure had finally been detected. Granted, these ripples were

extremely small; it was hard to see how they would lead to today’s galaxies in the standard big

bang model. However, the theory of inflation provided a ready answer, as we saw in chapter six.

When George Smoot, chief scientist on the DMR instrument, presented the results at a press

conference in 1992, he described the data as ‘direct evidence of the birth of the universe’ and

said that looking at the data was ‘like seeing God if you are religious’.6 Soon afterwards, the

famous cosmologist Stephen Hawking described the COBE results as ‘the most significant

scientific finding of the twentieth century’. Some years later, John Mather and George Smoot

were awarded the Nobel prize in physics “for their discovery of the blackbody form and

anisotropy of the cosmic microwave background radiation”.

The Hubble Space Telescope

Another important advance was the launch of the famous Hubble Space Telescope (HST). The

concept of an observatory orbiting in space had been mooted for many years, and after many

delays the HST was finally launched aboard the space shuttle Discovery in 1990. After some


initial problems, the telescope began taking breathtaking images of deep space from 1993 onwards7. These images have become icons of our age and led to a great upsurge of interest in

astronomy (see figure 11).

The new Space Telescope extended measurements of the cosmos by detecting Cepheid variables

(see chapter 1) in galaxies much further away than ever before. The result was a more accurate

estimate of Hubble’s constant (73 +/- 1 km/s/Mpc), settling the dispute between Sandage and de Vaucouleurs (see chapter 5). Ironically, the revised Hubble constant presented a new version of

the age problem; it implied an age for the universe that was uncomfortably close to that

calculated for some galaxy clusters!8

Figure 11a; the Hubble space telescope

Figure 11b Iconic image of the ... galaxy taken by the Hubble space telescope


The accelerating universe

An exciting new method of measuring the cosmos was discovered towards the end of the 1990s.

This was the technique of using supernovae for the calibration of astronomical distance9. A

supernova occurs when a star ends its life in a cataclysmic explosion, spewing its material outwards in an event of extreme luminosity. Supernovae are very rare events, but as ever more sophisticated telescopes reached

further into space (and back in time) more of them were observed. More importantly, it was

discovered that the luminosity of a particular type of supernova – type Ia – is extremely uniform, ideal for the calibration of astronomical distance10.

During the 1990s, two teams of astronomers sought to extend the Hubble graph using type Ia

supernovae as distance markers; the Supernova Cosmology Project led by the American

physicist Saul Perlmutter and the High-z Supernova Search Team led by the astronomer Brian Schmidt. By 1998, both teams had reported an astonishing discovery; the most distant

galaxies were dimmer (i.e. further away) than expected from simple extrapolations of the Hubble

graph, by a factor of about 25%! This was a curious finding indeed. Although we refer

continually to the ‘Hubble constant’, physicists had long expected to observe a decrease in the

rate of the expansion of the universe at the largest distances, due to the gravitational pull of the

galaxies. But the supernova measurements pointed to a rate of expansion that is increasing, a

finding now known as the acceleration of the universe.

Despite initial scepticism, painstaking observations of more supernovae suggested that the effect

was real and that there was an unknown force of expansion at work in the cosmos. It seemed

ordinary matter and dark matter did not comprise the total energy density of the universe; there

was a contribution from an unknown source that became known as dark energy.11


Dark energy

The discovery of dark energy marked yet another paradigm shift in cosmology. But what was the

physical process responsible for the phenomenon? Several explanations were proposed, but the

simplest was an old friend; perhaps the accelerated expansion was evidence of a cosmological

constant. Recall that Einstein added a cosmological constant to the equations of relativity in

order to predict a static universe, interpreting it as a constant energy of space that acts to

counterbalance the inward force of gravity at the largest scales. With Hubble’s observation of the

galaxy redshifts, Einstein dropped his cosmological constant, declaring it his ‘greatest blunder’

(see chapter 2). However, Lemaître and others considered it an important component of

relativistic models of the universe and used it to reconcile the age of the universe measured from

the Hubble graph with the known age of stars. When Hubble’s estimates of astronomical

distance were revised by Baade and Sandage in the 1950s, it seemed the cosmic constant could

be safely set to zero. However, it never really disappeared from mathematical models of the

universe and now it was back with a vengeance.

Considering the new Hubble graph of figure 12, a quantitative estimate of the contribution of

dark energy can be obtained by fitting a curve to the data. Recall that the density of matter in the

universe is defined in terms of the parameter Ω, the ratio of the actual density of matter to the

critical value required to close the universe. This definition must now be extended to include a

new term ΩΛ , representing the contribution of dark energy. Thus, the geometry of the universe

will be determined by the sum of two contributions i.e. Ωtotal = ΩM + ΩΛ. (As before, Ωtotal <1

denotes a universe of open geometry, Ωtotal >1 denotes a universe of closed geometry and Ωtotal =1

is the special case of a flat universe). It is clear from fig 12 that the astronomer’s favourite, an

open universe of low matter density and no dark energy (ΩM = 0.3, ΩΛ = 0) does not fit the data


well, nor does a universe of flat geometry and no dark energy (Ωtotal = 1, ΩΛ = 0). By far the best

fit is obtained if one assumes a universe of flat geometry (Ωtotal = 1) with a dark energy

contribution over twice that of matter. Taking the observational value of ΩM = 0.3 for the matter

density then suggests a value of about 0.7 for ΩΛ. However, these estimates are provisional as

the fit only provides an estimate of the relative contributions of dark energy and of matter; it

does not provide an independent estimate of either, or indeed of the overall geometry of the

universe12.
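The model comparison described above can be sketched numerically. The short calculation below is an illustrative simplification, not the teams’ actual analysis: it computes the predicted luminosity distance of a supernova at redshift z = 0.5 in each of the three candidate models, showing why the dark-energy model makes distant supernovae appear dimmest.

```python
# Illustrative sketch (not the actual analysis behind figure 12):
# predicted supernova distances at z = 0.5 in three cosmological models.
import math

def integrate(f, a, b, n=10000):
    """Simple midpoint-rule numerical integration."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def luminosity_distance(z, omega_m, omega_l):
    """Luminosity distance in units of the Hubble distance c/H0."""
    omega_k = 1.0 - omega_m - omega_l            # curvature contribution
    def inv_E(zp):                               # 1 / (H(z)/H0)
        return 1.0 / math.sqrt(omega_m * (1 + zp)**3
                               + omega_k * (1 + zp)**2 + omega_l)
    chi = integrate(inv_E, 0.0, z)               # comoving distance
    if omega_k > 1e-9:                           # open geometry correction
        chi = math.sinh(math.sqrt(omega_k) * chi) / math.sqrt(omega_k)
    return (1 + z) * chi

z = 0.5
d_flat = luminosity_distance(z, 1.0, 0.0)   # flat, no dark energy
d_open = luminosity_distance(z, 0.3, 0.0)   # open, no dark energy
d_lcdm = luminosity_distance(z, 0.3, 0.7)   # flat with dark energy

# Observed flux falls as 1/d^2, so the dark-energy model predicts
# the dimmest supernovae of the three.
print(d_flat, d_open, d_lcdm)
print("flux ratio (open/lcdm):", (d_open / d_lcdm)**2)
```

Since flux falls as the square of distance, this toy calculation places the supernovae of the dark-energy model roughly 20% dimmer than those of the open model – the order of magnitude of the effect reported by the two teams.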

Figure 12 Supernovae measurements show the Hubble graph is not linear at the largest distances. Three possible fits

to the data are shown; the best fit is obtained with a model with ΩM = 0.3 and ΩΛ = 0.7 (top line). Data and fits from

[12]

In sum, the discovery of an accelerated expansion indicated a universe with two contributions to

its energy density and thus to its geometry. One contribution, ΩM, comes from both ordinary


matter and cold dark matter. Another contribution, ΩΛ, acts to oppose the gravitational effect of

matter and became known as dark energy. Fits to the data suggested a relative contribution of

about 30% from matter and 70% from dark energy respectively; from the known density of

matter this in turn suggests a universe of flat geometry, in pleasing agreement with the

predictions of inflation. However, further measurements were needed in order to obtain

independent estimates of each contribution and of the overall geometry of the universe.

Could the observed acceleration be some type of systematic error in the supernova data? This

possibility was ruled out when measurements of even more distant (and older) supernovae

showed evidence of a decelerating expansion13. It seems dark energy became significant only

when the universe reached a certain age; after a gradual slowing for the first seven billion years

or so, the expansion then began to speed up! More recent studies of a great many supernovae

have confirmed the current acceleration beyond doubt14.

It should also be noted that the concept of dark energy did not come as a complete surprise to

many theorists. In the first instance, inflation predicted a universe of flat geometry; given the

observational value of ΩM = 0.3 for the density of matter, this pointed to another density

contribution of unknown origin. Secondly, there was the age problem; some theoreticians had

already resurrected the cosmological constant in order to ease the tension between the calculated

age of globular clusters and the revised value of the Hubble constant15. It is impressive that many

papers were written on this subject before the supernova data were announced; the latter must

have been a pleasant surprise for those authors16!

Dark energy and the cosmic microwave background


Meanwhile, measurements of the cosmic background radiation continued. It was soon realised

that accurate measurements of the fluctuations in the background radiation could also reveal the

presence of dark energy. This is because the angular scale of the perturbations indicates whether the geometry of the universe is open, closed or flat - and thus reveals a contribution from dark

energy17. The COBE satellite had already detected ripples in the CMB, but more accurate

measurements were needed in order to determine their scale.

In 1999, two balloon experiments reported new measurements of fluctuations in the cosmic

microwave background. (Such experiments are significantly cheaper than satellite missions, but

only a small part of the sky can be mapped). The BOOMERANG project took data at an altitude

of 42 kilometres above the Antarctic for a period of ten days, while the MAXIMA project was

launched from Texas. The results suggested a value of Ωtotal = 1.00 +/- 0.05, i.e. the geometry of

the universe is indeed flat to within an experimental error of less than 5%! This was an important

advance; taking the observational value of ΩM = 0.3 for the density of matter, it suggested a dark

energy contribution of 0.7, in good agreement with the supernova results18. It is always a good

moment in science when different lines of enquiry point towards the same result and the next

experiment was to prove even more satisfying…


Fig 15. The BOOMERANG experiment gave the first direct experimental measurement of the geometry of the

universe

The WMAP mission

In 2001, the Wilkinson Microwave Anisotropy Probe (WMAP) was launched into orbit. This

experiment comprised highly sensitive instruments on board a satellite 1.5 million kilometres

from earth, calibrated to measure fine variations in the entire spectrum of the microwave

background. The measured variations are plotted as a function of angle in figure 13, a plot

known as the angular power spectrum (the temperature fluctuations are decomposed into multipoles and their intensity is plotted as a function of multipole number, a representation that arises from a mathematical technique known as spherical harmonics). The dominant feature of the

spectrum is the large peak on the left hand side, known as the ‘first acoustic peak’. Theory

predicts that this feature is a direct measure of the curvature of space and the WMAP data

imply a value of Ωtotal = 1.0 +/- 0.02, in excellent agreement with the balloon experiments. The

spectrum also gives an estimate of ΩM = 0.27 for the contribution of the matter density alone, in

good agreement with astrophysical observation. Combined together, these results suggest a

value of ΩΛ = 0.73 for the contribution of dark energy, in excellent agreement with the

supernova results.
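The arithmetic behind this combination is simple; assuming the measured flat geometry, the dark energy contribution follows by subtraction:

```latex
\Omega_\Lambda = \Omega_{\mathrm{total}} - \Omega_M = 1.0 - 0.27 = 0.73
```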

A theoretical fit (solid line) to the data is also shown in figure 13. This fit has been computed

with the use of the parameters Ωtotal = 1.0, ΩΛ = 0.73 and ΩM = 0.27 19. It can be seen from the

diagram that these parameters give a very good fit indeed to the data. By contrast, none of the

main alternative models (a flat universe with no dark energy, a curved universe with no dark

energy or a curved universe with dark energy) give a good fit, as can be seen in figure 14. By far


the best fit is obtained by assuming a model of a flat universe containing ordinary matter, cold

dark matter, and dark energy.

Fig 13 Anisotropies of the CMB as a function of angle, known as the power spectrum. The solid line is that

predicted by the Λ-CDM model with the parameters Ωtotal = 1.0, ΩΛ = 0.73 and ΩM = 0.27

Fig 14 Alternative fits to the power spectrum. The solid line is that predicted by the Λ-CDM model with the

parameters Ωtotal = 1.0, ΩΛ = 0.73 and ΩM = 0.27

The Λ-CDM model

The experiments of this chapter - the COBE satellite mission, the supernova

measurements, the BOOMERANG and MAXIMA balloon experiments and the

WMAP mission – led inexorably to a new version of the big bang model. This

view is now known as the standard or Λ-CDM model and it states that we live

in an accelerating universe of flat geometry that contains ordinary matter, cold

dark matter (CDM) and dark energy. The Λ symbol indicates that the dark energy is associated

with a non-zero cosmological constant that is thought to arise from a natural tendency of space to

expand, opposing the gravitational effect of matter. Note that the moniker is slightly misleading


as we expect three contributions to the mass/energy density of the universe; these contributions

are estimated as 74%, 22% and 4% for dark energy, dark matter and ordinary matter respectively

(see figure 15).

Figure 15 Pie chart of the relative contributions of dark energy, dark matter and ordinary matter to the mass/energy

density of the universe

Note also that the observation of flat geometry fits very nicely with the hypothesis of inflation; as

we have seen, it is quite difficult to explain why the observable universe should exhibit this

geometry without some form of inflation in the first fractions of a second. We also saw in

chapter 6 that inflation predicts fluctuations in the cosmic microwave background of the right

order of magnitude. Indeed, the theory predicts that the spectrum of fluctuations will have a

shape that is very nearly independent of scale (characterized by a spectral index ns that is close to 1.0, to an accuracy of about 10%, but not exactly equal to it). It is exciting to note that the spectrum in figure 13 has exactly such a shape; the fit has been computed by taking a spectral index of ns = 0.95 +/- 0.0219.

Finally, we note that, for all the exciting successes of this chapter, the term Λ-CDM also

highlights the shortcomings of the model; we do not know what physical substance makes up

either dark matter or dark energy and hence it could be argued that we do not know what 96% of


the universe is made of! This puzzle will be discussed further in the next chapter, along with

other outstanding problems.


Chapter 8 Putting it all together

In this chapter, we review the modern version of the big bang theory, the Λ-CDM model. We review the outstanding puzzles of inflation, dark energy and the singularity and finally address the key question: is it true?

In the first part of this book, we encountered the four main planks of evidence for

the big bang model; the recession of the galaxies (Hubble’s law), the abundance of the lightest

elements, the distribution of the radio galaxies and the cosmic microwave background. In the

second part, we saw that advances in astronomy and detailed study of the background radiation

led to a new version of the model, a theory that suggests we live in an expanding universe

of flat geometry that contains dark energy, dark matter and ordinary matter.

Put together, a picture emerges of a universe that originated in an incredibly hot,

dense state with energy in the form of radiation and elementary particles.

The universe expanded and cooled, with nuclei of the lightest elements

forming after one second and atoms forming after a few hundred thousand years. The

latter process rendered the universe transparent to radiation, allowing it to

travel unimpeded for the first time; this radiation is observable today as the

cosmic microwave background. Evidence of the seeds of today’s galaxies is detectable as small variations in this background radiation, and calculations

suggest that natural fluctuations in density of the infant universe could have

given rise to these variations if an inflationary phase of the universe

occurred in the first fraction of an instant. This view is summarized in figure

16.


Quantitative parameters of the model can be extracted from the experimental data of

chapter 7. These data have been backed by several recent experiments, notably the Sloan Digital Sky Survey and the Planck satellite. The best fit is a universe that is 13.7 ± 0.1 billion years old and has a current expansion rate of 70.4 ± 1.4 km/s/Mpc. It is geometrically flat to an accuracy of 0.2%, with an energy density made up of 4.56 ± 0.16% of ordinary matter, 22.7 ± 1.4% of cold dark matter and 72.8 ± 1.6% of dark energy; the latter leads to a slight acceleration in the current expansion. Density fluctuations observable in the cosmic background radiation are well modeled by a spectral index of ns = 0.963 ± 0.012 predicted by the theory of

inflation.
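As a consistency check, the quoted age can be recovered from the expansion rate and density parameters. The short calculation below is only a sketch assuming the flat Λ-CDM expansion law; the published value comes from a full fit to the data.

```python
# Sketch: estimate the age of a flat Lambda-CDM universe from the
# best-fit parameters quoted above. Illustrative only.
import math

H0 = 70.4                        # expansion rate, km/s/Mpc
omega_m, omega_l = 0.272, 0.728  # matter and dark energy fractions

# Hubble time 1/H0, converted to billions of years
km_per_mpc = 3.0857e19
seconds_per_gyr = 3.156e16
hubble_time = km_per_mpc / H0 / seconds_per_gyr   # about 13.9 Gyr

# Age = (1/H0) * integral_0^1 da / (a * H(a)/H0), where in a flat
# universe H(a)/H0 = sqrt(omega_m / a^3 + omega_l).
def integrand(a):
    return 1.0 / math.sqrt(omega_m / a + omega_l * a * a)

n = 20000
h = 1.0 / n
# midpoint rule; the integrand vanishes smoothly as a -> 0
integral = sum(integrand((i + 0.5) * h) for i in range(n)) * h

age = hubble_time * integral
print(round(age, 2), "billion years")   # close to the quoted 13.7
```

The correction factor from the integral is very nearly 1 for these parameters: the early deceleration and the recent acceleration almost cancel, which is why the age comes out so close to the simple Hubble time 1/H0.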

Outstanding puzzles

The Λ-CDM big bang model fits the data exquisitely well and it is also self-consistent. Evidence

from different lines of enquiry all point to the same model, from the relative abundance of the

chemical elements to the masses of stars, from supernova measurements of the accelerated

expansion to estimates of the geometry of the universe from measurements of the microwave

background.

However, the model also features several outstanding puzzles. To a physicist, the

situation is rather like a jigsaw puzzle that is missing several crucial pieces; we have an idea

what the overall image will look like, but major gaps remain (and the completed puzzle may one

day turn out to be part of a much larger ensemble). The most outstanding of these gaps are the

nature of dark matter, the nature of dark energy, the nature of the inflationary field and the nature

of the singularity itself.

The puzzle of dark matter


We recall from chapter 6 that the concept of dark matter was first mooted by

Fritz Zwicky in the 1930s in order to explain the motions of galaxies within clusters. Since that time, the hypothesis of dark matter has been invoked by

astronomers to explain the behaviour of matter at almost all astronomical

scales, from the motion of stars to that of galaxies, from galaxy clusters to

galaxy halos. Additional evidence comes from gravitational lensing, a

phenomenon whereby multiple images of a light source are seen due to the

bending of light by unseen matter between the source and observer1.

Essentially, the astronomical hypothesis is that dark matter is an entity that neither emits nor

scatters light or other electromagnetic radiation (and so cannot be directly detected via optical or

radio astronomy), but is observable by its gravitational effect on ordinary matter2.

In the last chapter, we saw that there is an additional cosmological basis for the hypothesis of dark matter; measurements of inhomogeneities in the cosmic microwave

background suggest that about 22% of the total energy density of the

universe is in the form of matter that is not strongly interacting. Hence,

astronomy and cosmology provide indirect evidence for the existence of dark

matter - but is there any direct evidence?

An exciting breakthrough occurred in 2006, when astronomers observed the collision of two clusters of galaxies, an object now known as the Bullet Cluster. This dramatic effect was observed by the CHANDRA X-ray satellite observatory, launched by NASA in 1999. Each cluster appeared to comprise two separate components that behaved very differently during the collision. The stars of each cluster were not greatly affected and passed more or less straight through, if slowed slightly due to gravitational effects. (Gases in


each cluster slowed more due to their electromagnetic interaction). Most importantly, a large component of matter in each cluster (detected before and after by the gravitational lensing of

background objects) showed no interaction whatsoever – which is exactly what one would

expect of dark matter. This was exciting news indeed and similar effects have since been

observed in other galaxy collisions3. However, while such observations give strong evidence for

the existence of dark matter, they tell us little about the nature of the phenomenon.

Figure 15: In the merging of two galaxy clusters into the Bullet Cluster, each cluster is seen to comprise an interacting and a non-interacting component

Cosmologists have expected for many years that experiments in particle

physics should throw up candidates for dark matter. Obviously, certain

constraints apply; it must be weakly interacting matter (because it is not

seen by telescopes at any wavelength) and there must be enough of it to

account for 22% of the energy density of the universe. We also saw in

chapter 7 that it must be cold, i.e. travel at non-relativistic speeds. The

leading candidate for such matter is known by the acronym WIMPs, i.e.

weakly interacting massive particles. However, the identity of such particles

has remained elusive.


There are currently many experiments underway to detect possible candidate particles for dark

matter. These experiments are very difficult technically as WIMPs are expected to interact

extremely weakly with any detector. Hence, the experimenters must measure huge numbers of

interactions in the hope of finding a few rare WIMP events. The experiments take place in large

underground laboratories in order to shield the apparatus from incoming cosmic rays. Examples

include the CDMS experiment in the Soudan mine in Minnesota (US), the SNOLAB

underground laboratory at Sudbury in Canada, the DAMA/LIBRA experiment at the Gran Sasso

National Laboratory in Italy, the Boulby Underground Laboratory in the UK and the Deep

Underground Science and Engineering Laboratory in South Dakota (US).

Most of these experiments employ detectors at extremely low temperatures that measure the heat

produced when a particle hits an atom in a crystalline absorbing material, although some use liquid detectors that register collisions of dark matter particles with molecules of the liquid. One group, the

DAMA/LIBRA experiment at Gran Sasso, has detected an annual modulation that they suspect is due

to dark matter. However, this claim is controversial as it has not been reproduced in other

experiments4. At the time of writing, experiments such as CDMS (above) and the XENON100 experiment at Gran Sasso are thought to provide the most sensitive limits on WIMP detection; so far, their data have shown null results5.

The puzzle of dark energy

We have seen that the concept of dark energy was raised by the supernova measurements of the

accelerated expansion in 1998.1 Since then, further supernova studies and WMAP measurements

of perturbations in the CMB have added weight to the hypothesis, fixing the dark energy

contribution at about 73% of the total energy density of the universe. Dark energy is not


incompatible with the theory of inflation (it fits well with inflation’s prediction of a universe of

flat geometry) but the two concepts should not be confused – the former concerns a slight

acceleration in the expansion of the present universe over millions of years, while the latter

concerns an enormous, exponential expansion in the infant universe that lasted only a fraction of

a second.

The simplest explanation for dark energy is the hypothesis of a cosmological constant, first

mooted by Einstein and later retained by Lemaître. Mathematically speaking, such a factor arises

naturally within the framework of general relativity; as a constant of integration, it can take any value and there is no particular reason why it should be zero2. In terms of a physical

interpretation, we can follow Einstein by ascribing the constant to a natural tendency of space to

expand. This hypothesis concurs nicely with a prediction from modern quantum physics: that

particles and anti-particles can pop in and out of existence in the vacuum in accordance with

the Heisenberg uncertainty principle, leading to a repulsive energy that we measure as the cosmic

constant.

However, there are two major problems with this simple model of dark energy. In the first

instance, quantum calculations suggest that if such a vacuum energy exists, it should be many

orders of magnitude larger than our measurements of dark energy (by a factor of about 10¹⁵⁰ in

fact). This is quite a mismatch of theory and experiment and is known as the vacuum

catastrophe. It is of course possible that there is some as-yet unknown symmetry mechanism that

causes the various contributions to vacuum energy to cancel; however, it seems unlikely that such

a mechanism could lead to a value that is extremely close to zero, but not equal to it.


A second puzzle is the size of dark energy relative to dark matter; although the contribution of

dark energy to the total energy density is about twice as large as that of matter, the two

contributions are within an order of magnitude. But if dark energy is a cosmological constant, it

remains constant while the density of matter decreases over billions of years in an expanding

universe. Again, it seems something of a coincidence that the two contributions should be the

same sort of size in our own particular epoch.

In consequence, several alternative explanations for dark energy have been proposed. Possibly

the best-known alternative is the notion of quintessence. Quintessence is a hypothetical form of

dark energy that differs from the cosmological constant in that it is dynamic, i.e. it can change over

time. Furthermore, it is hypothesised that quintessence can be either attractive or repulsive

depending on the ratio of kinetic to potential energy in the universe. In this model of dark

energy, it is thought that a repulsive quintessence may have been triggered once a balance

between radiation and matter was established in the early universe. The phenomenon may have

been of negligible size at that epoch, but eventually grew to dominate the universe today. Note

that quintessence offers an explanation for dark energy that is outside the context of general

relativity, while the cosmological constant arises naturally within the framework of the general

theory.

To decide between a cosmological constant and quintessence as models for dark energy is

straightforward in principle, as the former is constant over time while the latter varies. Hence a

new generation of sophisticated experiments will attempt to measure the evolution of dark

energy over time, by measuring the expansion rate of the universe at earlier and earlier epochs

with the use of the most distant supernovae of all20.


Finally, we note an interesting connection between dark energy and inflation; the observation of

an accelerated expansion in the current epoch does in principle support the idea of different

expansion rates at different epochs. Some cosmologists go much further than this, speculating

that the current acceleration of the universe may in fact be a faint echo of inflation – a leftover

from the exponential expansion of the infant universe. However, the recent discovery that the

expansion was decelerating in the epoch before the current acceleration casts doubt on this

hypothesis3.

The puzzle of inflation

We have seen in chapter six how the postulate of an inflationary phase in the infant universe

offers a neat solution to the horizon and flatness problems. In addition, it provides a mechanism

for the formation of galaxy structure and may even shed some light on the singularity problem.

We have also seen that the theory is supported by modern measurements, from precision

measurements of the cosmic microwave background to the detection of the acceleration of the

expansion.

However, while the general idea of inflation has become part of the standard model of

cosmology, the specific nature of the inflationary field remains an outstanding problem. It was

originally proposed that inflation was driven by a Higgs-like quantum field; it later emerged

that such a field will not do. In recent years, a great many alternative models of inflation have been proposed; however, to date no model has been uniquely identified as correct.

It is hard to see how inflationary cosmology can make concrete progress until we know more

about the nature of the field. From the point of view of experiment, it is a matter of some concern

that no hint of the existence of a quantum field with the necessary characteristics has ever been


observed in the world of particle physics. In sum, one could argue that it is unfortunate that the

whole edifice of the big bang rests on a quantum field that has never been detected.

In addition to this puzzle, one has the philosophical problems associated with inflation. As we

saw in chapter six, the theory seems to lead inevitably to the hypothesis of chaotic inflation and

the multiverse. Many physicists are concerned by the extravagant scenario of the multiverse and

uncomfortable with the notion of the existence of worlds that can never be measured in principle.

Of course, these possibilities may one day be ruled out, but for the moment, they are an

unattractive feature of the theory of inflation.

The problem of the singularity

Without doubt, the greatest challenge of all is the big bang itself. We have noted before that the

name ‘big bang’ is a misnomer as the model tells us nothing about the bang itself. The general

theory of relativity provides a successful framework for a description of the evolution of the universe from the first fractions of a second to the current epoch, but not of its origin. Indeed, at first sight, relativity implies a universe that begins in a singularity, i.e., in a point of zero

volume, infinite density and infinite temperature (see figure 4). Physicists do not take this

prediction literally because mathematical singularities are not usually a reliable description of the

physical world; more importantly, we do not expect general relativity to give a reliable

description of the micro-world. We have strong evidence the universe was once smaller than an

atom, but phenomena on the atomic scale are described by the strange laws of quantum physics,

a micro-world in which the rules are very different from those of ordinary physics. Hence, a

successful analysis of the bang will necessitate an understanding of the behaviour of gravity on

quantum scales. Such a theory - a quantum theory of gravity - has not so far been forthcoming.

This problem is the reason one talks about a ‘big bang model’ rather than a ‘big bang theory’; the


model is clearly incomplete.

To describe the behaviour of gravity on quantum scales will require a synthesis of the two great

pillars of modern physics, general relativity and quantum physics. This is a major topic of

research and has led to a convergence of two very different fields, cosmology (the study of the

universe on the largest scales) and particle physics (the study of the world of the sub-atomic).

Some progress has been made, particularly in the context of string theory; in this theory, the

elementary particles are described as excitations of minute strings. More recently, this idea has

been broadened to M-theory, where the strings are replaced by two-dimensional membranes.

An intriguing aspect of the theory of inflation is that it may speak to the problem of the singularity. This idea arises from a consideration of the Heisenberg Uncertainty Principle, a fundamental

tenet of quantum physics. According to the uncertainty principle, the tiniest particles of matter

can in principle appear out of the vacuum if they disappear again quickly enough. It seems they

can borrow energy to exist, as long as that existence is extremely short. (The concept arises from

a fundamental indeterminacy in quantum entities)20. This phenomenon has long been known and

evidence of the existence of such virtual particles and anti-particles is routinely detected

indirectly in accelerator experiments19.
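The energy-time trade-off behind virtual particles can be illustrated with a rough back-of-envelope calculation (my own illustrative numbers, not the author's): a virtual electron-positron pair must borrow roughly twice the electron rest energy, and the uncertainty principle allows it to exist only for about hbar divided by that borrowed energy.

```python
# Rough illustration (assumed figures, not from the text): how long a virtual
# electron-positron pair may exist, from Delta_E * Delta_t ~ hbar.

hbar = 1.055e-34        # reduced Planck constant, J s
m_e = 9.109e-31         # electron mass, kg
c = 2.998e8             # speed of light, m/s

delta_E = 2 * m_e * c ** 2       # energy borrowed to create the pair, J
delta_t = hbar / delta_E         # allowed lifetime, s
print(f"{delta_t:.1e} s")        # around 6e-22 seconds
```

The minute lifetime shows why such particles are only ever detected indirectly: they vanish long before any conceivable direct measurement.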

Now inflation is hypothesised to occur in a tiny patch of space over a timespan of fractions of a

second; hence one expects quantum processes to be relevant. This raises an intriguing possibility:

if virtual particles briefly appear in a minute region of space and that region undergoes inflation,

could they be blown up to become our entire universe? This scenario may seem rather

speculative but it is taken seriously by many theorists because it offers the first glimpse of a


possible explanation for the hardest question of all – how does something arise from nothing?

The concept of a universe ex nihilo became extremely well-known when it was popularized by

the cosmologist Stephen Hawking22. We shall return to this question in the epilogue.

The cyclic universe

One intriguing idea that has emerged from the string theory approach to cosmology is the

possibility of multiple bangs, i.e., the cyclic universe. In this scenario, the Universe undergoes

an endless sequence of cycles in which it contracts in a big crunch and re-emerges in an

expanding big bang. Hence the big bang is not the beginning of time; rather, it is a bridge from a

pre-existing era. This cyclic model has recently been expounded by the veteran cosmologists Paul Steinhardt and Neil Turok.

In the cyclic universe, each cycle proceeds through a period of radiation and matter-domination

consistent with standard cosmology, producing the observed abundance of elements, the cosmic

microwave background and the recession of the galaxies. For the next trillion years or more, the

Universe then undergoes a period of slow cosmic acceleration that ultimately empties the

Universe and triggers the events that lead to contraction and a big crunch. The transition from

big crunch to big bang automatically replenishes the Universe by creating new matter and

radiation.

What is the proposed mechanism for this process? In the cyclic model, the Universe consists of

two branes (surfaces) bounding an extra dimension and the big bang corresponds to a collision of

the two branes. Each bang is a transition from big crunch to big bang due to the collapse, bounce

and re-expansion of the extra dimension. Instead of the collapse of all dimensions to a singularity


predicted by relativity, the cyclic model suggests that one higher dimension shrinks; our usual

space dimensions remain infinite and time runs continuously.

At first sight, one might argue that the cyclic model simply complicates an old problem by

replacing one big bang with a multitude. However, the point is that the model sidesteps the

problem of the singularity because there is no longer any need to postulate an explosive

beginning to space and time. Instead, space and time may have always existed in an endless

cycle of expansion and rebirth. In addition, the temperature and density of the universe do not

become infinite at any point in the cycle.

One intriguing aspect of the model is that it offers an alternative to inflation. In the cyclic

universe, the flatness and homogeneity of the universe are not set by a hyper-expansion in the

initial moments, but by conditions existing before the bang. Similarly, the seeds for galaxy

formation were created by instabilities arising as the Universe was collapsing towards a big

crunch prior to our big bang. Best of all, the phenomenon of dark energy is crucial to the model,

rather than an unexpected add-on; it is dark energy that drives the universe to the end of each

cycle once matter has formed.

It is important to note that the cyclic universe does not replace the big bang model; instead, it postulates that the big bang is part of a grander scheme. In addition, it offers an intriguing solution to certain troublesome aspects of the big bang model (the nature of inflation, dark

energy and the singularity). Some physicists are quite excited about the cyclic model while most

are cautious or sceptical. Certainly, it is interesting that the model recovers all of the successful predictions of the big bang/inflationary theory, while addressing several questions that the big

bang/inflationary model does not address. However, the mechanism of the cyclic model relies


heavily on ideas in modern string theory that remain untested. For this reason, it will probably

remain a speculative idea for most physicists for many years. Indeed, one of the great challenges

of modern physics is how fundamental ideas in string theory may be tested experimentally.

The above are all respectable theories; however, it should be borne in mind that they are highly

speculative. So far, there is no evidence that more than one bang occurred in our universe, while there is a great deal of evidence that at least one bang did occur2. Most physicists (including this one) take the view that slowly constructing an accurate picture of the bang we know about is enough to be going on with for now.


Notes

Chapter 5

1. This is something of a simplification, as it assumes the cosmological constant is zero.

2. The astronomer Allan Sandage wrote a famous paper titled 'Cosmology: a search for two numbers', indicative of this approach to observational cosmology.

3. There is a great description of this work in the book 'The Cosmological Distance Ladder' (Rowan-Robinson, 1985).

4. Sandage estimated a value of 50 kilometers per second per megaparsec (km/s/Mpc) while de Vaucouleurs obtained a value of 90 km/s/Mpc; the Hubble Space Telescope (HST) provided a final answer of 70 km/s/Mpc. This story is very well told in the book 'Lonely Hearts of the Cosmos' by Dennis Overbye.

5. In other words Ω = ρ/ρc, where ρ is the actual density of matter (a variable) and ρc is the critical density (a constant). The critical density is related to the Hubble constant by the equation ρc = 3H0²/8πG, where G is the gravitational constant.

6. For example, the mass of our sun can be estimated from measurements of the earth's orbit.

7. There is no law that dictates that all matter be visible, i.e., interact with the electromagnetic force. It should be noted that one alternative is that our laws of gravity are wrong, a theory known as Modified Newtonian Dynamics or MOND. However, this theory has lost support in recent years as modern observations support the existence of dark matter at every scale (see Rowan-Robinson, 1985).

8. The nature of the particles that make up dark matter is a vibrant field of study today. A good review can be found in .........

9. There is a good description of this method in 'The Cosmic Century' (Longair, 2006).

10. Ibid.
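As a quick illustrative check of the critical-density formula in note 5 (my own numbers, assuming the HST value H0 = 70 km/s/Mpc and standard values for G and the megaparsec):

```python
import math

# Evaluate rho_c = 3 * H0^2 / (8 * pi * G) for H0 = 70 km/s/Mpc.
# The constants below are assumed standard values, not taken from the text.

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.086e22                # one megaparsec in metres
H0 = 70 * 1000 / Mpc          # 70 km/s/Mpc converted to 1/s

rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"{rho_c:.2e} kg/m^3")  # roughly 9e-27 kg/m^3
```

The result, around 9 x 10⁻²⁷ kg/m³, corresponds to only a few hydrogen atoms per cubic metre, which shows just how empty a critical-density universe actually is.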

Chapter 6

1. This observation is usually attributed to the data from satellite measurements; in fact the homogeneity of the universe was observed empirically long before this.

2. The problem was first pointed out by Robert Dicke (Dicke, 1970). The calculations show that this scenario results if Ω differs from unity by even 1 part in 10¹⁵ when the universe is 1 nanosecond old.

3. An exponential growth is extremely fast; it is written as eⁿ, where e = 2.718 and n can be any number. In Guth's proposal, the universe is proposed to have grown by a factor of about 10⁵⁰ during a timespan of about 10⁻³⁰ s.

4. A very good history of the development of the theory can be found in Kolb and Turner (1994).

5. Guth's first major paper on the subject was specifically titled 'Inflationary Universe: A Possible Solution to the Horizon and Flatness Problems' (Guth, 1981).

6. This is the key point: the starting point is vanishingly small (about 10⁻²⁸ m).

7. In analogy with Hawking's no-hair theorem for black holes.

8. Quantum tunneling is the process whereby quantum particles penetrate a barrier that should be too high for them to cross, according to classical physics.

9. Linde had independently developed his own model of inflation, so he was quick to appreciate the problem. As a tongue-in-cheek tribute to Guth he titled his paper 'A New Inflationary Universe Scenario: A Possible Solution of the Horizon, Flatness, Isotropy, and Primordial Monopole Problems' (Linde, 1982).

10. Technically speaking, new inflation posits a second-order phase transition, rather than the first-order phase transition of the Guth model. There is an excellent description of the difference between new and old inflation in chapters 11 and 12 of Guth's book (Guth, 1997).

11. The fuzziness is specified by the Heisenberg Uncertainty Principle, a theorem that predicts a fundamental ambiguity in the properties of quantum entities. For example, a particle cannot possess an exact position and momentum simultaneously; the more definite its position, the less defined its momentum. (It is not a problem of measurement, as is often stated.)

12. The occasion was the 1982 Nuffield Workshop on Cosmology in Cambridge. There is a great description of the process of discovering this result in chapter 13 of Guth's book (Guth, 1997).

13. This is a slight simplification; in fact the work showed that inflation could give rise to the shape of the CMB spectrum. The amplitude of the perturbations depended on the nature of the inflationary field, which was not known.

14. The problem Guth was addressing is that Grand Unified Theories of particle physics predict the existence of magnetic monopoles, none of which have ever been observed; an observable universe that is only a fraction of a much larger universe could explain the low density of monopoles. It is noteworthy that his model is firmly grounded in ideas of particle physics, rather than cosmology.

Chapter 7

1. Given the low value of mass density from observation, some theorists wondered about an additional contribution to the energy density, as we shall see.

2. There is a very nice overview of this period in Marcus Chown's book (Chown, 1993).

3. Including the tragedy of the Challenger disaster. There is a very good overview of the buildup to the COBE experiment in chapter 6 of the book 'Wrinkles in Time' (Smoot, 1994).

4. The DMR instrument provided sensitive measurements of the background radiation from two different directions simultaneously.

5. For example, a background of numerous stellar and gaseous emissions could not produce such a spectrum.

6. Smoot's announcement of the results was quite controversial, as explained in the book 'The Very First Light' (Mather and Boslough, 1996).

7. One of the telescope mirrors was found to be flawed and had to be corrected with optics installed by astronauts.

8. It was later realised that this mismatch occurred because the calculation of the age of globular clusters assumed a matter-filled universe with no cosmological constant.

9. Type 1a supernovae occur when a white dwarf star accretes matter until it undergoes a runaway thermonuclear explosion. The process leads to a luminous source that is extremely uniform and ideal for the calibration of astronomical distance.

10. The fascinating story of the discovery of the accelerating universe is told by one of the participants in the book 'The Extravagant Universe' (Kirshner, 2002).

11. It is not known who first coined the term dark energy, but it is probably named in analogy with dark matter. It is an unfortunate term as students often confuse the two, and the confusion is heightened when one recalls the mass-energy equivalence of relativity.

12. The data and fitting of figure 12 are taken from Perlmutter's 1999 publication.

13. This point is often overlooked. If the acceleration was a systematic error, it would affect all the supernova data. In fact, measurements of supernovae at the largest redshifts suggest a decelerated expansion; thus the universe underwent a change from a slowing expansion to an accelerated one.

14. See the Sloan survey.

15. See note 8.

16. See for example Carroll, Press and Turner (1992), Krauss and Turner (1995) or Krauss (1998).

17. A measurement of the angle covered by an object of known size at a known distance gives information about the surrounding geometry. The maximum size of fluctuations in the CMB is 300,000 light years (because recombination occurs at 300,000 years) and they are viewed at a distance of about 14 billion light years; hence a measurement of the angle gives an estimate of the geometry of the universe.

18. Actually, one doesn't need to assume a value for ΩM. Since the balloon measurements give an estimate of the sum ΩM + ΩΛ, and the supernova measurements give an estimate of the difference ΩΛ − ΩM, the two equations combined give independent estimates of each.

19. Figures 13 and 14 are provided by David Kaiser of MIT. There is a very nice overview of fitting the predictions of inflation to this data in Guth and Kaiser (2005).
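The combination described in note 18 is just a pair of simultaneous linear equations. A toy sketch, with illustrative numbers of my own choosing:

```python
# Sketch of note 18 (illustrative numbers): the CMB geometry measurements
# constrain the sum Omega_M + Omega_Lambda, while the supernova data
# constrain the difference Omega_Lambda - Omega_M; solving the two linear
# equations together yields each density parameter separately.

def solve_omegas(total, difference):
    """Recover (Omega_M, Omega_Lambda) from their sum and difference."""
    omega_lambda = (total + difference) / 2
    omega_matter = (total - difference) / 2
    return omega_matter, omega_lambda

# A flat universe (sum = 1.0) with an assumed measured difference of 0.4:
om, ol = solve_omegas(1.0, 0.4)
print(round(om, 2), round(ol, 2))  # 0.3 0.7
```

With these example inputs the solution lands on the familiar concordance values of roughly 0.3 for matter and 0.7 for dark energy.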


Chapter 8

20. Gravitational lensing of distant objects by observable matter (massive stars) is a well-established phenomenon; thus multiple images of a distant source can be used to infer the presence of dark matter between source and observer.

21. A very good overview of the astronomical evidence for dark matter can be found in Rowan-Robinson's book.

22. In essence, each galaxy behaves as a couple consisting of an aloof person and his friendly partner entering a party; they are soon separated as the ordinary matter stops to say hello, while the dark matter keeps right on going.

23. See - for a good overview of the search for the particles that make up dark matter.

24. A good overview of this controversy can be found in.....

25. See http://arxiv.org/abs/1011.2482 and http://arxiv.org/PS_cache/arxiv/pdf/1005/1005.0380v3.pdf. A good overview of each experiment can be found on their webpages.

26. Paper on de-acceleration

27. There is a good discussion of the concept of quintessence in Steinhardt et al. (1999).

1. Einstein himself often warned of the dangers of extrapolating the laws of general relativity to atomic dimensions.

2. One of the great problems of modern science journalism is that there is often very little distinction between theories that are supported by a great deal of evidence, and theories that are not.


Bibliography

Alpher RA, Bethe H and Gamow G (1948) ‘The origin of the chemical elements’, Phys Rev 73 (7)

Alpher RA, Follin JW and Herman RC (1953) Phys Rev 92, 1347

Alpher RA and Herman RC (1948) Nature 162

Bertotti B., Balbinot R., Bergia S. and Messina A. (1990) Modern Cosmology in Retrospect CUP

Carroll, Press and Turner (1992) Annual Review of Astronomy and Astrophysics 30, p499

Chown, M (1993) The afterglow of creation: from the fireball to the discovery of cosmic ripples (Arrow Books)

Dicke, R. H. (1970). Gravitation and the Universe. American Philosophical Society.

Einstein,A (1905) On the electrodynamics of moving bodies Originally published in German in Annalen der Physik

17 (10) 1905. See http://www.fourmilab.ch/etexts/einstein/specrel/www/ for a translation on the web.

Einstein,A (1916) The foundation of the general theory of relativity, Annalen der Physik vol 354 (7) p769

Einstein,A (1917) Cosmological considerations of the general theory of relativity Sitzungsberichte der Königliche

Preussische Akademie der Wissenschaften (Berlin) 142-52

Everitt, C.W.F. et al. (2011) "Gravity Probe B: Final Results of a Space Experiment to Test General Relativity", Physical Review Letters, May 2011

Farrell J (2005) ‘The Day Without Yesterday; Lemaître, Einstein and the birth of modern cosmology’ Thunder’s

Mouth Press, NY 2005

Friedman, A (1922) On the curvature of space, Zeitschrift für Physik vol 10, p337

Friedman, A (1924) On the possibility of a universe with constant negative curvature of space, Zeitschrift für Physik

vol 21, p326

Galileo G (1610) Sidereus Nuncius (Baglioni)

Guth A, (1981) Inflationary Universe; A Possible Solution to the Horizon and Flatness Problems

Phys Rev D 23 (2)

Guth A (1997) The inflationary universe; a quest for a new theory of cosmic origins (Jonathan Cape)

Guth A and Kaiser D (2005) Science, vol 307 p 884

Harrison, E (2001) Cosmology (Cambridge University Press)


Hawking, SW, Penrose, R (1970) "The Singularities of Gravitational Collapse and Cosmology". Proceedings of

the Royal Society A 314 (1519)

Hawking SW (1988) ‘A Brief History of Time; From the Big Bang to Black Holes’ Bantam Books

Hubble, Edwin (1929) A relation between distance and radial velocity among extra-galactic nebulae PNAS vol 15,

3 1929

Hubble, Edwin and Humason ML (1931) The velocity-distance relation among extra-galactic nebulae APJ vol 74,

p43

Kragh, H (1996) 'Cosmology and Controversy; the historical development of two theories of the universe' (Princeton University Press)

Kennefick, D(2009), "Testing relativity from the 1919 eclipse- a question of bias," Physics Today, March 2009, pp.

37–42.

Kirshner, R 2002 ‘The Extravagant Universe; Exploding stars, dark energy and the accelerating cosmos’ (Princeton

University press)

Kolb E and Turner M (1994) The Early Universe (Westview Press)

Krauss L and Turner M (1995) Gen.Rel.Grav. 27 p 1137

Krauss L (1998) Astrophysical Journal 501, p461

Kuhn, Thomas (1962) The Structure of Scientific Revolutions (University of Chicago Press)

Latour, Bruno and Woolgar, Steve (1986) Laboratory Life; The Construction of Scientific Facts (Princeton University Press)

Lemaître, G (1927) A homogeneous universe of constant mass and increasing radius, accounting for the radial

velocity of extra-galactic nebulae, Annales de la societe Scientifique de Bruxelles, vol A47 p 29. Translated and

reprinted as ‘The Expanding Universe’ in Monthly Notices of the Royal Astronomical Society vol 91 (1931) p 490

Lemaître, G (1931,a) ‘The beginning of the world from the point of view of quantum theory’ Nature, March 21, p 44

Lemaître, G (1931,b) ‘L’expansion de l’espace’, Revue Questions Scientifiques 17, p391

McMullin, Ernan (2005) 'Galileo and the Church' (Univ Notre Dame Press)

Linde A (1982) A New Inflationary Universe Scenario; A Possible Solution of the Horizon, Flatness, Isotropy, and

Primordial Monopole Problems Physics Letters B, Volume 108, Issue 6

Longair, M (2006) The Cosmic Century: A History of Astrophysics and Cosmology (CUP)


Mather JC and Boslough J (1996) 'The Very First Light' (Penguin)

Overbye, D (1999) Lonely Hearts of the Cosmos: The Story of the Scientific Quest for the Secret of the Universe ,

Back Bay Books

Peebles (1972) Physical Cosmology (Princeton University Press)

Peebles, P. J. E. (1993). Principles of Physical Cosmology. Princeton University Press

Perlmutter S, Aldering G, Goldhaber G et al (1999) Measurements of omega and lambda from 42 high-redshift supernovae, Astrophysical Journal vol 517 (2) p565

Sandage, A (1970) ‘Cosmology: A search for two numbers’ Physics Today, February 1970

Smith, RW (1982) The Expanding Universe; Astronomy’s Great Debate (Cambridge University Press)

Smoot G (1994) Wrinkles in Time (Harper Perennial)

Weinberg, S (1993) ‘The first three minutes; a modern view of the origin of the universe’ (Basic Books)

Zlatev, I.; Wang, L.; Steinhardt, P. (1999). "Quintessence, Cosmic Coincidence, and the Cosmological Constant".

Physical Review Letters 82 (5): 896–899. Or arXiv: astro-ph/9807002
