1 Introduction
1.1 A brief history
The last century saw an expansion in our view of the world from a static, Galaxy-sized Universe, whose constituents were stars and “nebulae” of unknown but possibly stellar origin, to the view that the observable Universe is in a state of expansion from an initial singularity over ten billion years ago, and contains approximately 100 billion galaxies. This paradigm shift was summarised in a famous debate between Shapley and Curtis in 1920; summaries of the views of each protagonist can be found in [43] and [195].
The historical background to this change in world view has been extensively discussed and whole books have been devoted to the subject of distance measurement in astronomy [176]. At the heart of the change was the conclusive proof that what we now know as external galaxies lay at huge distances, much greater than those between objects in our own Galaxy. The earliest such distance determinations included those of the galaxies NGC 6822 [93], M33 [94] and M31 [96], by Edwin Hubble.
As well as determining distances, Hubble also considered redshifts of spectral lines in galaxy spectra
which had previously been measured by Slipher in a series of papers [197, 198]. If a spectral
line of emitted wavelength $\lambda_{\rm em}$ is observed at a wavelength $\lambda_{\rm obs}$, the redshift $z$ is defined as
$$z = \frac{\lambda_{\rm obs} - \lambda_{\rm em}}{\lambda_{\rm em}}.$$
For nearby objects the redshift corresponds to a recession velocity $v$, given by a simple Doppler formula, $v = cz$. Hubble found that the recession velocity increases linearly with distance $d$, $v = H_0 d$; the constant of proportionality $H_0$ is known as the Hubble constant.
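As a concrete illustration of these relations, the short Python sketch below converts an observed wavelength shift into a redshift, a recession velocity via $v = cz$, and a distance via the Hubble law. The observed wavelength and the value $H_0$ = 70 km s⁻¹ Mpc⁻¹ are purely illustrative assumptions, not measurements discussed in this review.

```python
# Minimal sketch: redshift -> recession velocity -> distance via the Hubble law.
# The H-alpha rest wavelength is standard; the observed wavelength and the
# value H0 = 70 km/s/Mpc are illustrative assumptions, not results from this review.

C_KM_S = 299792.458          # speed of light [km/s]
H0 = 70.0                    # assumed Hubble constant [km/s/Mpc]

lam_emitted = 6562.8         # H-alpha rest wavelength [Angstrom]
lam_observed = 6650.0        # hypothetical observed wavelength [Angstrom]

z = (lam_observed - lam_emitted) / lam_emitted   # definition of redshift
v = C_KM_S * z                                   # low-z Doppler approximation, v = cz
d = v / H0                                       # Hubble law, v = H0 * d  -> d in Mpc

print(f"z = {z:.4f}, v = {v:.0f} km/s, d = {d:.0f} Mpc")
```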
Recession velocities are very easy to measure; all we need is an object with an emission line and a
spectrograph. Distances are very difficult. This is because in order to measure a distance, we need a
standard candle (an object whose luminosity is known) or a standard ruler (an object whose length is
known), and we then use apparent brightness or angular size to work out the distance. Good standard
candles and standard rulers are in short supply because most such objects require that we understand
their astrophysics well enough to work out what their luminosity or size actually is. Neither
stars nor galaxies by themselves remotely approach the uniformity needed; even when selected
by other, easily measurable properties such as colour, they range over orders of magnitude in
luminosity and size for reasons that are astrophysically interesting but frustrating for distance
measurement. The ideal object, in fact, is one which involves as little astrophysics as
possible.
Hubble originally used a class of stars known as Cepheid variables for his distance determinations. These
are giant yellow stars, the best known of which is α UMi, or Polaris. In most normal stars, a self-regulating
mechanism exists in which any tendency for the star to expand or contract is quickly damped out. In a small
range of temperature on the Hertzsprung–Russell (H-R) diagram, around 7000 – 8000 K, particularly at high
luminosity,2
this does not happen and pulsations occur. These pulsations, the defining property of Cepheids, have a
characteristic form, a steep rise followed by a gradual fall. They also have a period which is tightly
correlated with luminosity, because brighter stars are larger, and therefore take longer to pulsate. The
period-luminosity relationship was discovered by Leavitt [123] by studying a sample of Cepheid
variables in the Large Magellanic Cloud (LMC). Because these stars were known to be all at
the same distance, the correlation of their apparent magnitudes with period implied the
P-L relationship.
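To make the use of the period–luminosity relation as a standard candle concrete, the sketch below converts a hypothetical Cepheid period and apparent magnitude into a distance via the distance modulus. The P–L coefficients used here are illustrative placeholders only; actual calibrations are part of the Cepheid-based distance ladder mentioned later in this section.

```python
import math

# Sketch of using the Cepheid period-luminosity relation as a standard candle.
# The P-L coefficients below (M_V = a*log10(P/days) + b) and the apparent
# magnitude are illustrative placeholders, not a calibration from this review.

a, b = -2.8, -1.4            # assumed slope and zero point of the P-L relation
period_days = 30.0           # observed pulsation period (hypothetical)
m_apparent = 24.0            # observed mean apparent magnitude (hypothetical)

M_absolute = a * math.log10(period_days) + b      # luminosity (absolute magnitude) from the period
mu = m_apparent - M_absolute                      # distance modulus, mu = m - M
d_parsec = 10 ** (mu / 5 + 1)                     # from mu = 5*log10(d / 10 pc)

print(f"M = {M_absolute:.2f}, mu = {mu:.2f}, d = {d_parsec / 1e6:.2f} Mpc")
```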
The Hubble constant was originally measured as approximately 500 km s⁻¹ Mpc⁻¹ [95], and its subsequent
history was a more-or-less uniform revision downwards. In the early days this was caused by
bias3
in the original samples [12], confusion between bright stars and H ii regions
in the original samples [97, 185] and differences between type I and II
Cepheids4
[7]. In the second half of the last century, the subject was dominated by a lengthy dispute between investigators
favouring values around 50 km s⁻¹ Mpc⁻¹ and those preferring higher values of around 100 km s⁻¹ Mpc⁻¹.
Most astronomers would now bet large amounts of money on the true value lying between these extremes,
and this review is an attempt to explain why, and also to evaluate the evidence for the best-guess
current value. It is not an attempt to review the global history of $H_0$ determinations, as this has been
done many times, often by the original protagonists or their close collaborators. For an overall review of
this process see, for example, [223] and [210]. Compilations of data and analysis of them are
given by Huchra (
http://cfa-www.harvard.edu/~huchra/hubble), and Gott ([77], updated
by [35]).5
Further reviews of the subject, with various emphases and approaches, are given by [212, 68].
In summary, the ideal object for measuring the Hubble constant:
- Has a property which allows it to be treated as either a standard candle or a standard ruler
- Can be used independently of other calibrations (i.e., in a one-step process)
- Lies at a large enough distance (a few tens of Mpc or greater) that peculiar velocities are small compared to the recession velocity at that distance
- Involves as little astrophysics as possible, so that the distance determination does not depend on internal properties of the object
- Provides the Hubble constant independently of other cosmological parameters.
Many different methods are discussed in this review. We begin with one-step methods, and in particular
with the use of megamasers in external galaxies – arguably the only method which satisfies all the above
criteria. Two other one-step methods, gravitational lensing and Sunyaev–Zel’dovich measurements, which
have significant contaminating astrophysical effects, are also discussed. The review then discusses two other
programmes: first, the Cepheid-based distance ladders, where the astrophysics is probably now well
understood after decades of effort, but which are not one-step processes; and second, information from the
CMB, an era where astrophysics is in the linear regime and therefore simpler, but where $H_0$ is not
determined independently of other cosmological parameters in a single experiment, without further
assumptions.
1.2 A little cosmology
The expanding Universe is a consequence, although not the only possible consequence, of general relativity coupled with the assumption that space is homogeneous (that is, it has the same average density of matter at all points at a given time) and isotropic (the same in all directions). In 1922, Friedman [72] showed that given that assumption, we can use the Einstein field equations of general relativity to write down the dynamics of the Universe using the following two equations, now known as the Friedman equations:
$$\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G \rho}{3} - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3},$$
$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}.$$
Here $a = a(t)$ is the scale factor of the Universe, related to redshift by the ratio $(1 + z) = a_0/a$, where $a_0$ is the scale factor at the present epoch. $\Lambda$ is the cosmological constant, and $k$ is the curvature parameter, which takes the value $-1$, $0$ or $+1$ according to whether the Universe is open, spatially flat or closed. $\rho$ and $p$ are the density and pressure of the contents of the Universe, related by an equation of state of the form $p = w\rho c^2$; the behaviour of each component as the scale factor $a$ evolves depends on its value of $w$.
At any given time, we can define a Hubble parameter
$$H(t) = \frac{\dot a}{a},$$
which is obviously related to the Hubble constant, because it is the ratio of an increase in scale factor to the scale factor itself. In fact, the Hubble constant $H_0$ is simply the value of $H$ at the current time.
If $\Lambda = 0$, we can derive the kinematics of the Universe quite simply from the first Friedman equation.
For a spatially flat Universe $k = 0$, and we therefore have
$$\rho_{\rm c} = \frac{3H^2}{8\pi G},$$
where $\rho_{\rm c}$ is known as the critical density. Universes whose density is below this critical value are open ($k < 0$) and will expand forever ($\dot a > 0$ at all times); those whose density is above it are closed ($k > 0$), so that the expansion eventually halts ($\dot a = 0$) and reverses, $\dot a$ becoming negative as the Universe recollapses.
For the global history of the Universe in models with a cosmological constant, however, we need to
consider the $\Lambda$ term as providing an effective acceleration. If the cosmological constant is
positive, the Universe is almost bound to expand forever, unless the matter density is very much
greater than the energy density in the cosmological constant and can collapse the Universe before the
acceleration takes over. (A negative cosmological constant will always cause recollapse, but is
not part of any currently likely world model). Carroll [34] provides further discussion of this
point.
We can also introduce dimensionless symbols for the energy densities in the cosmological constant at
the current time, $\Omega_\Lambda \equiv \Lambda c^2/(3H_0^2)$, and in “curvature energy”, $\Omega_k \equiv -kc^2/(a_0^2 H_0^2)$. By rearranging the first
Friedman equation we obtain
$$\frac{H^2}{H_0^2} = \frac{8\pi G\rho}{3H_0^2} + \Omega_k\left(\frac{a_0}{a}\right)^2 + \Omega_\Lambda.$$
The density in a particular component of the Universe $X$, as a fraction of the critical density at the present epoch, can be
written as
$$\frac{\rho_X}{\rho_{\rm c}} = \Omega_X\, a^{\alpha},$$
where $\Omega_X$ is the dimensionless density parameter of that component today, the scale factor is normalised so that $a_0 = 1$, and the exponent $\alpha$ describes how the density of that component dilutes as the Universe expands. It is related to the equation-of-state parameter $w$ by $\alpha = -3(1+w)$: ordinary matter has $w = 0$ and $\alpha = -3$, radiation has $w = \frac{1}{3}$ and $\alpha = -4$, and a cosmological constant $\Lambda$ has $w = -1$ and hence $\alpha = 0$, so that its energy density does not dilute at all. More general “dark energy” components need not have $w$ exactly equal to $-1$; any component with $w < -\frac{1}{3}$ causes the expansion to accelerate, and one with $w < -1$ produces an acceleration which grows without limit. If the total density equals the critical density, the Universe is spatially flat and $\Omega_k = 0$; otherwise $\Omega_k$ must be retained as an extra term.
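As a quick check of these scalings, the snippet below evaluates the factor $a^{-3(1+w)}$ for matter, radiation and a cosmological constant at a redshift of 1, using the standard $w$ values quoted above.

```python
# Density of each component at a = 0.5 (i.e. z = 1, with a0 = 1) relative to today,
# using rho proportional to a^alpha with alpha = -3(1 + w).
components = {"matter": 0.0, "radiation": 1.0 / 3.0, "cosmological constant": -1.0}
a = 0.5                                    # scale factor at z = 1
for name, w in components.items():
    alpha = -3.0 * (1.0 + w)
    print(f"{name}: rho(a) / rho(now) = {a ** alpha:.1f}")
```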
We finally obtain an equation for the variation of the Hubble parameter with time in terms of the Hubble constant (see, e.g., [155]),
$$H^2(z) = H_0^2\left[\Omega_\Lambda + \Omega_m(1+z)^3 + \Omega_r(1+z)^4 + \Omega_k(1+z)^2\right],$$
where $\Omega_r$ is the current fractional density in radiation and $\Omega_m$ that in matter.
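The dependence of the Hubble parameter on redshift is straightforward to evaluate numerically; the sketch below implements the expression above for an assumed, purely illustrative set of density parameters.

```python
import math

# H(z) = H0 * sqrt(Om_L + Om_m(1+z)^3 + Om_r(1+z)^4 + Om_k(1+z)^2).
# All parameter values are illustrative assumptions, not results from this review.

H0 = 70.0                                  # [km/s/Mpc], assumed
Om_m, Om_r, Om_L = 0.3, 8.5e-5, 0.7        # assumed matter, radiation, Lambda densities
Om_k = 1.0 - Om_m - Om_r - Om_L            # curvature term from the closure relation

def hubble_parameter(z):
    """Hubble parameter at redshift z, in km/s/Mpc."""
    e2 = (Om_L
          + Om_m * (1 + z) ** 3
          + Om_r * (1 + z) ** 4
          + Om_k * (1 + z) ** 2)
    return H0 * math.sqrt(e2)

print(hubble_parameter(0.0), hubble_parameter(1.0))
```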
To obtain cosmological distances, we need to perform integrals of the form
$$D_{\rm C} = c\int_0^z \frac{{\rm d}z'}{H(z')},$$
where the right-hand side can be expressed as a “Hubble distance” $D_{\rm H} \equiv c/H_0$, multiplied by a dimensionless integral over the $\Omega$ parameters. The quantity $D_{\rm C}$ is known as the comoving distance. From it we can derive the angular diameter distance, $D_{\rm A} = D_{\rm C}/(1+z)$, which relates the physical size of an object to the angle it subtends, and the luminosity distance, $D_{\rm L} = (1+z)^2 D_{\rm A}$, which relates its intrinsic luminosity to the observed flux. These distances agree at small redshift $z$, but differ substantially at redshifts of order unity and above.
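A numerical sketch of these distance integrals is given below for a spatially flat model with assumed, purely illustrative parameters; the comoving distance is obtained by integrating $c\,{\rm d}z/H(z)$ with a simple trapezium rule, and the angular diameter and luminosity distances follow from it.

```python
import math

# Numerical sketch of the cosmological distance integrals for a spatially flat
# model. The parameter values (H0 = 70 km/s/Mpc, Om_m = 0.3, Om_L = 0.7) are
# illustrative assumptions, not results from this review.

C_KM_S = 299792.458
H0, Om_m, Om_L = 70.0, 0.3, 0.7

def hubble(z):
    """Hubble parameter at redshift z for a flat matter + Lambda model [km/s/Mpc]."""
    return H0 * math.sqrt(Om_m * (1 + z) ** 3 + Om_L)

def comoving_distance(z, n_steps=10000):
    """D_C = integral of c dz' / H(z') from 0 to z, by the trapezium rule [Mpc]."""
    dz = z / n_steps
    integrand = [C_KM_S / hubble(i * dz) for i in range(n_steps + 1)]
    return dz * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))

z = 1.0
d_c = comoving_distance(z)
d_a = d_c / (1 + z)              # angular diameter distance (flat universe)
d_l = (1 + z) ** 2 * d_a         # luminosity distance

print(f"D_C = {d_c:.0f} Mpc, D_A = {d_a:.0f} Mpc, D_L = {d_l:.0f} Mpc")
```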