The validation of the Big Bang interpretation of the universe arrived, in 1965, just as a 22-year-old Joseph Silk was entering the field of cosmology. The discovery came in the form of an observation of microwave radiation suffusing all of space in every direction. This cosmic microwave background, or CMB, matched a theoretical prediction of what temperature a universe born in a Big Bang would have reached today after nearly 14 billion years of expansion and cooling. The radiation, however, appeared featureless, yet features are what the infant universe would have needed if it was to grow into one full of galaxies. Presumably more precise observations would detect those irregularities, and in 1967 and 1968 Silk calculated that when observations of the CMB attained a far greater level of precision, those features would emerge in the form of infinitesimal temperature fluctuations.
Those delicate observations of the infant universe, however, would have to wait until engineers could design space observatories to make them. In the meantime, cosmologists were facing problems at the other end of the cosmic calendar—today’s large-scale distribution of galaxies. Astronomers were finding so much “noise” in the data—variables that might or might not be significant—as to make the observations practically useless. In the early 1980s, though, Nicholas Kaiser had the idea of borrowing statistical methodology that engineers deploy in telephone systems and seeing whether it applied to the universe. Other theorists seized on this methodology to create their own statistical models, which in turn helped them make sense of the astronomical observations and arrive at a consensus: The visible universe today consists primarily of superclusters of galaxies spanning hundreds of millions, and in some cases billions, of light-years.
That result, however, presented a further complication. Astronomers had expected a relatively uniform distribution of matter. Instead what they found was a cosmic web—filaments of matter in the form of superclusters, separated by vast voids. Clustering on such an extraordinary scale wouldn’t make sense, gravitationally speaking, unless much more matter was present than the eye (aided by telescopes working in any portion of the electromagnetic spectrum) could see. That possibility echoed another anomaly then emerging in astronomy: The rotational behavior of individual galaxies didn’t make gravitational sense, either, at least unless you inferred the existence of invisible matter.
“Dark matter,” researchers called it, and Silk and Kaiser both developed ways of detecting its effects and inferring its mass.
In 1984 Silk proposed exploring the nature of dark-matter particles through their possible self-annihilations into particles we can directly detect.
In 1987 and 1992 Kaiser demonstrated how cosmologists could use statistical methods to make sense of dark-matter data in both galaxy distributions and “weak lensing,” an effect predicted by Einstein’s general theory of relativity in which foreground matter subtly distorts the light from background galaxies.
Beginning in the early 1990s, space observatories began probing the CMB for those infinitesimal temperature fluctuations, and finding them. The Cosmic Background Explorer in 1992, the Wilkinson Microwave Anisotropy Probe in the first decade of the twenty-first century, and the Planck mission this past decade have delivered increasingly precise data showing the match that Silk predicted between the primordial patterns and today’s large-scale structure of the universe, a comparison that Kaiser’s statistical methods made possible. Those observatories also showed that about 85 percent of the universe’s matter is dark matter: still a mystery, but one that cosmologists continue to try to solve in experiments around the world built on the particle self-annihilation and weak-lensing approaches that Joseph Silk and Nicholas Kaiser pioneered.