
# Ruminations on Cosmology and Time

Like many people, I have been riveted for decades by the breathless bulletins from cosmologists describing the latest twist to their model of space and time at the largest possible scale. Recently, I have been reading a lot of their papers and finding how far cosmology has advanced since I was a grad student. This article is partly to share the beauty of this theory with my colleagues but also to express how thoroughly relativity shakes up our deep psychological conviction of the reality of an external physical time. This Newtonian perspective, “Time, in and of itself…without reference to anything external, flows uniformly” (*Principia*, Scholium in *Definitions*), is perpetuated in the “standard model” of cosmologists and I want to critique its basis (see §4).

## 1. A Trip by Space Ship

I began, a few months ago, wondering how far a space ship, capable of 1g acceleration for decades, could take a human being. (A referee pointed out to me that this has often been used as an exercise in relativity courses and can be found at https://www.mathpages.com/rr/s2-09/2-09.htm. Though simple, I think it is very helpful for shaking our ingrained habit of thinking of time in the Newtonian way.) I have no idea what its power source might be, but 1g means life on the space ship would feel exactly like life on earth if the floor were perpendicular to the acceleration and, when the ship needed to decelerate, it would simply turn around so the astronauts would still be standing normally on the “floor.” To my astonishment, I found that I could travel to the black hole in the center of the Milky Way galaxy, known as Sagittarius A*, *and back*, all in less than forty years. Meanwhile, tens of thousands of years will have elapsed *on earth* when the ship returns, so forget the greeting committee. That’s the magic of relativity. Here are the details.

Let $(x, y, z, t)$ be space-time coordinates, and let $ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2$ be its special relativistic metric, with $ds/c$ measuring “proper” time along any time-like trajectory (e.g., the subjective time that would be experienced by a human following this trajectory) and $c$ the speed of light. We assume the trip takes place in the $(x, t)$ plane. The Lorentz group acts transitively on any hyperbola whose asymptotes are light rays (i.e., a curve given by $x^2 - c^2t^2 = \text{const} > 0$); hence these must be the curves of constant acceleration. Alternatively, the relativistic acceleration of a body whose trajectory is $(x(\tau), t(\tau))$, parametrized by proper time $\tau$, is readily seen to be $\sqrt{\ddot x^2 - c^2\ddot t^{\,2}}$, and one can check that this is constant on all such hyperbolas. A hyperbola through the origin and tangent there to the time axis represents the trajectory of an initially stationary space ship with constant acceleration $g$, and its trajectory is conveniently expressed as:

$$x(\tau) = \frac{c^2}{g}\Bigl(\cosh\bigl(\tfrac{g\tau}{c}\bigr) - 1\Bigr), \qquad t(\tau) = \frac{c}{g}\,\sinh\bigl(\tfrac{g\tau}{c}\bigr).$$

Here $g$ is the acceleration of gravity on earth, i.e., 32.174 feet/sec^{2}, and $\tau$ is proper time in the space ship. If we use years and light-years for units, then $c = 1$ and, remarkably, $g \approx 1$. More precisely, there are exactly 31,557,600 seconds in a year (the Julian year, as used by the IAU) and about $3.1 \times 10^{16}$ feet make a light-year, making $g \approx 1.03$ in light-year and year units.

The latest measurements show that Sag A* is about 26,000 light-years away from us. Then if you set $g = 1.03$, you find that in less than 10 subjective years, the space ship has gone 13,000 light-years, is half way to the black hole and needs to start decelerating. However, note that $t$ is also about 13,000, i.e., on earth 13 millennia have passed. At that point, your speed is $c\tanh(g\tau/c)$, or 99.99999985% of the speed of light. In twenty years, you’ll be at rest at Sag A* but, of course, prudently not too close. In less than forty years, you are back on an earth where more than 52,000 years have elapsed.
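These numbers are easy to check from the hyperbola formulas above, written in light-year/year units with $c = 1$. A minimal sketch (the value $g \approx 1.03$ and the 26,000 light-year distance are taken from the text; the trip is four identical boost phases of 13,000 light-years each):

```python
import math

g = 1.03            # 1g acceleration in light-year / year^2 units (c = 1)
half_leg = 13_000.0 # light-years of pure acceleration: half way to Sag A*

# Constant-acceleration hyperbola: x = (cosh(g*tau) - 1)/g, t = sinh(g*tau)/g.
# Invert x(tau) to find the proper time needed to cover half a leg:
tau = math.acosh(g * half_leg + 1) / g   # proper years for one boost phase
t_earth = math.sinh(g * tau) / g         # earth years elapsed in the same phase
v = math.tanh(g * tau)                   # speed at the midpoint, as a fraction of c

ship_round_trip = 4 * tau        # accelerate + decelerate, out and back
earth_round_trip = 4 * t_earth

print(ship_round_trip)    # just under 40 subjective years
print(earth_round_trip)   # about 52,000 earth years
print(v)                  # extremely close to 1
```

Note how $t \approx x$ for large $\tau$: the trajectory hugs a light ray, which is exactly why earth time and distance travelled both come out near 13,000 for each phase.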

A few caveats: firstly, space-time is not flat, meaning you need some extra power to get out of the sun’s gravitational field and you will be accelerated/decelerated by the galaxy’s gravitation on the way in, resp., way back. I think these are minor. When your own space ship hits its maximum speed midway, everything is normal inside your space ship but outside, stars and interstellar gas are rushing by extremely close to the speed of light. The usual formula shows that their masses increase by the factor $1/\sqrt{1 - v^2/c^2}$, here more than 13,000. You’ll need good shielding. Finally, if you use a photon drive, accelerating by emitting high energy photons, and take the mass of the space ship to be the same as that of the space shuttle (2 million kg), my calculations show that the needed energy for this trip is roughly *10,000 years* worth of the total energy being generated today on earth—seems to need some breakthroughs, but don’t forget that the history of mankind shows a succession of inventions that enable us to wield ever more power. Incidentally, paths that accelerate fast enough get to $t = \infty$ in finite proper time although requiring infinite energy. But see “Black Holes” below for another option to live “forever.” Finally, I want to mention a totally exotic approach to this sort of travel: *warp drives*. A recent study appeared in arXiv:2102.06824. The idea is to form a traveling bubble by warping space-time so that, within the bubble, both space and time can be scaled differently from their values outside the bubble. In the preprint referenced above, the authors claim this to be possible “in principle.”

The key thing for me is that when high relative velocities occur, one should not expect any, even roughly, consistent time coordinate to be possible. Fellow humans in the future or members of an alien civilization are likely to be marching to different drummers. But it is also true that measured stellar and galactic velocities all cluster tightly compared to the speed of light. Individual stars, both in their rotation around the Milky Way and in their additional so-called “peculiar” motion (this is standard astronomer’s lingo for all apparently random components of motion), have speeds with order of magnitude 100 km/sec, i.e., of order 0.1% of the speed of light. Even so-called high-velocity stars have speeds below 1% of the speed of light: e.g., a record neutron star “S5-HVS1” was found moving at 1755 km/sec. Galaxies have peculiar velocities in that range too. In the Virgo cluster, for example, measured by their redshifts, their recessional velocities vary by plus or minus 1000 km/sec. Our local group of galaxies is believed to have a peculiar velocity of about 600 km/sec. So if other intelligent forms of life exist in our universe, one good sign might be the occasional object moving at extremely large velocities. Science fiction movies have it all wrong: if you want to go far, there is no need to subject yourself to suspended animation, you simply need a huge supply of energy.

## 2. The Standard Model

The so-called standard model of modern cosmology is, more precisely, the FLRW or Friedman-Lemaitre-Robertson-Walker model of 4-dimensional space-time. We will use coordinates $(t, x, y, z)$ with the metric negative definite along *t* = constant slices and positive definite along the time-like lines $(x, y, z)$ = constant (n.b.: sometimes the *opposite signs* are used but I will follow this convention). From this truly vast set of possibilities, the FLRW model comes about by requiring *isotropy* (invariant under spatial rotations) and *spatial homogeneity* (invariant under a 3D group of translations), together known as the *cosmological principle*. This has the drastic effect that the only possible metrics are of the form:

$$ds^2 = c^2\,dt^2 - a(t)^2\,d\sigma^2,$$

where $d\sigma^2$ is either the metric of the 3-sphere, flat 3-space, or hyperbolic 3-space. It is true that astronomy strongly supports isotropy and mankind’s bad experience with Ptolemaic earth-centered planetary models supports the idea that the position of the earth is in no way special. But for spatial homogeneity to even make sense, one must choose a distinguished set of spatial translations with respect to which it is homogeneous. And this means taking our local universe and extrapolating it “sideways” into parts of space-time which are totally unobservable from our space-time location. *We can only observe things on our past light cone, the locus of points in space-time from which a light ray might reach us now.* In other words, we are trying to make inferences about a 4-dimensional cosmos from data on this 3-dimensional backwards oriented cone. A global space vs. time splitting, as in the FLRW model, is already a huge extrapolation of our limited human observations to a model of the entire universe. I see this model as making a similar error to that of Ptolemy. It would have been more natural to update the idea that we are not in a special point in the cosmos by asking that space-time be homogeneous, so we are not in a special space-time location. This is what Einstein initially did. But the redshift and its explanation by an expanding universe (instead of it being a Doppler effect) contradicts this: apparently no reasonable metric for our universe can be invariant under time translation. It is an easy exercise to show that with the above model, light emitted by atoms at time $t_e$ and observed by comoving observers at time $t_o$ has its frequency divided by $a(t_o)/a(t_e)$. This ratio minus one is the standard measure of redshift, whose measurements then connect data with the function $a(t)$ in the model.
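The redshift ratio is easy to play with numerically. In the toy sketch below, the matter-dominated scale factor $a(t) \propto t^{2/3}$ is an illustrative assumption, not data; the point is only the rule $1 + z = a(t_o)/a(t_e)$:

```python
# Toy redshift computation: 1 + z = a(t_o)/a(t_e), with an assumed
# matter-dominated scale factor a(t) = (t/t0)^(2/3) purely for illustration.
t0 = 13.8e9   # "now", in years

def a(t):
    return (t / t0) ** (2.0 / 3.0)

def redshift(t_emit, t_obs=t0):
    return a(t_obs) / a(t_emit) - 1.0

z = redshift(t_emit=t0 / 8)  # light emitted when the universe was 1/8 its age
print(z)                     # a(t) ratio is 8**(2/3) = 4, so z = 3
```

So in this toy model a quasar seen at redshift 3 emitted its light when the universe was one eighth of its present age.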

*Accepting the validity of the standard model*, what does this backwards light cone look like? The function $a(t)$ is given by integrating Einstein’s and Friedman’s equations. Fitting the best current data, e.g., the Hubble constant, observational data on galaxies, and the small but significant anisotropies of the cosmic microwave background (CMB), the standard model has been made completely specific: in one version anyway, the universe looks flat in spatial directions and the stress-energy tensor, the source term in Einstein’s equations, is made specific with 31.5% energy from massive particles (including “dark matter”), roughly 0.00006% from radiation, and 68.5% from “dark energy” (a positive cosmological constant). In recent times, dark energy dominates; for most of the past, matter did; and at the time of the CMB, radiation was the dominant term. With all these assumptions, Friedman’s equations allow one to integrate for $a(t)$.

The result is shown in Figure 1: a depiction of everything visible by telescopes, simplified by making space 2-dimensional, not 3. The light cone starts off, looking out in space and back in time, like a cone whose slope is one light-year out, one year back. But as you go further out and further back in time, to say around eight billion years back, to where the Hubble expansion of the universe means space was then smaller, this begins to seriously counteract the expanding sphere of ancient light sources so the cone expands less than linearly. What you’re seeing through the telescope when you look 10 billion light-years out or more begins to come from smaller and smaller 2-spheres as measured in the metric at those long-past times. At 13.8 billion years back, you hit the source of the *cosmic microwave background* (or CMB), the radiation from a densely packed world where photons get absorbed as quickly as they are emitted. You see this like a blinding flash (redshifted to microwave frequencies) and you can see no farther back in time or out in space. Possibly, going beyond photon-based astronomy, gravitational waves may give us data on even earlier states of the universe but this has not been done so far. Remarkably, the diameter of the sphere that produced the CMB that you see on the earth today was then not 13.8 billion light-years across but merely 80 *million* light-years (according to the standard model).
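Both headline numbers here, the 13.8-billion-year age and the 80-million-light-year CMB sphere, can be reproduced by integrating the flat-ΛCDM Friedman equation. The density parameters and Hubble constant below are Planck-like values assumed only for this sketch:

```python
import math

# Flat LambdaCDM parameters (assumed, Planck-like); H0 converted to 1/Gyr.
H0 = 67.4 / 3.0857e19 * 3.1557e16   # km/s/Mpc -> 1/Gyr (about 0.069 per Gyr)
Om, Or, OL = 0.315, 9.2e-5, 0.685   # matter, radiation, dark energy fractions
a_cmb = 1.0 / 1101.0                # scale factor at last scattering

def H(a):
    # Friedman equation for flat LambdaCDM: H(a) = H0 sqrt(Om/a^3 + Or/a^4 + OL)
    return H0 * math.sqrt(Om / a**3 + Or / a**4 + OL)

N = 200_000
age = 0.0   # age of the universe: integral of da/(a H(a)) over 0 < a < 1
chi = 0.0   # comoving distance to the CMB: integral of da/(a^2 H(a))
for i in range(N):
    a = (i + 0.5) / N               # midpoint rule
    age += 1.0 / (a * H(a)) / N
    if a > a_cmb:
        chi += 1.0 / (a * a * H(a)) / N

# With c = 1 light-year/year, chi comes out in Gly.  The sphere we now see in
# the CMB had proper diameter 2 * chi * a_cmb at the time of last scattering.
diameter_mly = 2 * chi * a_cmb * 1000.0   # millions of light-years

print(round(age, 1))        # close to 13.8 (Gyr)
print(round(diameter_mly))  # on the order of 80 (million light-years)
```

The same integral also shows the comoving distance to the CMB is over 45 billion light-years, which is why the tiny factor $a_{\text{cmb}}$ shrinks the sphere so dramatically.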

## 3. An Aside on Space-Time Curvature

Before going further, I want to clarify the meaning of curved space-time. As a mathematician used to Riemannian geometry, I realized after a while that I was confused about the meaning of positive and negative curvature for space-time, also because the metric is assumed to have signature (+,-,-,-) by some authors and (-,+,+,+) by others. Both conventions are widespread in the literature. The coefficients of the curvature tensor itself depend on the coordinates and it is not clear how to interpret their sign. In Riemannian geometry, what makes *geometric* sense are (i) the convergence and divergence of nearby geodesics and (ii) the rate of increase of areas/volumes as their radius increases. The simplest coordinate-free scalar curvature variables, ones that depend only on the geometry, are the *sectional curvatures*. These are scalar measures attached to 2-planes in some tangent space by evaluating the Riemann curvature on an orthonormal basis of this plane, or in any basis, dividing by the appropriate minor of the metric. I was surprised, however, that the immense tome *Gravitation* of Misner, Thorne, and Wheeler never mentions these (without an online version, I may have missed this).

In general relativity, time-like geodesics are the paths of freely falling bodies and their convergence/divergence is highly relevant and can be read off a sectional curvature for a 2-plane with indefinite metric containing their two initial trajectories. However, in such a 2-plane, time-like and space-like geodesics behave in opposite ways: if time-like ones diverge, space-like ones will converge and vice versa! I suggest that we would like curvature to reflect the more significant behavior of time-like geodesics in this case. As for totally spatial 2-planes in general relativity, geodesics here have no physical significance but growth or shrinkage of areas/volumes does. We want spatial sectional curvature to be positive (resp., negative) if areas grow less fast (resp., faster) than they do in flat space. BUT beware: sectional curvatures also depend on the sign convention of the metric: they flip signs when the metric flips signs. To be clear, I’ll call *geometric sectional curvatures* the sectional curvatures whose sign reflects the behavior of time-like geodesics for 2-planes meeting the light cone and the behavior of areas for purely spatial 2-planes. A further issue to be careful about is that the curvature of a time slice *in the induced metric* is not the same as the sectional curvatures in its tangent plane: the second fundamental form of the time slice must be added.

On the other hand, if you consider the Ricci tensor as a quadratic function on the tangent space, then one way to evaluate it on a unit vector $v$ is to take an orthonormal basis including $v$. Then the sum of the sectional curvatures for all the sections formed from $v$ and one of the other basis vectors is $\mathrm{Ric}(v, v)$. This is true in the Riemannian case and still holds for indefinite metrics so long as you *don’t flip signs* to make the sectional curvatures geometric.

An important example are the curvatures in the standard model given above, including the case where space has constant non-zero curvature $k = \pm 1$. The only variables are the unknown function $a(t)$ and the curvature $k$ of space. Taking $c = 1$, the geometric sectional curvatures in the six coordinate planes are:

$$K(t, x_i) = -\frac{\ddot a}{a} \quad (i = 1, 2, 3), \qquad K(x_i, x_j) = \frac{\dot a^2 + k}{a^2} \quad (i < j).$$

The signs may be confirmed geometrically by considering pairs of geodesics $x = $ const whose distance apart has the factor $a(t)$, as well as spatial geodesics that start in a hyperplane $t = $ const but then bend towards the side where $a$ is smaller.
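These curvatures can be checked symbolically. The sketch below (flat case $k = 0$, signature $(+,-,-,-)$, $c = 1$, and one common sign convention for the Riemann tensor, all assumptions of this sketch) computes the relevant components from the metric and confirms $K(t, x) = -\ddot a/a$, while the spatial plane comes out as the raw value $-\dot a^2/a^2$, which flips sign to the geometric $+\dot a^2/a^2$ exactly as discussed above:

```python
import sympy as sp
from functools import lru_cache

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
a = sp.Function('a')(t)

# Flat (k = 0) FLRW metric, signature (+,-,-,-), c = 1.
g = sp.diag(1, -a**2, -a**2, -a**2)
ginv = g.inv()

@lru_cache(maxsize=None)
def Gamma(l, m, n):
    # Christoffel symbols Gamma^l_{mn}
    return sp.simplify(sum(ginv[l, s] * (sp.diff(g[s, m], coords[n])
                                         + sp.diff(g[s, n], coords[m])
                                         - sp.diff(g[m, n], coords[s]))
                           for s in range(4)) / 2)

def Rdown(i, j, k_, l):
    # Covariant Riemann component R_{ijkl}, lowering the first index
    Rup = lambda u: (sp.diff(Gamma(u, l, j), coords[k_])
                     - sp.diff(Gamma(u, k_, j), coords[l])
                     + sum(Gamma(u, k_, p) * Gamma(p, l, j)
                           - Gamma(u, l, p) * Gamma(p, k_, j) for p in range(4)))
    return sum(g[i, u] * Rup(u) for u in range(4))

def K(i, j):
    # Sectional curvature of the coordinate 2-plane (e_i, e_j)
    return sp.simplify(Rdown(i, j, i, j) / (g[i, i] * g[j, j]))

K_tx = K(0, 1)  # the mixed (t, x_i) planes: equals -a''/a
K_xy = K(1, 2)  # a spatial plane, raw sign: -a'^2/a^2; geometric is +a'^2/a^2
```

The same little calculator works for any diagonal metric, which is convenient for the Schwarzschild checks in §6.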

## 4. An Alternative Approach to the Cosmological Principle

The cosmological principle that we shouldn’t be in a special place in space-time is surely a legitimate way to constrain our models. Thus interpreting the nearly isotropic redshift as a Doppler effect in a fixed spatial geometry would have meant that we are at the center of an expansion, contradicting this principle. But are there other ways to enforce some sort of homogeneity, short of assuming a group of isometric translations transitive on a spatial slice? I think the essential clue comes from the measurements that have been made of the relative velocities of earth, sun, planets, stars, and galaxies. It is very striking that *no such measurement is more than 1% of the speed of light*. As we described in §1, so far no such measurements are greater than 1800 km/sec and light travels at about 300,000 km/sec. If we think of every object as tracing out a trajectory in space-time, it is common to represent its motion by a unit 4-vector called its 4-velocity, and this means that the 4-velocities of all nearby stars and galaxies are very close to each other, that their “worldlines” are essentially parallel. Moreover, at least at present, matter is clustered within a near vacuum and interacts almost entirely by gravity, hence the worldlines are assumed to be very nearly geodesics. This description of the distribution of matter (dark or not) is what cosmologists call “dust.” My proposal is that the cosmological principle should be used to say that in a very large portion, call it $U$, of space-time near us, there is a time-like unit vector field $v$ whose value at each point represents the common mean 4-velocity of all the matter near that point. Moreover, the integral curves of this vector field are geodesics, forming what is called a *geodesic congruence*. At every such point, $v$ defines the time-axis with respect to which this matter is nearly stationary and its perpendicular 3-plane is the natural choice for defining *locally simultaneous events*.
Going back to our space traveler who went to Sag A*, this is why it is reasonable to say that when he gets to the center of the Milky Way and his clock says he has travelled for 20 years, then *at the same time* clocks on the earth show that 26,000 years have elapsed. Everything in the Milky Way has small relative motions so we can propagate the time of our clocks consistently throughout the galaxy and define rough simultaneity without getting into much trouble.

Globally, these local spatial slices define a 3-dimensional *distribution* on $U$. *But the cosmological principle does not imply that this distribution is integrable!* A good way to think of this is to take the dual one-form $\omega$ (i.e., lower the indices of $v$). The integrability of the spatial distribution is equivalent to asserting $\omega \wedge d\omega = 0$, or that $d\omega = \eta \wedge \omega$ for some one-form $\eta$. But because the congruence is geodesic, $\omega$ is “half-closed.” In fact, using $\nabla_v v = 0$ and $\langle v, v \rangle \equiv 1$, for any vector field $w$ we get $d\omega(v, w) = 0$. But $\omega$ still has a spatial curl, namely the restriction of $d\omega$ to the spatial 3-planes, whose vanishing now implies $d\omega = 0$, hence $\omega = dt$ for some time function $t$. (Added in proof: I just read an article in 2021 *Nature Astronomy* by P. Wang et al. giving evidence that there is coherent rotational motion of galactic filaments at a 100 million light-year scale.)
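A toy check in 2-dimensional flat space-time: take the geodesic congruence of observers moving radially away from the origin (the “Milne” observers, an illustrative choice, defined in the region $t > |x|$). Its unit 4-velocity field is $v = (t\,\partial_t + x\,\partial_x)/\sqrt{t^2 - x^2}$, and its dual one-form turns out to be exactly closed, $\omega = d\sqrt{t^2 - x^2}$, so the “cosmic time” of this congruence is simply the Minkowski interval from the origin:

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)
s = sp.sqrt(t**2 - x**2)   # proper time from the origin, in the region t > |x|

# Unit time-like field v = (t d/dt + x d/dx)/s; lowering indices with the
# metric diag(1, -1) gives the dual one-form omega = omega_t dt + omega_x dx.
omega_t = t / s            # = v^t
omega_x = -x / s           # = -v^x

# The curl of omega: the coefficient of dt ^ dx in d(omega).
curl = sp.simplify(sp.diff(omega_x, t) - sp.diff(omega_t, x))

# omega is exactly d(s): local simultaneity slices are the hyperbolas s = const.
check_t = sp.simplify(omega_t - sp.diff(s, t))
check_x = sp.simplify(omega_x - sp.diff(s, x))
```

Here the curl vanishes identically, so the distribution is integrable; the point of the paragraph above is that for a general geodesic congruence only $d\omega(v, \cdot) = 0$ is automatic, and a nonzero spatial curl (rotation) would obstruct any global time function.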

In the standard approach, Hubble’s law says that the redshift is the fundamental measure of distance for objects lying outside our local cluster. This comes from a long history of identifying “standard candles,” objects and events in deep space whose intrinsic luminosity is predictable, and then measuring both observed luminosity and redshift. Hubble found a rough linearity here and the slope is called the Hubble constant. His observations have been extended with every new, ever more powerful telescope and a fair number of standard candles (e.g., Cepheid variables and type 1a supernovae). An exhaustive 2013 survey can be found in Tully et al., arXiv:1307.7213, culminating in Figure 2.

The thing to note is the immense variability of the estimates of the Hubble constant. Combining the Hubble constant with local data where parallax is available to get accurate distances and using the standard model, the redshift becomes the universal measure for both the distance from earth and the past time of all telescopic observations of deep space galaxies. But if we question the standard model, what can we say about the relationship of redshift and luminosity? Starting from scratch, light rays follow *null* geodesics so there is no physically meaningful way to assign either a distance from us or a time in the past to an observed galaxy. But one can develop formulas for both the attenuation of luminosity and the redshift of light received at earth *using the curvature of space-time along the path that the light has travelled between the emitter and earth*. These show that both are highly sensitive to variations of curvature along the photon’s path. I want to thank Piotr Chrusciel for helping me with the summary of this effect below.

Let $\gamma(s)$ be the null geodesic of the photon, joining the emitter at $s = 0$ and earth at $s = 1$ for the observer, where $s$ is an affine parametrization. The redshift along $\gamma$ is the product of a geometric component and a Doppler component. Let $V(s)$ be the parallel translation of earth’s 4-velocity along $\gamma$. Then the inner product $\langle \gamma'(s), V(s) \rangle$ measures the rate of recession/approach and gives the Doppler component of redshift. The geometric component is given by comparing $\gamma$ to the nearby null geodesic one period of the light’s oscillation later. The separation $J$ is described by Jacobi’s formula $J'' + KJ = 0$, where $K$ is the sectional curvature of the 2-plane spanned by $\gamma'$ and $J$; the resulting stretching of $J$ between emitter and receiver is the geometric component of the redshift.

The formula for the diminution of luminosity along $\gamma$ was worked out by R. Sachs in 1961 and involves the orthogonal split of the tangent space to space-time along the light ray:

$$T_{\gamma(s)} = \operatorname{span}\bigl(\gamma'(s), V(s)\bigr) \oplus S(s), \qquad \dim S(s) = 2,$$

where the “screen” $S(s)$ is orthogonal to both $\gamma'$ and $V$.

Think of the $S(s)$ as the wave front of a beam of light from the emitter, in which the beam evolves as an ellipse stretching/shrinking in varying directions at varying rates. The curvature tensor contracted with $\gamma'$ defines a quadratic form $\mathcal{R}$ on $S$ for which we get the Jacobi equation $J'' = -\mathcal{R}J$ for the matrix $J$ of relative distances between the light rays in the beam. Then, if we ignore polarization, the attenuation is $1/\det(J)$. (Note that $(\det J)'/\det J$ is the proportional rate of change in the area of the light beam.) For details I refer to Fleury’s thesis, arXiv:1511.03702, Chapter 2.
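A numeric sketch of this beam equation: with a made-up constant curvature form $\mathcal{R}$ on the 2-dimensional screen (one focusing and one defocusing direction, values purely illustrative), the matrix Jacobi equation $J'' = -\mathcal{R}J$ is integrated from a point-like beam and the area $\det J$ tracked; the attenuation then goes like $1/\det J$:

```python
# Toy integration of the 2x2 beam (Jacobi) equation J'' = -R J along a ray.
# R is an invented constant curvature form: focusing in one screen direction
# (positive entry), defocusing in the other (negative entry).
R = [[0.5, 0.0],
     [0.0, -0.2]]

# J(0) = 0, J'(0) = identity: an initially point-like beam spreading out.
J  = [[0.0, 0.0], [0.0, 0.0]]
Jp = [[1.0, 0.0], [0.0, 1.0]]

h, steps = 1e-4, 20_000   # integrate out to affine parameter s = 2
for _ in range(steps):
    # symplectic Euler step: update J' using -R J, then J using the new J'
    Jp = [[Jp[i][j] - h * sum(R[i][k] * J[k][j] for k in range(2))
           for j in range(2)] for i in range(2)]
    J  = [[J[i][j] + h * Jp[i][j] for j in range(2)] for i in range(2)]

area = J[0][0] * J[1][1] - J[0][1] * J[1][0]   # det J ~ beam area
attenuation = 1.0 / area
```

In flat space-time the area at $s = 2$ would be exactly $s^2 = 4$; the focusing direction here shrinks it below that, so the received beam is *brighter* than the flat-space inverse-square law predicts, which is exactly the curvature sensitivity the text is pointing at.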

## 5. Which Model is Better?

Assessing whether the standard model is a reasonable approximation comes down to asking how inhomogeneous the observed part of our past light cone is and, in particular, whether its curvature varies substantially enough to invalidate the use of Hubble’s linear law to measure deep space and time. Well, galaxies cluster and these clusters fall into superclusters; the Milky Way sits inside the “local group” of galaxies (roughly 10 million light-years across) which is, in turn, part of the Virgo supercluster (roughly 100 million light-years across). But at the 100 million light-year scale, these superclusters form walls and filaments, e.g., the Sloan Great Wall that is more than a billion light-years long. These structures are separated by voids with extremely few galaxies whose size averages around 150 million light-years and which seem to fill about 60–70% of the volume of space today. The whole structure now looks very fractal-like. There may even be larger structures that one detects from gamma ray bursts and clustered groups of quasars (the radiation from matter furiously accelerated by really big black holes) whose redshift places them at 5–10 billion years old, the equivalent of 5–10 billion light-years away in today’s expanded universe. But this may be the limit because the CMB is very nearly homogeneous and (according to the standard model) is 13.8 billion years in the past.

The most likely place to find variations in curvature is in the voids versus the walls/filaments. This has been investigated by David Wiltshire at the University of Canterbury, New Zealand. He has proposed an alternative to the standard model that he calls a *Timescape Model* (see his lecture notes on the arXiv:1311.3787). In this he allows the voids and walls to evolve quite differently. His conclusion is that these two did indeed evolve very differently, both in spatial curvature and in their proper time. The wall/filament structures in his model maintain near zero curvature while the voids develop strongly negative curvature. It is the voids that drive the expansion of the universe while gravitational attraction is stabilizing the matter-filled walls/filaments. In his model, by wall/filament clock time, the universe is 14.2 billion years old while clocks in voids would show an age of 17.5 billion years. As soon as you abandon the standard model, there is no reason to assign one number to the age of the universe: it depends on the variations of curvature along each null geodesic from the CMB to the present. Working with his model, one might well explain the wildly diverging estimates of Hubble’s constant seen in Figure 2, but to thoroughly implement Wiltshire’s approach needs considerable further effort.

Another major critic of the standard model is Thomas Buchert at the CNRS Center CRAL in the University of Lyons, France. His basic criticism is that one cannot simply average out the inhomogeneity because the equations are highly non-linear, so that averaging and time evolution do not commute. In fact, it is not even obvious how one ought to average a key tensor like curvature to appropriately summarize its effects. This problem is called “back-reaction,” curved space affecting matter that then affects curved space. To be clear, this criticism is a minority view right now. Especially controversial is the contention that back-reaction may eliminate the need to introduce dark energy. Evidence for this point of view is laid out in detail in Buchert et al.’s *Classical and Quantum Gravity* article “Is there proof that back-reaction of inhomogeneities is irrelevant in cosmology?”, arXiv:1505.07800 (see also Buchert et al., arXiv:0906.0134 and arXiv:gr-qc/0506106). I want to thank Prof. Buchert for his patience in explaining this line of research to me and for providing many arXiv links in this section. He has developed extensive machinery for attacking these issues: for a review on fundamentals see arXiv:0707.2153, and for a recent paper on averaging in general space-time foliations see arXiv:1912.04213. In the former paper, a key point in his theory appears in equations (10) and (11). Letting $\theta$ be the *local* rate of expansion for a universe filled with irrotational “dust,” he finds that if you first average $\theta$ over a spatial domain he calls $\mathcal{D}$ and then take the time derivative, you get a positive contribution from the spatial *variance* of $\theta$; i.e., inhomogeneities are causing accelerated average expansion.
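The variance effect can be seen in a two-region toy model (all rates invented for illustration): let two domains expand at constant local rates $\theta_1, \theta_2$. The volume-weighted average then obeys $d\langle\theta\rangle/dt = \langle\theta^2\rangle - \langle\theta\rangle^2$, so the *averaged* expansion rate grows even though each local rate is constant, which is the non-commutation of averaging and evolution in miniature:

```python
import math

# Two regions with constant local expansion rates (illustrative values only):
th1, th2 = 0.2, 1.0     # local theta in each region
V1, V2 = 1.0, 1.0       # initial volumes

def avg_theta(t):
    # Volume-weighted average of theta; volumes grow as exp(theta_i * t).
    v1, v2 = V1 * math.exp(th1 * t), V2 * math.exp(th2 * t)
    return (th1 * v1 + th2 * v2) / (v1 + v2)

# Numeric derivative of <theta> at t = 0 vs. the variance of theta at t = 0:
eps = 1e-6
d_avg = (avg_theta(eps) - avg_theta(-eps)) / (2 * eps)
mean = (th1 * V1 + th2 * V2) / (V1 + V2)
variance = (th1**2 * V1 + th2**2 * V2) / (V1 + V2) - mean**2

print(d_avg, variance)   # equal: the averaged expansion accelerates
```

Since each $\theta_i$ is constant, $\langle\partial_t\theta\rangle = 0$ here, so the whole time derivative of $\langle\theta\rangle$ is the variance term; the faster-expanding region simply takes over more and more of the volume average.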

How can we formulate models of the cosmos that incorporate inhomogeneities and the back-reaction? The most research has gone into perturbing the standard model, adding a power series to its metric, to force the Ricci curvature to match the inhomogeneous stress-energy tensor. But this work is mostly restricted to first and second order perturbations, throwing away vast numbers of higher order terms presumed small. Back-reaction only *starts* at second order. A review of this work can be found in arXiv:0804.3276. A more reasonable approach than low order perturbations might be to evolve the metric tensor as in Hamilton and Perelman’s work on 3-manifolds but now driven by the error in the stress-energy tensor.

A basic problem in computing an accurate model of our space-time neighborhood is the lack of data. We have telescopic data only for events on our past light cone but not on its interior. We also have data on a 2-sphere in the CMB event where it intersects our light cone. One could integrate forwards in time from the CMB, extrapolating its inhomogeneities from the visible 2-sphere to its interior. Or one might integrate backwards in time from data on the light cone. The latter may sound strange but see Courant and Hilbert, vol. 2, §16.4, where this is done for the classical wave equation. Or one might be content with stochastic simulations that reproduce the statistics of the data, e.g., the look and feel of the fractal distribution of galaxies. Integrating forwards in time, Macpherson, Price, and Lasky just published simulations of the full Einstein equations with initial data at the CMB on a lattice of physical size up to 1 gigaparsec (about 3.26 billion light-years). They use the Einstein toolkit in the software package *Cactus*. These show the formation of walls and voids. Many other simulations, e.g., the 2011 “Bolshoi” runs (arXiv:1109.0003) and the 2021 “NewHorizon” runs (arXiv:2009.10578), are based on solving a Newtonian N-body problem in the space-time of the standard model. This approach totally ignores all back-reaction.

This very active field of inhomogeneous relativistic cosmology seeks to replace the standard cosmological model, a naive guess at the average evolution of the universe, by exploiting the richness inherent in general relativity, providing more general models that may also explain dark energy and dark matter through physical properties like inhomogeneous spatial curvature, an element that is simply neglected in the standard model of cosmology.

## 6. Black Holes

Black holes have gone from an awkward part of the Schwarzschild model that nobody was sure existed in real space-time to a now ubiquitous and much observed reality in which general relativity gets really wild and wooly. In particular, we are now sure that stars, if they retain sufficient mass after the possibly explosive collapse when their fuel runs out, are fated to collapse further leaving behind a black hole. And we have excellent evidence that almost all galaxies harbor a really huge black hole in their center, like Sag A* in our own galaxy. In fact, one theory for the origin of the “dark matter” that binds galaxies together is that it consists of black holes of various sizes formed in the Big Bang and forming a halo around each galaxy. This idea, originally proposed by Hawking and Carr, has never been mainstream but refuses to die: see the 2020 article “Primordial black holes as dark matter: Recent developments,” https://www.annualreviews.org/doi/abs/10.1146/annurev-nucl-050520-125911. There is also a cluster of 73 quasar black holes known as the “Huge LQG,” 9 billion light-years away, that stretches over billions of light-years. To top it off, recently (arXiv:1909.11090) the theory has arisen that our solar system might have a ninth planet that is really a primordial black hole (one created soon after the “Big Bang”), possibly 5 cm in size with mass several times that of the whole earth! All this is not surprising since the theoretical results of Hawking and Penrose show that singularities of the black hole type are pretty much inevitable in general relativity.

But the popular conception of black holes as a sort of whirlpool that is sucking people in to their doom is radically wrong, due again to the fact that people cling to the idea that there is one time, valid for all of us. The widely accepted model claims to show what is going on *inside* a black hole. But, from our perspective at a distance, whoever tries to enter the black hole apparently gets paralyzed, their on-board clocks slow to nearly stopping, and their TV transmissions get redder and redder. The passengers meanwhile have no idea anything special is happening and keep shining their flashlights to show they are fine as they merrily cross the hole’s so-called horizon. But they can only send a finite number of photons before crossing the horizon so, from the outside, the signal actually stops as the passenger gets near the horizon. The traveller’s time has stopped for all intents and purposes while yours moves on. The only way for us to interpret their lives after entering the black hole is to say that *time there is infinitely far in our time’s future*. Crazily enough, looking back out the port holes even from inside the horizon, he/she can still watch what is now an infinitely long-ago universe to which they can never go back.

The usual model for a black hole starts with the Schwarzschild model:

$$ds^2 = \Bigl(1 - \frac{R}{r}\Bigr)\,dt^2 - \Bigl(1 - \frac{R}{r}\Bigr)^{-1}dr^2 - r^2\bigl(d\theta^2 + \sin^2\!\theta\,d\varphi^2\bigr).$$

We have set $c = G = 1$ ($G$ the gravitational constant); $R$ is the radius of the black hole horizon, equal to twice the mass $M$ of the hole, and we assume $r > R$ in this equation. If $r$ is large, the metric becomes indistinguishable from flat space-time, so we may identify both its time and space coordinates with our usual $(t, x, y, z)$. In fact, it was devised to model planetary motion with a central mass because its geodesics act as if they are being pulled towards $(0,0,0)$, $R$ being a simple multiple of the mass. The fact that the metric apparently explodes when $r$ decreases to $R$, the boundary we now call the *horizon*, was initially a puzzle. Is the metric really going crazy at the horizon? Its Riemann curvature tensor turns out to be “diagonal,” i.e., it only has six non-zero entries, all of the form $R_{ijij}$ (taking its symmetries into account). The corresponding sectional curvatures are $-R/r^3$ for the $(t,r)$ and $(\theta,\varphi)$ planes and $+R/2r^3$ for the four mixed planes, with singularities only at $r = 0$, nothing special at $r = R$. Other sectional curvatures are interpolations of these. This is an example of a curvature tensor of the simple “type D” in Petrov’s classification (see, e.g., the Wikipedia article on this). These are not the geometric curvatures, so we can use them to check that the Ricci curvature is zero and, most importantly, *there is no singularity* when $r = R$. Note also that the positive curvature in the $(t,\theta)$ and $(t,\varphi)$ planes is geometric and is the main driver of planetary motion. The near periodicity of planetary motion is an immediate consequence of this positive curvature. The negative curvature in the $(t,r)$ plane allows errant bodies to be thrown out of the solar system or to fall into the sun.
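These curvature claims can be verified symbolically. The sketch below assumes signature $(+,-,-,-)$, $c = G = 1$, and one common Riemann sign convention; the horizon radius is written `rs` in the code to avoid clashing with the curvature tensor. It computes the Christoffel symbols and Riemann tensor from the metric, confirms the diagonal Ricci entries vanish, and reads off the stated sectional curvatures, singular only at $r = 0$:

```python
import sympy as sp
from functools import lru_cache

t, r, th, ph = sp.symbols('t r theta phi', positive=True)
rs = sp.Symbol('r_s', positive=True)   # horizon radius R = 2M
coords = [t, r, th, ph]
f = 1 - rs / r

# Schwarzschild metric, signature (+,-,-,-), c = G = 1:
g = sp.diag(f, -1 / f, -r**2, -r**2 * sp.sin(th)**2)
ginv = g.inv()

@lru_cache(maxsize=None)
def Gamma(l, m, n):
    # Christoffel symbols Gamma^l_{mn}
    return sp.simplify(sum(ginv[l, s] * (sp.diff(g[s, m], coords[n])
                                         + sp.diff(g[s, n], coords[m])
                                         - sp.diff(g[m, n], coords[s]))
                           for s in range(4)) / 2)

def Rup(l, s, m, n):
    # Riemann tensor component R^l_{smn}
    return (sp.diff(Gamma(l, n, s), coords[m]) - sp.diff(Gamma(l, m, s), coords[n])
            + sum(Gamma(l, m, p) * Gamma(p, n, s)
                  - Gamma(l, n, p) * Gamma(p, m, s) for p in range(4)))

def K(i, j):
    # Sectional curvature of the coordinate 2-plane (e_i, e_j);
    # these coordinates are orthogonal, so the metric minor is g_ii * g_jj.
    Rdown = sum(g[i, l] * Rup(l, j, i, j) for l in range(4))
    return sp.simplify(Rdown / (g[i, i] * g[j, j]))

# Vacuum check: the diagonal Ricci entries all vanish, so the "blow-up"
# of the metric at r = rs is not a curvature singularity.
ricci_diag = [sp.simplify(sum(Rup(l, i, l, i) for l in range(4))) for i in range(4)]

K_tr, K_tth, K_thph = K(0, 1), K(0, 2), K(2, 3)
# K_tr = K_thph = -rs/r**3 and K_tth = rs/(2*r**3): singular only at r = 0.
```

Each row-sum of sectional curvatures through a fixed axis vanishes, which is exactly the Ricci-sum property from §3 for this Ricci-flat metric.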

Curvature is one thing, but to study the horizon, let’s look at how geodesics behave there, taking $\theta$ and $\varphi$ to be constant for simplicity (i.e., looking at geodesics going directly towards the center $(0,0,0)$). The simplest ones are the light rays, whose trajectories are easy to check: they are given by
$$t = \pm\Big(r + r_s\log\big(\tfrac{r}{r_s} - 1\big)\Big) + \text{const.}$$
The log-term blows up as $r$ decreases to $r_s$, i.e., $t \to \infty$, showing that an infinite amount of “our” clock time is needed for light to reach the black hole horizon. The trajectories of material bodies are given by a messier formula but tell a similar story, with the added feature that we can track their proper clock time $\tau$ as the body approaches the horizon. If we eliminate the external time $t$, the formula for the position $r$ as a function of proper time $\tau$ shows that radial geodesics cross the horizon smoothly and continue to the true singularity at $r = 0$ (where you really do get torn apart). On the other hand, $t$ must blow up logarithmically when this path crosses the horizon, as it does on light rays, hence the fact that when the body crosses the horizon, outside time is infinitely far in the future.
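A crude numerical sketch makes the two clocks vivid. The starting radius and the quadrature below are arbitrary choices for illustration only, using the standard relations for radial free fall from rest:

```python
import math

rs = 1.0    # horizon radius, units with c = G = 1
r0 = 10.0   # starting radius at rest (an arbitrary choice)

# Radial free fall from rest at r0 (standard Schwarzschild geodesic relations):
#   (dr/dtau)^2 = rs/r - rs/r0,   dt/dtau = sqrt(1 - rs/r0) / (1 - rs/r)
def dtau_dr(r):
    return 1.0 / math.sqrt(rs / r - rs / r0)

def dt_dr(r):
    return dtau_dr(r) * math.sqrt(1 - rs / r0) / (1 - rs / r)

def integrate(f, a, b, n=100000):
    h = (b - a) / n                      # crude midpoint rule
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

taus, ts = [], []
for eps in (1e-2, 1e-4, 1e-6):           # stop just outside the horizon
    taus.append(integrate(dtau_dr, rs + eps, r0))
    ts.append(integrate(dt_dr, rs + eps, r0))
    print(f"eps={eps:.0e}  proper time tau={taus[-1]:.4f}  external time t={ts[-1]:.1f}")
```

The astronaut’s proper time barely changes as the endpoint is pushed toward the horizon, while the external time grows like $r_s\log(1/\epsilon)$, without bound.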

The seeming conundrum of the geodesic continuing in this way is clarified, following Eddington and Finkelstein, by *changing the time coordinate* in the Schwarzschild metric. Define a new “time” by
$$\tilde{t} = t + r_s\,\log\Big(\frac{r}{r_s} - 1\Big).$$
Using these coordinates, we find that we can *enlarge* our usual space-time, adding a curious universe inside the black hole whose time is infinitely far in the future:
$$\big\{(\tilde{t}, r, \theta, \varphi) \;:\; 0 < r < \infty\big\} \;\supset\; \big\{r > r_s\big\},$$
for which the new metric has singularities only at $r = 0$:
$$ds^2 = -\Big(1 - \frac{r_s}{r}\Big)d\tilde{t}^{\,2} + \frac{2r_s}{r}\,d\tilde{t}\,dr + \Big(1 + \frac{r_s}{r}\Big)dr^2 + r^2\big(d\theta^2 + \sin^2\theta\,d\varphi^2\big).$$
In these coordinates, space is the same but time has been “corrected” to make the hole’s interior continuous with its exterior. Both light rays and material objects moving towards the black hole easily enter this new universe, though, from our standpoint on the outside, they merely slow down, seemingly stuck just outside the hole’s horizon. This interior universe is not very friendly because its curvature indeed grows without limit as $r$ goes to zero and all time-like trajectories lead there.
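The Eddington-Finkelstein substitution itself can be checked mechanically. Here is a sympy sketch of just the $(\tilde t, r)$ part of the metric, treating the differentials as formal symbols:

```python
import sympy as sp

r, rs = sp.symbols('r r_s', positive=True)
dt_ef, dr = sp.symbols('dt_ef dr')   # formal differentials of (t_tilde, r)

f = 1 - rs / r
# Schwarzschild time: t = t_tilde - r_s*log(r/r_s - 1), so dt = dt_ef - (r_s/(r - r_s)) dr
dt = dt_ef - rs / (r - rs) * dr

# (t, r) part of the Schwarzschild metric, rewritten in (t_tilde, r):
ds2 = sp.expand(-f * dt**2 + dr**2 / f)

# Eddington-Finkelstein form: every coefficient is regular at r = r_s
target = sp.expand(-f * dt_ef**2 + (2 * rs / r) * dt_ef * dr + (1 + rs / r) * dr**2)
print(sp.simplify(ds2 - target))  # 0
```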

Arguably, a more principled way of changing the time coordinate is to introduce the flow lines of the geodesics that start out as stationary objects far from the black hole but then get sucked in. This idea was called “rain coordinates” by Taylor and Wheeler, who introduced it in their text *Exploring Black Holes*. These flow lines define a global time-invariant vector field on black hole space-time, like the nearly parallel flow lines of stars on which we proposed to base global cosmology. In this case, their common perpendicular defines an integrable distribution for a time coordinate. Like Eddington and Finkelstein’s time, it equals the observer’s time $t$ plus a logarithmic infinity blowing up at the horizon, leading to the same model of the interior. Unfortunately, the formula for it brings in elliptic functions so we won’t reproduce it. A geodesically complete analysis of Schwarzschild’s model must add an infinitely distant past as well as such a future, using either a modification of Eddington-Finkelstein or the beautiful Kruskal-Szekeres model.

Note, finally, the similarity of a trip inside a black hole to the trip we started with, going to Sag A*: in both cases the traveler’s time and the stay-at-home time diverge wildly. My point is that once again we need to forget Newton’s idea that time is a universal physical thing. It is rather a thing with a distinct existence for each person, and there are real possibilities of time-travel into the future, maybe even infinitely far into the future.

## 7. Truly Wild and Wooly Models

This trick of changing coordinates by a logarithmically infinite term so that key geodesics become complete has now been used again and again, and it sometimes leads to crazy science-fiction scenarios. I’ll sketch two: the extended Kerr model and the Taub-NUT model.

The Kerr model is not only the apparently unique model for black holes that incorporates their rotation, but also one whose geodesic completion allows time travel *into the past*! It is also an algebraic nightmare, using oblate spheroidal coordinates and requiring multiple off-diagonal metric terms. It has two forms: (i) the Boyer-Lindquist model, describing the hole from the outside perspective, generalizing the Schwarzschild model, and (ii) the Kerr-Schild model that remains valid across the horizon, generalizing Eddington-Finkelstein. Both incorporate a normalized measure $a$ of the angular momentum, generally (and here) assumed less than the mass $m = r_s/2$. An excellent introduction is Matt Visser’s exposition, arXiv:0706.0622v3.

Here are (i) oblate spheroidal coordinates for space, modified by adding $\tan^{-1}(a/r)$ to the usual angle $\varphi$, (ii) some useful abbreviations, and (iii) the Boyer-Lindquist model:

$$x + iy = (r + ia)\,e^{i\varphi}\sin\theta, \qquad z = r\cos\theta,$$

$$\rho^2 = r^2 + a^2\cos^2\theta, \qquad \Delta = r^2 - r_s\,r + a^2,$$

$$ds^2 = -\Big(1 - \frac{r_s r}{\rho^2}\Big)dt^2 - \frac{2\,r_s r\,a\sin^2\theta}{\rho^2}\,dt\,d\varphi + \frac{\rho^2}{\Delta}\,dr^2 + \rho^2\,d\theta^2 + \Big(r^2 + a^2 + \frac{r_s r\,a^2\sin^2\theta}{\rho^2}\Big)\sin^2\theta\,d\varphi^2.$$

Now we descend into the black hole in stages: first define

$$r_{E\pm} = \frac{r_s}{2} \pm \sqrt{\Big(\frac{r_s}{2}\Big)^2 - a^2\cos^2\theta}, \qquad r_{\pm} = \frac{r_s}{2} \pm \sqrt{\Big(\frac{r_s}{2}\Big)^2 - a^2},$$

so $r_{E-} \le r_- \le r_+ \le r_{E+}$. The $r_{E\pm}$’s define the inner and outer *ergospheres*, the $r_\pm$’s the inner and outer *horizons*. At $r = r_{E+}$ the coefficient $1 - r_s r/\rho^2$ of $-dt^2$ goes negative, so $t$ ceases to measure time and $\varphi$ must increase on all future oriented time-like paths. At $r = r_+$ the metric appears to blow up ($\Delta = 0$), but, as in the Schwarzschild case $a = 0$, the proper time along time-oriented geodesics is nonetheless bounded all the way up to the horizon. We must call in an infinite change of coordinates, now in *both* $t$ and $\varphi$:
$$t_{\text{Kerr}} = t + \int \frac{r_s\,r}{\Delta}\,dr, \qquad \varphi_{\text{Kerr}} = \varphi + \int \frac{a}{\Delta}\,dr.$$
The result is now the original metric proposed by Roy Kerr, shown here in both Cartesian and modified oblate coordinates (dropping the subscript “Kerr”):
$$\begin{aligned} ds^2 = {} & -dt^2 + dx^2 + dy^2 + dz^2 \\ & + \frac{r_s r^3}{r^4 + a^2 z^2}\left(dt + \frac{r(x\,dx + y\,dy) + a(y\,dx - x\,dy)}{r^2 + a^2} + \frac{z\,dz}{r}\right)^2, \end{aligned}$$

$$\begin{aligned} ds^2 = {} & -dt^2 + dr^2 - 2a\sin^2\theta\,dr\,d\varphi + \rho^2\,d\theta^2 + (r^2 + a^2)\sin^2\theta\,d\varphi^2 \\ & + \frac{r_s r}{\rho^2}\left(dt + dr - a\sin^2\theta\,d\varphi\right)^2, \end{aligned}$$

where, in the first form, $r = r(x,y,z)$ is determined implicitly by $\frac{x^2 + y^2}{r^2 + a^2} + \frac{z^2}{r^2} = 1$.
In both cases, each first line is the same flat Minkowski metric and each second line is the same square of a time-invariant null 1-form, null for both the flat metric and the resulting Kerr metric. Don’t try verifying this unless you have plenty of both paper and patience. Imposing time and rotational invariance and requiring the Ricci curvature to vanish was how Kerr was led to the above definition. Each form is useful for showing the behavior in different ways. The determinant of the metric is $-\rho^4\sin^2\theta$, so we have a bona fide metric wherever $\rho \ne 0$, i.e., outside the ring $\{z = 0,\ x^2 + y^2 = a^2\}$, where curvatures do explode. What happens to a voyager braving this black hole? Between $r_{E+}$ and $r_+$ we saw he must begin to turn around the hole in the $\varphi$ direction. In fact, a logarithmic infinity needed to be added to $\varphi$ to get a regular metric, so in the original coordinates (those of the external observer), he must spin infinitely often around the hole before crossing the horizon (though not feeling dizzy in his local inertial frame). After crossing the outer horizon, $r$ and $t$ switch roles, and $r$ is monotonically decreasing on all future directed time-like paths, roughly tracking the explorer’s proper time backwards. The inner horizon $r = r_-$ is called a *Cauchy horizon* because everything inside it is in the future light cone of the singularity, hence Einstein’s equations are no longer well posed. It gets worse: you can even go through the singular ring by allowing $r$ to become negative, and there you can spiral back in “time,” i.e., there are closed future oriented time-like trajectories. At this point, physicists throw up their hands and say enough is enough. But mathematicians do not and, in fact, there is an infinite chain of new universes connected by similar coordinate changes with which you can make a geodesically complete model. This is spelled out in O’Neill’s book, *The Geometry of Kerr Black Holes*, a few of whose conclusions are diagrammed in Figure 3 using Kruskal-Szekeres-type coordinates. 
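The nesting of the ergospheres and horizons, at least, is easy to test numerically. Here is a sketch; the values of $r_s$ and $a$ are arbitrary illustrative choices:

```python
import math

# Illustrative values only: horizon scale r_s and spin a, with a < m = r_s/2
rs, a = 1.0, 0.4
m = rs / 2

def delta(r):                      # Delta = r^2 - r_s r + a^2
    return r * r - rs * r + a * a

# Horizons: the roots of Delta
r_plus = m + math.sqrt(m * m - a * a)
r_minus = m - math.sqrt(m * m - a * a)

# Ergospheres: the roots of rho^2 - r_s r = r^2 - r_s r + a^2 cos^2(theta)
def r_ergo(theta, sign):
    return m + sign * math.sqrt(m * m - (a * math.cos(theta))**2)

assert abs(delta(r_plus)) < 1e-12 and abs(delta(r_minus)) < 1e-12
# Nesting r_E- <= r_- <= r_+ <= r_E+ holds at every latitude:
for k in range(100):
    th = math.pi * k / 99
    assert r_ergo(th, -1) <= r_minus + 1e-12
    assert r_plus <= r_ergo(th, +1) + 1e-12
print(r_minus, r_plus)  # 0.2 and 0.8, up to rounding
```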
The opening image of a black hole is a simulation of the Kerr model modified to include radiating matter swirling inwards towards the horizon and distorted by the hole’s curvature.

The Taub-NUT space-time is such a truly bizarre example that I can’t resist describing it. Its relatively tame part was discovered by A. H. Taub in 1951. The suffix NUT is not a suggestion of its weirdness but an abbreviation of the three authors Newman, Unti, and Tamburino, who (metaphorically) crossed its horizon in 1963. It starts, let’s say, at some initial time with space being a constant curvature 3-sphere but with a treacherous second fundamental form in 4D space-time reflecting the Hopf fibration $S^1 \hookrightarrow S^3 \to S^2$. It is only a matter of time before “all hell breaks loose,” the fibers of the fibration having lengths going to zero and all time-like geodesics veering wildly away in one of two directions. Here’s the scoop.

If we take the 3-sphere as the unit sphere $\{|z_1|^2 + |z_2|^2 = 1\}$ in complex 2-space, the Hopf map is given by $(z_1, z_2) \mapsto z_1/z_2 \in \mathbb{C} \cup \{\infty\} \cong S^2$. If we take coordinates $z_1 = \cos(\theta/2)\,e^{i(\psi+\varphi)/2}$, $z_2 = \sin(\theta/2)\,e^{i(\psi-\varphi)/2}$ with $0 \le \theta \le \pi$, then $(\theta, \varphi)$ are the usual angle coordinates on the quotient 2-sphere and $\omega = \frac{1}{2}(d\psi + \cos\theta\,d\varphi)$ is easily checked to be the unit 1-form along the Hopf fibers, perpendicular to the quotient directions. The idea is to put a metric on the 3-sphere with time-varying different weights on the fibers and on the quotient of the Hopf map. We now define the Taub metric on $I \times S^3$ ($I$ an interval of time) as follows:
$$ds^2 = -\frac{dt^2}{U(t)} + 4l^2\,U(t)\,\big(d\psi + \cos\theta\,d\varphi\big)^2 + (t^2 + l^2)\big(d\theta^2 + \sin^2\theta\,d\varphi^2\big),$$

$$U(t) = \frac{l^2 + 2mt - t^2}{t^2 + l^2},$$

with constants $m$ and $l > 0$, valid on the interval $t_- < t < t_+$ where $U > 0$, whose endpoints are the roots $t_\pm = m \pm \sqrt{m^2 + l^2}$ of $U$.
Remarkably, this choice of coefficients satisfies Einstein’s vacuum equations. As $t$ approaches $t_\pm$, distance along the Hopf fibers shrinks and space collapses to a 2-sphere. Geodesics, however, do a strange thing: except in symmetric situations, they loop around the Hopf fibers infinitely often while *still having finite length*! Figure 4, left, is a picture in the $(t, \psi)$ plane, where the blue geodesic is shown looping around twice (dotted when it comes around).
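The Hopf coordinates themselves are easy to sanity-check by machine; here is a sketch assuming the standard choice $z_1 = \cos(\theta/2)e^{i(\psi+\varphi)/2}$, $z_2 = \sin(\theta/2)e^{i(\psi-\varphi)/2}$:

```python
import sympy as sp

theta, phi, psi = sp.symbols('theta phi psi', positive=True)
z1 = sp.cos(theta / 2) * sp.exp(sp.I * (psi + phi) / 2)
z2 = sp.sin(theta / 2) * sp.exp(sp.I * (psi - phi) / 2)

# (z1, z2) lies on the unit 3-sphere:
norm2 = sp.simplify(sp.expand_complex(z1 * sp.conjugate(z1) + z2 * sp.conjugate(z2)))
print(norm2)  # 1

# The Hopf map z1/z2 depends only on (theta, phi):
# it is the stereographic point cot(theta/2) e^{i phi}, independent of psi
print(sp.simplify(z1 / z2 - sp.cot(theta / 2) * sp.exp(sp.I * phi)))  # 0
```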

But this isn’t the craziest thing about this model. What the NUT authors showed was that, just like the black hole case with the Eddington-Finkelstein substitution, one can change coordinates and then cross the horizon with no problem. Misner called the resulting space a “Counterexample to Almost Anything.” Some of its properties are shown on the right in Figure 4. Everything above the horizon is a new space infinitely far to the right in the Taub model. The blue geodesic is the same one shown on the left but now crossing the horizon, making an excursion and curving back to the horizon in a striking way. The thick red lines in the figure are all null geodesics, light rays. The horizon itself becomes a circular null geodesic. The red zones show the sectors of future oriented time-like geodesics: space and time get seriously confused as time-like geodesics above the horizon can curve back towards the horizon. To be more careful we need some formulas.

The shift of $\psi$ giving the NUT-space and the resulting NUT metric are these:
$$\psi' = \psi \mp \int \frac{dt}{2l\,U(t)},$$

$$ds^2 = \pm\,4l\,\big(d\psi' + \cos\theta\,d\varphi\big)\,dt + 4l^2\,U(t)\,\big(d\psi' + \cos\theta\,d\varphi\big)^2 + (t^2 + l^2)\big(d\theta^2 + \sin^2\theta\,d\varphi^2\big).$$
This is now a perfectly fine metric on the entire space $\mathbb{R} \times S^3$. Moreover, note the ambiguous sign in the metric. There are actually two possible ways to enlarge Taub space beyond the horizon, one extending the geodesics that spin left and the other extending those that spin right. But if you try to patch both of them onto Taub space, it becomes non-Hausdorff! For example, the vertical geodesic in the figure crosses every geodesic, whether it’s spinning left or right, so in its closure you get two distinct limits, one in each extension. The blue geodesic shows the fate of all time-like voyagers who cross the horizon: after an excursion into NUT space, they continue to the right, looping around the $\psi$ circle infinitely often while slowly coming back to the horizon. In fact, the picture is even wilder. The whole blue geodesic to infinity has finite length again and, with a further coordinate change of the same type, it continues into yet another world. More carefully, one should first remove a Hopf fiber circle from the 3-sphere and then unwind it, allowing the angle $\psi$ to be a real number. These ideas were the subject of quite a few papers in the *Journal of Mathematical Physics* in the 60s and 70s and I have taken this from the 1973 paper by J. G. Miller that appeared there. He showed how the patching continues forever; see his Figure 1, p. 490. If I’m not mistaken, this gives an exact and physically possible version of Bill Murray’s experiences in the film *Groundhog Day* where, coming to Punxsutawney to report on the groundhog, he gets trapped in a world with circular time and ultimately escapes after his circular time spirals sideways long enough for him to find love. Well, except for the saccharine ending, I think the blue geodesic follows the movie closely.
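As with Eddington-Finkelstein, the effect of the $\psi$-shift can be verified formally. Here is a sympy sketch of the $(t, \psi)$ part of the metric, with $\omega$ standing in for $d\psi' + \cos\theta\,d\varphi$ and $U$ for $U(t)$:

```python
import sympy as sp

l, U = sp.symbols('l U', positive=True)   # U stands for U(t), positive in the Taub region
dt, omega = sp.symbols('dt omega')        # omega stands for d(psi') + cos(theta) d(phi)

# Taub form of the (t, psi) part, after the shift d(psi) = d(psi') + dt/(2*l*U):
ds2 = sp.expand(-dt**2 / U + 4 * l**2 * U * (omega + dt / (2 * l * U))**2)

# NUT form: no 1/U remains, so nothing blows up where U = 0
target = sp.expand(4 * l * omega * dt + 4 * l**2 * U * omega**2)
print(sp.simplify(ds2 - target))  # 0
```

The $dt^2/U$ terms cancel identically, which is exactly why the shifted metric crosses the horizon $U = 0$ with no problem.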

## 8. Final Ruminations

From a theory standpoint, there must be huge numbers of possible vacuum or matter-filled space-times, with or without dark energy, and with complex global structure. Most of these have no tendency to prefer (or even allow) a global splitting of space and time. Of this vast array of possibilities, scientists and mathematicians have explored only a handful. The book *Exact Solutions of Einstein’s Field Equations*, another very heavy tome, with five authors, H. Stephani, D. Kramer, M. MacCallum, C. Hoenselaers, and E. Herlt, is a pretty exhaustive survey. What it covers, however, are all metrics with some simplifying properties: symmetries, restricted curvature tensors, hypersurfaces in 5-space, metrics of special forms like Kerr’s. In this *pot pourri*, we have touched on only a couple.

For me, the biggest takeaway is the fact that time itself is such a tenuous thing that, by hitching your body to light rays or visiting any place where space-time is seriously warped, you can break free of the constraints of its normal passage. Like Miranda in *The Tempest*, we find wonders when our small isolated world is opened to the larger universe. But returning to real life now, any speculation about where cosmology is going must take into account that we are in the midst of an explosion of data from deep space and all sorts of new situations are coming to light. There are indications that galaxy clusters can have coherent motion like galaxies themselves, that huge black holes are not always tucked safely in galaxy cores but can be ejected, reminiscent of Newtonian 3-body solutions in which one body is ejected after a near triple collision. Perhaps the rotations of black holes are not merely a *local* distortion of space-time and maybe dark matter consists of black holes. Gravitational waves are opening up brand new avenues for probing space-time. However, unless causality itself is violated, our ability to observe and measure the universe will still be limited to observations on or near our past light cone. We can never observe all that much about the interior of our past light cone, unless some sort of cosmic “mirror” is invented or discovered whereby we see the past of our own neighborhood. And we need to wait for eons to pass before we’ll see what our cosmic “neighbors” are doing now or before the explorer of Sag A* that I started with returns. I submit that it is likely that we will never be able to satisfy the human drive to know the “big picture,” the beginning and end of “all there is.” The confidence of many cosmology popularizers today may wind up sounding as mistaken as the firm conclusions of so many former generations.

Article DOI: 10.1090/noti2364

## Credits

Opening image is courtesy of NASA’s Goddard Space Flight Center/Jeremy Schnittman.

Figures 1, 3, and 4 are courtesy of David Mumford.

Figure 2 is courtesy of the American Astronomical Society. Reproduced with permission.

Photo of David Mumford (in his teens) is courtesy of Grace Mumford.