A model of an infinite universe is postulated in which both space and time expand together and are scaled by a time-dependent length scale, δ(t). The Ricci tensor and Ricci scalar both vanish identically, so the Einstein field equations reduce to a balance between the time-dependent, spatially averaged stress energy tensor Tμν and its scalar invariant T/2 times the metric tensor. The divergence of Tμν − (T/2)gμν is zero, so the conserved quantity is G* = ρc²/Gδ²(t), where c is the speed of light, ρ is the rest mass density, and G* is the quantum field theory prediction, the so-called "worst prediction in the history of physics." The implications of this single time-dependent length scale hypothesis for our physical space are explored using the rules of tensor analysis. These imply that the length scale grows linearly with t, which itself varies exponentially with atomic clock time. The Hubble parameter is just H(t) = 1/t, where t is the age of the universe, so the universe expansion rate is slowing down. The Hubble parameter can be expressed in terms of the red-shift parameter z as H(z(t)) = Ho[1 + z], where Ho is its current value. Ho = 63.6 km/s/Mpc is in excellent agreement with a large number of observations and implies that the universe began 15.4 × 10⁹ years ago. Excellent agreement with recent astronomical measurements is demonstrated with neither dark matter nor dark energy.

This paper was originally presented at Purdue University by the first author on April 11, 2022 before any James Webb Space Telescope (JWST) pictures were released.1,2 We predicted a very different universe than was previously believed to exist. It treated Einstein's Field equations as though they represented a “fluid” where millions of galaxies and gas made up the “fluid particles.” Using ideas from turbulence theory (e.g., Refs. 3 and 4), we imagined an infinite universe, in which both time and length scales evolved together. This paper has been available on our website5 for the past 3 years and has been presented at numerous American Physical Society Physics and Fluids meetings.6–10 Our predictions appear to resolve many of the questions raised by recent JWST data (see, for example, the critical commentary by Neil deGrasse Tyson11). In particular, our predicted universe (like JWST's) is much older, it is much more energetic looking back in time, it does not have a Hubble “crisis,” it does not need to be corrected for 1+z (“tired light”), and it does not need either dark matter or dark energy. Yet it agrees wonderfully with all the data prior to JWST, including the “worst prediction in the history of physics.”12 Given its historical value as a real “prediction,” we have resisted the urge to re-write it and present it here in its original form with only minor, mostly editorial changes.

Einstein's field equations were first presented over the period from 1914 to 1917. These have been collected in Volume 6 of his collected papers.13 Originally, Einstein (and most of the astronomical community) believed the universe to be static, but he could only find static solutions to the universe by introducing a cosmological constant. Shortly after hearing of Einstein's work, both Friedman14 and LeMaitre15 proposed unsteady solutions that suggested the universe might be expanding. Subsequent astronomical observation by Hubble and coworkers16 confirmed that indeed it was. In a paper with De Sitter,17 Einstein finally recognized the essential correctness of the unsteady solutions. Einstein's failure to recognize early on the implications of his equations was regarded by him as his greatest mistake. This history and theory have been discussed in thousands of places; among the most useful in this context are Refs. 18 and 19. The recent historical review20 of the Einstein/De Sitter collaboration is particularly relevant to this paper.

The prevailing model for the universe is based on the so-called Friedman–LeMaitre–Robertson–Walker equations. It is these equations, together with the astronomical observations claiming that the universe is expanding at an accelerating rate that have been used to argue for the need for dark energy. They have been discussed in detail in many places as well; here, we present Eqs. (8.74)–(8.76) of Carroll,19 which nicely summarize the critical density problem. He defines a critical density, ρc, and a density parameter, Ω, as
ρc ≡ 3H²/8πG, (1)
Ω ≡ ρ/ρc = 8πGρ/3H², (2)
where ρ is the mass (or energy) density of matter in space, G is the gravitational constant, and H is the Hubble parameter. These together with the FLRW equations imply
Ω − 1 = κU/(a²H²), (3)
where a is the expansion rate parameter in the FLRW-model and κU is the curvature of space.

The problem is that all of the available astronomical evidence (since the Hubble telescope especially) suggests that space is nearly flat, so κU = 0. However, all of the observations of matter in the universe suggest that Ω = ρ/ρc ≈ 1/3, not 1. Clearly, both cannot be true. So either the model is wrong, or there must be some as yet unobserved matter (or energy) to make up the difference, or both. Hence the search for dark energy and dark matter. (Note that mass and energy can be considered equivalent, since for rest mass e = ρc².) This is the cosmological crisis of our generation.
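The mismatch can be made concrete with a few lines of arithmetic. The sketch below is ours, not the paper's; it evaluates the standard critical density ρc = 3H²/8πG at the Ho = 63.6 km/s/Mpc adopted later in this paper and shows the two-thirds shortfall when Ω ≈ 1/3:

```python
import math

# Illustrative constants (SI); the H0 value is the one adopted in this paper.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22      # metres per megaparsec
H0 = 63.6e3 / Mpc    # 63.6 km/s/Mpc expressed in s^-1

# Critical density, rho_c = 3 H^2 / (8 pi G)
rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)

# Observed matter corresponds to Omega = rho/rho_c ~ 1/3, not 1
Omega = 1.0 / 3.0
print(f"rho_c ~ {rho_c:.2e} kg/m^3; missing fraction if flat: {1 - Omega:.2f}")
```

The result is of order 10⁻²⁶ kg/m³, and two-thirds of it is unaccounted for by observed matter, which is the gap dark matter and dark energy were invented to fill.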

In this paper, we consider an alternative to the prevailing theory (v. chapter 8 of Ref. 19 for a comprehensive presentation). We propose that space is indeed flat and infinite and expanding but needs neither dark energy nor dark matter to explain it. Moreover, for our theory the universe is in fact decelerating. Yet, we argue that it agrees with the preponderance of the same astronomical observations previously used to conclude that it is accelerating.21,22 It agrees with recent Hubble, cosmic background, and galaxy number observations as well.23–26 Most importantly, our theory provides a direct link to the quantum field theory (QFT) estimate (the so-called "worst prediction in the history of physics") and in fact treats it as the initial condition.

Finally, in this first version of our work we have deliberately put in as much detail as possible with the goal of making it straightforward to understand, even if tedious to read. Given the importance of the conclusions and our newness to the field (see postscript), we have tried to make our analysis as transparent as possible so it should be easy to identify where we might have gone wrong. So, hopefully any graduate student should be able to follow our analysis. However, it should be possible as well, for those who find math boring, to simply skip over the math and go to the principal conclusions and deductions beginning with Sec. VI. Or, for those who do not like to read at all, simply skip to the figures. They speak for themselves.

Instead of seeking a solution in which space alone expands, a solution of the Einstein equations is sought for the universe in which both time and space expand together. As the quote below makes clear, we are not the first to consider this idea

I believe that the times and distances which are to be used in the Einstein's general relativity are not the same as the times and distances which were to be provided by atomic clocks. There are good theoretical reasons for believing that that is so, and for the reason that the gravitational forces are getting weaker compared to electric forces as the world gets older. (Paul Dirac, Göttingen Interview, 198227)

In the tradition of fluid mechanics and turbulence (c.f. Refs. 3 and 4), a similarity solution is sought from Einstein's field equations in the form18,19
Rμν = (8πG/c⁴)[Tμν − (T/2)gμν], (4)
where Rμν is the Ricci tensor, Tμν is Einstein's stress energy tensor, gμν is the metric tensor, c is the speed of light, and T is the scalar invariant defined by
T = gμνTμν. (5)

The indices μ and ν can take values of 0, 1, 2, and 3. We interpret all quantities as ensemble averages over an infinity of statistically identical universes. Note for future use that as written here the contravariant stress energy tensor Tμν has dimensions of density times velocity squared. We also note for future consideration that we do not require the divergence of the stress energy tensor to be zero, so T is not related to the Ricci scalar, R = gμνRμν, as used in Ref. 19. Hence, our use of Eq. (4) is not restricted to empty space if the Ricci scalar turns out to be zero.

We hypothesize the existence of a spatially homogeneous universe which in similarity coordinates, say (τ,η), is completely at rest; meaning that all averaged velocities (over many galaxies) in these coordinates are zero. We require that the metric describing this τ,η-space is Minkowski, i.e.,
ds̃² = gμν dxμdxν = −dτ² + dη·dη, (6)
where gμν = diag[−1, 1, 1, 1]. As will be clear from their definitions below, all of these variables (including ds̃) are dimensionless. The dimensionless time, τ, shall be assumed to be the same everywhere in the universe at an instant, and monotonically increasing in equal increments from τ = −∞ to the present.
Homogeneous turbulent flows are known to evolve so that both time and spatial coordinates can be scaled by a single time-dependent length scale (v. Refs. 3, 4, and 28). We propose that the same is true for our homogeneous infinite universe; namely, if the length scale is δ(t)=δ̃(τ(t)), then dτ=dt/δ(t), dη=dx/δ(t). It follows that τ and η are related to our physical space, say (t,x), by the following:
τ = ∫_{t1}^{t} c dt′/δ(t′), (7)
η = x/δ(t), (8)
where δ(t) is an unspecified length scale to be determined and t1 is a reference time. We will choose τ=0 to correspond to the virtual origin of our theory, i.e., the time at which the Big Bang appears to have occurred looking backward in τ,η-space. Clearly, t1 must be greater than zero. We shall see later (v. Sec. VIII D) that a convenient choice for t1 is proportional to the Planck time, tP, since it measures the increasing importance of gravity relative to quantum forces. The spatial coordinate η=x/δ(t) is a true co-moving coordinate.29 

Since we have assumed τ and η to be measured in equal increments, clearly these equal increments do not correspond to equal increments in t and x. We note the comment of Dirac at the beginning, who seems to have been thinking along the lines we have postulated. Our t should be thought of as a gravitational or mechanical time that governs our physical laws. The dimensionless time τ, on the other hand, with its equal increments very much resembles an atomic clock. However, τ could be measured in any convenient way, for example, by using the temperature of the universe presumed to be cooling (suggestion of Ref. 30). We shall do exactly this in Sec. VI E.

Note that throughout this paper we use the symbols ct,x, and xμ interchangeably to simplify the discussions below. Similarly, τ,η is used interchangeably with xμ. Our use of primed and unprimed indices follows the convention of Ref. 31. Repeated Greek indices are assumed summed over 0, 1, 2, and 3 with 0 representing time; and repeated Latin indices are summed only over 1, 2, and 3. The mathematical details along with the basis vectors and Jacobians in physical space and the Christoffel symbols are included in appendixes. Here, we list only the metric tensors in physical space
g00 = [(δ̇/c)²|x|²/δ² − 1]/δ², g0i = gi0 = −(δ̇/c)xi/δ³, gij = δij/δ², (9)
where δ̇ = dδ/dt. The determinant is g = −1/δ⁸.
The corresponding contravariant metric tensor in physical space, gμν, is readily computed to be
g00 = −δ², g0i = gi0 = −δ(δ̇/c)xi, gij = δ²δij − (δ̇/c)²xixj. (10)

Its determinant is 1/g = −δ⁸.
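These determinant and inverse relations are easy to verify numerically. The component expressions used below are our own reconstruction from the coordinate definitions τ = ∫c dt/δ and η = x/δ (with βi = (δ̇/c)xi/δ), not a quotation of Eqs. (9) and (10):

```python
import numpy as np

# Arbitrary test values for the length scale and for beta_i = (delta_dot/c) x_i / delta
rng = np.random.default_rng(0)
delta = 2.7
beta = rng.uniform(-0.3, 0.3, 3)
b2 = beta @ beta

# Covariant metric: g_00 = (beta^2 - 1)/delta^2, g_0i = -beta_i/delta^2, g_ij = delta_ij/delta^2
g_cov = np.empty((4, 4))
g_cov[0, 0] = (b2 - 1.0) / delta**2
g_cov[0, 1:] = g_cov[1:, 0] = -beta / delta**2
g_cov[1:, 1:] = np.eye(3) / delta**2

# Contravariant metric: g^00 = -delta^2, g^0i = -delta^2 beta_i,
# g^ij = delta^2 (delta_ij - beta_i beta_j)
g_con = np.empty((4, 4))
g_con[0, 0] = -delta**2
g_con[0, 1:] = g_con[1:, 0] = -delta**2 * beta
g_con[1:, 1:] = delta**2 * (np.eye(3) - np.outer(beta, beta))

assert np.isclose(np.linalg.det(g_cov), -1.0 / delta**8)
assert np.allclose(g_con @ g_cov, np.eye(4))
print("det g = -1/delta^8 and g^{mu nu} g_{nu lambda} = identity both check out")
```

The check passes for any βi, since the 3×3 spatial block is the identity and the off-diagonal terms cancel exactly in the determinant.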

We note that the definition of proper time is given by the following integral19:
τproper = ∫ [−gμν(dxμ/dλ)(dxν/dλ)]^1/2 dλ. (11)

Substituting the covariant metric from Eq. (9) shows that τproper is exactly equal to the τ defined by Eq. (7). This is not surprising since we obtained the metric from our definitions of τ and η in physical variables.

Interestingly, if we use only the g00 contribution to compute a “proper time,” say τ̃00, we obtain a result that looks more like special relativity, i.e.,
τ̃00 = ∫ (−g00)^1/2 d(ct) (12)
= ∫ (c/δ)[1 − (δ̇/c)²|x|²/δ²]^1/2 dt (13)
= ∫ (c/δ)[1 − vr²/c²]^1/2 dt, (14)
where we have used a result we shall derive later that the mean radial expansion velocity in physical space is
v = (δ̇/δ)x. (15)

In subsequent analysis, we will use only the proper time defined using the entire metric tensor. This is of course our original definition of Eq. (7).

We can see then that the basic premise of this paper is that we think we are living in one space (τ,η-space). However, we are really living in the other (t,x-space). We shall explore this difference in detail. We shall show that at least some of what we believed to be true probably was not. However, the data were correct; only the interpretations of at least some of it were not.

It is straightforward (e.g., using Maple and Mathematica) to show that the Riemann and Ricci tensors and the Ricci scalar are identically zero. This is no surprise, since the original τ,η-space was assumed to be flat (Minkowski). So, the transformed space must be as well.

Clearly, an expanding universe does not require a Riemann tensor with curvature. In our quasi-equilibrium model, the universe has no average motion in its own (τ,η)-coordinates, but it does in ours (t,x). Note, however, that the form of our metric tensor in physical coordinates does appear to have curvature. However, these extra terms, [(δ̇/c)|x|/δ], vanish for small distances relative to Ro = ct. Even when these terms are not small, however, the Riemann tensor is identically zero.

In Secs. III–XII, we shall explore first the kinematics of our assumed universe and show it is consistent with many astronomical observations. Then, we shall show what the vanishing Riemann tensor implies about the stress energy tensor and explore its consequences for the dynamics. Finally, we shall try to outline how our theory might be used to interpret (or reinterpret) recent observations on the distribution of mass and clusters throughout the universe.

How the velocities in the two coordinate systems behave and relate to each other is clearly of prime interest. In this section, we consider their behavior in both spaces. We discover that the length scale is determined by them.

We first define a displacement field in our expanding-space-time coordinate system to be ηp(τ,η). Note that we use the subscript p to distinguish the proper-time-dependent displacement field from the independent variables τ,η. Then, the four-dimensional "velocity" is Vp(τ,η) = ∂ηp/∂τ. However, only time passes; otherwise nothing moves (by assumption). So, the velocity in this space is simply (1, 0, 0, 0), since only ∂τp(τ,η)/∂τ is non-zero and equal to one.

Now, what we would really like to know is how Vp looks in physical space (or t,x-space). Unfortunately, we cannot simply transform it like an ordinary vector, since derivatives of vectors in general do not transform like tensors. We must use covariant derivatives and their associated Christoffel symbols. Note that we did not need these for Vp in τ,η-coordinates since in the hypothesized Minkowski space the Christoffel symbols are all zero.

If we define xp(x,t) to correspond to our function ηp(τ,x), we can write our covariant derivative condition as
(16)
where we have used primes to distinguish which set of coordinates we are using. Note that since the basis vectors are themselves time-dependent, a necessary condition for solution is that the terms in curly brackets be zero for each value of ν.
The problem will be considerably simplified if we transform Eq. (16) into derivatives with respect to an affine (equally spaced) parameter, say λa. Since the tangent to the curve is given by dxpμ/dλa, we can multiply Eq. (16) by it to obtain
(17)
Note that the first term is simply the directional derivative. Also, note that we can choose the proper time for λa, so we choose τ=λa.

In Secs. III C–III E, we shall first consider x0=t. Then, in the next, xν for ν=1,2, and 3.

From Eq. (A7) in  Appendix A, the only non-zero Christoffel symbol for ν=0 is Γ000. So, Eq. (17) reduces to
(18)
It is pretty obvious that the only solution is for
(1/δ)(dδ/dt) = 1/t, (19)
since xp0=t. We did not need to use the fact that λa=τ, but we will use it later.
Equation (19) can be integrated directly to yield
δ(t)/δ(t1) = t/t1, (20)
where δ(t1) is the value of δ(t) at time t1.
On dimensional grounds alone, we recognize that the coefficient between δ(t) and t must be the speed of light, c. We can without loss of generality define
δ(t) = ct. (21)

This choice of coefficient ensures that δ(tP) is the Planck length scale if tP is the Planck timescale. This will prove to be important later. It is humbling (but quite reassuring) to note that we could have deduced Eq. (21) by dimensional analysis alone.

We shall see in Sec. VI that H=δ̇/δ is the Hubble parameter. So, clearly H=1/t. So, if to is the age of the universe, then 1/Ho=1/H(to)=to is the age as well. This is consistent with wide speculation for many decades but not previous theories. As noted by Carroll and Ostlie,32 almost all previous “ages” of the universe have been calculated using a Friedmann model. So, our “ages” and “times” will be different from that in common use.
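Since H(t) = 1/t, the age follows from Ho by pure arithmetic. A two-line check (ours, using the Ho = 63.6 km/s/Mpc adopted in this paper) reproduces the 15.4 × 10⁹ year age:

```python
Mpc = 3.0857e22       # metres per megaparsec
yr = 3.156e7          # seconds per year

H0 = 63.6e3 / Mpc     # the paper's H0 = 63.6 km/s/Mpc, expressed in s^-1
t0 = 1.0 / H0         # age of the universe if H(t) = 1/t
print(f"t0 = {t0:.2e} s = {t0 / yr / 1e9:.1f} x 10^9 years")
```

This also matches the 4.9 × 10¹⁷ s figure quoted in the caption of Table I.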

From Eqs. (7) and (21), the relation between our proper time τ and t reduces to just
τ = ln(t/t1). (22)

So, time in our reference space is logarithmically related to our physical space time t. Note that by hypothesis, it is the increments of τ that are equally spaced, and increments of physical space time which are stretched.33 

Alternatively, we can express Eq. (22) this way
t = t1e^τ. (23)
Equations (21) and (22) imply that in τ,η-space the length scale varies exponentially with τ, i.e.,
δ̃(τ) = ct1e^τ, (24)
where we have defined δ̃(τ)=δ(t(τ)) to distinguish its different independent variables.
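A quick numerical check (our sketch; the values of c and t1 are arbitrary placeholders) confirms that the definition τ = ∫ c dt′/δ(t′) with δ = ct integrates to ln(t/t1), and that the length scale then grows as ct1e^τ:

```python
import math

c, t1 = 3.0e8, 1.0e-10   # c in m/s; t1 is an arbitrary reference time for illustration

def tau_of_t(t, n=100_000):
    # Crude midpoint integration of d(tau) = c dt'/delta(t') with delta = c t',
    # i.e. d(tau) = dt'/t'
    h = (t - t1) / n
    return sum(h / (t1 + (i + 0.5) * h) for i in range(n))

t = 5.0 * t1
tau = tau_of_t(t)
assert abs(tau - math.log(t / t1)) < 1e-6          # tau = ln(t/t1), Eq. (22)
assert abs(c * t1 * math.exp(tau) - c * t) / (c * t) < 1e-6   # delta = c t1 e^tau
print(f"tau = {tau:.6f}, ln(5) = {math.log(5.0):.6f}")
```

Equal increments of τ thus correspond to exponentially stretching increments of t, which is the atomic-clock versus gravitational-time distinction raised in the Dirac quote.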
The equations for ν=1, 2, and 3 are considerably more complicated since there are three non-zero Christoffel symbols for each. We will consider ν=1 first, but the others will be similar. Our task is somewhat simplified by the ν=0 result: we need only show here that the δ(t) we deduced in Eq. (19) is a solution to these equations as well. Substituting into Eq. (17) using the values for ν=1 yields
(25)

The first term in square brackets is clearly zero if we substitute the result of Eq. (19). It is easy to show by differentiation that this result also makes the curly bracketed term on the right zero as well.

So what is the 1-component of velocity, say up1? It is given by
(26)
The last expression can be obtained by factoring out δ̇/δ and substituting using Eq. (19). It ends up being the ratio of two terms that are zero but exactly the same. So, we are left with
up1 = (δ̇/δ)xp1. (27)
Similar equations result for i=2 and 3. So, we have
upi = (δ̇/δ)xpi, i = 1, 2, 3. (28)

This is the result [Eq. (15)] we used in Sec. II D. Note that the material derivative of Eq. (28) is zero, consistent with our assumed rest state in τ,η variables.
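The vanishing material derivative is easy to verify symbolically. The sketch below (ours, assuming δ = ct so that δ̇/δ = 1/t) checks that Du/Dt = ∂u/∂t + (u·∇)u vanishes for u = x/t:

```python
import sympy as sp

t, x1, x2, x3 = sp.symbols("t x1 x2 x3", positive=True)
x = sp.Matrix([x1, x2, x3])
u = x / t                     # u_i = (delta_dot/delta) x_i = x_i / t for delta = c t

# Material derivative: Du/Dt = du/dt + (u . grad) u
Du = u.diff(t) + sp.Matrix([sum(u[j] * u[i].diff(x[j]) for j in range(3))
                            for i in range(3)])
assert Du == sp.zeros(3, 1)   # the two terms cancel identically
print("Du/Dt = 0 for u = x/t")
```

The unsteady term −x/t² cancels the advective term (x/t)(1/t) exactly, consistent with the assumed rest state in τ,η variables.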

So, for all values of μ=0,1,2, and 3, the solution to Eq. (16) is just
(29)

In general relativity it is the geodesic equation which provides the counterpart to Newton's Law. In this section, we derive it for our proposed universe.

If xpμ(λa) is a world line (particle trajectory), then the trajectory that minimizes τ is given by the geodesic equation
d²xpμ/dλa² + Γμρσ(dxpρ/dλa)(dxpσ/dλa) = 0. (30)

Note that in Einstein's interpretation it is the negative of the second term on the left-hand side (the one with the Christoffel symbol) which represents the gravitational potential gradient.

We choose the proper time, τ defined by Eq. (7) as our affine parameter, noting that it is the same as the proper time defined by Eq. (11).

So, we can re-write our geodesic equation using proper time as
d²xpμ′/dτ² + Γμ′ρ′σ′(dxpρ′/dτ)(dxpσ′/dτ) = 0. (31)
Note that we have also replaced xμ(τ) by xpμ(τ) to emphasize that the latter is NOT an independent variable but instead a parameterized path. We have used primes on the indices since these are in our primed coordinates. Note that xp(τ) is a hybrid quantity. It is the parameterized spatial displacement in our physical coordinates but written as a function of τ, not t. So, xp(τ)=δ(t)ηp(τ), where ηp(τ) is the parameterized path in τη-space. Finally, recall that x0=t, so
dxp0/dτ = dt/dτ = δ(t)/c. (32)
We can expand Eq. (31) for μ=1 to obtain
(33)
where we have kept only the non-zero Christoffel symbols. Substituting for the Christoffel symbols and re-arranging yields
(34)
(35)
There are identical equations for μ=2 and 3. So, combining all three yields
(36)
where i=1,2, and 3.
Our problem is that the coefficients are dependent on t, while the xp's are functions of η and τ. We know that xp(τ)=δ(t)ηp(τ)=δ̃(τ)ηp(τ). Plus, we know that ηp(τ)=ηp(0) is time independent (by hypothesis). Thus, we can write
(37)
However, we can use the chain-rule to convert d/dτ to (δ/c)d/dt with the result
(38)
Since this equation must be satisfied for any ηp(0), the term in curly brackets on the right-hand side must be itself identically zero. The bracketed term reduces to
(39)

Interestingly, this is satisfied for ALL δ(t), no matter the particular functional form of δ(t). So, the geodesic equation is satisfied.

Almost everything we know about the universe is due to light propagation. Light propagation is governed by Maxwell's equations. Therefore, we need to consider how they change with coordinate system in order to interpret them in either (x,t) or (τ,η)-space. We must also understand how this affects the spectra and intensities of the radiation received.

Carroll19 writes Maxwell's equations in curvilinear coordinates as
∇μFμν = Jν, (40)
where ∇μ is the covariant derivative, and Fμν is defined as
(41)
(42)
for i=1,2, or 3, where E and B are the electric and magnetic field vectors, respectively. The charge, q, and current, J, distributions have been combined into a four vector, Jμ, defined as
Jμ = (q, J). (43)

Now, all of the usual rules for tensor transformation apply. In particular, since we know the Jacobians and the metric tensors, we can easily move these from τ,η-space to our t,x-space, or vice versa; or from contravariant to covariant components, or vice versa. It is easy to show that the left-hand side of the equation will look exactly the same in either coordinate system, a consequence of the linearity and of Lorentz invariance. However, since Jν′ = Jν′ν Jν, where Jν′ν is the Jacobian, the source terms in one frame will differ by a factor of δ from the other. For example, if we specify a charge source distribution in the τ,η-frame by say, q(η,τ), then in the other frame it will appear multiplied by 1/δ(t). Also, a charge at rest in one representation will appear as a current in the other.

The same will be true for the electric and magnetic field vectors as well. Since the power radiated is the cross product of the electric and magnetic field vectors, the power will be modified by δ(t)². However, this will exactly compensate in the inverse square law using 1/D² instead of 1/ηD² = (δ/D)², where D is a distance from the source. This will have profound implications for how we interpret intensity data from an isolated source. In particular, it means that the spectrum in physical variables will be shifted in both wavenumber and amplitude, so the power is conserved. We will discuss this in more detail in Sec. VII B, where we consider relative intensities of radiation from supernova data. Whatever space you choose to work in, both intensities must be transformed, not just one. The difference is the mysterious 1+z, which has made the universe appear to accelerate when the opposite is true.

It is easier to consider Maxwell's equations here using vector notation instead of their curvilinear coordinate representations. We can write the electric and magnetic field vectors Ẽ and B̃ and current vector J̃ as functions of η,τ. So, Maxwell's equations34 with no loss of generality reduce to
(44)
(45)
(46)
(47)
where q̃ is the charge distribution in τ,η-coordinates. We have used Gaussian variables, which absorb the dielectric and permeability constants into the definitions of the field vectors. These transformed equations themselves look exactly like the old ones in the new variables, since δ(t) (or δ̃(τ)) has been absorbed into τ. This is a consequence of their linearity. Only the source terms look different because of the length scale, δ̃(τ).
Equations (46) and (44) can be cross-differentiated and combined to form wave equations for the magnetic and electric field vectors as follows:
(48)
(49)

Note that from Eq. (24) it follows that dδ̃/dτ = δ̃, so the right-hand sides of both equations are multiplied by δ = δ̃, exactly as should have been expected from Sec. III A. As noted there, this has profound implications for how we interpret intensity data from an isolated source. It also means that the spectrum in physical variables is shifted in both wavenumber and amplitude, so the power is conserved. We will discuss this in more detail in Sec. VII.

The transformed equations for τ,η-space imply that there is no change in the wavenumber, say κ*, or frequency, say ω*, since in this space nothing is expanding. So, the phase of a wave propagating in τ,η-space from a source at ηs and time τs is given by
θ = κ*·(η − ηs) − ω*(τ − τs), (50)
Note that τs is a specific time and so not changing with time τ. However, if the star is not moving at the average velocity of the universe (assumed to be zero in τ,η-coordinates), ηs is changing with τ. If we define κ*=|κ*|, the dimensionless wavespeed is simply c*=ω*/κ* and equal to unity.
What does this look like in our physical universe? We will see these waves in our physical space propagate at frequency and wavenumber given by
ω(t) = (c/δ(t))[ω* − κ*·(dηs/dτ)], (51)
k(t) = κ*/δ(t). (52)

The last term in parentheses in Eq. (51) will be identically zero for a particular star moving with the local average velocity of the universe. It will be zero by hypothesis for an average of them over a large enough field, our continuum (or field) hypothesis. We will use Eq. (52) extensively to analyze the Hubble measurements in Sec. IV B.

Interestingly, the shift in wavenumber is completely independent of velocity and depends only on the length scale δ(t). This is consistent with the observations of Ref. 19 and others that the average redshift is not really a Doppler shift at all. The total redshift of an individual body, however, is a consequence of both the expansion of the universe and the relative motion of that particular body in it.

Any of the above wave equations, Eqs. (48) and (49), can be written in the form
□ϕe = s, (53)
where ϕe can be any component of the electric or magnetic field vectors, s is a source, and □ is the d'Alembertian defined as
□ = −∂²/∂τ² + ∇η². (54)
We are only interested here in the radially symmetrical solution, which reduces to
−∂²ϕe/∂τ² + (1/ηr)∂²(ηrϕe)/∂ηr² = s, (55)
where we have defined the radial coordinate to be ηr=|η|.
Since the stars of interest are at great distance, they can be modeled with a simple time varying source and a delta-function, i.e.,
s = s̃(τ)δD(η − ηs, τ − τs), (56)
where ηs,τs is the location of the star at time τs and δD(η,τ) is a four-dimensional Dirac delta function. (Note this δ-function should not be confused with our length scale.) For the moment, we consider only a single frequency, the frequency of the source, ω*; and we allow the wave to propagate radially through space with wavenumber, κ*, as described in the preceding section.
By transforming Eq. (55) in time to obtain a Helmholtz equation, the result can be solved directly to obtain the usual point source retarded-time solution to Maxwell's wave equations given by
ϕe ∝ exp{i[κ*|η − ηs| − ω*(τ − τs)]}/|η − ηs|. (57)

This means that the intensity of the radiation from a star in τ,η-coordinates will obey an inverse square law in |η|, and the frequencies and wavenumbers will remain unchanged from the source. In the physical space, however, the frequency and wavenumber detected at distances away from the source will be given instead by Eqs. (51) and (52). However, somewhat surprisingly, as noted in Secs. V A and V B, the intensities in physical space will drop off as an inverse square law in r, not r/δ(t). So, no “corrections” to intensities are necessary in either space.

To examine how spectra change with distance, we can consider how the spectrum of a blackbody is modified with distance from the emitting object. Since Maxwell's equations are linear, we can simply superimpose solutions and consider a whole spectrum of frequencies (or wavenumbers) summed together. So, we define F(k) where k=|k| to be the power emitted per unit projected area of the blackbody at temperature T into a solid angle in the wavenumber interval from k to k+dk
F(k) = 2hc²k³/[exp(hck/kBTs) − 1], (58)
where the constant a can be shown by integration to be 2.82,
kpeak = akBTs/hc = Ts/b, (59)
where Ts is the star temperature and b is Wien's displacement constant, and we have defined FT to be
FT = F(kpeak) = 2hc²kpeak³/(e^a − 1). (60)

Note that FT ∝ Ts³, while the spectral peak wavenumber is proportional to Ts.

So, we can define a dimensionless wavenumber, say k̄ = k/kpeak, and write the non-dimensional similarity black-body spectrum at the time, say t, when the radiation was emitted like this
F(k)/FT = k̄³(e^a − 1)/(e^{ak̄} − 1). (61)
The total power emitted per unit projected area into a solid angle at any time t is given by
∫0∞ F(k) dk ∝ Ts⁴. (62)
This is the energy flux of the radiation at the source, and it clearly depends only on the temperature Ts.
Now, we can see HOW this spectrum is propagated through τ,η-space by using the dimensionless wavenumber κ* we defined in Eq. (52)
(63)

So, the entire spectrum shifts to lower physical space wavenumbers as δ(t) increases. However, it does so in a way as to preserve an inverse square law.
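This shift can be illustrated numerically. The sketch below is ours: it uses the similarity form k̄³/(e^{ak̄} − 1) suggested by the text and reads Eq. (52) as k = κ*/δ(t), so that every received wavenumber is the emitted one divided by 1 + z:

```python
import numpy as np

z = 1.5                              # illustrative redshift
k_s = np.linspace(0.1, 10.0, 1000)   # emitted wavenumbers (arbitrary units)
F = k_s**3 / (np.exp(k_s) - 1.0)     # Planck-like similarity spectrum shape

k_o = k_s / (1.0 + z)                # each wavenumber arrives divided by (1 + z)

# The peak slides down by (1 + z), but the shape in k/k_peak is unchanged:
peak_s = k_s[np.argmax(F)]           # near 2.82, the constant quoted in the text
peak_o = k_o[np.argmax(F)]
assert np.isclose(peak_o, peak_s / (1.0 + z))
assert np.allclose(k_o / peak_o, k_s / peak_s)
print(f"peak: {peak_s:.2f} -> {peak_o:.2f} (factor 1/(1+z))")
```

The emitted-spectrum peak falls near 2.82, consistent with the constant a quoted above, and the received spectrum is an exact similarity copy at lower wavenumber.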

Hubble's law and its observations have dominated discussions in astronomy for almost a century, and never more than now.35 In this section, we show that our derived relation with the redshift parameter [defined in Eq. (68)] is in excellent agreement with the recent data.

Assume D to be the distance to some distant star or galaxy so that its normalized distance in τ,η-space is |ηp|=D/δ(t). Since we are computing derivatives of something that happened long ago, both the distance and the time should be evaluated using values corresponding to the time when the information was transmitted, say ts, not when it was received. For distant galaxies ts is much earlier than our present time, say to.

Also, we assume any "local" deviation velocity of the object, say |ΔVp|, is negligible compared to the mean velocity of the expanding universe (i.e., |ΔVp| ≪ HD), or at least "averages out" when many neighboring objects are considered. Equation (15) implies immediately that the recessional speed from us, say vr, is given by
vr = HD, (64)
where we have defined H to be
H ≡ δ̇/δ. (65)
Equation (64) looks like Hubble's law. However, it is exactly Hubble's law only if H is a constant. We have deduced that δ(t) ∝ t, so
H(t) = 1/t, (66)
where t is the “age of the universe” as measured from some virtual origin at the time H(t) is evaluated.
The time-dependent wavelength, say λ(t), is related to the time-dependent wavenumber by k=2π/λ. The wavelength transmitted from a star at time ts we denote as λs, and we assume it to be equal to the wavelength it would have had on earth. We will denote the present time by to and the wavelength when it arrives on earth as λo. So, the change in wavenumber between transmission and reception is
(67)
We define the redshift parameter z by
z ≡ (λo − λs)/λs. (68)
So, λo=λs[1+z]. We can rewrite Eq. (67) as
(69)
Also, we have using Eq. (52)
(70)
Equating Eqs. (69) and (70) yields
λo/λs = δ(to)/δ(ts), (71)
Or
1 + z = δ(to)/δ(ts) = to/ts. (72)
We can solve this for ΔH̃(z)=H(ts(z))H(to) to obtain
ΔH̃(z) = z/to. (73)

Note that H(ts) is evaluated at the time the light was emitted, ts.

We can more conveniently express ΔH̃ in terms of the present value H(to)
ΔH̃(z) = Hoz. (74)
So, the actual value of the Hubble parameter for any value of z is given by
H(z) = Ho[1 + z], (75)
where we have defined Ho=H(to) to be consistent with conventional notation. Note that Ho is the only unknown parameter.
The simple relation above can be contrasted with the standard model result (c.f. Refs. 19 and 23)
H(z) = Ho[Ωmo(1 + z)³ + Ωro(1 + z)⁴ + 1 − Ωmo − Ωro]^1/2, (76)
where Ωmo and Ωro are the current values of the non-relativistic and relativistic matter density parameters.

Ho is presumed by both theories to be the present value of the Hubble parameter. It is the only adjustable parameter in the theory presented here, whereas there are two additional ones (Ωmo and Ωro) in the standard model. Most important though is the linear dependence of H on z in our model. As will be clear in Sec. VI D, the absence of adjustable parameters will be seen to have very important implications for processing and evaluating both the data and our theory.

Before considering the Hubble data, let us note one other useful relationship that follows from the considerations above, namely the relation between z and distance away of an astronomical body, say D. Since H(ts)=1/ts and H(to)=1/to, Eq. (75) can be rewritten in the following way:
1/ts = [1/to][1 + z] (77)
where to is the present time and ts is the time when light was emitted. Or solving for ts/to
ts/to = 1/[1 + z] (78)
However, the distance away is D = c[to − ts], or expressed in terms of z
D = c to z/(1+z) = Ro z/(1+z), where Ro ≡ c to (79)
Our result that D/Ro=z/(1+z) can be contrasted with the prevailing model given by Ref. 36 as
D = [c(1+z)/Ho√|Ωk|] sinn{√|Ωk| ∫0z dz′ [(1+z′)²(1+ΩM z′) − z′(2+z′)ΩΛ]^−1/2} (80)
where Ωk = 1 − ΩM − ΩΛ, and sinn is sinh for Ωk ≥ 0 and sin for Ωk ≤ 0. The differences between the theories will prove to be crucial when we consider the supernovae data in Sec. VII C.
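The distance–redshift relation of Eq. (79) is simple enough to evaluate directly. A short sketch (ours, not the paper's; Ho = 63.6 km/s/Mpc is the fitted value from Sec. VI D), showing that D saturates at Ro = c/Ho as z grows while recovering the usual Hubble law D ≈ cz/Ho for small z:

```python
C_KM_S = 299792.458  # speed of light in km/s

def distance_mpc(z, Ho=63.6):
    """Eq. (79): D = Ro*z/(1+z) with Ro = c*to = c/Ho (in Mpc)."""
    Ro = C_KM_S / Ho
    return Ro * z / (1.0 + z)
```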

The recent highly cited paper by Yu et al.23 contains a thorough analysis of the Hubble “constant” evaluation from 36 sources. Most conveniently (for us at least), their Table I contains the data summarized as H̃(z) vs z, where z is defined by Eq. (68), for values of 0.07 ≤ z ≤ 2.36 and H̃ [km/s/Mpc] values ranging from 68.6 to 227 (along with rms error estimates). They carry out an extensive evaluation of the standard model, so we will focus instead only on comparison of their data with our new result equation (75).

TABLE I.

Epochs of logarithmic time in cosmology, with the earliest at the top; each row spans five decades of time measured in seconds after the Big Bang. The present time is approximately 4.9×10^17 s (15.4 × 10^9 years) after the Big Bang.

Log-time  Seconds after the Big Bang  Period
−45 to −40  10^−45 to 10^−40  Planck Epoch
−40 to −35  10^−40 to 10^−35
−35 to −30  10^−35 to 10^−30  Epoch of the Grand
−30 to −25  10^−30 to 10^−25  Unification
−25 to −20  10^−25 to 10^−20
−20 to −15  10^−20 to 10^−15
−15 to −10  10^−15 to 10^−10  Electroweak Epoch
−10 to −5  10^−10 to 10^−5
−5 to 0  10^−5 to 10^0  Hadron Epoch
0 to +5  10^0 to 10^5  Lepton Epoch
+5 to +10  10^5 to 10^10  Epoch of Nucleosynthesis
+10 to +15  10^10 to 10^15  Epoch of Galaxies
+15 to +20  10^15 to 10^20

The top part of Fig. 1 plots our theoretical curve of equation (75) together with the measured H̃(z) vs z-values of Yu et al.23 The error bars are from the Yu et al. paper as well. The solid line on the top figure corresponds to Ho = 63.6 km/s/Mpc, our optimal fit to the data. This value of Ho corresponds to to = 15.4×10^9 years.

FIG. 1.

H̃(z) vs z for data of Yu et al.23 Top: Eq. (75), H̃(z) = H(to)[1+z], with best fit Ho = H(to) = 63.6 km/s/Mpc (solid line). Bottom: Same plot but with alternative values H(to) = 61 (dashed line), 67 (dash-dotted line), and 72 (dotted line) km/s/Mpc.


The bottom figure shows the same optimal fit together with other popular values. The dashed lines correspond to Ho = 61, 67, and 72 km/s/Mpc. 72 is clearly too large, and 61 is too small. Either 67 or 63.6 would be acceptable choices. The mean square relative error of our fit is 9%, and the error for 67 is only a percent larger. The scatter in the figures appears to be randomly distributed, and both the 63.6 and 67 theoretical curves lie within the error bars of all but two of the data. A few outliers are largely responsible for the RMS relative error.

The only adjustable parameter in our theory is Ho. The best fit value of 63.6 km/s/Mpc is well within the error bounds of the two values of 63.8±1.3 and 65.2±1.3 inferred by Riess et al.21 Rounding it off to Ho = 64 km/s/Mpc corresponds exactly to the value deduced from gravitational lensing of SN Refsdal by Vega-Ferrero et al.37

A different dataset might produce a different value of Ho. So, also shown on the plot is the theoretical curve using the recently popular Planck value of Ho = 67 km/s/Mpc, for which the RMS relative error of the fit increases only to 10%.38 The random error varies inversely with the square root of the number of independent estimates (only 36 in the present case), so the error bounds will surely drop as more data are acquired, especially as new theoretical considerations are included in the analysis and old ones winnowed out. This will be especially true if astronomers start presenting their data in the form of Yu et al.23 instead of averaging it all together to get a single averaged value of H.
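Since Eq. (75) is linear in the single parameter Ho, the least-squares estimate has a closed form: minimizing Σ[H̃i − Ho(1+zi)]² gives Ho = Σ(1+zi)H̃i / Σ(1+zi)². A sketch of this one-parameter fit (our own illustration; the data below are synthetic, placed exactly on the model, not the Yu et al. values):

```python
def fit_Ho(zs, Hs):
    """Closed-form least-squares fit of H = Ho*(1+z) to (z, H) pairs."""
    num = sum((1.0 + z) * H for z, H in zip(zs, Hs))
    den = sum((1.0 + z) ** 2 for z in zs)
    return num / den

# Synthetic check: points placed exactly on H = 63.6*(1+z).
zs = [0.07, 0.5, 1.0, 2.36]
Hs = [63.6 * (1.0 + z) for z in zs]
Ho_fit = fit_Ho(zs, Hs)  # recovers 63.6
```

With real data each term would be weighted by its rms error, but the structure of the fit is the same.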

Clearly, no matter the exact value of Ho, the theoretical equation (75) is quite satisfactory. While Ho = 67 km/s/Mpc works fairly well, the proposed age of 15.4 × 10^9 years certainly resolves any issues about whether there could be stars older than the currently proposed age of the universe of 13.8 × 10^9 years. The present estimate for the “Methuselah” star (HD 140283) of 14.46±0.8 Gyr by Bond et al.39 places it marginally within the previous age estimates but well within the 15.4 Gyr derived above. We will use Ho = 63.6 km/s/Mpc in the remainder of this paper unless otherwise specified.

Finally, before leaving this section we note one other idea which we believe corrupts data analysis, originally due to an assumed constancy of the Hubble parameter. The data we presented in Fig. 1 are usually “corrected” by 1+z to make it appear that the Hubble parameter is nearly constant, as shown in Fig. 2. Our equation (75) makes it clear why this works, since H̃(z)/Ho = 1+z. What is wrong about this is that it has become justification for correcting ALL DISTANCES by 1+z, the so-called “tired light” correction. As our analysis shows, however, the Hubble parameter is really not constant, and as Eq. (79) makes clear, distances vary as D ∝ z/(1+z), not 1+z. We shall see below that this 1+z-correction is at the root of whether the universe is accelerating or not, at least with the prevailing theory. Since we believe the correction idea is wrong, we shall argue below that the universe is not accelerating with either theory.

FIG. 2.

Figure is from Farooq and Ratra.40 Note that the H(z) data have been “corrected” by 1+z. Some of the data were also used by Yu et al.23 in the plot above. For a detailed description of the other curves on the plot and the sources of the data refer to Ref. 40. The Hubble “crisis in cosmology” is because the best fit of the standard theory (thick dashed line) yields 72 km/s/Mpc at z=0, while the Planck CMBR value using the same theory (hashed line) is 67 km/s/Mpc. Our constant value, H(z)/(1+z)=63.6, has been added as the horizontal solid (black) line.


The cosmic microwave background radiation (CMBR) is of interest for several reasons. First, these data have been used to infer the value Ho = 67 km/s/Mpc cited in Sec. VI D. Second, the cosmic microwave background radiation is generally believed to be the footprint of the universe at or about the time that photons could propagate,24 or when the temperature was about 3000 K.

The average temperature is uniquely determined by the radiation spectral peak, say
λm(t) Tu(t) = b (81)
where Tu is the absolute temperature of the universe and b = 2.90×10^−3 m·K is Wien's displacement constant. By the same arguments put forth in Sec. V, we can conclude that λm(t) ∝ δ(t). It follows immediately that at any time, t, the average absolute temperature of the universe divided by the average temperature now is given by
Tu(t)/Tu(to) = λm(to)/λm(t) = to/t (82)
Tu(ts)/Tu(to) = 1 + z (83)
where the dependence on z given by Eq. (83) follows from Eq. (78). Note that the z-dependence is the same as for the standard model, but the time dependence is not. Also, note that Tu(t)/Tu(to) could equally well have been expressed using τ as the variable as noted in Sec. II, in which case the ratios would be exp(τo − τ).

From the Planck estimate that the temperature now is 2.725 K, we can estimate that we should be able to see back to only t = 14 × 10^6 years. Note that this is exactly the value we obtain by inserting z = 1100 into Eq. (78). This is substantially greater than the 380 000 years after creation usually cited. However, it is actually farther back than the FLRW estimate, since our universe is over a billion years older.

Since the temperature value depends only on radiation spectra and time with no other assumptions, it is really not affected by the other difficulties of measurement. So, if we have correctly interpreted the Hubble data (or its interpretation does not change from the values we used), this number might be quite accurate.
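The look-back arithmetic above is a one-liner. A sketch (ours; to = 15.4 × 10^9 years and z = 1100 are the values used in the text):

```python
def emission_time_years(z, to_years=15.4e9):
    """Eq. (78): ts = to/(1+z)."""
    return to_years / (1.0 + z)

t_cmb = emission_time_years(1100)   # ≈ 14e6 years
T_then = 2.725 * (1.0 + 1100)       # Eq. (83): ≈ 3000 K at decoupling
```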

Now, how about the scale of the CMB fluctuations? In our model universe, any initial inhomogeneities would now appear re-scaled by δ(t)/δ(t*), where t* would be any time shortly after the non-linear processes dominate, and surely closely related to the time estimated below for the quantum field theory energy input. So, our theory should be able to predict (or at least account for) the present scale of the inhomogeneities, say LCMB. If we use an estimate of the CMB length scale of LCMB ≈ 2.5×10^24 m (or 1°), the ratio to the Planck length, LP, is LCMB/LP ≈ 1.6×10^59, again clearly indicating its early quantum origins. This points to a virtual origin of (1/0.16)tP ≈ 6.3 tP. This is the same order of magnitude as the 10.5 tP estimated below in Sec. VIII D using the QFT energy estimate. Either is perfectly consistent with the time for the non-linear physics to transition from quantum mechanics to our “turbulence similarity” type behavior.

One of the greatest challenges of astronomy has been to measure distance, especially outside of our own galaxy. In this section, we explore the consequences of our theory on the interpretations of recent measurements, especially type Ia supernovae.

We have already considered how blackbody radiation would propagate in our new τ,η-universe, and in particular, how any spectrum would be modified. We have considered the consequences of an inverse square law in our expanding coordinates. Now, we will apply this information to review the methodology and terminology usually used to compute distances from intensity measurements of a single astronomical body. We will examine the results of applying our new theory to it. Finally, we consider the recent data of Refs. 21, 22, 41, and 42. These are the data that have been used with a 1+z multiplication factor to argue that the universe is expanding at an increasing rate. We note that Love and Love43 have observed empirically that without the 1+z factor in the supernovae data, the universe is not accelerating. Our theory will be seen to provide excellent agreement with the data without the 1+z corrections. Section VII A uses our results from Sec. V about how blackbody radiation propagates and finds no reason for such corrections.

The traditional way to use the inverse square law in astronomy has been to express distances in Megaparsecs using m − M, where m is defined as the apparent magnitude and M is the absolute magnitude (v. Ref. 44). m and M are essentially logarithms of the measured intensities and are usually compared to a reference value using the following formula:
m − mref = −2.5 log10[I/Iref] (84)
where I is the measured intensity and Iref is a convenient reference. mref is chosen as the value m would have, say M, if the same star were at a distance of 10 parsecs. So, if the intensity is presumed to be inversely proportional to distance squared, say I ∝ 1/D², this formula reduces to
m − M = 5 log10[D] + 25 (85)
where D is measured in Mpc. Corrections exist for a variety of conditions, including cosmic dust, expansion of the universe, etc.
Given that our inverse square law can be expressed in BOTH distance, D (in t,x-space), AND normalized distance D/δ(t) (in τ,η-space), we must be very careful about applying corrections from one frame to another. We cannot just blindly use Eq. (85); we must return to the definition of Eq. (84). Applying this yields
m − M = 5 log10{[D/δ(to)]/[Dref/δ(tref)]} (86)
where to is the present time and tref is the reference time. BUT these are the same, so the δ's cancel out! So, our expression above reduces to
m − M = 5 log10[D/Dref] (87)
exactly (as expected) the result we would have gotten had we started in t,x-space. BUT this is not as trivial a result as it seems, since we had to reference BOTH D and Dref to their own space, NOT just one of them. Referring only D (and not Dref) to the expanding frame appears to be the origin of the oft-used 1+z-correction. Clearly, any correction for expanding space of just one of them would be incorrect.
Now, we know from Eq. (79) that D=Roz/(1+z), where Ro=cto. Substitution yields
m − M = 5 log10{[Ro/Dref][z/(1+z)]} (88)
If we choose Dref to be 10pc, and express Ro=cto=c/Ho, it follows immediately from Eq. (85) that
m = 5 log10[z/(1+z)] + 5 log10[c/Ho] + 25 + M (89)
where c/Ho must be expressed in Mpc. Note that the only parameter in this equation is Ho, the present value of the Hubble parameter and the inverse of the age of the universe. No further correction of data or theory for expansion of the universe is necessary, thanks to the fact that the definition of Eq. (85) uses only relative values.
For the value of Ho = 63.6 km/s/Mpc derived in Sec. VI D using the data of Yu et al. (or to = 15.4×10^9 years), Eq. (89) reduces to
m = 5 log10[z/(1+z)] + 43.4 + M (90)

This simple result can be contrasted with the much more complicated expression that results from substituting the standard model result of Eq. (80) for D into Eq. (85). Here, the only two input data are Ho and M, both of which can be obtained independently by theory and measurement.
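The constant 43.4 in Eq. (90) is just 5 log10[c/Ho] + 25 with c/Ho expressed in Mpc. A sketch verifying this and evaluating the magnitude–redshift relation (our own illustration; M = −18.5 is the best-fit absolute magnitude discussed in Sec. VII C):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def apparent_magnitude(z, M, Ho=63.6):
    """Eq. (89): m = 5*log10[z/(1+z)] + 5*log10[c/Ho in Mpc] + 25 + M."""
    const = 5.0 * math.log10(C_KM_S / Ho) + 25.0  # ≈ 43.4 for Ho = 63.6
    return 5.0 * math.log10(z / (1.0 + z)) + const + M

const_check = 5.0 * math.log10(C_KM_S / 63.6) + 25.0
```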

The first extensive use of the type Ia supernovae data was that compiled in the Calán–Tololo database by Hamuy et al.,45,46 but only for relatively small values of z (0.03 < z < 0.10). This was extended to much larger values, up to z = 0.83, by Riess et al.21 and Perlmutter et al.22 There have been many papers since in more or less agreement, but we include only the additional results of Knop et al.42 All of these papers presented an extensive evaluation of their distance measuring methodologies, presented their data in a variety of tables (including the Calán–Tololo data), and showed in detail how they corrected the data (Fig. 3). Most problematic (at least from the perspective of this paper) is the following quote from the Perlmutter et al. paper:

FIG. 3.

Data from column 4 of both Tables I and II in Perlmutter et al.22 and data from Table III, column 2 of Knop et al.42 are plotted vs z. The ordinate labels correspond to the table column headings. Top: Best fit of Eq. (90), m = 5 log10[z/(1+z)] + 43.4 + M, using M = −18.5. Bottom: Three theoretical curves using Eq. (90) have also been plotted using values for M = −18.0, −18.5, and −19.0, respectively.


For the supernovae discussed in this paper, the template must be time-dilated by a factor 1 + z before fitting to the observed light curves to account for the cosmological lengthening of the supernova timescale.

This is of course consistent with our arguments in Sec. V C, but they unnecessarily compensate for a growing universe by multiplying their data (column d) by 1+z. Therefore, we choose instead to use their original data and that directly measured (column b).

Figure 3 shows the data from all four groups as summarized in columns 2 and 4 of Table I and Table II of Ref. 22, and columns 2 and 3 of Ref. 42. For both papers, column 2 is z, the redshift. Column 4 of Ref. 22 is plotted as black squares. Column 3 of Ref. 42 is plotted as red circles. The blue diamonds are the Calán–Tololo data of Refs. 45 and 46 as collected in Table II of Ref. 22. We have NOT used the “corrected” data which includes the multiplication by 1+z.

The top figure shows the data with the theoretical curve computed from Eq. (90) using a value of the absolute magnitude M = −18.5. The lower figure shows three versions of the theoretical curve computed from Eq. (90) using values of the absolute magnitude M = −18.0, −18.5, and −19.0. The −18.5 value provides an excellent fit for all values of z. The two other values pretty much bound most of the data. Since each star presumably has its own slightly different value of M, there is no reason a single value of M should fit all the data. The fact that it does so even approximately is consistent with observations that all supernovae of type SN Ia seem to have absolute magnitudes that fall in a narrow band from about −18.5 to −19.5 (c.f. Ref. 44), consistent with a Chandrasekhar limit. So, the scatter is quite likely due to individual variations of M. Regardless, the agreement of all three sets of data suggests strongly that no special correction for 1+z is necessary, exactly as argued in Sec. V. This is consistent with the observation in Sec. V that it is the entire spectrum (and its integral) that is shifted, not just individual frequencies, so no such correction is necessary.

Before leaving this section, we note that our theory assumes a flat universe, and it is slowly decelerating as 1/t². We see nothing in these data that suggests otherwise. If, however, we had left in the 1+z factor included by Refs. 21 and 22 and others, the opposite conclusion could have been reached, at least using the standard theory. Our theory can be made to fit that data as well by choosing a value of M between about −17.5 and −18.5, with a best fit curve corresponding to M = −18. The price paid, however, is that there is a sharp discontinuity with the Calán–Tololo data, which falls well below any of these curves. Therefore, given our physical arguments for NOT including the 1+z factor (as appears to have been common practice), the continuity from one dataset to another of our treatment of the data, and the complete absence of any obvious new physics to explain a shift, we stand by our analysis and conclusions.

Riess et al. and Perlmutter et al.21,22,47 fit the standard model result of Eq. (80) to their data pre-multiplied by 1+z with (ΩM, ΩΛ) = (0.5, 0.5). This, in addition to the choice of M, is a three-parameter fit. It is clear from the form of Eq. (80) why they needed the 1+z-prefactor to have any success at all with the fit (and why they might have convinced themselves that it was necessary). Our theory fits their original “uncorrected” data quite nicely with only a choice of M, which is well within the bounds of most estimates. In fact, the scatter might be largely due to slightly different values of M for each star.

Now, we get to the crux of this paper. We have already seen that our proposed universe can describe the kinematics that have been observed and published. Now, we must investigate the underlying dynamics.

Also, do we still need dark energy and dark matter? Surely, there have been difficulties identifying both in nature, and the need can only be inferred indirectly using theories that might be wrong. Also, as our observations about theory in the two preceding sections make clear, neither is necessary to account for the data usually used to justify the indirect inferences. Hence, the alternative presented by our theory might be very attractive.

Since our presumed model is Minkowski in τ,η-space, it must be in all.19 As noted earlier, this does not imply that the metric tensor, gμν, in other spaces is diag[−1, 1, 1, 1] nor that the Christoffel symbols will be zero. However, it does dictate that the Riemann tensor, Rμναβ, Ricci tensor, Rμν, and Ricci scalar, R, will all be zero.

Thus, the field equations [Eq. (4)] in contravariant form reduce to
0 = Tμν − (1/2) T gμν (91)
where we have dropped the pre-factor of the right-hand side since the left-hand side is zero. So, for our postulated universe
Tμν = (T/2) gμν (92)
where T is the invariant defined by Eq. (5). Note that our Tμν differs significantly from that used in previous analyses, since we have deliberately relaxed the usually applied condition that its divergence be zero [c.f. Eq. (4.15) in Ref. 19]. Thus, we do not require that energy be conserved. Our reasons for doing so are several (discussed below), but the most obvious is that theories which do assume conservation of energy do not seem to have been successful, at least without inventing mysterious and as yet unobserved energy sources. We will have more to say about this later. Because of assumed spatial homogeneity, T can at most be a function of τ in τ,η coordinates.

It has been traditional to assume that Rμν=0 implies that Tμν=0 and thus describes only empty space. However, given our assumptions (or rather lack of one) about the divergence of Tμν, Rμν=0 only implies that space is flat, and that Tμν=Tgμν/2, not that either are zero. Also, we note that the pre-factor of 8πG/c4 completely disappears. Since the left-hand side of Eq. (91) is zero, there is no counterpart to the usual criterion for a critical density. In our theory equation (3) completely vanishes, and so there is no critical density.

Alternatively, Eq. (4) could equivalently have been formulated using the more usual form of the field equations given as19 
Rμν − (1/2) R gμν + Λ gμν = (8πG/c⁴) Tμν (93)
where R is the Ricci scalar and we have separated out Einstein's cosmological constant Λ. Clearly, if Rμν and R are identically zero and T is non-zero, then Λ is non-zero, and vice versa. We have chosen to proceed using Eq. (4) since it seems less contrived than a “cosmological constant” (or cosmological function in this case), which even Einstein himself thought contrived.

We note that dimensionally there are only two independent parameters in the governing equations: G and c. In the absence of curved space (κU = 0, our flat space hypothesis), there is no imposed length scale. The Planck scales introduce G, along with Planck's constant and c, to characterize when gravitational effects become important. However, after the initial period when δ ≫ LP, where LP is the Planck length scale, the Planck length is far too small to be relevant. The evolution from quantum mechanics through this period has been described by Gibson48–50 as an inverse turbulence cascade with ever increasing scales of motion. So, the Planck parameters become relatively smaller and smaller, and no new parameters enter. There is no combination of the two parameters, G and c, alone that can produce a length scale or a timescale, nor anything with the dimensions of energy or mass density. So, any such scale must arise from the equations themselves or assumptions about them. A spatial coordinate could provide the missing parameter, but not in a homogeneous environment. So, a length scale dependent only on τ (or t) must exist to provide the missing parameter. The only choice is, of course, the length scale, δ̃(τ) (or δ(t)), which we introduced at the beginning.

Since the left-hand side of Eq. (91) is zero, we have some freedom in choosing how to scale the right-hand side. However, we stick with convention and give the contravariant tensor, Tμν, in physical coordinates the dimensions of energy density, i.e., the same as ρc², where ρ has dimensions of mass per unit volume. So, since T is given by Eq. (5) and the contravariant metric has dimensions of length squared, the appropriate choice for T itself is
(94)
where [2G*] is an unknown coefficient. Note that we have used the freedom presented by the zero left-hand side of Eq. (91) to choose the constant of proportionality as [2G*]. G* will be related to the energy and mass densities in Eqs. (99) and (100). We note that Eq. (94) can be expressed using the determinant, g, of the contravariant metric tensor of Eq. (10)
(95)

If integrated over a volume, this is exactly the form of the Einstein–Hilbert action but for the scalar invariant T instead of the Ricci scalar R (v. Ref. 19).

Since we know δ(t) = ct and δ̃ = t1 exp(τ), we can rewrite Eq. (94) in terms of t and τ as
(96)

The time derivative of T is clearly non-zero, meaning that the divergence of Tμν is non-zero, consistent with our observation about it at the beginning of this section.

Since t1 is on the order of the Planck time (tP = 5.4×10^−44 s), 1/t1² is a very large quantity indeed. Recall that t1 is the lower limit of the integral in Eq. (7) and corresponds to τ = 0. In Sec. IX we will identify both of these unambiguously with the time of the Big Bang, or at least a virtual origin corresponding to it. In Sec. VIII C, we shall show how to express T in terms of the time-dependent mass density, ρ, and the rest mass energy per unit volume, e.

Clearly, the behavior of the mass density ρ and rest mass energy e with time is crucial to understanding the expansion of our universe. Earlier FLRW theories assumed conservation of mass and energy. This would require (using our symbols) that ρδ³ and eδ³ be constant. Abundant observations suggest strongly that they cannot be maintained as constants without a large source of external (dark) energy. This is the major point of departure of our theory from the classical theory. For our proposed universe, there is no reason to believe that either mass or energy is conserved, because we are allowing physical time to be measured in non-equal increments.

One way to see the implications of this is the following simple example. Consider the acceleration of a simple lump of matter by a force applied to it. Since mass is the ratio of the applied force to the acceleration, if the acceleration is measured differently using non-equal time increments, then the mass must be defined differently. Similar conclusions apply to the kinetic energy.

Actually, the dimensional argument for ρ and e requires a bit more subtlety and physical insight than we gave T in Subsec. VIII B. Both quantities could have been influenced by things that added to them outside the region where c, G, and δ(t) (or t) were the only parameters (like quantum effects in an earlier era). So, the most we can say, on dimensional and physical grounds alone is that changes in the density, say Δρ, and changes in the rest mass energy, say Δe, are given by
Δρ = G* c²/[G δ²(t)] (97)
Δe = G* c⁴/[G δ²(t)] (98)
where G* is an absolute constant set by the initial conditions. Note that Δρ and Δe are functions of time t only, a direct consequence of our assumed spatial homogeneity. In fact, ρ → 0 in the limit as t → ∞ is the only reasonable choice; and this in turn implies that Δρ = ρ and Δe = e. So, these reduce simply to
ρ(t) = G* c²/[G δ²(t)] = G*/[G t²] (99)
e(t) = G* c⁴/[G δ²(t)] = G* c²/[G t²] (100)
where G* is a universal constant. Clearly, the decay in energy is a consequence of the spectral transfer, consistent with the suggestion of Gibson48–50 noted earlier.
We noted in Sec. VIII B that we could express the stress-energy scalar, T, in terms of ρ or e. It follows directly on dimensional grounds and using Eqs. (94), (99), and (100) that
(101)
It is easy to show that our choices of sign and numerical coefficient for T ensure that
(102)
since g00=δ2.

To conclude this section, we note that the idea that energy is not conserved is really not a new idea, especially since the work of Noether,51 which makes clear the relation between energy and the choice of time. It has been noted before that a proper definition of inertial mass depends on how we define time.52,53 Also, Carroll19 observed the following, which, although in a different context, appears quite relevant here.

Clearly, in an expanding universe, the energy-momentum tensor is defined on a background that is changing with time; therefore there is no reason to believe that energy should be conserved.

He then goes on to argue that the zero divergence of Tμν implies there is a law in which something is conserved. As noted above, our Tμν does not have zero divergence, but our Tμν − (T/2)gμν does. As Carroll suggests, there is a corresponding conserved quantity: G* = ρGδ²/c² = eGδ²/c⁴.
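Because G* = ρGδ²/c² is dimensionless, it can be evaluated from present-day numbers. A back-of-envelope sketch (our own estimate, not a figure quoted in the paper; it uses the Abdullah et al. density and to = 15.4 × 10^9 years from Sec. VI D):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
rho_o = 2.72e-27  # present mass density, kg/m^3 (Abdullah et al.)
t_o = 4.9e17      # present age, s (15.4e9 years)

delta_o = c * t_o                       # delta(t) = c*t
G_star = rho_o * G * delta_o**2 / c**2  # = rho_o*G*t_o^2, dimensionless
```

Note that δ²/c² = t², so G* = ρoGto², which makes the cancellation of units explicit.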

There is a direct relation between Einstein's field equations and the equations of turbulent flow in fluid mechanics. Our solution to the field equations with its time-dependent length scale is in fact a close analog of similarity solutions of the averaged Navier–Stokes equations for homogeneous turbulence (c.f. Refs. 3 and 4). Gibson49 has argued that these post-quantum and post-inflation early processes would have been similar to an inverse Kolmogorov cascade in turbulence, where the scales grow and energy is moved to larger and larger scales. Such high (or infinite) Reynolds number “turbulence” can be characterized in spatial Fourier space by a wavenumber energy spectrum that rises quickly to a peak at roughly the inverse of the characteristic length scale, say like our δ(t), then rolls off as k^−5/3, where k is the magnitude of a spatial wavenumber vector. The result is that the energy is spread over a band roughly characterized by wavenumber equal to 1/δ (c.f. Ref. 28 or any book on turbulence). Alternatively, the field could be described in terms of its evolution in time, usually a power law determined by the initial condition, exactly as we have in Eqs. (99) and (100).

So, the question can be asked: How far back in time can our theory be expected to apply (assuming it is valid at all)? We show in this section that it in fact appears to reasonably describe everything from Planck times to the present time. It does so without dark matter or energy. This is quite different from the popular view at the moment. First, we consider the quantum field theory estimates of the energy put in at the beginning. Then, we compare that to the best estimates of the visible mass in the universe.

One of the most unsuccessful theoretical results of the past few decades has been the large discrepancy between the predictions of Quantum Field Theory and the observed energy density in the cosmos. The ratio of the QFT prediction, 10^72 GeV/m³ or 10^111 J/m³, to the observed mass (or energy) is usually estimated at 10^120. This has been described as the “worst prediction in the history of physics.”12,19,30 (Carroll and Ostlie32 in their section on “The Early Universe” provide a nice summary of how this result was obtained from the uncertainty principle.) A modification to the QFT calculation to include Lorentz invariance54 reduces this discrepancy by a factor of 10^60, which is still huge. This is all the more troubling since QFT seems to accurately predict the magnetic dipole moment of the electron. These results have been interpreted in a number of ways and have been dubbed “vacuum energy.” We interpret them here as an initial condition.

The search for how much matter there is in the universe has been detailed in many publications including popular articles and all textbooks. The basic problem has been that the mass density inferred from visible matter is much less than the critical density of Eq. (1). There have been numerous studies in the last decade using various methods of processing both astronomical observations and simulations to try to estimate the visible (and invisible) matter in the universe. Table II of the very recent paper of Abdullah et al.26 lists the results of 20 different extensive studies. Their estimates of the mass density of matter in the universe range from a low of 0.22 ρc to 0.40 ρc, where ρc is the critical density defined by Eq. (1). The average of all 20 estimates was approximately 0.29 ρc. Their value of ρc was calculated from Eq. (1) using the age of the universe as 13.8 × 10^9 years and various values of the Hubble parameter.

The most recent value cited in the table was that of Abdullah et al. themselves, for which the present mass density, ρo, was estimated to be (0.305 ± 0.04) ρc (Ωm = 0.305), which gives (using their parameters): ρo = 2.72×10^−27 kg/m³.

Our theory, Eq. (99), says that the mass (or energy) density should vary inversely with time-squared, i.e.,
(103)

We do not know the value of tQFT, but we can certainly calculate it by working backward from the Abdullah et al. value and the two QFT estimates. The QFT estimates can be converted from energy to mass by dividing by c^2, and give, respectively, ρQFT1 = 1.11×10^94 kg/m^3 for the original estimate and ρQFT2 = 1.05×10^47 kg/m^3 for the revised Lorentz-invariant estimate. We will use our value of to estimated in Sec. VI D as 15.4 × 10^9 years (or 4.9×10^17 s). From the original QFT estimate, we calculate tQFT1 = 2.4×10^-43 s, or about 4.5 Planck times. For the second, tQFT2 = 7.8×10^-20 s, or about 1.5×10^24 Planck times. The first corresponds to the beginning of the importance of gravitational effects; the second is well into the grand unification epoch (v. Table I). Both QFT estimates are well before the age of galaxies, or even before photons can propagate. Most importantly, both are close enough to t = 0 to reasonably associate them with the Big Bang. Both were obtained with no adjustable constants, using only our theory and the Abdullah et al. estimate of the current observable mass density of the universe.
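The back-calculation above is a two-line inversion of the t^-2 scaling and is easily checked:

```python
import math

t_o   = 4.9e17     # age of the universe in seconds (15.4e9 yr)
t_P   = 5.39e-44   # Planck time, s
rho_o = 2.72e-27   # present mass density, kg/m^3

def t_qft(rho_qft):
    # invert rho(t) = rho_o * (t_o/t)**2 for the time at which the
    # density equaled a given QFT estimate
    return t_o * math.sqrt(rho_o / rho_qft)

print(t_qft(1.11e94) / t_P)   # ~4.5 Planck times
print(t_qft(1.05e47) / t_P)   # ~1.5e24 Planck times
```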

Figure 4 shows a plot of ρ/ρo vs t/to from the Planck time tP to the present to using Eq. (99). The data have been normalized by ρo = 2.72×10^-27 kg/m^3, the Abdullah et al. value. The two quantum field theory (QFT) estimates are shown by the orange diamonds. The present value is the blue triangle.

FIG. 4.

Plot of Eq. (99) showing 122 decades of mass density normalized by the present value vs time normalized by the age of the universe. The blue triangle is the present value. Also shown are the QFT1 and QFT2 values (orange diamonds), both normalized by the present-day density of Abdullah et al.26


Note that no particular “time” is given for when these QFT estimates should apply, so we extrapolated back from the present. We know, however, that the functional form of their time dependence is the same as our theory's (see the discussion in Sec. VIII F and Ref. 19). So, if we had a beginning time, we could have connected the curves instead of showing the QFT results as isolated points. This would have removed the need to infer the present density value from measurements alone, and provided a useful check on whether the Abdullah et al. estimate is large enough.

Regardless of our inability to pin down the exact relation, this must be regarded as a spectacular success, especially for the much maligned quantum field theorists, the astronomers, and our theory! We will have more to say about this below, but for now it is clear that our theory treats the universe as an initial value problem. All of the energy (and mass) is added at the beginning. It is simply being dispersed outward in physical coordinates. We can now use the Abdullah et al. value with some confidence to calculate the value of G* in Eqs. (99) and (100). The result is
(104)

Revised astronomical observations could, of course, change this value.

We now have all the pieces to examine the stress energy tensor in detail. In τ,η coordinates it is simply
(105)
where we have substituted Eq. (96) for T. We will associate t1 with the time of energy input below.

We note that this is similar in form to the quantum field theory result of Carroll19 in Chapter 9 when his equations (9.166) and (9.133) are combined. There he considers a two-dimensional flat Minkowski space of infinite extent governed by both Einstein's equations and quantum mechanics. Even his time is logarithmic. Surely, dimensional analysis alone dictates the physical time dependence of both solutions. However, we offer two possibilities in Sec. VIII F, which considers radiation as well.

More similarities to the QFT theory can be seen by examining the t,x form of our contravariant stress energy tensor, Tμν. Combining Eq. (101) with Eq. (92) yields
(106)
where we have used Eq. (29) to obtain the contravariant velocity components, and the De Sitter space equation of state
(107)

This pressure is not “pressure” in the usual sense, but the mean normal “stress” of the averaged motion, like the Reynolds normal “stresses” in classical fluid mechanics turbulence. Note that there is a slight difference from the usual equation of state [Eq. (4.33) in Ref. 19] because we have normalized differently.

Aside from the normalization, these equations are exactly the form of the stress energy tensor Carroll chose. The 00-component is the rest mass energy. The diagonals are the pressure and longitudinal momentum fluxes. The off-diagonals are “Reynolds stresses” and disappear if the matrix is diagonalized or cast in spherical coordinates.

It is interesting to discuss the role of pressure here. In τ,η-space, the pressure is everywhere the same and there is no motion, both by hypothesis and the assumed homogeneity. The t,x-space is far more interesting. It is not homogeneous, since looking out in space is really looking back in time. However, (on average) it looks the same to all observers no matter their location. No matter where they sit in it, the universe appears to be accelerating away from them, since ur ∝ r at a single instant in time. However, what is driving this “apparent” acceleration? Since the density decreases with time and ρc^2 → 0, the pressure p = −ρc^2 must increase in time from a deep negative value. However, the p at any radius r is really the pressure at an earlier time, and that was even more negative than the present value at the observer's location. So the pressure gradient is negative, i.e., pressure decreases with increasing r. Thus, to an observer in t,x-space, this negative pressure gradient appears to be accelerating the flow away, driving a positive momentum flux outward. Most importantly, there is no need for additional sources of pressure, momentum, matter, or energy, especially dark matter or dark energy. Everything necessary to produce what we think we see was put there at the beginning.

Another interesting observation: our solution looks like a three-dimensional analog of Carroll's two-dimensional QFT description in Chapter 9, Section 5. While Carroll's solution and ours began from opposite ends of the cosmic evolution, clearly the assumption of homogeneous Minkowski space dominates the behavior. In fact, Section 9.5 of Ref. 19 is almost exactly our solution if you interchange his t,x-coordinates with his τ,η-coordinates. His “Rindler coordinates” are our t,x-coordinates. With a bit of work, his log time and exponential factor are the same as our log time and δ̃(τ). His equation (9.140) is exactly what we try to argue in Sec. IX. Since no new parameters enter the problem, it would be a surprise if these solutions were not the same. Unfortunately, we do not have access to enough information to establish that. Perhaps others will.

Alternatively, any resemblance between the two theories could be simple dimensional analysis, since G and c are the only parameters. Also note that our universe is infinite, with many overlapping spheres of visible universes, depending upon where one sits. Carroll interprets his results as though there is a single spherical one, but this seems to be a matter of interpretation, not mathematics, and is consistent with an infinite one. The finite universe he describes still needs inflation, while ours does not. It simply begins everywhere at the same time.

As Sec. VIII F makes clear, radiation raises the possibility that the two theories might be different, each representing a different region of expansion, one very early and one very late, separated by a region which has at least one more parameter: radiation.

Sections VIII A–VIII E, and Eqs. (99) and (100) in particular, show how the mass and energy densities vary with time. What they do not show is what portion is photons (or radiation) and what portion is matter. We will calculate the radiation part separately, and then identify the remainder as matter.

The radiation mass density is given by Freedman et al.44 as
(108)
where σ = 5.67×10^-8 W m^-2 K^-4 is the Stefan–Boltzmann constant. As noted in Sec. VI E, the present temperature is TUo = 2.725 K. This corresponds to a present mass density equivalent of 4.64×10^-31 kg/m^3. Clearly, this is negligible compared to the 2.72×10^-27 kg/m^3 which we used for the present density.

The contribution of photons above can be modified to include neutrinos. Nave55 (see also Carroll and Ostlie32) suggests including a factor of 1.681 to account for neutrinos, since the neutrino temperature dependence is also TU^4 (but only below about 3 MeV, roughly 3×10^10 K). This makes little difference to the contribution to the density at this time. However, it does affect where the radiation curve intersects our theory, i.e., the point before which only radiation is present.
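Both numbers above can be checked directly. The form ρr = 4σTU^4/c^3 for Eq. (108) is an assumption here (the text quotes only the constant and the resulting value, which this form reproduces):

```python
sigma = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
c     = 2.998e8    # speed of light, m/s
T_Uo  = 2.725      # present background temperature, K

rho_r  = 4.0 * sigma * T_Uo**4 / c**3   # photon mass density (assumed Eq. (108) form)
rho_rn = 1.681 * rho_r                  # including the 1.681 neutrino factor
print(rho_r, rho_rn)                    # ~4.6e-31 and ~7.8e-31 kg/m^3
```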

Therefore, before leaving this section we ask: when would the total mass in the universe have been due to radiation? By combining Eqs. (108) and (82), we conclude that the radiation density as a function of time is given by
(109)
where we have changed the factor of 4 to 6.724 in Eq. (108) to account for the neutrinos. Equating this to Eq. (99) yields
(110)
where we have defined tr as the time at which they are equal. Solving for tr yields
(111)
The right-hand side is just the ratio of the two densities at the present time to, i.e.,
(112)
So, multiplying by 15.4 × 10^9 years implies that all of the energy was radiation energy at 262 × 10^6 years after the beginning. Previous estimates were approximately 380,000 years, so this is a substantial difference. But it is still farther back in time than previously believed, since our universe is older; in fact, farther back in time than the 13.8 × 10^9 years often taken as the age of the universe.
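The 262 × 10^6 year figure follows from the two scalings already established (ρ ∝ t^-2 and ρr ∝ t^-4), as a quick sketch shows:

```python
import math

t_o    = 15.4e9    # age of the universe, years
rho_o  = 2.72e-27  # present total mass density, kg/m^3
rho_ro = 7.8e-31   # present radiation density incl. neutrinos, kg/m^3

# With rho ~ t^-2 and rho_r ~ t^-4, equality occurs at
# t_r/t_o = sqrt(rho_ro / rho_o)  (the Eqs. (110)-(112) result)
t_r = t_o * math.sqrt(rho_ro / rho_o)
print(t_r / 1e6)   # ~260 million years
```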

Now here we are faced with a dilemma. Since the radiation density given by Eq. (109) increases more rapidly with decreasing time than Eq. (99), should we follow our theory back to the QFT estimate as we did in Sec. VIII E? Or should we follow the radiation density values back to the QFT estimate? If we do the latter, the radiation densities equal the QFT results at t = 4.5×10^-14 s (8.3×10^29 Planck times) and t = 2.5×10^-2 s (4.7×10^41 Planck times), respectively, the latter time corresponding to the lower QFT value. While these are much later times, at least to those interested in this early period, it clearly makes no difference to the age of the universe. Both alternatives lead smoothly from our gravity-dominated theory to the quantum-dominated era.
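The second alternative is the same inversion as before, but with the fourth-power radiation scaling:

```python
t_o    = 4.9e17    # age of the universe, s
t_P    = 5.39e-44  # Planck time, s
rho_ro = 7.8e-31   # present radiation density, kg/m^3

def t_back(rho_qft):
    # follow rho_r ~ t^-4 backward to where it meets a QFT density estimate
    return t_o * (rho_ro / rho_qft) ** 0.25

print(t_back(1.11e94), t_back(1.11e94) / t_P)  # ~4.5e-14 s, ~8e29 Planck times
print(t_back(1.05e47), t_back(1.05e47) / t_P)  # ~2.5e-2 s, ~5e41 Planck times
```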

Figure 5 shows the same theoretical curve, Eq. (99), as in Fig. 4, but only for t/to ≥ 10^-4. Two radiation curves are shown, corresponding to Eqs. (108) and (109), the latter including the neutrino contribution. The figure shows the location in time of z = 1100 (red arrow), where the temperature is 3000 K, and the location in time of the Methuselah star (green arrow). If the arguments at the end of Sec. VIII E are correct and the QFT solution and ours are the same, the solid line should be followed all the way back until quantum mechanics dominates. The dashed line simply indicates where photons begin to be important (where the lines cross) and how they cease to dominate as baryonic matter is formed.

FIG. 5.

Blow-up of Fig. 4 showing only times after t/to = 10^-4. The black line is Eq. (99) and the dashed lines are the radiation estimates of Eqs. (108) and (109). For reference purposes, we have also shown the time associated with the cosmic microwave background radiation (red triangle), when the temperature was 3000 K, corresponding to z = 1100. The green triangle indicates the age of the Methuselah star (14.5 × 10^9 years).


We have found a solution which seems consistent with many of the previously challenging astronomical observations and quantum field theory calculations. However, we have entered this through the back door by assuming a similarity form to describe the present and exploring the consequences. This is exactly what Hawking and Mlodinow56 in their last chapter suggested we do to sort out the 10^100 possibilities of M-theory: work back from the present to the beginning to see which of the possible solutions happened. They described the old FLRW-based theory as a possible path back. However, we offer here a different path back to the beginning, one which does not need the magic of inflation.

However, exactly what equation has our solution solved? We suggest that our solution corresponds to the following tensor equation:
(113)
where Rμν is the averaged Ricci tensor (including second-moments), Iμν = [I*, I*, I*, I*], and we have placed a Dirac delta function, δD(τ), on the right-hand side. So, Iμν represents an impulse function of strength I* at τ = 0. Note that the averaged Ricci tensor in τ,η-space is independent of η because of our spatial homogeneity assumption, and that −∞ < τ < ∞. Also, we have arbitrarily chosen τ = 0 so that the time t1, the lower limit of Eq. (7), corresponds to the time of our impulse. So t1 is the virtual origin and should be measured in Planck times.

The Riemann tensor itself is built from two covariant derivatives, and it is highly non-linear (v. Ref. 18). While it can be linearized for small amplitudes (like gravity waves), it surely cannot be here, since this impulse response is not small: it is causing the whole universe to expand. Even if we wrote out the averaged equations above and included all the second-moment terms, we would still be left with the usual turbulence closure problem: more unknowns than equations. Now, it may be possible to solve Eq. (113) without worrying about the higher-order terms, like the inviscid flow solutions which work well for the aerodynamics of bodies, for example. Or maybe appropriate closure approximations can be made, as in much of engineering turbulence. Surely numerical solutions should be possible, but from turbulence experience it is probably crucial to make the computational box at least an order of magnitude larger than ct throughout the computation. (As in turbulence computations, this should be a lot easier to achieve in our scaled coordinates.) We note that Carroll19 succeeded in reducing his QFT equations to a wave equation [his equation (9.140)], but we were unable to do that here. Perhaps we have missed the obvious.

Fortunately, as happens frequently in turbulence theory (especially for flows of infinite extent4,28,57), the strong non-linearity has led to a limit cycle or attractor, which in this case can be characterized by scaling time and space together. Our solution has revealed enough about its nature that we can examine its transient behavior in another way. We can work backward from what has already proven to be true.

We know that the mass density in τ,η-space is given by
(114)
where δ̃(0)=LP×(t1/tP) and LP is the Planck length scale. It is easy to show that its derivative with respect to τ reduces to
(115)
We can move the derivative term on the right-hand side of Eq. (115) to the left-hand side to obtain the following differential equation:
(116)
where we have added to the right-hand side a Dirac delta function, δD(τ), to represent an impulse of strength I* at time τ = 0. Note that τ = 0 (or t = t1) is where our solution, looking backward in time, thinks the impulse occurred, i.e., the “virtual origin.” The reason for any difference from the time of an actual impulse is that our solution cannot account for physics it does not include, meaning all of the quantum mechanics. However, it should merge smoothly with it, as we have assumed in Sec. VIII.
The solution to Eq. (116) we already know, so it is not particularly interesting. However, it is the relation between the impulse function and time that is of interest. If we integrate from τ′ = −∞ to τ and assume that ρ̃(−∞) = 0, the result for τ > 0 is
(117)
So, I* is our Big Bang and clearly should be associated with the quantum field theory results as we did in Sec. VIII.

It is obvious, then, that our universe is an initial value problem. More importantly, there is no need for additional sources of energy or mass to sustain its continued expansion. It really was a Big Bang and, aside from G and c, nothing else matters.

The great solid mechanic and experimentalist James C. Bell of the Johns Hopkins University often said to his students and collaborators

Experimentalists sort theories. (J. C. Bell58)

However, as noted in Ref. 59, the problem often is that there is only one theory, so there is enormous pressure to prove it correct. R.R. Long60 in a famous footnoted paper61 remarked

Theoretical results accepted on the basis of very limited evidence, become after long periods of time impossible to overturn with even abundant contradictory evidence.

The problem, of course, is the lack of alternatives before ideas become cast in intellectual concrete. So, at the very least, we have provided astronomers with another theory “to sort.” In this section, we have tried to provide the tools to do so.

There have been numerous studies showing various methods of processing both astronomical observations and simulations to try to account for the visible (and invisible) matter in the universe. The recent paper of Abdullah et al.26 and the earlier work of Poggianti et al.,25 for example, focus on clusters. These data on the number and mass of the galaxies in clusters are usually plotted as inferred mass or mass distribution vs z. We would like very much to have utilized these extensive results to evaluate our theory. For the most part we have failed, succeeding only for the cluster number dependence on z, which is plotted in Fig. 7 of Sec. X G. We suspect this failure largely reflects our own shortcomings (see the postscript to this paper for an explanation). However, the masses inferred also seem to have been unduly influenced by the need to conform to the Standard Model. We are simply not qualified to judge. So, in this section we offer a brief outline of how our theory might be used in future studies, or for re-processing existing data by those more qualified than us to do so.

All of the conclusions of Sec. IX were a consequence of spatial homogeneity of an infinite universe at a single time. However, we can only look back in time. Our universe is assumed to be statistically homogeneous in space but non-stationary in time, t. So, if we average over a particular shell of radius r, we are really looking at the average universe at time t=(Ror)/c. Below, we examine how our new theory affects our view of what we see.

Suppose we integrate the density of Eq. (99) over the entire visible universe, say Ro = cto. Looking out to a distance r is the same as examining the universe's state at time t = (Ro − r)/c. So the density at any distance, say ρ̃(r), is given by
(118)
As noted above, this is only valid after the Planck time, tP, or something proportional to it within a few orders of magnitude. Integrating this from r = 0 to the quantum limit Ro[1 − ϵ], where ϵ = ctP/Ro, yields the total observable mass in the “visible” universe as
(119)
where Mo is the actual mass in the visible universe at this time, to, i.e.,
(120)

So the mass looking back in time is VERY much greater than the mass actually there at THIS time. Only the limit imposed by the Planck time (below some multiple of which our theory does not apply) saves it from blowing up entirely. Even if divided by the volume of the visible universe, this is a very large number and is in no way indicative of the average density at present. It is, however, indicative of the energy put in initially; in fact, exactly the QFT energy discussed in Sec. VIII D.
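The blow-up can be made explicit with a sketch. Writing ρ(r) = ρo/(1 − r/Ro)^2 (which follows from Eq. (118) with t = (Ro − r)/c) and integrating over 4πr^2 dr gives a closed form; the antiderivative below is our own evaluation of that integral, not quoted from the text:

```python
import math

# M/M_o = 3 * Integral_0^{1-eps} x^2/(1-x)^2 dx, with x = r/R_o and
# eps = c*t_P/R_o, which integrates exactly to the expression below.
def lookback_mass_ratio(eps):
    return 3.0 * (1.0 / eps + 2.0 * math.log(eps) - eps)

# The ratio blows up like 3/eps as the Planck-time cutoff eps -> 0:
for eps in (1e-3, 1e-6, 1.1e-61):   # the last is roughly t_P/t_o
    print(eps, lookback_mass_ratio(eps))
```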

Figure 6 shows how almost all of the contributions to this integral and its integrand arose in the earliest second, but are forever hidden from us behind a wall of invisibility millions of years thick. We can only see the footprint in the cosmic background radiation.

FIG. 6.

Upper: Plot of Eq. (118) looking back in time, showing how density varies with distance from an observer. Note the apparent singularity as r approaches the radius of the visible universe, Ro = cto. Lower: Plot of Eq. (119) looking back in time, showing how the cumulative mass varies with the distance rmax from an observer over which the integral is computed. The cumulative mass is normalized by the mass at present, 4πRo^3ρo/3.


For theoretical arguments it is easiest to place ourselves at the center and denote the distance away from us by r. So r = Ro = cto is the edge of our visible universe. However, most measurements are recorded using the redshift parameter z, which is directly measurable. So, before considering how the mass and cluster number vary with r and z, it is useful to consider how volume itself varies with z. We could simply define Ṽ(r) = 4πr^3/3 and convert it to z if we knew the relation between z and r.

If we note that Ro = cto and Ro − r = ct, it is easily established from Eq. (74) that
(121)
So z = 0 corresponds to r/Ro = 0; as z → ∞, r/Ro → 1 and we reach the edge of our visible universe.
It follows immediately that the volume as a function of z is given by
(122)
where we have defined Vo = 4πRo^3/3. Note that this is an actual volume, not a co-moving volume. For z ≪ 1, V(z) varies as z^3. However, the full volume of the visible universe is approached quite slowly as z → ∞. These differences will be seen to be quite important below.
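Both limiting behaviors are easy to see numerically from the r/Ro = z/(1+z) relation of Eq. (121):

```python
# Eq. (121): r/R_o = z/(1+z);  Eq. (122): V(z)/V_o = (z/(1+z))**3
def r_over_Ro(z):
    return z / (1.0 + z)

def V_over_Vo(z):
    return (z / (1.0 + z)) ** 3

print(V_over_Vo(0.01))   # ~1e-6: cubic in z for small z
print(V_over_Vo(1100))   # ~0.997: the full volume is approached slowly
```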

It is clear from the divergent integral in Eq. (119) that what we really need is not a spatially (or temporally) averaged density but a cumulative density as a function of radius r, or of z. Then we can choose whether to normalize it by the cumulative volume or by the current value of the density, ρo, which hopefully we can determine by fitting the data.

By the same arguments that led to Eq. (119), the observable mass inside radius r should be given by
(123)
where Ñ(r) is the cumulative number of clusters inside radius r, and M̃g(r) is their average mass (which cannot be assumed independent of radius, since mass is not conserved). Note that the ϵ in our previous integral, Eq. (119), has become the running value of 1 − r/Ro here. We have tried to follow convention by considering separately the number of clusters and their mass, although this integral constrains only their product. Nor can it separately account for clouds of gas, so any such must be considered part of the same integral or treated as just another cluster.
Substituting for the density from Eq. (118) and integrating yields
(124)
where Mo = 4πRo^3ρo/3 is the mass of the visible universe at this time. This is an explicit prediction with no adjustable parameters, so it should be possible to check it against data. The small r/Ro expansion is readily obtained as
(125)
This cubic dependence for small r means that the accumulated mass of the clusters increases linearly with the volume for small radius, consistent with the assumed actual and local homogeneity of the universe, the latter following from the quadratic dependence of the metric tensor in physical space on r.
It is more convenient to transform this dependence from r/Ro to z, which can be directly measured. Since r = r(z) from Eq. (121), we can define Mg(z) = M̃g(r(z)) and N(z) = Ñ(r(z)). Then we can substitute Eq. (121) into Eq. (124) to obtain Mg(z)N(z) (after some manipulation) as
(126)
This blows up linearly as z increases to infinity and r → Ro, exactly as we found above.
The small z expansion is given by
(127)
As noted above, this cubic dependence on z could be quite useful in interpreting astronomical results.

It is important to note how different the limiting behaviors of Mg(z)N(z) and the volume V(z) are. The leading terms in the Taylor expansions of both vary as z^3 for z ≪ 1, so MgN/V → constant for very small values of z. This makes quantities per unit volume quite useful for small z. However, for large z things are more complicated, since MgN varies as z for large values, while V(z) goes to a constant.

Note that if either Mg(z) or N(z) is known, Eq. (126) or (127) can be used to find the other, assuming of course that Mo is known. We will suggest an additional hypothesis in Sec. X E to make this separation possible.

The mass per unit volume as a function of z follows immediately from dividing the mass at z, the MgN of Eq. (126), by the volume V(z) given by Eq. (122). Since Mo = ρoVo, the result is
(128)
This increases linearly as z → ∞. The small z expansion also begins linearly:
(129)

Note that the only parameter is ρo, the present density of the universe.

This can be contrasted with the same expression from the standard theory, given, for example, by Eq. (3) of Poggianti et al.25 as
(130)

These clearly have very different behaviors for large values of z: our theory grows linearly in the limit as z → ∞, the standard theory as z^3.
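The contrast can be illustrated numerically. The closed form below is our own reconstruction, obtained by integrating Eq. (118) (ρ(r) = ρo/(1 − r/Ro)^2) over 4πr^2 dr and substituting r/Ro = z/(1+z); it reproduces the limits stated for Eqs. (126)-(129), cubic for z ≪ 1 and linear for z → ∞, so treat it as a sketch rather than the paper's exact expressions:

```python
import math

def MgN_over_Mo(z):
    # reconstructed accumulated-mass form, Eq. (126) sketch
    return 3.0 * (z + z / (1.0 + z) - 2.0 * math.log(1.0 + z))

def V_over_Vo(z):
    return (z / (1.0 + z)) ** 3   # Eq. (122)

def rho_over_rhoo(z):
    # mass per unit volume, Eq. (128) form
    return MgN_over_Mo(z) / V_over_Vo(z)

for z in (0.01, 1.0, 10.0, 100.0):
    # last column: the (1+z)^3 growth of the standard theory, for contrast
    print(z, MgN_over_Mo(z), rho_over_rhoo(z), (1.0 + z) ** 3)
```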

Note that to this point we have made no assumptions beyond our original hypothesis about the metric. In Sec. X E, we will make an additional hypothesis about N so we can separate mass and cluster number. So, the results in Sec. X E will be dependent upon this new hypothesis as well.

This section proposes to deduce N(z) itself by introducing the additional hypothesis that the average number of galaxies per unit volume in τ,η-space is constant.

Given our hypothesis that the number is constant in τ,η-space, we define n̂(t) to be the number density in physical space at physical time t. Then n̂(t) must be proportional to 1/δ(t)^3, since the physical volume increases as δ(t)^3, i.e.,
(131)
It follows that
(132)
since δ(t)=ct. However, we also know that t, r, and z are related, so we can rewrite Eq. (132) as a function of r as
(133)
So, looking away in r (or back in time), the cumulative number of clusters within radius r, say Ñ(r), should be
(134)

It is easy to see by expanding this integral for small r (i.e., r ≪ Ro) that ñ(r) → n(to). This is what we would have hoped for had we defined things correctly.

The exact solution for r<Ro is given by
(135)
Or, defining Vo=4πRo3/3
(136)
Note that n(to)Vo is the number of clusters in the visible universe at this time, obviously a very large number. We can of course only see them in their past. However, we can see how the number appears to vary with distance from us. So we should be able to estimate the present cluster density, n(to). The Taylor expansion about r/Ro = 0 is given by
(137)
We can define N(z) = Ñ(r(z)), where r/Ro = z/(1+z) and 1 − r/Ro = 1/(1+z). Substitution into Eq. (136) yields
(138)
Note that this increases quadratically in the limit as z → ∞.

The small z Taylor expansion is given by
(139)
Clearly, the cubic term dominates the approach to z=0. Note that this leading cubic term is entirely a consequence of the Jacobian in the integral, and independent of the shape of the integrand. Nonetheless, we should (in principle at least) be able to use it to determine n(to) since we know Vo.
We can divide Eq. (138) by Eq. (122) to obtain
(140)
From the Taylor expansion of Eq. (139), we can see that the leading z^3 term resolves the singularity at z = 0, leaving only
(141)

In this subsection, we will look at two sets of data: the low-z data of Abdullah et al.,26 which examines the GalWit19 database, and the higher-z data of Poggianti et al.25 The former is an extensive paper that attempts to count clusters and uses the data to evaluate the standard theory and simulations based on it. We consider here only their cluster data plotted as N(z)/V(z) vs z. The Poggianti et al. paper is an older one, one of several by the same authors, which examines clusters at two higher redshifts, z = 0.6 and z = 1.5.

The Abdullah et al. data26 are plotted alone in the top part of Fig. 7. We have converted their numbers from h^3/Mpc^3 to 1/Mpc^3 using their value of h = 0.678. Since they did not include information below z = 0.04, we used an iterative process to establish the present value as n(to) ≈ 2250/Mpc^3. The theoretical curve fits the data well for z ≤ 0.12 and overshoots for larger values of z. The three-term Taylor expansion of Eq. (141) fits equally well. This is consistent with the observation of Abdullah et al., who attributed the difference relative to simulations (above z = 0.09 in their case) to the incompleteness of the dataset at larger values. We agree. Unlike their comparison to analysis, however, no part of our theory has to be attributed to dark energy or dark matter.

FIG. 7.

Upper: The cumulative cluster number data of Abdullah et al.26 plotted with Eq. (140) using n(to)=2250. Lower: Plot of Eq. (140) and the cumulative cluster number data of Poggianti et al.25 and Abdullah et al.,26 also using n(to)=2250.


Equation (140), together with the data of Poggianti et al.25 and the Abdullah et al. data, is plotted in the bottom part of Fig. 7. The Poggianti et al. data were scaled (by them) by what they believed to be the value at z = 0. The values at the other two points were N/V = 1.8 for z = 0.6 and N/V = 4.7 for z = 1.5. Our theory predicts N/V/n(to) = 2.9 for z = 0.6 and 7.5 for z = 1.5. The ratios of the two data points to our theoretical values are 1.62 and 1.60, respectively, so the difference between ours and theirs is most likely due to the unknown coefficient n(to). We have therefore multiplied their data by 1.6.
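The two theoretical values quoted above can be reproduced from a reconstruction of the Eq. (140) curve. The closed form below follows from integrating the cluster density n(r) = n(to)/(1 − r/Ro)^3 over the look-back volume and dividing by V(z); it is our own sketch, but it matches both quoted numbers:

```python
import math

# Reconstructed N(z)/(n(t_o)*V_o) = 3*(z^2/2 - z + ln(1+z)),
# divided by V(z)/V_o = (z/(1+z))^3  (the Eq. (140) curve of Fig. 7).
def NV_over_n0(z):
    N = 3.0 * (z * z / 2.0 - z + math.log(1.0 + z))
    V = (z / (1.0 + z)) ** 3
    return N / V

print(NV_over_n0(0.6), NV_over_n0(1.5))   # ~2.9 and ~7.5, as quoted in the text
```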

In summary, it appears that our theory, and especially the new hypothesis about the distribution of galaxy clusters in τ,η-space, looks promising. It should be noted that the same theory applies to properly performed simulations as well. Our theory suggests that δ(to) = cto is a length scale for the universe, not its size, which is presumed infinite. Experience with numerical simulations of homogeneous turbulence strongly suggests that the computational domain needs to be much larger than the characteristic length scale, typically by a factor of 10, to minimize the effect of boundaries. We look forward to seeing this comparison of theory and data carried out by real astronomers.

Given our solution for N(z) above, the answer is yes, at least in principle. Dividing Eq. (126) by Eq. (138) yields
(142)
since the factor of 3 multiplying both cancels, and Mo = ρoVo. Note that in the limit as z → ∞, Mg(z)n(to)/ρo falls off as 2/z.
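The limiting behavior is easy to verify with the reconstructed accumulations used above (again a sketch of Eq. (142), not the paper's exact expression):

```python
import math

# Ratio of reconstructed accumulated mass to reconstructed cluster count;
# the factors of 3 and M_o = rho_o*V_o cancel, leaving:
def Mg_n0_over_rhoo(z):
    num = z + z / (1.0 + z) - 2.0 * math.log(1.0 + z)
    den = z * z / 2.0 - z + math.log(1.0 + z)
    return num / den

for z in (0.05, 1.0, 100.0):
    print(z, Mg_n0_over_rhoo(z))   # tends to 1 for small z, falls off like 2/z
```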
Alternatively, we could write this as a ratio of their Taylor expansions
(143)

The comment of Dirac27 cited at the beginning clearly foreshadows our work. In this section, we review some aspects of our theory, some of which have also been anticipated.

FIG. 8.

Figure showing how the times, t/tP=(t1/tP)eτ and τ=ln[(t/t1)(t1/tP)] are related, where tP is the Planck time and t1 is the virtual origin. For plotting purposes, we have chosen t1/tP=1. The “epochs” from Table I have been identified, along with 1 s, 1 year, and the time for which z=1100.


From Eq. (22), we know that τ is logarithmically related to our physical time t. The description of a universe evolving in “epochs” of logarithmic time is familiar to every cosmologist (see Table I, taken from Wikipedia62). Figure 8 shows how the epochs identified in Table I correspond to our postulated universe.

While different physics dominates each region, our theory is probably the same for all epochs after the Planck era, since the underlying assumptions would still be valid. The statistics would be the same; only the underlying mechanisms would differ.

Aside from the comment (cited in Sec. II) by Dirac27 near the end of his life and the musings of George52 (from which this section was adapted), there appears to be no evidence (at least available to us) that anyone has previously considered applying different times to the Einstein field equations than to quantum mechanics. However, even if we had thought of it, would we have noticed any difference?

Any cosmic (or logarithmic) time difference between two τ-times, say τ and τ+δτ, can be expressed in linear time increments by the difference of their Taylor series expansions, i.e.,
(144)
where t and t+δt are the corresponding absolute times.
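A sketch of this expansion, assuming Eq. (22) reads τ = ln(t/t1):

```latex
\delta\tau \;=\; \ln\!\left(\frac{t+\delta t}{t_1}\right)-\ln\!\left(\frac{t}{t_1}\right)
\;=\; \ln\!\left(1+\frac{\delta t}{t}\right)
\;=\; \frac{\delta t}{t}-\frac{1}{2}\left(\frac{\delta t}{t}\right)^{2}+\cdots
```

so the leading correction to a purely linear relation between the two times is of order (δt/t)^2.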

George52 (using previously believed values for the age of the universe) noted that

There have been approximately 13.8 billion years (t ≈ 4.3 × 10^17 s) since the Big Bang. But mankind has only been on the earth for approximately 250 000 years. So even if we had been keeping careful track since then, the differences we would have noticed between the hypothesized QSM time and linear time would have been δt/t ≈ 2.5 × 10^5/(13.8 × 10^12) ≈ 1.8 × 10^−8. And the differences we would have needed to observe to discover a discrepancy are the square of this, or of order 10^−15. But, we have been doing mechanics for only the past 500 years, so even had we started measuring carefully at the time of Galileo, δt/t ≈ 500/(13.8 × 10^12) = 3.6 × 10^−11. So the leading error term would have been of order 10^−21, and clearly beyond our ability to distinguish from experimental data alone. Only by trying to make sense of things that happened billions of years ago would we have noticed our equations don't balance. But of course we have done that now.
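The orders of magnitude in this quotation can be checked directly. The short script below simply reproduces the quoted arithmetic, taking the printed denominator 13.8 × 10^12 at face value (the units of that figure are not stated in the quotation):

```python
# Reproduce the order-of-magnitude estimates quoted from George (Ref. 52).
age = 13.8e12    # denominator exactly as printed in the quotation
humans = 2.5e5   # ~250,000 years of human existence
galileo = 500    # ~500 years of careful mechanics since Galileo

dt_t_humans = humans / age    # fractional time difference over human history
dt_t_galileo = galileo / age  # fractional time difference since Galileo

# The leading error term is of order (dt/t)^2, per the Taylor expansion
# behind Eq. (144) -- the squares of these ratios, as quoted in the text.
print(f"dt/t (humans):  {dt_t_humans:.1e} -> error ~ {dt_t_humans**2:.0e}")
print(f"dt/t (Galileo): {dt_t_galileo:.1e} -> error ~ {dt_t_galileo**2:.0e}")
```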

Figure 9 makes clear WHY we would not have noticed a problem. By differentiating Eq. (22), it follows immediately that
(145)
FIG. 9.

Plot of Eq. (145), showing how dτ/dt varies with time t. Only by looking back over billions of years could any deviation from linearity have been noticed.


We are now at 15.4 × 10^9 years on the plot. The slope dτ/dt has been very nearly constant over all of human existence. The same is true as well for most of our previously observable universe (i.e., before the Hubble telescope). Only now can we see far enough back in time to see any difference from a simple linear relationship between t and τ. So, only now do we see the inconsistencies in our previous theories.

An interesting consequence of our theory is that the last star will not vanish over the horizon. The universe is expanding at precisely the right rate to prevent that from happening. This should be quite a relief for those who found the previous conclusions depressing.

A related question, however, is: will the last star burn out? The answer is not obvious to us, and we leave it to others to speculate. Surely, since G and c are properties of nature, there is no reason they should diminish. However, if quantum mechanics really functions on its own timescale (as suggested in the quotation from Dirac at the beginning of this paper), then the answer is probably yes. However, if not…? We leave it to the quantum field theorists and nuclear physicists to reason this out, since it is clearly beyond our capabilities—at least at the moment.

This paper explored the consequences of the simple hypothesis that we live in an infinite universe in which both gravitational time and space evolve in pace with atomic clock time when scaled by the same time-dependent length scale. This is fundamentally different from the traditional FLRW approach where only space expands with time. We have not replaced Einstein's Field equations, but we have instead found a different solution to them—one that evolves in time but does so in a way so that the acceleration terms of the field equations vanish identically. A direct consequence is that the problematical “critical density” concept vanishes completely, and with it, the need for either Dark Energy or Dark Matter.

We removed the time evolution by simply considering a universe in scaled coordinates, say τ and η, to be a maximally symmetrical Minkowski/de Sitter zero curvature space of infinite extent that remains fundamentally unchanged during its evolution. Then, we propose that this coordinate system is related to our physical coordinate system (t,x) by stretching the coordinates with a time-dependent length scale δ(t). The statistics of this stretched universe are inhomogeneous in space, x, and so vary in both space and time, t.

In effect, we have simply placed Einstein's original hypothesis of a static universe into a coordinate system in which both time and space expand together. The metric tensor is constant in (τ, η)-space, but in physical coordinates evolves with time. A consequence is that even though space and time are expanding, the zero values of the Riemann and Ricci tensors mean that Einstein's stress-energy tensor reduces to a single function multiplying the metric tensor. That function depends only on the gravitational constant G, the speed of light c, and the time-dependent length scale δ.

The rate of expansion is the speed of light, and we show that δ(t) = ct, where t is the age of the universe. So, both of these physical constants, G and c, are related directly to an initial condition—in this case, the Big Bang or its residual when gravity becomes important.

A direct consequence of the assumptions of a static universe in similarity coordinates is that the length scale is linearly dependent on physical time, t, and that τ = ln(t/t1), where t1 is a virtual origin proportional to the Planck time and indicative of the Big Bang. Thus, the proper time (and similarity independent variable) τ depends on the logarithm of t, a result consistent with the speculations of George,52,53 who examined the consequences of a log-time assumption. Interestingly, Caballero63 (see also Ref. 64) has recently argued using the Wolfram Model of fundamental physics65 that logarithmic time can be proved to be generically the same as the total information content of the Universe. This provides a striking parallel to the deduction herein that initial conditions establish directly the gravitational constant G, and that the expansion rate is proportional to the speed of light c, both fundamental constants of physics.
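The chain from the single length scale to the Hubble parameter can be summarized in two lines (a sketch, assuming H is defined as the logarithmic rate of change of δ, in analogy with the FLRW ȧ/a, and that the redshift scales with δ):

```latex
\delta(t)=ct,\qquad \tau=\ln(t/t_1),\qquad
H(t)\equiv\frac{\dot{\delta}}{\delta}=\frac{c}{ct}=\frac{1}{t},
```
```latex
1+z=\frac{\delta(t_o)}{\delta(t)}=\frac{t_o}{t}
\;\Rightarrow\;
H=\frac{1}{t}=\frac{1+z}{t_o}=H_o\,[1+z].
```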

The linear relation between the length scale δ(t) = ct and t implies that the Hubble parameter is inversely proportional to the age of the universe, i.e., H = 1/t. Our prediction that the Hubble parameter varies as 1/t, where t is the age of the universe, would appear to contradict current wisdom. However, Carroll19 has noted this is the desired result for the very early universe as well. We show that H = 1/t implies that H̃ = Ho × [1+z], where Ho is the present value (i.e., t = to or z = 0). We show this equation to be in excellent agreement with the recent results of Yu et al.23 using Ho = 63.6 km/s/Mpc. This corresponds to a universe which is 15.4 × 10^9 years old. Our older universe explains a number of recent results, including the age of the Methuselah star (14.5 × 10^9 years) and recent inferences from supernovae and gravitational lensing, all of which suggest an older universe than the presently assumed value of 13.8 × 10^9 years.
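The age quoted here follows from nothing more than a unit conversion of Ho; a minimal check:

```python
# Age of the universe t = 1/Ho implied by H = 1/t and Ho = 63.6 km/s/Mpc.
KM_PER_MPC = 3.0857e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7  # seconds in one year

Ho = 63.6 / KM_PER_MPC                 # Hubble constant in 1/s
age_gyr = 1.0 / Ho / SECONDS_PER_YEAR / 1e9

print(f"age = {age_gyr:.1f} Gyr")  # -> age = 15.4 Gyr
```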

We also examine the supernovae data of Hamuy et al.,41 Riess et al.,21 Perlmutter et al.,22 and Knop et al.42 Our theory has only two parameters, Ho and the Chandrasekhar limit, M. We find excellent agreement of our theory with their “uncorrected” results using Ho = 63.6 km/s/Mpc and a Chandrasekhar constant of M = 18.5. A value of M = 18 provides almost as good agreement with their (1+z)-corrected data. However, we argue using the scaled Maxwell's equations that the uncorrected data are preferred, since radiation spectra are shifted in both amplitude and wavenumber. Without the (1+z)-correction, their universe is not accelerating. However, our universe expansion rate is slowing down for BOTH their corrected and uncorrected data. So, neither needs dark energy to accelerate it.

The fact that time is not measured in linear increments means that energy and mass are not conserved quantities; only G and c are. We are not the first to notice that changing time affects conservation of energy (cf. Refs. 19 and 51), but perhaps we are in this context. In fact, both rest mass energy and mass densities decay as [c/δ(t)]^2 ∝ 1/t^2, or equivalently e^(−2τ). This suggests strongly that it is the energy cascade to both larger and smaller scales that causes the energy to decay. Interestingly, our theory extrapolated back in time explains the so-called “Worst Prediction in the History of Physics,” the enormous discrepancy between the predictions of QFT (quantum field theory) and current observations—10^120!
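The size of that discrepancy follows from the 1/t^2 decay alone. A rough order-of-magnitude sketch, assuming the density law can be extrapolated all the way back to the Planck time:

```python
import math

# Extrapolate the 1/t^2 energy-density decay from the present age back to
# the Planck time; the density ratio is (t_now / t_planck)^2.
T_NOW = 4.85e17      # present age in seconds (~15.4 Gyr)
T_PLANCK = 5.39e-44  # Planck time in seconds

ratio = (T_NOW / T_PLANCK) ** 2
print(f"density ratio ~ 10^{math.log10(ratio):.0f}")  # -> density ratio ~ 10^122
```

which is the order of the commonly quoted 10^120–10^122 figure.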

We show that our theoretical predictions for the growth of Planck scale disturbances near the beginning are consistent with the Planck observations of the scale of the inhomogeneities, the cosmic background radiation. By considering when the temperature was 3000 K, we suggest that they date not to 380 000 years after the Big Bang but to 14 × 10^6 years instead.

Finally, we confess our limitations in considering the enormous quantity of astronomical data. However, we try to leave a road map for astronomers to follow should our theory prove interesting.

In summary, we have proposed a new theory based on a single time-dependent length scale, δ(t). It turns out, after extensive analysis using curvilinear coordinates, that δ(t) = ct. We argue that average energy and average mass are not conserved. In fact, their densities vary as 1/t^2. Our theory found a solution to Einstein's equations which evolves logarithmically with time. However, the equations we used were exactly Einstein's equations. It was our solution that gave surprising answers. But was this a clue? Could it be that those underlying “Laws of Nature” should have been expressed using logarithmic time as well? If so, “mass” would not be the mass we thought it was, nor “energy” the same energy. Like the “42” in the Hitchhiker's Guide to the Galaxy, we leave this question for the next generation of physicists to answer. Would it not be ironic if, given Einstein's well-known aversion to quantum mechanics, it is the application of his equations that proved quantum mechanics was right all along, and only our “known” laws were in error?

These ideas have evolved over the last few years, mostly in discussions between the authors, neither of whom were trained in astronomy or cosmology. Whether or not we have made a significant contribution, we are extremely grateful to those experts who wrote books and made videos of their lectures available. It has been an exciting journey thanks to their efforts. Many former students and friends made helpful comments, but the encouragement and suggestions of Azur Hodzic (Danish Technical University) and Jose Manuel Rodriguez Caballero (University of Tartu, Estonia) were especially helpful. We would also like to acknowledge the gracious proprietor, Camilla Eriksson, and the friendly and comfortable environment of her cafe “Kaffebubblan” in Mölndal, Sweden, where many of our discussions took place. Finally, we acknowledge the contribution of April Howard, wife of WKG, who listened patiently to many versions of this theory, mostly wrong; but in doing so helped to clarify the holes. Last but surely not least, we acknowledge the work and dedication of the many “et al.'s.” As mostly retired professors, we know well what effort hides behind that simple citation.

WKG and TGJ both retired from Chalmers University of Technology, WKG in 2010 as Professor of Turbulence and Director of the Turbulence Research Laboratory (www.turbulence-online.com), TGJ in 2012 as Docent in the same laboratory. Both were instrumental in the Chalmers International Turbulence Masters program, which moved to Ecole Centrale de Lille upon their retirement.

The authors have no conflicts to disclose.

W. K. George: Conceptualization (equal); Formal analysis (equal); Writing – original draft (equal); Writing – review & editing (equal). T. G. Johansson: Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Writing – review & editing (equal).

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

1. Notation and Jacobians

We choose the notation of Grinfeld31 for its simplicity. We keep the speed of light explicit and do NOT absorb c into the time t. While analytically more complicated, it simplifies interpretation later. The coordinates with unprimed indices, z^μ, are used for (τ, η)-space, whereas the coordinates with primed indices, z^μ′, are used for the physical coordinates (t, x). The indices 0 (or 0′) are used for τ and t, respectively.

We define the primed coordinates to correspond to our physical space (ct,x,y,z) as follows:
(A1)
The un-primed coordinates, z^μ, represent (τ, η) and are related to the physical space coordinates as follows:
(A2)
The corresponding Jacobians, J^μ′_μ = ∂z^μ′/∂z^μ and J^μ_μ′ = ∂z^μ/∂z^μ′, take on the following form:
(A3)
(A4)

We note that δ^μ_ν = J^μ_μ′ J^μ′_ν.
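This identity between the two Jacobians is easy to spot-check numerically. A minimal sketch, using only the reduced (t, x) block of the transformation and assuming τ = ln(t/t1) and η = x/δ(t) with δ = ct:

```python
# Numerical spot-check of delta^mu_nu = J^mu_mu' J^mu'_nu for the maps
#   forward:  tau = ln(t/t1),   eta = x / (c*t)     [delta(t) = c*t]
#   inverse:  t = t1*exp(tau),  x = c*t*eta
# The virtual origin t1 drops out of both Jacobians.
c = 4.0        # arbitrary; powers of two keep the check exact in floating point
t, x = 2.0, 5.0

# Forward Jacobian d(tau, eta)/d(t, x), differentiated by hand
J = [[1.0 / t,          0.0],
     [-x / (c * t**2),  1.0 / (c * t)]]

# Inverse Jacobian d(t, x)/d(tau, eta) at the same point:
#   dt/dtau = t,  dt/deta = 0,  dx/dtau = x,  dx/deta = c*t
J_inv = [[t, 0.0],
         [x, c * t]]

product = [[sum(J[i][k] * J_inv[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
print(product)  # -> [[1.0, 0.0], [0.0, 1.0]]
```

The same cancellation in the off-diagonal term is what collapses the full 4×4 product to the Kronecker delta.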

2. Basis vectors
The covariant basis vectors in τ,η–space are by hypothesis
(A5)
In (t, x)-space (physical space), they are given by
(A6)
where δ̇=dδ(t)/dt. Note that the basis vectors make it clear that our assumed physical space is not Cartesian.
The covariant and contravariant metric tensors in physical space, say g_μν and g^μν, are readily computed to be those given by Eqs. (9) and (10). In (τ, η)-space, the Christoffel symbols are zero. However, ten of the Christoffel symbols of the second kind in physical space are non-zero. They are
(A7)
(A8)
(A9)
(A10)
Clearly, only the last three involve space. The Christoffel symbols of the first kind can be obtained by lowering the upper index using the metric tensor.

Between us (WKG and TGJ), we have over 100 years' experience as scientific researchers in fundamental mechanics and applied physics—not one second of it in general relativity, cosmology, or astronomy. So, everything we have learned, we have learned in retirement, mostly from the internet and during the Covid-19 pandemic. We have both moved into new fields before, always by first reading books and journals, attending lectures or special courses, and most importantly by attending professional meetings so we could confer with experts. None of that was possible in the past few years. So, if at times we sound naïve, we probably are. We apologize if we have neglected or misunderstood things that are obvious to those working in the field, or if we have had a limited number of citations of recent work. Not much else has been available to us, nor could we judge its quality if it had been. So, we have had to depend heavily on a few sources, hopefully good ones.

WKG's interest in this subject was piqued by a Canadian radio program, “As It Happens,” heard on Boston Public Radio (WGBH) while driving late at night to attend the American Physical Society/Division of Fluid Dynamics meeting in Boston in November 2015. The radio host was interviewing three scientists about dark matter and dark energy, both subjects which, while interesting, had always been nothing more than a curiosity. However, never before had it been clear that what was being discussed was really mostly a failure of classical mechanics to describe the observations. During lunch at the meeting the next day with two former students (Clay Byers and Marcus Hultmark, both now professors), while discussing homogeneous turbulence and its time and length scale evolution, the idea that the missing mass and energy might be related to time was born. This ultimately resulted in a paper and several presentations.52,53 However, it quickly became clear that much more sophisticated mathematical tools were needed to advance beyond mere speculation.

WKG's return to Sweden in 2018 provided the perfect opportunity to link up with a former colleague from Chalmers, TGJ, who had retired about the same time. So, together we began to meet regularly to teach ourselves about astronomy, curvilinear coordinates, and general relativity—sharing notes, ideas, and many misunderstandings. The youtube.com online courses of L. Susskind (Stanford), A. Maloney (McGill), A. Guth (MIT), and P. Grinfeld (Drexel) were especially helpful. However, there were many others as well. Their efforts and generosity in sharing online made our effort possible. The YouTube videos of Rebecca Smethurst (“Dr. Becky”) and the Quora comments of Viktor Toth were likewise very helpful.

This paper is an outgrowth of that effort. History alone will judge whether we have made an important contribution or any contribution at all. At our age as septuagenarians (now both 80), each contribution might well be our last. So, most important to us is not another published paper or battle won over hostile reviewers (of which we have had many), but whether we have stimulated others to think about this problem (or others) in new ways. Regardless, to paraphrase the inspirational “Dr. Becky” (Smethurst) of Oxford University and youtube.com fame: we really have enjoyed having a “ring-side seat” at the best time in the history of the world for studying (and learning about) astronomy and cosmology.

1. W. K. George, “An alternative cosmological model for an expanding universe by Prof. W.K. George” (2022), see https://www.youtube.com/watch?v=E7rwrnkTVHQ.
2. W. K. George and T. G. Johansson, “An alternative cosmological model for an expanding universe,” Kenninger Lecture, Purdue University, West Lafayette, IN, April 11 (2022), slides available at http://www.turbulence-online.com/Publications///Purdue_April_2022_Presentation.pdf.
3. W. K. George, “Decay of homogeneous isotropic turbulence,” Phys. Fluids 422, 1–54 (1992).
4. C. P. Byers, M. Hultmark, and W. K. George, “Two-space, two-time similarity solution for decaying homogeneous turbulence,” Phys. Fluids 29, 020710 (2017).
5. W. K. George and T. G. Johansson, “An alternative cosmological model for an expanding universe” (2022), see http://www.turbulence-online.com/Publications/Purdue_April_2022_Paper.pdf.
6. W. K. George and T. G. Johansson, “A ‘dark-energy-free’ turbulence similarity solution for an infinite homogeneous expanding universe using Einstein's averaged field equations,” in Division of Fluid Dynamics Meeting, Nov. 20–22, 2022, Indianapolis, IN, Session T11: Astrophysical Fluid Dynamics (2022), see http://www.turbulence-online.com/Publications/APSDFD2022Indianapolis.pdf.
7. W. K. George and T. G. Johansson, “A new single-length-scale similarity solution for an infinite homogeneous expanding universe using Einstein's averaged field equations,” in APS April Meeting, Abstract AAA01.00003, Apr. 15–18, Minneapolis, MN, Virtual (Apr. 24–26) (2023), Vol. 68, see http://www.turbulence-online.com/Publications/APS_Minneapolis_April_2023.pdf.
8. W. K. George and T. G. Johansson, “Re-thinking the age of the universe,” in Annual Meeting of the APS Mid-Atlantic Section, Abstract D01.00006, Nov. 3–5, 2023, University of Delaware, Newark, DE (2023).
9. W. K. George and T. G. Johansson, “A new theory for an expanding universe,” in APS April Meeting, Poster Session I, Abstract E00.00018, Apr. 3–6, 2024, Sacramento, CA (2024).
10. W. K. George and T. G. Johansson, “A new theory for an expanding universe,” in APS-DFD Poster Session, Nov. 24–26, Salt Lake City, UT (2024).
11. N. deGrasse Tyson, “Did the James Webb space telescope change astrophysics?,” 2024 Isaac Asimov Memorial Debate (especially around minute 59) (2024), see https://www.youtube.com/watch?v=lK4EZiIpC14.
12. M. Tavora, “The worst prediction in the history of theoretical physics,” see https://www.cantorsparadise.com/the-worst-theoretical-prediction-in-the-history-of-physics-5be09b309043.
13. A. Einstein, The Collected Papers of Albert Einstein. Volume 6: The Berlin Years: Writings 1914–1917 (English translation supplement, K. Kox and R. Schulman, eds.), 1st ed. (Princeton University Press, 1996), ISBN-13: 978-0691010861.
14. A. Friedmann, “Über die Krümmung des Raumes,” Z. Phys. A 10(1), 377–386 (1922).
15. G. Lemaître, “Expansion of the universe,” Mon. Not. R. Astron. Soc. 91(5), 490–501 (1931).
16. E. Hubble, “A relation between distance and radial velocity among extra-galactic nebulae,” Proc. Natl. Acad. Sci. U.S.A. 15(3), 168–173 (1929).
17. A. Einstein and W. de Sitter, “On the relation between the expansion of the universe and the mean density of the universe,” Proc. Natl. Acad. Sci. U.S.A. 18, 213–214 (1932).
18. P. A. M. Dirac, General Theory of Relativity (Wiley-Interscience, New York, NY, 1975).
19. S. Carroll, Spacetime and Geometry: An Introduction to General Relativity (Addison-Wesley, San Francisco, CA, 2016).
20. C. O'Raifeartaigh, M. O'Keeffe, and S. Mitton, “Historical and philosophical reflections on the Einstein-de Sitter model,” arXiv:2008.13501 (2020).
21. A. Riess et al., “Observational evidence from supernovae for an accelerating universe and a cosmological constant,” Astron. J. 116, 1009 (1998).
22. S. Perlmutter et al., “Measurements of Ω and Λ from 42 high-redshift supernovae,” Astrophys. J. 517, 565 (1999).
23. H. Yu, B. Ratra, and F.-Y. Wang, “Hubble parameter and baryon acoustic oscillation measurement constraints on the Hubble constant, the deviation from the spatially flat ΛCDM model, the deceleration–acceleration transition redshift, and spatial curvature,” Astrophys. J. 856(1), 3 (2018).
24. Wikipedia, “Cosmic microwave radiation” (Jan. 30, 2021).
25. B. M. Poggianti et al., “The evolution of the density of galaxy clusters and groups: Denser environments at higher redshifts,” Mon. Not. R. Astron. Soc. 405, 995–1005 (2010).
26. M. Abdullah, A. Klypin, and G. Wilson, “Cosmological constraints on Ωm and σ8 from cluster abundances using the GalWCat19 optical-spectroscopic SDSS catalog,” Astrophys. J. 901(2), 90 (2020).
27. P. Dirac and F. Hund, “P. Dirac speaking to F. Hund on symmetry in relativity, quantum mechanics and physics of elementary particles” (1982), see https://www.youtube.com/watch?v=Et8-gg6XNDY.
28. W. K. George, “Asymptotic effect of initial and upstream conditions on turbulence,” J. Fluids Eng. 134(6), 1–27 (2012).
29. Note: There is some ambiguity about this in many discussions of the FLRW metric, where the spatial variable x is sometimes claimed to be a co-moving coordinate but treated mathematically as if it were not. If x is a true independent variable, then the relation to our variables is x_i/δ(t) = a(t)x_i. But if x is interpreted in FLRW-space19 as co-moving, it must be (and usually is not) differentiated with respect to time and space. Our definition of η_i as a co-moving independent coordinate avoids this ambiguity.
30. A. Guth, “The early universe,” MIT online video lectures (2013), see https://ocw.mit.edu/courses/physics/8-286-the-early-universe-fall-2013/video-lectures/.
31. P. Grinfeld, Introduction to Tensor Analysis and the Calculus of Moving Surfaces (Springer-Verlag, Berlin, Heidelberg, 2013).
32. B. W. Carroll and D. A. Ostlie, An Introduction to Modern Astrophysics, 2nd ed. (Cambridge University Press, Cambridge, U.K., 2017).
33. Note: This suggests strongly that it is τ which is measured by atomic clocks, which presumably would be unaffected by gravity. So, any inference that these atomic clocks speed up or slow down to account for observations might in fact be backward. It is the physical time which is changing, not the atomic clock.
34. R. Leighton, R. Feynman, and M. Sands, The Feynman Lectures on Physics (Addison-Wesley, 1964), Vol. 2.
35. R. Smethurst, “Dr. Becky: Crisis in cosmology,” Youtube.com, see https://www.youtube.com/watch?v=Et8-gg6XNDY.
36. S. M. Carroll, W. H. Press, and E. L. Turner, “The cosmological constant,” Annu. Rev. Astron. Astrophys. 30, 499–542 (1992).
37. J. Vega-Ferrero, J. M. Diego, V. Miranda, and G. M. Bernstein, “The Hubble constant from SN Refsdal,” Astrophys. J. Lett. 853(2), L31 (2018).
38. Note: In Sec. VI E we use our value of Ho with Eq. (75) to account for the time and scale of the cosmic background radiation.
39. H. D. Bond, E. P. Nelan, D. A. VandenBerg, G. H. Schaefer, and D. Harmer, “HD 140283: A star in the solar neighborhood that formed shortly after the big bang,” Astrophys. J. Lett. 765, L2 (2013).
40. F. Farooq and B. Ratra, “Hubble parameter measurement constraints on the cosmological deceleration–acceleration transition redshift,” Astrophys. J. 766, L7 (2013).
41. Wikipedia, “The Calán-Tololo supernova survey” (Nov. 2021).
42. R. A. Knop et al., “New constraints on ΩM, ΩΛ, and w from an independent set of eleven high-redshift supernovae observed with HST,” Astrophys. J. 598, 102 (2003).
43. R. K. Love and S. R. Love, “Adding the 1/(1+z) factor to the Riess et al. (1998) and Perlmutter et al. (1999) rest-frame data removes any evidence of dark energy,” Academia, see https://www.academia.edu/38286572.
44. R. A. Freedman, R. M. Geller, and W. J. Kaufmann III, Universe: Stars and Galaxies (W. H. Freeman and Company, New York, NY, 2014).
45. M. Hamuy, M. M. Phillips, J. Maza, N. B. Suntzeff, R. A. Schommer, and R. Aviles, “A Hubble diagram of distant type IA supernovae,” Astron. J. 109(1), 1 (1995).
46. M. Hamuy, M. M. Phillips, J. Maza, N. B. Suntzeff, R. A. Schommer, and R. Aviles, “The absolute luminosities of the Calan/Tololo type IA supernovae,” Astron. J. 112(6), 2391–2397 (1995).
47. A. Riess et al., “A 2.4% determination of the local value of the Hubble constant,” Astrophys. J. 826(1), 56 (2016).
48. C. H. Gibson, “The first turbulence,” arXiv (2001).
49. C. H. Gibson, New Cosmology: Cosmology Modified by Modern Fluid Mechanics (Amazon.com, 2009).
50. C. H. Gibson, “Turbulent mixing, viscosity, diffusion, and gravity in the formation of cosmological structures: The fluid mechanics of dark matter,” J. Fluids Eng. 122, 830–835 (2000).
51. E. Noether, “Invariante Variationsprobleme,” Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse (1918), pp. 235–257.
52. W. K. George, “Could time be logarithmic?,” J. Cosmol. 26(6), 14118–14134 (2016), available at http://www.turbulence-online.com/Publications/log_time_cosmology_final_printed.pdf.
53. W. K. George, “‘Un-darkening’ the cosmos: New laws of physics for an expanding universe,” in APS/DFD (2017), see http://www.turbulence-online.com.
54. Wikipedia, “Quantum field theory” (May 2022).
55. B. R. Nave, “Energy in the early universe,” Lectures in HyperPhysics (2022), see hyperphysic.phy-astr.gsu.edu/hbase/Astro/.
56. S. Hawking and L. Mlodinow, The Grand Design: New Answers to the Ultimate Questions of Life (Bantam Books, 2010).
57. W. K. George, “The nature of turbulence,” in FED Forum on Turbulent Flows (H00599-1990), edited by W. M. Bower et al. (ASME, 1990), pp. 1–10.
58. J. C. Bell, private communication to W. K. George (1965).
59. W. K. George, “Governing equations, experiments, and the experimentalist,” Exp. Therm. Fluid Sci. 3, 557–566 (1990).
60. R. R. Long and T.-C. Chen, “Experimental evidence for the existence of the ‘mesolayer’ in turbulent systems,” J. Fluid Mech. 105, 19–59 (1981).
61. Note: The editors added a footnote saying they were publishing it in spite of the negative reviews because of the author's persistence over many years.
62. Wikipedia, “Chronology of the universe” (2015), see https://en.wikipedia.org/wiki/Chronology_of_the_universe.
63. J. M. R. Caballero, “Time and complexity in closed systems,” Online Technical Discussion Group, Wolfram Community; private communications with JMRC (2020).
64. C. Pratt and C. M. B. Caballero, personal communication to WKG (2021).
65. S. Wolfram, “Origins of randomness in physical systems,” Phys. Rev. Lett. 55(5), 449 (1985).