A model of an infinite universe is postulated in which both space and time expand together and are scaled by a time-dependent length scale, . The Ricci tensor and Ricci scalar both vanish identically, so the Einstein field equations reduce to a balance between the time-dependent, spatially averaged stress energy tensor and its scalar invariant times the metric tensor. The divergence of is zero, so the conserved quantity is , where c is the speed of light, is the rest mass density, and is the quantum field theory prediction—the so-called “worst prediction in the history of physics.” The implications of this single time-dependent length scale hypothesis for our physical space are explored using the rules of tensor analysis. These imply that the length scale grows linearly with t, which itself varies exponentially with atomic clock time. The Hubble parameter is just , where t is the age of the universe, so the universe expansion rate is slowing down. The Hubble parameter can be expressed in terms of the redshift parameter z as , where is its current value. km/s/Mpc is in excellent agreement with a large number of observations and implies that the universe began 15.4 × 10⁹ years ago. Excellent agreement with recent astronomical measurements is demonstrated, with neither dark matter nor dark energy.
FOREWORD
This paper was originally presented at Purdue University by the first author on April 11, 2022 before any James Webb Space Telescope (JWST) pictures were released.1,2 We predicted a very different universe than was previously believed to exist. It treated Einstein's Field equations as though they represented a “fluid” where millions of galaxies and gas made up the “fluid particles.” Using ideas from turbulence theory (e.g., Refs. 3 and 4), we imagined an infinite universe, in which both time and length scales evolved together. This paper has been available on our website5 for the past 3 years and has been presented at numerous American Physical Society Physics and Fluids meetings.6–10 Our predictions appear to resolve many of the questions raised by recent JWST data (see, for example, the critical commentary by Neil deGrasse Tyson11). In particular, our predicted universe (like JWST's) is much older, it is much more energetic looking back in time, it does not have a Hubble “crisis,” it does not need to be corrected for (“tired light”), and it does not need either dark matter or dark energy. Yet it agrees wonderfully with all the data prior to JWST, including the “worst prediction in the history of physics.”12 Given its historical value as a real “prediction,” we have resisted the urge to re-write it and present it here in its original form with only minor, mostly editorial changes.
I. INTRODUCTION
Einstein's field equations were first presented over the period from 1914 to 1917. These have been collected in Volume 6 of his collected papers.13 Originally, Einstein (and most of the astronomical community) believed the universe to be static, but he could only find static solutions for the universe by introducing a cosmological constant. Shortly after hearing of Einstein's work, both Friedmann14 and Lemaître15 proposed unsteady solutions that suggested the universe might be expanding. Subsequent astronomical observations by Hubble and coworkers16 confirmed that indeed it was. In a paper with De Sitter,17 Einstein finally recognized the essential correctness of the unsteady solutions. Einstein regarded his failure to recognize early on the implications of his equations as his greatest mistake. This history and theory have been discussed in thousands of places; among the most useful in this context are Refs. 18 and 19. The recent historical review20 of the Einstein/De Sitter collaboration is particularly relevant to this paper.
The problem is that all of the available astronomical evidence (since the Hubble telescope especially) suggests that space is nearly flat, so . However, all of the observations of matter in the universe suggest that , not 1. Clearly, both cannot be true. So either the model is wrong, or there must be some as-yet-unobserved matter (or energy) to make up the difference, or both. Hence, the search for dark energy and dark matter. (Note that mass and energy can be considered equivalent since for rest mass, .) This is the cosmological crisis of our generation.
In this paper, we consider an alternative to the prevailing theory (v. chapter 8 of Ref. 19 for a comprehensive presentation). We propose that space is indeed flat, infinite, and expanding, but needs neither dark energy nor dark matter to explain it. Moreover, in our theory the universe is in fact decelerating. Yet, we argue that it agrees with the preponderance of the same astronomical observations previously used to conclude that it is accelerating.21,22 It agrees with recent Hubble, cosmic background, and galaxy number observations as well.23–26 Most importantly, our theory provides a direct link to the quantum field theory (QFT) estimate (the so-called “worst prediction in the history of physics”) and in fact treats it as the initial condition.
Finally, in this first version of our work we have deliberately put in as much detail as possible with the goal of making it straightforward to understand, even if tedious to read. Given the importance of the conclusions and our newness to the field (see postscript), we have tried to make our analysis as transparent as possible, so it should be easy to identify where we might have gone wrong. Hopefully, any graduate student should be able to follow our analysis. However, it should also be possible, for those who find math boring, to simply skip over the math and go to the principal conclusions and deductions beginning with Sec. VI. Or, for those who do not like to read at all, simply skip to the figures. They speak for themselves.
II. AN ALTERNATIVE APPROACH TO THE EINSTEIN FIELD EQUATIONS
Instead of seeking a solution in which space alone expands, a solution of the Einstein equations is sought for the universe in which both time and space expand together. As the quote below makes clear, we are not the first to consider this idea:
I believe that the times and distances which are to be used in the Einstein's general relativity are not the same as the times and distances which were to be provided by atomic clocks. There are good theoretical reasons for believing that that is so, and for the reason that the gravitational forces are getting weaker compared to electric forces as the world gets older. (Paul Dirac, Göttingen Interview, 198227)
A. A complete similarity solution
The indices and can take values of . We interpret all quantities as ensemble averages over an infinity of statistically identical universes. Note for future use that as written here the contravariant stress energy tensor has dimensions of density times velocity squared. We also note for future consideration that we do not require the divergence of the stress energy tensor to be zero, so T is not related to the Ricci scalar, , as used in Ref. 19. Hence, our use of Eq. (4) is not restricted to empty space if the Ricci scalar turns out to be zero.
B. Relation to physical space
Since we have assumed and to be measured in equal increments, clearly these equal increments do not correspond to equal increments in t and . We note the comment of Dirac at the beginning, who seems to have been thinking along the lines we have postulated. Our t should be thought of as a gravitational or mechanical time that governs our physical laws. The dimensionless time , on the other hand, with its equal increments very much resembles an atomic clock. However, could be measured in any convenient way, for example, by using the temperature of the universe presumed to be cooling (suggestion of Ref. 30). We shall do exactly this in Sec. VI E.
C. The metric tensor
Its determinant is .
D. Proper time and its implications
Substituting the covariant metric from Eq. (9) shows that is exactly equal to the defined by Eq. (7). This is not surprising since we obtained the metric from our definitions of and in physical variables.
In subsequent analysis, we will use only the proper time defined using the entire metric tensor. This is of course our original definition of Eq. (7).
We can see then that the basic premise of this paper is that we think we are living in one space ( -space). However, we are really living in the other ( -space). We shall explore this difference in detail. We shall show that at least some of what we believed to be true probably was not. However, the data were correct; only the interpretations of at least some of it were not.
E. The Riemann and Ricci tensors and Ricci scalar
It is straightforward (e.g., using Maple and Mathematica) to show that the Riemann and Ricci tensors and the Ricci scalar are identically zero. This is no surprise, since the original -space was assumed to be flat (Minkowski). So, the transformed space must be as well.
Clearly, an expanding universe does not require a Riemann tensor with curvature. In our quasi-equilibrium model, the universe has no average motion in its own ( )-coordinates, but it does in ours ( ). Note, however, that the form of our metric tensor in physical coordinates does appear to have curvature. However, these extra terms, , vanish for distances that are small relative to . Even when these terms are not small, however, the Riemann tensor is identically zero.
In Secs. III–XII, we shall explore first the kinematics of our assumed universe and show it is consistent with many astronomical observations. Then, we shall show what the vanishing Riemann tensor implies about the stress energy tensor and explore its consequences for the dynamics. Finally, we shall try to outline how our theory might be used to interpret (or reinterpret) recent observations on the distribution of mass and clusters throughout the universe.
III. THE VELOCITY
How the velocities in the two coordinate systems behave and relate to each other is clearly of prime interest. In this section, we consider their behavior in both spaces. We discover that the length scale is determined by them.
A. Velocity in -space
We first define a displacement field in our expanding-space-time coordinate system to be . Note that we use the subscript p to distinguish the proper-time-dependent displacement field from the independent variables . Then, the four-dimensional “velocity” is . However, only time passes; otherwise nothing moves (by assumption). So, the velocity in this space is simply 1, 0, 0, 0, since only is non-zero and equal to one.
B. Velocity in -space
Now, what we would really like to know is how looks in physical space (or -space). Unfortunately, we cannot simply transform it like an ordinary vector, since derivatives of vectors in general do not transform like tensors. We must use covariant derivatives and their associated Christoffel symbols. Note that we did not need these for in -coordinates since in the hypothesized Minkowski space the Christoffel symbols are all zero.
In Secs. III C–III E, we shall first consider . Then, in the next, for , and .
C. Covariant derivative for
This choice of coefficient ensures that is the Planck length scale if is the Planck timescale. This will prove to be important later. It is humbling (but quite reassuring) to note that we could have deduced Eq. (21) by dimensional analysis alone.
We shall see in Sec. VI that is the Hubble parameter. So, clearly . So, if is the age of the universe, then is the age as well. This is consistent with wide speculation for many decades, but not with previous theories. As noted by Carroll and Ostlie,32 almost all previous “ages” of the universe have been calculated using a Friedmann model. So, our “ages” and “times” will be different from those in common use.
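A minimal numerical sketch of these last two statements, under our assumption that Eq. (21) takes the simple form ℓ(t) = c t suggested by the dimensional argument above (the explicit form and the constants below are ours, for illustration only):

```python
# Sketch (assumption ours): if ell(t) = c*t, then
#   (i)  ell evaluated at the Planck time is the Planck length, and
#   (ii) H = (d ell/dt)/ell = 1/t, so 1/H0 is the age of the universe.
c   = 2.998e8        # speed of light, m/s
t_P = 5.391e-44      # Planck time, s
l_P = 1.616e-35      # Planck length, m

print(c * t_P, l_P)  # both ~1.62e-35 m

# Age implied by the best-fit Hubble value quoted later in Sec. VI D:
H0_si  = 63.6 * 1000.0 / 3.086e22    # 63.6 km/s/Mpc converted to 1/s
age_yr = (1.0 / H0_si) / 3.156e7     # seconds -> years
print(f"1/H0 = {age_yr:.3g} years")  # ~1.54e10 years, i.e., 15.4 Gyr
```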
D. Relation of and to t and
So, time in our reference space is logarithmically related to our physical space time t. Note that, by hypothesis, it is the increments of that are equally spaced, and the increments of physical space time that are stretched.33
E. Covariant derivative for and
The first term in square brackets is clearly zero if we substitute the result of Eq. (19). It is easy to show by differentiation that this result also makes the curly bracketed term on the right zero as well.
This is the result [Eq. (15)] we used in Sec. II D. Note that the material derivative of Eq. (28) is zero, consistent with our assumed rest state in variables.
IV. ACCELERATIONS AND THE GEODESIC EQUATION
In general relativity it is the geodesic equation which provides the counterpart to Newton's Law. In this section, we derive it for our proposed universe.
A. Geodesic equation
Note that in Einstein's interpretation it is the negative of the second term on the left-hand side (the one with the Christoffel symbol) which represents the gravitational potential gradient.
B. Writing the geodesic equation with proper time
We choose the proper time, defined by Eq. (7) as our affine parameter, noting that it is the same as the proper time defined by Eq. (11).
C. Showing that the geodesic equation is solved
Interestingly, this is satisfied for ALL , no matter the particular functional form of . So, the geodesic equation is satisfied.
V. MAXWELL'S EQUATIONS AND WAVE PROPAGATION
Almost everything we know about the universe is due to light propagation. Light propagation is governed by Maxwell's equations. Therefore, we need to consider how they change with coordinate system in order to interpret them in either or -space. We must also understand how this affects the spectra and intensities of the radiation received.
A. Maxwell's equations in curvilinear coordinates
Now, all of the usual rules for tensor transformation apply. In particular, since we know the Jacobians and the metric tensors, we can easily move these from -space to our -space, or vice versa; or from contravariant to covariant components, or vice versa. It is easy to show that the left-hand side of the equation will look exactly the same in either coordinate system, a consequence of the linearity and of Lorentz invariance. However, since where is the Jacobian, the source terms in one frame will differ by a factor of from the other. For example, if we specify a charge source distribution in the -frame by say, , then in the other frame it will appear multiplied by . Also, a charge at rest in one representation will appear as a current in the other.
The same will be true for the electric and magnetic field vectors as well. Since the power radiated is the cross product of the electric and magnetic field vectors, the power will be modified by . However, this will exactly compensate in the inverse square law using instead of , where D is a distance from the source. This will have profound implications for how we interpret intensity data from an isolated source. In particular, it means that the spectrum in physical variables will be shifted in both wavenumber and amplitude, so the power is conserved. We will discuss this in more detail in Sec. VII B, where we consider relative intensities of radiation from supernova data. Whichever space you choose to work in, both intensities must be transformed, not just one. The difference is the mysterious , which has made the universe appear to accelerate when the opposite is true.
B. Re-scaling Maxwell's equations into -coordinates
Note that from Eq. (24) it follows that , so the right-hand sides of both equations are multiplied by , exactly as should have been expected from in Sec. III A. As noted above in Subsec. III A, this has profound implications for how we interpret intensity data from an isolated source. It also means that the spectrum in physical variables is shifted in both wavenumber and amplitude, so the power is conserved. We will discuss this in more detail in Sec. VII.
C. Wave propagation in -space
The last term in parentheses in Eq. (51) will be identically zero for a particular star moving with the local average velocity of the universe. It will be zero by hypothesis for an average of them over a large enough field, our continuum (or field) hypothesis. We will use Eq. (52) extensively to analyze the Hubble measurements in Sec. IV B.
Interestingly, the shift in wavenumber is completely independent of velocity and depends only on the length scale . This is consistent with the observations of Ref. 19 and others that the average redshift is not really a Doppler shift at all. The total redshift of an individual body, however, is a consequence of both the expansion of the universe and the relative motion of that particular body within it.
D. Radiation from a point source into -space
This means that the intensity of the radiation from a star in -coordinates will obey an inverse square law in , and the frequencies and wavenumbers will remain unchanged from the source. In the physical space, however, the frequency and wavenumber detected at distances away from the source will be given instead by Eqs. (51) and (52). However, somewhat surprisingly, as noted in Secs. V A and V B, the intensities in physical space will drop off as an inverse square law in r, not . So, no “corrections” to intensities are necessary in either space.
E. Blackbody radiation in an expanding universe
Note that , while the spectral peak wavenumber is proportional to .
So, the entire spectrum shifts to lower physical space wavenumbers as increases. However, it does so in a way as to preserve an inverse square law.
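A small numerical illustration of this peak shift, using the standard Wien displacement relation and, as assumed example values, the 3000 K recombination-era temperature discussed later in Sec. VI E and a present-day temperature of about 2.73 K (neither number is part of this section's argument):

```python
# Wien's displacement law: lambda_peak * T = b, so the peak wavenumber
# k_peak = 2*pi/lambda_peak is proportional to T.
b = 2.898e-3  # Wien displacement constant, m*K

for T in (3000.0, 2.725):           # assumed early and present-day temperatures
    lam = b / T                     # peak wavelength, m
    print(f"T = {T:8.3f} K   lambda_peak = {lam:.3e} m")
# The ratio of the two peaks is ~1100: the whole blackbody spectrum has
# shifted to lower wavenumber while keeping its shape (and hence its
# inverse-square behavior), as argued above.
```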
VI. HUBBLE'S LAW
A. Derivation of Hubble's law
Assume D to be the distance to some distant star or galaxy so that its normalized distance in -space is . Since we are computing derivatives of something that happened long ago, both the distance and the time should be evaluated using values corresponding to the time when the information was transmitted, say , not when it was received. For distant galaxies is much earlier than our present time, say .
B. The relation between H and z
Note that the is evaluated at the time the light was emitted, .
is presumed by both theories to be the present value of the Hubble parameter. It is the only adjustable parameter in the theory presented here, whereas there are two additional ones ( and ) in the standard model. Most important, though, is the linear dependence of H on z in our model. As will become clear in Sec. VI D, this absence of additional adjustable parameters has very important implications for processing and evaluating both the data and our theory.
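To make the linear dependence concrete, here is a short sketch under our reading of Eq. (75) as H(z) = H0(1 + z); the explicit form is inferred from the discussion in this section and Sec. VI D, and the numbers used are the values quoted there:

```python
# Sketch of Eq. (75) as we read it: H(z) = H0*(1 + z), linear in z.
H0 = 63.6  # km/s/Mpc, the best-fit value found in Sec. VI D

def H_of_z(z, H0=H0):
    """Assumed form of Eq. (75): Hubble parameter at redshift z."""
    return H0 * (1.0 + z)

# Invert for the redshifts implied by the extremes of the Yu et al. table
# (68.6 and 227 km/s/Mpc are the values quoted in Sec. VI D).
for H_meas in (68.6, 227.0):
    z = H_meas / H0 - 1.0
    print(f"H = {H_meas:6.1f} km/s/Mpc  ->  z = {z:.2f}")
# -> z ~ 0.08 and z ~ 2.6, spanning the range of that dataset
```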
C. Relation of distance to z
D. vs z from the data
The recent, highly cited paper by Yu et al.23 contains a thorough analysis of the Hubble “constant” evaluation from 36 sources. Most conveniently (for us at least), their Table I contains the data summarized as vs z, where z is defined by Eq. (68), for values of and values ranging from 68.6 to 227 (along with rms error estimates). They carry out an extensive evaluation of the standard model, so we will focus here only on comparing their data with our new result, Eq. (75).
Each row spans five decades of logarithmic time, measured in seconds after the Big Bang, with the earliest epoch at the top. The present time is approximately 15.4 × 10⁹ years after the Big Bang.
Log-time | Seconds after the Big Bang | Period
---|---|---
−45 to −40 | 10⁻⁴⁵ to 10⁻⁴⁰ | Planck Epoch
−40 to −35 | 10⁻⁴⁰ to 10⁻³⁵ |
−35 to −30 | 10⁻³⁵ to 10⁻³⁰ | Epoch of the Grand Unification
−30 to −25 | 10⁻³⁰ to 10⁻²⁵ |
−25 to −20 | 10⁻²⁵ to 10⁻²⁰ |
−20 to −15 | 10⁻²⁰ to 10⁻¹⁵ |
−15 to −10 | 10⁻¹⁵ to 10⁻¹⁰ | Electroweak Epoch
−10 to −5 | 10⁻¹⁰ to 10⁻⁵ |
−5 to 0 | 10⁻⁵ to 10⁰ | Hadron Epoch
0 to +5 | 10⁰ to 10⁵ | Lepton Epoch
+5 to +10 | 10⁵ to 10¹⁰ | Epoch of Nucleosynthesis
+10 to +15 | 10¹⁰ to 10¹⁵ | Epoch of Galaxies
+15 to +20 | 10¹⁵ to 10²⁰ |
The top part of Fig. 1 plots our theoretical curve from Eq. (75) together with the measured vs z values of Yu et al.23 The error bars are from the Yu et al. paper as well. The solid line in the top figure corresponds to km/s/Mpc, our optimal fit to the data. Our value of corresponds to an age of 15.4 × 10⁹ years.
vs z for data of Yu et al.23 Top: Eq. (75), , with best fit (solid line). Bottom: Same plot but with alternative values (dashed line), 67 (dash-dotted line), and 72 (dotted line).
The bottom figure shows the same optimal fit, and in addition other popular values. The dashed lines correspond to , and 's. 72 is clearly too large, and 61 is too small. Either 67 or 63.6 would be acceptable choices. The mean square relative error of our fit is 9%, but the error for 67 is only a percent larger. The scatter in the figures appears to be randomly distributed, and both the 64 and 67 theoretical curves lie within the error bars of all but two of the data. A few outliers are largely responsible for the RMS relative error.
The only adjustable parameter in our theory is . The best-fit value of is well within the error bounds of the two values of and inferred by Riess et al.21 Rounding it off to corresponds exactly to the value deduced from gravitational lensing of SN Refsdal by Vega-Ferrero et al.37
A different dataset might produce a different value of . So, also shown on the plot is the theoretical curve using the recently popular Planck value of for which the RMS relative error of the fit increases only to 10%.38 The random error varies inversely with the square root of the number of independent estimates (only 36 in the present case), so the error bounds will surely drop as more data are acquired, especially as new theoretical considerations are included in the analysis and old ones winnowed out. This will be especially true if astronomers start presenting their data in the form of Yu et al.23 instead of averaging it all together to get a single averaged value of H.
Clearly, no matter the exact value of , the theoretical equation (75) is quite satisfactory. While works fairly well, the proposed value of 15.4 × 10⁹ years certainly resolves any issues about whether there could be stars older than the currently proposed age of the universe of 13.8 × 10⁹ years. The present age estimate for the “Methuselah” star (HD 140283) of Gyr by Bond et al.39 places it marginally within the previous age estimates but well within the 15.4 Gyr derived above. We will use in the remainder of this paper unless otherwise specified.
Finally, before leaving this section we note one other idea which we believe corrupts data analysis, originally due to an assumed constancy of the Hubble parameter. The data we presented in Fig. 1 are usually “corrected” by to make it appear that the Hubble parameter is nearly constant as shown in Fig. 2. Our equation (75) makes it clear why this works, since . However, what is wrong about this is that it has become justification for correcting ALL DISTANCES by , the so-called “tired light” correction. However, as our analysis shows, the Hubble parameter is really not constant. As Eq. (79) makes clear, distances vary as , not . We shall see below that this -correction is at the root of whether the universe is accelerating or not, at least with the prevailing theory. Since we believe the correction idea is wrong, we shall argue below that the universe is not accelerating with either theory.
Figure is from Farooq and Ratra.40 Note that the data have been “corrected” by . Some of the data were also used by Yu et al.23 in the plot above. For a detailed description of the other curves on the plot and the sources of the data refer to Ref. 40. The Hubble “crisis in cosmology” is because the best fit of the standard theory (thick dashed line) yields 72 km/s/Mpc at , while the Planck CMBR value using the same theory (hashed line) is 67 km/s/Mpc. Our constant value, , has been added as the horizontal solid (black) line.
E. Cosmic microwave background radiation fluctuations at
The cosmic microwave background radiation (CMBR) is of interest for several reasons. First, these data have been used to infer the value of cited in Sec. VI D. Second, the cosmic microwave background radiation is generally believed to be the footprint of the universe at or about the time that photons could first propagate,24 when the temperature was about 3000 K.
From the Planck estimate that the temperature now is , we can estimate that we should be able to see back to only × 10⁶ years. Note that this is exactly the value we obtain by inserting into Eq. (78). This is substantially greater than the 380 000 years after creation usually cited. However, it is actually farther back than the FLRW estimate, since our universe is over a billion years older.
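A rough numerical sketch of this estimate, under two assumptions of ours: that Eq. (78) amounts to the radiation temperature scaling inversely with the length scale ℓ, which grows linearly in t, and that the present temperature is the commonly quoted value of about 2.73 K (the exact figure is not reproduced above):

```python
# Rough sketch (assumptions ours): if T ~ 1/ell and ell ~ t, then
# t_rec / t_0 = T_0 / T_rec.
t0_yr = 15.4e9     # age of the universe from Sec. VI D, years
T0    = 2.725      # assumed present-day CMB temperature, K
T_rec = 3000.0     # temperature at which photons can first propagate, K

t_rec_yr = t0_yr * T0 / T_rec
print(f"t_rec ~ {t_rec_yr:.2e} years")
# ~1.4e7 years, i.e., tens of millions of years rather than the
# 380,000 years usually cited for the standard model.
```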
Since the temperature value depends only on radiation spectra and time, with no other assumptions, it is really not affected by the other difficulties of measurement. So, if we have correctly interpreted the Hubble data, and it does not change from the values we used, this number might be quite accurate.
Now, how about the scale of the CMB fluctuations? In our model universe, any initial inhomogeneities would now appear re-scaled by , where would be any time shortly after the non-linear processes dominate, and surely closely related to the time estimated below for the quantum field theory energy input. So, our theory should be able to predict (or at least account for) the present scale of the inhomogeneities, say . If we use an estimate of the CMB length scale to be m (or 1°), the ratio to the Planck length, , is , again clearly indicating its early quantum origins. This points to a virtual origin of . This is the same order of magnitude as the estimated below in Sec. VIII D using the QFT energy estimate. Either is perfectly consistent with the time for the non-linear physics to transition from quantum mechanics to our “turbulence similarity” type behavior.
VII. ASTRONOMICAL DISTANCE VS z USING TYPE Ia SUPERNOVAE
One of the greatest challenges of astronomy has been to measure distance, especially outside of our own galaxy. In this section, we explore the consequences of our theory on the interpretations of recent measurements, especially type Ia supernovae.
We have already considered how blackbody radiation would propagate in our new -universe, and in particular, how any spectrum would be modified. We have considered the consequences of an inverse square law in our expanding coordinates. Now, we will apply this information to review the methodology and terminology usually used to compute distances from intensity measurements of a single astronomical body. We will examine the results of applying our new theory to it. Finally, we consider the recent data of Refs. 21, 22, 41, and 42. These are the data that have been used with a multiplication factor to argue that the universe is expanding at an increasing rate. We note that Love and Love43 have observed empirically that without the factor in the supernovae data, the universe is not accelerating. Our theory will be seen to provide excellent agreement with the data without the corrections. Section VII A uses our results from Sec. V about how blackbody radiation propagates and finds no reason for such corrections.
A. The inverse square law
B. Relation of to redshift parameter z
C. Comparison of theory with data for
The first extensive use of the type Ia supernovae data was that compiled in the Calán–Tololo database by Hamuy et al.,45,46 but only for relatively small values of z ( ). This was extended to much larger values up to by Riess et al.21 and Perlmutter et al.22 There have been many papers since in more or less agreement, but we include only the additional results of Knop et al.42 All of these presented an extensive evaluation of their distance measuring methodologies. All presented their data in a variety of tables (including the Calán–Tololo data) and showed in detail how they corrected the data (Fig. 3). Most problematic (at least from the perspective of this paper) is the following quote from the Perlmutter et al. paper:
Data from column 4 of both Tables I and II in Perlmutter et al.22 and data from Table III, column 2 of Knop et al.42 are plotted vs z. The ordinate labels correspond to the table column headings. Top: Best fit of Eq. (90), , using . Bottom: Three theoretical curves using Eq. (90) have also been plotted using values for , and , respectively.
For the supernovae discussed in this paper, the template must be time-dilated by a factor 1 + z before fitting to the observed light curves to account for the cosmological lengthening of the supernova timescale.
This is of course consistent with our arguments in Sec. V C, but they unnecessarily compensate for a growing universe by multiplying their data (column d) by . Therefore, we choose instead to use their original, directly measured data (column b).
Figure 3 shows the data from all four groups as summarized in columns 2 and 4 of Tables I and II of Ref. 22, and columns 2 and 3 of Ref. 42. For both papers, column 2 is z, the redshift. Column 4 of Ref. 22 is plotted as black squares. Column 3 of Ref. 42 is plotted as red circles. The blue diamonds are the Calán–Tololo data of Refs. 45 and 46 as collected in Table II of Ref. 22. We have NOT used the “corrected” data, which include the multiplication by .
The top figure shows the data with the theoretical curve computed from Eq. (90) using a value of the absolute magnitude . The lower figure shows three versions of the theoretical curve computed from Eq. (90) using values of the absolute magnitude , , and . The value provides an excellent fit for all values of z. The two other values pretty much bound most of the data. Since each star presumably has its own slightly different value of M, there is no reason a single value of M should fit all the data. The fact that it does so even approximately is consistent with observations that all supernovae of type SN Ia seem to have absolute magnitudes falling in a narrow band from about −18.5 to −19.5 (cf. Ref. 44), as expected from a Chandrasekhar limit. So, the scatter is quite likely due to individual variations of M. Regardless, the agreement of all three sets of data strongly suggests that no special correction for is necessary, exactly as argued in Sec. V. This is consistent with the observation in Sec. V that it is the entire spectrum (and its integral) that is shifted, not just individual frequencies, so no such correction is necessary.
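For readers wishing to reproduce this kind of comparison, here is a minimal sketch of the standard magnitude–distance bookkeeping. Eq. (90) itself supplies the distance as a function of z in our theory; the distance-modulus relation below is the conventional one, and M = −19 is an illustrative value inside the band quoted above, not our fitted number:

```python
import math

def apparent_magnitude(d_parsec, M=-19.0):
    """Standard distance modulus: m = M + 5*log10(d / 10 pc).

    M = -19 is an illustrative absolute magnitude inside the
    -18.5 to -19.5 band quoted for type Ia supernovae.
    """
    return M + 5.0 * math.log10(d_parsec / 10.0)

def distance_from_magnitude(m, M=-19.0):
    """Invert the distance modulus to get a distance in parsecs."""
    return 10.0 * 10.0 ** ((m - M) / 5.0)

# Example: an apparent magnitude of m = 24 implies
print(f"{distance_from_magnitude(24.0)/1e9:.2f} Gpc")   # ~3.98 Gpc
```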
Before leaving this section, we note that our theory assumes a flat universe that is slowly decelerating as . We see nothing in these data that suggests otherwise. If, however, we had left in the factor included by Refs. 21 and 22 and others, the opposite conclusion could have been reached, at least using the standard theory. Our theory can be made to fit those data as well by choosing a value of M between about −17.5 and −18.5, with a best-fit curve corresponding to . The price paid, however, is a sharp discontinuity with the Calán–Tololo data, which fall well below any of these curves. Therefore, given our physical arguments for NOT including the factor (as appears to have been common practice), the continuity from one dataset to another in our treatment of the data, and the complete absence of any obvious new physics to explain a shift, we stand by our analysis and conclusions.
Riess et al. and Perlmutter et al.21,22,47 fit the standard model result of Eq. (80) to their data pre-multiplied by with . This, in addition to the choice of M, is a three-parameter fit. It is clear from the form of Eq. (80) why they needed the -prefactor to have any success at all with the fit (and why they might have convinced themselves that it was necessary). Our theory fits their original “uncorrected” data quite nicely with only a choice of M, which is well within the bounds of most estimates. In fact, the scatter might be largely due to slightly different values of M for each star.
VIII. MASS AND ENERGY
Now, we get to the crux of this paper. We have already seen that our proposed universe can describe the kinematics that have been observed and published. Now, we must investigate the underlying dynamics.
Also, do we still need dark energy and dark matter? Surely, there have been difficulties identifying both in nature, and the need can only be inferred indirectly using theories that might be wrong. Also, as our observations about theory in the two preceding sections make clear, neither is necessary to account for the data usually used to justify the indirect inferences. Hence, the alternative presented by our theory might be very attractive.
A. Implications of the similarity solution
Since our presumed model is Minkowski in -space, it must be in all.19 As noted earlier, this does not imply that the metric tensor, , in other spaces is nor that the Christoffel symbols will be zero. However, it does dictate that the Riemann tensor, , Ricci tensor, , and Ricci scalar, R, will all be zero.
It has been traditional to assume that implies that and thus describes only empty space. However, given our assumptions (or rather lack of one) about the divergence of , only implies that space is flat and that , not that either is zero. Also, we note that the pre-factor of completely disappears. Since the left-hand side of Eq. (91) is zero, there is no counterpart to the usual criterion for a critical density. In our theory, Eq. (3) completely vanishes, and so there is no critical density.
B. What is T?
We note that dimensionally there are only two independent parameters in the governing equations: G and c. In the absence of curved space ( , our flat space hypothesis), there is no imposed length scale. The Planck scales combine G with Planck's constant and c to characterize when gravitational effects become important. However, after the initial period when , where is the Planck length scale, the Planck length is far too small to be relevant. The evolution from quantum mechanics through this period has been described by Gibson48–50 as an inverse turbulence cascade with ever increasing scales of motion. So, the Planck parameters become relatively smaller and smaller, and no new parameters enter. There is no combination of the two parameters G and c alone that can produce a length scale or a timescale, nor anything with the dimensions of energy or mass density. So, any such scale must arise from the equations themselves or assumptions about them. A spatial coordinate could provide the missing parameter, but not in a homogeneous environment. So, a length scale dependent only on (or t) must exist to provide the missing parameter. The only choice is, of course, the length scale, (or ), which we introduced at the beginning.
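A short dimensional check of this statement, using the standard textbook definitions of the Planck scales (the numerical constants below are standard values, not taken from the text):

```python
import math

G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J*s

# G and c alone: any product G**a * c**b has dimensions
# m^(3a+b) kg^(-a) s^(-2a-b); a pure length requires a = 0, which removes
# G entirely, so no length (or time, mass, density) scale can be formed
# from G and c alone.

# Adding hbar gives the usual Planck scales:
l_P = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
t_P = math.sqrt(hbar * G / c**5)   # ~5.4e-44 s
m_P = math.sqrt(hbar * c / G)      # ~2.2e-8 kg
print(l_P, t_P, m_P)
```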
If integrated over a volume, this is exactly the form of the Einstein–Hilbert action but for the scalar invariant T instead of the Ricci scalar R (v. Ref. 19).
The time derivative of T is clearly non-zero, meaning that the divergence of is non-zero, consistent with our observation about it at the beginning of this section.
Since is on the order of the Planck time ( s), is a very large quantity indeed. Recall that is the lower limit of the integral in Eq. (7), and corresponds to . In Sec. IX we will identify both of these unambiguously with the time of the Big Bang, or at least a virtual origin corresponding to it. In Sec. VIII C, we shall show how to express T in terms of the time-dependent mass density, , and the rest mass energy per unit volume, e.
C. Mass density and rest mass energy
Clearly, the behavior of the mass density and rest mass energy e with time is crucial to understanding the expansion of our universe. Earlier FLRW theories assumed conservation of mass and energy. This would require (using our symbols) that the mass density, say , and rest mass energy, say , be constant. Abundant observations suggest strongly that they cannot be maintained as constants without a large source of external (dark) energy. This is the major point of departure of our theory from the classical theory. For our proposed universe, there is no reason to believe that either mass or energy is conserved, because we are allowing physical time to be measured in non-equal increments.
One way to see the implications of this is the following simple example. Consider the acceleration of a simple lump of matter by a force applied to it. Since mass is the ratio of the applied force to the acceleration, if the acceleration is measured differently using non-equal time increments, then the mass must be defined differently. Similar conclusions apply to the kinetic energy.
To conclude this section, we note that the idea that energy is not conserved is really not new, especially since the work of Noether,51 which makes clear the relation between energy and the choice of time. It has been noted before that a proper definition of inertial mass depends on how we define time.52,53 Also, Carroll19 made the following observation which, although in a different context, appears quite relevant here.
Clearly, in an expanding universe, the energy-momentum tensor is defined on a background that is changing with time; therefore there is no reason to believe that energy should be conserved.
He then goes on to argue that the zero divergence of implies there is a law in which something is conserved. As noted above, our does not have zero divergence, but our does. As Carroll suggests, there is a corresponding conserved quantity: .
D. The “worst prediction in the history of physics”
There is a direct relation between Einstein's field equations and the equations of turbulent flow in fluid mechanics. Our solution to the field equations, with its time-dependent length scale, is in fact a close analog of similarity solutions of the averaged Navier–Stokes equations for homogeneous turbulence (cf. Refs. 3 and 4). Gibson49 has argued that these post-quantum and post-inflation early processes would have been similar to an inverse Kolmogorov cascade in turbulence, where the scales grow and energy is moved to larger and larger scales. Such high (or infinite) Reynolds number “turbulence” can be characterized in spatial Fourier space by a wavenumber energy spectrum that rises quickly to a peak at roughly the inverse of the characteristic length scale, say like our , and then rolls off as , where k is the magnitude of a spatial wavenumber vector. The result is that the energy is spread over a band roughly characterized by wavenumber equal to (cf. Ref. 28 or any book on turbulence). Alternatively, the field could be described in terms of its evolution in time, usually a power law determined by the initial condition, exactly as we have in Eqs. (99) and (100).
So, the question can be asked: How far back in time can our theory be expected to apply (assuming it is valid at all)? We show in this section that it in fact appears to reasonably describe everything from Planck times to the present time. It does so without dark matter or energy. This is quite different from the popular view at the moment. First, we consider the quantum field theory estimates of the energy put in at the beginning. Then, we compare that to the best estimates of the visible mass in the universe.
One of the most unsuccessful theoretical results of the past few decades has been the large discrepancy between the predictions of quantum field theory and the observed energy density in the cosmos. The ratio of the QFT prediction, or , to the observed mass (or energy) is usually estimated at . This has been described as the “worst prediction in the history of physics.”12,19,30 (Carroll and Ostlie32 in their section on “The Early Universe” provide a nice summary of how this result was obtained from the uncertainty principle.) A modification to the QFT calculation to include Lorentz invariance54 reduces this discrepancy by a factor of , which is still huge. This is all the more troubling since QFT seems to accurately predict the magnetic dipole moment. These results have been interpreted in a number of ways and have been dubbed “vacuum energy.” We interpret them here as an initial condition.
The search for how much matter there is in the universe has been detailed in many publications, including popular articles and all textbooks. The basic problem has been that the mass density inferred from visible matter is much less than the critical density of Eq. (1). There have been numerous studies in the last decade using various methods of processing both astronomical observations and simulations to try to estimate the visible (and invisible) matter in the universe. Table II of the very recent paper of Abdullah et al.26 lists the results of 20 different extensive studies. Their estimates of the mass density of matter in the universe range from a low of 0.22 to a high of 0.40 , where is the critical density defined by Eq. (1). The average of all 20 estimates was approximately . Their value of was calculated from Eq. (1) using the age of the universe as 13.8 × 10⁹ years and various values of the Hubble parameter.
The most recent value cited in the table was that of Abdullah et al. themselves, for which the present mass density, , was estimated to be 0.305 ( ), which gives (using their parameters): .
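For orientation, here is a sketch of the numbers involved, assuming the conventional critical-density formula for Eq. (1) and an illustrative Hubble parameter of 67 km/s/Mpc (Abdullah et al.'s exact parameter choices are not reproduced here):

```python
import math

G  = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
H0 = 67.0 * 1000.0 / 3.086e22      # illustrative Hubble parameter, 1/s

rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)   # conventional form of Eq. (1)
rho_m    = 0.305 * rho_crit                    # Abdullah et al. matter fraction

print(f"rho_crit ~ {rho_crit:.2e} kg/m^3")     # ~8.4e-27 kg/m^3
print(f"rho_m    ~ {rho_m:.2e} kg/m^3")        # ~2.6e-27 kg/m^3
```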
We do not know the value of , but we can certainly calculate it by working backward from the Abdullah et al. value and the two QFT estimates. The QFT estimates can be converted from energy to mass by dividing by , and they give for the original estimate and for the revised Lorentz-invariant estimate, respectively. We will use our value for estimated in Sec. VI D as 15.4 × 10⁹ years (or s). From the original QFT estimate, we calculate s, or about 4.5 Planck times. For the second, s, or about Planck times. The first corresponds to the beginning of the importance of gravitational effects; the second is well into the grand unification epoch (v. Table I). Both QFT estimates are well before the age of galaxies, or even before photons can propagate. Most importantly, both are close enough to to reasonably associate them with the Big Bang. Both were obtained with no adjustable constants, using only our theory and the Abdullah et al. estimate of the current observable mass density of the universe.
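A sketch of this backward extrapolation, under our reading of Eq. (99) as a mass density falling off as 1/t² (the "122 decades" between the Planck time and the present shown in Fig. 4 is consistent with this reading; the exact QFT ratios are not reproduced here):

```python
import math

t_P = 5.391e-44          # Planck time, s
t_0 = 15.4e9 * 3.156e7   # present age from Sec. VI D, converted to seconds

# Assumed reading of Eq. (99): rho(t) = rho_0 * (t_0 / t)**2.
decades = 2.0 * math.log10(t_0 / t_P)
print(f"density growth from now back to the Planck time: ~{decades:.0f} decades")
# ~122 decades, matching the span shown in Fig. 4.

def time_of_density(ratio_to_present, t0=t_0):
    """Time t at which rho(t)/rho_0 = ratio_to_present, assuming rho ~ t^-2."""
    return t0 / math.sqrt(ratio_to_present)

# For example, a density 1e122 times the present value is reached at
print(f"{time_of_density(1e122):.1e} s")   # ~4.9e-44 s, roughly the Planck time
```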
Figure 4 shows a plot of vs from the Planck time to the present using Eq. (99). The data have been normalized by , the Abdullah et al. value. The two quantum field theory (QFT) estimates are shown by the orange diamonds. The present value is the blue triangle.
Plot of Eq. (99) showing 122 decades of mass density normalized by the present value vs time normalized by the age of the universe. The blue triangle is the present value. Also, shown are the QFT1 value and the QFT2 value (orange diamonds), both normalized by the present day density of Abdullah et al.26
Note that no particular “time” is given for when these QFT estimates should be applied, so we extrapolated back from the present. We know, however, that the functional form of their time dependence is the same as in our theory (see discussion in Sec. VIII F and Ref. 19). So, if we had a beginning time, we could have connected the curves instead of showing the QFT results as just points. This would have alleviated our need to infer the present density value from measurements alone, and would have provided a useful check on whether the Abdullah et al. estimate is large enough.
Different astronomical results could change this conclusion.
E. The stress-energy tensor revisited
We note that this is similar in form to the quantum field theory result of Carroll19 in Chapter 9, when his equations (9.166) and (9.133) are combined. There he considers a two-dimensional flat Minkowski space of infinite extent governed by both Einstein's equations and quantum mechanics. Even his time is logarithmic. Surely, dimensional analysis alone dictates the physical time dependence of both solutions. However, we offer two possibilities in Sec. VIII F, which considers radiation as well.
This pressure is not “pressure” in the usual sense, but the mean normal “stresses” of the averaged motion—like the Reynolds normal “stresses” in classic fluid mechanics turbulence. Note that there is a slight difference from the usual equation of state [Eq. (4.33) in Ref. 19] because we have normalized differently.
Aside from the normalization these equations are exactly the form of Einstein's stress energy tensor Carroll chose. The -component is the rest mass energy. The diagonals are the pressure and longitudinal momentum fluxes. The off-diagonals are “Reynolds-stresses” and disappear if the matrix is diagonalized or cast in spherical coordinates.
It is interesting to discuss the role of pressure here. In -space, the pressure is everywhere the same and there is no motion, both by hypothesis and by the assumed homogeneity. The -space is far more interesting. It is not homogeneous, since looking out in space is really looking back in time. However, (on average) it looks the same to all observers, no matter their location. No matter where they sit in it, the universe appears to be accelerating away from them, since at a single instant in time. However, what is driving this “apparent” acceleration? Since the density decreases with time and , the pressure must increase in time from a deep negative value. However, the p at any radius r is really the pressure at an earlier time, which was even more negative than the present pressure at the location of the observer. So, the pressure gradient is negative, i.e., decreasing with increasing r. Thus, to an observer in -space, this negative pressure gradient appears to be accelerating the flow away and driving a positive momentum flux outward. Most importantly, there is no need for additional sources of pressure, momentum, matter, or energy, especially dark matter or dark energy. The only necessary contribution to obtain what we think we see was put there at the beginning.
Another interesting observation: our solution looks like a three-dimensional analog of Carroll's two-dimensional QFT description in Chapter 9, Section 5. While Carroll's solution and ours began from opposite ends of the cosmic evolution, clearly the assumption of homogeneous Minkowski space dominates the behavior. In fact, Section 9.5 of Ref. 19 is almost exactly our solution if you interchange his -coordinates with his -coordinates. His “Rindler coordinates” are our -coordinates. With a bit of work, his log time and exponential factor are the same as our log time and . His equation (9.140) is exactly what we try to argue in Sec. IX. Since there are no new parameters that enter the problem, it would be a surprise if these solutions were not the same. Unfortunately, we do not have access to enough information to establish that. Perhaps others will.
Alternatively, any resemblance of the two theories could be just simple dimensional analysis, since G and c are the only parameters. Also, note that our universe is infinite, with many overlapping spheres of visible universes, depending upon where one is sitting. Carroll interprets his results as though there is a single spherical one, but this seems to be a matter of interpretation, not mathematics, and it is equally consistent with an infinite one. The finite universe he describes leaves us needing inflation, while ours does not. It simply begins everywhere at the same time.
As Sec. VIII F makes clear, radiation provides a possibility that the two theories might be different, each representing a different region of expansion, one very early and one very late, separated by a region in which at least one more parameter, radiation, enters.
F. Radiation energy density
Sections VIII A–VIII E, and Eqs. (99) and (100) in particular, show how the mass and energy densities vary with time. What they do not show is what portion is photons (or radiation) and what portion is matter. We will calculate the radiation part separately, and then identify the remainder as matter.
The contribution of photons above can be modified to include neutrinos. Nave55 (see also Carroll and Ostlie32) suggests including a factor of 1.681 to account for neutrinos since the neutrino temperature dependence is also (but only below ). This makes little difference in the contribution to the density at this time. However, it does affect where the radiation curve intersects our theory, i.e., the point before which only radiation is present.
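A numerical sketch of the present-day radiation contribution, using the standard blackbody energy-density formula and the 1.681 neutrino factor mentioned above (the present temperature of about 2.73 K is an assumed standard value, not quoted from the text):

```python
a_rad = 7.566e-16    # radiation constant 4*sigma/c, J m^-3 K^-4
c     = 2.998e8      # speed of light, m/s
T0    = 2.725        # assumed present radiation temperature, K

u_photon   = a_rad * T0**4            # photon energy density, J/m^3
rho_photon = u_photon / c**2          # equivalent mass density, kg/m^3
rho_rad    = 1.681 * rho_photon       # photons plus neutrinos (factor from Nave)

print(f"rho_photon ~ {rho_photon:.2e} kg/m^3")   # ~4.6e-31
print(f"rho_rad    ~ {rho_rad:.2e} kg/m^3")      # ~7.8e-31, small next to the
                                                 # matter density quoted in Sec. VIII D
```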
Now, here we are faced with a dilemma. Since the radiation density given by Eq. (109) is increasing more rapidly with decreasing time than Eq. (99), should we follow our theory back to the QFT estimate as we did in Sec. VIII E? Or should we follow the radiation density values back to the QFT estimate? If we do the latter, the radiation densities equal the QFT results at s ( Planck times) and s ( Planck times), respectively, the latter time corresponding to the lower QFT value. While these are much later times, at least to those interested in this early period, it clearly makes no difference to the age of the universe. Both alternatives lead smoothly from our gravity-dominated theory to the quantum dominated era.
Figure 5 shows the same theoretical curve, Eq. (99), as in Fig. 4, but only for . Two radiation curves are shown, corresponding to Eqs. (108) and (109), the latter including the neutrino contribution. The figure shows the location in time of (red arrow) where the temperature is 3000 K, and the location in time of the Methuselah star (green arrow). If the arguments at the end of Sec. VIII E are correct and the QFT and our solution are the same, the solid line should be followed all the way back until quantum mechanics dominates. The dashed line simply indicates where photons begin to be important (where the lines cross) and how they cease to dominate as baryonic matter is formed.
Blow-up of Fig. 4 showing only times after . The black line is Eq. (99), and the dashed lines are the radiation estimates of Eqs. (108) and (109). For reference purposes, we have also shown on the plot the time associated with the cosmic microwave background radiation (red triangle), when the temperature was 3000 K, corresponding to . The green triangle indicates the age of the Methuselah star (14.5 × 10⁹ years).
IX. WHAT DOES THIS ALL MEAN?
We have found a solution which seems consistent with many of the previously challenging astronomical observations and quantum field theory calculations. However, we have entered this through the back door, by assuming a similarity form to describe the present and exploring the consequences. This is exactly what Hawking and Mlodinow56 in their last chapter suggested we do to sort out the -th possibilities of M-theory—work back from the present to the beginning to see which of the possible solutions happened. They described the old FLRW-based theory as a possible path back. However, we herein offer a different path back to the beginning, which does not need the magic of inflation.
The Riemann tensor itself is the product of two covariant derivatives, and it is highly non-linear (v. Ref. 18). While it can be linearized for small amplitudes (like gravity waves), it most surely cannot be here since this impulse response is not small—it is causing the whole universe to expand. Even if we wrote out the averaged equations above and included all the second moment terms, we would still be left with the usual turbulence closure problem—more unknowns than equations. Now, it may be possible to solve Eq. (113) without worrying about the higher order terms—like the inviscid flow solutions, which work well for the aerodynamics of bodies, for example. Or maybe appropriate closure approximations can be made, as in much of engineering turbulence. Surely, numerical solutions should be possible, but from turbulence experience it is probably crucial to make the computational box at least an order of magnitude larger than throughout the computation. (As in turbulence computations, this should be a lot easier to achieve in our scaled coordinates.) We note that Carroll19 succeeded in reducing his QFT equations to a wave equation [his equation (9.140)], but we were unable to do that here. Perhaps we have missed the obvious.
Fortunately, as happens frequently in turbulence theory (especially for flows of infinite extent4,28,57), the strong non-linearity has led to a limit cycle or attractor, which in this case can be characterized by scaling time and space together. Our solution has revealed enough about its nature that we can examine its transient behavior in another way. We can work backward from what has already been shown to be true.
It is obvious then that at least our universe is an initial value problem. More importantly, there is no need for additional sources of energy or mass or whatever to sustain its continued expansion. It really was a Big Bang, and aside from G and c, nothing else matters.
X. TOOLS FOR FUTURE WORK BY ASTRONOMERS
The great solid mechanician and experimentalist James C. Bell of the Johns Hopkins University often said to his students and collaborators:
Experimentalists sort theories. (J. C. Bell58)
However, as noted in Ref. 59, the problem often is that there is only one theory, so there is enormous pressure to prove it correct. R. R. Long60 in a famous footnoted paper61 remarked:
Theoretical results accepted on the basis of very limited evidence, become after long periods of time impossible to overturn with even abundant contradictory evidence.
The problem of course is the lack of alternatives before ideas become cast in intellectual concrete. So, at very least we have provided astronomers with another theory “to sort.” In this section, we have tried to provide the tools to do so.
There have been numerous studies showing various methods of processing both astronomical observations and simulations to try to account for the visible (and invisible) matter in the universe. The recent paper of Abdullah et al.26 and the earlier work of Poggianti et al.,25 for example, focus on clusters. These data on the number and mass of the galaxies in clusters are usually plotted as inferred mass or mass distribution vs z. We would like very much to have utilized these extensive results to evaluate our theory. For the most part, we have largely failed, and were able to do so only for the cluster number dependence on z, which is plotted in Fig. 7 of Sec. X G. We suspect our failure largely reflects our own shortcomings (see the postscript to this paper for an explanation). However, the inferred masses also seem to have been unduly influenced by the need to conform to the standard model. We simply are not qualified to judge. So, in this section, we offer a brief outline of how our theory might be used in future studies, or for re-processing existing data by those more qualified than we are to do so.
A. Things look different looking back in time
All of the conclusions of Sec. IX were a consequence of spatial homogeneity of an infinite universe at a single time. However, we can only look back in time. Our universe is assumed to be statistically homogeneous in space but non-stationary in time, t. So, if we average over a particular shell of radius r, we are really looking at the average universe at time . Below, we examine how our new theory affects our view of what we see.
So, the mass looking back in time is VERY much greater than the mass actually there at THIS time. Only the limit imposed by the Planck time (below some multiple of which our theory does not apply) saves the integral from blowing up entirely. Even if divided by the volume of the visible universe, this is a very large number and is clearly in no way indicative of the average density at present. It is, however, indicative of the energy put in initially; in fact, exactly the QFT energy discussed in Sec. VIII D.
Figure 6 shows how almost all of the contributions to this integral (and its integrand) come from the earliest second, yet they are forever hidden from us behind a wall of invisibility lasting millions of years. We can only see their footprint in the cosmic background radiation.
FIG. 6. Upper: plot of Eq. (118) looking back in time, showing how the density varies with distance from an observer; note the apparent singularity as r approaches the radius of the visible universe. Lower: plot of Eq. (119) looking back in time, showing how the cumulative mass varies with the distance from the observer over which the integral is computed; the cumulative mass is normalized by its present value.
B. Variation of volume with z
For theoretical arguments, it is easiest to place ourselves at the center and denote the distance away from us as r. The largest such r would then be the edge of our visible universe. However, most measurements are recorded using the redshift parameter z, which is directly measurable. So, before considering how the mass and cluster number dependencies vary with r and z, it is useful to consider how the volume itself varies with z. We could simply define the volume as a function of r and convert that to z if we knew the relation between z and r.
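The first step is elementary; a minimal sketch, in our notation, is simply the flat-space volume of a sphere of radius r,
\[
V(r) \;=\; \frac{4\pi}{3}\, r^{3},
\]
which becomes a function of z once the relation between z and r from our theory is substituted.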
C. Mass distribution as a function of r (or z)
It is clear from the divergent integral in Eq. (119) that what we really need is not a spatially (or temporally) averaged density but a cumulative density as a function of radius r, or of z. Then, we can choose whether to normalize it by the cumulative volume or by the current value of the density, which hopefully we can determine by fitting the data.
It is important to note how different the limiting behaviors of the cumulative mass and the volume are. The leading terms in the Taylor expansions of both vary the same way for very small values of z, so their ratio approaches a constant there. This makes quantities per unit volume quite useful for small z. However, for large z things are more complicated, since the cumulative mass varies as z for large values, while the volume goes to a constant.
D. Mass per unit volume
Note that the only parameter is the present density of the universe.
These clearly have very different behaviors for large values of z: our theory grows linearly in that limit; the standard theory does not.
E. Cluster number as a function of z
This section proposes to deduce the cluster number itself by introducing the additional hypothesis that the average number of galaxies per unit volume in the scaled coordinate space is constant.
It is easy to see, by expanding this integral for small r (i.e., small z), that the leading-order behavior is exactly what we would have hoped for had we defined things correctly.
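A hedged sketch of that leading-order behavior, implied by the hypothesis just introduced (our notation: n_xi is the assumed constant number of clusters per unit scaled volume and delta the time-dependent length scale), is
\[
N(<r) \;\approx\; n_{\xi}\,\frac{4\pi}{3}\left(\frac{r}{\delta}\right)^{3}, \qquad r \ll \delta,
\]
so the cumulative cluster count grows as r^3, or equivalently as z^3 at small z if the usual low-redshift proportionality between r and z is assumed.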
F. Cluster number per unit volume
G. Comparison of cluster numbers vs z for the data of Refs. 26 and 25
In this subsection, we will look at two sets of data: the low-z data of Abdullah et al.,26 which examine the GalWit19 database, and the higher-z data of Poggianti et al.25 The former is an extensive paper that attempts to count clusters and uses the data to evaluate the standard theory and simulations based on it. We consider here only their cluster data plotted vs z. The Poggianti et al. paper is an older one, one of several by the same authors, which examines clusters at two higher values of z.
The Abdullah et al. data26 are plotted alone in the top part of Fig. 7. We have converted their numbers to our variables using their quoted parameter values. Since they did not include information below their lowest redshift, we used an iterative process to establish the present value. The theoretical curve fits the data well at low z and overshoots for larger values of z. The three-term Taylor expansion of Eq. (141) fits equally well. This is consistent with the observation of Abdullah et al., who attributed the difference relative to simulations (above a similar redshift in their case) to the incomplete dataset at larger values. We agree. Unlike their comparison to the standard analysis, however, no part of our theory has to be attributed to dark energy or dark matter.
Equation (140), together with the data of Poggianti et al.25 and the Abdullah et al. data, is plotted in the bottom part of Fig. 7. The Poggianti et al. data were scaled (by them) by what they believed to be the local value. The ratios of their two data points to our theoretical values at the same redshifts are 1.62 and 1.60, respectively, so the difference between ours and theirs is most likely due to the unknown coefficient in the theory. So, we have multiplied their data by 1.6.
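For readers who wish to repeat this kind of single-coefficient rescaling on other catalogs, a minimal sketch follows. The function N_theory is only a placeholder for Eq. (140) (here its small-z, cubic limit), and the data arrays are hypothetical, not the values of Refs. 25 and 26; the fitted constant plays the role of the unknown coefficient absorbed above and can equally well be applied to the data, as was done with the factor 1.6.

import numpy as np

# Hypothetical cluster counts vs redshift (replace with a real catalog).
z_data = np.array([0.05, 0.10, 0.20, 0.40, 0.55])
N_data = np.array([1.2e2, 9.5e2, 7.8e3, 6.1e4, 1.6e5])

def N_theory(z):
    # Placeholder for Eq. (140); here simply its small-z (cubic) limit.
    return z**3

# Best-fit multiplicative coefficient by linear least squares.
Nt = N_theory(z_data)
coeff = np.sum(N_data * Nt) / np.sum(Nt * Nt)

print(f"fitted coefficient: {coeff:.3g}")
print("theory rescaled to the data:", coeff * Nt)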
In summary, it appears that our theory, and especially the new hypothesis about the distribution of galaxy clusters in scaled space, looks promising. It should be noted that the same theory applies to properly performed simulations as well. Our theory provides a length scale for the universe, not its size, which is presumed infinite. Experience with numerical simulations of homogeneous turbulence suggests strongly that the computational domain needs to be much larger than this characteristic length scale to minimize the effect of the boundaries, typically by a factor of 10. We look forward to seeing the results of this comparison of theory and data by real astronomers.
H. Can we evaluate the average cluster mass?
XI. LOGARITHMIC TIME
The comment of Dirac27 cited at the beginning clearly foreshadows our work. In this section, we review some aspects of our theory, some of which have also been anticipated.
FIG. 8. Figure showing how the two times, the similarity time and the physical time t, are related; the Planck time and the virtual origin are indicated. For plotting purposes, a particular value of the virtual origin has been chosen. The “epochs” from Table I have been identified, along with 1 s and 1 year.
A. Epochs of our evolution
From Eq. (22), we know that our similarity time is logarithmically related to our physical time t. The description of a universe evolving in “Epochs” of logarithmic time is familiar to every cosmologist (see Table I, taken from Wikipedia62). Figure 8 shows how the epochs identified in Table I correspond to our postulated universe.
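Schematically, and in our own notation, the relation invoked here is of the form
\[
\tau \;\sim\; \ln\!\left(\frac{t}{t_{*}}\right),
\]
where t_* denotes the virtual origin proportional to the Planck time; the constants and the exact placement of the virtual origin are those fixed by Eq. (22).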
While different physics dominates each region, our theory probably applies equally to all of them after the Planck era, since the underlying assumptions would still be valid. The statistics would be the same; only the underlying mechanisms would differ.
B. Time measured in logarithmic increments
Aside from the comment (cited in Sec. II) by Dirac27 near the end of his life and the musings of George52 (from which this section was adapted), there appears to be no evidence (at least available to us) that anyone has previously considered that a different time should apply to the Einstein field equations than to quantum mechanics. However, even if we had thought of it, would we have noticed any difference?
George52 (using previously believed values for the age of the universe) noted that
There have been approximately 13.8 billion years ( s) since the Big Bang. But mankind has only been on the earth for approximately 250 000 years. So even if we had been keeping careful track since then the differences we would have noticed between the hypothesized QSM time and linear time would have been . And the differences we would have needed to observe to discover a discrepancy are the square of this, or of order . But, we have been doing mechanics for only the past 500 years, so even had we started measuring carefully at the time of Galileo, . So the leading error term would have been of order , and clearly beyond our ability to distinguish from experimental data alone. Only by trying to make sense of things that happened billions of years ago would we have noticed our equations don't balance. But of course we have done that now.
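The orders of magnitude behind this argument are easy to reproduce; the short sketch below is our own arithmetic, offered as an illustration of the reasoning rather than as the exact figures in the quotation.

# The fractional difference between logarithmic and linear time over an
# interval T inside a universe of age T0 is of order T/T0; the leading
# correction one would need to resolve is of order (T/T0)**2.
T0 = 13.8e9          # previously assumed age of the universe, in years

for label, T in [("human history", 2.5e5), ("mechanics since Galileo", 5.0e2)]:
    ratio = T / T0
    print(f"{label}: T/T0 ~ {ratio:.1e}, leading error term ~ {ratio**2:.1e}")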
Figure plotting Eq. (145), showing the variation with time t. Only by looking back over billions of years could any deviation from linearity have been noticed.
We are now at 15.4 × 10^9 years on the plot. The slope has been very nearly constant over all of human existence. The same is true for most of our previously observable universe (i.e., before the Hubble telescope). Only now can we see far enough back in time to notice any deviation from a simple linear relationship. So, only now do we see the inconsistencies in our previous theories.
C. The non-vanishing horizon
An interesting consequence of our theory is that the last star will not vanish over the horizon. The universe is expanding at precisely the right rate to prevent that from happening. This should be quite a relief for those who found the previous conclusions depressing.
A related question, however, is: will the last star burn out? The answer is not obvious to us, and we leave it to others to speculate. Surely, since G and c are properties of nature, there is no reason there will be any less of them. However, if quantum mechanics really functions on its own timescale (as suggested in the quotation from Dirac at the beginning of this paper), then the answer is probably yes. However, if not…? We leave it to the quantum field theorists and nuclear physicists to reason this out, since it is clearly beyond our capabilities—at least at the moment.
XII. SUMMARY AND CONCLUSIONS
This paper explored the consequences of the simple hypothesis that we live in an infinite universe in which both gravitational time and space evolve in pace with atomic clock time when scaled by the same time-dependent length scale. This is fundamentally different from the traditional FLRW approach, in which only space expands with time. We have not replaced Einstein's field equations; we have instead found a different solution to them—one that evolves in time, but in such a way that the acceleration terms of the field equations vanish identically. A direct consequence is that the problematic “critical density” concept vanishes completely, and with it the need for either dark energy or dark matter.
We removed the time evolution by simply considering the universe in scaled coordinates to be a maximally symmetric Minkowski/de Sitter zero-curvature space of infinite extent that remains fundamentally unchanged during its evolution. Then, we propose that this coordinate system is related to our physical coordinate system by stretching the coordinates with a time-dependent length scale. The statistics of this stretched universe are inhomogeneous in space and so vary in both space and time, t.
In effect, we have simply placed Einstein's original hypothesis of a static universe into a coordinate system in which both time and space expand together. The metric tensor is constant in the scaled space but evolves with time in physical coordinates. A consequence is that, even though space and time are expanding, the zero values of the Riemann and Ricci tensors mean that Einstein's stress-energy tensor reduces to a single function multiplying the metric tensor. That function depends only on the gravitational constant G, the speed of light c, and the time-dependent length scale.
The rate of expansion is the speed of light, and we show that the physical constants G and c are related directly to an initial condition—in this case, the Big Bang or its residual when gravity becomes important.
A direct consequence of the assumption of a static universe in similarity coordinates is that the length scale depends linearly on physical time t, offset by a virtual origin proportional to the Planck time and indicative of the Big Bang. Thus, the proper time (and similarity independent variable) depends on the logarithm of t, a result consistent with the speculations of George,52,53 who examined the consequences of a log-time assumption. Interestingly, Caballero63 (see also Ref. 64) has recently argued, using the Wolfram Model of fundamental physics,65 that logarithmic time can be proved to be generically the same as the total information content of the Universe. This provides a striking parallel to the deduction herein that initial conditions directly establish the gravitational constant G and that the expansion rate is proportional to the speed of light c, both fundamental constants of physics.
The linear relation between the length scale and t implies that the Hubble parameter is inversely proportional to the age of the universe. Our prediction that the Hubble parameter varies as the reciprocal of t, where t is the age of the universe, would appear to contradict current wisdom. However, Carroll19 has noted that this is the desired result for the very early universe as well. We show that it implies a simple expression for the Hubble parameter in terms of the redshift z and its present value. We show this equation to be in excellent agreement with the recent results of Yu et al. for the corresponding value of the Hubble constant. This corresponds to a universe that is 15.4 × 10^9 years old. Our older universe explains a number of recent results, including the age of the Methuselah star (14.5 × 10^9 years) and recent inferences from supernovae and gravitational lensing, all of which suggest an older universe than the presently assumed value of 13.8 × 10^9 years.
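Taking the Hubble parameter to be exactly the reciprocal of the age (the simplest reading of the relation above), the quoted age is a one-line unit conversion away from H_0. The value of H_0 below is illustrative only, chosen because it reproduces the 15.4 × 10^9 yr quoted above; a minimal sketch of the conversion is

# Age of the universe under H = 1/t, i.e., t0 = 1/H0.
KM_PER_MPC = 3.0857e19       # kilometres per megaparsec
SECONDS_PER_YEAR = 3.156e7   # seconds per year

H0 = 63.5                          # illustrative value, km/s/Mpc
H0_per_second = H0 / KM_PER_MPC    # convert to s^-1
t0_years = 1.0 / (H0_per_second * SECONDS_PER_YEAR)

print(f"t0 = {t0_years:.3g} years")   # about 1.54e10 years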
We also examine the supernovae data of Hamuy et al.,41 Riess et al.,21 Perlmutter et al.,22 and Knop et al.42 Our theory has only two parameters: the present value of the Hubble parameter and the Chandrasekhar limit mass, M. We find excellent agreement of our theory with their “uncorrected” results for appropriate values of these two parameters, and a slightly different choice provides almost as good agreement with their corrected data. However, we argue, using the scaled Maxwell's equations, that the uncorrected data are preferred, since radiation spectra are shifted in both amplitude and wavenumber. Without the correction, their universe is not accelerating. However, our universe expansion rate is slowing down for BOTH their corrected and uncorrected data. So, neither needs dark energy to accelerate it.
The fact that time is not measured in linear increments means that energy and mass are not conserved quantities; only G and c are. We are not the first to notice that changing time affects conservation of energy (cf. Refs. 19 and 51), but perhaps we are the first in this context. In fact, both the rest mass energy and mass densities decay with time. This suggests strongly that it is the energy cascade to both larger and smaller scales that causes the energy to decay. Interestingly, our theory extrapolated back in time explains the so-called “Worst Prediction in the History of Physics,” the enormous discrepancy between the predictions of QFT (quantum field theory) and current observations!
We show that our theoretical predictions for the growth of Planck-scale disturbances near the beginning are consistent with the Planck observations of the scale of the inhomogeneities in the cosmic background radiation. By considering when the temperature was 3000 K, we suggest that they date not to 380 000 years after the Big Bang but to 14 × 10^6 years instead.
Finally, we confess our limitations in considering the enormous quantity of astronomical data. However, we try to leave a road map for astronomers to follow should our theory prove interesting.
In summary, we have proposed a new theory based on a single time-dependent length scale. It turns out, after extensive analysis using curvilinear coordinates, that this length scale grows linearly with the age of the universe. We argue that average energy and average mass are not conserved; in fact, their densities decay as the universe expands. Our theory found a solution to Einstein's equations that evolves logarithmically in time. However, the equations we used were exactly Einstein's equations; it was our solution that gave surprising answers. But was this a clue? Could it be that the underlying “Laws of Nature” should have been expressed using logarithmic time as well? If so, “mass” would not be the mass we thought it was, nor “energy” the same energy. Like the “42” in the Hitchhiker's Guide to the Galaxy, we leave this question for the next generation of physicists to answer. Would it not be ironic if, given Einstein's well-known aversion to quantum mechanics, it were the application of his equations that proved quantum mechanics right all along, and only our “known” laws were in error?
ACKNOWLEDGMENTS
These ideas have evolved over the last few years, mostly in discussions between the authors, neither of whom were trained in astronomy or cosmology. Whether or not we have made a significant contribution, we are extremely grateful to those experts who wrote books and made videos of their lectures available. It has been an exciting journey thanks to their efforts. Many former students and friends made helpful comments, but the encouragement and suggestions of Azur Hodzic (Danish Technical University) and Jose Manuel Rodriguez Caballero (University of Tartu, Estonia) were especially helpful. We would also like to acknowledge the gracious proprietor, Camilla Eriksson, and the friendly and comfortable environment of her cafe “Kaffebubblan” in Mölndal, Sweden, where many of our discussions took place. Finally, we acknowledge the contribution of April Howard, wife of WKG, who listened patiently to many versions of this theory, mostly wrong; but in doing so helped to clarify the holes. Last but surely not least, we acknowledge the work and dedication of the many “et al.'s.” As mostly retired professors, we know well what effort hides behind that simple citation.
WKG and TGJ both retired from Chalmers Technical University, WKG in 2010 as Professor of Turbulence and Director of the Turbulence Research Laboratory (www.turbulence-online.com), TGJ in 2012 as Docent in the same laboratory. Both were instrumental in the Chalmers International Turbulence Masters program which moved to Ecole Centrale de Lille upon their retirement.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
W. K. George: Conceptualization (equal); Formal analysis (equal); Writing – original draft (equal); Writing – review & editing (equal). T. G. Johansson: Conceptualization (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Writing – review & editing (equal).
DATA AVAILABILITY
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
APPENDIX A: MATHEMATICAL DETAILS
1. Notation and Jacobians
We choose the notation of Grinfeld31 for its simplicity. We keep the speed of light explicit and do NOT absorb c into the time t. While this is analytically more complicated, it simplifies interpretation later. The coordinates with unprimed indices are used for the scaled space, whereas the coordinates with primed indices are used for the physical coordinates. The index 0 (or its primed counterpart) is used for the respective time coordinates, the scaled time and t.
We note that .
2. Basis vectors
APPENDIX B: POSTSCRIPT
Between us (WKG and TGJ), we have over 100 years of experience as scientific researchers in fundamental mechanics and applied physics—not one second of it in general relativity, cosmology, or astronomy. So, everything we have learned, we have learned in retirement, mostly from the internet and during the Covid-19 pandemic. We have both moved into new fields before, always by first reading books and journals, attending lectures or special courses, and most importantly by attending professional meetings so we could confer with experts. None of that was possible in the past few years. So, if at times we sound naïve, we probably are. We apologize if we have neglected or misunderstood things that are obvious to those working in the field, or if we have cited only a limited number of recent works. Not much else has been available to us, nor could we have judged its quality if it had been. So, we have had to depend heavily on a few sources, hopefully good ones.
WKG's interest in this subject was piqued by a Canadian radio program, “As It Happens,” heard on Boston Public Radio (WGBH) while driving late at night to attend the American Physical Society/Division of Fluid Dynamics meeting in Boston in November 2015. The radio host was interviewing three scientists about dark matter and dark energy, subjects which, while interesting, had always been nothing more than a curiosity to him. However, never before had it been clear that what was being discussed was really mostly a failure of classical mechanics to describe the observations. During lunch at the meeting the next day with two former students (Clay Byers and Marcus Hultmark, both now professors), while discussing homogeneous turbulence and its time- and length-scale evolution, the idea that the missing mass and energy might be related to time was born. This ultimately resulted in a paper and several presentations.52,53 However, it quickly became clear that much more sophisticated mathematical tools were needed to advance beyond mere speculation.
WKG's return to Sweden in 2018 provided the perfect opportunity to link up with a former colleague from Chalmers, TGJ, who had retired about the same time. So, together we began to meet regularly to teach ourselves about astronomy, curvilinear coordinates, and general relativity—sharing notes, ideas, and many misunderstandings. The youtube.com online courses of L. Susskind (Stanford), A. Maloney (McGill), A. Guth (MIT), and P. Grinfeld (Drexel) were especially helpful, but there were many others as well. Their efforts and generosity in sharing online made our effort possible. The YouTube videos of Rebecca Smethurst (“Dr Becky”) and the Quora comments of Viktor Toth were also very helpful.
This paper is an outgrowth of that effort. History alone will judge whether we have made an important contribution or any contribution at all. At our age as septuagenarians (now both 80), each contribution might well be our last. So, most important to us is not another published paper or a battle won over hostile reviewers (of which we have had many), but whether we have stimulated others to think about this problem (or others) in new ways. Regardless, to paraphrase the inspirational “Dr. Becky” (Smethurst) of Oxford University and youtube.com fame: we really have enjoyed having a “ring-side seat” at the best time in the history of the world for studying (and learning about) astronomy and cosmology.