The simulation hypothesis is a philosophical theory in which the entire universe and our objective reality are simulated constructs. Despite the lack of evidence, this idea is gaining traction in scientific circles as well as in the entertainment industry. Recent developments in the field of information physics, such as the publication of the mass-energy-information equivalence principle, appear to support this possibility. In particular, the 2022 discovery of the second law of information dynamics (infodynamics) provides new and interesting research tools at the intersection between physics and information. In this article, we re-examine the second law of infodynamics and its applicability to digital information, genetic information, atomic physics, mathematical symmetries, and cosmology, and we provide scientific evidence that appears to underpin the simulated universe hypothesis.

In 2022, a new fundamental law of physics was proposed and demonstrated, called the second law of information dynamics, or simply the second law of infodynamics.1 Its name is an analogy to the second law of thermodynamics, which describes the time evolution of the physical entropy of an isolated system and requires the entropy to remain constant or to increase over time. In contrast, the second law of infodynamics states that the information entropy of systems containing information states must remain constant or decrease over time, reaching a certain minimum value at equilibrium. This surprising observation has significant implications for all branches of science and technology. With the ever-increasing importance of information systems such as digital information storage or biological information stored in DNA/RNA genetic sequences, this powerful new physics law offers an additional tool for examining these systems and their time evolution.2

It is important to clearly distinguish between physical entropy and information entropy. The physical entropy of a given system is a measure of all its possible physical microstates compatible with the macrostate, $S_{Phys}$. This is a characteristic of the non-information bearing microstates within the system. Assuming that one is able to create N information states within the same physical system (for example, by writing digital bits in it), the effect is to form N additional information microstates superimposed onto the existing physical microstates. These additional microstates are information bearing states, and the additional entropy associated with them is called the entropy of information, $S_{Info}$. We can now define the total entropy of the system as the sum of the initial physical entropy and the newly created entropy of information, $S_{tot} = S_{Phys} + S_{Info}$, showing that information creation increases the entropy of a given system. It is also important to clarify that an information state is defined as any physical state, process, or event that can contain information in Shannon’s information theory framework.3 When a set of n independent and distinctive information states are created, $X = \{x_1, x_2, \ldots, x_n\}$, having a discrete probability distribution $P = \{p_1, p_2, \ldots, p_n\}$, the average information content per state is given by the Shannon information entropy formula3
$H(X)=\sum_{j=1}^{n} p_j \cdot \log_b \frac{1}{p_j}.$
(1)
The base of the logarithm, b, gives the units of information. When b = 2, the function H(X) returns an information value in bits. The function H(X) is maximum when the events $x_j$ have equal probabilities of occurring, $p_j = 1/n$.
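As a quick numerical illustration (a minimal sketch in Python; the helper name is ours, not from Refs. 1–3), Eq. (1) can be evaluated directly, confirming that H(X) peaks for equiprobable states:

```python
import math

def shannon_entropy(probs, base=2):
    """Shannon information entropy H(X) of a discrete distribution, Eq. (1)."""
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

# n = 4 equiprobable states give the maximum H(X) = log2(4) = 2 bits;
# any skewed distribution over the same states gives less.
uniform = shannon_entropy([0.25, 0.25, 0.25, 0.25])  # 2.0 bits
skewed = shannon_entropy([0.7, 0.1, 0.1, 0.1])       # < 2.0 bits
```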
The reader should not confuse the Shannon information entropy H(X) with the entropy of the information bearing states, $S_{Info}$. Although the two parameters are closely linked, they are rather different quantities. If N information states are created within a given system containing n independent and distinctive information states, $N \geq n$, then the additional possible states, also known as distinct messages in Shannon’s original formalism, are equivalent to the number of information bearing microstates, $\Omega$, compatible with the macrostate4,
$\Omega=n^{N\cdot H(X)}.$
(2)
The general entropy of the information bearing states is now derived as follows:
$S_{Info}=N\cdot k_b\cdot \ln n\cdot \sum_{j=1}^{n} p_j\cdot \log_b \frac{1}{p_j}$
(3)
or $S_{Info} = N \cdot k_b \cdot \ln n \cdot H(X)$, where $k_b = 1.380\,649 \times 10^{-23}$ J/K is the Boltzmann constant. The second law of infodynamics states that1
$\frac{\partial S_{Info}}{\partial t}\leq 0.$
(4)
Since $k_b$ is a constant and n is the number of distinct events (information states), which is also a constant of the system, a decrease in the entropy of the information states can only come from a reduction over time in the total number of states, N, or a reduction over time in the Shannon entropy due to changes in the probabilities $p_j$.
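To make the two drivers concrete, the following sketch (with our own helper names, not from Ref. 1) evaluates Eq. (3) and shows that $S_{Info}$ falls when either N or H(X) falls:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def shannon_entropy(probs, base=2):
    """Eq. (1): H(X) in units set by the logarithm base."""
    return sum(p * math.log(1.0 / p, base) for p in probs if p > 0)

def s_info(N, probs):
    """Eq. (3): entropy of N information states drawn from n = len(probs) states."""
    return N * KB * math.log(len(probs)) * shannon_entropy(probs)

# Per Eq. (4), dS_Info/dt <= 0 can only be realized by a falling N
# (fewer information states) or a falling H(X) (skewed probabilities):
s0 = s_info(1000, [0.5, 0.5])      # reference: 1000 bits at maximal H(X) = 1
s_fewer = s_info(900, [0.5, 0.5])  # fewer states -> lower S_Info
s_skewed = s_info(1000, [0.9, 0.1])  # lower H(X) -> lower S_Info
```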

In what follows, we will examine a few diverse applications of the second law of infodynamics and demonstrate the universal nature of this new physics law, including the fact that it points to the characteristics of a computational system, underpinning to some degree the simulated universe hypothesis. Sections II and III have been covered in greater detail in the 2022 article,1 but they are discussed briefly here to reinforce our point and introduce the context of the second law of infodynamics to the reader.

A digital data storage system contains digital information states having two distinct states, $X=\{0,1\}$, so n = 2, with probabilities $p=\{p_0,p_1\}$. The base in relation (1) is taken as b = 2 for units of digital bits. Assuming the system contains N bits, according to (2), it will have a total number of possible microstates,
$\Omega=2^{N\cdot H(X)}.$
(5)
The entropy of the information bearing states for a digital information system is
$S_{Info}=N\cdot k_b\cdot \ln 2\cdot \sum_{j=1}^{2} p_j\cdot \log_2 \frac{1}{p_j}.$
(6)
The maximum Shannon entropy of this system is H(X) = 1; it can deviate slightly from this upper limit, but the value is stable over time. Hence, in the case of digital information, the only parameter that can drive the time evolution of the entropy of the information bearing states is the total number of states, N. If N increases, then the information entropy increases. However, there is no mechanism by which information could be created spontaneously without external intervention (i.e., energy input). In a previous study, we demonstrated that N can only decrease or remain constant over time,1 in accordance with the second law of infodynamics. This is a very straightforward process and a direct consequence of the second law of thermodynamics because, over time, the digital states are eroded by thermal fluctuations, leading to the self-erasure of data. The higher the temperature of the environment, the more probable the data self-erasure processes are. Hence, in the case of digital information, the second law of infodynamics is rather trivial and fully expected. In our previous study, we demonstrated this using room temperature (300 K) micromagnetics modeling5 of a granular magnetic thin film structure with perpendicular uniaxial anisotropy Ka = 8.75 × 10^6 J/m^3 and Ms = 1710 kA/m. Figure 1 shows the schematic of the word INFORMATION written digitally onto a 400 × 550 × 2 nm^3 magnetic thin film structure, resulting in a bit size of 50 × 50 nm^2, which was allowed to evolve over time at room temperature.
FIG. 1.

(a) Schematic of the word INFORMATION written on a material in binary code using magnetic recording. Red denotes magnetization pointing out of the plane and blue denotes magnetization pointing into the plane. (b)–(d) Time evolution of the digital magnetic recording information states simulated using micromagnetic Monte Carlo. (b) Initial random state. (c) INFORMATION is written (t = 0 s). (d) Iteration 930 (t = 1395 s), showing the degradation of information states. Reproduced with permission from M. M. Vopson and S. Lepadatu, AIP Adv. 12, 075310 (2022). Copyright 2022 AIP Publishing.


The average unit cell size (cubic) was V = 10^−27 m^3, which is intentionally ∼1.9 times smaller than the size required for a thermally stable medium, in order to speed up the computation. This resulted in a relaxation time of 1.5 s, which corresponds to a single iteration of the Monte Carlo algorithm. The simulations show that the entropy of the information bearing states remains constant or decreases over time, and after a sufficiently long time, all information states become self-erased, leading to zero entropy of the information states. Figure 1(b) shows the simulated specimen before data were recorded on it. Figure 1(c) shows the same sample with the data written on it at time zero. Figure 1(d) shows the time evolution of the data after 930 Monte Carlo cycles, showing the degradation of the data. After 1990 cycles, all the data had self-erased, and the information entropy became zero.
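The self-erasure mechanism can be caricatured with a toy model (this is not the micromagnetic code of Ref. 5; the per-step erasure probability is an arbitrary assumption for illustration): each stored bit is independently randomized by thermal fluctuations, so the count of information-bearing states, and with it $S_{Info}$, can only decrease:

```python
import random

def surviving_bits(n0, p_erase, steps, seed=1):
    """Toy thermal-decay model: each surviving bit is erased with
    probability p_erase per step; returns the count N(t) of bits that
    still bear information after each step."""
    random.seed(seed)
    n, history = n0, [n0]
    for _ in range(steps):
        n = sum(1 for _ in range(n) if random.random() > p_erase)
        history.append(n)
    return history

history = surviving_bits(n0=1000, p_erase=0.01, steps=500)
# With H(X) fixed, S_Info is proportional to N, so a non-increasing N(t)
# realizes the second law of infodynamics in this toy model.
```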

A very interesting information storage system is a DNA/RNA sequence encoding biological information. This can be represented as a long string of the letters A, C, G, and T, where the characters represent adenine (A), cytosine (C), guanine (G), and thymine (T) [replaced with uracil (U) in RNA sequences]. Therefore, within Shannon’s information theory framework, a typical genome sequence can be represented as a probabilistic system of four distinctive states, n = 4, $X=\{A,C,G,T\}$, with probabilities $p=\{p_A,p_C,p_G,p_T\}$. Using digital information units and Eq. (1), we determine that the maximum Shannon information entropy is H(X) = 2, and each nucleotide can encode a maximum of 2 bits: A = 00, C = 01, G = 10, T = 11. For a given genomic sequence containing N nucleotides, the total number of possible microstates is
$\Omega=4^{N\cdot H(X)}.$
(7)
The entropy of the information bearing states of a genomic sequence is
$S_{Info}=N\cdot k_b\cdot \ln 4\cdot \sum_{j=1}^{4} p_j\cdot \log_2 \frac{1}{p_j}.$
(8)
The time evolution of the entropy of genetic DNA/RNA information systems is given by the time evolution of the changes in their nucleotide sequence, called genetic mutations. Genetic mutations can take place via three mechanisms: (i) single nucleotide polymorphisms (SNPs), where changes occur such that the number of nucleotides N remains constant; (ii) deletions, where N decreases; and (iii) insertions, where N increases.
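As a sketch (Python, with our own helper name; the sequences are hypothetical toy strings, not real genomes), Eqs. (1) and (8) can be applied to a nucleotide string, showing how an SNP changes the entropy while N stays constant:

```python
import math
from collections import Counter

KB = 1.380649e-23  # Boltzmann constant, J/K

def genome_entropies(seq):
    """H(X) in bits per nucleotide, Eq. (1), and S_Info, Eq. (8), for a
    string over the n = 4 states {A, C, G, T}."""
    N = len(seq)
    H = sum((c / N) * math.log2(N / c) for c in Counter(seq).values())
    return H, N * KB * math.log(4) * H

# An SNP leaves N unchanged; whether H(X) rises or falls depends on
# whether the substitution evens out or skews the base frequencies.
H0, S0 = genome_entropies("ACGTACGTAACC")  # base counts 4/4/2/2
H1, S1 = genome_entropies("ACGTACGTAACG")  # one C -> G SNP, counts 4/3/3/2
```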

Similar to the case of digital information, a reduction of N would most likely result in a reduction of the overall entropy of the information bearing states, so “deletion” mutations automatically fulfill the second law of infodynamics. In our previous study, we examined real data from RNA sequences that underwent only SNP mutations, which kept N constant, so that the reduction in the information entropy came only from the Shannon information entropy function.1,2 Our test RNA sequences were variants of the novel SARS-CoV-2 virus, which emerged in December 2019, resulting in the COVID-19 pandemic. The reference RNA sequence of SARS-CoV-2, collected in Wuhan, China in December 2019 (MN908947),6 has 29 903 nucleotides, so N = 29 903. All analyzed variants also had 29 903 nucleotides and were collected and sequenced at a later time, after undergoing an incremental number of SNP mutations. The Shannon information entropies of the reference sequence and of the variants were computed using relation (1) and previously developed software, GENIES.7,8

Remarkably, the results indicate a unique correlation between the information and the dynamics of the genetic mutations by showing that the Shannon information entropy, H(X), and the overall information entropy of the SARS-CoV-2 variants, $S_{Info}$, computed using Eq. (8), decrease linearly with the number of mutations, and hence over time, since the number of mutations increases over time (see Fig. 2). The corresponding code names of the genome variants extracted from the NCBI database9–14 and analyzed in this work are shown next to each data point in Fig. 2. This result not only confirms the universal validity of the second law of infodynamics but also points to a possible governing mechanism of genetic mutations,2 currently believed to be just random events. The observation of an information entropic force governing genetic mutations is powerful because it challenges the Darwinian view that genetic mutations are completely random events, and it could be used to develop predictive algorithms for genetic mutations before they occur.2 We should acknowledge that, while all analyzed SARS-CoV-2 variants showed a decrease in their information entropy as they underwent genetic mutations, the data points presented in Fig. 2 have been carefully selected to emphasize the linear trend.

FIG. 2.

Shannon information entropy values of variants of the SARS-CoV-2 virus as a function of the number of SNP mutations per variant.


Naturally, we asked whether the same RNA system would display behavior consistent with the second law of infodynamics when the SARS-CoV-2 variants suffered “insertion” mutations, so that the number of nucleotides N is no longer constant but becomes larger than 29 903, increasing the information entropy. Using the NCBI database, we searched all the sequenced SARS-CoV-2 variants from January 1, 2020 to January 1, 2022. We searched only complete sequences with no missing/undetermined nucleotides, and the result was a total of 4.48 × 10^6 sequences. When we restricted the results to sequences with at least 29 903 nucleotides, 48 450 sequences were identified. Of these, only one suffered a mutation in which the number of nucleotides increased, by one, to 29 904. Hence, 98.92% of all mutations took place via “deletion,” reducing the total number of nucleotides. Since only one genome out of 4.48 × 10^6 appeared to increase the number of nucleotides, this is statistically insignificant. Hence, we concluded that, for this test case, genetic mutations appear to take place in a way that reduces their information entropy, mostly via a deletion mechanism or an SNP. This is fully consistent with the second law of infodynamics, as a deletion automatically decreases the total information entropy, and the SNPs have been shown to take place in a way that the information entropy is again reduced, due to a reduction in the Shannon information entropy.

We would also like to mention the famous Spiegelman experiment, which took place in 1972.15 In this experiment, Spiegelman studied the evolution of a virus over 74 generations. The virus was kept isolated in ideal conditions for survival, and with each generation, its genome was sequenced. The initial virus had 4500 bases, and with each generation, the genome decreased consistently in size. After 74 generations, the virus had evolved to only 218 bases, showing an interesting and unexplained reduction of over 95% in its genome size. Like the 2022 study on SARS-CoV-2,1 Spiegelman’s experiment is fully consistent with the second law of infodynamics, which requires the information entropy to remain constant or to decrease over time, reaching a minimum value at equilibrium.

Electronic states in atoms are fully described by four quantum numbers: (a) the principal quantum number, n. This number determines the energy of a particular shell or orbit, and it takes positive integer values n = 1, 2, 3, 4, …. (b) The orbital angular momentum quantum number, ℓ. This quantum number describes the subshell and gives the total angular momentum of an electron due to its orbital motion. It takes integer values restricted to ℓ = 0, 1, 2, …, n − 1. (c) The magnetic quantum number, m. This quantum number determines the component (projection) of the orbital angular momentum along a specific direction, usually the direction of an applied magnetic field. It takes integer values, and for a given value of ℓ, it may have (2ℓ + 1) possible values: m = ℓ, ℓ − 1, ℓ − 2, …, 0, …, −(ℓ − 1), −ℓ. (d) The spin quantum number, s, and the secondary spin quantum number, $m_s$. The spin quantum number s gives the eigenvalues of the spin angular momentum operator, and it reflects the fact that the electron has an intrinsic angular momentum called “spin,” or spin angular momentum. The spin quantum number takes values s = k/2, where k is a non-negative integer, so that s = 0, 1/2, 1, 3/2, 2, …. The secondary spin quantum number $m_s$ determines the direction (i.e., projection) of the spin angular momentum along the direction of an applied magnetic field. The allowed values of $m_s$ are the 2s + 1 values from −s to +s in steps of 1. For example, an electron has s = 1/2, so the allowed values of $m_s$ are −1/2 and +1/2.

The electrons occupy atomic shells according to Pauli’s exclusion principle,16 which states that two or more identical fermions cannot simultaneously occupy the same quantum state within a quantum system. In the case of electrons in atoms, this means that it is impossible for two electrons in a multi-electron atom to have the same values of all four quantum numbers described above. For example, if two electrons reside in the same orbital, then their n, ℓ, and m values are the same, so their $m_s$ must be different, imposing that the electrons must have opposite half-integer spin projections of +1/2 and −1/2.

However, in the case of multi-electron atoms, multiple electron arrangements are possible while fulfilling Pauli’s exclusion principle.

In order to determine the electron population of an atomic orbital corresponding to the ground state of a multi-electron atom, German physicist Friedrich Hund formulated in 1927 a set of rules17 derived from phenomenological observations. These are called Hund’s rules, and when used in conjunction with Pauli’s exclusion principle, they are useful in atomic physics to determine the electron population of atoms corresponding to the ground state.

To explain this, let us assume that an atom has three electrons in its p orbital. Figure 3 shows the three allowed distinct configurations that fulfill Pauli’s exclusion principle, resulting in total spin quantum values of 1/2, 3/2, and 1/2, respectively.

FIG. 3.

Three electrons residing on a p orbital and their allowed arrangements according to Pauli’s exclusion principle.


The correct electronic arrangement is given by Hund’s first rule, the most important of the rules, which is often simply called Hund’s rule. It states that the lowest-energy atomic state is the one that maximizes the total spin quantum number, meaning that the orbitals of the subshell are each occupied singly with electrons of parallel spin before double occupation occurs. Therefore, the term with the lowest energy is also the term with the maximum number of unpaired electrons; for the example shown in Fig. 3, Hund’s rule dictates that the correct configuration is the middle one, resulting in a total spin quantum value of 3/2.

Hund’s rule is derived from empirical observations, and there is no clear understanding of why the electrons populate atomic orbitals in this way. So far, two different physical explanations have been given in Ref. 18. Both explanations revolve around the energetic balance of the electrons and their interactions in the atom. The first mechanism implies that electrons in different orbitals are further apart, so that electron–electron repulsion energy is reduced. The second mechanism claims that the electrons in singly occupied orbitals are less effectively screened from the nucleus, resulting in a contraction of the orbitals, which increases the electron–nucleus attraction energy.19

In this article, we examine the electronic population in atoms within the framework of information theory3 and we demonstrate that Hund’s rule (Hund’s first rule) is a direct consequence of the second law of information dynamics.1 This requires that, at equilibrium in the ground state, electrons occupy the orbitals in such a way that their information entropy is minimum, or equivalently, the bit information content per electron is minimum.

We treat the two possible values of the secondary spin quantum number $m_s$ of the electrons in atoms, $m_s$ = −1/2, +1/2, as two possible events, or as a two-letter message within Shannon’s information theory framework. The secondary spin quantum number $m_s$ is a very important parameter because it is the only quantity that distinguishes two electrons residing in the same orbital. Since their n, ℓ, and m values are the same, their $m_s$ must be different to fulfill Pauli’s exclusion principle.

We will allocate to the two possible projections of $m_s$ the spin up ↑ and spin down ↓ states. In this context, the set of n = 2 independent and distinctive information states is X = {↑, ↓}, with a discrete probability distribution $P = \{p_\uparrow, p_\downarrow\}$.

Hence, for any N electrons, we have $N_\uparrow$ and $N_\downarrow$ electrons, so that $N = N_\uparrow + N_\downarrow$, and relation (1) gives the Shannon information entropy per electron spin, or the bit information content stored per electron spin, while relation (3) gives the total information entropy of the N electrons. Hence, relation (1) becomes
$H(X)=p_\uparrow\cdot \log_2 \frac{1}{p_\uparrow}+p_\downarrow\cdot \log_2 \frac{1}{p_\downarrow},$
(9)
where $p_\uparrow = N_\uparrow/N$ and $p_\downarrow = N_\downarrow/N$, which allows re-writing Eq. (9) as
$H(X)=\frac{N_\uparrow}{N}\cdot \log_2 \frac{N}{N_\uparrow}+\frac{N_\downarrow}{N}\cdot \log_2 \frac{N}{N_\downarrow}.$
(10)
Since the electronic populations are stable, N is constant, and a minimum in the entropy of the information bearing states, $S_{Info}$, corresponds to a minimum in the Shannon information entropy. We now consider the s, p, d, and f orbitals, and we analyze in detail the Shannon information entropy of each possible distinct electronic configuration, for every possible occupancy of these orbitals. The maximum allowed value of the information entropy, H(X) = IE, is 1 bit, and the minimum possible value is 0 bits. We will demonstrate that, for each orbital, the configuration that has the lowest Shannon information entropy, i.e., the lowest bit information content, corresponds to the highest total spin quantum value. Hence, Hund’s rule is, in fact, a consequence of the second law of infodynamics.
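The claim can be checked exhaustively with a short script (a sketch with our own function names; configurations are reduced to spin-up/spin-down counts, with mirror images skipped as in the text). For every occupancy N ≥ 2 of the p, d, and f subshells, the split maximizing the total spin also minimizes IE:

```python
import math

def ie(n_up, n_down):
    """Eq. (10): bit information content per electron spin."""
    N = n_up + n_down
    return sum((k / N) * math.log2(N / k) for k in (n_up, n_down) if k)

def splits(n_orb, N):
    """Distinct (N_up, N_down) arrangements of N electrons in a subshell
    with n_orb orbitals, allowed by Pauli; mirror images are skipped."""
    return [(u, N - u) for u in range(N + 1)
            if u <= n_orb and N - u <= n_orb and u >= N - u]

def hund_agrees(n_orb, N):
    """True if the maximum-total-spin split is also the minimum-IE split."""
    cfgs = splits(n_orb, N)
    max_spin = max(cfgs, key=lambda c: (c[0] - c[1]) / 2)  # Hund's rule
    min_ie = min(cfgs, key=lambda c: ie(*c))               # 2nd law of infodynamics
    return max_spin == min_ie

# p (3 orbitals), d (5), f (7): agreement for every occupancy N >= 2
assert all(hund_agrees(3, N) for N in range(2, 7))
assert all(hund_agrees(5, N) for N in range(2, 11))
assert all(hund_agrees(7, N) for N in range(2, 15))
```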

The s orbital can accommodate a maximum of N = 2 electrons. Figure 4 shows the possible electronic configurations of an s orbital. When N = 1 or N = 2, IE = 1 bit in both cases, while the total spin quantum value is 0.5 and 0, respectively. Since there are no other possible configurations, the case of the s orbital is rather trivial. Figure 5(a) shows a plot of the IE values vs the total spin quantum value, S, for the s orbital.

FIG. 4.

Representation of all possible distinctive electronic populations of an s orbital.

FIG. 5.

Calculated IE values for (a) s orbital, (b) p orbital, (c) d orbital, and (d) f orbital. Data represent each possible distinct electronic configuration vs the total spin quantum value, S. The data show categorically that IE is minimum when S is maximum in each case.


The p orbital can accommodate a maximum of N = 6 electrons. Figure 6 shows the electronic populations on the p orbital for all possible N values. We should mention that only distinct configurations have been represented in the diagram. Any electronic arrangement that results in the same ratio of spin-up and spin-down electrons is not represented, as it would duplicate the results.

FIG. 6.

Representation of all possible distinctive electronic populations of a p orbital. Configurations highlighted in green are the correct arrangements when multiple states are possible for N = 2, 3, and 4, respectively.


Similarly, configurations obtained by inverting all spins, i.e., mirror images, result in the same IE values and are not considered to avoid duplications.

Figure 5(b) shows the graph of the IE values vs the total spin quantum value for all possible distinct occupancy cases of the p orbital. As shown in Fig. 6, each time multiple arrangements are possible, as is the case for N = 2, 3, and 4, respectively, the maximum spin quantum value corresponds to the minimum IE value estimated using Eq. (10). For N = 2 and 3, the minimum IE is 0 in each case, while for N = 4, the minimum IE value is 0.811. To emphasize this, we highlighted, in Fig. 6, the correct configurations that are required by Hund’s rule.

We now examine the case of the d orbital, which can accommodate a maximum of N = 10 electrons. Figure 7 shows the distinct electronic populations allowed on the d orbital for all possible N values.

FIG. 7.

Representation of all possible distinctive electronic populations of a d orbital. Configurations highlighted in green are the correct arrangements when multiple states are possible for N = 2–8.


For N = 1, 9, and 10, only a single distinct arrangement is possible, while for all the other N values, multiple electronic arrangements are allowed within Pauli’s exclusion principle.

Figure 5(c) shows the IE values vs the total spin quantum value for all possible distinct occupancy cases of the d orbital. The data indicate that the maximum spin quantum value corresponds to the minimum IE value estimated using Eq. (10). For each N value, we highlighted, in Fig. 7, the correct configurations required by Hund’s rule. These all correspond exactly to the lowest IE value, reinforcing the validity of the second law of infodynamics. The minimum IE value of 0 is achieved for N = 2, 3, 4, and 5. The minimum IE values are IE = 0.65 for N = 6, IE = 0.863 for N = 7, and IE = 0.954 for N = 8, respectively.

Finally, we examine the f-orbital, which can accommodate a maximum of N = 14 electrons. Therefore, we have 14 possible groups, with N = 1, 13, and 14 having only a single distinct electronic arrangement possible, while for all the other N values, multiple electronic arrangements are allowed by Pauli’s exclusion principle. Figure 8 shows the distinct electronic populations allowed on the f orbital for all possible N values.

FIG. 8.

Representation of all possible distinctive electronic populations of an f orbital. Configurations highlighted in green are the correct arrangements according to Hund’s rule when multiple states are possible for N = 2–12.


Again, we highlighted the correct arrangements as dictated by Hund’s rule, and we calculated the IE values for all possible configurations. The minimum IE value of 0 is achieved for N = 2, 3, 4, 5, 6, and 7. For the remaining groups with multiple electronic configurations, the minimum IE values are IE = 0.544 for N = 8, IE = 0.764 for N = 9, IE = 0.881 for N = 10, IE = 0.946 for N = 11, and IE = 0.98 for N = 12, respectively. The data show categorically that, in all cases, the minimum IE value corresponds to the maximum spin quantum value, S, so the second law of infodynamics appears to be the real driving force behind Hund’s rule.
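The quoted minima follow directly from Eq. (10); a quick check (with a hypothetical helper name) reproduces them from the Hund configurations, which fill orbitals singly before pairing:

```python
import math

def ie(n_up, n_down):
    """Eq. (10) evaluated for a (N_up, N_down) spin split."""
    N = n_up + n_down
    return sum((k / N) * math.log2(N / k) for k in (n_up, n_down) if k)

# Hund filling of a subshell with m orbitals: N_up = min(N, m), rest spin-down.
assert round(ie(3, 1), 3) == 0.811  # p orbital, N = 4
assert round(ie(5, 1), 3) == 0.650  # d orbital, N = 6
assert round(ie(5, 3), 3) == 0.954  # d orbital, N = 8
assert round(ie(7, 1), 3) == 0.544  # f orbital, N = 8
assert round(ie(7, 4), 3) == 0.946  # f orbital, N = 11
assert round(ie(7, 5), 3) == 0.980  # f orbital, N = 12
```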

The universe can only be either finite/closed or infinite/open. The current consensus is that we live in an infinite universe that is in continuous expansion. Regardless of whether the universe is finite or infinite, the laws of thermodynamics are equally applicable. The first law of thermodynamics states that energy can neither be created nor destroyed; it is conserved. The energy in the universe can only be converted from one form to another, but overall, it remains constant. Using Clausius’ sign convention, the differential form of the first law of thermodynamics is
$dQ = dU + dW,$
(11)
where Q is the net heat energy supplied to the universe, W captures the work done by the universe in all possible forms, and U represents the total internal energy of matter and radiation in the universe.
However, the universe does not exchange heat with anything outside itself, so it expands adiabatically (dQ = 0), and the first law becomes
$0=dQ = dU + dW.$
(12)
We now recall the relation that links heat to entropy, dQ = T · dS, where S is the total entropy of the universe and T is the temperature. Since T has a non-zero value, as dictated by the third law of thermodynamics (the average temperature of the observable universe can, in fact, be taken as 2.7 K), we deduce that dS = 0. This implies that the total entropy of the universe must be constant. A constant entropy does not violate the second law of thermodynamics, which allows the entropy to remain constant over time or to increase. However, in an expanding universe, the physical entropy will always increase because more possible microstates are created by the expansion of space itself. Figure 9 shows a diagram of a physical system containing matter whose size is in continuous expansion while its physical content remains unchanged.
FIG. 9.

Diagram of a physical system under continuous expansion in time, resulting in entropy increasing.


Just as in our expanding universe, the space expansion in the schematic physical system shown in Fig. 9 facilitates the emergence of more microstates, and the total entropy increases rapidly. If the universe does not expand, at some point, the entropy will reach its maximum and the universe will achieve equilibrium.

This process allows for the physical entropy of the universe in the past to have been at its maximum value when the universe would have been much smaller than today and at near equilibrium. The evidence for this is the cosmic microwave background (CMB) radiation,20 which is almost isotropic, having a temperature of ∼2.7 K in all directions,21 and a very low-level temperature anisotropy (ΔT/T ∼ 10−5).22,23 The time origin of this low level of temperature anisotropy can be traced back to ∼370 000 years after the big bang24 when the universe was close to chemical and thermal equilibrium and the density inhomogeneities were comparable to the temperature anisotropies (Δρ/ρ ∼ ΔT/T ∼ 10−5).

However, in order to comply with the first law of thermodynamics and the adiabatic expansion, we just showed that the total entropy of the universe must be constant. If this is the case, how can the physical entropy of our expanding universe increase continuously? This is called the “Entropic Paradox” and to solve it, there are only three possibilities:

• (a) The laws of thermodynamics are not valid;

• (b) The universe is not expanding;

• (c) The entropy budget of the universe contains an unaccounted entropy term.

The reader would agree that possibilities (a) and (b) are out of the question, as the validity of the laws of thermodynamics and the expansion of the universe are both supported by undisputed empirical evidence. Therefore, we are left with the search for another entropy term responsible for the initial high entropy of the universe. This entropy term must also balance the total entropy budget of the universe in order to ensure that the overall entropy remains constant over time, despite the evident increase in physical entropy that we can observe in the expanding universe.

In this paper, we propose that the missing entropy term is the entropy associated with the information content of the universe.

Let us write the total entropy of the universe, S, as the sum of the physical entropy and the information entropy,
$S = S_{\mathrm{Phys}} + S_{\mathrm{Info}}.$
(13)
By differentiating (13), we get
$dS = dS_{\mathrm{Phys}} + dS_{\mathrm{Info}}.$
(14)
Imposing the dS = 0 condition and taking the time derivative, we obtain
$\frac{dS_{\mathrm{Phys}}}{dt} + \frac{dS_{\mathrm{Info}}}{dt} = 0.$
(15)
Since $dS_{\mathrm{Phys}}/dt \geq 0$, i.e., the physical entropy always increases over time according to the second law of thermodynamics and to empirical observations, the increase in the physical entropy must be balanced by a decrease in the information entropy over the same time interval, so $-\frac{dS_{\mathrm{Phys}}}{dt} = \frac{dS_{\mathrm{Info}}}{dt}$, which means
$\frac{dS_{\mathrm{Info}}}{dt} \leq 0.$
(16)

The relation (16) is identical to relation (4), and it is exactly the second law of infodynamics, requiring that the entropy of the information states must remain constant or decrease over time. Hence, the second law of infodynamics appears to be universally applicable and is, in fact, a cosmological necessity. It is important to realize that, in order for the overall entropy of the universe to remain constant, the absolute values of the physical entropy and the information entropy do not have to be equal. Only the magnitudes of their changes over time must be equal, in order to ensure a constant overall entropy of the universe.
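As a numerical illustration of Eqs. (13)–(16) (our own sketch, not part of the original derivation), one can impose a constant total entropy and an arbitrary, monotonically growing physical entropy; the information entropy is then forced to be non-increasing:

```python
import math

# Illustrative sketch: impose a constant total entropy S_tot = S_phys + S_info
# and let the physical entropy grow monotonically (as in an expanding universe);
# the information entropy is then forced to satisfy dS_info/dt <= 0, Eq. (16).

S_TOT = 100.0  # arbitrary constant total entropy (illustrative units)

def s_phys(t: float) -> float:
    """A monotonically increasing physical entropy (arbitrary illustrative choice)."""
    return 10.0 * math.log(1.0 + t)

times = [0.0, 1.0, 2.0, 5.0, 10.0]
s_info = [S_TOT - s_phys(t) for t in times]

# Each successive value of S_info is no larger than the previous one.
assert all(later <= earlier for earlier, later in zip(s_info, s_info[1:]))
```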

Symmetry is a mathematical concept in which a certain property, for instance, the geometrical shape of an object, is preserved under certain transformations applied to the object. Such transformations include translations, rotations, reflections, and more complex operations combining these. In each case, the object remains invariant upon transformation. In the context of Euclidean geometry, these transformations are called symmetry operations. A symmetry operation is the movement of an object into an equivalent and indistinguishable orientation, carried out around a symmetry element. A symmetry element is a point, line, or plane about which a symmetry operation is carried out. Classical group theory is the mathematical tool for the study of symmetry, describing the structure of transformations that map objects to themselves exactly.

However, symmetry is not merely a mathematical concept. It transcends disciplines, connecting mathematics, chemistry, biology, and physics, and appears to be a fundamental property of the universe.

This is evidenced by everything around us, from the elegant symmetrical patterns of snowflakes to the fundamental symmetries governing subatomic particles. Symmetry occurs at all scales, playing a pivotal role in the structure and behavior of matter in the universe. Figure 10 shows a few examples of amazing symmetries manifesting in nature.

FIG. 10.

A few examples of the abundance of symmetry in the universe.


This abundance of symmetry in the natural world raises the question: Why does symmetry dominate all systems in the universe instead of asymmetry? After all, the entropic evolution of the universe tends to a higher entropy state, yet everything in nature appears to prefer high symmetry and a high degree of order.

Here, we explore the mathematical underpinnings of symmetry and its crucial significance in the context of the second law of infodynamics. We demonstrate a unique observation that a high symmetry corresponds to a low information entropy state, which is exactly what the second law of infodynamics requires. Hence, this remarkable observation appears to explain why symmetry dominates in the universe: it is due to the second law of information dynamics.

Before we proceed to our proof, it is useful to establish a way of measuring the symmetry of an object quantitatively. In other words, how much symmetry does a shape have? One accepted method of measuring the symmetry of an object is by counting the number of symmetry operations that one can carry out on the object. The more symmetry operations a shape has, the more symmetric it is.

Since the symmetry operations are carried out around the symmetry elements, we propose to quantify the symmetry of a shape by counting its number of symmetry elements instead of counting the number of symmetry operations. For example, a perfect square has eight symmetry operations (four rotations and four reflections) and five symmetry elements (one axis of rotation and four axes of reflection).
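To make the two counting conventions concrete, here is a short sketch (our addition; the function name is ours) for regular polygons, whose symmetries form the dihedral group D_n: a regular n-gon admits 2n symmetry operations (n rotations, including the identity, plus n reflections) but only n + 1 symmetry elements (n reflection axes plus one rotation axis):

```python
def symmetry_counts(n: int) -> tuple[int, int]:
    """Return (symmetry operations, symmetry elements) of a regular n-gon."""
    operations = 2 * n   # n rotations (including the identity) + n reflections
    elements = n + 1     # n reflection axes + 1 rotation axis
    return operations, elements

# The square (n = 4) reproduces the numbers quoted in the text:
print(symmetry_counts(4))  # (8, 5)
```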

Our main objective is to describe the relationship between the symmetry of an object, determined by the number of its symmetry elements (SE), and its information entropy (IE).

In order to do this, let us consider a range of simple Euclidean 2D geometric shapes. We start with a scalene triangle, defined by three sides of lengths a, b, and c and three corresponding angles α, β, and γ, respectively (see Fig. 11). These parameters are a unique representation of the shape, as no differently shaped triangle can be formed from this set of parameters.

FIG. 11.

Scalene triangle with no symmetry elements.


Within Shannon’s information theory framework, we define the set of six distinct characters, n = N = 6, $X = \{a, b, c, \alpha, \beta, \gamma\}$, and a probability distribution $P = \{p_a, p_b, p_c, p_\alpha, p_\beta, p_\gamma\}$ on X. The probabilities of the set are $P = \{\frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}\}$. The average information per character, or the number of bits of information per character for this set, is given by Eq. (1), $H(X) = IE = -\sum_{j=1}^{6} p_j \cdot \log_2 p_j = \log_2 6 = 2.585$.

This scalene triangle has no symmetry, and accordingly, it has zero symmetry elements, so SE = 0.

We now examine an isosceles triangle, as shown in Fig. 12. This shape has one symmetry element (a reflection axis), so SE = 1, and it is fully defined by the set of four distinct characters, n = 4, $X = \{a, b, \alpha, \beta\}$, and a probability distribution $P = \{p_a, p_b, p_\alpha, p_\beta\} = \{\frac{2}{6}, \frac{1}{6}, \frac{2}{6}, \frac{1}{6}\}$. In this case, the IE is
$IE = -\sum_{j=1}^{4} p_j \cdot \log_2 p_j = -\left(\frac{2}{6}\log_2\frac{2}{6} + \frac{1}{6}\log_2\frac{1}{6} + \frac{2}{6}\log_2\frac{2}{6} + \frac{1}{6}\log_2\frac{1}{6}\right) = 1.918.$
Finally, we examine the triangle shape with the highest symmetry, the equilateral triangle (see Fig. 13). The equilateral triangle has four symmetry elements, SE = 4 (three reflection axes and one rotation axis), and it is fully defined by the set of two distinct characters, n = 2, $X = \{a, \alpha\}$, and a probability distribution $P = \{p_a, p_\alpha\} = \{\frac{3}{6}, \frac{3}{6}\}$. In this case, the IE is
$IE = -\sum_{j=1}^{2} p_j \cdot \log_2 p_j = -\left(\frac{3}{6}\log_2\frac{3}{6} + \frac{3}{6}\log_2\frac{3}{6}\right) = 1.$
Examining the relationship between the information entropy (IE) and the symmetry elements (SE) of these triangles, we observe that the symmetry scales inversely with the information entropy.
FIG. 12.

Symmetry elements of an isosceles triangle.

FIG. 13.

Symmetry elements of an equilateral triangle.


High symmetry = low information entropy. This behavior is clearly illustrated in Fig. 14, showing the IE vs SE for all possible triangle shapes.

FIG. 14.

Information entropy vs number of symmetry elements of triangular shapes.

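The three triangle entropies above can be checked with a few lines of Python (a sketch of Eq. (1); the helper name is ours):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon information entropy, Eq. (1): H(X) = -sum_j p_j * log2(p_j)."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Character probability distributions of the three triangles discussed above:
scalene     = [1/6] * 6             # X = {a, b, c, alpha, beta, gamma}, SE = 0
isosceles   = [2/6, 1/6, 2/6, 1/6]  # X = {a, b, alpha, beta},          SE = 1
equilateral = [3/6, 3/6]            # X = {a, alpha},                   SE = 4

print([round(shannon_entropy(p), 3) for p in (scalene, isosceles, equilateral)])
# [2.585, 1.918, 1.0] -- the information entropy falls as the symmetry rises
```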

Although this was observed for a single class of 2D geometric shapes, we postulate that this is a universal behavior of symmetries. To convince the reader, let us examine the case of quadrilaterals. There are seven possible quadrilateral geometries in terms of their possible symmetries. Table I gives all seven geometries and their SE values. For each geometry, we computed the IE value. The data are also summarized in Fig. 15.

TABLE I.

Summarized results of the analysis performed on quadrilaterals.

Shape                              SE   Set X of distinct characters   Probabilities P                             $IE = -\sum_{j=1}^{n} p_j \cdot \log_2 p_j$
Irregular quadrilateral            0    {a, b, c, d, α, β, γ, θ}       {1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8, 1/8}    3
Isosceles trapezoid                1    {a, b, c, α, β}                {2/8, 1/8, 1/8, 2/8, 2/8}                   2.25
Kite                               1    {a, b, α, β, γ}                {2/8, 2/8, 2/8, 1/8, 1/8}                   2.25
Trapezoid with three equal sides   1    {a, b, α, β}                   {3/8, 1/8, 2/8, 2/8}                        1.905
Rectangle                          3    {a, b, α}                      {2/8, 2/8, 4/8}                             1.5
Rhombus                            3    {a, α, β}                      {4/8, 2/8, 2/8}                             1.5
Square                             5    {a, α}                         {4/8, 4/8}                                  1
FIG. 15.

Information entropy vs number of SE of quadrilaterals.


Again, the shape with the highest symmetry has the lowest information content.
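The IE column of Table I can be reproduced with the same Shannon entropy formula (our sketch; note that the fourth value rounds to 1.906 at three decimals, quoted as 1.905 in the text):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon information entropy: H = -sum_j p_j * log2(p_j)."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Character probability distributions of the seven quadrilateral geometries
# in Table I (fractions over the 8 characters: four sides and four angles):
quadrilaterals = [
    [1/8] * 8,                   # no symmetry elements
    [2/8, 1/8, 1/8, 2/8, 2/8],
    [2/8, 2/8, 2/8, 1/8, 1/8],
    [3/8, 1/8, 2/8, 2/8],
    [2/8, 2/8, 4/8],
    [4/8, 2/8, 2/8],
    [4/8, 4/8],                  # highest symmetry (the square)
]

print([round(shannon_entropy(p), 3) for p in quadrilaterals])
# [3.0, 2.25, 2.25, 1.906, 1.5, 1.5, 1.0]
```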

The same analysis can be applied to any geometric figure, including 3D geometries, producing the same results. In each case, the symmetry scales inversely with the information content.

This remarkable result demonstrates that the symmetries manifesting everywhere in nature, and in the entire universe, are a consequence of the second law of information dynamics, which requires the minimization of the information entropy in any system or process in the universe.

In this study, we revisited the second law of infodynamics, first introduced in 2022.1 The second law of infodynamics states that the information entropy of systems containing information states must remain constant or decrease over time, reaching a certain minimum value at equilibrium. This is very interesting because it is in total opposition to the second law of thermodynamics, which describes the time evolution of the physical entropy that must increase up to a maximum value at equilibrium.

We showed that the second law of infodynamics is universally applicable to any system containing information states, including biological systems and digital data. Remarkably, this indicates that genetic mutations are not just random events, as per the current Darwinian consensus, but instead occur in accordance with the second law of infodynamics, minimizing the information entropy of the genome. This discovery has massive implications for genetic research, evolutionary biology, genetic therapies, pharmacology, virology, and pandemic monitoring, to name a few.

Here, we also expanded the applicability of the second law of infodynamics to explain phenomenological observations in atomic physics. In particular, we demonstrated that the second law of infodynamics explains the rule followed by electrons when populating the atomic orbitals of multi-electron atoms, known as Hund’s rule. Electrons arrange themselves in orbitals, at equilibrium in the ground state, in such a way that their information entropy is always minimal.

Most interesting is the fact that the second law of infodynamics appears to be a cosmological necessity. Here, we re-derived this new physics law using thermodynamic considerations applied to an adiabatically expanding universe.

Finally, one of the great mysteries of nature, namely why symmetry dominates in the universe, has also been explained using the second law of infodynamics. Using simple geometric shapes, we demonstrated that high symmetry always corresponds to the lowest information entropy state, or lowest information content, explaining why everything in nature tends to symmetry instead of asymmetry.

The key question now is: “What can we learn from the second law of infodynamics, and what is its meaning?”

The second law of infodynamics essentially minimizes the information content associated with any event or process in the universe. The minimization of the information really means an optimization of the information content, or the most effective data compression, as described in Shannon’s information theory. This behavior is fully reminiscent of the rules deployed in programming languages and computer coding. Since the second law of infodynamics appears to manifest universally and is, in fact, a cosmological necessity, we could conclude that the entire universe appears to be a simulated construct. A super complex universe like ours, if it were a simulation, would require a built-in data optimization and compression mechanism in order to reduce the computational power and the data storage requirements. This is exactly what we are observing via empirical evidence all around us, including digital data, biological systems, atomistic systems, symmetries, and the entire universe.
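The link between low information entropy and effective data compression can be illustrated with a toy example (our addition, using Python’s standard zlib module): a highly ordered, “symmetric” byte string compresses to a tiny fraction of its size, whereas a near-maximal-entropy random string barely compresses at all:

```python
import random
import zlib

random.seed(0)  # reproducible illustrative data
ordered = b"ab" * 4096                                     # 2 symbols, highly ordered
noisy = bytes(random.randrange(256) for _ in range(8192))  # near-maximal entropy

def compression_ratio(data: bytes) -> float:
    """Compressed size divided by original size (smaller = more compressible)."""
    return len(zlib.compress(data)) / len(data)

print(f"ordered: {compression_ratio(ordered):.3f}")  # far below 0.05
print(f"noisy:   {compression_ratio(noisy):.3f}")    # close to (or above) 1.0
```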

Another important aspect of the second law of infodynamics is the fact that it appears to validate the mass-energy-information equivalence principle formulated in 2019.4 According to this principle, information is not just a mathematical construct; it is physical, as postulated by Landauer25 and demonstrated experimentally in recent years,26–29 and, moreover, it has a small mass and can be regarded as the fifth form of matter.4 This principle has not been confirmed experimentally yet, and it has attracted a fair share of skepticism. Whether information is physical or not is irrelevant to this study because the second law of infodynamics is applicable regardless of whether information has mass or not. However, if information is physical (equivalent to mass and energy), then the second law of thermodynamics requires systems to evolve in such a way that the energy is minimized at equilibrium. Hence, a reduction in the information content would translate into a reduction of mass-energy according to the mass-energy-information equivalence principle. Therefore, the second law of infodynamics is not just a cosmological necessity; since it is required to fulfill the second law of thermodynamics, we can conclude that this new physics law supports the view that information is indeed physical. The scientific evidence supporting the simulated universe theory is also discussed in greater detail in a recently published book.30

M.M.V. acknowledges financial support received for this research from the University of Portsmouth and the Information Physics Institute. The author is also deeply grateful to all his supporters and would like to acknowledge the generous contributions received to his research in the field of information physics, from the following donors and crowd funding backers, listed in alphabetical order: Alban Frachisse, Alexandra Lifshin, Allyssa Sampson, Ana Leao-Mouquet, Andre Brannvoll, Andrews83, Angela Pacelli, Aric R. Bandy, Ariel Schwartz, Arne Michael Nielsen, Arvin Nealy, Ash Anderson, Barry Anderson, Benjamin Jakubowicz, Beth Steiner, Bruce McAllister, Caleb M. Fletcher, Chris Ballard, Cincero Rischer, Colin Williams, Colyer Dupont, Cruciferous1, Daniel Dawdy, Darya Trapeznikova, David Catuhe, Dirk Peeters, Dominik Cech, Kenneth Power, Eric Rippingale, Ethel Casey, Ezgame Workplace, Frederick H. Sullenberger III, Fuyi Zhou, George Fletcher, Gianluca Carminati, Gordo Tek, Graeme Hewson, Graeme Kirk, Graham Wilf Taylor, Heath McStay, Heyang Han, Ian Wickramasekera, Ichiro Tai, Inspired Designs LLC, Ivaylo Aleksiev, Jamie C. Liscombe, Jan Stehlak, Jason Huddleston, Jason Olmsted, Jennifer Newsom, Jerome Taurines, John Jones, John Vivenzio, John Wyrzykowski, Josh Hansen, Joshua Deaton, Josiah Kuha, Justin Alderman, Kamil Koper, Keith Baton, Keith Track, Kristopher Bagocius, Land Kingdom, Lawrence Zehnder, Lee Fletcher, Lev X, Linchuan Wang, Liviu Zurita, Loraine Haley, Manfred Weltenberg, Mark Matt Harvey-Nawaz, Matthew Champion, Mengjie Ji, Michael Barnstijn, Michael Legary, Michael Stattmann, Michelle A. Neeshan, Michiel van der Bruggen, Molly R. 
McLaren, Mubarrat Mursalin, Nick Cherbanich, Niki Robinson, Norberto Guerra Pallares, Olivier Climen, Pedro Decock, Piotr Martyka, Ray Rozeman, Raymond O’Neill, Rebecca Marie Fraijo, Robert Montani, Shenghan Chen, Sova Novak, Steve Owen Troxel, Sylvain Laporte, Tamás Takács, Tilo Bohnert, Tomasz Sikora, Tony Koscinski, Turker Turken, Walter Gabrielsen III, Will Strinz, William Beecham, William Corbeil, Xinyi Wang, Yanzhao Wu, Yves Permentier, Zahra Murad, and Ziyan Hu.

The author has no conflicts to disclose.

The numerical data associated with this work are available within this manuscript. The RNA sequences used in this study are freely available from Refs. 6 and 9–14.

1. M. M. Vopson and S. Lepadatu, “The second law of information dynamics,” AIP Adv. 12, 075310 (2022).
2. M. M. Vopson, “A possible information entropic law of genetic mutations,” Appl. Sci. 12, 6912 (2022).
3. C. E. Shannon, “A mathematical theory of communication,” Bell Syst. Tech. J. 27, 379–423 (1948).
4. M. M. Vopson, “The mass-energy-information equivalence principle,” AIP Adv. 9(9), 095206 (2019).
5. S. Lepadatu, “Micromagnetic Monte Carlo method with variable magnetization length based on the Landau–Lifshitz–Bloch equation for computation of large-scale thermodynamic equilibrium states,” J. Appl. Phys. 130, 163902 (2021).
6. See https://www.ncbi.nlm.nih.gov/nuccore/MN908947 for the RNA sequence of MN908947.
8. M. M. Vopson and S. C. Robson, “A new method to study genome mutations using the information entropy,” Physica A 584, 126383 (2021).
9. See https://www.ncbi.nlm.nih.gov/nuccore/MT956915 for the RNA sequence of MT956915.
10. See https://www.ncbi.nlm.nih.gov/nuccore/OM098426 for the RNA sequence of OM098426.
11. See https://www.ncbi.nlm.nih.gov/nuccore/MW679505 for the RNA sequence of MW679505.
12. See https://www.ncbi.nlm.nih.gov/nuccore/OK546282 for the RNA sequence of OK546282.
13. See https://www.ncbi.nlm.nih.gov/nuccore/OK104651 for the RNA sequence of OK104651.
14. See https://www.ncbi.nlm.nih.gov/nuccore/OL351371 for the RNA sequence of OL351371.
15. D. L. Kacian, D. R. Mills, F. R. Kramer, and S. Spiegelman, “A replicating RNA molecule suitable for a detailed analysis of extracellular evolution and replication,” Proc. Natl. Acad. Sci. U. S. A. 69(10), 3038–3042 (1972).
16. W. Pauli, “Über den Zusammenhang des Abschlusses der Elektronengruppen im Atom mit der Komplexstruktur der Spektren,” Z. Phys. 31(1), 765–783 (1925).
17. G. L. Miessler and D. A. Tarr, Inorganic Chemistry, 2nd ed. (Prentice-Hall, 1999), pp. 358–360.
18. I. N. Levine, Quantum Chemistry, 4th ed. (Prentice-Hall, 1991), pp. 303–330.
19. R. J. Boyd, “A quantum mechanical explanation for Hund’s multiplicity rule,” Nature 310, 480–481 (1984).
20. A. A. Penzias and R. W. Wilson, “A measurement of excess antenna temperature at 4080 Mc/s,” Astrophys. J. 142(1), 419–421 (1965).
21. D. J. Fixsen, “The temperature of the cosmic microwave background,” Astrophys. J. 707(2), 916–920 (2009).
22. E. R. Harrison, “Fluctuations at the threshold of classical cosmology,” Phys. Rev. D 1(10), 2726–2730 (1970).
23. P. J. E. Peebles and J. T. Yu, “Primeval adiabatic perturbation in an expanding universe,” Astrophys. J. 162, 815–836 (1970).
24. D. N. Spergel, L. Verde, H. V. Peiris, E. Komatsu, M. R. Nolta, C. L. Bennett, M. Halpern, G. Hinshaw et al., “First-year Wilkinson microwave anisotropy probe (WMAP) observations: Determination of cosmological parameters,” Astrophys. J., Suppl. Ser. 148(1), 175–194 (2003).
25. R. Landauer, “Irreversibility and heat generation in the computing process,” IBM J. Res. Dev. 5(3), 183–191 (1961).
26. J. Hong, B. Lambson, S. Dhuey, and J. Bokor, “Experimental test of Landauer’s principle in single-bit operations on nanomagnetic memory bits,” Sci. Adv. 2(3), e1501492 (2016).
27. R. Gaudenzi, E. Burzurí, S. Maegawa, H. S. J. van der Zant, and F. Luis, “Quantum Landauer erasure with a molecular nanomagnet,” Nat. Phys. 14, 565–568 (2018).
28. A. Bérut, A. Arakelyan, A. Petrosyan, S. Ciliberto, R. Dillenschneider, and E. Lutz, “Experimental verification of Landauer’s principle linking information and thermodynamics,” Nature 483, 187–189 (2012).
29. Y. Jun, M. Gavrilov, and J. Bechhoefer, “High-precision test of Landauer’s principle in a feedback trap,” Phys. Rev. Lett. 113(19), 190601 (2014).
30. M. M. Vopson, Reality Reloaded: The Scientific Case for a Simulated Universe (IPI Publishing, Hampshire, UK, 2023), pp. 1–140, https://doi.org/10.59973/rrtscfasu.