Probabilistic and Maximum Entropy Modeling of Chemical Reaction Systems: Characteristics and Comparisons to Mass Action Kinetic Models

We demonstrate and characterize a first-principles approach to modeling the mass action dynamics of metabolism. Starting from a basic definition of entropy expressed as a multinomial probability density using Boltzmann probabilities with standard chemical potentials, we derive and compare the free energy dissipation and the entropy production rates. We express the relation between the entropy production and the chemical master equation for modeling metabolism, which unifies chemical kinetics and chemical thermodynamics. Subsequent implementation of a maximum free energy dissipation model for systems of coupled reactions is accomplished by using an approximation to the Marcelin equation for mass action kinetics that maximizes the entropy production. Because prediction uncertainty with respect to parameter variability is frequently a concern with mass action models utilizing rate constants, we compare and contrast the maximum entropy production model, which has its own set of rate parameters, to a population of standard mass action models in which the rate constants are randomly chosen. We show that a maximum entropy production model is characterized by a high probability of a high free energy dissipation rate, and likewise a high entropy production rate, relative to other models. We then characterize the variability of the maximum entropy production predictions with respect to uncertainties in parameters (standard free energies of formation) and with respect to ionic strengths typically found in a cell.


I. INTRODUCTION
The maximization of entropy has been alluded to historically or directly stated as the goal or an emergent property of biological systems by both physicists and ecologists [1-11]. Yet, the concept has been underdeveloped regarding application to systems such as metabolism, and because of the abstract nature of the concept, it has gained insufficient recognition as an operational principle among microbiologists and cell biologists. There are several issues regarding the application of the principle of maximum entropy to biological systems.
First and foremost, it is not always clear what is meant by 'entropy'. The term has been associated with several mathematical descriptions, not all of which are equivalent, for example, even as they are used in physics and chemistry. The works of Ge and Qian 12 , Presse et al. 13 , Wachtel et al. 14 , Seifert 15 and Cannon 16 provide foundations to build on, in this regard. However, even given a rigorous thermodynamic definition of entropy, it is not clear how the maximum entropy concept would be applied to modeling a specific system such as metabolism, metabolic regulation or protein expression dynamics and its regulation.
Second, the concept can seem simplistic to those not familiar with the depth of the theory and its implications, leading to antagonism towards the idea that cells are thermodynamic machines. Although the idea is simple at a high level of abstraction, its application is not simple and does not imply that all we need is to change our perspective and we will understand biology. However, applying the principle can uncover and aid in the understanding of emergence and biological complexity.
Third, when applied to dynamical systems, what is often meant is that the entropy (or free energy) produced is maximized (or minimized, respectively). But the concept of maximum entropy in statistical thermodynamics goes beyond simple predictive statements about a system moving to the most probable state, in agreement with the second law of classical thermodynamics, to include statistical inference. In fact, when the term maximum entropy or maximum path entropy is used in modeling dynamical systems, the term maximum caliber is often used - the application of maximum entropy concepts to dynamical systems with the use of constraints. When applied using constraints, the maximum caliber principle attempts to maximize a statistical entropy function or empirical distribution of unobserved variables while constraining the observed variables to their experimentally measured values 13,17 . The constraints might be those due to which chemical transformations (reactions) are possible, or a set of concentrations that are experimentally observed.
Applying sophisticated thermodynamics principles to a biological system is necessary but not sufficient to understand how cells function. There are many emergent processes and structures that occur in a cell that we do not understand sufficiently to infer with thermodynamics. Moreover, the functional capabilities of a cell - its set of metabolic reactions, gene complement, regulation, etc. - are specifically tuned to its environmental niche, and, in principle, represent a local thermodynamic optimum, arrived at by natural selection in the landscape of possible genetic capabilities. In this case, the local optimum depends on the past history of the species, which inhibits our ability to infer a genome sequence ab initio from knowledge of the environment by using thermodynamics or maximum caliber as an inference tool. It is the inability to address all assumptions about appropriate constraints that led Dewar to suggest that, when a maximum entropy model fails, it is usually not the maximum entropy concept that has failed but rather the assumptions used in the model 18 . Similar conclusions have been drawn by Agozzino and Dill 19 .
Ultimately, the usefulness of maximum entropy or maximum caliber models for biology will be judged by the insight that they provide by linking physics with its natural laws to biology, and in the predictive power that the concept brings to models or systems. In this regard, an understanding is needed of how these models are related to non-optimal models, such as one would find in a population. For example, kinetic models using the law of mass action are highly variable depending on the rate parameters that are used. How sensitive are maximum entropy or caliber models to uncertainty in the thermodynamic parameters, such as chemical potentials and equilibrium constants from which rate constants are inferred? What are the ranges of feasible concentrations and rates?
In this report, we derive the formula for entropy production rate for systems of coupled reactions from first principles, and in doing so link the ordinary differential equations describing mass action kinetics with the master equation formulation. Throughout the report, we assume that the ground states of the reactants and products are equilibrated with their local environments before and after reactions [20-24] such that the use of reference chemical potentials and Boltzmann probabilities is justified. We distinguish entropy production rates from free energy dissipation rates based on whether one is optimizing the reaction free energies or probabilities. We then demonstrate how to find maximum entropy models using an alternative form of the law of mass action. Having found the maximum entropy solution, we show how to efficiently generate a population of feasible mass action models, and compare the population of mass action models to the maximum entropy model. We show that thermodynamically optimal metabolisms are characterized by having the highest probability of high rates of free energy dissipation when compared to similar metabolisms. Importantly, systems having a stationary entropy production rate have a maximal change in probability with time when compared to similar systems that differ only by their rate constants. We discuss three sources of variability in models of metabolism: (1) variability of rates, metabolite levels, and free energy dissipation rates that would be found in a general population compared to the maximum entropy model; (2) variability in predictions made by maximum entropy models due to uncertainty in estimated standard free energies of reaction (equivalently, equilibrium constants); and finally, (3) variability in predictions made by maximum entropy models due to the range of ionic strengths that may be found in the cytoplasm of the cell.

A. Entropy Definition
The entropy as described by Boltzmann is,

S = k_B log W,    (1)

where k_B is the Boltzmann constant and W stands for Wahrscheinlichkeit, or probability 25,26 . In a famous example used as a simple demonstration, Boltzmann employed a system in which each independent object had the same probability distribution (independent and identically distributed) and as such the number of permutations P of the objects could stand in place of the probability W. In this case, maximizing the probability is the same as maximizing the number of permutations. However, as Boltzmann also pointed out, when not assuming equal a priori distributions, it is the probability that is maximized as a system evolves, not the permutations. Boltzmann demonstrated this case for the velocity distributions of an ideal gas with different mean velocities along the x, y and z axes 27 .
Likewise, for a system of M different chemical species, the probability Pr ≡ W of observing a state in which each species i is observed to have n_i counts is given by the multinomial probability density,

Pr = (N_tot! / ∏_i n_i!) ∏_i θ_i^{n_i},    (2)

where N_tot = ∑_i n_i and the probability θ_i of each species i is given by the Boltzmann probability,

θ_i = exp(−β μ°_i) / ∑_j exp(−β μ°_j),    (3)

where μ°_i is the standard chemical potential and T is the temperature. As mentioned above, k_B is the Boltzmann proportionality constant such that k_B T is the ambient energy due to the temperature T. For brevity, we will use β = (k_B T)^{−1} for the inverse ambient energy. In the case that each species is independent and identically distributed (i.i.d.), the probabilities of all species are equal, i.e. θ_i = θ_j for all i and j. It follows that the resulting probability density function is again proportional to the number of permutations, as mentioned above. Using a constant of proportionality, c, Eqn. (2) can be simplified to the following,

W = c N_tot! / ∏_i n_i!.    (4)

In this case, W is proportional to the number of ways of distributing N_tot molecules among M different chemical species.
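As a minimal sketch (not from the paper's own code), the Boltzmann probabilities of Eqn (3) and the multinomial log-probability of Eqn (2) can be evaluated directly; the β μ° values and species counts below are illustrative assumptions.

```python
import math

# Sketch: Boltzmann probabilities (Eqn 3) and the multinomial
# log-probability (Eqn 2) for a small hypothetical system.

def boltzmann_probs(beta_mu):
    """theta_i = exp(-beta mu_i) / sum_j exp(-beta mu_j)."""
    weights = [math.exp(-bm) for bm in beta_mu]
    q = sum(weights)  # the normalization factor (molecular partition function)
    return [w / q for w in weights]

def log_multinomial_prob(counts, theta):
    """log Pr = log N_tot! - sum_i log n_i! + sum_i n_i log theta_i."""
    n_tot = sum(counts)
    log_pr = math.lgamma(n_tot + 1)  # lgamma(n + 1) = log n!
    for n, th in zip(counts, theta):
        log_pr += -math.lgamma(n + 1) + n * math.log(th)
    return log_pr

theta = boltzmann_probs([0.0, 1.0, 2.0])               # assumed potentials
S_over_kB = log_multinomial_prob([50, 30, 20], theta)  # S/k_B = log Pr
```

Species with lower standard chemical potential receive higher probability, and log Pr is necessarily non-positive since Pr ≤ 1.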
To distinguish between the more general meaning of W ≡ Pr and the demonstration case of Eqn. (4), we will use the symbol S_U to specify the case,

S_U = k_B log (c N_tot! / ∏_i n_i!).

It is S_U that is defined as entropy in some textbook descriptions. We are interested in the general multinomial case where W ≡ Pr such that,

S = k_B log Pr.    (7)

It is important to note that thermodynamic entropy obeys an extremum principle such that for any spontaneous process, the entropy increases to the maximum extent possible given the conditions. S_U only obeys an extremum principle for a system that follows a uniform probability distribution, while S = k_B log Pr (e.g., Eqn. 7) obeys an extremum principle generally. Thus, it is S defined by Boltzmann and restated by Planck 26 that is related to the classical concept of entropy described by Clausius. However, to be clear, S in Eqn 7 is not the classical thermodynamic entropy; it is a single-configuration representation of the entropy of classical thermodynamics, not an average value. The average of S, as discussed by Gibbs, is the mathematical function that represents the classical thermodynamic entropy in statistical thermodynamics 28 .
A common point of confusion is to conflate the definition of entropy from classical thermodynamics, where it is defined by experimental observables, with definitions from statistical thermodynamics. Moreover, in classical thermodynamics it is not the case that entropy requires the system to be closed or at equilibrium; it is that historically entropy had not been defined experimentally for open, non-equilibrium systems. There is a large difference in meaning between a definition whose scope is limited by what can be measured experimentally and a definition that imposes a requirement. However, even this experimental definition has been extended such that entropy in classical thermodynamics can be defined in open, non-equilibrium steady state systems [22-24].
Like S in Eqn 7, a single-configuration free energy, G, of a given state {n_1, ..., n_M} can be defined. Specifically, by using Stirling's approximation that log(n!) ≈ n log n − n, G can be derived from Eqn. (7). In doing so, it is convenient to represent the normalization factor in Eqn (3) as a molecular partition function,

q_B = ∑_i exp(−β μ°_i),    (8)

such that θ_i = exp(−β μ°_i)/q_B. Applying Stirling's approximation to the logarithm of Eqn (2) then gives,

S/k_B = log Pr ≈ −β ∑_i n_i μ°_i − ∑_i n_i log(n_i/N_T) − N_T log q_B    (9)
= −β ∑_i n_i (μ°_i + β^{−1} log n_i) − N_T log(q_B/N_T)    (10)
= −β G − N_T log(q_B/N_T),    (11)

where G = ∑_i n_i (μ°_i + β^{−1} log n_i) = ∑_i n_i μ_i is the single-configuration free energy. Ignoring the contribution from the total number of particles,

S/k_B ≈ −β G.    (12)

Rigorously, G is a free energy density and not the Gibbs free energy, which is the ensemble average value, ⟨G⟩.
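The quality of the Stirling form can be checked numerically; the sketch below (with assumed potentials and counts) compares the exact S/k_B = log Pr, evaluated with the log-gamma function, against the Stirling approximation written in terms of β G and the q_B normalization.

```python
import math

# Sketch: exact single-configuration entropy S/k_B = log Pr via log-gamma
# versus the Stirling form S/k_B ≈ -beta*G - N log(q_B/N).
# All numerical values below are illustrative assumptions.

beta_mu = [-1.0, 0.0, 0.5]          # dimensionless standard potentials
counts = [400, 250, 350]
N = sum(counts)

q_B = sum(math.exp(-bm) for bm in beta_mu)       # normalization of Eqn (3)
theta = [math.exp(-bm) / q_B for bm in beta_mu]

# Exact log Pr via log-gamma (log n! = lgamma(n + 1)):
exact = math.lgamma(N + 1) + sum(-math.lgamma(n + 1) + n * math.log(t)
                                 for n, t in zip(counts, theta))

# Stirling form, with beta*G = sum_i n_i (beta mu_i + log n_i):
beta_G = sum(n * (bm + math.log(n)) for n, bm in zip(counts, beta_mu))
stirling = -beta_G - N * math.log(q_B / N)

rel_err = abs(exact - stirling) / abs(exact)     # small for counts this large
```

For counts in the hundreds the relative error is a few percent; for changes between adjacent states (one reaction firing) the Stirling error largely cancels, as discussed later in the text.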
Eqn. (12) is simply Eqn. (11) when the contribution from the total number of particles is ignored. The term N_T log(q_B/N_T) in Eqns. 9-11 is a constant of integration under steady state conditions. The factor q_B can be thought of as a molecular partition function for a hypothetical particle, a boltzon, whose internal energy levels reflect the different energy levels of the individual chemical species 29 . The chemical potential for this hypothetical particle is β μ_B = − log(q_B/N_T). Then the entropy is proportional by the ambient energy β to an energy that includes the free energy and a contribution due to differing numbers of total particles,

S = −k_B β (G − N_T μ_B).    (13)

B. Entropy Production, Entropy Production Rate and Free Energy Dissipation Rate
The concept that natural systems optimize their entropy or entropy production rate dS/dt = d log Pr/dt over time is related to the concept that natural systems maximize the change in probability with respect to time, dPr/dt. Over time, natural systems tend to move to the most probable state. Implicit in such statements of maximum entropy is the understanding that there are biological constraints on how fast the system can react and which physical configurations are possible if the system is going to react quickly. For example, the upper limit on enzyme reaction rates is approximately k_cat/K_M = 10^6 s^{−1} 30,31 . If a system were to produce chemical species simply by maximizing Eqn. (7), many chemical species would be in such high concentrations that the cytoplasm of a cell would become glass-like, limiting diffusion and decreasing reaction rates significantly 32,33 . The idea that enzymes are regulated to ensure they do not over-produce chemical species and thereby adversely impact the solution properties inside the cell was first proposed by Atkinson 34 .
To evaluate either the entropy production or the entropy production rate, dS/dt, we need the functional forms of each. Yet the functional form of the time derivative of the entropy, dS/dt, is not immediately clear. However, it can be easily derived by considering the infinitesimal entropy produced along a reaction path ξ̂_j, which measures the extent of reaction j,

dS = (∂S/∂ξ_j) dξ_j.

The extent of the reaction ξ_j determines the amount n_i of molecule i consumed in reaction j through the stoichiometric coefficient γ_{i,j},

dn_i = γ_{i,j} dξ_j.

Since in chemical reactions both forward and reverse reactions happen simultaneously but may differ in proportion, each ξ_j below represents the net extent of the reaction such that ξ_j = ξ_j^+ − ξ_j^−, where the signed index symbols on ξ_j^+ and ξ_j^− represent the forward and reverse reactions, respectively.
Generally, in a system of Z reactions, we have unit vectors along each reaction coordinate, ξ̂_1, ..., ξ̂_Z. We define a unit vector for the system of reactions ξ̂ = (ξ_1 ξ̂_1 + ... + ξ_Z ξ̂_Z)/|ξ|, such that a distance x that measures the extent of reaction of the system along ξ̂ is,

x = |ξ| = (∑_j ξ_j^2)^{1/2}.

The entropy production of a system of Z coupled reactions along the path ξ̂ is then given by the derivative along ξ̂,

dS/dx = ∇S · ξ̂,

where the gradient of the entropy ∇S is the entropy production vector,

∇S = [∂S/∂ξ_1, ..., ∂S/∂ξ_Z]^T.

Note that in the case that ∑_i γ_{i,j} = 0 for each reaction j, ∇S = −k_B β ∇G, which is proportional to the vector of reaction affinities,

∇G = [dG/dξ_1, ..., dG/dξ_Z]^T.

Likewise, the entropy production rate is given by the scalar product of the gradient and the velocity vector, ξ̇ = [dξ_1/dt ξ̂_1, ..., dξ_Z/dt ξ̂_Z]^T,

dS/dt = ∇S · ξ̇.

For brevity, below we will use the notation dξ_j/dt = ξ̇_j for the reaction rates.
Substituting Eqn. (11) and the fact that N_tot = ∑_i n_i, differentiation provides the simplified set of equations for the entropy production and the entropy production rate:

dS/dξ_j = −k_B β ∑_i γ_{i,j} μ°_i − k_B ∑_i γ_{i,j} log n_i + k_B β μ_B ∑_i γ_{i,j},

and,

dS/dt = ∑_j ξ̇_j dS/dξ_j.

Here it is assumed that the change in N_T due to a single reaction is such that μ_B is essentially constant. Finally, substituting the identities −β ∑_i γ_{i,j} μ°_i = log K_j and ∑_i γ_{i,j} log n_i = log Q_j, where K_j is the equilibrium constant and Q_j is the reaction quotient, the following equalities are obtained:

dS/dξ_j = k_B log K_j Q_j^{−1} + k_B β μ_B ∑_i γ_{i,j},    (24)

and,

dS/dt = ∑_j ξ̇_j (k_B log K_j Q_j^{−1} + k_B β μ_B ∑_i γ_{i,j}).    (25)

It is clear that maximizing the entropy production, Eqn 24, will not necessarily lead to the same result as maximizing the entropy production rate, Eqn 25, due to the difference between the extent of a reaction and its time derivative. The entropy production is a measure of the amount of entropy produced per reaction, while the entropy production rate is the entropy produced per time. An important question then is, when seeking to model nature, should a computational model maximize the entropy, the entropy production or the entropy production rate?
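For reactions with ∑_i γ_{i,j} = 0 (so the μ_B term drops), the entropy production rate reduces to a sum of rates times log thermodynamic odds; the sketch below evaluates it, in units of k_B, for assumed rates, equilibrium constants, and reaction quotients.

```python
import math

# Sketch: entropy production rate (in units of k_B) for reactions with
# sum_i gamma_ij = 0, so dS/dt = sum_j rate_j * log(K_j / Q_j).
# The rates, K_j, and Q_j values are illustrative assumptions.

def entropy_production_rate(rates, K, Q):
    """Sum over reactions of rate times log thermodynamic odds."""
    return sum(r * math.log(k / q) for r, k, q in zip(rates, K, Q))

rates = [2.0, 2.0, 2.0]     # steady-state reaction rates (assumed equal)
K = [10.0, 5.0, 2.0]        # equilibrium constants (assumed)
Q = [4.0, 2.0, 0.8]         # reaction quotients (assumed)

sigma = entropy_production_rate(rates, K, Q)
# Each reaction is displaced forward from equilibrium (K_j > Q_j), so every
# term is positive and sigma > 0, consistent with the second law.
```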
The entropy production rate has two contributing terms: the free energy dissipation rate and the rate of work to introduce or remove particles from the system. Rigorously, β^{−1} log K_j Q_j^{−1} = −dG/dξ_j is the reaction affinity for the j-th reaction, but for steady state systems with a large number of particles, log K_j Q_j^{−1} ≈ −β ∆G_j, where ∆G_j is the reaction free energy change. For these systems, Eqn. (25) states that the rate of change of the entropy is proportional to the sum over all reactions of the product of the reaction rate and the negative of the free energy change for each reaction. In the case of steady state systems with ensemble averaging and ∑_i γ_{i,j} = 0, the entropy production rate from Eqn. (25) is the negative of the free energy dissipation rate, dS/dt = −d(β G)/dt 12 . Furthermore, the entropy production rate is related to the real-valued, continuous chemical master equation since,

dPr/dt = Pr d log Pr/dt = (Pr/k_B) dS/dt.

Therefore, the change in probability with respect to time due to reaction j is,

(dPr/dt)_j = Pr ξ̇_j log K_j Q_j^{−1}.    (28)

The time-dependent probability considering all reactions is,

Pr(t) = Pr(0) ∏_j exp(∫_0^t ξ̇_j log K_j Q_j^{−1} dt′),    (30)

which is the chemical master equation. The transition probability for each reaction j is the respective exponential term, which can be shown to be a conditional probability. The argument of the exponential is the entropy production rate for reaction j. For the first term of the integrand, integration from t = 0 to the time ∆t for one stoichiometric reaction at steady state (ξ̇_j constant, ξ̇_j ∆t = 1) gives,

∫_0^{∆t} ξ̇_j log K_j Q_j^{−1} dt′ = ξ̇_j ∆t log K_j Q_j^{−1} = log K_j Q_j^{−1},

so that the master equation due to the firing of just reaction j is,

Pr(∆t) = Pr(0) K_j Q_j^{−1}.

When evaluating the master equation for a change to an adjacent state such as the firing of only one or several reactions, the error due to using Stirling's approximation is negligible. However, when the change of state is large, such as going from a highly improbable state to a steady state, the error may be significant. In this case it is more convenient to simply evaluate the difference in entropies using Eqn (7) and the gamma or log-gamma function.
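The per-firing result above can be sketched numerically: at steady state, integrating ξ̇_j log(K_j Q_j^{−1}) over the mean time for one reaction event (so that ξ̇_j ∆t = 1) multiplies the probability by exactly K_j Q_j^{−1}. The odds and rate below are assumed values.

```python
import math

# Sketch: per-firing form of the master equation at steady state.
# One firing of reaction j multiplies Pr by the thermodynamic odds K_j/Q_j.

K_over_Q = 4.0            # thermodynamic odds K_j Q_j^{-1} (assumed)
rate = 8.0                # steady-state reaction rate (assumed)
dt = 1.0 / rate           # mean time for one firing, so rate * dt = 1

delta_log_pr = rate * dt * math.log(K_over_Q)  # integrand constant at steady state
ratio = math.exp(delta_log_pr)                 # Pr(t + dt) / Pr(t)
# ratio equals K_over_Q: one firing makes the state K_j/Q_j times more probable.
```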

C. The Most Probable State: Maximum Entropy
The most probable state is found by maximizing the probability, Eqn (2), or equivalently the logarithm of the probability, Eqn (7), subject to the constraint that a finite number of reactions may occur - that is, by applying Lagrange's method of undetermined multipliers to constrain the extent of each reaction to ξ_j = c_j, a constant,

F = S − λ ∑_j (ξ_j − c_j),

where λ is the undetermined multiplier and log Pr = S is the Boltzmann entropy from Eqn 7 (in units of k_B). Taking the derivative of F with respect to the extent of reaction of the system x,

dF/dx = (∇S − λ) · ξ̂ = 0,

where again the ξ_j are the values for the extent of reaction in the unit vector ξ̂ and λ = [λ, ..., λ]^T is the vector containing the constraint variable. The constraint in terms of the gradient of the entropy is,

∇S = λ.    (38)

If it is the case that the sum of the stoichiometric coefficients for each reaction j is zero, ∑_i γ_{i,j} = 0, then it can be seen that the constraint λ corresponds to the reaction affinity,

λ = dS/dξ_j = −β dG/dξ_j.

The specific value of λ will of course depend on the system conditions, especially the boundary conditions that determine whether the system is at equilibrium or non-equilibrium. Thus, the state with maximum probability has the property that for each reaction j,

dS/dξ_j = log K_j Q_j^{−1} = λ,

such that either a reaction j will have,

K_j Q_j^{−1} = 1,    (39)

or,

log K_j Q_j^{−1} = λ.    (40)

That is, either a reaction is at equilibrium such that Eqn 39 is true, or reactions j not at equilibrium all have the same change in entropy with respect to the extent of reaction. Analogous to Boltzmann's original formulation of entropy in which each energy microstate of independent particles is assumed to be equally likely, each reaction is assumed to be equally likely from an energy perspective. In other words, the configuration that has the highest density is the state in which all reactions occur with the same probability.
Likewise, the state of time-stationary probability has the property that the time derivative of F is zero such that,

dF/dt = ∑_j ξ̇_j (dS/dξ_j − λ) = 0.    (41)

The requirements for the most stable state are that either Eqn (40) is obeyed as before or,

ξ̇_j = 0.    (42)

For a steady state, there is an additional requirement that the species counts n_i (or concentrations) be stable with respect to time such that S · ξ̇_ss = 0, where S is the stoichiometric matrix and ξ̇_ss is the vector of steady state reaction fluxes. ξ̇_ss may differ from the unit reaction fluxes ξ̇ due to balancing of reaction stoichiometries to obtain mass conservation in the steady state. Consequently, according to Eqn (41), the maximum entropy state is the state in which any reaction j has the entropy production value λ; otherwise ξ̇_j = 0 such that reaction j is at equilibrium. One must keep in mind that the analysis above indicates which state is the most probable; this is the state of maximum entropy. The analysis does not indicate which state is the state having the maximum entropy production or the maximum entropy production rate. There are two conditions that must be kept in mind:

• Condition 1: If the boundary conditions are the same for every state such that the total drop in entropy is Zλ, and if all states have the same steady state rate, then all states have the same entropy production rate dS/dt.

• Condition 2: If the boundary conditions are the same for every state such that the total drop in entropy is Zλ, but the states have different steady state rates, it is not necessarily the case that the state obeying Eqns 39 and 40 will have the maximum entropy production rate dS/dt.
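The shared-affinity condition of Eqns 39 and 40 can be illustrated on a linear pathway A → B → C → D with fixed boundary concentrations, where every non-equilibrium reaction must carry the same thermodynamic odds K_j Q_j^{−1} = exp(λ); the intermediates then follow analytically. The K_j values and boundary concentrations below are assumptions.

```python
import math

# Sketch: maximum entropy condition for a linear pathway A -> B -> C -> D
# with fixed boundaries. All three reactions share the same odds K_j/Q_j.

K = [10.0, 4.0, 2.5]      # equilibrium constants for the three reactions
A, D = 10.0, 1.0          # fixed boundary concentrations

# The product of the odds is fixed by the boundaries:
# prod_j K_j/Q_j = (K1 K2 K3) A / D, shared equally among the 3 reactions.
odds = (math.prod(K) * A / D) ** (1.0 / 3.0)

# Back-solve intermediates from K_j / Q_j = odds with Q_1 = B/A, Q_2 = C/B:
B = K[0] * A / odds
C = K[1] * B / odds

odds_3 = K[2] * C / D     # third reaction recovers the same shared value
```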
To answer the question asked above - whether a computational model should maximize the entropy, the entropy production, or the entropy production rate - it is the entropy that should be maximized. While this may seem counter-intuitive, in that the conditions above do not result in a state of maximum entropy production or maximum entropy production rate, there is a subtle semantic issue at hand. Specifically, if instead of defining probability densities based on species counts n_i, one were to instead define density functions f based on the reaction extents,

f = f(ξ_j),    (43)

or the reaction rates,

f = f(ξ̇_j),    (44)

and instead of using the Boltzmann entropy of Eqn 7 one used Boltzmann's H-theorem as a definition of entropy, for instance,

S_H = −∑_j f(ξ̇_j) log f(ξ̇_j),    (45)

then the maximum entropy production and entropy production rate states by these definitions would indeed be the most probable states. Note that Eqn (45) is a thermodynamic function; Boltzmann's original H-theorem is not necessarily, as the densities f were observational rather than thermodynamic in nature, making the original H-theorem more closely related to (but distinct from) the Maximum Caliber concept 13,17 . The observational densities f used by Boltzmann would presumably have the same values as f(ξ̇_j) above when the observational densities are sampled over the ergodic time scale. See section II F for further discussion.
The maximum entropy state is the state at which the probability density is maximized; it can be thought of as a reference state. Denoting the vector of reaction quotients Q* = [Q*_1, ..., Q*_Z]^T for this reference state and the vector of equilibrium constants K = [K_1, ..., K_Z]^T, the vector of chemical potentials μ* = [μ*_1, ..., μ*_Z]^T for the reference state is calculated using the stoichiometric matrix S,

β μ* = −log(K • (Q*)^{−1}),

where • indicates the Hadamard element-wise product and the reaction quotients follow from the reference-state counts via log Q* = S^T log n*.

D. Maximum Entropy is the State of Maximum dPr/dt
Generally in any system, whether chemical or not, we may be interested in finding the state that has the greatest change in probability with time, dPr/dt, given by Eqn (28), but constrained such that the rates ξ̇ are at steady state. Similar to the procedure above, this state can be found using Lagrange's method of undetermined multipliers in which we define a new function F′ involving a steady state constraint such that,

F′ = dPr/dt − λ′ ∑_{j=1}^{Z} (ξ̇_j − c_j),    (47)

where the constraints ∑_{j=1}^{Z} (ξ̇_j − c_j) are the assurance that each process rate ξ̇_j is constant (c_j) for all processes j. Since,

dPr/dt = Pr ∑_j ξ̇_j (∂ log Pr/∂ξ_j),

taking the gradient ∇_ξ̇ of Eqn (47) with respect to the reaction fluxes ξ̇_j, the state of maximum change in probability with time leads to the condition that either ξ̇_j = 0 or,

Pr (∂ log Pr/∂ξ_j) = λ′.    (49)

Since Pr is a constant due to a specific steady state, Eqn (49) can be rewritten as,

∇S = λ,    (50)

where λ = λ′/Pr. Here, the constraint once again is simply ∇S = λ, and Eqn (50) is the most probable state condition. Because of Condition 2 discussed above, it is not the case that the state with the maximum entropy production rate dS/dt due to a high steady state rate also maximizes the change in probability with time. The change in probability with time is maximized under the maximum entropy conditions for those processes j in which ξ̇_j ≠ 0. Note that Eqns 47-50 apply generally, not just to chemical reaction systems.
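A small numerical sketch of the condition above: when every active process carries the same per-reaction entropy production λ, the quantity dPr/dt = Pr ∑_j ξ̇_j λ depends only on the total steady-state rate, not on how that rate is allocated among processes. The Pr, λ, and rate values are assumptions.

```python
# Sketch: with all per-reaction entropy productions equal to lambda (the
# maximum entropy condition), dPr/dt is invariant to how a fixed total
# steady-state rate is split among reactions.

Pr = 1e-6                 # probability of the current state (assumed)
lam = 0.7                 # shared entropy production per unit extent (assumed)

# Three allocations of the same total rate (9.0) among three reactions:
allocations = [[3.0, 3.0, 3.0], [1.0, 4.0, 4.0], [0.5, 0.5, 8.0]]

dPr_dt = [Pr * sum(r * lam for r in rates) for rates in allocations]
# All entries are equal because each reaction contributes rate_j * lambda.
```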

E. Mass Action Kinetic Models from Maximum Entropy
The practical question then is how to develop a maximum entropy production model for a biological system such as metabolism. The maximum entropy condition of Eqn (41) can be solved directly using mathematical optimization with constraints. However, mathematical optimization does not provide any additional physical insight into the maximum entropy state. Fortunately, following the development of the Arrhenius equation there was a concerted effort to develop a thermodynamic formulation of mass action. That effort ultimately led to the development of transition state theory 36,37 . An important conceptual step towards this goal was Marcelin's equation for mass action 38 , which, although it was ultimately shown to be incorrect, can be used to find a maximum entropy kinetic model that provides additional physical insight into the maximum entropy requirements.
Consider a reaction j involving molecules of reactants A and molecules of products B, each with respective unsigned stoichiometric coefficients ν_{i,j} = |γ_{i,j}| for each molecular species i,

∑_{i∈A} ν_{i,j} A_i ⇌ ∑_{i∈B} ν_{i,j} B_i.    (51)

The usual kinetic law of mass action for the net reaction flux is,

ξ̇_j = k_1 ∏_{i∈A} n_i^{ν_{i,j}} − k_{−1} ∏_{i∈B} n_i^{ν_{i,j}},    (52)

where k_1 and k_{−1} are constants of proportionality (rate parameters). The challenge is to find a maximum entropy steady state using the law of mass action, yet the usual rate laws such as Eqn. (52) do not contain any thermodynamic terms to optimize. However, a mass action rate law such as Eqn. (52) can be turned into a thermo-kinetic equation by a simple algebraic rearrangement that factors each term such that,

ξ̇_j = (k_{−1} ∏_{i∈B} n_i^{ν_{i,j}}) K_j Q_j^{−1} − (k_1 ∏_{i∈A} n_i^{ν_{i,j}}) Q_j K_j^{−1}.    (53)

The reaction rate equation now has thermodynamic terms, K_j Q_j^{−1} and Q_j K_j^{−1}, and kinetic terms, k_{−1} ∏_{i∈B} n_i^{ν_{i,j}} and k_1 ∏_{i∈A} n_i^{ν_{i,j}}. Eqn (53) is an exact but alternate representation of the law of mass action, which says that the rate is proportional to the instantaneous thermodynamic odds of the forward reaction - multiplied by the average time before the odds change due to an addition of another reactant molecule - minus the thermodynamic odds of the reverse reaction - again multiplied by the average time before addition of another reactant molecule for the reverse reaction. Setting each kinetic term to a constant c is equivalent to assuming that each reaction occurs on the same time scale such that the rate equation becomes,

ξ̇_j = c (K_j Q_j^{−1} − Q_j K_j^{−1}).

Although not generally correct, this formulation has the advantage that it makes the energy surface convex, since the surface of the log of an exponential distribution is convex for each fixed number of particles, N_tot. The result of the approximation is the Marcelin equation 38 and can be used to easily find the maximum entropy steady state 33 . (The time dependence can be added back in post-hoc such that a general formulation of the law of mass action is once again obtained.) Using this assumption with c = 1 at steady state, a set of Z sequential reactions having equal rates will have rates that are proportional to the thermodynamic driving forces
such that,

ξ̇_j = K_j Q_j^{−1} − Q_j K_j^{−1}.    (55)

The product of the M by Z stoichiometric matrix, S, and the vector of reaction fluxes ξ̇ is zero at steady state,

S ξ̇ = 0.    (56)

The convention used here is that S has rows corresponding to metabolites and columns corresponding to reactions. Eqn (56) enforces mass balance at steady state, in which case the reaction flux may be a multiple α of the unit reaction flux, ξ̇_j^ss = α ξ̇_j. In this case, the steady state rates are such that,

K_j Q_j^{−1} − Q_j K_j^{−1} = α ξ̇_j.

Consequently, the constraints of the most probable state, Eqns (40) or (42), may not be strictly enforced since a rate ξ̇_j^ss may not be single valued. However, as shown in Figure 1, the rates are approximately single valued for the cases α = 1, 2, 3. A steady state solution can be found utilizing the rate laws of Eqn (55) using a non-linear least squares optimization that results in approximate enforcement of the constraints of Eqns (40) and (42). The optimization has a runtime of a few seconds, in contrast to adaptive time step ODE solvers, which can take days of CPU time to converge due to the stiffness of the equations as they approach the steady state.

FIG. 1. When using the Marcelin approximation (Eqn (55)) to the mass action rate equations, it is not necessarily the case that all reactions j at steady state will have the same value of the thermodynamic odds K_j Q_j^{−1}. The reason for this is that the steady state rate ξ̇_j^ss may be a multiple α of the unit rate, ξ̇_j. However, even mildly nonequilibrium values of K_j Q_j^{−1} result in approximately the same rate ξ̇_j^ss, as shown in the plot. For reference, K_j Q_j^{−1} = 5 corresponds to a free energy change of −0.94 kcal/mol or −3.99 kJ/mol.

The physical insight gained from the use of the Marcelin equation is that in
the maximum entropy state, each reaction occurs on the same time scale such that the reaction rates are directly proportional to the thermodynamic odds (K_j Q_j^{−1}) of the reaction. Regardless of how one obtains the maximum entropy solution, rate constants for the maximum entropy solution are obtained via the mass action rate laws since the reaction fluxes and steady state concentrations are known. For the chemical reaction similar to that shown in Eqn. (51), the net flux ξ̇_j of reaction j is,

ξ̇_j = k_1 ∏_{i∈A} n_i^{ν_{i,j}} − k_{−1} ∏_{i∈B} n_i^{ν_{i,j}}.

Since the fluxes, ξ̇_j, and metabolite concentrations, n_i, are known, the rate constants can be determined,

k_1 = K_j Q_j^{−1} / ∏_{i∈A} n_i^{ν_{i,j}},

and,

k_{−1} = Q_j K_j^{−1} / ∏_{i∈B} n_i^{ν_{i,j}}.

The usual mass action ODEs using rate constants, e.g., Eqn (52), can then be solved using either optimization methods or an ODE solver. The kinetically accessible energy surface is not necessarily convex because of the introduction of the rate constants; each reaction now has its own time dependence.
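As a sketch of the overall procedure (not the paper's own implementation), the Marcelin-style rate law of Eqn (55) can be driven to steady state for a small linear pathway by simple coordinate-wise bisection on the mass balances, after which rate constants follow from the forward and reverse flux components. The K_j values and boundary concentrations are assumptions.

```python
import math

# Sketch: steady state of A -> B -> C -> D under the Marcelin-style rate law
# rate_j = K_j/Q_j - Q_j/K_j (Eqn 55 with c = 1), then rate-constant recovery.

K = [10.0, 4.0, 2.5]
A, D = 10.0, 1.0          # fixed boundary concentrations (assumed)

def rate(Kj, reactant, product):
    """Thermodynamic odds forward minus reverse."""
    Q = product / reactant
    return Kj / Q - Q / Kj

def solve(f, lo=1e-9, hi=1e6):
    """Bisection for f(x) = 0 with f strictly decreasing on [lo, hi]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

B, C = 1.0, 1.0
for _ in range(200):      # relax the two local mass balances to consistency
    B = solve(lambda b: rate(K[0], A, b) - rate(K[1], b, C))
    C = solve(lambda c: rate(K[1], B, c) - rate(K[2], c, D))

fluxes = [rate(K[0], A, B), rate(K[1], B, C), rate(K[2], C, D)]

# Rate-constant recovery for the first (unimolecular) step:
# k1 * A is the forward flux K_1/Q_1 and k_-1 * B is the reverse flux Q_1/K_1.
k1 = (K[0] * A / B) / A
k_minus1 = (B / (K[0] * A)) / B
net_flux = k1 * A - k_minus1 * B   # reproduces the net steady-state flux
```

At the steady state all three fluxes agree (mass balance, Eqn 56), and the recovered k_1 and k_{−1} reproduce the same net flux through the ordinary mass action form.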

F. Inferring the Thermodynamic Probability of a Kinetic Model
While the master equation (Eqn 30) gives the probability density of a state given the counts or concentrations of molecular species and their rates, determining the probability of a state by solving the master equation involves integrating the change in probability from an initial state (Pr(t = 0)) to a steady state where dPr(t)/dt is constant, which can be challenging. Thus, it is useful to also have an estimator of the relative probability for evaluating empirical distributions which differ by their steady state rates ξ̇. This is the context for which maximum caliber methods were developed 13,17 . In maximum caliber approaches, a probability density of a dynamical property, such as a rate ξ̇_j, is often formulated using a function such as,

f(ξ̇) = exp(−λ (∑_j ξ̇_j − c)),    (60)

where ∑_j ξ̇_j − c is a constraint on the system, for instance to enforce a steady state condition such that all rates equal a constant c, and λ is a Lagrange multiplier. The density can be normalized by a cumulative density Q such that,

Pr(ξ̇) = f(ξ̇)/Q,    (61)

where,

Q = ∑_s exp(−λ (∑_j ξ̇_j(s) − c)).

Here, the sum is over distinct steady states s and the reaction rate is now shown to be a function of the steady state s, ξ̇_j(s). The result is an empirical density function Pr(ξ̇) in which the normalization Q is the cumulative density over the observed steady states s. The probability density is simply an empirical density function of observed rates, in which the state having the highest density is also the state that has the average density. Comparing Eqn (60) with Eqn (34), the latter contains information on the system energy while the former does not. The probability density Pr(ξ̇) of Eqn (61) is the domain of statistics, not statistical thermodynamics, as it does not follow a thermodynamic extremum principle: it does not maximize the thermodynamic entropy nor minimize the free energy, as Jaynes stated as well 17 . However, relative thermodynamic probabilities of steady states due to different steady states s and their rates ξ̇_ss(s) can be calculated as a function of the distance d from
the most probable state, as described by the criteria in Eqn (38). Defining a distance d for steady state s from the most probable state, the probability of a state a distance d away from the most probable state can be written as a product over reactions. The use of the product in this context requires that each of the Z reactions be statistically independent, which is generally the case. While the free energies are related such that their sum must equal the total free energy, the rate constants are random variables. That is, if the rate parameters k+_j, k−_j are independently and randomly chosen from a distribution, then since the rate parameter values determine the concentrations and hence the reaction quotients Q_j, the K_j Q_j^−1 are also random. An issue with using Eqn 65 is that the standard deviation σ_j cannot necessarily be reliably estimated. Instead, the probability densities of the reaction free energies can be inferred non-parametrically using a Gaussian kernel density estimation. From a random sample population of N_ss steady state solutions, in which each steady state solution s has a set of reaction quotients {Q_j=1(s), ..., Q_j=Z(s)}, the estimated probability f(ξ_j(s)) of observing a distance d_s,j = log K_j Q_j^−1(s) − λ can be computed, where h_j is the kernel scaling parameter, or bandwidth. The estimated relative probability of the steady state s can then be expressed as the product of the probabilities for each reaction. Here again, it is significant to note that if one considers entropy production to be defined with respect to the densities f(ξ(s)) for reactions ξ_j using Boltzmann's statistical version of the H-theorem 25,39,40, as shown in Eqn (45), then the resulting sum is an estimate of the thermodynamic H function, Eqn (45). By this definition, the maximum entropy production rate state is the state that matches the conditions for the maximum entropy state discussed below Eqn (41). A similar H-theorem estimate can be analogously defined as the maximum entropy production state, in which the d_s,j measure
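As an illustration, the kernel density estimate and the product over reactions (cf. Eqns 66 and 67) can be sketched in a few lines. This is a generic Gaussian kernel sketch, not the authors' implementation; the function names and the per-reaction bandwidths are illustrative assumptions:

```python
import numpy as np

def kde_density(d_samples, d_query, h):
    """Gaussian kernel density estimate f(d) built from the sampled
    distances d_{s,j} of a population of steady states (cf. Eqn 66)."""
    u = (d_query - d_samples) / h
    return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2.0 * np.pi))

def steady_state_probability(D, d_s, h):
    """Relative probability of steady state s as a product over the Z
    (assumed statistically independent) reactions (cf. Eqn 67).
    D   : (N_ss, Z) array of distances d_{s,j} for the sample population
    d_s : length-Z distances for the queried steady state
    h   : length-Z kernel bandwidths, one per reaction"""
    Z = D.shape[1]
    return float(np.prod([kde_density(D[:, j], d_s[j], h[j]) for j in range(Z)]))
```

Because a product of Z small densities underflows quickly for large Z, a practical implementation would sum log-densities instead.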
the distance in entropy production away from the most probable reaction.

III. APPLICATION AND EVALUATION
To demonstrate the concepts below, we use a canonical model of central metabolism consisting of glycolysis coupled to the tricarboxylic acid (TCA) cycle, shown in Figure 2. In the model, glucose is fed into the system through the glucose kinase reaction (abbr. HEX1) and carbon leaves the system as CO2 in the TCA cycle. Specifically, the metabolic model consists of the 21 reactions of glycolysis, the TCA cycle and the glutamine oxoglutarate aminotransferase (GOGAT) reaction, and includes 37 metabolites, of which 17 were fixed boundary species and 20 were allowed to vary. Without the GOGAT reaction, the submatrix of the stoichiometric matrix that describes the TCA cycle is singular (eight reactions but nine variable intermediate metabolites, including the input species acetyl CoA). The system is made non-singular by coupling the intermediate α-ketoglutarate to a bath using the GOGAT reaction, 2 glutamate + NAD ⇔ glutamine + α-ketoglutarate + NADH, in which glutamate, glutamine, NAD and NADH are held fixed.
Standard free energies of formation of metabolites in aqueous solution, as well as equilibrium constants, were determined using the eQuilibrator software 41 over a range of pH and ionic strength values. Rate parameters were determined as described above (Eqn (59)). Figures and plots below use molar units for free energy, such that free energies per mole are used rather than free energies per molecule; the former differs from the latter by Avogadro's number N_A. Consequently, the gas constant R multiplied by the temperature T, RT, is used instead of β, where RT/N_A = β^−1.
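For instance, the conversion from a standard reaction free energy on the molar scale to an equilibrium constant uses RT rather than β^−1; a minimal sketch (constant values and the function name are illustrative):

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol K)
T = 298.15          # temperature, K

def equilibrium_constant(dG0):
    """K_j = exp(-dG0_j / RT) with dG0_j in kJ/mol (molar units,
    so RT replaces beta^-1 = RT / N_A)."""
    return math.exp(-dG0 / (R * T))
```

At 298.15 K, RT ≈ 2.479 kJ/mol, so each −2.479 kJ/mol of standard free energy multiplies K by e.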
In comparison to the maximum entropy model, we focus on three sources of variability in predictions: (1) variability in predictions due to variability in rate constants; (2) variability in predictions due to uncertainty in estimated equilibrium constants; and (3) variability in predictions due to the range of ionic strengths that may be found in the cytoplasm of the cell. First, however, we describe how variability in steady states arises in biological systems due to variability in nominal rate parameters.

A. Variable Rate Parameters and Implicit Representation of Enzymes.
While varying parameters such as mass action rate constants to maximize the entropy of non-equilibrium steady states may not be a familiar concept, this is what biological systems do when under selective pressure (natural selection), even if it is not the only aspect of natural selection. Through mutation, the set of enzymes in a reaction pathway can be selected for thermodynamic efficiency, which ultimately means that a greater amount of energy can be harvested from the environment and put to use to create biomass.
For enzyme-catalyzed reactions such as that shown in Figure 3, the rate constants determined through Eqn (59) are composite rate constants that implicitly represent an enzymatic process.
FIG. 3. Reaction scheme for catalyzed reactions relative to the uncatalyzed reactions used in Eqn (51).
Figure 3 also illustrates how the composite rate constants, k1 and k−1 in Eqn (51), can be related to the elementary enzymatic processes. The two types of rate constants, composite and elementary, are related through Kirchhoff's voltage law applied to chemical systems 42. Likewise, as a consequence of generalized detailed balance, the reaction flux ξ1 represents a coarse graining of the enzyme reaction fluxes according to the product of the ratios of the enzyme reaction fluxes. Thus, while the enzymes are not explicitly represented, the thermodynamics and coarse-grained kinetics of the overall catalyzed process are represented, as expected from the Haldane relationship between the reaction thermodynamics and enzyme catalysis 43.

B. Comparison of the Maximum Entropy Model to an Ensemble of Mass Action Models
While the maximum entropy solution is an optimal (i.e., most likely) model, it is not clear how much leeway nature has when selecting organisms with metabolisms that vary from the maximum entropy solution. To investigate this, a population of models with differing reaction rate constants was generated and compared to the maximum entropy model. Rate constants were generated by changing metabolite concentrations, thereby altering the value of the reaction quotient Q, and solving Eqn (59) for the rate constants. For example, for the chemical reaction equation of Eqn (51), solving the rate law (Eqn (52)) for the rate parameters gives expressions for the forward and reverse rate constants. Six ranges of concentrations were used to generate rate constants, corresponding to variation of the concentrations across r orders of magnitude (10^±r/2), centered on the maximum entropy solution, where r = 0.5, 1, 2, 3, 4 and 5. This leads to approximately the same range of variability in rate constants. For example, for concentrations varied up or down randomly using r = 1 and 3, the distributions of the ratios of feasible rate constants are shown in Figure 4, A and B, respectively. Each distribution represents the sampling of rate constants over all reactions, except as noted below. Both distributions are asymmetric around the maximum entropy solution. However, the distribution with the lower variation of range r = 1 (Figure 4A) is roughly uniformly distributed on each side of the maximum entropy solution, while the distribution with the higher variation of r = 3 (Figure 4B) falls off steeply on each side of the maximum entropy production solution. Since the rate constants for the input boundary reactions (HEX1 and GOGAT) only vary due to variation in the reaction product, not the reactants, they are highly biased to remain close to the rate constants of the maximum entropy solution. These rate constants were therefore removed from both distributions shown in A and B.
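The ensemble-generation step can be sketched as follows. Since Eqns (51), (52) and (59) are not reproduced here, the sketch assumes the simplest case of a single reversible reaction A ⇔ B with unit stoichiometry, rate law ξ = k+[A] − k−[B], and the detailed-balance (Haldane) constraint k+/k− = K; the function and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rate_constants(c_star, K, xi_ss, r):
    """Perturb the maximum entropy concentrations c_star = (A*, B*) by
    10^{±r/2} and solve the mass action rate law xi = k_plus*A - k_minus*B
    (with k_plus / k_minus = K) for the rate constants.
    Returns None for an infeasible draw (the perturbed concentrations
    cannot carry the positive steady state flux xi_ss)."""
    A, B = c_star * 10.0 ** rng.uniform(-r / 2.0, r / 2.0, size=2)
    denom = A - B / K
    if denom <= 0.0:
        return None
    k_plus = xi_ss / denom
    k_minus = k_plus / K
    return k_plus, k_minus
```

Drawing many samples and counting the `None` returns reproduces the kind of feasibility statistics reported in Figure 4C.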
In contrast, the output boundary reactions producing CO2 or CoA (PDH, CSM, ICDH, AKDG, SUCCoA) have both reactants and at least one product that vary.
The percentage of trials that produced a steady state solution is shown in Figure 4C. Here, results are shown for all six ranges of concentrations used to generate rate constants. As can be seen, the number of feasible solutions decreases rapidly, from approximately 50% to less than 10%, as the allowable concentration range is expanded beyond 10-fold. When the concentration range spans 10^5, only 0.07% of the random concentration sets result in a feasible steady state.
Properties of individual reactions were evaluated as well. Figure 5A-C shows the resulting distributions of rate constants, reaction rates, and reaction free energies for the ensemble of models generated using sets of random concentrations that varied by up to 10-fold up or down of the maximum entropy solution (r = 2: up to a 100-fold range across individual concentrations, Figure 5D; see also Appendix B 1). Boxplots were used to characterize the resulting rate constants, rates, reaction free energies and concentrations. The ends of each box represent the 25th and 75th quantiles, denoted q_n(25) and q_n(75), while the whiskers extending from each box denote the most extreme data points falling within 1.5 times the range between q_n(25) and q_n(75). Data points outside this range are denoted as outliers and are shown as red '+'s.
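The boxplot convention described above can be stated precisely in code; this is a generic sketch of the convention, not the plotting code used for Figure 5:

```python
import numpy as np

def box_stats(x):
    """Quantiles, whisker limits, and outliers following the convention in
    the text: whiskers extend to the most extreme data points within
    1.5 * (q75 - q25) of the box; everything beyond is an outlier."""
    x = np.asarray(x, dtype=float)
    q25, q75 = np.percentile(x, [25, 75])
    iqr = q75 - q25
    lo = x[x >= q25 - 1.5 * iqr].min()     # lower whisker end
    hi = x[x <= q75 + 1.5 * iqr].max()     # upper whisker end
    outliers = x[(x < lo) | (x > hi)]      # would be drawn as red '+'s
    return q25, q75, lo, hi, outliers
```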
The feasible rate constants for each reaction are shown in Figure 5A as the logarithm of the ratio of the rate constant to the maximum entropy rate constant. The random rate constants vary asymmetrically around the maximum entropy values, especially for the glucose kinase (HEX1) reaction, which is an input boundary reaction. As mentioned above, the distributions of rate constants for the input boundary reactions HEX1 and GOGAT (not shown) are considerably skewed because only the product concentrations of those reactions are varied.
The analogous reaction rates (for the forward reactions) are shown in Figure 5B as the logarithm of the ratio of the rate to the maximum entropy rate. Interestingly, the lowest forward reaction rates across reactions (values of zero) correspond to those of the maximum entropy solution. Reaction free energies also varied considerably for each reaction, and are shown in Figure 5C. The free energies of the maximum entropy solution are shown as black *, while red * marks the steady state in which the reaction free energies have the largest variance away from the maximum entropy solution. Blue * denotes the distribution that has the largest variance away from the maximum entropy solution with respect to forward reaction rates. As expected, the maximum entropy model, by virtue of the Marcelin approximation, has the most uniform set of reaction free energies.
Despite the variation in reaction free energies across the models, each model has the same steady state rate, since the rate used to calculate the rate constants (Eqn (59)) was the steady state rate from the maximum entropy model. Hence, each model extracts the same amount of energy per glucose molecule. The energy extracted per mole of glucose consumed can be calculated by dividing the free energy dissipation rate by the net reaction flux of the glucose uptake reaction, which is the glucose kinase reaction HEX1 in the model. The free energy extracted per glucose is a constant, namely −146.6 RT (−363.4 kJ/mol). Likewise, because each model in the ensemble has the same steady state rate of 36.4 (unitless time) and the same overall free energy change, the free energy dissipation rate also has the same value for each model, −5334 (∆G/RT per unitless time), demonstrating Condition 1 in Section II C.
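The arithmetic in this paragraph can be reproduced directly from the reported values (the numbers below are those quoted in the text; the quotient comes out near the stated −146.6 RT up to rounding):

```python
# Reported free energy dissipation rate and steady state glucose uptake flux
dissipation_rate = -5334.0   # in units of Delta-G/RT per (unitless) time
hex1_flux = 36.4             # net steady state flux through HEX1

# Free energy extracted per mole of glucose: dissipation rate / uptake flux
per_glucose_RT = dissipation_rate / hex1_flux    # ~ -146.5 RT per glucose
RT_kJ_per_mol = 8.314462618e-3 * 298.15          # ~ 2.479 kJ/mol at 25 C
per_glucose_kJ = per_glucose_RT * RT_kJ_per_mol  # ~ -363 kJ/mol
```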
A population of models that do not have the same steady state rates or free energy dissipation rates can also be generated. This is done following the same process as before, in which rate constants are determined by solving Eqn (59) using random steady state concentrations. However, in this case the rate constant for the first reaction in the pathway, glucose kinase (HEX1), is set to the value determined from the maximum entropy solution. Since the concentration of the product of that reaction, glucose 6-phosphate, is still randomly varied and the resulting steady state flux is solved using Eqn (52), if the latter concentration is less than its maximum entropy concentration the reaction becomes more favorable (∆G_Hex1 decreases) and the reverse reaction occurs less frequently. This process results in varying steady state and free energy dissipation rates, as shown in Figure 6A, and varying steady state and entropy production rates, as shown in Figure 6B. The entropy production rates (free energy dissipation rates) increase (decrease) nearly linearly with respect to increases in the steady state rate. The maximum entropy solution to the boundary value problem does not have the highest free energy dissipation or entropy production rate, since it does not have the highest steady state rate, demonstrating Condition 2 in Section II C.
Since the maximum entropy model results in each reaction being equally far from equilibrium, fewer reactions are close to equilibrium in the model. Consequently, the observation that maximum entropy models generally have high steady state rates is consistent with the concept that reactions near equilibrium in systems of coupled reactions limit the rate of the overall pathway 44.

C. Probability of Models with High Dissipation Rates
While it would seem from Figure 6 that species with the highest steady state rate of metabolism might be the most optimal due to the correlation with the free energy dissipation rate, this analysis does not take into account the probabilities of each model relative to the others. In physical systems it is not the entropy production rate, d log Pr/dt, that is maximized, but rather the change in probability of the system with time, dPr/dt. To demonstrate, Figure 7 plots the same models used in Figure 6 (black dots), but this time comparing the probability of the steady state, Pr, with the change in probability with time, dPr/dt. The probability of each of the steady states was determined using the Gaussian kernel method of Eqn (67), while dPr/dt was determined with Eqn (28). The probability densities for each of the 21 reactions are shown in Figure 12 in Appendix B 3.
As can be seen in Figure 7, the maximum entropy model, indicated by the red *, has both the largest probability and the largest change in probability with time, dPr/dt. That is, because of the principle of equipartition of energy, not only is the maximum entropy model expected to be the thermodynamically most likely model, it will also have the largest change in probability with time of any of the models. Unless one has specific information about a reaction not obeying the principle of equipartition of energy, perhaps because the catalyst can only lower its transition state a limited amount, the maximum entropy model is the least biased model. A Michaelis–Menten model, in contrast, is always biased by the assumption that there is infinite thermodynamic driving force to release the product of an enzymatic reaction, which is certainly not the case for coupled reactions in a cell.

D. Prediction Variability due to Uncertainty in Standard Free Energies of Reaction.

In order to determine the sensitivity of the maximum entropy predictions of fluxes and concentrations to uncertainty in the estimates of ∆G°, we randomly sampled ∆G° across its 95% confidence interval and characterized the predictions of steady state fluxes and concentrations. In addition, each prediction was evaluated using multiple sets of random initial concentrations. The standard free energies enter the model predictions through the equilibrium constants, where the equilibrium constant for reaction j is K_j = e^−∆G°_j/RT. The collection of standard free energy changes ∆G° and standard deviations σ(∆G°) for all reactions were calculated using the eQuilibrator software 41.
Specifically, 1000 values were sampled from disjoint intervals within a single standard deviation range, [∆G° − σ(∆G°), ∆G° + σ(∆G°)], using the Latin hypercube sampling (LHS) method 45,46. The resulting equilibrium constants were calculated from the sampled free energy values. Ten initial starting concentrations were then chosen for each variable metabolite. Nonlinear least squares optimization using Eqns (55) and (56) was then used to acquire steady state fluxes through the reactions, metabolite concentrations and net driving forces on the reactions. The thermodynamic driving force on a reaction, known as the reaction affinity, is defined in Eqn (75). (If the system is at steady state and the concentrations are large enough that they can be assumed to be continuously distributed, then −∆G_j ≈ A_j.) From the net driving force on each reaction, the total driving force on the pathway, A, was determined using Eqn (76) and the relation A = ∑_j A_j. The variation of fluxes obtained as a function of the total driving force on the pathway, shown in Figure 8A and Appendix B, Table I, results in a linear response. In the upper glycolysis pathway, the reactions carry exactly half the flux of downstream reactions due to the production of two glyceraldehyde-3-phosphates from fructose-1,6-bisphosphate in the fructose-bisphosphate aldolase catalyzed reaction. This results in a second line of higher flux data points for lower glycolysis (Figure 8A). The significance is that uncertainties in standard reaction free energies ∆G°/RT can potentially have a significant effect on predicted reaction fluxes. This is the case whether one uses a maximum entropy model or not, since variation in the equilibrium constants K_j results in variation in the rate parameters k_j and k_−j as well.
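The stratified sampling step can be sketched as follows. This is a generic Latin hypercube implementation over the interval [∆G° − σ, ∆G° + σ], not the authors' code, and the example ∆G° values at the bottom are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, dG0, sigma):
    """Draw n stratified samples per reaction from [dG0 - sigma, dG0 + sigma]:
    one uniform draw from each of n equal-width disjoint sub-intervals,
    with the strata permuted independently for each reaction (column)."""
    dG0 = np.asarray(dG0, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    m = len(dG0)
    # one point per stratum [i/n, (i+1)/n) on the unit interval
    u = (np.arange(n)[:, None] + rng.random((n, m))) / n
    for col in range(m):                      # decouple strata across columns
        u[:, col] = u[rng.permutation(n), col]
    return dG0 + sigma * (2.0 * u - 1.0)      # map [0, 1) onto the +/- sigma band

# placeholder dG0/RT values for two reactions; K_j = exp(-dG0_j/RT)
dG_samples = latin_hypercube(1000, [-16.7, 1.7], [0.9, 0.5])
K_samples = np.exp(-dG_samples)
```

Stratification guarantees that every sub-interval of the ±σ band is sampled exactly once per reaction, unlike plain uniform sampling.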
The variability in the predicted concentrations and flux is a result of variability in the equilibrium constants, not the initial metabolite concentrations. To show this, we performed 10 distinct optimization runs with random initial starting concentrations for each metabolite. Each set of random starting metabolite concentrations was generated on a logarithmic scale using a uniform distribution on the interval [−10, 0]. The mean and variation in concentrations due to uncertainty in ∆G° are shown as error bars for each metabolite or reaction in Figure 9A. The resulting steady state reaction fluxes for each of the 10 starting concentrations are represented by the range of colors from blue (1st) to green (10th). Identical means and standard deviations across each blue-green set of steady state values show that steady state metabolite and flux variation is not a result of starting concentrations or the optimization routine. Instead, steady state metabolite and flux variation is directly related to variations in the free energy. The large variability in the steady state flux in Figure 9B is most likely due to large variability in the standard reaction free energies of just a few reactions. In particular, glyceraldehyde 3-phosphate dehydrogenase (GAPD) has an especially large coefficient of variation of 5.27 × 10^4 (Appendix B, Table I), due to the small value of the equilibrium constant (mean = 2.5 × 10^−5, σ = 1.34). In contrast, the predicted concentrations do not vary over a large range due to uncertainties in the standard free energies of reaction ∆G°_j used in the model. The steady state metabolite concentrations predicted from a maximum entropy approach, or any other approach, may be outside the range of values typically observed experimentally 47,48, as can be seen for acetyl CoA and fructose-1,6-bisphosphate in Figure 9. However, as we have shown elsewhere 32,49, these metabolite concentrations are reduced to physiological levels when enzyme activities are introduced into the governing differential equations.

E. Prediction Uncertainty due to Variability in Ionic Strength
The value of the ionic strength, I, in the cell cytoplasm is often taken as I = 0.25 M. In order to fully test the sensitivity of the model predictions to changes in ionic strength, 1000 values were sampled uniformly within equally spaced disjoint intervals between [0.01, 0.5] M, a range chosen because it spans the ionic strengths that are physically observed. Each value of the ionic strength was then used to generate standard free energies for each reaction, ∆G°_j(I), using the eQuilibrator API 41, which allows for the adjustment of free energies as a function of pH and ionic strength. For the overall chemical equation for the pathway, the standard free energy decreases as the ionic strength increases. As above, equilibrium constants were calculated from the standard free energies of reaction and subsequently used to obtain steady state predictions for metabolites and fluxes.
The steady state fluxes due to variability in ionic strength in upper and lower glycolysis remain linear with respect to the reaction affinity, A (Eqn (76)), as was the case when the free energy change for reactions was directly varied (Figure 8A). However, the range of reaction affinities is reduced when ionic strength is varied: the total affinity was within [−379.225, −342.274] (Figure 10A), but ranged between [−413.483, −316.46] due to uncertainty in equilibrium constants (Figure 8A). Consequently, obtaining accurate standard free energies of reaction is more important than obtaining accurate ionic strength values for the inside of the cell. Moreover, the distribution of fluxes through upper glycolysis (Figure 10B) is no longer bell-shaped, as was observed in Figure 8B, due to the distribution of standard free energies derived from the sampled ionic strength values. Variations in individual steady state metabolite concentrations due to alterations in ionic strength (Figure 11A) are represented by blue error bars showing a single standard deviation. Similarly, the steady state flux through each reaction due to alterations in ionic strength is shown in Figure 11B. In both the steady state metabolite concentrations and the reaction fluxes, the variations due to ionic strength lie within the variations based on uncertainty (95% confidence interval) in the standard free energy change (compare Figure 9B and Figure 11B). The mean flux and standard deviations for individual reaction fluxes and free energies are listed in Appendix B, Table I. Like the variations in flux, the variations in concentrations due to uncertainties in ionic strength are much less than the variations seen due to uncertainty in the standard reaction free energies, ∆G°_j.

IV. DISCUSSION AND CONCLUSION
In this work, we demonstrated the relationship between the probability density of a system of coupled chemical reactions, with each chemical species represented by its Boltzmann probability, and the rates of free energy dissipation, entropy production, and change in probability with time, dPr/dt, due to chemical reactions. When the total number of particles does not change in any reaction, the free energy dissipation rate is the same as the entropy production rate, but opposite in sign. For systems in which the total number of particles can vary, the entropy production rate additionally depends on changes in the probability density due to changes in the number of particles. We showed that, while the maximum entropy state is not necessarily the state with the highest entropy production rate (d log Pr/dt), it is the state that maximizes the change in probability with time, dPr/dt.
The entropy production rate relates the chemical master equation to chemical kinetics, as described by the law of mass action. Moreover, the time-dependent probability density for a system of coupled chemical reactions can be separated into time-dependent and time-independent equations (Eqn (53)), which can be solved separately by first solving for the maximum entropy solution and then adding specific time dependence back into the rate equations using either known rate constants or measured steady state concentrations.
Predictions from maximum entropy models and from kinetic models that do not maximize the entropy production were compared. Systems that have the same steady state rate but differ in the free energies of individual reactions have the same free energy dissipation rate. Only when the steady state rates vary do the free energy dissipation rates also vary.
In a maximum entropy model, of course, free energies are partitioned as equally as possible across reactions. This equipartition of reaction energy acts to generally, but not always, increase the steady state rate relative to other models (i.e., models that are not at maximum entropy), because fewer reactions in the maximum entropy model are near equilibrium. Reactions near equilibrium act to constrain the steady state flux 44 : the closer a reaction is to equilibrium, the relatively more flux occurs in the reverse direction. Despite this, a maximum entropy solution is not guaranteed to have the highest nominal rate of entropy production or free energy dissipation.
However, a maximum entropy model is also maximal in probability space. Consequently, when the probabilities of the models are considered, the maximum entropy model strikes a balance between high free energy dissipation and entropy production rates and the thermodynamic work required to maintain the steady state. Because of the link between free energy and probability density (Eqn (2)), a low probability density means that a significant amount of energy is required to attain that steady state.
In contrast, maximum entropy models are highly probable because they represent the thermodynamic average of a population. Consequently, if a maximum entropy model does not adequately capture the experimentally observed behavior of a biological process of interest, the model is likely incomplete and missing important mechanisms or functionality. In this regard, it is essential to consider physical, chemical and biological constraints as well, for these may represent the missing, and possibly emergent, functionality. For instance, since the concentrations of chemical species in maximum entropy models are proportional to their Boltzmann probabilities, adjusted for non-equilibrium boundary conditions, the concentrations of species such as acetyl CoA and fructose-1,6-bisphosphate may be unreasonably high for a biological system. However, applying constraints in the form of activity coefficients results in concentrations that are consistent with experimental observation, and hence represents an approach to learn regulation 32,49.
The metabolic models shown in this work are reduced models that are appropriate when the details of the dynamics of the catalytic process are not needed.That is, the models represent dynamics that are coarse grained over the enzyme binding and release of substrates and products, and consequently can be solved quickly using non-linear least squares optimization of the metabolite concentrations to steady state values.In principle, maximum entropy simulation models can be developed analogously at the enzyme level by taking into account the thermodynamic work required to synthesize the enzymes and the benefit provided to the cell 50 .Maximum entropy methods have been developed to replicate experimental observations based on data 51 .
Relative errors in the free energies of reaction do not have a major impact on the maximum entropy solution. The predicted concentrations may vary within 1-3 orders of magnitude. For comparison, concentrations in an E. coli cell can vary by more than 7 orders of magnitude, from tens of millimolar to approximately nanomolar, the latter corresponding to roughly 1 molecule per cell.
However, the relative uncertainty in standard free energies can have an impact on steady state flux by up to 70%.Still, as models become more precise and accurate it will be important to reduce the uncertainties in standard free energies of reaction as much as possible.Sophisticated electronic structure calculations are perhaps the most tractable way to develop more accurate models 52,53 .
Likewise, variation in ionic strength does not have a strong effect on concentrations, but may impact steady state fluxes more significantly, since the fluxes are a function of the overall driving force on the pathway and the overall driving force increases with increasing ionic strength. Impacts on steady state flux were observed to be as high as 27%.
For products ($\gamma_{i,j} > 0$),

$$\frac{n_i(J)!}{(n_i(J)+\gamma_{i,j})!} = \frac{1}{(n_i+\nu_{i,j})\cdots(n_i+1)} \approx n_i^{-\gamma_{i,j}}, \tag{A3}$$

and for reactants ($\gamma_{i,j} < 0$),

$$\frac{n_i(J)!}{(n_i(J)+\gamma_{i,j})!} = n_i(n_i-1)\cdots(n_i+\gamma_{i,j}+1) \approx n_i^{-\gamma_{i,j}},$$

where the approximations are valid when $n_i \gg 0$. Eqn A2 can then be written in terms of the ratio $\Pr(J+\delta\xi_j)/\Pr(J)$.

FIG. 12. Estimated probability densities f(d_j) and histograms for the distance d_j = log K_j Q_j^−1 − λ from the most probable density for each of the 20 reactions used in the metabolic model that have reaction flux ξ_j ≠ 0. Each density is calculated from Eqn 66. The true maximum entropy solution occurs at d_j = 0, the Marcelin approximation to the maximum entropy solution is indicated by a red *, and the location of the peak density in each distribution occurs at the average value of d_j from the distribution of Eqn (66). See Section II F for details.

FIG. 4 .
FIG. 4. (A and B) Probability density function of the log ratio of the population rate constants k i relative to the respective rate constants k • i from the maximum entropy model for each reaction i. (A) Rate constants were obtained by varying concentrations from the maximum entropy solution by 10 ±0.5 -fold.(B) Rate constants were obtained by varying concentrations from the maximum entropy solution by 10 ±1.5 -fold.(C) Percentage of trials that resulted in a steady state solution as a function of the range over which concentrations were randomly varied.

FIG. 6 .
FIG. 6. (A) Free energy dissipation rate and (B) entropy production rate plotted against the steady state rate for each model in the ensemble generated by varying rate constants by +/- 2.0 orders of magnitude, in the manner described in the text, such that each model has a different steady state rate. The maximum entropy model (red *, at (680, 7.45)) is compared to the random models (black dots). The highest entropy production rate (red X) occurs at (748, 8.19).


FIG. 8 .
FIG. 8. (A) The relation between reaction affinity (Eqn.(75)) and steady state flux for the set of reactions from glucose through the TCA cycle and (B) corresponding probability distribution of the fluxes through upper glycolysis as a function of the uncertainty in the estimated standard free energies of reaction.

FIG. 9 .
FIG. 9. Individual steady state metabolite levels (left) and fluxes (right) calculated via optimization from 10 unique starting concentrations are shown in blue-green in separate columns.Differences resulting from variation in the standard free energy changes of the reactions, ∆G • , are shown for each metabolite.Error bars represent a single standard deviation.
FIG. 10.The relation between reaction affinity and flux (left) is shown for the overall set of reactions from glucose through the TCA cycle as a function of the ionic strength used to determine the standard free energy changes of the reactions, ∆G • .The corresponding distribution of fluxes (right) are shown for the same ionic strength values.