This work aims at the development of a mathematical and computational approach that enables quantification of the inherent sources of stochasticity, and of the corresponding sensitivities, in stochastic simulations of chemical reaction networks. The approach is based on reformulating the system dynamics as being generated by independent standardized Poisson processes. This reformulation affords a straightforward identification of individual realizations for the stochastic dynamics of each reaction channel, and consequently a quantitative characterization of the inherent sources of stochasticity in the system. By relying on the Sobol-Hoeffding decomposition, the reformulation enables us to perform an orthogonal decomposition of the solution variance. Thus, by judiciously exploiting the inherent stochasticity of the system, one is able to quantify the variance-based sensitivities associated with individual reaction channels, as well as the importance of channel interactions. Implementation of the algorithms is illustrated through simulations of simplified systems, including the birth-death, Schlögl, and Michaelis-Menten models.

It has long been recognized that stochastic simulators1–7 offer an attractive computational means for simulating the chemical master equation (CME). These algorithms describe the evolution of the number density of individual chemical species on the basis of propensity functions, which provide a probabilistic characterization of the rate of progress of elementary reactions. In its elementary formulation, the algorithm advances the system by sampling the time at which the next reaction occurs, the system remaining frozen in its current state in the meantime. After a reaction has occurred, new samples are drawn, and the procedure is iterated in order to determine successive elements of a Markov process associated with the evolution of the state vector of species number densities. By computing multiple realizations of such chains, the algorithm affords in a straightforward fashion the possibility to determine various statistical moments of the solution.

One of the difficulties associated with stochastic simulators is that it is not generally straightforward to assess the impact of individual reaction channels on the evolution of the solution, nor to quantify the effect of uncertainties on the variability of the solution. This is especially the case for stochastic systems that do not involve parametric uncertainty, which are the focus of the present study. The central question we aim to address is whether it is possible, without perturbing the reaction rates (propensity functions), to exploit the stochasticity inherent in stochastic simulations for the purpose of quantifying the impact of individual channels on the evolution of particular species of interest, and also potentially characterizing the role of channel interactions. As outlined in Sec. II, our approach to this question is based on reformulating classical Stochastic Simulation Algorithms (SSAs, see Refs. 1 and 2) using independent standardized Poisson processes, similar to the next reaction method proposed in Ref. 8. This reformulation affords a straightforward identification of independent random realizations for individual channel dynamics, and consequently enables us to exploit sampling strategies in order to quantify the impact of the stochasticity associated with elementary channels, or groups of channels. As discussed in Sec. III, the Sobol-Hoeffding (SH) decomposition is applied for this purpose. In particular, this enables us to adopt well-established Monte-Carlo sampling strategies to obtain estimates of the contributions of individual channels, and of channel interactions, to the variance of specific quantities of interest. Implementation of the resulting methodology is illustrated in Sec. V through applications to well-known systems, including the birth-death (BD), Schlögl, and Michaelis-Menten models. Major conclusions are summarized in Sec. VI.

The present work focuses on the analysis of the respective importance of inherent sources of variance in stochastic simulations of chemical reaction networks. Stochastic effects arise predominantly when the reacting system involves a small number of reactant molecules, owing to the discreteness of the molecular populations. Specifically, we consider a well-stirred chemical system involving Ms molecular species {S1, …, SMs} and Kr reaction channels {R1, …, RKr}. We denote $X(t) = (X_1(t), \ldots, X_{M_s}(t))^T \in \mathbb{N}_0^{M_s}$ the state of the system at time t, where Xi(t) is the instantaneous number of molecules of species Si. The dynamics of the system are prescribed by the definition of the propensity functions aj and state-change vectors νj ∈ ℤMs associated with the reaction channels Rj. The propensity functions,

are defined such that, for an infinitesimal time dt, aj(x) dt is the probability that one Rj reaction occurs in the time interval [t, t + dt) given that X(t) = x, whereas νj = (ν1,j, …, νMs,j) describes the change in the molecular population due to a single occurrence of reaction Rj.
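For concreteness, the propensity/state-change description can be encoded directly. The following Python sketch uses a hypothetical two-species network with made-up rate constants c1 and c2 (these reactions and values are illustrative assumptions, not taken from the models considered below):

```python
# Hypothetical network (illustration only):
#   R1: S1 -> S2   (unimolecular,  rate constant c1)
#   R2: S2 -> 0    (degradation,   rate constant c2)
c1, c2 = 0.5, 0.1

# a_j(x) dt = probability that one R_j reaction fires in [t, t + dt)
propensities = [
    lambda x: c1 * x[0],   # a_1(x) = c1 * X1
    lambda x: c2 * x[1],   # a_2(x) = c2 * X2
]
# nu_j = change in the populations caused by one firing of R_j
state_changes = [
    (-1, +1),   # R1 consumes one S1 and produces one S2
    (0, -1),    # R2 removes one S2
]

x = (100, 10)                              # current state (X1, X2)
print([a(x) for a in propensities])        # propensity of each channel
```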

The system then obeys the CME,9

$$\frac{\partial P(x, t \mid x_0, t_0)}{\partial t} = \sum_{j=1}^{K_r} \left[ a_j(x - \nu_j)\, P(x - \nu_j, t \mid x_0, t_0) - a_j(x)\, P(x, t \mid x_0, t_0) \right], \qquad (1)$$

where P(x, t|x0, t0) is the probability that X(t) = x given that X(t0) = x0.

Solving (1) is impractical even for systems with small Ms and Kr. The Gillespie SSA exactly emulates the chemical master equation by generating samples of X(t) with probability law P(x, t|x0, t0). In Sec. II A we briefly recall the original SSA. In Sec. II B we introduce a slightly modified version of the SSA, which is suitable for the analysis of the variance.

Gillespie’s SSA1,2 consists in the construction of a Markov chain X(t) having for distribution P(x, t|x0, t0). The first idea of the SSA is that, given X(t) = x, the probability that the next reaction (irrespective of the channel) occurs in the infinitesimal time interval [t, t + dt) is

$$a_0(x)\,dt, \qquad a_0(x) \doteq \sum_{j=1}^{K_r} a_j(x).$$

In other words, the time elapsed to the next reaction, τ, follows an exponential distribution with mean 1/a0(x). Second, the relative probability that reaction channel Rj fires first is pj(x) = aj(x)/a0(x). Gillespie's SSA (Algorithm I) proceeds along these two stages to advance the state of the system: first, drawing at random a time increment to the next reaction event, and second, selecting at random the firing reaction channel to update the state. In more detail, given the current state X of the system, the SSA first evaluates the Kr propensity functions aj and their sum a0. Drawing r1 uniformly in (0, 1), the algorithm sets τ = −log(r1)/a0, and selects the index of the firing channel as the largest l such that $\sum_{j<l} a_j < a_0 r_2$, where r2 is another random number, independent of r1, drawn uniformly from (0, 1). Time and system state are then advanced, t ← t + τ and X ← X + νl, and the procedure is repeated until the final time is reached.

It is thus seen that the SSA advances in time using a consolidated next-reaction time distribution and relative reaction probabilities, so the method inherently depends on the whole state of the system. In other words, the algorithm fully couples the reaction dynamics. It requires drawing two random numbers from the unit interval for every elementary event (occurrence of a reaction). In practice, a pseudo-random number generator RG is used to simulate the sequence of random numbers r1 and r2. Classically, the pseudo-random number generator RG provides the exact same sequence of pseudo-random couples (r1, r2) when properly seeded (initialized). As further discussed below, this property of pseudo-random generators is attractive for our purpose, because it enables us to generate the same sequence of pseudo-random events multiple times.
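The reproducibility property can be checked directly. The sketch below uses Python's standard random module as a stand-in for RG (an illustrative choice of generator, not a requirement of the method):

```python
import random

# A seeded pseudo-random generator replays the exact same sequence of
# (r1, r2) couples, so the same sequence of reaction events can be
# regenerated at will.
def draw_couples(seed, n):
    rg = random.Random(seed)                     # seeded generator "RG"
    return [(rg.random(), rg.random()) for _ in range(n)]

run_a = draw_couples(seed=42, n=5)
run_b = draw_couples(seed=42, n=5)   # same seed -> identical sequence
run_c = draw_couples(seed=7,  n=5)   # different seed -> different sequence
print(run_a == run_b, run_a == run_c)   # -> True False
```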

However, the SSA has several drawbacks that make it unsuitable for the separation of the variances caused by the different channels. The central drawback is that the SSA combines all channels to determine the next reaction time, whereas the analysis of the variability inherent to specific channels, as proposed in Sec. III, requires us to distinguish and control the dynamics of individual channels. An alternative way to understand the issue is that, for a fixed sequence of pseudo-random numbers, the resulting trajectory of X(t) is not invariant under a permutation of the reaction indices. This drawback motivates the use of the Next Reaction Algorithm (NRA) outlined below.

ALGORITHM I.

Stochastic simulation algorithm. Procedure SSA(X0, T, {νj}, {aj}, RG).

Require: Initial condition X0, final time T, state-change vectors {νj}, propensity functions {aj}, and seeded pseudo-random number generator RG
1: t ← 0, X ← X0
2: loop
3: Evaluate aj(X), j = 1, …, Kr
4: Set a0 = ∑j=1…Kr aj(X)
5: Get r1 and r2 from RG
6: Set τ = −ln(r1)/a0  ⊳ time to next reaction
7: if t + τ > T then
8: break  ⊳ final time reached
9: else
10: Set largest l s.t. ∑j<l aj < a0 r2  ⊳ pick the reaction to fire
11: X ← X + νl  ⊳ update the state vector
12: t ← t + τ  ⊳ advance time
13: end if
14: end loop
15: Return X  ⊳ State X(T)
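For concreteness, Algorithm I can be transcribed as follows. This is an illustrative Python sketch (the paper's results do not depend on this particular implementation), exercised on the birth-death model of Sec. V with b = 200 and d = 1:

```python
import math
import random

def ssa(x0, T, nu, a, rg):
    """Gillespie direct method (Algorithm I); illustrative sketch.

    x0: initial populations, T: final time, nu: state-change vectors,
    a: propensity functions, rg: seeded random.Random standing in for RG.
    """
    t, x = 0.0, list(x0)
    while True:
        aj = [f(x) for f in a]               # propensities at current state
        a0 = sum(aj)
        if a0 == 0.0:                        # no channel can fire any more
            return x
        r1, r2 = rg.random(), rg.random()    # two draws per event
        tau = -math.log(r1) / a0             # time to next reaction
        if t + tau > T:
            return x                         # final time reached
        # pick the firing channel: smallest l with cumulative sum >= a0*r2
        s, l = 0.0, 0
        while s + aj[l] < a0 * r2:
            s += aj[l]
            l += 1
        x = [xi + dxi for xi, dxi in zip(x, nu[l])]
        t += tau

# Birth-death example (b = 200, d = 1, as in Sec. V)
b, d = 200.0, 1.0
nu = [(+1,), (-1,)]
a = [lambda x: b, lambda x: d * x[0]]
print(ssa((0,), 8.0, nu, a, random.Random(0)))   # one sample of X(8)
```

Averaging many such samples recovers the moments of the CME solution; for this model the mean of X(8) is close to b/d = 200.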

Following the next reaction method proposed in Ref. 8, the dynamics of the stochastic state vector X(t) can be expressed through the following random time-change:10

$$X(t) = X(t_0) + \sum_{j=1}^{K_r} \nu_j\, N_j(t_j), \qquad (2)$$

where the Nj(τ) are independent Poisson processes with unit rate, and the scaled times tj are given by

$$t_j = \int_{t_0}^{t} a_j(X(s))\, ds. \qquad (3)$$

Thus, the system state can be expressed as a linear combination of Kr standard Poisson processes, with individual time scalings given by the time-integrals of the corresponding propensity functions from t0 to t. Expression (2), together with (3), results from the fact that Poisson processes have independent increments distributed according to

$$P\big(N_j(\tau + d\tau) - N_j(\tau) = k\big) = e^{-d\tau}\, \frac{(d\tau)^k}{k!}, \qquad k = 0, 1, \ldots$$

Consequently, the probability of Rj firing once in the infinitesimal time interval [t, t + dt) is

$$P\big(N_j(t_j + a_j(x)\,dt) - N_j(t_j) = 1\big) = a_j(x)\,dt + o(dt), \qquad (4)$$

which is precisely the definition of the propensity function aj.

The representation of X(t) through (2) and (3) suggests an alternative version of the original SSA, hereafter referred to as the NRA, which is a slightly modified version of the next reaction method proposed in Ref. 8. Algorithm II introduces two additional vectors, (τ1, …, τKr) and (τ1+, …, τKr+), to store, respectively, the unscaled times of the Poisson processes and their next reaction times. These two vectors are used to keep track of the trajectories of the Kr independent Poisson processes. Given the current state X of the system, the unscaled times (τ1, …, τKr) and the next reaction times (τ1+, …, τKr+), one can deduce which channel will fire first from the scaled-time advancement rules (3). Specifically, if no other channel fires before it does, the jth channel will react after a time lapse dtj = (τj+ − τj)/aj. Letting l = arg minj dtj be the first channel to fire, the procedure selects at once both the firing channel and the time lapse (dtl) to the next reaction, unlike the SSA. Having found l, time is advanced (t ← t + dtl), the system state is updated (X ← X + νl), the unscaled times are updated (τj ← τj + aj dtl), and the next unscaled reaction time τl+ is drawn at random (τl+ ← τl+ − log(rl), where rl is drawn uniformly from (0, 1)). The procedure is then repeated until t reaches the final time. The NRA (Algorithm II) involves drawing a single random variable per time step (whereas the SSA draws two), but requires the initialization, computation, and storage of the unscaled times and next jump times associated with the Kr Poisson processes of the reaction channels. However, both algorithms result in states X that are equal in distribution, that is,

$$X^{\mathrm{SSA}}(t) \overset{d}{=} X^{\mathrm{NRA}}(t).$$
Our interest in the NRA lies in the possibility of keeping constant the realizations of selected standardized Poisson processes Nj while varying the others. This amounts to keeping invariant the sequences {τj+} of unscaled jump times. This possibility is enabled by the introduction of Kr individual pseudo-random number generators, one for each reaction channel, when in fact a single random number generator would have been sufficient in the NRA (Algorithm II). The use of Kr independent pseudo-random number generators distinguishes the NRA (Algorithm II) from the next reaction method in Ref. 8. As with the SSA, the storage of these Kr independent sequences is not required, as they are generated on the fly using the pseudo-random number generators RGj. In this way, the realizations of the Poisson processes are fully determined by imposing the seeds of the Kr pseudo-random number generators. As a result, we have a clear understanding of the inherent source of stochasticity that is kept invariant when using the same set of seeds, namely, the underlying Poisson processes, and a fine control over this source of stochasticity. This control over the inherent stochasticity enables a decomposition of the variance of the state vector, and the analysis of the variability induced by distinct channels, as discussed in Sec. IV.

ALGORITHM II.

Next reaction algorithm. Procedure NRA(X0, T, {νj}, {aj}, RG1, …, RGKr).

Require: Initial condition X0, final time T, state-change vectors {νj}, propensity functions {aj}, and seeded pseudo-random number generators RGj, j = 1, …, Kr
1: for j = 1, …, Kr do
2: Draw rj from RGj
3: τj ← 0, τj+ ← −log rj  ⊳ set next reaction times
4: end for
5: t ← 0, X ← X0
6: loop
7: for j = 1, …, Kr do
8: Evaluate aj(X) and dtj = (τj+ − τj)/aj
9: end for
10: Set l = arg minj dtj  ⊳ pick next reaction
11: if t + dtl > T then
12: break  ⊳ final time reached
13: else
14: t ← t + dtl  ⊳ update time
15: X ← X + νl  ⊳ update the state vector
16: for j = 1, …, Kr do
17: τj ← τj + aj dtl  ⊳ update unscaled times
18: end for
19: Get rl from RGl
20: τl+ ← τl+ − log rl  ⊳ next reaction time
21: end if
22: end loop
23: Return X  ⊳ State X(T)
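Algorithm II admits a similarly compact transcription. The Python sketch below (an illustrative implementation, again exercised on the birth-death model) uses one seeded generator per channel, as required for the seed control discussed in Sec. IV:

```python
import math
import random

def nra(x0, T, nu, a, rgs):
    """Next reaction algorithm (Algorithm II); illustrative sketch.

    rgs: one seeded random.Random per channel, so that each channel's
    unit-rate Poisson process is generated (and controlled) independently.
    """
    Kr = len(a)
    tau = [0.0] * Kr                                 # unscaled times tau_j
    taup = [-math.log(rg.random()) for rg in rgs]    # next jump times tau_j+
    t, x = 0.0, list(x0)
    while True:
        aj = [f(x) for f in a]
        # time lapse until channel j would fire, if no other fires first
        dt = [(taup[j] - tau[j]) / aj[j] if aj[j] > 0.0 else math.inf
              for j in range(Kr)]
        l = min(range(Kr), key=dt.__getitem__)       # first channel to fire
        if t + dt[l] > T:
            return x                                 # final time reached
        t += dt[l]
        x = [xi + dxi for xi, dxi in zip(x, nu[l])]
        for j in range(Kr):
            tau[j] += aj[j] * dt[l]                  # advance unscaled times
        taup[l] += -math.log(rgs[l].random())        # next jump of N_l only

# Birth-death example with one generator per channel (b = 200, d = 1)
b, d = 200.0, 1.0
nu = [(+1,), (-1,)]
a = [lambda x: b, lambda x: d * x[0]]
print(nra((0,), 8.0, nu, a, [random.Random(1), random.Random(2)]))
```

Called twice with the same list of seeds, the procedure returns the exact same trajectory; this is the property exploited in Sec. IV.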

In parametric sensitivity analysis, one is interested in characterizing the variability of the deterministic output (or quantity of interest) of a model when some parameters are changed. Local sensitivity analyses characterize the dependence of the deterministic output about a particular value of the parameters, usually by computing the first derivatives of the output with respect to the parameters. Contrary to local methods, global sensitivity analyses aim to elucidate the global contribution of a particular input to the total output variability, assuming a certain distribution of the parameters. In the ANOVA (analysis of variance) or Sobol variance decomposition, the sensitivity indices are associated with the input parameters via a decomposition of the output variance.11,12 When the input parameters vary independently, Sobol's decomposition is orthogonal and the definition of the sensitivity indices is immediate from the so-called partial variances (see below). The case of dependent inputs requires an appropriate definition and interpretation of the sensitivity indices (see, for instance, Refs. 13 and 14). Sobol's decomposition is also related to High Dimensional Model Representation (HDMR), which corresponds to a truncated SH decomposition; such decompositions have in particular been used for the construction of surrogate models (see, for instance, Refs. 13 and 15–17). In the context of reaction networks presenting inherently stochastic dynamics, analyses have been restricted to averaged functionals of the stochastic model solution: they characterize the sensitivity of such averages with respect to parameters, for instance the coefficients in the propensity functions defining the stochastic network.18–22

Parametric sensitivity analyses have to be contrasted with the use of the Sobol decomposition we are proposing in the present work. Here, we assume no variability in any model parameter. Instead, we seek to quantify the respective contributions of different reaction channels to the variance of a given functional of the stochastic model solution.

In this section, we summarize relevant aspects of the Sobol-Hoeffding decomposition, provide a brief outline of how these concepts can be applied to define useful sensitivity indices, and detail a Monte-Carlo sampling procedure for their estimation.

Consider a vector N = (N1, …, ND) of D independent random quantities defined on an abstract probability space $\mathcal{P} = (\Omega, \Sigma, \mu)$. Let F : N ↦ F(N) ∈ ℝ be a second-order random functional in N, that is,

$$\mathbb{E}\big[F(N)^2\big] < +\infty. \qquad (5)$$

Let $\mathcal{D}$ be the power set of {1, …, D}. For $\mathfrak{u} \in \mathcal{D}$, denote |𝔲| = Card(𝔲) and $\bar{\mathfrak{u}} = \{1, \ldots, D\} \setminus \mathfrak{u}$ its complement, such that 𝔲 ∪ 𝔲̄ = {1, …, D} and 𝔲 ∩ 𝔲̄ = ∅. Given $\mathfrak{u} \in \mathcal{D}$ we denote N𝔲 the sub-vector of N with components (N𝔲1, …, N𝔲|𝔲|), so that, up to a reordering, N = (N𝔲, N𝔲̄).

For the assumptions considered, the function F(N) has a unique orthogonal decomposition of the form23

$$F(N) = \sum_{\mathfrak{u} \in \mathcal{D}} F_{\mathfrak{u}}(N_{\mathfrak{u}}). \qquad (6)$$

Equation (6) is called the SH decomposition of F. For instance, in the case D = 3 the decomposition is expressed as

$$F(N) = F_{\emptyset} + F_{\{1\}}(N_1) + F_{\{2\}}(N_2) + F_{\{3\}}(N_3) + F_{\{1,2\}}(N_1, N_2) + F_{\{1,3\}}(N_1, N_3) + F_{\{2,3\}}(N_2, N_3) + F_{\{1,2,3\}}(N_1, N_2, N_3).$$

The orthogonality condition for the functions implies that, ∀𝔲 ≠ 𝔰,

$$\mathbb{E}\big[F_{\mathfrak{u}}(N_{\mathfrak{u}})\, F_{\mathfrak{s}}(N_{\mathfrak{s}})\big] = 0,$$

and the SH functions F𝔲 are recursively defined according to11

$$F_{\mathfrak{u}}(N_{\mathfrak{u}}) = \mathbb{E}\big[F \mid N_{\mathfrak{u}}\big] - \sum_{\mathfrak{v} \subsetneq \mathfrak{u}} F_{\mathfrak{v}}(N_{\mathfrak{v}}), \qquad (7)$$

where $\mathbb{E}[F \mid N_{\mathfrak{u}} = n_{\mathfrak{u}}]$ is the conditional expectation of F(N) given N𝔲 = n𝔲, namely,

$$\mathbb{E}\big[F \mid N_{\mathfrak{u}} = n_{\mathfrak{u}}\big] = \int F(n_{\mathfrak{u}}, n_{\bar{\mathfrak{u}}})\, d\mu(n_{\bar{\mathfrak{u}}}).$$

For instance, in the previous example with D = 3, we have

$$F_{\emptyset} = \mathbb{E}[F], \quad F_{\{1\}}(N_1) = \mathbb{E}[F \mid N_1] - F_{\emptyset}, \quad F_{\{1,2\}}(N_1, N_2) = \mathbb{E}[F \mid N_1, N_2] - F_{\{1\}} - F_{\{2\}} - F_{\emptyset},$$

and similarly for the remaining SH functions.

Clearly, $F_{\emptyset} = \mathbb{E}[F]$, and the orthogonality of the decomposition implies $\mathbb{E}[F_{\mathfrak{u}}] = 0$ for 𝔲 ≠ ∅. Decomposition (6) being orthogonal, the variance $\mathbb{V}[F]$ decomposes into the sum of partial variances corresponding to the variances of the SH functions,

$$\mathbb{V}[F] = \sum_{\mathfrak{u} \in \mathcal{D},\ \mathfrak{u} \neq \emptyset} V[F_{\mathfrak{u}}], \qquad V[F_{\mathfrak{u}}] \doteq \mathbb{V}\big[F_{\mathfrak{u}}(N_{\mathfrak{u}})\big]. \qquad (8)$$

Recalling that $\mathbb{E}[F_{\mathfrak{u}}] = 0$, and using definition (7), the partial variances $V[F_{\mathfrak{u}}]$ can be expressed as

$$V[F_{\mathfrak{u}}] = \mathbb{V}\big[\mathbb{E}[F \mid N_{\mathfrak{u}}]\big] - \sum_{\mathfrak{v} \subsetneq \mathfrak{u}} V[F_{\mathfrak{v}}], \qquad (9)$$

where we have used $\mathbb{E}\big[\mathbb{E}[F \mid N_{\mathfrak{u}}]\big] = \mathbb{E}[F] = F_{\emptyset}$ and the orthogonality of the SH functions.

The partial variance $V[F_{\mathfrak{u}}]$ measures the contribution to $\mathbb{V}[F]$ of the interactions between the variables N𝔲. Since there are $2^D$ such partial variances, the sensitivity analysis is usually reduced to a simpler characterization, based on first and total-order sensitivity indices associated with individual variables Ni = N{i} or groups of variables N𝔲.
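The decomposition (8) can be verified by brute force on a small discrete example. In the sketch below, the choice of F and of the input distribution (two independent inputs, each uniform on {0, 1}) is ours, for illustration only:

```python
from itertools import product

# Brute-force check of the SH decomposition for a toy function of
# D = 2 independent inputs, each uniform on {0, 1}.
def F(n1, n2):
    return n1 + 2 * n2 + n1 * n2   # illustrative choice, not from the paper

pts = list(product((0, 1), repeat=2))
E = sum(F(*p) for p in pts) / len(pts)                 # F_0 = E[F]

def cond_mean(u, nu):
    # E[F | N_u = nu]: average over the complementary inputs
    sel = [p for p in pts if all(p[k] == v for k, v in zip(u, nu))]
    return sum(F(*p) for p in sel) / len(sel)

# SH functions via the recursion (7), then the partial variances
F1  = {n: cond_mean((0,), (n,)) - E for n in (0, 1)}
F2  = {n: cond_mean((1,), (n,)) - E for n in (0, 1)}
F12 = {p: F(*p) - F1[p[0]] - F2[p[1]] - E for p in pts}

V   = sum((F(*p) - E) ** 2 for p in pts) / len(pts)
V1  = sum(v * v for v in F1.values()) / 2
V2  = sum(v * v for v in F2.values()) / 2
V12 = sum(v * v for v in F12.values()) / 4
print(V, V1 + V2 + V12)   # the partial variances sum to V[F], as in (8)
```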

The first-order sensitivity index S𝔲, defined as12

$$S_{\mathfrak{u}} \doteq \frac{\mathbb{V}\big[\mathbb{E}[F \mid N_{\mathfrak{u}}]\big]}{\mathbb{V}[F]} = \sum_{\mathfrak{v} \subseteq \mathfrak{u},\ \mathfrak{v} \neq \emptyset} \frac{V[F_{\mathfrak{v}}]}{\mathbb{V}[F]}, \qquad (10)$$

represents the fraction of the variance V[F] due to the variables N𝔲 only, i.e., without any interaction with the variables in N𝔲̄. For instance,

$$S_{\{1,2\}} = \frac{V[F_{\{1\}}] + V[F_{\{2\}}] + V[F_{\{1,2\}}]}{\mathbb{V}[F]}.$$

The total-order sensitivity index T𝔲, defined as12

$$T_{\mathfrak{u}} \doteq \sum_{\mathfrak{s} \cap \mathfrak{u} \neq \emptyset} \frac{V[F_{\mathfrak{s}}]}{\mathbb{V}[F]} = 1 - \frac{\mathbb{V}\big[\mathbb{E}[F \mid N_{\bar{\mathfrak{u}}}]\big]}{\mathbb{V}[F]}, \qquad (11)$$

accounts in addition for the interactions between the variables N𝔲 and N𝔲̄. For our example with D = 3, we have

$$T_{\{1\}} = \frac{V[F_{\{1\}}] + V[F_{\{1,2\}}] + V[F_{\{1,3\}}] + V[F_{\{1,2,3\}}]}{\mathbb{V}[F]}.$$

In fact, it is immediate to show from the definition of the total-order sensitivity index that S𝔲 ≤ T𝔲 ≤ 1. In addition, the following inequalities hold for the case of sensitivity indices associated with single variables:

$$\sum_{i=1}^{D} S_{\{i\}} \le 1, \qquad \sum_{i=1}^{D} T_{\{i\}} \ge 1,$$

while

$$\sum_{i=1}^{D} S_{\{i\}} = \sum_{i=1}^{D} T_{\{i\}} = 1$$

when F has no interaction terms, that is, when $F(N) = F_{\emptyset} + \sum_{i=1}^{D} F_{\{i\}}(N_i)$. In the latter case, F is said to be additive since there is no interaction between the Ni's.

From (10) and (11) it is seen that the computation of the first and total order sensitivity indices essentially amounts to the determination of the variance of conditional expectations. For this purpose, we shall rely on the Monte Carlo method proposed in Ref. 24.

Consider two independent random sample sets $\mathcal{N}^{I}$ and $\mathcal{N}^{II}$ of M realizations of N. The conditional variance $\mathbb{V}\big[\mathbb{E}[F \mid N_{\mathfrak{u}}]\big]$ can be estimated using the following average:24

$$\mathbb{V}\big[\mathbb{E}[F \mid N_{\mathfrak{u}}]\big] + \mathbb{E}[F]^2 \approx \frac{1}{M-1} \sum_{i=1}^{M} F\big(N^{I,(i)}\big)\, F\big(N_{\mathfrak{u}}^{I,(i)}, N_{\bar{\mathfrak{u}}}^{II,(i)}\big). \qquad (12)$$

Here, we have denoted NI,(i) and NII,(i) the ith elements of $\mathcal{N}^{I}$ and $\mathcal{N}^{II}$, respectively. Introducing the classical ($\mathcal{N}^{I}$-)sample estimator for the mean,

$$\hat{\mu} = \frac{1}{M} \sum_{i=1}^{M} F\big(N^{I,(i)}\big),$$

and variance of F,

$$\hat{\sigma}^2 = \frac{1}{M-1} \sum_{i=1}^{M} F\big(N^{I,(i)}\big)^2 - \hat{\mu}^2,$$

the Monte Carlo estimator of the first-order sensitivity index is

$$\hat{S}_{\mathfrak{u}} = \frac{1}{\hat{\sigma}^2} \left[ \frac{1}{M-1} \sum_{i=1}^{M} F\big(N^{I,(i)}\big)\, F\big(N_{\mathfrak{u}}^{I,(i)}, N_{\bar{\mathfrak{u}}}^{II,(i)}\big) - \hat{\mu}^2 \right]. \qquad (13)$$

A similar expression can be derived for the estimation of the total-order sensitivity index T𝔲 in (11),

$$\hat{T}_{\mathfrak{u}} = 1 - \frac{1}{\hat{\sigma}^2} \left[ \frac{1}{M-1} \sum_{i=1}^{M} F\big(N^{I,(i)}\big)\, F\big(N_{\bar{\mathfrak{u}}}^{I,(i)}, N_{\mathfrak{u}}^{II,(i)}\big) - \hat{\mu}^2 \right]. \qquad (14)$$

It is seen that the MC evaluation of the sensitivity indices requires M evaluations of F (for the elements of $\mathcal{N}^{I}$), and M new function evaluations for each S𝔲 or T𝔲. In particular, the MC estimation of the first and total-order sensitivity indices S{i} and T{i} for the D input variables requires a total of (2D + 1)M model evaluations.
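The estimators (13) and (14) can be exercised on an analytic test function with known indices. The function below is our own illustrative choice (not one from the paper), for which the indices follow from a direct calculation:

```python
import random

# Pick-freeze Monte Carlo estimators (13)-(14) on the test function
#   F(N1, N2) = N1 + 2*N2 + N1*N2,  N1, N2 i.i.d. standard normal,
# for which V[F] = 6, S1 = 1/6, S2 = 2/3, T1 = 1/3, T2 = 5/6.
def F(n1, n2):
    return n1 + 2.0 * n2 + n1 * n2

M = 100_000
rg = random.Random(0)
NI  = [(rg.gauss(0, 1), rg.gauss(0, 1)) for _ in range(M)]   # sample set I
NII = [(rg.gauss(0, 1), rg.gauss(0, 1)) for _ in range(M)]   # sample set II

fI = [F(*n) for n in NI]
mu = sum(fI) / M
var = sum(f * f for f in fI) / (M - 1) - mu * mu

def first_order(u):
    # freeze component u from set I, resample the other from set II
    acc = sum(fI[i] * F(*(NI[i][k] if k == u else NII[i][k] for k in (0, 1)))
              for i in range(M))
    return (acc / (M - 1) - mu * mu) / var

def total_order(u):
    # freeze the complement from set I, resample component u from set II
    acc = sum(fI[i] * F(*(NII[i][k] if k == u else NI[i][k] for k in (0, 1)))
              for i in range(M))
    return 1.0 - (acc / (M - 1) - mu * mu) / var

S_hat = [first_order(u) for u in (0, 1)]
T_hat = [total_order(u) for u in (0, 1)]
print(S_hat, T_hat)   # analytic values: S = [1/6, 2/3], T = [1/3, 5/6]
```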

Our objective is now to propose a decomposition of the variance for a functional g(X(t)), where X(t) is the stochastic output of a stochastic simulator. In all the examples of this section, the functional g is simply a component of X(t) at some time t = T, but the proposed methodology can be readily extended to more complex functionals including path integrals and exit times, provided that g(X(t)) is a second-order random variable.

To decompose $\mathbb{V}[g(X(t))]$ into contributions arising from the Kr different reaction channels and their interactions, and accordingly to define the first and total-order sensitivity indices associated with the variability of an individual reaction channel (or group of reaction channels), an appropriate definition of the conditional expectations is required. Specifically, we need to provide a clear meaning to the expectation of g(X(t)) conditioned on a realization of a particular reaction channel (or group of channels).

The central idea proposed in the present work is to identify a realization of the reaction channels with a realization of the underlying vector of independent standard Poisson processes,

$$N(\omega) = \big(N_1(\omega), \ldots, N_{K_r}(\omega)\big).$$

To this end, let us denote X(t, N(ω)) the solution of the Poisson process formulation of the stochastic dynamics given by (2), that is,

$$X(t, N(\omega)) = X(t_0) + \sum_{j=1}^{K_r} \nu_j\, N_j(t_j, \omega),$$

where

$$t_j = \int_{t_0}^{t} a_j\big(X(s, N(\omega))\big)\, ds.$$
Then, defining F(N(ω)) ≐ g(X(t, N(ω))), with X(t, N(ω)) given by (2), and assuming F has finite second-order moment, the variance decomposition with respect to the Poisson processes in N can proceed as discussed previously in Sec. III, because the Kr channels are associated with independent standard Poisson processes Nk(ω). To this end, let $\mathcal{K}$ be the power set of {1, …, Kr}; for $\mathfrak{u} \in \mathcal{K}$, the conditional expectation of g(X(t, N)) given N𝔲 = n𝔲 becomes

$$\mathbb{E}\big[g(X(t, N)) \mid N_{\mathfrak{u}} = n_{\mathfrak{u}}\big] = \int g\big(X(t, (n_{\mathfrak{u}}, n_{\bar{\mathfrak{u}}}))\big)\, d\mu(n_{\bar{\mathfrak{u}}}). \qquad (15)$$

Observe that given n𝔲, the physical times and the sequence of firing reaction channels remain in general stochastic, that is, dependent on N𝔲̄, even for the channels Rj with index j ∈ 𝔲, owing to the definition of the scaled time tj: the conditioning fixes the sequence of firing events only in the unscaled time. However, $\mathbb{V}\big[\mathbb{E}[g(X(t, N)) \mid N_{\mathfrak{u}}]\big]$ does measure the variability in g(X(t)) induced by the underlying stochastic processes Nj(ω) with j ∈ 𝔲, and so characterizes the variance caused by the inherent stochasticity in the channels Rj, j ∈ 𝔲. As a result, the sensitivity indices defined above can be used to quantify and characterize the impact of individual reaction channels, or of groups of channels, on the variability of g(X(t)). Even though these indices do not characterize a sensitivity with respect to changes in the definition of the propensity functions associated with the channels (as in parametric studies), but rather a contribution to the inherent stochasticity of the system, we shall continue to refer to them as sensitivity indices.

We complete the section by discussing the practical implementation of the Monte Carlo procedure for the estimation of the sensitivity indices of g(X(t, N)), and drop the time dependence of X for simplicity.

In addition to the classical mean and variance estimations for g(X(N)), the Monte Carlo estimation of the sensitivity indices involves correlations of the form (see (13) and (14))

$$\frac{1}{M-1} \sum_{i=1}^{M} g\big(X(N^{I,(i)})\big)\, g\big(X(N_{\mathfrak{u}}^{I,(i)}, N_{\bar{\mathfrak{u}}}^{II,(i)})\big), \qquad (16)$$

where NI,(i) and NII,(i) are elements of two independent sample sets of N(ω).

Direct sampling of the vector of Kr independent standard Poisson processes is generally not practical, as it would require the storage of a prohibitively large number of firing times for each channel. Instead, it is more convenient to construct the Poisson processes "on the fly," as shown in Algorithm II, where the next unscaled firing time τj+ of channel j is determined only when the channel actually fires (that is, when the unscaled time τj reaches τj+).

However, for the correct MC estimation of the correlation in (16), it is crucial to ensure that the sequence of (unscaled) firing times τj+ remains the same for the channels j ∈ 𝔲 when computing $X(N_{\mathfrak{u}}^{I,(i)}, N_{\bar{\mathfrak{u}}}^{I,(i)})$ and $X(N_{\mathfrak{u}}^{I,(i)}, N_{\bar{\mathfrak{u}}}^{II,(i)})$. In other words, the same sequence of time increments between the successive firing events of each channel j ∈ 𝔲 must be repeated; because the time increment between two successive firing events of channel j is simulated by means of the pseudo-random number generator RGj, as shown in line 20 of Algorithm II, this implies that the generator RGj must reproduce the same sequence of pseudo-random numbers rj when computing $g\big(X(N_{\mathfrak{u}}^{I,(i)}, N_{\bar{\mathfrak{u}}}^{I,(i)})\big)$ and $g\big(X(N_{\mathfrak{u}}^{I,(i)}, N_{\bar{\mathfrak{u}}}^{II,(i)})\big)$, whenever j ∈ 𝔲. On the contrary, for a channel index j ∉ 𝔲, two independent sequences of random numbers rj have to be used. The control of the sequence of pseudo-random numbers delivered by RGj can be classically enforced through the "seed" of the generator. In this context, the Poisson process Nj(τ) is totally determined by RGj and its seed, sj. For such generators, consistent solutions,

$$X\big(N_{\mathfrak{u}}^{I,(i)}, N_{\bar{\mathfrak{u}}}^{I,(i)}\big) \quad \text{and} \quad X\big(N_{\mathfrak{u}}^{I,(i)}, N_{\bar{\mathfrak{u}}}^{II,(i)}\big),$$

are obtained by calling Algorithm II twice, with the same initialization (seeding) RGj(sj) if j ∈ 𝔲, and two random and independent seedings $RG_j(s_j^{I})$ and $RG_j(s_j^{II})$ for j ∉ 𝔲.
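The seed-control mechanism can be demonstrated in isolation: the unscaled jump times of a unit-rate Poisson process are reproduced exactly when the generator is re-seeded identically, and resampled otherwise. A minimal sketch:

```python
import math
import random

# The unit-rate Poisson process of a channel is entirely determined by
# its generator RG_j and seed s_j, since the unscaled inter-jump times
# are -log(r_j) with r_j drawn from RG_j.
def jump_times(seed, n):
    rg = random.Random(seed)
    t, out = 0.0, []
    for _ in range(n):
        t += -math.log(rg.random())   # exponential(1) increment
        out.append(t)
    return out

s_I, s_II = 123, 456
# channel in u: reuse the seed s_I -> identical Poisson realization
frozen_a = jump_times(s_I, 10)
frozen_b = jump_times(s_I, 10)
# channel not in u: switch the seed -> independent realization
resampled = jump_times(s_II, 10)
print(frozen_a == frozen_b, frozen_a == resampled)   # -> True False
```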

The procedure for the Monte Carlo estimation of the complete set of Kr first and total-order sensitivity indices S{j} and T{j} is schematically illustrated in Algorithm III, for the case g(X) = g(X(t = T)). Note that Algorithm III is not optimized; in particular, the computationally intensive part of the algorithm, namely, the (2Kr + 1) calls to NRA per sample, can be carried out in parallel. Nonetheless, the present formulation provides a general framework that can be easily adapted to the estimation of generic sensitivity indices associated with $\mathfrak{u} \in \mathcal{K}$, and to other functionals g(X).

ALGORITHM III.

Computation of the first and total-order sensitivity indices S{j} and T{j} of g(X(T)). Procedure Compute_SI(M, X0, T, {νj}, {aj}, g).

Require: Sample set dimension M, initial condition X0, final time T, state-change vectors {νj}, propensity functions {aj}, and functional g
1: μ ← 0, σ² ← 0  ⊳ init. mean and variance
2: for j = 1 to Kr do
3: S(j) ← 0, T(j) ← 0  ⊳ init. first and total-order SIs
4: end for
5: for m = 1 to M do
6: Draw two independent sets of seeds sI and sII
7: X ← NRA(X0, T, {νj}, {aj}, RG1(s1I), …, RGKr(sKrI))
8: μ ← μ + g(X), σ² ← σ² + g(X)²  ⊳ acc. mean and variance
9: for j = 1 to Kr do
10: XS ← NRA(X0, T, {νj}, {aj}, RG1(s1II), …,
11:   …, RGj(sjI), …, RGKr(sKrII))
12: XT ← NRA(X0, T, {νj}, {aj}, RG1(s1I), …,
13:   …, RGj(sjII), …, RGKr(sKrI))
14: S(j) ← S(j) + g(X) × g(XS)  ⊳ acc. first order
15: T(j) ← T(j) + g(X) × g(XT)  ⊳ acc. total order
16: end for  ⊳ next channel
17: end for  ⊳ next sample
18: μ ← μ/M, σ² ← σ²/(M − 1) − μ²
19: for j = 1 to Kr do
20: S(j) ← (S(j)/(M − 1) − μ²)/σ²  ⊳ estim. first order
21: T(j) ← 1 − (T(j)/(M − 1) − μ²)/σ²  ⊳ estim. total order
22: end for
23: Return S(j) and T(j), j = 1, …, Kr  ⊳ first and total-order sensitivity indices S{j} and T{j} of g(X(T))
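Putting the pieces together, Algorithm III can be sketched end-to-end for the birth-death model. To keep the runtime of this illustration small we use b = 20 and a modest sample size M, instead of b = 200 and M = 10^6 as in Sec. V; under this assumption the qualitative conclusion (the birth channel dominates the early transient) is unchanged:

```python
import math
import random

def nra(x0, T, nu, a, rgs):
    # Next reaction algorithm (Algorithm II), one generator per channel.
    Kr = len(a)
    tau = [0.0] * Kr
    taup = [-math.log(rg.random()) for rg in rgs]
    t, x = 0.0, list(x0)
    while True:
        aj = [f(x) for f in a]
        dt = [(taup[j] - tau[j]) / aj[j] if aj[j] > 0 else math.inf
              for j in range(Kr)]
        l = min(range(Kr), key=dt.__getitem__)
        if t + dt[l] > T:
            return x
        t += dt[l]
        x = [xi + dxi for xi, dxi in zip(x, nu[l])]
        for j in range(Kr):
            tau[j] += aj[j] * dt[l]
        taup[l] += -math.log(rgs[l].random())

def compute_si(M, x0, T, nu, a, g, master_seed=0):
    # Algorithm III: first and total-order indices of g(X(T)) per channel.
    Kr = len(a)
    seeder = random.Random(master_seed)
    mu = sig2 = 0.0
    S, Tt = [0.0] * Kr, [0.0] * Kr
    for _ in range(M):
        sI = [seeder.getrandbits(32) for _ in range(Kr)]
        sII = [seeder.getrandbits(32) for _ in range(Kr)]
        gX = g(nra(x0, T, nu, a, [random.Random(s) for s in sI]))
        mu += gX
        sig2 += gX * gX
        for j in range(Kr):
            # first order: channel j keeps seeds I, all others get seeds II
            mix_s = [sI[k] if k == j else sII[k] for k in range(Kr)]
            # total order: channel j gets seeds II, all others keep seeds I
            mix_t = [sII[k] if k == j else sI[k] for k in range(Kr)]
            S[j] += gX * g(nra(x0, T, nu, a,
                               [random.Random(s) for s in mix_s]))
            Tt[j] += gX * g(nra(x0, T, nu, a,
                                [random.Random(s) for s in mix_t]))
    mu /= M
    sig2 = sig2 / (M - 1) - mu * mu
    S = [(s / (M - 1) - mu * mu) / sig2 for s in S]
    Tt = [1.0 - (t / (M - 1) - mu * mu) / sig2 for t in Tt]
    return S, Tt

# Birth-death model, early transient (T = 0.5), reduced b for speed
b, d = 20.0, 1.0
nu = [(+1,), (-1,)]
a = [lambda x: b, lambda x: d * x[0]]
S, T = compute_si(4000, (0,), 0.5, nu, a, lambda x: x[0])
print(S, T)   # birth channel expected to dominate the early transient
```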

In this section, we apply and illustrate the proposed sensitivity analysis and variance decomposition methods to well-known stochastic models.

The BD process25 involves a single species S (Ms = 1) and Kr = 2 reaction channels,

$$R_b: \emptyset \xrightarrow{\ b\ } S, \qquad R_d: S \xrightarrow{\ d\ } \emptyset,$$

with propensity functions

$$a_b(X) = b, \qquad a_d(X) = d\,X.$$

We set b = 200, d = 1, and use M = 1 000 000 Monte Carlo samples to compute the estimates presented in this section.

Figure 1 reports typical trajectories of the system for the initial condition X(t = 0) = 0. The plot shows the transient growth of X toward its asymptotic distribution, which is the Poisson distribution with mean b/d = 200. As depicted in Figure 2, the histogram of X(t = 8) is very close to the asymptotic distribution.

FIG. 1.

Birth-death model: selected trajectories of X(t) generated using Algorithm II.

FIG. 2.

Birth-death model: Histogram of X(t = 8). Results are based on 1 × 10^6 realizations of the Poisson processes, Algorithm II.


For this simple model with only two reaction channels, Rb and Rd, the analysis of the variance for g(X(t)) = X(t) yields only three terms corresponding to the two first-order sensitivity indices S{b} and S{d}, and the mixed contribution $S_{\{b,d\}} \doteq V[X]_{\{b,d\}}/\mathbb{V}[X]$; we have

$$T_{\{b\}} = S_{\{b\}} + S_{\{b,d\}}, \qquad T_{\{d\}} = S_{\{d\}} + S_{\{b,d\}},$$

and $S_{\{b\}} + S_{\{d\}} + S_{\{b,d\}} = 1$.

Figure 3 shows the evolution of the first-order and total sensitivity indices of X for t ∈ [0, 8]. To appreciate the transient dynamics, the sensitivity indices have been scaled by the variance $\mathbb{V}[X]$, whose evolution is also provided. For each reaction channel, the scaled first-order and total sensitivity indices are reported as a colored strip (red for Rb and blue for Rd) spanning vertically the interval $[\mathbb{V}[X]\,S, \mathbb{V}[X]\,T]$, so that the extent of each strip is equal to the mixed partial variance $V[X]_{\{b,d\}}$. The results show that at early times during the transient (t < 1) the variability in X is predominantly caused by the birth channel stochasticity, whereas the death channel induces a variability that grows in time at a much lower rate. This can be explained by the fact that Rb is a zero-order reaction with constant rate, whereas Rd is a first-order reaction and thus requires a sufficiently large population X to fire at a significant rate. Further, during this transient stage, the dynamics are essentially additive since S{b} + S{d} ≈ 1. For 1 ≤ t ≤ 4, the variability induced by Rd only, measured by S{d}, continues to grow with the population size (recall that $\mathbb{E}[X] = \mathbb{V}[X]$ for this model) and eventually becomes close to, but remains lower than, S{b}. In the meantime, the mixed effects also increase, as reflected by the growing extent of the strips. Finally, for t > 4, an interval over which the variance has essentially achieved its asymptotic value ($\mathbb{V}[X] \approx 200$), the first-order sensitivity index S{b} becomes constant while S{d} starts to decrease at a very low rate. The slow decay of S{d} underlines the slow emergence of an increasingly important mixed contribution of the two channels to the variability of X.

FIG. 3.

Birth-death model. First-order and total sensitivity indices of the birth-death model for t ∈ [0, 8]. For clarity, the sensitivity indices have been scaled by V_X(t).


To illustrate the channel interaction, we provide in Figure 4 the time evolution up to t = 500 of the first-order sensitivity indices S{b} and S{d}, together with the interaction term S{b,d} = 1 − S{b} − S{d}. The figure shows that while the variability induced by Rb remains constant for large t and amounts to 50% of the overall variance, the sole impact of channel Rd monotonically decays to the benefit of the mixed term. Again, this behavior is explained by the different natures of the two channels. First, because channel Rb is a zero-order reaction, the stochasticity that it induces is independent of the state X and therefore becomes asymptotically constant, accounting for half the variance. Second, Eqs. (2) and (3) indicate that the contribution of channel Rd (a first-order reaction) to the dynamics of X involves not only the intrinsic stochasticity of the standardized Poisson process Nd, but also the variability of the scaled time td, which involves the path integral of X(t). As a result, the variance due to Nd(td) also involves the variability brought by channel Rb, through its effect on X(t), and our computation quantifies how the sole effect of the variability of Nd monotonically decays while the interaction of the two channels grows.
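The first-order indices S{b} and S{d} can be estimated by a pick-freeze strategy applied directly to the channel realizations: one reuses (freezes) the firing times of one channel's standardized Poisson process while resampling the other. The sketch below implements this for the birth-death model; as before, the rates c_b = 200 and c_d = 1 are illustrative assumptions, and the Monte Carlo sample size is kept deliberately small.

```python
import random

CB, CD = 200.0, 1.0          # assumed illustrative rates (see caveat above)
T_FINAL, M, N_GAPS = 8.0, 250, 2000  # N_GAPS chosen amply above the expected firing count

def poisson_stream(rng):
    """Firing times of a unit-rate Poisson process (cumulative exponential gaps)."""
    times, t = [], 0.0
    for _ in range(N_GAPS):
        t += rng.expovariate(1.0)
        times.append(t)
    return times

def x_at_t(nb, nd):
    """Birth-death state at T_FINAL driven by the given per-channel firing times."""
    x, t = 0, 0.0
    T, idx, streams = [0.0, 0.0], [0, 0], (nb, nd)
    while True:
        a = (CB, CD * x)
        dt, k = None, None
        for j in (0, 1):
            if a[j] > 0:
                d = (streams[j][idx[j]] - T[j]) / a[j]
                if dt is None or d < dt:
                    dt, k = d, j
        if t + dt > T_FINAL:
            return x
        t += dt
        T = [T[j] + a[j] * dt for j in (0, 1)]
        idx[k] += 1
        x += 1 if k == 0 else -1

rng = random.Random(7)
g, g_b, g_d = [], [], []
for _ in range(M):
    nb, nd = poisson_stream(rng), poisson_stream(rng)
    nb2, nd2 = poisson_stream(rng), poisson_stream(rng)
    g.append(x_at_t(nb, nd))
    g_b.append(x_at_t(nb, nd2))  # same Nb, fresh Nd: conditions on channel b
    g_d.append(x_at_t(nb2, nd))  # same Nd, fresh Nb: conditions on channel d

mu = sum(g) / M
var = sum((v - mu) ** 2 for v in g) / (M - 1)
S_b = (sum(u * v for u, v in zip(g, g_b)) / M - mu * sum(g_b) / M) / var
S_d = (sum(u * v for u, v in zip(g, g_d)) / M - mu * sum(g_d) / M) / var
```

Each conditional variance requires one extra batch of simulations sharing the frozen stream, which is the source of the linear complexity in the number of indices noted in the conclusions.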

FIG. 4.

Birth-death model. Long-time evolution of the first-order sensitivity indices, and of the mixed interaction term. Note the log-scale in time.


The Schlögl model26 has Kr = 4 reaction channels, expressed according to

Species B1 and B2 are assumed in large excess, with constant populations over time: XB1 = XB2/2 = 10^5. The stochastic state then reduces to the single evolving species S, with Ms = 1. The propensity functions are given by

and

We set c1 = 3 × 10^−7, c2 = 10^−4, c3 = 10^−3, and c4 = 3.5, so that the dynamics involve a zero-order reaction (channel R3), a first-order reaction (channel R4), a second-order reaction (channel R1), and a third-order reaction (channel R2). In addition, we use the deterministic initial condition X(t = 0) = 250.
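A short stochastic simulation makes the bifurcation tangible. The sketch below runs Gillespie's direct method on the Schlögl model with the rate constants above; since the propensity expressions are omitted from this excerpt, the code assumes the standard mass-action forms for B1 + 2S ⇌ 3S and B2 ⇌ S, including the combinatorial factors 1/2 and 1/6.

```python
import random

# Rate constants from the text; XB1 and XB2 are held constant.
C1, C2, C3, C4 = 3e-7, 1e-4, 1e-3, 3.5
XB1, XB2 = 1e5, 2e5

def propensities(x):
    # Assumed standard mass-action propensities (not given in this excerpt):
    return (C1 * XB1 * x * (x - 1) / 2.0,      # R1, second order in S
            C2 * x * (x - 1) * (x - 2) / 6.0,  # R2, third order in S
            C3 * XB2,                          # R3, zero order
            C4 * x)                            # R4, first order

def ssa(t_final, rng, x0=250):
    """Gillespie's direct method for the single evolving species S."""
    x, t = x0, 0.0
    while True:
        a = propensities(x)
        a0 = a[0] + a[1] + a[2] + a[3]
        t += rng.expovariate(a0)   # a0 > 0 always, since a[2] is constant
        if t > t_final:
            return x
        u = rng.random() * a0
        if u < a[0]:
            x += 1                 # R1 produces S
        elif u < a[0] + a[1]:
            x -= 1                 # R2 consumes S
        elif u < a[0] + a[1] + a[2]:
            x += 1                 # R3 produces S
        else:
            x -= 1                 # R4 consumes S

rng = random.Random(3)
finals = [ssa(8.0, rng) for _ in range(40)]
low = sum(1 for x in finals if x < 300)   # lower attracting branch (peak near 100)
high = len(finals) - low                  # upper attracting branch (peak near 600)
```

With X(0) = 250 lying near the unstable state, the end-of-run populations split between the two branches, reproducing the bimodal histogram of Figure 6 in miniature.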

For the present settings, the system exhibits a bifurcation with two attracting branches. This is illustrated in Figures 5 and 6, which depict selected trajectories of X(t) and the empirical histogram of X(t = 8), respectively. The bi-modal character of the solution at t = 8 is clearly evidenced, with a first peak at x ∼ 100 that is well separated from a second peak occurring at x ∼ 600. Also note the different spreads of the distributions, as the second peak is significantly broader than the first.

FIG. 5.

Schlögl model. Selected trajectories of X(t) showing the bifurcation in the stochastic dynamics.

FIG. 6.

Schlögl model. Histogram of X(t = 8) based on 10^6 realizations of the Poisson process Algorithm II, highlighting the bimodal nature of the stochastic solution.


We first examine sensitivity indices of the Kr reaction channels on g(X(t)) = X(t). For each channel, Figure 7 shows the ranges [S{j}(t), T{j}(t)], scaled using the state variance V_X(t) as scaling factor. Also plotted is the sum of the first-order contributions. Focusing first on the first-order indices, we observe that for t ≤ 8, we have

Since most of the variance arises from the bi-modality of the solution, this suggests that channels R1 and R4 are the primary contributors to the solution branch selection. Figure 7 also indicates that up to t ≈ 2 the system dynamics are essentially additive, while at later times mixed interactions between channels become increasingly significant. In fact, it is seen that the variance induced by the first-order effects, V_X × Σ_{j=1}^{Kr} S{j}, i.e., by the individual channels without interaction, is essentially constant for t > 4, while the total variance continues to grow as interaction contributions develop. At t = 8, X is close to its asymptotic distribution, and the importance of the mixed effects in all the channels, T{j} − S{j}, can be appreciated from the extents of the colored strips in the plots. Again, mixed effects for channels R1 and R4 appear to be comparable, but significantly larger than they are in channels R2 and R3.

FIG. 7.

Schlögl model. Sensitivity indices scaled by the local variance, together with the total variance and the sum of first-order variances. The estimates are based on sample sets of 10^6 trajectories of the NRA.


To gain further insight into the structure of the variability, the characterization of higher-order partial variances is needed. For the present model with Kr = 4, we can afford to estimate all sensitivity indices S𝔲, 𝔲 ⊆ K, from which the partial variances V_X^𝔲 can be retrieved. We provide in Figure 8 the second-order partial variances V_X^𝔲 for |𝔲| = 2. It is seen that they all assume relatively low values (≲500), except for V_X^{{1,4}}, which asymptotically peaks at approximately 3500, confirming the preponderance of these two channels. Note also that V_X^{{2,3}} is estimated to be zero, so not only do these two channels have lower first-order variances, but they also have no second-order interaction.
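The step of retrieving partial variances from the estimated (closed) conditional variances is a simple inclusion-exclusion recursion, V_X^𝔲 = V(E[g|ξ_𝔲]) − Σ_{𝔳⊊𝔲} V_X^𝔳. The sketch below applies it to a toy analytic example, g(ξ1, ξ2, ξ3) = ξ1 + ξ2ξ3 with independent standard normal inputs, for which the closed variances are known exactly; this example is ours, not taken from the paper.

```python
def partial_variances(closed):
    """closed: dict mapping frozenset u -> closed variance V(E[g | xi_u]).
    Returns dict mapping u -> partial variance V^u by inclusion-exclusion."""
    part = {}
    for u in sorted(closed, key=len):  # process subsets before supersets
        part[u] = closed[u] - sum(part[v] for v in part if v < u)  # v < u: proper subset
    return part

# Closed variances for g = xi1 + xi2*xi3, with xi_i independent N(0,1):
# E[g|xi1] = xi1 (variance 1); E[g|xi2] = E[g|xi3] = 0;
# E[g|xi2,xi3] = xi2*xi3 (variance 1); total variance = 2.
f = frozenset
closed = {f({1}): 1.0, f({2}): 0.0, f({3}): 0.0,
          f({1, 2}): 1.0, f({1, 3}): 1.0, f({2, 3}): 1.0,
          f({1, 2, 3}): 2.0}
part = partial_variances(closed)
# The only nonzero partial variances are V^{1} = 1 and V^{2,3} = 1,
# and the partials sum to the total variance, 2.
```

The same recursion, fed with Monte Carlo estimates of the closed variances, produces the second- and higher-order partial variances reported in Figures 8 and 9.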

FIG. 8.

Schlögl model. Evolution of second-order partial variances. The estimates are based on sample sets of 10^6 trajectories of the NRA.


Regarding the higher-order partial variances for |𝔲| > 2, reported in Figure 9, we observe again that the V_X^𝔲 have significant values only when {1, 4} ⊂ 𝔲. The partial variance V_X^{{1,2,3,4}}, which quantifies the variability due to the interaction between all the channels, becomes asymptotically dominant over all other partial variances except V_X^{{1,2,4}} and V_X^{{1,3,4}}, underscoring the importance of complex high-order interactions between the channels. Further, unlike the third-order partial variances, V_X^{{1,2,3,4}} has not yet converged at t = 8, highlighting the slower time scales at which full interactions develop.

FIG. 9.

Schlögl model. Evolution of third-order and fourth-order partial variances. The estimates are based on sample sets of 10^6 trajectories of the NRA.


To support our conclusion that channels R1 and R4 are the main sources of stochasticity, through their role as the dominant mechanism for selecting the bifurcation branch, we provide in Figure 10 sample sets of trajectories of X for t ∈ [0, 8], conditioned on the Poisson processes (N1, N4) and (N2, N3), respectively. In Figure 10(a) we show, for 9 realizations (n1, n4) of (N1(ω), N4(ω)), 10 trajectories of X each. Similarly, Figure 10(b) depicts, for 9 realizations (n2, n3) of (N2(ω), N3(ω)), 10 trajectories of X each. The results indicate that, given the Poisson processes of channels R1 and R4, the stochastic system tends to select one of the two attracting branches of the dynamics more systematically. In contrast, the branch selection appears to be less influenced by channels R2 and R3.

FIG. 10.

Schlögl model. Trajectories of X(t) conditioned on (a) N1(ω) = n1 and N4(ω) = n4, and (b) N2(ω) = n2 and N3(ω) = n3. Each sub-plot shows 10 conditionally random trajectories for fixed realizations n1 and n4 in (a), and n2 and n3 in (b). (a) Conditioned on N1 and N4. (b) Conditioned on N2 and N3.


The Michaelis-Menten system27 has Ms = 4 species and Kr = 3 reaction channels, and is expressed according to

The system models the creation of a product S4 through the binding of the enzyme S2 with the substrate S1; S3 is an intermediate species, which can dissociate back into enzyme and substrate (channel 2) or decompose into product S4 and enzyme S2 (channel 3). Note that X2 + X3 is a conserved quantity. The propensity functions of the model are given by a1(x) = c1x1x2, a2(x) = c2x3, and a3(x) = c3x3. We set c1 = 0.0017, c2 = 10^−3, and c3 = 0.125, and use as initial conditions X1(t = 0) = 300, X2(t = 0) = 120, and X3(t = 0) = X4(t = 0) = 0.
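Since the propensities and initial conditions are fully specified, the model is easy to reproduce. The sketch below runs a direct SSA on the Michaelis-Menten network and checks the two conservation laws implied by the stoichiometry, X2 + X3 = 120 and X1 + X3 + X4 = 300 (enzyme and substrate conservation, respectively).

```python
import random

C1, C2, C3 = 0.0017, 1e-3, 0.125
# Stoichiometry: R1: S1+S2 -> S3;  R2: S3 -> S1+S2;  R3: S3 -> S4+S2.
NU = ((-1, -1, 1, 0), (1, 1, -1, 0), (0, 1, -1, 1))

def ssa_mm(t_final, rng):
    """Direct-method trajectory of (X1, X2, X3, X4), returned at t_final."""
    x, t = [300, 120, 0, 0], 0.0
    while True:
        a = (C1 * x[0] * x[1], C2 * x[2], C3 * x[2])
        a0 = a[0] + a[1] + a[2]
        if a0 == 0.0:
            return x  # substrate exhausted: no channel can fire
        t += rng.expovariate(a0)
        if t > t_final:
            return x
        u = rng.random() * a0
        k = 0 if u < a[0] else (1 if u < a[0] + a[1] else 2)
        x = [xi + d for xi, d in zip(x, NU[k])]

rng = random.Random(1)
finals = [ssa_mm(60.0, rng) for _ in range(200)]
mean_x4 = sum(x[3] for x in finals) / len(finals)
# Every trajectory satisfies X2 + X3 = 120 and X1 + X3 + X4 = 300,
# and by t = 60 most of the substrate has been converted into product.
```

The two invariants provide a strong correctness check on any simulator of this network, and the first one is the reason the sensitivity indices of S2 and S3 coincide below.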

Figure 11 shows selected stochastic trajectories of the state vector. Qualitatively, the trajectories initially show a fast increase in the population of the intermediate species, S3, with a decay in the substrate and enzyme populations, respectively X1 and X2. When X3 is large enough, S3 starts to decompose, producing S4, whose population grows monotonically. As S4 is produced, the substrate and intermediate species populations are progressively exhausted. As t → ∞, (i) all the substrate will eventually be consumed into product, i.e., X4 → X1(t = 0); and (ii) the intermediate species population X3 will vanish, thus releasing the enzyme, so that X2 → X2(t = 0).

FIG. 11.

Michaelis-Menten model. Selected trajectories of the state vector X for t ∈ [0, 60].


Figure 12 shows the time evolution of the first-order and total sensitivity indices of the three reaction channels, for species (a) S1, (b) S2, and (c) S4, using decompositions of g(X) = Xi with i = 1, 2, and 4, respectively. Also plotted for each species is the scaled sum of first-order indices. Because of the conservation rule X2 + X3 = const, the sensitivity indices for S2 and S3 are equal, and so the results for S3 are not shown. Starting with S1 (Fig. 12(a)), we see that the binding process R1 is initially the main source of stochasticity; the variability due to the decomposition process R3 then catches up at later times, and even becomes dominant for t ∈ [10, 20], before decaying to zero, while the effect of R1 decays at a slower rate. At the time of maximum variance in X1, the two channels R1 and R3 contribute roughly equally to the variability. In contrast, the variability induced by channel R2 (dissociation of S3) is always negligible. It is also interesting to note that the effects of the channels are essentially additive.

FIG. 12.

Michaelis-Menten model: first-order and total sensitivity indices S{j} and T{j} for j = 1, 2, 3. Plots are generated for (a) X1, (b) X2, and (c) X4. The estimates are based on a sample set of 10^6 trajectories of the NRA.


Regarding S2 (Fig. 12(b)), we first remark that, again, channel R2 induces a negligible fraction of the variance. Second, as for S1, channel R1 is initially the dominant source of uncertainty, but R3 catches up sooner and then remains the main contributor to the variance of X2. Further, interaction effects between channels R1 and R3 appear to have a more significant impact on the variability of S2 than on that of S1.

Finally, the sensitivity analysis for the product species S4 (Fig. 12(c)) highlights the dominance of channel R3 on the variability at all times. In addition, the variability in X4 caused by channel R2 is negligible, and so are the interaction effects.

Overall, it can be concluded that channel R2 (dissociation of S3) has essentially no impact on the variability of X. In fact, repeating the simulation disregarding channel R2 (i.e., setting c2 = 0) yields essentially similar variances (not shown), implying that for the present setting the dissociation process can be neglected and the system accordingly simplified. However, this is not necessarily the case in general, e.g., when the system parameters are changed. To demonstrate this claim, we provide in Figure 13 the same analysis as above, but now using c2 = 25 × 10^−3. Globally, the increase in c2 leads to a significant contribution of channel R2 to the variances of X1, X2, and X3, and to a lesser extent to the variance of X4. The increase in the dissociation rate also results in a noticeable increase in the effects of interaction terms on the variances of X1 and X2.

FIG. 13.

Michaelis-Menten model: same as in Figure 12 but with c2 increased 25-fold. (a) S1. (b) S2. (c) S4.


This work focused on the development of methods and algorithms that enable an orthogonal decomposition of the variance of a functional of the output of a stochastic simulator. The key to this development consists in formulating the stochastic dynamics as being generated by independent standardized Poisson processes. At the manageable expense of representing the individual Poisson processes, this enables us to identify individual realizations of the channel dynamics, and subsequently to apply a Sobol decomposition of the functional variance with respect to the channels. A Monte Carlo sampling approach was adopted for this purpose. This leads to an algorithm whose complexity is linear in the number of conditional variances (sensitivity indices) that one wishes to compute. In addition, each conditional variance can be computed in parallel (lines 9–16 in Algorithm III).

Implementation of the algorithms resting on this foundation was illustrated through application to simple stochastic systems, namely, the birth-death, Schlögl, and Michaelis-Menten models. The computations were used to demonstrate the possibility of quantifying the contributions of individual Poisson processes associated with elementary reactions to the variance of species concentrations, of characterizing the role of mixed interactions, and consequently of attributing the variability of the functional to reaction pathways.

While the scope of the computations performed was restricted to simple demonstrations, the methods developed in this work can be readily applied to investigate broader questions. An interesting direction would be to investigate more complex functionals of the solution. Examples include variance decomposition of arrival, exit, and mean hitting times, reaction time scales, progress variables, and full path functionals.

In addition, a number of possible algorithmic improvements and fundamental extensions can be conceived from the methodology and algorithms presented in the paper. In particular, complex systems involving large numbers of channels and species would benefit from the incorporation of more elaborate time integrators; for instance, if an unaffordable number of elementary reactions fires before the final time is reached, tau-leap methods4–6,28,29 should be considered to accelerate the simulations. Extending our methodology to tau-leap methods will require consistent time approximations of the individual realizations of the Poisson processes associated with each channel. It would also be beneficial to rely on improved sampling strategies and variance reduction methods, such as multi-level Monte Carlo,30–32 to reduce the number of simulations needed to estimate the sensitivity indices. On the fundamental side, interesting generalizations include leveraging the present approach for the purpose of model reduction,33–36 and for quantifying the impact of uncertainties that may affect system parameters.22,37–43 In particular, such extensions would offer the promise of accounting for and distinguishing between the impacts of parametric sensitivity and irreducible noise,43 as well as of providing estimates of where reduced-model approximations are valid. These topics are the subject of ongoing work.
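As a concrete illustration of the tau-leap idea in the channel-wise Poisson setting, the sketch below advances the birth-death model by firing, over each step τ, a Poisson number of events per channel with frozen propensities. This is the explicit tau-leap method, not the paper's algorithm; the rates c_b = 200 and c_d = 1 are illustrative assumptions consistent with the asymptotic variance V_X ≈ 200 reported earlier.

```python
import numpy as np

# Assumed illustrative rates; tau chosen small relative to 1/c_d.
CB, CD, TAU, T_FINAL, M = 200.0, 1.0, 0.01, 8.0, 2000

rng = np.random.default_rng(0)
x = np.zeros(M)  # M independent trajectories, X(0) = 0
for _ in range(int(round(T_FINAL / TAU))):
    # Freeze propensities over [t, t + tau] and fire a Poisson number of
    # events on each channel, mirroring the per-channel Poisson formulation.
    births = rng.poisson(CB * TAU, size=M)
    deaths = rng.poisson(CD * x * TAU)
    x = np.maximum(x + births - deaths, 0.0)  # guard against negative populations

mean, var = x.mean(), x.var(ddof=1)
print(mean, var)  # both close to the stationary value c_b/c_d = 200
```

For a sensitivity analysis along the lines of this paper, the key additional requirement noted above is that the per-channel Poisson increments be generated from consistent approximations of each channel's standardized process, so that freezing a channel remains meaningful across step sizes.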

This work was supported by the US Department of Energy (DOE), Office of Science, Office of Advanced Scientific Computing Research, under Award No. DE-SC0008789. Support from the Research Center on Uncertainty Quantification of the King Abdullah University of Science and Technology is also acknowledged.

1. D. T. Gillespie, "A general method for numerically simulating the stochastic time evolution of coupled chemical reactions," J. Comput. Phys. 22(4), 403–434 (1976).
2. D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," J. Phys. Chem. 81(25), 2340–2361 (1977).
3. D. T. Gillespie, "A rigorous derivation of the chemical master equation," Phys. A 188(1–3), 404–425 (1992).
4. D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," J. Chem. Phys. 115(4), 1716–1733 (2001).
5. D. T. Gillespie and L. R. Petzold, "Improved leap-size selection for accelerated stochastic simulation," J. Chem. Phys. 119(16), 8229–8234 (2003).
6. M. Rathinam, L. R. Petzold, Y. Cao, and D. T. Gillespie, "Stiffness in stochastic chemically reacting systems: The implicit tau-leaping method," J. Chem. Phys. 119(24), 12784–12794 (2003).
7. W. E, D. Liu, and E. Vanden-Eijnden, "Nested stochastic simulation algorithms for chemical kinetic systems with multiple time scales," J. Comput. Phys. 221(1), 158–180 (2007).
8. M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," J. Phys. Chem. A 104, 1876–1889 (2000).
9. C. Gardiner, Stochastic Methods, Springer Series in Synergetics Vol. 13 (Springer, 2009).
10. S. N. Ethier and T. G. Kurtz, Markov Processes: Characterization and Convergence, Wiley Series in Probability and Statistics (Wiley, 2005).
11. I. M. Sobol′, "Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates," Math. Comput. Simul. 55(1–3), 271–280 (2001).
12. T. Homma and A. Saltelli, "Importance measures in global sensitivity analysis of nonlinear models," Reliab. Eng. Syst. Saf. 52(1), 1–17 (1996).
13. G. Li, H. Rabitz, P. E. Yelvington, O. O. Oluwole, F. Bacon, C. E. Kolb, and J. Schoendorf, "Global sensitivity analysis for systems with independent and/or correlated inputs," J. Phys. Chem. A 114(19), 6022–6032 (2010).
14. G. Chastaing, F. Gamboa, and C. Prieur, "Generalized Sobol sensitivity indices for dependent variables: Numerical methods," J. Stat. Comput. Simul. 85(7), 1306–1333 (2015).
15. G. Li, S.-W. Wang, H. Rabitz, S. Wang, and P. Jaffé, "Global uncertainty assessments by high dimensional model representations (HDMR)," Chem. Eng. Sci. 57, 4445–4460 (2002).
16. T. Crestaux, O. Le Maître, and J. M. Martinez, "Polynomial chaos expansion for sensitivity analysis," Reliab. Eng. Syst. Saf. 94(7), 1161–1172 (2009).
17. M. M. Zuniga, S. Kucherenko, and N. Shah, "Metamodelling with independent and dependent inputs," Comput. Phys. Commun. 184, 1570–1580 (2013).
18. M. Rathinam, P. W. Sheppard, and M. Khammash, "Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks," J. Chem. Phys. 132, 034103 (2010).
19. P. W. Sheppard, M. Rathinam, and M. Khammash, "A pathwise derivative approach to the computation of parameter sensitivities in discrete stochastic chemical systems," J. Chem. Phys. 136(3), 034115 (2012).
20. E. S. Wolf and D. F. Anderson, "A finite difference method for estimating second order parameter sensitivities of discrete stochastic chemical reaction networks," J. Chem. Phys. 137(22), 224112 (2012).
21. E. S. Wolf and D. F. Anderson, "Hybrid pathwise sensitivity methods for discrete stochastic models of chemical reaction systems," J. Chem. Phys. 142(3), 034103 (2015).
22. A. Alexanderian, F. Rizzi, M. Rathinam, O. P. Le Maître, and O. M. Knio, "Preconditioned Bayesian regression for stochastic chemical kinetics," J. Sci. Comput. 58, 592–626 (2014).
23. W. Hoeffding, "A class of statistics with asymptotically normal distribution," Ann. Math. Stat. 19, 293–325 (1948).
24. I. M. Sobol′, "Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates," Math. Comput. Simul. 55, 271–280 (2001).
25. M. A. Nowak, Evolutionary Dynamics: Exploring the Equations of Life (Harvard University Press, 2006).
26. F. Schlögl, "Chemical reaction models for non-equilibrium phase transitions," Z. Phys. 253, 247 (1972).
27. D. J. Wilkinson, Stochastic Modeling for Systems Biology, Mathematical and Computational Series (Chapman and Hall/CRC, 2006).
28. D. F. Anderson, "A modified next reaction method for simulating chemical systems with time dependent propensities and delays," J. Chem. Phys. 127(21), 214107 (2007).
29. A. Moraes, R. Tempone, and P. Vilanova, "Hybrid Chernoff tau-leap," Multiscale Model. Simul. 12, 581–615 (2014).
30. M. B. Giles, "Multi-level Monte Carlo path simulation," Oper. Res. 56(3), 607–617 (2008).
31. D. F. Anderson and D. J. Higham, "Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics," Multiscale Model. Simul. 10(1), 146–179 (2012).
32. A. Moraes, R. Tempone, and P. Vilanova, "Multilevel hybrid Chernoff tau-leap," BIT Numer. Math. (submitted).
33. X. Han and H. N. Najm, "Dynamical structures in stochastic chemical reaction systems," SIAM J. Appl. Dyn. Syst. 13, 1328–1351 (2014).
34. M. Bruna, J. S. Chapman, and M. J. Smith, "Model reduction for slow-fast stochastic systems with metastable behaviour," J. Chem. Phys. 140, 174107 (2014).
35. M. Salloum, A. Alexanderian, O. P. Le Maître, H. N. Najm, and O. M. Knio, "Simplified CSP analysis of a stiff stochastic ODE system," Comput. Methods Appl. Mech. Eng. 217–220, 121–138 (2012).
36. C. A. Gómez-Uribe, G. C. Verghese, and A. R. Tzafriri, "Enhanced identification and exploitation of time scales for model reduction in stochastic chemical kinetics," J. Chem. Phys. 129, 244112 (2008).
37. K. Sargsyan, B. Debusschere, H. N. Najm, and Y. Marzouk, "Bayesian inference of spectral expansions for predictability assessment in stochastic reaction networks," J. Comput. Theor. Nanosci. 6(10), 2283–2297 (2009).
38. K. Sargsyan, C. Safta, B. Debusschere, and H. Najm, "Multiparameter spectral representation of noise-induced competence in Bacillus subtilis," IEEE/ACM Trans. Comput. Biol. Bioinf. 9(6), 1709–1723 (2012).
39. F. Rizzi, H. N. Najm, B. J. Debusschere, K. Sargsyan, M. Salloum, H. Adalsteinsson, and O. M. Knio, "Uncertainty quantification in MD simulations. Part I: Forward propagation," Multiscale Model. Simul. 10, 1428–1459 (2012).
40. F. Rizzi, H. N. Najm, B. J. Debusschere, K. Sargsyan, M. Salloum, H. Adalsteinsson, and O. M. Knio, "Uncertainty quantification in MD simulations. Part II: Bayesian inference of force-field parameters," Multiscale Model. Simul. 10, 1460–1492 (2012).
41. F. Rizzi, R. Jones, B. J. Debusschere, and O. M. Knio, "Uncertainty quantification in MD simulations of concentration driven ionic flow through a silica nanopore. Part I: Sensitivity to physical parameters of the pore," J. Chem. Phys. 138, 194104 (2013).
42. F. Rizzi, R. Jones, B. J. Debusschere, and O. M. Knio, "Uncertainty quantification in MD simulations of concentration driven ionic flow through a silica nanopore. Part II: Uncertain potential parameters," J. Chem. Phys. 138, 194105 (2013).
43. O. P. Le Maître and O. M. Knio, "PC analysis of stochastic differential equations driven by Wiener noise," Reliab. Eng. Syst. Saf. 135, 107–124 (2015).