The weighted stochastic simulation algorithm (wSSA) recently introduced by Kuwahara and Mura [J. Chem. Phys. 129, 165101 (2008)] is an innovative variation on the stochastic simulation algorithm (SSA). It enables one to estimate, with much less computational effort than was previously thought possible using a Monte Carlo simulation procedure, the probability that a specified event will occur in a chemically reacting system within a specified time when that probability is very small. This paper presents some procedural extensions to the wSSA that enhance its effectiveness in practical applications. The paper also attempts to clarify some theoretical issues connected with the wSSA, including its connection to first passage time theory and its relation to the SSA.

1. H. Kuwahara and I. Mura, J. Chem. Phys. 129, 165101 (2008).
2.
The computation of $\sigma^2$ in Eq. (7) evidently involves taking the difference between two usually large and, in the best of circumstances, nearly equal numbers. This can give rise to numerical inaccuracies. Since, with $\mu_m \equiv n^{-1}\sum_{k=1}^{n} w_k^m$, the quantity $\mu_2 - \mu_1^2$ is mathematically identical to $n^{-1}\sum_{k=1}^{n} (w_k - \mu_1)^2$, the form of the latter as a sum of non-negative numbers makes it less susceptible to numerical inaccuracies. Unfortunately, this more accurate formula is much less convenient than formula (7), whose two sums can be computed on the fly without having to save the $w_k$ values. But unless the two sums in Eq. (7) are computed with sufficiently high numerical precision, use of the alternate formula is advised.
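The cancellation issue described above can be illustrated with a short sketch (the function name and the sample weight values are illustrative, not from the paper): the one-pass form $\mu_2 - \mu_1^2$ accumulates two running sums, while the two-pass centered sum stores the $w_k$ but adds only non-negative terms.

```python
def variance_two_ways(weights):
    """Compute the variance of the w_k two ways: the one-pass form
    mu_2 - mu_1^2 (as in Eq. (7), accumulable on the fly) and the
    mathematically identical centered sum n^{-1} * sum((w_k - mu_1)^2),
    which is a sum of non-negative terms and so resists cancellation."""
    n = len(weights)
    # One-pass form: both sums can be accumulated run by run.
    s1 = sum(weights)                  # sum of w_k
    s2 = sum(w * w for w in weights)   # sum of w_k^2
    mu1 = s1 / n
    one_pass = s2 / n - mu1 * mu1
    # Two-pass form: requires saving the w_k values.
    two_pass = sum((w - mu1) ** 2 for w in weights) / n
    return one_pass, two_pass

# Weights tightly clustered around a large value: the true variance is
# tiny compared to mu_1^2, so the one-pass subtraction loses almost all
# significant digits, while the centered sum stays accurate.
w = [1e8 + k * 1e-4 for k in range(1000)]
a, b = variance_two_ways(w)
```

For these sample weights the exact variance is $(10^{-4})^2 (1000^2-1)/12 \approx 8.33\times 10^{-4}$; the centered sum recovers it to high accuracy, whereas the one-pass difference of two numbers of order $10^{16}$ does not.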
3. See, for instance, J. V. Sengers, D. T. Gillespie, and J. J. Perez-Esandi, Physica 90A, 365 (1978); D. T. Gillespie, J. Opt. Soc. Am. A 2, 1307 (1985).
4.
Result (9a) for the uncertainty when no importance sampling is used can also be deduced through the following line of reasoning: Abbreviating $p(\mathbf{x}_0, E; t) \equiv p$, the $n$ runs are analogous to $n$ tosses of a coin that has probability $p$ of coming up successful. We know from elementary statistics that the number of successful runs should then be the binomial (or Bernoulli) random variable with mean $np$ and variance $np(1-p)$. When $n$ is very large, that binomial random variable can be approximated by the normal random variable with the same mean and variance. Multiplying that random variable by $n^{-1}$ gives the fraction of the $n$ runs that are successful. Random variable theory tells us that it too will be (approximately) normal, but with mean $n^{-1}\cdot np = p$ and variance $(n^{-1})^2\, np(1-p) = p(1-p)/n$, and hence standard deviation $\sqrt{p(1-p)/n}$. The latter, with $p = m_n/n$, is precisely uncertainty (9a). Essentially this argument was given in Appendix B of Ref. 1. But there is apparently no way to generalize this line of reasoning to the case where the weights of the successful runs are not all unity; hence the need for the procedure described in the text.
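The binomial argument above can be sketched numerically. In this illustrative stand-in (the function name is hypothetical, and each "run" is a plain Bernoulli trial rather than an actual SSA trajectory), the estimate $\hat{p} = m_n/n$ is reported together with the one-standard-deviation uncertainty $\sqrt{\hat{p}(1-\hat{p})/n}$:

```python
import math
import random

def estimate_with_uncertainty(p_true, n, seed=1):
    """Estimate an event probability from n independent unit-weight
    runs, and attach the one-sigma uncertainty sqrt(p_hat*(1-p_hat)/n)
    that follows from the binomial/normal reasoning.  Each run here is
    simulated as a Bernoulli trial with success probability p_true."""
    rng = random.Random(seed)
    m_n = sum(rng.random() < p_true for _ in range(n))  # successful runs
    p_hat = m_n / n                                     # fraction successful
    sigma = math.sqrt(p_hat * (1.0 - p_hat) / n)        # uncertainty (9a)
    return p_hat, sigma

p_hat, sigma = estimate_with_uncertainty(p_true=0.05, n=100_000)
```

With $p \approx 0.05$ and $n = 10^5$, the uncertainty is about $\sqrt{0.05 \times 0.95 / 10^5} \approx 7\times 10^{-4}$, so the estimate should fall within a few multiples of that of the true value.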
5. See, for instance, C. W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences (Springer-Verlag, Berlin, 1985), pp. 238-240.
6. D. T. Gillespie, Markov Processes: An Introduction for Physical Scientists (Academic, New York, 1992), pp. 520-529.
7. P. D. Welch, in The Computer Performance Modeling Handbook, edited by S. Lavenberg (Academic, New York, 1983), pp. 268-328.