Long-term temporal correlations in time series in the form of an event sequence have been characterized using an autocorrelation function that often shows a power-law decaying behavior. Such scaling behavior has mainly been accounted for by the heavy-tailed distribution of interevent times, i.e., the time interval between two consecutive events. Yet, little is known about how correlations between consecutive interevent times systematically affect the decaying behavior of the autocorrelation function. Empirical distributions of the burst size, which is the number of events in a cluster of events occurring in a short time window, often show heavy tails, implying that arbitrarily many consecutive interevent times may be correlated with each other. In the present study, we propose a model for generating a time series with arbitrary functional forms of the interevent time and burst size distributions. Then, we analytically derive the autocorrelation function for the model time series. In particular, by assuming that the interevent time and burst size are power-law distributed, we derive scaling relations between the power-law exponents of the autocorrelation function decay, the interevent time distribution, and the burst size distribution. These analytical results are confirmed by numerical simulations. Our approach helps to rigorously and analytically understand the effects of correlations between arbitrarily many consecutive interevent times on the decaying behavior of the autocorrelation function.

In general, events in complex systems are correlated with each other. The autocorrelation function for an event sequence measures how strongly an event at one time affects events in the future (or past). In complex systems, the autocorrelation function typically shows a long tail, implying a long-lasting memory. We analytically solve for the autocorrelation function to understand the origin of such long tails in terms of not only the heavy-tailed distribution of the time interval between two successive events, namely, the interevent time, but also correlations between arbitrarily many successive interevent times.

Complex systems often show complex dynamical behaviors such as long-term temporal correlations, also known as 1/f noise.1–5 Characterizing these temporal correlations is of utmost importance for understanding the mechanisms behind such observations. There are a number of characterization and measurement methods in the literature; e.g., one can refer to Refs. 6–9 and references therein. One of the most commonly used measures is the autocorrelation function.10,11 Precisely, the autocorrelation function for a time series x(t) with a time lag t_d is defined as
A(t_d) = \frac{\langle x(t)\, x(t + t_d) \rangle - \langle x(t) \rangle^2}{\langle x(t)^2 \rangle - \langle x(t) \rangle^2},   (1)
where ⟨·⟩ denotes the time average over the entire period of the time series. The autocorrelation function has been extensively used for detecting temporal correlations in various natural and social phenomena.8,12–14 For time series with long-term correlations, the autocorrelation function typically decays in a power-law form with a decay exponent γ such that
A(t_d) \sim t_d^{-\gamma}.   (2)
This power-law behavior is closely related to the 1/f noise via the Wiener–Khinchin theorem15 as well as to the Hurst exponent H16 and its generalizations.17–21
A type of time series that has attracted attention is given in the form of a sequence of event timings, or an event sequence, which can be regarded as a realization of a point process in time.22 Temporal correlations in such event sequences have been characterized by the time interval between two consecutive events, namely, the interevent time. In many empirical data sets, interevent time distributions, denoted by P(τ), have heavy tails;23–36 in particular, P(τ) often shows a power-law tail as
P(\tau) \sim \tau^{-\alpha},   (3)
with a power-law exponent α.8 Lowen and Teich derived the analytical solution of the power spectral density for a renewal process governed by a power-law interevent time distribution,37,38 concluding that the decay exponent γ in Eq. (2) is solely determined by the interevent time exponent α in Eq. (3) as
(4)
It is not surprising because the heavy-tailed interevent time distribution is the only source of temporal correlations in the time series for renewal processes, where there are no correlations between interevent times. The same scaling relation in Eq. (4) was derived in other model studies.39–41 
In general, correlations between interevent times, in addition to the interevent time distribution, should also be relevant to understanding the asymptotic decay of the autocorrelation function. To detect correlations between consecutive interevent times, the notion of bursty trains was introduced;42 for a given time window Δt, consecutive events are clustered into a bursty train when any two consecutive events in the train are separated by an interevent time smaller than or equal to Δt, while the first (last) event in the train is separated from the last (first) event of the previous (next) train by an interevent time larger than Δt. The number of events in each bursty train is called the burst size and is denoted by b. By analyzing various data sets, Karsai et al. reported that the burst size distribution shows power-law tails as
Q_{\Delta t}(b) \sim b^{-\beta},   (5)
with a power-law exponent β for a wide range of Δt.42 Similar observations have been made in other data sets.43–48 These findings immediately raise an important question: how does the decay exponent γ of the autocorrelation function depend on the interevent time power-law exponent α as well as on the burst-size power-law exponent β?

It is not straightforward to devise a model or process that can answer this question, because the heavy tail of the burst size distribution typically implies that arbitrarily many consecutive interevent times may be correlated with each other. It is worth noting that correlations between two consecutive interevent times have been quantified in terms of the memory coefficient,49 local variation,50 and mutual information;51 they were implemented using, e.g., a copula method52,53 and a correlated Laplace Gillespie algorithm in the context of many-body systems.54,55 Correlations between an arbitrary number of consecutive interevent times have been modeled by means of, e.g., the two-state Markov chain,42 self-exciting point processes,56 the interevent time permutation method,57 and a model inspired by the burst-tree decomposition method.48 Although scaling behaviors of the autocorrelation function were studied in some of these works,42,53,56,57 the scaling relation γ(α, β) has not been clearly understood due to the lack of analytical solutions of the autocorrelation function.

In this work, we devise a model for generating a time series with arbitrary functional forms of the interevent time distribution and the burst size distribution. Assuming power-law tails for the interevent time and burst size distributions, our model generates correlations between an arbitrary number of consecutive interevent times. By theoretically analyzing the model, we derive asymptotically exact solutions of the autocorrelation function for the model time series, thereby enabling us to find the scaling relation γ(α, β). These analytical results will help us better understand temporal scaling behaviors in empirical time series.

The paper is organized as follows. In Sec. II, we introduce the model with arbitrary functional forms of interevent time and burst size distributions. In Sec. III, we provide an analytical framework for the derivation of the autocorrelation function for the model time series. In Sec. IV, by assuming power-law distributions of interevent times and burst sizes, we derive analytical solutions of the autocorrelation function, hence the decay exponent γ as a function of α and β. We also compare the obtained analytical results with numerical simulations. Finally, we conclude our paper in Sec. V.

Let us introduce the following model for generating event sequences {x(1), …, x(T)} in discrete time, where T is the number of discrete times, for a given interevent time distribution and burst size distribution. By definition, x(t) = 1 if an event occurs at time t ∈ {1, …, T}, and x(t) = 0 otherwise. By construction, the minimum interevent time is τ_min = 1. Furthermore, we only consider bursts with Δt = 1 throughout this work. In other words, we regard events occurring at consecutive times as forming a burst, including the case in which the burst contains only one event.

The event sequence x(t) is generated using an interevent time distribution for τ ≥ 2, denoted by ψ(τ), and a burst size distribution Q(b). Note that ψ(τ) is normalized. To generate an event sequence, we first randomly draw a burst size b_1 from Q(b) to set x(t) = 1 for t = 1, …, b_1. Then, we randomly draw an interevent time τ_1 from ψ(τ) to set x(t) = 0 for t = b_1 + 1, …, b_1 + τ_1 − 1. Note that τ_1 ≥ 2. We draw another burst size b_2 from Q(b) and another interevent time τ_2 from ψ(τ), respectively, to set x(t) = 1 for t = b_1 + τ_1, …, b_1 + τ_1 + b_2 − 1 and x(t) = 0 for t = b_1 + τ_1 + b_2, …, b_1 + τ_1 + b_2 + τ_2 − 1. We repeat this procedure until t reaches T. See Fig. 1(a) for an example.
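To make the procedure concrete, a minimal Python sketch of the generator is given below; the sampler functions draw_burst_size and draw_interevent_time are placeholders of our own naming for any routines that return samples from Q(b) and from ψ(τ) with τ ≥ 2, respectively.

```python
import numpy as np

def generate_event_sequence(T, draw_burst_size, draw_interevent_time, rng):
    """Alternate bursts of 1s (sizes drawn from Q) with runs of 0s of length
    tau - 1 (interevent times tau >= 2 drawn from psi), as described above."""
    x = np.zeros(T, dtype=int)
    t = 0  # 0-based index here; the main text uses t = 1, ..., T
    while t < T:
        b = draw_burst_size(rng)         # burst size b >= 1 from Q(b)
        x[t:t + b] = 1                   # b events at consecutive times
        tau = draw_interevent_time(rng)  # interevent time tau >= 2 from psi(tau)
        t += b + tau - 1                 # next event occurs tau steps after the last one
    return x
```

For instance, letting draw_burst_size always return 1 reduces the model to a renewal process whose interevent times are drawn from ψ(τ) alone.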

FIG. 1.

(a) Schematic diagram for an event sequence x ( t ) in discrete time t, showing how interevent times τ and burst sizes b are combined in the event sequence, with an example of non-zero x ( t ) x ( t + t d ) for the time lag t d = 14. (b) The event sequence x ( t ) can be seen as the alternation of periods of x ( t ) = 1 and those of x ( t ) = 0. A typical composition of the time lag t d for non-zero x ( t ) x ( t + t d ) is presented. See the main text for the definitions of symbols.

We have two remarks that are related to each other. First, in our model, all interevent times of τ = 1 are generated by bursts with b ≥ 2 events; a burst of size b generates exactly b − 1 interevent times of τ = 1. This is why the fraction of interevent times of τ = 1 among all interevent times is determined by Q(b). To be precise, if there are m bursts in the generated event sequence, there must be asymptotically the same number of interevent times of τ ≥ 2 in the limit of T, m → ∞. Then, the number of interevent times of τ = 1 is m(⟨b⟩ − 1), where ⟨b⟩ is the average burst size, i.e.,
\langle b \rangle \equiv \sum_{b=1}^{\infty} b\, Q(b).   (6)
The fraction of interevent times of τ ≥ 2, denoted by u, is obtained as
u = \frac{m}{m(\langle b \rangle - 1) + m} = \frac{1}{\langle b \rangle}.   (7)
Using this u, one can write the interevent time distribution for the entire range of interevent times as follows:
P(\tau) = (1 - u)\,\delta(\tau, 1) + u\,\psi(\tau),   (8)
where δ(·,·) is the Kronecker delta.
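As a simple illustration of Eqs. (6)–(8) (our example), take

Q(b) = \frac{1}{2}\,\delta(b, 1) + \frac{1}{2}\,\delta(b, 2) \;\Rightarrow\; \langle b \rangle = \frac{3}{2}, \quad u = \frac{1}{\langle b \rangle} = \frac{2}{3}, \quad P(1) = 1 - u = \frac{1}{3}.

Indeed, m bursts then produce m/2 interevent times of τ = 1 and m interevent times of τ ≥ 2, so the fraction of interevent times equal to one is (m/2)/(3m/2) = 1/3, consistent with Eq. (8).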
The second remark concerns the general case, which includes our model: in general, P(τ) and Q_Δt(b) are not independent of each other. Following Refs. 57–59, let us consider a sequence of n events, from which one obtains n − 1 interevent times. For a given time window Δt, the n events are clustered into, say, m bursty trains, implying that the average burst size is given by
\langle b \rangle = \frac{n}{m}.   (9)
Here, m is closely related to the number of interevent times larger than Δ t as follows:
m = (n - 1) F(\Delta t) + 1,   (10)
where F(Δt) ≡ \int_{\Delta t}^{\infty} d\tau\, P(\tau) is the complementary cumulative distribution function of P(τ), i.e., the probability that an interevent time is larger than Δt. By combining Eqs. (9) and (10), one obtains
\langle b \rangle = \frac{n}{(n - 1) F(\Delta t) + 1}.   (11)
In the limit of n → ∞, we get
\langle b \rangle = \frac{1}{F(\Delta t)}.   (12)
If we assume pure power-law forms of P(τ) in Eq. (3) and of Q_Δt(b) in Eq. (5), we can derive a scaling relation between α and β, possibly undermining our main question about γ(α, β). However, most empirical distributions of interevent times and burst sizes are not pure power laws,8 which enables us to treat α and β as independent parameters to some extent. Later, we will assume that the interevent time distribution has a power-law tail for τ ≥ 2, while the burst size distribution is a pure power law.
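To see the interdependence explicitly, suppose for the moment that P(τ) were a pure power law as in Eq. (3) with α > 1 (an illustrative assumption, not the setting used below); then

F(\Delta t) = \int_{\Delta t}^{\infty} d\tau\, P(\tau) \propto \Delta t^{1-\alpha},

so that Eq. (12) gives ⟨b⟩ = 1/F(Δt) ∝ Δt^{α−1}, i.e., the Δt dependence of the mean burst size would be dictated by α alone. Deviations from a pure power law, such as the separate τ = 1 point in Eq. (8) or the cutoffs introduced below, relax this constraint.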
We analyze the autocorrelation function given by Eq. (1) for the time series {x(1), …, x(T)} generated by the model described in Sec. II. Because A(t_d = 0) = 1 by definition, we consider positive time lags (t_d > 0) unless we state otherwise. For an event sequence composed of n events, one gets the event rate as
\lambda \equiv \frac{n}{T},   (13)
enabling us to write ⟨x(t)²⟩ = ⟨x(t)⟩ = λ because x(t) ∈ {0, 1}. The term ⟨x(t)x(t + t_d)⟩ in Eq. (1) is written as follows:
\langle x(t)\, x(t + t_d) \rangle = \lambda Z(t_d),   (14)
where Z ( t d ) is the probability that x ( t + t d ) = 1 conditioned on x ( t ) = 1. Note that Z ( 0 ) = 1. Then, the autocorrelation function reads
A(t_d) = \frac{Z(t_d) - \lambda}{1 - \lambda}.   (15)
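Explicitly, Eq. (15) follows from inserting ⟨x(t)²⟩ = ⟨x(t)⟩ = λ and Eq. (14) into Eq. (1):

A(t_d) = \frac{\lambda Z(t_d) - \lambda^2}{\lambda - \lambda^2} = \frac{Z(t_d) - \lambda}{1 - \lambda}.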

Let us consider the case in which x(t)x(t + t_d) is non-zero, i.e., x(t) = 1 and x(t + t_d) = 1. As depicted in Fig. 1(a), the time series x(t) in a period [t, t + t_d] is typically composed of several alternating bursts and interevent times larger than one. Here, consecutive interevent times of τ = 1 form a burst, and the sum of such interevent times of τ = 1 is equal to the burst size minus one. Therefore, the time lag t_d can be written as a sum of the burst sizes (each minus one) of the bursts appearing in the period [t, t + t_d] and the interevent times between those bursts. We note that the first burst contains the event at time t and that the last burst contains the event at time t + t_d.

We denote by r the number of events from time t to the end of the burst containing the event at time t. By definition, we exclude the event at time t when counting r, hence r ≥ 0. For example, if a burst is composed of five events occurring at times 1, …, 5 and we consider t = 2, then one obtains r = 3. If we select t such that x(t) = 1 uniformly at random, t is contained in a burst of size b with a probability proportional to b Q(b). Therefore, r is a discrete-time variant of the waiting time or the residual time derived from the interevent time,8 and the probability distribution of r is given by
R(r) = \frac{1}{\langle b \rangle} \sum_{b=r+1}^{\infty} Q(b).   (16)
Note that \sum_{r=0}^{\infty} R(r) = 1. See Appendix A for the derivation of the factor 1/⟨b⟩.
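Equivalently (a short rederivation of Eq. (16) for clarity), the event at time t belongs to a burst of size b with the size-biased probability b Q(b)/⟨b⟩ and is equally likely to occupy any of the b positions within that burst, so that

R(r) = \sum_{b=r+1}^{\infty} \frac{b\, Q(b)}{\langle b \rangle} \cdot \frac{1}{b} = \frac{1}{\langle b \rangle} \sum_{b=r+1}^{\infty} Q(b),

which makes the origin of the factor 1/⟨b⟩ transparent.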
Next, we denote by r′ the number of events that are in the burst containing the event at time t + t_d and occur before or at t + t_d [Fig. 1(b)]. It should be noted that r′ ≥ 1. The definition of r′ implies that the first event of the burst containing the event at t + t_d occurred at time t + t_d − r′ + 1. We denote by q(r′) the probability that x(t + t_d) = 1, conditioned on the first event of the burst containing the event at time t + t_d being located at t + t_d − r′ + 1. For this case to occur, the size of the burst starting at time t + t_d − r′ + 1 has to be larger than or equal to r′. Therefore, one obtains
q(r') = \sum_{b=r'}^{\infty} Q(b).   (17)
To derive Z(t_d) in Eq. (15), we denote by p_k(t_d) the probability that an event occurs at time t + t_d and that there are exactly k alternations between bursts and interevent times larger than one, conditioned on an event occurring at time t. By k alternations, we mean that the event at time t + t_d belongs to the kth burst after the burst to which the event at time t belongs. Note that there are then k interevent times larger than one between the burst at time t and that at time t + t_d. Then, Z(t_d) can be written in terms of p_k(t_d) as
Z(t_d) = \sum_{k=0}^{\infty} p_k(t_d).   (18)
For the case with k = 0, we obtain using Eq. (16)
p_0(t_d) = \sum_{r=t_d}^{\infty} R(r).   (19)
It should be noted that, when counting t d for k = 0, we exclude the event at time t and include the event at time t + t d for consistency. For example, consider a burst of five events at times 1 , , 5. If we are considering the two events at times 2 and 4, we set t = 2 and t d = 2.
If there are k (≥ 1) interevent times larger than one intersecting [t, t + t_d], one can write [Fig. 1(b)]
(20)
where we have defined a reduced interevent time by
\bar{\tau}_i \equiv \tau_i - 1   (21)
for convenience. Note that τ̄_i ≥ 1 for all i because τ_i ≥ 2. We assume that all variables on the right-hand side of Eq. (20) are statistically independent of each other. Then, we can write p_k(t_d) for k ≥ 1 as
(22)
It is also worth noting that, using the reduced interevent time, the event rate is obtained as follows:
(23)
where
(24)
namely, λ is the ratio of the average length of the period of x ( t ) = 1 to the sum of those of x ( t ) = 1 and of x ( t ) = 0 [Fig. 1(b)].
For analytical tractability, we assume that all variables on the right-hand side of Eq. (20) are real numbers. Therefore, ψ(τ̄), Q(b), R(r), q(r′), and p_0(t_d) are also considered as functions of their respective continuous variables. The continuous versions of Eqs. (16) and (17) are given by
R(r) = \frac{1}{\langle b \rangle} \int_{r}^{\infty} db\, Q(b)   (25)
and
q(r') = \int_{r'}^{\infty} db\, Q(b),   (26)
respectively. Furthermore, the continuous-time versions of Eqs. (19) and (22), respectively, read
p_0(t_d) = \int_{t_d}^{\infty} dr\, R(r)   (27)
and
(28)
where δ ( ) is the Dirac delta function.
We take the Laplace transform of Eq. (27) to obtain
(29)
where Q̃(s) denotes the Laplace transform of Q(b). The Laplace transform of Eq. (28) reads, for k ≥ 1,
(30)
where
(31)
(32)
and ψ ~ ( s ) denotes the Laplace transform of ψ ( τ ¯ ). Then, the Laplace transform of Z ( t d ) in Eq. (18) is obtained as
(33)
By taking the inverse Laplace transform of Z ~ ( s ) and then substituting it into Eq. (15), one can obtain the analytical solution of the autocorrelation function for x ( t ).
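As a numerical cross-check of this route (our sketch, independent of the Laplace-transform machinery), Z(t_d) can also be estimated directly from a long generated sequence as the fraction of events that are followed by another event exactly t_d time steps later:

```python
import numpy as np

def estimate_Z(x, max_lag):
    """Estimate Z(t_d) = Pr[x(t + t_d) = 1 | x(t) = 1] from a binary sequence x."""
    x = np.asarray(x, dtype=float)
    Z = np.ones(max_lag + 1)  # Z(0) = 1 by definition
    for td in range(1, max_lag + 1):
        # number of event pairs separated by td, divided by the number of events at the earlier times
        Z[td] = np.dot(x[:-td], x[td:]) / x[:-td].sum()
    return Z
```

Substituting this estimate together with λ = x.mean() into Eq. (15) reproduces the autocorrelation function up to finite-size and boundary effects.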
To demonstrate the above analytical results, we consider a simple case in which both reduced interevent times and burst sizes are real numbers and exponentially distributed. By assuming that
(34)
and
(35)
we derive an exact solution of the autocorrelation function as follows (see  Appendix B):
(36)
Let us return to our original question on temporal scaling behavior. We assume continuous versions of power-law distributions of reduced interevent times and burst sizes as follows:
\psi(\bar{\tau}) = C_\psi \bar{\tau}^{-\alpha} \quad \text{for } 1 \le \bar{\tau} \le \bar{\tau}_c, \text{ and } 0 \text{ otherwise},   (37)
Q(b) = C_Q b^{-\beta} \quad \text{for } 1 \le b \le b_c, \text{ and } 0 \text{ otherwise}.   (38)
Here α, β > 1 are power-law exponents, and τ̄_c and b_c are cutoffs. Also, C_ψ ≡ (α − 1)/(1 − τ̄_c^{1−α}) and C_Q ≡ (β − 1)/(1 − b_c^{1−β}) are normalization constants. Then, we will derive the analytical result of the decay exponent γ of the autocorrelation function as a function of α and β, i.e., γ(α, β).
We first prove a useful property that γ is symmetric with respect to the exchange of α and β, namely,
\gamma(\alpha, \beta) = \gamma(\beta, \alpha).   (39)
To prove this property, let us consider a complementary event sequence y ( t ) to the original event sequence x ( t ), which is defined as
y(t) \equiv 1 - x(t).   (40)
The autocorrelation function defined for y ( t ) using the formula in Eq. (1) turns out to be the same as the autocorrelation function for x ( t ),
A_y(t_d) = A_x(t_d).   (41)
That is, the decay exponent of the autocorrelation function for x(t) must be the same as that for y(t). By the definition of y(t) in Eq. (40), the periods of x(t) = 0 correspond to those of y(t) = 1 and vice versa. This means that reduced interevent times and burst sizes in x(t) correspond, respectively, to burst sizes and reduced interevent times in y(t), completing the proof.
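This symmetry can also be checked numerically: y(t) = 1 − x(t) is an affine transformation of x(t), so any normalized (Pearson-type) autocorrelation estimator returns identical values for the two sequences. A minimal sketch (the i.i.d. binary sequence below is only a placeholder for a model-generated one):

```python
import numpy as np

def autocorr(x, td):
    """Normalized autocorrelation of x at lag td."""
    x = np.asarray(x, dtype=float)
    x0, x1 = x[:-td], x[td:]
    return np.mean((x0 - x0.mean()) * (x1 - x1.mean())) / (x0.std() * x1.std())

rng = np.random.default_rng(1)
x = (rng.random(10**5) < 0.2).astype(float)  # placeholder binary time series
y = 1.0 - x                                  # complementary sequence, Eq. (40)
for td in (1, 10, 100):
    assert np.isclose(autocorr(x, td), autocorr(y, td))
```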
Under Eq. (37), the average of τ ¯ is given by
(42)
Similarly, under Eq. (38), the average of b reads
(43)
Note that the event rate λ in Eq. (23) is determined by the above ⟨τ̄⟩ and ⟨b⟩.

We now divide the entire range of α , β > 1 into several cases to derive the analytical solution of the autocorrelation function in each case. Considering the symmetric nature of γ in Eq. (39), the following cases are sufficient to get the complete picture of the result.

When α = β = 2, we get the Laplace transforms of ψ(τ̄) in Eq. (37) and Q(b) in Eq. (38) in the limit of s → 0 as (see Appendix C)
\tilde{\psi}(s) \approx 1 + s \ln s,   (44)
\tilde{Q}(s) \approx 1 + s \ln s.   (45)
By substituting Eq. (45) in Eqs. (29), (31), and (32), we obtain
(46)
(47)
(48)
Using Eq. (33), after some algebra, one obtains
(49)
where ≈ represents "approximately equal to," leading to its inverse Laplace transform as
(50)
with γ_EM ≈ 0.5772 denoting the Euler–Mascheroni constant.60 Using Eq. (15), one obtains
(51)
thereby enabling us to conclude that
(52)
The Laplace transform of ψ(τ̄) for α ≠ 2 is written as
(53)
where Γ(·,·) is the upper incomplete Gamma function. For the intermediate range of s, i.e., 1/τ̄_c ≪ s ≪ 1, we obtain C_ψ ≈ α − 1 and Γ(1 − α, sτ̄_c) ≈ 0, resulting in
(54)
We expand Eq. (54) in the limit of s → 0 to obtain
(55)
where a s ( α 1 ) Γ ( 1 α ) and a 1 α 1 2 α. Here Γ ( ) is the Gamma function. As for Q ~ ( s ) and other functions derived from Q ~ ( s ), we keep using Eqs. (45)–(48). For 1 < α < 2, after some algebra, one obtains
(56)
implying
(57)
Using Eq. (15) we obtain
(58)
hence
(59)
For α > 2, we obtain
(60)
Although the inverse Laplace transform of ln s does not exist, we can still conclude that
(61)
Thanks to the symmetric property of γ in Eq. (39), one concludes that
(62)
We first study the case with α ≤ β; the solution in the case of α > β will then be obtained via Eq. (39). Similarly to the case of ψ̃(s) in Eqs. (54) and (55), we get the expanded Q̃(s) for the intermediate range of s, i.e., 1/b_c ≪ s ≪ 1, as follows:
(63)
where c s ( β 1 ) Γ ( 1 β ), c 1 β 1 2 β, c 2 β 1 2 ( 3 β ), and c 3 β 1 6 ( 4 β ). Again using Eqs. (29), (31), and (32), we obtain
(64)
(65)
(66)
For the case with α < β, after some algebra, we obtain Z ~ ( s ) in Eq. (33) up to the leading terms as follows:
(67)
Obviously, the last term on the right-hand side of Eq. (67) is dominated by the second term. We find that for the intermediate range of s, specifically, 1/b_c ≪ s ≪ 1, the second term is dominated by the first term because
(68)
Finally, for the first term in Eq. (67), since c 1 b for b c 1, we obtain up to the second leading term
(69)
which leads to
(70)
Note that the coefficient of the term t d 2 β is negative for the range of 1 < β < 2. By substituting Eq. (70) in Eq. (15), we obtain
(71)
In the case with α = β, we similarly obtain the autocorrelation function as follows:
(72)
Therefore, we conclude that
(73)
Owing to the symmetric nature of γ ( α , β ) in Eq. (39), we further conclude that
(74)
For the case with α, β > 2, we use the expanded ψ̃(s) in Eq. (55) and the expanded Q̃(s) in Eq. (63) in the limit of s → 0. Since a_1 = ⟨τ̄⟩ for α > 2 and τ̄_c ≫ 1 [Eq. (42)], we replace a_1 by ⟨τ̄⟩. Similarly, we replace c_1 by ⟨b⟩. Using Eqs. (29), (31), and (32), we obtain
(75)
(76)
(77)
After some algebra, we derive Z ~ ( s ) in Eq. (33) up to the leading terms as
(78)
leading to its inverse Laplace transform as
(79)
Since the constant term on the right-hand side of Eq. (79) is canceled with λ in Eq. (23), one obtains from Eq. (15)
(80)
enabling us to find the scaling relation
(81)
Note that this γ is symmetric with respect to the exchange of α and β.
When 1 < α < 2 and β > 2, by combining the expanded ψ ~ ( s ) given by Eq. (55), the expanded Q ~ ( s ) given by Eq. (63), and related functions given by Eqs. (75)–(77), we obtain up to the leading terms
(82)
The inverse Laplace transform of Eq. (82) results in
(83)
Since ⟨τ̄⟩ diverges for α < 2 and τ̄_c ≫ 1, one obtains the vanishing event rate, i.e., λ = 0 [Eq. (23)]. Thus, A(t_d) = Z(t_d) from Eq. (15), and we get the scaling relation
(84)
Again thanks to the symmetric nature of γ, we conclude that
(85)
In sum, we have derived the power-law exponent γ of the autocorrelation function decay as a function of the interevent time power-law exponent α and burst-size power-law exponent β for the entire range of α , β > 1 as follows:
(86)
We depict the result in Fig. 2. Note that, for the case of β 1, the scaling relation in Eq. (4) is partly recovered.
FIG. 2.

Visualization of the main result, i.e., γ ( α , β ), given by Eq. (86). Here γ is a power-law exponent for the decaying behavior of the autocorrelation function, α is a power-law exponent of the interevent time distribution, and β is a power-law exponent of the burst size distribution.

To test the validity of our analytical solution given by Eq. (86), we generate the event sequence {x(1), …, x(T)} using the following distributions of reduced interevent times and burst sizes:
\psi(\bar{\tau}) = C_\psi \bar{\tau}^{-\alpha} \quad \text{for } \bar{\tau} \in \{1, \ldots, \bar{\tau}_c\}, \text{ and } 0 \text{ otherwise},   (87)
Q(b) = C_Q b^{-\beta} \quad \text{for } b \in \{1, \ldots, b_c\}, \text{ and } 0 \text{ otherwise},   (88)
where C_ψ = (\sum_{\bar{\tau}=1}^{\bar{\tau}_c} \bar{\tau}^{-\alpha})^{-1} and C_Q = (\sum_{b=1}^{b_c} b^{-\beta})^{-1}. Precisely, we randomly draw a burst size b_1 from Q(b) in Eq. (88) to set x(t) = 1 for t = 1, …, b_1. Then a reduced interevent time τ̄_1 is randomly drawn from ψ(τ̄) in Eq. (87) to set x(t) = 0 for t = b_1 + 1, …, b_1 + τ̄_1. We draw another burst size b_2 from Q(b) and another reduced interevent time τ̄_2 from ψ(τ̄), respectively, to set x(t) = 1 for t = b_1 + τ̄_1 + 1, …, b_1 + τ̄_1 + b_2 and x(t) = 0 for t = b_1 + τ̄_1 + b_2 + 1, …, b_1 + τ̄_1 + b_2 + τ̄_2. We repeat this procedure until t reaches T.
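A sketch of samplers for the truncated discrete power laws in Eqs. (87) and (88), using inverse transform sampling over the finite support (function and variable names are ours); the samples τ̄ directly give the number of consecutive zeros between bursts in the procedure just described:

```python
import numpy as np

def make_discrete_powerlaw_sampler(exponent, cutoff):
    """Sampler for P(k) proportional to k**(-exponent) on k = 1, ..., cutoff."""
    k = np.arange(1, cutoff + 1, dtype=float)
    cdf = np.cumsum(k ** (-exponent))
    cdf /= cdf[-1]
    def sample(rng):
        return int(np.searchsorted(cdf, rng.random())) + 1
    return sample

# illustrative parameters; the simulations below use cutoffs of 10**6
alpha, beta, cutoff = 2.5, 2.5, 10**4
draw_reduced_iet = make_discrete_powerlaw_sampler(alpha, cutoff)  # tau_bar from psi, Eq. (87)
draw_burst_size = make_discrete_powerlaw_sampler(beta, cutoff)    # b from Q, Eq. (88)
```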
Using the generated time series {x(1), …, x(T)}, we numerically calculate the autocorrelation function by
\tilde{A}(t_d) = \frac{\frac{1}{T - t_d} \sum_{t=1}^{T - t_d} x(t)\, x(t + t_d) - \lambda_1 \lambda_2}{\sigma_1 \sigma_2},   (89)
where λ_1 and σ_1 are, respectively, the average and standard deviation of {x(1), …, x(T − t_d)}, and λ_2 and σ_2 are, respectively, the average and standard deviation of {x(t_d + 1), …, x(T)}.
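The estimator can be transcribed as follows (our sketch, following the description of Eq. (89) above, with λ_i and σ_i the means and standard deviations of the two length-(T − t_d) segments):

```python
import numpy as np

def autocorrelation(x, td):
    """Numerical autocorrelation of Eq. (89) for the binary sequence x at lag td > 0."""
    x = np.asarray(x, dtype=float)
    x0, x1 = x[:len(x) - td], x[td:]  # {x(1), ..., x(T - t_d)} and {x(t_d + 1), ..., x(T)}
    lam1, lam2 = x0.mean(), x1.mean()
    sig1, sig2 = x0.std(), x1.std()
    return (np.mean(x0 * x1) - lam1 * lam2) / (sig1 * sig2)
```

Averaging this quantity over independently generated sequences yields the simulation curves discussed below.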

For the simulations, we use T = 5 × 10^7 and τ̄_c = b_c = 10^6 to generate 100 different event sequences x(t) for each combination of α and β. Then, their autocorrelation functions are calculated using Eq. (89). As shown in Fig. 3, the simulation results in terms of Ã(t_d) for several combinations of α and β are in good agreement with the corresponding analytical solutions of A(t_d) in most cases. In some cases, we observe systematic deviations of the analytical solutions from the simulation results, which may be due to the neglect of higher-order terms when deriving the analytical solutions and/or finite-size effects in the simulations.

FIG. 3.

Numerical autocorrelation functions Ã(t_d) defined by Eq. (89) (symbols), calculated for the time series x(t) generated using the model with various combinations of α and β. Each autocorrelation function curve was obtained from 100 event sequences generated with T = 5 × 10^7 and τ̄_c = b_c = 10^6. Error bars denote standard errors. These simulation results are compared with the corresponding analytical solutions of A(t_d) (solid lines), which are, respectively, Eqs. (51), (58), (71), (72), (80), and (83), and some of these equations with α and β exchanged (τ̄_c and b_c exchanged too whenever necessary). Note that there are no analytical solutions for the case with α = 2 and β > 2 [Eq. (60)]. In all panels, some curves are vertically shifted for better visualization.


To study the combined effects of the interevent time distribution P(τ) and the burst size distribution Q(b) on the autocorrelation function A(t_d), we have devised a model for generating time series using P(τ) and Q(b) as inputs. Our model is simple but takes correlations between an arbitrary number of consecutive interevent times into account in terms of bursty trains.42 We are primarily interested in the temporal scaling behaviors observed in A(t_d) ∼ t_d^{−γ} when P(τ) ∼ τ^{−α} (except at τ = 1) and Q(b) ∼ b^{−β} are assumed. We have derived the analytical solutions of A(t_d) for arbitrary values of the interevent time power-law exponent α > 1 and the burst-size power-law exponent β > 1 to obtain the autocorrelation decay power-law exponent γ as a function of α and β [Eq. (86); see also Fig. 2].

We remark that our model has assumed that the interevent times with τ ≥ 2 and the burst sizes in the time series are independent of each other. However, there are observations indicating the presence of correlations between consecutive burst sizes and even higher-order temporal structure.48 Thus, it would be interesting to see whether such higher-order structure affects the decaying behavior of the autocorrelation function.

So far, we have focused on the analysis of a single time series observed for a single phenomenon or for a system as a whole. However, there are other complex systems in which each element has its own bursty activity pattern or pairs of elements have their own bursty interaction patterns, such as calling patterns between mobile phone users.61,62 In recent years, such systems have been studied in the framework of temporal networks,9,63,64 where interaction events are heterogeneously distributed among elements as well as over the time axis. Modeling temporal networks is important for understanding collective dynamics, such as spreading or diffusion,65,66 taking place on those networks. Some recent efforts for modeling temporal networks are mostly concerned with heavy-tailed interevent time distributions for each element or pair of elements.67,68 Our model can be extended to generate more realistic temporal networks in which the activity patterns of elements or the interaction patterns between elements are characterized by bursty time series with higher-order temporal structure beyond the interevent time distribution.

H.-H.J. and T.B. acknowledge financial support by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1A2C1007358). N.M. acknowledges financial support by the Japan Science and Technology Agency (JST) Moonshot R&D (under Grant No. JPMJMS2021), the National Science Foundation (under Grant Nos. 2052720 and 2204936), and JSPS KAKENHI (under Grant Nos. JP 21H04595 and 23H03414).

The authors have no conflicts to disclose.

Hang-Hyun Jo: Conceptualization (lead); Formal analysis (lead); Funding acquisition (equal); Investigation (equal); Software (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (equal). Tibebe Birhanu: Formal analysis (supporting); Validation (equal); Writing – review & editing (equal). Naoki Masuda: Formal analysis (equal); Funding acquisition (equal); Writing – original draft (equal); Writing – review & editing (equal).

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Let us write R ( r ) as follows:
R(r) = C_R \sum_{b=r+1}^{\infty} Q(b).   (A1)
Then, we derive the normalization constant C R from the normalization condition for R ( r ),
\sum_{r=0}^{\infty} R(r) = C_R \sum_{r=0}^{\infty} \sum_{b=r+1}^{\infty} Q(b) = 1.   (A2)
Two summations on the right-hand side can be exchanged as
\sum_{r=0}^{\infty} \sum_{b=r+1}^{\infty} Q(b) = \sum_{b=1}^{\infty} \sum_{r=0}^{b-1} Q(b) = \sum_{b=1}^{\infty} b\, Q(b) = \langle b \rangle,   (A3)
leading to
C_R \langle b \rangle = 1.   (A4)
Finally, one obtains
C_R = \frac{1}{\langle b \rangle}.   (A5)
By substituting Q ( b ) in Eq. (35) in Eqs. (25)–(27), we obtain
(B1)
(B2)
(B3)
We take the Laplace transform of Eqs. (34), (35), and (B1)–(B3) to obtain
(B4)
(B5)
(B6)
Using Eq. (33), one obtains
(B7)
The inverse Laplace transform of Eq. (B7) is given by
(B8)
which leads to
(B9)
where
(B10)
We take the Laplace transform of ψ ( τ ¯ ) with α = 2 in Eq. (37),
\tilde{\psi}(s) = C_\psi \int_{1}^{\bar{\tau}_c} d\bar{\tau}\, \bar{\tau}^{-2} e^{-s\bar{\tau}},   (C1)
where C_ψ = τ̄_c/(τ̄_c − 1). For τ̄_c ≫ 1, one obtains C_ψ ≈ 1. Then,
\tilde{\psi}(s) \approx \int_{1}^{\bar{\tau}_c} d\bar{\tau}\, \bar{\tau}^{-2} e^{-s\bar{\tau}}.   (C2)
The change of the integration variable from τ̄ to v ≡ sτ̄ yields
(C3)
We have assumed that sτ̄_c ≫ 1. We take the limit of s → 0 after dividing 1 − ψ̃(s) by s ln s,
(C4)
(C5)
(C6)
(C7)
where we have used L’Hôpital’s rule for the derivation.38 Therefore, in the limit of s → 0, we obtain
\tilde{\psi}(s) \approx 1 + s \ln s.   (C8)
Similarly, we obtain Q̃(s) ≈ 1 + s ln s.
1. F. N. Hooge, T. G. M. Kleinpenning, and L. K. J. Vandamme, "Experimental studies on 1/f noise," Rep. Prog. Phys. 44, 479–532 (1981).
2. M. B. Weissman, "1/f noise and other slow, nonexponential kinetics in condensed matter," Rev. Mod. Phys. 60, 537–571 (1988).
3. P. Bak, C. Tang, and K. Wiesenfeld, "Self-organized criticality: An explanation of the 1/f noise," Phys. Rev. Lett. 59, 381–384 (1987).
4. P. Bak, How Nature Works: The Science of Self-Organized Criticality (Copernicus, New York, NY, 1996).
5. P. Allegrini, D. Menicucci, R. Bedini, L. Fronzoni, A. Gemignani, P. Grigolini, B. J. West, and P. Paradisi, "Spontaneous brain activity as a source of ideal 1/f noise," Phys. Rev. E 80, 061914 (2009).
6. H. Kantz and T. Schreiber, Nonlinear Time Series Analysis, 2nd ed., Cambridge Nonlinear Science Series (Cambridge University Press, Cambridge, 2004).
7. J. W. Kantelhardt, "Fractal and multifractal time series," in Mathematics of Complexity and Dynamical Systems, edited by R. A. Meyers (Springer, New York, NY, 2012), pp. 463–487.
8. M. Karsai, H.-H. Jo, and K. Kaski, Bursty Human Dynamics (Springer International Publishing, Cham, 2018).
9. N. Masuda and R. Lambiotte, A Guide to Temporal Networks, 2nd ed., Series on Complexity Science Vol. 6 (World Scientific, NJ, 2021).
10. R. M. Fano, "Short-time autocorrelation functions and power spectra," J. Acoust. Soc. Am. 22, 546–550 (1950).
11. J. W. Kantelhardt, E. Koscielny-Bunde, H. H. A. Rego, S. Havlin, and A. Bunde, "Detecting long-range correlations with detrended fluctuation analysis," Phys. A: Stat. Mech. Appl. 295, 441–454 (2001).
12. R. Brunetti and C. Jacoboni, "Analysis of the stationary and transient autocorrelation function in semiconductors," Phys. Rev. B 29, 5739–5748 (1984).
13. E. Koscielny-Bunde, H. Eduardo Roman, A. Bunde, S. Havlin, and H.-J. Schellnhuber, "Long-range power-law correlations in local daily temperature fluctuations," Philos. Mag. B 77, 1331–1340 (1998).
14. W. Min, G. Luo, B. J. Cherayil, S. C. Kou, and X. S. Xie, "Observation of a power-law memory kernel for fluctuations within a single protein molecule," Phys. Rev. Lett. 94, 198302 (2005).
15. N. Leibovich, A. Dechant, E. Lutz, and E. Barkai, "Aging Wiener–Khinchin theorem and critical exponents of 1/f^β noise," Phys. Rev. E 94, 052130 (2016).
16. H. E. Hurst, "The problem of long-term storage in reservoirs," Int. Assoc. Sci. Hydrol. Bull. 1, 13–27 (1956).
17. C.-K. Peng, S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger, "Mosaic organization of DNA nucleotides," Phys. Rev. E 49, 1685–1689 (1994).
18. J. Barunik and L. Kristoufek, "On Hurst exponent estimation under heavy-tailed distributions," Phys. A: Stat. Mech. Appl. 389, 3844–3855 (2010).
19. D. Rybski, S. V. Buldyrev, S. Havlin, F. Liljeros, and H. A. Makse, "Scaling laws of human interaction activity," Proc. Natl. Acad. Sci. U.S.A. 106, 12640–12645 (2009).
20. D. Rybski, S. V. Buldyrev, S. Havlin, F. Liljeros, and H. A. Makse, "Communication activity in a social network: Relation between long-term correlations and inter-event clustering," Sci. Rep. 2, 560 (2012).
21. L. Tang, H. Lv, F. Yang, and L. Yu, "Complexity testing techniques for time series data: A comprehensive literature review," Chaos Solitons Fractals 81, 117–135 (2015).
22. D. J. Daley and D. Vere-Jones, An Introduction to the Theory of Point Processes, 2nd ed. (Springer, New York, 2003).
23. P. Bak, K. Christensen, L. Danon, and T. Scanlon, "Unified scaling law for earthquakes," Phys. Rev. Lett. 88, 178501 (2002).
24. Á. Corral, "Long-term clustering, scaling, and universality in the temporal occurrence of earthquakes," Phys. Rev. Lett. 92, 108501 (2004).
25. A.-L. Barabási, "The origin of bursts and heavy tails in human dynamics," Nature 435, 207–211 (2005).
26. L. de Arcangelis, C. Godano, E. Lippiello, and M. Nicodemi, "Universality in solar flare and earthquake occurrence," Phys. Rev. Lett. 96, 051102 (2006).
27. A. Vázquez, J. G. Oliveira, Z. Dezsö, K.-I. Goh, I. Kondor, and A.-L. Barabási, "Modeling bursts and heavy tails in human dynamics," Phys. Rev. E 73, 036127 (2006).
28. C. Bédard, H. Kröger, and A. Destexhe, "Does the 1/f frequency scaling of brain signals reflect self-organized critical states?," Phys. Rev. Lett. 97, 118102 (2006).
29. M. I. Bogachev, J. F. Eichner, and A. Bunde, "Effect of nonlinear correlations on the statistics of return intervals in multifractal data sets," Phys. Rev. Lett. 99, 240601 (2007).
30. R. D. Malmgren, D. B. Stouffer, A. E. Motter, and L. A. N. Amaral, "A Poissonian explanation for heavy tails in e-mail communication," Proc. Natl. Acad. Sci. U.S.A. 105, 18153–18158 (2008).
31. R. D. Malmgren, D. B. Stouffer, A. S. L. O. Campanharo, and L. A. Amaral, "On universality in human correspondence activity," Science 325, 1696–1700 (2009).
32. T. Kemuriyama, H. Ohta, Y. Sato, S. Maruyama, M. Tandai-Hiruma, K. Kato, and Y. Nishida, "A power-law distribution of inter-spike intervals in renal sympathetic nerve activity in salt-sensitive hypertension-induced chronic heart failure," BioSystems 101, 144–147 (2010).
33. Y. Wu, C. Zhou, J. Xiao, J. Kurths, and H. J. Schellnhuber, "Evidence for a bimodal distribution in human communication," Proc. Natl. Acad. Sci. U.S.A. 107, 18803–18808 (2010).
34. Y. Tsubo, Y. Isomura, and T. Fukai, "Power-law inter-spike interval distributions infer a conditional maximization of entropy in cortical neurons," PLoS Comput. Biol. 8, e1002461 (2012).
35. M. Kivelä and M. A. Porter, "Estimating inter-event time distributions from finite observation periods in communication networks," Phys. Rev. E 92, 052813 (2015).
36. Y. Gandica, J. Carvalho, F. Sampaio dos Aidos, R. Lambiotte, and T. Carletti, "Stationarity of the inter-event power-law distributions," PLoS One 12, e0174509 (2017).
37. S. B. Lowen and M. C. Teich, "Fractal renewal processes generate 1/f noise," Phys. Rev. E 47, 992–1001 (1993).
38. S. B. Lowen and M. C. Teich, Fractal-Based Point Processes (Wiley-Interscience, Hoboken, NJ, 2005).
39. S. Abe and N. Suzuki, "Violation of the scaling relation and non-Markovian nature of earthquake aftershocks," Phys. A: Stat. Mech. Appl. 388, 1917–1920 (2009).
40. S. Vajna, B. Tóth, and J. Kertész, "Modelling bursty time series," New J. Phys. 15, 103023 (2013).
41. B.-H. Lee, W.-S. Jung, and H.-H. Jo, "Hierarchical burst model for complex bursty dynamics," Phys. Rev. E 98, 022316 (2018).
42. M. Karsai, K. Kaski, A.-L. Barabási, and J. Kertész, "Universal features of correlated bursty behaviour," Sci. Rep. 2, 397 (2012).
43. M. Karsai, K. Kaski, and J. Kertész, "Correlated dynamics in egocentric communication networks," PLoS One 7, e40612 (2012).
44. T. Yasseri, R. Sumi, A. Rung, A. Kornai, and J. Kertész, "Dynamics of conflicts in Wikipedia," PLoS One 7, e38869 (2012).
45. Z.-Q. Jiang, W.-J. Xie, M.-X. Li, B. Podobnik, W.-X. Zhou, and H. E. Stanley, "Calling patterns in human communication dynamics," Proc. Natl. Acad. Sci. U.S.A. 110, 1600–1605 (2013).
46. R. Kikas, M. Dumas, and M. Karsai, "Bursty egocentric network evolution in Skype," Soc. Netw. Anal. Min. 3, 1393–1401 (2013).
47. W. Wang, N. Yuan, L. Pan, P. Jiao, W. Dai, G. Xue, and D. Liu, "Temporal patterns of emergency calls of a metropolitan city in China," Phys. A: Stat. Mech. Appl. 436, 846–855 (2015).
48. H.-H. Jo, T. Hiraoka, and M. Kivelä, "Burst-tree decomposition of time series reveals the structure of temporal correlations," Sci. Rep. 10, 12202 (2020).
49. K.-I. Goh and A.-L. Barabási, "Burstiness and memory in complex systems," EPL (Europhys. Lett.) 81, 48002 (2008).
50. S. Shinomoto, K. Shima, and J. Tanji, "Differences in spiking patterns among cortical neurons," Neural Comput. 15, 2823–2842 (2003).
51. S. K. Baek, T. Y. Kim, and B. J. Kim, "Testing a priority-based queue model with Linux command histories," Phys. A: Stat. Mech. Appl. 387, 3660–3668 (2008).
52. H.-H. Jo, B.-H. Lee, T. Hiraoka, and W.-S. Jung, "Copula-based algorithm for generating bursty time series," Phys. Rev. E 100, 022307 (2019).
53. H.-H. Jo, "Analytically solvable autocorrelation function for weakly correlated interevent times," Phys. Rev. E 100, 012306 (2019).
54. N. Masuda and L. E. C. Rocha, "A Gillespie algorithm for non-Markovian stochastic processes," SIAM Rev. 60, 95–115 (2018).
55. N. Masuda and C. L. Vestergaard, Gillespie Algorithms for Stochastic Multiagent Dynamics in Populations and Networks (Cambridge University Press, Cambridge, 2022).
56. H.-H. Jo, J. I. Perotti, K. Kaski, and J. Kertész, "Correlated bursts and the role of memory range," Phys. Rev. E 92, 022814 (2015).
57. H.-H. Jo, "Modeling correlated bursts by the bursty-get-burstier mechanism," Phys. Rev. E 96, 062131 (2017).
58. H.-H. Jo and T. Hiraoka, "Limits of the memory coefficient in measuring correlated bursts," Phys. Rev. E 97, 032121 (2018).
59. H.-H. Jo and T. Hiraoka, "Bursty time series analysis for temporal networks," in Temporal Network Theory, 2nd ed., edited by P. Holme and J. Saramäki (Springer International Publishing, Cham, 2023), pp. 165–183.
60. See http://dlmf.nist.gov/ for "NIST Digital Library of Mathematical Functions" (Release 1.2.0 of 2024-03-15).
61. H.-H. Jo, M. Karsai, J. Kertész, and K. Kaski, "Circadian pattern and burstiness in mobile phone communication," New J. Phys. 14, 013055 (2012).
62. J. Saramäki and E. Moro, "From seconds to months: An overview of multi-scale dynamics of mobile telephone calls," Eur. Phys. J. B 88, 164 (2015).
63. P. Holme and J. Saramäki, "Temporal networks," Phys. Rep. 519, 97–125 (2012).
64. Temporal Network Theory, 2nd ed., Computational Social Sciences, edited by P. Holme and J. Saramäki (Springer International Publishing, Cham, 2023).
65. L. E. C. Rocha and V. D. Blondel, "Bursts of vertex activation and epidemics in evolving networks," PLoS Comput. Biol. 9, e1002974 (2013).
66. R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani, "Epidemic processes in complex networks," Rev. Mod. Phys. 87, 925–979 (2015).
67. T. Hiraoka, N. Masuda, A. Li, and H.-H. Jo, "Modeling temporal networks with bursty activity patterns of nodes and links," Phys. Rev. Res. 2, 023073 (2020).
68. A. Sheng, Q. Su, A. Li, L. Wang, and J. B. Plotkin, "Constructing temporal networks with bursty activity patterns," Nat. Commun. 14, 7311 (2023).