Scaling regions—intervals on a graph where the dependent variable depends linearly on the independent variable—abound in dynamical systems, notably in calculations of invariants like the correlation dimension or a Lyapunov exponent. In these applications, scaling regions are generally selected by hand, a process that is subjective and often challenging due to noise, algorithmic effects, and confirmation bias. In this paper, we propose an automated technique for extracting and characterizing such regions. Starting with a two-dimensional plot—e.g., the values of the correlation integral, calculated using the Grassberger–Procaccia algorithm over a range of scales—we create an ensemble of intervals by considering all possible combinations of end points, generating a distribution of slopes from least squares fits weighted by the length of the fitting line and the inverse square of the fit error. The mode of this distribution gives an estimate of the slope of the scaling region (if it exists). The end points of the intervals that correspond to the mode provide an estimate for the extent of that region. When there is no scaling region, the distributions will be wide and the resulting error estimates for the slope will be large. We demonstrate this method for computations of dimension and Lyapunov exponent for several dynamical systems and show that it can be useful in selecting values for the parameters in time-delay reconstructions.
I. INTRODUCTION

Many problems in nonlinear dynamics involve the identification of regions of constant slope in plots of some quantity as a function of a length or time scale. The slopes of these scaling regions, if they exist, allow one to estimate quantities such as a fractal dimension or a Lyapunov exponent. In practice, identifying scaling regions is not always straightforward. Issues with the data and/or the algorithms often cause the linear relationship to be valid only for a limited range in such a plot, if it exists at all. Noise, geometry, and data quality can disturb its shape in various ways, and even create multiple scaling regions. Often the presence of a scaling region, and the end points of its range, are determined by eye,1,2 a process that is subjective and not immune to confirmation bias.3–6 Worse yet, we know of no formal results about the relationship between the width of the scaling region and the validity of the resulting estimate; often, practitioners simply use the heuristic notion that “wider is better.” In this work, we propose an ensemble-based method to objectively identify and quantify scaling regions, thereby addressing some of the issues raised above.
To identify a scaling region in a plot, our method begins by generating multiple fits using intervals of different lengths and positions along the entire range of the data set. Penalizing the lower-quality fits—those that are shorter or have a higher error—we obtain a weighted distribution of the slopes across all possible intervals on the curve. As we will demonstrate by example, the slope of a scaling region that is broad and straight manifests as a dominant mode in the weighted distribution, allowing easy identification of an optimal slope. Moreover, the extent of the scaling region is represented by the modes of similar distributions that assess the fit as a function of the interval end points. As we will show, for a long, straight scaling region, markers closer to the end points of the scaling region are more frequently sampled (due to the combinatorial nature of sampling) and have a higher weighting (due to longer lengths of the fits). Thus, the method gives the largest reasonable scaling interval for the optimal fit along with error bounds on the estimate as computed from the distributions.
Section II covers the details of this ensemble-based method and illustrates its application with three examples. In Sec. III, we demonstrate the approach on several dynamical systems problems, starting in Sec. III A with the estimation of correlation dimension for two canonical examples. We show that our method can accurately detect spurious scaling regions due to noise and other effects and that it can be useful in systematically estimating the embedding dimension for delay reconstruction. We then demonstrate that this method is also useful for calculating other dynamical invariants, such as the Lyapunov exponent (Sec. III B). We present the conclusions in Sec. IV.
II. CHARACTERIZING SCALING REGIONS
A scaling law “…describe[s] the functional relationship between two physical quantities that scale with each other over a significant interval.” (https://www.nature.com/subjects/scaling-laws). Such a scaling law manifests as a straight segment on a two-dimensional graph of these physical quantities, known as a scaling region. In practical situations, the data will typically not lie exactly on a line or may do so only over a limited interval [see, e.g., the synthetic example in Fig. 1(a)]. These data are obtained from a function , where is piecewise linear, with slope in the region ,
This function exhibits a scaling region in the range that is bounded by intervals with zero slope: a shape that is similar to what is often seen in computations of dynamical invariants even for well-sampled, noise-free data. To make the problem less trivial, we add a damped oscillation,
to . The data set for this example is taken to be on with a sampling interval .
Extracting a scaling region using an ensemble-based approach. (a) A synthetic example with a linear range that is bounded below by oscillations and above by a flat interval. (b) Weighted distribution of slopes (orange histogram) generated from an ensemble of fits in different intervals of this plot. A kernel density estimation for the probability density is shown in blue, with its mode marked by the black line. The horizontal axis is clipped to [4,6]. (c) Probability distributions of the interval end points from the ensemble, with the same weighting as (b).
To find a scaling region in a diagram like Fig. 1(a), one would normally choose two points—markers and for the bounds of the linear region—then fit a line to the interval . A primary challenge is to identify the appropriate interval. For Fig. 1(a), the range appears to be linear, but should this range extend to smaller ? One could certainly argue that the lower boundary could be instead of , but it would be harder to defend a choice of because of the growing oscillations.
Several approaches have been proposed for formalizing this procedure in the context of the largest Lyapunov exponent (LLE) problem. These include what Ref. 6 calls the “small data method”—used, for example, in Ref. 7—which is efficient but suffers from the subjectivity problem described above; an object-searching method of the “maximum nonfluctuant,” which often reports a locally optimal solution;8 and a fuzzy C-means clustering approach,9 which also has limitations since the number of clusters must be pre-specified by the user.
Our approach formalizes the selection of a scaling interval by first choosing all possible “sensible” combinations of left- and right-hand-side marker pairs. Specifically, given a sequence of data points at , we choose and such that
so that the left marker must be below the right one () and there must also be at least data points on which to perform the linear fit. For each pair , we perform a least squares fit using the data bounded by the pair to compute both the slope and the least squares fit error. To suppress the influence of intervals with poor fits, we construct a histogram of their slopes, weighting the frequency of each directly by the length of the fit and inversely by the square of the associated fitting error. Finally, we use a kernel density estimator to fit a smooth probability density function (PDF) to the histogram. The details are presented in Appendix A, along with a discussion of the choices for and the exponents associated with the fit width and the fitting error (which are set to 1 and 2, respectively, in the experiments reported here).
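A minimal sketch of this ensemble construction, assuming NumPy and SciPy; the function name, the min_pts cutoff, and the use of the mean-squared fit error are our own illustrative choices rather than the paper's exact implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

def ensemble_slope_pdf(x, y, min_pts=5):
    """Fit a line to every interval with at least min_pts points, weight each
    slope by (fit length) / (fit error)^2, and return the weighted slope mode."""
    n = len(x)
    slopes, weights = [], []
    for i in range(n - min_pts + 1):
        for j in range(i + min_pts - 1, n):
            xi, yi = x[i:j + 1], y[i:j + 1]
            coeffs, res, *_ = np.polyfit(xi, yi, 1, full=True)
            # mean-squared fit error, floored to avoid dividing by zero on perfect fits
            err2 = max(res[0] / len(xi), 1e-12) if res.size else 1e-12
            slopes.append(coeffs[0])
            weights.append((x[j] - x[i]) / err2)
    slopes, weights = np.asarray(slopes), np.asarray(weights)
    kde = gaussian_kde(slopes, weights=weights)   # smooth weighted PDF
    grid = np.linspace(slopes.min(), slopes.max(), 1000)
    mode = grid[np.argmax(kde(grid))]
    return slopes, weights, mode
```

On data with a clear linear stretch, the mode of the returned PDF recovers the slope of that stretch while the flat or oscillatory ends contribute only low-weight fits.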
A central claim in this paper is that the resulting distribution contains useful information about the existence, extent, and slope of the scaling region. The slopes for which the PDF is large represent estimates that not only occur for more intervals in the graph but also that are longer and have lower fit errors. In particular, we conjecture that the mode of this distribution gives the best estimate of the slope of the scaling region: i.e., a value with low error that persists across multiple choices for the position of the interval. More formally, most intervals that result in slopes near the mode of the PDF [within full width half maximum (FWHM)10] correspond to high-quality fits of regions in the graph whose bounds lie within the interval of the scaling region. If the graph has a single scaling region, the PDF will be unimodal; a narrower peak indicates that the majority of high-quality fits lie closer to the mode. Broad peaks and multimodal distributions can arise for multiple reasons, including noise; we address these issues in Sec. III.
For the example in Fig. 1, there are data points and we set so the ensemble of end points has elements. The resulting weighted slope distribution is unimodal, as shown in panel (b), with mode —not too far from the expected slope of 5.0. To provide a confidence interval around this estimate, we compute three quantities:
the full width half maximum () of the PDF around the mode;
the fraction, , of ensemble members that are within the ;
the standard deviation, , for the slopes of the ensemble members within the .
For the example in Fig. 1, we obtain
That is, we estimate the slope of the scaling region as , noting that fully of the estimated values are within . These error estimates quantify the shape of the peak and give assurance that the diagram does indeed contain a scaling region.
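These three quantities can be computed from the weighted slope ensemble along the following lines. This is a sketch: the function name is ours, and we assume (as one plausible reading) that the fraction and standard deviation are weighted by the same fit weights:

```python
import numpy as np
from scipy.stats import gaussian_kde

def fwhm_stats(slopes, weights):
    """FWHM of the weighted slope PDF around its mode, plus the weighted
    fraction and standard deviation of ensemble slopes inside that band."""
    kde = gaussian_kde(slopes, weights=weights)
    grid = np.linspace(slopes.min(), slopes.max(), 2000)
    pdf = kde(grid)
    peak = np.argmax(pdf)
    half = pdf[peak] / 2
    above = pdf >= half
    # walk outward from the peak to the half-maximum crossings
    lo = peak
    while lo > 0 and above[lo - 1]:
        lo -= 1
    hi = peak
    while hi < len(grid) - 1 and above[hi + 1]:
        hi += 1
    band = (slopes >= grid[lo]) & (slopes <= grid[hi])
    frac = weights[band].sum() / weights.sum()
    sd = float(np.sqrt(np.cov(slopes[band], aweights=weights[band]))) if band.sum() > 1 else 0.0
    return grid[hi] - grid[lo], frac, sd
```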
This method can also estimate the boundaries of the scaling region. To do this, we compute secondary distributions using the positions of the left-hand and right-hand side markers. Since we choose all possible pairs, subject to constraint (1), the probability densities of the resulting histograms will decrease linearly for and grow linearly for from left to right. To penalize poor fits, we again scale the frequency for each point in these secondary histograms directly by the length and inversely by the square of the error of the corresponding fit. The resulting PDFs for the synthetic example of Fig. 1(a) are shown in panel (c). The modes of the LHS and RHS histograms fall at and , respectively. These align neatly with what an observer might choose as the end points of the putative scaling region in Fig. 1(a). Note that the distribution is wider with a relatively lower peak, whereas the distribution is narrower and taller. This indicates that we can be more certain about the position of the right end point than we are about the left one. There is more flexibility in choosing than .
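The secondary end point distributions can be sketched in the same way as the slope distribution; the function name and the error floor below are illustrative assumptions, not the paper's code:

```python
import numpy as np
from scipy.stats import gaussian_kde

def endpoint_pdfs(x, y, min_pts=5):
    """Weighted KDEs of the left and right interval end points, with each
    fit weighted by its length over its mean-squared error."""
    n = len(x)
    lefts, rights, weights = [], [], []
    for i in range(n - min_pts + 1):
        for j in range(i + min_pts - 1, n):
            xi, yi = x[i:j + 1], y[i:j + 1]
            _, res, *_ = np.polyfit(xi, yi, 1, full=True)
            err2 = max(res[0] / len(xi), 1e-12) if res.size else 1e-12
            lefts.append(x[i])
            rights.append(x[j])
            weights.append((x[j] - x[i]) / err2)
    w = np.asarray(weights)
    return gaussian_kde(lefts, weights=w), gaussian_kde(rights, weights=w)
```

The modes of the two returned densities estimate the extent of the scaling region, mirroring panel (c) of Fig. 1.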
To test our ensemble method on a dataset without an obvious scaling region, we consider the quadratic function
We choose the range and a sampling interval . A graph of the resulting function, Fig. 2(a), of course has no straight regions, but one might imagine that the curve becomes nearly linear near the top of the range even though the actual slope grows linearly with . The slope distribution, shown in Fig. 2(b), has mode
and as the end point distributions in panel (c) show, this corresponds to the interval . However, the large width of the slope distribution and the correspondingly high magnitude of convey little confidence as to the existence of a scaling region. This shows that the error statistics and the can be useful indicators as to whether or not a scaling region is present. A high error estimate and/or a high contraindicates a scaling region.
Estimating the scaling region of a quadratic function. (a) The quadratic function represented by . (b) Weighted slope histogram with the KD estimator. (c) Weighted distributions of interval end points.
In the case of the two simple examples above, scaling regions imply simple linear relationships between the independent and dependent variables. Our technique can just as well be applied to plots with suitably transformed axes—like the log–linear or log–log axes involved in the computation of many dynamical invariants—as we demonstrate next by computing the dimension of a fractal.
The box-counting dimension, $d_{\rm box}$, is the growth rate of the number $N(\varepsilon)$ of boxes of size $\varepsilon$ that cover a set,

$$d_{\rm box} = \lim_{\varepsilon \to 0} \frac{\log N(\varepsilon)}{\log(1/\varepsilon)}.$$
In practice, $d_{\rm box}$ is estimated by computing the slope of the graph of $\log N(\varepsilon)$ vs $\log(1/\varepsilon)$. As an example, we consider the equilateral Sierpinski triangle of side one, generating a set of points using an iterated function system11 to obtain Fig. 3(a). A log–log plot of for this data set is shown in Fig. 3(b) and the resulting slope distribution is shown in Fig. 3(c). The mode of this distribution falls at , where—as before—the error is the estimate . This is close to the true dimension $\log 3 / \log 2 \approx 1.585$. This peak is narrow, with , though , similar to the first example. Thus, about of the weighted fits lie within of the estimated slope, strengthening confidence in the estimate. The small periodic oscillations in the curve in Fig. 3(b) are due to the self-similarity of the fractal. Note that the linear growth saturates when , the point at which the box size becomes comparable to the diameter of the set. Neither of these effects significantly influences the mode of the slope distribution. For this example, the LHS and RHS distributions in panel (b) have similar widths, with modes and , respectively.
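This experiment can be reproduced in a few lines; the chaos-game sampler and grid-based box count below are standard constructions that we supply for illustration, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

# Chaos-game iterated function system: jump halfway toward a random vertex.
p = rng.random(2)
pts = np.empty((100_000, 2))
for k in range(-20, len(pts)):        # 20 burn-in steps discard the transient
    p = (p + verts[rng.integers(3)]) / 2
    if k >= 0:
        pts[k] = p

def box_count(pts, eps):
    """Number of eps-sized grid boxes occupied by the point set."""
    return len(np.unique(np.floor(pts / eps).astype(np.int64), axis=0))

eps = 2.0 ** -np.arange(2, 7)
counts = [box_count(pts, e) for e in eps]
# slope of log N(eps) vs log(1/eps) estimates the box-counting dimension
slope = np.polyfit(np.log(1 / eps), np.log(counts), 1)[0]
```

For the Sierpinski triangle this slope should land near log 3 / log 2, up to finite-sampling and grid-alignment effects.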
Estimating the box-counting dimension of the Sierpinski triangle. (a) 100,000 points on the Sierpinski triangle. (b) A log-log plot of versus . (c) Weighted slope distribution.
The examples of this section show that the ensemble-based method effectively selects an appropriate scaling region—if one exists—and is able to exclude artifacts near the edges of the data.
III. APPLICATIONS TO DYNAMICAL SYSTEMS
In this section, we apply this scaling region identification and characterization method to calculations of two dynamical invariants—the correlation dimension (Sec. III A) and the Lyapunov exponent (Sec. III B)—for two well-known examples, the Lorenz system and the driven, damped pendulum. We also explore the effects of noise (Sec. III A 3) and the selection of embedding parameters for delay reconstructions (Sec. III A 4).
A. Correlation dimension
Correlation dimension, one of the most common and useful dynamical invariants, can be calculated using the Grassberger–Procaccia algorithm.12,13 Given a set of points that sample an object, such as an attractor of a dynamical system, this algorithm estimates the average number of points in an $\varepsilon$-neighborhood of a given point, $C(\varepsilon)$. For an attractor of correlation dimension $d_2$, this scales with $\varepsilon$ as

$$C(\varepsilon) \propto \varepsilon^{d_2}.$$
Estimating $d_2$ is, therefore, equivalent to finding a scaling region in the log–log plot of $C(\varepsilon)$ vs $\varepsilon$.14
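For intuition, the correlation sum can be computed by brute force directly from its definition. The TISEAN d2 program uses much faster neighbor searches; this naive quadratic version is our own sketch:

```python
import numpy as np

def correlation_sum(pts, eps):
    """Naive O(n^2) correlation sum: the fraction of distinct point pairs
    separated by less than eps."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    # subtract the n zero-distance self-pairs on the diagonal
    return (np.count_nonzero(d < eps) - n) / (n * (n - 1))
```

Fitting the slope of log C vs log eps over a scaling region then estimates the correlation dimension; for points spread uniformly on a line segment that slope is close to one.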
In this section, we use d2, the TISEAN1 implementation, which outputs estimates of for a range of values. We apply our ensemble-based method to a graph of log vs log and identify the mode of the resulting slope distribution to obtain an estimate of the correlation dimension of the data set. When the slope distribution is multi-modal, our method can also reveal the existence of potential scaling regions for different ranges of , some of which may not be obvious on first observation. This analysis, then, not only yields values for the slope and extent of the scaling region but it also provides insight into the geometry of the dynamics, the details of the Grassberger–Procaccia algorithm, and the interactions between the two.
1. Lorenz
We start with the canonical Lorenz system,
with , , and .15 We generate a trajectory from the initial condition using a fourth-order Runge–Kutta algorithm for points with a fixed time step , discarding the first points to remove the transient, to get a total trajectory length of 90 000 points (see Fig. 10 in Appendix B). The chosen initial condition lies in the basin of the chaotic attractor and the discarded length is sufficient to remove any transient effects. Figure 4(a) shows a log–log plot of vs produced by the d2 algorithm. To the eye, this graph appears to contain a scaling region in the approximate range . As in the box-counting dimension example in Sec. II, the curve flattens when is larger than the diameter of the attractor since will not change when increases beyond this point. Since the diameter of the Lorenz attractor is , this flattening occurs for .
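A trajectory of this kind can be generated with a fixed-step fourth-order Runge–Kutta integrator along the following lines. The classic Lorenz parameter values used as defaults below are an assumption on our part (the text's specific values did not survive extraction), as are the function and variable names:

```python
import numpy as np

def lorenz_rk4(v0, n, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Fixed-step RK4 integration of the Lorenz equations."""
    def f(v):
        x, y, z = v
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    v = np.asarray(v0, dtype=float)
    traj = np.empty((n, 3))
    for k in range(n):
        k1 = f(v)
        k2 = f(v + 0.5 * dt * k1)
        k3 = f(v + 0.5 * dt * k2)
        k4 = f(v + dt * k3)
        v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[k] = v
    return traj
```

In practice one discards an initial segment of the output, as described above, so that only post-transient points enter the correlation sum.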
Estimating the correlation dimension of the Lorenz attractor. (a) Correlation sum calculations plot. (b) Weighted slope distribution. (c) Weighted distributions of interval endpoints. (d) Heatmap of the end point ensemble.
Figures 4(b) and 4(c) show the results of our ensemble-based approach. The mode of the slope PDF is
This gives an estimate for the correlation dimension for the Lorenz attractor that agrees with the accepted range of 2.06–2.16.16,17 The PDFs of the LHS and RHS markers in Fig. 4(c) have modes and , respectively. Estimates of both end points by our algorithm are close to the informal ones although slightly on the conservative side. In other words, our method can indeed effectively formalize the task of identifying both the existence and the extent of scaling regions.
The end point distributions can be broken down further in the heatmap-style scatterplot of Fig. 4(d) to reveal additional details. Each dot represents a single element of the ensemble: i.e., a fit for a particular interval . Its color encodes the slope and its radius is scaled by the fitting weight. If samples come from intervals that are close, the associated dots will be nearby; if these have low error, the dots will be large and their effective density will be high. Note that the domain of panel (d) is a triangle since by (1).
This visualization provides an effective way to identify scaling region ranges that produce similar slope estimates: long scaling regions will manifest as large regions of similarly colored points. Figure 4(d) shows such a triangular high-density region in blue (slope 2.0), above the diagonal, bounded from the left by , and from above by . This clearly corresponds to the primary scaling region in Fig. 4(a) and the corresponding mode in panel (b). This heatmap can reveal other, shorter scaling regions. Indeed, panel (d) shows other clusters of similarly colored dots that are somewhat larger than the dots from neighboring regions: e.g., the green cluster for and , with a slope near . Close examination of the original curve in Fig. 4(a) reveals a small straight segment in the interval . This feature, not immediately apparent from the original curve, stands out quite clearly in the distribution visualization. Two much smaller scaling regions at the two ends of panel (a) are also brought out by this representation: the interval of slope 3.0 for small (the dark blue cluster at the lower left corner of the scatterplot) and the zero slope for large (the red cluster at the upper right corner).
2. Pendulum
As a second example, we study the driven, damped pendulum,
where the natural frequency is , the damping coefficient is , and the driving force has amplitude and frequency . The coordinates of the three-dimensional extended phase space are the angle , the time , and the angular velocity . We generate a trajectory for this system using a fourth-order Runge–Kutta algorithm with initial condition for points with a fixed time step of , discarding the first points to remove the transient, resulting in a final time series of length one million. To avoid issues with periodicity in and , we project the time series from onto . The resulting trajectory is shown in Fig. 5. Note that the attractor has the bound , but that the range of the other two variables is due to the sinusoidal transformation.
To the eye, results of a d2 calculation on this trajectory, shown in Fig. 6(a), exhibit two apparent scaling regions above and below . The slope distributions produced by our method not only confirm but also formalize this observation. The larger peak of the distribution in panel (b) of the figure falls at ( = 0.32 and ), which agrees with the correlation dimension of the attractor reported in Ref. 18. Note that, to account for the fact that the distribution is clipped on the right before the density falls below half the peak value, the computation of uses this as the upper bound. The interval end point distributions in panel (c) indicate that the extent of this scaling region is . To the eye, would seem a more appropriate choice for the RHS end point of the scaling region; however, minor oscillations in the interval cause the ensemble-based algorithm to be more conservative and choose . The presence of a second scaling region in panel (a) for gives a second mode at the slope ( and ). The secondary peaks in the end point distributions in panel (c) suggest an extent of for this second scaling region, which is consistent with a visual inspection of panel (a).
Estimating the correlation dimension of the driven, damped pendulum trajectory shown in Fig. 5. (a) Correlation sum calculations plot. (b) Weighted slope distribution. (c) Weighted distributions of interval end points with the correlation sum graph from (a) superimposed.
This second scaling region is an artifact of the large aspect ratio of the attractor: only one of the variables, , has a range larger than , so when , the effective dimension is one. To test this hypothesis, we rescaled the components of the phase space vectors so that the attractor has equal extent of in all three directions (using the -E flag in the TISEAN d2 command) and repeated the d2 calculations. This leads to rescaling of the axis of the plot (not shown)— now has the range —and causes the second scaling region in the previous results to disappear, leaving a single scaling region with slope
Note that, in addition to resolving the artifact of the second scaling region, this rescaling also reduces the width of the mode and gives a much tighter error bound.
By revealing the existence, the end points, and the slopes of different scaling regions in the data, our ensemble-based approach has not only produced an objective estimate for the correlation dimension but also yielded some insight into the interaction between the d2 algorithm and the attractor geometry.
3. Noise
Noise is a key challenge for any practical nonlinear time-series analysis method. We explore the effects of measurement noise on our method using the Lorenz trajectory of Sec. III A 1 by adding uniform noise post facto to each of the three state variables. The noise amplitude in each dimension is proportional to the radius of the attractor in that dimension. The correlation sum for a noise amplitude —i.e., of the radius of the attractor in each dimension—is shown in Fig. 7(a). There appear to be two distinct scaling regions in this plot, above and below a slight knee at . The slope distribution produced by our method is shown in Fig. 7(b). As in the pendulum example, this distribution is bimodal, indicating the presence of two scaling regions with slopes
and
respectively. We claim that these results are consistent with the geometry of the noise and of the dynamics, respectively. The taller peak corresponds to the interval , delineated by red markers in panel (a). Note that the right end point of this region is close to , the maximum extent of the noise. The computed slope of in this interval captures the geometry of a noise cloud in three-dimensional state space. The smaller, secondary peak at reflects the dimension of the attractor for scales larger than the noise, the interval that is bounded by green markers in panel (a).
Correlation dimension of the Lorenz attractor with added noise for (a)-(b), and for (c)-(f). Panels (e)-(f) exclude ln . The first column shows the correlation sum calculations, and the corresponding slope distribution plots are shown in second column. The red markers in (a), (c) and (e) denote scaling regions corresponding to noise while the green markers indicate that for the dynamics.
As the noise level increases, the geometry of the attractor is increasingly obscured (see Fig. 10 in Appendix B). Figures 7(c) and 7(d) show results for . As one would expect, the right-hand boundary of the lower scaling region is increased, shown as the red markers in panel (c). We observe that , near the noise cloud radius of . With this noise level, the slope distribution is nearly unimodal with mode
again reflecting the geometry of the noise. While there appears to be a secondary peak in the slope distribution in panel (d), it is nearly obscured.
Interestingly, the identification of the scaling region due to the noise can be used to refine the calculation and retrieve some information about the dynamical scaling: one simply repeats the slope-distribution calculations using only the data for larger values—that is, discarding the values below the noise. Restricting to the interval gives the curve and the slope distribution shown in panels (e) and (f). In addition to the noise peak near , this reveals two additional peaks. The first,
was hinted at by the tail of the distribution in panel (d). This, we conjecture, corresponds to the correlation dimension of the noisy dynamics. The final peak, near , corresponds to the smaller scaling region that can be seen in a heatmap-style scatterplot (not shown). The validity of this region can be easily confirmed by further limiting the ensemble to the interval , which then gives a single mode at . Note that this region corresponds to a smaller scaling region that was also seen in the noiseless case: the small green cluster in Fig. 4(d).
The ability of the ensemble-based slope distribution method to reveal secondary scaling regions and suggest refinements allows one to identify noise scales and even extract information that might be hidden by the noise.
4. Time-series reconstructions
The previous examples assumed that all of the state variables are known. This is rarely the case in practice; rather, one often has data only from a single sensor. Delay-coordinate embedding,19,20 the foundation of nonlinear time-series analysis, allows one to reconstruct a diffeomorphic representation of the actual dynamics from observations of a scalar time series provided that a few requirements are met: must represent a smooth, generic observation function of the dynamical variables19,21 and the measurements should be evenly spaced in time. This method embeds the time series into as a set of delay vectors of the form given a time delay . Theoretically, the only requirements are and , where is the capacity dimension of the attractor on which the orbit is dense.19,22 Many heuristics have been developed for estimating these two free parameters,1,18,23–37 notably the first minimum of the average mutual information for selecting 25 and the false near neighbors algorithm for selecting .37 See Refs. 2 and 38 for more details.
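The delay map itself is straightforward to write down; this is a generic construction rather than the TISEAN implementation, and the names are ours:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Delay-coordinate map: each row is [x(t), x(t+tau), ..., x(t+(m-1)tau)]."""
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("time series too short for this (m, tau)")
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
```

For example, embedding a length-10 series with m = 3 and tau = 2 yields six 3-dimensional delay vectors.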
Since the correlation dimension is preserved under a diffeomorphism, it can be computed with the d2 algorithm on properly reconstructed dynamics. Moreover, the calculations of the correlation dimension for different values of can help diagnose the correctness of an embedding. To explore how our method can contribute in this context, we embed the time series of the Lorenz trajectory from Sec. III A 1 using , which was chosen using the curvature heuristic described in Ref. 18 and corroborated using the first minimum of the average mutual information. Using TISEAN, we then create a series of embeddings for a range of values and apply the Grassberger–Procaccia algorithm to each one. The results are shown in Fig. 8(a) for .
Correlation dimension from time-delay reconstructions of the Lorenz system for , , and . (a) Correlation sum calculations for the ten embedding dimensions. (b) The corresponding weighted slope distributions. Each m from (a) gives a slope distribution (b), and these are pairwise compared in (c) using the Wasserstein distance. The slope distribution plot in (b) is clipped from above at 50 for the sake of clarity. (c) The Wasserstein distance profile.
As is well known, we can assess the convergence of by reasoning about such a sequence of curves. When is too small, the reconstructed attractor is improperly unfolded, giving an incorrect dimension, while for large enough , typically converges—modulo data limitation issues39—to the nominally correct value. Since our ensemble methodology automates the calculation of scaling regions, it can simplify this calculation for multiple curves. To determine the value of for which the slopes converge, one simply computes the modes of each slope distribution and looks for the value at which the positions of those modes stabilize. For Fig. 8(b), the mode for is, of course, , but when reaches , the distributions begin to overlap and for the modes are , essentially the estimate from the full dynamics in Sec. III A 1. This finding nicely corroborates the rigorous results of Ding et al.39 which show—in the noise-free case—that the correlation dimension can be correctly recovered using : i.e., the smallest integer not less than .
To formalize this procedure, we use the Wasserstein metric to compare pairs of distributions.40 Informally, this metric—called the “earth mover’s distance”—treats each distribution as a pile of soil, with the distance between the two distributions defined as the minimum “effort” required to turn one pile into another. Thus, the distance is zero only when the distributions are identical. We use the Python function scipy.stats.wasserstein_distance to compute this metric. (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html).
Given slope distributions and for successive embedding dimensions from Fig. 8(b), the distance between them is shown in panel (c) of the figure. Note that this distance initially decreases rapidly with increasing m, approaching zero at . Up to fluctuations, it remains approximately constant thereafter. After some experimentation, we propose the heuristic threshold to estimate the convergence of d2 as m grows. Such a threshold has two benefits, giving an estimate of both the embedding dimension and d2.
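This convergence test is easy to automate. The sketch below assumes a precomputed list of Wasserstein distances between successive slope distributions; the function name and the threshold value are illustrative, since the numerical threshold is not reproduced here.

```python
def converged_dimension(distances, threshold):
    """Return the smallest embedding dimension m at which the Wasserstein
    distance between the slope distributions for m and m + 1 drops below
    the threshold; distances[k] holds the distance for the pair
    (m = k + 1, m = k + 2).  Returns None if no pair qualifies."""
    for k, w in enumerate(distances):
        if w < threshold:
            return k + 1
    return None

# Hypothetical distance profile: large for small m, then nearly flat.
profile = [0.90, 0.45, 0.12, 0.03, 0.02, 0.02]
print(converged_dimension(profile, threshold=0.05))  # -> 4
```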
To assess the generality of these ideas, we applied the ensemble-based method to four additional scalar time series: the noisy Lorenz data of Sec. III A 3 with noise level δ = 0.1, the pendulum trajectory of Sec. III A 2, a shorter segment of this orbit, and, finally, a trajectory from the Lorenz-96 model41 with a phase space dimension . In each case, we again used the curvature-based heuristic of Ref. 18 to estimate the embedding delay. This set of examples represents a range of problems that arise in practice: measurements affected by noise, a trajectory that is too short to fully cover an attractor, and a high-dimensional attractor. The results, presented in Table I, show that d2 for the reconstructed dynamics is close to that of the full dynamics, as well as to the values obtained previously by manually fitting scaling regions.18 However, the embedding dimensions are often significantly smaller than those suggested by other work. For the first four examples in Table I, Ref. 18 used , the dimension required by the Takens theorem for a three-dimensional state space;19 for the Lorenz-96 example, that paper used , obtained from the heuristic .22 Both of these are sufficient conditions, of course. The Wasserstein test used in Table I shows that accurate estimates of d2 can often be obtained with a lower embedding dimension and without prior knowledge of the original dynamics.
Correlation dimensions calculated using the ensemble-based method with the Wasserstein distance test for five cases. The histograms for each case are shown in Appendix C.
| System | Full dynamics d2 | Reconstructed d2 | τ | m |
|---|---|---|---|---|
| Lorenz (Sec. III A 1) | 2.09 ± 0.02 | 2.09 ± 0.01 | 18 | 4 |
| Noisy Lorenz (δ = 0.1) | 2.59 ± 0.20 | 2.27 ± 0.14 | 21 | 7 |
| Pendulum (Sec. III A 2) | 2.22 ± 0.02 | 2.16 ± 0.02 | 120 | 4 |
| Truncated pendulum | 2.23 ± 0.07 | 2.20 ± 0.03 | 120 | 4 |
| Lorenz-96 | 5.63 ± 0.09 | 5.80 ± 0.07 | 24 | 10 |
B. Lyapunov exponent
Computing a Lyapunov exponent also often involves the identification and characterization of scaling regions. Here, we use the Kantz algorithm42 to estimate the maximal Lyapunov exponent for reconstructed trajectories of the Lorenz and chaotic pendulum examples from Sec. III A 4. We embed the scalar data using a delay found by standard heuristics and experiment with various embedding dimensions. The Kantz algorithm starts by finding all of the points in a ball of radius ε (also called the “scale”) around each of a set of randomly chosen reference points on the attractor. Marching through time, the algorithm then computes the divergence between the forward image of each reference point and the forward images of the other points in its ball. The average divergence across all reference points at a given time is the stretching factor. To estimate the exponent, one identifies a scaling region in the log–linear plot of the stretching factor as a function of time.
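The stretching-factor computation can be sketched as follows. This is a simplified, illustrative implementation, not the TISEAN code used in the paper; the function name and its default parameters are our own choices. The demonstration uses the logistic map at r = 4, whose largest Lyapunov exponent is ln 2.

```python
import numpy as np

def stretching_factor(traj, eps, n_ref=100, horizon=40, theiler=10, seed=0):
    """Kantz stretching factor S(t): the average, over reference points,
    of the log of the mean distance between the forward image of the
    reference point and the forward images of its eps-neighbors."""
    rng = np.random.default_rng(seed)
    n = len(traj) - horizon            # last usable reference index + 1
    idx = np.arange(n)
    refs = rng.choice(n, size=min(n_ref, n), replace=False)
    S = np.zeros(horizon)
    counts = np.zeros(horizon)
    for i in refs:
        d0 = np.linalg.norm(traj[:n] - traj[i], axis=1)
        # Neighbors inside the eps-ball, excluding temporal neighbors.
        nbrs = idx[(d0 < eps) & (np.abs(idx - i) > theiler)]
        if nbrs.size == 0:
            continue
        for t in range(horizon):
            dt = np.linalg.norm(traj[nbrs + t] - traj[i + t], axis=1)
            S[t] += np.log(dt.mean())
            counts[t] += 1
    return S / np.maximum(counts, 1)

# Logistic map at r = 4; its largest Lyapunov exponent is ln 2 ~ 0.693.
x = np.empty(20000)
x[0] = 0.3
for k in range(1, x.size):
    x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
S = stretching_factor(x.reshape(-1, 1), eps=0.005)
# The slope of S(t) vs t over its initial, roughly linear stretch
# estimates the exponent before the curve saturates.
```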
Note that this procedure involves three free parameters: the embedding parameters m and τ and the scale ε of the balls used in the algorithm. (Other parameters—the number of reference points, the Theiler window, the length of the temporal march, etc.—were set to TISEAN’s default values.) To obtain accurate results, one is therefore confronted by an onerous multivariate parameter-tuning problem: a situation in which an automated approach can be extremely advantageous. If, for example, one embeds the coordinate of the Lorenz data from Sec. III A 1 with m ranging from 2 to 9 and chooses ten values of ε on a logarithmic grid between 0.038 and 0.381, then there are 80 different curves, as seen in Fig. 9(a). Manually fitting scaling regions to each of these curves would be a demanding task.
Estimating the largest Lyapunov exponent of the Lorenz (panels a, b) and driven, damped pendulum (panels c, d) examples. Panels (a) and (c) show stretching-factor calculations for the eight embedding dimensions m and ten search scales . The estimated exponents for all the dimensions and scales are shown in grid form in panels (b) and (d). The gray scale in each grid cell represents the magnitude of the exponent.
The ensemble method gives the automated results shown in panel (b) of the figure. Each row of this 2D grid corresponds to fixed and each column to fixed . The estimated —the location of the mode in the slope PDF—is the value shown in the cell. To help detect convergence, each cell is shaded according to the corresponding value. Note that the majority of the configurations (48 out of 80, colored in intermediate gray) give an estimate of , which is consistent with the known value.43
The pattern in the grid makes sense. The cells in its center correspond to midrange combinations of the free parameters: and . Values outside these ranges create well-known issues for the Kantz algorithm in the context of finite data. If ε is too small, the ball will not contain enough trajectory points for the algorithm to effectively track the divergence; if ε is too large, the ball will straddle a large portion of the attractor, thereby conflating state-space deformation with divergence. If m is too small, the attractor is insufficiently unfolded, so the values in the top few rows of the grid are suspect. In the upper right corner, these two effects combine; moreover, for these cases, the zero-slope segments at the right ends of the stretching-factor curves become dominant (since the ball reaches the edge of a flattened attractor more quickly), causing the ensemble-based algorithm to return a slope close to zero. Finally, we see a slightly higher estimate of for and . On inspection, the stretching-factor plot for this case exhibits significant oscillations that distort most of the curve. Though the algorithm chooses a relatively straight section of this curve as a scaling region, the distortion shows that this parameter combination should be avoided when estimating the exponent.
The process is repeated for the driven, damped pendulum example in Figs. 9(c) and 9(d). Here, we use a slightly different trajectory than in Sec. III A 2: points (after removing a transient of points) with a time step . For this trajectory, the heuristic of Ref. 18 suggests an embedding delay . From the grid, we observe patterns similar to those in the Lorenz example: 46 out of 80 combinations give (for and ). We observe significantly lower estimates when the embedding dimension is low and ε is high. Additionally, an anomaly similar to the one in the Lorenz example occurs for higher embedding dimensions () and smaller scales (), giving . This is due to low-frequency oscillations in the stretching-factor plots, which leave a relatively straight region toward the right ends of the plots, where they start to flatten. As in the Lorenz example, these parameter combinations should be avoided.
The ensemble approach allows easy automation of the computation of scaling regions for various values of the hyperparameters, thus sparing significant manual labor. Moreover, the grid visualization shows the “sweet spot” in the -parameter space. Of course, this is only a relatively crude convergence criterion, and one could instead use a more rigorous test such as the Wasserstein distance of Sec. III A 4.
IV. CONCLUSIONS
The technique described above is intended to formalize a subjective task that arises in the calculation of dynamical invariants: the identification and characterization of a scaling region, an interval in which one quantity grows linearly with respect to another. By transforming the axes, one can extend this to detect exponential or power-law relationships: e.g., using log–log plots for fractal dimensions and log–linear plots for Lyapunov exponents, as shown in Secs. III A and III B. Moreover, linearity is not a limitation of this approach: our methodology could be used to identify regions in a dataset with any hypothesized functional relationship (e.g., higher-order polynomial, logarithmic, exponential, etc.). One would simply compute the least squares fit over the data using the hypothesized function instead.
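For instance, a logarithmic relationship can be handled by the same least squares machinery after transforming the abscissa; the helper below is a hypothetical illustration of this idea, not part of the published algorithm.

```python
import numpy as np

def fit_hypothesis(x, y, transform=np.log):
    """Least squares fit of y against a hypothesized function of x.
    `transform` maps x into the coordinate in which the hypothesized
    relationship is linear: np.log for logarithmic growth here."""
    X = transform(np.asarray(x, dtype=float))
    slope, intercept = np.polyfit(X, np.asarray(y, dtype=float), 1)
    rms = np.sqrt(np.mean((slope * X + intercept - y) ** 2))
    return slope, intercept, rms

# Noiseless test data: y = 3 log(x) + 1.
x = np.linspace(1.0, 100.0, 200)
y = 3.0 * np.log(x) + 1.0
s, b, e = fit_hypothesis(x, y)
print(round(s, 3), round(b, 3))  # -> 3.0 1.0
```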
A strength of our method is that the scaling region is chosen automatically, as the optimal family of intervals over which the fits have similar slopes with small errors. To do this, we create an ensemble of fits over different intervals, building PDFs to identify the boundaries and slopes of any scaling regions and to obtain error estimates. When the scaling region is clear, our ensemble-based approach computes a slope similar to the one that would be determined manually. Another strength of this method is its robustness: while the examples in this paper involve scaling regions containing hundreds of points, the method works surprisingly well even when the number of input points is relatively small and the data are fairly noisy, albeit with a higher error estimate. Finally, we showed that a convergence test on the slope distributions helps choose the dimension for a delay reconstruction. This is a challenging task: even though there are rigorous theoretical guidelines, the information needed to apply them is not known in advance for a given scalar time series. As a next step, we are investigating how this approach compares with existing methods for estimating the embedding dimension, such as false nearest neighbors.
This kind of objective, formalized method is useful not only because it reveals scaling regions that may not be visible by eye but also because it provides confidence in their existence in the form of error estimates. Moreover, since computing a dynamical invariant such as the Lyapunov exponent can involve finding scaling regions in many curves, an automated method provides a clear advantage over hand selection. Our method could potentially be useful in areas outside dynamical systems as well: e.g., estimating the scaling exponent for data drawn from a power-law distribution, an active area of research.5,44 Standard techniques involve inferring which power-law distribution (if any) the data are most likely drawn from and, if so, which exponent is most likely; our method could help narrow the range of exponents from which to begin such a search. There are other potential applications as well, e.g., the detection of scaling regions in the fluctuation functions that arise in detrended fluctuation analysis (DFA),45 as well as in wavelet or rescaled range (R/S) analysis46,47 for determining the Hurst exponent48 in stochastic processes with long-range temporal correlations.
There are a number of interesting avenues for further work, beginning with how to refine the confidence intervals for slope estimates. We have used the standard deviation of the slopes within the interval around the mode of the distribution. Alternatively, one could use the average of the least squares fit errors for the samples within that interval as the confidence interval. Another possibility would be to first extract a single scaling region (by determining the modes of the left- and right-hand side markers) and then compute the error statistics over all sub-intervals of this scaling region (e.g., the standard deviation of the slopes or the average of the fit errors). We used the Wasserstein distance to assess the convergence of a sequence of slope distributions in order to help choose an embedding dimension. This may be too strict a criterion, since it requires that the entire distributions be close. One could instead target the positions of the modes, quantifying closeness using some relevant summary statistic such as their confidence intervals. Of course, for multimodal distributions, the Wasserstein distance test is more appropriate, since we cannot choose a single mode for computing the intervals.
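The sub-interval error statistics mentioned above could be computed along the following lines; the function and variable names are ours, and the arrays are assumed to come from the ensemble of fits (slope, left end point, right end point for each interval).

```python
import numpy as np

def scaling_region_error(slopes, lefts, rights, lo, hi, fit_errors=None):
    """Error statistics over the ensemble fits whose intervals lie
    entirely inside the detected scaling region [lo, hi]: the standard
    deviation of their slopes and, optionally, their mean fit error."""
    inside = (lefts >= lo) & (rights <= hi)
    slope_std = float(np.std(slopes[inside]))
    mean_err = None if fit_errors is None else float(np.mean(fit_errors[inside]))
    return slope_std, mean_err

# Hypothetical ensemble: four fits inside [1, 3], one outlier outside it.
slopes = np.array([2.01, 1.98, 2.02, 0.40, 2.00])
lefts  = np.array([1.00, 1.20, 1.50, 0.10, 1.10])
rights = np.array([2.90, 2.50, 3.00, 0.90, 2.80])
std, _ = scaling_region_error(slopes, lefts, rights, lo=1.0, hi=3.0)
print(round(std, 3))  # small: the outlier interval is excluded
```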
ACKNOWLEDGMENTS
The authors acknowledge support from the National Science Foundation (NSF) [Grant Nos. EAGER-1807478 (J.G.), AGS-2001670 (V.D., E.B., and J.D.M.), and DMS-1812481 (J.D.M.)]. J.G. was also supported by Omidyar and Applied Complexity Fellowships at the Santa Fe Institute. Helpful conversations with Aaron Clauset and Holger Kantz are gratefully acknowledged as well as useful suggestions from the anonymous referees.
DATA AVAILABILITY
The data that support the findings of this study are available within the article.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
APPENDIX A: ALGORITHM
The pseudocode for the proposed ensemble-based approach is given in Algorithm 1. There are four important choices in this algorithm. The first is the parameter , the minimum spacing between the left-hand and right-hand side markers and . This choice limits what is deemed a “significant interval” for a scaling region. We set in this paper, a value chosen using a persistence analysis: that is, by varying and checking whether the results change. One could increase this value for larger data sets. The second is the choice of a kernel density estimator (KDE) for the histograms. We used a Gaussian KDE via the Python implementation scipy.stats.gaussian_kde,49 with Scott’s method50 to automatically select the kernel bandwidth. Alternatively, one could use other bandwidth selection methods, such as Silverman’s method,51 or simply specify the bandwidth manually. Scott’s method, the default in the package, worked well for all of our examples.
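The bandwidth options mentioned here map directly onto the bw_method argument of scipy.stats.gaussian_kde, which also accepts per-sample weights; the slope data and weights below are synthetic stand-ins.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
slopes = rng.normal(loc=2.05, scale=0.05, size=2000)
weights = rng.uniform(0.5, 1.5, size=2000)  # stand-ins for the fit weights

kde_scott = gaussian_kde(slopes, weights=weights)       # Scott's rule (default)
kde_silverman = gaussian_kde(slopes, bw_method="silverman")
kde_manual = gaussian_kde(slopes, bw_method=0.1)        # fixed bandwidth factor

# Locate the mode of the estimated slope PDF on a fine grid.
grid = np.linspace(1.8, 2.3, 501)
mode = grid[np.argmax(kde_scott(grid))]
print(round(mode, 2))  # close to 2.05
```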
Ensemble approach for estimating the slope of scaling regions
1: Assume a plot that potentially contains a scaling region.
2: Initialize empty arrays , , , , and .
3: for do
4:  for do
5:   if then
6:    Fit a line using least squares to data .
7:    Obtain an estimate for slope and intercept .
8:    Compute the least squares fit length and fitting error, (A1) (A2)
9:    Append the slope, fit length, and error to the arrays , , and , respectively.
10:    Append and to the end point arrays and , respectively.
11: Generate a histogram from the array, weighting each point i by (A3)
12: Using a kernel density estimator, generate a probability distribution function from [see, e.g., Fig. 1(b)].
13: Compute the mode(s) of and the error estimates as described in Sec. II.
14: Generate histograms and for and from Step 10, weighting the frequencies by (A3). Generate PDFs, as in Step 12, to obtain the distributions and [see, e.g., Fig. 1(c)].
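A minimal Python sketch of this procedure might look as follows. This is our own illustrative reimplementation, not the authors' released code: the names slope_ensemble, min_len, p, and q, and the synthetic test curve, are all assumptions, and the weighting mirrors (A3) with the fit length raised to the power p over the fit error raised to the power q.

```python
import numpy as np
from scipy.stats import gaussian_kde

def slope_ensemble(x, y, min_len=10, p=1, q=2):
    """All least squares fits over index intervals [i, j] with
    j - i >= min_len.  Each fit is weighted by length**p / error**q,
    in the spirit of (A3)."""
    slopes, lengths, errors = [], [], []
    for i in range(len(x)):
        for j in range(i + min_len, len(x)):
            xi, yi = x[i : j + 1], y[i : j + 1]
            a, b = np.polyfit(xi, yi, 1)
            err = np.sqrt(np.mean((a * xi + b - yi) ** 2))
            slopes.append(a)
            lengths.append(xi[-1] - xi[0])
            errors.append(err)
    slopes = np.array(slopes)
    weights = np.array(lengths) ** p / (np.array(errors) + 1e-12) ** q
    return slopes, weights

# Synthetic curve with a scaling region of slope 2 on 1 < x < 3.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 4.0, 120)
y = np.clip(2.0 * (x - 1.0), 0.0, 4.0) + rng.normal(0.0, 0.01, x.size)

slopes, weights = slope_ensemble(x, y)
kde = gaussian_kde(slopes, weights=weights)
grid = np.linspace(-1.0, 3.0, 401)
mode = grid[np.argmax(kde(grid))]
print(round(mode, 2))  # close to 2.0
```

The mode of the weighted slope distribution recovers the slope of the scaling region, while fits straddling the kinks at x = 1 and x = 3 receive low weight because of their large fit errors.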
The two other important parameters are the powers and used for the weights of the fit length and fit error in (A3). We explored ranges for these parameters, constraining them to non-negative integers, and determined values that worked well for all of the examples considered here. Some interesting patterns emerged in these explorations. First, we found that helps reduce the error estimate for the slope. Note that penalizes the shorter fits near the edges of the , suppressing their influence on the histogram; the , therefore, narrows, in turn reducing the error estimate. Second, setting was advantageous in all cases, ensuring that the algorithm does not prioritize unnecessarily long fits of poor quality. Enforcing these two conditions, we experimented with a few choices in the context of Fig. 1 and found that lower values generally work better: higher powers tend to magnify the effects of small errors and small lengths, making the algorithm very conservative about what constitutes a good fit. With this in mind, we settled on and for all of the correlation dimension estimations. For the Lyapunov exponent examples, on the other hand, we found that and perform better, generating tighter confidence interval bounds around the estimate across the various parameter combinations.
This algorithm always uses the full data set to compute the ensemble, but note that it does not require the data to be evenly spaced. Given a much larger data set, it might make sense to downsample in order to speed up the execution. In its current implementation, the run-time complexity of the algorithm is for a relationship curve with points. In the future, it might be useful to develop faster algorithms for sampling and generating the slope distributions.
APPENDIX B: EFFECTS OF NOISE ON THE LORENZ ATTRACTOR
Figure 10 shows the effects of noise on the Lorenz attractor (viz., the examples in Sec. III A 3). As we increase the noise level (as a fraction of the attractor radius) from 0.0 to 0.1, the dynamics of the attractor are progressively distorted. As discussed in Sec. III A 3, the effects of this distortion on the correlation dimension plots, and hence the slope distributions, are clearly visible (see Fig. 7).
Effects of noise on the Lorenz attractor with different magnitudes of noise level . (a) Noiseless Lorenz trajectory. (b) Noisy Lorenz trajectory (). (c) Noisy Lorenz trajectory ().
APPENDIX C: ADDITIONAL PLOTS FOR TIME-SERIES RECONSTRUCTION
Here, we present figures to support the results of Table I.
For the noisy Lorenz data, Fig. 11 shows the slope distributions and Wasserstein distance between slope distributions of consecutive embedding dimensions, . Panels (a) and (b) use as in Table I. Note that converges more slowly for the noisy data than it did for the deterministic case in Fig. 8. The distance at , giving . Figures 11(c) and 11(d) show the reconstruction results for , the embedding delay given by the average mutual information (AMI) method.25 Here, does not reach for . The implication is that the embedding dimension should be larger than , contrary to the theoretical requirement . Indeed, panel (c) shows that the position of the mode grows monotonically with , reaching values much higher than the expected for the full dynamics.
Figures 12 and 13 show the results for the pendulum trajectories of length and , respectively, generated using a delay of . In both cases, the Wasserstein distance threshold is reached at , resulting in the values in Table I.
Finally, Fig. 14 shows the delay reconstruction of the Lorenz-96 trajectory using . Convergence occurs at , giving .
Estimating the correlation dimension for the reconstructed noisy Lorenz example for two values of . The top row shows the slope distributions (panel a) and the Wasserstein distance profiles (panel b) for , estimated using the heuristic of Ref. 18, while the bottom row presents the corresponding plots for , the estimate produced using AMI.
Estimating the correlation dimension for the reconstructed pendulum trajectory of points using . The slope distribution plot in (a) is clipped from above at 35 for the sake of clarity. The Wasserstein distance profile is shown in panel (b).
Estimating the correlation dimension for the reconstructed pendulum trajectory truncated to points using . The slope distribution plot in (a) is clipped from above at 20 for the sake of clarity. The Wasserstein distance profile is shown in panel (b).
Estimating the correlation dimension for the reconstructed Lorenz-96 example using . The slope distribution plot in (a) is clipped from above at 10 for the sake of clarity. The Wasserstein distance profile is shown in panel (b).