Scaling regions—intervals on a graph where the dependent variable depends linearly on the independent variable—abound in dynamical systems, notably in calculations of invariants like the correlation dimension or a Lyapunov exponent. In these applications, scaling regions are generally selected by hand, a process that is subjective and often challenging due to noise, algorithmic effects, and confirmation bias. In this paper, we propose an automated technique for extracting and characterizing such regions. Starting with a two-dimensional plot—e.g., the values of the correlation integral, calculated using the Grassberger–Procaccia algorithm over a range of scales—we create an ensemble of intervals by considering all possible combinations of end points, generating a distribution of slopes from least squares fits weighted by the length of the fitting line and the inverse square of the fit error. The mode of this distribution gives an estimate of the slope of the scaling region (if it exists). The end points of the intervals that correspond to the mode provide an estimate for the extent of that region. When there is no scaling region, the distributions will be wide and the resulting error estimates for the slope will be large. We demonstrate this method for computations of dimension and Lyapunov exponent for several dynamical systems and show that it can be useful in selecting values for the parameters in time-delay reconstructions.

Many problems in nonlinear dynamics involve the identification of regions of constant slope in plots of some quantity as a function of a length or time scale. The slopes of these *scaling regions*, if they exist, allow one to estimate quantities such as a fractal dimension or a Lyapunov exponent. In practice, identifying scaling regions is not always straightforward. Issues with the data and/or the algorithms often cause the linear relationship to be valid only for a limited range in such a plot, if it exists at all. Noise, geometry, and data quality can disturb its shape in various ways, and even create multiple scaling regions. Often the presence of a scaling region, and the end points of its range, are determined by eye,^{1,2} a process that is subjective and not immune to confirmation bias.^{3–6} Worse yet, we know of no formal results about the relationship between the width of the scaling region and the validity of the resulting estimate; often, practitioners simply use the heuristic notion that “wider is better.” In this work, we propose an ensemble-based method to objectively identify and quantify scaling regions, thereby addressing some of the issues raised above.

## I. INTRODUCTION

To identify a scaling region in a plot, our method begins by generating multiple fits using intervals of different lengths and positions along the entire range of the data set. Penalizing the lower-quality fits—those that are shorter or have a higher error—we obtain a weighted distribution of the slopes across all possible intervals on the curve. As we will demonstrate by example, the slope of a scaling region that is broad and straight manifests as a dominant mode in the weighted distribution, allowing easy identification of an optimal slope. Moreover, the extent of the scaling region is represented by the modes of similar distributions that assess the fit as a function of the interval end points. As we will show, for a long, straight scaling region, markers closer to the end points of the scaling region are more frequently sampled (due to the combinatorial nature of sampling) and have a higher weighting (due to longer lengths of the fits). Thus, the method gives the largest reasonable scaling interval for the optimal fit along with error bounds on the estimate as computed from the distributions.
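The heart of this procedure can be sketched in a few lines of Python. The sketch below is a minimal illustration under our own naming conventions, not the implementation used for the experiments in this paper; the minimum interval length `n` and the weighting exponents (1 for the length, 2 for the error) follow the choices described below:

```python
import numpy as np

def slope_ensemble(x, y, n=10):
    """Fit a line to every interval [x[j], x[k]] with k - j >= n, weighting
    each slope by the interval length over the squared fit error."""
    N = len(x)
    slopes, weights = [], []
    for j in range(N):
        for k in range(j + n, N):
            xs, ys = x[j:k + 1], y[j:k + 1]
            (m, b), res, *_ = np.polyfit(xs, ys, 1, full=True)
            err = np.sqrt(res[0] / len(xs)) if res.size else 0.0
            slopes.append(m)
            weights.append((xs[-1] - xs[0]) / max(err, 1e-12) ** 2)
    return np.array(slopes), np.array(weights)

# Sanity check on a noiseless line: every interval recovers the same slope.
x = np.linspace(0.0, 10.0, 50)
slopes, weights = slope_ensemble(x, 3.0 * x + 1.0)
```

For real data, the dominant mode of the weighted distribution of `slopes` is the scaling-region estimate.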

Section II covers the details of this ensemble-based method and illustrates its application with three examples. In Sec. III, we demonstrate the approach on several dynamical systems problems, starting in Sec. III A with the estimation of correlation dimension for two canonical examples. We show that our method can accurately detect spurious scaling regions due to noise and other effects and that it can be useful in systematically estimating the embedding dimension for delay reconstruction. We then demonstrate that this method is also useful for calculating other dynamical invariants, such as the Lyapunov exponent (Sec. III B). We present the conclusions in Sec. IV.

## II. CHARACTERIZING SCALING REGIONS

A *scaling law* “…describe[s] the functional relationship between two physical quantities that scale with each other over a significant interval” (https://www.nature.com/subjects/scaling-laws). Such a scaling law manifests as a straight segment on a two-dimensional graph of these physical quantities, known as a *scaling region*. In practical situations, the data will typically not lie exactly on a line, or may do so only over a limited interval [see, e.g., the synthetic example in Fig. 1(a)]. These data are obtained from a function $y(x) = s(x) + d(x)$, where $s(x)$ is piecewise linear, with slope $5$ in the region $1 \le x \le 9$ and constant outside it.

This function exhibits a scaling region in the range $[1,9]$ that is bounded by intervals with zero slope: a shape similar to what is often seen in computations of dynamical invariants, even for well-sampled, noise-free data. To make the problem less trivial, we add a damped oscillation $d(x)$ to $s(x)$. The data set for this example is taken to be $y(x)$ on $[0,10)$ with a sampling interval $\Delta x = 0.1$.

To find a scaling region in a diagram like Fig. 1(a), one would normally choose two points—markers $x_l$ and $x_r$ for the bounds of the linear region—then fit a line to the interval $[x_l, x_r]$. A primary challenge is to identify the appropriate interval. For Fig. 1(a), the range $7 \le x \le 9$ appears to be linear, but should this range extend to smaller $x$? One could certainly argue that the lower boundary could be $x_l = 6$ instead of $7$, but it would be harder to defend a choice of $x_l \le 4$ because of the growing oscillations.

Several approaches have been proposed for formalizing this procedure in the context of the largest Lyapunov exponent (LLE) problem. These include what Ref. 6 calls the “small data method”—used, for example, in Ref. 7—which is efficient but suffers from the subjectivity problem described above; a method that searches for the “maximum nonfluctuant,” which often reports a locally optimal solution;^{8} and a fuzzy C-means clustering approach,^{9} which is also limited in that the number of clusters must be pre-specified by the user.

Our approach formalizes the selection of a scaling interval by first choosing all possible “sensible” combinations of left- and right-hand-side marker pairs. Specifically, given a sequence of data points $\{x_j\}$, $j = 1, \dots, N$, we choose $x_l$ and $x_r$ such that

$$x_l = x_j, \quad x_r = x_k, \qquad 1 \le j < k \le N, \quad k - j \ge n, \tag{1}$$

so that the left marker $x_l$ must lie below the right one ($x_r$) and there must be at least $n + 1$ data points on which to perform the linear fit. For each pair $[x_l, x_r]$, we perform a least squares fit using the data bounded by the pair to compute both the slope and the least squares fit error. To suppress the influence of intervals with poor fits, we construct a histogram of their slopes, weighting the frequency of each slope directly by the length of the fit interval and inversely by the square of the associated fitting error. Finally, we use a kernel density estimator to fit a smooth probability density function (PDF) to the histogram. The details are presented in Appendix A, along with a discussion of the choices for $n$ and the exponents associated with the fit width and the fitting error (which are set to 1 and 2, respectively, in the experiments reported here).
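The final two steps—the weighted histogram and its smoothing—can be sketched with scipy's Gaussian kernel density estimator, which accepts the fit weights directly. The slope and weight arrays below are made-up stand-ins for an actual ensemble:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical ensemble: most fits see a slope near 5; a few outliers do not.
rng = np.random.default_rng(0)
slopes = np.concatenate([rng.normal(5.0, 0.1, 900), rng.normal(0.0, 0.5, 100)])
weights = np.concatenate([np.full(900, 10.0), np.full(100, 1.0)])

kde = gaussian_kde(slopes, weights=weights)   # weighted KDE of the slope histogram
grid = np.linspace(slopes.min(), slopes.max(), 2000)
mode = grid[np.argmax(kde(grid))]             # the mode is the slope estimate
```

The location of `mode` is then the estimate of the slope of the scaling region.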

A central claim of this paper is that the resulting distribution contains useful information about the existence, extent, and slope of the scaling region. The slopes for which the PDF is large represent estimates that not only occur for more intervals in the graph but also correspond to intervals that are longer and have lower fit errors. In particular, we conjecture that the mode of this distribution gives the best estimate of the slope of the scaling region: i.e., a value with low error that persists across multiple choices for the position of the interval. More formally, most intervals $[x_l, x_r]$ that result in slopes near the mode of the PDF [within the full width at half maximum (FWHM)^{10}] correspond to high-quality fits of regions of the graph whose bounds lie within the scaling region. If the graph has a single scaling region, the PDF will be unimodal; a narrower peak indicates that the majority of high-quality fits lie closer to the mode. Broad peaks and multimodal distributions can arise for multiple reasons, including noise; we address these issues in Sec. III.

For the example in Fig. 1, there are $N = 100$ data points and we set $n = 10$, so the ensemble of end point pairs has $4005$ elements. The resulting weighted slope distribution is unimodal, as shown in panel (b), with mode $4.97$—not far from the expected slope of $5.0$. To provide a confidence interval around this estimate, we compute three quantities:

- the full width at half maximum (FWHM) of the PDF around the mode;
- the fraction, $p_{FWHM}$, of ensemble members that are within the FWHM; and
- the standard deviation, $\sigma$, of the slopes of the ensemble members within the FWHM.

For the example in Fig. 1, we obtain $FWHM = 0.36$, $p_{FWHM} = 0.67$, and $\sigma = 0.11$. That is, we estimate the slope of the scaling region as $4.97 \pm 0.11$, noting that fully $67\%$ of the weighted estimates are within $\pm 0.18$ of the mode. These error estimates quantify the shape of the peak and give assurance that the diagram does indeed contain a scaling region.
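These three quantities can be computed directly from the slope PDF and the ensemble. The sketch below uses our own naming and assumes a unimodal PDF, so that the region above half maximum is contiguous; it is exercised on a unit Gaussian, whose FWHM is $2\sqrt{2\ln 2} \approx 2.355$:

```python
import numpy as np

def peak_stats(grid, pdf, slopes, weights):
    """Mode, FWHM of the PDF around the mode, the weighted fraction of the
    ensemble inside the FWHM, and the weighted std of those slopes."""
    i = np.argmax(pdf)
    half = pdf[i] / 2.0
    above = np.where(pdf >= half)[0]          # contiguous for a unimodal peak
    lo, hi = grid[above[0]], grid[above[-1]]
    inside = (slopes >= lo) & (slopes <= hi)
    p_fwhm = weights[inside].sum() / weights.sum()
    mean = np.average(slopes[inside], weights=weights[inside])
    sigma = np.sqrt(np.average((slopes[inside] - mean) ** 2,
                               weights=weights[inside]))
    return grid[i], hi - lo, p_fwhm, sigma

# Exercise on a unit Gaussian PDF with matching ensemble samples.
grid = np.linspace(-5.0, 5.0, 1001)
pdf = np.exp(-grid**2 / 2)                    # unnormalized unit Gaussian
rng = np.random.default_rng(1)
slopes = rng.normal(0.0, 1.0, 10000)
weights = np.ones_like(slopes)
mode, fwhm, p_fwhm, sigma = peak_stats(grid, pdf, slopes, weights)
```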

This method can also estimate the boundaries of the scaling region. To do this, we compute secondary distributions using the positions of the left- and right-hand side markers. Since we choose all possible pairs, subject to constraint (1), the probability densities of the resulting histograms decrease linearly for $x_l$ and grow linearly for $x_r$ from left to right. To penalize poor fits, we again scale the frequency of each point in these secondary histograms directly by the length and inversely by the square of the error of the corresponding fit. The resulting PDFs for the synthetic example of Fig. 1(a) are shown in panel (c). The modes of the LHS and RHS histograms fall at $x_l = 6.45$ and $x_r = 8.92$, respectively. These align neatly with what an observer might choose as the end points of the putative scaling region in Fig. 1(a). Note that the $x_l$ distribution is wider, with a relatively lower peak, whereas the $x_r$ distribution is narrower and taller. This indicates that we can be more certain about the position of the right end point than about the left one: there is more flexibility in choosing $x_l$ than $x_r$.

To test our ensemble method on a dataset without an obvious scaling region, we consider the quadratic function

We choose the range $x \in [0, 2]$ and a sampling interval $\Delta x = 0.05$. A graph of the resulting function, Fig. 2(a), of course has no straight regions, but one might imagine that the curve becomes nearly linear near the top of the range, even though the actual slope grows linearly with $x$. The slope distribution, shown in Fig. 2(b), does have a mode; as the end point distributions in panel (c) show, it corresponds to the interval $[x_l, x_r] = [0.90, 1.70]$. However, the large width of the slope distribution and the correspondingly high magnitude of $\sigma$ convey little confidence as to the existence of a scaling region. This shows that the error statistics and the FWHM can be useful indicators of whether or not a scaling region is present: a high error estimate and/or a large FWHM contraindicates a scaling region.

In the case of the two simple examples above, scaling regions imply simple linear relationships between the independent and dependent variables. Our technique can just as well be applied to plots with suitably transformed axes—like the log–linear or log–log axes involved in the computation of many dynamical invariants—as we demonstrate next by computing the dimension of a fractal.

The box-counting dimension, $d_{box}$, is the growth rate of the number $N(\epsilon)$ of boxes of size $\epsilon$ needed to cover a set,

$$d_{box} = \lim_{\epsilon \to 0} \frac{\ln N(\epsilon)}{\ln(1/\epsilon)}.$$

In practice, $d_{box}$ is estimated by computing the slope of the graph of $\ln N(\epsilon)$ vs $\ln\frac{1}{\epsilon}$. As an example, we consider the equilateral Sierpinski triangle of side one, generating a set of $10^5$ points using an iterated function system^{11} to obtain Fig. 3(a). A log–log plot of $N(\epsilon)$ for this data set is shown in Fig. 3(b), and the resulting slope distribution is shown in Fig. 3(c). The mode of this distribution falls at $1.585 \pm 0.020$, where—as before—the error is the estimate $\sigma$. This is close to the true dimension $\ln 3/\ln 2 \approx 1.585$. The peak is narrow, with $FWHM = 0.07$, though $p_{FWHM} = 0.68$, similar to the first example. Thus, about $70\%$ of the weighted fits lie within $\pm 0.035$ of the estimated slope, strengthening confidence in the estimate. The small periodic oscillations in the curve in Fig. 3(b) are due to the self-similarity of the fractal. Note that the linear growth saturates when $\ln\frac{1}{\epsilon} < 0.5$, the point at which the box size becomes comparable to the diameter of the set. Neither of these effects significantly influences the mode of the slope distribution. For this example, the LHS and RHS distributions in panel (b) have similar widths, with modes $x_l = 1.64$ and $x_r = 5.59$, respectively.
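A version of this experiment is easy to reproduce. The sketch below is our own code, with a coarser set of scales than used for Fig. 3; it samples the triangle by the chaos-game form of an iterated function system and then box-counts it, and the fitted slope should land near $\ln 3/\ln 2$:

```python
import numpy as np

# Chaos-game sampling of the unit-side Sierpinski triangle.
rng = np.random.default_rng(0)
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
pts = np.empty((100_000, 2))
p = np.array([0.1, 0.1])
for i in range(len(pts)):
    p = (p + verts[rng.integers(3)]) / 2.0   # move halfway toward a random vertex
    pts[i] = p

# Count occupied boxes N(eps) at scales eps = 2^-k, then fit ln N vs ln(1/eps).
eps_list = np.array([2.0**-k for k in range(2, 7)])
counts = np.array([len(np.unique(np.floor(pts / eps).astype(np.int64), axis=0))
                   for eps in eps_list])
d_box = np.polyfit(np.log(1.0 / eps_list), np.log(counts), 1)[0]
```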

The examples of this section show that the ensemble-based method effectively selects an appropriate scaling region—if one exists—and is able to exclude artifacts near the edges of the data.

## III. APPLICATIONS TO DYNAMICAL SYSTEMS

In this section, we apply this scaling region identification and characterization method to calculations of two dynamical invariants—the correlation dimension (Sec. III A) and the Lyapunov exponent (Sec. III B)—for two well-known examples, the Lorenz system and the driven, damped pendulum. We also explore the effects of noise (Sec. III A 3) and the selection of embedding parameters for delay reconstructions (Sec. III A 4).

### A. Correlation dimension

Correlation dimension, one of the most common and useful dynamical invariants, can be calculated using the Grassberger–Procaccia algorithm.^{12,13} Given a set of points that sample an object, such as an attractor of a dynamical system, this algorithm estimates the average number of points in an $\epsilon$-neighborhood of a given point, $C(\epsilon)$. For an attractor of correlation dimension $d_2$, this scales with $\epsilon$ as

$$C(\epsilon) \sim \epsilon^{d_2}.$$

Estimating $d_2$ is, therefore, equivalent to finding a scaling region in the log–log plot of $C(\epsilon)$ vs $\epsilon$.^{14}
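The scaling law itself can be illustrated with a brute-force correlation sum. This hypothetical $O(N^2)$ sketch is our own—TISEAN's d2 is far more efficient—and uses uniform points in a square, for which the correlation dimension is 2:

```python
import numpy as np

def correlation_sum(pts, eps):
    """Fraction of distinct point pairs separated by less than eps."""
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    n = len(pts)
    return (np.count_nonzero(dists < eps) - n) / (n * (n - 1))

# For points filling a plane region, C(eps) ~ eps^2 at small scales.
rng = np.random.default_rng(0)
square = rng.uniform(0.0, 1.0, (800, 2))
c1, c2 = correlation_sum(square, 0.05), correlation_sum(square, 0.1)
slope = (np.log(c2) - np.log(c1)) / np.log(2.0)   # local slope of log C vs log eps
```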

In this section, we use TISEAN's^{1} d2 implementation, which outputs estimates of $C(\epsilon)$ for a range of $\epsilon$ values. We apply our ensemble-based method to a graph of $\log C(\epsilon)$ vs $\log\epsilon$ and identify the mode of the resulting slope distribution to obtain an estimate of the correlation dimension of the data set. When the slope distribution is multimodal, our method can also reveal potential scaling regions for different ranges of $\epsilon$, some of which may not be obvious on first observation. This analysis, then, not only yields values for the slope and extent of the scaling region but also provides insight into the geometry of the dynamics, the details of the Grassberger–Procaccia algorithm, and the interactions between the two.

#### 1. Lorenz

We start with the canonical Lorenz system,

$$\dot{x} = \sigma(y - x), \qquad \dot{y} = x(\rho - z) - y, \qquad \dot{z} = xy - \beta z,$$

with $\sigma = 10$, $\beta = 8/3$, and $\rho = 28$.^{15} We generate a trajectory from the initial condition $(0, 1, 1.05)$ using a fourth-order Runge–Kutta algorithm for $10^5$ points with a fixed time step $\Delta t = 0.01$, discarding the first $10^4$ points to remove the transient, for a total trajectory length of 90 000 points (see Fig. 10 in Appendix B). The chosen initial condition lies in the basin of the chaotic attractor, and the discarded length is sufficient to remove any transient effects. Figure 4(a) shows a log–log plot of $C(\epsilon)$ vs $\epsilon$ produced by the d2 algorithm. To the eye, this graph appears to contain a scaling region in the approximate range $[\ln\epsilon_l, \ln\epsilon_r] = [-3, 2]$. As in the box-counting dimension example in Sec. II, the curve flattens when $\epsilon$ is larger than the diameter of the attractor, since $C(\epsilon)$ does not change once $\epsilon$ grows beyond this point. Since the diameter of the Lorenz attractor is $25.5$, this flattening occurs for $\ln\epsilon > \ln(25.5) \approx 3.2$.
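Such a trajectory is easy to generate with a standard fourth-order Runge–Kutta integrator. The following self-contained sketch uses a much shorter run than the $10^5$ points above, purely for illustration:

```python
import numpy as np

def lorenz(v, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, v, dt):
    """One fixed-step fourth-order Runge-Kutta update."""
    k1 = f(v)
    k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2)
    k4 = f(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate from (0, 1, 1.05) with dt = 0.01, discarding a transient.
dt, n_total, n_trans = 0.01, 20_000, 2_000
traj = np.empty((n_total, 3))
v = np.array([0.0, 1.0, 1.05])
for i in range(n_total):
    v = rk4_step(lorenz, v, dt)
    traj[i] = v
traj = traj[n_trans:]
```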

Figures 4(b) and 4(c) show the results of our ensemble-based approach. The mode of the slope PDF falls at

$$d_2 = 2.09 \pm 0.02.$$

This gives an estimate for the correlation dimension of the Lorenz attractor that agrees with the accepted range of 2.06–2.16.^{16,17} The PDFs of the LHS and RHS markers in Fig. 4(c) have modes $\ln\epsilon_l = -2.6$ and $\ln\epsilon_r = 1.5$, respectively. Our algorithm's estimates of both end points are close to the informal ones, although slightly on the conservative side. In other words, our method can indeed effectively formalize the task of identifying both the existence and the extent of scaling regions.

The end point distributions can be broken down further in the heatmap-style scatterplot of Fig. 4(d) to reveal additional details. Each dot represents a single element of the ensemble: i.e., a fit for a particular interval $[\ln\epsilon_l, \ln\epsilon_r]$. Its color encodes the slope, and its radius is scaled by the fitting weight. If samples come from intervals that are close, the associated dots will be nearby; if these have low error, the dots will be large and their effective density will be high. Note that the domain of panel (d) is a triangle since $\epsilon_l < \epsilon_r$ by (1).

This visualization provides an effective way to identify scaling-region ranges that produce similar slope estimates: long scaling regions manifest as large regions of similarly colored points. Figure 4(d) shows such a triangular high-density region in blue (slope $\approx 2.0$), above the diagonal, bounded from the left by $\ln\epsilon_l \approx -4$ and from above by $\ln\epsilon_r \approx 2$. This clearly corresponds to the primary scaling region in Fig. 4(a) and the corresponding mode in panel (b). This heatmap can reveal other, shorter scaling regions. Indeed, panel (d) shows other clusters of similarly colored dots that are somewhat larger than the dots from neighboring regions: e.g., the green cluster for $\ln\epsilon_l \approx 2$ and $\ln\epsilon_r \approx 3$, with a slope near $1.5$. Close examination of the original curve in Fig. 4(a) reveals a small straight segment in the interval $[2,3]$. This feature, not immediately apparent from the original curve, stands out quite clearly in the distribution visualization. Two much smaller scaling regions at the two ends of panel (a) are also brought out by this representation: the interval of slope $\approx 3.0$ for small $\epsilon$ (the dark blue cluster at the lower left corner of the scatterplot) and the zero slope for large $\epsilon$ (the red cluster at the upper right corner).

#### 2. Pendulum

As a second example, we study the driven, damped pendulum,

$$\ddot{\theta} = -\nu_0^2 \sin\theta - \beta\dot{\theta} + A\sin(\alpha t),$$

where the natural frequency is $\nu_0 = 98$, the damping coefficient is $\beta = 2.5$, and the driving force has amplitude $A = 91$ and frequency $\alpha = 0.75\nu_0$. The coordinates of the three-dimensional extended phase space $T^2 \times \mathbb{R}$ are the angle $\theta$, the time $t$, and the angular velocity $\omega$. We generate a trajectory for this system using a fourth-order Runge–Kutta algorithm with initial condition $(3.14, 0, 50)$ for $1.1 \times 10^6$ points with a fixed time step of $0.001$, discarding the first $10^5$ points to remove the transient, resulting in a final time series of length one million. To avoid issues with periodicity in $\theta$ and $t$, we project the time series from $T^2 \times \mathbb{R}$ onto $(\sin\theta, \sin(\alpha t), \omega) \in \mathbb{R}^3$. The resulting trajectory is shown in Fig. 5. Note that the attractor has the bound $|\omega| \le 24.88$, but that the range of the other two variables is $[-1, 1]$ due to the sinusoidal transformation.

To the eye, the results of a d2 calculation on this trajectory, shown in Fig. 6(a), exhibit two apparent scaling regions, above and below $\ln\epsilon \approx 1$. The slope distributions produced by our method not only confirm but also formalize this observation. The larger peak of the distribution in panel (b) of the figure falls at $d_2 = 2.19 \pm 0.09$ ($FWHM = 0.32$ and $p_{FWHM} = 0.53$), which agrees with the correlation dimension of the attractor reported in Ref. 18. Note that, to account for the fact that the distribution is clipped on the right before the density falls below half the peak value, the computation of the FWHM uses this clipping point as the upper bound. The interval end point distributions in panel (c) indicate that the extent of this scaling region is $-2.6 < \ln\epsilon < 0$. To the eye, $\ln\epsilon_r \approx 0.8$ would seem a more appropriate choice for the RHS end point of the scaling region; however, minor oscillations in the interval $\ln\epsilon \in [0,1]$ cause the ensemble-based algorithm to be more conservative and choose $\ln\epsilon_r = 0$. The presence of a second scaling region in panel (a) for $\ln\epsilon > 1$ gives a second mode at the slope $0.94 \pm 0.10$ ($FWHM = 0.35$ and $p_{FWHM} = 0.26$). The secondary peaks in the end point distributions in panel (c) suggest an extent of $0.8 < \ln\epsilon < 2.8$ for this second scaling region, which is consistent with a visual inspection of panel (a).

This second scaling region is an artifact of the large aspect ratio of the attractor: only one of the variables, $\omega$, has a range larger than $1$, so when $\epsilon > 1$, the effective dimension is one. To test this hypothesis, we rescaled the components of the phase-space vectors so that the attractor has equal extent $[0,1]$ in all three directions (using the -E flag of the TISEAN d2 command) and repeated the d2 calculations. This rescales the axis of the $d_2$ plot (not shown)—$\ln\epsilon$ now has the range $[-7, 0]$—and causes the second scaling region in the previous results to disappear, leaving a single scaling region $-7 < \ln\epsilon < -2$ with slope

$$d_2 = 2.22 \pm 0.02.$$
Note that, in addition to resolving the artifact of the second scaling region, this rescaling also reduces the width of the mode and gives a much tighter error bound.

By revealing the existence, the end points, and the slopes of different scaling regions in the data, our ensemble-based approach has not only produced an objective estimate for the correlation dimension but also yielded some insight into the interaction between the d2 algorithm and the attractor geometry.

#### 3. Noise

Noise is a key challenge for any practical nonlinear time-series analysis method. We explore the effects of measurement noise on our method using the Lorenz trajectory of Sec. III A 1 by adding uniform noise *post facto* to each of the three state variables. The noise amplitude in each dimension is proportional to the radius of the attractor in that dimension. The correlation sum $C(\epsilon)$ for a noise amplitude $\delta = 0.01$—i.e., $1\%$ of the radius of the attractor in each dimension—is shown in Fig. 7(a). There appear to be two distinct scaling regions in this plot, above and below a slight knee at $\ln\epsilon \approx -1$. The slope distribution produced by our method is shown in Fig. 7(b). As in the pendulum example, this distribution is bimodal, indicating the presence of two scaling regions, with slopes near $2.94$ and $2.08$, respectively. We claim that these results are consistent with the geometry of the noise and of the dynamics, respectively. The taller peak corresponds to the interval $\ln\epsilon \in [-4.60, -1]$, delineated by red markers in panel (a). Note that the right end point of this region is close to $\epsilon_r \approx 0.01 \times 26$, the maximum extent of the noise. The computed slope of $2.94$ in this interval captures the geometry of a noise cloud in three-dimensional state space. The smaller, secondary peak at $2.08$ reflects the dimension of the attractor at scales larger than the noise, the interval $\ln\epsilon \in [-1, 2.2]$ that is bounded by green markers in panel (a).
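This noise model is simple to state in code. In the sketch below (our own helper), the per-dimension radius is taken to be half the coordinate's extent, which is our reading of “radius”:

```python
import numpy as np

def add_measurement_noise(traj, delta, rng):
    """Add uniform noise with amplitude delta times each coordinate's radius."""
    radius = 0.5 * np.ptp(traj, axis=0)            # half the extent, per dimension
    return traj + rng.uniform(-1.0, 1.0, traj.shape) * delta * radius

# Hypothetical anisotropic point cloud standing in for an attractor.
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, (1000, 3)) * np.array([10.0, 10.0, 1.0])
noisy = add_measurement_noise(clean, 0.01, rng)
```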

As the noise level increases, the geometry of the attractor is increasingly obscured (see Fig. 10 in Appendix B). Figures 7(c) and 7(d) show results for $\delta = 0.1$. As one would expect, the right-hand boundary of the lower scaling region moves outward, as shown by the red markers in panel (c). We observe that $\ln\epsilon_r \approx 0.16$, near the noise-cloud radius of $1.3$. With this noise level, the slope distribution is nearly unimodal, with its mode near $3.0$, again reflecting the geometry of the noise. While there appears to be a secondary peak in the slope distribution in panel (d), it is nearly obscured.

Interestingly, the identification of the scaling region due to the noise can be used to refine the calculation and retrieve some information about the dynamical scaling: one simply repeats the slope-distribution calculations using only the data for larger $\epsilon$ values—that is, discarding the values below the noise. Restricting to the interval $\ln\epsilon \in [-1.0, 4.0]$ gives the curve and the slope distribution shown in panels (e) and (f). In addition to the noise peak near $3.0$, this reveals two additional peaks. The first, near $2.6$, was hinted at by the tail of the distribution in panel (d). This, we conjecture, corresponds to the correlation dimension of the noisy dynamics. The final peak, near $1.8$, corresponds to the smaller scaling region $[2.0, 3.0]$ that can be seen in a heatmap-style scatterplot (not shown). The validity of this region can be confirmed by further limiting the ensemble to the interval $\ln\epsilon \in [1.0, 4.0]$, which then gives a single mode at $1.83$. Note that this corresponds to a smaller scaling region that was also seen in the noiseless case: the small green cluster in Fig. 4(d).

The ability of the ensemble-based slope distribution method to reveal secondary scaling regions and suggest refinements allows one to identify noise scales and even extract information that might be hidden by the noise.

#### 4. Time-series reconstructions

The previous examples assumed that all of the state variables are known. This is rarely the case in practice; rather, one often has data only from a single sensor. Delay-coordinate embedding,^{19,20} the foundation of nonlinear time-series analysis, allows one to reconstruct a diffeomorphic representation of the actual dynamics from observations of a scalar time series $x(t)$, provided that a few requirements are met: $x(t)$ must represent a smooth, generic observation function of the dynamical variables^{19,21} and the measurements should be evenly spaced in time. This method embeds the time series $x(t)$ into $\mathbb{R}^m$ as a set of delay vectors of the form $[x(t), x(t-\tau), \dots, x(t-(m-1)\tau)]$ for a given time delay $\tau$. Theoretically, the only requirements are $\tau > 0$ and $m > 2d_{cap}$, where $d_{cap}$ is the capacity dimension of the attractor on which the orbit is dense.^{19,22} Many heuristics have been developed for estimating these two free parameters,^{1,18,23–37} notably the first minimum of the average mutual information for selecting $\tau$^{25} and the false near neighbors algorithm for selecting $m$.^{37} See Refs. 2 and 38 for more details.
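Constructing the delay vectors themselves takes only a few lines; the helper below is our own sketch, with $\tau$ measured in sample indices:

```python
import numpy as np

def delay_embed(x, m, tau):
    """Delay vectors [x(t), x(t - tau), ..., x(t - (m - 1) tau)], one per row."""
    n = len(x) - (m - 1) * tau          # number of complete delay vectors
    return np.column_stack([x[(m - 1 - i) * tau:][:n] for i in range(m)])

# A tiny example: embed a ramp signal with m = 3, tau = 2.
E = delay_embed(np.arange(10.0), 3, 2)
```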

Since the correlation dimension is preserved under a diffeomorphism, it can be computed with the d2 algorithm on properly reconstructed dynamics. Moreover, calculations of the correlation dimension for different values of $m$ can help diagnose the correctness of an embedding. To explore how our method can contribute in this context, we embed the $x(t)$ time series of the Lorenz trajectory from Sec. III A 1 using $\tau = 18$, which was chosen using the curvature heuristic described in Ref. 18 and corroborated using the first minimum of the average mutual information. Using TISEAN, we then create a series of embeddings for a range of $m$ values and apply the Grassberger–Procaccia algorithm to each one. The results are shown in Fig. 8(a) for $m \in [1, 10]$.

As is well known, we can assess the convergence of $d_2$ by reasoning about such a sequence of curves. When $m$ is too small, the reconstructed attractor is improperly unfolded, giving an incorrect dimension, while for large enough $m$, $d_2$ typically converges—modulo data-limitation issues^{39}—to the nominally correct value. Since our ensemble methodology automates the calculation of scaling regions, it can simplify this calculation for multiple curves. To determine the value of $m$ for which the slopes converge, one simply computes the mode of each slope distribution and looks for the $m$ value at which the positions of those modes stabilize. For Fig. 8(b), the mode for $m = 1$ is, of course, $d_2 = 1$, but when $m$ reaches $3$, the distributions begin to overlap, and for $3 \le m \le 5$ the modes are $2.09 \pm 0.01$, essentially the estimate from the full dynamics in Sec. III A 1. This finding nicely corroborates the rigorous results of Ding *et al.*,^{39} which show—in the noise-free case—that the correlation dimension can be correctly recovered using $m = \lceil d_2 \rceil$: i.e., the smallest integer not less than $d_2$.

To formalize this procedure, we use the Wasserstein metric $M_W$ to compare pairs of distributions.^{40} Informally, this metric—also called the “earth mover's distance”—treats each distribution as a pile of soil, with the distance between two distributions defined as the minimum “effort” required to turn one pile into the other. Thus, $M_W = 0$ only when the distributions are identical. We use the Python function scipy.stats.wasserstein_distance to compute this metric. (https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html).

Given slope distributions $P_m$ and $P_{m-1}$ for successive embedding dimensions from Fig. 8(b), the distance $M_W(P_m, P_{m-1})$ is shown in panel (c) of the figure. Note that $M_W(P_m, P_{m-1})$ initially decreases rapidly with increasing $m$, approaching zero at $m = 4$; up to fluctuations, it remains approximately constant thereafter. After experimenting, we propose the heuristic threshold $M_W(P_m, P_{m-1}) \sim 0.1$ to detect convergence of $d_2$ as $m$ grows. Such a threshold has two benefits, giving estimates of both the embedding dimension and $d_2$.
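The convergence test then reduces to a short loop. In the sketch below, the slope distributions for successive $m$ are represented by hypothetical sample arrays that stabilize once the attractor is unfolded:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def first_converged_m(slope_samples, threshold=0.1):
    """First m whose slope distribution is within `threshold` of its
    predecessor in the Wasserstein metric; None if none qualifies."""
    ms = sorted(slope_samples)
    for m_prev, m in zip(ms, ms[1:]):
        if wasserstein_distance(slope_samples[m_prev],
                                slope_samples[m]) < threshold:
            return m
    return None

# Hypothetical slope ensembles that stabilize near 2.09 once m >= 3.
rng = np.random.default_rng(0)
samples = {1: rng.normal(1.00, 0.05, 1000),
           2: rng.normal(1.70, 0.05, 1000),
           3: rng.normal(2.09, 0.02, 1000),
           4: rng.normal(2.09, 0.02, 1000)}
m_star = first_converged_m(samples)
```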

To assess the generality of these ideas, we applied the ensemble-based method to four additional scalar time series: the noisy Lorenz data of Sec. III A 3 with $\delta = 0.1$, using $x(t)$; the pendulum trajectory of Sec. III A 2, using $\omega(t)$; a shorter, $2 \times 10^5$-point segment of that orbit; and a trajectory from the Lorenz-96 model^{41} with phase-space dimension $d = 14$. In each case, we again used the curvature-based heuristic of Ref. 18 to estimate $\tau$. This set of examples represents a range of problems that arise in practice: measurements affected by noise, a trajectory that is too short to fully cover an attractor, and a high-dimensional attractor. The results, presented in Table I, show that $d_2$ for the reconstructed dynamics is close to that of the full dynamics, as well as to the values obtained previously by manually fitting scaling regions.^{18} However, the embedding dimensions are often significantly smaller than those suggested by other work. For the first four examples in Table I, Ref. 18 used $m = 7 = 2d + 1$, the dimension required by the Takens theorem for a three-dimensional state space;^{19} for the Lorenz-96 example, that paper used $m = 12$, obtained from the heuristic $m > 2d_{cap}$.^{22} Both $m \ge 2d + 1$ and $m > 2d_{cap}$ are, of course, *sufficient* conditions. The Wasserstein test used in Table I shows that accurate estimates of $d_2$ can often be obtained with a lower embedding dimension and without prior knowledge of the original dynamics.

| System | Full dynamics $d_2$ | Reconstructed $d_2$ | Reconstructed $\tau$ | Reconstructed $m$ |
|---|---|---|---|---|
| Lorenz (Sec. III A 1) | 2.09 ± 0.02 | 2.09 ± 0.01 | 18 | 4 |
| Noisy Lorenz (δ = 0.1) | 2.59 ± 0.20 | 2.27 ± 0.14 | 21 | 7 |
| Pendulum (Sec. III A 2) | 2.22 ± 0.02 | 2.16 ± 0.02 | 120 | 4 |
| Truncated pendulum | 2.23 ± 0.07 | 2.20 ± 0.03 | 120 | 4 |
| Lorenz-96 | 5.63 ± 0.09 | 5.80 ± 0.07 | 24 | 10 |


### B. Lyapunov exponent

Computing a Lyapunov exponent also often involves the identification and characterization of scaling regions. Here, we use the Kantz algorithm^{42} to estimate the maximal Lyapunov exponent $\lambda_1$ for reconstructed trajectories of the Lorenz and chaotic pendulum examples from Sec. III A 4. We embed the scalar data using a delay $\tau$ found by standard heuristics and experiment with various embedding dimensions. The Kantz algorithm starts by finding all of the points in a ball of radius $\epsilon_s$ (also called the “scale”) around randomly chosen reference points on an attractor. By marching through time, the algorithm then computes the divergence between the forward images of the reference points and the forward images of the other points in the $\epsilon_s$ ball. The average divergence across all reference points at a given time is the *stretching factor*. To estimate $\lambda_1$, one identifies a scaling region in the log–linear plot of the stretching factor as a function of time.
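To make the procedure concrete, here is a brute-force sketch of the stretching-factor computation in the spirit of the Kantz algorithm. This is an illustrative simplification, not TISEAN's optimized implementation; the function name and parameter names are ours:

```python
import numpy as np

def stretching_factor(orbit, eps, t_max, n_ref=100, seed=0):
    """Average log-divergence S(t) over randomly chosen reference points:
    for each reference point, track the mean distance to its epsilon-ball
    neighbors' forward images. `orbit` is an (N, m) array of embedded states."""
    rng = np.random.default_rng(seed)
    N = len(orbit)
    refs = rng.choice(N - t_max, size=n_ref, replace=False)
    S = np.zeros(t_max)
    counts = np.zeros(t_max)
    for i in refs:
        # indices of neighbors within the epsilon ball (excluding i itself)
        dists = np.linalg.norm(orbit[: N - t_max] - orbit[i], axis=1)
        nbrs = np.flatnonzero((dists < eps) & (np.arange(N - t_max) != i))
        if len(nbrs) == 0:
            continue
        for t in range(t_max):
            # mean separation of the forward images after t steps
            sep = np.linalg.norm(orbit[nbrs + t] - orbit[i + t], axis=1)
            S[t] += np.log(sep.mean())
            counts[t] += 1
    return S / np.maximum(counts, 1)
```

A scaling region in the plot of $S(t)$ vs $t$ then has slope $\lambda_1\,\Delta t$.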

Note that this procedure involves three free parameters: the embedding parameters $\tau$ and $m$ and the scale $\epsilon_s$ of the balls used in the algorithm. (Other parameters—the number of reference points, the Theiler window, the length of the temporal march, etc.—were set to TISEAN’s default values.) To obtain accurate results, one is confronted by an onerous multivariate parameter-tuning problem: a situation in which an automated approach can be extremely advantageous. If, for example, one embeds the $x$ coordinate of the Lorenz data from Sec. III A 1 with $m$ ranging from 2 to 9 and chooses 10 values of $\epsilon_s$ (e.g., on a logarithmic grid between 0.038 and 0.381), then there will be 80 different curves, as seen in Fig. 9(a). Manually fitting scaling regions to each curve would be a demanding task.

The ensemble method gives the automated results shown in panel (b) of the figure. Each row of this 2D grid corresponds to fixed $m$ and each column to fixed $\epsilon_s$. The estimated $\lambda_1$—the location of the mode in the slope PDF—is the value shown in the cell. To help detect convergence, each cell is shaded according to the corresponding $\lambda_1$ value. Note that the majority of the configurations (48 out of 80, colored in intermediate gray) give an estimate of $\lambda_1=0.89\pm0.02$, which is consistent with the known value.^{43}

The pattern in the grid makes sense. The cells in its center correspond to midrange combinations of the free parameters: $m\in[5,9]$ and $\epsilon_s\in[0.064,0.177]$. Values outside these ranges create well-known issues for the Kantz algorithm in the context of finite data. If $\epsilon_s$ is too small, the ball will not contain enough trajectory points for the algorithm to effectively track the divergence; if $\epsilon_s$ is too large, the ball will straddle a large portion of the attractor, thereby conflating state-space deformation with divergence. If $m$ is too small, the attractor is insufficiently unfolded, so the values in the top few rows of the grid are suspect. In the upper right corner, these two effects combine; moreover, for these cases, the zero-slope segments on the right ends of the stretching factor curves become dominant (since the $\epsilon_s$ ball reaches the edge of a flattened attractor more quickly), causing the ensemble-based algorithm to return a slope close to zero. Finally, we see a slightly higher estimate of $\lambda_1=1.17$ for $m=9$ and $\epsilon_s=0.038$. On inspection, the stretching factor plot for this case exhibits significant oscillations that distort most of the curve. Though the algorithm chooses a relatively straight section of this curve as a scaling region, the distortion shows that this parameter combination should be avoided when estimating $\lambda_1$.

The process is repeated for the driven damped pendulum example in Figs. 9(c) and 9(d). Here, we use a slightly different trajectory than in Sec. III A 2: 500,000 points (after removing a transient of 50,000 points) with a time step of 0.01. For this trajectory, the heuristic of Ref. 18 suggests an embedding delay $\tau=21$. From the grid, we observe patterns similar to those in the Lorenz example: 46 out of 80 combinations give $\lambda_1=0.93\pm0.04$ (for $m\in[4,9]$ and $\epsilon_s\in[0.083,0.3]$). We observe significantly lower estimates when the embedding dimension is low and $\epsilon_s$ is high. Additionally, an anomaly similar to that in the Lorenz example is observed for higher embedding dimensions ($m\in[8,9]$) and smaller scales ($\epsilon_s\in[0.05,0.065]$), giving $\lambda_1=0.60\pm0.02$. This is due to low-frequency oscillations in the stretching factor plots, with a relatively straight region toward the right end of the plots where they start to flatten. As in the Lorenz example, these parameter combinations should be avoided.

The ensemble approach allows easy automation of the computation of scaling regions for various values of the hyperparameters, thus sparing significant manual labor. Moreover, the grid visualization shows the “sweet spot” in the $(m,\epsilon_s)$-parameter space. Of course, this is only a relatively crude convergence criterion, and one could instead use a more rigorous test such as the Wasserstein distance of Sec. III A 4.

## IV. CONCLUSIONS

The technique described above is intended to formalize a subjective task that arises in the calculation of dynamical invariants: the identification and characterization of a scaling region, an interval in which one quantity grows linearly with respect to another. By manipulating the axes, one can extend this to detect exponential or power-law relationships: e.g., using log–log plots for fractal dimensions and log–linear plots for Lyapunov exponents, as shown in Secs. III A and III B. Moreover, linearity is not a limitation in the applicability of this approach: our methodology could be used to identify regions in a dataset with any hypothesized functional relationship (e.g., higher order polynomial, logarithmic, exponential, etc.). One would simply compute the least squares fit over the data using the hypothesized function instead.
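The extension to arbitrary hypothesized forms amounts to swapping the straight-line model in the least squares step for a different basis. A minimal sketch, assuming a logarithmic hypothesis $y = a\log x + b$ on a single interval (the function name is ours; the ensemble method would collect the coefficient $a$ over all intervals):

```python
import numpy as np

def fit_hypothesis(x, y, basis):
    """Linear least squares in an arbitrary basis; returns the coefficient
    vector and the RMS residual. `basis` maps x to a list of design-matrix
    columns, e.g. [log(x), 1] for y = a*log(x) + b."""
    A = np.column_stack(basis(x))
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coeffs
    return coeffs, np.sqrt(np.mean(resid**2))

# Synthetic data following y = 2.5*log(x) + 1 exactly.
x = np.linspace(1.0, 10.0, 200)
y = 2.5 * np.log(x) + 1.0
coeffs, rms = fit_hypothesis(x, y, lambda x: [np.log(x), np.ones_like(x)])
```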

A strength of our method is that the scaling region is chosen automatically, as the optimal family of intervals over which the fits have similar slopes with small errors. To do this, we create an ensemble of fits over different intervals, building PDFs to identify the boundaries and slopes of any scaling regions and to obtain error estimates. When the scaling region is clear, our ensemble-based approach computes a slope similar to that which would be manually determined. Another strength of this method is its robustness: while the examples in this paper involve scaling regions containing hundreds of points, the method works surprisingly well even when the number of input points is relatively small and the responses are fairly noisy, albeit with a higher error estimate. Finally, we showed that a convergence test for the slope distributions helps choose the delay reconstruction dimension. This is a challenging task: even though there are rigorous theoretical guidelines, the information needed to apply them is not known in advance from a given scalar time series. As a next step, we are investigating how this approach compares with existing methods for estimating the correct embedding dimension (such as by false nearest neighbors) for delay reconstruction.

This kind of objective, formalized method is useful not only because it reveals scaling regions that may not be visible but also because it provides confidence in their existence in the form of error estimates. Moreover, since computing a dynamical invariant, such as the Lyapunov exponent, can involve finding scaling regions in many curves, an automated method provides a clear advantage over hand selection. Our method could be potentially useful in areas outside dynamical systems as well: e.g., estimating the scaling exponent for data drawn from a power-law distribution. This is an active area of research.^{5,44} Standard techniques involve inferring which power-law distribution (if any) the data are most likely drawn from, and—if so—which exponent is most likely. Our method could potentially help narrow the range of exponents to begin such a search. There are other potential applications as well, e.g., detection of scaling regions in fluctuation functions that arise in detrended fluctuation analysis (DFA),^{45} as well as in wavelet or rescaled range (R/S) analysis^{46,47} for the determination of the Hurst exponent^{48} in stochastic processes with long-range temporal correlations.

There are a number of interesting avenues for further work, beginning with how to further refine the confidence intervals for slope estimates. We have used the standard deviation of the slopes within the $FWHM$ around the mode of the distribution. Alternatively, one could use the average of the least squares fit error for the samples within the $FWHM$ as the confidence interval. Another possibility would be to first extract a single scaling region (by determining the modes for the left- and right-hand side markers) and then compute the error statistics over all the sub-intervals of this scaling region (e.g., standard deviation of the slopes or the average of the fit errors). We used the Wasserstein distance to assess the convergence of a sequence of slope distributions to help choose an embedding dimension. This may be too strict since it requires that the *entire* distributions be close. One could instead target the positions of the modes, quantifying closeness using some relevant summary statistic like their confidence intervals. Of course, for multimodal distributions, the Wasserstein distance test is more appropriate since we cannot choose a single mode for computing the intervals.
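As a concrete illustration of the first of these error estimates, the following sketch (the function name is ours, not from a published implementation) computes the mode of the slope PDF and the standard deviation of the slope samples within the FWHM around it:

```python
import numpy as np
from scipy.stats import gaussian_kde

def mode_and_fwhm_error(slopes, weights=None, grid_size=1000):
    """Estimate the scaling-region slope as the mode of a KDE over the
    slope samples, with the standard deviation of the samples inside the
    full width at half maximum (FWHM) as the error bar. Assumes the
    distribution is unimodal."""
    kde = gaussian_kde(slopes, weights=weights)  # Scott's bandwidth by default
    grid = np.linspace(slopes.min(), slopes.max(), grid_size)
    pdf = kde(grid)
    mode = grid[np.argmax(pdf)]
    half = pdf.max() / 2.0
    inside = grid[pdf >= half]           # grid points above half maximum
    lo, hi = inside.min(), inside.max()  # FWHM interval
    sigma = slopes[(slopes >= lo) & (slopes <= hi)].std()
    return mode, sigma
```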

## ACKNOWLEDGMENTS

The authors acknowledge support from the National Science Foundation (NSF) [Grant Nos. EAGER-1807478 (J.G.), AGS-2001670 (V.D., E.B., and J.D.M.), and DMS-1812481 (J.D.M.)]. J.G. was also supported by Omidyar and Applied Complexity Fellowships at the Santa Fe Institute. Helpful conversations with Aaron Clauset and Holger Kantz are gratefully acknowledged as well as useful suggestions from the anonymous referees.

## DATA AVAILABILITY

The data that support the findings of this study are available within the article.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

### APPENDIX A: ALGORITHM

The pseudocode for the proposed ensemble-based approach is described in Algorithm 1. There are four important choices in this algorithm. The first is the parameter $n$, the minimum spacing between the left-hand and right-hand side markers $LHS$ and $RHS$. This choice limits what is deemed to be a “significant interval” for a scaling region. We set $n=10$ in this paper, which we chose using a persistence analysis: that is, varying $n$ and seeing if the results change. One could increase $n$ for larger data sets. The second is the choice of a kernel density estimator (KDE) for the histograms. We used a Gaussian KDE via the Python implementation scipy.stats.gaussian_kde,^{49} with Scott’s method^{50} to automatically select the kernel bandwidth. Alternatively, one might choose other bandwidth selection methods, such as Silverman’s method,^{51} or simply specify the bandwidth manually. Scott’s method, which is the default used by the package, worked well for all our examples.
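The bandwidth options just mentioned map directly onto scipy.stats.gaussian_kde's `bw_method` argument; the sample data below are synthetic:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
slopes = rng.normal(2.0, 0.1, size=500)  # illustrative slope samples

kde_scott = gaussian_kde(slopes)                      # default: Scott's rule
kde_silverman = gaussian_kde(slopes, bw_method="silverman")
kde_manual = gaussian_kde(slopes, bw_method=0.3)      # manual bandwidth factor
```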

1: Assume a plot $(x[0:N-1], y[0:N-1])$ that potentially contains a scaling region.

2: Initialize empty arrays $slopes$, $lengths$, $errors$, $x_l$, and $x_r$.

3: for $lhs = 0, 1, 2, \ldots, N-1$ do

4: for $rhs = 0, 1, 2, \ldots, N-1$ do

5: if $rhs - lhs > n$ then

6: Fit a line using least squares to the data $x[lhs:rhs], y[lhs:rhs]$.

7: Obtain an estimate for the slope $m$ and intercept $c$.

8: Compute the least squares fit length and fitting error, $\text{fit length} = \sqrt{1+m^2}\,|x[rhs]-x[lhs]|$ and $\text{fit error} = \sqrt{\sum_{i=lhs}^{rhs} \left(y[i]-m\,x[i]-c\right)^2 / (rhs-lhs)}$.

9: Append the slope, fit length, and error to the arrays $slopes$, $lengths$, and $errors$, respectively.

10: Append $x[lhs]$ and $x[rhs]$ to the end point arrays $x_l$ and $x_r$, respectively.

11: Generate a histogram $H$ from the $slopes$ array, weighting each point $i$ by $weight[i] \propto (\text{fit length}[i])^p / (\text{fit error}[i])^q$. (A3)

12: Using a kernel density estimator, generate a probability distribution function $P$ from $H$ [see, e.g., Fig. 1(b)].

13: Compute the mode(s) of $P$ and the error estimates as described in Sec. II.

14: Generate histograms $H_l$ and $H_r$ for $x_l$ and $x_r$ from Step 10, weighting the frequency with (A3). Generate PDFs, as in Step 12, to obtain distributions $P_l$ and $P_r$ [see, e.g., Fig. 1(c)].

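The loop structure of Algorithm 1 translates directly into a short script. The following is a minimal sketch (the function name and the numerical guard against zero fit error are ours; weights follow (A3), and Steps 10 and 14, the end point PDFs, are omitted for brevity):

```python
import numpy as np
from scipy.stats import gaussian_kde

def scaling_region_slope(x, y, n=10, p=1, q=2, grid_size=1000):
    """Ensemble of least squares fits over all intervals with more than n
    points, weighted by (fit length)^p / (fit error)^q; returns the mode
    of the resulting slope PDF as the scaling-region slope estimate."""
    N = len(x)
    slopes, weights = [], []
    for lhs in range(N):
        for rhs in range(lhs + n + 1, N):
            xs, ys = x[lhs:rhs], y[lhs:rhs]
            m, c = np.polyfit(xs, ys, 1)              # slope and intercept
            length = np.sqrt(1 + m**2) * abs(xs[-1] - xs[0])
            error = np.sqrt(np.mean((ys - m * xs - c) ** 2))
            slopes.append(m)
            weights.append(length**p / max(error, 1e-12) ** q)
    slopes, weights = np.array(slopes), np.array(weights)
    kde = gaussian_kde(slopes, weights=weights / weights.sum())
    grid = np.linspace(slopes.min(), slopes.max(), grid_size)
    return grid[np.argmax(kde(grid))]                 # mode of the slope PDF
```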

The two other important parameters are the powers $p$ and $q$ used for the weights of the fit length and fit error in (A3). We explored ranges for these parameters, constraining $p$ and $q$ to non-negative integers, and determined suitable values that worked well for all of the examples considered here. Some interesting patterns emerged in these explorations. First, we found that $p>0$ helps reduce the error estimate $\sigma$ for the slope and improve $p_{FWHM}$. Note that $p>0$ penalizes the shorter fits near the edges of the $FWHM$, suppressing their influence on the histogram. The $FWHM$ therefore narrows, in turn reducing the error estimate $\sigma$. Second, setting $p\le q$ was found to be advantageous in all cases, ensuring that the algorithm does not prioritize unnecessarily long fits of poor quality. Enforcing these two conditions, we experimented with a few choices of $p$ and $q$ in the context of Fig. 1 and found that lower values generally work better. Higher powers tend to magnify the effects of small errors and small lengths, making the algorithm very conservative about what constitutes a good fit. With this in mind, we settled on $p=1$ and $q=2$ for all of the correlation dimension estimations. For the Lyapunov exponent examples, on the other hand, we found that $p=1$ and $q=1$ do better, generating tighter confidence interval bounds around the estimate across the various parameter combinations.

This algorithm always uses the full data set to compute the ensemble but note that it does not require *even* spacing of the data. Given a much larger data set, it might make sense to downsample to speed up the execution. In its current implementation, the run-time complexity of the algorithm is $O(N^2)$ for a relationship curve with $N$ points. In the future, it might be useful to develop faster algorithms for sampling and generating the slope distributions.

### APPENDIX B: EFFECTS OF NOISE ON THE LORENZ ATTRACTOR

Figure 10 shows the effects of noise on the Lorenz attractor (viz., the examples in Sec. III A 3). As we increase the noise level (as a fraction of the attractor radius) $\delta $ from 0.0 to 0.1, the dynamics of the attractor are progressively distorted. As discussed in Sec. III A 3, the effects of this distortion on the correlation dimension plots, and hence the slope distributions, are clearly visible (see Fig. 7).

### APPENDIX C: ADDITIONAL PLOTS FOR TIME-SERIES RECONSTRUCTION

Here, we present figures to support the results of Table I.

For the noisy Lorenz data, Fig. 11 shows the slope distributions and the Wasserstein distance between slope distributions for consecutive embedding dimensions, $M_W(P_m,P_{m-1})$. Panels (a) and (b) use $\tau=21$, as in Table I. Note that $M_W(P_m,P_{m-1})$ converges more slowly for the noisy data than it did for the deterministic case in Fig. 8; it falls below $0.1$ at $m=7$, giving $d_2=2.27\pm0.14$. Figures 11(c) and 11(d) show the reconstruction results for $\tau=60$, the embedding delay given by the average mutual information (AMI) method.^{25} Here, $M_W(P_m,P_{m-1})$ does not reach $0.1$ for $m\le 10$, implying that the embedding dimension should be larger than $10$, contrary to the theoretical requirement $m=7$. Indeed, panel (c) shows that the position of the mode grows monotonically with $m$, reaching values much higher than the expected $d_2=2.59$ for the full dynamics.

Figures 12 and 13 show the results for the pendulum trajectories of length $10^6$ and $2\times 10^5$, respectively, generated using a delay of $\tau=120$. In both cases, the Wasserstein distance threshold is reached at $m=4$, resulting in the values in Table I.

Finally, Fig. 14 shows the delay reconstruction of the Lorenz-96 trajectory using $\tau=24$. Convergence occurs at $m=10$, giving $d_2=5.80\pm0.07$.

## REFERENCES

*Computer Science and Communications Dictionary* (Springer US, Boston, MA, 2001), p. 661.

*The Theory of Chaotic Attractors*(Springer, 2004), pp. 170–189.

*Dynamical Systems and Turbulence*(Springer, Berlin, 1981), pp. 366–381.

*Predictability of Weather and Climate*(Cambridge University Press, 2006), pp. 40–58.