The fractal dimension is a central quantity in nonlinear dynamics and can be estimated via several different numerical techniques. In this review paper, we present a self-contained and comprehensive introduction to the fractal dimension. We collect and present various numerical estimators and focus on the three most promising ones: generalized entropy, correlation sum, and extreme value theory. We then perform an extensive quantitative evaluation of these estimators, comparing their performance and precision across different datasets and assessing the impact of features like length, noise, embedding dimension, and falsifiability, among many others. Our analysis shows that for synthetic noiseless data, the correlation sum is the best estimator, with extreme value theory following closely. For real experimental data, we found the correlation sum to be more strongly affected by noise than the entropy and extreme value theory estimators. The recent extreme value theory estimator seems powerful, as it has some of the advantages of both alternative methods. However, using four different ways of checking for significance, we found that the method yielded “significant” low-dimensional results for inappropriate data like stock market timeseries. This fact, combined with some ambiguities we found in the literature on applications of the method, has implications for both previous and future real-world applications using the extreme value theory approach, as, for example, the argument for small effective dimensionality in the data cannot come from the method itself. All algorithms discussed are implemented as performant and easy to use open source code via the DynamicalSystems.jl library.
When chaotic dynamical systems evolve in time, they typically create sets in the state space that have fractal properties. One of the major ways to characterize these chaotic sets is through a computationally feasible version of a fractal dimension (FD). In the field of nonlinear dynamics, the correlation sum and the generalized (Rényi) entropy are the two most commonly used approaches. One attempts to find a scaling exponent of these quantities vs a size parameter, and this exponent approximates the fractal dimension. A third method based on extreme value theory is a promising alternative, but it has been developed only recently and, hence, has not undergone the same amount of scrutiny as the previous two methods. Here, we provide a comprehensive, up-to-date, and self-contained analysis of available methods, comparing across every conceivable scenario. We also provide open source implementations to compute each method.
I. INTRODUCTION
Fractal geometry deals with geometric objects (or sets) called fractals,1,2 which are “irregular” in terms of traditional Euclidean geometry. Their most striking property arguably is that they possess structure at all scales, which typically remains invariant in one form or another, no matter how much one zooms into the set. Because of this, the traditional topological dimension is not fit to describe these sets adequately (Chap. 5 of Ref. 3). The concept of a fractal dimension, a generally non-integer number, is, therefore, employed to characterize such objects.2,4 The fractal dimension, which will be shortened to FD in the rest of the article, can be used to quantify the complexity of the geometry, its scaling properties and self-similarity, and the effective ratio of surface areas to volumes.2 It has been applied in a vast array of different scenarios, from the archetypal measurement of coastlines5,6 to being suggested as a tool for validating abstract art, e.g., that of Pollock.7,8
The evolution of chaotic dynamical systems results in sets in the state space that are typically fractal4,9 and, thus, can be characterized by a FD,10–16 done to our knowledge for the first time by Russell et al.17 For dissipative systems, these sets are called strange or chaotic attractors17,18 (boundaries of basins of attraction and chaotic saddles can also be fractal19,20). For conservative systems, they often have special properties and are (typically) called fat fractals.21 For fractal sets resulting from evolving dynamical systems, an estimate of FD provides the additional crucial information of the effective degrees of freedom of the time evolution. This means that calculating the FD for an experimentally obtained dataset can be used to guide the modeling process,3,22,23 as the ceiling of the FD is the minimum number of independent variables that can model the system. Alternatively, if one has a model that confidently approximates the real system, due to physical arguments, then the FD of the model output can be used to tune model parameters: since FD is a dynamic invariant,3,22 one can compare FDs of the model simulations to the FD obtained by observed data and change parameters until those two values match.
These reasons have motivated many researchers to compute the FD of many real-world systems by delay embedding measured timeseries.22 Examples include global climate,24–26 physiology,27 and lasers,28 but many more exist. Unfortunately, some of these studies have been challenged, because calculating the fractal dimension in practice is a difficult, error prone process, with limited means of providing confidence to the obtained result. This is, for example, highlighted well in the controversy of estimating FDs of “climate attractors” (see, e.g., Ref. 26, and references therein), where FD estimates were incorrectly used to claim a very low FD for the whole climate system. Researchers, thus, often need to interpret results partly subjectively, due to the lack of objective measures.
In this paper, we compare as many computationally feasible estimators of FDs as possible, in as objective a manner as possible, and across as many scenarios as possible. With this, we provide an objective baseline against which researchers can check their results, reducing the amount of interpretation and subjectivity. It is important to stress the separation between computing the FD of “the deterministic dynamics,” i.e., the dynamic invariant characterizing the flow in the state space, and the FD of “the graph of a timeseries vs time.” The latter is relevant for stochastic perspectives, is often related with the Hurst exponent,29,30 and is typically used as a quantifier of timeseries more suitable for classification tasks than typical statistics-based quantifiers (see, e.g., Refs. 31 and 32). These two FD versions are very different and unrelated in the general sense. Our paper focuses exclusively on the first version.
Hence, our main operating assumption is that we have a multivariate (sometimes also called multidimensional) dataset, obtained from a dynamical system for which we do not know the dynamic rule (equations of motion). If one has a timeseries, it first must be reconstructed into a state space set via delay embedding or other means.3,22,23 Our operating assumption is motivated, on the one hand, by the increased interest in observed or measured multivariate data and the higher accessibility of sensors and experimental data prepared for a nonlinear-dynamics-based analysis.33 On the other hand, there is also noticeable recent progress regarding better attractor reconstruction techniques based on delay embeddings, which can also utilize multivariate measurements, yielding a higher quality reconstruction overall (see, e.g., Refs. 34 and 35 by Kraemer et al., and references therein). We point out that our goal here is estimating FDs of chaotic sets knowing that these exist, in order to separate the problem of the FD estimation from the scientific questions surrounding the interpretation of the data. The latter requires following best practices, e.g., various data pre- and post-processing, de-noising, or surrogate tests comparing the FD value of the real data with those of surrogates. These steps are entirely skipped here, and the reader can find more information in standard timeseries analysis textbooks or review articles, such as Refs. 23 and 28.
Since the first review on the topic of FD by Theiler in 1990,16 computers and software have improved and several new algorithms to estimate fractal dimensions have been proposed. It is, thus, timely to revisit the subject, and, here, we will provide a comparison and evaluation of FD estimators across a range of topics much larger than what has been done so far. Beyond comparing and evaluating various fractal dimension estimators, we provide optimized, easy-to-use, extensively tested, open source implementations for all algorithms discussed in this review in the DynamicalSystems.jl software library.36
This review and comparison paper is structured as follows (see also Fig. 1 for a summary of the paper). In Sec. II, we provide a self-contained, concise definition of the major methods used to compute FDs and how they relate to the natural density and to each other. Connected with this section is Appendix A, which presents all (computationally feasible) algorithms we have found for estimating a fractal dimension. The core of the paper is Secs. III and IV, which compare in detail the three best and most popular estimators of FDs: scaling of entropy, scaling of correlation sum, and an extreme value theory approach. We compare across data dimensionality, data length, different kinds of dynamical systems, noise levels, real-world data, various embeddings of timeseries, and the order of the fractal dimension, among others. We close the paper with a summary of our findings in Sec. VI. Every result, plot, and method that we present in this paper is fully reproducible via open source code and adjustable to other input datasets (see Appendix B for details).
II. STATE OF THE ART
An easy to understand introduction and review regarding the fractal dimension was given by Theiler in 1990.16 The theoretical background of the fractal dimension and the methods (known until 1990) to compute it are summarized, and a plethora of historic references is given there. A more recent publication on fractal dimensions in the style of a review is given by Lopes and Betrouni in 2009.37 However, it is focused on image analysis and pattern recognition and does not include any quantitative comparison. The most recent detailed source that provides quantitative information on the limitations and pitfalls of fractal dimension estimates is the well known textbook by Kantz and Schreiber.22
A number of ways to estimate a FD, which are applicable to dynamical systems, have been devised in the literature. For this comparison, we found, implemented, and compared the methods that we briefly summarize in Table I. From these, our main analysis focuses on the three most prominent ones. In the rest of this section, we will provide a concise, yet self-contained, summary of the concept of a fractal dimension in dynamical systems. Then, we introduce the three main estimators that we use in the extensive comparisons done in Secs. III and IV and illustrate how the different estimators connect to the natural density and to each other. In the following, we assume that we have a set X that contains N points of dimensionality D, representing a (possibly observed) multivariate timeseries of a dynamical system. We will use the letter Δ to denote various versions of a fractal dimension, and we will use a superscript in parenthesis, such as Δ^(C), to denote the particular estimator used to estimate Δ.
| Estimator | Brief description | Main Refs. |
| --- | --- | --- |
| Natural measure entropy | Scaling of the generalized entropy H_q of amplitude binning vs box size ε | 17 |
| Molteno’s histogram optimization | Optimized algorithm for amplitude binning with restricted size | 38 |
| Correlation sum | Scaling of the correlation sum C_q vs radius ε | 10 and 39 |
| Box/prism-assisted correlation sum | Optimized algorithm to calculate the correlation sum | 40 |
| Performance-optimized box size | Performance-optimized box-assisted algorithm | 41 |
| Logarithmic correction | Better converging fit instead of the standard least squares fit of log C_2 vs log ε | 42 |
| Takens’ estimator | Maximum likelihood estimation of the scaling exponent (for q = 2) | 13 and 43 |
| Judd’s estimator | Binned MLE with additional degrees of freedom from a polynomial | 44 and 45 |
| Mean return times | Logarithmic scaling of the mean return time to an ε-sphere vs ε | 46 and 16 |
| Lyapunov dimension | Kaplan and Yorke’s linear interpolation for the point where the sum of Lyapunov exponents = 0 | 47 |
| Lyapunov dimension via fits | Higher-order interpolations and other fits for the point where the sum of Lyapunov exponents = 0 | 48 |
| Extreme value theory based | Rare events follow a Pareto distribution whose parameter is the fractal dim. | 49 |
| Persistent homology | Quantify how the topology of a shape changes as it is thickened | 50 |
A. What does a fractal dimension really quantify?
In the introduction, we discussed how the FD is useful for practical matters and, hence, worthy of being estimated. However, before discussing any estimators for a FD, it is useful to conceptualize what the FD truly characterizes, as this is the origin of all practical estimations.
For this discussion, we ignore the presence of noise existing in real data, and the fact that observed timeseries need to be delay embedded to yield higher dimensional data. The starting assumption, therefore, is that the set X we have at hand is a faithful sampling of some D-dimensional invariant set Λ (typically a chaotic attractor) of a dynamical system, i.e., X ⊂ Λ. Λ is itself characterized by its natural, or invariant, measure μ, which defines a probability space on top of Λ by requiring μ(Λ) = 1. Equivalently, we may use the natural density ρ, which in practical terms is the D-dimensional histogram of X. To learn more on the natural measure, one should consult various textbooks on nonlinear dynamics3,18,21 or the review by Theiler.16 In essence, while the samples form a time sequence of points with deterministic origin, they can also be thought of as points sampled “at random” from Λ according to the measure μ. Note that throughout this FD description, we make the fundamental assumption that Λ (with measure μ) is ergodic.
This intuition-based definition of a local and attractor dimension is further motivated by Theiler in his review article on the basis of the Hausdorff dimension.16 Equation (1), however, is noncomputable because μ is unknown, since in practice, we have only partial knowledge of μ due to the finite observations in X. In addition, for the overwhelming majority of dynamical systems, μ does not have an analytic expression anyway, irrespective of finite data. An estimator of a FD, therefore, attempts to measure either the local scaling of Eq. (1) or the global scaling obtained by averaging in Eq. (2).
Before going into specific estimators, we need to stress that none of the estimators we considered in this review yield the Hausdorff dimension.2,52 Often in the literature, researchers use the term “Hausdorff dimension” to refer to the output of the estimators, but no one has provided any formal proof of the equivalence between the estimates and the Hausdorff dimension (which itself has a very precise and rigorous mathematical definition). This is especially so for the box-counting dimension, where it is easy to prove that it cannot be the same as the Hausdorff dimension.53
B. Entropy of natural density
To connect H_q with ρ, we re-write Σ_i p_i^q = Σ_i p_i p_i^(q−1). This is a weighted average, equaling ⟨p^(q−1)⟩, since Σ_i p_i = 1. We note that p_i is our notion of “amount,” as we measure the amount by the probability mass. Regularizing the expression by its exponent, the quantity ⟨p^(q−1)⟩^(1/(q−1)) is the “average bulk” or “average amount” in a hypercube of linear size (i.e., scale) ε. The number q settles the way we average: for q = 2, we have the arithmetic (typical) average; for q = 3, a root mean square; and for q → 1, a geometric average.
In Eq. (4), both limits are theoretical and cannot be realized in practice. As a result, Δ^(H)_q is estimated by plotting H_q vs log(1/ε) and estimating the slope of a linear scaling region (for sufficiently small ε, more on this in Sec. III I).
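To make this concrete, the following is a minimal sketch in Julia (the language of DynamicalSystems.jl, although the optimized library routines of Appendix A are not used here) of how H_q can be computed by amplitude binning and how a crude Δ^(H)_q follows from the slope of H_q vs log(1/ε). The box sizes, the random test data, and the plain least squares fit over all sizes are illustrative assumptions; the results of this paper instead fit only within an automatically detected linear region (Sec. III I).

```julia
# Minimal sketch: generalized (Rényi) entropy H_q via amplitude binning,
# and a crude Δ^(H)_q as the least squares slope of H_q vs log(1/ε).
function renyi_entropy(X, ε, q)
    # assign each point to a box of linear size ε (amplitude binning)
    boxes = Dict{NTuple{length(X[1]),Int},Int}()
    for x in X
        key = Tuple(floor.(Int, x ./ ε))
        boxes[key] = get(boxes, key, 0) + 1
    end
    p = collect(values(boxes)) ./ length(X)   # box visitation probabilities
    q == 1 && return -sum(p .* log.(p))       # Shannon entropy (q → 1 limit)
    return log(sum(p .^ q)) / (1 - q)
end

function entropy_dimension(X, εs, q = 1)
    x = -log.(εs)                             # plot H_q vs log(1/ε)
    y = [renyi_entropy(X, ε, q) for ε in εs]
    A = [ones(length(x)) x]
    return (A \ y)[2]                         # fitted slope ≈ Δ^(H)_q
end

# usage: uniform noise in the unit square should give a value close to 2
X = [rand(2) for _ in 1:10_000]
println(entropy_dimension(X, 2.0 .^ (-6:-2), 1))
```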
If the fractal dimension depends on q, the set is called multi-fractal. This is the case when the natural measure underlying the attractor is strongly non-uniform. Large positive values of q then put more emphasis on regions of the attractor with a higher natural measure. In dynamical systems theory, almost all chaotic sets are multi-fractal.14 The dependence of Δ_q on q is connected with another concept, the so-called singularity spectrum, or multifractality spectrum; however, we will not be calculating it here and refer to other sources for more (see, e.g., Chap. 9 of Ref. 18 by Ott).
C. Correlation sum
Originally, the version explicitly having q = 2 was used to define the correlation dimension,10 and the process of defining a FD from the correlation sum is, in fact, the same as for the entropy, for any q. A linear scaling region is estimated from the curve of log C_q vs log ε. Then, the correlation-sum-based FD Δ^(C)_q is the slope of that linear region.
We may leverage two potential improvements here. First, to calculate C_q, we used a box-assisted method.40,41 We modify this method as discussed in Sec. 4 of Appendix A, because otherwise it fails for data with even a small amount of noise (see discussions in Sec. III F and Sec. 4 of Appendix A). Second, in addition to the standard least squares fit of log C_q vs log ε, we also used the correction by Sprott and Rowlands42 when possible (i.e., when the corrected fit is applicable over at least half the ε range). Reference 42 optimizes a fit with additional parameters that are optimized in parallel with Δ^(C). It is intended to give better fits for sets that have a slowly converging fractal dimension estimate, but as we will show later, it is best not to use it in practice.
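For completeness, a minimal Julia sketch of the q = 2 correlation sum and the resulting Δ^(C)_2 follows. It uses the naive O(N²) pairwise loop rather than the box-assisted algorithm of Appendix A, and it fits a single slope over all given radii instead of first detecting a linear scaling region; both are simplifications for illustration only.

```julia
# Minimal sketch of the q = 2 correlation sum: fraction of point pairs closer
# than ε (naive O(N²) version, not the box-assisted algorithm of Appendix A).
function correlationsum(X, ε)
    N = length(X)
    c = 0
    for i in 1:N-1, j in i+1:N
        c += sqrt(sum(abs2, X[i] .- X[j])) < ε
    end
    return 2c / (N * (N - 1))
end

# Δ^(C)_2 as the least squares slope of log C₂ vs log ε
function correlation_dimension(X, εs)
    logC = [log(correlationsum(X, ε)) for ε in εs]
    A = [ones(length(εs)) log.(εs)]
    return (A \ logC)[2]
end

# usage: points uniformly covering a circle have Δ ≈ 1
X = [[cos(t), sin(t)] for t in range(0, 2π; length = 2000)]
println(correlation_dimension(X, [0.05, 0.1, 0.2, 0.4]))
```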
In the rest of the manuscript, we will use Δ^(H) or Δ^(C) to refer to the methods of estimating FD via the (generalized) entropy or (generalized) correlation sum, respectively. We will explicitly use a subscript q when we make statements that apply only to this particular order q.
D. Extreme value theory
The third major way of defining and estimating a FD from a set is based on extreme value theory (EVT) applied to dynamical systems.49 The method estimates a local dimension for each point, as in Eq. (1), and then provides the FD Δ^(E) as the average over the points. Interestingly, the method utilizes exactly the same information as the correlation sum: all inter-point distances. However, Δ^(E) is not estimated directly via an exponential scaling relationship, in contrast to the generalized entropy and correlation sum methods.
To the best of our knowledge, the method has been developed using progress across several papers59–64 (see also Chaps. 4 and 9 of Ref. 49). Even though relatively recent, this method has been applied already to a plethora of real-world cases (see, e.g., Refs. 65–74) and many more (see Ref. 75 for a summary of recent applications). Despite this plethora of applications, we have noticed that some applications have ambiguities with the basic theory that connects EVT with FD, and we discuss these issues in Sec. 10 of Appendix A.
Let us now discuss how this theory connects to Eq. (1) and, hence, how the fitted parameter σ relates to the local dimension. Due to the invertibility of the logarithm, the exceedances correspond to the times when the orbit of the dynamical system comes closer than some small distance ε to the reference point x₀, with ε set by the exceedance threshold. This clarifies why we focus on extremes of the observable: we are interested in what happens within a small radius ε around a reference point x₀, in order to connect with the fundamental definition of the local dimension of Eq. (1).
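As an illustration, here is a minimal Julia sketch of the EVT estimator as just described: for each reference point the observable is minus the logarithm of the distance to all other points, its exceedances over the p-quantile are fitted to an exponential distribution (whose maximum-likelihood scale is simply the mean exceedance), and the local dimension is the inverse of that scale. The choice p = 0.98 and the use of a pure exponential (rather than a general Pareto) fit are illustrative assumptions; the role of p is examined in Sec. IV B.

```julia
using Statistics

# Minimal sketch of the EVT local dimension at point X[i]:
# g = -log(distance to the other points); the exceedances over the p-quantile
# of g are fitted to an exponential distribution with scale σ = mean(E);
# the local dimension is 1/σ.
function evt_local_dimension(X, i; p = 0.98)
    g = [-log(sqrt(sum(abs2, X[j] .- X[i]))) for j in eachindex(X) if j != i]
    gp = quantile(g, p)                  # threshold defining "extreme" events
    E = [x - gp for x in g if x > gp]    # exceedances over the threshold
    return 1 / mean(E)                   # Δ_loc = 1/σ for the exponential fit
end

# Δ^(E) is the average of the local dimensions over the points of the set
evt_dimension(X; p = 0.98) = mean(evt_local_dimension(X, i; p = p) for i in eachindex(X))

# usage: uniform noise in the unit square should give Δ^(E) ≈ 2
X = [rand(2) for _ in 1:5000]
println(evt_dimension(X))
```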
E. Lyapunov (Kaplan–Yorke) dimension
| System | D | Δ^(L) |
| --- | --- | --- |
| Lorenz96 | 4 | 2.99 |
| Lorenz96 | 6 | 4.93 |
| Lorenz96 | 8 | 6.91 |
| Lorenz96 | 10 | 8.59 |
| Lorenz96 | 12 | 10.35 |
| Lorenz96 | 14 | 12.10 |
| Lorenz96 | 32 | 27.68 |
| Rössler (chaotic) | 3 | 1.9 |
| Hénon map | 2 | 1.26 |
| Kaplan–Yorke map | 2 | 1.43 |
| Towel map | 3 | 2.24 |
| Coupled logistic maps | 8 | 8 |
| Kuramoto–Sivashinsky | 101 | 31.76 |
Δ^(L) has a huge advantage when compared to the previous definitions of fractal dimensions: it can be computed with very high precision, even for high-dimensional systems (where the other methods typically suffer in accuracy, as we will show below). However, it also has a huge disadvantage: practically, it can be computed only if the dynamic rule (equations of motion) is known. Only by using the dynamical rule and its linearization can one estimate the entire Lyapunov spectrum with satisfactory precision, for example, by means of the well-known algorithm due to Shimada and Nagashima78 and Benettin et al.79 From a finite, and often noisy, real-world dataset, calculating the entire spectrum of exponents is a very challenging task that, for higher-dimensional attractors, requires very large data sets.80 Therefore, one is in most cases better off calculating a fractal dimension directly from data, or instead fitting an explicit model to the data, e.g., Ref. 81.
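A minimal Julia sketch of the Kaplan–Yorke interpolation follows, assuming the standard formula: with the Lyapunov exponents sorted in decreasing order and k the largest index for which their partial sum is still non-negative, Δ^(L) = k + (λ₁ + ⋯ + λ_k)/|λ_{k+1}|. Computing the exponents themselves (the hard part discussed above) is not shown; the Lorenz-63 spectrum used in the example is an approximate literature value.

```julia
# Kaplan–Yorke (Lyapunov) dimension from a given Lyapunov spectrum.
function kaplanyorke_dimension(λs)
    λ = sort(λs; rev = true)             # exponents in decreasing order
    s = cumsum(λ)                        # partial sums λ₁ + ... + λ_k
    k = findlast(≥(0), s)
    k === nothing && return 0.0          # even λ₁ is negative
    k == length(λ) && return float(k)    # sum never becomes negative
    return k + s[k] / abs(λ[k + 1])      # linear interpolation to sum = 0
end

# usage: approximate spectrum of the classic chaotic Lorenz-63 attractor
println(kaplanyorke_dimension([0.906, 0.0, -14.57]))   # ≈ 2.06
```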
III. CORRELATION SUM VS ENTROPY
In this section, we perform a quantitatively rigorous and exhaustive comparison of the methods based on entropy (Δ^(H)) and correlation sum (Δ^(C)). Even though fundamentally different, both rely on estimating the scaling of some quantity vs some size ε. For a given q, they both (in theory) approximate the same quantity, the exponential scaling of the q-average of the “amount” of measure vs the scale, as we illustrated in Sec. II. To compute Δ^(H), we use the method of Sec. 1 of Appendix A; for Δ^(C), we use the method of Sec. 4 of Appendix A in most cases, and the straightforward implementation of Sec. 3 of Appendix A for very high-dimensional data. For Δ^(C), we also tested the logarithmic correction of Ref. 42. The motivation for choosing these methods, and why an exhaustive comparison of other methods is not presented, is explained in Appendix A. Before any numerical analysis, we normalize input data so that each of its columns is transformed to have 0 mean and standard deviation of 1. This linear transformation leaves dynamic invariants (like the FD) unaffected; however, it makes all numerical methods more accurate and faster to converge.
In Secs. III A–III I, all results will be presented with the same plot type, as, e.g., shown in Fig. 2. The legend shows the different datasets used in the plot and a description of the plot’s purpose. The top panel is the entropy estimate while the bottom is the correlation sum. To estimate Δ, for each curve, we identify automatically, and objectively, a linear scaling region as discussed in Sec. III I. This region is denoted by markers of the same color on each curve. The secondary legends inside the panels provide the 5%–95% confidence intervals for the estimated slopes of these segments. Unless otherwise stated, all datasets used have the same fixed length N, which is for many experiments a typical upper bound for the amount of data one has access to. Continuous time systems are sampled with approximately 10 points per characteristic oscillation period.
A. Benchmark sets with known dimensions
The best place to start is a simple sanity and accuracy test of the two main estimators for sets whose fractal dimension can be computed analytically in a straightforward manner. This is shown in Fig. 2. We use a periodic orbit from the Rössler system (Δ = 1), a quasiperiodic orbit of order 2 from the Hénon–Heiles system (Δ = 2), the Koch snowflake (Δ = log 4/log 3 ≈ 1.26), the Kaplan–Yorke map (Δ ≈ 1.43), a uniform filling of the 3D sphere (Δ = 3), and a chaotic trajectory of the Standard Map (SM) for very high nonlinearity parameter, which covers the state space uniformly and, thus, has Δ = 2.
Some results that will be repeatedly seen throughout this section become clear. For noiseless data, the correlation sum method is clearly better for three reasons. First, its linear scaling region covers a wider range of scales, while H_q saturates much more quickly to flatness for small ε. Notice that log C_q cannot saturate for small ε, but instead diverges to −∞; however, we can never reach this point due to the process that chooses the appropriate overall range of ε (Sec. III I). Both curves would in principle saturate to flatness for very large ε, specifically for ε exceeding the total size of X, but we again do not reach this threshold based on the choice of the range of ε.
Second, within the linear scaling region, the curves of C_q fluctuate less than those of H_q, resulting in a narrower confidence interval. Comparing confidence intervals is only meaningful if the same method is used to extract them. That is why specifically for Fig. 2, we used the standard least squares fit for Δ^(C) instead of the aforementioned correction of Ref. 42. Third, the actual numbers we obtain for Δ from the correlation sum method are closer to the analytically expected values than those of the entropy-based method. In particular, for sets that should have an integer fractal dimension, the correlation sum method is much closer to the actual result.
B. Different dynamical systems
In the following, we cross-compare Δ^(H) and Δ^(C) with the value obtained from the Lyapunov (Kaplan–Yorke) dimension, which for all systems of interest used in this paper is found in Table II. We note that Δ^(L) is conjectured to equate the q = 1 version of the dimensions; however, here we use q = 2. That is because the correlation sum algorithm does not apply to q = 1, unless one uses a fixed mass approach, but we explain in Sec. 5 of Appendix A why we do not. We assume that the differences between the two estimators, when compared to Δ^(L), should not depend much on changing q from 1 to 2; hence, the following results remain valid.
In Fig. 3, we compare three discrete and three continuous dynamical systems of different input dimensionality (systems are defined in detail in Appendix C). We confirm the result of Sec. II, i.e., the correlation sum method is more accurate than the entropy one, because it is much closer to the values expected from the Lyapunov dimension (Kaplan–Yorke conjecture). Furthermore, the entropy method seems to underestimate the fractal dimension more strongly as the state space dimensionality increases. This is expected given the fact that the entropy method works via a histogram approximation of the natural density, and it is well known that the higher dimensional the data, the less accurate producing a histogram for them becomes (i.e., space is covered more sparsely by points as the dimension increases).
This figure also allows us to confirm that the logarithmic correction to the correlation sum by Sprott and Rowlands42 can be impactful, especially for high-dimensional sets where the convergence is the slowest. For example, for the eight-dimensional Lorenz96 model and for the eight coupled logistic maps, the standard linear regression fit would give confidence intervals whose values are closer to the entropy-based estimates but further away from the Δ^(L) estimates of Table II.
C. Data length
The results for the 8D Lorenz96 model shown in Fig. 4 are essentially consistent with the estimates of Eckmann and Ruelle. For the smallest data length, the confidence interval of the slope is slightly larger than what the Eckmann–Ruelle limit would allow for that length. With increasing N, the estimated slopes converge toward the value of the Kaplan–Yorke dimension of the 8D Lorenz96 system, a value that is already included in the interval obtained with the largest N used, a number of points large enough to estimate correlation dimensions of this magnitude according to the Eckmann–Ruelle limit.
The bottom panel of Fig. 4 shows log–log plots of the correlation sum for real-world experimental data (see Sec. III G for a description). The dimension estimates for different lengths are all about 3 and only the corresponding confidence intervals shrink for increasing N. This is also in agreement with the Eckmann–Ruelle bound, because dimensions of about 3 can, in principle, be resolved already with data lengths much smaller than those used here.
On the other hand, Tsonis et al.,26 using the results of Nerenberg and Essex,83 argue that the minimum number of points required to estimate a dimension with 95% confidence scales exponentially with the dimension. In our example, assuming the true value of Table II, it would require a number of points that is too high compared to what we can estimate from Fig. 4. However, the estimate presented by Tsonis et al. is surrounded by its fair share of ambiguity, because it involves deciding a priori a scaling region extent, and it does not say whether the dimension in the expression should be the embedding dimension or the actual fractal dimension (that is unknown). We will discuss this topic again in Sec. III H.
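For reference, the Eckmann–Ruelle limit invoked above is commonly quoted in the following form (our paraphrase; the precise prefactors depend on the assumed extent ρ of the scaling region and may differ from the Eq. (16) referred to in the text):

```latex
% Commonly quoted form of the Eckmann--Ruelle bound on the largest
% correlation dimension that N points can support, assuming the scaling
% region extends down to a fraction \rho of the attractor size:
\Delta^{(C)} \lesssim \frac{2 \log_{10} N}{\log_{10}(1/\rho)},
% which for the conventional choice \rho = 0.1 reduces to
\Delta^{(C)} \lesssim 2 \log_{10} N .
```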
If only small data sets are available, finite sample corrections derived by Grassberger84 could be used to improve estimates of Δ^(H) or Δ^(C).
D. q-order dimensions (multi-fractality)
Here, we examine how well the estimators capture multi-fractal properties, i.e., the dependence of Δ_q on q, or the absence of multi-fractality, i.e., results that should be invariant to q. This dependence on q was discussed in the review by Theiler,16 but we think an even better reference is the book by Rosenberg.85 A known theoretical result is that Δ_q is a non-increasing function of q (see, e.g., Ref. 11).
Here, we will use two examples. The first is the Koch snowflake, which has an (approximately) uniform density and, thus, its FD should have no dependence on q whatsoever. The second is the Hénon map, which has a strongly non-uniform natural measure, giving the expectation of a clear decrease of Δ_q with increasing q. The results are shown in Fig. 5.
Both the entropy and correlation sum approaches perfectly capture the absence of multi-fractality, giving identical curves for all q for the Koch snowflake. For the Hénon map estimates, both methods satisfy the criterion of a decreasing Δ_q with q. But there is a problem. For the correlation sum method and for larger q, the curve of log C_q vs log ε is no longer a straight line but is composed of two linear regions with significantly different slope, making the results ambiguous. It is also not obvious why the slopes change at the given ε value, or why the slopes below that value indicate a significantly lower dimension; the left slopes are clearly smaller than the right slopes shown in the figure legend. We observed this behavior of having two slopes in practically all example sets with a non-uniform measure.
We could not find anything in the literature about this observation, and, in fact, we could not find a single figure in the literature plotting log C_q vs log ε for q > 2, even though the correlation sum for general q is provided in several publications,11,22,39 which report a value for Δ_q (but it is unclear whether they have encountered the same problem as we have or not). We have extensively tested our code and we are confident that the implementation of Eq. (6) is correct.
For multi-fractal analysis, we are (typically) interested in quantifying the most fine properties of the fractal. Perhaps then one should determine the slope of the linear region at the smallest ε values, instead of the slope of the linear region covering the largest range of ε (but for very small ε, the statistics become worse for finite data sets). However, the slopes at the smallest ε are clearly incorrect; the correct slopes are the ones at the largest ε values (those also highlighted in Fig. 5). In any case, using the slope of the largest ε values gives the correct results, but this strong dependence of the slope on ε when q > 2 is worrisome and indicates that more clarity regarding C_q must be established in the literature.
E. Dimension (delay embedding)
This section examines the impact of varying the state space dimensionality of input data, which is a common case when delay-embedding timeseries22,86,87 (because there one increases the embedding dimension and searches for convergence of Δ). In principle, provided the embedding dimension is large enough for the condition of the embedding theorems to be met, the reconstructed set has the same fractal dimension as the original set the timeseries was recorded from.88 In Fig. 6, we check how the methods fare with this statement, and whether their accuracy decreases with increasing input dimensionality, i.e., embedding dimension.
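For concreteness, a minimal Julia sketch of a uniform delay embedding of a scalar timeseries follows. This is the textbook construction with a fixed embedding dimension and delay; it is not the automated unified approach of Kraemer et al.34 that is used for the experimental data later on, and the signal and parameter values are placeholders.

```julia
# Textbook uniform delay embedding of a scalar timeseries s with embedding
# dimension d and delay τ (in sampling steps): the i-th reconstructed point is
# (s[i], s[i+τ], ..., s[i+(d-1)τ]).
function delay_embed(s::AbstractVector, d::Int, τ::Int)
    L = length(s) - (d - 1) * τ
    return [[s[i + k*τ] for k in 0:d-1] for i in 1:L]
end

# usage with placeholder values for the signal and the embedding parameters
s = sin.(0.1 .* (1:10_000)) .+ 0.05 .* randn(10_000)
X = delay_embed(s, 5, 17)   # a vector of 5-dimensional reconstructed points
```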
In Fig. 6, we used a chaotic timeseries from the Hénon–Heiles system. This has Δ ≈ 3, and as such, we expect convergence of the fractal dimension estimates to a value around 3 once the embedding dimension is sufficiently large. We see in Fig. 6 that this is indeed the case. Δ^(C) does not seem to drop in performance with increasing embedding dimension, besides a very small decrease in the overall range of orders of magnitude the linear scaling region covers. The same results were obtained using a timeseries from the chaotic Rössler system (Δ ≈ 2) or a chaotic Lorenz96 system with D = 4, which also has Δ ≈ 3. Δ^(H) seems to perform poorer with increasing embedding dimension by either significantly reducing the linear scaling region or being less accurate in the dimension estimates. However, how much poorer it performs depends on the dataset: for the Hénon–Heiles, it is still decent, while for Lorenz96, it performs much worse (not shown). Hence, we conclude that Δ^(C) performs much better as the state space dimensionality of data increases (while keeping data length and other aspects constant) vs Δ^(H), which is somewhat already evident from Fig. 3.
F. Noise
Real-world data are always accompanied by noise, and, therefore, the impact of noise on the calculation is highly important for the choice of the method. As is well known, the presence of noise in the data makes estimating a fractal dimension harder, as the fractal dimension of the noise is equal to that of the state space and, hence, almost always larger than the fractal dimension of the clean data. Figure 7 shows results for various kinds, and amounts, of noise added to the (normalized) chaotic Rössler attractor. On purpose for this plot, we have used a different ε-range setting (see Sec. III I) and the standard linear regression method for Δ^(C), because the logarithmic correction of Ref. 42 overestimates Δ for noisy data (the input dataset is three-dimensional and, thus, cannot have Δ > 3).
We start with the case of additive noise. There, it is known that there is some distance called the “noise level,” below which the slope of log C vs log ε changes from being the fractal dimension of the chaotic set to that of the noise.89 This fact can be used to actually estimate the noise level of the data.22 In Fig. 7, we see how quickly this really happens for Δ^(C). Even for 5% additive noise, the curve is already dominated by the noise slope (which is Δ = 3 for three-dimensional additive noise), with only a small segment of the curve at large ε values where the slope becomes the deterministic value. The entropy curve does not have this property and keeps a single slope throughout (except the saturation part of course), while the slope value is an average between the purely deterministic one and that of the noise. On one hand, this may be considered a downside, because it does not allow estimation of the noise level. On the other hand, we should note that the majority of the correlation sum curve already reflects the noise slope. Thus, if we estimated the average slope of that curve, it would be much larger than that of the entropy, i.e., it puts much more weight on the noise dimension than the deterministic data. These results regarding additive noise are typical and do not seem to strongly depend on the system considered. Note that in the case of delay reconstruction, the slope of the noise should increase with increasing embedding dimension. The slope corresponding to the FD of the deterministic set should remain constant for embedding dimensions larger than a minimum value required for successful reconstruction of the state space of the dynamical system generating the data.
For dynamic noise, we turned the ordinary differential equation (ODE) of the Rössler system into a stochastic differential equation by adding a Wiener process term in the second equation. For a small amount of dynamic noise (which here reflects a proportionality of the noise term with the expected size of the variable), the fractal dimension increases slightly as expected, but it does not have any noticeable change in its numerical value up to 10% noise. When one turns up the dynamic noise more, the dynamics collapse and there is no “chaotic” attractor anymore (not shown). Both entropy and correlation sum methods perform equally well vs this kind of noise, and there is no noise radius or change of slope discernible in the correlation sum case. We also looked at low-resolution data, by rounding the Rössler timeseries to 2 digits after the decimal. This obviously decreases the valid ε ranges one can do the computation for (see Sec. III I). For Δ^(C), this also significantly changes the result to a value smaller than “correct,” while for Δ^(H), it has no impact. This means that Δ^(H) performs better for rounded data, which makes sense given the way it is computed (as long as points are in the same box, it does not matter how close they are to each other).
G. Real-world data
In Fig. 8, we show fractal dimension estimates for real-world experimental data. We focused specifically on experiments that are relatively clean (large signal-to-noise ratio) and where the underlying dynamics is well known to display low dimensional deterministic chaos. This is important, because for this review, we do not want to mix the scientific question of whether an observed system accommodates a low-dimensional deterministic representation with the technical/computational question of whether an estimator would actually detect that.
In Sec. III H, we further discuss what happens with real-world data where neither of these two conditions applies. We limited the length of the densely sampled experimental data, with sampling of about ten samples per characteristic timescale. Because of the observations of Sec. III H, we have used a different ε-range setting and the standard linear regression method for Δ^(C) instead of the logarithmic correction of Ref. 42.
The datasets are as follows: two electrochemical oscillator datasets (the second being more chaotic than the first);34 timeseries from a circuit replicating the dynamics of the Shinriki oscillator;90 the mean field of a network of 28 circuits following Rössler dynamics from Ref. 91; data from a mechanical double pendulum from Ref. 92; and ECG recordings during a pacing experiment of a healthy individual from Ref. 32. All experimental timeseries were delay embedded using the recent automated method due to Kraemer et al.34 (see Appendix D for the embedding parameters). The method yielded an embedding dimension of seven or less for all experimental timeseries, giving even more confidence that the data may display low-dimensional deterministic chaos.
For dataset “Rössler Net,” the correlation sum curve continuously changes its slope instead of having one, or at most two, constant-slope segments. This makes deducing a single fractal dimension ambiguous. Slight curving can be observed also in “electroch. 1,” “electroch. 2,” and “ECG IBI,” but it is a weak enough effect that two scaling regions can nevertheless be extracted (for “ECG IBI,” Fig. 8 reports the slope of the noise). On the other hand, the entropy-based approach does not suffer from this problem and, besides the expected saturation for small ε, seems to be described quite accurately by a single slope and, thus, a specific FD value. The results shown in Fig. 8 are from five- to six-dimensional embeddings, yet both methods yield FDs that are less than the embedding dimensions (excluding the “Rössler Net” case, for which a Δ^(C) cannot be estimated). Hence, we can assume that the FD of the underlying dynamics has to be somewhere between the low bounds of Δ^(H) and Δ^(C).
The continuous “curving” of the Δ^(C) curves is something we have not seen before with synthetic data. From the discussion of Sec. III F, the problem may be because realistic noise may be neither white nor stationary, or because of a too high noise level in the data. Indeed, we saw that for 5%–10% relative noise, the slopes of the correlation sum already reflect the noise FD. In Fig. 8, Δ^(C) yields consistently higher FD than Δ^(H), even though we know that an underlying low-dimensional representation exists. To extract this lower FD value from Δ^(C), one, therefore, must focus on the larger scales and try to find this consistent smaller slope by increasing the embedding dimension (as is standard practice22). We show such an analysis in Appendix F. Nevertheless, it appears true that Δ^(C) is more strongly affected by noise when compared to Δ^(H).
H. Extreme cases
In this subsection, we examine the result of applying the aforementioned methods to ill-conditioned data, which may be non-deterministic or non-stationary, or to extremely high-dimensional data, where there exists this notion in the literature that the methods used so far are unlikely to succeed. For the first dataset, we used data from the Lorenz96 model, while having the forcing parameter increase linearly during the time evolution from 1.0 (periodic motion) to 24.0 (chaotic motion). The second dataset is the concatenation of a periodic trajectory from the Rössler system with noise uniformly distributed on the 3D sphere. The third dataset is a paleoclimate temperature timeseries from the Vostok Ice core, embedded in eight-dimensional space, which is unlikely to be stationary or to accommodate a low-dimensional representation.25 The fourth dataset is a stock market timeseries for the “nifty50” index embedded in six-dimensional space, which is definitely non-stationary and rather unlikely to be deterministic. The last two datasets are extremely high-dimensional data of the 32-dimensional Lorenz96 and the Kuramoto–Sivashinsky spatiotemporal system (the latter having 101 dimensions after discretization). The results of the dimension estimation are shown in Fig. 9.
Generally speaking, in Fig. 9, the results of the first four datasets show that something is “wrong.” There is a large mismatch between the estimates of the entropy and correlation sum methods and the curves do not seem to be composed of a single slope. Oddly, for the non-stationary Lorenz96 data, the correlation sum has a clear straight slope with fractal dimension somewhere between the extreme values. The Vostok data, in particular, are plagued both by a continuous change in slope, especially in the correlation sum, but also the resulting FD values do not converge when increasing the embedding dimension (not shown). Additionally, the FD values obtained from Δ^(H) or Δ^(C) are very different. Notice that in all cases, our automated algorithm finds a value for Δ nevertheless. This only serves to highlight how careful one should be, and to always plot the curves of H_q and log C_q.
Although in the literature there exist several tests for non-stationarity using, e.g., permutation entropy or other methods, we will now describe a simple, fractal-dimension-based scheme. One can divide the data into n equal parts in two ways: making segments of N/n successive points, or choosing every nth point, each time starting from points 1 to n. For these subsets, the same fractal dimension estimation is done. If there is non-stationarity, the first kind of selection will show significantly different estimates across its sub-datasets, while the second will show approximately the same estimates.
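A minimal Julia sketch of this check follows; `fractaldim` stands for whichever estimator one prefers (e.g., the correlation sum sketch shown earlier) and is a placeholder name, not a function of DynamicalSystems.jl.

```julia
# Split the data X (a vector of points) into n parts in two ways and estimate
# the FD of each part: consecutive blocks are sensitive to non-stationarity,
# whereas decimated subsets (every nth point) are not.
function stationarity_check(X, n, fractaldim)
    N = length(X)
    L = N ÷ n
    blocks    = [X[(i-1)*L+1 : i*L] for i in 1:n]   # successive segments
    decimated = [X[i:n:N]           for i in 1:n]   # every nth point
    return map(fractaldim, blocks), map(fractaldim, decimated)
end
# Non-stationarity is indicated when the block estimates differ significantly
# from each other while the decimated estimates remain approximately equal.
```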
Let us now consider the last two datasets of Fig. 9 (32-dimensional Lorenz96 and Kuramoto–Sivashinsky). Surprisingly, the correlation sum shows a rather clear linear scaling region that has a very high slope, but not as high as the expected dimension values (28 and 32, respectively). This is consistent with the upper bound equation (16) of Eckmann and Ruelle for the given data length, because even in cases where linear scaling occurs already for relatively large ε values, it cannot be expected to obtain slopes of size 28 or 32 with this many data points only. Furthermore, Fig. 9 shows that for these data the range of scales covered by the linear region is very small: only about one order of magnitude. Increasing the amount of data also increases the linear scaling region and the resulting confidence intervals shrink as we get more data points in the linear region (not shown here).
As discussed already in Sec. III C, these examples show again that with high-dimensional data, one needs much longer timeseries to properly cover the (typically high-dimensional) chaotic set, and this affects any kind of estimate of dynamical properties; it is not a problem specific to the correlation dimension. Additionally, one needs a very high signal-to-noise ratio, because due to the coverage of a very small range of scaling factors ε, even a small amount of noise may ruin the estimation. However, if one does have such a clean high-dimensional dataset, Δ^(C) may still provide a useful estimator, at least for a lower bound of the correlation dimension of that data set, where the value obtained should always be compared with the Eckmann–Ruelle bound for the available amount of data and with dimension estimates of surrogate data (see Sec. V) to avoid wrong conclusions.
I. Estimation of slopes and sizes
To estimate the value of Δ^(H) or Δ^(C), we need to find the slope of H_q vs log(1/ε) or of log C_q vs log ε. This matter is typically resolved in a context-specific manner, where each plot is carefully examined and the “linear region” is decided by the practitioner by eye. This approach cannot work in an objective comparison. Here, we formulate an entirely objective and sensible (but not flawless) automated process that is separated into two parts: the choice of which sizes ε to calculate H_q or C_q for, and how to estimate a linear scaling region from the respective curves. Once the linear scaling region is identified, actually obtaining the fractal dimension is a simple least squares fit. Note that small deviations from a straight line for densely sampled values of ε may occur due to lacunarity of fractal sets.126–130 While we can also observe this for very densely sampled ε, this effect is so minuscule that we consider it irrelevant for almost all data sets in practice. These oscillations are not suitable for the quantification of lacunarity in fractal sets, but other measures exist for this purpose.131,132
The range of ε is always decided with a generating formula in which the exponents are linearly spaced, i.e., the values of ε are exponentially ranged in a fixed base b, between a lower limit tied to the smallest inter-point distance existing in the set and an upper limit tied to the average of the lengths of the set along each of its variables, offset by two constants. Unless stated otherwise, we have used the same constants throughout Sec. III. In essence, we are limiting ε to be one order of magnitude (in base b) larger than the smallest inter-point distance and one order of magnitude smaller than the average set length. If the resulting range does not cover at least two orders of magnitude (a common case in high-dimensional data), we relax these offsets instead. This choice of the lower ε limit brings very good performance in the box-assisted algorithm for the correlation sum (Sec. 4 of Appendix A), but it is not so small as to make the computation meaningless for realistic and/or noisy data. Notice that the automated fractal dimension estimates can be sensitive to these choices. In practice, we would recommend producing several estimates by varying these parameters and obtaining the median of Δ.
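The following Julia sketch generates such an exponentially spaced range of sizes ε between the smallest inter-point distance and the average extent of the (standardized) set, offset by one order of magnitude on each side. The base, the number of sizes, and the naive O(N²) minimum-distance search are illustrative choices and not the exact constants or algorithms used for the results of this paper.

```julia
# Exponentially spaced box sizes in base b, limited to be one order of
# magnitude above the smallest inter-point distance and one below the average
# extent of the set along its variables.
function boxsizes(X; b = 2.0, n = 16)
    dmin = minimum(sqrt(sum(abs2, X[i] .- X[j]))
                   for i in eachindex(X) for j in 1:i-1)      # naive O(N²)
    dmax = sum(maximum(x[k] for x in X) - minimum(x[k] for x in X)
               for k in eachindex(X[1])) / length(X[1])       # mean extent
    lo, hi = log(b, dmin) + 1, log(b, dmax) - 1               # ±1 decade in base b
    return b .^ range(lo, hi; length = n)
end
```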
To estimate the linear region, we proceed as follows. We scan the local slopes of the successive segments of the curve of H_q or log C_q vs log ε, starting from the leftmost one. If the local slope of a segment is approximately equal to that of the first segment of the current linear region, i.e., their relative difference is at most a tolerance tol, then these segments belong to the same linear region. We move to the next segment and compare it in the same way with the first segment belonging to the same linear region. When we find a mismatch, we start a new linear region. This way, we have segmented the curve vs log ε into approximately linear regions. We then choose the linear region which spans the largest amount of the log ε axis and label it “the” linear region. We finally perform a least squares fit there and report the 5%–95% confidence interval of the fitted slope. In Fig. 10, we visually demonstrate the process. We also compare it with another standard way fractal-dimension-related plots are presented: the successive local slopes of each point of the curves, and the same slopes but fitted in a five-point-long data window. Our linear regions approach is equivalent to finding the largest plateau in the local slopes plots.
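A Julia sketch of this segmentation follows. It groups successive segments whose local slopes agree with the first slope of the current region within a relative tolerance, picks the region containing the most segments (equivalent to the largest span of the log ε axis when the sizes are evenly spaced in log), and fits a least squares slope on it. The tolerance value is an illustrative assumption and no confidence interval is computed here.

```julia
# Segment the curve y vs x into approximately linear regions and return the
# index range of the largest one together with its least squares slope.
function linear_region(x, y; tol = 0.25)
    slopes = diff(y) ./ diff(x)                 # local slope of each segment
    regions = UnitRange{Int}[]
    start = 1
    for i in 2:length(slopes)
        # new region when the slope deviates from the region's first slope
        if abs(slopes[i] - slopes[start]) > tol * abs(slopes[start])
            push!(regions, start:i-1)
            start = i
        end
    end
    push!(regions, start:length(slopes))
    r = argmax(length, regions)                 # region with the most segments
    idx = first(r):last(r)+1                    # segment indices → point indices
    A = [ones(length(idx)) x[idx]]
    return idx, (A \ y[idx])[2]                 # indices and fitted slope
end
```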
Notice that there is a clear pitfall here. This algorithm will deduce a linear region no matter what. In many scenarios, this region might be meaningless, being too small to be of actual value, or could even be the scaling of the noise in noisy data, as shown in Sec. III F. So after all, careful consideration of the result is always necessary.
In addition to the algorithm presented here, also worth mentioning is recent work by Deshmukh et al.93 that offers an alternative way to estimate a slope. All possible slopes that could be estimated from the curve (by choosing all possible segments of length more than a specified minimum) are computed. These are weighted by their length and by their inverse error and compose a distribution. The mean of the distribution is presented as the slope, while the quantiles of the distribution can be used as confidence intervals. For the work presented here, we believe our approach is more fitting, because how large a scaling region is, is also part of the accuracy of a FD estimator. Furthermore, we still wanted to display problems in the presence of two scaling regions (as in, e.g., Sec. III F), while the approach of Deshmukh et al. transforms the “looking at the curve for two scaling regions” problem into “looking at the distribution of all possible slopes for two peaks,” which, while easier to resolve, still would add more information content to our already extensive article. Nevertheless, the approach of Deshmukh et al. is most likely better suited for use in practice, when estimating a fractal dimension from experimental data, as the authors have extended their method to also provide convergence criteria of fractal dimension estimates in Ref. 94.
IV. EXTREME VALUE THEORY ANALYSIS
In this section, we thoroughly analyze the power and shortcomings of the extreme value theory based estimator Δ^(E) introduced in Sec. II D, under a similar lens as the comparison of Sec. III. Unless stated otherwise, we use a fixed data length N, standardize all input sets before any computations, and use a fixed quantile probability p as in Sec. II D (due to the discussion in Sec. IV B). Since Δ^(E) is obtained via an arithmetic mean, it can be compared to Δ^(H) and Δ^(C), which is what is used by most plots in Sec. III.
A. Exemplary sets
We start with Fig. 11, which shows Δ^(E) computed for exemplary sets. The figure should be compared with Figs. 2 and 3. The figure style is typical for the rest of this section and shows the dimension estimates as distributions, with dashed white lines indicating the mean, and dotted white lines indicating the “expected” value, for which we use Δ^(L) if possible, otherwise the analytically expected value. Small horizontal red lines cap the strict limit that the dimension estimates should not exceed, i.e., the state space dimensionality. The inner legend in the plot displays information about the distribution: the mean, and in parentheses the percentage of values that exceed the dimensionality limit cap (the red line).
All in all, the Δ^(E) estimates seem to match well those obtained by the correlation sum, with a few notable exceptions: the method performs “poorly” for a quasiperiodic (Δ = 2) trajectory of the Hénon–Heiles system, for the chaotic attractor of the Hénon map, and for the Lorenz96 chaotic attractor. Here, “poorly” means significant inaccuracy in the first decimal digit. It is not clear exactly why the EVT method is not particularly accurate for these systems; however, we can speculate the respective reasons. For quasiperiodic trajectories, the sampling time may be near-commensurate with one of the two periods, making some points very rarely visited in the finite set, even if they would be uniformly visited in the limit N → ∞. As the method assigns a higher dimension value to a point according to its visitation frequency, these rarely visited points on the quasiperiodic orbit get higher dimension values than they should (within the EVT framework, the lower the visitation frequency of a state space point, the higher its local FD). For the Hénon map, the only thing to note is that the attractor natural density is extremely singular and the assumption of Eq. (13) likely does not hold. For the Lorenz96 and the coupled logistic maps, the thing to note is that the attractors have a high “expected” (here Lyapunov) FD, which means that for a given amount of data and high underlying FD, the EVT underestimates the FD more than Δ^(C).
B. Quantile probability
Unlike the entropy and correlation sum methods for computing FD presented so far, the EVT one is parametric:95 it requires the choice of an “extreme” probability value p for which to extract the quantile of the observable when calculating the exceedances in Eq. (8). Therefore, before performing any further evaluation of the method, we must examine how it depends on its parameter p. We have found no formal mathematical definition of an “extreme” in the literature of this EVT methodology, or how to practically compute an “optimal” value for p, or whether an optimal value exists at all. Reference 65 provides some methodology for checking whether the chosen p is inappropriate, which we evaluate in Sec. IV C.
In this subsection, we examine the impact of the choice of p. This choice is somewhat linked with the data length N, as the local dimension estimation for each state space point is done based on the (1 − p)N exceedances. In Fig. 12 we, therefore, vary p with fixed N but also co-vary p and N so that (1 − p)N stays fixed.
The results show that increasing p increases Δ^(E). It also appears that not only the mean of the distribution of local dimensions depends on p but the shape of the distribution as well. On one hand, it is somewhat reassuring that once a fixed number of exceedances (1 − p)N is chosen, the results do not vary as wildly as when N is fixed, provided that (1 − p)N is large enough. However, we also noticed that under fixed (1 − p)N, the estimated dimension seems to monotonically decrease further and further away from the expected value when N is decreased.
On the other hand, in most real-world applications, it is N that is fixed, and, hence, the results of the top panel of Fig. 12 are what is of most interest. In addition, even if it was possible to co-vary p and N in a realistic application, we still cannot provide instructions for what the best choice should be for p: while Fig. 12 reveals the dependence on p, it does not lead to any obvious conclusions on what p should be. A saving grace here is that while there is a clear dependence on p, the mean value does not change significantly (i.e., differences span less than one integer, which is anyway the accuracy we are interested in in practice), provided that p remains in a range so that (1 − p)N is sufficiently large.
C. Quantifying significance
The entropy or correlation sum approaches provide a relatively straightforward way to check for the significance of results: there should be a single slope covering several orders of magnitude of ε. The larger the range of magnitudes, the more significant the results. Such a simple visual significance check does not exist for the EVT approach. In this section, we examine possible ways to test for the significance of the EVT results, based on what has been suggested in the literature or with alternative means we devised while composing this review.
In practical applications as in Ref. 65, the authors identify a range of p values that are appropriate using a statistical hypothesis test of whether the exceedances follow an exponential distribution (EXPD). Examples of such statistical tests are the Anderson–Darling96 or the Kolmogorov–Smirnov.97 Here, we used the Kolmogorov–Smirnov exact one sample test. The test proceeds as follows: for a given p, each set of exceedances is first fitted to an EXPD. Then, the fitted EXPD is used in a statistical test for the null hypothesis: “the data come from the given EXPD.” The test yields a p-value. Typically, if the p-value is very small, e.g., p < 0.05, the null hypothesis can be rejected, which may mean unsuitable data altogether, or not enough data (e.g., the quantile probability was chosen too high). In practice, one hopes that the majority of the p-values (one for each reference point) is significantly larger than some low threshold of, e.g., 0.05.
Formally speaking, however, a p-value greater than some threshold does not mean we can accept the null hypothesis; it only means we fail to reject it. Any other distribution may have generated the data equally well. Hence, the convincing power of this line of argumentation (checking for large p-values) is weak from a statistical inference point of view. An alternative test mentioned in the literature is to check how stable the parameters of the fitted EXPDs are when varying p; we did not find this argument convincing (stability of parameters does not imply a significant fit), so we did not pursue it.
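As an illustration of this test, here is a minimal sketch assuming the Distributions.jl and HypothesisTests.jl packages; the exceedances `E` would come from a computation like the sketch of Sec. IV B (placeholder data are used here), and in the full analysis one would collect one p-value per state space point.

```julia
using Distributions, HypothesisTests

# Exceedances of one reference point; placeholder data drawn from an EXPD.
E = rand(Exponential(0.5), 2000)

d = fit(Exponential, E)            # maximum likelihood fit of the EXPD
test = ExactOneSampleKSTest(E, d)  # H0: "the exceedances come from the fitted EXPD"
pvalue(test)                       # a very small value would reject H0
```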
In Fig. 13, we show distributions of p-values and NRMSEs for a chaotic trajectory of the towel map. We provide more such plots in Appendix E, which establish that our observations do not depend on the input set.
The results are very surprising. It appears that the p-value based test is unhelpful and/or misleading. For example, it indicates that one extreme of the quantile probability range is a bad choice, because the overwhelming majority of p-values are below 0.05, pointing to a rejection of the hypothesis “the data come from an EXPD.” Yet, if we look at the actual fitted data to the right of the top panel, we do not observe “bad quality” fits at all. This is further established by the distribution of NRMSE values, which has all of its mass at low values. We are not sure why the hypothesis test behaves this way in this scenario.
At the other extreme of p, the p-value based test is again misleading. This is the case where most of the p-values are >0.05; hence, it would be the most trustworthy choice in terms of the p-value test. Yet, clearly, this is the case where the actual fits are the worst by far. We have performed extensive numerical tests and are confident that the code implementations yielding the p-values are correct.100,132
This leads us to conclude that the NRMSE based test is much more trustworthy. If a practitioner wants to transform the NRMSE data into a Boolean decision “is this okay?,” we would suggest checking whether the vast majority (e.g., 99%) of the mass of the NRMSE distribution is less than 0.5. Even so, the NRMSE test may only provide a range of p for which the EXPD fits are of sufficiently high quality. It cannot instruct how to pick a p from that range. Thankfully, from what we have seen in Sec. IV B, the fluctuations of the estimated dimension with p are relatively small if p is in an appropriate range.
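A minimal sketch of such a Boolean decision, assuming `nrmses` holds one NRMSE value per state space point (placeholder data below) and using the 0.5 and 99% figures quoted above:

```julia
using Statistics

nrmses = rand(5000) .* 0.3  # placeholder values; one NRMSE per reference point

# accept the chosen quantile probability if 99% of the NRMSE mass lies below 0.5
evt_fit_acceptable = quantile(nrmses, 0.99) < 0.5
```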
What made the discussion of this subsection difficult is that we have not found any information regarding the p-values of this test in the literature, despite the plethora of real-world applications (see Appendix A 10). While it has been mentioned that the quantile probabilities chosen in these real-world applications “satisfy” this p-value test,65 the actual p-value distributions were not shown.
In fact, we have not found any discussions in general regarding the significance of the EVT results. The main argument the EVT FD literature has used in favor of the significance of results is that the distributions of local dimensions did not change much within a range of appropriate p. However, by itself, this argument is not convincing of the validity of the results, only of their stability. We will discuss these aspects again in Sec. IV H and in the conclusions. For the rest of the article, we keep the quantile probability fixed to the value used so far, as it seems to yield correct results for synthetic data of the lengths considered here.
D. Comparison with pointwise dimension
In Fig. 14, we again compute the EVT local dimensions for exemplary sets as in Sec. IV A, but now we compare them with the pointwise dimension, i.e., the scaling of the inner sum of Eq. (6) vs ε. Our goal with this comparison is to see how well either method captures the “spread” of dimension values. We expect that for rather uniform fractal sets the distribution should be narrow, and wide for sets with a highly non-uniform natural measure. Note that the average of the pointwise dimensions does not coincide with the correlation dimension. As is also made clear in Ref. 16, the correlation sum gives a more accurate result for the FD of the whole attractor because it utilizes more points to estimate the scaling behavior.
The most important result here is that, for the chosen quantile probability, the two methods yield very similar results, hence establishing the overall accuracy of the EVT method for synthetic data. The pointwise dimension estimates, however, are more accurate for the Hénon map attractor and a quasiperiodic orbit, which we already discussed in Sec. IV A. Hence, the pointwise dimension is slightly more accurate for noiseless deterministic sets.
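For reference, a minimal sketch of the pointwise dimension of a single reference point: a least-squares slope of the logarithm of the inner sum of Eq. (6) vs log ε. The radii `εs` are placeholders and must be chosen to lie within the scaling region of the data.

```julia
using Statistics, LinearAlgebra

# Pointwise dimension of point `i`: slope of log(Cᵢ(ε)) vs log(ε), where Cᵢ(ε)
# is the fraction of other points within distance ε of X[i].
function pointwise_dim(X, i; εs = 10 .^ range(-3, 0; length = 16))
    dists = [norm(X[i] - X[j]) for j in eachindex(X) if j != i]
    # keep only radii containing at least one neighbor, to avoid log(0)
    pts = [(log(ε), log(count(<(ε), dists) / length(dists))) for ε in εs if any(<(ε), dists)]
    x, y = first.(pts), last.(pts)
    return cov(x, y) / var(x)  # least-squares slope
end
```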
E. Length, dimension, sampling time
We now test the EVT approach while varying various aspects of the input data: length, state space dimensionality (using delay embeddings of increasing dimension as in Sec. III E), and sampling time. The reason to judge the quality of the EVT approach vs the sampling time is that, unlike the correlation sum approach, which explicitly accounts for dense time sampling via the Theiler window w, the EVT approach is typically presented as agnostic to the sampling time. The analysis is presented in Fig. 15.
Regarding data length, EVT scales well with decreasing N up to a threshold. When N becomes too low, so that the number of exceedances per point becomes less than 50, the results significantly lose accuracy, and the estimated dimension increases rapidly. Interestingly, with decreasing N, the EVT method overestimates the FD instead of underestimating it like the entropy and correlation sum methods do. Note that one cannot simply fix this problem by reducing p because, as illustrated in Sec. IV B, this has its own downside of decreasing the estimated dimension. Still, we may conclude that for the data considered here the EVT method performs well down to moderately small N, which is a better scaling with N than that of the entropy method, but worse than that of the correlation sum. As with the other estimators, the required data length should scale with the FD; however, there is no analytic treatment as to how (while for the correlation sum analytic bounds are discussed in Sec. III C). Given that real-world data are typically short, one has to be particularly aware of this point.
As far as input dimensionality is concerned, we observe results similar to those of Fig. 6: the EVT estimate seems quite robust when increasing the dimensionality.
When it comes to the sampling time, the correlation sum approach utilizes the Theiler window w, which in practice restricts the distance calculations to pairs of points whose time indices differ by more than w. This has a negligible impact on the data length but a significant positive impact on the FD estimate in cases where data are sampled densely in time.3 Similar results are obtained for the EVT estimate: for very small sampling times, the FD is biased toward lower values. Hence, it would make sense to include a Theiler window in Eq. (7), i.e., only include points whose time index differs from that of the reference point by more than some w, as in Eq. (6). We note that Ref. 66 also considered the impact of “temporal neighbors” and reached the same negative-bias conclusion.
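A minimal sketch of this modification, extending the sketch of Sec. IV B with a Theiler window `w` that excludes temporal neighbors of the reference index (placeholder names throughout):

```julia
using Statistics, LinearAlgebra

# Local EVT dimension with a Theiler window: points whose time index is
# within `w` of the reference index `i` are excluded from the exceedances.
function local_evt_dim_theiler(X, i, p; w = 0)
    g = [-log(norm(X[i] - X[j])) for j in eachindex(X) if abs(i - j) > w]
    thresh = quantile(g, p)
    E = filter(>(thresh), g) .- thresh
    return 1 / mean(E)
end
```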
F. Noise
We repeat here the analysis of Sec. III F in Fig. 16, using the Rössler system combined with various forms of noise. For additive noise, the EVT estimate behaves more similarly to the entropy estimate than to the correlation sum, because the mean FD value of EVT is an average without focus at a specific size scale. This means that the FD value in the presence of additive noise lies between 1.9 (deterministic) and 3 (state space dimensionality). Additionally, the FD values of the EVT method are the smallest (and, hence, closest to the deterministic FD value) out of the three estimators. The EVT estimate also seems to be completely unaffected by rounding.
These observations make sense if one considers how the EVT estimate is computed (Sec. II D). The logarithms of all inter-point distances are taken into account for the computation of the quantile threshold. Rounding will have a negligible effect on the distribution of inter-point distances, and additive noise will have a diminished effect due to being “averaged out” in some sense when computing the quantile. These properties make the EVT method preferable in the presence of noise, unless one wants to identify the noise radius, in which case the correlation sum is more suited.
Last, we mention that in the case of dynamic noise, the EVT method provides slightly higher values than the entropy or correlation sum methods (but, of course, we do not know which of the three is the more “correct” number). It is nevertheless worth noting that the distribution of local dimensions for dynamic noise is much narrower than for other types of noise, which we found unexpected. We did not examine this further, however, as the results with dynamic noise depend strongly on the system used and the exact form in which the noise is added and, hence, cannot lead to any general statements.
G. Real-world data
In Fig. 17, we use the same datasets as in Sec. III G. Because these datasets are smaller in size than the typical length used so far, we used a lower quantile probability than in the synthetic-data analysis. This value for p also satisfies the “NRMSE test” we described in Sec. IV C, in the sense that most NRMSE values are less than 0.5.
Apart from the large extent of some of the distributions of local dimensions, we do not notice any downside or incorrectness in the mean FD values: they are comparable with those coming from the correlation sum (and we cannot know whether the correlation sum or EVT is more correct in its estimate). Given the results of the preceding Sec. IV F, this is expected due to the better response EVT has to noise (vs the correlation sum).
H. Extreme cases
In Fig. 18, we apply the EVT method to various extreme cases as in Sec. III H. It does a good job of revealing that two sets of very different dimensions have been artificially merged with each other in the first two cases of Fig. 18, because it yields bi-modal distributions of local dimensions. However, this is mainly due to the way the sets were created. When noise is added to the data instead, we did not see bimodal distributions with peaks at the dimension values of 2 and 3, i.e., the EVT method cannot be used to identify a noise radius, unlike the correlation sum. Nevertheless, the EVT method could be a good tool for detecting non-stationarity in an observed set, if that non-stationarity significantly changes the FD value over time.
Unfortunately, for the sets that do not accommodate a low-valued FD description, like the Vostok and nifty50 timeseries, a straightforward application of the EVT method gives a rather “clear” picture that a low-dimensional FD value describes the data. This is especially obvious in the nifty50 set (which, we remind the reader, is a stock market timeseries), where the EVT method gives the lowest dimension values and a narrow distribution. Surrogate testing did not help here either: generated random-Fourier surrogates101 had consistently higher estimated dimensions than the original data, reinforcing the wrong conclusion that the estimated low dimension of the data is valid. Additionally, performing the significance tests of Sec. IV C made things worse: the resulting plots (shown in Fig. 23) look very similar to those obtained from systems with a legitimate low-dimensional representation, again reinforcing the wrong conclusion. Last, we attempted to perform the standard analysis of checking whether the FD result converges with increasing embedding dimension of the timeseries. In Appendix F, we find that, in contrast to the other methods, the EVT method shows a convergence of the “nifty50” timeseries FD. Already at small embedding dimensions, it shows a constant mean local dimension for any further increase of the embedding dimension, which, again, reinforces the wrong conclusion. The same results were obtained for the “Vostok” timeseries.
It appears that four different significance testing approaches (p-values, NRMSEs, surrogate timeseries, and convergence with increasing embedding dimension) all indicated with confidence that inappropriate data like “Vostok” or “nifty50” have a small FD, which is incorrect. These results have major implications for both previous and future applications of the EVT method to real-world data. We have not found a way to test whether the FD values yielded by the EVT method actually represent a low-dimensional deterministic system or not, i.e., there is no way to falsify the method. Hence, extreme care must be taken when applying the EVT method to arbitrary real-world datasets, and whether the data accommodate a deterministic representation must be confirmed by other means (e.g., a self-consistent physical theory, or using the correlation sum with increasing embedding dimension, as is standard practice in nonlinear timeseries analysis22).
For extremely high-dimensional but deterministically chaotic data, the EVT method does an excellent job of identifying a very high FD (note that the FD values are divided by 4 in the figure), which is also very close to the expected value. This was expected, as the correlation sum also does an excellent job of identifying a very high dimension for clean data, and, in general, EVT and the correlation sum perform very similarly when it comes to deterministic noiseless data. For the Kuramoto–Sivashinsky example, we see that EVT estimates a higher FD than the correlation sum (while we typically noticed that for high-dimensional data it underestimates the FD when compared to the correlation sum). However, in this example, we are not sure whether the results are such because EVT is indeed more accurate, or because EVT overestimates the FD due to the data length being too small when compared to the expected FD value (see the discussion in Sec. IV E).
V. A NOTE ON SURROGATE TIMESERIES
Surrogate timeseries were recommended by Theiler et al. in the early 1990s101 as a means to test for nonlinearity in noisy timeseries. It was suggested there that a discriminatory statistic for the test can be the FD computed via the correlation sum. That requires a bit of care. If one uses the algorithm we described here, i.e., using Eq. (6) and then deducing the slope of the largest linear scaling region, then the user risks estimating the fractal dimension of the noise (already existing in the original data) instead of that of the underlying deterministic nonlinear dynamics (if any exist), thus invalidating the hypothesis testing approach in the first place. Other discriminatory statistics should be used instead (see Ref. 102, for example).
If a fractal dimension is chosen as a discriminatory statistic nevertheless, we propose the following alternatives for computing it: (i) use Takens’ estimator (Sec. 6 of Appendix A) with an upper cutoff estimated empirically from the original timeseries. Because Takens’ estimator performs a maximum likelihood estimation instead of a linear fit and, thus, considers all regions of the correlation sum up to some ε_max, it produces an “average” fractal dimension of the noise and the underlying data. It should be clear, however, that the Takens estimator is used in this context (just) as a discriminatory statistic and its results must not be interpreted as meaningful dimension estimates; or (ii) use the EVT estimator, which similarly produces an “average” of the fractal dimensions of the noise and the deterministic set.
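A minimal sketch of Takens’ maximum likelihood estimator used as a discriminatory statistic; `εmax` is the upper cutoff, `X` a (delay embedded) dataset, and the surrogate-related names at the end are assumptions referring to the TimeseriesSurrogates.jl package rather than verified API calls.

```julia
using LinearAlgebra

# Takens' maximum likelihood estimator of the correlation dimension:
# using all inter-point distances d < εmax, Δ = n / Σ log(εmax / d).
function takens_estimator(X, εmax)
    logs = Float64[]
    for i in eachindex(X), j in (i+1):lastindex(X)
        d = norm(X[i] - X[j])
        0 < d < εmax && push!(logs, log(εmax / d))
    end
    return length(logs) / sum(logs)
end

# Sketch of the surrogate comparison (names assumed; see the package docs):
# using TimeseriesSurrogates: surrogate, RandomFourier
# s = surrogate(timeseries, RandomFourier())
# compare takens_estimator on delay embeddings of `timeseries` and of `s`
```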
VI. CONCLUSIONS
In this paper, we have analyzed many different practically relevant fractal dimension (FD) estimators found in the literature. From all these estimators, we focused on an extensive quantitative comparison between the entropy-from-histogram method (Sec. II B and Sec. 1 of Appendix A), the (potentially box-assisted) correlation sum method (Sec. II C) without any logarithmic corrections, and the extreme value theory method (EVT, Sec. II D). Based on our review, these three estimators are the ones most worth using in practice (Appendix A).
To keep this paper within sensible limits, we have used a relatively small set of possible dynamical systems and real-world data. We cross-checked our results with different systems (not shown), and we are confident that the results presented are robust. Nevertheless, it is impossible to guarantee their universality for all possible input datasets. To remedy this, the reader can repeat our extensive analysis for any other dynamical system or input dataset of choice by changing only a couple of lines of code in the provided code base (see Appendix B).
Our conclusions are as follows (see also Fig. 1). When comparing the entropy and correlation sum methods, we found that for synthetic (i.e., noiseless) data, the correlation sum is clearly superior to the entropy, retaining much better accuracy for a decreasing amount of points N, increasing state space dimension, or decreasing size scale ε. The correlation sum can also be used to detect the “noise radius,” as, e.g., illustrated by Kantz and Schreiber.22 For real data, this can become a downside, making the FD estimation via the correlation sum ambiguous due to either an almost continuous change of slope or the majority of the slope reflecting the slope of the noise (Sec. III G). This can be partly alleviated by examining the behavior of the correlation sum with increasing embedding dimension. In Sec. III C, we saw that, provided that a deterministic chaotic attractor is known to generate the data, it is still quite sensible to estimate a FD of relatively small datasets using the correlation sum, as the estimated FD remains very accurate even for relatively small data lengths. On the other hand, the entropy method is very sensitive to data length and strongly underestimates the FD even for moderately small data (which is often the case in experiments). We had difficulties making sense of the correlation sum for orders q other than 2, even though it has been mentioned several times in the literature. The q-order correlation sum gives correct results only when one considers the slope at the largest ε values, and it is unclear why the slopes (i.e., FD values) at small ε values are incorrect. As such, we suggest that when treating multifractality, the entropy method should be preferred, or, if one has too few data points (where the entropy method performs poorly), then using the q-order correlation sum and recording the slope at the largest ε is the best alternative.
We then compared the correlation sum with EVT. For deterministic datasets, we found a very high degree of agreement between the two, confirming EVT’s accuracy. EVT performed equally well for high-dimensional data, but worse than the correlation sum for decreasing N. Still, EVT performs better with decreasing N, increasing state space dimension, or increasing underlying FD when compared to the entropy method. When it comes to noise, EVT gives results similar to the entropy method, i.e., the results are an average of the deterministic (smaller) FD and that of the noise. EVT reports the smallest FD values when contaminated by noise and, hence, is affected less strongly by noise than the entropy or correlation sum methods. One more advantage of EVT is that it forgoes the identification, fitting, and extraction of a slope from a scaling region and, hence, does not face the same limitations when the data cover only a very small range of magnitudes of ε. This advantage is balanced by the disadvantage of EVT being much more of a black box than the other two estimators.
It appears that EVT is a promising method that combines benefits from both alternatives: it scales well with decreasing N or increasing state space dimension and is more tolerant to noise than the correlation sum. However, we also observed that it has a huge downside: it cannot be falsified. Or, at least, at the moment there does not exist a method (visual or statistical) in the literature that can confidently falsify it, nor one that can quantify its significance meaningfully. All four methods we utilized and discussed in Sec. IV C (p-values, root-mean-squared errors, surrogate tests, and increasing embedding dimension) failed to indicate that, e.g., stock market timeseries are an inappropriate input (despite being non-stationary and not satisfying practically any of the assumptions underlying the EVT method). Instead, all ways to test for significance gave high confidence that stock market timeseries are described by a very small FD.
Surprisingly, we found practically no discussion in the literature about falsifiability, despite the plethora of real-world applications to very varied input datasets (Sec. II D). Nevertheless, we believe that falsifiability is important, because with it one can resolve controversies like the fractal dimension of “global climate attractors” (Ref. 26). The lack of falsifiability has major implications for both previous and future real-world applications using the extreme value theory approach: the argument for low-dimensional determinism in the data cannot come from the EVT method itself, at least not at the moment.
It is clear, as it was before this work, that estimating a FD is not an easy task and, hence, focusing on only a single number can mislead. The best practice, we feel, is to calculate several versions of the FD, from different methods and with varying parameters for each method (including the range of ε or the quantile probability p), and produce a median of the results. Given the software implementation we provide here, calculating all FD variants necessitates only a couple of lines of code (see Appendix B). Furthermore, plotting the appropriate quantities vs ε is a must and can hint at whether the methods are applied to inappropriate data. Finally, expecting more than one decimal point of accuracy is unrealistic in most practical applications.
As an outlook, we believe there is still future research to be done regarding the estimation of FDs. Apart from the Lyapunov dimension, which is not easily applicable to observed data, every other estimator disregards the time-ordering information in observed data (i.e., that the sequence of points follows the flow in state space instead of being randomly drawn samples on the attractor). Perhaps there is a way to make a more powerful estimator by using this discarded time-ordering of the points in the dataset. For example, this time-ordering information has been used in estimating the transfer operator,103,104 which can also yield the natural density, and perhaps this operator can be utilized to create a FD estimator of higher accuracy or with better scaling with the number of points N.
Regarding the EVT approach, we believe future research can improve it on two fronts: (1) developing a mathematically rigorous framework for choosing the quantile probability p and (2) developing statistical tests for the correctness and significance of the method’s results that can successfully falsify the method on inappropriate data. With new statistical indicators, the method may be applied with more confidence to data of unknown dynamical origin.
ACKNOWLEDGMENTS
We thank Ignacio Del Amo for an initial draft code implementation of the extreme value theory dimension estimator and for providing the simple proof of non-equivalence between the box-counting and Hausdorff dimensions; three independent reviewers for constructive criticism that greatly improved the quality of the manuscript; Nils Bertschinger for helpful discussions regarding distributions of p-values; and Gabrielle Messori and Davide Faranda for discussions and clarifications regarding the usage of extreme value theory for estimating a fractal dimension, including its p-value test of Ref. 65.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
George Datseris: Conceptualization (lead); Data curation (lead); Formal analysis (lead); Methodology (lead); Software (lead); Supervision (lead); Visualization (lead); Writing – original draft (lead); Writing – review & editing (lead). Inga Kottlarz: Formal analysis (supporting); Software (supporting); Writing – original draft (supporting). Anton P. Braun: Formal analysis (supporting); Software (supporting); Writing – original draft (supporting). Ulrich Parlitz: Conceptualization (lead); Supervision (lead); Writing – review & editing (lead).
DATA AVAILABILITY
Appendix B discusses software implementations and a code base for reproducing the submitted paper.
APPENDIX A: ALGORITHMS FOR ESTIMATING A FRACTAL DIMENSION
1. Optimized histograms for arbitrary ε
Here, we describe an optimized method to calculate histograms which, to our knowledge, has not been published before; nor could we find a faster method that works for arbitrary ε. The process has memory allocation scaling that is linear in N and performance scaling of order N log N (dominated by the sorting step), neither of which depends on ε. The process is as follows. Every point in the dataset is first mapped to its corresponding bin via the operation ⌊(x − m)/ε⌋, where m is a vector containing the minimum value of the dataset along each dimension and ⌊·⌋ is the floor operation. The resulting bin labels (N in total) are then sorted with a quick sorting algorithm, which results in all equal labels being successive to each other. The sorting is lexicographic, i.e., sorting by the first dimension, then by the second, and so on. We then count the successive occurrences of equal labels, which gives the number of points present in the corresponding bin, and move on to the next bin (no actual bins are created in memory; a bin is conceptually defined as a group of successive equal labels). Dividing by the total number of points gives the probabilities that can then be plugged into Eq. (3) to yield the entropy.
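A minimal sketch of this procedure in Julia (placeholder names; `X` is a vector of equal-length state space points):

```julia
# Probabilities of the ε-histogram of `X`, via the sort-and-count approach.
function histogram_probabilities(X, ε)
    mini = reduce((a, b) -> min.(a, b), X)  # minimum along each dimension
    # map every point to its bin label; tuples compare lexicographically
    bins = [Tuple(floor.(Int, (x .- mini) ./ ε)) for x in X]
    sort!(bins)                             # equal labels become successive
    probs = Float64[]
    c = 1
    for i in 2:length(bins)
        if bins[i] == bins[i-1]
            c += 1
        else
            push!(probs, c / length(bins)); c = 1
        end
    end
    push!(probs, c / length(bins))
    return probs  # plug into Eq. (3) to obtain the entropy
end
```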
2. The box-counting algorithm by Molteno
This box-counting algorithm for the generalized dimension was introduced by Molteno38 as an improvement of the method introduced by Grassberger.105 It claims to be of order O(N). The algorithm partitions the data into boxes and counts the number of points in each box to retrieve the probabilities necessary to calculate the generalized dimension. The faster runtime of the algorithm is due to the use of integer manipulations for the division of the data points into boxes.
The first box contains all indices into the array that stores the data points (a box here refers to an array of indices). The boxes are subsequently partitioned until the mean number of points per filled box falls below a threshold, recommended by Molteno to be 10.
The algorithm provides exponentially scaled sets of box sizes that suit the approximation of the limit in Eq. (4), and it only needs to compute the simple integer operation (A2) during each partitioning process. The main disadvantage is the static choice of box sizes. It works well for low-dimensional data sets, but for larger dimensions, the number of boxes increases exponentially with dimension, thereby exponentially increasing the number of points necessary to calculate the dimension. This drastically limits the range of box sizes over which the algorithm can be applied and, for some sets, makes the computation of low accuracy or even outright impossible. Given how fast our histogram algorithm already is (all entropy curves in this paper took on average 0.1 s to compute), we saw no reason to use the Molteno method. Also, the Molteno method does not allow the user to choose the ε values, making it less flexible.
3. Correlation sum
4. The box and prism-assisted correlation sum
Theiler40 proposed an improvement over the calculation of the correlation dimension by Grassberger and Procaccia106 that divides the data into boxes before calculating the distances between points. Thereby, the number of distance calculations is reduced, and the scaling becomes faster than O(N²) (by how much depends on the box size). After the division into boxes, the formula given in Eq. (6) is used to calculate the correlation sum; therefore, an extension to the q-order correlation sum is possible.
For a point inside a box, there may be points outside the box that are within the given distance ε of the point. Therefore, the boxes neighboring the current box also have to be found and checked. The distances between the points in the box and its neighbors are calculated as given in Eq. (6). The first sum runs over all points of the initial box, and the second sum uses all points in the box and adjacent boxes. For q = 2, only boxes with equal or larger indices are included in the neighbor search, and the optimization presented in the previous subsection is used.
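A simplified sketch of such a box-assisted computation (not Theiler’s exact algorithm; names are placeholders): points are binned into boxes of side ε0 ≥ ε, and distances are only computed between points in the same or in adjacent boxes.

```julia
using LinearAlgebra

# Box-assisted correlation sum C(ε): only same-box or adjacent-box pairs
# are checked, which is sufficient because the box side ε0 is at least ε.
function boxassisted_correlationsum(X, ε, ε0)
    @assert ε ≤ ε0
    D = length(X[1])
    mini = reduce((a, b) -> min.(a, b), X)
    boxes = Dict{NTuple{D,Int}, Vector{Int}}()
    for (i, x) in enumerate(X)
        key = Tuple(floor.(Int, (x .- mini) ./ ε0))
        push!(get!(boxes, key, Int[]), i)   # indices of points in each box
    end
    offsets = collect(Iterators.product(ntuple(_ -> -1:1, D)...))
    pairs = 0
    for (key, idxs) in boxes, off in offsets
        nbr = get(boxes, key .+ off, nothing)
        nbr === nothing && continue
        for i in idxs, j in nbr
            j > i && norm(X[i] - X[j]) < ε && (pairs += 1)  # each pair counted once
        end
    end
    N = length(X)
    return 2pairs / (N * (N - 1))
end
```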
A point of criticism for this algorithm is its poor runtime for higher dimensions. The advantage of distributing the points into boxes beforehand diminishes, as the number of boxes increases considerably with dimension. To reduce the number of boxes, Theiler proposed a prism algorithm, where only the first few dimensions are used to distribute the data into boxes. These new boxes, where some sides are of the chosen side length and the other sides cover the whole range of the data, are called prisms. Theiler also gives a recommendation for the best choice of prism dimension and for when prisms should be preferred over plain boxes. A downside of this prism approach is that, for any prism dimension smaller than the data dimension, some point pairs that should have been discarded may be included, due to having small distances in the first dimensions but a larger distance in at least one of the remaining dimensions.
The box size estimator of Bueno-Orovio and Pérez-García still varies in its choice of box size but shows fluctuations of smaller amplitude than the Theiler estimator. The choice of a box size smaller than the smallest interpoint distance only occurs for high-dimensional data sets with a low number of points. Furthermore, Bueno-Orovio and Pérez-García chose a prism dimension of 2 in all cases. However, a prism dimension of 2 can result in box size estimates smaller than the minimal interpoint distance for high-dimensional datasets of comparably small size. In this paper, we used this approach but with a different box size (see below).
The main benefit of a small box size is that it makes the computations much faster. Unfortunately, both suggestions, and especially that of Ref. 41, often fail in practice. They give much too small box size values, and for data with any amount of noise whatsoever this value is well below the noise radius. This is displayed in Fig. 7. The vertical dotted line shows the box size estimated by Eq. (A6) (the calculation of the correlation sum would be limited up to this value). In fact, it is so small that, even for only 5% additive noise, the computation would only show the noise dimension and no hint of the deterministic (and smaller) slope of the correlation sum vs ε. That is why in this paper we decided to use the box-assisted version for better performance, but with the box size chosen as discussed in Sec. III I. The performance is not too bad: e.g., for the typical data lengths considered here, computing the entire correlation sum curve takes about a minute on an average computer. Notice that even with the optimized-for-performance box size of Ref. 41, the entropy method is still massively faster (in fact, it is even faster than just computing the correlation sum).
5. Fixed mass correlation dimension
6. Takens’ estimator
For a Gaussian distributed random variable, the log-likelihood function is a parabola that has fallen by 0.5 from its maximum at one standard deviation and by 2 at two standard deviations. By invariance,112 this is also the case for a non-Gaussian random variable, letting us easily estimate the variance of the estimated dimension.
When testing the algorithm and its dependency on the upper cutoff ε_max (Fig. 20) for different dynamical systems, we found that, for low-dimensional systems, the variation of the estimate with ε_max exceeds the confidence intervals obtained at any fixed ε_max.
These variations occur because the estimation assumes that Eq. (A10) holds. Thus, the estimated dimension and its confidence intervals are of no use as long as the validity of assumption (A10) is not known.
While Takens’ estimator does not provide a significant advantage in the precise estimation of the fractal dimension, compared to correlation-sum-based methods, it can be useful in the case of surrogate timeseries as described in Sec. V.
7. Judd’s estimator
Once the optimal bin width is found, the cutoff is chosen as the right edge of the fullest bin of the histogram. All bins to the right of this bin are joined into a single bin.
The minimization of Eq. (A12) is subject to two difficulties that were already noted by Judd. First, an optimizer cannot understand the idea that the exponential is the essence of the model, while the polynomial is only a device to correct for deviations from the scaling law. Second, the optimizer is highly sensitive to the initial condition of the optimization; this sensitivity could be reduced by a tailored optimizer,113 but one still observes a very broad distribution of estimated values over different samples of a long trajectory, especially for higher-dimensional systems, as is shown in Fig. 21.
Due to these problems, we decided not to include the estimator in the main comparison.
8. Dimension from Lyapunov exponents
In Sec. II E, we described the Lyapunov dimension due to Kaplan and Yorke. We do not have anything to add here regarding that dimension, but we want to mention Ref. 48 by Chlouverakis and Sprott. They suggest that, instead of a linear interpolation of the cumulative sum of Lyapunov exponents, a polynomial interpolation should be used. However, as we found no theoretical foundation for this proposal, we decided to skip it (and we also did not notice any significant improvement in the numeric results).
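For reference, a minimal sketch of the standard linear-interpolation (Kaplan–Yorke) formula applied to a spectrum of Lyapunov exponents sorted in decreasing order (the example spectrum at the end is hypothetical):

```julia
# Kaplan–Yorke (Lyapunov) dimension: Δ_L = k + (λ₁ + … + λ_k)/|λ_{k+1}|,
# with k the largest index for which the cumulative sum is still non-negative.
function kaplanyorke_dim(λs)
    s = cumsum(λs)
    k = findlast(≥(0), s)
    k === nothing && return 0.0          # no non-negative partial sum
    k == length(λs) && return float(k)   # all partial sums non-negative
    return k + s[k] / abs(λs[k+1])
end

kaplanyorke_dim([0.42, 0.0, -2.3])  # ≈ 2.18 for this hypothetical spectrum
```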
9. Mean return times
According to the Poincaré recurrence theorem, any trajectory within an ergodic set114 will return arbitrarily often and arbitrarily close to any neighborhood in the ergodic set.99 We represent this neighborhood as a hypersphere of radius ε, centered at some point in the ergodic set, and define the mean return time to this hypersphere. One then expects the mean return time to scale as ε raised to the negative of the fractal dimension, which allows estimating the FD from the mean return times. A more formal discussion of this fact, and the explicit connection with the natural measure of the ergodic set and the fractal dimension obtained via the generalized entropy (4), is given by Theiler.16 The earliest reference we found using return times to estimate fractal dimensions is Ref. 46.
Unfortunately, the method using mean return times is not recommended at all. A fundamental limitation is that knowledge of the dynamic rule is necessary; otherwise, the results of the method for measured data are too inaccurate to be considered seriously. Even for a known rule, the method converges slowly (numerically). Furthermore, it provides an estimate of the local dimension around the point of return, similarly to the pointwise dimension. Thus, it has to be further averaged over several state space points, requiring several orders of magnitude more computation time than the correlation sum method or the Lyapunov dimension method. Last, we found its numeric output (not shown, see the online repository) to be quite far from the results of the correlation dimension and, thus, we do not consider it accurate enough.
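For completeness, a minimal sketch of the estimation from mean return times for a known map rule `f` (placeholder names; the FD then follows from the slope of −log of the mean return time vs log ε over several radii):

```julia
using LinearAlgebra, Statistics

# Mean return time to the ball of radius ε around `x0` under the map `f`.
function mean_return_time(f, x0, ε; total = 10^7)
    x = x0
    entries = Int[]          # times at which the orbit (re)enters the ball
    inside_prev = true       # the orbit starts at the center, i.e., inside
    for n in 1:total
        x = f(x)
        inside = norm(x - x0) < ε
        inside && !inside_prev && push!(entries, n)
        inside_prev = inside
    end
    return mean(diff(entries))  # requires at least two recorded entries
end

# slope of -log.(mean return times) vs log.(εs) ≈ local fractal dimension
```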
10. Extreme value theory
We introduced the algorithm for estimating a FD via extreme value theory (EVT) in Sec. II D. Here, we expand on how the algorithm has been used in the literature and highlight potential ambiguities we have noticed in its real-world applications.
The second ambiguity we encountered is the lack of application, and even discussion, of delay coordinate embeddings. Many real-world applications, e.g., Ref. 65, analyze a single dynamic variable (such as sea level pressure in the case of climate applications). This dynamic variable is a spatiotemporal field and, hence, provides a multi-dimensional input dataset. Despite the high input dimensionality, this single variable is likely coupled to many other dynamic variables in a coupled dynamical system (which is especially true in the case of climate dynamics). The theory of delay embeddings is supposed to reconstruct the missing dynamic variables and, as a result, provide a more correct representation of the dynamical flow. Delay embedding is missing from almost all applications of EVT we reviewed and cited, even though they all use the timeseries of only one dynamic variable. Given that delay embedding is a well-established analysis step28 that is completely separate from estimating fractal dimensions, we are not sure why there is a lack of discussion of it.
The third ambiguity we want to highlight is the report of relatively small values for the fractal dimensions of spatiotemporal, and highly complex, real-world data. For example, Ref. 70 reports low dimensions for the dynamics of slow earthquakes in the Cascadia region, Ref. 72 reports similarly small values for spatiotemporal atmospheric flow (of daily resolution; hence, large scale turbulence is considered), and Ref. 73 reports a difference of at most 2 between the states of largest and smallest local fractal dimension of the 500-hPa geopotential height (Z500) dynamic variable for the weather of the European–Atlantic region. In particular, in the last case, this would mean that, out of the potentially millions of available degrees of freedom in this (discretized) spatiotemporal system, there is a difference of at most two additional degrees accessed by the state space flow between the least and most stable regions in state space. Given that in this review we noticed much larger differences of local dimensions in much lower-dimensional systems (see, e.g., Fig. 11 or 18), we find this reported small number difficult to grasp. In general, given the discussion on falsifiability of Secs. IV C and IV H, as well as the limitations that come from the length of the input data discussed in Sec. III C, we believe that the absolute values of the fractal dimensions reported in these publications should be taken with a grain of salt and not be equated with the available degrees of freedom in the state space. Whether the relative values of the local dimensions (in the sense that a relatively higher value means higher local state instability for the real system) can be used to draw conclusions depends on the confidence one has in the stability of the distributions of local dimensions (see Sec. IV C).
11. Persistent homology
Persistent homology methods stem from topological timeseries analysis and its applications to dynamical systems. These techniques have been applied relatively recently to estimate fractal dimensions, and a quantitative review was recently published by Jaquette and Schweinhart.50
The methodology is based on tracking how k-dimensional holes form or disappear as the point cloud that composes the set is “inflated” or “thickened.” This means that each point in the set is taken as a sphere with radius initially 0, and this radius is increased as the point cloud is “inflated.” Estimators based on zero-dimensional persistent homology using minimal spanning trees were proposed already in the early 1990s by van de Weygaert et al.115 and Martínez et al.,116 who stress that this approach provides an estimate of a (generalized) fractal dimension and also works with relatively small data sets. For more information about implementations of the method, see Ref. 50.
The review of Ref. 50 compared fluctuations in the output values, as well as the distance of the output values themselves from the reference “true” values of the test sets it was applied to. The results showed that, compared to the correlation sum, the persistent homology method has similar performance in some configurations and dramatically worse performance in others. Unfortunately, in some configurations the method also performs poorly when noise is present, i.e., it does not distinguish two slopes (of the noise and of the deterministic set) and instead shows that of the noise, while in other configurations it does find two slopes. Additionally, the method output depends strongly on a meta-parameter whose value cannot be deduced from the input data. For these reasons, and because the methodology itself is more complicated to both explain and implement than the correlation sum of Sec. II C, we deem the method worse than the correlation sum and only considered the correlation sum for a more in-depth comparison in Sec. III.
APPENDIX B: SOFTWARE IMPLEMENTATIONS AND CODE BASE
The work done in this paper and the figures produced are available as a fully reproducible code base, which can be found on GitHub.117 It is written in the Julia language118 and uses the software: DynamicalSystems.jl,36 DifferentialEquations.jl,119 BenchmarkTools.jl,120 ComplexityMeasures.jl,104 LsqFit.jl, and DrWatson.jl.121 Figures were produced with Makie.122 All methods, with the exception of Judd’s algorithm and the persistent homology method, are implemented, documented, and tested extensively in the FractalDimensions.jl123 submodule of DynamicalSystems.jl. The implementations follow best practices in scientific code124 and are highly optimized, utilizing multi-threading whenever possible. The following code is a simple example of calculating the entropy, correlation sum, and EVT fractal dimensions with DynamicalSystems.jl and the Julia language:
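A minimal sketch of such a computation follows. The specific exported function names are assumptions on our part and may differ from the actual API; consult the FractalDimensions.jl documentation for the exact names and keyword arguments.

```julia
using DynamicalSystems

# Input data: any matrix or vector of state space points can be wrapped into a
# `StateSpaceSet` (random points are used here purely as a placeholder).
X = StateSpaceSet(rand(10_000, 3))

# Function names below are assumed and may differ; see the library docs.
ΔH = generalized_dim(X; q = 1)       # entropy (generalized) dimension
ΔC = grassberger_proccacia_dim(X)    # correlation sum dimension
ΔE = extremevaltheory_dims(X, 0.98)  # EVT local dimensions for quantile probability 0.98
```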
APPENDIX C: DYNAMIC RULES OF KNOWN SYSTEMS
All dynamical systems used for generating data are listed in Table III.
System | Dynamical rule | Initial conditions | Parameters
---|---|---|---
Hénon map |  | (0.08, 0.12) | a = 1.4, b = 0.3
Kaplan–Yorke map |  | (0.15, 0.2) | λ = 0.2
Towel map |  | (0.085, −0.121, 0.075) |
Hénon–Heiles |  |  |
Coupled logistic maps |  |  | k = 0.1
Rössler |  | (0.1, −0.2, 0.1) | a = b = 0.2, c = 5.7
Rössler (periodic parameters) |  |  | a = b = 0.2, c = 3
Lorenz-96 |  | (j × 0.1 for j ∈ 0, …, D − 1) | F = 24
APPENDIX D: DELAY EMBEDDING PARAMETERS FOR EXPERIMENTAL DATA
The method of Kraemer et al.34 finds optimal delay times that may not be equispaced. The number of delay times found is equal to the embedding dimension. The delay times found are listed below for each system (delay times are always integers, in units of the sampling time). For the “Vostok” and “nifty50” data, we used traditional techniques of optimizing the delay time and the embedding dimension individually, via the minimum of the mutual information and Cao’s method, respectively, because the method of Ref. 34 (correctly) yields that the data do not accommodate a proper embedding:
“electroch. 1”: (0, 26, 13, 5, 20),
“electroch. 2”: (0, 25, 16, 148, 138, 87, 60, 105),
“Shinriki”: (0, 19, 38, 57),
“nifty50”: (0, 43, 86, 129, 172, 215),
“vostok”: (0, 50, 100, 150, 200, 250, 300),
“double pendulum”: (0, 51, 25, 39, 12),
“Roessler”: (0, 6, 3, 14), and
“EEG IBI”: (0, 13, 26, 39, 7).
APPENDIX E: MORE PLOTS FOR SIGNIFICANCE OF EVT
APPENDIX F: INCREASING EMBEDDING DIMENSION OF REAL-WORLD DATA
In Figs. 24 and 25, we perform what is known as standard practice when estimating FDs: iteratively increasing the embedding dimension until a convergence of the FD appears at the largest scales of ε.22 In particular, for this subsection we estimate the slope of the right-most linear scaling region (i.e., the one at the largest ε), as opposed to the slope of the largest linear region. That is because in real data it is this slope that should indicate the FD of the underlying deterministic dynamics, if any exist (see Sec. III F).
Indeed, for the “electrochemical 2” dataset, the convergence of the correlation dimension becomes apparent very quickly. On the other hand, for “nifty50” there is no convergence (we computed up to even larger embedding dimensions, not shown). The results of the entropy method for “nifty50” are inaccurate because the timeseries has only 3125 points; they should not be trusted (but, in any case, they do not show any convergence either).
We also perform the same analysis for the EVT approach, which once again reinforces the problem: the method fails to recognize that the stock market timeseries should not yield a convergent dimension. Instead, the EVT dimension estimates converge very rapidly with increasing embedding dimension.