In this paper, we propose, discuss, and illustrate a computationally feasible definition of chaos which can be applied very generally to situations that are commonly encountered, including attractors, repellers, and non-periodically forced systems. This definition is based on an entropy-like quantity, which we call "expansion entropy," and we define chaos as occurring when this quantity is positive. We relate and compare expansion entropy to the well-known concept of topological entropy, to which it is equivalent under appropriate conditions. We also present example illustrations, discuss computational implementations, and point out issues arising from attempts at giving definitions of chaos that are not entropy-based.

Toward the end of the 19th century, Poincaré demonstrated the occurrence of extremely complicated orbits in the Newtonian dynamics of three gravitationally attracting bodies. This complexity is now called chaos and has received a vast amount of attention since Poincaré's early discovery. In spite of this abundant past and current work, there is still no broadly applicable, convenient, generally accepted definition of the term chaos. In this paper, we advocate a particular entropy-based definition that appears to be very simple while, at the same time, being readily accessible to numerical computation and very generally applicable to a variety of often-encountered situations, including attractors, repellers, and non-periodically forced systems. We also review and compare various previous definitions of chaos.

While the word chaos is widely used in science and mathematics, there are a variety of ways of defining it. Thus, for this 25th anniversary issue of the journal CHAOS, we are motivated to review issues that arise when attempting to formulate a generally applicable definition of chaos, and to advocate a particular entropy-based definition that seems to us to be especially apt. We also relate our proposed definition to previous definitions.

Intuitively, perhaps the two most prominent (not necessarily independent) attributes of what scientists commonly think of as chaos are the presence of complex orbit structure and extreme sensitivity of orbits to small perturbations. Indeed, in the paper by Li and Yorke1 where the term chaos was introduced in its now widely accepted nonlinear dynamics context, the term was motivated by the simultaneous presence of unstable periodic orbits of all periods, as well as an uncountable infinity of non-periodic orbits. Thus, Li and Yorke's introduction of this terminology was motivated by the chaos attribute of complex orbit structure. On the other hand, Lorenz2 was concerned with weather forecasting and accordingly focused on the chaos attribute of temporally exponential increase of the sensitivity of orbit locations to small initial perturbations. As we will discuss, these two attributes can be viewed as “two sides of the same coin.”

We think of a definition of chaos as being “good” if it conforms to common intuitive notions of chaos (such as complex orbit structure and orbit sensitivity) and, at the same time, has the following three desirable features:

  • Generality: The definition should work for almost all the examples that typical readers of this journal are likely to judge as chaotic.

  • Simplicity: The definition should be fairly concise and not too technical.

  • Computability: The definition should allow a practical, straightforward computational implementation for discerning the existence of chaos in a model.

Considering the issue of generality, one would like a definition of chaos to be applicable not only to attractors but also to non-attracting sets, often called repellers. With respect to chaotic repellers,3–5 we note that they are central to the physically relevant topics of fractal basin boundaries,6 chaotic transients, and chaotic scattering,7 occurring, for example, in fluid dynamics,8,9 celestial mechanics,10 chemistry,11 and atomic physics.12 

Furthermore, again considering the issue of generality, due to their common occurrence in applications, we desire that our definition of chaos be applicable to non-autonomous dynamical systems (i.e., systems that are externally forced by time dependent inputs), including external inputs that are temporally quasi-periodic,13 stochastic,14–16 or are themselves chaotic. Here, physical examples include quasi-periodic forcing of atmospheric jets,17 quasi-periodic forcing of stellar luminosity variation by two superposed stellar modal oscillations,18,19 advective transport in fluids with temporally and spatially irregular flow fields,20–24 and phase synchronism of chaos by noisy or chaotic drives.25 We emphasize that, when considering externally forced systems, we are interested in identifying chaos in the response of the system to a particular realization of the forcing, not in characterizing whether the forcing is chaotic.

An important point for consideration of non-periodically forced chaotic systems is that the notion of a compact invariant set, which is typically used in definitions of chaos for autonomous systems (including Poincaré maps of periodically forced systems), may not be appropriate or convenient for situations with non-periodic forcing. Furthermore, in practice, it may be difficult to locate or detect an invariant set that is not an attractor. Thus, rather than defining chaos for an invariant set, we will instead consider a notion of chaos for the dynamics within any given bounded positive-volume subset S of the state space. We call such a set S a restraining region. For autonomous systems, chaos for an invariant set can be detected by taking S to be a neighborhood of the desired invariant set.

In our opinion, the currently most satisfactory way of defining chaos for autonomous systems is by the existence of positive topological entropy or metric entropy. We note, however, that the standard definitions of these entropies are quite difficult to straightforwardly implement in a numerical procedure. In addition, while generalizations to the original definitions of topological and metric entropy have been proposed, we view it as desirable to have a relatively simple definition that is applicable very broadly. However, we do not address here the question of identifying chaos in experimental data, which presents additional challenges, especially in the cases of non-attracting sets and externally forced systems.

Motivated by the considerations above, in Sec. II we introduce and discuss the definition of an alternate entropy quantity that we call “expansion entropy.” The expansion entropy of an n-dimensional dynamical system on a restraining region S is the difference between two asymptotic exponential rates: first, the maximum over d ≤ n of the rate at which the system expands d-dimensional volume within S; and second, the rate at which n-dimensional volume leaves S (this rate is 0 for an invariant set). We define chaos as the existence of positive expansion entropy on a given restraining region. Expansion entropy generalizes (to nonautonomous systems and noninvariant restraining regions) a quantity that was formulated by Sacksteder and Shub26 in the case of an autonomous system on a compact manifold. In this restricted case, by the results of Kozlovski,27 expansion entropy is equal to topological entropy for infinitely differentiable maps. In Sec. III, we present examples of the application of our definition of expansion entropy to various systems, and also provide illustrative numerical evaluations of expansion entropy for some of these examples. Section IV discusses topological entropy and previous work on computation of this quantity. Section V discusses issues that arise in previous non-entropy-based definitions of chaos.

Our definition of expansion entropy, which we denote H0, is closely related to previous definitions of topological entropy, to which it is equivalent under appropriate conditions (see Sec. IV). Expansion entropy uses the linearization of the dynamical system and a notion of volume on its state space; thus, unlike topological entropy, it is defined only for smooth dynamical systems. On the other hand, expansion entropy does not require the identification of a compact invariant set. As we will discuss, the differences in the definitions may make the criterion H0 > 0 attractive as a general definition of chaos in smooth dynamical systems.

We assume that the state space of the dynamical system is a finite-dimensional manifold M. (For example, if the manifold M is n-dimensional Euclidean space, the state x at a given time is a vector [x1, x2,…, xn] where each xi is a real number. If some of the coordinates are angle variables, they can be taken modulo 2π, resulting in manifolds such as a circle, cylinder, or torus.) We write μ(S) for the volume28 of a subset S of M, and write $d\mu(x)$ for integration with respect to this volume; for n-dimensional Euclidean space, one can take $d\mu(x) = d^n x$. Given a dynamical system on M, we will use an integral to define its expansion entropy on a closed subset S (the restraining region) that has positive, finite volume. The set S need not be invariant under the system.

We consider a deterministic dynamical system to be defined by an evolution operator, by which we mean a family f of maps $f_{t',t}\colon M \to M$, with the interpretation that if x and x′ are the states of the system at times t and t′, respectively, then $x' = f_{t',t}(x)$. (For example, $f_{t',t}$ could represent the solution from time t to t′ of a system of differential equations.) The family f must satisfy the identities $f_{t,t}(x) = x$ and $f_{t'',t}(x) = f_{t'',t'}(f_{t',t}(x))$. The maps $f_{t',t}$ are defined for t and t′ integer-valued (discrete time) or real-valued (continuous time), with the restriction that t′ ≥ t if the system is noninvertible. We allow the system to be nonautonomous, including the case where $f_{t',t}$ is a realization of a stochastic dynamical system.29 If the system is autonomous, then $f_{t',t}$ depends only on t′ − t, and in this case we will often write $f_{t',t} = f^T$, where T = t′ − t. Regardless, we assume that $f_{t',t}(x)$ is a differentiable function of x.

Recall that the singular values of a matrix A are the square roots of the eigenvalues of $A^\top A$. Thinking of A as a linear transformation, the image of the unit ball under A is an ellipsoid, and the singular values of A are the semiaxes of this ellipsoid. Let G(A) be the product of the singular values of A that are greater than 1; if none of the singular values are greater than 1, let G(A) = 1. Then, G(A) is roughly the number of ϵ-balls needed to cover the image of an ϵ-ball under A.

If the matrix A is n × n, consider a d-dimensional plane $P_d$ in n-dimensional Euclidean space, where d ≤ n. Let W be a d-dimensional ball in $P_d$, let A(W) denote the image of W under A, and let $\mu_d$ denote d-dimensional volume. Then, G(A) is the maximum over orientations of $P_d$ and over d of $\mu_d(A(W))/\mu_d(W)$. Thus, G(A) is the largest possible growth ratio of d-dimensional volumes under A. Below we will apply G to the derivative matrix $Df_{t',t}$, in which case it represents a local volume growth ratio for the (typically nonlinear) map $f_{t',t}$.
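To make the quantity G concrete, here is a minimal numerical sketch (in Python, which we also use for the illustrations below; the function name and the demonstration are our own choices, not notation from the text):

```python
import numpy as np

def G(A):
    """Product of the singular values of A that exceed 1 (an empty product gives 1)."""
    s = np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)
    return float(np.prod(s[s > 1.0]))

print(G(np.diag([2.0, 0.5])))  # 2.0: only the singular value 2 exceeds 1
```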

Let $S_{t',t}$ be the set of $x \in S$ such that $f_{t'',t}(x) \in S$ for all $t''$ between t and t′ (that is, the trajectory of x from t to t′ under f never leaves S). Let

$$E_{t',t}(f,S) = \frac{1}{\mu(S)} \int_{S_{t',t}} G\big(Df_{t',t}(x)\big)\, d\mu(x). \tag{1}$$

Definition of expansion entropy: We define the expansion entropy H0 to be

$$H_0(f,S) = \lim_{t'\to\infty} \frac{\ln E_{t',t}(f,S)}{t'-t}. \tag{2}$$

We consider H0 and other limiting quantities below to be well-defined only if the limit involved exists.30 We remark that if the system f is nonautonomous and the restraining region S is not invariant, H0(f, S) could potentially depend on the starting time t in addition to f and S. Also, it can be shown that the value of H0 is invariant under differentiable changes of coordinates that are nonsingular on S.

To help interpret the definition of H0(f, S), we now replace $1/\mu(S)$ with $[1/\mu(S_{t',t})][\mu(S_{t',t})/\mu(S)]$ in Eq. (1). The definition (2) can then be expressed as

$$H_0(f,S) = \lim_{t'\to\infty} \frac{\ln \tilde{E}_{t',t}(f,S)}{t'-t} - \frac{1}{\tau_+}, \tag{3}$$

where

$$\frac{1}{\tau_+} = -\lim_{t'\to\infty} \frac{1}{t'-t} \ln\frac{\mu(S_{t',t})}{\mu(S)}, \tag{4}$$

and

$$\tilde{E}_{t',t}(f,S) = \frac{1}{\mu(S_{t',t})} \int_{S_{t',t}} G\big(Df_{t',t}(x)\big)\, d\mu(x).$$

Thus, we can view H0(f, S) as the difference of two exponential rates, given by the limits in Eqs. (3) and (4), with the following interpretations. Imagine that N initial conditions are uniformly sprinkled throughout the volume of S at time t, and that N is very large (N → ∞). The second term 1/τ+ in Eq. (3) is then the exponential decay rate, as t′ increases, of the number of trajectories from these initial conditions that remain in S for all times between t and t′. The quantity $\tilde E_{t',t}(f,S)$ is the average over these remaining trajectories of the maximum local d-dimensional volume growth ratio along the trajectory. Thus, the first term in Eq. (3) is the exponential growth rate of this average.

It can be shown that the exponential growth rate of $G(Df_{t',t}(x))$ as $t' \to \infty$ is the sum of the positive Lyapunov exponents of the trajectory starting at x at time t. Thus, the limit in Eq. (3) is, in a sense, an average of this sum of positive Lyapunov exponents over trajectories that remain in S for all (forward) time. The criterion H0 > 0, which we propose for defining chaos, requires that this exponential volume growth rate strictly exceed the exponential rate 1/τ+ at which trajectories leave S. Note that if S is forward invariant, e.g., an absorbing neighborhood of an attractor, then 1/τ+ = 0.

Some points of interest for this entropy definition are that

  • it applies to non-autonomous systems,

  • it assigns an entropy value H0 to every restraining region S in the manifold M, and

  • it directly suggests a computational technique for numerically estimating H0 (see Sec. II C).

Finally, notice that

$$H_0(f,S) \ge H_0(f,S') \quad \text{for } S \supseteq S'. \tag{5}$$

This property follows from the fact that $S'_{t',t} \subseteq S_{t',t}$ if $S' \subseteq S$, and consequently $\mu(S')\,E_{t',t}(f,S') \le \mu(S)\,E_{t',t}(f,S)$. Thus, if there is chaos according to the definition H0 > 0 with respect to a restraining region S′, then there is also chaos with respect to every restraining region S that contains S′. In particular, as illustrated by the example in Sec. III B, this implies that the expansion entropy will detect chaos (H0 > 0) within a restraining region S when S also contains a nonchaotic attractor, even when the chaos exists only on a repeller.

In this section, we show that the expansion entropy of an autonomous, invertible system is the same as the expansion entropy of the inverse system. (Note that this is also true for the topological entropy; see Sec. IV.) This equality results from the following identity for all invertible systems (not necessarily autonomous):

$$E_{t,t'}(f,S) = E_{t',t}(f,S).$$

To verify this identity, notice that $f_{t,t'}$ is the inverse of $f_{t',t}$. Below we use the notation $x' = f_{t',t}(x)$, and consequently $x = f_{t,t'}(x')$. Then, $Df_{t,t'}(x')$ and $Df_{t',t}(x)$ are inverses, and hence the singular values of $Df_{t,t'}(x')$ are the reciprocals of the singular values of $Df_{t',t}(x)$. Since the product of the singular values of a square matrix is the absolute value of its determinant, if A is invertible then $|\det A| = G(A)/G(A^{-1})$. In particular, $|\det Df_{t',t}(x)| = G(Df_{t',t}(x))/G(Df_{t,t'}(x'))$. Also, $f_{t',t}(S_{t',t}) = S_{t,t'}$. Writing $E_{t,t'}(f,S)$ as an integral over x′ and then making the change of variables $x' = f_{t',t}(x)$, we obtain

$$E_{t,t'}(f,S) = \frac{1}{\mu(S)}\int_{S_{t,t'}} G\big(Df_{t,t'}(x')\big)\, d\mu(x') = \frac{1}{\mu(S)}\int_{S_{t',t}} G\big(Df_{t,t'}(x')\big)\, \big|\det Df_{t',t}(x)\big|\, d\mu(x) = \frac{1}{\mu(S)}\int_{S_{t',t}} G\big(Df_{t',t}(x)\big)\, d\mu(x) = E_{t',t}(f,S).$$

If f is autonomous and invertible, we write $f_{t',t} = f^{t'-t}$ and $(f^T)^{-1} = f^{-T}$. Then

$$H_0(f^{-1},S) = \lim_{T\to\infty}\frac{\ln E_{T,0}(f^{-1},S)}{T} = \lim_{T\to\infty}\frac{\ln E_{-T,0}(f,S)}{T} = \lim_{T\to\infty}\frac{\ln E_{0,-T}(f,S)}{T} = \lim_{T\to\infty}\frac{\ln E_{T,0}(f,S)}{T} = H_0(f,S).$$

Here, the first equality is by definition, the second equality is a change of notation, the third equality follows from the time-reversal identity for E derived above, and the fourth equality uses the fact that f is autonomous.

With respect to point (iii) in Sec. II A, we can imagine a computation of H0 proceeding as follows. First, randomly sprinkle a large number of initial conditions {x1, x2,…, xN} uniformly in S. Then evolve each trajectory fT,0(xi) and the corresponding tangent map DfT,0(xi) forward in time, continuing to evolve only as long as the trajectory remains in S. At a discrete sequence of times T, compute

$$\hat{E}_T(f,S) = \frac{1}{N}\,{\sum_i}'\; G\big(Df_{T,0}(x_i)\big), \tag{6}$$

where the prime on the summation symbol signifies that only those i values whose trajectories remain in S for all times between 0 and T are included in the sum. From our definition of E in Eq. (1), we see that $\hat E_T(f,S)$ is an estimate of $E_{T,0}(f,S)$. Plotting $\ln\hat E_T(f,S)$ versus T, for sufficiently large N and T, we expect to find an approximately linear relationship. Accordingly, we can estimate H0 as the slope of a straight line fitted to such data (see also Ref. 31 for a similar approach in two dimensions).

As in other such procedures, judgment and experimentation are called for in determining reliable choices of N and the range of T over which to do the fit, and such choices will be constrained by computer resources. In practice, we find it useful to choose a number, say, 100, of different samples of size N, compute lnÊT(f,S) for each sample, and take the mean and standard deviations of these logarithms. Not only does this allow us to estimate the sampling error but it also produces a more reliable mean estimate than computing lnÊT(f,S) for a single sample of 100N points. Example illustrations of this computational approach are given in Secs. III B–III E.
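The following is a minimal sketch of the procedure just described, for a single sample of N initial conditions; the function and its argument conventions are our own illustrative choices, assuming the user supplies a map f, its Jacobian Df, and a membership test for S:

```python
import numpy as np

def expansion_estimate(f, Df, x0, in_S, T_max):
    """One-sample estimates of ln E_hat_T(f, S) for T = 1..T_max, cf. Eq. (6).

    f(x): one time step of the map; Df(x): Jacobian matrix at x;
    x0: (N, n) array of initial conditions sprinkled uniformly in S;
    in_S(x): True iff the state x lies in the restraining region S.
    """
    N, n = x0.shape
    x = x0.copy()
    J = np.tile(np.eye(n), (N, 1, 1))       # accumulated tangent maps Df_{T,0}(x_i)
    alive = np.ones(N, dtype=bool)          # trajectories that have stayed in S
    lnE = np.full(T_max, -np.inf)
    for T in range(1, T_max + 1):
        for i in np.flatnonzero(alive):
            J[i] = Df(x[i]) @ J[i]          # chain rule along the trajectory
            x[i] = f(x[i])
            alive[i] = in_S(x[i])
        live = np.flatnonzero(alive)
        if live.size == 0:
            break                           # every sprinkled point has left S
        sv = [np.linalg.svd(J[i], compute_uv=False) for i in live]
        Gs = [np.prod(s[s > 1.0]) for s in sv]   # G(Df_{T,0}(x_i))
        lnE[T - 1] = np.log(np.sum(Gs) / N)      # Eq. (6): primed sum over survivors
    return lnE
```

H0 is then estimated as the slope of a straight line fitted to the returned values against T over the scaling range, and repeating the call for many independent samples yields the means and error bars described above.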

Specializing to the case of an autonomous invertible system f, since Sec. II B shows that the expansion entropy of f and f–1 are the same, one could do a numerical computation of H0 using either f or f–1. The question then arises as to which of these two alternatives is preferable from the point of view of computational cost and accuracy. In the remainder of this section, we argue that it is computationally preferable to calculate H0 from f if f is volume contracting in S, while calculation from f–1 is preferable if f is volume expanding in S. In order to see this, we generalize Eq. (4) to define both forward and backward exponential decay rates

$$\frac{1}{\tau_\pm} = -\lim_{T\to\infty}\frac{1}{T}\ln\frac{\mu(S_{\pm T,0})}{\mu(S)}, \tag{7}$$

where $S_{\pm T,0}$ is, as in Sec. II A, the set of initial conditions at time 0 whose trajectories under f remain in S between times 0 and ±T, respectively. That is, in terms of the previously stated numerical procedure for calculating the expansion entropy, 1/τ+ is the exponential temporal decay rate of the number of initial conditions sprinkled uniformly throughout S at time 0 that lead to orbits that never leave S up to time T, while 1/τ− is the analogous quantity taking the initially sprinkled points backward from time 0 to time −T. Since, to estimate the integral in Eq. (1), we need to compute the average expansion rates only from those initial conditions that have not left S, statistics at any given T are improved when the number of such orbits is largest. Further, the estimate of the limit T → ∞ dictates that we make T large. These two considerations indicate that the forward (respectively, backward) calculation of H0 will be computationally more efficient if τ+ > τ− (respectively, τ− > τ+). Taking the difference of the two rates defined in Eq. (7), and using the fact that $S_{-T,0} = f_{T,0}(S_{T,0})$ for an autonomous invertible system, we obtain

$$\frac{1}{\tau_+} - \frac{1}{\tau_-} = \lim_{T\to\infty}\frac{1}{T}\ln\frac{\mu(S_{-T,0})}{\mu(S_{T,0})} = \lim_{T\to\infty}\frac{1}{T}\ln\frac{\mu\big(f_{T,0}(S_{T,0})\big)}{\mu(S_{T,0})}.$$

The right hand side is positive (respectively, negative) when the map is volume expanding (respectively, contracting) in S. Thus, τ− > τ+ if the map is volume expanding, while τ+ > τ− if the map is volume contracting. In particular, if S is a neighborhood of an attractor, it is best to employ a forward time calculation. We note that the common examples of the Hénon map and the Lorenz system are uniformly volume contracting at all points in state space (implying that τ+ > τ−), while Hamiltonian systems are volume preserving (implying that τ+ = τ−).

In past work on fractal dimension, the box-counting dimension has been generalized to a spectrum of dimensions often denoted Dq, where the box-counting dimension corresponds to q = 0, and the index q can be any nonnegative number.32–34 In addition, a spectrum of entropy-like quantities, again depending on an index q ≥ 0, has been introduced by Grassberger and Procaccia,34,35 where q = 0 corresponds to the topological entropy, and q = 1 corresponds to the metric entropy. Thus, motivated by these past works, it is natural to introduce an analogous spectrum of q-order expansion entropies, Hq, and to consider whether they are useful with respect to the issue of defining chaos.

In  Appendix A, we introduce and discuss a natural way of specifying Hq. In particular, the form defining Hq is specified so that it gives Eqs. (1) and (2) when q = 0, gives an expansion entropy analogue of the entropy of Grassberger and Procaccia,34,35 and also gives a correspondence for q = 1 with previous results for the metric entropy of repellers4,36 and with Pesin's formula37 for the metric entropy for attractors. However, as we will argue in  Appendix A, q = 0 is special in regard to defining chaos. In particular,  Appendix A will consider Hq for an example in which S contains an attracting fixed point and a chaotic repeller (see Sec. III B). For this example, it is shown that H0 > 0, while Hq for q > 0 can be zero. Thus, H0 successfully detects the chaos within S, but Hq for q > 0 may not.

Consider a one-dimensional differentiable map f with a fixed point x0; for simplicity, we assume that Df(x0) ≠ ±1. Let the restraining region S be an interval containing x0 on which $|Df(x)| \ne 1$; that is, f is either uniformly expanding or uniformly contracting on S. In either case, we show below that the expansion entropy H0(f, S) is zero, i.e., that fixed points are not chaotic according to our definition.

In the case of an attracting fixed point, $|Df_{t',t}(x)| < 1$, and hence $G(Df_{t',t}(x)) = 1$, for all $x \in S$ and $t' > t$. Also, $S_{t',t} = S$ for $t' > t$. Then, from Eqs. (1) and (2), we have $E_{t',t}(f,S) = 1$ for $t' > t$, and H0(f, S) = 0.

In the expanding case, $|Df_{t',t}(x)| > 1$ for trajectories that remain in S from time t to t′. Also, $S_{t',t}$ is a subinterval of S whose endpoints map to the endpoints of S under $f_{t',t}$. Thus,

$$E_{t',t}(f,S) = \frac{1}{\mu(S)}\int_{S_{t',t}} \big|Df_{t',t}(x)\big|\, d\mu(x) = \frac{\mu\big(f_{t',t}(S_{t',t})\big)}{\mu(S)} = 1.$$

Once again, H0(f, S) = 0.

For fixed points (or periodic orbits) of higher-dimensional systems, similar arguments can be made, though they are more complicated in the case when the fixed point has both stable and unstable directions. The essence of these calculations is that any growth in the integrand G of Eq. (1) as $t'-t$ increases is balanced (up to a time-independent multiplicative constant) by a reduction in the volume of $S_{t',t}$. The conclusion remains that H0 = 0, i.e., that isolated periodic orbits are not chaotic.

Consider the one-dimensional map f shown in Fig. 1 and the two restraining regions S = [−1, 1.5] and S′ = [0, 1] ⊂ S. The map is linear with derivative 3 on [0, 1/3] and linear with derivative −2 on [1/2, 1], mapping each of these intervals onto [0, 1]. For this example, the fixed point x = −1/2 attracts almost all initial conditions in S with respect to Lebesgue measure.

FIG. 1. A map illustrating a case where S = [−1, 1.5] contains a chaotic repeller (in S′ = [0, 1]) and an attracting fixed point (x = −1/2).

On the other hand, S′ contains an invariant Cantor set that is commonly called a chaotic repeller. We now show that, according to our definition, f exhibits chaos by having positive expansion entropy H0 in both S and S′. The invariant Cantor set consists of all initial conditions in [0, 1] whose trajectories never land in the interval (1/3, 1/2). The set $S'_{T,0}$ of initial conditions whose trajectories remain in [0, 1] from time 0 to time T consists of $2^T$ intervals corresponding to all possible strings of length T of the letters L and R, where L denotes an iteration in the interval [0, 1/3] and R denotes an iteration in the interval [1/2, 1]. A string with k L's and T − k R's corresponds to an interval of length $3^{-k}2^{-(T-k)}$, on which $Df^T = 3^k(-2)^{T-k}$. Then, the integral of $G(Df^T) = |Df^T|$ over each such interval is 1. Thus, $E_{T,0}(f,S') = 2^T$, and hence $H_0(f,S') = \ln 2$. In accordance with property (5), since S contains S′, we also have H0(f, S) = ln 2.

In  Appendix A, in addition to defining the quantity Hq discussed in Sec. II D, we also evaluate Hq for the map in Fig. 1. We find for the smaller restraining region S′ that Hq(f, S′) > 0 for all q ≥ 0, but for the larger restraining region S there is a critical value qc < 1 such that Hq(f, S) = 0 for q ≥ qc. For q = 1, we have $H_1(f,S') = (2/5)\ln(5/2) + (3/5)\ln(5/3) > 0$, while H1(f, S) = 0. Our interpretation is that H1(f, S) is dominated by the dynamics of Lebesgue almost every initial condition, whose trajectory approaches the fixed point attractor, while H0(f, S) is dominated by the chaotic saddle in S′. We therefore conclude that H1 (and similarly Hq for q > 0) is not an appropriate tool for detecting non-attracting chaos in a restraining region.

Next we use this example to illustrate the numerical computation of H0. Our procedure, as explained in Sec. II C, is to choose a sample size N and a range of T values and to do the following for each T. Using Eq. (6), we compute an estimate $\hat E_T$ of $E_{T,0}$ for each of 100 different samples of N points each in the restraining region, and compute the mean and standard deviation over the 100 samples. The results for N = 1000 and N = 100 000 are shown in Fig. 2 for S′ and in Fig. 3 for S. The estimated value of H0 is the slope of the solid curve in an appropriate scaling interval. The scaling interval for a given N can be judged by consistency of the results with a larger value of N, in addition to smallness of the error bars and straightness of the curve. Notice that the somewhat arbitrarily chosen restraining region S yields nearly as long a scaling interval as the restraining region S′, which was chosen with knowledge of the invariant Cantor set.
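As an illustration, the following sketch applies expansion_estimate (the hypothetical helper from Sec. II C) to the map of Fig. 1 on S′; the branch on the gap (1/3, 1/2) is a stand-in whose only relevant property is that it exits S′, as in the figure:

```python
import numpy as np

def f(x):
    t = x[0]
    if t <= 1/3:
        return np.array([3.0 * t])              # left expanding branch
    if t >= 0.5:
        return np.array([2.0 - 2.0 * t])        # right expanding branch
    return np.array([1.2])                      # hypothetical bump: leaves S'

def Df(x):
    t = x[0]
    return np.array([[3.0 if t <= 1/3 else (-2.0 if t >= 0.5 else 0.0)]])

rng = np.random.default_rng(1)
x0 = rng.uniform(0.0, 1.0, size=(100_000, 1))   # sprinkle N points in S' = [0, 1]
lnE = expansion_estimate(f, Df, x0, lambda x: 0.0 <= x[0] <= 1.0, 12)
print(np.diff(lnE))   # successive slopes should hover near ln 2 ≈ 0.693
```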

FIG. 2. Computation of $\ln\hat E_T$ versus T for the map of Fig. 1 with restraining region S′. For each T, we computed $\ln\hat E_T(f,S')$ for 100 different samples, with N = 1000 randomly chosen initial conditions in each sample (top figure) and N = 100 000 randomly chosen initial conditions in each sample (bottom figure). The solid curve shows the mean of the 100 samples, and the error bars show their standard deviation. As we discussed in Sec. II C, the slope of the solid curve should, in the limit of large N and T, approximate $H_0(f,S')$. The dashed line has slope ln 2, which is the value we obtained analytically for $H_0(f,S')$.
FIG. 3. Computation of $\ln\hat E_T$ versus T for the map of Fig. 1 with restraining region S. This is the analogue of Fig. 2 with S′ replaced by S.

Consider the one-dimensional random map

$$\theta_{t+1} = \theta_t + K\sin\theta_t + \alpha_t \ (\mathrm{mod}\ 2\pi), \tag{8}$$

where K > 0 and α0, α1, α2,… are independent random variables that are uniformly distributed in the circle [0, 2π). We take the restraining region S to be the entire circle.

Notice that

$$Df_{T,0}(\theta_0) = \frac{d\theta_T}{d\theta_0} = \prod_{t=0}^{T-1}\big(1 + K\cos\theta_t\big),$$

and that

$$\big\langle |1 + K\cos\theta| \big\rangle_\theta = \frac{1}{2\pi}\int_0^{2\pi} |1 + K\cos\theta|\, d\theta,$$

where $\langle\cdot\rangle_\eta$ denotes an average over η. If θ0 is uniformly distributed, then θ0, θ1, θ2,… are independent and uniformly distributed, so that

$$\Big\langle \prod_{t=0}^{T-1} |1 + K\cos\theta_t| \Big\rangle = \big\langle |1 + K\cos\theta| \big\rangle_\theta^{\,T}.$$

This suggests that for a typical realization of the random inputs, H0 ≈ λ, where

$$\lambda = \ln \big\langle |1 + K\cos\theta| \big\rangle_\theta.$$

For 0 < K ≤ 1,

$$\big\langle |1 + K\cos\theta| \big\rangle_\theta = \big\langle 1 + K\cos\theta \big\rangle_\theta = 1, \qquad \text{so that } \lambda = 0,$$

while for K > 1,

$$\lambda = \ln\Big\{1 + \frac{2}{\pi}\Big[\sqrt{K^2-1} - \cos^{-1}(1/K)\Big]\Big\} > 0.$$

For 0 < K ≤ 1, each map is a diffeomorphism of the circle, so $d\theta_t/d\theta_0 > 0$, and

$$E_{T,0}(f,S) = \frac{1}{2\pi}\int_0^{2\pi} G\Big(\frac{d\theta_T}{d\theta_0}\Big)\, d\theta_0 \le \frac{1}{2\pi}\int_0^{2\pi} \Big(1 + \frac{d\theta_T}{d\theta_0}\Big)\, d\theta_0 = 2.$$

Thus, $E_{T,0}$ is not exponentially increasing, so, indeed, H0 = 0 for 0 < K ≤ 1. Numerical experiments agree with the argument above that H0 > 0 for K > 1, though establishing that the transition to chaos (according to our definition) occurs exactly at K = 1 would require a more definitive study. In Fig. 4, we show the computed $\ln\hat E_T$ (see Secs. II C and III B) versus T for K = 1.5.
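As a check on the value of λ quoted above for K = 1.5, one can compare a quadrature of the circle average against the closed-form expression (a routine verification under the assumed form of map (8)):

```python
import numpy as np

K = 1.5
theta = np.linspace(0.0, 2.0 * np.pi, 200_001)
avg = np.trapz(np.abs(1.0 + K * np.cos(theta)), theta) / (2.0 * np.pi)
lam_quad = np.log(avg)
lam_closed = np.log(1.0 + (2.0 / np.pi) * (np.sqrt(K**2 - 1.0) - np.arccos(1.0 / K)))
print(lam_quad, lam_closed)   # both ≈ 0.162 for K = 1.5
```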

FIG. 4. Computation of $\ln\hat E_T$ versus T for the random circle map (8) with K = 1.5. For each T, we computed $\ln\hat E_T(f,S)$ for 100 different samples, with N = 1 000 000 randomly chosen initial conditions in each sample. Each initial condition used a different realization of the random sequence of maps. The solid curve shows the mean of the 100 samples, and the error bars show their standard deviation. The dashed line has slope $\ln\langle|1 + 1.5\cos\theta|\rangle_\theta$; this estimate of the expansion entropy appears to be slightly larger than the slope of the computed data.

This example illustrates a case where orbits are dense and, as in chaos, typical nearby orbits separate from each other with increasing time. However, the rate of separation is linear, rather than exponential in time. The example is the following map of the 2-torus:

$$\theta_{t+1} = \theta_t + \omega \ (\mathrm{mod}\ 2\pi), \qquad \phi_{t+1} = \phi_t + \theta_t \ (\mathrm{mod}\ 2\pi), \tag{9}$$

where ω/(2π) is irrational.38,39 As shown in Fig. 5, the image under this map of a curve C looping around the (θ, ϕ)-torus once in the θ direction is a curve C′ that loops once in the ϕ direction, as well as once in the θ direction, with the number of ϕ loops increasing by one on each subsequent iterate. Orbits are dense, and nearby initial conditions with different values of θ separate linearly with t. To evaluate H0 for this map from the definition (2), with the restraining region S being the entire torus, we note that

$$Df_{t',t} = \begin{pmatrix} 1 & 0 \\ t'-t & 1 \end{pmatrix}$$

for all ϕ and θ. For $t'-t \gg 1$, the singular values of $Df_{t',t}$ are approximately $t'-t$ and $(t'-t)^{-1}$. Thus, for large $t'-t$,

$$E_{t',t}(f,S) \approx t'-t,$$

and H0 = 0 (since $(t'-t)^{-1}\ln(t'-t) \to 0$ as $t' \to \infty$). Hence, this example is not chaotic according to our definition.
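The subexponential growth is easy to confirm numerically from the tangent map alone (a one-line check of the singular values, not a full expansion-entropy computation):

```python
import numpy as np

for T in (10, 100, 1000):
    s = np.linalg.svd(np.array([[1.0, 0.0], [float(T), 1.0]]), compute_uv=False)
    print(T, s)   # singular values ≈ T and 1/T, so G(Df^T) ≈ T: subexponential
```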

FIG. 5. Image C′ of a circle C given by ϕ = constant under the shear map (9).

Figure 6 shows the action of a horseshoe map in the plane on a unit square S, which we also take to be the restraining region. The step (a) → (b) represents a uniform horizontal compression and vertical stretching of the square. Let ρ > 2 be the ratio by which the vertical length of the square is stretched, and assume that bending deformations in the step (b) → (c) take place only in the shaded region. The fraction of the original square that remains in the square after one iterate is 2/ρ (see Fig. 6(d)), and after $t'-t$ iterates, the fraction is $(2/\rho)^{t'-t}$. Also, $G(Df_{t',t}) = \rho^{t'-t}$, yielding $E_{t',t}(f,S) = \rho^{t'-t}(2/\rho)^{t'-t} = 2^{t'-t}$, and H0 = ln 2. Thus, by our definition the horseshoe map is chaotic in S.

FIG. 6. A horseshoe map f that is linear for trajectories that remain in the restraining region S.

For the Hénon map

$$x_{t+1} = a - x_t^2 + b\,y_t, \qquad y_{t+1} = x_t,$$
with b = 0.3, the results of Devaney and Nitecki40 imply that for a ≥ 3.4, the map has a topological horseshoe, and for a ≤ 5.1, the nonwandering set is contained in the square −3 ≤ x, y ≤ 3, which we take to be the restraining region S. Fig. 7 shows the results of a numerical computation for a = 4.2 of lnÊT (see Secs. II C and III B) versus T, which agrees well with the value H0 = ln 2.
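The computation behind Fig. 7 can be sketched with the same hypothetical estimator used in the earlier examples (expansion_estimate from Sec. II C); the survivor count, which for a horseshoe decays roughly like $(2/\rho)^T$, is what limits how large T can be taken:

```python
import numpy as np

a, b = 4.2, 0.3
f  = lambda v: np.array([a - v[0]**2 + b * v[1], v[0]])    # assumed Henon form
Df = lambda v: np.array([[-2.0 * v[0], b], [1.0, 0.0]])
in_S = lambda v: bool(np.all(np.abs(v) <= 3.0))            # S: the square |x|,|y| <= 3

rng = np.random.default_rng(2)
x0 = rng.uniform(-3.0, 3.0, size=(100_000, 2))             # sprinkle in S
lnE = expansion_estimate(f, Df, x0, in_S, 10)
print(np.diff(lnE))   # slopes should approach H0 = ln 2 ≈ 0.693
```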

FIG. 7. Computation of $\ln\hat E_T$ versus T for the Hénon map with a = 4.2 and b = 0.3. For each T, we computed $\ln\hat E_T(f,S)$ for 100 different samples, with N = 100 000 randomly chosen initial conditions in each sample. The solid curve shows the mean of the 100 samples, and the error bars show their standard deviation. The dashed line has slope H0 = ln 2.

In this section, we define topological entropy, and discuss its relation to (and equivalence with, in appropriate circumstances) both expansion entropy and the related notion of volume growth.

The original definition of topological entropy, by Adler, Konheim, and McAndrew,41 was for a continuous map f on a compact topological space X. If X is a metric space, an equivalent definition of topological entropy due to Dinaburg42 and Bowen43 is as follows. (Equivalence to the original definition was proved by Bowen.44) For ϵ > 0, two points x and y in X are called (T, ϵ)-separated if the distance between their kth iterates satisfies $d(f^k(x), f^k(y)) > \epsilon$ for some 0 ≤ k < T. A finite set of points $P \subset X$ is said to (T, ϵ)-span X if there is no point in X that is (T, ϵ)-separated from every point in P. Let n(T, ϵ) be the minimum number of points needed to (T, ϵ)-span X, and let N(T, ϵ) be the maximum number of points in X that can be pairwise (T, ϵ)-separated. Let

$$h_n(f,\epsilon) = \limsup_{T\to\infty} \frac{\ln n(T,\epsilon)}{T},$$

and

$$h_N(f,\epsilon) = \limsup_{T\to\infty} \frac{\ln N(T,\epsilon)}{T}. \tag{10}$$

It is not hard to show that n(T, ϵ) ≤ N(T, ϵ) ≤ n(T, ϵ/2). This implies the analogous relation between hn and hN, which implies that they have the same limit as ϵ → 0. Define the topological entropy h of f on X by

$$h(f,X) = \lim_{\epsilon\to 0} h_N(f,\epsilon) = \lim_{\epsilon\to 0} h_n(f,\epsilon). \tag{11}$$

The notions of expansion entropy H0 and topological entropy h are both well-defined in the case when f is a smooth, autonomous system on a compact manifold M and the restraining region S is all of M. In this case, Sacksteder and Shub26 defined a quantity they called h1 that is equivalent to expansion entropy. Subsequently, Przytycki45 proved that h1 is an upper bound on h if f is a $C^{1+\gamma}$ diffeomorphism for γ > 0; this proof was extended to noninvertible maps by Newhouse.46 Though there are examples47 for which the two quantities differ, Kozlovski27 proved that h1 = h for $C^\infty$ maps. Thus, H0(f, M) = h(f, M) for a sufficiently smooth map f on a compact manifold M.

From our point of view, these results leave open consideration of important issues regarding nonautonomous systems and the role of restraining regions. For example, suppose now that J is a compact invariant set of an autonomous system f on a (not necessarily compact) manifold M. If J has volume zero, H0(f, J) is undefined, but we can define H0 for a neighborhood S of J that contains no other invariant sets. In this case, we conjecture that H0(f, S) = h(f, J) if f is $C^\infty$. More generally, when the restraining region S contains multiple invariant sets, we conjecture (consistent with Eq. (5)) that H0(f, S) is the maximum topological entropy of f on an invariant subset of S.

Our notion of expansion entropy is related to the notion of volume growth defined by Yomdin48 and Newhouse.46 Yomdin defines the exponential rate vd(f) of d-dimensional volume growth of a smooth map f on a compact manifold M, and proves that vd(f) ≤ h(f, M) if f is $C^\infty$. Newhouse defines the volume growth rate more generally for a neighborhood U of a compact invariant set $J \subset M$, and proves that h(f, J) is bounded above by the maximum over d of the d-dimensional volume growth rate on U. See also Ref. 49 for a discussion of these results.

Based on these results, Newhouse and Pignataro50 proposed and implemented algorithms for computing entropy of two-dimensional diffeomorphisms (including Poincaré sections of three-dimensional differential equations) by computing the exponential growth rate of the length of an iterated curve. Other algorithms51,52 compute the growth rate of the number of disconnected arcs resulting from the iteration of an initial line segment within a neighborhood of a two-dimensional chaotic saddle or repeller. Of course, these methods could be extended to higher dimensions by considering growth of surface areas, etc. Expansion entropy, by estimating volume growth locally, allows an analogous computation to be done without having to compute and measure multidimensional surfaces. It is analogous to the approach used by Jacobs et al.31 for two-dimensional maps.

Another approach to computing entropy is by symbolic dynamics: partition the state space into numbered subsets and estimate the exponential growth rate (as time increases) of the number of different sequences of subsets that can be visited by a finite-time trajectory. This approach can yield good estimates with well-chosen partitions, but inadequate partitions may lead to underestimation, and in some cases symbolic dynamics indicates positive entropy when the topological entropy is actually zero.53 
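To illustrate the symbolic approach on a case where the answer is known, the following sketch counts length-T symbol sequences for the map of Fig. 1 with the two-interval partition of Sec. III B (our own illustration; it is not the algorithm of Refs. 51–53):

```python
import numpy as np

def count_symbol_sequences(f, x0s, symbol, T):
    """Count distinct length-T symbol sequences realized by sample trajectories."""
    seqs = set()
    for x in x0s:
        s = []
        for _ in range(T):
            k = symbol(x)
            if k is None:               # trajectory left the partitioned region
                break
            s.append(k)
            x = f(x)
        else:
            seqs.add(tuple(s))
    return len(seqs)

f = lambda x: 3.0 * x if x <= 1/3 else 2.0 - 2.0 * x    # branches of the Fig. 1 map
symbol = lambda x: 0 if 0.0 <= x <= 1/3 else (1 if 0.5 <= x <= 1.0 else None)
x0s = np.random.default_rng(3).uniform(0.0, 1.0, 200_000)
for T in range(2, 9):
    n = count_symbol_sequences(f, x0s, symbol, T)
    print(T, np.log(n) / T)             # growth rate approaches ln 2
```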

We conclude this section with a brief discussion of the connection between the definitions of expansion entropy and topological entropy in the case of a smooth, autonomous system on a compact manifold M, with restraining region S = M. In  Appendix B, we argue that for sufficiently small ϵ > 0, the quantity ET,0(f, S) of Eq. (1) approximates Ñ(T,ϵ)/N(0,ϵ), where Ñ(T,ϵ) is the maximum number of trajectories that are a distance ϵ apart at either time 0 or at time T. Note that Ñ(T,ϵ) is a lower bound on N(T, ϵ), because the latter distinguishes between trajectories that are ϵ apart at some time between 0 and T; however, at least for hyperbolic systems the difference between Ñ and N should be inconsequential. Equations (10) and (11) first take a limit with respect to T and then ϵ. Normalizing by N(0, ϵ) does not change the limit

$$h(f,X) = \lim_{\epsilon\to 0}\, \limsup_{T\to\infty}\, \frac{1}{T} \ln \frac{N(T,\epsilon)}{N(0,\epsilon)}. \tag{12}$$

We have argued above that

$$E_{T,0}(f,S) \approx \frac{\tilde N(T,\epsilon)}{N(0,\epsilon)}.$$

Thus, by Eq. (2), the definition of H0 differs from the definition of h primarily because it uses $\tilde N(T,\epsilon) \le N(T,\epsilon)$, and because the limits with respect to T and ϵ are taken in the reverse order.

A concept often associated with chaos is that of sensitive dependence, the idea that the orbits from two nearby initial conditions can move far apart in the future. In the mathematical literature, the most common definition of “sensitive dependence” is as follows. (This definition is also sometimes called “weak sensitive dependence.”)

Definition: A continuous map, f: M → M, on the compact metric space M has sensitive dependence if there exists a ρ > 0 such that for each δ > 0 (no matter how small) and each x ∈ M, there is a y ∈ M that is within the distance δ of x and for which, at some later time t, $d(f^t(x), f^t(y)) > \rho$.

That is, no matter how close together the initial conditions x and y are, if we wait long enough, the orbits from these initial conditions will separate by more than some fixed value ρ. Notice that this definition of sensitive dependence does not say anything about the rate at which these orbits diverge from each other: this rate might, for example, be exponential (e.g., as for situations with a positive Lyapunov exponent), or linear (e.g., as for the example in Sec. III D).

Another often used concept assigns sensitive dependence to the dynamics on a compact invariant set (the space M is now not necessarily compact) as follows.

Definition: A continuous map f has sensitive dependence on a compact invariant set J of a metric space M if there exists a ρ > 0 such that for every δ > 0 (no matter how small) and every point x ∈ J, there is a point y ∈ J within a distance δ of x such that, at some later time t, $d(f^t(x), f^t(y)) > \rho$.

The following is a definition of chaos, based on that given by Devaney.54 

Definition of Devaney-chaos: A continuous map f: MM, with M a compact metric space, is chaotic if it satisfies the following three conditions.

  • f has sensitive dependence.

  • f has periodic orbits that are dense in M.

  • f has an orbit that is dense in M, i.e., there exists an initial condition x* such that for each y ∈ M and each δ > 0 (no matter how small), at some time t, the orbit from x* will be within the distance δ of y: $d(f^t(x^*), y) < \delta$.

This definition can be converted to define Devaney chaos for a compact invariant set, J = f(J), by replacing M in conditions (ii) and (iii) by J, and condition (i) by “f has sensitive dependence on J.”

It was pointed out by Banks et al.55 that conditions (ii) and (iii) of Devaney's definition imply his condition (i). Thus, condition (i) for Devaney-chaos can be omitted. A serious drawback of Devaney's definition of chaos is that it excludes significant cases that are sometimes considered and that most would regard as chaotic. For example, consider a map with quasi-periodic forcing

$$z_{t+1} = G(z_t, \theta_t), \qquad \theta_{t+1} = \theta_t + \omega \ (\mathrm{mod}\ 2\pi), \tag{13}$$

where ω/(2π) is an irrational number. Regarding this as a dynamical system with a state x = (z, θ), we see that, because of the quasi-periodic behavior of θ, there are no periodic orbits of this system. Hence, the system (13) fails condition (ii) for Devaney chaos. Thus, according to the definition of Devaney chaos, a system like (13) can never exhibit chaos. Yet quasi-periodically forced systems are of practical interest and can have attractors with a positive Lyapunov exponent, a situation generally thought of as chaotic. Another point to make in connection with example (13) is that it presents a problem for the Devaney definition of chaos even when G is independent of θ: in that case zt+1 = G(zt), on its own, might indeed satisfy the conditions for Devaney-chaos; however, by considering the state to be x = (z, θ) with θt quasi-periodic, the Devaney chaos condition (ii) is not satisfied, even though there is no change in the chaotic dynamics of z.

According to Banks et al., Devaney-chaos only requires satisfaction of the two conditions that there are a dense orbit and a dense set of periodic orbits. Robinson,38 on the other hand, notes that of the three conditions originally specified by Devaney, the requirement of a dense set of periodic orbits does not seem as “central to the idea of chaos” as the other two conditions (sensitive dependence and a dense orbit). Thus, he (and also, independently, Wiggins56) proposes the following definition.

Definition of Robinson-chaos: The same as Devaney-chaos except that condition (ii) is deleted.

This definition, by not requiring periodic orbits, has the benefit of potentially allowing more consistent treatment of forced systems, like (13). However, there is still, in our opinion, a drawback. This occurs, e.g., with reference to the shear map example (9) of Sec. III D, which was considered by Robinson38 (see also Ref. 39). As discussed in Sec. III D, orbits are dense and nearby points typically separate linearly with t. Thus, this example is Robinson chaotic. However, the two Lyapunov exponents are zero. While this example satisfies the Robinson-chaos definition, due to the slow, linear-in-time, separation of orbits, such dynamics has previously been classified as nonchaotic (see literature on so-called strange nonchaotic attractors13). Indeed, this linear-in-time separation of nearby orbits presents comparatively little prediction difficulty as compared to the exponential divergence emphasized by Lorenz.

Li and Yorke1 define the notion of a “scrambled set,” and the presence of a scrambled set can be taken as another definition of chaos. While this works well in the original context of one-dimensional maps considered by Li and Yorke, as we will see, it is not as appropriate for higher dimensional systems.

Definition of a scrambled set: For f: M → M with M a compact metric space, an uncountably infinite subset J of M is scrambled if, for every pair x, y ∈ J with x ≠ y,

$$\limsup_{t\to\infty} d\big(f^t(x), f^t(y)\big) > 0 \quad\text{and}\quad \liminf_{t\to\infty} d\big(f^t(x), f^t(y)\big) = 0.$$

Thus, by the second Li-Yorke condition, the orbits from x and y come arbitrarily close to each other an infinite number of times, while by the first Li-Yorke condition, the distance between the orbits from x and y also exceeds a fixed positive amount an infinite number of times. An attractive aspect of scrambling is that it excludes some cases that have sensitive dependence but are usually not considered chaotic. In particular, the shear map example (9) discussed above does not have a scrambled set because the θ-distance (or ϕ-distance, if the θ-distance is 0) between a pair of orbits remains constant, thus violating the second Li-Yorke condition for scrambling. Nevertheless, as with Robinson-chaos, the definition of chaos as having an uncountable scrambled set includes cases that are generally regarded as nonchaotic. One example is a two-dimensional flow with an attracting homoclinic orbit, considered by Robinson38 and Ott and Yorke57 (see Figure 1 in either paper); on a trajectory converging to the homoclinic orbit, a finite piece of the trajectory forms an uncountable scrambled set. Thus, the compact invariant set formed by the homoclinic orbit and its interior exhibits scrambling.

From the discussion above, we see that using notions related to the common definition of sensitive dependence presents problems when attempting to use them to give a generally applicable definition of chaos. On the other hand, another type of dynamical characterization, namely, that of Lyapunov exponents, seems better suited to defining chaos. Indeed, it can be quite useful to define a chaotic attractor using Lyapunov exponents. If one excludes certain cases of Milnor attractors (see below) and concentrates on a definition of an attractor of a map f as a bounded set A with a dense orbit such that there is an ϵ-neighborhood $A_\epsilon$ of A for which $\bigcap_{t=0}^{\infty} f^t(A_\epsilon) = A$, then it seems that a good definition of a chaotic attractor of the map f is simply an attractor that has a positive Lyapunov exponent.

Now, however, consider Milnor's definition58 of an attractor: A is an attractor for f: M → M if there is a positive Lebesgue measure set of points x ∈ M such that A is the forward time limit set of x. Figure 8 shows an example demonstrating that the definition of a chaotic attractor as an attractor with a positive Lyapunov exponent can be problematic if Milnor's definition of an attractor is used.

FIG. 8. A one-dimensional map for which the unstable fixed point at x = 0 is a Milnor attractor. Trajectories that reach [a, b] map to 0 two iterates later.

The function f(x) in Fig. 8 goes to zero at x = ±1 and remains zero for |x|>1. There is a positive measure of initial conditions x0 that go to x = 0 and stay there (e.g., if a < x0 < b, then x1 > 0 and x2 = x3 = x4 = ⋯ = 0). Thus, the unstable fixed point x = 0 is a Milnor attractor with a positive Lyapunov exponent (because df/dx > 1 at x = 0) yet it would be, we think, unacceptable to call the set x = 0 chaotic. This example is rather special and contrived. Thus, in practice, it is still very useful to think of a chaotic attractor as one with a positive Lyapunov exponent. However, a main concern of this paper is a definition of chaos that works fairly generally, including being applicable to both attractors and repellers. In the case of repellers, basing the existence of chaos on Lyapunov exponents presents a problem, since a repelling fixed point with a positive Lyapunov exponent could not reasonably be considered chaotic. On the other hand, the anomaly for the fixed-point repeller example and the Milnor example of Fig. 8 is removed if we define chaos by positive expansion entropy (see, e.g., Sec. III A).

In this paper, we have introduced a quantity, the expansion entropy (Eqs. (1) and (2)), and we have argued that expansion entropy provides a "good" definition of chaos in that it possesses several desirable properties. We have also compared this definition with other past definitions of chaos (Secs. IV and V). In particular, the expansion entropy H0 enjoys the properties of generality, simplicity, and computability discussed in Sec. I. One important feature of H0 is that it assesses the presence of chaos in any given bounded region S of state space, rather than on an invariant set. As such, it applies naturally in cases where the invariant sets are unknown or (e.g., in both deterministically and randomly forced systems) do not exist. Section III presents examples illustrating various issues and features of expansion entropy, perhaps most importantly its numerical computation. It is our hope that our paper will lead to the use of expansion entropy in applications and to further study of its properties.

The work of E. Ott was supported by the U.S. Army Research Office under Grant No. W911NF-12-1-0101. We thank S. Newhouse for pointing out earlier work related to expansion entropy, and J. Yorke and the reviewers for helpful comments.

As discussed in Sec. II D, we here generalize our definition of H0 to a definition of a q-order entropy

$$H_q(f,S) = \lim_{t'\to\infty} \frac{1}{t'-t}\,\frac{1}{1-q}\, \ln\!\left[\frac{\int_{S_{t',t}} G\big(Df_{t',t}(x)\big)^{1-q}\, d\mu(x)}{\mu(S_{t',t})^{\,q}\,\mu(S)^{\,1-q}}\right], \tag{A1}$$

where the argument of G is the same as in Eq. (1). Comparing Eqs. (1) and (2) with (A1), we see that (A1) reduces to (1) and (2) for q = 0. Furthermore, letting q → 1 and assuming that the q → 1 and t′ → ∞ limits can be interchanged, we obtain

$$H_1(f,S) = \lim_{t'\to\infty} \frac{1}{t'-t}\left[\frac{1}{\mu(S_{t',t})}\int_{S_{t',t}} \ln G\big(Df_{t',t}(x)\big)\, d\mu(x)\right] - \frac{1}{\tau_+}, \tag{A2}$$

where, as in Sec. II C,

$$\frac{1}{\tau_+} = -\lim_{t'\to\infty}\frac{1}{t'-t}\ln\frac{\mu(S_{t',t})}{\mu(S)}.$$

The quantity (A2) can be viewed as bearing a relationship to metric entropy that is analogous to the relationship between H0 and topological entropy. In the case where S contains an attractor, 1/τ+ = 0. In the case where S contains a repeller, we call τ+ the average lifetime of repeller orbits. In the case where S is a neighborhood of an invariant set with a "natural measure,"36 we can identify the first term in Eq. (A2) with the sum of the positive Lyapunov exponents λj, so that

$$H_1(f,S) = \sum_{\lambda_j > 0} \lambda_j - \frac{1}{\tau_+}, \tag{A3}$$

which agrees with the results for metric entropy of Kantz and Grassberger4,36 for chaotic repellers and of Pesin37 for 1/τ+ = 0.

We now obtain Hq(f, S) and Hq(f, S′) for the example in Sec. III B, where S is the large interval [−1, 1.5] containing both the attracting fixed point and the chaotic repeller, while S′ ⊂ S is the smaller restraining region [0, 1] containing only the chaotic repeller.

We begin by finding Hq(f, S′). As discussed in Sec. III B, the set $S'_{T,0}$ of initial conditions that remain in S′ from time 0 to time T consists of $2^T$ intervals of varying widths $3^{-k}2^{-(T-k)}$ (where k = 0, 1,…, T), on each of which $G(Df^T) = 3^k 2^{T-k}$. In addition, the number of such intervals with a given k is the binomial coefficient $C(T,k) = T!/[k!(T-k)!]$. Thus, the integral of $G^{1-q}$ that appears in Eq. (A1) is

$$\int_{S'_{T,0}} G\big(Df_{T,0}(x)\big)^{1-q}\, d\mu(x) = \sum_{k=0}^{T} C(T,k)\, 3^{-qk}\, 2^{-q(T-k)} = \big(3^{-q} + 2^{-q}\big)^T. \tag{A4}$$

Furthermore, the total length of $S'_{T,0}$ decreases by the ratio 5/6 upon each increase of T by one, so that

$$\mu(S'_{T,0}) = (5/6)^T. \tag{A5}$$

Using Eqs. (A4) and (A5) in Eq. (A1), we obtain

$$H_q(f,S') = \frac{1}{1-q}\,\ln\!\Big[(6/5)^q\,\big(3^{-q} + 2^{-q}\big)\Big]. \tag{A6}$$

Note that this quantity is positive for all q ≥ 0, and decreases monotonically with increasing q (dashed curve in Fig. 9). For q = 0, this agrees with our previous result of Sec. III B that $H_0(f,S') = \ln 2$, while taking the limit q → 1 yields $H_1(f,S') = (2/5)\ln(5/2) + (3/5)\ln(5/3) > 0$. As $q \to \infty$, $H_q(f,S') \to \ln(5/3)$, so that $H_0 = \ln 2 \ge H_q \ge H_\infty = \ln(5/3)$.
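A quick numerical evaluation of Eq. (A6) near q = 0, near q = 1, and at q = 1.5 reproduces the values quoted above and in the caption of Fig. 9:

```python
import numpy as np

q = np.array([1e-9, 1.0 - 1e-6, 1.5])
Hq = np.log((6.0 / 5.0)**q * (3.0**(-q) + 2.0**(-q))) / (1.0 - q)
print(Hq)  # ≈ [0.693, 0.673, 0.663]: ln 2, (2/5)ln(5/2)+(3/5)ln(5/3), value at q = 1.5
```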

FIG. 9. Hq(f, S′) (dashed curve) and Hq(f, S) (solid curve) versus q. The dashed curve decreases slightly from ln 2 ≈ 0.693 at q = 0 to about 0.663 at q = 1.5.

We now turn to the evaluation of the q-order expansion entropy for the larger restraining region S = [−1, 1.5]. The main difference from S′ is that $S_{T,0} = S$ for all T ≥ 0, so $\mu(S_{T,0}) = 2.5$, in contrast with Eq. (A5). To estimate the integral of $G(Df^T)^{1-q}$ over $S_{T,0}$, note first that by Eq. (A4)

$$\int_{S'_{T,0}} G\big(Df_{T,0}(x)\big)^{1-q}\, d\mu(x) = \big(3^{-q} + 2^{-q}\big)^T.$$

The contribution to the integral of $G(Df^T)^{1-q}$ from initial conditions in $S_{T,0}$ but not in $S'_{T,0}$ can be bounded above by $CT\max[(3^{-q}+2^{-q})^T, 1]$ for a constant C independent of T; the factor of T in the upper bound comes from considering trajectories that first leave S′ at time t, for each of the values t = 0, 1,…, T − 1. Also, since $G(Df^T) = 1$ for initial conditions in the interval near x = −1/2 on which |Df| < 1, such initial conditions contribute at least c > 0 to the integral, where c is the length of the contracting interval. Thus,

$$c + \big(3^{-q}+2^{-q}\big)^T \le \int_{S_{T,0}} G\big(Df_{T,0}(x)\big)^{1-q}\, d\mu(x) \le \big(3^{-q}+2^{-q}\big)^T + CT\max\big[\big(3^{-q}+2^{-q}\big)^T,\, 1\big].$$

From Eq. (A1), recalling that μ(ST,0) = μ(S), we conclude that for q ≠ 1

$$H_q(f,S) = \frac{1}{1-q}\,\ln\max\big[3^{-q} + 2^{-q},\, 1\big]. \tag{A7}$$

Note that there is a critical value 0 < qc < 1 for which $3^{-q_c} + 2^{-q_c} = 1$; then Hq(f, S) = 0 for q ≥ qc. In particular, H1(f, S) = 0, obtained by taking the limit q → 1.

Comparing Eqs. (A6) and (A7), we see that $H_0(f,S) = H_0(f,S') = \ln 2$ (as we argued in Sec. III B), but $H_q(f,S') > H_q(f,S)$ for q > 0; see Fig. 9. Note also that if the slopes 3 and 2 were increased, the critical value qc beyond which Hq(f, S) = 0 could be made arbitrarily close to 0. We conclude that Hq for q > 0 does not always detect chaos (i.e., Hq may be zero) in a restraining region containing an invariant set that is chaotic by all the definitions we reviewed in Sec. V, as well as by our definition H0 > 0.

Here, we justify the claim in Sec. IV that the integral ET,0(f, S) of Eq. (1) used to define expansion entropy approximates, for small ϵ, the ratio Ñ(T,ϵ)/N(0,ϵ) related to the definition of topological entropy. Below, when we say that two quantities have the “same order of magnitude,” we mean that their ratio is bounded above and below by positive constants that are independent of ϵ and T.

Assume that ϵ > 0 is small enough that the remainder term in the first-order Taylor expansion of $f_{T,0}$ is much smaller than ϵ for points within ϵ of each other, i.e., that

$$\big|f_{T,0}(y) - f_{T,0}(x) - Df_{T,0}(x)\,(y-x)\big| \ll \epsilon$$

for $x, y \in S$ with $|y-x| \le \epsilon$. Cover S with a grid of N0 boxes whose diameters are ϵ; then N0 has the same order of magnitude as the maximum number N(0, ϵ) of ϵ-separated points in S. Each box B is contained in a ball of radius ϵ, and contains a ball whose radius has the same order of magnitude as ϵ. Notice also that μ(B) ≈ μ(S)/N0 for small ϵ. Let xB be the center of B, and let σ1 ≥ σ2 ≥ ⋯ ≥ σn be the singular values of $Df_{T,0}(x_B)$. Then, the image of B under $f_{T,0}$ contains and is contained in ellipsoids whose semiaxes have the same order of magnitude as σ1ϵ, σ2ϵ,…, σnϵ. Let d be the largest index for which σd > 1. Then the maximum number of ϵ-separated points in the image of B has the same order of magnitude as $\sigma_1\sigma_2\cdots\sigma_d = G(Df_{T,0}(x_B))$. Summing over all B, the maximum number $\tilde N(T,\epsilon)$ of trajectories that are ϵ-separated at either time 0 or at time T has the same order of magnitude as

$$\sum_B G\big(Df_{T,0}(x_B)\big) \approx \frac{N_0}{\mu(S)} \int_S G\big(Df_{T,0}(x)\big)\, d\mu(x) = N_0\, E_{T,0}(f,S).$$

Since we assumed that S is invariant, ST,0 is the same as S. Comparing with Eq. (1), the discussion above constitutes an outline of a proof that Ñ(T,ϵ)/N(0,ϵ) has the same order of magnitude as ET,0(f, S). (In fact, the same is true when S is not invariant, if we define Ñ to count only trajectories that remain in S between times 0 and T.)

1. T. Y. Li and J. A. Yorke, "Period three implies chaos," Am. Math. Mon. 82, 985–992 (1975).
2. E. Lorenz, "Deterministic nonperiodic flow," J. Atmos. Sci. 20, 130–141 (1963).
3. C. Grebogi, E. Ott, and J. A. Yorke, "Crises, sudden changes in chaotic attractors and chaotic transients," Physica D 7, 181–200 (1983).
4. H. Kantz and P. Grassberger, "Repellers, semi-attractors and long-lived chaotic transients," Physica D 17, 75–86 (1985).
5. Y.-C. Lai and T. Tél, Transient Chaos: Complex Dynamics on Finite Time Scales, Applied Mathematical Sciences Vol. 173 (Springer, New York, 2011).
6. S. W. McDonald, C. Grebogi, E. Ott, and J. A. Yorke, "Fractal basin boundaries," Physica D 17, 125–153 (1985).
7. For a review see E. Ott and T. Tél, "Chaotic scattering: An introduction," Chaos 3, 417–426 (1993).
8. J. D. Skufca, J. A. Yorke, and B. Eckhardt, "Edge of chaos in parallel shear flow," Phys. Rev. Lett. 96, 174101 (2006).
9. J. C. Sommerer, H. C. Ku, and H. E. Gilrath, "Experimental evidence for chaotic scattering in a fluid wake," Phys. Rev. Lett. 77, 5055–5058 (1996).
10. C. Jaffe, S. D. Ross, M. L. Lo, J. Marsden, D. Farrelly, and T. Uzer, "Statistical theory of asteroid escape rates," Phys. Rev. Lett. 89, 011101 (2002).
11. For example, R. E. Gillian and G. S. Ezra, "Transport and turnstiles in multidimensional Hamiltonian mappings for unimolecular fragmentation: Application to van der Waals predissociation," J. Chem. Phys. 94, 2648–2668 (1991).
12. For example, M. L. Du and J. Delos, "Effect of closed classical orbits on quantum spectra: Ionization of atoms in a magnetic field. I. Physical picture and calculations," Phys. Rev. A 38, 1896–1912 (1988).
13. U. Feudel, S. Kuznetsov, and A. Pikovsky, Strange Nonchaotic Attractors (World Scientific, Singapore, 2006).
14. F. Ledrappier and L.-S. Young, "Dimension formula for random transformations," Commun. Math. Phys. 117, 529–548 (1988).
15. F. Ledrappier and L.-S. Young, "Entropy formula for random transformations," Probab. Theory Relat. Fields 80, 217–240 (1988).
16. L. Yu, E. Ott, and Q. Chen, "Transition to chaos for random dynamical systems," Phys. Rev. Lett. 65, 2935–2938 (1990).
17. I. I. Rypina, F. J. Beron-Vera, M. G. Brown, H. Kocak, M. J. Olascoaga, and I. A. Udovydchenkov, "On the Lagrangian dynamics of atmospheric zonal jets and the permeability of the stratospheric polar vortex," J. Atmos. Sci. 64, 3595–3610 (2007).
18. J. F. Lindner, V. Kohar, B. Kia, M. Hippke, J. G. Learned, and W. L. Ditto, "Strange non-chaotic stars," Phys. Rev. Lett. 114, 054101 (2015).
19. P. Moskalik, "Multimode oscillations in classical Cepheids and RR Lyrae-type stars," Proc. Int. Astron. Union 9(S301), 249–256 (2013).
20. F. Varosi, T. M. Antonsen, and E. Ott, "The spectrum of fractal dimensions of passively convected scalar gradients in chaotic fluid flows," Phys. Fluids A 3, 1017–1028 (1991).
21. J. C. Sommerer and E. Ott, "Particles floating on a moving fluid: A dynamically comprehensible physical fractal," Science 259, 335–339 (1993).
22. G. Haller and G. Yuan, "Lagrangian coherent structures in three-dimensional fluid flows," Physica D 147, 352–370 (2000).
23. G. A. Voth, G. Haller, and J. P. Gollub, "Experimental measurements of stretching fields in fluid mixing," Phys. Rev. Lett. 88, 254501 (2002).
24. M. Mathur, G. Haller, T. Peacock, J. E. Ruppert-Felsot, and H. L. Swinney, "Uncovering the Lagrangian skeleton of turbulence," Phys. Rev. Lett. 98, 144502 (2007).
25. For example, A. S. Pikovsky, M. G. Rosenblum, G. V. Osipov, and J. Kurths, "Phase synchronization of chaotic oscillators by external driving," Physica D 104, 219–238 (1997).
26. R. Sacksteder and S. Shub, "Entropy of a differentiable map," Adv. Math. 28, 181–185 (1978).
27. O. S. Kozlovski, "An integral formula for topological entropy of C∞ maps," Erg. Theory Dyn. Syst. 18, 405–424 (1998).
28. If M is a Riemannian manifold, then it has a canonical volume that is equivalent to Lebesgue measure in appropriate local coordinates. Furthermore, when we treat the derivative of a map as a matrix, or use (small) distances in M, we assume the use of "normal coordinates," which exist at least for C2 Riemannian manifolds.
29. Though we define expansion entropy only for a particular realization of a stochastic system, we expect that under appropriate hypotheses it has the same value for almost every realization. At a minimum, this should be the case when S is invariant for all realizations of a system forced by an IID process, because in this case the expansion entropy depends only on the tail of the IID process. A suitable setting for the rigorous study of expansion entropy in random systems would be that of Ledrappier and Young.14,15
30. If f is a C∞ map on a compact manifold, and S is the entire manifold, then the limit has been proved to exist.27 More generally, H0 could be defined as the lim sup, as in other definitions of entropy (see Sec. IV).
31. J. Jacobs, E. Ott, and B. R. Hunt, "Calculating topological entropy for transient chaos with an application to communicating with chaos," Phys. Rev. E 57, 6577–6588 (1998).
32. J. Balatoni and A. Rényi, "Remarks on entropy," Pub. Math. Inst. Hung. Acad. Sci. 1, 9–37 (1956). Translated in Selected Papers of A. Rényi (Akadémiai Kiadó, Budapest, 1976), Vol. 1, p. 558.
33. H. G. E. Hentschel and I. Procaccia, "The infinite number of generalized dimensions of fractals and strange attractors," Physica D 8, 435–444 (1983).
34. P. Grassberger and I. Procaccia, "Dimensions and entropies of strange attractors from a fluctuating dynamics approach," Physica D 13, 34–54 (1984).
35. P. Grassberger and I. Procaccia, "Estimation of Kolmogorov entropy from a chaotic signal," Phys. Rev. A 28, 2591–2593 (1983).
36. B. R. Hunt, E. Ott, and J. A. Yorke, "Fractal dimensions of chaotic saddles of dynamical systems," Phys. Rev. E 54, 4819–4823 (1996).
37. Ya. B. Pesin, "Lyapunov characteristic exponents and ergodic properties of smooth dynamical systems with an invariant measure," Dokl. Akad. Nauk SSSR 226, 774–777 (1976) [Sov. Math. Dokl. 17, 196–199 (1976)].
38. C. Robinson, "What is a chaotic attractor?," Qual. Theory Dyn. Syst. 7, 227–236 (2008).
39. B. R. Hunt and E. Ott, "Fractal properties of robust strange non-chaotic attractors," Phys. Rev. Lett. 87, 254101 (2001).
40. R. L. Devaney and Z. Nitecki, "Shift automorphisms in the Hénon mapping," Commun. Math. Phys. 67, 137–146 (1979).
41. R. L. Adler, A. G. Konheim, and M. H. McAndrew, "Topological entropy," Trans. Am. Math. Soc. 114, 309–319 (1965).
42. E. I. Dinaburg, "The relation between topological entropy and metric entropy," Dokl. Akad. Nauk SSSR 190, 19–22 (1970) [Sov. Math. Dokl. 11, 13–16 (1970)].
43. R. Bowen, "Entropy for group endomorphisms and homogeneous spaces," Trans. Am. Math. Soc. 153, 401–414 (1971).
44. R. Bowen, "Periodic points and measures for axiom A diffeomorphisms," Trans. Am. Math. Soc. 154, 377–397 (1971).
45. F. Przytycki, "An upper estimation for topological entropy of diffeomorphisms," Inv. Math. 59, 205–213 (1980).
46. S. Newhouse, "Entropy and volume," Erg. Theory Dyn. Syst. 8, 283–299 (1988).
47. M. Misiurewicz and W. Szlenk, "Entropy of piecewise monotone mappings," Stud. Math. 67, 45–63 (1980), see https://eudml.org/doc/218304.
48. Y. Yomdin, "Volume growth and entropy," Isr. J. Math. 57, 285–300 (1987).
49. M. Gromov, "Entropy, homology and semialgebraic geometry," Sém. Bourbaki 28, 225–240 (1985–6), see https://eudml.org/doc/110064.
50. S. Newhouse and T. Pignataro, "On the estimation of topological entropy," J. Stat. Phys. 72, 1331–1351 (1993).
51. Z. Kovács and T. Tél, "Thermodynamics of irregular scattering," Phys. Rev. Lett. 64, 1617–1620 (1990).
52. Q. Chen, E. Ott, and L. P. Hurd, "Calculating topological entropies of chaotic dynamical systems," Phys. Lett. A 156, 48–52 (1991).
53. G. Froyland, O. Junge, and G. Ochs, "Rigorous computation of topological entropy with respect to a finite partition," Physica D 154, 68–84 (2001); see in particular Remark B.2.
54. R. L. Devaney, An Introduction to Chaotic Dynamical Systems (Addison-Wesley, New York and Reading, 1989).
55. J. Banks, J. Brooks, G. Cairns, G. Davis, and P. Stacy, "On Devaney's definition of chaos," Am. Math. Mon. 99, 332–334 (1992).
56. S. Wiggins, Chaotic Transport in Dynamical Systems, Interdisciplinary Applied Mathematics Series Vol. 2 (Springer-Verlag, Berlin, 1992).
57. W. Ott and J. Yorke, "When Lyapunov exponents fail to exist," Phys. Rev. E 78, 056203 (2008).
58. J. Milnor, "On the concept of an attractor," Commun. Math. Phys. 99, 177–195 (1985).