Observability analysis can determine which recorded variables of a given system are best suited for discriminating its different states. Quantifying observability requires knowledge of the equations governing the dynamics, which are often unknown when experimental data are considered. Consequently, we propose an approach for numerically assessing observability using Delay Differential Analysis (DDA). Given a time series, DDA uses a delay differential equation to approximate the measured data. The lower the least-squares error between the predicted and recorded data, the higher the observability. We thus rank the variables of several chaotic systems according to their corresponding least-squares error to assess observability. The performance of our approach is evaluated by comparison with the ranking provided by the symbolic observability coefficients, as well as with two other data-based approaches using reservoir computing and singular value decomposition of the reconstructed space. We also investigate the robustness of our approach against noise contamination.

A popular approach for studying nonlinear dynamical systems from a recorded time series is to reconstruct the original system using delay or derivative coordinates. It is known that the choice of the measured variable can affect the quality of the attractor reconstruction. Unlike linear systems, whose state space is either observable or not from the measurements, nonlinear systems are more or less observable depending on the state space location. Moreover, the observability strongly depends on the measured variables. It is, therefore, useful to assess the observability provided by a variable using a real number within the unit interval between two extreme values: 0 for nonobservable and 1 for full observability. Analytical techniques for determining observability require knowledge of the underlying equations, which are typically unknown when an experimental system is investigated. This is often the case for social and biological networks. It is thus of primary importance to assess observability directly from recorded time series. In this paper, we show how Delay Differential Analysis (DDA) can assess observability from time series. The performance of this approach is evaluated by comparing our results obtained for simulated chaotic systems with the symbolic observability coefficients obtained from the governing equations.

## I. INTRODUCTION

Studying dynamical systems from real-world data can be difficult as they are often high-dimensional and nonlinear; moreover, it is typically not possible to measure all the variables spanning the associated state space.^{1–7} In theory, it is possible to reconstruct the non-measured variables by using delay or differential embeddings from a single measurement.^{8} However, when performing state-space reconstruction, the dimension required to obtain a diffeomorphic equivalence with the original state space—required for correctly distinguishing the different states of the system—may depend on the measured variable(s).^{9} Indeed, a $d$-dimensional system can be optimally reconstructed from a given variable with a $d$-dimensional embedding, but a higher-dimensional space may be required when another variable is measured. For instance, the Rössler attractor is easily reproduced with a three-dimensional global model from variable $y$, but a four-dimensional model^{9} or a quite sophisticated procedure^{10} is needed when variable $z$ is measured. It was shown that data analysis often (if not always) depends on the observability provided by the measured variable.^{11–13}

In the 1960s, the concepts of observability and its mathematical dual, controllability, were introduced by Rudolf Kálmán in control theory for linear systems.^{14} These concepts were extended to nonlinear systems in the 1970s from the perspective of differential geometry.^{15} Observability assesses whether the different states of the original system can be distinguished from the measured variable. A system is said to be fully observable from some measurement if the rank of the observability matrix is equal to the dimension of the system.^{16,17} With such an approach, the answer is binary: either fully observable or nonobservable. This is sufficient for linear systems because the observability matrix does not depend on the location in the state space.

This is not true for nonlinear systems, and observability coefficients were introduced to overcome this all-or-nothing answer.^{9,18} Observability coefficients are real numbers within the unit interval between two extreme values: 0 for nonobservable and 1 for fully observable. These coefficients are estimated at every point of the trajectory produced by the governing equations in the state space and then averaged along that trajectory.^{9,18} It is also possible to construct symbolic observability coefficients from the Jacobian matrix of the system studied.^{19,20} In this way, observability takes a graded value according to the probability with which the attractor intersects the singular observability manifold,^{21} that is, the subset of the original space for which the determinant of the observability matrix is zero. The great advantage of these coefficients is that they allow comparing the observability provided by variables from different systems, and they can be computed for high-dimensional systems.^{7} It is then possible to rank the variables according to the observability of the original state space they provide. The dependency of the observability on the measured variable stems from the way variables are coupled in the original system.^{22} Symmetries are often sources of difficulty for assessing observability, particularly because reconstructing the original symmetry is not possible from a single variable if the symmetry differs from an inversion.^{23,24}

The weakness of these analytical approaches is that the governing equations must be known, so observability cannot be assessed from experimental data alone. A first attempt to overcome this was based on a singular value decomposition of matrices built from local data.^{25} Results were encouraging, but slight discrepancies with analytical results were noticed. Another approach, based on a model built directly from the data using reservoir computing, was also proposed.^{26} In both cases, some discrepancies with the symbolic observability coefficients were observed. It therefore remains challenging to develop a reliable technique that always matches theoretical results. In this work, we propose a measure for assessing observability from recorded data by using DDA, and we compare our results—together with those obtained, when available in the literature, with the two techniques discussed above—with the symbolic observability coefficients computed for several well-studied chaotic systems. Here, DDA is based on a delay differential equation that approximates the dynamics underlying the measured time series. Contrary to what is done with global modeling^{27} or reservoir computing,^{28} there is no need for an accurate model. Previous work showed that a rough model with a very limited number of terms (typically three) is sufficient to detect dynamical changes or classify various dynamical regimes.^{29–31}

The remainder of this paper is organized as follows. Section II A briefly introduces the computation of symbolic observability coefficients. Section II B introduces DDA and explains how it can be used for ranking variables according to the observability of the state space they provide. Section III introduces the investigated chaotic systems and provides the corresponding symbolic observability coefficients. Section IV is the main section of this paper: it discusses the performance of DDA in assessing the observability of the chaotic systems and compares it with that of the two other data-based techniques. Section V provides some conclusions.

## II. THEORETICAL BACKGROUND

### A. Symbolic observability coefficients

Let us consider a $d$-dimensional dynamical system represented by the state vector $\mathbf{x} \in \mathbb{R}^d$ whose components are given by

$$\dot{x}_i = f_i(\mathbf{x}), \qquad i = 1, 2, \ldots, d, \tag{1}$$

where $f_i$ is the $i$th component of the vector field $\boldsymbol{f}$. Let us introduce the measurement function $h(\mathbf{x}): \mathbb{R}^d \mapsto \mathbb{R}^m$ of $m$ variables chosen among the $d$ ones spanning the original state space. It is then required to reconstruct a space $\mathbb{R}^{d_r}$ ($d_r \geq d$) from the $m$ measured variables. One has to choose $d_r - m$ derivatives of these $m$ measured variables to get a $d_r$-dimensional vector $\mathbf{X}$ spanning the reconstructed space. Commonly, observability is assessed by using $d_r = d$.^{16,17} In the present work, we only work with scalar time series ($m = 1$). The change of coordinates between the original state space and the reconstructed one is thus the map $\Phi: \mathbf{x} \mapsto \mathbf{X}$.

When the derivative coordinates are used for spanning the reconstructed space, the map can be analytically computed.^{32} The observability of a system from a variable is defined as follows.^{16,33} For the sake of simplicity, let us limit ourselves to the case $m=1$ (a generalization to the other cases is straightforward).

The dynamical system (1) is said to be *state observable* at time $t_f$ if every initial state $\mathbf{x}(0)$ can be uniquely determined from the knowledge of the vector $s(\tau)$, $0 \leq \tau \leq t_f$.

To test whether a system is observable, one constructs the observability matrix,^{15} defined as the Jacobian matrix of the Lie derivatives of $h(\mathbf{x})$. Differentiating $h(\mathbf{x})$ yields

$$\dot{h}(\mathbf{x}) = \frac{\partial h(\mathbf{x})}{\partial \mathbf{x}}\,\dot{\mathbf{x}} = \frac{\partial h(\mathbf{x})}{\partial \mathbf{x}}\,\boldsymbol{f}(\mathbf{x}) = L_f h(\mathbf{x}),$$

where $L_f h(\mathbf{x})$ is the Lie derivative of $h(\mathbf{x})$ along the vector field $\boldsymbol{f}$. The $k$th-order Lie derivative is given by

$$L_f^k h(\mathbf{x}) = \frac{\partial L_f^{k-1} h(\mathbf{x})}{\partial \mathbf{x}}\,\boldsymbol{f}(\mathbf{x}),$$

the zero-order Lie derivative being the measured variable itself, $L_f^0 h(\mathbf{x}) = h(\mathbf{x})$. Therefore, the observability matrix $\mathcal{O} \in \mathbb{R}^{d \times d}$ is written as

$$\mathcal{O} = \begin{pmatrix} \mathrm{d}h(\mathbf{x}) \\ \mathrm{d}L_f h(\mathbf{x}) \\ \vdots \\ \mathrm{d}L_f^{d-1} h(\mathbf{x}) \end{pmatrix},$$

where $\mathrm{d} \equiv \partial/\partial \mathbf{x}$.

The dynamical system (1) is said to be state observable if and only if the observability matrix has full rank, that is, $\operatorname{rank}(\mathcal{O}) = d$.
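For concreteness, this rank test can be carried out with a computer algebra system. The following sketch (our own illustration in Python with `sympy`, not code from the paper) builds the observability matrix of the Rössler 76 system of Table I when $y$ is measured and verifies that it has full rank.

```python
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c')
states = sp.Matrix([x, y, z])
f = sp.Matrix([-y - z, x + a*y, b + z*(x - c)])  # Rossler 76 vector field
h = y                                            # measurement function h(x) = y

# Successive Lie derivatives L_f^k h for k = 0, 1, 2
lie = [h]
for _ in range(2):
    lie.append((sp.Matrix([lie[-1]]).jacobian(states) * f)[0])

# Observability matrix: Jacobian of the Lie derivatives
O = sp.Matrix(lie).jacobian(states)
print(O.rank(), sp.det(O))  # rank 3; det simplifies to 1: fully observable from y
```

Here the determinant is a nonzero constant, so the system is fully observable from $y$ everywhere in state space, in agreement with the SOC $\eta = 1.0$ reported in Table I.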

The observability matrix $\mathcal{O}$ is equal to the Jacobian matrix of the change of coordinates $\Phi : \mathbf{x} \to \mathbf{X}$ when derivative coordinates are used.^{32} In this approach, the observability is either full or zero. The term *structural* was introduced for approaches whose results do not depend on parameter values.^{34} Computing the rank of the observability matrix is independent of parameter values and is, consequently, an example of structural observability.^{35} Computing observability with graphs^{1,34,36} is also a structural approach. We term observability assessed from recorded data—necessarily dependent on the parameter values used for simulating the trajectory of the system—*dynamical* observability.^{35} This type of approach returns a real number within the unit interval: variables can be ranked between the two extreme cases, 1.0 for full and 0.0 for null observability. There is a third type, *symbolic* observability, which does not depend on parameter values but still allows ranking the variables.^{20} None of these types of observability is sensitive to symmetry-related problems, because observability is a local property while symmetry is a global one. Consequently, symmetry may degrade the assessment of observability.^{24}

The procedure to compute symbolic observability coefficients is implemented in three steps as follows.^{7,20} First, the Jacobian matrix $J$ of the system (1), composed of elements $J_{ij}$, is transformed into the symbolic Jacobian matrix $\tilde{J}$ by replacing each constant element $J_{ij}$ with $1$, each polynomial element $J_{ij}$ with $\bar{1}$, and each rational element $J_{ij}$ with $\bar{\bar{1}}$ when the $j$th variable is present in the denominator, or with $\bar{1}$ otherwise. Rational terms in the governing equations (1) are distinguished from polynomial terms since the former reduce observability more strongly than the latter.^{20}
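This first step can be sketched as follows. The helper below is our own hypothetical implementation (not code from the paper), using `sympy` and the string tags `'0'`, `'1'`, `'1b'`, and `'1bb'` to stand for $0$, $1$, $\bar{1}$, and $\bar{\bar{1}}$; it classifies each entry of the Jacobian of the Rössler 76 system.

```python
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c')
states = [x, y, z]
f = sp.Matrix([-y - z, x + a*y, b + z*(x - c)])   # Rossler 76
J = f.jacobian(sp.Matrix(states))

def tag(entry, var):
    """Classify a Jacobian entry: constant, polynomial, or rational in var."""
    if entry == 0:
        return '0'
    num, den = sp.fraction(sp.together(entry))
    if den.has(var):                 # rational with the jth variable in denominator
        return '1bb'
    if any(entry.has(s) for s in states):
        return '1b'                  # state-dependent (polynomial) element
    return '1'                       # constant element

Jsym = [[tag(J[i, j], states[j]) for j in range(3)] for i in range(3)]
print(Jsym)
```

For the Rössler 76 system, only the third row contains state-dependent (polynomial) entries, which is what penalizes the observability from $z$.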

Then, the symbolic observability matrix $\tilde{\mathcal{O}}$ is constructed. The first row of $\tilde{\mathcal{O}}$ is defined by the derivative of the measurement function $\mathrm{d}h(\mathbf{x})$; that is, when the $i$th variable is measured, $\tilde{\mathcal{O}}_{1j} = 1$ if $j = i$ and $0$ otherwise. The second row is the $i$th row of $\tilde{J}$. The $k$th row is obtained as follows. First, each element of the $i$th row of $\tilde{J}$ is multiplied by the corresponding $i$th component of the vector $v = (\tilde{\mathcal{O}}_{\ell 1}, \ldots, \tilde{\mathcal{O}}_{\ell d})^{\mathrm{T}}$, where $\ell = k-1$ refers to the $(k-1)$th row of the symbolic observability matrix $\tilde{\mathcal{O}}$. The rules to perform the symbolic product $\tilde{J}_{ij} \otimes v_i$ are such that^{20}

Second, the matrix $\tilde{J}'$ is reduced to a row in which each element $\tilde{\mathcal{O}}_{kj} = \sum_i \tilde{J}'_{ij}$ according to the addition law,^{20}

The last step is the computation of the symbolic observability coefficients. The determinant of $\tilde{\mathcal{O}}$ is computed according to the symbolic product rule defined in (4) and expressed as products and sums of the symbolic terms $1$, $\bar{1}$, and $\bar{\bar{1}}$, whose numbers of occurrences are $N_1$, $N_{\bar{1}}$, and $N_{\bar{\bar{1}}}$, respectively. It is convenient to impose that if $N_{\bar{1}} = 0$ and $N_{\bar{\bar{1}}} \neq 0$, then $N_{\bar{1}} = N_{\bar{\bar{1}}}$. The symbolic observability coefficient is thus defined as

with $D = N_1 + N_{\bar{1}} + N_{\bar{\bar{1}}}$. This coefficient lies in the unit interval, with $\eta = 1$ for a variable providing full observability of the original state space. Observability is said to be good when $\eta \geq 0.75$.^{37}

### B. Delay Differential Analysis

Let us assume that a time series $X_1$ is recorded from a $d$-dimensional system. From this time series, it is possible to obtain a global model reproducing the underlying dynamics. There are typically two main approaches, working with either derivative or delay coordinates.^{27,38} When derivatives are used, it is possible to construct a $d$-dimensional differential model,

where $X_i$ is the $(i-1)$th derivative of the measured variable $X_1$.^{39} The function $F$ can be numerically estimated by using a least-squares technique with a structure selection.^{40,41} $F$ can be polynomial^{39,41} or rational.^{42,43} This model consists of $d$ ordinary differential equations whose variables are the $d$ successive derivatives of $X_1$: it works in a differentiable embedding.

Second, it is possible to construct a model whose equations have the form of a difference equation,

where $\phi_i$ is a monomial of delay coordinates $X_{\tau_j}(k) = X(k - \tau_j)$, with $\tau_j = n\,\delta t$ ($n \in \mathbb{N}^+$) being a time delay expressed in terms of the sampling time $\delta t$ with which the scalar time series $\{X_1(k)\}$ is recorded; $k$ is the discrete time. Such a model has an autoregressive form, and typically the number $N$ of terms is between 10 and 20. The space in which this model works is thus spanned by delay coordinates: its dimension is very often significantly larger than the dimension $d$, the embedding dimension,^{44} or even the Takens criterion.^{8} An optimal form of the difference equation (8) is developed as a nonlinear autoregressive-moving average (NARMA) model.^{45}

Recently, a third type of model was investigated under the name of *reservoir computing*.^{46} This approach considers an oversized model with a functional structure based on a network whose nodes are characterized by some simple function. For instance, the Lorenz attractor was accurately reproduced with an Erdős–Rényi network of 300 nodes with a mean degree $\bar{\delta} = 6$, each node being governed by a difference equation.^{28} The model so obtained corresponds to an accurate global model of the dynamics. Notably, this model was constructed from measurements of all the variables of the Lorenz system. The main advantage of such a large model is its flexibility, that is, its ability to capture various dynamical regimes, but it has the disadvantage that the space in which it works is not clearly defined and has a very large dimension ($d_r > 300$ in the work discussed above).

The DDA approach uses a kind of mixed model between the differential model (7) and the difference equation (8), the left-hand side of the latter being replaced with that of the former. It is, therefore, based on the delay differential equation,

where $X = X_1$ designates the measured variable and the $X_{\tau_j}$ are delay coordinates. The purpose is not to construct a global model reproducing the dynamics accurately but only an approximate model for detecting dynamical changes (nonstationarity) or classifying different dynamical regimes.^{29,30,47} We, therefore, use a rough model with very few terms ($N \leq 3$). Such sparsity prevents overparametrization, that is, spurious dynamics induced by overly complex models.^{48} Indeed, delay differential equations are known to produce complex dynamics with only two terms.^{49,50} Many characteristics of the measured dynamics can be captured with two or three terms and appropriate time delays.^{31} Based on previous works,^{29–31,47} it is assumed that these characteristics are sufficient to distinguish different dynamical regimes. The DDA model (9) is a differential equation whose state space is spanned by the delay coordinates $X_{\tau_j}$.

Model (9) has two sets of parameters: the fixed parameters $\tau_j$ (set during the structure selection) and the free parameters $a_i$ (estimated independently for each data window). The structure of model (9), as well as the delays, is determined for each time series. Then, the free coefficients $a_i$ are determined for each window of the recorded time series. The data $X_1$ in each window are normalized to zero mean and unit variance to remove amplitude information before the free parameters $a_i$ are estimated by a singular value decomposition (SVD). The least-squares error

between the derivatives returned by the DDA model and the derivatives computed from the measured time series quantifies the ability of the model to capture the underlying dynamics. It is known that there is a relationship between model quality and observability.^{9,11,24} The signal derivative $\dot{X}_1$ is computed using a five-point center derivative.^{51} In this work, the structure selection [i.e., choosing the model form of Eq. (9) and the fixed parameters $\tau_j$] was performed via an exhaustive search over all possible three-term models (three monomials: $N = 3$) with two delays such that $\tau_j \in [m+1; 60]\,\delta t$, where $m = 5$ is the number of points used for estimating the derivative and $\delta t$ is the sampling time. The function $F$ is made of three monomials selected from the possible candidates,

Monomials and delays are selected in an exhaustive search over all possible model forms (44 of them) and delay combinations under the restrictions specified above. Each model is thus characterized by the set of fixed parameters $(\tau_1, \tau_2)$, the corresponding monomials $\phi_i$, and the free parameters $a_i$, which are estimated for each time window of the measured data. In this work, the time window is the entire time series. The structure providing the model with the lowest $\rho_X$ is retained, and observability is assessed according to the model error $\rho_X$.

As with reservoir computing,^{26} the error $\rho_X$ between the model and the measured data provides a measure of how well the system dynamics may be reconstructed from these data. Indeed, to obtain a reliable deterministic model, it is necessary to distinguish every different state of the system in order to retrieve the underlying causality. Since the error is used as a relative measure, it is only necessary to have a sufficiently flexible functional form for the model, as observed with reservoir computing or with a delay differential equation. Consequently, the smaller the error $\rho_X$, the higher the observability provided by the variable $X$. This follows from previous works showing that the complexity of the model to approximate is correlated with observability: the better the observability provided by the measured variable, the simpler the model to approximate.^{11,24} The error $\rho_X$ from the best DDA model is computed with increasing noise amplitude. For each three-dimensional system and each signal-to-noise ratio (no noise, 20, 10, and 0 dB, where 0 dB indicates that the variance of the noise matches the variance of the signal), the error $\rho_X$ was computed over several hundred pseudoperiods for each time series.
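The noise contamination can be sketched as follows, using the stated convention that 0 dB corresponds to equal noise and signal variances (the paper does not specify the noise generation; Gaussian white noise is assumed here):

```python
import numpy as np

def add_noise(x, snr_db, rng=None):
    """Add Gaussian white noise at the prescribed signal-to-noise ratio (dB)."""
    rng = np.random.default_rng(rng)
    noise_var = np.var(x) / 10 ** (snr_db / 10)   # 0 dB -> equal variances
    return x + rng.normal(0.0, np.sqrt(noise_var), size=len(x))

x = np.sin(np.linspace(0, 20, 2000))
for snr in (20, 10, 0):
    xn = add_noise(x, snr, rng=0)
    print(snr, np.var(xn - x) / np.var(x))  # variance ratios near 0.01, 0.1, 1
```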

## III. DYNAMICAL SYSTEMS AND OBSERVABILITY COEFFICIENTS

### A. Low-dimensional systems

The governing equations of the systems investigated here are reported in Table I, together with the symbolic observability coefficients (SOCs) and the model error $\rho_X$ for each variable of every system. Parameter values are reported in Table II.

| System | Equations | SOC | Error |
|---|---|---|---|
| Rössler 76^{52} | $\dot{x} = -y - z$ | 0.84 | 0.037 |
| | $\dot{y} = x + ay$ | 1.00 | 0.022 |
| | $\dot{z} = b + z(x - c)$ | 0.56 | 0.106 |
| Rössler 77^{53} | $\dot{x} = -ax - y(1 - x^2)$ | 0.56 | 0.0009 |
| | $\dot{y} = \mu(bx + y - cz)$ | 0.84 | 0.0005 |
| | $\dot{z} = \mu(x + cy - dz)$ | 0.68 | 0.0007 |
| Lorenz 63^{54} | $\dot{x} = \sigma(y - x)$ | 0.78 | 0.02 |
| | $\dot{y} = Rx - y - xz$ | 0.36 | 0.039 |
| | $\dot{z} = -bz + xy$ | 0.36 | 0.071 |
| Lorenz 84^{55} | $\dot{x} = -y^2 - z^2 - ax + aF$ | 0.36 | 0.061 |
| | $\dot{y} = xy - bxz - y + G$ | 0.36 | 0.205 |
| | $\dot{z} = bxy + xz - z$ | 0.36 | 0.204 |
| Cord^{56} | $\dot{x} = -y - z - ax + aF$ | 0.68 | 0.108 |
| | $\dot{y} = xy - bxz - y + G$ | 0.36 | 0.198 |
| | $\dot{z} = bxy + xz - z$ | 0.36 | 0.232 |
| HR^{57} | $\dot{x} = y - ax^3 + bx^2 + I - z$ | 0.68 | 0.025 |
| | $\dot{y} = c - dx^2 - y$ | 0.56 | 0.023 |
| | $\dot{z} = r[s(x - x_R) - z]$ | 1.00 | 0.002 |
| Fisher^{58} | $\dot{x} = y$ | 1.00 | 0.003 |
| | $\dot{y} = -ax - by - z$ | 0.84 | 0.004 |
| | $\dot{z} = b + x - \lvert x \rvert$ | 0.56 | 0.027 |
| Chua^{59} | $\dot{x} = \alpha(-x + y - f(x))$ | 1.00 | 0.05 |
| | $\dot{y} = x - y + z$ | 0.84 | 0.068 |
| | $\dot{z} = -\beta y$ | 1.00 | 0.066 |
| Duffing^{60,61} | $\dot{x} = y$ | 1.00 | 0.022 |
| | $\dot{y} = -\mu y + x - x^3 + u$ | 0.86 | 0.08 |
| | $\dot{u} = v$ | 0.00 | 0.00 |
| | $\dot{v} = -\omega^2 u$ | 0.00 | 0.00 |
| Rössler 79^{62} | $\dot{x} = -y - z$ | 0.75 | 0.005 |
| | $\dot{y} = x + ay + w$ | 0.83 | 0.001 |
| | $\dot{z} = b + xz$ | 0.44 | 0.079 |
| | $\dot{w} = -cz + dw$ | 0.63 | 0.006 |
| Hénon–Heiles^{63} | $\dot{x} = u$ | 0.64 | 0.0005 |
| | $\dot{y} = v$ | 0.64 | 0.0004 |
| | $\dot{u} = -x - 2xy$ | 0.44 | 0.0009 |
| | $\dot{v} = -y - y^2 - x^2$ | 0.44 | 0.0008 |


| System | Parameter values | | | |
|---|---|---|---|---|
| Rössler 76 | $a = 0.52$ | $b = 2$ | $c = 4$ | |
| Rössler 77 | $a = 0.03$ | $b = 0.3$ | $c = 2$ | $d = 0.5$ |
| | $\mu = 0.1$ | | | |
| Lorenz 63 | $\sigma = 10$ | $b = 8/3$ | $R = 28$ | |
| Lorenz 84 | $a = 0.28$ | $b = 4$ | $F = 8$ | $G = 1$ |
| Cord | $a = 0.28$ | $b = 4$ | $F = 8$ | $G = 1$ |
| HR | $a = 1$ | $b = 3$ | $c = 1$ | $d = 5$ |
| | $I = 3.29$ | $x_R = 8/5$ | $r = 0.003$ | $s = 4$ |
| Fisher | $a = 0.3$ | $b = 0.097$ | | |
| Chua | $\alpha = 9$ | $\beta = 100/7$ | $a = -8/7$ | |
| | $b = -5/7$ | | | |
| Duffing | $\mu = 0.3$ | $\omega = 1.2$ | | |
| | $x_0 = 1$ | $y_0 = 0$ | | |
| | $u_0 = 0.5$ | $v_0 = 0$ | | |
| Rössler 79 | $a = 0.25$ | $b = 3$ | $c = 0.5$ | $d = 0.05$ |


The Rössler 76,^{52} Lorenz 84,^{55} Cord,^{56} Hindmarsh–Rose^{57} (HR), and Fisher^{58} systems have no symmetry. The Hindmarsh–Rose system is known to be problematic when variable $x$ or $z$ is measured, for two different reasons.^{64} When variable $z$ is measured, the observability matrix

$$\mathcal{O}_z = \begin{pmatrix} 0 & 0 & 1 \\ rs & 0 & -r \\ rs\,x\chi - r^2 s & rs & r^2 - rs \end{pmatrix},$$

where $\chi = 2b - 3ax$, becomes singular when $r$ is too small ($\det \mathcal{O}_z = r^2 s^2$): the observability can be null for $r = 0$ and full for $r \neq 0$ (this is also true for $s$, but $s$ is commonly significantly different from 0). When variable $x$ is measured, although the observability matrix $\mathcal{O}_x$ is never singular ($\det \mathcal{O}_x = r - 1$; $r \neq 1$), the plane projection of the differential embedding induced by variable $x$ does not reveal the chaotic nature of the underlying dynamics, contrary to what is clearly provided by variable $z$ (Fig. 1). As discussed by Aguirre *et al.*,^{64} the observability matrix

where $\mathcal{O}_{32} = x\chi - 1$ and $\mathcal{O}_{33} = r - x\chi$,

has a determinant $\det \mathcal{O}_x$ whose polynomial nature is canceled by the contributions of $\mathcal{O}_{32}$ and $\mathcal{O}_{33}$; this cancellation is not structurally stable. Any perturbation in one of these two elements would lead to a determinant vanishing on a subset of the state space. This is not detected by the symbolic observability coefficients. If we keep the polynomial nature of elements $\mathcal{O}_{32}$ and $\mathcal{O}_{33}$, the symbolic observability matrix becomes

The corresponding corrected symbolic observability coefficient is thus $\eta'_{x^3} = 0.68$. The corrected ranking of the variables is, therefore, $z \rhd x \rhd y$. This ranking will be used in the subsequent analysis.
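The quoted determinant $\det \mathcal{O}_z = r^2 s^2$ can be verified symbolically; the following sketch (our own check with `sympy`, using the Hindmarsh–Rose equations of Table I) reproduces it:

```python
import sympy as sp

# Verify Det O_z = r^2 s^2 for the Hindmarsh-Rose system when z is measured.
x, y, z, a, b, c, d, I, r, s, xR = sp.symbols('x y z a b c d I r s x_R')
states = sp.Matrix([x, y, z])
f = sp.Matrix([y - a*x**3 + b*x**2 + I - z,   # x-dot
               c - d*x**2 - y,                # y-dot
               r*(s*(x - xR) - z)])           # z-dot

# Lie derivatives of h(x) = z, then the observability matrix O_z
lie = [z]
for _ in range(2):
    lie.append((sp.Matrix([lie[-1]]).jacobian(states) * f)[0])
Oz = sp.Matrix(lie).jacobian(states)
print(sp.factor(sp.det(Oz)))   # r**2*s**2
```

As stated in the text, this determinant vanishes only for $r = 0$ (or $s = 0$), independently of the state, which is why small $r$ degrades the observability from $z$.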

The other systems have symmetry properties as follows. The Lorenz 63 system^{54} is equivariant under a rotation symmetry $R_z$ around the $z$-axis.^{65,66} Variables $x$ and $y$ are mapped into their opposites ($-x$ and $-y$, respectively), while variable $z$ is invariant under the rotation symmetry. At least two variables must be measured to correctly reconstruct the rotation symmetry.^{23} The Rössler 77 system,^{53} the Chua circuit,^{59} and the driven Duffing system^{60,61} are equivariant under an inversion symmetry. Such a symmetry can be recovered from a single variable and, consequently, should not blur the observability analysis. The driven Duffing system is in fact four-dimensional, a conservative harmonic oscillator driving the dissipative Duffing oscillator: it is thus a semi-dissipative (or semi-conservative) system.^{61} When variable $u$ (or $v$) is recorded, a periodic orbit is obtained, while variable $x$ (or $y$) provides a chaotic state portrait. Since a chaotic driving signal necessarily implies a chaotic response, it is obvious that $u$ drives $x$ and not the opposite. It can, therefore, be concluded, without further analysis, that the system is not observable from $u$ (or $v$). Thus, we only have to determine the observability from variables $x$ and $y$. The Fisher system and the Chua circuit have a piecewise nonlinearity. They will be useful to test whether DDA is robust against discontinuous nonlinearities.

All these systems but three—the Lorenz 84, the Cord, and the Hénon–Heiles^{63} systems—have at least one variable providing good observability ($\eta > 0.75$) of the original state space. The Hénon–Heiles system is conservative, and one may expect the observability problem to be more delicate since the invariant domain of its state space has a dimension close to 3, and not 2 as for all the other, strongly dissipative, systems.

### B. A higher-dimensional system

The Lorenz 63 system results from a Galerkin expansion of the Navier–Stokes equations for Rayleigh–Bénard convection.^{67} It is also possible to obtain a higher-dimensional expansion by retaining more Fourier components. One of them leads to the 9D Lorenz system,^{68}

where

This 9D Lorenz system is equivariant.^{69} Depending on the $R$-values, the attractor produced may be asymmetric [Fig. 2(a)] or symmetric [Fig. 2(b)]. The symbolic observability coefficients are

leading to

Notice that every variable offers extremely poor observability of the original state space. It was shown that at least five variables need to be measured to obtain good observability ($\eta > 0.75$) of the original state space.^{7} Moreover, for a sufficiently large $R$-value ($R = 45$), the behavior is hyperchaotic. One of the characteristics of this highly developed behavior is that there are two different time scales. We will, therefore, investigate whether the observability assessed with DDA depends on parameter values, that is, on bifurcations affecting the symmetry properties (order-4 or order-2 asymmetric chaos, symmetric chaos, and hyperchaos).

## IV. DDA RANKING

The structures of the best DDA models $F_X$ without noise are reported in Table V of the Appendix, along with the corresponding time delays retained for identifying the free parameters. As examples, $\rho_X$ for some systems with increasing noise is shown in Fig. 3. For the noise-free case, $\rho_X$ is reported in Table I.

The rankings of variables according to decreasing symbolic observability coefficients (SOCs), increasing $\rho_X$ for DDA, and, when available in the literature, decreasing observability for reservoir computing (RC) and singular value decomposition observability (SVDO) are summarized in Table III for all low-dimensional systems ($d \leq 4$). The results for the Rössler 76, Rössler 77, Fisher, driven Duffing, and Rössler 79^{62} systems are in perfect agreement with the SOC. The discontinuity of the Fisher system does not perturb the analysis. The hyperchaotic nature of the Rössler 79 system was not problematic for correctly assessing observability.

| System | SOC | DDA | RC | SVDO |
|---|---|---|---|---|
| Rössler 76 | $y \rhd x \rhd z$ | $\bullet$ | $x \rhd y \rhd z$ | $\bullet$ |
| Rössler 77 | $y \rhd z \rhd x$ | $\bullet$ | … | … |
| Lorenz 63 | $x \rhd y = z$ | $\circ$ | $y \rhd x \rhd z$ | $\circ$ |
| Lorenz 84 | $x = y = z$ | $\circ$ | … | $\bullet$ |
| Cord | $x \rhd y = z$ | $\circ$ | … | $\circ$ |
| Hindmarsh–Rose | $z \rhd x \rhd y$ | $\circ$ | … | $\bullet$ |
| Fisher | $x \rhd y \rhd z$ | $\bullet$ | … | … |
| Chua | $x = z \rhd y$ | $\circ$ | $\bullet$ | $\circ$ |
| Duffing | $x \rhd y \rhd u = v$ | $\bullet$ | … | … |
| Rössler 79 | $y \rhd x \rhd w \rhd z$ | $\bullet$ | $x \rhd y \rhd z \rhd w$ | $x \rhd y \rhd w \rhd z$ |
| Hénon–Heiles | $x = y \rhd u = v$ | $\circ$ | $\circ$ | … |

The Lorenz 63, Lorenz 84, Cord, and Hindmarsh–Rose systems show close agreement between DDA and SOC. For the Lorenz 63 system, variable $x$ was correctly detected as providing the best observability, but variable $z$ was found to offer worse observability than variable $y$, a feature that is not predicted by the SOC due to a problem inherent to the symmetry involved. For the Lorenz 84 system, all variables have equally low SOCs; with DDA, however, variable $x$ shows greater observability. For the Cord system, while no single variable provides good observability of the original state space, DDA correctly ranks $x$ as providing the best observability. However, DDA ranks $z$ as providing worse observability than variable $y$, while the SOC ranks them as equivalent. For the Hindmarsh–Rose system, variable $z$ provides full observability and is associated with the lowest $\rho_X$. However, there is some discrepancy between DDA and SOC since, as assessed with DDA, $y$ provides slightly higher observability than $x$. Results for the Hénon–Heiles system are largely consistent with the SOC: variables $x$ and $y$ are more observable than $u$ and $v$; however, $y$ ($v$) is found to be more observable than $x$ ($u$) instead of showing equivalent observability.

For the Chua circuit, the variable $x$, which enters the piecewise nonlinearity, provides full observability, and DDA correctly ranks $x$ as the most observable. DDA also ranks variable $y$ as having the worst observability, in agreement with the SOC. However, variable $z$ is found to have only slightly better observability than $y$, whereas it should be equivalent to $x$.

When compared to the two other data-based techniques, DDA performs better than RC for the Rössler 76, Rössler 79, and Lorenz 63 systems but not for the Chua circuit. Compared to SVDO, the DDA approach provides similar results for all systems investigated by these two techniques. DDA outperforms SVDO for the hyperchaotic Rössler 79 system by correctly identifying variable $y$ as providing the best observability, a feature missed by SVDO, whereas the SVDO approach outperforms DDA for the Lorenz 84 and Hindmarsh–Rose systems.

For most of the systems, these results are robust against noise contamination down to a signal-to-noise ratio of about 10 dB: below this ratio, results can be blurred and observability can no longer be reliably assessed using DDA. A similar robustness was observed with SVDO; it was not investigated with RC.
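
In robustness tests of this kind, the contamination is typically white Gaussian noise scaled to a prescribed signal-to-noise ratio. A minimal construction (a standard recipe, not necessarily the exact procedure used here) is:

```python
import numpy as np

def add_noise(x, snr_db, rng=None):
    """Add white Gaussian noise to a 1-D signal x so that the
    signal-to-noise ratio equals snr_db (in dB), where
    SNR_dB = 10*log10(var(signal)/var(noise))."""
    rng = np.random.default_rng() if rng is None else rng
    noise_power = np.var(x) / 10.0 ** (snr_db / 10.0)
    return x + rng.normal(0.0, np.sqrt(noise_power), size=len(x))
```

Sweeping `snr_db` downward (e.g., from 40 dB to 0 dB) and recomputing the DDA errors then reveals the ratio below which the variable ranking is no longer stable.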

Note that another interesting data-based technique for assessing observability was proposed by Parlitz *et al*.^{70} It was only tested with the Rössler 76 system (and the Hénon map, not investigated here). It would be interesting to further investigate its performance, but this is beyond the scope of this paper.

The results for the 9D Lorenz system are not so clear. The first reason is that this system is nearly unobservable from a single variable: the SOCs are all close to 0 because the symbolic Jacobian matrix of the 9D Lorenz system (16) is nearly saturated with nonlinear elements, namely,

which illustrates that most of the couplings between variables are nonlinear. Considering that only the observability provided by a single variable is investigated here, and that the SOCs are all close to 0, one may conclude that the 9D Lorenz system is not observable from a single variable.

Results provided by DDA are shown in Fig. 2, where it is seen that the variables cannot be easily ranked, particularly when $R$ is increased. Results are summarized in Table IV as follows. For each $R$-value, the rankings of the variables are reported (from 1 for the variable offering the best observability to 9 for the one providing the poorest) and compared to the ranking provided by the SOC. The results vary with the $R$-value but do not follow a clear trend. Variable $x_5$, with a null observability as assessed by the SOC (and analytically), is found to provide the best observability as assessed by DDA. Nevertheless, this is in agreement with the successful three-dimensional global model obtained from this variable for $R = 14.22$;^{68} that is, at least for this $R$-value, the dynamics can be correctly reconstructed for recovering the underlying determinism.

| | $R$ | $x_1$ | $x_2$ | $x_3$ | $x_4$ | $x_5$ | $x_6$ | $x_7$ | $x_8$ | $x_9$ |
|---|---|---|---|---|---|---|---|---|---|---|
| SOC | … | 1 | 2 | 1 | 2 | 3 | 3 | 1 | 1 | 3 |
| DDA | 14.22 | 7 | 4 | 1 | 5 | 2 | 8 | 3 | 6 | 9 |
| | 14.30 | 3 | 7 | 2 | 6 | 1 | 8 | 5 | 4 | 9 |
| | 15.10 | 5 | 2 | 6 | 3 | 1 | 9 | 8 | 7 | 4 |
| | 45.00 | 8 | 5 | 7 | 6 | 1 | 4 | 3 | 2 | 9 |

It should be pointed out that looking for full observability (i.e., being able to “reconstruct” each of the non-measured variables) is not the same as looking for an embedding, especially for large $d$-dimensional systems producing an attractor that can be embedded within a space whose dimension $d_R$ is lower than the dimension $d$ of the original state space. Full observability ensures the existence of an embedding, but the converse is not necessarily true. Here, DDA selects the variable that provides the best reconstructed space. When compared with the results provided by the SOC with multivariate measurements,^{7} variables $x_2$, $x_4$, $x_5$, and $x_6$ are always among the six variables selected for providing full observability. DDA returns three of them, $x_2$, $x_4$, and $x_5$, as providing the best observability (Table IV). Variable $x_6$, the only one that is invariant under the symmetry of this system, is identified as providing poor observability. Once again, symmetry induces difficulties for assessing observability.

## V. CONCLUSION

The ability to infer the state of a system from a scalar output depends on which system variable is measured. We have introduced a numerical approach that uses the error between a DDA model and measured data to assess the observability provided by the measured variables in several chaotic systems. The smaller the model error, the better the observability provided. We compared these measures with symbolic observability coefficients, which are determined directly from the system’s equations. Overall, our measure reliably ranks variables according to the observability they provide of the original state space. The largest discrepancy was obtained for a large-dimensional (9D Lorenz) system. The assessment of observability is quite robust against noise contamination in the majority of the systems considered here.

There are two situations in which our approach may face complications. The first is well known: inconsistencies in assessing observability arise for systems with symmetry properties, particularly for variables left invariant under the symmetry. The second is also typical: as the dimension of the system increases, the observability of the state space provided by a single variable becomes very poor, and assessing observability is delicate. Our approach is thus very reliable for low-dimensional systems without symmetry properties, even with signal-to-noise ratios as commonly encountered in experiments.

As with most other techniques, variables of different systems cannot be compared to each other. This is a common limitation in assessing observability, overcome only by analytical approaches such as explicitly computing the observability matrix or using the symbolic observability coefficients. Some kind of normalization should be considered so that, for instance, the error $\rho_y$ of variable $y$ of the Rössler 76 system (which provides full observability) is smaller than that of variable $y$ of the Rössler 77 system. This problem is more challenging than it may appear. It was, for instance, never solved for the observability coefficients computed along a trajectory using a relationship extracted from the system’s equations, or for SVD applied to a reconstructed space.

## AUTHORS’ CONTRIBUTIONS

C.E.G. and C. Lainscsek contributed equally to this work.

## ACKNOWLEDGMENTS

C. Letellier wishes to thank Irene Sendiña-Nadal for her assistance in computing the symbolic observability coefficients for the 9D Lorenz system. This work was supported by the National Institutes of Health (NIH)/NIBIB (Grant No. R01EB026899-01) and by the National Science Foundation Graduate Research Fellowship (Grant No. DGE-1650112).

## DATA AVAILABILITY

The data that support the findings of this study are available from the corresponding author upon reasonable request.

### APPENDIX: FUNCTIONAL FORMS OF DDA MODELS

The functional forms of the DDA models for each variable of the systems investigated are shown in Table V.

| System | | $a_1$ | $a_2$ | $a_3$ | $\tau_1$ | $\tau_2$ |
|---|---|---|---|---|---|---|
| Rössler 76 | $F_x$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_z$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| Rössler 77 | $F_x$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $7\,\delta_t$ | $6\,\delta_t$ |
| | $F_z$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| Lorenz 63 | $F_x$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_1}^3$ | $x_{\tau_1} x_{\tau_2}^2$ | $6\,\delta_t$ | $19\,\delta_t$ |
| | $F_z$ | $x_{\tau_1}$ | $x_{\tau_1}^2$ | $x_{\tau_2}^2$ | $18\,\delta_t$ | $6\,\delta_t$ |
| Lorenz 84 | $F_x$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $7\,\delta_t$ | $6\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_1}^3$ | $x_{\tau_1} x_{\tau_2}^2$ | $6\,\delta_t$ | $28\,\delta_t$ |
| | $F_z$ | $x_{\tau_1}$ | $x_{\tau_1} x_{\tau_2}$ | $x_{\tau_1} x_{\tau_2}^2$ | $6\,\delta_t$ | $60\,\delta_t$ |
| Cord | $F_x$ | $x_{\tau_1}$ | $x_{\tau_1}^3$ | $x_{\tau_1}^2 x_{\tau_2}$ | $7\,\delta_t$ | $51\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_1} x_{\tau_2}$ | $x_{\tau_1} x_{\tau_2}^2$ | $6\,\delta_t$ | $18\,\delta_t$ |
| | $F_z$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $6\,\delta_t$ | $7\,\delta_t$ |
| Hindmarsh–Rose | $F_x$ | $x_{\tau_1}$ | $x_{\tau_1} x_{\tau_2}$ | $x_{\tau_2}^3$ | $6\,\delta_t$ | $9\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}^2$ | $x_{\tau_1}^2 x_{\tau_2}$ | $x_{\tau_2}^3$ | $25\,\delta_t$ | $6\,\delta_t$ |
| | $F_z$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $6\,\delta_t$ | $7\,\delta_t$ |
| Fisher | $F_x$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $7\,\delta_t$ | $6\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_z$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $7\,\delta_t$ | $6\,\delta_t$ |
| Chua | $F_x$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $7\,\delta_t$ | $6\,\delta_t$ |
| | $F_z$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $13\,\delta_t$ | $32\,\delta_t$ |
| Duffing | $F_x$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_u$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $38\,\delta_t$ | $37\,\delta_t$ |
| | $F_v$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $38\,\delta_t$ | $37\,\delta_t$ |
| 9D Lorenz ($R = 14.22$) | $F_{1,3,5}$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $7\,\delta_t$ | $6\,\delta_t$ |
| | $F_{4,7,8}$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $7\,\delta_t$ | $6\,\delta_t$ |
| | $F_{6,9}$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_{2}$ | $x_{\tau_1}$ | $x_{\tau_1} x_{\tau_2}$ | $x_{\tau_2}^2$ | $47\,\delta_t$ | $14\,\delta_t$ |
| 9D Lorenz ($R = 14.30$) | $F_{1-5,7,8}$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_{6,9}$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $7\,\delta_t$ | $6\,\delta_t$ |
| 9D Lorenz ($R = 15.10$) | $F_{1,3,7,8}$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $7\,\delta_t$ | $6\,\delta_t$ |
| | $F_{2,4,5,9}$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_{6}$ | $x_{\tau_1}$ | $x_{\tau_1}^2$ | $x_{\tau_2}^2$ | $25\,\delta_t$ | $6\,\delta_t$ |
| 9D Lorenz ($R = 45$) | $F_{1}$ | $x_{\tau_1}$ | $x_{\tau_1} x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $11\,\delta_t$ |
| | $F_{2}$ | $x_{\tau_1}$ | $x_{\tau_1}^3$ | $x_{\tau_1} x_{\tau_2}^2$ | $6\,\delta_t$ | $57\,\delta_t$ |
| | $F_{3}$ | $x_{\tau_1}$ | $x_{\tau_1}^3$ | $x_{\tau_2}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_{4}$ | $x_{\tau_1}$ | $x_{\tau_1}^3$ | $x_{\tau_1} x_{\tau_2}^2$ | $6\,\delta_t$ | $44\,\delta_t$ |
| | $F_{5}$ | $x_{\tau_1}^3$ | $x_{\tau_1}^2 x_{\tau_2}$ | $x_{\tau_2}^3$ | $10\,\delta_t$ | $6\,\delta_t$ |
| | $F_{6}$ | $x_{\tau_1}^2$ | $x_{\tau_1} x_{\tau_2}$ | $x_{\tau_1}^3$ | $10\,\delta_t$ | $23\,\delta_t$ |
| | $F_{7}$ | $x_{\tau_1}$ | $x_{\tau_1} x_{\tau_2}$ | $x_{\tau_2}^3$ | $6\,\delta_t$ | $9\,\delta_t$ |
| | $F_{8}$ | $x_{\tau_1}$ | $x_{\tau_1} x_{\tau_2}$ | $x_{\tau_2}^3$ | $6\,\delta_t$ | $10\,\delta_t$ |
| | $F_{9}$ | $x_{\tau_1}$ | $x_{\tau_1}^2 x_{\tau_2}$ | $x_{\tau_2}^3$ | $6\,\delta_t$ | $9\,\delta_t$ |
| Rössler 79 | $F_x$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $7\,\delta_t$ | $6\,\delta_t$ |
| | $F_z$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_w$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $7\,\delta_t$ | $6\,\delta_t$ |
| Hénon–Heiles | $F_x$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_y$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^2$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_u$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $6\,\delta_t$ | $7\,\delta_t$ |
| | $F_v$ | $x_{\tau_1}$ | $x_{\tau_2}$ | $x_{\tau_1}^3$ | $7\,\delta_t$ | $6\,\delta_t$ |

## REFERENCES

F. Takens, “Detecting strange attractors in turbulence,” in *Dynamical Systems and Turbulence, Warwick 1980*, Lecture Notes in Mathematics Vol. 898 (Springer, 1981), pp. 366–381.

T. Kailath, *Linear Systems*, Information and System Sciences Series (Prentice-Hall, 1980).