We introduce and test a general machine-learning-based technique for the inference of short term causal dependence between state variables of an unknown dynamical system from time-series measurements of its state variables. Our technique leverages a machine learning process trained for short time prediction: the basic idea is to use the trained machine to estimate the elements of the Jacobian matrix of the dynamical flow along an orbit. The type of machine learning that we employ is reservoir computing. We present numerical tests on link inference for a network of interacting dynamical nodes. It is seen that dynamical noise can greatly enhance the effectiveness of our technique, while observational noise degrades it. We believe that the competition between these two opposing types of noise will be the key factor determining the success of causal inference in many of the most important application situations.

The general problem of determining causal dependencies in an unknown time evolving system from time-series observations is of great interest in many fields. Examples include inferring neuronal connections from spiking data, deducing causal dependencies between genes from expression data, discovering long spatial range influences in climate variations, etc. Previous work has often tackled such problems by consideration of correlations, prediction impact, or information transfer metrics. Here, we propose a new method that leverages the potential ability of machine learning to perform predictive and interpretive tasks and uses this to extract information on causal dependence. We test our method on model complex systems consisting of networks of many interconnected dynamical units. These tests show that machine learning offers a unique and potentially highly effective approach to the general problem of causal inference.

The core goal of science is often described as the generalization from observations to understanding,1 commonly embodied in predictive theories. Related to this is the desire to use measured data to infer necessary properties and the structure of any description consistent with a given class of observations. On the other hand, it has recently emerged that machine learning (ML) is capable of effectively performing a wide range of interpretive and predictive tasks on data.2 Thus, it is natural to ask whether machine learning might be useful for the common scientific goal of discovering structural properties of a system from data generated by that system. In this paper, we consider an important, widely applicable class of such tasks. Specifically, we consider the use of machine learning to address two goals.

  • Goal (i): Determine whether or not a state variable of a time evolving system causally influences another state variable.

  • Goal (ii): Determine the “strength” of such causal influences.

In the terminology of ML, Goal (i) is referred to as “classification ML,” and Goal (ii) is referred to as “regression ML.” These goals have previously been of great interest in many applications (e.g., economics,3 neuroscience,4 genomics,5 climate,6 etc.). Many past approaches have, for example, been based upon the concepts of prediction impact,3,4 correlation,7–9 information transfer,10,11 and direct physical perturbations.12,13 Other previous works have investigated the inference of network links from time series of node states assuming some prior knowledge of the form of the network system and using that knowledge in a fitting procedure to determine links.9,14–17 In addition, some recent papers address network link inference from data via techniques based on delay coordinate embedding,15 random forest methods,18 network embedding algorithms,19 and feature ranking.20 In this paper, we introduce a technique that makes use of an ML training process for predictive and interpretive tasks and attempts to use it to extract information about causal dependencies. In particular, we use a type of machine learning called reservoir computing, an efficient method of time-series analysis that has previously been used successfully for tasks such as prediction of chaotic dynamics21–23 and speech recognition,24,25 to mention a few. In our case, a “reservoir” dynamical system is trained such that it becomes synchronized to a training time series data set from the unknown system of interest. The trained reservoir system is then able to provide an estimate of the response to perturbations in different parts of the original system, thus yielding information about causal dependencies in the actual system. We will show that this ML-based technique offers a unique and potentially highly effective approach to determining causal dependencies. Furthermore, the presence of dynamical noise (either naturally present or intentionally injected) can very greatly improve the ability to infer causality,14,15 while, in contrast, observational noise degrades inference.

We begin by considering the very general case of an evolving, deterministic, dynamical system whose state at time $t$ is represented by the M-dimensional vector $z(t) = [z_1(t), z_2(t), \ldots, z_M(t)]^T$, where $z(t)$ evolves via a system of M differential equations, $dz(t)/dt = F(z(t))$, and has reached a statistically steady dynamical state (perhaps chaotic). In this context, we frame the issue of causality as follows: Will a perturbation at time $t$ applied to a component $z_i$ of the state vector $z(t)$ [i.e., $z_i(t) \to z_i(t) + \delta z_i(t)$] lead to a subsequent change at a slightly later time, $t + \tau$, of another scalar component $z_j$ [i.e., $z_j(t+\tau) \to z_j(t+\tau) + \delta z_j(t+\tau)$]; and how can we quantify the strength of this dependence? This formulation might suggest comparison of the evolutions of $z(t)$ that result from two identical systems, one with, and the other without, the application of the perturbation. However, we will be interested in the typical situation in which such a comparison is not possible, and one can only passively observe (measure) the state $z(t)$ of the (single) system of interest. Aside from knowing that the dynamics of interest evolves according to a system of the form $dz/dt = F(z)$, we assume little or no additional knowledge of the system and that the available information is a limited-duration past time series of the state evolution $z(t)$. Nevertheless, we still desire to deduce causal dependencies, where the meaning of causal is in terms of responses to perturbations as defined above. Since, as we will see, accomplishment of this task is not always possible in principle, our approach will be to first propose a heuristic solution and then numerically test its validity. The main message of this paper is that our proposed procedure can be extremely effective for a very large class of important problems. We will also delineate situations where our procedure is expected to fail. We emphasize that, as our method is conceptually based on consideration of responses to perturbations, in our opinion, it provides a more direct test of what is commonly of interest when determining causality than do tests based on prediction impact, correlation, or entropy metrics.

Furthermore, although the setting motivating our procedure is for deterministic systems, dz/dt=F(z), we will also investigate performance of our procedure in the presence of both dynamical noise [i.e., noise added to the state evolution equation, dz/dt=F(z)] and observational noise [i.e., noise added to observations of z(t) used as training data for the machine learning]. Both types of noise are, in practice, invariably present. An important result from our study is that the presence of dynamical noise can very greatly enhance the accuracy and applicability of our method (a similar point has been made in Refs. 14 and 15), while observational noise degrades the ability to infer causal dependence.

To more precisely define causal dependence, we consider the effect of a perturbation of one variable on the other variables as follows. Taking the $j$th component of $dz/dt = F(z)$, we have

$dz_j(t)/dt = F_j(z_1(t), z_2(t), \ldots, z_M(t)),$

for $j = 1, 2, \ldots, M$. Perturbing $z_i(t)$ by $\delta z_i(t)$, we obtain, for small $\tau$, that the component of the orbit perturbation of $z_j$ at time $(t+\tau)$ due to $\delta z_i$ is

$\delta z_j(t+\tau) = \tau \left\{ \left. \dfrac{\partial F_j(z)}{\partial z_i} \right|_{z = z(t)} \right\} \delta z_i(t) + O(\tau^2).$

We define the Short Term Causal Dependence (STCD) metric, $f_{ji}$, of $z_j$ on $z_i$ by

$f_{ji} = \left\langle G\!\left( \dfrac{\partial F_j(z)}{\partial z_i} \right) \right\rangle,$
(1)

where $\langle \cdot \rangle$ denotes a long time average of the bracketed quantity over an orbit and the function $G$ is to be chosen in a situation-dependent manner. For example, later in this paper, we consider examples addressing Goal (i) [where we want to distinguish whether or not $\partial F_j(z)/\partial z_i$ is always zero] for which we use $G(q) = |q|$, while, when we consider an example addressing Goal (ii) and are concerned with the time-averaged signed value of the interaction strength, we use $G(q) = q$. In either case, we view $f_{ji}$ as quantifying the causal dependence of $z_j$ on $z_i$, and the key goal of this paper will be to obtain and test a machine learning procedure for estimating $f_{ji}$ from observations of the state evolution $z(t)$. For future reference, we will henceforth denote our machine learning estimate of $f_{ji}$ by $\hat{f}_{ji}$. In the case of our Goal (i) experiments, where $G(q) = |q|$, we note that $f_{ji}$ defined by (1) is an average of a non-negative quantity and thus $f_{ji} \ge 0$, as will be our estimate, $\hat{f}_{ji} \ge 0$. Furthermore, for this case, we will define STCD of $z_j$ on $z_i$ by the condition $f_{ji} > 0$, and, when using our machine learning estimate $\hat{f}_{ji}$, we shall judge STCD to likely apply when $\hat{f}_{ji} > \epsilon$, where we call $\epsilon > 0$ the discrimination threshold. In the ideal case ($\hat{f}_{ji} = f_{ji}$), the discrimination threshold $\epsilon$ can be set to zero, but, in practice, due to the error in our estimate, we consider $\epsilon$ to be a suitably chosen positive number. We note that, in this ideal case with $\epsilon = 0$, the condition $\hat{f}_{ji} > 0$ can be regarded as a test of whether or not $F_j(z)$ is independent of $z_i$.
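To make the estimation procedure concrete, the following minimal Python sketch (our own illustration, not code from this work; the array layout and function names are assumptions) computes the STCD metric of Eq. (1) from a supplied sequence of Jacobian estimates along an orbit and applies the discrimination threshold $\epsilon$ for Goal (i).

```python
import numpy as np

def stcd_matrix(jacobians, goal="i"):
    """Estimate f_ji of Eq. (1): average G(dF_j/dz_i) over the orbit.

    jacobians: array of shape (n_steps, M, M), the Jacobian estimates
    dF_j/dz_i evaluated at successive sampled states z(n*tau).
    G(q) = |q| for Goal (i), G(q) = q for Goal (ii).
    """
    J = np.asarray(jacobians)
    G = np.abs(J) if goal == "i" else J
    return G.mean(axis=0)          # f_hat[j, i]

def infer_links(f_hat, epsilon):
    """Goal (i): declare that z_i causally influences z_j when f_hat[j, i] > epsilon."""
    return f_hat > epsilon         # boolean M x M matrix of inferred dependencies
```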

As a demonstration of a situation for which the determination of STCD from observations of the motion of z(t) on its attractor is not possible, we note the case where the attractor is a fixed point (a zero-dimensional attractor). Here, the measured available information is the M numbers that are the coordinates of the fixed point, and this information is clearly insufficient for determining STCD. As another problematic example, we note that in certain cases, one is interested in a dynamical system that is a connected network of identical dynamical subsystems and that such a network system can exhibit exact synchronization of its component subsystems26 (including cases where the subsystem orbits are chaotic). In the case where such a synchronized state is stable, observations of the individual subsystems are indistinguishable, and it is then impossible, in principle, for one to infer causal relationships between state variables belonging to different subsystems. More generally, in addition to the above fixed point and synchronization examples, we note that the dimension of the tangent space at a given point z on the attractor is, at most, the smallest embedding dimension of the part of the attractor in a small neighborhood of z. Thus, the full M×M Jacobian of F(z) at z cannot be precisely determined from data on the attractor when the local attractor embedding dimension at z is less than M, which is commonly the case. Thus, these examples motivate the conjecture that to efficiently and accurately infer STCD, the orbital complexity of the dynamics should be large enough so as to encode the information that we seek. Note that these considerations of cases where inference of STCD is problematic do not apply to situations with dynamical noise, e.g., dz/dt=F(z)+(noise), as the addition of noise may be roughly thought of as introducing an infinite amount of orbital complexity. Alternatively, the addition of noise increases the embedding dimension of the data to that of the full state space, i.e., M.

We base our considerations on a type of machine learning called reservoir computing, originally put forward in Refs. 27 and 28 (for a review, see Ref. 29). We assume that we can sample the time-series data $z(t)$ from our system at regular time intervals of length $\tau$ so that we have a discrete set of observations $\{z(0), z(\tau), z(2\tau), \ldots\}$. To begin, we first describe a reservoir-computer-based machine learning procedure in which the reservoir computer is trained to give an output $\hat{z}(t+\tau)$ in response to an M-dimensional input $z(t)$, as illustrated in Fig. 1.

FIG. 1.

Schematic of the reservoir computing architecture used in this work. The input-to-reservoir coupling matrix Win couples the input time series for the vector z to the reservoir state vector r. The reservoir-to-output coupling matrix Wout generates the output vector z^ from the reservoir. z^ is found to be a good estimate of z after training.


For our numerical tests, we consider a specific reservoir computer implementation (Fig. 1) in which the reservoir consists of a network of $R \gg M$ nodes whose scalar states, $r_1(n\tau), r_2(n\tau), \ldots, r_R(n\tau)$, are the components of the R-dimensional vector $r(n\tau)$.

The nodes interact dynamically with each other through an $R \times R$ network adjacency matrix $A$, and their evolution is also influenced by coupling of the M-dimensional input $z(n\tau)$ to the individual nodes of the reservoir network by the $R \times M$ input coupling matrix $W_{in}$ according to the neural-network type of evolution equation (e.g., Refs. 21–23 and 29–31),

$r((n+1)\tau) = \tanh\left( A r(n\tau) + W_{in} z(n\tau) \right),$
(2)

where $\tanh(v)$ for a vector $v = (v_1, v_2, v_3, \ldots)^T$ is defined as $(\tanh v_1, \tanh v_2, \tanh v_3, \ldots)^T$. For proper operation of the reservoir computer, it is important that Eq. (2) satisfies the “echo state property”21,27,29 (in nonlinear dynamics, this condition is also known as “generalized synchronization”32–34): given two different initial reservoir states, $r_1$ and $r_2$, for the same input time series of $z$, the difference between the two corresponding reservoir states converges to zero as they evolve in time [that is, $|r_1(t) - r_2(t)| \to 0$ as $t \to \infty$, implying that, after a transient initial period, $r(t)$ essentially depends only on the past history of $z$, i.e., on $z(t')$ for $t' \le t$, and not on the initial condition for $r$].
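As a minimal illustration of Eq. (2), the sketch below (ours, with assumed variable names) drives a reservoir with an input time series; by the echo state property, the stored states become independent of the initial condition r0 after a transient.

```python
import numpy as np

def drive_reservoir(A, W_in, z_series, r0=None):
    """Iterate r((n+1)tau) = tanh(A r(n tau) + W_in z(n tau)) over the input series.

    A: (R, R) reservoir adjacency matrix; W_in: (R, M) input coupling matrix;
    z_series: iterable of M-dimensional input vectors z(n tau).
    Returns the array of reservoir states, shape (T, R).
    """
    R = A.shape[0]
    r = np.zeros(R) if r0 is None else np.asarray(r0)
    states = []
    for z in z_series:
        r = np.tanh(A @ r + W_in @ z)   # Eq. (2)
        states.append(r)
    return np.array(states)
```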

Using measured input training data over a training interval of length $T\tau$, which begins after the initial transient period mentioned above, we use Eq. (2) to generate $r(\tau), r(2\tau), \ldots, r(T\tau)$. We also record and store these determined values $r(n\tau)$ along with the corresponding inputs, $z(n\tau)$, that created them. The matrices $A$ and $W_{in}$ are regarded as fixed and are typically chosen randomly. In contrast, the $M \times R$ output coupling matrix $W_{out}$, shown in Fig. 1, is regarded as an adjustable linear mapping from the reservoir states $r$ to an M-dimensional output vector $\hat{z}$,

$\hat{z}((n+1)\tau) = W_{out}\, r((n+1)\tau).$
(3)

“Training” of the machine learning reservoir computer then consists of choosing the $RM$ adjustable matrix elements (“weights”) of $W_{out}$ so as to make $\hat{z}(n\tau)$ a very good approximation to $z(n\tau)$ over the time duration $(\tau, 2\tau, \ldots, T\tau)$ of the training data. This is done by minimization with respect to $W_{out}$ of the quantity $\left\{ \sum_{n=1}^{T} \| z(n\tau) - W_{out} r(n\tau) \|^2 \right\} + \beta \| W_{out} \|^2$. Here, $\beta \| W_{out} \|^2$, with $\beta$ small, is a “ridge” regularization term35 added to prevent overfitting, and $(r(n\tau), z(n\tau))$ are the previously recorded and stored training data. In general, $R \gg M$ is required in order to obtain a good fit of $\hat{z}$ to $z$. For illustrative purposes, we now consider the ideal case where $\hat{z} = z$ (i.e., the training perfectly achieves its goal).
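The ridge-regression minimization described above has the standard closed-form solution, sketched below in Python (our own code; the names are assumptions). Transient reservoir states are assumed to have been discarded already.

```python
import numpy as np

def train_output_weights(states, targets, beta=1e-4):
    """Minimize sum_n ||z(n tau) - W_out r(n tau)||^2 + beta ||W_out||^2.

    states: (T, R) array of reservoir states r(n tau);
    targets: (T, M) array of the corresponding inputs z(n tau).
    Returns W_out of shape (M, R).
    """
    R_mat = states.T                                   # (R, T)
    Z_mat = targets.T                                  # (M, T)
    reg = beta * np.eye(R_mat.shape[0])
    return Z_mat @ R_mat.T @ np.linalg.inv(R_mat @ R_mat.T + reg)
```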

For the purpose of estimating STCD, we now wish to eliminate the quantity $r$ from the basic reservoir computer system [Eqs. (2) and (3)] to obtain an evolution equation solely for the state variable $z$. To do this, we would like to solve (3) for $r$ in terms of $z$. However, since $R$, the dimension of $r$, is much larger than $M$, the dimension of $z$, there are typically an infinite number of solutions of (3) for $r$. To proceed, we hypothesize that it may be useful to eliminate $r$ by choosing it to be the solution of (3) with the smallest $L_2$ norm. This condition defines the so-called Moore-Penrose inverse36 of $W_{out}$, which we denote $\hat{W}_{out}^{-1}$; i.e., the minimum $L_2$ norm solution for $r$ is written as $r = \hat{W}_{out}^{-1} z$. We emphasize that $\hat{W}_{out}^{-1} z$ is not necessarily expected to give the correct $r$ obtained by solving systems (2) and (3). However, from numerical results to follow, our choice will be supported by the fact that it often yields very useful estimates of $f_{ji}$.

Now, applying $W_{out}$ to both sides of Eq. (2) and employing $r = \hat{W}_{out}^{-1} z$ to eliminate $r(n\tau)$ from the argument of the tanh function in Eq. (2), we obtain a surrogate time-$\tau$ map for the evolution of $z$, $z((n+1)\tau) = H[z(n\tau)]$, where $H(z) = W_{out} \tanh[(A \hat{W}_{out}^{-1} + W_{in}) z]$. Here, we note that we do not claim that this map in itself can be used for time-series prediction in place of Eqs. (2) and (3), which were commonly used in previous works (e.g., Refs. 21–23, 30, and 31). Rather, we use it as a symbolic representation of the result obtained after eliminating the reservoir state vector $r$ from Eqs. (2) and (3). In particular, the prediction recipe using Eqs. (2) and (3) is always unique and well defined, in contrast to the above map, where $\hat{W}_{out}^{-1}$ is clearly nonunique. Therefore, we use this map only for causality estimation purposes, as described below. Differentiating $H(z)$ with respect to $z_i$, we have

$\dfrac{\partial F_j(z)}{\partial z_i} = \tau^{-1} \left[ \dfrac{\partial H_j(z)}{\partial z_i} - \delta_{ij} \right],$
(4)

where δij is the Kronecker delta, and we propose to use Eqs. (1) and (4) to determine STCD.
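A sketch of how Eq. (4) can be evaluated in practice (our own construction following the text; W_out is the trained output matrix and np.linalg.pinv supplies the Moore-Penrose inverse $\hat{W}_{out}^{-1}$): differentiate the surrogate map $H$ analytically and subtract the identity.

```python
import numpy as np

def jacobian_estimates(W_out, A, W_in, z_series, tau):
    """Estimate dF_j/dz_i at each sampled state via Eq. (4).

    H(z) = W_out tanh[(A W_out_pinv + W_in) z], so
    dH/dz = W_out diag(1 - tanh^2(u)) (A W_out_pinv + W_in), with
    u = (A W_out_pinv + W_in) z.
    """
    W_pinv = np.linalg.pinv(W_out)          # Moore-Penrose inverse, shape (R, M)
    B = A @ W_pinv + W_in                   # effective input matrix, shape (R, M)
    M = W_out.shape[0]
    jacs = []
    for z in z_series:
        u = B @ z                           # argument of tanh, shape (R,)
        DH = W_out @ ((1.0 - np.tanh(u) ** 2)[:, None] * B)   # dH_j/dz_i, (M, M)
        jacs.append((DH - np.eye(M)) / tau)                   # Eq. (4)
    return np.array(jacs)   # feed this array to the STCD average of Eq. (1)
```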

In our numerical experiments, the number of training time steps is $T = 6 \times 10^4$ for Figs. 2 and 3 and $T = 2 \times 10^4$ for Fig. 4. In each case, the actual training data are obtained after discarding a transient part of $2 \times 10^4$ time steps, and the reservoir system sampling time is $\tau = 0.02$. The elements of the input matrix $W_{in}$ are randomly chosen in the interval $[-0.1, 0.1]$. The reservoir is a sparse random network of $R = 5000$ nodes for Figs. 2 and 3 and of $R = 1000$ nodes for Fig. 4. In each case, the average number of incoming links per node is 3. Each nonzero element of the reservoir adjacency matrix $A$ is randomly chosen from the interval $[-a, a]$, and $a > 0$ is then adjusted so that the maximum magnitude eigenvalue of $A$ is 0.9. The regularization parameter is $\beta = 10^{-4}$. These parameters are adapted from Ref. 23. The average indicated in Eq. (1) is over 1000 time steps. The chosen time step $\tau$ is sufficiently small compared to the time scale over which $z(t)$ evolves that the discrete time series $z(n\tau)$ is a good representation of the continuous variation of $z(t)$.
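For completeness, one possible construction of A and W_in with the above parameters is sketched here (our code; the exact random constructions used may differ in detail).

```python
import numpy as np

def build_reservoir(R, M, avg_degree=3, spectral_radius=0.9, w_in_scale=0.1, seed=0):
    """Sparse random reservoir matrix A and input matrix W_in, as described above."""
    rng = np.random.default_rng(seed)
    mask = rng.random((R, R)) < avg_degree / R           # ~3 incoming links per node
    A = np.where(mask, rng.uniform(-1.0, 1.0, (R, R)), 0.0)
    A *= spectral_radius / np.max(np.abs(np.linalg.eigvals(A)))  # max |eigenvalue| = 0.9
    W_in = rng.uniform(-w_in_scale, w_in_scale, size=(R, M))
    return A, W_in
```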

FIG. 2.

Results of Experiment 1 (noiseless case). Panels (a) and (b) show the results of link inference for two noiseless cases, L=50 links and L=100 links. The inference is perfect in (a) but is very bad in (b). (c) FP/FPR vs L for h = 0, 0.06, and 0.15, averaged over 100 random realizations of the system and the reservoir adjacency matrix. (d) The orbital complexity, as measured by the attractor information dimension DINFO, decreases with increasing L. Note that at each value of L, we compute DINFO for 10 random realizations of a network with L links and h = 0. The Kaplan-Yorke dimension is then averaged over all network realizations, and the resulting plot is further smoothed by applying a moving average filter.

FIG. 3.

The effect of noise on STCD inference. Panels (a)–(c) show the effect of increasing the dynamical noise variance $\sigma_{Dyn}^2$ to greatly enhance the effectiveness of link identification even at the rather low noise level of $\sigma_{Dyn}^2 = 10^{-6}$. In contrast, as shown in panels (d)–(f), starting with the situation (c) and increasing the observational noise variance $\sigma_{Obs}^2$ degrades link identification. $L = 200$, $h = 0$ for all the subfigures here.

FIG. 4.

Results of Experiment 3. Panel (a) shows a $100 \times 100$ pixelated, shade-coded portrait of Edward N. Lorenz and (b) reconstruction of (a) by our ML link inference technique. Note that, in (b), we plot all the values greater than or equal to 10 as black and all the values less than or equal to $-5.5$ as white.


Although we use a specific reservoir computing implementation, we expect that, with suitable modifications, our approach can be adapted to “deep” types of machine learning,2 as well as to other implementations of reservoir computing24,25,37,38 [notably, implementations involving photonics,24 electronics,37 and field-programmable gate arrays (FPGAs)25].

In order to evaluate the effectiveness of our proposed method, we introduce mathematical model test systems that we use as proxies for the unknown system of interest for whose state variables we wish to determine STCD. We next use the test systems to generate simulated training data from which we determine STCD by our ML technique. We then assess the performance of the technique by the correctness of its results determined from the known properties of the test systems.

We first consider examples addressing our Goal (i) [G(q)=|q| in Eq. (1)], and for our simulation test systems, we consider the case of a network of N nodes and L links, where each node is a classical Lorenz system39 with heterogeneity from node to node, additive dynamical noise, and internode coupling,

$dx_k/dt = 10\left[ -x_k + y_k + c \sum_{l=1}^{N} a_{kl}^{(x,y)} (y_l - y_k) \right] + \sigma_{Dyn}\, n_k^x(t),$
(5)
$dy_k/dt = 28(1 + h_k)\, x_k - y_k - x_k z_k + \sigma_{Dyn}\, n_k^y(t),$
(6)
$dz_k/dt = -(8/3)\, z_k + x_k y_k + \sigma_{Dyn}\, n_k^z(t).$
(7)

The state space dimension of this system is $M = 3N$. The coupling of the N nodes is taken to be only from the y variable of one node to the x variable of another node with coupling constant $c$, and $a_{kl}^{(x,y)}$ is either 1 or 0 depending on whether or not there is a link from $l$ to $k$. The adjacency matrix $a_{kl}^{(x,y)}$ of our Lorenz network (not to be confused with the adjacency matrix $A$ of the reservoir) is constructed by placing directed links between $L$ distinct randomly chosen node pairs. For each node $k$, $h_k$ is randomly chosen in the interval $[-h, +h]$, and we call $h$ the heterogeneity parameter. Independent white noise terms of equal variance $\sigma_{Dyn}^2$ are added to the right-hand sides of the equations for $dx_k/dt$, $dy_k/dt$, and $dz_k/dt$, where, for example, $\langle n_k^x(t)\, n_{k'}^x(t') \rangle = 2 \delta_{kk'} \delta(t - t')$. For $\sigma_{Dyn} = c = h = 0$, each node obeys the classical chaotic Lorenz equation with the parameter values originally studied by Lorenz.39 Furthermore, denoting the right-hand side of Eq. (5) by $F_{x_k}$, we have $\partial F_{x_k} / \partial y_l = 10c$ or 0, depending on whether there is, or is not, a link from $y_l$ to $x_k$.
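A simple Euler-Maruyama integration of Eqs. (5)–(7), sketched below (our own code with assumed names; the integration scheme and step size actually used are not specified in the text), can be used to generate such training data.

```python
import numpy as np

def simulate_lorenz_network(a, c, h_vec, sigma_dyn, dt, n_steps, seed=0):
    """Generate a trajectory of the noisy coupled Lorenz network, Eqs. (5)-(7).

    a: (N, N) 0/1 adjacency matrix a_kl; h_vec: heterogeneities h_k;
    sigma_dyn: dynamical noise amplitude; dt: integration step.
    Returns an array of shape (n_steps, 3N) with [x_1..x_N, y_1..y_N, z_1..z_N].
    """
    rng = np.random.default_rng(seed)
    N = a.shape[0]
    x, y, z = rng.standard_normal((3, N))
    traj = np.empty((n_steps, 3 * N))
    for n in range(n_steps):
        coupling = a @ y - a.sum(axis=1) * y            # sum_l a_kl (y_l - y_k)
        dx = 10.0 * (-x + y + c * coupling)
        dy = 28.0 * (1.0 + h_vec) * x - y - x * z
        dz = -(8.0 / 3.0) * z + x * y
        noise = sigma_dyn * np.sqrt(2.0 * dt) * rng.standard_normal((3, N))
        x = x + dx * dt + noise[0]
        y = y + dy * dt + noise[1]
        z = z + dz * dt + noise[2]
        traj[n] = np.concatenate([x, y, z])
    return traj
```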

Since in this case the derivative $\partial F_{x_k}/\partial y_l$ is time independent, $|\partial F_{x_k}/\partial y_l|$ is also either $10c$ or 0, and, adopting the notation $f_{kl}^{(x,y)} = |\partial F_{x_k}/\partial y_l|$, we denote the machine learning estimate obtained by our previously described procedure by $\hat{f}_{kl}^{(x,y)}$. For a reasonably large network, the number $N^2 - N$ of ordered node pairs $(k, l)$ of distinct nodes is large, and we consequently have many values of $\hat{f}_{kl}^{(x,y)}$. Bayesian techniques (see Ref. 40 and references therein) can be applied to such data to obtain an estimate $\hat{L}$ for the total number of links $L$, and one can then set the value of $\epsilon$ so that there are $\hat{L}$ values of $\hat{f}_{kl}^{(x,y)}$ that are greater than $\epsilon$. Less formally, we find that making a histogram of the values of $\hat{f}_{kl}^{(x,y)}$ often reveals a peak at zero and another peak at a higher positive value with a large gap or discernible minimum in between. One can then estimate $\epsilon$ by a value in the gap or by the location of the minimum between the peaks, respectively. For simplicity, in our illustrative numerical simulations to follow, we assume that $L$ is known [approximately equivalent to the case that $L$ is unknown, but that a very good estimate $\hat{L}$ has been obtained].
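If an estimate $\hat{L}$ of the number of links is available, the threshold choice described above amounts to the following small helper (a hypothetical sketch of ours, not the authors' code).

```python
import numpy as np

def threshold_from_link_count(f_hat_values, L_hat):
    """Choose epsilon so that exactly L_hat of the f_hat values exceed it."""
    vals = np.sort(np.ravel(f_hat_values))[::-1]        # descending order
    # place epsilon midway between the L_hat-th and (L_hat+1)-th largest values
    return 0.5 * (vals[L_hat - 1] + vals[L_hat])
```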

Example 1
(A heterogeneous noiseless case)

We consider the parameter set c=0.3, h=0.06, σDyn=σObs=0, N=20, and we vary the number of links L. Figures 2(a) (for L=50) and 2(b) (for L=100) each show an array of 20×20 boxes where each of the boxes represents an ordered node pair (k,l) of the 20-node network, and the boxes have been colored (see Table I) according to whether the results for our procedure predict a link from l to k (“positive”) or not (“negative”) and whether the prediction is correct (“true”) or wrong (“false”).

TABLE I.

Color-coding scheme for Figs. 2 and 3.

TP (true positive) Black square 
TN (true negative) White square 
FP (false positive) Blue square 
FN (false negative) Red square 

We see that for a typical case with $L = 50$ [Fig. 2(a)], all the boxes have been correctly labeled, corresponding to all boxes being either black or white. In contrast to this perfect result at $L = 50$, at $L = 100$ [Fig. 2(b)], the method fails badly, and the fraction of correct inferences is small. In fact, we find excellent performance for $L \lesssim 50$, but, as $L$ increases past 50, the performance of our method degrades markedly. This is shown in Fig. 2(c), where we give plots of the number of false positives (FPs) normalized to the expected value of FP that would result if $L$ links were randomly assigned to the $N^2 - N = 380$ node pairs $(k, l)$. [We denote this normalization FPR; it is given by $\mathrm{FPR} = L(380 - L)/380$.] Note that, with this normalization, for the different heterogeneities plotted in Fig. 2(c), the curves are similar and that they all begin increasing at around $L = 60$, and FP/FPR becomes nearly 1 (i.e., inference no better than random) past $L \approx 100$. In our earlier discussion, we conjectured that, for inference of STCD to be possible, the orbital complexity should not be too small. To test this conjecture, we have calculated the information dimension $D_{INFO}$ of the network system attractor corresponding to the parameters $c = 0.3$, $h = 0$, $\sigma_{Dyn} = 0$, $N = 20$, as a function of $L$. We do this by calculating the Lyapunov exponents of the system [Eqs. (5)–(7)] and then applying the Kaplan-Yorke formula for $D_{INFO}$ in terms of the calculated Lyapunov exponents.41,42 The result is shown in Fig. 2(d), where we see that $D_{INFO}$ decreases with increasing $L$. Regarding $D_{INFO}$ as a measure of the orbital complexity, this is consistent with our expectation that the ability to infer STCD will be lost if the orbital complexity of the dynamics is too small. As we next show, the above negative result for $L$ increasing past about 60 does not apply when even small dynamical noise is present.
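Before turning to the role of noise, we note for reference that the FP/FPR normalization used in Fig. 2(c) can be computed as in the following sketch (ours; boolean adjacency matrices with zero diagonals are assumed).

```python
import numpy as np

def normalized_false_positives(pred_links, true_links, N):
    """FP divided by FPR = L (n_pairs - L) / n_pairs, the expected FP for random assignment."""
    off_diag = ~np.eye(N, dtype=bool)
    FP = np.sum(pred_links & ~true_links & off_diag)
    L = np.sum(true_links & off_diag)
    n_pairs = N * N - N                                  # 380 for N = 20
    FPR = L * (n_pairs - L) / n_pairs
    return FP / FPR
```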

Example 2
(The effects of dynamical and observational noise)

We first consider the effect of dynamical noise of variance $\sigma_{Dyn}^2$ for the parameters $h = 0$ (homogeneous), $c = 0.3$, $N = 20$, and $L = 200$. Results [similar in style to Figs. 2(a) and 2(b)] are shown in Figs. 3(a)–3(c). For extremely low dynamical noise variance, $\sigma_{Dyn}^2 = 10^{-9}$ [Fig. 3(a)], the result is essentially the same as for zero noise, and about one quarter of the boxes are classified as TP, TN, FP, and FN each (since there are 200 links and 400 boxes, this is no better than random assignment). As the noise variance is increased to $\sigma_{Dyn}^2 = 10^{-7.5}$ [Fig. 3(b)], the results become better, with a fraction of 0.75 of the boxes either TP or TN [as opposed to 0.52 for Fig. 3(a)]. Upon further increase of the dynamical noise variance to the still small value of $\sigma_{Dyn}^2 = 10^{-6}$ [Fig. 3(c)], the results typically become perfect or nearly perfect. Furthermore, excellent results, similar to those for $\sigma_{Dyn}^2 = 10^{-6}$, continue to apply for larger $\sigma_{Dyn}^2$. This is shown by the red curve in Fig. 3(f), which shows FP/FPR vs $\sigma_{Dyn}^2$ ($N = 20$; $L = 200$). Importantly, we also note that our normalization of FP by FPR essentially makes the red curve L-independent over the range we have tested, $50 \le L \le 200$. Our interpretation of this dynamical-noise-mediated strong enhancement of our ability to correctly infer links is that the dynamical noise allows the orbit to explore the state space dynamics off the orbit’s attractor and that the machine learning is able to make appropriate good use of the information it thus gains.

We now turn to the effect of observational noise by replacing the machine learning time-series training data formerly used, $[x_k(n\tau), y_k(n\tau), z_k(n\tau)]$, by $[x_k(n\tau) + \sigma_{Obs} \hat{n}_k^x(n\tau),\; y_k(n\tau) + \sigma_{Obs} \hat{n}_k^y(n\tau),\; z_k(n\tau) + \sigma_{Obs} \hat{n}_k^z(n\tau)]$, where the parameter $\sigma_{Obs}^2$ is the observational noise variance and the $\hat{n}_k^x, \hat{n}_k^y, \hat{n}_k^z$ are independent Gaussian random variables with, e.g., $\langle \hat{n}_k^x(n\tau)\, \hat{n}_{k'}^x(n'\tau) \rangle = 2 \delta_{kk'} \delta_{nn'}$. The blue curve in Fig. 3(f) shows the effect of adding observational noise of variance $\sigma_{Obs}^2$ on top of the dynamical noise of the situation $\sigma_{Dyn}^2 = 10^{-6}$ of Fig. 3(c). We see from Figs. 3(d)–3(f) that, when $\sigma_{Obs}^2$ is below about $10^{-5}$, it is too small to have much effect, but, as $\sigma_{Obs}^2$ is increased above $10^{-5}$, the observational noise has an increasing deleterious effect on link inference. This negative effect of observational noise is to be expected, since inference of characteristics of the unknown system is necessarily based on the part of the signal that is influenced by the dynamics of the unknown system, which the observational noise tends to obscure.
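The observational-noise experiment amounts to corrupting the stored training series before training, e.g., as in this sketch (our code; the factor $\sqrt{2}$ reflects the variance normalization of the $\hat{n}$ variables stated above).

```python
import numpy as np

def add_observational_noise(traj, sigma_obs, seed=0):
    """Add independent Gaussian observational noise to every measured component."""
    rng = np.random.default_rng(seed)
    return traj + sigma_obs * np.sqrt(2.0) * rng.standard_normal(traj.shape)
```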

Example 3
(Inferring continuous valued dependence strengths)
We now wish to address Goal (ii) [for which we take $G(q) = q$ in Eq. (1)], and we, accordingly, consider the case where $f_{kl}^{(x,y)}$ for each $(k, l)$ takes on a value in a continuous range [rather than the case of Examples 1 and 2, where $f_{kl}^{(x,y)}$ is either $10c$ or zero for all $(k, l)$]. For this purpose, we replace Eq. (5) by
$dx_k/dt = 10(-x_k + y_k) + \sum_l f_{kl}^{(x,y)} y_l$
(8)
and consider Eqs. (6)–(8) as our new test system, with $h = 0.9$, $\sigma_{Dyn}^2 = \sigma_{Obs}^2 = 0$, and $N = 100$ nodes (corresponding to $100 \times 100 = 10^4$ possible connection strength values). We choose the connection strength values as follows. A photographic portrait of Edward N. Lorenz is divided up into $100 \times 100 = 10^4$ boxes, and by using a shading scale from dark (coded as $+10$) to light (coded as $-5$), Fig. 4(a) is obtained, with the shading scale given to the right of Fig. 4(b). Setting $f_{kl}^{(x,y)}$ equal to the color scale value of box $(k, l)$, we next numerically solve Eqs. (6)–(8). We then use this orbit as the training data for input to our ML determination of the causal dependence strengths, $\hat{f}_{kl}^{(x,y)}$, and, employing the same shading scale, we use the thus determined values of $\hat{f}_{kl}^{(x,y)}$ to reconstruct the original portrait, as shown in Fig. 4(b). We see that, although the reproduction is not exact, the overall picture is still clearly recognizable, indicating the effectiveness of the method for Goal (ii). For a more quantitative comparison of the actual and estimated Jacobian elements, we calculate the normalized Frobenius norm of their difference matrix $f^{(x,y)} - \hat{f}^{(x,y)}$. We first apply upper and lower cutoffs equal to 10 and $-5.5$, respectively, to $\hat{f}^{(x,y)}$, in order to eliminate some extreme values. Then, we calculate the ratio
$\delta = \dfrac{\| f^{(x,y)} - \hat{f}^{(x,y)} \|_F}{\left\langle \| f^{(x,y)} - \tilde{f}^{(x,y)} \|_F \right\rangle},$
(9)
where $\| M \|_F = \sqrt{\mathrm{Trace}(M^\dagger M)} = \sqrt{\sum_{i,j} |M_{ij}|^2}$ is the Frobenius norm of the matrix $M$. Here, $\tilde{f}^{(x,y)}$ denotes a matrix constructed by randomly permuting the elements of the matrix $f^{(x,y)}$, and the angled brackets denote an average over such random permutations. Therefore, this ratio compares the total error in the inferred Jacobian with the variation in the original matrix elements of $f^{(x,y)}$. For example, for a perfect estimation of $f^{(x,y)}$, we will have $\delta = 0$. In contrast, $\delta = 1$ means that the prediction error is equal to the average error when the elements of $f^{(x,y)}$ are randomized. For the example shown in Fig. 4, we find that $\delta$ is approximately equal to 0.37.
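The error measure of Eq. (9) can be evaluated as in the following sketch (our code; the number of random permutations used for the average is an assumption).

```python
import numpy as np

def delta_ratio(f_true, f_est, n_perm=100, seed=0):
    """delta of Eq. (9): Frobenius error of the estimate, normalized by the average
    Frobenius error obtained when the true matrix is randomly permuted."""
    rng = np.random.default_rng(seed)
    err = np.linalg.norm(f_true - f_est)                 # Frobenius norm by default
    perm_errs = [
        np.linalg.norm(f_true - rng.permutation(f_true.ravel()).reshape(f_true.shape))
        for _ in range(n_perm)
    ]
    return err / np.mean(perm_errs)
```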

In this paper, we have formulated and tested a new, highly effective, machine-learning-based approach for inferring causal dependencies of state variables of an unknown system from time-series observations of these state variables. A key finding is that the effectiveness of our approach is greatly enhanced in the presence of sufficient dynamical noise, provided that the deleterious effect of observational noise is not too great. The competition between the opposing effects of these two types of noise will likely be the key factor determining the success or failure of causality inference in many of the most important situations of interest (e.g., in neuroscience and genomics). Much work remains to be done to more fully address the utility of our method. In particular, further numerical tests on diverse systems and, especially, experimental studies in real-world applications will ultimately determine the circumstances under which the method developed here will be useful.

This work was supported by the U.S. National Science Foundation (NSF) (Grant No. DMS 1813027). The authors acknowledge useful discussion with Sarthak Chandra, Amitabha Sen, Woodrow Shew, Nuno Martins, Adrian Papamarcou, Erik Bollt, and, especially, Brian Hunt.

1. R. Feynman, The Character of Physical Law (MIT Press, Cambridge, 1965).
2. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, Cambridge, 2016).
3. C. W. J. Granger, “Investigating causal relations by econometric methods and cross-spectral methods,” Econometrica 37, 424–430 (1969).
4. M. Ding, Y. Chen, and S. L. Bressler, “Granger causality: Basic theory and applications to neuroscience,” in Handbook of Time Series Analysis (Wiley-VCH, 2006), pp. 437–460.
5. C. Sima, J. Hua, and S. Jung, “Inference of gene regulatory networks using time-series data: A survey,” Curr. Genomics 10(6), 416–429 (2009).
6. J. F. Donges, Y. Zou, N. Marwan, and J. Kurths, “Complex networks in climate dynamics,” Eur. Phys. J. Spec. Top. 174, 157 (2009).
7. W. L. Ku, G. Duggal, Y. Li, M. Girvan, and E. Ott, “Interpreting patterns of gene expression: Signatures of coregulation, the data processing inequality, and triplet motifs,” PLoS ONE 7, e31969 (2012).
8. J. Ren, W.-X. Wang, B. Li, and Y.-C. Lai, “Noise bridges dynamical correlation and topology in coupled oscillator networks,” Phys. Rev. Lett. 104, 058701 (2010).
9. Z. Levnajic and A. Pikovsky, “Untangling complex dynamical systems via derivative-variable correlations,” Sci. Rep. 4, 5030 (2014); M. G. Leguia, R. G. Andrzejak, and Z. Levnajic, “Evolutionary optimization of network reconstruction from derivative-variable correlations,” J. Phys. A: Math. Theor. 50, 334001 (2017).
10. T. Schreiber, “Measuring information transfer,” Phys. Rev. Lett. 85, 461–464 (2000).
11. J. Sun and E. M. Bollt, “Causation entropy identifies indirect influences, dominance of neighbors and anticipatory couplings,” Phys. D 267, 49–57 (2014).
12. E. J. Molinelli, A. Korkut, W. Wang, M. L. Miller, N. P. Gauthier, X. Jing, P. Kaushik, Q. He, G. Mills, D. B. Solit, C. A. Pratilas, M. Weigt, A. Braunstein, A. Pagnani, R. Zecchina, and C. Sander, “Perturbation biology: Inferring signaling networks in cellular systems,” PLoS Comput. Biol. 9(12), e1003290 (2013).
13. M. Timme, “Revealing network connectivity from response dynamics,” Phys. Rev. Lett. 98, 224101 (2007).
14. M. J. Panaggio, M.-V. Ciocanel, L. Lazarus, C. M. Topaz, and B. Xu, “Model reconstruction from temporal data for coupled oscillator networks,” Chaos 29, 103116 (2019).
15. M. G. Leguia, C. G. B. Martinez, I. Malvestio, A. T. Campo, R. Rocamora, Z. Levnajic, and R. G. Andrzejak, “Inferring directed networks using a rank-based connectivity measure,” Phys. Rev. E 99, 012319 (2019).
16. T. Stankovski, T. Pereira, P. V. E. McClintock, and A. Stefanovska, “Coupling functions: Universal insights into dynamical interaction mechanisms,” Rev. Mod. Phys. 89, 045001 (2017).
17. S. G. Shandilya and M. Timme, “Inferring network topology from complex dynamics,” New J. Phys. 13, 013004 (2011).
18. S. Leng, Z. Xu, and H. Ma, “Reconstructing directional causal networks with random forest: Causality meeting machine learning,” Chaos 29, 093130 (2019).
19. R.-M. Cao, S.-Y. Liu, and X.-K. Xu, “Network embedding for link prediction: The pitfall and improvement,” Chaos 29, 103102 (2019).
20. M. G. Leguia, Z. Levnajic, L. Todorovski, and B. Zenko, “Reconstructing dynamical networks via feature ranking,” Chaos 29, 093107 (2019).
21. H. Jaeger and H. Haas, “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication,” Science 304, 78–80 (2004).
22. J. Pathak, B. Hunt, M. Girvan, Z. Lu, and E. Ott, “Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach,” Phys. Rev. Lett. 120, 024102 (2018).
23. Z. Lu, J. Pathak, B. Hunt, M. Girvan, R. Brockett, and E. Ott, “Reservoir observers: Model-free inference of unmeasured variables in chaotic systems,” Chaos 27, 041102 (2017).
24. L. Larger, A. Baylon-Fuentes, R. Martinenghi, V. S. Udaltsov, Y. K. Chembo, and M. Jacquot, “High-speed photonic reservoir computing using a time-delay-based architecture: Million words per second classification,” Phys. Rev. X 7, 011015 (2017).
25. B. Schrauwen, M. D’Haene, D. Verstraeten, and J. Van Campenhout, “Compact hardware liquid state machines on FPGA for real-time speech recognition,” Neural Netw. 21, 511–523 (2008).
26. L. M. Pecora and T. L. Carroll, “Master stability function for synchronized chaotic systems,” Phys. Rev. Lett. 80, 2109 (1998).
27. H. Jaeger, “The ‘echo state’ approach to analysing and training recurrent neural networks,” GMD Report 148, German National Research Center for Information Technology, 2001.
28. W. Maass, T. Natschlager, and H. Markram, “Real-time computing without stable states: A new framework for neural computation based on perturbations,” Neural Comput. 14, 2531–2560 (2002).
29. M. Lukoševičius and H. Jaeger, “Reservoir computing approaches to recurrent neural network training,” Comput. Sci. Rev. 3, 127–149 (2009).
30. P. Antonik, M. Gulina, J. Pauwels, and S. Massar, “Using a reservoir computer to learn chaotic attractors, with applications to chaos synchronization and cryptography,” Phys. Rev. E 98, 012215 (2018).
31. P. Antonik, M. Haelterman, and S. Massar, “Brain-inspired photonic signal processor for generating periodic patterns and emulating chaotic systems,” Phys. Rev. Appl. 7, 054014 (2017).
32. N. F. Rulkov, M. M. Sushchik, L. S. Tsimring, and H. D. I. Abarbanel, “Generalized synchronization of chaos in directionally coupled chaotic systems,” Phys. Rev. E 51, 980–994 (1995).
33. L. Kocarev and U. Parlitz, “Generalized synchronization, predictability, and equivalence of unidirectionally coupled dynamical systems,” Phys. Rev. Lett. 76, 1816–1819 (1996).
34. B. R. Hunt, E. Ott, and J. A. Yorke, “Differentiable generalized synchronization of chaos,” Phys. Rev. E 55, 4029–4034 (1997).
35. A. E. Hoerl and R. W. Kennard, “Ridge regression: Biased estimation for nonorthogonal problems,” Technometrics 12, 55–67 (1970).
36. R. Penrose, “A generalized inverse for matrices,” Proc. Camb. Philos. Soc. 51, 406–413 (1955).
37. L. Appeltant, M. C. Soriano, G. van der Sande, S. Massar, J. Dambre, B. Schrauwen, C. R. Mirasso, and I. Fischer, “Information processing using a single dynamical node as a complex system,” Nat. Commun. 2, 468–473 (2013).
38. L. Gonon and J.-P. Ortega, “Reservoir computing universality with stochastic inputs,” IEEE Trans. Neural Netw. Learn. Syst. 23, 1–13 (2019).
39. E. N. Lorenz, “Deterministic nonperiodic flow,” J. Atmos. Sci. 20, 130 (1963).
40. H. D. Nguyen and G. J. McLachlan, “Maximum likelihood estimation of Gaussian mixture models without matrix operations,” Adv. Data Anal. Classif. 9, 371–394 (2015).
41. J. L. Kaplan and J. A. Yorke, “Chaotic behavior of multidimensional difference equations,” in Functional Differential Equations and Approximations of Fixed Points (Springer, Heidelberg, 1979), pp. 204–227.
42. J. D. Farmer, E. Ott, and J. A. Yorke, “The dimension of chaotic attractors,” Phys. D 7, 153–180 (1983).