Networks and graphs have emerged as powerful tools to model and analyze nonlinear dynamical systems. By constructing an adjacency matrix from recurrence networks, it is possible to capture critical structural and geometric information about the underlying dynamics of a time series. However, randomization of data often raises concerns about the potential loss of deterministic relationships. Here, using the spring-electrical-force model, we demonstrate that optimizing the distances between randomized points by minimizing an entropy-related energy measure does not destroy the deterministic structure of the original system. This process allows us to approximate the shape of the time series and correct its phase, effectively reconstructing the initial invariant set and attracting dynamics of the system. Our approach highlights the importance of adjacency matrices derived from recurrence plots, which preserve crucial information about the nonlinear dynamics. By using the entropy of the diagonal line lengths of recurrence plots and leveraging the Kullback–Leibler divergence as a relative entropic measure, we fine-tune the parameters and initial conditions for recurrence plots, ensuring an optimal representation of the system's dynamics. Through the integration of network geometry and energy minimization, we show that data-driven graphs can self-organize to retain and regenerate the fundamental features of the time series, including its phase space structures. This study underscores the robustness of recurrence networks as a tool for analyzing nonlinear systems and demonstrates that randomization, when guided by informed optimization, does not erase deterministic relationships, opening new avenues for reconstructing dynamical systems from observational data.

The analysis of networks and graphs has seen substantial advancements in recent years.1–3 Various methods exist for constructing and visualizing networks and graphs, such as ordinal networks,4,5 visibility graphs,6,7 and k-nearest neighbor networks.8 A recurrence network can be established from a recurrence matrix, which also serves as an adjacency matrix, enabling the extraction of quantification measures related to the underlying dynamical system.9,10 Recurrence plots and quantification measures make use of recurrence as a key property of dynamical systems,11 with growing interest in recent years in analyzing nonlinear systems.9,12

Eckmann et al.13 introduced recurrence plots (RPs) as a method for visualizing recurrence matrices, which encapsulate the time-invariant properties of a dynamical system derived from observed time series data.14 RPs are generated through phase-space reconstruction, commonly applied to time-varying systems.15–21 Creating a recurrence network involves deriving an adjacency matrix, which may be generated from a distance matrix or from the RP's weighted similarity matrix and which captures the nodal connections within the network.22 Networks have proven useful for estimating dynamic transitions, quantification measures,9,23 and invariants, as highlighted by recent studies such as those by Haehne et al.24 and Börner et al.25 As opposed to recurrence plots, which require embedding, networks belong to the non-embedded methods.26 Notably, using an adjacency matrix in conjunction with the spring-electrical-force (SEF) model has shown promise in qualitatively reproducing attractors akin to the original phase-space representation.10 This has the potential advantage that invariant sets may be reconstructed from data alone if an optimal process of reconstruction using the SEF model can be found. To enable the reconstruction of an attractor, the time series needs to be in steady state (after transients have dissipated) and stationary (its joint probability distribution is invariant under time shifts).27 Also, noise plays an important role, and the extent to which attractor reconstruction depends on the degree of external perturbation remains an open question. Some early research on image reconstruction deals with perturbations and reprojection errors using spring-embedder algorithms28,29 and, later, the force-directed algorithm.30,31

A physically meaningful recurrence matrix depends on the proper embedding of the system’s dynamics, requiring estimates for parameters such as embedding dimension,32,33 time delay,14,34 and recurrence threshold criteria.19,35–48 Standard methods include estimating embedding dimension using global false nearest neighbors19,32 and time delay using average auto-mutual information.18,34 Sophisticated techniques for optimizing recurrence thresholds have been explored,22,37,38,40,49 but simpler approaches such as scaling phase-space diameter or examining RP diagonal structures are often used.11,15,35,36,50

Accurate embedding ensures that the RP retains the dynamical system’s essential information, including static and dynamic invariants.14,19,27 Entropy-based measures like Kullback–Leibler divergence51 or generalized forms (e.g., Tsallis or Rényi divergences52) are suitable for optimization, as they provide insights into hidden states.53–55 

Yang and Liu10 used the spring-electrical force (SEF) model10,56 to reconstruct phase space from network data in a self-organized manner. This model, which uses springs to simulate attraction and electrical forces for repulsion, allows the intrinsic geometry to emerge organically from random initial points.57 The approach effectively reveals the hidden dynamics of the system.58 In theory, self-organized systems are inherently self-optimizing and require no further improvement. In practice, however, particularly in numerical implementations of artificial self-organized models, additional optimization may be required, depending on the constraints and the cost function, and can be applied to achieve further improvements.59,60

Applying the force-directed algorithm (spring-electrical force model) with an arbitrary set of random initial conditions, as described by Yang and Liu,10 may not lead to an acceptable reconstructed attractor. To overcome this, we have added to their method an optimization of the initial conditions based on an optimized threshold search: moving iteratively backward from random initial conditions, we minimize the Kullback–Leibler divergence51 as a geometric measure of relative entropy. Further, we have projected the time series twice, thereby degrading the signal similarly to an observable, to approximate the effects of noise and to simulate the data-driven reconstruction of unknown systems. The procedure developed in this paper further aims to optimize the self-organized process of the spring-electrical force model, which is inherently optimized but usually employed in different applications and known to suffer from some deficiencies.

Figure 1 schematizes the steps of regenerating an attractor from an observable in phase space,14 its visualization through a complex network, e.g., as a horizontal visibility graph,26 the application of the spring-electrical force model, and the optimization procedure for obtaining the ideal set of initial conditions. Table I lists the main variables used in this paper.

FIG. 1.

Conceptual diagram of how to recover the attractor from a complex graph. Algorithms developed and provided in this paper are highlighted as Algorithm 1 (main paper) and Algorithms 1 and 2 (supplementary material II and III). After (a) phase space reconstruction (using an optimized threshold) and (b) extraction of the RP and its adjacency matrix, (c) a complex network as an undirected graph G(V, E) (with V, E being vertices and edges) is formed and visualized here following Ref. 26, which is iteratively updated using optimized initial conditions by minimizing the Kullback–Leibler divergence. Then, (d) the time series and the attractor based on an observable can be regenerated.

TABLE I.

Summary of Notations.

Notation Definition
d_m; τ  embedding dimension; time delay 
distENTR_m  mth order ENTR deviation distance 
y  initial coordinate set 
A  adjacency matrix of G 
C, p  SEF model parameters 
CRP  cross-recurrence plot 
D_KL  the Kullback–Leibler divergence 
ENTR  Shannon entropy of the frequency distribution of the diagonal line lengths 
G  an unweighted and undirected graph 
K  natural spring length 
P  end displacement of the force-directed algorithm 
R(X, d_m, τ, ε)  the recurrence matrix of X 
RTE  recurrence time entropy 
RP  recurrence plot 
S  the embedded time series of X 
X  a time series (abbrv. TS) 
δ  position updating magnitude 
η  learning rate 
ε  the recurrence threshold 
ε_Set  recurrence threshold set 
λ  the KL divergence threshold 
θ  force-directed algorithm stopping threshold 

Let X = {x_1, x_2, …, x_n} represent a time series consisting of n points, where x_i ∈ ℝ and 1 ≤ i ≤ n. Using fixed values for the embedding dimension d_m and time delay τ, the time series X can be reconstructed into a set of N = n − τ(d_m − 1) vector points, denoted by S = {s_1, s_2, …, s_N}. Each vector s_i ∈ S is defined as
s_i = (x_i, x_{i+τ}, …, x_{i+τ(d_m−1)}) ∈ ℝ^{d_m}, 1 ≤ i ≤ N. (1)

The embedding dimension d m and time delay τ are determined using: (a) The average auto-mutual information (MI) method,34,50 selecting τ as the first minimum of the MI values. (b) The global false nearest neighbor (FNN) algorithm,19,32 with a falseness threshold of below 1 % for nearest neighbors.
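As an illustration of step (a), the delay selection by the first minimum of the auto-mutual information can be sketched as follows. This is a minimal histogram-based MI estimator, not the TISEAN implementation used in this paper; the function names and the bin count are our own illustrative choices:

```python
import math

def auto_mutual_information(x, lag, bins=16):
    """Histogram estimate of the mutual information I(x_t; x_{t+lag})."""
    n = len(x) - lag
    lo, hi = min(x), max(x)
    width = (hi - lo) / bins or 1.0
    def idx(v):
        return min(int((v - lo) / width), bins - 1)
    px, py, pxy = {}, {}, {}
    for t in range(n):
        i, j = idx(x[t]), idx(x[t + lag])
        px[i] = px.get(i, 0) + 1
        py[j] = py.get(j, 0) + 1
        pxy[(i, j)] = pxy.get((i, j), 0) + 1
    # MI = sum over joint bins of p(i,j) * log( p(i,j) / (p(i) p(j)) )
    mi = 0.0
    for (i, j), c in pxy.items():
        p = c / n
        mi += p * math.log(p * n * n / (px[i] * py[j]))
    return mi

def first_minimum_delay(x, max_lag=50):
    """Return the first lag at which the auto-MI has a local minimum."""
    mis = [auto_mutual_information(x, lag) for lag in range(1, max_lag + 1)]
    for k in range(1, len(mis) - 1):
        if mis[k] < mis[k - 1] and mis[k] <= mis[k + 1]:
            return k + 1  # lags are 1-based
    return max_lag
```

For a periodic signal, the first MI minimum typically falls near a quarter of the period, the usual heuristic for the embedding delay.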

Based on the definition in Ref. 17, the recurrence plot of X can be expressed as
R(X, d_m, τ, ε)_{i,j} = H(ε − ||s_i − s_j||), i, j = 1, …, N, (2)
where H represents the Heaviside function and ||·|| is the distance measure, typically the maximum norm or the Euclidean norm; we use the latter in this study.61 Here, ε is the recurrence threshold (or cut-off distance15), indicating whether the distance between two points s_i and s_j is smaller than ε, marking them as recurrent (R(X)_{i,j} = 1) or not recurrent (R(X)_{i,j} = 0).
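Equations (1) and (2) can be sketched directly. This is a minimal brute-force implementation with illustrative function names, assuming the Euclidean norm used in this study:

```python
def embed(x, dm, tau):
    """Delay-embed a scalar series: s_i = (x_i, x_{i+tau}, ..., x_{i+tau*(dm-1)})."""
    n = len(x) - tau * (dm - 1)
    return [tuple(x[i + k * tau] for k in range(dm)) for i in range(n)]

def recurrence_matrix(x, dm, tau, eps):
    """Binary recurrence matrix R_ij = H(eps - ||s_i - s_j||), Euclidean norm."""
    s = embed(x, dm, tau)
    n = len(s)
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    return [[1 if dist(s[i], s[j]) <= eps else 0 for j in range(n)] for i in range(n)]
```

The O(N²) distance loop is kept for clarity; the adjacency matrix of the recurrence network is then obtained by zeroing the main diagonal.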

By selecting a proper embedding (here following Takens' embedding theorem18,27), all our analyses are based on observed dynamics regenerated in phase space.14 In contrast to an RP, a cross-recurrence plot (CRP) compares the states of two different systems and is defined as CR_{i,j} = Θ(ε − ||x_i − y_j||), where x_i and y_j are states from two different systems in their respective reconstructed phase spaces. The key difference is that an RP focuses on self-similarity within a single system, while a CRP examines shared dynamics or interactions between two systems. We will use the CRP to compare the attractors reconstructed in phase space by the method of Yang and Liu10 and by our proposed procedure, in which we aim to optimize the self-organized process of the spring-electrical force mechanism.

By Eq. (2), an N × N binary recurrence matrix R(X) is constructed, where R(X)_{i,j} = 1 (respectively, R(X)_{i,j} = 0) indicates that the states at times i and j are recurrent (respectively, not recurrent), i.e., the distance between them is no larger (respectively, larger) than ε.

Donner et al.9 and Marwan et al.23 constructed undirected, unweighted networks G = ( V , E )62 from the binary recurrence matrices R ( X ) by representing the embedded points ( s i S) as nodes ( V) and the recurrence relationships as edges ( E). We use V (resp. E) to denote the node set (respectively, edge/link set). According to Lacasa et al.,26 graphs can be represented differently, for illustration see Fig. 26 for the Rössler system.

Let A = (A_{i,j}), (i, j) ∈ {1, …, m} × {1, …, n}, be the adjacency matrix of G, A also being a binary matrix, with A_{i,j} = 1 (respectively, A_{i,j} = 0) meaning there is a link (respectively, no link) between node i and node j, where i, j ∈ V. A can be generated by excluding self-loops from R(X), i.e., replacing the main diagonal values with "0," as shown in Eq. (3),
A = R(X) − I, (3)
where I is the identity matrix of dimension m × n, with m = n in the case of the recurrence plot (and adjacency matrix) forming a square structure.

We present the method employed to optimize the recurrence threshold ε for the RP. As seen in Eq. (2), if the recurrence threshold ε is set too high, most points in the dataset will appear recurrent. Conversely, if ε is chosen to be too small, only a minimal number of points will be classified as recurrent.

To determine the optimal threshold for the time series X with fixed embedding dimension d_m and time delay τ, we define an interval of recurrence threshold values denoted by ε_Set = [ε_min, ε_max] with step size h, where ε_min, ε_max, h ∈ ℝ and h > 0. The total number of thresholds used is given by L = |ε_Set|.

To quantify the information content generated within the RPs, we use ENTR (also referred to as l-entropy). ENTR was introduced by Marwan11,17 and is calculated as the Shannon entropy of the frequency distribution of the diagonal line lengths l in the recurrence plots. For strongly varying diagonal line lengths, ENTR, as a measure of deterministic information generation, decreases.50 The formula for ENTR is defined as follows:17 
ENTR = −Σ_{l=l_min}^{N} p(l) ln p(l), (4)
where p(l) = P^ε(l) / Σ_{l=l_min}^{N} P^ε(l) and P^ε(l) is the frequency distribution of the lengths l of the diagonal structures in the RP with threshold ε. Among the various recurrence quantification measures available, we focus specifically on entropy-based measures. These measures are particularly effective in predicting hidden states,53–55 as they quantify uncertainty and identify patterns within noisy or incomplete data. They capture temporal dependencies, accommodate dynamic systems with evolving states, and signal regime shifts by identifying increases in disorder.
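A minimal sketch of Eq. (4): collect the diagonal line lengths from the upper triangle of the recurrence matrix and compute the Shannon entropy of their frequency distribution. The function names and the default l_min = 2 are our own choices:

```python
import math

def diagonal_line_lengths(R, l_min=2):
    """Lengths of diagonal runs of 1s in the upper triangle of a recurrence matrix."""
    n = len(R)
    lengths = []
    for k in range(1, n):              # offset of each diagonal above the main one
        run = 0
        for i in range(n - k):
            if R[i][i + k]:
                run += 1
            else:
                if run >= l_min:
                    lengths.append(run)
                run = 0
        if run >= l_min:
            lengths.append(run)
    return lengths

def entr(R, l_min=2):
    """Shannon entropy of the frequency distribution of diagonal line lengths."""
    lengths = diagonal_line_lengths(R, l_min)
    if not lengths:
        return 0.0
    total = len(lengths)
    counts = {}
    for l in lengths:
        counts[l] = counts.get(l, 0) + 1
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

Note that the main diagonal (line of identity) is excluded, as is customary in recurrence quantification.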
To evaluate the differences between the given recurrence thresholds (i.e., ε_Set) by ENTR, we define an mth order ENTR deviation distance63 denoted as distENTR_m as follows:
distENTR_m(i) = (1/m′) Σ_{j=1}^{m′} |ENTR(ε_{i+j}) − ENTR(ε_i)|, (5)
where L = |ε_Set|, m′ = min(m, L − i), and 1 ≤ i ≤ L.

In this context, distENTR_m represents the difference between the ENTR values of successive recurrence plots (i.e., of successive recurrence thresholds ε). Consequently, R(X, ε_i) and R(X, ε_j) can be considered similar when their ENTR deviation is small. Therefore, a recurrence threshold corresponding to the minimum value of distENTR_m can be selected as the optimized recurrence threshold. A brief numerical example illustrating this process is provided in supplementary material I. It should be noted that this choice of threshold differs from the optimal threshold used for the ideal reconstruction of the dynamics. This was necessary to examine the maximum possible change and information content required for the SEF model to reconstruct the attractor with acceptable quality.
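The threshold selection can be sketched as follows; since the exact form of Eq. (5) follows Ref. 63, the averaging over the m′ successors used here is one plausible reading, not necessarily the authors' exact definition:

```python
def dist_entr(entr_values, m):
    """m-th order ENTR deviation distance per threshold index: here read as the
    mean absolute ENTR change over the next m' = min(m, L - 1 - i) thresholds."""
    L = len(entr_values)
    out = []
    for i in range(L):
        mp = min(m, L - 1 - i)
        if mp == 0:
            out.append(float('inf'))   # last index has no successors to compare
            continue
        out.append(sum(abs(entr_values[i + j] - entr_values[i])
                       for j in range(1, mp + 1)) / mp)
    return out

def optimal_threshold(eps_set, entr_values, m=5):
    """Pick the threshold whose ENTR deviation distance is minimal."""
    d = dist_entr(entr_values, m)
    return eps_set[d.index(min(d))]
```

With ENTR values computed over ε_Set, the returned threshold corresponds to the flattest region of the ENTR curve.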

The optimal threshold is determined using the algorithm outlined in SI 1, which identifies the minimum value within the set distSet_m.

Here, we introduce the methods we used to optimize the initial displacements for the self-organizing process.10 

1. Force-directed recurrence networks

For a given time series X, Hui et al.10 proposed a self-organizing process to derive its geometric structure from the corresponding network. This approach utilizes the force-directed algorithm, also known as the spring-electrical-force algorithm.56 

Force-directed algorithm. For a graph G = ( V , E ), whose adjacency matrix is A, we use y to denote the coordinates (displacement or layout64) of the nodes in G, i.e., for a node i V, its coordinate is y ( i ).


To visualize the graph layout concerning A, the analogy of the spring-electrical model is employed, where the edges are treated as springs and the nodes are modeled as electrically charged particles.10,56 This principle is implemented in the force-directed algorithm. The goal of this algorithm is to minimize the energy in the network, mimicking the behavior of a physical system in equilibrium. As a result, the algorithm iteratively updates the nodes’ coordinates, with the direction and magnitude of each node’s movement determined by the net force acting on it.

The algorithm assigns two types of forces between nodes: repulsive and attractive. The repulsive force acts between each pair of nodes i ≠ j, where i, j ∈ V, and is given by f_r(i, j) = −C K^{1+p} / ||y(i) − y(j)||^p, where y(i) and y(j) are the coordinates of nodes i and j, respectively. The attractive force is applied between pairs of linked nodes, with A_{i,j} = 1 indicating that nodes i and j are connected by an edge, and is defined as f_a(i, j) = ||y(i) − y(j)||² / K. The combined force for each node in V can then be expressed as follows:
f(i, y) = Σ_{j≠i} f_r(i, j) (y(j) − y(i)) / ||y(j) − y(i)|| + Σ_{j: A_{i,j}=1} f_a(i, j) (y(j) − y(i)) / ||y(j) − y(i)||, (6)
where i ∈ V, y is the set of node coordinates, K is the natural spring length, C regulates the relative strength of the two forces, p controls the long-range repulsive force, and ||·|| denotes the distance between two nodes (for more details, see Ref. 10). The energy of the system is defined through
E(y) = Σ_{i∈V} ||f(i, y)||². (7)
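Equations (6) and (7) can be sketched as follows; the unit-vector bookkeeping and the small distance floor are our own implementation choices:

```python
def combined_force(i, y, A, K=1.0, C=0.2, p=1.0):
    """Net spring-electrical force on node i: repulsion from every other node,
    plus attraction along each incident edge."""
    fx = [0.0] * len(y[i])
    for j in range(len(y)):
        if j == i:
            continue
        diff = [a - b for a, b in zip(y[j], y[i])]
        d = max(sum(c * c for c in diff) ** 0.5, 1e-12)  # avoid division by zero
        unit = [c / d for c in diff]                     # unit vector i -> j
        mag = -C * K ** (1 + p) / d ** p                 # repulsion pushes i away
        if A[i][j]:
            mag += d ** 2 / K                            # attraction along edges
        fx = [f + mag * u for f, u in zip(fx, unit)]
    return fx

def energy(y, A, **kw):
    """System energy as the sum of squared net force magnitudes."""
    return sum(sum(c * c for c in combined_force(i, y, A, **kw))
               for i in range(len(y)))
```

For a single edge, the two forces balance at d = C^{1/(2+p)} K, where the net force, and hence the energy contribution, vanishes.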
Algorithm 1 shows the pseudo-code of this force-directed algorithm.
Algorithm 1

Force Directed Algorithm ( A, δ, θ, y)

 
 

2. Updated stop criteria

In Algorithm 1, we use A to represent the adjacency matrix generated from a known time series X. The variable δ denotes the node updating magnitude, also referred to as the cooling factor,56 which is updated according to different scenarios10 (lines 27–39). The stopping threshold is represented by θ, where the procedure terminates when the condition |E − E′| < θ is met (line 22). Additionally, based on Ref. 56, we update the stopping criterion (line 22) to include the minimum difference between two subsequent layouts, specifically ||y − y′|| < θ/K. The procedure concludes when both of the following conditions are satisfied: ||y − y′|| < θ/K and |E − E′| < θ. Here, y represents the initial coordinate set of each node in A, where |y| = |A| = N (line 1). For the force model parameters, we set K = 1, C = 0.2, and p = 1 (lines 3 and 4).10 

The system's energy, denoted by E and given by Eq. (7), is initially set to infinity (line 6). In lines 7 and 21, we introduce and then update the variable IterationNo, which tracks the direction of change in the system's energy. This value serves as an input parameter for controlling the node updating magnitude δ (lines 27–39). In line 8, the variable converged is introduced to manage the recursive procedure (lines 8–26). It is initialized as False (line 5).

In each iteration (lines 9–25), the system's energy and the node coordinates are updated based on the combined force [Eq. (6)] acting on each node. At the start of each iteration, the system energy from the previous iteration is stored as E′ (line 9), which is used to check the stopping criterion (line 22) and to adjust the updating magnitude (line 21). The system's energy for the current iteration is initialized to 0 (line 9). Before starting the force computation, the coordinate set y is saved as y′ (line 9), and the initial combined force for each node is set to 0 (line 10). In lines 12–17, the forces on each node in A are computed using Eq. (6), the coordinates are updated (line 18), and the system's energy is recalculated using Eq. (7) (line 19). Based on the conditions in lines 27–39, the magnitude δ is then updated (line 21).

Current problems of the self-organizing process. We identified problems when extracting the steady geometric structure from a network using the self-organizing process of Ref. 10, and these are addressed as follows:

  1. Optimal coordinates evaluation: In line 40 of Algorithm 1, the updated coordinates are returned once the energy change condition (line 22) is satisfied, but there is no evaluation of the optimality of the displacement y. However, the ideal displacement should be identical to the embedded time series of X66 (i.e., S, Sec. II A), meaning y = S. To address this, we apply the Kullback–Leibler divergence (KL divergence)67 to measure the divergence between the displacement returned by the algorithm, y, and the ideal displacement values, S. A small KL divergence value indicates closeness between the two.

  2. Initial coordinates generation: The initial coordinate set y of Algorithm 1 is randomly distributed in a high-dimensional space (embedding dimension d m, Sec. II A).10 The KL divergence value (i.e., the divergence between the returned y and S) can vary significantly with different sets of initial coordinates. To find the closest distance to S, the initial coordinates need to be optimized, as described in Sec. II D.

We propose an iterative learning optimization process for generating the initial coordinates. Given an embedded time series S, our goal is to derive a displacement P that is closest to S, i.e., that yields the smallest KL divergence value between P and S, leading to the optimal initial coordinate set y*. Note that P = ForceDirectedAlgorithm(A, δ, θ, y), where the input adjacency matrix A is generated from S, and y represents the corresponding initial coordinates.

Let D_KL denote the KL divergence value between the final displacement P generated by Algorithm 1 and S, computed as follows:68 
D_KL(P || S) = Σ_i P(i) ln (P(i) / S(i)). (8)
Let y* be the optimal initial coordinates corresponding to the displacement P*, which is closest to S and yields the smallest KL divergence value. For any randomly distributed initial coordinate set y, the deviation between y and y* is minimized, i.e., for all i ∈ y, minimize ||y(i) − y*(i)||. Thus, all points in y are pulled (or pushed) back to their optimal positions,
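A histogram-based sketch of Eq. (8) for one coordinate dimension; the bin count and the smoothing constant that avoids log(0) are our own assumptions:

```python
import math

def kl_divergence(P, S, bins=32):
    """Histogram-based KL divergence D_KL(P || S) between two 1-D point sets."""
    lo = min(min(P), min(S))
    hi = max(max(P), max(S))
    width = (hi - lo) / bins or 1.0
    def hist(values):
        h = [1e-12] * bins                     # smoothing: no empty bins
        for x in values:
            h[min(int((x - lo) / width), bins - 1)] += 1
        total = sum(h)
        return [c / total for c in h]
    p, q = hist(P), hist(S)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Identical point sets give a divergence of zero; the value grows as the two distributions separate.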
y′(i) = y(i) − η ∂f(P, S)/∂y(i), (9)
where 1 ≤ i ≤ |y|, f(P, S) is the KL divergence function [Eq. (8)], and η is the learning rate. η controls the updating magnitude and is updated according to the following scenarios:
  (i) If the KL divergence value remains nearly constant over 50 iterations (indicating that the optimization procedure has reached a critical point), the learning rate is decreased to η × 0.9. A smaller η refines the optimization process.

  (ii) If the KL divergence value increases within one iteration, the learning rate is increased to η / 0.9. This adjustment accelerates the updating of the coordinates toward their optimal position.

  (iii) In any other case, the current learning rate is maintained.
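The learning-rate schedule above can be sketched as a small update rule; the tolerance defining "nearly constant" is our own assumption:

```python
def update_learning_rate(eta, kl_history, window=50, tol=1e-4):
    """Adjust the learning rate from the recent KL divergence history:
    grow it after an increase, shrink it on a plateau, otherwise keep it."""
    if len(kl_history) >= 2 and kl_history[-1] > kl_history[-2]:
        return eta / 0.9                      # divergence went up: speed up
    if (len(kl_history) >= window and
            max(kl_history[-window:]) - min(kl_history[-window:]) < tol):
        return eta * 0.9                      # plateau reached: refine
    return eta
```

The schedule is stateless apart from the divergence history, so it can be called once per iteration of the learning loop.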

For more details on the iterative learning process, the pseudo-code is provided as Algorithm 2 in the supplementary material.

We evaluate the proposed algorithms on simple sinusoidal motion and the chaotic motion of the Rössler system.

Algorithms. We implement and evaluate the two optimization algorithms, for the recurrence threshold (Algorithm SI 1) and for the initial coordinates (Algorithm SI 2), respectively.

Settings. All recurrence plot-related computations are carried out with the "Commandline Recurrence Plots" tool.69 All parameters use their default values, except the embedding dimension, time delay, and neighborhood criterion, which are crucial parameters for embedding18,70 and are calculated using the Time Series Analysis package (TISEAN).33 We select m = 1, 2, 3, 4, 5 for the mth order ENTR deviation distance [Eq. (5)].

For the computation of the repulsive and attractive forces [Eq. (6)], we fix throughout this paper K = 1, C = 0.2, p = 1, θ = 1 × 10⁻⁵, and δ = C^{1/3} K_e / 5, where K_e is the average length of an edge in the physical force model, as explained in Ref. 56.

For the initial coordinates optimization procedure, unless otherwise specified, we set η = 0.05, and λ = 0.01.

All programs are implemented in Python and run under Linux. All experiments are performed on a machine with an AMD EPYC 2.4 GHz CPU running Red Hat Linux.

In this section, we illustrate the proposed approaches by analyzing the sinusoidal dynamics, employing
x(t) = A sin(ωt + φ) = A sin(2πf t + φ), (10)
where A is the amplitude, f is the frequency, ω = 2πf is the angular frequency, and φ is the phase. Specifically, we set A = 1, f = 8, φ = 0 and generate 500 points with the sampling interval Δt = 0.002.
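The test signal of Eq. (10) can be generated directly:

```python
import math

# Sinusoid x(t) = A*sin(2*pi*f*t + phi) with the parameters from the text:
# A = 1, f = 8, phi = 0, 500 samples at dt = 0.002.
A, f, phi, dt, n = 1.0, 8.0, 0.0, 0.002, 500
X = [A * math.sin(2 * math.pi * f * i * dt + phi) for i in range(n)]
```

At f = 8 Hz and Δt = 0.002, the 500 samples cover eight full periods with 62.5 samples per period.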

For the recurrence threshold optimization, we select ε_Set = (0, 3.0] with steps of 0.05. Since ε = 0 is meaningless, we choose the first value of ε_Set as 0.01, i.e., ε_min = 0.01 and ε_max = 3.0.

1. Evaluation of selecting optimal recurrence threshold

In constructing the recurrence plots, we use the time delay τ = 6 and embedding dimension d m = 3.

Evaluation of ENTR.

Figures 2(a)–2(d) show the recurrence plots for different thresholds (ε = 0.01, 0.7, 1.2, 2.0). With increasing threshold, more neighbors (black dots) become visible [Eq. (2)], i.e., the number of recurrences increases (Fig. 2), and an increase in the change of information can be measured, which is why the entropy grows [Eq. (4) and Fig. 3].

FIG. 2.

The recurrence plots of the sine function associated with different thresholds. (a) ε = 0.01, (b) ε = 0.7, (c) ε = 1.2, (d) ε = 2.0.

FIG. 3.

ENTR vs the threshold ε for the sine function.


For large enough ε, the number of recurrences increases only slowly, which affects the trend of ENTR over different ε: ENTR increases substantially at the beginning and then changes steadily. Figure 3 shows that ENTR can be used to detect differences between recurrence plots computed with various recurrence thresholds.

Evaluation of mth Order ENTR Deviation Distance. Figure 4 depicts the selection of the optimal recurrence threshold from ε_Set for the sine function by Algorithm SI 1, using ε_min = 0.01, ε_max = 3.0, and h = 0.05, where the mth order ENTR deviation distances distENTR_m are calculated for m = 1, 2, 3, 4, 5, respectively.

FIG. 4.

mth order ENTR deviation distance (m = 1, 2, 3, 4, 5) against the threshold ε for the sine time series.


The plots of distENTR_m exhibit similar behavior. At lower threshold values, the sensitivity is higher: a small increase in the threshold can result in capturing many more recurrences, leading to a larger variance (indicated by stronger oscillations) for all calculated values of m.

As the threshold increases, the changes in distENTR_m become relatively minimal (Fig. 4). We select the threshold that minimizes distENTR_5 (m = 5) as the optimal choice; in this case, the optimal threshold is ε = 1.2. This threshold is then used for subsequent computations, including the optimization of the initial coordinates (Sec. III A 2).

2. Evaluation of initial coordinates optimization

We demonstrate the validity of the displacement evaluation measure (i.e., KL divergence) and the optimization procedure for the initial coordinates (abbrv. ICs).

Evaluation of End Displacement with KL Divergence. Figure 5 illustrates the KL divergence distribution for all d m = 3 dimensions, based on 100 independent randomized initial coordinate sets. Each KL divergence value is computed from the corresponding end displacement of an initial coordinate set using Algorithm 1 and Eq. (8).

FIG. 5.

KL divergence distribution over 100 independent randomized initial coordinate sets. IC 30 has the smallest divergence, while IC 61 shows the largest. Examples of their performance are shown in the supplementary material.


For clarity, we also report the labels of the initial coordinate sets with the minimum and maximum KL divergence values, respectively. For all d m = 3 dimensions (over-embedding), the KL divergence values are generally grouped into two clusters, referred to as Group-A and Group-B. In Group-A, all KL divergence values exceed 1.0. We demonstrate in supplementary material IV to VI that not every initial coordinate set is capable of generating a displacement close to the time trace of the embedded time series.

Figures 5 and S1 (in supplementary material IV) illustrate that displacements with smaller KL divergence values are closer to their optimal displacements. This demonstrates that KL divergence can effectively be used to assess the proximity of the end displacement P to the ideal displacement S.

Evaluation of Initial Coordinates Optimization. We present the results of the initial coordinate (abbrv. IC) optimization [Algorithm (SI 2)] for Dimension 1 (or dimension X) of No. 30’s initial coordinate set generated in the previous section.

Figure 6 illustrates the iterative optimization process for dimension X of No. 30's initial coordinate set on the sine function. Starting from a random layout of 500 nodes in phase space, shown in Fig. 6(a), it takes approximately 1700 iterations to reach the stable structure, at which point D_KL remains steady. Figures 6(a) and 6(b) compare the end displacement P with the embedded time series S at various stages of the iteration. As D_KL decreases to around 0.06, the final displacement P closely approximates S. When the two distributions (P and S) are sufficiently close, the D_KL values converge, indicating that the final displacement P is nearly identical to the optimal displacement for the current IC.

FIG. 6.

Different iterations of the end displacement P of No. 30 vs the embedded time series S on the sine wave; after about 1757 iterations the solution has converged. (a) 1 iteration, (b) 1757 iterations.

To test whether our optimization findings also hold for irregular time series, we next investigated data generated with the Rössler model in its chaotic dynamic regime.71 The Rössler system equations are provided in the following:

dx/dt = -y - z, dy/dt = x + ay, dz/dt = b + z(x - c), (11)

where we employ the parameter values a = 0.2, b = 0.2, and c = 10.0, which produce a chaotic strange attractor.27

We observed the X-coordinate at every Δ t = 0.02 unit of time, generating a time series consisting of 2000 sample points.
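This sampling of Eq. (11) can be reproduced with a standard fourth-order Runge–Kutta integration; the initial condition and the transient length discarded below are illustrative assumptions:

```python
import numpy as np

def rossler_rhs(state, a=0.2, b=0.2, c=10.0):
    """Right-hand side of the Rossler system, Eq. (11)."""
    x, y, z = state
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4(f, state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.02
n_transient, n_samples = 5000, 2000
state = np.array([1.0, 1.0, 1.0])     # illustrative initial condition
for _ in range(n_transient):          # discard the transient
    state = rk4(rossler_rhs, state, dt)
xs = np.empty(n_samples)              # observe the X-coordinate only
for i in range(n_samples):
    state = rk4(rossler_rhs, state, dt)
    xs[i] = state[0]
```

The resulting 2000-point X-coordinate record is the kind of scalar observable from which the recurrence matrix and the embedding are built in the following.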

Figure 7 displays the time series of the coordinates along with the corresponding attractor. For the recurrence threshold optimization, we choose ε_set = [0.2, 9.0] with a step size of 0.4, i.e., ε_min = 0.2, ε_max = 9.0.

FIG. 7.

Rössler attractor with parameters set to a = 0.2, b = 0.2, c = 10.0 ( 2000 points, Δ t = 0.03).


1. Evaluation of selecting optimal recurrence threshold

In constructing the recurrence plots, we use a time delay of τ = 15 and an embedding dimension of d_m = 3 for the X-coordinate of the above sample.72

Evaluation of ENTR. Figure 8 illustrates the effect of the recurrence threshold on ENTR [Eq. (4)] for the Rössler time series, with ε ranging from 0.2 to 9.0. As ε increases, more recurrences are detected [see also Figs. 9(a)–9(d)]. When ε becomes sufficiently large, the relative number of recurrences increases only slightly. Together, these two factors shape the behavior of ENTR across different values of ε: ENTR rises sharply at first and then levels off, changing only gradually.
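A minimal sketch of this computation, delay embedding, thresholded recurrence matrix, and Shannon entropy of the diagonal line lengths, is given below; the line-counting is a straightforward reading of ENTR, and the exact implementation of Eq. (4) may differ in details such as the minimum line length:

```python
import numpy as np

def embed(x, dim, tau):
    """Time-delay embedding of a scalar series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def recurrence_matrix(vecs, eps):
    """Binary recurrence matrix: 1 where pairwise distance <= eps."""
    d = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def entr_diagonal(R, lmin=2):
    """Shannon entropy of the diagonal line length distribution."""
    n = R.shape[0]
    lengths = []
    for k in range(1, n):                 # upper-triangle diagonals
        run = 0
        for v in np.append(np.diagonal(R, offset=k), 0):  # trailing 0 flushes run
            if v:
                run += 1
            else:
                if run >= lmin:
                    lengths.append(run)
                run = 0
    if not lengths:
        return 0.0
    _, counts = np.unique(lengths, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

x = np.sin(np.linspace(0, 8 * np.pi, 400))   # short illustrative signal
V = embed(x, dim=3, tau=15)
R_small = recurrence_matrix(V, eps=0.1)
R_large = recurrence_matrix(V, eps=1.0)
entr_small = entr_diagonal(R_small)
entr_large = entr_diagonal(R_large)
```

Raising ε adds recurrence points and reshapes the diagonal-line distribution, which is exactly the dependence plotted in Fig. 8.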

Evaluation of the mth order ENTR deviation distance. Figure 10 shows the selection of the optimal recurrence threshold from ε_set for the Rössler time series using Algorithm (SI 1) with ε_min = 0.2, ε_max = 9.0, and h = 0.4. The mth order ENTR deviation distances dist_ENTR^m are calculated for m = 1, 2, 3, 4, 5, respectively. The Rössler model exhibits behavior similar to the sinusoidal dynamics: (i) when ε is small, the dist_ENTR^m values oscillate; (ii) when ε is large enough, the dist_ENTR^m values stabilize and change steadily. The optimal threshold is selected as the one with the minimum value of dist_ENTR^5 (m = 5), which corresponds to ε = 3.2. This setting is used for the subsequent computations, i.e., the optimization of initial coordinates (Sec. III B 2).
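The threshold-selection idea can be sketched as follows. Since Algorithm (SI 1) is given in the supplementary material, the definition below, the absolute m-th order finite difference of ENTR along the threshold grid, is only our illustrative reading of dist_ENTR^m, and the toy ENTR curve mimics the sharp rise and plateau of Fig. 8:

```python
import numpy as np

def mth_order_deviation(entr_values, m):
    """Absolute m-th order finite difference of ENTR along the threshold grid
    (illustrative reading of dist_ENTR^m; the exact rule is Algorithm SI 1)."""
    return np.abs(np.diff(entr_values, n=m))

# toy ENTR curve: sharp initial rise, then a plateau with small oscillations
eps_grid = np.arange(0.2, 9.0 + 1e-9, 0.4)     # eps_min=0.2, eps_max=9.0, h=0.4
entr = 2.0 * (1.0 - np.exp(-eps_grid)) + 0.01 * np.sin(5 * eps_grid)
d5 = mth_order_deviation(entr, m=5)
# pick the threshold where the 5th-order deviation is smallest
best = float(eps_grid[int(np.argmin(d5))])
```

The minimum of the high-order deviation distance falls where the ENTR curve has settled, which is the rationale for choosing the threshold on the plateau rather than in the oscillatory small-ε regime.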

FIG. 8.

ENTR vs the recurrence threshold ε on Rössler.

FIG. 9.

The recurrence plots of the Rössler model associated with different thresholds. (a) ε = 1.2, (b) ε = 3.2, (c) ε = 5.2, (d) ε = 9.0.

FIG. 10.

mth order ENTR deviation distance (m = 1, 2, 3, 4, 5) against the threshold ε on Rössler.


2. Evaluation of initial coordinates optimization

In this section, we demonstrate the correctness of the displacement evaluation measure (i.e., the KL divergence) and of the optimization procedure for the initial coordinates (ICs). The optimal threshold identified in Sec. III B 1 (i.e., ε = 3.2) is used for matrix generation (Algorithms 1 and 2 in supplementary material III).

Evaluation of End Displacement with KL Divergence. Figure 11 displays the KL divergence distribution for all d_m = 3 dimensions, computed over 100 independent randomized initial coordinate sets. Each KL divergence value is derived from the corresponding end displacement of an initial coordinate set using Algorithm 1 and Eq. (8). For convenience, we provide the labels of the initial coordinate sets with the minimum and maximum KL divergence values, respectively.

FIG. 11.

KL divergence distribution over 100 independent randomized initial coordinate sets, using the third dimension of the Rössler system. IC 72 shows the largest density, while the results for IC 85 are less pronounced. A comparison of the performance of these ICs is presented in the supplementary material.


Figure S2 in the supplementary material compares the end displacement P and the embedded time series S for each dimension, highlighting that not all initial coordinates generate a displacement that provides a good candidate for the time series.

Both Figs. 11 and S2 (in supplementary material V) demonstrate that displacements with smaller KL divergence values are closer to the optimal displacement; however, the result for No. 72 does not perform as well.

Evaluation of Initial Coordinates Optimization. We report the results of optimizing the initial coordinates (ICs) using Algorithm (SI 2) for initial coordinate set No. 85 generated previously, exemplified for dimension X of the Rössler system; see Fig. 12. Starting from a random initial layout of 2000 nodes, it takes about 2300 iterations to reach a stable structure, i.e., D_KL remains stable. Figures 12(a) and 12(b) show the structure of the final displacement P (blue line) compared to the reference time series S (black line) at different iterations. Similar to the sine function results, D_KL decreased to 0.03, and the final displacement P approximates S.

Evaluation of the overall performance using cross-recurrence quantification measures. In the following, we use joint recurrence plots and joint recurrence quantification measures on the Rössler attractor to compare the quality of the time series regenerated through the method suggested by Yang and Liu10 with that obtained using the optimization procedure outlined in this paper (Fig. 1). A joint recurrence plot compares two separate phase spaces. The embedding delay is estimated from the average cross-mutual information between the pairwise time series, yielding τ = 48 for the original time series against the time series generated with Ref. 10 and τ = 33 against the time series from our method. In both cases, a dimension of m = 3 is estimated using the global false nearest neighbor algorithm.19 Figure 13 shows a comparison of the original Rössler system (x-coordinate) and the reconstructed system as time series (a1, b1) and as attractors (a2, b2) using delay embedding, following (a) the procedure of Ref. 10 and (b) the optimization of initial conditions of the current study. Using a cross-recurrence quantification analysis (CRQA), Fig. 13(c) shows the recurrence time entropy and Fig. 13(d) the clustering coefficient of the trajectories shown in phase space in (a2) and (b2), plotted against each other. While the time series in both cases look acceptable [cf. (a1) and (b1)], high-frequency components (noise) have been introduced. The attractors can be reconstructed in their basic shape [cf. (a2) and (b2)], yet the parts that relate to the diverging trajectory cannot be fully recovered. Using the procedure of the current study, the phase and the size (the phase space diameter) of the attractor shown in Fig. 13(b2) could be corrected compared to that shown in Fig. 13(a2). However, due to the rescaling procedure, a shift has been introduced, which becomes apparent by comparing the ordinate values at the beginning of the time series in Figs. 13(a2) and 13(b2).
Recurrence time entropy (RTE) in CRQA measures the variability in the timing of interactions between two time series. A higher RTE indicates greater diversity and unpredictability in the recurrence times, reflecting more complex interacting dynamics; conversely, a lower RTE suggests more regularity and predictability in their interactions.17 As seen in Fig. 13(c), the RTE between the original Rössler system and the method of Yang and Liu10 is higher than that between the original Rössler system and our modified method. In CRQA, the clustering coefficient [depicted in Fig. 13(d)] measures the degree of local interconnectedness of recurrence points, expressing the coherence, stability, and structured nature of interactions between time series. A high clustering coefficient indicates strong, localized synchronization, deterministic dynamics, and a better correlation between the two time series, while a low value reflects weaker or more random coupling, which is the case for the original system compared to Ref. 10. Overall, the results in Fig. 13 suggest that the optimization approach introduced here constitutes an improvement over a mere self-organization process.
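For the clustering coefficient, a minimal computation on a binary adjacency (recurrence) matrix can be sketched as follows; this is the standard average local clustering coefficient, which we assume matches the CRQA usage up to normalization conventions:

```python
import numpy as np

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected network
    given by a binary adjacency matrix (self-loops are removed)."""
    A = np.array(adj, dtype=float)
    np.fill_diagonal(A, 0)
    deg = A.sum(axis=1)
    triangles = np.diag(A @ A @ A) / 2.0        # closed triangles per node
    possible = deg * (deg - 1) / 2.0            # connected neighbor pairs
    local = np.divide(triangles, possible,
                      out=np.zeros_like(triangles), where=possible > 0)
    return float(local.mean())

# toy network: a triangle (nodes 0, 1, 2) plus a pendant node 3 attached to 0
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
cc = clustering_coefficient(A)   # nodes 1, 2: C=1; node 0: C=1/3; node 3: C=0
```

Applied to a recurrence matrix, a larger value of this coefficient signals the denser local interconnection of recurrence points discussed above.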

FIG. 12.

Different Iterations of the End Displacement P (blue line) of No. 85 vs the Embedded Time Series S (black line) on Rössler. (a) 1 iteration, (b) 2293 iterations.


FIG. 13.

Comparison of the chaotic Rössler attractor: (a) original system vs Yang and Liu,10 (b) original system vs reconstructed attractor of the current study; (a1), (b1) time series; (a2), (b2) reconstructed phase space (τ = 47, m = 4, FAN threshold criterion, ε = 0.1, window size 150, window shift 5); and cross-recurrence quantification analysis: (c) recurrence time entropy and (d) clustering coefficient.


Research in networks and graphs has received significant attention in recent years; many quantifiers and metrics exist, especially for complex social networks.2,9 Incorporating these measures into physically explainable nonlinear time series analysis provides increased potential for data interpretation in complex dynamics.22,42

We found that the force-directed model10,56 alone does not guarantee an optimal reconstruction of the attractor and that its performance strongly depends on how the attractor is visualized in phase space. We therefore reshaped the network iteratively by minimizing the Kullback–Leibler divergence, as a relative entropic measure (i.e., a measure dependent on relative information generation), to optimize the initial conditions. The iterations are adjusted using a variable learning rate to control the update magnitude, which yields good agreement with the reference time series. The embedding is based on an optimized threshold and relies on the calculation of the mth order entropy deviation.

The application of a cross-recurrence quantification analysis to compare the original Rössler attractor with the method proposed by Yang and Liu10 and our new procedure shows that the new method produces a more similar attractor with less uncertainty, based on a network being clustered more closely relative to the original network of the Rössler system. The results here suggest that the optimization approach to initial conditions significantly improves both the robustness and quality of the attractor reconstruction method proposed by Yang and Liu.10 

While the results indicate that there exists, theoretically, an equivalence relation between the attractor in phase space generated from a time series and its graph, it is as yet unclear how this bijective mapping can be explicitly and accurately described so that the invariant set and its invariant measures remain preserved. For the results of Ref. 10, but also for our method presented here, this is not yet the case. Using a component of an embedded time series twice is akin to taking a derivative twice (i.e., a projection), and hence to introducing noise (losing an order of magnitude in accuracy).73 Noise, especially high-frequency noise, as commonly found in channel noise of real-life measurements, increases the phase space dimension of a low-dimensional dynamical system.19,27 While not tested here specifically on the two benchmark systems, and similar to attractor reconstruction in phase space, the self-organization effect of the spring-electrical force (SEF) model is affected by noise and low-dimensional warping effects.28,29

While the attractor is not fully recovered, the SEF method still works with some degradation but requires a threshold value different from that needed for conventional time-delay embedding, indicating that more information is required to reconstruct an attractor fulfilling the KLD optimization criterion. Our proposed method corrects the phase and slightly reduces the noise, especially in the divergent part of the dynamics (where the positive Lyapunov exponents become dominant). However, the non-uniform vertex density at the edges leads to warping effects (similar to signal distortion, but originating from a deterministic transformation process of attractive and repulsive forces) when applying the SEF model; one way to overcome this could be to combine the spring-electrical force model with the Kamada–Kawai stress model.74 The reason why the ideal threshold differs from that of normal delay embedding may lie in these edge effects and in the information required by the SEF model to reconstruct the attractor and to reach equilibrium in the presence of such perturbations. More research is required to establish whether the threshold value changes if the Kamada–Kawai stress model is employed. While the repeated application of the embedding theorems (trying to recover the attractor from one component of a time-delayed time series) somewhat mimics the features of an observable time series (i.e., introduces noise), the exact trajectory can only be approximated. A similar problem exists when embedding a time series,75 and it can be assumed that the extraction of the attractor follows similar approximations.

We investigated the impact of the recurrence threshold ε by varying it within a practical range ε_set = [ε_min, ε_max] and observing its effect on the recurrence matrix connectivity and the ENTR deviation distance. Our findings indicate that extremely small or large values of ε can lead to suboptimal representations: a small ε tends to under-connect nodes and lose global dynamic patterns, while a large ε over-connects nodes and introduces noise. However, the results remain robust within a moderate range of ε, where the reconstructed attractor reliably preserves the system’s topology. These observations underline the importance of selecting an appropriate ε tailored to the system under study.

For the initial conditions in the spring-electrical force (SEF) model, we conducted multiple optimization runs with randomized ICs to assess their impact. While ICs can cause slight variations in individual node positions, the overall attractor topology and network metrics (e.g., ENTR deviation distance) remain consistent when averaged over multiple runs. This consistency demonstrates that the method is robust to IC variability, ensuring reliable results even when ICs are not precisely optimized.

Regarding the delay time τ, we determined its value using the (average) auto-mutual information method. This approach ensures that reconstructed dimensions are minimally dependent on one another while retaining meaningful temporal correlations, which is essential for accurately capturing the system’s dynamics.

It remains to be shown in the future how accurately reconstructed invariant sets allow estimating invariant measures, which should then be compared with static and dynamic invariants, such as correlation dimension estimates or estimates of Rényi entropies for further verification.

While the mth order entropy is used here as the optimal threshold criterion, it should be tested whether the results of attractor reconstruction can be further improved by measuring statistical complexity and determining the degree of irrelevance43 (similar to anti-optimization), including border effect corrections to obtain noise-robust diagonal line entropy values,47 or by employing Monte Carlo decision tree search for optimal embedding.48 Future research directions should also include the study of real-life systems and the systematic exploration of how features are preserved in the presence of noise, using networked rather than just individual systems. In this way, the reconstruction of network clusters/aggregation, coherence, or chimera states in complex systems could be trialed using the self-organization property of the force-directed algorithm.

The optimization procedure, including the search for optimal initial conditions, requires substantial computational resources even for small systems. The computational complexity of the spring-electrical-force model ranges from O(N) for sparse matrix solutions to O(N³) for dense matrix solutions. A major computational challenge lies in the calculation of pairwise distances during phase-space reconstruction, a critical step in generating the recurrence matrix. This process has a theoretical time complexity of O(N²), where N is the number of data points. For large datasets, this quadratic scaling can quickly become a bottleneck in terms of both time and memory usage.

To mitigate this issue, we implemented several optimization techniques, such as reducing redundant calculations, leveraging vectorized operations, and introducing parallel processing (refer to lines 5–8 of Algorithm 1). Although the theoretical complexity remains O(N²), these practical optimizations significantly reduced the actual computational time and resource consumption, particularly for large datasets. By combining these techniques, we maintained high reconstruction accuracy while improving the feasibility of handling larger datasets.

In addition, the iterative optimization steps in the spring-electrical-force (SEF) model, which refine node positions by minimizing the system energy, also contribute to the computational burden, with a complexity again ranging from O(N) for sparse to O(N³) for dense matrix solutions. Each iteration involves repeated updates to node positions based on their connections, with the cost scaling with the number of nodes and edges in the network. To balance displacement optimization and computational efficiency, especially for large datasets, we introduced stopping criteria based on both iteration count and runtime: the optimization stops and returns the best node positions obtained so far if either the number of iterations exceeds a predefined threshold or the runtime surpasses a set time constraint. By carefully selecting these thresholds, we achieved an effective trade-off between computational efficiency and the accuracy of the reconstructed network.

However, further research is needed to explore networked systems and real-world measurements. For practical applications, it will be essential to develop a more efficient computational framework that leverages software engineering (e.g., heuristics for efficient neighborhood search), diverse hardware architectures, and distributed systems (e.g., parallelization through field-programmable gate arrays).

See the supplementary material (I–VII), provided as a separate document, for additional algorithms and examples illustrating the optimization and self-organization process for the sine and the Rössler system.

This research was supported under the Australian Research Council’s (ARC) Discovery Projects funding scheme (Project Nos. DP200100358 and DP240101536).

The authors have no conflicts to disclose.

Conggai Li: Data curation (lead); Formal analysis (lead); Investigation (lead); Methodology (equal); Software (lead); Validation (equal); Visualization (equal); Writing – original draft (lead). Joseph C. S. Lai: Conceptualization (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Supervision (equal); Validation (equal); Writing – review & editing (equal). Sebastian Oberst: Conceptualization (lead); Data curation (supporting); Formal analysis (supporting); Funding acquisition (lead); Investigation (equal); Methodology (equal); Project administration (lead); Resources (lead); Supervision (equal); Validation (equal); Visualization (supporting); Writing – original draft (supporting); Writing – review & editing (equal).

Pseudo-code and all model details are contained within this paper. Additional information is available from the corresponding author upon reasonable request.

1. E. Bullmore and O. Sporns, “Complex brain networks: Graph theoretical analysis of structural and functional systems,” Nat. Rev. Neurosci. 10, 186–198 (2009).
2. J. Scott, “Social network analysis: Developments, advances, and prospects,” Soc. Netw. Anal. Min. 1, 21–26 (2011).
3. C. Li, F. Zhang, Y. Zhang, L. Qin, W. Zhang, and X. Lin, “Discovering fortress-like cohesive subgraphs,” Knowl. Inf. Syst. 63, 3217–3250 (2021).
4. M. McCullough, M. Small, H. Iu, and T. Stemler, “Multiscale ordinal network analysis of human cardiac dynamics,” Philos. Trans. R. Soc. A: Math., Phys. Eng. Sci. 375, 20160292 (2017).
5. M. Huang, Z. Sun, R. V. Donner, J. Zhang, S. Guan, and Y. Zou, “Characterizing dynamical transitions by statistical complexity measures based on ordinal pattern transition networks,” Chaos 31, 033127 (2021).
6. L. Lacasa, B. Luque, F. Ballesteros, J. Luque, and J. C. Nuno, “From time series to complex networks: The visibility graph,” Proc. Natl. Acad. Sci. U.S.A. 105, 4972–4975 (2008).
7. B. Luque, L. Lacasa, F. Ballesteros, and J. Luque, “Horizontal visibility graphs: Exact results for random time series,” Phys. Rev. E 80, 046103 (2009).
8. X. Xu, J. Zhang, and M. Small, “Superfamily phenomena and motifs of networks induced from time series,” Proc. Natl. Acad. Sci. U.S.A. 105, 19601–19605 (2008).
9. R. V. Donner, Y. Zou, J. F. Donges, N. Marwan, and J. Kurths, “Recurrence networks—A novel paradigm for nonlinear time series analysis,” New J. Phys. 12, 033025 (2010).
10. H. Yang and G. Liu, “Self-organized topology of recurrence-based complex networks,” Chaos 23, 043116 (2013).
11. N. Marwan, Encounters with Neighbours: Current Developments of Concepts Based on Recurrence Plots and Their Applications (University of Potsdam, 2003).
12. C. Geier, M. Stender, and N. Hoffmann, “Building functional networks for complex response analysis in systems of coupled nonlinear oscillators,” J. Sound Vib. 590, 118544 (2024).
13. J.-P. Eckmann, S. Kamphorst, and D. Ruelle, “Recurrence plots of dynamical systems,” Europhys. Lett. 5, 973–977 (1987).
14. H. D. I. Abarbanel, T. A. Carroll, L. M. Pecora, J. J. Sidorowich, and L. S. Tsimring, “Predicting physical variables in time-delay embedding,” Phys. Rev. E 49, 1840–1853 (1994).
15. J. P. Zbilut and C. L. Webber, Jr., “Embeddings and delays as derived from quantification of recurrence plots,” Phys. Lett. A 171, 199–203 (1992).
16. J. P. Zbilut, J.-M. Zaldivar-Comenges, and F. Strozzi, “Recurrence quantification based Liapunov exponents for monitoring divergence in experimental data,” Phys. Lett. A 297, 173–181 (2002).
17. N. Marwan, M. Carmen Romano, M. Thiel, and J. Kurths, “Recurrence plots for the analysis of complex systems,” Phys. Rep. 438, 237–329 (2007).
18. N. Marwan, S. Schinkel, and J. Kurths, “Recurrence plots 25 years later—Gaining confidence in dynamical transitions,” EPL (Europhys. Lett.) 101, 20007 (2013).
19. S. Oberst and J. Lai, “A statistical approach to estimate the Lyapunov spectrum in disc brake squeal,” J. Sound Vib. 334, 120–135 (2015).
20. N. Marwan and C. L. Webber, “Mathematical and computational foundations of recurrence quantifications,” in Recurrence Quantification Analysis (Springer, 2015), pp. 3–43.
21. I. Andreadis, A. D. Fragkou, and T. E. Karakasidis, “On a topological criterion to select a recurrence threshold,” Chaos 30, 013124 (2020).
22. Y. Zou, R. V. Donner, N. Marwan, J. F. Donges, and J. Kurths, “Complex network approaches to nonlinear time series analysis,” Phys. Rep. 787, 1–97 (2019).
23. N. Marwan, J. F. Donges, Y. Zou, R. V. Donner, and J. Kurths, “Complex network approach for recurrence analysis of time series,” Phys. Lett. A 373, 4246–4254 (2009).
24. H. Haehne, J. Casadiego, J. Peinke, and M. Timme, “Detecting hidden units and network size from perceptible dynamics,” Phys. Rev. Lett. 122, 158301 (2019).
25. G. Börner, H. Haehne, J. Casadiego, and M. Timme, “Revealing system dimension from single-variable time series,” Chaos 33, 073136 (2023).
26. L. Lacasa, B. Luque, F. Ballesteros, J. Luque, and J. C. Nuño, “From time series to complex networks: The visibility graph,” Proc. Natl. Acad. Sci. U.S.A. 105, 4972–4975 (2008).
27. H. Kantz and T. Schreiber, Nonlinear Time Series Analysis (Cambridge University Press, 2004).
28. P. Eades, “A heuristic for graph drawing,” Congressus Numerantium 42, 149–160 (1984), see https://www.cs.ubc.ca/∼will/536E/papers/Eades1984.pdf.
29. T. Kamada and S. Kawai, “An algorithm for drawing general undirected graphs,” Inf. Process. Lett. 31, 7–15 (1989).
30. A. Efrat, D. Forrester, A. Iyer, S. G. Kobourov, C. Erten, and O. Kilic, “Force-directed approaches to sensor localization,” ACM Trans. Sen. Netw. 7, 27 (2010).
31. Z. Huang, J. Wu, W. Zhu, Z. Wang, S. Mehrotra, and Y. Zhao, “Visualizing complex networks by leveraging community structures,” Phys. A: Stat. Mech. Appl. 565, 125506 (2021).
32. M. B. Kennel, R. Brown, and H. D. Abarbanel, “Determining embedding dimension for phase-space reconstruction using a geometrical construction,” Phys. Rev. A 45, 3403 (1992).
33. R. Hegger, H. Kantz, and T. Schreiber, “Practical implementation of nonlinear time series methods: The TISEAN package,” Chaos 9, 413–435 (1999).
34. A. M. Fraser and H. L. Swinney, “Independent coordinates for strange attractors from mutual information,” Phys. Rev. A 33, 1134 (1986).
35. M. Koebbe, G. Mayer-Kress, and J. Zbilut, “Use of recurrence plots in the analysis of time-series data,” in Santa Fe Institute Studies in the Sciences of Complexity—Proceedings Volume (Citeseer, 1992), Vol. 12, pp. 361–361.
36. G. M. Mindlin and R. Gilmore, “Topological analysis and synthesis of chaotic time series,” Phys. D: Nonlinear Phenom. 58, 229–242 (1992).
37. L. Matassini, H. Kantz, J. Hołyst, and R. Hegger, “Optimizing of recurrence plots for noise reduction,” Phys. Rev. E 65, 021102 (2002).
38. M. Thiel, M. C. Romano, J. Kurths, R. Meucci, E. Allaria, and F. T. Arecchi, “Influence of observational noise on the recurrence quantification analysis,” Phys. D: Nonlinear Phenom. 171, 138–152 (2002).
39. F. Strozzi, E. Gutierrez, C. Noè, T. Rossi, M. Serati, and J. M. Zaldívar, “Application of non-linear time series analysis techniques to the Nordic spot electricity market data,” Liuc Papers, 1–51 (2007).
40. S. Schinkel, O. Dimigen, and N. Marwan, “Selection of recurrence threshold for signal detection,” Eur. Phys. J. Spec. Top. 164, 45–53 (2008).
41. Y. Hirata, S. Horai, and K. Aihara, “Reproduction of distance matrices and original time series from recurrence plots and their applications,” Eur. Phys. J. Spec. Top. 164, 13–22 (2008).
42. R. V. Donner, Y. Zou, J. F. Donges, N. Marwan, and J. Kurths, “Ambiguities in recurrence-based complex network representations of time series,” Phys. Rev. E 81, 015101 (2010).
43. L. Uzal, G. Grinblat, and P. Verdes, “Optimal reconstruction of dynamical systems: A noise amplification approach,” Phys. Rev. E: Stat., Nonlinear, Soft Matter Phys. 81, 016223 (2011).
44. D. Eroglu, N. Marwan, S. Prasad, and J. Kurths, “Finding recurrence networks’ threshold adaptively for a specific time series,” Nonlinear Process. Geophys. 21, 1085–1092 (2014).
45. D. Yang, W.-X. Ren, Y.-D. Hu, and D. Li, “Selection of optimal threshold to construct recurrence plot for structural operational vibration measurements,” J. Sound Vib. 349, 361–374 (2015).
46. E. Ngamga, D. Senthilkumar, A. Prasad, P. Parmananda, N. Marwan, and J. Kurths, “Distinguishing dynamics using recurrence-time statistics,” Phys. Rev. E 85, 026217 (2012).
47. K. H. Kraemer and N. Marwan, “Border effect corrections for diagonal line based recurrence quantification analysis measures,” Phys. Lett. A 383, 125977 (2019).
48. K. H. Kraemer, M. Gelbrecht, I. Pavithran, R. I. Sujith, and N. Marwan, “Optimal state space reconstruction via Monte Carlo decision tree search,” Nonlinear Dyn. 108, 125977 (2022).
49. K. H. Kraemer, R. V. Donner, J. Heitzig, and N. Marwan, “Recurrence threshold selection for obtaining robust recurrence characteristics in different embedding dimensions,” Chaos 28, 085720 (2018).
50. S. Oberst and J. Lai, “Chaos in brake squeal noise,” J. Sound Vib. 330, 955–975 (2011).
51. J. Zeng, U. Kruger, J. Geluk, X. Wang, and L. Xie, “Detecting abnormal situations using the Kullback–Leibler divergence,” Automatica 50, 2777–2786 (2014).
52. T.-K. L. Wong and J. Zhang, “Tsallis and Rényi deformations linked via a new λ-duality,” IEEE Trans. Inf. Theory 68, 5353–5373 (2022).
53. M. Stender, S. Oberst, M. Tiedemann, and N. Hoffmann, “Complex machine dynamics: Systematic recurrence quantification analysis of disk brake vibration data,” Nonlinear Dyn. 97, 2483–2497 (2019).
54. Z. Zhang, “Entropy-based statistics and their applications,” Entropy 25, 936 (2023).
55. M. R. Sales, M. Mugnaine, J. D. Szezech, R. L. Viana, I. L. Caldas, N. Marwan, and J. Kurths, “Stickiness and recurrence plots: An entropy-based approach,” Chaos 33, 033140 (2023).
56. Y. Hu, “Efficient, high-quality force-directed graph drawing,” Math. J. 10, 37–71 (2005).
57. T. M. J. Fruchterman and E. M. Reingold, “Graph drawing by force-directed placement,” Softw.: Pract. Exp. 21, 1129–1164 (1991).
58. S. Camazine, J.-L. Deneubourg, N. R. Franks, J. Sneyd, G. Theraulaz, and E. Bonabeau, Self-Organization in Biological Systems (Princeton Studies in Complexity, 2001).
59. D. Helbing and T. Vicsek, “Optimal self-organization,” New J. Phys. 1, 13 (1999).
60. D. B. Brückner and G. Tkačik, “Information content and optimization of self-organized developmental systems,” Proc. Natl. Acad. Sci. U.S.A. 121, e2322326121 (2024).
61. When the context is clear, we may omit some parameters in the notation, e.g., writing R(X) instead of R(X, d_m, τ, ε).
62. In this paper, we use graph and network interchangeably.
63. T. E. Karakasidis, I. Andreadis, and A. D. Fragkou, “On a topological classification of recurrence plots: Application to noise perturbed molecular dynamics time series,” Chaos 29, 023113 (2019).
64. In this paper, we use coordinate, displacement, and layout interchangeably.
65. In this paper, the terms coordinate, displacement, and layout are used interchangeably.
66. Here, X refers to the time series used to generate the input graph/adjacency matrix for the force-directed algorithm (Algorithm 1).
67. I. Csiszár, “I-divergence geometry of probability distributions and minimization problems,” Ann. Probab. 3, 146–158 (1975), see http://www.jstor.org/stable/2959270.
68. D. J. MacKay, Information Theory, Inference and Learning Algorithms (Cambridge University Press, 2003).
69. N. Marwan, “Commandline recurrence plots” (2006), Version 1.13z.
70. T. Hertrampf and S. Oberst, “Recurrence rate spectrograms for the classification of nonlinear and noisy signals,” Phys. Scr. 99, 035223 (2024).
71. O. E. Rössler, “An equation for continuous chaos,” Phys. Lett. A 57, 397–398 (1976).
72. In the following, when the context is clear, we refer to the “embedded time series of the X-coordinate” when mentioning the “Rössler system.”
73. R. Gilmore and M. Lefranc, Topology Analysis of Chaos (Wiley VCH Verlagsgesellschaft, 2002).
74. Y. Hu and Y. Koren, “Extending the spring-electrical model to overcome warping effects,” in 2009 IEEE Pacific Visualization Symposium (IEEE, 2009), pp. 129–136.
75. H. Kato, “Takens-type reconstruction theorems of one-sided dynamical systems,” Nonlinearity 36, 1571 (2023).