Permutation Entropy (PE) is a cost-effective tool for summarizing the complexity of a time series. It has been used in many applications including damage detection, disease forecasting, detection of dynamical changes, and financial volatility analysis. However, to successfully use PE, an accurate selection of two parameters is needed: the permutation dimension n and embedding delay τ. These parameters are often suggested by experts based on a heuristic or by a trial-and-error approach. Both of these methods can be time-consuming and lead to inaccurate results. In this work, we investigate multiple schemes for automatically selecting these parameters with only the corresponding time series as the input. Specifically, we develop a frequency-domain approach based on the least median of squares and the Fourier spectrum, as well as extend two existing methods, Permutation Auto-Mutual Information Function and Multi-scale Permutation Entropy (MPE), for determining τ. We then compare our methods as well as current methods in the literature for obtaining both τ and n against expert-suggested values in published works. We show that the success of any method in automatically generating the correct PE parameters depends on the category of the studied system. Specifically, for the delay parameter τ, we show that our frequency approach provides accurate suggestions for periodic systems, nonlinear difference equations, and electrocardiogram/electroencephalogram data, while the mutual information function computed using adaptive partitions provides the most accurate results for chaotic differential equations. For the permutation dimension n, both False Nearest Neighbors and MPE provide accurate values for most of the systems, with n=5 being suitable in most cases.

Permutation Entropy (PE) is a simple yet very effective tool for studying time series of dynamical systems. It provides an information statistic that measures the complexity of a time series through the probability of unique ordinal patterns, called permutations, found within the time series. These permutations are symbolic representations obtained by encoding consecutive subsets of the data of a certain length using their ordinal ranking. However, the success of PE depends on the selection of both the spacing (delay τ) and size (dimension n) of these permutations. Despite the wide use of PE, it is often unclear how these parameters should be selected, with the most common approach relying on trial and error. This can lead to inaccurate results, and it can prevent applying PE to large datasets in the absence of automatic parameter selection algorithms. In this work, we investigate various methods for automatically selecting both τ and n. In addition to developing novel methods that facilitate the parameter selection, we also assess the accuracy of using classical time series tools for identifying permutation entropy parameters. The success of each of the investigated methods in computing τ and n is determined based on a comparison with the corresponding values suggested by domain experts.

Permutation Entropy (PE) has its origins in information entropy, which is a tool to quantify the uncertainty in an information-based system. Information entropy was first introduced by Shannon46 in 1948 as Shannon entropy. Specifically, Shannon entropy measures the uncertainty in future data given the probability distribution of the data types in the original, finite dataset. Shannon entropy is calculated as $H_s(n) = -\sum_i p(x_i) \log p(x_i)$, where $x_i$ represents a data type and $p(x_i)$ is the probability of that data type. In recent years, information entropy has been heavily applied to the time series of dynamical systems. Several new variations of information entropy have been proposed to better accommodate these applications, e.g., approximate entropy,40 sample entropy,44 and PE.4 These methods measure the predictability of a sequence through the entropy of the relative data types. However, PE considers the ordinal position of the data (permutations), which has been shown to be effective for analyzing the dynamic state and complexity of a time series.1,5,11,18,21,22,33,37 PE is also robust to noise for time series of sufficient length and relatively high signal-to-noise ratios, where the signal-to-noise ratio is the ratio between the useful signal and the background noise. Alternatively, if the time series is relatively short or has a low signal-to-noise ratio, it is suggested to use a different entropy measurement such as coarse-grained entropies.42 PE is quantified in a similar fashion to Shannon entropy with only a change in the data type to permutations (see Fig. 2), which we symbolically represent as $\pi_i$. PE has two parameters: the permutation dimension n and embedding delay τ, which are used when selecting the permutation size and spacing, respectively. PE is sensitive to these parameters30,45,48 and there is no selection approach that is accurate for all applications. This introduces the motivation for this paper: investigate automatic methods for selecting both PE parameters.
There are currently three main methods for selecting PE parameters: (1) parameters suggested by experts for a specific application, (2) trial and error to find suitable parameters, or (3) methods developed for phase space reconstruction. We will now overview a simple example to better understand these parameters.

Bandt and Pompe4 defined PE according to

$H(n) = -\sum_{i} p(\pi_i) \log p(\pi_i),$
(1)

where $p(\pi_i)$ is the probability of a permutation $\pi_i$ and $H(n)$ is the permutation entropy of dimension n, with units of bits when the logarithm is of base 2. The permutation entropy parameters τ and n are used when selecting the motif size, with τ determining the time difference between two consecutive points in a uniformly sub-sampled time series and n the permutation length or motif dimension. To form a permutation, we begin with an element $x_i$ of the series X. Using this element, the dimension n, and delay τ, we define the vector $v_i = [x_i, x_{i+\tau}, x_{i+2\tau}, \ldots, x_{i+(n-1)\tau}]$. The corresponding permutation $\pi_i$ of this vector is determined using its ordinal pattern. For example, consider the permutation of dimension $n=3$ shown in Fig. 1.

FIG. 1.

Sample permutation formation for n=3 and τ=1.

FIG. 2.

All possible permutation configurations for n=3.


The permutation type, which categorizes the permutation, is found by first ordering the n values of the permutation from smallest to largest and then recording the original positions in that order. For the given permutation in Fig. 1, the resulting permutation is categorized as the sequence $\pi_i=(1,0,2)$, which is one of the $n!$ possible permutations for a dimension n; see Fig. 2 for the other possible permutations of $n=3$.
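To make the encoding concrete, the ordinal pattern of a vector can be computed with an argsort; the sketch below uses hypothetical sample values chosen so that the result matches the $\pi_i=(1,0,2)$ example:

```python
import numpy as np

def ordinal_pattern(v):
    # argsort returns, for each rank (smallest to largest),
    # the index of the element holding that rank
    return np.argsort(v, kind="stable").tolist()

# hypothetical values whose middle entry is the smallest
print(ordinal_pattern([5, 2, 8]))  # -> [1, 0, 2]
```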

We can normalize PE using the maximum possible PE value, which occurs when all $n!$ possible permutations are equiprobable according to $p(\pi_1) = p(\pi_2) = \cdots = p(\pi_{n!}) = \frac{1}{n!}$. The resulting normalized PE is

$h_n = -\frac{1}{\log_2 n!} \sum_{i} p(\pi_i) \log_2 p(\pi_i).$
(2)

A toy example demonstrating the calculation of $h_n$ is provided in Appendix A.
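A minimal Python sketch of Eq. (2) follows; the function name and the brute-force embedding loop are our own illustrative choices, not the implementation released with this work:36

```python
from math import factorial

import numpy as np

def permutation_entropy(x, n=3, tau=1):
    """Normalized permutation entropy h_n of Eq. (2)."""
    x = np.asarray(x, dtype=float)
    m = len(x) - (n - 1) * tau
    # delay vectors v_i = [x_i, x_{i+tau}, ..., x_{i+(n-1)tau}]
    emb = np.array([x[i : i + (n - 1) * tau + 1 : tau] for i in range(m)])
    patterns = np.argsort(emb, axis=1, kind="stable")  # ordinal patterns
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) / np.log2(factorial(n)))
```

A monotone series contains a single pattern and gives $h_n = 0$, while white noise makes all $n!$ patterns nearly equiprobable and gives $h_n \approx 1$.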

Many domain scientists who apply PE make general suggestions for n and τ,19,53 which can be impractical for some applications. As an example, Popov et al.41 showed the influence of the sampling frequency on the proper selection of τ. As for the dimension n, there are general suggestions45 on how to choose its value based on the vast majority of applications having an appropriate permutation dimension in the range $3 < n < 8$. Additionally, Bandt and Pompe4 suggest that $N \gg n!$, where N is the length of the time series. However, these general outlines for the selection of n (and τ) do not allow for an application-specific suggestion.

If we assume that suitable PE parameters correspond to optimal phase space reconstruction parameters, then a common approach for selecting τ and n is to implement one of the existing methods for estimating the optimal Takens’ embedding50 parameters. Hence, some of the common methods for determining τ include the mutual information function approach,20 the first folding time of the autocorrelation function,8,23 and phase space methods.10 Additionally, some common phase space reconstruction methods for determining n include box-counting,7 the correlation exponent method,23 and false nearest neighbors (FNN).25 Although the parameters in PE have similar names to their delay-reconstruction counterparts, there are innate differences between ordinal patterns and phase space reconstruction, which can lead to inaccurate n or τ values. In spite of these differences, permutations can be viewed as a symbolic representation of regions in the phase space through a binning process: permutations partition the phase space based on the ordinal rankings of the embedded vectors. This relationship between the phase space and permutations suggests that some of the classic phase space reconstruction methods for selecting n and τ may be plausible tools for selecting the same parameters for PE.

Even with the possibility that phase space reconstruction methods for selecting τ and n may work for choosing the synonymous parameters of PE, there are a few practical issues that preclude using parameters from time series reconstruction for PE. One issue stems from many of the methods (e.g., false nearest neighbors and mutual information) still requiring some degree of user input through either a parameter setting or user interpretation of the results. This introduces issues for practitioners working with numerous datasets or those without enough expertise in the subject area to interpret the results. Another issue that arises in practice is that the algorithmic implementation of existing time series analysis tools is nontrivial. This hinders these tools from being autonomously applied to large datasets. For example, the first minimum of the Mutual Information (MI) function is often used to determine τ. However, in practice, there are limitations to using mutual information to analyze data without operator intervention to sift through the minima and choose the first "prominent" one. This is because the mutual information function can have small kinks that can be erroneously picked up as the first minimum. Figure 3(a) shows this situation, where the first minimum of the mutual information function for a periodic Lorenz system is actually an artifact and the actual delay should be at the prominent minimum with τ=11. Furthermore, the mutual information function approach may also fail if the mutual information is monotonic. This is a possibility since there is no guarantee that minima exist for mutual information.3 An example of this mode of failure is shown in Fig. 3(b), which was generated using electroencephalogram (EEG) data2 from a patient during a seizure.

FIG. 3.

Some possible modes for failure for selecting τ for phase space reconstruction using classical methods: (a) mutual information registering false minima as suitable delay generated from a periodic Lorenz system, (b) mutual information being mostly monotonic and not having a distinct local minimum to determine τ generated from EEG data,2 and (c) autocorrelation failing from a moving average of ECG data provided by the MIT-BIH Arrhythmia Database.35 


A mode of failure for the autocorrelation method can occur when the time series is non-linear or has a moving average. In this case, the autocorrelation function may reach the folding time at an unreasonably large value for τ. As an example, Fig. 3(c) shows the autocorrelation not reaching the folding time of ρ=1/e until a delay of τ=283 for electrocardiogram (ECG) data provided by the MIT-BIH Arrhythmia Database.35 The last mode of failure concerns choosing the permutation dimension n to be equal to the embedding dimension optimized using delay embedding from time series analysis. This can lead to an overly large embedding dimension12 (n8), which would make the calculation of PE impractical because the number of possible permutations n! would become too large.

All of these possible modes of failure can make using classical phase space methods for selecting τ and n unreliable thus necessitating new tools or modifications to make selecting τ and n for PE more robust and less user-dependent.

These shortcomings lead us to the problem that we address in this paper: Given a sufficiently sampled/oversampled and noisy time series $X = \{x_t\}_{t \in \mathbb{N}} \subset \mathbb{R}$, how can we reliably and systematically define appropriate dimension n and time delay τ values for computing the corresponding PE?

Our first contribution toward answering this question is detailed in Sec. II, which addresses the automatic selection of the time delay τ. In Sec. II A, we combine the Least Median of Squares (LMS) approach for outlier detection with the Fourier transformation theorem to derive a formula for the maximum significant frequency in the Fourier spectrum, with the assumption that X is contaminated by Gaussian measurement noise. This formula allows obtaining a cutoff value where the only input, besides the time series, is a desired percentile from the Probability Density Function (PDF) of the Fourier spectrum. Once this value is obtained, Nyquist’s sampling theorem is used to compute an appropriate τ value.

The second contribution is through an approach that we develop in Sec. II B, which uses Multi-scale Permutation Entropy (MPE) for finding τ. We show how MPE can be used to find the main period of oscillation for a time series derived from a periodic system. Building upon this, we show how the method can be extended to find τ for a chaotic time series by using the first maximum in the MPE, as it satisfies Nyquist’s sampling theorem.

Our third contribution to the automatic selection of τ is through the analysis of Permutation Auto-Mutual Information31 (PAMI). PAMI is an existing method for measuring the mutual information of permutations. However, we tailor this method to specifically select τ for PE.

Our final contribution toward answering the posited question is our evaluation of the ability of existing tools for computing an embedding dimension to provide an appropriate value for the PE parameter n. We compare dimension n values computed from False Nearest Neighbors (FNN—Sec. III A), Singular Spectrum Analysis (SSA—Sec. III B), and MPE (Sec. II B). While we use existing methods for performing the FNN and the SSA analyses, for the MPE-based approach we use a criterion established in prior works,45 which requires finding τ first. We made this process automatic through the selection of τ from our second contribution.

This paper is organized as follows. We first go into detail on some existing methods for selecting both τ and n. Specifically, in Sec. II, we provide a detailed explanation for selecting τ using existing, automatic methods such as autocorrelation in Sec. II C and Mutual Information (MI) in Sec. II D. Additionally, we modify and develop/tailor methods to automatically select τ. These methods include a frequency approach in Sec. II A, MPE in Sec. II B, and PAMI in Sec. II E. In Sec. III, we expand on the process for selecting n using False Nearest Neighbors (FNN) in Sec. III A and Singular Spectrum Analysis in Sec. III B. In Sec. III C, we explain our algorithm for automatically selecting n using MPE. After introducing each method, in Sec. IV, we contrast all of these methods and make conclusions on their viability by comparing the resulting parameters to those suggested by PE experts. An overview of the methods that will be investigated for automatically calculating both τ and n is shown in Fig. 4. All the functions used and developed in this work are available in Python through GitHub.36 

FIG. 4.

Overview of methods investigated for automatically calculating both the delay τ and dimension n for permutation entropy.


The delay embedding parameter τ is used to uniformly subsample the original time series. To elaborate, consider the time series $X = \{x_i \mid i \in \mathbb{N}\}$. By applying the delay $\tau \in \mathbb{N}$, a new subsampled series is defined as $X(\tau) = [x_0, x_\tau, x_{2\tau}, \ldots]$. In order to obtain a stable and automatic method for estimating an optimal value for τ, we investigate a novel frequency-based analysis that we describe in Sec. II A, Multi-scale Permutation Entropy (MPE) (Sec. II B), autocorrelation (Sec. II C), and the Mutual Information function (MI) (Sec. II D). We recognize, but do not investigate, some other methods for finding τ such as diffusion maps6 and phase space expansion.10 

In this section, we develop a method for finding the noise floor in the Fourier spectrum using the Least Median of Squares (LMS).32 We then use the noise floor to find the maximum significant frequency of a signal contaminated with additive Gaussian white noise (GWN). Our method is based on finding the maximum significant frequency in the Fourier spectrum and the Nyquist sampling criterion. To motivate the development of this approach, we begin by working with the frequency criterion developed by Melosik and Marszalek,34 which agrees with Nyquist’s sampling theorem27 for choosing a suitable sampling frequency fs as

$2 f_{max} < f_s < 4 f_{max},$
(3)

where $f_{max}$ is the maximum significant frequency in the signal. Melosik and Marszalek34 showed that a sampling frequency within this range is appropriate for subsampling an oversampled signal, thus mitigating the effect of temporal correlations of neighboring points in densely sampled signals. However, the automatic identification of $f_{max}$ from an oversampled signal is not trivial. Melosik and Marszalek34 selected a maximum significant frequency by inspecting the normalized Fourier spectrum and using a threshold cutoff of approximately 0.01 for a noise-free chaotic Lorenz system. This made visually finding the maximum frequency significantly easier but did not provide guidance on how to algorithmically find $f_{max}$. Furthermore, attempting to algorithmically adopt the approach suggested by Melosik and Marszalek34 resulted in large errors, especially in the presence of a low signal-to-noise ratio. This motivated the search for an automatic and data-driven approach for identifying the noise floor, which could then be used to find the maximum significant frequency. To do this, we develop a method based on the 1D least median of squares applied to the Fourier spectrum. The assumptions inherent to our method are:

  1. The time series is not undersampled. The purpose of the methods is to determine a suitable delay parameter for subsampling the signal, which would be meaningless if the time series is undersampled.

  2. The Fourier transform of the time series needs to have less than 50% of the points with significant amplitudes. This requirement stems from the limitations of the least median of squares regression.

  3. The noise in the signal is approximately GWN; otherwise, the ensuing statistical analysis becomes inapplicable. Violating this assumption can yield false peak detections, which would lead to an incorrect delay parameter.

We find suitable cutoffs for obtaining fmax of the signal by using the noise floor determined from the 1D least median of squares and compute a suitable embedding delay according to

$\tau = \frac{f_s}{\alpha f_{max}},$
(4)

where we set α=2, thus agreeing with the range in Eq. (3) and the Nyquist sampling criterion.

Figure 5 summarizes the frequency approach for τ with the use of our 1D LMS method for finding a noise floor in the Fourier spectrum. This process begins with computing the Fourier spectrum of the signal, which is followed by fitting a 1D (constant) LMS regression line to the noise in the Fourier spectrum. This provides statistical information about the Probability Density Function (PDF) of the noise level. The PDF is used to determine the Cumulative Distribution Function (CDF), which we use to determine a meaningful noise cutoff in the Fourier spectrum. However, it is assumed that the noise is approximately GWN for this method to hold statistical significance. This cutoff is used to separate the highest significant frequency in the Fourier spectrum $f_{max}$, which is used to find a suitable embedding delay τ based on the frequency criterion in Eq. (4). In the following paragraphs, we review our use of the LMS and the derivation of the PDF of the Fourier spectrum of GWN. We then show how to combine the LMS method with the resulting PDF expression to find a suitable noise floor cutoff and the corresponding maximum significant frequency.
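The full pipeline can be sketched end-to-end as follows. This is a minimal illustration, not the released implementation:36 the 5 Hz test tone, the noise level, and the cutoff ratio C = 6 used below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100.0                                   # sampling frequency (Hz)
t = np.arange(0, 10, 1 / fs)                 # 10 s of data
x = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.1, t.size)  # 5 Hz tone + GWN

mag = np.abs(np.fft.rfft(x))                 # one-sided Fourier spectrum
freq = np.fft.rfftfreq(t.size, 1 / fs)

# 1D (constant) LMS fit of the noise floor: midpoint of the shortest half
s = np.sort(mag)
h = s.size // 2 + 1
k = int(np.argmin(s[h - 1:] - s[: s.size - h + 1]))
x_max = (s[k] + s[k + h - 1]) / 2            # estimate of |X|_max

cutoff = 6 * x_max                           # cutoff ratio C = 6
f_max = freq[mag > cutoff].max()             # maximum significant frequency
tau = int(round(fs / (2 * f_max)))           # Eq. (4) with alpha = 2
```

For this example, the only frequency above the cutoff is the 5 Hz tone, so Eq. (4) gives τ = 100/(2·5) = 10.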

FIG. 5.

Overview of our frequency-domain approach for finding the maximum significant frequency fmax using LMS for a signal contaminated with GWN.

a. Least median of squares

LMS32 is a robust regression technique that can be used when up to 50% of the data are corrupted by outliers. For our application, outliers are considered to be anything other than noise in the Fourier spectrum. In comparison to the widely used least sum of squares (LS) algorithm, the LMS replaces the sum with the median, which makes LMS resilient to outliers. The difference between LS and LMS is highlighted as

$\mathrm{LS}: \min \sum_{i} r_i^2, \qquad \mathrm{LMS}: \min\left(\operatorname{median}_i\left(r_i^2\right)\right),$
(5)

where r is the residual. Similar to the subscript i in $\sum_i$, the subscript i in $\operatorname{median}_i$ signifies that the median is taken over all residuals. Figure 6 shows an example application of the linear LMS regression.

FIG. 6.

LMS linear regression with 45% outliers. Results match those found in Ref. 32.


Specifically, this figure shows 110 data points drawn from the line $y = x + 1$ with added GWN of zero mean and 0.1 standard deviation. The data are corrupted with 90 outliers centered around (3, 2), normally distributed with standard deviations of 1.0 along x and 0.6 along y. Figure 6 shows that the linear regression results closely match the actual trend line, with the fitted line being $y = 0.998x + 1.012$ in comparison to the actual $y = x + 1$.
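An experiment of this kind can be reproduced in a few lines. The sketch below approximates the LMS fit by searching candidate lines through random point pairs and keeping the one with the smallest median squared residual; the random seed and the number of candidate pairs are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# 110 inliers on y = x + 1 with GWN (sigma = 0.1)
xi = rng.uniform(0, 4, 110)
yi = xi + 1 + rng.normal(0, 0.1, 110)
# 90 outliers (45%) centered at (3, 2), std 1.0 along x and 0.6 along y
xo = rng.normal(3, 1.0, 90)
yo = rng.normal(2, 0.6, 90)
x = np.concatenate([xi, xo])
y = np.concatenate([yi, yo])

# approximate LMS: minimize the median of squared residuals over candidates
best = (np.inf, 0.0, 0.0)
for _ in range(2000):
    i, j = rng.choice(x.size, 2, replace=False)
    if x[i] == x[j]:
        continue
    m = (y[j] - y[i]) / (x[j] - x[i])
    b = y[i] - m * x[i]
    med = np.median((y - (m * x + b)) ** 2)
    if med < best[0]:
        best = (med, m, b)
_, slope, intercept = best
```

Despite 45% of the points being outliers, the recovered line stays close to y = x + 1, whereas an ordinary least squares fit on the same data would be pulled toward the outlier cluster.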

b. PDF and CDF of the magnitude of the fast Fourier transform of GWN

This section reviews the Probability Density Function (PDF) and Cumulative Distribution Function (CDF) of the Fourier Transform (FT) of white noise. Additionally, this section derives the location of the theoretical maximum of the PDF. The FT distribution of GWN43 is described as

$P_{|X|}(|X|) = \frac{2|X|}{E_w \sigma_x^2}\, e^{-\frac{|X|^2}{E_w \sigma_x^2}},$
(6)

where |X| is the magnitude of the FT of GWN, P|X| is the probability density function of |X|, σx is the standard deviation of the GWN, and Ew is the window energy or number of discrete transforms taken during the FT. By setting the first derivative of P|X| with respect to |X| equal to zero, the theoretical maximum of the PDF is

$|X|_{max} = \sqrt{\frac{E_w \sigma_x^2}{2}}.$
(7)

We calculate the CDF corresponding to the PDF described in Eq. (6) by combining it with the CDF for a Rayleigh distribution as38 

$C_{P_{|X|}}(|X|) = 1 - e^{-\frac{|X|^2}{E_w \sigma_x^2}},$
(8)

where CP|X| is the cumulative probability of |X|.

c. Finding the noise floor

Our approach for finding the noise floor combines LMS with Eqs. (6) and (7). Specifically, we utilize LMS to obtain a 1D (constant) fit of the magnitude of the Fast Fourier Transform (FFT) of the signal, which results in an approximate value of $|X|_{max}$, the value of |X| at the maximum of $P_{|X|}$. Using $|X|_{max}$ from the LMS fit, we then find the standard deviation of the distribution $\sigma_x$ from Eq. (7), which is used to find a cutoff based on a set cumulative probability in Eq. (8).

We begin by showing the accuracy of the LMS fit for finding $|X|_{max}$. Our example uses GWN with a mean of zero and a standard deviation of 0.035 with 1000 data points. Taking the FFT of the GWN [see Fig. 8(a)] results in the distribution shown in Fig. 8(b). The distribution shows a 1D LMS fit of 8.215 compared to the theoretical maximum of the PDF from Eq. (7) of 7.826, which is approximately 5% greater. This shows that the 1D LMS fit accurately locates $|X|_{max}$. Additionally, the theoretical shape of the PDF in Fig. 8(b) is shown to be very similar to the actual distribution.
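A numerical check of this kind of example is sketched below, where the 1D LMS location fit is computed as the midpoint of the shortest half of the sorted FFT magnitudes (a common LMS location estimator); the seed is arbitrary, and the absolute numbers differ from the figure since they depend on the FFT normalization:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, N = 0.035, 1000
noise = rng.normal(0, sigma, N)              # GWN with zero mean
mag = np.abs(np.fft.rfft(noise))             # FFT magnitudes of the GWN

# 1D LMS location fit: midpoint of the shortest half of the sorted data
s = np.sort(mag)
h = s.size // 2 + 1
k = int(np.argmin(s[h - 1:] - s[: s.size - h + 1]))
x_max_lms = (s[k] + s[k + h - 1]) / 2

x_max_theory = np.sqrt(N * sigma**2 / 2)     # Eq. (7) with E_w = N
```

The LMS estimate lands within a few percent of the theoretical maximum of the PDF.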

Next, our approach utilizes Eq. (8) and σx derived from Eq. (7) for finding the cutoff value |X|cutoff. The |X|cutoff for a desired cumulative probability CP is found by solving Eq. (8) for |X| as

$|X|_{cutoff} = \sqrt{-E_w \sigma_x^2 \ln(1 - C_P)}.$
(9)

In order to make |X|cutoff robust to normalization and scaling of the FFT, we define the ratio C between the suggested cutoff from Eq. (9) and the maximum of the PDF from Eq. (7) as

$C = \frac{|X|_{cutoff}}{|X|_{max}} = \sqrt{-2 \ln(1 - C_P)}.$
(10)
d. Example cutoff

An example of how Eqs. (7) and (9) are used is shown in Fig. 7, where the maximum of the PDF and the cutoff for $C_P = 99\%$ are marked in Figs. 7(a) and 7(b), respectively. For this example, we find the ratio C to be approximately 3.03 for a 99% probability. In addition, we suggest a cutoff ratio $C = 6$ for signals with less than $10^4$ data points. This yields an expected probability of approximately $10^{-8}$ that a point in the FFT of the GWN attains a magnitude greater than $|X|_{cutoff}$. Alternatively, Eq. (10) can be used to calculate a different value of C based on the desired probability and length of the signal.
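Equation (10) makes these numbers easy to check; a short sketch:

```python
import numpy as np

def cutoff_ratio(cp):
    """Ratio C = |X|_cutoff / |X|_max from Eq. (10)."""
    return float(np.sqrt(-2 * np.log(1 - cp)))

C99 = cutoff_ratio(0.99)               # approximately 3.03
# inverting Eq. (10): exceedance probability 1 - C_P for a given ratio C
p_exceed = float(np.exp(-(6**2) / 2))  # C = 6 gives roughly 1.5e-8
```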

FIG. 7.

(a) Theoretical PDF for GWN. (b) CDF for GWN with an example cutoff at $C_P = 99\%$.

FIG. 8.

(a) FFT of GWN with 0.035 standard deviation and zero mean with the location of the theoretical maximum of the PDF and the 1D LMS regression value. (b) Distribution of GWN in the Fourier spectrum with the theoretical PDF overlaid, along with the location of the theoretical maximum of the PDF and the 1D LMS regression value.


In this section, we develop a method based on Multi-scale Permutation Entropy (MPE) to find the periodicity of a signal, which is then used to find a suitable delay parameter. MPE is a method of applying permutation entropy over a range of delays, originally for analyzing physiological time series.14 Zunino et al.54 showed how the first maximum in the MPE plot arises when τ matches the characteristic time delay τr. Furthermore, the periodicity can be captured by the first dip in the MPE plot, as shown in Fig. 9 at the location d2, when the delay τ matches the characteristic time delay τr.

FIG. 9.

(right) Resulting MPE plot for (left) 2P periodic time series with example embedding delays d0, d1, and d2.


Figure 9 shows embedding delays d0, d1, and d2, calculated as $d = \tau / f_s$, as well as their corresponding locations on a normalized MPE plot. This toy MPE plot shows that the normalized MPE reaches its first maximum when the delay is roughly d1, which corresponds to an approximately even distribution of permutations. A second observation, as mentioned previously, is that at d2 (or the first dip in the MPE plot) there is a resonance or aliasing effect caused by $\tau \approx \tau_r$, which can be used to determine the period of the time series. This is based on the embedding delay size at d2 causing the embedding vector length $V = d(n-1)$ to be approximately half of the period P, which can be expressed as

$d_2 = \frac{1}{2} P = \frac{\tau_r}{f_s} = \frac{1}{2f},$
(11)

where P is the main period of oscillation, f is the main frequency of the time series corresponding to P, and fs is the sampling frequency. The dip in the permutation entropy (PE) when the condition from Eq. (11) is met is caused by an aliasing effect, which reduces PE through increased regularity in the permutation distribution.
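The dip can be verified numerically. The sketch below computes normalized PE over a range of delays for a sine wave with 100 samples per period, so the resonant delay is expected near τr = 50; the helper function and parameters are illustrative choices:

```python
from math import factorial

import numpy as np

def pe(x, n=3, tau=1):
    """Normalized permutation entropy (illustrative helper)."""
    m = len(x) - (n - 1) * tau
    emb = np.array([x[i : i + (n - 1) * tau + 1 : tau] for i in range(m)])
    patterns = np.argsort(emb, axis=1, kind="stable")
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) / np.log2(factorial(n)))

fs, f = 100.0, 1.0                  # 100 samples per period P = 1/f
t = np.arange(0, 50, 1 / fs)        # 50 periods
x = np.sin(2 * np.pi * f * t)

mpe = [pe(x, n=3, tau=d) for d in range(1, 61)]
# the entropy near tau = 50 (half the period) drops well below its
# value at smaller delays, marking the resonant dip at d2
```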

We use the criteria of Melosik and Marszalek34 to determine a suitable delay from the location of the first dip at d2. Their criterion states that the sampling frequency must fall within the range shown in Eq. (3). This range led to Eq. (4), which is used to calculate τ. However, for MPE, we substitute fs and fmax in Eq. (3) with $f_s = 2 f \tau_r$ from Eq. (11) and $f_{max} = f$. These substitutions allow Eq. (4) to reduce to

$\tau = \frac{2}{\alpha} \tau_r,$
(12)

where $\alpha \in [2, 4]$. These simplifications show that τ depends only on the delay $\tau_r$ that causes resonance when applying MPE. However, for a chaotic time series, the dip at τr may not be present due to nonlinear trends. To address this issue, we will first investigate the three dominant regions of the MPE plot, which we will also locate for a chaotic time series example. We will then propose a new, automatic method for selecting τ that agrees with the frequency criterion stated in Eq. (12). Additionally, in Subsection 2 b of the Appendix, we investigate the robustness of the method to noise, and in Subsection 5 of the Appendix we provide the algorithm (Algorithm 1) for finding τ using MPE.

a. MPE regions

Riedl et al.45 showed that the MPE plot can be separated into three distinct regions, as described below and shown in Fig. 10. Region A shows a gradual increase in the permutation entropy until reaching a maximum at the transition between regions A and B. Oversampling or a low value of τ causes the motif distribution corresponding to the permutation entropy to be heavily weighted toward just increasing or decreasing motifs [motifs (0,1,2) and (2,1,0) for n=3 from Fig. 2]. This effect was coined the “Redundancy Effect” by De Micco et al.,17 which means that sufficiently low values of τ result in redundant motifs. However, as τ increases, the motif distribution becomes more equiprobable. When the motif probability reaches maximum equiprobability, the permutation entropy is at a maximum, which is the point of transition from region A to B. Region B shows a slight dip to the first minimum. This reduction in permutation entropy is caused by the aliasing or resonance from the value of d approaching half the main period length. At the transition from B to C, the resonance is reached, which provides information on the main frequency and period of the time series. Region C has possible additional minima and maxima from additional alignments of the embedding vector with multiples of the main period. This region was referred to as the “Irrelevant Region” by De Micco et al.17 due to effectively large values of τ forcing the delayed sampling frequency to fall below the Nyquist sampling rate as described by the lower bound in Eq. (3).

FIG. 10.

The three regions of the MPE plot for a periodic signal: A—redundant, B—resonant, and C—irrelevant.

b. MPE example with chaotic time series

In Secs. II B and II B a, we used a periodic time series to show and explain the regions developed in an MPE plot as well as an MPE-based method for determining a suitable embedding delay τ. In this section, we further show the applicability of this approach to chaotic signals using the x-coordinate of the Lorenz System as an example. We simulate the Lorenz equations

$\frac{dx}{dt} = \sigma(y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z$
(13)

with a sampling rate of 100 Hz and using the parameters ρ=28.0, σ=10.0, and β=8.0/3.0. This system was solved for 100 s, and only the last 15 s of the time series are used. Figure 11 shows the result of applying MPE to the simulated Lorenz system.
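The simulation can be reproduced with any standard ODE solver; below is a minimal sketch using a fixed-step RK4 integrator (the integrator and the initial condition are our assumptions, as the text does not specify them):

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # right-hand side of Eq. (13)
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, n_steps = 0.01, 10000                # 100 Hz sampling for 100 s
traj = np.empty((n_steps, 3))
traj[0] = [1.0, 1.0, 1.0]                # arbitrary initial condition
for k in range(1, n_steps):
    s = traj[k - 1]
    k1 = lorenz(s)
    k2 = lorenz(s + dt * k1 / 2)
    k3 = lorenz(s + dt * k2 / 2)
    k4 = lorenz(s + dt * k3)
    traj[k] = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

x_last = traj[-1500:, 0]                 # last 15 s of the x coordinate
```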

FIG. 11.

MPE plot for the x coordinate of the Lorenz system. Additionally, points in the MPE plot with their corresponding subsampled time series are shown for the redundant, resonant, and irrelevant regions as described in Sec. II B a.


Figure 11 shows similarities to Fig. 10, with a clear maximum at the boundary between regions A and B, albeit with no obvious minimum. Therefore, a different distinct feature needs to be used to determine τ. We suggest using the first maximum to find τ because this delay is likely to fall within the region described by Eq. (12).

Autocorrelation is a traditional method for selecting τ for phase space reconstruction by using the correlation coefficient between the time series and its τ-lagged version. This method was first introduced by Box et al.8 Typically, the autocorrelation is computed as a function of τ and, as a rule of thumb, a suitable delay τ is found when the correlation between x(t) and x(t+τ) reaches the first folding time, i.e., when ρ ≤ 1/e.24 The two prominent correlation techniques commonly used when implementing an autocorrelation-based approach for finding τ are Pearson correlation (see Subsection 3 a of the Appendix) and Spearman's correlation (see Subsection 3 b of the Appendix). Additionally, an example demonstrating how to calculate τ using autocorrelation and the difference between the two correlation methods is provided in Subsection 3 c of the Appendix.

Mutual information (MI) can be used to select the embedding delay τ based on a minimum in the joint probability between two sequences. The mutual information between two discrete sequences was first defined by Shannon et al.47 as

I(X;Y)=\sum_{x\in X}\sum_{y\in Y} p(x,y)\log\frac{p(x,y)}{p(x)\,p(y)},
(14)

where X and Y are the two sequences, p(x) and p(y) are the marginal probabilities of the elements x and y, and p(x,y) is their joint probability. Fraser and Swinney20 showed that, for a chaotic time series, the MI between the original sequence x(t) and the delayed version x(t+τ) decreases as τ increases until reaching a first minimum. At this minimum, the delay τ allows the individual data points to share a minimal amount of information, which indicates sufficiently separated data points. While this delay value was specifically developed for phase space reconstruction, it is also used for the selection of the PE parameter τ. We point out that, in general, there is no guarantee that local minima exist in the mutual information, which is a serious limitation of this method for computing τ. All MI methods can be applied to either ranked or unranked data. We investigate four methods for estimating τ for PE using MI: MI with equal-sized partitions, MI with adaptive partitions, and two permutation-based MI estimation methods. For details on these methods, please refer to Sec. 4 in the Appendix.
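The delay-selection rule just described can be sketched as follows: Eq. (14) is estimated from a 2D histogram, and the first local minimum over τ is returned. The function names, bin count, and fallback to the global minimum are our choices, not part of the original method.

```python
import numpy as np

def mutual_information(x, y, k=16):
    """Estimate I(X;Y) of Eq. (14) in bits using k equal-sized bins."""
    pxy, _, _ = np.histogram2d(x, y, bins=k)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)     # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)     # marginal p(y)
    mask = pxy > 0                          # avoid log(0)
    return float((pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])).sum())

def delay_from_mi(x, max_tau=50, k=16):
    """Return the first local minimum of I(x(t); x(t+tau)) over tau."""
    mi = [mutual_information(x[:-tau], x[tau:], k) for tau in range(1, max_tau)]
    for i in range(1, len(mi) - 1):
        if mi[i] < mi[i - 1] and mi[i] < mi[i + 1]:
            return i + 1                    # mi[i] corresponds to tau = i + 1
    return int(np.argmin(mi)) + 1           # no local minimum: fall back
```

The fallback branch reflects the limitation noted above: a local minimum is not guaranteed to exist.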

To determine the optimal MI approximation method for selecting τ for PE, Fig. 12 shows a comparison between the τ values computed from each of the MI methods and the corresponding values suggested by experts. The figure shows that the adaptive partitioning method of Sec. 4 b results in an accurate selection of τ for the majority of systems. We therefore use the adaptive partitioning estimation method when making comparisons to other methods. For the exact values of τ from each of the MI methods, please refer to Table I in the Appendix.

FIG. 12.

A comparison between the calculated and suggested values for the delay parameter τ for multiple MI approximation methods. The methods investigated were equal-sized partition method, Kraskov et al. methods 1 and 2, and the adaptive partitioning approach.


As shown in Sec. II D, mutual information (MI) is a useful method for selecting τ for phase space reconstruction. However, it does not account for the permutation distribution when selecting τ, which can lead to inaccuracies in computing the PE. To circumvent this issue, we develop a new method for selecting τ using Permutation Auto-Mutual Information (PAMI),31 which was originally introduced to detect dynamic changes in brain activity. We tailor PAMI to the selection of the permutation entropy parameter τ for the first time. This is done by measuring the joint probability between the original permutations, formed with a delay of τ=1, and the permutations formed as τ is incremented. PAMI is defined as

I_p(\tau,n) = H_{x(t,n)} + H_{x(t+\tau,n)} - H_{x(t,n),\,x(t+\tau,n)},
(15)

where H is the permutation entropy described in Eq. (1). We suggest an optimal delay τ for a given dimension n when PAMI is at a minimum. This delay corresponds to the minimum shared information between the original permutations with τ=1 and their time-lagged counterparts. Applying this method to the simple sinusoidal function described in Subsection 5 e of the Appendix yields Fig. 13 for n ∈ [2,5] and τ ∈ [1,50].

FIG. 13.

PAMI results for the sinusoidal function [Eq. (A10)] with n ∈ [2,5] and τ ∈ [1,50]. The figure shows an optimal window size τ(n−1) ≈ 25.


As shown, the window size is approximately independent of the dimension n, with an optimal window τ(n−1) ≈ 25 for this example. Through our analysis of the minimum PAMI as a function of the window size, we have developed a new method for selecting the optimal embedding window. However, an embedding dimension is still needed to suggest an optimal delay. Hence, we adopt the common choice of 4 ≤ n ≤ 6 for PE.45 To reduce the computational demand, we suggest using the permutation dimension n=2 to find an optimal window size. In addition to its reduced computational cost, we found that Ip(n=2) ≈ 0 at the first minimum, which makes this minimum easier to locate.
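A minimal sketch of Eq. (15) is given below; the helper names and the plug-in entropy estimator are our assumptions. Ordinal patterns are obtained with the usual double-argsort ranking.

```python
import numpy as np
from collections import Counter

def perm_symbols(x, n, tau):
    """Map each length-n, tau-spaced window to its ordinal pattern."""
    N = len(x) - (n - 1) * tau
    windows = np.array([x[i:i + (n - 1) * tau + 1:tau] for i in range(N)])
    ranks = windows.argsort(axis=1).argsort(axis=1)   # ordinal ranking
    return [tuple(r) for r in ranks]

def entropy(symbols):
    """Shannon entropy (bits) of a symbol sequence."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def pami(x, n, tau):
    """Permutation auto-mutual information, Eq. (15)."""
    a = perm_symbols(x[:-tau], n, 1)   # permutations of the original series
    b = perm_symbols(x[tau:], n, 1)    # permutations of the tau-lagged series
    m = min(len(a), len(b))
    joint = list(zip(a[:m], b[:m]))
    return entropy(a[:m]) + entropy(b[:m]) - entropy(joint)
```

Scanning `pami(x, 2, tau)` over τ and taking the first minimum gives the embedding window described above.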

The second parameter for permutation entropy that needs to be automatically identified is the embedding dimension n. The methods for determining n fall into one of two categories: (1) independently determining n and τ and (2) simultaneously determining n and τ based on the width of the embedding window. For the first category, we investigate using the method of False Nearest Neighbors (FNN)25 in Sec. III A and Singular Spectrum Analysis (SSA)9 in Sec. III B. For the second category, we contribute to the selection of n by developing an automatic method using MPE in Sec. III C. This method combines the results for finding τ through MPE in Sec. II B with the work of Riedl et al.45 We acknowledge that our work does not include other commonly used methods for independently calculating n such as box-counting,13 the largest Lyapunov exponent,52 and the Kolmogorov–Sinai entropy.39

False Nearest Neighbors (FNN) is one of the most commonly used methods for geometrically determining the minimum embedding dimension n for state space reconstruction.25 For this method, the time series is repeatedly embedded into a sequence of m-dimensional Euclidean spaces for a range of increasing values of m. The idea is that once the minimum embedding dimension is reached, i.e., m ≥ n, the distance between neighboring points does not significantly change as m increases further. In other words, the Euclidean distance d_m(i,j) between the point P_i ∈ R^m and its nearest neighbor P_j ∈ R^m changes minimally when the embedding dimension increases to m+1. If the dimension m is not sufficiently high, then the points are false neighbors if their pairwise distance significantly increases when incrementing m. This change in the distance between nearest neighbors embedded in R^m and R^{m+1} is quantified using the ratio of false nearest neighbors,

R_i = \frac{d_{m+1}^2(i,j) - d_m^2(i,j)}{d_m^2(i,j)}.
(16)

Here, R_i is compared to the tolerance threshold R_tol, and a pair of points are labeled false neighbors when R_i > R_tol. In this paper, we select R_tol = 15 as used by Kennel et al.25 Applying this threshold over all points gives the percentage of false nearest neighbors P_FNN. If there is no noise in the system, P_FNN should reach zero once a sufficient dimension is reached. However, with additive noise present, P_FNN may never reach zero. Thus, it is common to use a cutoff on P_FNN for finding a sufficient dimension n. We use the typical cutoff P_FNN < 10%, which is suitable for most applications with moderate noise.
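A compact sketch of the procedure follows. We implement the square-root form of Kennel's criterion, which is algebraically equivalent to thresholding Eq. (16), and the function names and defaults are our own.

```python
import numpy as np

def fnn_percent(x, m, tau=1, rtol=15.0):
    """Percent of false nearest neighbors going from dimension m to m+1."""
    N = len(x) - m * tau
    emb = np.array([x[i:i + m * tau:tau] for i in range(N)])  # points in R^m
    nxt = np.array([x[i + m * tau] for i in range(N)])        # (m+1)th coordinate
    false = 0
    for i in range(N):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        j = int(np.argmin(d))               # nearest neighbor in R^m
        if d[j] == 0:
            continue
        # sqrt((d_{m+1}^2 - d_m^2) / d_m^2) = |x_{i+m*tau} - x_{j+m*tau}| / d_m
        if abs(nxt[i] - nxt[j]) / d[j] > rtol:
            false += 1
    return 100.0 * false / N

def embedding_dimension(x, tau=1, cutoff=10.0, max_m=8):
    """Smallest m with percent FNN below the cutoff (P_FNN < 10%)."""
    for m in range(1, max_m + 1):
        if fnn_percent(x, m, tau) < cutoff:
            return m
    return max_m
```

For a clean low-dimensional map such as the Hénon map, the percentage drops sharply once the deterministic structure is unfolded.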

The singular spectrum analysis method was first introduced as a tool to find trends and prominent periods in a time series.9 Leles et al.29 summarized the SSA procedure as (1) immersion, (2) Singular Value Decomposition (SVD), (3) grouping, and (4) diagonal averaging. Specifically, immersion embeds the time series into a dimension L to form a Hankel matrix, SVD factors this matrix, grouping combines the components that are similar in structure, and diagonal averaging reconstructs the time series from the grouped components. The needed embedding dimension is determined from the SVD by calculating the ratio D,

D=\frac{g_L}{g_r},
(17)

of the sum of the first L diagonal entries g_L to the sum of all diagonal entries g_r. When D exceeds 0.9, we consider the dimension high enough and set n = L, which can then be used as the embedding dimension for permutation entropy.
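One way to realize this criterion is sketched below, where we read Eq. (17) as the fraction of the singular value spectrum captured by the first L components; this interpretation, the window length, and the function name are our assumptions.

```python
import numpy as np

def ssa_dimension(x, window=100, threshold=0.9):
    """Smallest L whose leading singular values capture `threshold`
    of the Hankel-matrix spectrum (our reading of Eq. (17))."""
    N = len(x) - window + 1
    hankel = np.array([x[i:i + window] for i in range(N)])  # immersion step
    s = np.linalg.svd(hankel, compute_uv=False)             # SVD step
    D = np.cumsum(s) / np.sum(s)                            # ratio D as L grows
    return int(np.searchsorted(D, threshold)) + 1

t = np.linspace(0, 10, 500)
n = ssa_dimension(np.sin(2 * np.pi * t))   # a pure sinusoid needs two components
```

A noise-free sinusoid produces a rank-2 Hankel matrix, so the ratio crosses 0.9 at L=2.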

Riedl et al.45 showed how MPE can be used to determine an embedding dimension n. This method requires the embedding delay τ to be set to the length of the main period of the signal as shown in Sec. II B. The theory behind the method is based on normalizing the MPE according to

h_n = \frac{1}{n-1}H(n),
(18)

where h_n is the PE normalized by the embedding dimension and H(n) is the PE calculated from Eq. (1). Riedl et al.45 determined the embedding dimension by incrementing n and finding the largest corresponding normalized PE h_n, with an embedding delay τ heuristically determined from the main period length. They concluded that the n with the highest normalized entropy accurately accounts for the needed complexity of the time series and therefore suggests a suitable embedding dimension. Riedl et al.45 showed how this method provides an accurate embedding dimension for the Van der Pol oscillator, the Lorenz system, and the logistic map. However, the method is not automatic due to its reliance on a heuristically chosen τ.

To make the process automatic, we introduce Algorithm 1, based on Sec. II B, to automatically select the correct τ, which we then use in conjunction with Eq. (18) to find the n corresponding to the maximum h_n. Additionally, we suggest varying n from 3 to 8, as we have not yet found a system requiring n > 8 with this method.
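The dimension search just described can be sketched as follows; `permutation_entropy` is a straightforward implementation of Eq. (1), and the scan over n from 3 to 8 mirrors the range suggested above. The function names are our own.

```python
import numpy as np
from collections import Counter

def permutation_entropy(x, n, tau):
    """H(n) from Eq. (1): Shannon entropy (bits) of the ordinal patterns."""
    N = len(x) - (n - 1) * tau
    patterns = [tuple(np.argsort(np.argsort(x[i:i + (n - 1) * tau + 1:tau])))
                for i in range(N)]
    p = np.array(list(Counter(patterns).values())) / N
    return float(-(p * np.log2(p)).sum())

def dimension_from_mpe(x, tau, n_range=range(3, 9)):
    """Pick the n maximizing the normalized PE h_n = H(n)/(n-1), Eq. (18)."""
    h = {n: permutation_entropy(x, n, tau) / (n - 1) for n in n_range}
    return max(h, key=h.get)
```

Note that for large n the estimate of H(n) requires a series much longer than n! samples, so in practice the scan is kept to modest dimensions.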

To draw conclusions about the described methods for determining τ and n, we compared them to values suggested by experts. The majority of the suggested parameters are taken from the work of Riedl et al.,45 while the parameters for the Rössler system and sine waves are from Tao et al.51 Figures 14 and 15 show the calculated and suggested values for τ and n, respectively. For the exact values of τ and n from each of the parameter estimation methods, please refer to Tables II and III in the Appendix, respectively. Additionally, scripts for reproducing the results found in this paper are provided through Mendeley.

FIG. 14.

A comparison between the calculated and suggested values for the delay parameter τ. The methods investigated were MI with adaptive partitions, Spearman’s Autocorrelation (AC), the frequency analysis, Multi-scale Permutation Entropy (MPE), and Permutation Auto-mutual Information (PAMI) with n=5.

FIG. 15.

A comparison between the calculated and suggested values for the embedding dimension n. The methods investigated were False Nearest Neighbors (FNN), Multi-scale Permutation Entropy (MPE), and Singular Spectrum Analysis (SSA).


a. Embedding delay. Figure 14 shows the automatically computed τ in comparison to the expert-identified values for a variety of systems. These systems fall within several categories: noise, chaotic differential equations, periodic systems, nonlinear difference equations, and medical data. The methods presented in Fig. 14 include PAMI from Sec. II E, MI calculated using adaptive partitioning from Sec. 4 b of the Appendix, Spearman's autocorrelation from Subsection 3 b of the Appendix, MPE from Sec. II B, and the frequency approach from Sec. II A. For the noise category, we only investigated Gaussian white noise, and all the methods accurately suggest an embedding delay. For the second category, chaotic differential equations, MI approximated using adaptive partitions accurately provided suitable delay values. However, as addressed in Sec. I, MI has possible modes of failure. To validate that MI is accurately selecting a value for τ, we recommend also calculating τ using the frequency approach. For the third category, periodic systems, we only investigated a simple sinusoidal function, for which both MPE and the frequency approach provided accurate suggestions. Therefore, we suggest using both of these methods to calculate τ for periodic systems. We do not suggest the use of MI for periodic systems because it can have early false minima, resulting in inaccurate delay selection. For difference equations, we found that PAMI, autocorrelation, MPE, and the frequency approach provide accurate suggestions for the delay. Finally, when testing each method on medical data with intrinsic noise, we found that the noise-robust frequency approach yielded the optimal parameter selection for τ. As a generalization of these results, we suggest the use of MI with adaptive partitioning when selecting τ for chaotic differential equations.
For periodic systems, nonlinear difference equations, and ECG/EEG data, we suggest the frequency approach developed in this paper. However, when applied to quasiperiodic time series with multiple harmonics of decreasing amplitude, the method may fail because the delay is then selected based on an insignificant high frequency. Either Spearman's autocorrelation or MPE may be more suitable in this case. In general, multiple methods should be used for each system to validate that an accurate delay is selected, given the possible failure modes of each method. Specifically, the frequency approach may fail if the noise does not have a Gaussian distribution, MI can fail if false minima occur or the relationship is monotonic, and autocorrelation can fail if the time series does not oscillate about a fixed value.

b. Embedding dimension. Figure 15 shows the automatically computed parameter n in comparison to the expert-identified values. Both MPE and FNN commonly produced parameters within the specified range for all categories, whereas SSA failed to provide a consistently suitable embedding dimension n. This leads to the conclusion that either MPE or FNN is sufficient for determining the embedding dimension in the majority of the considered applications. However, FNN may fail if the effects of noise are not correctly accounted for, which can lead to overly large embedding dimensions. These results also show that the dimension n=6 works well for almost all applications.

In this paper, we demonstrated various methods for automatically determining the PE parameters τ and n when supplied with a sufficiently sampled or oversampled time series. The goal was to find, in an automatic way, the most accurate method in comparison to expert-suggested parameters. The methods we investigated for calculating τ include autocorrelation, mutual information, permutation auto-mutual information, frequency analysis, and multi-scale permutation entropy. The methods we investigated for determining the embedding dimension n include false nearest neighbors, singular spectrum analysis, and multi-scale permutation entropy. Several of these methods do have parameters that must be set by hand, leaving them not completely automatic. However, the methods of MI, autocorrelation, MPE, and PAMI have no user-set parameters, which reduces the user's influence on parameter selection and improves automation. Additionally, the parameters we do suggest are defaults that work for the majority of applications.

Our first contribution was developing a new frequency approach and extending two existing methods, PAMI and MPE, to automatically determine τ. For the frequency approach, we developed an automatic algorithm for finding the maximum significant frequency using a cutoff greater than the noise floor. The noise floor was found by applying the one-dimensional least median of squares to the Fourier spectrum in conjunction with the theoretical probability distribution function for the Fourier transform of Gaussian white noise. For PAMI, we showed how its first minimum can be used to find an optimal embedding window τ(n−1), where τ was then selected using the range 4 ≤ n ≤ 6. For MPE, we showed how it can be used to find the main period of oscillation of a periodic time series, which we then use to find τ. Additionally, we expanded this method by showing how the main period of oscillation can also be found for non-periodic time series, which we implemented in an automatic algorithm.

Our second contribution was implementing the automatic selection of τ and n using MPE. We also collected and compared some of the most popular methods for obtaining n, including false nearest neighbors and singular spectrum analysis. We applied these methods to several categories of systems: difference equations, chaotic differential equations, periodic systems, EEG/ECG data, and Gaussian noise. We then compared the generated parameters to values suggested by experts to determine which methods consistently found accurate values for τ and n. We found that SSA did not provide suitable values for n, whereas both FNN and MPE provided accurate values for most of the systems. We conclude that, for the majority of applications, a permutation dimension n=5 is suitable. For determining τ, we showed that our frequency approach provided accurate suggestions for periodic systems, nonlinear difference equations, and medical data, while the mutual information function computed using adaptive partitions provided the most accurate results for chaotic differential equations.

F.A.K. acknowledges support of the National Science Foundation (NSF) under Grant Nos. CMMI-1759823 and DMS-1759824.

1. Permutation entropy calculation example

To demonstrate how PE is calculated, consider the sequence X=[4,7,9,10,6,11,3,2] with PE parameters n=3 and τ=1. The left side of Fig. 16 shows how the sequence can be broken down into the following permutations: two of type (0,1,2), one of type (1,0,2), two of type (1,2,0), and one of type (2,1,0), for a total of six permutations. Each permutation type is therefore assigned a probability out of the six total permutations. The permutation distribution can be visually understood by illustrating the probabilities of each permutation as separate bins. To accomplish this, the right side of Fig. 16 shows the abundance of each permutation.

FIG. 16.

Permutations 1–6 shown for the example sequence X (left) with n=3 and τ=1 and the relative abundance of each permutation (right). Permutations 1 and 2 are of type (0, 1, 2), permutation 3 is of type (1, 2, 0), permutation 4 is of type (1, 0, 2), permutation 5 is of type (1, 2, 0), and permutation 6 is of type (2, 1, 0).


Applying Eq. (1) to the probabilities of each permutation for our example sequence X yields

H(3) = -\frac{2}{6}\log_2\frac{2}{6} - \frac{2}{6}\log_2\frac{2}{6} - \frac{1}{6}\log_2\frac{1}{6} - \frac{1}{6}\log_2\frac{1}{6} \approx 1.918\ \text{bits}.

This summarizes a simple example of how to apply permutation entropy to time series data.
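The worked example above can be verified with a few lines of Python; the double-argsort ranking reproduces the permutation types shown in Fig. 16.

```python
import numpy as np
from collections import Counter

X = [4, 7, 9, 10, 6, 11, 3, 2]
n, tau = 3, 1

# Encode each window by the ordinal rank of its entries, e.g. (9,10,6) -> (1,2,0).
windows = [X[i:i + n] for i in range(len(X) - (n - 1) * tau)]
perms = [tuple(np.argsort(np.argsort(w))) for w in windows]

counts = Counter(perms)               # two (0,1,2), two (1,2,0), one each of the rest
p = np.array(list(counts.values())) / len(perms)
H = -(p * np.log2(p)).sum()           # matches the 1.918 bits computed above
```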

a. Multi-scale permutation entropy delay algorithm

In Sec. II B b, we showed that choosing τ using MPE should be based on the position of the first peak after the noise region in the MPE plot for an embedding dimension of n=3. At this maximum, the normalized PE reaches approximately 1. From this methodology, we developed Algorithm 1 to determine the delay τ using the location of the first peak while ignoring the noise region in Fig. 17.

Algorithm 1: Algorithm using MPE for τ.

FIG. 17.

Region N is affected by noise in the MPE plot, and region S is unaffected.

b. Effects of noise on multi-scale permutation entropy

We found that the main advantage of using MPE for determining the embedding delay is its robustness to noise. Noise has minimal effect on regions B and C of the MPE plot from Fig. 10 and significantly affects only region A, as shown in Fig. 17. Furthermore, depending on the signal-to-noise ratio, only the beginning of region A is affected. Figure 17 shows the first region N, where noise affects the permutation entropy. The noise causes the MPE plot to start at a maximum and decrease to a local minimum. When the time delay becomes large enough, the permutations are no longer influenced by the noise, which causes this minimum. We found that the location of the minimum is based on the condition

m_{\rm avg}\,\tau_N \geq A_{\rm noise}\, f_s,
(A1)

where m_avg is the average of the absolute value of the slope, A_noise is approximately the maximum amplitude of the noise, and τ_N is the value of τ large enough to surpass the noise amplitude. We derived this condition from the requirement that, on average, |f(t) − f(t+τ)| > A_noise. This shows that MPE is robust to noise as long as the noise amplitude does not exceed the amplitude of the signal.

c. Pearson correlation

The Pearson correlation coefficient ρ_xy ∈ [−1, 1] measures the linear correlation of two time series x and y. Given these two datasets, the correlation coefficient is calculated as

\rho_{xy} = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^2}}.
(A2)

The possible values of ρ_xy represent the relationship between the two datasets, where ρ_xy = 1 represents a perfect positive linear correlation, ρ_xy = 0 represents no linear correlation, and ρ_xy = −1 represents a perfect negative linear correlation. However, Pearson correlation is limited because it only detects linear correlations. This limitation is somewhat alleviated by using Spearman's correlation, which operates on the ordinal ranking of the two time series instead of their numeric values.

d. Spearman’s correlation

Spearman's correlation is also calculated using Eq. (A2), with x and y substituted by their ordinal rankings. This substitution allows nonlinear correlation trends to be detected as long as the correlation is monotonic. To demonstrate the difference, Fig. 18 shows two sequences x and y calculated from y = x^4 with x ∈ [0,10]. For this example, the Pearson correlation is ρ ≈ 0.86, while Spearman's ranked correlation yields ρ = 1.0. This result demonstrates how Spearman's correlation coefficient accurately detects the nonlinear, monotonic correlation between x and y, whereas Pearson correlation may miss it.

FIG. 18.

A comparison between (left) unranked values and (right) ranked values for calculating correlation coefficients. Using the ranked x and y, Spearman’s correlation coefficient can be used to accurately reveal existing nonlinear monotonic correlations.

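The comparison above is easy to reproduce with SciPy; the number of sample points is our choice.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.linspace(0, 10, 200)
y = x ** 4                      # nonlinear but strictly monotonic in x

pearson, _ = pearsonr(x, y)     # captures only the linear part of the trend
spearman, _ = spearmanr(x, y)   # rank-based, so the monotonic trend is perfect
```

Because the ranks of y are identical to the ranks of x, Spearman's coefficient is exactly 1, while Pearson's stays well below it.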
e. Autocorrelation example

We can use the concept of correlation to select a delay τ by calculating the correlation coefficient of Eq. (A2) between a time series and its τ-lagged version. As an example, take the time series x(t) = sin(2πt) with t ∈ [0,5] and a sampling frequency of 100 Hz. This results in a suggested delay of τ = 20 at the first folding time using both Spearman's and Pearson correlation. In Sec. IV, we implement Spearman's version of autocorrelation to account for possible nonlinear correlations.
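The first-folding-time rule for this example can be sketched as a simple scan over τ; the loop structure is our own, and the exact crossing point can shift by a sample or two depending on the correlation estimator.

```python
import numpy as np
from scipy.stats import spearmanr

fs = 100
t = np.arange(0, 5, 1 / fs)
x = np.sin(2 * np.pi * t)

# First folding time: smallest tau with correlation(x(t), x(t+tau)) <= 1/e
tau = 1
rho, _ = spearmanr(x[:-tau], x[tau:])
while rho > 1 / np.e:
    tau += 1
    rho, _ = spearmanr(x[:-tau], x[tau:])
```

For this 100-sample period, the correlation decays roughly like cos(2πτ/100), so the crossing lands near τ = 20 as stated above.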

f. Mutual information using equal-sized partitions

For the calculation of MI, the joint and independent probabilities of the original x(t) and time-lagged x(t+τ) time series are needed. However, since x is a discrete time series, we approximate these probabilities using bins, which segment the range of the series into discrete groups. The simplest way to approximate the probabilities with this discretization is to use equal-sized bins. However, the size of these bins depends on the number of bins k. We investigated various methods for estimating an appropriate number of bins from the length of the time series N. These methods include the common square-root choice k = √N, Sturges's formula,49 k = log2(N) + 1, and the Rice rule,28 k = 2N^{1/3}. After comparing each method on a variety of examples, we found that Sturges's formula provided the best results for selecting τ for PE using MI.
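The three bin-count rules can be written as one small helper; rounding each estimate up to an integer is our assumption.

```python
import math

def bin_counts(N):
    """Candidate bin counts for an N-point series (each rounded up)."""
    return {
        "square_root": math.ceil(math.sqrt(N)),   # k = sqrt(N)
        "sturges": math.ceil(math.log2(N)) + 1,   # Sturges's formula
        "rice": math.ceil(2 * N ** (1 / 3)),      # Rice rule
    }
```

For example, a series of N = 1000 points gives k = 32, 11, and 20 under the three rules, respectively.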

g. Mutual information using adaptive partitions

Darbellay and Vajda16 introduced a multistep, adaptive partitioning scheme to select appropriate bin sizes in the observation space formed by the planes x(t) and x(t+τ). Their method is often considered state of the art for estimating the mutual information function.26 In this approach, the bins are created recursively: in the first step, the space spanned by the signal and its τ-lagged version is divided into an equal number of two-dimensional bins. A chi-squared test is then used to test the null hypothesis that the data within the newly created bins are independent. Any segment that fails the test is further divided until the resulting sub-segments contain independent data (or a set number of divisions is reached). Using this partitioning, the MI is calculated using Eq. (14).

h. Kraskov mutual information

Kraskov et al.26 developed a method for approximating the MI using entropy estimates with partition sizes based on k-nearest neighbors. Specifically, the method begins by expressing the MI in terms of entropies15 as

I(X;Y) = H(X) + H(Y) - H(X,Y),
(A3)

where H is the Shannon entropy. Next, H(X) is approximated with digamma functions, but the probability densities of X and Y still need to be estimated. To do this, adaptive partitions based on the k-nearest neighbor are formed. Kraskov et al. developed two different partitioning methods with similar results. The first method uses the maximum Chebyshev distance to the k=1 nearest neighbor j to form square bins, as shown in Fig. 19(a), and the second method, shown in Fig. 19(b), uses rectangular partitions based on the horizontal and vertical distances to the k=1 nearest neighbor j.

FIG. 19.

Example showing two different partition methods for Mutual Information estimation using k=1 nearest neighbor adaptive partitioning: (a) Square partitioning bins with ϵ(i) as the maximum Chebyshev distance to the k=1 nearest neighbor j and (b) rectangular partitions using the horizontal and vertical distances to the k=1 nearest neighbor j.


To continue with the example shown in Fig. 19, the probability density is estimated using the strips formed from these bins. To highlight the difference, Fig. 19(a) shows a horizontal strip of width ε(i) encapsulating nx(i)=2 points (the strip does not include the point i itself), while in Fig. 19(b) only nx(i)=1 point is enclosed. Using these probability density approximations and the digamma function ψ, the MI between X and Y can be estimated. Using the partitioning method shown in Fig. 19(a), the MI is estimated as

I^{(1)}(X;Y) = \psi(k) - \left\langle \psi(n_x+1) + \psi(n_y+1) \right\rangle + \psi(N).
(A4)

Using the partitioning method shown in Fig. 19(b), the MI is estimated as

I^{(2)}(X;Y) = \psi(k) - \frac{1}{k} - \left\langle \psi(n_x) + \psi(n_y) \right\rangle + \psi(N).
(A5)

For the results shown in Sec. IV, we use the k=3 nearest neighbor to generate the partitions.

APPENDIX B: DYNAMIC SYSTEM MODELS

1. Lorenz systems

The Lorenz system used is defined as

\frac{dx}{dt}=\sigma(y-x), \quad \frac{dy}{dt}=x(\rho-z)-y, \quad \frac{dz}{dt}=xy-\beta z.
(A6)

The Lorenz system had a sampling rate of 100 Hz with parameters σ=10.0, β=8.0/3.0, and ρ=95. This system was solved for 100 s and the last 24 s were used.

2. Rössler system

The Rössler system used was defined as

\frac{dx}{dt}=-y-z, \quad \frac{dy}{dt}=x+ay, \quad \frac{dz}{dt}=b+z(x-c)
(A7)

with parameters of a=0.1, b=0.1, and c=14, which was solved over 400 s with a sampling rate of 10 Hz. Only the last 1500 data points of the x-solution were used in the analysis.

3. Bi-directional coupled Rössler system

The bi-directional Rössler system is defined as

\frac{dx_1}{dt}=-w_1 y_1 - z_1 + k(x_2-x_1), \quad \frac{dy_1}{dt}=w_1 x_1 + 0.165 y_1, \quad \frac{dz_1}{dt}=0.2 + z_1(x_1-10), \quad \frac{dx_2}{dt}=-w_2 y_2 - z_2 + k(x_1-x_2), \quad \frac{dy_2}{dt}=w_2 x_2 + 0.165 y_2, \quad \frac{dz_2}{dt}=0.2 + z_2(x_2-10)
(A8)

with w1=0.99, w2=0.95, and k=0.05. This was solved for 4000 s with a sampling rate of 10 Hz. Only the last 400 s of the x-solution were used in the analysis.

4. Mackey–Glass delayed differential equation

The Mackey–Glass Delayed Differential Equation is defined as

\dot{x}(t) = -\gamma x(t) + \frac{\beta x(t-\tau)}{1 + x(t-\tau)^n},
(A9)

with τ=2, β=2, γ=1, and n=9.65. This was solved for 400 s with a sampling rate of 100 Hz. The solution was then downsampled to 5 Hz, and only the last 1500 terms of the x-solution were used in the analysis.

5. Periodic sinusoidal function

The sinusoidal function is defined as

x(t)=sin(2πt).
(A10)

This was solved for 10 s with a sampling rate of 50 Hz.

6. Electroencephalogram data

The EEG signal was taken from Andrzejak et al.2 Specifically, the first 2000 data points from the EEG data of a healthy patient from set A, file Z-093 was used.

7. Electrocardiogram data

The Electrocardiogram (ECG) data were taken from SciPy’s misc.electrocardiogram dataset. These ECG data were originally provided by the MIT-BIH Arrhythmia Database.35 We used data points 3000 to 4500 during normal sinus rhythm.

8. Logistic map

The logistic map was generated as

x_{n+1} = r x_n (1 - x_n),
(A11)

with x0=0.5 and r=3.95. Equation (A11) was solved for the first 500 data points.
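Generating this trajectory takes only a few lines; the function name is our own.

```python
import numpy as np

def logistic_map(r=3.95, x0=0.5, N=500):
    """Iterate Eq. (A11) and return the first N points."""
    x = np.empty(N)
    x[0] = x0
    for i in range(N - 1):
        x[i + 1] = r * x[i] * (1 - x[i])
    return x

x = logistic_map()   # x[1] = 3.95 * 0.5 * 0.5 = 0.9875
```

Since r ≤ 4, the orbit remains in the unit interval for all iterates.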

9. Hénon map

The Hénon map was solved as

x_{n+1} = 1 - a x_n^2 + y_n, \quad y_{n+1} = b x_n,
(A12)

where b=0.3, x0=0.1, y0=0.3, and a=1.4. This system was solved for the first 500 data points of the x-solution.

APPENDIX C: TABULATED PERMUTATION ENTROPY PARAMETERS

See Tables I–III.

TABLE I.

A comparison between the calculated and suggested values for the delay parameter τ for multiple MI approximation methods. The cells in bold highlight the methods that yielded the closest match to the suggested delay. The equal-sized partition method is described in Sec. A 1 f, Kraskov et al. methods 1 and 2 in Sec. A 1 h, and the adaptive partitioning approach in Sec. A 1 g.

Mutual information
System | Equal-sized partitions | Kraskov et al. Method 1 | Kraskov et al. Method 2 | Adaptive partitions | Suggested delay τ | Reference
White noise 1 1 45  
Lorenz 13 9 9 9 10 45  
Rössler 14 13 11 9 51  
Bi-directional Rössler 16 14 14 15 15 45  
Mackey–Glass 7 8 7 7 1–700 45  
Sine wave 17 13 15 51  
Logistic map 5 11 5 1–5 45  
Hénon map 12 15 13 1–5 45  
ECG 22 16 1–4 45  
EEG 1–3 45  
TABLE II.

A comparison between the calculated and suggested values for the delay parameter τ. The cells in bold show the methods that yielded the closest match to the suggested delay. The following conventions and abbreviations are used in the table: the range under the PAMI results is from using 4 ≤ n ≤ 6, AP under MI is an abbreviation for adaptive partitioning, and AC is an abbreviation for autocorrelation.

Traditional methods | Modified/proposed methods
Category | System | MI using AP | Spearman’s AC | Freq. app. | MPE | PAMI (4 ≤ n ≤ 6) | Suggested delay (τ) | Reference
Noise White noise 1 1 1 1 45  
Chaotic differential equation Lorenz 9 15 17 5–9 10 45  
 Rössler 9 12 7 19 6–10 51  
 Bi-directional Rössler 15 12 20 6–10 15 45  
 Mackey–Glass 7 5 3 8 2–4 1–700 45  
Periodic Sine wave 10 21 16 5–8 15 51  
Nonlinear difference equation Logistic map 5 1 1 1 1 1–5 45  
 Hénon map 1 1 1 1 1–5 45  
Medical data ECG 21 2 13 1–2 1–4 45  
 EEG 5 4 1 4 2–4 1–3 45  
TABLE III.

A comparison between the calculated and suggested values for the embedding dimension n. The cells in bold show the methods that yielded the closest match to the suggested dimension.

Traditional methods | Modified method
Category | System | FNN | SSA | MPE | Suggested dim. (n) | Reference
Noise White noise 4 23 5 3–7 45  
Chaotic differential equation Lorenz 5 5–7 45
 Rössler 51  
 Bi-directional Rössler 6–7 45  
 Mackey–Glass 4 6 4 4–8 45  
Periodic Sine wave 4 51  
Nonlinear difference equation Logistic map 4 3 5 2–16 45  
 Hénon map 4 5 3–10 45  
Medical data ECG 7 5 3–7 45  
 EEG 5 11 6 3–7 45  
1. J. M. Amigó, R. Monetti, T. Aschenbrenner, and W. Bunk, “Transcripts: An algebraic approach to coupled time series,” Chaos 22(1), 013105 (2012).
2. R. G. Andrzejak, K. Lehnertz, F. Mormann, C. Rieke, P. David, and C. E. Elger, “Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state,” Phys. Rev. E 64(6), 061907 (2001).
3. M. Babaie-Zadeh and C. Jutten, “A general approach for mutual information minimization and its application to blind source separation,” Signal Process. 85(5), 975–995 (2005).
4. C. Bandt and B. Pompe, “Permutation entropy: A natural complexity measure for time series,” Phys. Rev. Lett. 88(17), 174102 (2002).
5. A. F. Bariviera, L. Zunino, and O. A. Rosso, “An analysis of high-frequency cryptocurrencies prices dynamics using permutation-information-theory quantifiers,” Chaos 28(7), 075511 (2018).
6. T. Berry, J. R. Cressman, Z. Gregurić-Ferenček, and T. Sauer, “Time-scale separation from diffusion-mapped delay coordinates,” SIAM J. Appl. Dyn. Syst. 12(2), 618–649 (2013).
7. A. Block, W. Von Bloh, and H. Schellnhuber, “Efficient box-counting determination of generalized fractal dimensions,” Phys. Rev. A 42(4), 1869 (1990).
8. G. E. Box, G. M. Jenkins, G. C. Reinsel, and G. M. Ljung, Time Series Analysis: Forecasting and Control (John Wiley & Sons, 2015).
9. D. S. Broomhead and G. P. King, “Extracting qualitative dynamics from experimental data,” Phys. D Nonlinear Phenom. 20(2–3), 217–236 (1986).
10. T. Buzug and G. Pfister, “Optimal delay time and embedding dimension for delay-time coordinates by analysis of the global static and local dynamical behavior of strange attractors,” Phys. Rev. A 45(10), 7073 (1992).
11. Y. Cao, W.-W. Tung, J. Gao, V. A. Protopopescu, and L. M. Hively, “Detecting dynamical changes in time series using the permutation entropy,” Phys. Rev. E 70(4), 046217 (2004).
12. Y.-M. Chung, C.-S. Hu, Y.-L. Lo, and H.-T. Wu, “A persistent homology approach to heart rate variability analysis with an application to sleep-wake classification,” arXiv:1908.06856 (2019).
13. S. P. Clark, “Estimating the fractal dimension of chaotic time series,” Lincoln Lab. J. 3(1), 73 (1990).
14. M. Costa, A. L. Goldberger, and C.-K. Peng, “Multiscale entropy analysis of complex physiologic time series,” Phys. Rev. Lett. 89(6), 068102 (2002).
15. T. M. Cover and J. A. Thomas, Elements of Information Theory (John Wiley & Sons, 2012).
16. G. Darbellay and I. Vajda, “Estimation of the information by an adaptive partitioning of the observation space,” IEEE Trans. Inform. Theory 45(4), 1315–1321 (1999).
17. L. De Micco, J. G. Fernández, H. A. Larrondo, A. Plastino, and O. A. Rosso, “Sampling period, statistical complexity, and chaotic attractors,” Phys. A Stat. Mech. Appl. 391(8), 2564–2575 (2012).
18. B. Deng, L. Liang, S. Li, R. Wang, H. Yu, J. Wang, and X. Wei, “Complexity extraction of electroencephalograms in Alzheimer’s disease with weighted-permutation entropy,” Chaos 25(4), 043105 (2015).
19. B. Frank, B. Pompe, U. Schneider, and D. Hoyer, “Permutation entropy improves fetal behavioural state classification based on heart rate analysis from biomagnetic recordings in near term fetuses,” Med. Biol. Eng. Comput. 44(3), 179 (2006).
20. A. M. Fraser and H. L. Swinney, “Independent coordinates for strange attractors from mutual information,” Phys. Rev. A 33(2), 1134 (1986).
21. J. Garland, T. Jones, M. Neuder, V. Morris, J. White, and E. Bradley, “Anomaly detection in paleoclimate records using permutation entropy,” Entropy 20(12), 931 (2018).
22. J. Garland, T. R. Jones, E. Bradley, M. Neuder, and J. W. White, “Climate entropy production recorded in a deep Antarctic ice core,” arXiv:1806.10936 (2018).
23. P. Grassberger and I. Procaccia, “Measuring the strangeness of strange attractors,” Phys. D Nonlinear Phenom. 9(1–2), 189–208 (1983).
24. H. Kantz and T. Schreiber, Nonlinear Time Series Analysis (Cambridge University Press, 2004), Vol. 7.
25. M. B. Kennel, R. Brown, and H. D. Abarbanel, “Determining embedding dimension for phase-space reconstruction using a geometrical construction,” Phys. Rev. A 45(6), 3403 (1992).
26. A. Kraskov, H. Stögbauer, and P. Grassberger, “Estimating mutual information,” Phys. Rev. E 69(6), 066138 (2004).
27. H. Landau, “Sampling, data transmission, and the Nyquist rate,” Proc. IEEE 55(10), 1701–1706 (1967).
28. D. Lane, J. Lu, C. Peres, E. Zitek et al., “Online statistics: An interactive multimedia course of study” (2008) (last accessed 29 January 2009).
29. M. C. Leles, J. P. H. Sansão, L. A. Mozelli, and H. N. Guimarães, “Improving reconstruction of time-series based in singular spectrum analysis: A segmentation approach,” Digit. Signal Process. 77, 63–76 (2018).
30. D. Li, Z. Liang, Y. Wang, S. Hagihira, J. W. Sleigh, and X. Li, “Parameter selection in permutation entropy for an electroencephalographic measure of isoflurane anesthetic drug effect,” J. Clin. Monit. Comput. 27(2), 113–123 (2012).
31. Z. Liang, Y. Wang, G. Ouyang, L. J. Voss, J. W. Sleigh, and X. Li, “Permutation auto-mutual information of electroencephalogram in anesthesia,” J. Neural Eng. 10(2), 026004 (2013).
32. D. L. Massart, L. Kaufman, P. J. Rousseeuw, and A. Leroy, “Least median of squares: A robust method for outlier and model error detection in regression and calibration,” Anal. Chim. Acta 187, 171–179 (1986).
33. M. McCullough, M. Small, T. Stemler, and H. H.-C. Iu, “Time lagged ordinal partition networks for capturing dynamics of continuous dynamical systems,” Chaos 25(5), 053101 (2015).
34. M. Melosik and W. Marszalek, “On the 0/1 test for chaos in continuous systems,” Bull. Pol. Acad. Sci. Techn. Sci. 64(3), 521–528 (2016).
35. G. B. Moody and R. G. Mark, “The impact of the MIT-BIH arrhythmia database,” IEEE Eng. Med. Biol. Mag. 20(3), 45–50 (2001).
36. A. Myers and F. Khasawneh, see https://github.com/Khasawneh-Lab/PE_parameter_selection, version 1.0.0, for PE parameter selection, 2020.
37. A. Myers, E. Munch, and F. A. Khasawneh, “Persistent homology of complex networks for dynamic state detection,” arXiv:1904.07403 (2019).
38. A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes (Tata McGraw-Hill Education, 2002).
39. Y. B. Pesin, “Characteristic Lyapunov exponents and smooth ergodic theory,” Usp. Mat. Nauk 32(4), 55–112 (1977).
40. S. M. Pincus, “Approximate entropy as a measure of system complexity,” Proc. Natl. Acad. Sci. U.S.A. 88(6), 2297–2301 (1991).
41. A. Popov, O. Avilov, and O. Kanaykin, “Permutation entropy of EEG signals for different sampling rate and time lag combinations,” in Signal Processing Symposium (SPS), 2013 (IEEE, 2013), pp. 1–4.
42. A. Porta, V. Bari, A. Marchi, B. D. Maria, P. Castiglioni, M. di Rienzo, S. Guzzetti, A. Cividjian, and L. Quintin, “Limits of permutation-based entropies in assessing complexity of short heart period variability,” Physiol. Meas. 36(4), 755–765 (2015).
43. M. A. Richards, “The discrete-time Fourier transform and discrete Fourier transform of windowed stationary white noise,” Technical Report (Georgia Institute of Technology, 2013).
44. J. S. Richman and J. R. Moorman, “Physiological time-series analysis using approximate entropy and sample entropy,” Am. J. Physiol. Heart Circ. Physiol. 278(6), H2039–H2049 (2000).
45. M. Riedl, A. Müller, and N. Wessel, “Practical considerations of permutation entropy,” Eur. Phys. J. Special Top. 222(2), 249–262 (2013).
46. C. E. Shannon, “A mathematical theory of communication,” ACM SIGMOBILE Mob. Comput. Commun. Rev. 5(1), 3–55 (2001).
47. C. E. Shannon, W. Weaver, and A. W. Burks, The Mathematical Theory of Communication (Nokia Bell Labs, 1951).
48. M. Staniek and K. Lehnertz, “Parameter selection for permutation entropy measurements,” Int. J. Bifurcat. Chaos 17(10), 3729–3733 (2007).
49. H. A. Sturges, “The choice of a class interval,” J. Am. Stat. Assoc. 21(153), 65–66 (1926).
50. F. Takens, “Detecting strange attractors in turbulence,” in Dynamical Systems and Turbulence, Warwick 1980 (Springer, 1981), pp. 366–381.
51. M. Tao, K. Poskuviene, N. Alkayem, M. Cao, and M. Ragulskis, “Permutation entropy based on non-uniform embedding,” Entropy 20(8), 612 (2018).
52. A. Wolf, J. B. Swift, H. L. Swinney, and J. A. Vastano, “Determining Lyapunov exponents from a time series,” Phys. D Nonlinear Phenom. 16(3), 285–317 (1985).
53. H. Zhang and X. Liu, “Analysis of parameter selection for permutation entropy in logistic chaotic series,” in 2018 International Conference on Intelligent Transportation, Big Data & Smart City (ICITBS) (IEEE, 2018), pp. 398–402.
54. L. Zunino, M. C. Soriano, I. Fischer, O. A. Rosso, and C. R. Mirasso, “Permutation-information-theory approach to unveil delay dynamics from time-series analysis,” Phys. Rev. E 82(4), 046212 (2010).