Coherence measures the similarity of progression of phases between oscillations or waves. When applied to multi-scale, nonstationary dynamics with time-varying amplitudes and frequencies, high values of coherence provide a useful indication of interactions, which might otherwise go unnoticed. However, the choice of analyzing coherence based on phases and amplitudes (amplitude-weighted phase coherence) vs only phases (phase coherence) has long been seen as arbitrary. Here, we review the concept of coherence and focus on time-localized methods of analysis, considering both phase coherence and amplitude-weighted phase coherence. We discuss the importance of using time-localized analysis and illustrate the methods and their practicalities on both numerically modeled and real time-series. The results show that phase coherence is more robust than amplitude-weighted phase coherence to both noise perturbations and movement artifacts. The results also have wider implications for the analysis of real data and the interpretation of physical systems.

Coherence is a universal principle of interactions between oscillations and waves. We explain how coherence has been introduced in physics and review procedures to measure coherence numerically. We expand the current knowledge by establishing the universal importance of measuring coherence not only as a static property but as a property evaluated locally in time. We also compare coherence defined to involve amplitude (the peak-to-peak height) vs purely the phase (the position in the cycle) by applying these different approaches to numerically modeled data. We argue that phase coherence is more robust and less susceptible to noise, particularly in cases where measurements are influenced by movement relative to the sensors. We provide an in-depth guide to the application of methods to measure coherence in data and demonstrate these points using real-world examples, including the interaction between the heart and lungs, noisy measurements of the brain, and the movement of electrons on the surface of liquid helium.

Oscillations and waves are ubiquitous in nature. They occur in mechanical and dynamical systems in virtually all areas of science: many physiological processes are oscillatory, such as the beating of the heart, breathing, or neuronal oscillations in the brain; ecology abounds with seasonal cycles; most dynamical phenomena in astrophysics and space science are oscillatory, as are geological and hydrodynamic phenomena, such as ocean waves or earthquakes; there are business cycles in the economy; strings in musical instruments produce vibrations, as do many man-made devices. Most electronic devices, as well as the Internet, TV, communication systems, and medical imaging, rely on electromagnetic waves. The study of oscillations and waves is, therefore, essential for understanding the universe, as stated by Tesla in the quote: “If you want to find the hidden secrets of the universe, you must think in terms of energy, frequency, and vibration.”

While the underlying dynamical system may be very different in distinct cases, oscillatory processes share two key time-dependent features: amplitude (associated with the energy of the oscillation) and phase (associated with the time evolution of the oscillation). To identify interactions between different parts of a system, we can calculate the similarity of these features using the physical property known as coherence.

In this paper, we provide a review of coherence, beginning with its conceptualization in physics and subsequently evaluating the relevant numerical methods used to measure it. In particular, we improve current understanding by both establishing the fundamental importance of taking a time-localized approach to coherence and comparing a method based on amplitude and phase to one using only phase information.

In Sec. II, we provide an overview of the development of coherence in physics and its adoption in time-series analysis. We also provide a definition of coherence based on the Fourier transform and explain the differences between coherence and the related concept of synchronization.

In Sec. III, we provide a model for a dynamical system, which is used to numerically illustrate the differences between the phase-only and amplitude-weighted methods of measuring coherence when the system is perturbed by different forms of noise.

In Sec. IV, we introduce wavelet-based coherence and explain the consequences of moving to the time–frequency domain that arise from the uncertainty principle. In this section, we also specify the alternate definitions of coherence in amplitude and phase and, based on results found using the illustrative model, argue that phase coherence is more resistant to the effects of noise and particularly movement artifacts.

In Sec. V, we provide an in-depth guide to the application of coherence in time-series analysis, including how to identify significant coherence.

In Sec. VI, we apply this knowledge and evaluate the two methods on four real-world problems, including the cardio-respiratory interaction, noisy electroencephalography (EEG) and functional near infrared spectroscopy (fNIRS) data, and electron dynamics on the surface of liquid helium.

We conclude in Sec. VII with a discussion of the time-localized approach to coherence and the impact of using methods based only on phase compared to those that also rely on amplitude information.

The theory of waves was initially developed by Young, Huygens, and Fresnel.1 Along with providing explanations for phenomena, such as diffraction and refraction, they also studied wave interference. In this latter case, multiple waves combine to produce a characteristic pattern of spatially and time-localized maxima and minima. However, this effect is only seen clearly when the phases of the waves progress in the same way. It is this property of the waves that we term coherence.

The study of interference and wave coherence has already led to many well-known discoveries. These include the Michelson–Morley experiment, which disproved the existence of the luminiferous ether.2 Variations of Young’s double-slit experiment have also played an important role in the understanding of wave–particle duality.3–5 In addition, the drive to develop a coherent source of light led to the invention of the laser.6 Subsequent to the development of lasers, larger-scale interference experiments have been possible, which resulted in the discovery of gravitational waves.7 Coherence is now studied across a broad spectrum of domains. This includes solid state and quantum physics,8–11 remote sensing,12 electrophysiology,13–15 communications,16 and space science.17 

With the advent of computers, the study of coherence is no longer restricted to physical experiments. Numerical methods allow for the analysis of oscillations in recorded data. Using this recorded data, coherence can be investigated.18 Coherence between different parts of a dynamical system can result from either synchronization or from modulation by a common process. While one can separately analyze two variables and qualitatively assess the common features present in each, interactions are often nonlinear in nature and, hence, difficult to discern. Coherence, therefore, provides a useful quantitative measure to identify these interactions.

An important aspect of coherence is that it is a time-localized phenomenon. This makes it particularly useful for analyzing dynamics comprised of oscillations with time-dependent quantitative characteristics. Such dynamics has been modeled using chaotic, stochastic, and non-autonomous systems.19,20 Time-series analysis methods that give a non-time-dependent representation of a time-series, such as its histogram or Fourier transform, may yield some insight into the amplitudes of oscillations present. However, these methods will generally provide little understanding of phase dynamics if the quantitative characteristics of the oscillations, or of their interactions with each other, are being modulated over time. In contrast to this, the time evolution of phases carries a great wealth of information about the underlying system when such time modulation exists.21 

Time-evolving time-localized analysis is typically performed in the time–frequency domain. This type of analysis was originally developed in quantum mechanics, with the distribution proposed by Wigner providing the highest possible frequency resolution that is mathematically possible within the limitations of the uncertainty principle.22 Ville later applied this function in the context of time–frequency analysis more generally.23 At the same time, the windowed Fourier transform was also developed,24 and the field has since been advanced with the introduction of the continuous wavelet transform.25,26 Time–frequency analysis has been applied most commonly to deal with simple forms of nonstationary data, with applications in communications, radar, sonar, and acoustics.27 Recently, it has also been invaluable in the analysis of problems such as turbulence,28 brain signals,29 blood flow,30 and excited electron oscillations on liquid helium.31 These systems involve multiple potentially mutually interacting oscillatory processes that take place simultaneously across a range of timescales; we refer to such systems as multi-scale systems.

One specific advantage of the time–frequency methods is that they, to various degrees, allow for the time-localized extraction of instantaneous phases over time (see, e.g., Ref. 32). These phases can be studied further to give insight into the system. This can be seen in phase synchronization methods, which have been applied to the cardiorespiratory system.33 Phase differences can also be observed and point to delays in coupled networks of oscillators, such as those seen in biology.34 Beyond this, we can estimate coupling functions and infer the directionality of coupling (see Ref. 35 and the references therein). In the case of weakly coupled oscillator networks, connectivity can be inferred directly from the phases.36 There are also phase stability methods, which have been used to find stable oscillations in the heart rate variability.37 

In the case of coherence represented in the time–frequency domain, the initial development of the methods was motivated by applications to biomedical data. Specifically, it has been of great importance to the mapping of functional connectivity and study of synchronization in the brain.38–45 At the same time, the development of time–frequency coherence has spearheaded investigations into microvasculature dynamics.46–51 It has since been used in other biomedical studies and found use as a marker for ageing of the cardiorespiratory system,52 as well as revealing the relation between the width of the subarachnoid space and blood pressure.53 Moreover, the generality of time–frequency coherence means that it has found applicability elsewhere. In particular, these methods have also been used extensively in the analysis of solar, geophysical, and meteorological time-series to determine the Earth–Sun dynamical relationship.54–57 Coherence has also found use in the analysis of economic time-series, where it has been used to identify instability and risk in specific markets as well as the relation between the monetary policy and the macroeconomic activity.58–62 It has also been applied in the case of cyclo-nonstationarity, where it has been used to analyze mechanical systems, such as engines and wind turbines.63 Further examples include the evaluation of electron dynamics,31 behavioral rhythms in mice,64 and social networks.65 

The original formulation of coherence was within the field of optics, where it is used to quantify the degree to which two sources of light can interfere. It was developed from a similar measure of the intensity of the interference pattern, or visibility,
$v = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}}, \qquad (1)$
where I_max is the intensity of the light at the peaks and I_min is the intensity at the troughs. The value of v is 1 when the interference is maximized and 0 in the case of no interference (i.e., when the intensity curve of the light is smooth). While this definition is useful from an empirical standpoint, it is more difficult to use for the mathematical analysis of waves of arbitrary phase and amplitude. Coherence was, therefore, developed as a similar measure of the degree of interference, but using the phase and amplitude of the interfering waves as parameters.66 
It is worth noting that while interference was originally investigated in optics, the phenomenon prevails throughout all types of waves. As such, coherence can also be defined for any type of wave. A general analytic framework for the study of waves is provided by the Fourier transform. In this context, we can find a measure of the similarity between the waves in two data series by computing the Fourier cross spectrum,
$S_{ab}(f) = F_a(f)\,\overline{F_b(f)}, \qquad (2)$
where F_a and F_b are the Fourier transforms of the two series and the overbar denotes complex conjugation. However, this similarity measure is still proportional to the amplitude of the Fourier components. This means that if a dominant oscillation appears in one data series but only background fluctuations are present in the other, then the cross spectrum will still have a peak at the frequency of that oscillation as long as there is some amplitude at that frequency in the other data series. With this in mind, it is clear that we need to normalize the cross spectrum so that it is not biased by this effect. The way this is achieved is by defining Fourier coherence as
$C(f) = \frac{\left|\langle S_{ab}(f)\rangle\right|}{\sqrt{\langle S_{aa}(f)\rangle\,\langle S_{bb}(f)\rangle}}, \qquad (3)$
where the angle brackets denote taking an average value of the Fourier spectra S_ab(f), S_aa(f), and S_bb(f) computed for different time-segments of the time-series.66 This defines coherence on a scale between 0 and 1, making it directly comparable with the interference visibility shown in (1).
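As a concrete illustration of Eqs. (2) and (3), the Python sketch below estimates Fourier coherence by averaging cross spectra over time segments. It is a minimal sketch rather than code from this work: the function name and the plain, non-overlapping segmentation are our own choices, and scipy.signal.coherence provides a comparable Welch-type estimator (returning the squared value).

# Minimal sketch: segment-averaged Fourier coherence, following the
# reconstructed Eqs. (2) and (3); not code from the original work.
import numpy as np

def fourier_coherence(x, y, fs, n_seg=8):
    """Fourier coherence of x and y, averaged over n_seg time segments."""
    N = min(len(x), len(y)) // n_seg                  # samples per segment
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    Sxy = np.zeros(len(freqs), dtype=complex)
    Sxx = np.zeros(len(freqs))
    Syy = np.zeros(len(freqs))
    for k in range(n_seg):
        xs = x[k * N:(k + 1) * N] - np.mean(x[k * N:(k + 1) * N])
        ys = y[k * N:(k + 1) * N] - np.mean(y[k * N:(k + 1) * N])
        Fx, Fy = np.fft.rfft(xs), np.fft.rfft(ys)
        Sxy += Fx * np.conj(Fy)                       # cross spectrum, Eq. (2)
        Sxx += np.abs(Fx) ** 2
        Syy += np.abs(Fy) ** 2
    C = np.abs(Sxy) / np.sqrt(Sxx * Syy + 1e-30)      # Eq. (3); lies in [0, 1]
    return freqs, C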

It is worth noting that coherence should not be confused with synchronization. In terms of dimensionality, synchronization is defined specifically in the time dimension and, therefore, applies to the dynamics of oscillations in time. In contrast, coherence refers to a more general phenomenon, which extends to waves that are defined across space as well as time.

There are also important differences in the context of time-series generated by dynamical systems. While many types of synchronization exist, they all result from an interaction between two or more oscillations.33,67 As such, synchronization refers to a process of adjustment of rhythms caused by interactions. In contrast, coherence implies that two oscillations are observed to have the same frequency and frequency modulation, but this does not necessarily imply that they are coupled.

As examples, consider two linear oscillators with the same frequencies or two autonomous nonlinear oscillators with the same parameters and initial conditions. In both of these cases, the oscillations produced by the two systems will be coherent. However, since the state of one oscillator does not depend on the state of the other, they are not coupled.

Despite this difference, there is still a strong connection between coherence and specific types of synchronization. The states of complete 1:1 synchronization or 1:1 phase synchronization are more or less the same as coherence, since the strength of the interaction reduces to a small value once two oscillators are completely synchronized. One can also consider indirect synchronization, such as two non-autonomous oscillators becoming synchronized via the same time-dependent modulation. In each of these cases, the effect can be measured directly using coherence.39 

In order to illustrate the factors affecting the measurement of coherence, we consider a pair of time-series, which contain common oscillations generated by non-autonomous systems with independent perturbations. To ensure that we are not biased toward perturbations in amplitude or phase, we consider a system with a separable amplitude and phase dynamics.

The Poincaré oscillator is a two-dimensional limit cycle oscillator, which can be defined in polar coordinates as
$\frac{dr}{dt} = -\alpha\,r\,(r - a), \qquad \frac{d\theta}{dt} = \omega, \qquad (4)$
where r is the amplitude and θ is the phase of the oscillator. A stable limit cycle is defined in state space with radius r = a, with α parameterizing the rate at which the trajectory converges to this amplitude. The phase is neutrally stable and changes with a rate defined by the frequency ω. A time-series x(t) of the oscillation can be generated by transforming from polar coordinates using x(t) = r(t) cos(θ(t)).

The important feature of this system is that r and θ vary independently. This means that the amplitude of the oscillator can be perturbed without affecting the phase and vice versa. However, comparing the effect of amplitude and phase perturbations this way using the current system would not be a fair comparison since r has a stable point attractor while θ does not. This leads to the perturbations to r being suppressed over time, while perturbations to θ are integrated over time.37 

To resolve this issue, we modify the Poincaré oscillator so that the form of stability is the same in both amplitude and phase. Unfortunately, we cannot simply copy the function used for dr/dt to dθ/dt, since θ would then converge to a. For persistent oscillations, θ needs to change on average monotonically, which is provided by the parameter ω in the current form. However, we cannot use dθ/dt = ω − αθ(θ − a) either, as θ is unbounded and, in the limit of large θ, ω − αθ(θ − a) → −αθ², which results in an unstable trajectory. Instead, we use the following modification:
$\frac{dr}{dt} = -\alpha\,r\,(r - a) + \xi_r \eta_r(t), \qquad \frac{d\theta}{dt} = \omega - \alpha\,(\theta - \phi)^{3} + \xi_\theta \eta_\theta(t), \qquad \frac{d\phi}{dt} = \omega, \qquad (5)$
where ϕ is an auxiliary dimension, which is left unperturbed and provides a stable point in phase moving at the same rate ω. The cubic function was chosen because it gives similar scaling of the strength of attraction to the stable point relative to the distance, but is symmetric around the stable point. The terms ξ_i η_i(t) are white Gaussian noise with standard deviations specified by ξ_i.

To generate each time-series, the amplitudes {r_1, r_2} and phases {θ_1, θ_2} of two modified Poincaré oscillators were numerically modeled and summed together in a time-series X(t) = r_1(t) cos(θ_1(t)) + r_2(t) cos(θ_2(t)). However, even with perturbations, this time-series would appear as two noisy sinusoids with approximately stationary dynamics. To simulate more realistic nonstationary time-series, the system was made non-autonomous by modulating the oscillator frequencies with ω(t) = 2π ω_0 + A sin(2π ω_m t). To investigate the effect of phase differences, the phase offset of the oscillations was also adjusted by changing the initial value of ϕ.

In the numerically modeled examples used in Sec. IV, we considered a high-frequency mode with ω_0 = 1, ω_m = 0.008, and A = 0.8 and a low-frequency mode with ω_0 = 0.5, ω_m = 0.0055, and A = 0.2. For the other oscillator parameters, we used a = 1 and α = 5 in each case.
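A minimal sketch of how such time-series can be generated is given below. It assumes the reconstructed form of Eq. (5), simple Euler–Maruyama integration, and a sampling frequency of 40 Hz chosen purely for illustration; the function name and the psi argument (the modulation phase offset used for the surrogates of Sec. V) are our own additions.

# Minimal sketch: numerical integration of the modified Poincare oscillator,
# Eq. (5), with frequency modulation omega(t) = 2*pi*w0 + A*sin(2*pi*wm*t + psi).
import numpy as np

def poincare_mode(w0, wm, A, xi_r=0.0, xi_th=0.0, a=1.0, alpha=5.0,
                  T=400.0, fs=40.0, psi=0.0, seed=None):
    """One mode r(t)*cos(theta(t)) of the modified Poincare system."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / fs
    n = int(T * fs)
    r, th, phi = a, 0.0, 0.0
    x = np.empty(n)
    for k in range(n):
        w = 2 * np.pi * w0 + A * np.sin(2 * np.pi * wm * k * dt + psi)
        r += -alpha * r * (r - a) * dt + xi_r * np.sqrt(dt) * rng.standard_normal()
        th += (w - alpha * (th - phi) ** 3) * dt + xi_th * np.sqrt(dt) * rng.standard_normal()
        phi += w * dt                      # auxiliary phase, left unperturbed
        x[k] = r * np.cos(th)
    return x

# Two-mode time-series X(t) for the amplitude-noise case (xi_r = 0.5, xi_th = 0):
fs = 40.0
X = (poincare_mode(1.0, 0.008, 0.8, xi_r=0.5, fs=fs, seed=1)
     + poincare_mode(0.5, 0.0055, 0.2, xi_r=0.5, fs=fs, seed=2))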

Noise plays a significant role in the evaluation of coherence. Consider two time-series with a single, identical sinusoidal oscillation with frequency f_sin. By analyzing Eq. (3), we can see that S_ab(f_sin) = S_aa(f_sin) and S_aa(f_sin) = S_bb(f_sin), which results in the expected value C(f_sin) = 1. However, since the time-series contain no other oscillations, this relation holds true not just for f_sin but for all values of f. This means that we might mistakenly believe that coherent oscillations exist at all frequencies.

Similar behavior is apparent whenever dominant oscillations are present in both time-series. Without independent fluctuations at adjacent frequencies, significant coherence will be observed at values far from the frequencies of the corresponding oscillations.

In most real data, this is not an issue as they are usually influenced by both system noise and measurement noise. We must, therefore, take care to approximate real-world examples in our analysis by including noise in the numerical model.

To investigate the effect of both amplitude and phase perturbations, two cases were considered. In the first case, each of the modes was perturbed only by amplitude noise with ξ_r = 0.5 and ξ_θ = 0, while in the second case, they were perturbed only by phase noise with ξ_r = 0 and ξ_θ = 0.5. We also considered a case with additive noise to simulate measurement noise and common artifacts in the time-series. These were generated by adding the same dichotomous noise, with random abrupt transitions between two states, to both time-series. This was defined using the time-dependent transition probabilities,
(6)
where t is the time since the last transition from one state to another and Λ = λ_1 + λ_2. The transition rates were chosen as λ_1 = 0.00001 Hz and λ_2 = 0.00019 Hz, causing a series of spike-like features with rare 0 → d transitions followed by quicker d → 0 transitions. The amplitude of the spikes was chosen as d = 10. In addition to these spikes, independent 1/f noise series were added to each time-series to simulate background fluctuations.
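The sketch below illustrates this additive-noise construction. It assumes exponential waiting times for the dichotomous spikes and an FFT-shaped approximation to 1/f noise; both generators are simple stand-ins rather than the implementations used in this work, and X and fs come from the previous sketch.

# Minimal sketch: common dichotomous (telegraph) spikes plus independent 1/f
# backgrounds added to two series sharing the same underlying modes.
import numpy as np

def dichotomous_noise(n, fs, lam1, lam2, d, rng):
    """Two-state (0 or d) noise with mean dwell times 1/lam1 and 1/lam2."""
    x = np.zeros(n)
    state, i = 0.0, 0
    while i < n:
        rate = lam1 if state == 0.0 else lam2
        dwell = max(1, int(rng.exponential(1.0 / rate) * fs))   # samples in this state
        x[i:i + dwell] = state
        state = d - state                                       # switch 0 <-> d
        i += dwell
    return x

def one_over_f_noise(n, rng):
    """Approximate 1/f noise by shaping white noise in the Fourier domain."""
    f = np.fft.rfftfreq(n)
    spec = np.fft.rfft(rng.standard_normal(n))
    spec[1:] /= np.sqrt(f[1:])               # amplitude ~ f^(-1/2), so power ~ 1/f
    spec[0] = 0.0
    y = np.fft.irfft(spec, n)
    return y / np.std(y)

rng = np.random.default_rng(3)
n = len(X)                                   # X and fs from the previous sketch
Y = (poincare_mode(1.0, 0.008, 0.8, xi_r=0.5, fs=fs, seed=4)
     + poincare_mode(0.5, 0.0055, 0.2, xi_r=0.5, fs=fs, seed=5))
spikes = dichotomous_noise(n, fs, 1e-5, 1.9e-4, d=10.0, rng=rng)
Xa = X + spikes + one_over_f_noise(n, rng)   # same spikes in both series,
Ya = Y + spikes + one_over_f_noise(n, rng)   # independent 1/f in each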

For the analysis of coherence of phases of oscillations in time-series, the Fourier-based definition of coherence is perfectly valid when the time-series are stationary. However, for multi-scale, nonstationary time-series, the dynamics cannot be approximated by assuming a constant time-averaged phase and amplitude, as is assumed in the Fourier transform. As discussed in Rowland Adams et al.,21 such time-series must not be analyzed from the infinite-time, non-time-evolving framework of analysis that is designed for stationary time-series—which is precisely the framework within which Fourier coherence exists—but rather, such time-series need to be analyzed from within the framework of time-evolving time-localized analysis of oscillatory characteristics.

Accordingly, it is natural to seek a way to compute coherence from time–frequency representations of the data. As already mentioned, we can compute a time–frequency representation using an ordinary Fourier transform with a moving window, which is also known as a short-time Fourier transform. However, as soon as we do this, we must ask: what size of window should we use? A large window gives us excellent frequency resolution, but then it is more difficult to determine the time at which oscillation frequencies change. Conversely, while a small window enables us to track the change in frequency more precisely, the frequency resolution is lower, making it difficult to determine the exact frequencies of oscillations. These characteristics of the measurement of waves are well known in quantum mechanics and famously summarized in the Heisenberg uncertainty principle.

The main limiting factor in the choice of window size is the lowest-frequency oscillation that we wish to observe. It is necessary to choose a window that contains enough cycles of this oscillation to determine its frequency to reasonable precision. However, this window size is larger than the window needed to have the same frequency resolution for higher-frequency oscillations. For higher-frequency oscillations, this window size will represent a slower timescale than the timescale of these oscillations, making the analysis effectively equivalent to the kind of long-time-averaging associated with the classical non-time-evolving, long-time-asymptotic-statistics framework designed for stationary time-series described above.

Therefore, to achieve a time-localized analysis of multi-scale time-series, we would need to use an adaptive window size to increase the time resolution at high frequencies while maintaining an optimal frequency resolution overall.

The difference between this time-localized approach and the slow-timescale averaging that takes place in the fixed-window-size approach is illustrated in Fig. 1. Here, time–frequency analysis is performed on a time-series from the illustrative Poincaré oscillator model. In this case, the oscillators were not perturbed with phase noise, and only minimal amplitude noise, ξ_r = 0.005, was introduced. In addition, background fluctuations were numerically modeled by adding independent 1/f noise to each time-series. In Fig. 1(a), depicting the fixed-window approach, the idea is to characterize all aspects of the dynamics at a given time using the data in a given window. This means that all of the analysis for every frequency is performed within the same window (note that this window is shown as rectangular for illustrative purposes only—a Gaussian window was used in the short-time Fourier transform to enable a fairer comparison of the two approaches).

FIG. 1.

Time–frequency analysis illustrated for time-localized vs fixed-window approaches. (a) Generated time-series of Poincaré oscillators as defined by Eq. (5), with additive 1/f noise and ξ_r = 0.005. A window of size 12.6 s centered at 120 s is drawn above the time-series. The arrows above the window illustrate that the window slides across the time-series when the short-time Fourier transform (STFT) is applied. (b) The same time-series as in (a), with three wavelets with frequency resolution f_0 = 2 at different frequencies (0.5, 1, and 1.7 Hz) drawn above the time-series. The wavelets slide across the time-series when the WT is applied. The dots between the wavelets illustrate that there is one wavelet for each frequency, in our case 288 wavelets. (c) The STFT amplitude found at 120 s. (d) The STFT phase found at 120 s projected onto the frequency-phase plane. (e) The WT amplitude found at 120 s. The orange dots correspond to the frequencies of the three wavelets in (b). Note the logarithmic frequency resolution of the WT. (f) The WT phase found at 120 s projected onto the frequency-phase plane. (g) The STFT amplitude for the whole 400 s time-series. A line is drawn at 120 s. (h) The STFT phase for 10 s of the time-series. (i) The WT amplitude for the whole 400 s time-series. (j) The WT phase for 10 s of the time-series.


By contrast, as depicted in Fig. 1(b), the time-localized approach uses a variable-sized window depending on which frequency is being analyzed. For the former approach, where at each time a full-frequency-spectrum Fourier transform is performed inside a pre-specified window, the result is that the time–frequency analysis can be optimized around one frequency only. However, in the time-localized approach, the analysis is centered around each frequency under analysis, much like adjusting an optical focus. This means that the time–frequency plots for this latter approach provide much greater detail across time at high frequencies, as well as much greater detail across frequency at low frequencies. An alternative version, with a Fourier transform presented with a logarithmic scale, is provided in Fig. 1 of the supplementary material. When comparing the two figures, it is obvious that a logarithmic scale is disadvantageous for the Fourier transform, which is calculated with linear frequency resolution.

The time-localized, adaptive window approach is realized by the continuous wavelet transform26 (which we shall sometimes just call the wavelet transform, abbreviated WT). This is defined by
$W(s,t) = \frac{1}{\sqrt{s}} \int_{0}^{T} \Psi^{*}\!\left(\frac{u - t}{s}\right) x(u)\, du, \qquad (7)$
where x(t) is a time-series of length T; the variable s > 0, called the “scale,” controls the width of the windowing function, enabling it to be adapted to the frequency under investigation (as described shortly); and Ψ is a complex-valued function called the mother wavelet. Using the convolution theorem (or, equivalently, Fourier isometry), the wavelet transform can be computed in the Fourier domain by
$W(s,t) = \frac{\sqrt{s}}{2\pi} \int_{-\infty}^{\infty} \hat{\Psi}^{*}(s\omega)\, \hat{x}(\omega)\, e^{\,i\omega t}\, d\omega,$
where $\hat{x}(\omega) = \int_{-\infty}^{\infty} x(u)\, e^{-i\omega u}\, du$ and $\hat{\Psi}(\omega) = \int_{-\infty}^{\infty} \Psi(u)\, e^{-i\omega u}\, du$ denote the Fourier transforms of the time-series (taken as zero outside [0, T]) and of the mother wavelet, respectively.
An example of a mother wavelet is the Morlet wavelet, which is approximately a complex exponential function multiplied by a Gaussian envelope, such that the resulting wavelet transform is approximately the adaptive-window-width version of the Gaussian-windowed Fourier transform. Specifically, the Morlet wavelet is given by
$\Psi(u) = \frac{1}{\sqrt[4]{\pi}}\left(e^{2\pi i f_0 u} - e^{-(2\pi f_0)^{2}/2}\right) e^{-u^{2}/2}, \qquad (8)$
where f_0 is a free parameter called the frequency resolution: it can be changed to adjust the resolution toward greater frequency precision (higher f_0) or time precision (lower f_0). The Fourier-domain representation of the Morlet wavelet Ψ is given by
$\hat{\Psi}(\omega) = \sqrt{2}\,\pi^{1/4} \left( e^{-(\omega - 2\pi f_0)^{2}/2} - e^{-(2\pi f_0)^{2}/2}\, e^{-\omega^{2}/2} \right).$
Note that Ψ ^ is a real-valued function; i.e., the Morlet wavelet Ψ is a Hermitian function.
In the wavelet transform, one can adapt the scale s to the frequency f under investigation in such a manner as to give logarithmic frequency resolution by taking s to be inversely proportional to f. Specifically, when working with the Morlet wavelet, we take
$s = \frac{\omega_{\max}}{2\pi f},$
where ω_max is the value of ω at which the real-valued function Ψ̂ is maximized. Provided f_0 is not too small (larger than about 0.5), ω_max is almost exactly equal to 2π f_0, i.e., s ≈ f_0/f.
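The sketch below evaluates the wavelet transform in the Fourier domain on a logarithmic frequency grid, assuming the reconstructed Morlet form of Eq. (8) and FFT-based (circular) convolution, so boundaries are handled periodically rather than by the padding strategies discussed next; it is not the MODA implementation, and the function name is our own.

# Minimal sketch: Morlet wavelet transform via the FFT, with the adaptive
# scale s = f0 / f giving logarithmic frequency resolution.
import numpy as np

def morlet_wt(x, fs, fmin, fmax, f0=2.0, nf=288):
    """Continuous wavelet transform of x on a logarithmic frequency grid."""
    n = len(x)
    freqs = np.logspace(np.log10(fmin), np.log10(fmax), nf)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)   # angular frequencies
    xhat = np.fft.fft(x - np.mean(x))
    W = np.empty((nf, n), dtype=complex)
    for i, f in enumerate(freqs):
        s = f0 / f                                       # adaptive scale
        # Real-valued Fourier-domain Morlet wavelet, peaked near omega = 2*pi*f
        psi_hat = (np.sqrt(2) * np.pi ** 0.25
                   * (np.exp(-0.5 * (s * omega - 2 * np.pi * f0) ** 2)
                      - np.exp(-0.5 * (2 * np.pi * f0) ** 2)
                        * np.exp(-0.5 * (s * omega) ** 2)))
        W[i] = np.sqrt(s) * np.fft.ifft(xhat * psi_hat)
    return freqs, W

# Instantaneous amplitude and phase at each frequency and time:
freqs, Wa = morlet_wt(Xa, fs, fmin=0.1, fmax=2.0)        # Xa, fs from earlier sketches
amp, phase = np.abs(Wa), np.angle(Wa)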

It is worth noting an issue that arises from the fact that the integral in Eq. (7) is bounded between 0 and T. This means that when t is close to one end of x(t), a significant part of the amplitude of the wavelet function extends beyond the bounds of the integral. This bounded integral is also equivalent to an unbounded integral in which x(t) is zero-padded beyond its ends. This problem is common among methods using a moving window, and other strategies include using reflected data or predicted data equal to half the length of the window. However, each of these methods causes boundary effects that result in errors in the time–frequency representation.68 The other alternative is not to include these regions in the plot. This results in a cone of influence, which is larger in size at lower frequencies due to the larger-sized wavelets reaching the ends sooner than smaller wavelets.

From the wavelet transform, one can extract an instantaneous amplitude and phase associated to each frequency f at each time t by expressing W(s,t) = |W(s,t)| e^{iθ(s,t)} and taking |W(s,t)| as the amplitude and θ(s,t) as the phase.

With an optimal time–frequency representation of the time-series, we can proceed to define the coherence between them. Following from the original definition in Eq. (3), time–frequency domain coherence between two time-series x(t) and y(t) was originally popularized by Torrence and Webster69 and then again by Lachaux et al.,38 where it was defined as
$C_{ab}(f,\tau) = \frac{\left| S^{W}_{ab}(f,\tau) \right|}{\sqrt{S^{W}_{aa}(f,\tau)\, S^{W}_{bb}(f,\tau)}}, \qquad (9)$
where $S^{W}_{ab}$ are the wavelet cross spectra as defined by
$S^{W}_{ab}(f,\tau) = \frac{1}{\delta} \int_{\tau - \delta/2}^{\tau + \delta/2} W_{a}(f,t)\, W_{b}^{*}(f,t)\, dt. \qquad (10)$
Here and in the rest of the text, * denotes the complex conjugate. δ defines the length of a moving window in the time domain over which the cross spectra are averaged. Like the wavelets, δ is chosen to be adaptive in order to maintain an optimal resolution over frequency, such that δ = n_cy/f, where n_cy is the number of cycles at any given frequency. Values between 6 and 10 for n_cy were originally recommended in the context of data recorded by brain electrodes.38 However, in other applications of time–frequency analysis, n_cy = 5 has been used.70 

The application of a wavelet-based approach vs a Fourier-based approach has a significant effect on the information provided by coherence analysis. This can be seen by comparing the studies of Karavaev et al.71 and Mizeva et al.,72 both of which consider cardiovascular time-series recorded over similar timescales (15 and 20 min, respectively). In the former study, the macroscopic autonomic control is characterized by dividing the Fourier coherence into a “high-frequency” (0.15–0.4 Hz) and “low-frequency” (0.05–0.15 Hz) band. In the latter study, the wavelet coherence is divided into five separate frequency bands with ranges 0.6–2, 0.145–0.6, 0.052–0.145, 0.021–0.052, and 0.0095–0.021 Hz, which allows for the characterization of both the macroscopic and microscopic dynamics. The logarithmic scale provided by the wavelet coherence, therefore, acts much like a telescope or microscope, allowing us to zoom in and out of all frequencies of interest at every moment in time.

If we use a complex wavelet, such as the Morlet wavelet defined in Eq. (8), then the cross spectrum in the numerator of Eq. (9) can be separated into phase and amplitude, with
$W_{a}(f,t)\, W_{b}^{*}(f,t) = |W_{a}(f,t)|\,|W_{b}(f,t)|\, e^{\,i\left(\theta_a(f,t) - \theta_b(f,t)\right)}. \qquad (11)$
Doing the same for the denominator terms, we find
$W_{a}(f,t)\, W_{a}^{*}(f,t) = |W_{a}(f,t)|^{2}, \qquad W_{b}(f,t)\, W_{b}^{*}(f,t) = |W_{b}(f,t)|^{2}. \qquad (12)$
Written this way, the coherence defined in Eq. (9) is expressed as a phasor of the phase difference, e^{i(θ_a(f,τ) − θ_b(f,τ))}, multiplied by the normalized amplitudes. We, therefore, term this definition amplitude-weighted phase coherence (AWPC).
However, we can actually remove the influence of the wavelet amplitude altogether. We can define phase coherence (PC) as
$C^{\phi}_{ab}(f,\tau) = \left| \frac{1}{\delta} \int_{\tau - \delta/2}^{\tau + \delta/2} e^{\,i\left(\theta_a(f,t) - \theta_b(f,t)\right)}\, dt \right|. \qquad (13)$

This definition of coherence was developed independently by Lachaux et al.38 (where it was termed single-trial phase coherence) and Bandrivskyy et al.46 While Eq. (13) defines PC for a pair of time-series, it has since been extended to groups of three or more time-series.73,74
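A minimal sketch of both time-localized measures is given below. It assumes the reconstructed Eqs. (9)–(13) and the adaptive averaging window δ = n_cy/f, uses a plain (slow) loop for readability, and is not the MODA implementation.

# Minimal sketch: time-localized PC (Eq. (13)) and AWPC (Eq. (9)) from two
# wavelet transforms of shape (frequencies x time).
import numpy as np

def wavelet_coherence(Wa, Wb, freqs, fs, n_cy=10):
    """Return (pc, awpc) arrays; entries outside the windowed range stay NaN."""
    nf, nt = Wa.shape
    pc = np.full((nf, nt), np.nan)
    awpc = np.full((nf, nt), np.nan)
    dphi = np.exp(1j * (np.angle(Wa) - np.angle(Wb)))    # phase-difference phasors
    cross = Wa * np.conj(Wb)                             # wavelet cross-spectral terms
    for i, f in enumerate(freqs):
        half = int(round(0.5 * n_cy / f * fs))           # half-window delta/2 in samples
        for k in range(half, nt - half):
            sl = slice(k - half, k + half + 1)
            pc[i, k] = np.abs(np.mean(dphi[i, sl]))                          # Eq. (13)
            awpc[i, k] = (np.abs(np.mean(cross[i, sl]))                      # Eq. (9)
                          / np.sqrt(np.mean(np.abs(Wa[i, sl]) ** 2)
                                    * np.mean(np.abs(Wb[i, sl]) ** 2)))
    return pc, awpc

# Coherence between the two series of the earlier sketches:
_, Wb = morlet_wt(Ya, fs, fmin=0.1, fmax=2.0)
pc, awpc = wavelet_coherence(Wa, Wb, freqs, fs)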

Like Fourier coherence, both PC and AWPC take values between 0 and 1. Note, however, that for oscillations with time-dependent characteristics, strong coherence will not typically manifest as a coherence value of 1, but often as distinctly less than 1.

In the examples shown in this paper, PC was calculated using MODA—an interactive MATLAB toolbox.75 We also encourage readers to consult the MODA user guide, which contains practical information for performing PC and other time–frequency analyses.76 

The differences between PC and AWPC are shown in Figs. 2–4 using the previously defined illustrative Poincaré model. In each case, the two time-series, their corresponding WTs, and the PC and AWPC plots are shown. The methods were applied using three different time–frequency resolutions by changing the central frequency f_0 of the Morlet wavelet. The effect of adjusting f_0 can be seen in the WT, where the frequency width of the bands corresponding to the oscillatory modes decreases with increasing f_0. This effect is also seen in the coherence plots. Here, the darker bands of coherence reveal the common frequency modulation of the two modes, which becomes more localized in frequency as f_0 is increased.

FIG. 2.

Comparison between PC and AWPC applied using wavelets of different frequency resolution to modes generated by amplitude-perturbed Poincaré oscillators as defined by Eq. (5). (a) and (b) Ten-second segments of the two time-series containing modes with independent perturbations. The second row (c)–(h) presents the WT plots of the two time-series at different frequency resolutions: f_0 = 1 (c) and (d), f_0 = 2 (e) and (f), and f_0 = 5 (g) and (h). The time-series in (a) was the input for the transforms (c), (e), and (g), while (b) provided the input for (d), (f), and (h). The final row (i)–(n) indicates the time-localized coherence for the PC and AWPC methods using the transforms shown in (c)–(h). For example, (i) and (j) were both generated using the WT plots indicated by (c) and (d).

FIG. 3.

Comparison between PC and AWPC applied using wavelets of different frequency resolution to modes generated by phase-perturbed Poincaré oscillators as defined by Eq. (5). (a) and (b) Ten-second segments of the two time-series containing modes with independent perturbations. The second row (c)–(h) presents the WT plots of the two time-series at different frequency resolutions: f_0 = 1 (c) and (d), f_0 = 2 (e) and (f), and f_0 = 5 (g) and (h). The time-series in (a) was the input for the transforms (c), (e), and (g), while (b) provided the input for (d), (f), and (h). The final row (i)–(n) indicates the time-localized coherence for the PC and AWPC approaches using the transforms shown in (c)–(h). For example, (i) and (j) were both generated using the WT plots indicated by (c) and (d).

FIG. 4.

Comparison between PC and AWPC applied using wavelets of different frequency resolution to modes generated by phase-perturbed Poincaré oscillators as defined by Eq. (5), with the same dichotomous noise and independent realizations of 1/f noise added to both time-series. (a) and (b) The independently generated time-series. The second row (c)–(h) presents the WT plots of the two time-series at different frequency resolutions: f_0 = 1 (c) and (d), f_0 = 2 (e) and (f), and f_0 = 5 (g) and (h). The time-series in (a) was the input for the transforms (c), (e), and (g), while (b) provided the input for (d), (f), and (h). The final row (i)–(n) indicates the time-localized coherence for the PC and AWPC approaches using the transforms shown in (c)–(h). For example, (i) and (j) were both generated using the WT plots indicated by (c) and (d).


An additional effect seen when increasing the frequency resolution is that the background coherence between the modes also increases. The reason for this is that larger wavelets average over more cycles, leading to extracted wavelet components that are more stationary in frequency. These components, therefore, appear coherent, but only because the rate of change in frequency converges to the same value (i.e., 0) for all oscillations as f_0 is increased.

Figure 2 shows the effect of amplitude perturbations on the modes following the two coherence measures. The coherence bands associated with the modes are lighter and less well-defined in the case of AWPC, with the effect being greatest for the lowest frequency resolution. The explanation for this can be found in the independent fluctuations seen in the amplitude of the WT. As highlighted in Eq. (11), AWPC is dependent on the wavelet amplitude, which means that the amplitude perturbations result in lower coherence. In contrast, PC is not dependent on the wavelet amplitude and is, therefore, resistant to such perturbations.

As one might expect, the effect is similar for both approaches when the perturbations are instead applied to the phase of oscillations. Figure 3 illustrates the effect of phase perturbations, where PC and AWPC are affected similarly by the noise due to both methods being dependent on the phase of the wavelet components.

A significant difference between PC and AWPC can be seen in the additive noise case shown in Fig. 4. Here, the common dichotomous noise results in time-localized spikes in the time domain. These can be seen as large cones of amplitude permeating into the lower frequencies in the WT. In the coherence plots, this effect has the most significant impact on the low frequencies, as larger wavelets have a lower time resolution and span a greater period. Furthermore, it can be seen that the case f_0 = 5 is most affected by the amplitude perturbations due to the increased temporal width of the wavelets. Generally speaking, therefore, smaller values of f_0 should be used in cases where extremely time-localized noise features are present, such as movement artifacts in biomedical measurements.

Also worth noting in the additive noise example is that even though the added dichotomous and 1/f noise affect both the phase and amplitude of the wavelet components, the coherence bands of the modes are more strongly defined in the PC plots and the low-frequency coherence is reduced. This is caused by the time-localized properties of the dichotomous noise, which only affect a relatively small number of cycles at each wavelet scale. Since the window used to calculate the coherence averages the phase difference over a relatively large number of cycles, the effect on PC is reduced. In contrast, as shown in Eq. (11), the phase difference in AWPC is weighted by the amplitude. This means that even though the noise spikes last only a small number of cycles, the relative weight to the calculation of the coherence is increased due to the large associated amplitude.

Beyond coherence, it is often useful to extract the instantaneous wavelet phase difference, θ_a(f,t) − θ_b(f,t), and analyze this directly. This has been done in many studies to investigate deterministic phase differences in oscillations from two time-series.77–80 While the phase difference is not itself a measure of time, its direction and magnitude are still valuable and can be used to determine time lags, which lend weight to statements of causality.

In the studies cited above, analysis of the phase difference involves extracting individual pairs of instantaneous phases and examining the change in the phase difference over time. However, in time-series containing many modes, it is often useful to analyze the phase differences in the frequency domain. Doing this reveals the phase relationships present across different timescales of the dynamics.

To define the time-averaged phase difference, we use
$\psi(f) = \arg\!\left( \frac{1}{T} \int_{0}^{T} e^{\,i\left(\theta_a(f,t) - \theta_b(f,t)\right)}\, dt \right). \qquad (14)$
To be able to take the integral in Eq. (14) over the whole duration [0, T] of the signal, it would be necessary to add padding to the signal before time 0 and after time T before computing the WT. If, instead, one just computes the WT within the cone of influence, then the time-interval over which the integral in Eq. (14) is taken is the f-section of the cone of influence—that is, the set of times t over which W(f, t) has been computed; this is a subinterval of [0, T] that depends on f: as f decreases, this subinterval becomes narrower.

Note that while this definition of the time-averaged phase difference correctly identifies the phase differences of the coherent modes, it does not necessarily provide a meaningful value for areas of zero coherence. This is because the result will be the argument of the sum of random phasors. While the amplitude of this sum correctly gives a value of the time-averaged PC at the background level, the argument will be a random angle between 0 and 2π. It is, therefore, important to assess such a measure of the phase difference in conjunction with the actual coherence and only to evaluate its values where the coherence is significant.

In analogy to the difference between PC and AWPC, it is also possible to define an overall phase difference using not the time-averaged phase difference as in (14), but rather the energy-averaged phase difference,
$\psi_{W}(f) = \arg\!\left( \int_{0}^{T} W_{a}(f,t)\, W_{b}^{*}(f,t)\, dt \right).$
In this paper, we use the time-averaged phase difference.
We have defined PC and AWPC as functions of time and frequency since they represent information about the time-localized frequency content of the pair of signals. When we want an overall measure of the coherence at each frequency-value, taken over the whole duration of the signal, there are two approaches that one can take:
  • One is simply to take the time-average of the time-localized PC or AWPC as already defined in Secs. IV B and IV C.

  • The other is to compute PC or AWPC not over small time-windows (t − δ/2, t + δ/2) as in Eqs. (10)–(13), but rather over the whole duration of the signal.

Under the former approach, we have a time-averaged PC given by
$\overline{C^{\phi}_{ab}}(f) = \frac{1}{T} \int_{0}^{T} C^{\phi}_{ab}(f,t)\, dt \qquad (15)$
and a time-averaged AWPC given by
$\overline{C_{ab}}(f) = \frac{1}{T} \int_{0}^{T} C_{ab}(f,t)\, dt. \qquad (16)$
Let us recall here that δ itself depends on f, as described in Sec. IV B. Under the latter approach, we have an over-all-time PC given by
$C^{\phi,\mathrm{total}}_{ab}(f) = \left| \frac{1}{T} \int_{0}^{T} e^{\,i\left(\theta_a(f,t) - \theta_b(f,t)\right)}\, dt \right|$
and an over-all-time AWPC given by
$C^{\mathrm{total}}_{ab}(f) = \frac{\left| S^{W,\mathrm{total}}_{ab}(f) \right|}{\sqrt{S^{W,\mathrm{total}}_{aa}(f)\, S^{W,\mathrm{total}}_{bb}(f)}},$
where $S^{W,\mathrm{total}}_{ab}$ are the over-all-time wavelet cross spectra as defined by
$S^{W,\mathrm{total}}_{ab}(f) = \frac{1}{T} \int_{0}^{T} W_{a}(f,t)\, W_{b}^{*}(f,t)\, dt.$

In all four cases, we have given formulas according to the assumption that the WT is defined over the whole of [0, T]. Once again, this requires that padding has been added to the signal before time 0 and after time T; if, instead, the WT has been computed only over the cone of influence, then the integrals $\int_{0}^{T}$ or averages $\frac{1}{T}\int_{0}^{T}$ taken over the time-interval [0, T] in the above formulas need to be taken instead over the f-section of the cone of influence.

In this paper, we work with the former of the two approaches, namely, Eqs. (15) and (16).
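The lines below sketch both frequency-resolved summaries, together with the time-averaged phase difference of Eq. (14), using the outputs (pc, awpc, Wa, Wb) of the earlier sketches; the equation numbers refer to the reconstructed forms above.

# Minimal sketch: frequency-resolved summaries of coherence and phase difference.
import numpy as np

# Former approach: time-average of the time-localized values, Eqs. (15) and (16)
pc_bar = np.nanmean(pc, axis=1)
awpc_bar = np.nanmean(awpc, axis=1)

# Latter approach: PC and AWPC evaluated over the whole signal duration
dphi = np.exp(1j * (np.angle(Wa) - np.angle(Wb)))
cross = Wa * np.conj(Wb)
pc_total = np.abs(np.mean(dphi, axis=1))
awpc_total = (np.abs(np.mean(cross, axis=1))
              / np.sqrt(np.mean(np.abs(Wa) ** 2, axis=1)
                        * np.mean(np.abs(Wb) ** 2, axis=1)))

# Time-averaged phase difference, Eq. (14): the argument of the same mean phasor
psi_f = np.angle(np.mean(dphi, axis=1))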

Coherence analysis is restricted by the properties of the measured data. Each dataset is likely to contain idiosyncrasies that require specific attention to avoid false representation of the results. By unlocking the temporal dimension with time-resolved analysis methods, one may properly view and assess the type of data under investigation, and once this step is completed, choose and perform the appropriate analysis. The multi-scale nature of the present analysis also enables simultaneous observation of the behavior across a number of frequencies, which in many cases are representative of various independent behaviors in the system. A review of the statistical properties of wavelet coherence is provided in Cohen and Walden.81 However, here, we focus on the practical implementation and application of these methods.

To demonstrate the nuance required when selecting parameters for analysis, we consider two sets of time-series containing two common modes. As before, the modes are generated using the modified Poincaré system and have independent perturbations. The key difference is that the first set of time-series has modes with frequencies ω_0 = 1 and ω_0 = 0.2 that are stationary in time, with ω_m = 0 (the leftmost set of Fig. 5). In contrast, in the second set of time-series, the frequency of the modes varies with ω_m = 0.016π for the high-frequency mode and ω_m = 0.010π for the low-frequency mode (the rightmost set of Fig. 5).

FIG. 5.

Practical aspects to consider when applying phase coherence. Time-series generated from a pair of Poincaré oscillators, as defined by Eq. (5). The time-series were obtained numerically with f_s = 4 Hz and minimal amplitude modulation (ξ_r = 0.005, ξ_θ = 0). The frequencies of these modes are unchanging in time for (a) and (b) and time-varying in (c) and (d). Their corresponding WT (e)–(h) demonstrate these differences. The time-localized phase coherence plots (i) and (j) are generated from the wavelet transforms (e)–(h). (k) and (l) Time-average values are shown as solid black lines and mismatch surrogate thresholds as dashed lines.


Importantly, when considering coherence between simultaneously measured time-series, one may use two sets of apparatus with different sampling frequencies, f_s. For the calculation of coherence, a common f_s must be established. While it is theoretically possible to up-sample the data series with the lower sampling frequency, this is not recommended as it will not recover information about higher-frequency oscillations. Instead, the solution is to downsample the time-series with the higher sampling frequency so that a common f_s is established.

The value of f_s determines the maximum observable frequency, f_max, because we need at least two points in each cycle to capture an oscillation. Consequently, the upper-frequency limit, or Nyquist frequency, is defined as f_N = f_s/2. A low value of f_max can introduce problems when assessing data, as seen in Fig. 5(j). In this case, the system was simulated with f_s = 4 Hz, which means that f_max = 2 Hz. The coherent mode seemingly passes above f_max, illustrating the need for a higher f_s.

The lowest attainable frequency, f_min, is determined by the length of the time-series. In the examples demonstrated in this work, AWPC and PC are evaluated across ten cycles of oscillation at a given frequency. It follows that the length of the time-series restricts f_min: the recording must be at least ten times the period of the lowest frequency of interest. If the interaction is time-varying, then more cycles are needed to account for the modulation present, depending on the frequency of the modulation. The time-varying example shown in Figs. 5(c), 5(d), 5(g), 5(h), 5(j), and 5(l) demonstrates a situation where the simulated mode may be interpreted as being centered upon a greater frequency (0.25 Hz) than it really is. Specifically, the mode should be centered upon 0.2 Hz. Due to the shortness of the recording, the cone of influence contains only the upper half of the modulation cycle, resulting in an apparently higher value. In the non-time-varying frequency case, there is no issue, and the peak coherence is centered around 0.2 Hz.
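As a quick worked check of these limits, assume for illustration a record of length 400 s (the record length is not stated for Fig. 5) sampled at the f_s = 4 Hz used there:

# Arithmetic check of the observable frequency range (illustrative values only).
fs_rec, T_rec = 4.0, 400.0   # sampling frequency (Hz), assumed record length (s)
f_max = fs_rec / 2           # Nyquist frequency: 2 Hz
f_min_pc = 10 / T_rec        # ~10 cycles needed for the coherence window: 0.025 Hz
f_min_eff = 30 / T_rec       # ~30 cycles recommended for effective coherence: 0.075 Hz
print(f_max, f_min_pc, f_min_eff)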

The presence of oscillatory dynamics can be confirmed by first considering the time–frequency representation of the data. In addition, this step will provide information on the frequency range of interest if this is not known beforehand. Limiting the coherence analysis to this range will reduce the burden on computational capacity and save time. The WT will guide the choice of the resolution parameter. However, one must consider that this is always a trade-off, as discussed in Sec. IV C and seen in Figs. 2–4.

The considerations outlined above will help to reduce false conclusions regarding the data. However, to further reduce the chance of falsely representing spurious coherence as significant, a further step must be performed.

Even with the existence of independent fluctuations in both time-series, the interpretation of coherence is not straightforward, as illustrated by Holm.82 This is because even two completely independent noise time-series will contain fluctuations that appear at the same time and frequency, resulting in a non-zero value of coherence.

We must, therefore, determine whether observed coherence is significant. This is necessary both for being able to make physical inferences from the observation of coherence values and for being able to make physical inferences from phase-shift values ψ(f) associated with high coherence. Consideration of the significance of coherence values can be divided into two aspects: First, the coherence values themselves need to be statistically significant in terms of exceeding some critical threshold, i.e., some baseline coherence value. Second, when one computes the time-averaged phase difference ψ(f) as a function of f, where there is significant coherence, one should observe a plateau—i.e., an approximately constant phase difference—over the frequency range in which the phase-coherent oscillations manifest in the time–frequency representation. One should only regard coherence as significant if it is found to satisfy both of these aspects of testing for significance.

In regard to the first aspect, defining the baseline coherence value for significance is not trivial, as it is dependent on the nature of the background dynamics generated by the system under investigation. For example, in the system described above, the independent fluctuations generated from perturbations to the phase and amplitude will result in a different level of background coherence to the case of independent 1/f additive noise. Furthermore, in real systems, the deterministic dynamics cannot be separated from the noise perturbations, which increases the difficulty of defining a coherence baseline.

A more formulaic approach is to use a hypothesis test. Specifically, we would like to test a null hypothesis that two time-series are not coherent at a specific frequency. Such a hypothesis can be tested through the use of surrogate data.83 Surrogate data are numerically modeled time-series that are designed to preserve all features of the measured time-series apart from the feature under investigation. In this method, a set of surrogate time-series is first randomly generated. The same analysis that is performed on the real time-series is then performed on the surrogates, with the end result being the discriminating statistic corresponding to the factor of interest. This results in a distribution of values for these statistics, which can then be used to define a specific confidence interval (i.e., the value of a percentile) for discerning significance and rejection of the null hypothesis.

The optimal percentile to use in the test varies from case to case. This can be due to a number of factors. For example, if the independent noise affecting the two time-series is strong, it will decrease the coherence between the time-series to a greater extent than it decreases the coherence between surrogates, making a lower percentile for the surrogate threshold more appropriate. In this paper, we adopt a 95th percentile threshold for most cases. However, in some cases, due to factors like the one just mentioned, we use a lower threshold.

One of the most common uses of surrogate data is to test for nonlinearity, where it is possible to apply methods, such as amplitude-adjusted Fourier transform surrogates, that preserve only the linear statistical properties of the time-series (see Ref. 84 for a review of surrogate data methods). However, in testing for significant coherence, we must also preserve the effects of nonlinearity in the surrogate data. Otherwise, even if the surrogates preserve the linear statistical properties, such as the amplitude probability distribution and the frequency spectrum, the null hypothesis may still be spuriously rejected due to increased coherence resulting from nonlinearity.

Mismatched surrogates, also known as intersubject surrogates in the context of biomedical data, are one of the simplest ways to preserve potential nonlinearity in the surrogate data. With this method, pairs of real measurements of the same system (such as the human body, measured across different subjects) are separated and then re-paired with the corresponding time-series from an independent measurement (i.e., another subject). This has the advantage of preserving all properties of the time-series apart from the time-specific information. However, coherence is not preserved as the oscillations are no longer ordered in time.
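
As a concrete sketch of this re-pairing, and assuming that the recordings from all subjects are held in two lists xs and ys (one entry per subject), a hypothetical Python helper could read as follows.

import numpy as np

def intersubject_pairs(xs, ys, seed=None):
    # Re-pair each first time-series with the second time-series of a
    # different, randomly chosen subject. Within-signal properties (spectra,
    # distributions, nonlinearities) are preserved, but any genuine
    # time-locked relation between the members of a pair is destroyed.
    rng = np.random.default_rng(seed)
    n = len(xs)
    assert n > 1, "re-pairing needs at least two subjects"
    perm = rng.permutation(n)
    while np.any(perm == np.arange(n)):   # never re-pair a subject with itself
        perm = rng.permutation(n)
    return [(xs[i], ys[perm[i]]) for i in range(n)]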

While mismatched surrogates usually apply only to measured data from real systems, it is still possible to generate time-series approximating mismatched surrogates with the illustrative model defined in Sec. III. In this case, we can simply modify the frequency modulation of the two modes, ω(t) = 2πω_0 + A sin(2πω_m t + ψ), where ψ is a phase offset of the modulation. Each pair of surrogate time-series is then generated using different values of ψ for each mode, uniformly sampled on the interval [0, 2π].
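
A sketch of this surrogate generation, under the simplifying assumption that each mode is represented by the cosine of its integrated instantaneous frequency rather than by the full Poincaré oscillator of Sec. III, could read:

import numpy as np

def modulated_mode(t, omega0, A, omega_m, psi):
    # Instantaneous angular frequency: omega(t) = 2*pi*omega0 + A*sin(2*pi*omega_m*t + psi)
    omega = 2 * np.pi * omega0 + A * np.sin(2 * np.pi * omega_m * t + psi)
    phase = np.cumsum(omega) * (t[1] - t[0])   # integrate to obtain the phase
    return np.cos(phase)

def surrogate_pair(t, omega0, A, omega_m, seed=None):
    # Each surrogate pair uses independent modulation offsets psi ~ U[0, 2*pi),
    # so the frequency modulations of the two modes are no longer aligned in time.
    rng = np.random.default_rng(seed)
    psi1, psi2 = rng.uniform(0.0, 2.0 * np.pi, size=2)
    return (modulated_mode(t, omega0, A, omega_m, psi1),
            modulated_mode(t, omega0, A, omega_m, psi2))

t = np.arange(0.0, 600.0, 0.05)
s1, s2 = surrogate_pair(t, omega0=0.1, A=0.02, omega_m=0.005, seed=0)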

It is also worth noting that surrogate testing is not the only method for determining significance thresholds for coherence values. The method proposed by Sheppard et al.85 provides analytically derived significance thresholds based on higher-order statistics and was shown to perform better than amplitude-adjusted Fourier transform surrogates.

The effect of surrogates on the time-averaged coherence is illustrated in Figs. 5(k) and 5(l), which show the 95th-percentile threshold obtained from 99 mismatched surrogates. These surrogate thresholds give a much clearer indication of which coherence values reflect genuine interactions in the system and which are spurious. One may also choose to illustrate the time-localized effective coherence. This is demonstrated in Fig. 6, with parameters identical to those in Figs. 2(e), 2(f), 2(k), and 2(l). The threshold here was chosen as the 75th percentile of 99 mismatched surrogates. One can now discriminate, in the time-averaged coherence, the coherence due to the modes from that due to the background fluctuations. However, many areas of significant coherence, distributed away from the modes, still remain in the time-localized plot. This illustrates the fact that it is easier for spurious significant coherence to occur in the time–frequency domain, where the testing area is essentially squared.
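
In terms of arrays indexed by frequency and time (an assumed layout, used only for this sketch), the effective coherence shown in Fig. 6 amounts to the following operation in Python:

import numpy as np

def effective_coherence(coh, surr_coh, pct=75):
    # coh      : (n_freq, n_time) time-localized coherence of the original pair
    # surr_coh : (n_surr, n_freq, n_time) coherence of the surrogate pairs
    # The chosen percentile of the surrogate distribution is treated as the
    # zero level; it is subtracted and negative values are set to zero.
    threshold = np.percentile(surr_coh, pct, axis=0)
    return np.clip(coh - threshold, 0.0, None)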

FIG. 6.

(a) Time-localized effective phase coherence generated using the Poincaré oscillator example with amplitude noise and a frequency resolution parameter of f_0 = 2. The 75th percentile of 99 mismatched surrogates was taken as the zero threshold and subtracted from the original time-localized coherence. Resulting negative values were set to zero. (b) The time-average of the raw coherence (solid line) and the surrogate threshold (dashed line). (c) The average phase difference across frequency.

The other effect of surrogates can be seen in the effective coherence of the low-frequency mode, which is much reduced compared to that of the high-frequency mode. This is because spurious coherence between random fluctuations is more likely to be found when the average coherence is calculated over fewer cycles. This effectively reduces the observable frequency range, adding to the restrictions already imposed by the size of the wavelets (parameterized by f_0) and the window size used for the coherence calculation (parameterized by n_cy). Taking these cumulative effects into account, we recommend that effective coherence be assessed only if a minimum of 30 cycles can be observed, giving a lowest observable frequency of 30/T for a recording of duration T.
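
In practice, this rule of thumb reduces to a simple check on the recording length before coherence at a given frequency is interpreted; a trivial helper (not part of any published toolbox) makes the point:

def lowest_observable_frequency(duration_s, min_cycles=30):
    # At least min_cycles full oscillations must fit into a recording of
    # length duration_s before effective coherence at that frequency can be
    # assessed, giving a lower frequency bound of min_cycles / duration_s.
    return min_cycles / duration_s

print(lowest_observable_frequency(1400.0))   # about 0.021 Hz, e.g., for the 1400 s recordings analyzed below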

Now, to illustrate the second aspect of assessing the significance of coherence: in the two frequency bands where Fig. 6(b) shows coherence values exceeding the surrogate threshold, Fig. 6(c) shows the phase difference plateauing at about 0.75π. These plateaus, in conjunction with the statistical significance of the coherence values, suggest that the coherence in these two frequency bands is significant. Consequently, the value 0.75π around which the phase difference plateaus is the amount by which the first time-series leads the second, consistent with the numerically modeled input values.

Therefore, we have seen that the surrogate threshold and the phase difference are invaluable tools when interpreting coherence; this will be demonstrated in Sec. VI via a series of examples.

The heart rate is modulated by several processes, with respiration being an important factor. During inhalation, the heart tends to beat faster, and during exhalation, it tends to slow down. This interaction is known as respiratory sinus arrhythmia.86 Cardio-respiratory interactions are among the most widely studied physiological interactions. Several methods have been employed to investigate them,87,88 including coherence analysis based on the Fourier and wavelet transforms.89,90 Utilizing PC to study cardio-respiratory interactions has also proven valuable,50 for example, in the context of ageing,52 malaria,91 and hypoxia.92

In this example, we evaluate cardio-respiratory interactions based on simultaneously recorded respiratory effort and the electrical activity of the heart. The 1400 s recordings are taken from a healthy 28-year-old male participating in a study of ageing,93 where the sensor/electrode placements are described. A short segment of the respiration time-series is shown in Fig. 7(a) and of the ECG in Fig. 7(b). The instantaneous frequencies of respiration [IRR, Fig. 7(c)] and of the heartbeat [IHR, Fig. 7(d)] are extracted from the WT by ridge extraction.32 Two types of interaction are investigated: (a) between the original respiratory time-series and the IHR and (b) between the two instantaneous rates, IRR and IHR. The PC and AWPC for both cases are shown in Figs. 7(e), 7(f), 7(i), and 7(j). The surrogate threshold was set to the 95th percentile of 140 intersubject surrogates, as used in the original study.93 The time-averaged values of PC and AWPC over the entire 1400 s recordings are shown in Figs. 7(g) and 7(k) for cases (a) and (b), respectively. The corresponding phase differences, as a function of frequency, are shown in Figs. 7(h) and 7(l).
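
The ridge extraction follows Ref. 32; a much cruder, purely illustrative version, which simply follows the frequency of maximum wavelet amplitude within a chosen band, is sketched below (the band limits in the comments are hypothetical and would need to be matched to the oscillation of interest):

import numpy as np

def simple_ridge(W, freqs, fmin, fmax):
    # W     : complex wavelet transform, shape (n_freq, n_time)
    # freqs : frequencies (Hz) corresponding to the rows of W
    # Returns a crude instantaneous frequency: at each time, the frequency of
    # maximum wavelet amplitude within the band [fmin, fmax].
    band = (freqs >= fmin) & (freqs <= fmax)
    idx = np.argmax(np.abs(W[band, :]), axis=0)
    return freqs[band][idx]

# e.g. irr = simple_ridge(W_resp, freqs, 0.145, 0.6)   # instantaneous respiration rate
#      ihr = simple_ridge(W_ecg,  freqs, 0.6,   2.0)   # instantaneous heart rate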

FIG. 7.

(a) and (b) Time-series of respiration and ECG from a 28-year-old healthy man, shown for 50 out of the 1400 s of recordings. (c) The WT of the time-series in (a). The solid line is the extracted ridge, giving the instantaneous respiration rate (IRR). (d) The WT of the time-series in (b). The solid line is the extracted ridge, giving the instantaneous heart rate (IHR). (e) The PC between the respiration and IHR. (f) The AWPC between the respiration and IHR. (g) The time-averaged PC (solid black line) and AWPC (solid orange line), with the corresponding surrogate thresholds (dashed lines). (h) The time-averaged phase difference at each frequency. A positive value means that the time-series in (a) is leading. (i) The PC between IHR and IRR. (j) The AWPC between IHR and IRR. (k) The time-averaged PC (solid black line) and AWPC (solid orange line), with the corresponding surrogate thresholds (dashed lines) for IHR and IRR. (l) The time-averaged phase difference at each frequency. A positive value means that the IRR is leading. The time-averaged coherence and the phase difference in (g), (h), (k), and (l) are calculated using the whole time-series (1400 s).

It is clear that both PC and AWPC are much higher for the respiration–IHR case than for the IRR–IHR case, and that the highest values of coherence occur at the frequency of respiration (around 0.2–0.3 Hz), consistent with earlier studies. This indicates that, in the resting state, the heart rate is strongly modulated by the amplitude of respiration and to a much lesser extent by its frequency. In Fig. 7(g), one can see that the PC and AWPC are similar. The phase difference at the respiration frequency is around 0 rad.

Coherence analysis is often applied to find common oscillatory behavior between brain signals from different locations. This can elucidate the functional connectivity of the brain, which is known to change in various conditions.94,95 Spontaneous activity in the brain can be measured noninvasively, at relatively low cost and with minimal discomfort to the subjects, using EEG or fNIRS. However, both methods are susceptible to movement artifacts.96 Several approaches exist to remove these artifacts from the data, although they often compromise the quality of the data and may additionally remove information of interest.97,98 As seen in Sec. IV, phase-based approaches may be more resilient against movement artifacts and noise and, as such, can circumvent some of the more draconian preprocessing requirements. In this section, we investigate two examples of movement artifacts, one using EEG and the other using fNIRS.

1. Autism spectrum disorder

Non-invasive brain activity measurements in children are fraught with artifacts due to the difficulty of keeping younger subjects still for extended periods. Analysis of signals from younger cohorts, therefore, necessitates methods that are robust to movement artifacts. In addition, when considering the presence of interactions between time-series, it can be important to assess how the nature of these interactions changes over time. Time-localized methods can reveal temporal dependencies in this mutual behavior. In a wide array of neurological conditions, it is not only the intensity but also the duration of interaction between brain regions that is altered.99,100 By observing the time-localized coherence, one may deduce the regularity and strength of time-varying interactions.

We consider two simultaneously recorded EEG time-series from a resting-state, eyes-open measurement. These data were measured in a cohort of male children aged 3–5 years with a diagnosis of autism spectrum disorder (ASD). The time-series were captured using a Nicolet cEEG instrument (Viasys Healthcare, USA) at a sampling rate of 256 Hz. A 20-min recording period was used to collect the data, and a 180 s interval was analyzed, with the central 60 s illustrated in Fig. 8 as it contains a clear artifact. Measurement sites corresponding to F3 and F4 in the international 10–20 system were chosen, as the initial objective of the investigation was to assess reports of reduced frontal connectivity in children with ASD.101–104

FIG. 8.

Movement artifact represented as a downward spike in the time-series recorded simultaneously at two probes: F3 (a) and F4 (b). Their corresponding WTs [(c) and (d), respectively] indicate the amplitude perturbation at around 93 s. The effective phase coherence (e) is resilient against this perturbation, while the amplitude-weighted phase coherence (f) exhibits spurious coherence. In both cases, the surrogate threshold is taken as the 75th percentile of 156 intersubject surrogates. (g) The time-averaged PC (solid black line) and AWPC (solid orange line), with the corresponding surrogate thresholds (dashed lines). (h) The time-averaged phase difference across frequency.

The effect of the movement artifact is clearly seen in both the time domain, Figs. 8(a) and 8(b), and the WT, Figs. 8(c) and 8(d), where at the instant of the movement (around 93 s) all frequencies are present in the spectrum. The effect on the coherence, seen in Figs. 8(e)–8(g), is much stronger for the AWPC than for the PC. A threshold of the 75th percentile of 156 intersubject surrogates was used, leaving only the significant coherence. The time-localized coherence [Fig. 8(e)] shows that the magnitude and presence of the interactions vary over time. Both the time-localized, Figs. 8(e) and 8(f), and the time-averaged, Fig. 8(g), coherence are elevated for the AWPC compared to the PC.

2. Chorea in Huntington’s disease

Now, we consider two time-series recorded from the temporal brain areas, in a study that investigated coherence between neuronal and vascular function.105 These locations often have artifacts due to movement of the jaw. The data are from a participant with a positive genetic test for Huntington’s disease (HD), who has not yet developed the movement disorder known as chorea. Still, as chorea is a hallmark of the disease, HD research would benefit from methods that are resistant to movement artifacts.

We compared the PC and AWPC of two resting-state oxygenated hemoglobin (oxyHb) time-series measured using an fNIRS device (NIRScout, NIRx, Germany) with a sampling frequency of 31.25 Hz over 20 min (for further details on the measurements, see Ref. 93). The measurement sites correspond to T7 and T8 in the international 10–20 system (left and right temporal locations). The resolution parameter f_0, Eq. (8), was set to 1 so as to minimize the spread of any artifact.

The results are shown in Fig. 9. The time-series contain two movement artifacts, which appear as high-amplitude cones in the WT and have the greatest impact at low frequencies. The artifacts have a very significant impact on the AWPC plot and affect an even wider area of time and frequency than is visible in the WT plots, as a consequence of the moving window used to calculate wavelet coherence. In the plot of PC, the effect of the artifacts is not obvious. This illustrates how any simultaneous increase in amplitude, even if not phase coherent, results in AWPC appearing significant over large areas of the time–frequency domain. This can also be seen in the time-averaged coherence plot, where the AWPC (orange line) is much higher than the PC (black line). The two dashed lines show the 95th percentile of the 136 intersubject surrogates.
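
The difference in sensitivity can be reproduced in a toy calculation. Assuming definitions along the lines used in this paper, with PC the modulus of the time-averaged unit vector of the phase difference and AWPC the amplitude-weighted (wavelet) coherence, a simultaneous high-amplitude burst with unrelated phases inflates AWPC by roughly an order of magnitude while leaving PC near its noise level:

import numpy as np

def phase_coherence(W1, W2):
    # Only the phase difference enters; the amplitudes are discarded.
    dphi = np.angle(W1) - np.angle(W2)
    return np.abs(np.mean(np.exp(1j * dphi)))

def amplitude_weighted_phase_coherence(W1, W2):
    # Each time point is weighted by the product of the wavelet amplitudes.
    num = np.abs(np.mean(W1 * np.conj(W2)))
    den = np.sqrt(np.mean(np.abs(W1) ** 2) * np.mean(np.abs(W2) ** 2))
    return num / den

rng = np.random.default_rng(0)
n = 4000
W1 = np.exp(1j * rng.uniform(0, 2 * np.pi, n))   # unrelated phases at one frequency
W2 = np.exp(1j * rng.uniform(0, 2 * np.pi, n))
W1[2000:2050] *= 50.0                            # simultaneous amplitude burst,
W2[2000:2050] *= 50.0                            # e.g., a shared movement artifact
print(phase_coherence(W1, W2))                     # stays near the noise level, about 1/sqrt(n)
print(amplitude_weighted_phase_coherence(W1, W2))  # inflated, typically around 0.1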

FIG. 9.

(a) and (b) Two fNIRS time-series measured from a participant with Huntington’s disease. The locations of two artifacts are marked on the time-series using red triangles. (c) The WT of the time-series in (a). (d) The WT of the time-series in (b). (e) The PC of the two time-series. (f) The AWPC plots of the two time-series. (g) The time-averaged PC (solid black line) and AWPC (solid orange line), with the corresponding surrogate thresholds (dashed lines). (h) The time-averaged phase difference at each frequency. A positive value means that the time-series in (a) is leading.

This example illustrates that PC is relatively resistant to artifacts, which is beneficial when analyzing time-series from various non-invasive measurement techniques.

Time–frequency and coherence analysis can provide valuable information about the dynamics of a system. In addition, the phase difference between oscillations can give information about the direction of influence. We consider the movement of electrons on the surface of liquid helium at very low temperatures, as discussed in Siddiq et al.,31 where the helium is a superfluid. Since such a system can be used to construct the qubits needed for quantum computers, a better understanding of its dynamics is important.

In the experiments, the electrons were just above the liquid helium, trapped between the helium and a vacuum. They were in a perpendicular magnetic field and subjected to microwave radiation and varying pressing voltage. Current oscillations were recorded from five electrodes for 60 s at 100 kHz. The full experimental setup is described in Ref. 31. We chose an example with low electron density and 4.18 V pressing voltage. Currents measured from electrodes E4 and C in the time-interval 30–31.4 s were selected for analysis, and high coherence was obtained as in the original paper.31 

Figure 10 shows the PC and AWPC between the current oscillations at the two electrodes. The current signals were first downsampled to 20 kHz, as in this example we focus on oscillations around 0.5 kHz. The resolution parameter was set to 3, in line with the original paper.31 A total of 100 iterated amplitude-adjusted Fourier transform (IAAFT) surrogates were used to calculate the surrogate thresholds.84
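
For completeness, a compact sketch of the standard IAAFT construction (as reviewed in Ref. 84), with the downsampling step indicated as a comment, is given below; all variable names are illustrative.

import numpy as np
from scipy.signal import decimate

def iaaft_surrogate(x, n_iter=100, seed=None):
    # Iterated amplitude-adjusted Fourier transform surrogate: approximately
    # preserves both the amplitude distribution and the power spectrum of x,
    # while randomizing the Fourier phases.
    rng = np.random.default_rng(seed)
    sorted_x = np.sort(x)
    target_amp = np.abs(np.fft.rfft(x))
    s = rng.permutation(x)                        # random shuffle as the starting point
    for _ in range(n_iter):
        # impose the target Fourier amplitudes, keeping the current phases
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
        # impose the target amplitude distribution by rank ordering
        s = sorted_x[np.argsort(np.argsort(s))]
    return s

# x20k = decimate(x100k, 5)                       # 100 kHz -> 20 kHz, with anti-aliasing
# surrogates = [iaaft_surrogate(x20k, seed=k) for k in range(100)]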

FIG. 10.

(a) and (b) Time-series of current oscillations caused by the movement of electrons on the surface of liquid helium. The two electrodes are labeled E4 and C. (c) and (d) The WT of the time-series in (a) and (b), respectively, with the extracted ridge shown by the solid black line. (e) The PC of the two time-series. (f) The AWPC of the two time-series. (g) The time-averaged PC (solid black line) and AWPC (solid orange line), with the corresponding surrogate thresholds (dashed lines). (h) The time-averaged phase difference at each frequency. A positive value means that the time-series in (a) is leading. (i) The WT of the ridge time-series plotted in (c). (j) The WT of the ridge time-series plotted in (d).

Both the PC and AWPC methods pick up a time-varying coherence following the dominant mode in the WT plots, which is indicative of a non-autonomous influence on the system. The time-averaged PC (black line) and AWPC (orange line) are similar, with the AWPC having a slightly higher value at the higher frequencies, which could indicate some amplitude covariance. The surrogate thresholds are very similar for both methods. The time-averaged phase difference is positive, meaning that the oscillation at E4 precedes that at C.

The existence of coherence indicates that the electrons are moving, and the phase difference suggests that they are moving from the E4 electrode toward the C electrode. This is consistent with the microwave radiation being applied closer to E4. Furthermore, by studying the time–frequency representations, we see a clear mode with a time-varying frequency. Using ridge extraction,32 which essentially tracks the maximum amplitude within a frequency range, we can extract a time-series of the instantaneous frequency. The WT of this time-series shows a clear amplitude peak at around 5.2 Hz, indicating modulation of the electron movement at this frequency, which was shown to be caused by slow gravity waves on the liquid helium.31 It is important to note that the WT of the original current data also shows a peak at around 5.2 Hz. However, this peak is weak compared to the rest of the spectrum, and in particular compared with the dominant oscillatory component, making it difficult to observe and identify directly from the frequency spectrum. This illustrates that time-localized, time–frequency methods can uncover a great deal of physically meaningful information.
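
The chain of steps, extracting the instantaneous frequency of the dominant mode and then examining the spectrum of that instantaneous frequency, can be mimicked on synthetic data. The sketch below uses the analytic signal in place of wavelet ridge extraction, and all numerical values are illustrative stand-ins rather than the experimental ones:

import numpy as np
from scipy.signal import hilbert, periodogram

fs = 20_000.0                                         # sampling frequency after downsampling
t = np.arange(0, 4.0, 1 / fs)
f_inst = 500.0 + 20.0 * np.sin(2 * np.pi * 5.2 * t)   # carrier slowly modulated at 5.2 Hz
x = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)

# Instantaneous frequency of the dominant mode (here via the analytic signal;
# the analysis in the paper uses wavelet ridge extraction instead).
phase = np.unwrap(np.angle(hilbert(x)))
f_ridge = (np.gradient(phase) * fs / (2 * np.pi))[1000:-1000]   # trim edge effects

# The spectrum of the extracted instantaneous frequency exposes the slow
# modulation, which is hard to see directly in the spectrum of x itself.
f, Pxx = periodogram(f_ridge - f_ridge.mean(), fs=fs)
print(f[np.argmax(Pxx[1:]) + 1])                      # close to 5.2 Hz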

The study of coherence has its foundations in physics, where methods were first developed to measure the coherence between the phases of waves. It has since been extended to coherence between the phases of more general oscillatory processes occurring in a wide variety of scientific disciplines, for which one of the most fundamental issues is the quantification of such coherence from measured data. Accordingly, it is a subject treated in particular by harmonic analysis in mathematics and by signal-processing theory. Here, we have approached the same question from a physics perspective, but with greater generality than the settings that initiated the study of coherence, namely, from the perspective of multi-scale, time-dependent oscillatory dynamics.

We have seen that for time-series data recorded from systems involving interacting oscillations, key information about the interactions is contained in the time evolution of the phases of the oscillations. Moreover, we have seen that for the analysis of systems involving oscillations with temporally modulated quantitative characteristics, such as frequency and amplitude, time-series analysis methods that are fundamentally designed for time-series with stationary statistics are inappropriate. For example, the measure of coherence of phases intended to be revealed by Fourier coherence will have little meaning for systems with frequency-modulated oscillations. Rather, tools designed to extract time-evolving, time-localized information about systems exhibiting time-dependent far-from-equilibrium dynamics are needed. In particular, phase information needs to be extracted in a suitably time-localized manner.

Such time-localization inherently needs to be understood relative to the timescale of the dynamical process under investigation, and therefore, for multi-scale time-series involving oscillations of a range of frequencies, this time-localization needs to be adaptive to the range of timescales involved. This has been illustrated in Fig. 1.

In the setting of time–frequency analysis, where the Heisenberg uncertainty principle requires a trade-off between precision in measurements of frequency and precision in location in time, this multi-scale adaptivity corresponds to a logarithmic frequency resolution. This is achieved by the continuous wavelet transform, where the scale variable is taken as inversely proportional to the frequency under investigation. Accordingly, we have seen that the wavelet transform is the appropriate tool for extracting phase information from multi-scale nonstationary time-series, and in particular, WT-based coherence analysis is the appropriate approach to investigating coherence of phases manifesting in such time-series.

In this paper, we have provided an introduction to wavelet-based coherence analysis and evaluated several related issues—some already established and others that had not previously been addressed.

Existing discussions of different approaches to quantifying coherence, and more generally of different approaches to time–frequency analysis, have mainly treated the different approaches as if on an essentially equal footing. Practical choices, such as the use of the WT over the STFT, or of PC over AWPC, are often treated merely as a matter of quantitative optimization, without theoretically reasoned or experimentally explored consideration of the qualitative impact of such choices on the resulting analysis.

To address this issue, we have provided a systematic explanation of the practicalities and pitfalls of how to carry out wavelet coherence analysis in practice. In particular, we have provided a detailed review of the methodology for reliably testing for and detecting significant phase coherence from measured data.

Since the wavelet transform provides extractable phases and amplitudes, one can calculate38,46 a measure of coherence independent of changes in the amplitude, namely, PC, as well as a measure of coherence that is weighted in time by amplitude, namely, AWPC. Prior to this work, an in-depth comparison between AWPC and PC had not been performed. Perhaps counterintuitively, we found a consistent difference in the performance of the two definitions of coherence when applied to noisy time-series. PC is, in general, more robust to noise and particularly to time-localized perturbations, meaning that it is affected to a much lesser degree by phenomena, such as movement artifacts.

Along with the definition of PC, one can also analyze the phase differences between the oscillations present in the pair of time-series under investigation. This is needed, first, as one of the two aspects of determining the significance of coherence, alongside the statistical significance of the coherence values themselves: the time-averaged phase difference as a function of frequency needs to have a plateau in the frequency band where coherent oscillations manifest in the time–frequency representation. Second, where there is significant coherence, the phase difference can provide indications of which oscillation is leading.106 However, it is important to note that causality (i.e., which process is the origin of the common oscillations) cannot always be inferred from the phase shift. This can be because the phase shift is wrapped on the interval [−π, π] or because of the existence of higher-degree interactions, such as node triples.107

This investigation of coherence has revealed the wealth of information provided by the phase. Part of the utility of phase over amplitude comes from the fact that phase dynamics is constrained by the frequency interval within which an oscillation lies. For example, each wavelet has a defined frequency response, which limits the rate at which the phase can change. However, in contrast to the phase, the amplitude is not bounded to frequency in such a manner, making the separation of amplitude dynamics from noise harder to satisfactorily achieve. This is analogous to the advantages of frequency modulation over amplitude modulation in radio communications.108 

The practical implications of the main points highlighted by our work are effectively illustrated in our analyses of real data in Sec. VI. In the examples shown in Sec. VI B, the presence of movement artifacts has a significant effect on the results of any analysis. Such artifacts usually need to be removed prior to analysis, which often requires subjective identification. The techniques used to remove identified artifacts may also introduce artificial manipulations in the data.109 The resistance of amplitude-independent phase-based methods to these sorts of artifacts allows for the analysis of noisy data without the need for preprocessing or constraints on the measurement setup, leading to better research into conditions, such as HD and ASD.

In the example of electron dynamics on the surface of liquid helium, phase coherence analysis identified the existence of gravity waves. Without the time-localized approach, these waves might not have been detected. This illustrates how coherence methods and time–frequency analysis can be used to identify specific properties of a physical system. Thus, we see the importance of using a time-localized approach instead of an asymptotic, i.e., infinite-time, non-time-evolving, approach. The wider adoption of explicitly finite-time and time-localized methods should, therefore, lead to similar discoveries in systems characterized by non-autonomous dynamics involving nonstationary amplitudes and frequencies.

We review the current understanding of coherence, a universal phenomenon that can appear between oscillations or waves, irrespective of their origin. We start from its definition in physics and review numerical methods for analyzing coherence from modeled or real data. We focus particularly on coherence between non-autonomous oscillatory processes whose oscillations have deterministically time-varying frequencies. For this reason, we apply wavelet-based, time-resolved coherence analysis. We discuss differences between amplitude-weighted phase coherence and phase coherence. In the former case, time-resolved information includes both the amplitude and the phase; in the latter case, only the information about phase is considered. We illustrate that the amplitudes are more readily perturbed than phases by noise or movement artifacts, and consequently, that phase coherence provides more robust information about interacting oscillatory systems. We illustrate this in relation to several real-world examples.

An alternative to Fig. 1 of the main paper is presented in the supplementary material. It shows the Fourier transform on a logarithmic frequency scale. By comparing Figs. 1(c), 1(d), 1(g), and 1(h) in the main text with supplementary Figs. 1(c), 1(d), 1(g), and 1(h), it is clear that the logarithmic scale is disadvantageous to the short-time Fourier transform, which is obtained with linear frequency resolution.

We are grateful to Peter McClintock and Kostya Nasyedkin for useful comments on the paper. The work of P.T.C. and A.S. is funded through the Sony Research Award Program. A.S. is also funded by the Engineering and Physical Sciences Research Council, UK (Grant No. EP/X004597/1). J.B. is supported by the Sir John Fisher Foundation. S.J.K.B. is jointly supported by the EPSRC, UK and the MyMind GmbH—Brain Hero, Vienna, Austria (Grant No. EP/T518037/1). The High End Computing facility at Lancaster University was used for some of the computations. The experimental part of the ASD study was supported by the Action Medical Research (UK) MASDA Project (GN1963) and partly by the Slovenian Research Agency (Program No. P20232). The HD and ageing studies were funded by the Engineering and Physical Sciences Research Council, UK (Grant No. EP/M006298/1) and the Slovenian Research Agency (ARRS) (Program No. P20232). The experimental data used for the example on electron dynamics on the surface of liquid helium were obtained by Kostyantyn Nasyedkin in the Quantum Condensed Phases Research Team, RIKEN CEMS, Japan, headed by Kimitoshi Kono. The development of the MODA toolbox used for analyses has been supported by the Engineering and Physical Sciences Research Council, UK (Grant Nos. EP/100999X1 and EP/M006298/1), the EU projects BRACCIA (517133) and COSMOS (642563), the Action Medical Research (UK) MASDA Project (GN1963), and the Slovene Research Agency (Program No. P20232).

The authors have no conflicts to disclose.

The authors have been listed alphabetically due to their approximately equal contributions.

S. J. K. Barnes: Conceptualization (supporting); Data curation (equal); Formal analysis (equal); Investigation (equal); Software (equal); Validation (equal); Visualization (equal); Writing – review & editing (equal). J. Bjerkan: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Software (equal); Validation (equal); Visualization (equal); Writing – review & editing (equal). P. T. Clemson: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Investigation (equal); Methodology (equal); Software (equal); Supervision (equal); Validation (equal); Visualization (equal); Writing – original draft (equal); Writing – review & editing (equal). J. Newman: Conceptualization (supporting); Data curation (supporting); Formal analysis (supporting); Funding acquisition (supporting); Investigation (equal); Methodology (supporting); Validation (supporting); Visualization (supporting); Writing – review & editing (equal). A. Stefanovska: Conceptualization (equal); Data curation (equal); Formal analysis (equal); Funding acquisition (equal); Investigation (equal); Methodology (equal); Project administration (equal); Resources (equal); Supervision (equal); Validation (equal); Visualization (equal); Writing – review & editing (equal).

The data used in this work are available in the Publications and Research (Pure) portal on Lancaster University’s research information management system. The data used in the cardio-respiratory example can be found at doi.org/10.17635/lancaster/researchdata/630. The data used in the ASD example can be found at doi.org/10.17635/lancaster/researchdata/604. The data used in the HD example can be found at doi.org/10.17635/lancaster/researchdata/631. The data used for the electron dynamics on the surface of a liquid helium example can be found at doi.org/10.17635/lancaster/researchdata/655.

1. H. Crewe, C. Huygens, T. Young, A. J. Fresnel, and F. Arago, The Wave Theory of Light; Memoirs of Huygens, Young and Fresnel (Cincinnati American Book Company, New York, 1900).
2. A. Michelson and E. W. Morley, Am. J. Sci. s3–34, 333 (1887).
4. P. G. Merli, G. F. Missiroli, and G. Pozzi, Am. J. Phys. 44, 306 (1976).
5. A. Ananthaswamy, Through Two Doors at Once: The Elegant Experiment That Captures the Enigma of Our Quantum Reality (Penguin Publishing Group, 2018).
7. B. P. Abbot et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 116, 061102 (2016).
8. P. Ilzhöfer, M. Sohmen, G. Durastante, C. Politi, A. Trautmann, G. Natale, G. Morpurgo, T. Giamarchi, L. Chomaz, M. J. Mark, and F. Ferlaino, Nat. Phys. 17, 356 (2021).
9. D. Huang and Y. Yang, Phys. Rev. B 104, L081115 (2021).
10. S. Ye, C. Zou, H. Yan, Y. Ji, M. Xu, Z. Dong, Y. Chen, X. Zhou, and Y. Wang, Nat. Phys. 19, 1301 (2023).
11. A. E. Miroshnichenko, S. Flach, and Y. S. Kivshar, Rev. Mod. Phys. 82, 2257 (2010).
12. J. R. Kim, C. W. Lin, and S. Y. Lin, Remote Sens. 13, 2240 (2021).
13. A. Delorme and S. Makeig, J. Neurosci. Methods 134, 9 (2004).
14. M. Jensen, R. Hyder, B. U. Westner, A. Højlund, and Y. Shtyrov, Neuropsychologia 188, 108602 (2023).
15. A. Sauer, T. Grent-’t-Jong, M. Zeev-Wolf, W. Singer, A. Goldstein, and P. J. Uhlhaas, Schizophr. Res. 261, 60 (2023).
16. T. Gänsler, M. Hansson, C.-J. Ivarsson, and G. Salomonsson, IEEE Trans. Commun. 44, 1421 (1996).
17. T. Hada, D. Koga, and E. Yamamoto, Space Sci. Rev. 107, 463 (2003).
18. P. C. Liu, in Wavelets in Geophysics, edited by P. Kumar and E. Foufoula-Georgiou (Elsevier, 1994), Vol. 4, pp. 151–166.
19. J. P. Eckmann and D. Ruelle, Rev. Mod. Phys. 57, 617 (1983).
20. P. T. Clemson and A. Stefanovska, Phys. Rep. 542, 297 (2014).
21. J. Rowland Adams, J. Newman, and A. Stefanovska, Eur. Phys. J. Spec. Top. 232, 3435–3457 (2023).
23. J. Ville, Cables Transm. 2A, 61 (1948).
25. J. Morlet, in Issues on Acoustic Signal/Image Processing and Recognition, Vol. I, NATO ASI series, edited by C. H. Chen (Springer, Berlin, 1983).
26. G. Kaiser, A Friendly Guide to Wavelets (Birkhäuser, Boston, MA, 1994).
27. B. Boashash, Time Frequency Signal Analysis and Processing: A Comprehensive Reference (Academic Press, Boston, MA, 2016).
29. S. L. Bressler, R. Coppola, and R. Nakamura, Nature 366, 153 (1993).
30. M. Bračič and A. Stefanovska, Bull. Math. Biol. 60, 919 (1998).
31. H. Siddiq, K. Nasyedkin, K. Kono, D. E. Zmeev, P. V. E. McClintock, Y. A. Pashkin, and A. Stefanovska, Phys. Rev. B 107, 104501 (2023).
32. D. Iatsenko, P. V. E. McClintock, and A. Stefanovska, Signal Process. 125, 290 (2016).
33. C. Schäfer, M. G. Rosenblum, H. H. Abel, and J. Kurths, Phys. Rev. E 60, 857 (1999).
34. P. Kvandal, L. Sheppard, S. A. Landsverk, A. Stefanovska, and K. A. Kirkebøen, J. Clin. Monit. Comput. 27, 375 (2013).
35. T. Stankovski, T. Pereira, P. V. E. McClintock, and A. Stefanovska, Rev. Mod. Phys. 89, 045001 (2017).
36. M. Rosenblum and A. Pikovsky, Front. Netw. Physiol. 3, 1298228 (2023).
37. P. T. Clemson, Y. F. Suprunenko, and A. Stefanovska, Phys. Rev. E 89, 032904 (2014).
38. J.-P. Lachaux, E. Rodriguez, M. Le Van Quyen, A. Lutz, J. Martinerie, and F. J. Varela, Int. J. Bifurcat. Chaos 10, 2429 (2000).
39. F. Mormann, K. Lehnertz, P. David, and C. E. Elger, Physica D 144, 358 (2000).
40. B. J. Roach and D. H. Mathalon, Schizophr. Bull. 34, 907 (2008).
41.
42. C. M. Sweeney-Reed, P. M. Riddell, J. A. Ellis, J. E. Freeman, and S. J. Nasuto, PLoS One 7, e48357 (2012).
43. E. Angelopoulos, E. Koutsoukos, A. Maillis, G. N. Papadimitriou, and C. Stefanis, Schizophr. Res. 153, 109 (2014).
44. L. Bu, C. Huo, G. Xu, Y. Liu, Z. Li, Y. Fan, and J. Li, Front. Physiol. 9, 669 (2018).
45. L. Xu, B. Wang, G. Xu, W. Wang, Z. Liu, and Z. Li, Neurosci. Lett. 640, 21 (2017).
46. A. Bandrivskyy, A. Bernjak, P. V. E. McClintock, and A. Stefanovska, Cardiovasc. Eng. 4, 89 (2004).
47. L. W. Sheppard, V. Vuksanović, P. V. E. McClintock, and A. Stefanovska, Phys. Med. Biol. 56, 3583 (2011).
48. V. Ticcinelli, T. Stankovski, D. Iatsenko, A. Bernjak, A. E. Bradbury, A. R. Gallagher, P. B. M. Clarkson, P. V. E. McClintock, and A. Stefanovska, Front. Comput. Physiol. 8, 749 (2017).
49. S. Smirni, A. D. McNeilly, M. P. MacDonald, R. J. McCrimmon, and F. Khan, Sci. Rep. 9, 186 (2019).
50. I. V. Tikhonova, A. A. Grinevich, and A. V. Tankanag, Biomed. Signal Process. Control 71, 103091 (2022).
51. I. V. Tikhonova, A. V. Tankanag, I. E. Guseva, and A. A. Grinevich, Biomed. Signal Process. Control 79, 104222 (2023).
52. D. Iatsenko, A. Bernjak, T. Stankovski, Y. Shiogai, P. J. Owen-Lynch, P. B. Clarkson, A. Stefanovska, and P. V. E. McClintock, Phil. Trans. R. Soc. Lond. A 371, 20110622 (2013).
53. M. Gruszecki, G. Lancaster, A. Stefanovska, J. P. Neary, R. T. Dech, W. Guminski, A. F. Frydrychowski, J. Kot, and P. J. Winklewski, Sci. Rep. 8, 3057 (2018).
54. A. Grinsted, J. C. Moore, and S. Jevrejeva, Nonlin. Process Geophys. 11, 561 (2004).
55. D. S. Bloomfield, R. T. J. McAteer, B. W. Lites, P. G. Judge, M. Mathioudakis, and F. P. Keenan, Astrophys. J. 617, 623 (2004).
56. R. Donner and M. Thiel, Astron. Astrophys. 475, L33 (2007).
57. A. Volvach, G. Kurbasova, and L. Volvach, Heliyon 10, e23237 (2024).
58. L. Aguiar-Conraria, N. Azevedo, and M. J. Soares, Physica A 387, 2863 (2008).
59. L. Aguiar-Conraria and M. J. Soares, Empir. Econ. 40, 645 (2011).
60. L. Vacha and J. Barunik, Energy Econ. 34, 241 (2012).
61. C. Aloui and B. Hkiri, Econ. Model. 36, 421 (2014).
62. J. C. Reboredo, M. A. Rivera-Castro, and A. Ugolini, Energy Econ. 61, 241 (2017).
63. D. Abboud, S. Baudin, J. Antoni, D. Rémond, M. Eltabach, and O. Sauvage, Mech. Syst. Signal Process. 75, 280 (2016).
64. M. Morris, S. Yamazaki, and A. Stefanovska, J. Biol. Rhythms 37, 310 (2022).
65. A. Monterde, A. Calleja-López, M. Aguilera, X. E. Barandiaran, and J. Postill, Inf. Commun. Soc. 18, 930 (2015).
67. A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization—A Universal Concept in Nonlinear Sciences (Cambridge University Press, Cambridge, 2001).
68. D. Iatsenko, P. V. E. McClintock, and A. Stefanovska, Digit. Signal Process. 42, 1 (2015).
70. L. Keselbrener and S. Akselrod, IEEE Trans. Biomed. Eng. 43, 789 (1996).
71. A. S. Karavaev, A. S. Borovik, E. I. Borovkova, E. A. Orlova, M. A. Simonyan, V. I. Ponomarenko, V. V. Skazkina, V. I. Gridnev, B. P. Bezruchko, M. D. Prokhorov, and A. R. Kiselev, Biophys. J. 120, 2657 (2021).
72. I. Mizeva, C. D. Maria, P. Frick, S. Podtaev, and J. Allen, J. Biomed. Opt. 20, 037007 (2015).
73. L. W. Sheppard, J. R. Bell, R. Harrington, and D. C. Reuman, Nat. Clim. Change 6, 610 (2016).
74. J. Rowland-Adams and A. Stefanovska, Front. Physiol. 11, 613183 (2021).
75. D. Iatsenko, G. Lancaster, S. McCormack, J. Newman, G. V. Policharla, V. Ticcinelli, T. Stankovski, and A. Stefanovska, Lancaster University; see https://doi.org/10.5281/zenodo.3470856 (2019).
76. J. Newman, G. Lancaster, and A. Stefanovska, Lancaster University; see https://doi.org/10.5281/zenodo.3470856 (2018).
77. R. W. Thatcher, D. M. North, and C. J. Biver, NeuroImage 42, 1639 (2008).
78. D. J. DeShazer, R. Breban, E. Ott, and R. Roy, Phys. Rev. Lett. 87, 044101 (2001).
79. O. V. Sosnovtseva, A. N. Pavlov, E. Mosekilde, K. Yip, N. Holstein-Rathlou, and D. J. Marsh, Am. J. Physiol. Renal Physiol. 293, F1545–F1555 (2007).
80. G. Carl, D. Doktor, D. Koslowsky, and I. Kühn, Stoch. Environ. Res. Risk Assess. 27, 1221–1230 (2013).
81. E. A. K. Cohen and A. T. Walden, IEEE Trans. Signal Process. 58, 2964 (2010).
82. S. Holm, Astrophys. Space Sci. 357, 106 (2015).
83. T. Schreiber and A. Schmitz, Physica D 142, 346 (2000).
84. G. Lancaster, D. Iatsenko, A. Pidde, V. Ticcinelli, and A. Stefanovska, Phys. Rep. 748, 1 (2018).
85. L. W. Sheppard, A. Stefanovska, and P. V. E. McClintock, Phys. Rev. E 85, 046205 (2012).
86.
87. M. Bračič Lotrič and A. Stefanovska, Physica A 283, 451 (2000).
88. Y. Shiogai, A. Stefanovska, and P. V. E. McClintock, Phys. Rep. 488, 51 (2010).
89. G. Stanley, D. Verotta, N. Craft, R. A. Siegel, and J. B. Schwartz, Am. J. Physiol. Heart Circ. Physiol. 270, H1833 (1996).
90. K. Keissar, L. R. Davrath, and S. Akselrod, Phil. Trans. R. Soc. A 367, 1393 (2009).
91. Y. A. Abdulhameed, A. G. Habib, P. V. E. McClintock, and A. Stefanovska, “Phase coherence between cardiovascular oscillations in malaria: The basis for a possible diagnostic test,” in Physics of Biological Oscillators: New Insights into Non-Equilibrium and Non-Autonomous Systems, edited by A. Stefanovska and P. V. E. McClintock (Springer International Publishing, Cham, 2021), pp. 401–419.
92. G. Lancaster, T. Debevec, G. P. Millet, M. Poussel, S. J. Willis, M. Mramor, K. Goričar, D. Osredkar, V. Dolžan, and A. Stefanovska, J. Physiol. 598, 2001–2019 (2020).
93. J. Bjerkan, G. Lancaster, B. Meglič, J. Kobal, T. J. Crawford, P. V. E. McClintock, and A. Stefanovska, Brain Res. Bull. 201, 110704 (2023).
94. P. Van Mierlo, M. Papadopoulou, E. Carrette, P. Boon, S. Vandenberghe, K. Vonck, and D. Marinazzo, Prog. Neurobiol. 121, 19 (2014).
95. A. J. Mackintosh, R. de Bock, Z. Lim, V.-N. Trulley, A. Schmidt, S. Borgwardt, and C. Andreou, Neurosci. Biobehav. Rev. 120, 354 (2021).
96. T. D. Satterthwaite, R. Ciric, D. R. Roalf, C. Davatzikos, D. S. Bassett, and D. H. Wolf, Hum. Brain Mapp. 40, 2033 (2019).
98. E. Arad, R. P. Bartsch, J. W. Kantelhardt, and M. Plotnik, PLoS One 13, e0197153 (2018).
99. E. T. Rolls, W. Cheng, and J. Feng, Transl. Psychiatry 11, 70 (2021).
100. S. Petkoski, P. Ritter, and V. K. Jirsa, Cereb. Cortex 33, 6241 (2023).
101. D. Liloia, J. Manuello, T. Costa, R. Keller, A. Nani, and F. Cauda, Eur. Arch. Psychiatry Clin. Neurosci. 274, 3 (2023).
102. R. Coben, A. R. Clarke, W. Hudspeth, and R. J. Barry, Clin. Neurophysiol. 119, 1002 (2008).
103. A. Dickinson, M. Daniel, A. Marin, B. Gaonkar, M. Dapretto, N. M. McDonald, and S. Jeste, Biol. Psychiatry Cogn. Neurosci. Neuroimaging 6, 59 (2021).
104. M. M. Chan, M.-C. Chan, O. L.-H. Lai, K. Krishnamurthy, and Y. M. Han, Biomedicines 10, 1132 (2022).
105. J. Bjerkan, J. Kobal, G. Lancaster, S. Šešok, B. Meglič, P. V. E. McClintock, K. Budohoski, P. Kirkpatrick, and A. Stefanovska, Brain Commun. 6 (2024).
106. A. Arinyo-i-Prats, V. J. López-Madrona, and M. Paluš, NeuroImage 292, 120610 (2024).
107. M. Günther, J. W. Kantelhardt, and R. P. Bartsch, Front. Netw. Physiol. 2, 893743 (2022).
109. W. Mumtaz, S. Rasheed, and A. Irfan, Biomed. Signal Process. Control 68, 102741 (2021).