The ability to distinguish between stochastic systems based on their trajectories is crucial in thermodynamics, chemistry, and biophysics. The Kullback–Leibler (KL) divergence quantifies the distinguishability between the two ensembles of length-τ trajectories from Markov processes A and B. However, evaluating the trajectory KL divergence from histograms of trajectories faces significant sampling difficulties, and no theory explicitly reveals which dynamical features contribute to the distinguishability. This work provides a general formula that decomposes the trajectory KL divergence in space and time for any Markov processes, arbitrarily far from equilibrium or steady state. The formula circumvents the sampling difficulty of evaluating the divergence directly. Furthermore, it explicitly connects the trajectory KL divergence with individual transition events and their waiting-time statistics. The results provide insights into the distinguishability between Markov processes, leading to new theoretical frameworks for designing biological sensors and optimizing signal transduction.
I. INTRODUCTION
Markov models are powerful tools for describing various phenomena in physics, chemistry, and biology.1–13 These inherently probabilistic models are particularly effective in capturing the dynamic behaviors of complex systems,14–18 especially non-equilibrium behaviors, including stochastic thermodynamic processes, chemical reaction kinetics,4,19–22 and the configuration dynamics of biomolecules.23–26 In biology, Markov models have provided invaluable insights into the coarse-grained configuration dynamics of complex molecules, elucidating sub-cellular processes such as gene regulation networks,27–29 molecular motors,30,31 ion channels,32–34 and various cellular processes.7,35,36
The ability to distinguish between two Markov systems plays a crucial role in solving various practical problems. For example, consider identifying mutations in proteins. By examining how a protein's stochastic trajectories traversing various meta-stable states differ from those of a wild-type protein, one can distinguish the mutant from the wild type. Another example is signal pattern recognition. Consider a sensor as a Markov system influenced by external signals. The sensor's ability to distinguish between various signal patterns is demonstrated by its ability to generate different stochastic trajectories in response to each pattern.
From an observational perspective, two Markov processes can be distinguished by the statistics of their states and of the kinetic transitions among states (trajectories). The state transition trajectory contains more information about the Markov process than the probability distribution over the state space.37–41 To illustrate this, consider two Markov processes with the same stationary probability distribution, where one system satisfies detailed balance and the other is dissipative. If both systems have reached the stationary distribution, an observer cannot distinguish the two processes by merely observing the statistics of their states. However, observing the trajectories of the two processes easily distinguishes the dissipative system (with non-zero net current) from the detailed-balance system (without net current). This argument has interesting implications for biological information sensing: it has been shown that the ensemble of configuration trajectories of a ligand–receptor sensor contains more information than the statistical average of its states. Moreover, by observing the trajectory of a binary-state sensor, one can simultaneously infer multiple environmental parameters (multiplexing).40–42
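This distinction can be made concrete with a minimal numerical sketch (all rates below are illustrative and not taken from the paper): two three-state ring chains share the same uniform stationary distribution, so their state histograms are indistinguishable, but only the biased chain carries a net current that trajectory statistics immediately expose.

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps = 20000

# Two 3-state ring chains with the SAME uniform stationary distribution.
# Chain A is dissipative (biased clockwise); chain B satisfies detailed balance.
P_A = np.array([[0.0, 0.9, 0.1],
                [0.1, 0.0, 0.9],
                [0.9, 0.1, 0.0]])
P_B = np.array([[0.0, 0.5, 0.5],
                [0.5, 0.0, 0.5],
                [0.5, 0.5, 0.0]])

def simulate(P, n):
    x, traj = 0, [0]
    for _ in range(n):
        x = rng.choice(3, p=P[x])
        traj.append(x)
    return np.array(traj)

def net_current(traj):
    # Net current per step across the directed ring edge 0 -> 1.
    fwd = np.sum((traj[:-1] == 0) & (traj[1:] == 1))
    bwd = np.sum((traj[:-1] == 1) & (traj[1:] == 0))
    return (fwd - bwd) / len(traj)

traj_A, traj_B = simulate(P_A, n_steps), simulate(P_B, n_steps)

# State statistics are (nearly) identical: both histograms are uniform...
hist_A = np.bincount(traj_A, minlength=3) / len(traj_A)
hist_B = np.bincount(traj_B, minlength=3) / len(traj_B)

# ...but the trajectories are easy to tell apart by their net current.
current_A, current_B = net_current(traj_A), net_current(traj_B)
```

Both transition matrices are doubly stochastic, so the uniform distribution is stationary for each; only the trajectory-level current betrays the dissipative chain.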
II. THEORY
A. Distinguishability as the Kullback–Leibler divergence
There are two main drawbacks in evaluating the trajectory KL divergence merely from the above definition. First, the evaluation of the divergence between two finite ensembles of trajectories suffers from insufficient sampling and slow convergence.44,45 Second, the definition in terms of trajectory probabilities only implicitly depends on the transition kinetic rates and thus misses an explicit interpretation of the distinguishability in terms of the kinetic properties of the two systems.
B. Equalities for trajectory KL divergence
C. Key properties of trajectory KL divergence
The results reveal several critical properties of the trajectory KL divergence, providing three perspectives for understanding the distinguishability between two Markov systems.
1. The initial and the accumulated divergence
When the observed trajectory length reduces to τ → 0, the trajectory KL divergence of the two Markov processes reduces to the KL divergence between the two initial state distributions. That is, by observing infinitely short trajectories, the distinguishability of two Markov processes equals the distinguishability between the systems' initial state distributions. For any finite-time observation τ > 0, Eq. (8) implies that as the trajectory length τ increases, the distinguishability always accumulates at a non-negative rate. In other words, the longer the observed trajectory, the greater the revealed distinguishability between the two Markov processes. It is noteworthy that the initial state distribution of process B does not impact the accumulated divergence, since its effect is fully captured by the KL divergence between the initial distributions.
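Both limits can be checked by brute force on a toy model (a sketch with illustrative transition matrices, not the paper's examples): exact enumeration over all length-τ binary trajectories shows that the divergence at τ = 0 equals the initial-distribution KL divergence and then grows monotonically with τ.

```python
import itertools
import numpy as np

# Two binary-state discrete-time chains with different transition matrices
# and different initial distributions (illustrative numbers).
P_A = np.array([[0.7, 0.3], [0.4, 0.6]])
P_B = np.array([[0.5, 0.5], [0.2, 0.8]])
p0_A = np.array([0.9, 0.1])
p0_B = np.array([0.6, 0.4])

def path_prob(path, p0, P):
    prob = p0[path[0]]
    for x, y in zip(path[:-1], path[1:]):
        prob *= P[x, y]
    return prob

def traj_kl(tau):
    # Exact KL divergence between the ensembles of length-tau trajectories,
    # summed over all 2**(tau+1) binary paths.
    total = 0.0
    for path in itertools.product((0, 1), repeat=tau + 1):
        pa = path_prob(path, p0_A, P_A)
        pb = path_prob(path, p0_B, P_B)
        total += pa * np.log(pa / pb)
    return total

divergences = [traj_kl(tau) for tau in range(6)]

# tau -> 0 limit: the trajectory KL reduces to the initial-distribution KL.
d0 = np.sum(p0_A * np.log(p0_A / p0_B))
# For larger tau the divergence can only accumulate (non-negative rate).
```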
2. Temporal decomposition of the trajectory KL divergence
3. Spatial (edge-wise) decomposition of the trajectory KL divergence
Equations (7) and (8) reveal the non-negative spatial (edge-wise) additivity of the trajectory KL divergence, where the accumulated KL divergence is the summation of non-negative contributions from each directed transition via connected edges. Furthermore, Eq. (8) shows that the contribution from each directed edge equals the product of the detailed probability flow rate [Eq. (9)] and a force-like weighting factor [Eq. (10)]. The force-like factor characterizes the distinction between the two transition rates of the same edge in the two Markov processes. This additivity of non-negative contributions from each edge implies that coarse-graining states or ignoring any edge of the graph may cause underestimation of the trajectory KL divergence, which agrees with intuition. Importantly, this spatial decomposition in state transitions leads to a practical formula described at the end of this paper, allowing for the development of enhanced transition-event sampling methods toward the efficient evaluation of the trajectory KL divergence.
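As a sketch of this flow-times-force structure, the snippet below assumes the standard continuous-time expression for the KL accumulation rate between two Markov jump processes, Σ_{x≠x′} p^A_x R^A_{x′x} [ln(R^A_{x′x}/R^B_{x′x}) + R^B_{x′x}/R^A_{x′x} − 1]; the paper's Eqs. (9) and (10) may be normalized differently, and all numerical rates are illustrative. Each directed-edge term has the form a ln(a/b) − a + b ≥ 0, which makes the non-negative edge-wise additivity explicit.

```python
import numpy as np

# Off-diagonal rates R[x_to, x_from] for two three-state processes
# (illustrative numbers, not taken from the paper).
R_A = np.array([[0.0, 2.0, 1.0],
                [1.0, 0.0, 3.0],
                [2.0, 1.0, 0.0]])
R_B = np.array([[0.0, 1.5, 1.0],
                [2.0, 0.0, 2.5],
                [1.0, 1.0, 0.0]])
p = np.array([0.5, 0.3, 0.2])  # instantaneous distribution of process A

edge_terms = {}
for x in range(3):
    for xp in range(3):
        if x == xp:
            continue
        flow = p[x] * R_A[xp, x]                  # detailed probability flow rate
        force = (np.log(R_A[xp, x] / R_B[xp, x])  # force-like weighting factor
                 + R_B[xp, x] / R_A[xp, x] - 1.0)
        edge_terms[(x, xp)] = flow * force

rate = sum(edge_terms.values())
# Each term equals a*ln(a/b) - a + b with a = p_x R^A, b = p_x R^B, which is
# non-negative; dropping any edge can therefore only underestimate the total.
```

Edges where the two processes share identical rates (here 2 → 0) contribute exactly zero, consistent with the force-like factor vanishing on such edges.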
D. Connection to thermodynamic entropy
Entropy plays a central role in nonequilibrium thermodynamics and is closely related to the Jarzynski equality,46 the Crooks fluctuation theorem,47 and other forms of fluctuation theorems.45,48,49 At first glance, the temporal and spatial decomposition of the trajectory KL divergence resembles the decomposition of the entropy production in stochastic thermodynamics.48 However, the connection between the trajectory KL divergence and the entropy production depends non-trivially on the construction of Markov process B.
In stochastic thermodynamics, the entropy production of a Markov process can be obtained from the logarithmic ratio of the probability measure of a trajectory to the probability measure of the time-reversed trajectory, known as the Radon–Nikodym derivative (RND).50 In contrast, the divergence defined in this work involves the RND between one trajectory's probability measures under two Markov processes A and B. The RND and generalized fluctuation theorems under various choices of conjugate processes can be found in Refs. 6, 45, 49, and 51–58. Notably, the trajectory KL divergence under an arbitrary conjugate process was discussed for diffusive and Fokker–Planck (FP) dynamics in Refs. 51 and 52. It is worth further study to determine whether the results of this paper, derived for discrete-state Markov processes, can be directly extended to FP dynamics.
To summarize, for a Markov thermodynamic process A and its “conjugate” dynamics A*, when the two systems are prepared in the same initial distribution, their trajectory KL divergence equals the reservoir's entropy change due to process A.
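This correspondence can be verified numerically for a stationary example (a sketch; the rates are illustrative, and the trajectory KL rate below assumes the standard jump-process form rather than the paper's exact notation). Constructing the conjugate process A* by the usual time-reversal rule R*_{x′x} = π_{x′} R_{xx′}/π_x, the KL rate between A and A* reproduces the stationary entropy production rate.

```python
import numpy as np

# A 3-state ring CTMC with a nonzero steady-state current (illustrative rates).
# k[x_to, x_from] holds the off-diagonal jump rates.
k = np.array([[0.0, 1.0, 3.0],
              [3.0, 0.0, 1.0],
              [1.0, 3.0, 0.0]])
R = k - np.diag(k.sum(axis=0))  # generator: columns sum to zero

# Stationary distribution: null vector of the generator.
w, v = np.linalg.eig(R)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi = pi / pi.sum()

# Conjugate (time-reversed) rates: R*_{x'x} = pi_{x'} k_{x x'} / pi_x.
k_star = np.zeros_like(k)
for x in range(3):
    for xp in range(3):
        if x != xp:
            k_star[xp, x] = pi[xp] * k[x, xp] / pi[x]

# Trajectory KL rate between A and A* (standard CTMC form, sketch).
kl_rate = 0.0
for x in range(3):
    for xp in range(3):
        if x != xp:
            a = pi[x] * k[xp, x]       # forward flow in A
            b = pi[x] * k_star[xp, x]  # same flow weighted by conjugate rates
            kl_rate += a * np.log(a / b) - a + b

# Entropy production rate of the stationary process A.
epr = 0.0
for x in range(3):
    for xp in range(x + 1, 3):
        J = pi[x] * k[xp, x] - pi[xp] * k[x, xp]
        epr += J * np.log((pi[x] * k[xp, x]) / (pi[xp] * k[x, xp]))
```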
III. DISCUSSIONS AND APPLICATIONS
In the remainder, we discuss the practical implications of the theory. We first examine how the initial state distribution affects the observational distinguishability between two Markov systems (see Secs. III A and III B). Then, we discuss how the trajectory KL divergence responds to control protocols (input signals) and illustrate the implications for biological or artificial stochastic sensors. Specifically, the theory inspires the design of optimal sensors that can discern input signals (see Sec. III C) and reveals the optimal control protocol to discern two apparently similar systems (see Sec. III D). In the end (see Secs. III E and III F), we show that the theory provides multiple new approaches to efficiently evaluate the trajectory KL divergence. These approaches circumvent the insufficient-sampling difficulty of evaluating the divergence from a finite number of trajectories.
A. Distinguishability of initial state distributions
Consider two stochastic systems that evolve according to identical kinetic rates and differ only in their initial-state distributions. Does measuring their trajectories provide more information than the initial states for distinguishing the two systems? No: we show that the difference in their initial state distributions fully captures the distinguishability of the two Markov processes. From Eq. (10), the divergence force is zero for all times and edges due to the identical kinetic rates. Then, according to Eqs. (7) and (8), the KL divergence accumulated by observing longer trajectories always equals 0. Thus, Eq. (5) reduces to the KL divergence between the two initial distributions. In other words, additional observations of the trajectories beyond the initial state cannot improve the observational distinguishability between Markov systems with fully identical kinetic rates. An alternative proof using the data processing inequality59 is given in Sec. III of the supplementary material.
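A quick exact-enumeration check of this statement (with illustrative numbers, not from the paper): with a shared transition matrix, the trajectory KL divergence stays pinned at the initial-distribution value for every trajectory length.

```python
import itertools
import numpy as np

# One common transition matrix, two different initial distributions.
P = np.array([[0.7, 0.3], [0.4, 0.6]])
p0_A = np.array([0.9, 0.1])
p0_B = np.array([0.3, 0.7])

def traj_kl(tau):
    # Exact trajectory KL divergence for length-tau binary trajectories.
    total = 0.0
    for path in itertools.product((0, 1), repeat=tau + 1):
        pa, pb = p0_A[path[0]], p0_B[path[0]]
        for x, y in zip(path[:-1], path[1:]):
            # Identical rates: the transition factors cancel in pa/pb.
            pa *= P[x, y]
            pb *= P[x, y]
        total += pa * np.log(pa / pb)
    return total

d_init = np.sum(p0_A * np.log(p0_A / p0_B))
divs = [traj_kl(tau) for tau in range(6)]
# Every divs[tau] equals d_init: watching longer reveals nothing extra.
```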
B. Optimal initial distribution to maximize trajectory distinguishability
Initial distribution dependence of the trajectory KL divergence. Trajectories are obtained from two three-state Markov processes with a common initial distribution (p0, p1, p2), where p2 = 1 − p0 − p1. Each data point represents a distinct initial distribution, and the points live on a flat plane in accordance with Eq. (15). The maximum divergence is achieved at a delta-function initial distribution (p0, p1, p2) = (0, 1, 0). See Sec. IV B of the supplementary material for details.
Now, we conclude the discussion of the initial state distribution and shift the focus to temporal control protocols governing the two Markov systems. Here, we describe a sensor under the influence of a signal λ(t) as a Markov system whose rate matrix depends on the signal. Under this construction, the results of this work can be used to describe temporal pattern recognition by a sensor (see Sec. III C). They can also guide the search for the optimal external control protocol to maximally reveal the distinction between two apparently similar Markov systems with slight differences in their kinetic properties (see Sec. III D).
C. Design principle: Sensor for discerning temporal patterns
Various biological sensors can improve their sensing capability by utilizing non-trivial stochastic trajectories of their internal states evolving under complex external input signals.37–41,62,63 Consider a stochastic sensor as a Markov system hopping between different internal states, with a time-dependent rate matrix controlled by the protocol λ(t), which represents the external signal. The downstream sensory network, which interacts with the sensor but not directly with the input signal, can discern two temporal protocols [λA(t) and λB(t)] only by “observing” the sensor's state trajectories. In this case, the maximum information the downstream sensory network can acquire is limited by the information contained within the observed ensemble of the sensor's state trajectories. Thus, the trajectory KL divergence defined in this work captures the upper bound of the distinguishability of the two signals transmitted via the sensor, and an optimal sensor is one with the highest upper bound given the design restrictions. As a result, the theory in this paper can be applied in various practical settings to improve a sensor's sensitivity or optimize its pattern-recognition ability. This paper does not aim to provide a universal analytical solution for the optimal sensor design, because the solution depends on specific details of the practical problem, e.g., the types of protocols to be discerned, the physical coupling between the sensor and the signal, and other limitations that define the available forms of the sensor's rate matrix. Although not discussed in this paper, we have recently applied this theory to designing optimal concentration sensors and sensors capable of recognizing patterns of time-varying signal-molecule concentrations.
D. Optimal control protocol to reveal differences between Markov systems
A conjugate problem of Sec. III C is to design the optimal control protocol that maximally reveals the difference between two Markov systems. To illustrate the problem with a practical example, consider two proteins with subtle differences: a wild type and a mutant. The two molecules share the same number of metastable configurations (states); however, the transition rates between states are different for the two, and their rates are influenced by an external control parameter λ (e.g., electric field, temperature, or pH). If the two molecules are subtly different in their kinetic rates and/or their rates' dependence on the external control, merely observing the steady-state distribution of the molecules in a stationary environment may reveal, to some extent, the differences between the two molecules. However, our theory indicates that one can enhance the ability to distinguish the two systems by driving them out of the steady state with the same control protocol λ(t) and observing the difference in their stochastic trajectory ensembles [see Eq. (4)]. Moreover, the theory allows for the search of the optimal control protocol λ*(t) to maximally reveal the difference between the two systems. Given the specific forms of the two molecules' kinetic rates and their dependence on the external control parameter λ, the search for the optimal protocol λ*(t) becomes a variational optimization problem of the trajectory KL divergence. Intuitively, the decomposition relations imply that the optimal driving protocol maximizes the accumulated number of transitions over the edges carrying large force-like weighting factors.
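As a toy illustration of such a protocol search (all functional forms here are invented for this sketch; the paper does not prescribe them), consider two binary-state systems whose 0 → 1 rates depend differently on a constant control λ, and pick the candidate λ that maximizes the accumulated divergence computed from the edge-wise rate expression.

```python
import numpy as np

# Two binary-state systems whose 0 -> 1 rate depends on a control parameter
# lam; the 1 -> 0 rate is identical for both (illustrative functional forms).
def rates_A(lam):
    return np.exp(lam), 1.0        # (rate 0->1, rate 1->0)

def rates_B(lam):
    return np.exp(0.9 * lam), 1.0

def accumulated_kl(lam, T=5.0, dt=1e-3):
    """Integrate the edge-wise KL accumulation rate under a constant lam
    while process A evolves from state 0 (simple Euler time stepping)."""
    a01, a10 = rates_A(lam)
    b01, b10 = rates_B(lam)
    p0 = 1.0   # probability of state 0 under process A
    total = 0.0
    for _ in range(int(T / dt)):
        # Edge 0->1 contributes flow * force; edge 1->0 has identical rates
        # in A and B and hence contributes zero.
        flow = p0 * a01
        force = np.log(a01 / b01) + b01 / a01 - 1.0
        total += flow * force * dt
        p0 += (-a01 * p0 + a10 * (1.0 - p0)) * dt
    return total

# Brute-force search over a few candidate constant protocols.
candidates = [0.0, 0.5, 1.0, 1.5]
best = max(candidates, key=accumulated_kl)
```

In this toy setup, the rate difference between A and B grows with λ, so the largest candidate wins; with time-dependent protocols the same objective becomes a variational problem over λ(t).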
In the following, we discuss how the decomposition relation can circumvent the sampling convergence difficulties in evaluating the trajectory KL divergence.
E. Trajectory-slicing approach for evaluating the trajectory KL divergence
When directly evaluated from the definition, the trajectory KL divergence can face significant sampling difficulty: each trajectory observed in process A must also be observed sufficiently many times in process B. This convergence difficulty grows rapidly with the trajectory length τ, making it almost impossible to directly calculate the divergence for long trajectories. However, the temporal decomposition shown in Eq. (11) makes it possible to circumvent the difficulty as follows. By dividing the trajectory into multiple consecutive segments, it is easier to evaluate the segment trajectory KL divergences and then reconstruct the whole-trajectory KL divergence: the desired quantity is obtained by adding the segment KL divergences together and subtracting the state-distribution KL divergences evaluated at the intersecting time points. In comparison with traditional approaches, this trajectory-slicing approach significantly reduces the number of trajectories required for a convergent and accurate estimation of the trajectory KL divergence, without singularities. (See Fig. 2 and Sec. V of the supplementary material for details.)
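A minimal implementation of the slicing estimator (illustrative binary-state chains; segment length 2, as in Fig. 2) compares the sliced plug-in estimate against the exact enumerated value. For a Markov process, summing segment KLs and subtracting the junction state-distribution KLs is exact in expectation.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

P_A = np.array([[0.7, 0.3], [0.4, 0.6]])
P_B = np.array([[0.5, 0.5], [0.2, 0.8]])
p0 = np.array([0.5, 0.5])        # common initial distribution
L, n_traj, seg = 10, 100_000, 2  # trajectory length, ensemble size, segment length

def sample(P, n):
    X = np.empty((n, L + 1), dtype=int)
    X[:, 0] = rng.random(n) < p0[1]
    for t in range(L):
        X[:, t + 1] = rng.random(n) < P[X[:, t], 1]
    return X

def hist_kl(samples_A, samples_B, n_outcomes):
    # Plug-in KL estimate between two empirical histograms.
    ha = np.bincount(samples_A, minlength=n_outcomes) / len(samples_A)
    hb = np.bincount(samples_B, minlength=n_outcomes) / len(samples_B)
    mask = ha > 0
    return np.sum(ha[mask] * np.log(ha[mask] / hb[mask]))

X_A, X_B = sample(P_A, n_traj), sample(P_B, n_traj)

# Sliced estimate: sum of length-2 segment KLs minus the state-distribution
# KLs at the interior junction times.
est = 0.0
for t0 in range(0, L, seg):
    code_A = 4 * X_A[:, t0] + 2 * X_A[:, t0 + 1] + X_A[:, t0 + 2]
    code_B = 4 * X_B[:, t0] + 2 * X_B[:, t0 + 1] + X_B[:, t0 + 2]
    est += hist_kl(code_A, code_B, 8)
for t in range(seg, L, seg):
    est -= hist_kl(X_A[:, t], X_B[:, t], 2)

def exact_kl():
    # Ground truth by summing over all 2**(L+1) binary paths.
    total = 0.0
    for path in itertools.product((0, 1), repeat=L + 1):
        pa = pb = p0[path[0]]
        for x, y in zip(path[:-1], path[1:]):
            pa *= P_A[x, y]
            pb *= P_B[x, y]
        total += pa * np.log(pa / pb)
    return total
```

The segment histograms have only 8 outcomes each, so they converge with far fewer samples than the 2¹¹-outcome histogram of full-length trajectories.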
Trajectory KL divergence between two binary-state systems. Each trajectory is 10 steps long with the time step equal to unity. The KL divergence is evaluated from trajectory ensembles of sizes 2000, 10 000, 50 000, and 250 000 by different methods. Each result is averaged over ten repetitions of the calculation for randomly generated trajectories, with the error bar showing ±1 standard deviation over the ten repetitions. The trajectory-slicing method slices each trajectory into consecutive segments of length 2. See Sec. V of the supplementary material for details.
F. Event-counting estimation of the trajectory KL divergence
Scenario 1: If the transition rate matrices of processes A and B are known, one can directly evaluate the trajectory KL divergence from the above equation by plugging in the known rates (see Fig. 3).
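A sketch of scenario 1 for discrete-time binary chains (illustrative matrices): with both transition matrices known, each observed transition event simply contributes a pre-tabulated log-ratio, and averaging the accumulated sum over trajectories from A recovers the exact divergence.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

P_A = np.array([[0.7, 0.3], [0.4, 0.6]])
P_B = np.array([[0.5, 0.5], [0.2, 0.8]])
L, n_traj = 10, 50_000

# With known rate matrices, each transition event x -> y contributes a
# tabulated weight ln P_A(x, y) - ln P_B(x, y).
log_ratio = np.log(P_A / P_B)

acc = np.zeros(n_traj)
x = (rng.random(n_traj) < 0.5).astype(int)  # common initial distribution
for _ in range(L):
    y = (rng.random(n_traj) < P_A[x, 1]).astype(int)
    acc += log_ratio[x, y]  # count the event and add its weight
    x = y

est = acc.mean()  # event-counting estimate of the trajectory KL divergence

def exact_kl():
    # Ground truth by enumerating all binary paths; the initial KL term is
    # zero because both processes share the same initial distribution.
    total = 0.0
    for path in itertools.product((0, 1), repeat=L + 1):
        pa = pb = 0.5
        for a, b in zip(path[:-1], path[1:]):
            pa *= P_A[a, b]
            pb *= P_B[a, b]
        total += pa * np.log(pa / pb)
    return total
```

Each single-trajectory accumulation fluctuates (the colored curves in Fig. 3), but the ensemble average converges to the analytical value.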
Evaluation of the trajectory KL divergence based on the event-counting method. Trajectories are generated from two binary-state Markov systems. The colored curves show the accumulated divergence evaluated from single trajectories. The black solid curve is the average over the ten colored curves. The black dashed line is the analytical result.
Scenario 2: If the transition rate matrices are time-homogeneous yet unknown, one can utilize existing tools64–67 to estimate the transition rates of processes A and B and then perform the calculation described in scenario 1.
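Scenario 2 can be sketched as follows (illustrative matrices; the rate-estimation step here is a simple transition-count maximum-likelihood estimate, not the specialized tools of Refs. 64–67): estimate the transition matrices from long observed trajectories, then plug the estimates into the known-rates calculation.

```python
import numpy as np

rng = np.random.default_rng(3)

P_A_true = np.array([[0.7, 0.3], [0.4, 0.6]])
P_B_true = np.array([[0.5, 0.5], [0.2, 0.8]])

def sample_chain(P, n_steps):
    x = np.empty(n_steps + 1, dtype=int)
    x[0] = 0
    for t in range(n_steps):
        x[t + 1] = rng.random() < P[x[t], 1]
    return x

def estimate_P(traj):
    # Maximum-likelihood transition matrix from raw transition counts.
    C = np.zeros((2, 2))
    for a, b in zip(traj[:-1], traj[1:]):
        C[a, b] += 1
    return C / C.sum(axis=1, keepdims=True)

def kl_rate(P_a, P_b):
    # Per-step KL divergence rate at the stationary distribution of P_a.
    pi0 = P_a[1, 0] / (P_a[0, 1] + P_a[1, 0])
    pi = np.array([pi0, 1.0 - pi0])
    return float(np.sum(pi[:, None] * P_a * np.log(P_a / P_b)))

# Step 1: estimate the unknown transition matrices from observed trajectories.
P_A_hat = estimate_P(sample_chain(P_A_true, 200_000))
P_B_hat = estimate_P(sample_chain(P_B_true, 200_000))

# Step 2: plug the estimates into the scenario-1 calculation.
rate_hat = kl_rate(P_A_hat, P_B_hat)
```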
Scenario 3: If the transition rate matrices are unknown and time-dependent, one may need to utilize the waiting-time statistics of transitions between states x and x′ at each time t to first estimate the time-dependent rate matrices and then calculate the trajectory KL divergence as in scenario 1. In this case, utilizing the method proposed in Sec. III E may be more efficient for evaluating the trajectory KL divergence.
IV. CONCLUSION
In conclusion, this study rigorously delineates the stochastic distinguishability between two arbitrary Markov processes, quantified by the trajectory KL divergence. Our general formula decomposing the divergence in both spatial and temporal dimensions not only unveils the dynamical traits contributing to the distinguishability of stochastic systems but also allows for its efficient evaluation.
Given the flexibility in selecting Markov processes A and B, our theory is poised for broad application across various fields. It promises to extend the scope of fluctuation theorems from stochastic thermodynamics to encompass a wide range of non-thermal processes. Moreover, by analyzing trajectories timed by different mechanisms, our theory applies to clock calibration and offers insights into how a deterministic conception of time may arise from purely stochastic dynamics. Finally, by leveraging this theoretical framework to quantify information within stochastic trajectories, we lay the groundwork for innovative design principles for biological and artificial sensors to recognize and interpret temporal patterns in input signals.
SUPPLEMENTARY MATERIAL
Technical derivations that are too lengthy to be included in the manuscript and numerical simulation codes can be found in the supplementary material.
ACKNOWLEDGMENTS
This work is financially supported by the National Science Foundation under Grant No. DMR-2145256. Z.L. is grateful to Professor Hong Qian at the University of Washington, Seattle, Professor Ying Tang at Beijing Normal University, and Mr. Ruicheng Bao at the University of Science and Technology of China for their constructive comments on this work.
AUTHOR DECLARATIONS
Conflict of Interest
The authors have no conflicts to disclose.
Author Contributions
A.P., Z.Z., and J.Z. contributed equally to this work, and their names are ordered alphabetically.
Asawari Pagare: Conceptualization (equal); Formal analysis (equal); Methodology (equal); Writing – original draft (equal); Writing – review & editing (equal). Zhongmin Zhang: Conceptualization (equal); Formal analysis (equal); Methodology (equal); Writing – original draft (equal); Writing – review & editing (equal). Jiming Zheng: Conceptualization (equal); Formal analysis (equal); Methodology (equal); Writing – original draft (equal); Writing – review & editing (equal). Zhiyue Lu: Conceptualization (lead); Formal analysis (equal); Funding acquisition (lead); Investigation (equal); Writing – original draft (lead); Writing – review & editing (equal).
DATA AVAILABILITY
Data sharing is not applicable to this article as no new data were created or analyzed in this study.