Stochastic Distinguishability of Markovian Trajectories

The ability to distinguish between stochastic systems based on their trajectories is crucial in thermodynamics, chemistry, and biophysics. The Kullback-Leibler (KL) divergence, $D_{\text{KL}}^{AB}(0,\tau)$, quantifies the distinguishability between the two ensembles of length-$\tau$ trajectories from Markov processes A and B. However, evaluating $D_{\text{KL}}^{AB}(0,\tau)$ from trajectory histograms suffers from severe sampling difficulties, and no existing theory explicitly reveals which dynamical features contribute to the distinguishability. This letter provides a general formula that decomposes $D_{\text{KL}}^{AB}(0,\tau)$ in space and time for any Markov processes, arbitrarily far from equilibrium or steady state. The formula circumvents the sampling difficulty of evaluating $D_{\text{KL}}^{AB}(0,\tau)$ and explicitly connects the trajectory KL divergence with individual transition events and their waiting-time statistics. These results provide insight into the distinguishability between Markov processes, leading to new theoretical frameworks for designing biological sensors and optimizing signal transduction.

The ability to distinguish between two Markov systems plays a crucial role in solving various practical problems. For example, consider identifying mutations in proteins. By examining how a protein's stochastic trajectories traversing various meta-stable states differ from those of a wild-type protein, one can tell the mutant and the wild type apart. Another example is signal pattern recognition. Consider a sensor as a Markov system influenced by external signals. The sensor's ability to distinguish between various signal patterns is demonstrated by its ability to generate different stochastic trajectories in response to each pattern.
In this work, we adopt the master equation description of a Markov process on a finite number of states $x \in \{1, 2, \cdots, N\}$:
$$\frac{\partial}{\partial t} p(x;t) = \sum_{x'} R_{xx'}(t)\, p(x';t),$$
where $p(x;t)$ is the probability to find the system at state $x$ at time $t$, $R_{xx'}(t)$ is the probability transition rate from state $x'$ to state $x$ at time $t$, and $R_{xx}(t) = -\sum_{x' \neq x} R_{x'x}(t)$ preserves the normalization. The master equation can also be written in matrix form:
$$\dot{\vec{p}}(t) = \mathsf{R}(t)\, \vec{p}(t),$$
where $\vec{p}(t) = \{p(x;t)\}$ is a vector and $\mathsf{R}(t) = \{R_{x'x}(t)\}$ is the rate matrix.

a) These authors contributed equally to this work and their names are ordered alphabetically.
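As a concrete illustration (a minimal sketch of our own, not from the paper; the 3-state rates below are arbitrary), the master equation can be integrated numerically while preserving normalization:

```python
import numpy as np

# Hypothetical 3-state rate matrix: entry R[x, x'] (x != x') is the rate x' -> x;
# diagonals are set so every column sums to zero, preserving normalization.
R = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
R -= np.diag(R.sum(axis=0))

def evolve(p0, R, t, n_steps=10000):
    """Euler-integrate the master equation dp/dt = R p."""
    p = np.array(p0, dtype=float)
    dt = t / n_steps
    for _ in range(n_steps):
        p = p + dt * (R @ p)
    return p

p = evolve([1.0, 0.0, 0.0], R, t=5.0)
print(p, p.sum())  # probabilities remain normalized
```

Because each column of the rate matrix sums to zero, every Euler step conserves total probability exactly.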
From an observational perspective, two Markov processes can be distinguished by the statistics of their states and of the kinetic transitions among states (trajectories). The state-transition trajectory contains more information about a Markov process than the probability distribution over the state space [37-41]. To illustrate this, consider two Markov processes with the same stationary probability distribution, where one system satisfies detailed balance and the other is dissipative. If both systems reach the stationary distribution, an observer cannot distinguish the two processes by merely observing the statistics of their states. However, observing the trajectories of the two processes easily distinguishes the dissipative system (with non-zero net current) from the detailed-balance system (without net current). This argument has interesting implications for biological information sensing: it has been shown that the ensemble of configuration trajectories for a ligand-receptor sensor contains more information than the statistical average of its states. Moreover, by observing the trajectory of a binary-state sensor, one can simultaneously infer multiple environmental parameters (multiplexing) [40-42].

A. Distinguishability as the Kullback-Leibler Divergence
Let us consider two ensembles of stochastic trajectories $X_\tau$ from two Markov processes A and B, where each trajectory is of time length $\tau$. The probability distribution of length-$\tau$ stochastic trajectories for Markov process A is denoted by $\mathcal{P}_A[X_\tau]$, and the trajectory KL divergence is
$$D_{\text{KL}}^{AB}(0,\tau) = \sum_{X_\tau} \mathcal{P}_A[X_\tau] \ln \frac{\mathcal{P}_A[X_\tau]}{\mathcal{P}_B[X_\tau]} \tag{4}$$
$$D_{\text{KL}}^{AB}(0,\tau) = d_{\text{KL}}^{AB}(0) + D_{\text{acc}}^{AB}(0,\tau), \tag{5}$$
where
$$d_{\text{KL}}^{AB}(t) = \sum_x p_A(x;t) \ln \frac{p_A(x;t)}{p_B(x;t)} \tag{6}$$
is the state-ensemble KL divergence at time $t$. The accumulated KL divergence $D_{\text{acc}}^{AB}(0,\tau)$ is the additional distinguishability gained by observing the trajectories between $0$ and $\tau$. It follows the decomposition in time and space (or edge):
$$D_{\text{acc}}^{AB}(0,\tau) = \int_0^\tau \mathrm{d}t \sum_{\text{edge } x \to x'} \dot{D}_{\text{KL},x'x}^{AB}(t). \tag{7}$$
Here the summation $\sum_{\text{edge } x \to x'}$ includes all directed edges (connecting state $x$ to $x'$) with non-zero rate $R_{x'x}(t)$ at time $t$. Furthermore, we find that the distinguishability accumulation rate from each directed edge, $\dot{D}_{\text{KL},x'x}^{AB}(t)$, can be written as the product of the detailed current of the edge, $J_{x'x}^A$, and a divergence force:
$$\dot{D}_{\text{KL},x'x}^{AB}(t) = J_{x'x}^A(t)\, F_{x'x}^{AB}(t). \tag{8}$$
The detailed probability current from $x$ to $x'$ at time $t$ is
$$J_{x'x}^A(t) = R_{x'x}^A(t)\, p_A(x;t). \tag{9}$$
The divergence force itself is a KL divergence between two transition dwell-time distributions,
$$F_{x'x}^{AB}(t) = D_{\text{KL}}\!\left[ P_{x'x}^A(s) \,\big\|\, P_{x'x}^B(s) \right] = \ln \frac{R_{x'x}^A(t)}{R_{x'x}^B(t)} + \frac{R_{x'x}^B(t)}{R_{x'x}^A(t)} - 1, \tag{10}$$
where $P_{x'x}^{A}(s)$ is the exponential distribution of the dwell time $s$ in state $x$ before a transition to state $x'$ occurs, characterized by the transition probability rate $R_{x'x}^A(t)$ at time $t$.
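To make the edge-wise decomposition concrete, the following sketch (our own illustration; the binary-state rates are those of the Fig. 2 example) numerically integrates the accumulation rate $J^A_{x'x}(t)\,F^{AB}_{x'x}(t)$ over time and edges:

```python
import numpy as np

def dwell_kl(rA, rB):
    """Divergence force F^{AB}: KL divergence between the exponential
    dwell-time distributions Exp(rA) || Exp(rB)."""
    return np.log(rA / rB) + rB / rA - 1.0

def accumulated_kl(RA, RB, p0, tau, n_steps=20000):
    """D_acc(0, tau) = integral over t of sum over edges of J^A * F^{AB},
    with p evolved under process A. RA, RB have zero column sums."""
    n = RA.shape[0]
    p = np.array(p0, dtype=float)
    dt = tau / n_steps
    D = 0.0
    for _ in range(n_steps):
        for x in range(n):
            for xp in range(n):
                if xp != x and RA[xp, x] > 0.0:
                    J = RA[xp, x] * p[x]  # detailed current x -> x'
                    D += dt * J * dwell_kl(RA[xp, x], RB[xp, x])
        p = p + dt * (RA @ p)  # master-equation step for process A
    return D

# Binary-state example (rates as in Fig. 2): R^A_{21}=1, R^A_{12}=2; B swaps them.
RA = np.array([[-1.0, 2.0], [1.0, -2.0]])
RB = np.array([[-2.0, 1.0], [2.0, -1.0]])
D = accumulated_kl(RA, RB, p0=[0.5, 0.5], tau=2.0)
assert D >= 0.0  # every edge contributes non-negatively
```

Since each dwell-time KL term is non-negative and the currents are non-negative, the accumulated divergence can only grow with the observation window.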

C. Key Properties of Trajectory KL Divergence
The results reveal several critical properties of the trajectory KL divergence, $D_{\text{KL}}^{AB}(0,\tau)$, and provide three perspectives for understanding the distinguishability between two Markov systems.

1. The initial and the accumulated divergence
When the observed trajectory length reduces to $\tau \to 0$, $D_{\text{KL}}^{AB}(0,\tau) = d_{\text{KL}}^{AB}(0)$. In this limit, the trajectory KL divergence of the two Markov processes reduces to the KL divergence between the two initial state distributions. That is, by observing infinitely short trajectories, the distinguishability of two Markov processes equals the distinguishability between the systems' initial state distributions. For any finite-time observation $\tau > 0$, Eq. (8) implies that as the trajectory length $\tau$ increases, the distinguishability always accumulates with a non-negative rate: $\dot{D}_{\text{KL},x'x}^{AB}(t) \geq 0$ for all $\{(t,x,x') \,|\, x' \neq x\}$. In other words, the longer the observed trajectory, the greater the revealed distinguishability between the two Markov processes. It is noteworthy that the initial state distribution of process B does not impact $D_{\text{acc}}^{AB}(0,\tau)$, since its effect is fully captured by the initial $d_{\text{KL}}^{AB}(0)$.
2. Temporal decomposition of $D_{\text{KL}}^{AB}(0,\tau)$

The trajectory KL divergence obeys a temporal additivity relation,
$$D_{\text{KL}}^{AB}(0,\tau) = D_{\text{KL}}^{AB}(0,t^*) + D_{\text{KL}}^{AB}(t^*,\tau) - d_{\text{KL}}^{AB}(t^*), \tag{11}$$
where $d_{\text{KL}}^{AB}(t^*)$ denotes the state-ensemble KL divergence between the two systems evaluated at time $t^*$. This additivity formula indicates that one can obtain the KL divergence of trajectories starting at $0$ and ending at $\tau$ as the summation of that for two (or more) consecutive trajectories, $(0,t^*)$ and $(t^*,\tau)$, after the removal of the overlapping term(s) $d_{\text{KL}}^{AB}(t^*)$. This decomposition can also be represented in a cleaner form in terms of the accumulated KL divergence:
$$D_{\text{acc}}^{AB}(0,\tau) = D_{\text{acc}}^{AB}(0,t^*) + D_{\text{acc}}^{AB}(t^*,\tau).$$

3. Spatial (edge-wise) decomposition of $D_{\text{KL}}^{AB}(0,\tau)$

Eqs. (7) and (8) reveal the non-negative spatial (edge-wise) additivity of the trajectory KL divergence: the accumulated KL divergence is the summation of non-negative contributions from each directed transition via connected edges. Furthermore, Eq. (8) shows that the contribution from each directed edge equals the product of the detailed probability flow rate, Eq. (9), and a force-like weighting factor, Eq. (10). The force-like factor characterizes the distinction between the two transition rates of the same edge in the two Markov processes. This additivity of non-negative contributions from each edge implies that coarse-graining states or ignoring any edge of the graph may cause underestimation of the trajectory KL divergence, which agrees with intuition. Importantly, this spatial decomposition in state transitions leads to a practical formula, described at the end of this paper, allowing for the development of enhanced transition-event sampling methods for the efficient evaluation of $D_{\text{KL}}^{AB}(0,\tau)$.
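The temporal additivity of the accumulated divergence can be checked numerically; the sketch below (our own, using the binary-state rates of the Fig. 2 example) splits the window $[0,2]$ at $t^* = 1$:

```python
import numpy as np

# Binary-state rates (the Fig. 2 example): R^A_{21}=1, R^A_{12}=2; B swaps them.
RA = np.array([[-1.0, 2.0], [1.0, -2.0]])
RB = np.array([[-2.0, 1.0], [2.0, -1.0]])

def d_acc(p0, duration, dt=1e-4):
    """Accumulated KL divergence over a window of the given duration,
    starting from process-A distribution p0 (time-homogeneous rates)."""
    p, D = np.array(p0, dtype=float), 0.0
    for _ in range(round(duration / dt)):
        for x in range(2):
            xp = 1 - x
            F = np.log(RA[xp, x] / RB[xp, x]) + RB[xp, x] / RA[xp, x] - 1.0
            D += dt * RA[xp, x] * p[x] * F  # J^A * F^{AB} on edge x -> x'
        p = p + dt * (RA @ p)
    return D, p

# Additivity: D_acc(0, tau) = D_acc(0, t*) + D_acc(t*, tau); here tau=2, t*=1.
D_full, _ = d_acc([0.5, 0.5], 2.0)
D_1, p_mid = d_acc([0.5, 0.5], 1.0)
D_2, _ = d_acc(p_mid, 1.0)
assert abs(D_full - (D_1 + D_2)) < 1e-9
```

Note that $D_{\text{acc}}$ depends only on process A's evolving distribution and the two rate matrices, so the second segment only needs the distribution of A carried to $t^*$.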

D. Connection to Thermodynamic Entropy
Entropy plays a central role in nonequilibrium thermodynamics and is closely related to the Jarzynski equality [46], the Crooks fluctuation theorem [47], and other forms of fluctuation theorems [45, 48, 49]. The temporal and spatial decomposition of $D_{\text{KL}}^{AB}(0,\tau)$ resembles the decomposition of entropy production in stochastic thermodynamics [48]. However, the connection between $D_{\text{KL}}^{AB}(0,\tau)$ and entropy production depends non-trivially on the construction of Markov process B.
In stochastic thermodynamics, the entropy production of a Markov process can be obtained from the logarithm of the ratio between the probability measure of a trajectory and that of its time-reversed trajectory, known as the Radon-Nikodym derivative (RND) [50]. In contrast, the $D_{\text{KL}}^{AB}(0,\tau)$ defined in this work involves the RND between one trajectory's probability measures under two Markov processes A and B. The RND and generalized fluctuation theorems under various choices of conjugate processes can be found in [6, 45, 49, 51-58]. Notably, the trajectory KL divergence under an arbitrary conjugate process was discussed for diffusive and Fokker-Planck (FP) dynamics in [51, 52]. Whether the results of this paper, derived for discrete-state Markov processes, can be directly extended to FP dynamics is worthy of further study.
One may naturally speculate that $D_{\text{KL}}^{AB}(0,\tau)$ is related to thermodynamic entropy production if process B is chosen as the "reversal" of process A. However, we find that process B should be defined without inverting the direction of time or the control protocol. Rather, $D_{\text{KL}}^{AB}(0,\tau)$ becomes thermodynamic entropy when process B is chosen to be the pseudo-transpose of A, denoted by $A^*$:
$$R_{x'x}^{A^*}(t) = R_{xx'}^{A}(t), \quad \forall x' \neq x,$$
where for $A^*$ the transition rate from state $x$ to $x'$ at time $t$ is chosen to be identical to the rate from $x'$ to $x$ of process A at time $t$. Normalization is preserved for process $A^*$ by the choice of its diagonal elements, $R_{xx}^{A^*}(t) = -\sum_{x' \neq x} R_{x'x}^{A^*}(t)$. Then, assuming that the Markov process A is immersed in a heat reservoir, the time-accumulated KL divergence between processes A and $A^*$ equals the entropy change of the reservoir caused by process A (see SI.II):
$$D_{\text{acc}}^{AA^*}(0,\tau) = \frac{1}{k_B} \left\langle s_{\text{res}}[X_\tau] \right\rangle_A,$$
where $k_B$ is the Boltzmann constant, $s_{\text{res}}[X_\tau]$ is the stochastic entropy production in the reservoir due to trajectory $X_\tau$, and the angular bracket $\langle \cdot \rangle_A$ is the trajectory-ensemble average for process A [48].
To summarize, for a Markov thermodynamic process A and its "conjugate" dynamics $A^*$, when the two systems are prepared with the same initial distribution ($d_{\text{KL}}^{AA^*}(0) = 0$), their trajectory KL divergence equals the reservoir's entropy change due to process A.
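A minimal sketch of the pseudo-transpose construction (our own illustration with an arbitrary 3-state rate matrix):

```python
import numpy as np

def pseudo_transpose(R):
    """Pseudo-transpose A* of rate matrix A: each off-diagonal rate x -> x'
    of A* equals the rate x' -> x of A; diagonals are reset so every column
    of A* sums to zero, preserving normalization."""
    Rs = R.T.copy().astype(float)
    np.fill_diagonal(Rs, 0.0)
    Rs -= np.diag(Rs.sum(axis=0))
    return Rs

# Arbitrary 3-state rate matrix A (columns sum to zero).
R = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
R -= np.diag(R.sum(axis=0))

Rs = pseudo_transpose(R)
assert np.allclose(Rs.sum(axis=0), 0.0)             # normalization preserved
assert Rs[0, 1] == R[1, 0] and Rs[2, 1] == R[1, 2]  # off-diagonal rates swapped
```

Note that $A^*$ is generally not the matrix transpose of A: only the off-diagonal rates are swapped, while the diagonal is recomputed to keep each column sum zero.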

III. DISCUSSIONS AND APPLICATIONS
In the remainder, we discuss the practical implications of the theory. We first examine how the initial state distribution affects the observational distinguishability between two Markov systems (Sections III A and III B). Then we discuss how $D_{\text{KL}}^{AB}(0,\tau)$ responds to control protocols (input signals), and illustrate the implications for biological or artificial stochastic sensors. Specifically, the theory inspires the design of optimal sensors that can discern input signals (Section III C), and reveals the optimal control protocol to discern two apparently similar systems (Section III D). Finally (Sections III E and III F), we show that the theory provides multiple new approaches to efficiently evaluate the trajectory KL divergence. These approaches get around the sampling difficulty of evaluating $D_{\text{KL}}^{AB}(0,\tau)$ from a finite number of trajectories.

A. Distinguishability of Initial State Distributions
Consider two stochastic systems that evolve according to identical kinetic rates, $\mathsf{R}^A(t) = \mathsf{R}^B(t)$ for all $t$, and differ only in their initial-state distributions, $\vec{p}_A(0) \neq \vec{p}_B(0)$. Does measuring their trajectories provide more information for distinguishing the two systems than the initial states alone? No: we show that the difference in their initial state distributions fully captures the distinguishability of the two Markov processes. From Eq. (10), the divergence force is zero, $F_{x'x}^{AB}(t) = 0$, for all times and edges due to the identical kinetic rates. Then, according to Eqs. (7) and (8), the KL divergence accumulated by observing longer trajectories always equals 0, and Eq. (5) reduces to $D_{\text{KL}}^{AB}(0,\tau) = d_{\text{KL}}^{AB}(0)$. In other words, observing the trajectories beyond the initial state cannot improve the observational distinguishability between Markov systems with fully identical kinetic rates. An alternative proof using the data processing inequality [59] is shown in SI.III.
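This limit is easy to check from the dwell-time form of the divergence force; a short sketch of our own (example distributions are arbitrary):

```python
import numpy as np

def dwell_kl(rA, rB):
    """Divergence force F^{AB}, Eq. (10): KL between exponential dwell times."""
    return np.log(rA / rB) + rB / rA - 1.0

# Identical rates on every edge give zero force, hence zero accumulated
# divergence: D_KL(0, tau) reduces to the initial-state term d_KL(0).
for r in (0.1, 1.0, 7.5):
    assert dwell_kl(r, r) == 0.0

# The surviving term d_KL(0) for two example initial distributions:
pA0, pB0 = np.array([0.7, 0.3]), np.array([0.4, 0.6])
d0 = float(np.sum(pA0 * np.log(pA0 / pB0)))
assert d0 > 0.0
```

With every edge force vanishing, the integrand of Eq. (7) is identically zero, so only $d_{\text{KL}}^{AB}(0)$ remains.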

B. Optimal Initial Distribution to Maximize Trajectory Distinguishability
Consider two Markov systems prepared with identical initial state distributions, $p_A(x;0) = p_B(x;0) = p(x;0)$, yet evolving with different kinetic rates, which could be time-dependent or time-independent. How should one prepare the initial distribution $p(x;0)$ so that the trajectories most prominently reveal the distinguishability of the two systems? We find that the optimal initial distribution is typically a delta-function distribution. In other words, there exists one (or a few degenerate) optimal state(s) from which the observational distinguishability of the two systems is maximized. To show this, recognize that the accumulated KL divergence follows an initial-state decomposition relation:
$$D_{\text{acc}}^{AB}(0,\tau) = \sum_x p(x;0)\, D_{\text{acc}}^{AB}(0,\tau)\big|_{x_{\text{ini}}=x}, \tag{15}$$
where $D_{\text{acc}}^{AB}(0,\tau)|_{x_{\text{ini}}=x}$ is the accumulated trajectory KL divergence if both systems are initially prepared exactly in the same state $x$. This result indicates that the trajectory KL divergence depends linearly on the initial probability distribution, as illustrated by the tilted flat plane in Fig. 1 (see the proof of the decomposition relation in SI.IVA). As a result, the highest accumulated KL divergence must be achieved when the initial distribution is a delta function at the optimal state $x^*$, where $D_{\text{acc}}^{AB}(0,\tau)|_{x_{\text{ini}}=x^*} \geq D_{\text{acc}}^{AB}(0,\tau)|_{x_{\text{ini}}=x}$ for any state $x$. In rare cases with degenerate optimal states $\{x^*\}$, any initial distribution purely comprising these degenerate states also gives the optimal distinguishability. Intuitively, from Eqs. (7) and (8), the optimal initial state for distinguishing two time-independent Markov processes is the one that evolves in time to exhibit the largest accumulated weighted flow of transitions, with the weight defined by Eq. (10). Notice that the specific choice of the optimal state depends on both the systems' kinetic properties and the length of the observation time $\tau$. For illustration, consider two time-independent Markov processes with anomalous relaxations [60, 61], where there is a large separation of relaxation speeds (a big gap in the eigenvalues of $\mathsf{R}$); in this case the optimal initial state $x^*$ can be determined by observing the slowest relaxation eigenmode (see the example in Fig. 1 and SI.IVB).

Now, we conclude the discussion of the initial state distribution and shift the focus to temporal control protocol(s) applied to two Markov systems. Here, we describe a sensor under the influence of a signal $\lambda(t)$ by a Markov system $\mathsf{R}(\lambda(t))$. Under this construction, the results of this work can be used to describe temporal pattern recognition by a sensor (Section III C). They can also guide the search for the optimal external control protocol that maximally reveals and enhances the distinction between two apparently similar Markov systems with slight differences in their kinetic properties (Section III D).
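The linearity of $D_{\text{acc}}^{AB}$ in the initial distribution from Section III B is easy to verify numerically; the sketch below is our own illustration with an assumed binary-state pair of rate matrices:

```python
import numpy as np

RA = np.array([[-1.0, 2.0], [1.0, -2.0]])   # assumed binary-state process A
RB = np.array([[-2.0, 1.0], [2.0, -1.0]])   # assumed process B

def d_acc(p0, tau=1.0, n=10000):
    """Accumulated KL divergence for a shared initial distribution p0."""
    p, dt, D = np.array(p0, dtype=float), tau / n, 0.0
    for _ in range(n):
        for x in range(2):
            xp = 1 - x
            F = np.log(RA[xp, x] / RB[xp, x]) + RB[xp, x] / RA[xp, x] - 1.0
            D += dt * RA[xp, x] * p[x] * F
        p = p + dt * (RA @ p)
    return D

# Linearity in the initial distribution: a mixed initial distribution
# interpolates between the delta-state values, so the maximum sits at a delta.
D0, D1 = d_acc([1.0, 0.0]), d_acc([0.0, 1.0])
w = 0.3
assert abs(d_acc([w, 1.0 - w]) - (w * D0 + (1.0 - w) * D1)) < 1e-9
x_star = 0 if D0 >= D1 else 1  # optimal preparation state for this tau
```

Because both the master-equation evolution and the integrand are linear in the probability vector, any mixture of delta initial states yields exactly the corresponding mixture of delta-state divergences.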

C. Design Principle: Sensor for Discerning Temporal Patterns
Various biological sensors can improve their sensing capability by utilizing non-trivial stochastic trajectories of their internal states evolving under complex external input signals [37-41, 62, 63]. Consider a stochastic sensor as a Markov system hopping between different internal states. Its time-dependent rate matrix is $\mathsf{R}(\lambda)$, where the protocol $\lambda(t)$ represents the external signal. The downstream sensory network, which interacts with the sensor but not directly with the input signal, can discern two temporal protocols ($\lambda_A(t)$ and $\lambda_B(t)$) only by "observing" the sensor's state paths (trajectories). In this case, the maximum information the downstream sensory network can acquire is limited by the information contained within the observed ensemble of the sensor's state trajectories. Thus, the trajectory KL divergence defined in this work captures the upper bound of the distinguishability of the two signals transmitted via the sensor, and an optimal sensor is one with the highest upper bound given the design restrictions. As a result, the theory in this paper can be used in various practical applications for improving a sensor's sensitivity or optimizing its pattern-recognition ability. This paper does not aim to provide a universal analytical solution for the optimal sensor design, because the solution depends on the specific details of the practical problem, e.g., the types of protocols to be discerned, the physical coupling between sensor and signal, and other limitations that define the available forms of the sensor's rate matrix $\mathsf{R}(\lambda)$. Although not discussed in this paper, we have recently applied this theory to designing optimal concentration sensors and sensors capable of recognizing patterns of time-varying signal-molecule concentrations.

D. Optimal Control Protocol to Reveal Differences between Markov Systems
A conjugate problem to Section III C is to design the optimal control protocol that maximally reveals the difference between two Markov systems. To illustrate with a practical example, consider two proteins with subtle differences: a wild type and a mutant. The two molecules share the same number of metastable configurations (states); however, their transition rates between states differ, and the rates are influenced by an external control parameter $\lambda$ (e.g., electric field, temperature, or pH). If the two molecules differ subtly in their kinetic rates and/or in the rates' dependence on the external control, merely observing the steady-state distributions of the molecules in a stationary environment may reveal the differences only to a limited extent. However, our theory indicates that one can enhance the ability to distinguish the two systems by driving them out of the steady state with the same control protocol $\lambda(t)$ and observing the difference in their stochastic trajectory ensembles (see Eq. (4)). Moreover, the theory allows for the search of the optimal control protocol $\lambda^*(t)$ to maximally reveal the difference between the two systems. Given $\mathsf{R}^A(\lambda)$ and $\mathsf{R}^B(\lambda)$, the specific forms of the two molecules' kinetic rates and their dependence on the external control parameter $\lambda$, the search for the optimal protocol $\lambda^*(t)$ becomes a variational optimization problem for $D_{\text{acc}}^{AB}(0,\tau)$. Intuitively, the decomposition relations imply that the optimal driving protocol maximizes the accumulated number of transitions over the edges, weighted by the factor $F_{x'x}^{AB}(t)$.

In the following, we discuss how the decomposition relations can circumvent the sampling-convergence difficulties in evaluating the trajectory KL divergence. When evaluated directly from the definition, the convergence of $D_{\text{KL}}^{AB}(0,\tau)$ can face significant sampling difficulty: each trajectory observed in process A must also be observed sufficiently many times in process B. This difficulty grows rapidly with the trajectory length $\tau$, making it almost impossible to directly calculate $D_{\text{KL}}^{AB}(0,\tau)$ for long trajectories. However, the temporal decomposition shown in Eq. (11) makes it possible to get around the difficulty as follows. By dividing the trajectory into multiple consecutive segments, it is easier to evaluate the segment trajectory KL divergences and then reconstruct the whole-trajectory KL divergence: the desired whole-trajectory KL divergence is obtained by adding the segment KL divergences together and subtracting the state-distribution KL divergences $d_{\text{KL}}^{AB}(t^*)$ evaluated at the intersecting time points. In comparison to the traditional approach, this trajectory-slicing method significantly reduces the number of trajectories required for a converging and accurate estimation of the trajectory KL divergence (see Fig. 2 and SI.V for details).

The method proposed above still relies on the definition of KL divergence, Eq. (4), and thus explicitly depends on the probability density functions (histograms) of segmented trajectories. Below, we show that with merely the sets of transition events, one can evaluate the trajectory KL divergence. Specifically, the temporal and spatial decomposition relations, Eqs. (8) to (10), allow us to express the trajectory KL divergence as a weighted statistical average of transition events:
$$D_{\text{acc}}^{AB}(0,\tau) = \Big\langle \sum_k F_{x'_k x_k}^{AB}(t_k) \Big\rangle_A,$$
where $k$ is the index of the transitions (from state $x_k$ to $x'_k$, at time $t_k$) within a trajectory $X_\tau$, and the angular bracket denotes an average over all trajectories obtained from process A. Practical applications of the above formula fall into one of the three scenarios below.
Scenario 1: If the transition rate matrices $\mathsf{R}^A(t)$ and $\mathsf{R}^B(t)$ are known, one can directly evaluate the trajectory KL divergence with the above equation by plugging in the values of $F_{x'x}^{AB}(t)$ (see Fig. 3). Scenario 2: If the transition rate matrices are time-homogeneous yet unknown, one can utilize existing tools [64-67] to estimate the transition rates for processes A and B, obtain $F_{x'x}^{AB}$, and then perform the calculation described in Scenario 1.
Scenario 3: If the transition rate matrices are unknown and time-dependent, one may need to utilize the waiting-time statistics of transitions between states $x$ and $x'$ at each time $t$ to first estimate the time-dependent rate matrices, and then obtain $F_{x'x}^{AB}(t)$ for calculating the trajectory KL divergence. In this case, the method proposed in Section III E may be more efficient for evaluating the trajectory KL divergence.
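As a sketch of Scenario 1 (our own illustration, using the binary-state rates of the Fig. 2 example), transition events can be sampled with the Gillespie algorithm and the force factors averaged over trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fig. 2 rates: R^A_{21}=1, R^A_{12}=2 and R^B_{21}=2, R^B_{12}=1,
# stored as {(x, x'): rate of the transition x -> x'} with states {0, 1}.
RA = {(0, 1): 1.0, (1, 0): 2.0}
RB = {(0, 1): 2.0, (1, 0): 1.0}

def force(x, xp):
    """Divergence force F^{AB}_{x'x}, Eq. (10), for the edge x -> x'."""
    rA, rB = RA[(x, xp)], RB[(x, xp)]
    return np.log(rA / rB) + rB / rA - 1.0

def f_sum_one_trajectory(tau=10.0, x0=0):
    """Gillespie-sample one trajectory of process A; return the sum of F
    over its transition events (rates are time-homogeneous here, so F is
    constant per edge)."""
    t, x, total = 0.0, x0, 0.0
    while True:
        xp = 1 - x
        t += rng.exponential(1.0 / RA[(x, xp)])  # exponential dwell time
        if t > tau:
            return total
        total += force(x, xp)
        x = xp

# D_acc(0, tau) is estimated as the trajectory-ensemble average of sum_k F.
estimate = np.mean([f_sum_one_trajectory() for _ in range(2000)])
assert estimate >= 0.0
```

No histogram of trajectories is needed: only the list of transition events of process A enters the estimator, which is why it avoids the convergence difficulty of the direct definition.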

IV. CONCLUSION
In conclusion, this study rigorously delineates the stochastic distinguishability between two arbitrary Markov processes, quantified through the trajectory KL divergence $D_{\text{KL}}^{AB}(0,\tau)$. Our general formula decomposing $D_{\text{KL}}^{AB}(0,\tau)$ in both spatial and temporal dimensions not only unveils the dynamical traits contributing to the distinguishability of stochastic systems but also allows for the efficient evaluation of this divergence.
Given the flexibility in selecting Markov processes A and B, our theory is poised for broad application across various fields. It promises to extend the scope of fluctuation theorems from stochastic thermodynamics to a wide range of non-thermal processes. Moreover, by analyzing trajectories timed by different mechanisms, our theory applies to clock calibration and offers insights into how a deterministic conception of time may arise from purely stochastic dynamics. Lastly, by leveraging this framework to quantify the information within stochastic trajectories, we lay the groundwork for innovative design principles for biological and artificial sensors that recognize and interpret temporal patterns in input signals.

FIG. 1. Initial-distribution dependence of the trajectory KL divergence. Trajectories are obtained from two 3-state Markov processes with common initial distribution $(p_0, p_1, p_2)$, where $p_2 = 1 - p_0 - p_1$. Each data point represents a distinct initial distribution, and the points lie on a flat plane in accordance with Eq. (15). The maximum divergence is achieved at a delta-function initial distribution $(p_0, p_1, p_2) = (0, 1, 0)$. See SI.IVB for details.

FIG. 2. Trajectory KL divergence between two binary-state systems with rates $R^A_{21} = 1$, $R^A_{12} = 2$ and $R^B_{21} = 2$, $R^B_{12} = 1$. Each trajectory is 10 steps long with unit time step. The KL divergence is evaluated from trajectory ensembles of sizes 2000, 10000, 50000, and 250000 by different methods. Each result is averaged over 10 repetitions of the calculation for randomly generated trajectories, with error bars showing ±1 standard deviation over the 10 repetitions. The trajectory-slicing method slices each trajectory into consecutive segments of length 2. See SI.V for details.
FIG. 3. Each colored curve is the accumulated $\dot{D}_{\text{KL}}^{AB} \cdot t$ evaluated from a single trajectory. The black solid curve is the average over the ten colored trajectories. The black dashed line is the analytical result for $\dot{D}_{\text{KL}}^{AB} \cdot t$.