We study the solution V of the Poisson equation LV + f = 0 where L is the backward generator of an irreducible (finite) Markov jump process and f is a given centered state function. Bounds on V are obtained using a graphical representation derived from the Matrix Forest Theorem and using a relation with mean first-passage times. Applications include estimating time-accumulated differences during relaxation toward a steady nonequilibrium regime. That includes obtaining bounds for the quasipotential which controls the thermal response.

A fascinating part of mathematical physics concerns the connection between certain (partial) differential equations and stochastic processes.1 Famous examples include the heat equation, which is associated with Brownian motion, and the telegraph equation, which connects with run-and-tumble dynamics. As a general technique, Feynman–Kac formulas provide a relation between the (semi)group kernel generated by a (quantum) Hamiltonian and a stochastic representation, aka a path integral, allowing perturbative analysis and diagrammatic developments. Summing over walks or paths and their higher-dimensional versions is indeed a typical subject of stochastic geometry. The associated arborification, i.e., the possible restriction of those sums to (spanning sets of) trees, is often a major simplification. That is the subject of the Matrix Tree and Matrix Forest Theorems, which, while mainly belonging to linear algebra, have things to offer in stochastic analysis as well. That is also a main theme of the present paper, viz., to give a graphical representation in terms of trees of the solution of Poisson equations in the context of Markov jump processes. For the physics part, we are mostly interested in the so-called quasipotential, which is related to thermal response and is crucial for the concept of nonequilibrium heat capacity.

A second and related subject is to connect solutions of the Poisson equation with mean first-passage times. The study of first-passage times comes with a useful intuition, and their behavior is obviously strongly tied to graph properties.2 In the end, that connection with mean first-passage times, combined with graphical representations, allows us to give estimates on the solution of the Poisson equation. We refer to Refs. 3–5 for the case of diffusions. Such bounds are relevant for a number of applications, including the low-temperature properties of excess heat and of relaxational behavior, and all that within the context of finite Markov jump processes. In that respect, we emphasize that the techniques of the present paper apply outside the traditional context of reversibility, and indeed are motivated by problems in nonequilibrium statistical mechanics. A related and interesting subject concerns the study of metastability for out-of-equilibrium reference processes. We are not discussing that angle here, but refer to Refs. 6–8 for publications on potential theory for Markov processes. For example, the Poisson equation is used in Ref. 9 to derive the metastability of Markov processes.

In Sec. II, the setup within the framework of the Markov jump process is introduced.

The Poisson equation is presented, in various forms, in Sec. III, where we focus on relations between different concepts relating quasipotentials with mean first-passage times.

In Sec. IV, we give the graphical representation of solutions of the various Poisson equations. The main graphical representations are formulated as Theorems IV.2 and IV.3.

Section V is an application of these relations and graphical representations. It gives bounds on the solutions of the Poisson equation. We mention there an application for nonequilibrium thermal physics when the Markov jump process depends on parameters such as the inverse temperature. We have three main estimates presented in inequalities (5.3), (5.6), and (5.9).

The Appendix presents the broader context of the graphical representations; in particular, Appendix B enters into more mathematical details related to the Matrix Forest Theorem.

Given a finite set K of states x, y, z, … ∈ K, we consider a Markov jump process $X_t \in K$ with transition rates $k(x,y)$, $x \neq y$, for the jump $x \to y$. We speak about $X_t$ as the position of a random walker at time $t \ge 0$, but obviously, from a physics perspective, the states $x \in K$ need not model positions but can correspond to many-body configurations.

It naturally gives rise to a finite digraph, i.e., an ordered pair of sets $G = (V, E)$, where the vertex set $V = K$ and $E$ is the set of ordered pairs (arcs) $(x, y)$ of vertices $x, y \in V$ for which $k(x, y) > 0$. We assume that the resulting graph $G$ is irreducible (strongly connected). The latter implies the exponentially fast convergence, for time $t \uparrow \infty$,
\[ \langle h(X_t)\rangle_x \equiv \big\langle h(X_t)\,\big|\,X_0 = x\big\rangle \;\longrightarrow\; \langle h\rangle_s \equiv h_s \]
(2.1)
of expectations in the Markov process for every real-valued function h on K and independent of the initial condition x, toward the stationary expectation
\[ \langle h\rangle_s = \sum_x h(x)\,\rho_s(x), \qquad \rho_s L = 0 \]
for the unique stationary probability distribution ρs > 0, and with L the backward generator having matrix elements
\[ L_{xy} = k(x,y),\;\; x \neq y; \qquad L_{xx} = -\sum_y k(x,y). \]
(2.2)
In other words,
\[ Lh(x) = \sum_y k(x,y)\,\big[h(y) - h(x)\big] \]
(2.3)
gives rise to the semigroup $e^{tL}$, $t \ge 0$, with
\[ e^{tL}h(x) = \big\langle h(X_t)\,\big|\,X_0 = x\big\rangle, \qquad \langle e^{tL}h\rangle_s = \langle h\rangle_s \]
(2.4)
as appears in (2.1) as well.
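As a minimal numerical illustration of this setup (not taken from the paper; the three-state rates below are hypothetical), one can assemble the generator (2.2), extract the stationary distribution, and watch the convergence (2.1):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical rates k(x, y) for a small irreducible, nonreversible 3-state chain.
k = np.array([[0., 2., 1.],
              [1., 0., 3.],
              [2., 1., 0.]])
L = k - np.diag(k.sum(axis=1))          # backward generator, (2.2)

# Stationary distribution: normalized left null vector of L.
vals, vecs = np.linalg.eig(L.T)
rho = np.real(vecs[:, np.argmin(np.abs(vals))])
rho = rho / rho.sum()
assert np.allclose(rho @ L, 0.0)

# Exponentially fast convergence (2.1): e^{tL} h -> <h>_s.
h = np.array([1.0, -2.0, 0.5])
for t in (5.0, 10.0):
    assert np.allclose(expm(t * L) @ h, rho @ h, atol=1e-6)
```

Any other irreducible choice of rates works the same way; irreducibility guarantees the unique strictly positive ρ_s.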
By Poisson equation (in the above setup) we mean in general an equation for a real-valued function U on K, solving
\[ LU(x) + f(x) = 0,\;\; x \in H; \qquad U(x) = g(x),\;\; x \notin H \]
(3.1)
for a given subset $H \subseteq K$ and given functions $f: H \to \mathbb{R}$, $g: K\setminus H \to \mathbb{R}$.

Note that if H = K, then an extra condition is needed for the function f: it must be centered, $\langle f\rangle_s = 0$, meaning that its stationary average vanishes, because $\langle LU\rangle_s = 0$ always. For certain purposes, it therefore appears useful to replace the Poisson equation by the related but better-behaved resolvent equation; see Ref. 10 for an application in metastability, while we do not aim for that goal here.

As a special but important case, we can consider H = K, where we deal with the Poisson equation
\[ LV(x) + f(x) = 0, \qquad x \in K \]
(3.2)
writing then V instead of U, and we call V the quasipotential with given source f, where we must require $\langle f\rangle_s = f_s = 0$ to have a solution. The solution to (3.2) is unique up to an additive constant. If we require that $\langle V\rangle_s = 0$, then, clearly,
\[ V(x) = \int_0^\infty \mathrm{d}t\; e^{tL} f(x), \qquad x \in K, \]
(3.3)
which can be viewed as the accumulated excess of (2.1), in the sense that
\[ V(x) = \int_0^\infty \mathrm{d}t\; \Big[\big\langle h(X_t)\,\big|\,X_0 = x\big\rangle - \langle h\rangle_s\Big] \]
(3.4)
for $f = h - \langle h\rangle_s$.
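The formulas (3.3) and (3.4) are easy to check numerically. The sketch below (hypothetical rates and observable $h$) obtains the quasipotential by solving $LV = -f$ together with the gauge $\langle V\rangle_s = 0$, and compares it with a truncated time integral of $e^{tL}f$:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import simpson

# Hypothetical rates; any irreducible choice works.
k = np.array([[0., 2., 1.],
              [1., 0., 3.],
              [2., 1., 0.]])
L = k - np.diag(k.sum(axis=1))
vals, vecs = np.linalg.eig(L.T)
rho = np.real(vecs[:, np.argmin(np.abs(vals))])
rho = rho / rho.sum()

h = np.array([1.0, -2.0, 0.5])
f = h - rho @ h                          # centered source, <f>_s = 0

# Solve L V = -f together with the gauge <V>_s = 0 (augmented least squares).
A = np.vstack([L, rho])
b = np.concatenate([-f, [0.0]])
V = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(L @ V, -f) and abs(rho @ V) < 1e-10

# Cross-check (3.3): V(x) = int_0^infty (e^{tL} f)(x) dt, truncated at T = 12
# (the integrand decays exponentially, so the tail is negligible).
ts = np.linspace(0.0, 12.0, 4001)
integrand = np.array([expm(t * L) @ f for t in ts])
V_num = simpson(integrand, x=ts, axis=0)
assert np.allclose(V, V_num, atol=1e-5)
```

The truncation point and grid are chosen generously here; for stiffer generators one would adapt them to the spectral gap.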

In the case where $f = -LE$ for some potential function E, we have $V = E - \langle E\rangle_s$, which is one reason to call, more generally, the solution (3.4) V of (3.2) a quasipotential. Obviously, the physical interpretation of V in (3.4) strongly depends on the meaning of h. In the case where h(x) is the expected instantaneous heat flux when the state is x, the quasipotential is closely related to the so-called excess heat; see Refs. 11 and 12.

As an immediate generalization and for nonempty H, we introduce the random escape time
\[ T_H \coloneqq \inf\{t \ge 0 : X_t \notin H\}, \qquad T_H = 0 \text{ if } X_0 \notin H \]
and consider the expected accumulation till that (random) stopping time:
\[ V_H(x) = \big\langle \phi_H \,\big|\, X_0 = x\big\rangle; \qquad \phi_H \coloneqq \int_0^{T_H} \mathrm{d}t\, f(X_t). \]
(3.5)

Proposition III.1.
The function VH is the unique solution of the Poisson equation
\[ LV_H(x) + f(x) = 0,\;\; x \in H; \qquad V_H(x) = 0,\;\; x \notin H. \]
(3.6)

That reduces to (3.2) when H = K. For completeness, we give the proof of (3.6).

Proof of Proposition III.1.
Define the stopped process which quits running after escaping from H:
\[ X_H(t) = \begin{cases} X(t) & \text{for } t \le T_H,\\ X(T_H) & \text{for } t > T_H. \end{cases} \]
(3.7)
It is again a Markov process with modified transition rates
\[ k_H(x,y) = \begin{cases} k(x,y) & \text{for } x \in H,\; y \in K,\\ 0 & \text{for } x \notin H, \end{cases} \]
(3.8)
and the corresponding generator is denoted by LH. That process is not ergodic and its (in general nonunique) stationary distributions have support in K\H ≠ ∅. We have
\[ P\big(T_H > t \,\big|\, X(0) = x\big) = P\big(X_H(t) \in H \,\big|\, X_H(0) = x\big) = \big(e^{tL_H}\, 1_H\big)(x). \]
(3.9)
By irreducibility of the original process, the largest eigenvalue of $L_H$ (which is real by the Perron–Frobenius theorem) is strictly negative, and hence $T_H$ has an exponentially tight distribution, $P(T_H > t) \le e^{-ct}$ for some $c > 0$ and $t$ large enough, uniformly with respect to the initial condition. It means that any probability mass in $H$ vanishes in the limit $t \uparrow \infty$. As a result, the function $f_H(x) = f(x)\,1_H(x)$ (where $1_H$ is the indicator of the set $H$) can be integrated to obtain (3.5):
\[ V_H(x) = \int_0^\infty e^{tL_H} f_H(x)\, \mathrm{d}t \]
or
\[ L_H V_H(x) = -f_H(x), \]
which is (3.6).□

For taking the expectation with respect to stationary measures of stopped accumulations VH, see Ref. 13 for the reversible case and Ref. 14 for the nonreversible case. Those expectations can be represented in terms of capacities.7 

As a special case we take f = 1 in the above (3.5), to introduce the mean escape time SH(x) from H when started from x:
\[ S_H(x) = \big\langle T_H \,\big|\, X_0 = x\big\rangle. \]
(3.10)
It satisfies
\[ LS_H(x) + 1 = 0,\;\; x \in H; \qquad S_H(x) = 0,\;\; x \notin H. \]
(3.11)
For $x \notin H$ we have
\[ LS_H(x) = \sum_{y \in H} k(x,y)\, S_H(y). \]
(3.12)
By combining with (3.11) and using the stationarity $\langle LS_H\rangle_s = 0$, we get
\[ \sum_{x \notin H}\,\sum_{y \in H} \rho_s(x)\, k(x,y)\, S_H(y) = \rho_s(H) \;\big(= \mathrm{Prob}_s[x \in H]\big), \]
(3.13)
which relates the escape times from H when starting from (inner boundary) states in H, with the expected stationary number of jumps to H from outside (outer boundary).
As a special case, we fix a state $z \in K$ and take $H = K\setminus\{z\}$. We put $\tau(x,z) \coloneqq S_H(x)$ and then from (3.11),
\[ \sum_y k(x,y)\,\big[\tau(y,z) - \tau(x,z)\big] + 1 = 0,\;\; x \neq z, \qquad \tau(z,z) = 0, \]
(3.14)
which is the Poisson equation characterizing the mean first-passage time to reach z when started from some xK.
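Equation (3.14) is a finite linear system: restricting the generator to $H = K\setminus\{z\}$ and solving against the constant vector $-1$ gives $\tau(\cdot,z)$. A sketch with hypothetical rates:

```python
import numpy as np

# Hypothetical rates on K = {0, 1, 2}.
k = np.array([[0., 2., 1.],
              [1., 0., 3.],
              [2., 1., 0.]])
n = k.shape[0]
L = k - np.diag(k.sum(axis=1))

def mfpt(L, z):
    """tau(., z): solve (3.14) on H = K \\ {z}, with tau(z, z) = 0."""
    idx = [x for x in range(L.shape[0]) if x != z]
    A = L[np.ix_(idx, idx)]               # generator restricted to H
    t = np.linalg.solve(A, -np.ones(len(idx)))
    tau = np.zeros(L.shape[0])
    tau[idx] = t
    return tau

tau = np.column_stack([mfpt(L, z) for z in range(n)])   # tau[x, z]

# Check the defining equation sum_y k(x,y)[tau(y,z) - tau(x,z)] + 1 = 0, x != z.
for z in range(n):
    for x in range(n):
        if x != z:
            assert abs(k[x] @ (tau[:, z] - tau[x, z]) + 1.0) < 1e-10
```

The restricted matrix is nonsingular precisely because the stopped process leaks out of $H$, as discussed below (3.9).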

There is a useful relation between the quasipotential, solution of the Poisson equation (3.2), and the mean first-passage times:

Proposition III.2.
\[ V(x) = -\sum_z \rho_s(z)\, f(z)\, \tau(x,z) + \text{constant} \]
(3.15)
where the additive constant is fixed, if needed, by the condition $\langle V\rangle_s = 0$.

As a consequence,
\[ V(x) - V(y) = \sum_z \rho_s(z)\, f(z)\,\big(\tau(y,z) - \tau(x,z)\big). \]
(3.16)

Proof of Proposition III.2.
We show that $V$ in (3.15) satisfies the Poisson equation (3.2). Keep $H = K\setminus\{z\}$; the identity (3.13) reads
\[ \rho_s(z)\sum_x k(z,x)\, S_H(x) = 1 - \rho_s(z). \]
(3.17)
Therefore,
\[ LS_H(x) = g_z(x) - \langle g_z\rangle_s, \qquad \text{for } g_z(x) \coloneqq \frac{\delta_{x,z}}{\rho_s(z)} \]
(3.18)
valid for all $x \in K$ (note $\langle g_z\rangle_s = 1$), which means that the $\tau(x,z)$'s of (3.14) play the role of Green functions for the Poisson equation. In particular, since always $f(x) = \sum_z \rho_s(z)\, f(z)\, g_z(x)$, we obtain (3.15) from the following calculation:
\[ LV(x) = -L\sum_z \rho_s(z)\, f(z)\, \tau(x,z) = -\sum_z \rho_s(z)\, f(z)\, LS_H(x) = -\sum_z \rho_s(z)\, f(z)\,\big[g_z(x) - 1\big] = \langle f\rangle_s - f(x) = -f(x), \]
(3.19)
which finishes the proof.□
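As a numerical sanity check of Proposition III.2 (again with hypothetical rates; the sign convention for the first-passage representation is spelled out in the code), one can compare a solution of $LV = -f$ with the combination of $\rho_s$, $f$, and the mean first-passage times:

```python
import numpy as np

# Hypothetical rates; nonreversible on purpose.
k = np.array([[0., 2., 1.],
              [1., 0., 3.],
              [2., 1., 0.]])
n = k.shape[0]
L = k - np.diag(k.sum(axis=1))
vals, vecs = np.linalg.eig(L.T)
rho = np.real(vecs[:, np.argmin(np.abs(vals))])
rho = rho / rho.sum()

h = np.array([1.0, -2.0, 0.5])
f = h - rho @ h                        # centered source

V = -np.linalg.pinv(L) @ f             # one solution of L V = -f

def mfpt(z):
    idx = [x for x in range(n) if x != z]
    tau = np.zeros(n)
    tau[idx] = np.linalg.solve(L[np.ix_(idx, idx)], -np.ones(n - 1))
    return tau

tau = np.column_stack([mfpt(z) for z in range(n)])     # tau[x, z]

# First-passage representation: V(x) = -sum_z rho_s(z) f(z) tau(x, z) + const.
V_fp = -tau @ (rho * f)
assert np.allclose(V - V.mean(), V_fp - V_fp.mean())
```

Subtracting the means removes the free additive constant, so the comparison tests exactly the content of (3.15)–(3.16).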

We next turn to a graphical representation of the solution of the Poisson equations (3.1)–(3.14). To be self-contained and as a natural introduction to the Matrix Forest Theorem, if only for notation, we first remind the reader of the Kirchhoff formula; see Refs. 15 and 16 for more explanations and examples.

Remember that the Markov jump process, via its transition rates, defines a digraph $G = (V(G), E(G))$.

H is a subgraph of $G$ if $V(H) \subseteq V(G)$ and $E(H) \subseteq E(G)$. The subgraph is spanning if $V(H) = V(G)$.

A tree in $G$ is defined as a connected subgraph of $G$ that contains no cycles or loops. The set of all spanning trees rooted in $x$ is denoted by $\mathcal{T}_x$.

Proposition IV.1.
The following Kirchhoff formula gives the unique solution to the stationary master equation $\rho_s L = 0$ in terms of spanning trees:
\[ \rho_s(x) = \frac{w(x)}{W}, \qquad W \coloneqq \sum_y w(y), \]
(4.1)
with weights
\[ w(x) = \sum_{T_x \in \mathcal{T}_x} w(T_x), \qquad w(T_x) = \prod_{(u,u') \in T_x} k(u,u') \]
where $\mathcal{T}_x$ is the set of all spanning trees rooted on $x$, and $T_x \in \mathcal{T}_x$ is a rooted spanning tree with arcs $(u, u')$ (directed edges).

The usual proof of the Kirchhoff formula (4.1) proceeds via verification. In  Appendix A we add the heuristics of a more constructive proof.17 
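For a small graph, (4.1) can also be verified by brute force: enumerate all assignments of one outgoing arc to every non-root vertex, keep those that form a spanning tree rooted at $x$, and compare the normalized tree weights with the left null vector of $L$. A sketch with hypothetical rates:

```python
import itertools
import numpy as np

# Hypothetical rates on a 3-state digraph.
k = np.array([[0., 2., 1.],
              [1., 0., 3.],
              [2., 1., 0.]])
n = k.shape[0]

def w(x):
    """w(x) in (4.1): sum over spanning trees rooted at x of the product of
    rates along the arcs; every u != x carries one outgoing arc (u, par[u]),
    and the choice is a tree iff all parent-chains reach the root x."""
    total = 0.0
    others = [u for u in range(n) if u != x]
    for choice in itertools.product(range(n), repeat=len(others)):
        par = dict(zip(others, choice))
        if any(p == u or k[u, p] == 0.0 for u, p in par.items()):
            continue
        def reaches_root(u):
            seen = set()
            while u != x:
                if u in seen:
                    return False          # cycle: not a tree
                seen.add(u)
                u = par[u]
            return True
        if all(reaches_root(u) for u in others):
            total += float(np.prod([k[u, p] for u, p in par.items()]))
    return total

weights = np.array([w(x) for x in range(n)])
rho_trees = weights / weights.sum()       # Kirchhoff formula (4.1)

# Compare with the left null vector of the generator.
L = k - np.diag(k.sum(axis=1))
vals, vecs = np.linalg.eig(L.T)
rho = np.real(vecs[:, np.argmin(np.abs(vals))])
rho = rho / rho.sum()
assert np.allclose(rho_trees, rho)
```

The enumeration is exponential in the number of states, so it is purely illustrative; the point of (4.1) is the exact identity, not an algorithm.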

A (spanning) forest in graph G is a set of trees. For the moment we are only concerned with two-trees, i.e., forests that consist of exactly two trees with the understanding that a single vertex also counts as one tree.

Let $\mathcal{F}_{xy}$ denote the set of all rooted spanning forests with two trees, where $x$ and $y$ are in the same tree, rooted at $y$. The second tree is rooted as well, and we include all possible orientations. The two-trees, elements of $\mathcal{F}_{xy}$, are denoted by $F_{xy}$. The weight of the set is
\[ w(x \to y) \coloneqq w(\mathcal{F}_{xy}) = \sum_{F_{xy} \in \mathcal{F}_{xy}} w(F_{xy}), \qquad w(F_{xy}) = \prod_{(u,u') \in F_{xy}} k(u,u'). \]
(4.2)
We write $\mathcal{F}_{xx} = \mathcal{F}_x$; see Fig. 1. Moreover, let $\mathcal{F}_{x,y}$ be the set of all (spanning) forests consisting of two trees such that $x$ and $y$ are located in different trees and $y$ is a root; see Fig. 1, where $\mathcal{F}_{x,y} = \mathcal{F}_y \setminus \mathcal{F}_{xy}$.
FIG. 1.

A digraph with four vertices (top left) and its simple visualization (bottom left). Right: from the top onward, the elements of the sets $\mathcal{T}_y$, $\mathcal{F}_y$ and $\mathcal{F}_{xy}$ are shown.


Recall the Poisson equation (3.2) for the quasipotential V, where f is a centered function on K.

Theorem IV.2.
The unique solution to (3.2) with $\langle f\rangle_s = 0$ and $\langle V\rangle_s = 0$ is
\[ V(x) = \sum_y \frac{w(x \to y)}{W}\, f(y). \]
(4.3)

We proceed with the proof by direct verification. An alternative proof, starting from a more general setup and using the Matrix Forest Theorem, is presented in  Appendix B.

Proof of Theorem IV.2.
\[ LV(x) = \sum_y k(x,y)\,\big[V(y) - V(x)\big] = \frac{1}{W}\sum_y k(x,y)\sum_z \big[w(y \to z) - w(x \to z)\big]\, f(z) \]
where $w(y \to z) = w(\mathcal{F}_{yz})$ and $w(x \to z) = w(\mathcal{F}_{xz})$ as in (4.2). The intersection of $\mathcal{F}_{yz}$ and $\mathcal{F}_{xz}$ is the set of all forests where $x$ and $y$ are located in the same tree. Here, to emphasize the positions of $x$ and $y$, we use the notation $\mathcal{F}^z_{x,y}$ to denote the set of all forests where $y$ is located in the tree rooted in $z$, and $x$ is in the other tree. Then,
\[ LV(x) = \frac{1}{W}\sum_z \sum_y k(x,y)\,\big[w(\mathcal{F}^z_{x,y}) - w(\mathcal{F}^z_{y,x})\big]\, f(z). \]
(4.4)
Fix $z \neq x$, take a forest $F^z_{x,y} \in \mathcal{F}^z_{x,y}$ and add the edge $(x, y)$ to it: one tree is rooted in $z$, and for the root of the second tree there are two cases. Consider first the case where $x$ is not the root; then there exists an edge $(x, y')$ in that tree. Removing the edge $(x, y')$ from the graph $(x, y) \cup F^z_{x,y}$ produces a new forest, which belongs to the set $\mathcal{F}^z_{y,x}$; see Fig. 2.

The same scenario applies to the set $\mathcal{F}^z_{y,x}$; see Fig. 3.

Hence, when $x$ is not the root, the corresponding contributions to (4.4) cancel.

Consider a forest from Fx,yz. If x is the root of its tree, then adding the edge (x, y) connects two trees and the new graph is a spanning tree rooted in z; see Fig. 4.

Put $z = x$; the set $\mathcal{F}^x_{x,y}$ is empty. Take a forest from the set $\mathcal{F}^x_{y,x}$. Adding the edge $(x, y)$ to that forest again yields a spanning tree, rooted at some vertex $r \neq x$.

Hence, finally,
\[ LV(x) = \frac{1}{W}\Big[\sum_{z \neq x} w(z)\, f(z) - \sum_{r \neq x} w(r)\, f(x)\Big] = \frac{1}{W}\sum_{z \neq x}\big[w(z) f(z) - w(z) f(x)\big] = \frac{1}{W}\sum_z w(z) f(z) - \frac{f(x)}{W}\sum_z w(z) = \langle f\rangle_s - f(x) = -f(x) \]
where we used the Kirchhoff formula (4.1) and the fact that f is centered.□
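Theorem IV.2 lends itself to a brute-force check on a small digraph: enumerate all two-tree rooted spanning forests, read off the root of the tree containing $x$, and compare (4.3) with a direct solution of (3.2). A sketch with hypothetical rates:

```python
import itertools
import numpy as np

# Hypothetical rates; small enough to enumerate all rooted forests.
k = np.array([[0., 2., 1.],
              [1., 0., 3.],
              [2., 1., 0.]])
n = k.shape[0]
L = k - np.diag(k.sum(axis=1))
vals, vecs = np.linalg.eig(L.T)
rho = np.real(vecs[:, np.argmin(np.abs(vals))])
rho = rho / rho.sum()

def rooted_forests(num_trees):
    """Spanning rooted forests as parent tuples: par[u] is None at a root,
    otherwise an out-neighbor of u; parent-chains must be acyclic."""
    opts = [[None] + [v for v in range(n) if v != u and k[u, v] > 0]
            for u in range(n)]
    for par in itertools.product(*opts):
        if sum(p is None for p in par) != num_trees:
            continue
        ok = True
        for u in range(n):
            seen = set()
            while par[u] is not None:
                if u in seen:
                    ok = False
                    break
                seen.add(u)
                u = par[u]
            if not ok:
                break
        if ok:
            yield par

def weight(par):
    return float(np.prod([k[u, p] for u, p in enumerate(par) if p is not None]))

def root_of(par, u):
    while par[u] is not None:
        u = par[u]
    return u

W = sum(weight(par) for par in rooted_forests(1))      # all spanning trees

h = np.array([1.0, -2.0, 0.5])
f = h - rho @ h

# (4.3): V(x) = sum_y w(x -> y) f(y) / W; w(x -> y) collects the two-tree
# forests in which x sits in the tree rooted at y.
V_forest = np.zeros(n)
for par in rooted_forests(2):
    for x in range(n):
        V_forest[x] += weight(par) * f[root_of(par, x)] / W

V = -np.linalg.pinv(L) @ f
assert np.allclose(V_forest - rho @ V_forest, V - rho @ V)
```

The comparison is made after centering both candidates with respect to ρ_s, so it is insensitive to the additive constant.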

FIG. 2.

Adding the edge $(x, y)$ to $F^z_{x,y}$ and the edge $(x, y')$ to $F^z_{y,x}$ makes the same graph. (a) $(x, y) \cup F^z_{x,y}$. (b) $(x, y') \cup F^z_{y,x}$.

FIG. 3.

Adding the edge $(x, y)$ to $F^z_{y,x}$ and the edge $(x, y')$ to $F^z_{x,y}$ makes the same graph. (a) $(x, y) \cup F^z_{y,x}$ and (b) $(x, y') \cup F^z_{x,y}$.

FIG. 4.

Adding $(x, y)$ to the forest $F^z_{x,y}$, where $x$ is the root of its tree, makes a spanning tree rooted in $z$.


Next, recall the Poisson equation (3.14) for the mean first-passage times. There as well, we get a graphical representation. The set $\mathcal{F}_{x,y}$ contains all two-trees with the restrictions:

  • vertices x and y are in separate trees;

  • the tree that contains y is rooted in y, while the second tree is rooted anywhere (containing x);

  • the weight $w(x,y) \coloneqq w(\mathcal{F}_{x,y})$ of the set of such two-trees $\mathcal{F}_{x,y}$ is
    \[ w(x,y) = \sum_{F_{x,y} \in \mathcal{F}_{x,y}} \;\prod_{(u,u') \in F_{x,y}} k(u,u'). \]

Theorem IV.3.
The mean first-passage time, solution to (3.14), is
\[ \tau(x,z) = \frac{w(x,z)}{w(z)}, \;\; x \neq z; \qquad \tau(x,x) = 0 \]
(4.5)
where w(z) appeared in (4.1).

We again give the Proof of Theorem IV.3 by verification.

Proof of Theorem IV.3.
We need to show
\[ \sum_y k(x,y)\Big[\frac{w(y,z)}{w(z)} - \frac{w(x,z)}{w(z)}\Big] = -1 \]
or
\[ \sum_y k(x,y)\,\big[w(x,z) - w(y,z)\big] = w(z), \]
(4.6)
where $w(x,z) = w(\mathcal{F}_{x,z})$ and $w(y,z) = w(\mathcal{F}_{y,z})$,
\[ w(x,z) - w(y,z) = \sum_{F_{x,z} \in \mathcal{F}_{x,z}} w(F_{x,z}) - \sum_{F_{y,z} \in \mathcal{F}_{y,z}} w(F_{y,z}). \]
The intersection of the sets $\mathcal{F}_{x,z}$ and $\mathcal{F}_{y,z}$ is a set of forests where $x$ and $y$ are located in the same tree; those contributions cancel, and we only need the forests where $x$ and $y$ are in different trees,
\[ w(x,z) - w(y,z) = w(\mathcal{F}^z_{x,y}) - w(\mathcal{F}^z_{y,x}). \]
Here $\mathcal{F}^z_{x,y}$ denotes the set of all forests where $x$ and $z$ are in different trees and $y$ is in the same tree as $z$.

Take a forest $F^z_{x,y}$ from the set $\mathcal{F}^z_{x,y}$. We consider two cases, based on whether $x$ is a root or not. If $x$ is not a root in the forest $F^z_{x,y}$, then $x$ has an out-neighbor $y'$. Add the edge $(x, y)$, where $y$ is located in the same tree as $z$: a directed graphical object is created; see Fig. 2(a). The same graphical object is also created by adding the edge $(x, y')$ to a forest from the set $\mathcal{F}^z_{y,x}$, where $x$ and $z$ are located in the same tree and $y$ is in the other tree; see Fig. 2(b). The same graphical objects have equal weights.

If $x$ is a root, adding the edge $(x, y)$ connects the two trees and creates a new spanning tree rooted at $z$; see Fig. 4. Conversely, consider a spanning tree rooted at vertex $z$, in which each of the other vertices has an outgoing edge. Remove the edge that goes out from $x$: a forest in the set $\mathcal{F}_{x,z}$ is created. That ends the proof.□
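The representation (4.5) can likewise be verified by enumeration on a small graph: collect the two-tree forests whose $z$-tree is rooted at $z$ with $x$ in the other tree, and compare $w(x,z)/w(z)$ with the solution of (3.14). A sketch with hypothetical rates:

```python
import itertools
import numpy as np

# Hypothetical rates; brute-force enumeration of rooted forests.
k = np.array([[0., 2., 1.],
              [1., 0., 3.],
              [2., 1., 0.]])
n = k.shape[0]
L = k - np.diag(k.sum(axis=1))

def rooted_forests(num_trees):
    opts = [[None] + [v for v in range(n) if v != u and k[u, v] > 0]
            for u in range(n)]
    for par in itertools.product(*opts):
        if sum(p is None for p in par) != num_trees:
            continue
        ok = True
        for u in range(n):
            seen = set()
            while par[u] is not None:
                if u in seen:
                    ok = False
                    break
                seen.add(u)
                u = par[u]
            if not ok:
                break
        if ok:
            yield par

def weight(par):
    return float(np.prod([k[u, p] for u, p in enumerate(par) if p is not None]))

def root_of(par, u):
    while par[u] is not None:
        u = par[u]
    return u

# w(z): spanning trees rooted at z; w(x, z): two-tree forests with z's tree
# rooted at z and x in the other (arbitrarily rooted) tree.
w = np.zeros(n)
for par in rooted_forests(1):
    w[root_of(par, 0)] += weight(par)

wxz = np.zeros((n, n))
for par in rooted_forests(2):
    for x in range(n):
        for z in range(n):
            if x != z and par[z] is None and root_of(par, x) != z:
                wxz[x, z] += weight(par)

tau_forest = wxz / w[None, :]          # (4.5); the diagonal stays 0

# Compare with tau obtained by solving (3.14) directly.
for z in range(n):
    idx = [x for x in range(n) if x != z]
    t = np.linalg.solve(L[np.ix_(idx, idx)], -np.ones(n - 1))
    assert np.allclose(tau_forest[idx, z], t)
```

For the rates above one finds, e.g., $\tau(1,0) = w(1,0)/w(0)$ exactly reproducing the linear-system solution.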

Corollary IV.4.
Independently of x,
\[ \sum_y \rho_s(y)\,\tau(x,y) = \frac{W_2}{W} \]
(4.7)
where $W_2$ is the total weight of all spanning two-tree forests, with both trees rooted and all possible choices of roots included.

Proof of Corollary IV.4.
\[ \sum_y \rho_s(y)\,\tau(x,y) = \sum_y \frac{w(x,y)}{w(y)}\,\frac{w(y)}{W} = \frac{1}{W}\sum_y w(x,y) \]
and
\[ W_2 = \sum_y w(x,y) \]
since for every $y$ and every forest $F_{x,y} \in \mathcal{F}_{x,y}$, one tree is rooted at $y$ and the other tree (the one containing $x$) is rooted at any of its vertices. Hence $W_2$ is the weight of the set of all two-tree rooted spanning forests, and it is independent of $x$.□

It is at first not clear how to obtain properties of the solution of the Poisson equation through the formulas (4.3)–(4.5). Graphical representations are indeed most useful when applied in special parameter regimes, such as in the asymptotics when the temperature is very low or the driving very strong. We have applied that technique before for an extended Nernst theorem in Ref. 12. We see another example in Theorem V.3 of Sec. V, but future applications are expected in the theory of nonreversible metastability, as those graphical representations also apply to solutions of the resolvent equation.10

One important issue for possible applications of the Poisson equation is to get bounds on its solution. One could imagine in fact a parameterized family of Poisson equations and the issue becomes to get bounds that are uniform in that parameterization. The present section adds such bounds, as a consequence of the previous sections.

Consider the centered solution $V$ of the Poisson equation (3.2), with $\|f\| \coloneqq \max_{z \in K}|f(z) - \langle f\rangle_s|$. Fix any two states $x, y \in K$, and put
\[ \tilde v(x,y) \coloneqq \Big\langle \int_0^{T_{K\setminus\{y\}}} \mathrm{d}t\, \big[f(X_t) - f_s\big] \,\Big|\, X_0 = x \Big\rangle \]
(5.1)
as the expected first-passage accumulation for the centered observable f − ⟨fs.

Theorem V.1.
We have
\[ V(x) - V(y) = \tilde v(x,y) \]
(5.2)
and the bound
\[ \big|V(x) - V(y)\big| \le \|f\|\, \min\big\{\tau(x,y),\, \tau(y,x)\big\}. \]
(5.3)

Proof of Theorem V.1.
To prove (5.2), we note that the centered solution V(x) to (3.2) can be written by fixing state y, and decomposing as
\[ V(x) = \lim_{t\to\infty}\Big\langle \int_0^{T_{K\setminus\{y\}}} f(X(t'))\,\mathrm{d}t' + \int_{T_{K\setminus\{y\}}}^{t} f(X(t'))\,\mathrm{d}t' - f_s\, t \Big\rangle_x = \phi_{K\setminus\{y\}}(x) - f_s\,\big\langle T_{K\setminus\{y\}}\big\rangle_x + \lim_{t\to\infty}\Big\langle \int_0^{t} \big[f(X(t')) - f_s\big]\,\mathrm{d}t' \Big\rangle_y = V_{K\setminus\{y\}}(x) - f_s\, S_{K\setminus\{y\}}(x) + V(y) \]
(5.4)
where in the limit of the second integral we have used the exponential tightness of $T_{K\setminus\{y\}}$ and the strong Markov property at that stopping time. As before, $S_{K\setminus\{y\}}(x) = \tau(x,y)$ is the mean first-passage time to reach $y$ from $x$, and $V_{K\setminus\{y\}}(x) \eqqcolon v(x,y)$ is the expected accumulation of $f$ up to the first passage at $y$, as in Sec. III B. Therefore, we have the relation
\[ V(x) - V(y) = v(x,y) - f_s\,\tau(x,y) = \tilde v(x,y) \]
(5.5)
with ṽ(x,y) defined in (5.1).
Finally, from the very definition of $\tilde v(x,y)$,
\[ |\tilde v(x,y)| \le \|f\|\,\tau(x,y). \]
Moreover, from (5.2), $\tilde v(x,y) = -\tilde v(y,x)$ is antisymmetric, which yields the bound (5.3).□

Note that since $\langle V\rangle_s = 0$, there are always states $x_0, x_1 \in K$ with $V(x_0) \le 0 \le V(x_1)$. Therefore, for every $x \in K$,
\[ V(x) \le V(x) - V(x_0) \le \sum_{x_0 \to x} \big|V(x_i) - V(x_{i+1})\big| \]
where the sum is over an arbitrary path in K connecting x0 and x. Similarly,
\[ V(x) \ge V(x) - V(x_1) \ge -\sum_{x_1 \to x} \big|V(x_i) - V(x_{i+1})\big|. \]
As a consequence, bounds on the solution V follow from bounds on the differences, as provided in (5.3).
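The chain of estimates above can be exercised numerically; the sketch below (hypothetical rates) verifies the bound (5.3) for all pairs of states:

```python
import numpy as np

# Hypothetical rates; check |V(x)-V(y)| <= ||f|| min(tau(x,y), tau(y,x)).
k = np.array([[0., 2., 1.],
              [1., 0., 3.],
              [2., 1., 0.]])
n = k.shape[0]
L = k - np.diag(k.sum(axis=1))
vals, vecs = np.linalg.eig(L.T)
rho = np.real(vecs[:, np.argmin(np.abs(vals))])
rho = rho / rho.sum()

h = np.array([1.0, -2.0, 0.5])
f = h - rho @ h
V = -np.linalg.pinv(L) @ f
fnorm = np.max(np.abs(f))              # ||f|| for an already centered f

def mfpt(z):
    idx = [x for x in range(n) if x != z]
    tau = np.zeros(n)
    tau[idx] = np.linalg.solve(L[np.ix_(idx, idx)], -np.ones(n - 1))
    return tau

tau = np.column_stack([mfpt(z) for z in range(n)])

for x in range(n):
    for y in range(n):
        assert abs(V[x] - V[y]) <= fnorm * min(tau[x, y], tau[y, x]) + 1e-12
```

Since only differences $V(x) - V(y)$ enter, the check is independent of the additive normalization of $V$.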

The bound (5.3) is not always optimal. It still can be improved as follows.

Theorem V.2.
Suppose there exists a set $D$ so that $f(x) = LE(x) + h(x)$, where $h = h_E$ depends on $E$ and $h(x) = 0$ for $x \notin D$. Then, the solution $V$ of the Poisson equation (3.2) satisfies
\[ \big|V(x) - V(y)\big| \le \big|E(x) - E(y)\big| + \|h_E\| \sum_{z \in D} \rho_s(z)\, \big|\tau(x,z) - \tau(y,z)\big|. \]
(5.6)

That bound requires controlling the accessibility of the set $D$ only. In fact, since there is freedom in choosing the pair $(E, h)$, that freedom can be used to optimize the bound (5.6).

We observe that in the case where $f \equiv LE$, the bound is optimal because then $V = -E + \text{constant}$.

Proof of Theorem V.2.
The Poisson equation now becomes
\[ L(V+E) + h = 0, \qquad \text{with } h = 0 \text{ on } K\setminus D. \]
Applying (3.15), we get
\[ V(x) + E(x) = -\sum_{z \in D} \rho_s(z)\, h_E(z)\, \tau(x,z) + \text{constant} \]
(5.7)
and the bound (5.6) follows directly.□

We recall the setup where a digraph with n vertices is obtained from a Markov jump process with transition rates k(u, u′). We define
\[ \|f\| \coloneqq \max_y |f(y)|, \qquad k \coloneqq \max_{(u,u')} k(u,u'). \]
(5.8)

Theorem V.3.
For all x,
\[ |V(x)| \le \frac{n\, k^{n-2}}{W}\, \|f\|. \]
(5.9)

Proof of Theorem V.3.

We apply (4.3) of Theorem IV.2. The product of the rates over the edges of a forest is obviously bounded: a two-tree spanning forest has $n-2$ edges, each carrying a rate at most $k$, so the weight of any forest in the graph is bounded as well. In other words, for any $x$ and $y$ in the graph, $w(x \to y)$ is bounded.

Using (5.8),
\[ w(x \to y)\,|f(y)| \le \|f\|\, k^{n-2} \]
and the result follows from (4.3).□

As an application of (5.9), we can suppose that the rates $k(x, y) = k_\lambda(x, y)$ depend on a real-valued parameter $\lambda$. For simplicity, we consider the limit $\lambda \to \infty$ (taking for $\lambda$ the inverse temperature is a relevant example from statistical mechanics). Observe that there is a general lower bound on $W$, i.e., $W \ge \max_{x, T_x} w(T_x)$, the largest weight of all spanning trees. Hence, if there exists a spanning tree with $w(T_x) > 0$ uniformly in $\lambda$, then $W = W_\lambda > 0$ is uniformly bounded from below, and (5.9) gives a uniform bound on $V$.

The bound (5.9) can for instance be used in an argument extending the Third Law of Thermodynamics to nonequilibrium systems; see Ref. 11 where the source function f is the expected heat flux. The quasipotential V in (3.4) is then the time-integral of the difference between the instantaneous heat flux and the stationary heat flux.

The Poisson equation is ubiquitous in mathematical physics. We have considered a setup where the linear operator is the backward generator of a Markov jump process and the functions are defined on the possible (finite number of) states. We have related the solution of that discrete Poisson equation to the formalism of mean first-passage times, and we have given graphical representations that allow precise estimates—both for the accumulations before first hitting a particular state when starting from another one, and for the quasipotentials measuring the accumulated excess during relaxation to stationarity. The results do not assume reversibility, and are ready to be used in extensions of potential theory when applied to problems in steady-nonequilibrium statistical mechanics. Obvious targets are reaction rate theory for driven or active systems, and metastability around nonequilibrium steady conditions where we expect that graphical representations are particularly helpful in low ambient-temperature or high-driving regimes.

The authors have no conflicts to disclose.

All authors contributed equally to this work.

Faezeh Khodabandehlou: Writing – original draft (equal). Christian Maes: Writing – review & editing (equal). Karel Netočný: Writing – original draft (equal).

The data that support the findings of this study are available within the article.

The Kirchhoff formula can be seen as a consequence of the Matrix Tree Theorem.18–20 However, there also exists a more probabilistic approach. The idea is illustrated by the algorithm in Fig. 5, where a tree $T_x$ is constructed by removing the edge $(x, y')$ from the tree $T_y$, and then adding the edge $(y, x)$. That algorithm is used in Ref. 17 to construct a reversible Markov chain $Y_n$ on an ensemble of trees, where the states are the rooted spanning trees and jumps are possible if and only if the corresponding trees are connected by the above-mentioned algorithm. Let $Q_{ab} = q(a, b)$ be the stochastic matrix of $Y_n$, and let $\pi$ be the stationary distribution of $Q$. For example, we can assign states $a$ and $b$, respectively, to the trees $T_y$ and $T_x$ in Fig. 5, and take transition rates $q(a, b)$. It is shown that putting $q(a, b) = k(y, x)$, $q(b, a) = k(x, y')$ produces detailed balance $\pi_a q(a, b) = \pi_b q(b, a)$. We add a sketch of the proof.

FIG. 5.

Removing the edge (x, y′) from the spanning tree Ty and adding the edge (y, x) creates a spanning tree rooted in x.

It suffices to show that the Kirchhoff formula (4.1) satisfies the stationary master equation,
\[ \sum_y \big[w(y)\,k(y,x) - w(x)\,k(x,y)\big] = 0. \]
Take a spanning tree rooted in $y$, $T_y \in \mathcal{T}_y$. Let $x$ be a vertex of the tree $T_y$. In this case, $x$ is not the root, and there exist a vertex $y'$ (which may be equal to $y$) and an edge $(x, y')$ which goes out from $x$ and connects $x$ to the tree. By removing the edge $(x, y')$ from the tree $T_y$ and adding the edge $(y, x)$, a new spanning tree rooted in $x$ is created; see Fig. 5.
It turns out that $w(T_y)\,k(y,x) = w(T_x)\,k(x,y')$. With the same scenario, from every spanning tree rooted in $y$ we can make a new spanning tree rooted in $x$, and, inversely, the algorithm makes a spanning tree rooted in $y$ from one rooted in $x$. Therefore,
\[ \sum_y w(y)\,k(y,x) = \sum_y \sum_{T_y \in \mathcal{T}_y} w(T_y)\,k(y,x) = \sum_{y'} \sum_{T_x \in \mathcal{T}_x} w(T_x)\,k(x,y') = \sum_y w(x)\,k(x,y), \]
which ends the proof.

The present appendix gives the broader mathematical context of the graphical representations.

1. Laplacian matrix and the backward generator

Consider a self-edge-free directed graph $G = (V(G), E(G))$ with $n$ vertices. The adjacency matrix $A(G)$ is an $n \times n$ matrix with entries $A(G)_{xy}$ representing the weight of the edge $(x, y)$, which is denoted by $\omega(x, y)$.

The out-weight of a vertex xV(G) is yω(x, y), and we define the n × n diagonal matrix D(G) with Dxy = 0 if xy and otherwise Dxx = yω(x, y).

The Laplacian matrix for a directed graph G is
\[ \mathcal{L}(G) = D(G) - A(G). \]
(B1)
Clearly, $\mathcal{L}(G)$ is not always symmetric, and the backward generator of the Markov jump process (as we had it in Sec. II) is $L = -\mathcal{L}$, minus the Laplacian matrix of the underlying graph, where the edge weights are equal to the transition rates, $\omega(x, y) = k(x, y)$. Therefore, all properties of the Laplacian of a weighted digraph also hold for the backward generator $L$, as described in Sec. II.

The Laplacian matrix of a (directed) graph has several important properties that have been extensively studied in the literature.21,22

2. Pseudo-inverses of the Laplacian matrix

The Laplacian matrix is not invertible, and different pseudo/generalized inverses can be defined, e.g., the group inverse, the Drazin inverse, the Moore–Penrose inverse, and the resolvent inverse. Depending on the characteristics of the graph, the various inverses of the Laplacian may or may not be equal; see also Ref. 23.

Let us recall some definitions from mathematics: for an arbitrary square matrix A, its Drazin inverse is denoted by AD and is the unique matrix X satisfying the following equations:
\[ A^{\nu+1}X = A^{\nu}, \qquad XAX = X, \qquad AX = XA \]
(B2)
where $\nu = \operatorname{ind} A$ [recall that $\operatorname{ind} A$ is the smallest nonnegative integer $b$ such that $\operatorname{rank}(A^{b+1}) = \operatorname{rank}(A^{b})$]. If $\nu = 0$, then $A^D = A^{-1}$; if $\nu \le 1$, then $A^D$ is referred to as the group inverse, denoted by $A^{\#}$, which is the unique matrix $X$ satisfying the following equations:
\[ AXA = A, \qquad XAX = X, \qquad AX = XA. \]
(B3)
As a consequence, when the index of a matrix is one, its Drazin inverse and group inverse coincide.
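For an irreducible generator the index is one, and the group inverse can be computed explicitly from the spectral projector $\Pi = \mathbb{1}\rho_s$ onto the null space via $L^{\#} = (L - \Pi)^{-1} + \Pi$ (a standard formula, used here as an assumption of the sketch, not taken from the paper); the defining equations (B3) can then be checked numerically:

```python
import numpy as np

# Hypothetical generator; ind L = 1 for an irreducible chain, so the Drazin
# inverse is the group inverse, computable from the spectral projector.
k = np.array([[0., 2., 1.],
              [1., 0., 3.],
              [2., 1., 0.]])
n = k.shape[0]
L = k - np.diag(k.sum(axis=1))
vals, vecs = np.linalg.eig(L.T)
rho = np.real(vecs[:, np.argmin(np.abs(vals))])
rho = rho / rho.sum()

Pi = np.outer(np.ones(n), rho)        # projector onto the null space, Pi = 1 rho
Lsharp = np.linalg.inv(L - Pi) + Pi   # group inverse L#

# The defining equations (B3) with X = L#:
assert np.allclose(L @ Lsharp @ L, L)
assert np.allclose(Lsharp @ L @ Lsharp, Lsharp)
assert np.allclose(L @ Lsharp, Lsharp @ L)
```

The matrix $L - \Pi$ is invertible because $\Pi$ removes the zero eigenvalue (it sends constants to constants, on which $L - \Pi$ acts as $-1$).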

We refer to Ref. 24 for the Matrix Forest Theorem for a Laplacian matrix. We briefly recall that result.

Consider a weighted digraph $G$, where $n$ is the number of vertices. $I$ is the identity matrix, $\mathcal{F}^m_{xy}$ is the set of all forests with $m$ edges such that $x$ and $y$ are in the same tree and $y$ is a root, and $\mathcal{F}^m$ is the union of all $\mathcal{F}^m_{x}$, with $m$ the number of edges making up the forests. For example, if $m = n-1$ the forests consist of one spanning tree with $n-1$ edges, and when $m = n-2$ the forest has two trees. $w(\mathcal{F}^m_{xy})$ is the weight of the set $\mathcal{F}^m_{xy}$. Put $\gamma$ for the forest dimension of $G$: the forest dimension is the minimum number of rooted trees that a spanning rooted forest can have in a directed graph. Note that for the underlying graph of an irreducible Markov process, $\gamma = 1$.

Consider a weighted digraph G with the Laplacian matrix L. From Ref. 25,

Theorem B.1.
For any $\alpha > 0$, the matrix $(I + \alpha\mathcal{L}(G))^{-1}$ has the graphical representation
\[ \Big[\frac{1}{I + \alpha\mathcal{L}}\Big]_{xy} = \frac{\sum_{m=0}^{n-\gamma} \alpha^m\, w(\mathcal{F}^m_{xy})}{\sum_{m=0}^{n-\gamma} \alpha^m\, w(\mathcal{F}^m)}. \]
(B4)

For the specific case when γ = 1,
\[ \Big[\frac{1}{I + \alpha\mathcal{L}}\Big]_{xy} = \frac{\sum_{m=0}^{n-2} \alpha^m\, w(\mathcal{F}^m_{xy})}{\sum_{m=0}^{n-1} \alpha^m\, w(\mathcal{F}^m)} + \frac{\alpha^{n-1}\, w(\mathcal{T}_y)}{\sum_{m=0}^{n-1} \alpha^m\, w(\mathcal{F}^m)} \]
(B5)
where the subscript $xy$ denotes row $x$ and column $y$, and we have used $\mathcal{F}^{n-1}_{xy} = \mathcal{T}_y$, i.e., a forest with $n-1$ edges is a spanning tree. $w(\mathcal{T}_y)$ is the weight of the set $\mathcal{T}_y$, $w(\mathcal{T}_y) \equiv w(y)$.

From Refs. 21 and 26, the graphical representation of the group inverse L# is as follows.

Theorem B.2.
The x, y entry of the group inverse of the Laplacian matrix is
\[ \mathcal{L}^{\#}_{xy} = \frac{1}{w(\mathcal{F}^{n-\gamma})}\Big[w(\mathcal{F}^{n-\gamma-1}_{xy}) - \frac{w(\mathcal{F}^{n-\gamma-1})\, w(\mathcal{F}^{n-\gamma}_{xy})}{w(\mathcal{F}^{n-\gamma})}\Big]. \]
(B6)

If γ = 1,
\[ \mathcal{L}^{\#}_{xy} = \frac{1}{w(\mathcal{F}^{n-1})}\Big[w(\mathcal{F}_{xy}) - \frac{w(\mathcal{F})\, w(\mathcal{F}^{n-1}_{xy})}{w(\mathcal{F}^{n-1})}\Big] = \frac{1}{\sum_{y'} w(\mathcal{T}_{y'})}\Big[w(\mathcal{F}_{xy}) - \frac{w(\mathcal{F})\, w(\mathcal{T}_y)}{\sum_{y'} w(\mathcal{T}_{y'})}\Big] \]
(B7)
\[ = \frac{1}{W}\big[w(\mathcal{F}_{xy}) - w(\mathcal{F})\,\rho_s(y)\big], \]
(B8)
where we have used that the set of all rooted spanning forests with n − 1 edges is indeed the set of all rooted spanning trees. F denotes the set of all rooted spanning forests consisting of two trees. In the last line, the Kirchhoff formula is used.
The group inverse of the backward generator is denoted by $L^{\#}$; since $L = -\mathcal{L}$, we have $L^{\#} = -\mathcal{L}^{\#}$, and from Theorem B.2 the graphical representation is
\[ L^{\#}_{xy} = -\frac{w(\mathcal{F}_{xy})}{W} + \rho_s(y)\,\frac{w(\mathcal{F})}{W}. \]
(B9)

3. Another proof of Theorem IV.2

We can solve Eq. (3.2) by utilizing a graphical representation of the pseudoinverse of $L$, which can be obtained through the application of the Matrix Forest Theorem.

Proof.
Let us assume that $f$ is centered, meaning $\sum_y f(y)\,\rho_s(y) = 0$. Use the graphical representation (B4),
\[ V(x) = \lim_{\alpha\to\infty}\sum_y \Big[\frac{\alpha}{I + \alpha\mathcal{L}}\Big]_{xy} f(y) = \lim_{\alpha\to\infty}\sum_y \frac{\sum_{m=0}^{n-2} \alpha^{m+1}\, w(\mathcal{F}^m_{xy})}{\sum_{m=0}^{n-1} \alpha^m\, w(\mathcal{F}^m)}\, f(y) + \lim_{\alpha\to\infty}\sum_y \frac{\alpha^{n}\, w(\mathcal{T}_y)}{\sum_{m=0}^{n-1} \alpha^m\, w(\mathcal{F}^m)}\, f(y) = \sum_y \frac{w(\mathcal{F}_{xy})}{W}\, f(y) \]
(B10)
in the second line we use the Kirchhoff formula, with $w(\mathcal{F}^{n-1}) = W$, and apply the centeredness of $f$: the last, potentially divergent, term vanishes because $\sum_y w(\mathcal{T}_y)\, f(y) = W\langle f\rangle_s = 0$, and the proof is finished.□

As understood in Ref. 23, the resolvent inverse and the Drazin inverse of the backward generator on a centered function f give the same solution to the Poisson equation. The reason is that the index of the backward generator L equals one; see Ref. 25. We check that explicitly in their graphical representations,
\[ V(x) = -L^{\#}f(x) = \sum_y \frac{w(\mathcal{F}_{xy})\, f(y)}{W} - \frac{w(\mathcal{F})}{W}\sum_y \rho_s(y)\, f(y) = \sum_y \frac{w(\mathcal{F}_{xy})}{W}\, f(y), \]
which is equal to the solution of the Poisson equation via the resolvent inverse; see (4.3). Hence, if $f$ is centered, then the resolvent inverse, Drazin inverse, and group inverse of $L$ are all equal, giving $V = -L^{-1}f$ in that generalized sense.

4. Another proof of Theorem IV.3

The equation in (3.14) can be solved by the graphical representation of the group inverse of the backward generator.

Theorem B.3.
The solution of $L\tau + 1 = 0$, i.e., of (3.14) arranged as a matrix $\tau = [\tau(x,z)]$, is
\[ \tau = \big(L^{\#} - \mathbb{1}\, L^{\#}_{\mathrm{dg}}\big)\, D \]
(B11)
where $\mathbb{1}$ is the $n \times n$ matrix such that all elements are 1, and $D$ is a diagonal $n \times n$ matrix such that $D_{xx} = 1/\rho_s(x)$. $L^{\#}_{\mathrm{dg}}$ is the matrix made by putting all the entries outside of the main diagonal of $L^{\#}$ equal to zero.

We start from
\[ L\tau = L\big(L^{\#} - \mathbb{1}\, L^{\#}_{\mathrm{dg}}\big)\, D = LL^{\#} D, \]
where we used that $L\mathbb{1} = 0$ is the $n \times n$ matrix with all entries equal to zero. Furthermore,
\[ (LL^{\#}D)_{xz} = (LL^{\#})_{xz}\,\frac{1}{\rho_s(z)} = \frac{1}{\rho_s(z)}\sum_y L_{xy} L^{\#}_{yz} = \frac{1}{\rho_s(z)}\Big[\sum_{y\neq x} k(x,y)\, L^{\#}_{yz} - L^{\#}_{xz}\sum_y k(x,y)\Big] = \frac{1}{\rho_s(z)}\sum_y k(x,y)\,\big(L^{\#}_{yz} - L^{\#}_{xz}\big), \]
\[ (LL^{\#}D)_{xz} = \frac{1}{\rho_s(z)}\sum_y k(x,y)\,\frac{1}{W}\Big[-w(y\to z) + \rho_s(z)\, w(\mathcal{F}) + w(x\to z) - \rho_s(z)\, w(\mathcal{F})\Big] \]
(B12)
\[ = -\frac{1}{w(z)}\sum_y k(x,y)\,\big[w(y\to z) - w(x\to z)\big]. \]
(B13)
Fix $x$ and $z$, and consider the two sets $\mathcal{F}_{yz}$ and $\mathcal{F}_{xz}$. For fixed $y$, their intersection is the set of all forests such that $x$ and $y$ are in the same tree rooted at $z$. As a result, in (B12) we only keep, from the set $\mathcal{F}_{yz}$, the forests where $x$ is not in the tree of $y$ and $z$, and, from the set $\mathcal{F}_{xz}$, the forests where $y$ is not in the tree of $x$ and $z$. Hence,
\[ w(y\to z) - w(x\to z) = w(\mathcal{F}^z_{x,y}) - w(\mathcal{F}^z_{y,x}), \]
where $\mathcal{F}^z_{x,y}$ is the set of all forests such that $y$ is in the tree rooted in $z$ and $x$ is located in the other tree.

Take a forest from the set $\mathcal{F}^z_{x,y}$. If $x$ is the root of its tree, then adding the edge $(x, y)$ connects the two trees such that a new spanning tree rooted in $z$ is created; see Fig. 4. Now consider a spanning tree rooted at $z$: by removing the edge going out from $x$, a forest in the set $\mathcal{F}_{x,z}$ is created.

Take a forest $F^z_{x,y} \in \mathcal{F}^z_{x,y}$ such that $x$ is not the root. Then $x$ has an out-neighbor $y'$. Adding the edge $(x, y)$ to $F^z_{x,y}$ creates a directed graphical object. The same graphical object is also created by adding the edge $(x, y')$ to a forest from the set $\mathcal{F}^z_{y,x}$; see Fig. 2. The same graphical objects have equal weights, and as a result
yk(x,y)w(Fx,yz)w(Fy,xz)=w(z)
and
$$
(LL^{\#}D)_{xz} = 1, \qquad z \neq x. \qquad (B14)
$$
It follows that $(L\tau)_{xz} = -1$ for every $z \neq x$, which ends the proof of (B11).
We can get the graphical representation (4.5) from (B11):
$$
\tau(x,z) = \begin{cases} 0, & x = z,\\[2pt] \dfrac{L^{\#}_{zz} - L^{\#}_{xz}}{\rho_s(z)}, & x \neq z, \end{cases}
$$
where
$$
L^{\#}_{zz} - L^{\#}_{xz} = \frac{1}{W}\big[w(\mathcal{F}_{z\to z}) - w(\mathcal{F}_{x\to z})\big] = \frac{w(x,z)}{W}.
$$
Hence, using $\rho_s(z) = w(z)/W$, we indeed get
$$
\tau(x,z) = \frac{w(x,z)}{w(z)}, \quad x \neq z; \qquad \tau(x,x) = 0.
$$
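As an illustration (not from the paper; the three-state rates below are assumptions), the formula $\tau(x,z) = w(x,z)/w(z)$ can be checked by brute-force enumeration of rooted spanning trees and two-tree forests, against the mean first-passage time obtained by solving the linear system $\sum_y k(x,y)[\tau(y,z)-\tau(x,z)] = -1$, $\tau(z,z)=0$, directly:

```python
import itertools
import numpy as np

# Toy irreducible, nonreversible 3-state jump process; the rates are assumptions.
states = [0, 1, 2]
k = {(0, 1): 2.0, (1, 0): 0.5, (1, 2): 1.0,
     (2, 1): 0.7, (2, 0): 1.5, (0, 2): 0.3}

def reaches(parent, v, roots):
    """Follow out-edges from v; return the root reached, or None on a cycle."""
    seen = set()
    while v not in roots:
        if v in seen:
            return None
        seen.add(v)
        v = parent[v]
    return v

def rooted_forests(roots):
    """Yield (parent map, weight) over spanning forests rooted at `roots`."""
    rest = [v for v in states if v not in roots]
    for choice in itertools.product(states, repeat=len(rest)):
        pm = dict(zip(rest, choice))
        if all(reaches(pm, v, set(roots)) is not None for v in rest):
            yield pm, float(np.prod([k.get((v, pm[v]), 0.0) for v in rest]))

def w_tree(z):
    """w(z): total weight of spanning trees rooted at z."""
    return sum(w for _, w in rooted_forests((z,)))

def w_forest(x, z):
    """w(x,z): two-tree forests with one tree rooted at z and x in the other tree."""
    total = 0.0
    for r in states:
        if r == z:
            continue
        for pm, w in rooted_forests((z, r)):
            if x == r or reaches(pm, x, {z, r}) == r:
                total += w
    return total

def tau_forest(x, z):
    """The graphical formula tau(x,z) = w(x,z)/w(z)."""
    return w_forest(x, z) / w_tree(z)

def tau_linear(x, z):
    """Mean first-passage time from sum_y k(x,y)[t(y)-t(x)] = -1, t(z) = 0."""
    rest = [v for v in states if v != z]
    A = np.zeros((len(rest), len(rest)))
    for i, a in enumerate(rest):
        for b in states:
            if b == a:
                continue
            rate = k.get((a, b), 0.0)
            A[i, i] -= rate
            if b != z:
                A[i, rest.index(b)] += rate
    t = np.linalg.solve(A, -np.ones(len(rest)))
    return t[rest.index(x)]

print(tau_forest(0, 2), tau_linear(0, 2))  # both give 3.5/2.45 ≈ 1.42857
```

The enumeration encodes a rooted forest by the out-edge (parent) of every non-root vertex, which for small state spaces is a direct transcription of the combinatorics above.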
2. S. Redner, A Guide to First-Passage Processes (Cambridge University Press, 2001).
3. E. Pardoux and Y. Veretennikov, "On the Poisson equation and diffusion approximation. I," Ann. Probab. 29(3), 1061–1085 (2001).
4. È. Pardoux and A. Y. Veretennikov, "On Poisson equation and diffusion approximation 2," Ann. Probab. 31(3), 1166–1192 (2003).
5. E. Pardoux and A. Y. Veretennikov, "On the Poisson equation and diffusion approximation 3," Ann. Probab. 33(3), 1111–1133 (2005).
6. P. Doyle and J. Steiner, "Commuting time geometry of ergodic Markov chains," arXiv:1107.2612 (2011).
7. A. Gaudillière and C. Landim, "A Dirichlet principle for nonreversible Markov chains and some recurrence theorems," Probab. Theory Related Fields 158, 55–89 (2014).
8. M. Balázs and A. Folly, "An electric network for nonreversible Markov chains," Am. Math. Mon. 123, 657–682 (2016).
9. C. Oh and F. Rezakhanlou, "Metastability of zero range processes via Poisson equations," https://math.berkeley.edu/~chanwoo/Professional/Metastability%20of%20zero%20range%20processes%20via%20Poisson%20equations.pdf (2019).
10. C. Landim, D. Marcondes, and I. Seo, "A resolvent approach to metastability," J. Eur. Math. Soc., 1–56 (2023).
11. F. Khodabandehlou, C. Maes, and K. Netočný, "A Nernst heat theorem for nonequilibrium jump processes," J. Chem. Phys. 158(20), 204112 (2023).
12. F. Khodabandehlou, C. Maes, I. Maes, and K. Netočný, "The vanishing of excess heat for nonequilibrium processes reaching zero ambient temperature," Ann. Henri Poincaré (published online, 2023).
13. A. Gaudillière, "Condenser physics applied to Markov chains—A brief introduction to potential theory," arXiv:0901.3053 (2009).
14. J. Beltrán and C. Landim, "Tunneling and metastability of continuous time Markov chains II, the nonreversible case," J. Stat. Phys. 149, 598–618 (2012).
15. C. Maes and K. Netočný, "Heat bounds and the blowtorch theorem," Ann. Henri Poincaré 14(5), 1193 (2013).
16. F. Khodabandehlou, C. Maes, and K. Netočný, "Trees and forests for nonequilibrium purposes: An introduction to graphical representations," J. Stat. Phys. 189, 41 (2022).
17. V. Anantharam and P. Tsoucas, "A proof of the Markov chain tree theorem," Stat. Probab. Lett. 8(2), 189–192 (1989).
18. B. O. Shubert, "A flow-graph formula for the stationary distribution of a Markov chain," IEEE Trans. Syst., Man, Cybern. SMC-5(5), 565–566 (1975).
19. W. T. Tutte, "The dissection of equilateral triangles into equilateral triangles," Math. Proc. Cambridge Philos. Soc. 44(4), 463–482 (1948).
20. P. Biane and G. Chapuy, "Laplacian matrices and spanning trees of tree graphs," Ann. Fac. Sci. Toulouse: Math. 26(2), 235–261 (2017).
21. P. Chebotarev and R. Agaev, "Forest matrices around the Laplacian matrix," Linear Algebra Appl. 356(1–3), 253–274 (2002).
22. A. Ben-Israel and T. N. E. Greville, Generalized Inverses: Theory and Applications (Springer, New York, 2003).
23. F. Khodabandehlou and I. Maes, "Drazin-inverse and heat capacity for driven random walkers on the ring," Stochastic Process. Appl. 164, 337–356 (2023).
24. R. Agaev and P. Chebotarev, "The matrix of maximum out forests of a digraph and its applications," arXiv:math/0602059 [math.CO] (2006).
25. P. Chebotarev and E. Shamis, "Matrix-forest theorems," arXiv:math/0602575 [math.CO] (2006).
26. P. Chebotarev, "A graph theoretic interpretation of the mean first passage times," arXiv:math/0701359 [math.PR] (2017).