Markov chain Monte Carlo algorithms are invaluable tools for exploring stationary properties of physical systems, especially in situations where direct sampling is infeasible. Common implementations of Monte Carlo algorithms employ reversible Markov chains. Reversible chains obey detailed balance and thus ensure that the system will eventually relax to equilibrium, though detailed balance is not necessary for convergence to equilibrium. We review nonreversible Markov chains, which violate detailed balance yet still relax to a given target stationary distribution. In particular cases, nonreversible Markov chains sample substantially better than conventional reversible Markov chains, with up to a square-root improvement in the convergence time to the steady state. One kind of nonreversible Markov chain is constructed from a reversible one by enlarging the state space and by modifying and adding transition rates to create nonreversible moves. Because of this augmentation of the state space, such chains are often referred to as lifted Markov chains. We illustrate the use of lifted Markov chains for efficient sampling on several examples: sampling on a ring, sampling on a torus, the Ising model on a complete graph, and the one-dimensional Ising model. We also provide a pseudocode implementation, review related work, and discuss the applicability of such methods.
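As an illustration of the lifting construction sketched in the abstract, the following is a minimal Python sketch (not the paper's own pseudocode) of a nonreversible walk on a ring. The state space is enlarged with a direction variable, and the walker reverses direction only rarely, which suppresses diffusive backtracking; the function name and parameters are chosen here for illustration.

```python
import random

def lifted_ring_walk(n_states, n_steps, flip_prob=None, seed=0):
    """Nonreversible 'lifted' random walk on a ring of n_states sites.

    The state is (position, direction), with direction sigma in {+1, -1}.
    The walker moves ballistically in its current direction and reverses
    only with a small probability (of order 1/n_states).  For the uniform
    target distribution this lifted chain mixes in O(n_states) steps,
    compared to O(n_states**2) for the reversible diffusive walk.
    """
    if flip_prob is None:
        flip_prob = 1.0 / n_states
    rng = random.Random(seed)
    x, sigma = 0, 1
    visits = [0] * n_states
    for _ in range(n_steps):
        if rng.random() < flip_prob:
            sigma = -sigma              # rare direction reversal
        x = (x + sigma) % n_states      # ballistic move along the ring
        visits[x] += 1
    return visits
```

Running the walk for many steps and inspecting the visit counts shows that they approach the uniform stationary distribution, even though the chain's individual moves strongly violate detailed balance.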
December 2016
Lifting—A nonreversible Markov chain Monte Carlo algorithm
Marija Vucelja
Center for Studies in Physics and Biology, The Rockefeller University, 1230 York Avenue, New York, New York 10065 and Department of Physics, University of Virginia, Charlottesville, Virginia 22904
a) Electronic mail: mvucelja@virginia.edu
Am. J. Phys. 84, 958–968 (2016)
Article history
Received: February 02 2015
Accepted: August 11 2016
Citation
Marija Vucelja; Lifting—A nonreversible Markov chain Monte Carlo algorithm. Am. J. Phys. 1 December 2016; 84 (12): 958–968. https://doi.org/10.1119/1.4961596