Machine learning has become a widely popular and successful paradigm, especially in data-driven science and engineering. A major application problem is data-driven forecasting of future states of a complex dynamical system. Artificial neural networks have evolved as a clear leader among many machine learning approaches, and recurrent neural networks are considered particularly well suited for forecasting dynamical systems. In this setting, echo state networks, or reservoir computers (RCs), have emerged for their simplicity and computational advantages: instead of training a full network, an RC trains only the readout weights, by a simple, efficient least squares method. What is perhaps surprising is that an RC nonetheless succeeds in making high quality forecasts, competitively with more intensively trained methods, even if it is not the leader. There remains an unanswered question as to why and how an RC works at all despite its randomly selected internal weights. To this end, this work analyzes a further simplified RC, in which the internal activation function is the identity. The simplification is offered not to tune or improve an RC, but to enable analysis; the surprise, in our view, is not that the method does not work better, but that such random methods work at all. We explicitly connect the RC with linear activation and linear readout to the well developed time-series literature on vector autoregressive (VAR) processes, including representability theorems via the Wold theorem; this linear variant already performs reasonably for short-term forecasts. In the case of linear activation with the now popular quadratic readout, we explicitly connect the RC to a nonlinear VAR (NVAR), which performs quite well. Furthermore, we associate this paradigm with the now widely popular dynamic mode decomposition (DMD); thus, these three are in a sense different faces of the same thing. We illustrate our observations with popular benchmark examples, including the Mackey–Glass delay differential equation and the Lorenz63 system.
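To make the simplification concrete, here is a minimal sketch, in Python with NumPy, of the linear-activation RC described in the abstract (illustrative code, not the paper's): the internal weights A and W_in are random and never trained, the activation is the identity, and only the readout W_out is fit by ridge-regularized least squares. The data source, reservoir size, spectral radius, ridge penalty, and integration step are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def lorenz63(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Crude forward-Euler integration of Lorenz63; adequate for illustration.
        x = np.array([1.0, 1.0, 1.0])
        out = np.empty((n_steps, 3))
        for i in range(n_steps):
            dx = np.array([sigma * (x[1] - x[0]),
                           x[0] * (rho - x[2]) - x[1],
                           x[0] * x[1] - beta * x[2]])
            x = x + dt * dx
            out[i] = x
        return out

    data = lorenz63(5000)
    u_train, y_train = data[:-1], data[1:]            # one-step-ahead targets

    N, d = 100, 3                                     # reservoir size, input dimension
    A = rng.normal(size=(N, N))
    A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius < 1 (echo state property)
    W_in = rng.normal(scale=0.1, size=(N, d))         # random, untrained input weights

    # Drive the reservoir with identity activation: r_{t+1} = A r_t + W_in u_t (no tanh).
    r = np.zeros(N)
    R = np.empty((len(u_train), N))
    for t, u in enumerate(u_train):
        r = A @ r + W_in @ u
        R[t] = r

    # Train ONLY the readout, by ridge-regularized least squares,
    # discarding an initial washout so transients do not bias the fit.
    washout, lam = 100, 1e-6
    Rw, yw = R[washout:], y_train[washout:]
    W_out = np.linalg.solve(Rw.T @ Rw + lam * np.eye(N), Rw.T @ yw).T

    rmse = np.sqrt(np.mean((Rw @ W_out.T - yw) ** 2))
    print(f"one-step training RMSE: {rmse:.3e}")

Because the activation is the identity, the driven state unrolls to r_t = sum_{k=1}^{t} A^{k-1} W_in u_{t-k}, a linear function of the input history; a linear readout on r_t is therefore exactly a vector autoregression in u, and a quadratic readout yields the corresponding NVAR. This is the equivalence the abstract describes.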
On explaining the surprising success of reservoir computing forecaster of chaos? The universal machine learning dynamical system with contrast to VAR and DMD
Research Article | January 04 2021
Erik Bollt a)
Department of Electrical and Computer Engineering, Clarkson University, Potsdam, New York 13699, USA and Clarkson Center for Complex Systems Science (C3S2), Potsdam, New York 13699, USA
a) Author to whom correspondence should be addressed: [email protected]
Citation
Erik Bollt; On explaining the surprising success of reservoir computing forecaster of chaos? The universal machine learning dynamical system with contrast to VAR and DMD. Chaos 1 January 2021; 31 (1): 013108. https://doi.org/10.1063/5.0024890