In recent years, the artificial intelligence community has shown sustained interest in research investigating the dynamical aspects of both training procedures and machine learning models. Among recurrent neural networks, the Reservoir Computing (RC) paradigm is of particular interest, being characterized by conceptual simplicity and a fast training scheme. Yet, the guiding principles under which RC operates are only partially understood. In this work, we analyze the role played by Generalized Synchronization (GS) when training an RC to solve a generic task. In particular, we show how GS allows the reservoir to correctly encode the system generating the input signal into its dynamics. We also discuss necessary and sufficient conditions for the learning to be feasible in this approach. Moreover, we explore the role that ergodicity plays in this process, showing how its presence allows the learning outcome to apply to multiple input trajectories. Finally, we show that satisfaction of GS can be measured by means of the mutual false nearest neighbors index, which makes the theoretical derivations readily usable by practitioners.
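To make the last point concrete, below is a minimal sketch (not the paper's implementation) of how a practitioner might probe GS between an input system and a reservoir: a randomly initialized echo state reservoir is driven by a Lorenz trajectory, and a mutual-false-nearest-neighbors-style index in the spirit of Rulkov et al. is computed between drive and response. The reservoir update rule, all parameter values (reservoir size, spectral radius, input scaling), and the exact form of the index are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz system with a simple Euler scheme."""
    state = np.array([1.0, 1.0, 1.0])
    traj = np.empty((n_steps, 3))
    for t in range(n_steps):
        x, y, z = state
        state = state + dt * np.array([sigma * (y - x),
                                       x * (rho - z) - y,
                                       x * y - beta * z])
        traj[t] = state
    return traj

def run_reservoir(u, n_units=300, spectral_radius=0.9, input_scaling=0.5, seed=0):
    """Drive a randomly initialized echo state reservoir with input sequence u."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, n_units))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-input_scaling, input_scaling, size=(n_units, u.shape[1]))
    states = np.empty((len(u), n_units))
    x = np.zeros(n_units)
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in @ u[t])
        states[t] = x
    return states

def pairwise_distances(a):
    """All pairwise Euclidean distances, with the diagonal set to infinity."""
    sq = np.sum(a ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * a @ a.T, 0.0)
    d = np.sqrt(d2)
    np.fill_diagonal(d, np.inf)
    return d

def mfnn_index(u, x):
    """Mutual-false-nearest-neighbors-style index: values near 1 are
    consistent with a smooth drive-response map, i.e., with GS."""
    du, dx = pairwise_distances(u), pairwise_distances(x)
    nnd = du.argmin(axis=1)   # nearest neighbor of each point in drive space
    nnr = dx.argmin(axis=1)   # nearest neighbor of each point in response space
    idx = np.arange(len(u))
    # Under GS, drive-space neighbors should also be response-space
    # neighbors (and vice versa), so both factors stay close to 1.
    ratios = (dx[idx, nnd] / dx[idx, nnr]) * (du[idx, nnr] / du[idx, nnd])
    return ratios.mean()

u = lorenz_trajectory(4000)[2000:]          # discard the Lorenz transient
u = (u - u.mean(axis=0)) / u.std(axis=0)    # normalize the drive signal
states = run_reservoir(u)
print(mfnn_index(u[500:], states[500:]))    # drop the reservoir washout
```

With a contracting reservoir like this one (spectral radius below 1), the printed value should sit close to 1; degrading synchronization, e.g., by pushing the spectral radius and input scaling well above the echo state regime, tends to inflate it.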
Research Article | Published 13 August 2021
Learn to synchronize, synchronize to learn
Pietro Verzelli,1,a) Cesare Alippi,1,2 and Lorenzo Livi3,4

1 Faculty of Informatics, Università della Svizzera Italiana, Lugano 69000, Switzerland
2 Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan 20133, Italy
3 Department of Computer Science and Mathematics, University of Manitoba, Winnipeg, Manitoba R3T 2N2, Canada
4 Department of Computer Science, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter EX4 4QF, United Kingdom

a) Author to whom correspondence should be addressed: pietro.verzelli@usi.ch
Chaos 31, 083119 (2021)
Article history: Received 10 May 2021; Accepted 27 July 2021
Citation
Pietro Verzelli, Cesare Alippi, Lorenzo Livi; Learn to synchronize, synchronize to learn. Chaos 1 August 2021; 31 (8): 083119. https://doi.org/10.1063/5.0056425