The Hebbian unlearning algorithm, i.e., an unsupervised local procedure used to improve the retrieval properties of Hopfield-like neural networks, is numerically compared to a supervised algorithm that trains a linear symmetric perceptron. We analyze the stability of the stored memories: basins of attraction obtained by the Hebbian unlearning technique are found to be comparable in size to those obtained with the symmetric perceptron, while the two algorithms are found to converge in the same region of Gardner's space of interactions, having followed similar learning paths. A geometric interpretation of Hebbian unlearning is proposed to explain its optimal performance. Because the Hopfield model is also a prototypical model of a disordered magnetic system, it might be possible to translate our results to other models of interest for memory storage in materials.
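As a concrete illustration of the unlearning procedure mentioned above, the following is a minimal NumPy sketch (not the authors' code): starting from the Hebbian couplings, the network is relaxed from a random configuration to a fixed point s*, whose Hebbian contribution is then subtracted from the couplings, J_ij → J_ij − (ε/N) s*_i s*_j. The parameter values N, P, ε, and the number of unlearning steps below are illustrative choices, not those used in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N neurons, P random binary patterns (load alpha = P/N).
N, P = 200, 60
eps = 0.01            # unlearning strength epsilon (illustrative value)
n_unlearning = 500    # number of unlearning ("dreaming") steps (illustrative value)

# Hebbian couplings J_ij = (1/N) sum_mu xi_i^mu xi_j^mu, with zero diagonal.
xi = rng.choice([-1, 1], size=(P, N))
J = (xi.T @ xi).astype(float) / N
np.fill_diagonal(J, 0.0)

def relax(J, s, max_sweeps=100):
    """Zero-temperature asynchronous dynamics, run until a fixed point is reached."""
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            new = 1 if J[i] @ s >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# Hebbian unlearning: relax from a random configuration to a (typically spurious)
# fixed point s*, then subtract its Hebbian contribution from the couplings.
for _ in range(n_unlearning):
    s = relax(J, rng.choice([-1, 1], size=N))
    J -= (eps / N) * np.outer(s, s)
    np.fill_diagonal(J, 0.0)

# Simple check: how many stored patterns are fixed points of the dynamics?
stable = sum(np.array_equal(relax(J, p.copy()), p) for p in xi)
print(f"{stable}/{P} patterns are fixed points after unlearning")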
Supervised perceptron learning vs unsupervised Hebbian unlearning: Approaching optimal memory retrieval in Hopfield-like networks
Research Article | March 11 2022
Special Collection: Memory Formation
Marco Benedetti,1 Enrico Ventura,1,2 Enzo Marinari,1,3,a) Giancarlo Ruocco,1,a) and Francesco Zamponi2,a)

1Dipartimento di Fisica, Sapienza Università di Roma, P.le A. Moro 2, 00185 Roma, Italy
2Laboratoire de Physique de l’Ecole Normale Supérieure, ENS, Université PSL, CNRS, Sorbonne Université, Université de Paris, F-75005 Paris, France
3CNR-Nanotec and INFN Sezione di Roma, Roma, Italy

a)Authors to whom correspondence should be addressed: enzo.marinari@uniroma1.it; giancarlo.ruocco@uniroma1.it; and francesco.zamponi@ens.fr
Note: This paper is part of the JCP Special Topic on Memory Formation.
J. Chem. Phys. 156, 104107 (2022)
Article history
Received: January 04 2022
Accepted: February 14 2022
Citation
Marco Benedetti, Enrico Ventura, Enzo Marinari, Giancarlo Ruocco, Francesco Zamponi; Supervised perceptron learning vs unsupervised Hebbian unlearning: Approaching optimal memory retrieval in Hopfield-like networks. J. Chem. Phys. 14 March 2022; 156 (10): 104107. https://doi.org/10.1063/5.0084219