Eigenvalue distribution of large random matrices arising in deep neural networks: Orthogonal case

This paper deals with the distribution of singular values of the input–output Jacobian of deep untrained neural networks in the limit of infinite width. The Jacobian is a product of random matrices in which independent weight matrices alternate with diagonal matrices whose entries depend on the corresponding column of the nearest-neighbor weight matrix. The problem has been considered in several recent studies, both for Gaussian weights and biases and for weights that are Haar-distributed orthogonal matrices with Gaussian biases. Based on a free probability argument, it was claimed in those papers that, in the limit of infinite width (matrix size), the singular value distribution of the Jacobian coincides with that of an analog of the Jacobian with special random but weight-independent diagonal matrices, a case well known in random matrix theory. In this paper, we justify this claim for Haar-distributed orthogonal weight matrices and Gaussian biases. This, in particular, justifies the validity of the mean-field approximation in the infinite-width limit for deep untrained neural networks and extends the macroscopic universality of random matrix theory to this new class of random matrices.
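To make the object under study concrete, the following is a minimal numerical sketch (not from the paper) of the Jacobian described in the abstract: layer-wise, J = D_L W_L ⋯ D_1 W_1, with Haar-distributed orthogonal weights W_l, Gaussian biases b_l, and diagonal matrices D_l collecting the activation derivatives at the pre-activations. The width n, depth L, bias scale sigma_b, and the tanh activation are all illustrative assumptions, not choices made in the paper.

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(0)
n, L, sigma_b = 500, 10, 0.1   # width, depth, bias scale (assumed values)

x = rng.standard_normal(n)     # network input
J = np.eye(n)                  # input-output Jacobian, built layer by layer
for _ in range(L):
    W = ortho_group.rvs(n, random_state=rng)  # Haar-distributed orthogonal weights
    b = sigma_b * rng.standard_normal(n)      # Gaussian biases
    z = W @ x + b                             # pre-activations of this layer
    D = np.diag(1.0 - np.tanh(z) ** 2)        # diagonal matrix of phi'(z), phi = tanh
    J = D @ W @ J                             # chain rule: J = D_L W_L ... D_1 W_1
    x = np.tanh(z)                            # forward pass to the next layer

s = np.linalg.svd(J, compute_uv=False)        # singular values of the Jacobian
print(s[:5])
```

At large n, the empirical distribution of these singular values is the quantity whose infinite-width limit the paper identifies with that of the analogous product having weight-independent diagonal factors.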
Note: This paper is part of the Special Collection in Honor of Freeman Dyson.
L. Pastur, "Eigenvalue distribution of large random matrices arising in deep neural networks: Orthogonal case," J. Math. Phys. 63(6), 063505 (2022). https://doi.org/10.1063/5.0085204