We complete Dyson’s dream by cementing the links between symmetric spaces and classical random matrix ensembles. Previous work has focused on a one-to-one correspondence between symmetric spaces and many, but not all, of the classical random matrix ensembles. This work shows that we can completely capture all of the classical random matrix ensembles from Cartan’s symmetric spaces through the use of alternative coordinate systems. In the end, we have to let go of the notion of a one-to-one correspondence. We emphasize that the KAK decomposition traditionally favored by mathematicians is merely one coordinate system on the symmetric space, albeit a beautiful one. However, other matrix factorizations, especially the generalized singular value decomposition from numerical linear algebra, reveal themselves to be perfectly valid coordinate systems, so that one symmetric space can lead to many classical random matrix theories. We establish the connection between this numerical linear algebra viewpoint and the theory of generalized Cartan decompositions. This, in turn, allows us to produce yet more random matrix theories from a single symmetric space. Yet again, these random matrix theories arise from matrix factorizations, though ones that, to our knowledge, have not appeared in the literature.

## I. INTRODUCTION

Random matrix theory (RMT) is a vast subject that touches many fields of mathematics, science, and engineering. For such a subject, it is helpful to have a means of cataloging the objects to be studied and a theory that covers the objects in the catalog. In 1962, Dyson^{1–4} was the first to propose a systematic approach to RMT. In the beginning of Ref. 4, he states his noble intent:

To bring together and unify three trends of thought which have grown up independently during the last thirty years.

which he enumerates as (i) group representations including time-inversion, (ii) Weyl’s theory of matrix algebras, and (iii) RMT.

Around a decade later, Dyson hit upon the idea that symmetric spaces should play a key role (Ref. 5, Sec. V). Dyson’s suggestion was taken up in famous papers by Zirnbauer *et al.*^{6,7} and others.^{8,9} These papers mainly focus on the noncompact cases. On the mathematical side, inspired by Katz and Sarnak,^{10,11} Dueñez detailed connections to RMT for the compact symmetric spaces.^{12,13}

Nonetheless, we felt there was a gap. When one juxtaposes (i) the well-established theory of classical random matrix ensembles with (ii) the RMTs associated with symmetric spaces, ensembles are missing. In particular, only very special Jacobi ensembles (the left side of Fig. 2) seem to be making the symmetric space list. More precisely, if one starts with a symmetric space, one has to make what we call a coordinate system choice, what others might call a matrix factorization choice. This choice has been the map Φ : *K* × *A* → *G*/*K*; (*k*, *a*) ↦ *kaK* of Cartan, which we could call the KAK decomposition. (Although it is often called Cartan’s KAK decomposition, Cartan was not aware of *G* = *KAK*.) See Fig. 1.

We show that coordinate systems from the generalized Cartan (K_{1}AK_{2}) decomposition associate a single symmetric space with multiple RMTs. Letting go of the historical bias toward the KAK decomposition, the full set of Jacobi ensembles (the right side of Fig. 2) emerges, thereby leading to the complete list of classical random matrix ensembles. Of course, there is much mathematical precedent in differential geometry for letting go of any one special coordinate system.

### A. Classical random matrix ensembles

The objects that we are interested in are the classical random matrix ensembles. Well-established conventions in random matrix theory agree that the ensembles in this class consist of the Hermite, Laguerre, Jacobi, and circular ensembles, built from matrices of integer sizes whose entries are real, complex, or quaternion. (Dyson denoted *β* = 1, 2, 4, and other authors in mathematics use *α* = 2/*β* = 2, 1, 1/2.)

The term “classical random matrix ensembles” may be found in the following well-known references:

- Chapter 1 of Forrester’s paper^{14} has the title “Classical Random Matrix Ensembles,” and the even sections (1.2, 1.4, 1.6, and 1.8) are explicitly Hermite, circular, Laguerre, and Jacobi, in that order. (The odd sections contain discussions related to these ensembles.)
- Forrester’s comprehensive book^{15} deals exclusively with the Hermite, Laguerre, Jacobi, and circular ensembles in Chaps. 1–3, where the preface states: “eigenvalue p.d.f. of the various classical *β*-ensembles given in Chaps. 1–3.” Later, in Chap. 5.4, he further justifies the terminology by pointing out the four weights from classical orthogonal polynomial theory.
- In Ref. 16, Chap. 4.1 is entitled “Joint distribution of eigenvalues in the classical matrix ensembles” and covers exactly the Hermite, Laguerre, Jacobi, and circular ensembles.
- The first author’s 2005 *Acta Numerica* article (Ref. 17, Sec. 4).

If one starts with the list of ten infinite families of Cartan’s symmetric spaces (we will not discuss the finitely many exceptional types) and asks which classical random matrix ensembles are covered, answers can be found in Ref. 8 (Table 1), Ref. 9 (Table 1) (noncompact cases), and Ref. 13 (Table 1) (compact cases). However, turning the question around, if one starts with the classical random matrix ensembles and asks whether symmetric spaces are adequate to explain all of them, we find that the answer is a big “almost,” as the Jacobi ensembles are not adequately covered. To be precise, the Jacobi densities associated with the compact symmetric spaces BDI, AIII, and CII from the previous attempts via the KAK decomposition are the following joint probability densities with *β* = 1, 2, 4 (up to constant) and integers *p* ≥ *q*,

where we observe that the powers of the $x_j$’s are restricted to $\frac{\beta}{2}-1$. The possible parameters of (1.1) are described in the left side of Fig. 2. Four additional compact symmetric spaces, DIII, BD, C, and CI, add four more Jacobi ensembles,^{13} but they are not sufficient to cover the two-dimensional parameter set of the Jacobi ensembles.

### B. Coordinate systems on the Grassmannian manifold

It is always interesting when a branch of applied mathematics reverses direction and provides guidance to pure mathematics. In this work, we focus on the role of the generalized singular value decomposition (GSVD) from numerical linear algebra.^{18,19}

From an applied viewpoint, the Jacobi ensembles are elegantly generated in software with commands such as `svdvals(randn(p,s), randn(q,s))` in languages such as Julia, which is computed by taking the GSVD of two i.i.d. normal matrices with the same number of columns.^{20,21} From a pure viewpoint, this is a pushforward of the uniform measure on the Grassmannian manifold onto a maximal Abelian subgroup *A* (with a fixed Weyl chamber) along the generalized Cartan (K_{1}AK_{2}) decomposition (Fig. 3).^{22,23}
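The Julia one-liner above has a direct analog outside Julia. Below is a minimal Python sketch (function name, sizes, and the use of SciPy are our own choices): the generalized eigenvalues of the pencil $(A^TA, B^TB)$ are $(c_j/s_j)^2$, where $(c_j, s_j)$ are the cosine/sine pairs of the GSVD of $(A, B)$, so the Jacobi points $x_j = c_j^2$ follow directly.

```python
import numpy as np
from scipy.linalg import eigh

def jacobi_sample(p, q, s, seed=None):
    """Sample s points of a (beta = 1) Jacobi ensemble via the GSVD of two
    i.i.d. Gaussian matrices sharing s columns (p, q >= s)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((p, s))
    B = rng.standard_normal((q, s))
    # Generalized eigenvalues lam_j of (A^T A, B^T B) equal (c_j / s_j)^2,
    # where (c_j, s_j) are the cosine/sine pairs of the GSVD of (A, B).
    lam = eigh(A.T @ A, B.T @ B, eigvals_only=True)
    return lam / (1.0 + lam)      # x_j = c_j^2, which lies in (0, 1)

x = jacobi_sample(8, 6, 4, seed=0)
print(x)
```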

For example, take a Grassmannian point with any *β* = 1, 2, 4 from O(*n*)/(O(*n* − *s*) × O(*s*)) (respectively, with complex or quaternionic unitary groups) and represent it by the *n* × *s* orthogonal (respectively, complex or quaternionic unitary) matrix *X*. [More precisely, we treat the Grassmannian manifold as the quotient $V_s(\mathbb{R}^n)/\mathrm{O}(s)$, where $V_s(\mathbb{R}^n)$ is the Stiefel manifold. We are allowed to multiply any *O* ∈ O(*s*) on the right side of *X*.] For any *p*, *q* ≥ *s* satisfying *p* + *q* = *n*, we have the following coordinate system of *X* arising from the GSVD^{24} of the first *p* rows and the last *q* rows of *X* (for an alternative viewpoint, see Ref. 25):

$$X = \begin{bmatrix} U & \\ & V \end{bmatrix}\begin{bmatrix} C \\ S \end{bmatrix}\cdot \mathrm{O}(s),$$

where *U*, *V* are *p* × *s*, *q* × *s* orthogonal (respectively, complex or quaternionic unitary) matrices and *C*, *S* are *s* × *s* diagonal matrices with cosine and sine values. The deduced joint probability densities^{21} (*p*, *q* ≥ *s*) are the following (up to constant):

$$\prod_{j<k}|x_j - x_k|^{\beta}\ \prod_{j=1}^{s} x_j^{\frac{\beta}{2}(p-s+1)-1}(1-x_j)^{\frac{\beta}{2}(q-s+1)-1},$$

where the case *q* = *s* represents the usual KAK decomposition case (1.1).
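As a sanity check of this coordinate system, one can verify numerically that the cosines and sines pair up: for *X* with orthonormal columns, $X_1^TX_1 + X_2^TX_2 = I_s$ forces $c_j^2 + s_j^2 = 1$. A short NumPy sketch (the sizes are our own choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, s, p = 7, 3, 4
q = n - p                                      # p, q >= s and p + q = n

# A Grassmannian point represented by an n x s matrix with orthonormal columns.
X, _ = np.linalg.qr(rng.standard_normal((n, s)))

c = np.linalg.svd(X[:p], compute_uv=False)     # cosines: singular values of top block
sn = np.linalg.svd(X[p:], compute_uv=False)    # sines: singular values of bottom block

# X1^T X1 + X2^T X2 = I_s, so after matching the ordering, c^2 + s^2 = 1.
assert np.allclose(np.sort(c**2) + np.sort(sn**2)[::-1], 1.0)
```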

As can be seen, the classical Jacobi parameters are quantized as they are integer multiples of *β*/2. Random matrix models that remove this quantization, thereby going beyond the classical, appear in Refs. 20, 26, and 27. In Sec. VII, we also illustrate that some Jacobi ensembles can arise from symmetric spaces that are outside the traditional quantization (Fig. 6).

### C. Contributions of this paper

This work shows that a symmetric space can be associated with multiple random matrix theories (Fig. 4). Letting go of the arbitrariness of the choice of the KAK decomposition coordinate system allows us to choose other coordinate systems on symmetric spaces, thereby leading us to the complete list of classical random matrix ensembles (Secs. V, VI, VIII, and IX). Many of these coordinate systems are better known as matrix factorizations, used widely in matrix models of the classical ensembles.^{15,17,20,26,27} However, in Sec. VII, we compute new families of Jacobi ensemble parameters from coordinate systems that have not appeared before.

This work also endeavors to make the Lie theory more widely accessible by simplifying and modernizing key ideas and proofs in Ref. 28. Cartan’s theory^{29–32} as developed by Helgason^{28,33} is a crowning mathematical achievement, and it is our hope to open up this theory for the benefit of all. Indeed, in Ref. 34 (p. 428), Helgason writes about the difficulty of understanding Cartan’s writings:

[Cartan] was one of the great mathematicians of the period, but his papers were quite a challenge. Hermann Weyl, in reviewing a book by Cartan from 1937, writes: “Cartan is undoubtedly the greatest living master in differential geometry… I must admit that I found the book, like most of Cartan’s papers, hard reading.”

In the same vein, while we are admirers of Helgason’s extensive work, we authors must admit that we, in turn, found Refs. 28 and 33 hard reading as well, and this paper attempts to introduce the theory by couching the ideas in terms of what we call ping pong operators.

Summarizing our work, we have the following:

- We use the coordinate systems of the K_{1}AK_{2} decomposition to connect a single symmetric space to multiple random matrices (Fig. 4), completing the list of associated classical random matrix ensembles.
- We translate some of the key concepts in Cartan’s theory of symmetric spaces into easier-to-follow linear algebra (Sec. III).
- We provide coordinate systems (matrix factorizations) of symmetric spaces that have not been discussed in the random matrix context, obtaining new parameter families of the Jacobi ensemble (Sec. VII).

## II. BACKGROUND

### A. Joint densities of classical random matrix ensembles

Dyson introduced the *β* = 1, 2, 4 circular ensembles^{1,4} in 1962. Earlier expositions related to the circular ensembles can be found in Hurwitz^{35} and Weyl.^{36} The Hermite ensembles were introduced by Wigner.^{37–39} The Laguerre and Jacobi ensembles appear as early as 1939 in the statistics literature in the work of Fisher,^{40} Roy,^{41} and Hsu.^{42} The physics literature first touches upon the idea of the Laguerre and Jacobi ensembles with the 1963 thesis of Leff.^{43} The following are the joint probability densities (without normalization constants) of the classical random matrix ensembles (*β* = 1, 2, 4):

- Circular: $\prod_{j<k}|e^{i\theta_j} - e^{i\theta_k}|^{\beta}$, $(\theta_1,\ldots,\theta_n) \in [0,2\pi]^n$;
- Hermite: $\prod_{j<k}|\lambda_j - \lambda_k|^{\beta}\, e^{-\sum \lambda_j^2/2}$, $(\lambda_1,\ldots,\lambda_n) \in \mathbb{R}^n$;
- Laguerre: $\prod_{j<k}|\lambda_j - \lambda_k|^{\beta}\,\prod_{j=1}^{m}\lambda_j^{\alpha}\, e^{-\sum \lambda_j/2}$, $(\lambda_1,\ldots,\lambda_m) \in [0,\infty)^m$;
- Jacobi: $\prod_{j<k}|x_j - x_k|^{\beta}\,\prod_{j=1}^{m}x_j^{\alpha_1}(1-x_j)^{\alpha_2}$, $(x_1,\ldots,x_m) \in [0,1]^m$.

In particular, the parameters $\alpha, \alpha_1, \alpha_2 > -1$ are quantized: they take the values $\frac{\beta}{2}(N+1)-1$ for some non-negative integer $N$ (i.e., $\alpha + 1$ is an integer multiple of $\frac{\beta}{2}$).
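For concreteness, the *β* = 1 Hermite density above is the eigenvalue density of a symmetrized Gaussian matrix. A minimal NumPy sketch (the normalization below is one common convention, not the only one):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
G = (A + A.T) / np.sqrt(2)      # GOE-type symmetrization (beta = 1)
eigs = np.linalg.eigvalsh(G)    # these n eigenvalues follow the Hermite density
print(eigs)
```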

### B. Symmetric space and the generalized Cartan decomposition

In this section, we introduce the theory related to the generalized Cartan decomposition. Readers without prior knowledge of Lie theory may prefer to skip to Sec. III, which follows a more modern linear algebra approach.

Let $G/K_\sigma$ be a Riemannian symmetric space with a real reductive noncompact Lie group $G$ and its maximal compact subgroup $K_\sigma$. Let $\sigma$ be the Cartan involution on $\mathfrak{g} \coloneqq \mathrm{Lie}(G)$. Then, $\mathfrak{g} = \mathfrak{k}_\sigma + \mathfrak{p}_\sigma$ is the Cartan decomposition. Let $\tau$ be another involution on $\mathfrak{g}$ such that $\tau\sigma = \sigma\tau$, and let $\mathfrak{g} = \mathfrak{k}_\tau + \mathfrak{p}_\tau$ be the $\pm 1$ eigenspace decomposition by $\tau$. Denote by $K_\tau$ the analytic subgroup of $G$ with tangent space $\mathfrak{k}_\tau$. Let $\mathfrak{a}$ be a maximal Abelian subalgebra of $\mathfrak{p}_\tau \cap \mathfrak{p}_\sigma$ and define $A \coloneqq \exp(\mathfrak{a})$. We introduce the (noncompact) *generalized Cartan decomposition* (Ref. 22, Theorem 4.1).

**Theorem 2.1** *(generalized Cartan decomposition, K_{1}AK_{2} decomposition). With the above setting, we have the following decomposition of $G$:*

$$G = K_\tau A K_\sigma.$$

*That is, for any $g \in G$, there exist $k_1 \in K_\tau$, $k_2 \in K_\sigma$, and $a \in A$ such that $g = k_1 a k_2$.*

We often use the equivalent name “K_{1}AK_{2} decomposition” for simplicity. Note that if *τ* = *σ* (i.e., *K* = *K*_{σ} = *K*_{τ}), we recover the usual KAK decomposition, *G* = *KAK*. The generalized Cartan decomposition in the work of Flensted-Jensen^{22} was originally intended for the case where *G* is noncompact. The compact analog was developed by Hoogenboom^{23} (Theorem 3.6).

**Theorem 2.2** *(generalized Cartan decomposition; compact case). Let $G/K_\sigma$ and $G/K_\tau$ be two compact Riemannian symmetric spaces. Let $\mathfrak{g} = \mathfrak{k}_\sigma + \mathfrak{p}_\sigma$ and $\mathfrak{g} = \mathfrak{k}_\tau + \mathfrak{p}_\tau$ be the corresponding eigenspace decompositions of $\mathfrak{g} = \mathrm{Lie}(G)$. Then, for a maximal Abelian subalgebra $\mathfrak{a}$ of $\mathfrak{p}_\sigma \cap \mathfrak{p}_\tau$ and $A = \exp(\mathfrak{a})$, we have*

$$G = K_\tau A K_\sigma.$$

From the space of linear functionals $\mathfrak{a}^*$, we collect the eigenvalues of the adjoint representation (the commutator) of $\mathfrak{a}$ on $\mathfrak{g}$ and call these eigenvalues the roots of the K_{1}AK_{2} decomposition. By fixing a Weyl chamber, we obtain a set of positive roots $\Sigma^+$. Details of the theory of the K_{1}AK_{2} decomposition and its root system can be found in Flensted-Jensen,^{22,44} Hoogenboom,^{23} Matsuki,^{45–47} and Kobayashi.^{48} The K_{1}AK_{2} decomposition is also studied in the context of spherical harmonics and intertwining functions.^{49,50} The root space $\mathfrak{g}_\alpha$ of a root $\alpha$ is refined further into the $\pm 1$ eigenspaces of $\sigma\tau$; denote the two dimensions by $m_\alpha^{\pm}$.

Let *dk*_{σ}, *dk*_{τ} be the Haar measures of *K*_{σ}, *K*_{τ}, respectively. Let *dH* be the Euclidean measure on $a$. The Jacobian of the K_{1}AK_{2} decomposition is the following:

**Theorem 2.3** *(Jacobian of the K_{1}AK_{2} decomposition*^{23,44}*). Let $dg$ be the Haar measure on $G$, and let $H \in \mathfrak{a}$. We have the Jacobian and the integral formula corresponding to the change of variables associated with the K_{1}AK_{2} decomposition,*

$$\int_G f(g)\,dg = c \int_{K_\tau}\int_{\mathfrak{a}}\int_{K_\sigma} f\big(k_1 \exp(H)\, k_2\big)\, dk_\tau\, d\mu(H)\, dk_\sigma, \tag{2.1}$$

*where for noncompact* $G$*,*

$$d\mu(H) = \prod_{\alpha \in \Sigma^+} \big(\sinh \alpha(H)\big)^{m_\alpha^+}\big(\cosh \alpha(H)\big)^{m_\alpha^-}\, dH, \tag{2.2}$$

*and for compact* $G$*,*

$$d\mu(H) = \prod_{\alpha \in \Sigma^+} \big(\sin \alpha(H)\big)^{m_\alpha^+}\big(\cos \alpha(H)\big)^{m_\alpha^-}\, dH. \tag{2.3}$$
Similar results on the KAK decomposition and the restricted roots of symmetric spaces can be found in standard Lie group textbooks.^{28,33,51–53} In the KAK case, the Jacobian (2.2) reduces to $\prod (\sinh \alpha(H))^{m_\alpha}$, as there is no $-1$ eigenspace of $\sigma\tau$, so that $m_\alpha = m_\alpha^{+}$.^{33,54,55}

Theorems 2.1–2.3 are decompositions of the group *G*. These decompositions can also be applied to the symmetric space *G*/*K*_{σ}. The following map Φ is the K_{1}AK_{2} decomposition of the Riemannian symmetric space *G*/*K*_{σ}. The map Φ is also called the Hermann action,^{56,57} nonstandard polar coordinates,^{58} and non-Cartan parameterization.^{59} In the KAK case (*K* = *K*_{σ} = *K*_{τ}), Helgason called this the polar coordinate decomposition^{33} and credits Cartan^{30} for this map. Since the *G*-invariant measure of *G*/*K* inherits the Haar measure of *G*, the identical Jacobian is obtained for the decomposition of a symmetric space.^{60}

**Theorem 2.4** *(K_{1}AK_{2} decomposition of $G/K_\sigma$). Given a K_{1}AK_{2} decomposition $G = K_\sigma A K_\tau$ with the Riemannian symmetric space $G/K_\sigma$, we have the map $\Phi$,*

$$\Phi : K_\tau \times A \to G/K_\sigma; \quad (k, a) \mapsto kaK_\sigma. \tag{2.4}$$

*Suppose $H \in \mathfrak{a}$, $a = \exp(H)$. For the $G$-invariant measure $dx$ of $G/K_\sigma$, $dk_\tau = \mathrm{Haar}(K_\tau)$, and the Euclidean measure $dH$ on $\mathfrak{a}$, we have $dx = dk_\tau\, d\mu(H)$, where the Jacobian $d\mu(H)$ is given in (2.2) if $G$ is noncompact and in (2.3) if $G$ is compact.*

**Remark 2.5** *($G/K \cong P$: $gK$ (coset) or $p \in P$?).* In the standard KAK decomposition, the Jacobian (2.2) [respectively, (2.3)] only has sinh (respectively, sin) terms, as we discussed above. This result can be found in much of the literature, where some authors^{28,44,55,61} use $\prod \sinh \alpha(H)$ as the Jacobian, whereas other authors^{13,54,62} use $\prod \sinh(\alpha(H)/2)$. This gap is due to the difference in the realization of a symmetric space $G/K$ as a subset $P \subset G$. The former uses the right coset representative, i.e., $G/K \to P$ as $gK \mapsto p$, where $g = pk$ is its group level Cartan decomposition. Then, the action of $G$ on $G/K$ is given as $(g_1, g_2K) \mapsto g_1g_2K$. The latter authors use the map $G/K \to P$ such that $gK \mapsto g(\sigma g)^{-1}$, where $\sigma$ is the group level involution. The $G$-action is $(g_1, g_2) \mapsto g_1 g_2 (\sigma g_1)^{-1}$, $g_1 \in G$, $g_2 \in P$. In terms of Theorem 2.4, the latter gives the map $\Phi$ such that $(k, a) \mapsto ka^2k^{-1}$, since $a^2 = \exp(2H)$ where $a = \exp(H)$. Moreover, these two identifications define the map $\Phi : K \times A \to P$ with the same $k, a$ as $\Phi(k, a) = kak^{-1}$ and $\Phi(k, a) = ka^2k^{-1}$, respectively.

**Example 2.6** *($G/K$ vs $P$: a symmetric positive definite matrix).* Let us take a look at the two realizations in Remark 2.5 for $G/K = \mathrm{GL}(n,\mathbb{R})/\mathrm{O}(n)$, where $P$ is the set of all symmetric positive definite matrices. Let $S$ be a fixed positive definite symmetric matrix, with eigendecomposition $S = Q\Lambda Q^{T}$, $Q \in \mathrm{O}(n)$. The coset representation of $S$ is $Q\Lambda \cdot \mathrm{O}(n) \in G/K$, since $Q\Lambda = (Q\Lambda Q^{T})Q$ is the polar decomposition. With the realization $P \cong G/K$, the point in $G/K$ is represented by the matrix $S = Q\Lambda Q^{T}$ itself.
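The two realizations can be checked numerically. A small NumPy sketch (the matrix sizes and the way $S$ is generated are our own choices): the polar decomposition of $Q\Lambda$ has SPD factor $Q\Lambda Q^T$ and orthogonal factor $Q$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# A random symmetric positive definite S with eigendecomposition S = Q L Q^T.
X = rng.standard_normal((n, n))
S = X @ X.T + n * np.eye(n)
lam, Q = np.linalg.eigh(S)
L = np.diag(lam)

# Realization 1 (coset representative Q L . O(n)): Q L = (Q L Q^T) Q is exactly
# the polar decomposition of Q L, with SPD factor Q L Q^T and orthogonal factor Q.
assert np.allclose((Q @ L @ Q.T) @ Q, Q @ L)

# Realization 2 (P = G/K): the same point is represented by the SPD matrix itself.
assert np.allclose(Q @ L @ Q.T, S)
```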

Finally, we have the Lie algebra counterpart of Theorem 2.4 when *K* = *K*_{σ} = *K*_{τ}.

**Theorem 2.7.** *For a noncompact Riemannian symmetric space $G/K$ with the Cartan decomposition $\mathfrak{g} = \mathfrak{k} + \mathfrak{p}$, let $\mathfrak{a}$ be a maximal Abelian subalgebra of $\mathfrak{p}$. We have*

$$\psi : K \times \mathfrak{a} \to \mathfrak{p}; \quad (k, H) \mapsto kHk^{-1}, \tag{2.6}$$

*equivalently the decomposition $\mathfrak{p} = \cup_{k \in K}\, k\mathfrak{a}k^{-1}$, with the Jacobian $d\mu$ given as*

$$d\mu(H) = \prod_{\alpha \in \Sigma^+} |\alpha(H)|^{m_\alpha}\, dH,$$

*where $H \in \mathfrak{a}$ and $\Sigma$ is the restricted root system with dimensions $m_\alpha$. The measure on $\mathfrak{p}$ is the Euclidean measure.*

### C. A symmetric space: one RMT or many RMTs?

The answer to the title question of this section is that both “one” and “many” can be construed as correct. To explain how this is possible requires teasing apart the assumptions behind the words “associated with.” Certainly, Refs. 6, 8, 9, and 13 associate one random matrix with one symmetric space. However, the example of the GSVD coordinate systems discussed in Sec. I B associates multiple Jacobi densities with one symmetric space, the Grassmannian manifold. In Ref. 59, another example is illustrated as the “non-Cartan parameterization” for the special case of (*G*, *K*_{σ}, *K*_{τ}) = (U(*n*), O(*n*), U(*p*) × U(*q*)). (A similar approach may be found in Ref. 63.) This is discussed in Sec. VII B.

The reconciliation is that it is indeed true that the required maps (2.4) when *K* = *K*_{σ} = *K*_{τ}, i.e., Φ(*k*, *a*) = *kaK* = *kak*^{−1} (compact), or the map (2.6), *ψ*(*k*, *H*) = *kHk*^{−1} (noncompact), lead to a unique random matrix theory associated with a given symmetric space *G*/*K*. This is unique in the sense that any geodesic on the symmetric space *G*/*K* can be transformed to a geodesic on *A* with the above maps.

However, if we relax the condition so that we are allowed to choose $K_\tau$ under the generalized Cartan decomposition framework, we can associate multiple random matrix theories to one symmetric space. The GSVD coordinate systems in Sec. I B illustrate this viewpoint. The real Grassmannian manifold $G/K = \mathrm{O}(n)/(\mathrm{O}(n-s) \times \mathrm{O}(s))$ has the map $\Phi: (k, a) \mapsto kaK$ for $K = K_\sigma = K_\tau$ explicitly written as $X = \begin{bmatrix} U & \\ & V \end{bmatrix}\begin{bmatrix} C \\ S \end{bmatrix} \cdot \mathrm{O}(s)$, where $U$, $V$ are $(n-s) \times s$, $s \times s$ orthogonal matrices. On the other hand, if we let $K_\tau = \mathrm{O}(p) \times \mathrm{O}(q)$, we have multiple maps $\Phi: (k_\tau, a) \mapsto k_\tau aK$ written as $X = \begin{bmatrix} U & \\ & V \end{bmatrix}\begin{bmatrix} C \\ S \end{bmatrix} \cdot \mathrm{O}(s)$, where $U$, $V$ are $p \times s$, $q \times s$ orthogonal matrices.
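This multiplicity is easy to see numerically. In the NumPy sketch below (the sizes and splits are our own choices), one fixed Grassmannian point $X$ yields a different set of cosine coordinates for each admissible split $p + q = n$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, s = 8, 3
X, _ = np.linalg.qr(rng.standard_normal((n, s)))   # one fixed Grassmannian point

# Each choice of K_tau = O(p) x O(q) with p + q = n and p, q >= s gives its own
# GSVD coordinate system -- hence its own cosine coordinates -- for the same X.
cosines = {p: np.linalg.svd(X[:p], compute_uv=False) for p in (3, 4, 5)}
for p, c in cosines.items():
    print(p, np.round(c, 3))
```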

Starting from Sec. V, we discuss (i) random matrices arising from the K_{1}AK_{2} decompositions of compact symmetric spaces (Theorem 2.4 or 2.2) and (ii) random matrices arising from the Lie algebra decomposition of noncompact symmetric spaces (Theorem 2.7). The associated decompositions are well explained by matrix factorizations in numerical linear algebra. As we pointed out, the resulting Jacobi ensembles cover the full parameter set of the classical Jacobi densities, thereby completing the classification from the classical RMT point of view.

## III. CARTAN’S IDEA: A MODERNIZED APPROACH

The Jacobian of the KAK (K_{1}AK_{2}) decomposition, equivalently the determinant of the differential of the map Φ : *K* × *A* → *P* (in Theorem 2.4 and Remark 2.5), is computed in several references.^{28,54,55} The proof of (2.2) can also be found in Refs. 23 and 44. However, the proof can be inaccessible to some audiences. Meanwhile, individual cases of the KAK decomposition, recognized as matrix factorizations, show up in many areas of mathematics, and some were discovered in various forms by specialists in numerical linear algebra. Motivated by random matrix theory (and sometimes perturbation theory in numerical analysis), the Jacobians of these factorizations were often computed case by case using matrix differentials and the wedging of independent elements.^{15,21,26,64,73}

In this section, we provide a generalization of such individual Jacobian computations and compare it to the general technique Helgason proposed. With an appropriate translation of the terminology and maps of Lie theory into linear algebra, we observe that the two methods are indeed the same process, long illustrated in different languages. We start by introducing some important concepts in Lie theory in a way accessible to an audience with a good background in linear algebra and perhaps some basic geometry. Then, in Table II, we present a line-by-line correspondence between Helgason’s derivation and the proof by matrix differentials.

### A. The ping pong operator, ping pong vectors, and ping pong subspaces

We will start with a concrete 2 × 2 linear operator so as to establish the notions of the *ping pong operator*, *ping pong vectors*, and *ping pong subspaces*, and the relationship to eigenvectors. Then, we will define a “bigger” linear operator ad_{H} that acts on matrix spaces, two dimensions at a time, exactly in the manner we are about to describe.

We introduce the 2 × 2 matrix

$$M = \begin{pmatrix} 0 & \alpha \\ \alpha & 0 \end{pmatrix},$$

which we will call a *2 × 2 ping pong operator*, and we will call $\binom{1}{0}$ and $\binom{0}{1}$ the *ping pong vectors* of $M$, in that $M$ bounces each of these two vectors into $\alpha$ times the other,

$$M\binom{1}{0} = \alpha\binom{0}{1}, \qquad M\binom{0}{1} = \alpha\binom{1}{0}.$$

Furthermore, $M$ has eigenvectors $\binom{1}{1}$, $\binom{1}{-1}$, with eigenvalues $\alpha$, $-\alpha$. We will call the eigenvalue a *root* of $M$.

Also worth pointing out are the matrix exponential and matrix sinh of $M$,

$$e^{M} = \begin{pmatrix} \cosh\alpha & \sinh\alpha \\ \sinh\alpha & \cosh\alpha \end{pmatrix}, \qquad \sinh M = \begin{pmatrix} 0 & \sinh\alpha \\ \sinh\alpha & 0 \end{pmatrix},$$

and thus, we see that $\sinh M$ is another ping pong operator, with scaling $\sinh\alpha$. Figure 5 plots the action of a ping pong matrix and its exponential, with the notation that we will use in Secs. III D and III E, i.e., the ping pong operator is denoted ad_{H}, *p*_{j} and *k*_{j} are the ping pong vectors, and *x*_{j} and *θx*_{j} are the eigenvectors. The right side of Fig. 5 shows the action of *e*^{M} and portrays sinh(*M*) as a projection of *e*^{M} on the *p*_{j} direction.

We now go beyond 2 × 2 matrices and suggest the more general 2*N* × 2*N* ping pong matrix $M_N$, with $N$ roots $\alpha_1, \ldots, \alpha_N$, $N$ pairs of ping pong vectors $(k_1, p_1), \ldots, (k_N, p_N)$, and eigenvectors $(x_1, y_1), \ldots, (x_N, y_N)$,

$$M_N = \begin{pmatrix} 0 & \alpha_1 & & & \\ \alpha_1 & 0 & & & \\ & & \ddots & & \\ & & & 0 & \alpha_N \\ & & & \alpha_N & 0 \end{pmatrix},$$

where the $2j-1$ and $2j$ positions of these vectors are $0$ or $\pm 1$ and all other entries are $0$. The matrices $\exp(M_N)$ and $\sinh M_N$ are block versions of the 2 × 2 case.

We may define the subspaces $\mathfrak{k}$ and $\mathfrak{p}$ (using the “mathfrak” Fraktur letters “k” and “p”) to be the span of the $k_j$ and the $p_j$, respectively. Note that $\mathfrak{k}$ and $\mathfrak{p}$ are orthogonal complements as subspaces. A key “ping pong” relationship between these subspaces is that

$$M_N\, \mathfrak{k} \subseteq \mathfrak{p}, \qquad M_N\, \mathfrak{p} \subseteq \mathfrak{k}.$$

Thus, if we consider $M_N|_{\mathfrak{k}}$, the restriction of $M_N$ to $\mathfrak{k}$, we have an operator from $\mathfrak{k}$ to $\mathfrak{p}$. Evidently, $M_N|_{\mathfrak{k}}$ as a matrix may be obtained by taking the even rows and odd columns of $M_N$. The result is a diagonal matrix with the $\alpha_j$ on the diagonal. Similarly, $\sinh(M_N)|_{\mathfrak{k}}$ is a diagonal matrix with $\sinh(\alpha_j)$ on the diagonal. We then get the important result that

$$\det\!\big(\sinh(M_N)|_{\mathfrak{k}}\big) = \prod_{j=1}^{N} \sinh \alpha_j,$$

the product of the hyperbolic sines of the roots.
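The determinant identity is easy to confirm numerically. A Python sketch using SciPy's matrix functions (the roots $\alpha_j$ below are our own choices):

```python
import numpy as np
from scipy.linalg import block_diag, sinhm

alphas = np.array([0.5, 1.2, 2.0])
M = block_diag(*[np.array([[0.0, a], [a, 0.0]]) for a in alphas])  # 2N x 2N M_N

S = sinhm(M)                  # sinh(M_N) is again a ping pong matrix
# Restrict to k: even rows and odd columns of M_N (0-based: rows 1,3,5, cols 0,2,4).
S_k = S[1::2, 0::2]

assert np.allclose(np.linalg.det(S_k), np.prod(np.sinh(alphas)))
```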

Given a linear operator $\mathcal{L}$ on a vector space with nonzero eigenvalues $\pm\lambda$, the following lemma constructs a pair of ping pong vectors from $\mathcal{L}$:

**Lemma 3.1.** *For a linear operator $\mathcal{L}$ defined on any vector space, assume that $\pm\lambda$ are both nonzero eigenvalues of $\mathcal{L}$. Let $x$ and $y$ be the corresponding eigenvectors, i.e., $\mathcal{L}x = \lambda x$ and $\mathcal{L}y = -\lambda y$. Define two vectors $k \coloneqq x + y$, $p \coloneqq x - y$. Then, $k, p$ are ping pong vectors. Furthermore, we have for the operator $\exp(\mathcal{L})$,*

$$\exp(\mathcal{L})\,k = \cosh(\lambda)\,k + \sinh(\lambda)\,p.$$

The proof is a straightforward extension of the discussion in previous paragraphs.

**Remark 3.2.** For the reader who wants to know the upcoming significance of this fact for Jacobians of matrix factorizations, it turns out (or maybe as the reader already observed in Sec. II) that the Jacobian will be the product of sinh *α*’s. Just as the matrix $\sinh\begin{pmatrix} 0 & \alpha \\ \alpha & 0\end{pmatrix}$ takes one of the ping pong vectors to $\sinh\alpha$ times the other, the key piece of the differential map will consist of multiple ping pong relationships, each one sending one ping pong vector to another.

### B. The Kronecker product, linear operator ad_{X}, and its exponential

Lie theory picks out operators $\mathcal{L}$ that exactly have the properties in Sec. III A. Our vector spaces are now matrix spaces, and our operators are linear operators on a matrix space. We introduce the Lie bracket, denoted by $[X, Y]$, defined as $[X, Y] = XY - YX$ (the commutator). The Kronecker product notation is very helpful in this context. We define the Kronecker product notation as a linear operator on a matrix space. [Many authors would write vec(*BXA*^{T}) = (*A* ⊗ *B*)vec(*X*), but we omit the “vec” as we believe it is always clear from context. In a computer language such as Julia, one would write kron(A,B) * vec(X) = vec(B*X*A′)],

$$(A \otimes B)\, X \coloneqq BXA^{T}.$$

With this, we can express the Lie bracket with Kronecker products,

$$[X, Y] = XY - YX = \big(I \otimes X - X^{T} \otimes I\big)\, Y.$$

Consider the Lie bracket as a linear operator (determined by $X$) applied to $Y$, and call this operator $\mathrm{ad}_X$ (abbreviation for “adjoint”),

$$\mathrm{ad}_X \coloneqq I \otimes X - X^{T} \otimes I, \qquad \mathrm{ad}_X(Y) = [X, Y].$$

This will be the important ping pong operator $\mathcal{L}$. The operator exponential of $\mathrm{ad}_X$ (equivalently, the matrix exponential of $I \otimes X - X^{T} \otimes I$) is given in the following lemma:

**Lemma 3.3.** *For the linear operator $\mathrm{ad}_X$, the following holds for $e^{\mathrm{ad}_X} \coloneqq \sum_{n=0}^{\infty} \frac{(\mathrm{ad}_X)^n}{n!}$ and $\sinh \mathrm{ad}_X = (e^{\mathrm{ad}_X} - e^{-\mathrm{ad}_X})/2$:*

$$e^{\mathrm{ad}_X}\, Y = e^{X} Y e^{-X}, \qquad \sinh(\mathrm{ad}_X)\, Y = \tfrac{1}{2}\big(e^{X} Y e^{-X} - e^{-X} Y e^{X}\big).$$

*Proof.* Since $I \otimes X$ commutes with $X^{T} \otimes I$, we have

$$e^{\mathrm{ad}_X} = e^{I \otimes X - X^{T} \otimes I} = \big(e^{I \otimes X}\big)\big(e^{-X^{T} \otimes I}\big) = \big(e^{-X^{T}}\big) \otimes \big(e^{X}\big),$$

which sends $Y \mapsto e^{X} Y e^{-X}$. □
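The identity $e^{\mathrm{ad}_X} Y = e^X Y e^{-X}$ can be checked directly with Kronecker products. A NumPy/SciPy sketch (using the column-major vec convention that matches kron(A, B) vec(Y) = vec(B Y Aᵀ)):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 4
X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))

I = np.eye(n)
adX = np.kron(I, X) - np.kron(X.T, I)   # ad_X as an n^2 x n^2 matrix acting on vec(Y)

vec = lambda M: M.reshape(-1, order="F")       # column-major vec
unvec = lambda v: v.reshape(n, n, order="F")

lhs = unvec(expm(adX) @ vec(Y))         # e^{ad_X} applied to Y
rhs = expm(X) @ Y @ expm(-X)
assert np.allclose(lhs, rhs)
```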

### C. Antisymmetric and symmetric matrices: An important first example of symmetric space as ping pong spaces

In our first example, our vector space is the set of *n* × *n* real matrices. Consider

$$\mathfrak{k} = \{X : X^{T} = -X\} \ \text{(antisymmetric matrices)}, \qquad \mathfrak{p} = \{X : X^{T} = X\} \ \text{(symmetric matrices)}.$$

The ping pong operator that will bounce $\mathfrak{k}$ and $\mathfrak{p}$ around will be $\mathrm{ad}_H = I \otimes H - H^{T} \otimes I$, where $H$ is the diagonal matrix

$$H = \mathrm{diag}(h_1, \ldots, h_n).$$

Note that the operator $\mathrm{ad}_H$ sends an antisymmetric matrix to a symmetric matrix and a symmetric matrix to an antisymmetric matrix.

What does this have to do with Jacobians of matrix factorizations, such as the symmetric positive definite eigenvalue factorization? Consider a perturbation of *Q* when forming *S* = *Q*Λ*Q*^{T}. An infinitesimal antisymmetric perturbation *Q*^{T}*dQ* is mapped into *dS*, an infinitesimal symmetric perturbation. This is the very linear map from the tangent space of *Q* to that of *S* that we wish to understand, so perhaps it is not surprising that we would want to restrict our ping pong operator from $\mathfrak{k}$ to $\mathfrak{p}$. We invite the reader to check that the corresponding eigenmatrices and ping pong matrices of ad_{H} may be found in the first column of Table I.
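A few lines of NumPy confirm the ping pong between the two subspaces (the diagonal $H$ and the test matrices below are random, our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
H = np.diag(rng.standard_normal(n))   # H in the Abelian subalgebra of diagonal matrices
A = rng.standard_normal((n, n))
K = A - A.T                           # antisymmetric (the k side)
S = A + A.T                           # symmetric     (the p side)

ad = lambda X, Y: X @ Y - Y @ X       # ad_X(Y) = [X, Y]

assert np.allclose(ad(H, K), ad(H, K).T)     # [H, antisymmetric] is symmetric
assert np.allclose(ad(H, S), -ad(H, S).T)    # [H, symmetric] is antisymmetric
```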

### D. General $\mathfrak{k}$ and $\mathfrak{p}$ arise from an involution $\theta$

We proceed to construct more important general operators $\mathcal{L}$ that have the property in the assumption of Lemma 3.1. This is where the theory of Lie groups and symmetric spaces needs to be brought in. Upon doing so, we will obtain two linear spaces of matrices $\mathfrak{k}$, $\mathfrak{p}$, and also a space $\mathfrak{a}$.

For the reader not familiar with Lie groups, one need only imagine a continuous set of matrices that form a subgroup of the real, complex, or quaternion matrices. The tangent space $\mathfrak{g}$ is just a vector space of matrix differentials at the identity. One key example is the compact Lie group $\mathrm{O}(n)$ (the group of square orthogonal matrices) and its tangent space at the identity $\mathfrak{g}_{\mathrm{O}(n)}$: the set of antisymmetric matrices. Another key example is the set of all $n \times n$ invertible matrices $\mathrm{GL}(n,\mathbb{R})$ (a noncompact Lie group) and its tangent space $\mathfrak{g}_{\mathrm{GL}(n,\mathbb{R})}$, consisting of all $n \times n$ matrices.

Cartan noticed that important matrix factorizations start with two ingredients: the **tangent space** $\mathfrak{g}$ (at the identity) of a Lie group $G$ and an **involution** $\theta$ on $\mathfrak{g}$ (i.e., $\theta^2 = \mathrm{Id}$ and $\theta[X, Y] = [\theta X, \theta Y]$). An example of $\theta$ is $\theta(X) = -X^{T}$ on $\mathfrak{g}$ for $G = \mathrm{GL}(n,\mathbb{R})$. Among the matrices in $\mathfrak{g}$, we select two kinds: the ones fixed by the involution $\theta$ and the ones negated by $\theta$. Denote each set by $\mathfrak{k}$ and $\mathfrak{p}$,

$$\mathfrak{k} = \{X \in \mathfrak{g} : \theta X = X\}, \qquad \mathfrak{p} = \{X \in \mathfrak{g} : \theta X = -X\}.$$

[For $\mathrm{GL}(n,\mathbb{R})$, these are the antisymmetric and symmetric matrices, respectively.]

The next important player is $\mathfrak{a} \subset \mathfrak{p}$. Readers familiar with the singular value decomposition know the special role of diagonal matrices in the SVD, as they list the all-important “singular values.” Diagonal matrices have the nice property that linear combinations are still diagonal, they commute (the Lie bracket of any two is zero), and they are symmetric (the $\mathfrak{p}$ of our first example). The generalization is to take $\mathfrak{p}$ and find a maximal Abelian subspace, that is, the maximal subspace $\mathfrak{a} \subset \mathfrak{p}$ such that $[a_1, a_2] = 0$ for all $a_1, a_2 \in \mathfrak{a}$.

If $H \in \mathfrak{a}$, then $S = Q\Lambda Q^{T}$ is a symmetric positive definite eigendecomposition, with $\Lambda = e^{H}$. In the rest of the section, we will focus on factorizations of the form $Q\Lambda Q^{-1}$, where $\Lambda$ is a matrix exponential of $H \in \mathfrak{a}$. (These will be more general than eigendecompositions, as $Q$ may not be orthogonal and $\Lambda$ may not be diagonal.) In particular, we will compute the Jacobian of perturbations with respect to $Q$, holding $H$ constant; thus, necessarily, the Jacobian will be defined in terms of $H$.

From here, we assume that the Lie group *G* is noncompact. The compact case will be discussed after completing the noncompact case. Pick $H\u2208a$, and recall that ad_{H} is a linear operator on $g$. The operator ad_{H} will play the role of $L$, the ping pong operator. We decompose $g$ into the eigenspaces of ad_{H}. For any eigenpair (*α*_{j}, *x*_{j}) of ad_{H}, i.e., ad_{H}(*x*_{j}) = [*H*, *x*_{j}] = *α*_{j}*x*_{j}, we observe (for *α*_{j} ≠ 0)

which implies that the eigenvalues ±*α*_{j} always exist in pairs, with the corresponding eigenmatrices *x*_{j} and *θx*_{j}. This satisfies the assumption of Lemma 3.1, from which we can now construct our ping pong matrices,

with the ping pong relationship by the operator ad_{H},

In addition, the relationship under the operator $e^{\mathrm{ad}_H}$ follows:

The ping pong matrices *k*_{j}, *p*_{j}, eigenmatrices *x*_{j}, *θx*_{j} and the relationships (3.5), (3.6) are illustrated in Fig. 5.
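Since $e^{\mathrm{ad}_H}$ acts as conjugation, $e^{\mathrm{ad}_H}X = e^H X e^{-H}$, these ping pong relations can be checked numerically. Below is a minimal sketch for $g = gl(2, R)$, assuming the common normalization *k*_{j} = *x*_{j} + *θx*_{j}, *p*_{j} = *x*_{j} − *θx*_{j} (the scaling in (3.4) may differ by a constant):

```python
import numpy as np

# gl(2, R): H in a (diagonal), theta(X) = -X^T is the Cartan involution.
h1, h2 = 0.7, -0.3
H = np.diag([h1, h2])
alpha = h1 - h2                          # restricted root value alpha(H)

x = np.array([[0.0, 1.0], [0.0, 0.0]])   # eigenmatrix: [H, x] = alpha * x
tx = -x.T                                # theta(x), eigenvalue -alpha
k, p = x + tx, x - tx                    # ping pong matrices (antisymmetric, symmetric)

# ad_H "ping pongs" k and p into each other:
assert np.allclose(H @ k - k @ H, alpha * p)
assert np.allclose(H @ p - p @ H, alpha * k)

# e^{ad_H} X = e^H X e^{-H}; restricted to span{k, p} it acts by cosh/sinh:
eH, eHinv = np.diag(np.exp([h1, h2])), np.diag(np.exp([-h1, -h2]))
assert np.allclose(eH @ k @ eHinv, np.cosh(alpha) * k + np.sinh(alpha) * p)
```

The last assertion is exactly the sinh *α*_{j} factor that will drive the Jacobian computation below.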

As we mentioned in Remark 3.2 and Sec. III C, the role of the ping pong matrices *k*_{j}, *p*_{j} is crucial. **The map** $e^{\mathrm{ad}_H}$ **(particularly, sinh ad**_{H}**) is the main ingredient constructing the differential map dΦ** of the factorization Φ: (*Q*, Λ) ↦ *Q*Λ*Q*^{−1}. The operator $e^{\mathrm{ad}_H}$ is applied to *k*_{j} and then projected onto the span of *p*_{j} as in Fig. 5 (right), leaving only the sinh *α*_{j} factor.

We now compute the full basis of $k$ and $p$. The collection ∪_{j}{*x*_{j}, *θx*_{j}} is a full basis for the union of eigenspaces with nonzero eigenvalues. Since span({*x*_{j}, *θx*_{j}}) = span({*k*_{j}, *p*_{j}}) for any *j*, ∪_{j}{*k*_{j}, *p*_{j}} is another full basis for the eigenspaces with nonzero eigenvalues. Interestingly, we observe *θk*_{j} = *k*_{j} and *θp*_{j} = −*p*_{j}, which identifies ∪_{j}{*k*_{j}} and ∪_{j}{*p*_{j}} as subsets of the bases of $k$ and $p$, respectively. The remaining case is the zero eigenspace. When *α*_{j} = 0, there are two possibilities. First, if *x*_{j} and *θx*_{j} are independent of each other, we can still obtain *k*_{j} and *p*_{j} as before and add them to ∪_{j}{*k*_{j}} and ∪_{j}{*p*_{j}}. Second, if *x*_{j} and *θx*_{j} are collinear, *θx*_{j} is either *x*_{j} or −*x*_{j}. If *θx*_{j} = *x*_{j}, we collect such *x*_{j} and name the set *K*_{z}. Similarly, if *θx*_{j} = −*x*_{j}, then we put them in *P*_{z}. Since we analyzed both nonzero and zero eigenspaces, we have obtained a full basis of $g$, which is $\cup_j\{k_j, p_j\} \cup K_z \cup P_z$. Refining once more, $\mathrm{span}(\cup_j\{k_j\} \cup K_z) = k$ and $\mathrm{span}(\cup_j\{p_j\} \cup P_z) = p$.

### E. The operators ad_{H}, $e^{\mathrm{ad}_H}$, and the subspaces $k$, $p$

In Sec. III D, we obtained the basis of $k$ and $p$, in terms of ping pong matrices, by linearly combining eigenmatrices of the operator ad_{H}. We now illustrate the relationship of the basis of $k$ and $p$ under $e^{\mathrm{ad}_H}$, just like we illustrated the operator *M*_{N} in Sec. III A. In the *k*_{1}, …, *k*_{N} and *p*_{1}, …, *p*_{N} basis, we have the following:

We are now ready to carefully investigate the map *d*Φ using (3.8).

The eigenmatrices *x*_{j} and *θx*_{j} of ad_{H} are independent of the choice of $H \in a$. In other words, the complete basis of $g$ and of $k$, $p$ obtained above does not depend on a specific choice of *H*. Furthermore, the eigenvalues ±*α*_{j} are functions of *H*, and these eigenvalue-assigning functions $\tilde{\alpha}_j: H \mapsto \alpha_j \in R$ are more properly called the *restricted roots*. It can be inferred from the separation of the basis that $k$, $p$ together form the whole tangent space $g$,

### F. Symmetric spaces

The reader may have noticed that our discussions have focused on the Lie algebras rather than the Lie groups themselves. In point of fact, Lie groups mostly serve to define the factorizations of interest, but the Lie algebras are where the Jacobian “lives,” and hence they are the most important place to concentrate. For the interested reader, the subgroup *K* of *G* is picked such that its tangent space is exactly $k$ [one easy way to imagine such a subgroup is to define $K ≔ \exp(k)$], and we now obtain a *symmetric space* *G*/*K*.

It can be proven that for a noncompact Lie group, there exists a unique involution *θ* such that the subgroup *K* is the maximal compact subgroup of *G*. We call *θ* the *Cartan involution*, and (3.9) is called the *Cartan decomposition*. Furthermore, the subset $P ≔ \exp(p)$ plays an important role, as its elements serve as representatives of the cosets in *G*/*K*. Regarding the identification of *G*/*K* with elements of *P*, refer to Remark 2.5, where we point out, taking $G/K = GL(n, R)/O(n)$ as an example, that if an element of *G*/*K* is a coset *gK*, then *gg*^{T} may serve as a representative of the coset in *P*. Some authors use $(gg^T)^{1/2}$; the key point is that each choice is well defined independent of the choice of representative.

### G. When *G* is a compact Lie group

Upon considering the compact cases, it is helpful to make use of a certain duality between compact and noncompact symmetric spaces. We again start with a noncompact Lie group *G* and the Cartan involution *θ*. Let $g=k+p$ be the Cartan decomposition. Then, define a new space,

where *i* is the imaginary unit. A result in Lie theory implies that the new vector space $g_C$ is the tangent space of a compact Lie group, say, *G*_{C}. In Table I, the first and third columns, labeled $GL(n, R)/O(n)$ and O(*p*, *q*)/(O(*p*) × O(*q*)), are noncompact tangent spaces. Their compact duals are, respectively, the second and fourth columns, labeled U(*n*)/O(*n*) and O(*n*)/(O(*p*) × O(*q*)).

Matrixwise, the ping pong matrices $k_j \in k, p_j \in p$ of $g$ are carried to a new set of ping pong matrices $k_j \in k_C, ip_j \in p_C$ in $g_C$. Let us denote them by $\tilde{k}_j ≔ k_j$ and $\tilde{p}_j ≔ ip_j$. The role of the subspace $a$ is now played by $ia$, replacing ad_{H} by ad_{iH}. We deduce a set of similar relationships for $\tilde{k}_j, \tilde{p}_j$ under ad_{iH},

In matrix form,

At the group level, the symmetric spaces *G*/*K* and *G*_{C}/*K* are called the duals of each other, and they appear in the same row of standard symmetric space charts. An example of eigenmatrices *x*_{j}, *θx*_{j} and ping pong matrices for some symmetric spaces and their duals are presented in Table I.

### H. Jacobian of the map Φ

We provide a generalized algorithm for finding the Jacobian of the decomposition Φ(*Q*, Λ) = *Q*Λ*Q*^{−1} [as defined in (2.5)], where $\Lambda \in A ≔ \exp(a)$ and $Q \in K$. Here, $k$ and $p$ from Sec. III D are the tangent spaces of *K* and *P*, respectively. As mentioned, we follow Helgason’s derivation (Ref. 28, Theorem 5.8 of Chap. I) and start by directly translating his proof into simple linear algebra terms. In Table II, Helgason’s derivation (left) is compared row by row with the linear algebra (right). Table II uses the noncompact symmetric space *G*/*K*; the compact case is identical after replacing sinh *α*_{j} by sin *α*_{j}.

| Classical notation (Ref. 28, p. 187, Proof of Theorem 5.8, Chap. I) | Linear algebra notation (matrix factorizations) |
| --- | --- |
| **Definitions** | |
| Φ: *K* × *A* → *G*/*K* | $\tilde{\Phi}$: *K* × *A* → *P* |
| Φ: (*k*, *a*) ↦ *kaK* | $\tilde{\Phi}$: (*Q*, Λ) ↦ *Q*Λ*Q*^{−1} ($\Lambda^{1/2} = a$, *Q* = *k*) |
| $d\tau(g_0): (G/K)_o \to (G/K)_{g_0 \cdot o}$ | $d\tilde{\tau}(g_0): X \mapsto g_0 X(\theta g_0)^{-1}$ |
| $d\pi: g \to (G/K)_o$ | (*θk* = *k*, *k* ∈ *K*; *θp* = *p*^{−1}, *p* ∈ *P*) |
| At *k* ∈ *K*, fix a tangent vector $d\tau(k)T_{i\alpha}$ | At *Q* ∈ *K*, fix a tangent vector *dQ* |
| At Id, basis element $T_{i\alpha} \in k$ | At Id, basis element $Q^{-1}dQ = k_j \in k$ |
| **Derivations** | |
| $2\,d\Phi(d\tau(k)T_{i\alpha}, 0)$^{a} | $d\tilde{\Phi}(dQ, 0) = d(Q\Lambda Q^{-1})$ (with *d*Λ = 0) |
| $= d\pi(2kT_{i\alpha}a)$ | $= dQ\,\Lambda Q^{-1} + Q\Lambda\, dQ^{-1}$ |
| $= d\tau(ka)\,d\pi(2\,\mathrm{Ad}(a^{-1})T_{i\alpha})$^{b} | $= d\tilde{\tau}(Q\Lambda^{1/2})\,\Lambda^{-1/2}(Q^{-1}dQ\,\Lambda + \Lambda\, dQ^{-1}Q)\Lambda^{-1/2}$^{c} |
| $= d\tau(ka)\,d\pi(\mathrm{Ad}(a^{-1})T_{i\alpha} - \mathrm{Ad}(a)T_{i\alpha})$ | $= d\tilde{\tau}(Q\Lambda^{1/2})(\Lambda^{-1/2}k_j\Lambda^{1/2} - \Lambda^{1/2}k_j\Lambda^{-1/2})$ |
| [Let *H* be such that $\exp(H) = a = \Lambda^{1/2}$] | [Note that $d\tilde{\tau}(Q\Lambda^{1/2})X = Q\Lambda^{1/2}X\Lambda^{1/2}Q^{-1}$] |
| $= d\tau(ka)\,d\pi(e^{-\mathrm{ad}_H}T_{i\alpha} - e^{\mathrm{ad}_H}T_{i\alpha})$ | $= d\tilde{\tau}(Q\Lambda^{1/2})(\exp(H^T \otimes I - I \otimes H) - \exp(I \otimes H - H^T \otimes I))k_j$ [by (3.3)] |
| $= d\tau(ka)\,d\pi(-\alpha(H)^{-1}[H, T_{i\alpha}]\,2\sinh\alpha(H))$ | $= d\tilde{\tau}(Q\Lambda^{1/2})(-2\sinh\alpha_j)p_j$ [by (3.8)] |


^{a} Since $\Lambda^{1/2} = a$, we have $2\,d\Phi = d\tilde{\Phi}$.

^{b} This is $(d\tau(ka) \circ d\pi)(\mathrm{Ad}(a^{-1})T_{i\alpha})$.

^{c} Both *dQ*Λ*Q*^{−1} and *Q*Λ*dQ*^{−1} are at *Q*Λ*Q*^{−1} and should be brought back to the identity (inside the bracket).

From the last line of Table II, we can finish the story in two different directions, depending on the choice of the volume measure. First, if we use a *G***-invariant measure** (the “canonical measure”) of *P*, the measure is invariant under the map *dτ* or $d\tilde{\tau}$ (by definition of the invariant measure). Thus, we can disregard $d\tilde{\tau}(Q\Lambda^{1/2})$ [or *dτ*(*ka*)], so that the Jacobian of $d\tilde{\Phi}$ (or *d*Φ) depends only on the differential map *k*_{j} ↦ (sinh *α*_{j})*p*_{j}. Since ∪_{j}{*k*_{j}} and ∪_{j}{*p*_{j}} are both orthonormal bases, we obtain the Jacobian (2.2),

Note that the eigenvalues ±*α*_{j}, belonging to *x*_{j} and *θx*_{j}, share the same *k*_{j} [see (3.4) and above]. Thus, we take only the positive roots Σ^{+} above.

The second choice of measure is the **Euclidean measure**, which is a wedge product of independent entrywise differentials. In this case, the procedure is identical up to the factor sinh *α*_{j}, but the map $d\tilde{\tau}(Q\Lambda^{1/2})$ [equivalently *dτ*(*ka*)] cannot be ignored. One needs to carefully compute the differential map $d\tilde{\tau}(Q\Lambda^{1/2})p_j = Q\Lambda^{1/2}p_j\Lambda^{1/2}Q^{-1}$ under the Euclidean measure. We can further use the fact that conjugation by the matrix *Q* always preserves the Euclidean measure, since the subgroup *K* is always a set of matrices with an orthogonal/unitary type of property. Thus, one needs to compute the map $p_j \mapsto \Lambda^{1/2}p_j\Lambda^{1/2}$ and multiply its Jacobian by $\prod_{\alpha \in \Sigma^+} \sinh\alpha(H)$.

For the compact Lie group *G*, we have sinh *α*_{j} replaced by sin *α*_{j} everywhere. Moreover, the last Jacobian computation step $p_j \mapsto \Lambda^{1/2}p_j\Lambda^{1/2}$ can be omitted for the compact cases, since $\Lambda^{1/2}$ is then an orthogonal/unitary matrix. The map $d\tilde{\tau}(\Lambda^{1/2})$ preserves the Euclidean measure just as $d\tilde{\tau}(Q)$ does.

### I. Extension to the generalized Cartan decomposition

In the previous paragraphs, we studied the Jacobian of the usual Cartan decomposition. We now proceed to the generalized Cartan decomposition (Theorems 2.1 and 2.2), its Jacobians (2.2) and (2.3), and the extension of Table II. The derivations are analogous, analyzing subspaces of $g$, but one should now proceed with four tangent subspaces, $k_\tau \cap k_\sigma$, $k_\tau \cap p_\sigma$, $p_\tau \cap k_\sigma$, and $p_\tau \cap p_\sigma$. Earlier work on these Jacobian-related derivations may be found in Refs. 23 and 44. The maximal subspace $a$ is now defined inside $p_\tau \cap p_\sigma$. We start with the same strategy: the tangent space $g$ is decomposed into the eigenspaces of the linear operator ad_{H} with $H \in a$. The eigenvalues ±*α*_{j} still come in pairs, but we have two eigenmatrices *x*_{j}, *τσx*_{j} for the eigenvalue *α*_{j} and two eigenmatrices *τx*_{j}, *σx*_{j} for the eigenvalue −*α*_{j}. We define four vectors *v*_{1}, *v*_{2}, *w*_{1}, *w*_{2} with the same roles as *k*_{j} and *p*_{j} played before,

and these have similar ping pong relationships by ad_{H} like *k*_{j} and *p*_{j},

## IV. RANDOM MATRIX ENSEMBLES: COMPACT AND NONCOMPACT

### A. Compact symmetric spaces

In the compact cases, the random matrices can be determined simply from the Haar measure of the compact Lie group *G*,^{12,13} since the compactness of *G* turns the Haar measure into a probability measure. In Secs. V–VII, we discuss random matrix ensembles based on Cartan’s classification of Riemannian symmetric spaces into ten types. For the triple (*G*, *K*_{σ}, *K*_{τ}), we start with the cases where *G*/*K*_{σ} and *G*/*K*_{τ} are of the same type in Secs. V and VI. Then, in Sec. VII, we discuss the “mixed types,” where *G*/*K*_{σ} and *G*/*K*_{τ} are of different types under Cartan’s classification.

### B. Noncompact symmetric spaces

Sections VIII and IX discuss classical random matrix ensembles associated with noncompact symmetric spaces. Hermite and Laguerre eigenvalue joint densities arise as a result of (2.2), using Theorem 2.4 on noncompact symmetric spaces. As opposed to compact Lie groups and symmetric spaces, where the Haar measure or *G*-invariant measure can be normalized by a constant into a probability measure, invariant measures on noncompact manifolds cannot be normalized to one by constants. A normalizing factor $S$ should be introduced to complete the construction of a probability measure. Therefore, random matrices on a noncompact manifold face an innate problem if we proceed analogously to Secs. V and VI:

*The choice of the probability measure on noncompact G*/*K is not unique.*

In Ref. 13, Dueñez also addressed this problem for the noncompact duals.

As we push the measure forward to the subgroup *A*, the resulting measure should be a symmetric function of independent generators of *A*. Hence, the probability measure $I(g)$ of the random matrix ensemble is the Haar or *G*-invariant measure on *G* or *G*/*K*, multiplied by some symmetric function $S$ on *A*,

where *g* = *k*_{1}*ak*_{2} or *g* = *kak*^{−1} and *μ*(*g*) is an invariant measure. Using (2.2), the measure on *A* is induced,

which means that even though the measure $I$ changes, the measure on *A* still differs only by a normalization function. The traditional choice of $S$ has been made such that $I(g)$ can be constructed from independent Gaussian distributions endowed on matrix entries. In fact, one could also endow a Gaussian distribution on the Riemannian manifold (symmetric space) itself.^{65}

An alternative approach, which appears in Ref. 6, is to put a probability measure on $p$, the tangent space of the symmetric space. In particular, independent Gaussian distributions endowed on the elements of $p$ give rise to the Hermite and Laguerre ensembles by Theorem 2.7. We will follow this alternative approach.

### C. Non-probability measure of noncompact groups

As discussed in Sec. IV B, the Haar measure of a noncompact group *G* or a noncompact symmetric space *G*/*K* is not a probability measure. However, we can force an analog of a random matrix theory. Imagine, for example, a noncompact K_{1}AK_{2} decomposition *G* = *K*_{σ}*AK*_{τ} with $(G, K_\sigma, K_\tau) = (GL(n, R), O(n), O(p, q))$. This is called the hyperbolic SVD,^{66} in which any real invertible matrix *M* is factored into the product of an orthogonal matrix *O*, a positive diagonal matrix Λ, and an indefinite orthogonal matrix *V*. From the Haar measure and (2.2) of $GL(n, R)$, one obtains the Jacobian,

where the *λ*_{j} are the squared diagonal entries of Λ.

One can impose a Gaussian-like density function (although not a probability density) on the group $GL(n,R)$, such as exp(−tr(*gI*_{p,q}*g*^{T})/2)∏*dg*_{jk}, where *I*_{p,q} = diag(*I*_{p}, −*I*_{q}). In terms of independent entries of *g*, this is

Since the Haar measure of $GL(n,R)$ is |det(*g*)|^{−n} ∏*dg*_{jk}, (4.1) becomes [after integrating out O(*n*) and O(*p*, *q*)]

where *λ*_{1}, …, *λ*_{p} ≥ 0 are the first *p* squared diagonal values of Λ and *λ*_{p+1}, …, *λ*_{n} ≤ 0 are the last *q* squared diagonal values of Λ, multiplied by −1. Extending this approach to find a proper random matrix probability measure on noncompact Lie groups and symmetric spaces with joint probability densities on the subgroup *A* is still an open problem.

## V. COMPACT AI, A, and AII: CIRCULAR ENSEMBLES

The joint probability density of the circular ensemble is (*β* = 1, 2, 4)

Circular ensembles with *β* = 1, 2, and 4 (COE, CUE, and CSE) arise as the eigenvalues of special unitary matrices. As we discussed in the Introduction, circular ensembles are completely classified by the (compact) symmetric spaces of types AI, A, and AII, respectively.^{5,13} The K_{1}AK_{2} decomposition associated with each symmetric space recovers the KAK decomposition. The restricted root systems (and dimensions) of AI, A, and AII are given as follows (1 ≤ *j* < *k* ≤ *n*):

Since we have compact symmetric spaces, we use (2.3) from either Theorem 2.2 or 2.4 with these root systems.

### A. Compact AI, $\beta =1$ COE

The compact symmetric space AI is *G*/*K* = U(*n*)/O(*n*). The involution on U(*n*) has no free parameter and the K_{1}AK_{2} decomposition is equivalent to the KAK decomposition of U(*n*)/O(*n*). (In other words, we only have Cartan’s coordinate system.) The maximal Abelian torus *A* is

From the KAK decomposition, we obtain *U* = *O*_{1}*DO*_{2}, a factorization of a unitary matrix *U* into the product of two orthogonal matrices *O*_{1}, *O*_{2} ∈ O(*n*) and a unit complex diagonal matrix *D* ∈ *A*. This decomposition first appears in Ref. 67, and we will call this the *ODO decomposition*. The corresponding Jacobian (up to constant) from (2.3) using (5.1), *β* = 1 is (with the change of variables *θ*_{j} = 2*h*_{j})

This is the joint density of the COE. In other words, the doubled angles on the diagonal of *D* from the ODO decomposition of a Haar distributed unitary matrix are COE distributed. Moreover, if we identify *G*/*K* with the set of unitary symmetric matrices *P*, the map (2.5) is the factorization *S* = *O*Λ*O*^{T}, the eigendecomposition of a unitary symmetric matrix *S* with real eigenvectors *O*. In terms of Remark 2.5, *U* = *O*_{1}*DO*_{2} becomes $S = UU^T = O_1D^2O_1^T$, where Λ = *D*^{2}. To obtain the COE, we can utilize both factorizations:

1. Two times the angles of the unit diagonal values of *D* from the ODO decomposition of *U* ∈ Haar(U(*n*)).
2. The angles of the (unit) eigenvalues of the unitary symmetric matrix *UU*^{T}, *U* ∈ Haar(U(*n*)).

The second algorithm above would have been obvious since the days of Dyson,^{1,4} while we are not aware of the first algorithm appearing in the literature.
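The second recipe can be sketched in a few lines (a sketch, not the authors’ code; `unitary_group` from SciPy samples Haar-distributed unitary matrices, and the function name is ours):

```python
import numpy as np
from scipy.stats import unitary_group

def sample_coe_angles(n, rng=None):
    """COE sample: eigenvalue angles of S = U U^T with U ~ Haar(U(n))."""
    U = unitary_group.rvs(n, random_state=rng)
    S = U @ U.T                      # unitary symmetric matrix representing G/K
    eigvals = np.linalg.eigvals(S)   # these lie on the unit circle
    return np.sort(np.angle(eigvals))

angles = sample_coe_angles(5, rng=np.random.default_rng(0))
assert angles.shape == (5,)
```

Collecting many such samples and histogramming consecutive spacings reproduces the familiar *β* = 1 repulsion.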

### B. Compact A, $\beta =2$ CUE

The symmetric space of compact type A is *G*/*K* = U(*n*) × U(*n*)/U(*n*). The restricted root system returns to the usual root system *A*_{n} of the classical semisimple Lie algebra. A maximal torus of U(*n*) is a Cartan subalgebra of U(*n*). Weyl’s integration formula agrees with (2.3) obtaining the CUE, which is the eigenvalues of a Haar distributed unitary matrix. The derivation of the CUE can be found in many random matrix textbooks.^{15,64,68}

### C. Compact AII, $\beta =4$ CSE

The involution $X \mapsto -J_n^TX^TJ_n$, where $J_n ≔ \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}$, on the tangent space of U(2*n*) results in the symmetric space U(2*n*)/Sp(*n*), where $Sp(n) = Sp(2n, C) \cap U(2n)$. A choice of maximal Abelian torus *A* is

Again from the KAK decomposition, we obtain *U* = *Q*_{1}*DQ*_{2}, a factorization of a 2*n* × 2*n* unitary matrix *U* into the product of two unitary symplectic matrices *Q*_{1}, *Q*_{2} ∈ Sp(*n*) and a unit complex diagonal matrix *D* ∈ *A*. We call this the *QDQ decomposition*. The corresponding Jacobian from (2.3) using (5.1) (*β* = 4) is

with the change of variables *θ*_{j} = 2*h*_{j}. This is the CSE distribution. Similarly, as in Sec. V A, the eigendecomposition of the unitary skew-Hamiltonian matrix $UJ_nU^TJ_n^T$, *U* ∈ Haar(U(2*n*)), is equivalent to the map (2.5). Two numerical algorithms for sampling the CSE are as follows:

1. Two times the angles of the first *n* unit diagonal values of *D* from the QDQ decomposition of *U* ∈ Haar(U(2*n*)).
2. The angles of the first *n* (unit) eigenvalues of the unitary skew-Hamiltonian matrix $UJ_nU^TJ_n^T$ with *U* ∈ Haar(U(2*n*)).
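The second CSE recipe admits a sketch analogous to the COE one (our function name; the Kramers pairing of eigenvalues is a known property of self-dual unitary matrices):

```python
import numpy as np
from scipy.stats import unitary_group

def sample_cse_angles(n, rng=None):
    """CSE sample: angles of the skew-Hamiltonian U J U^T J^T, U ~ Haar(U(2n))."""
    U = unitary_group.rvs(2 * n, random_state=rng)
    Z = np.zeros((n, n))
    J = np.block([[Z, np.eye(n)], [-np.eye(n), Z]])
    S = U @ J @ U.T @ J.T            # unitary and self-dual
    ang = np.sort(np.angle(np.linalg.eigvals(S)))
    # Eigenvalues are doubly degenerate (Kramers pairs); for generic draws the
    # sorted angles come in adjacent equal pairs, so keep every other one.
    return ang[::2]

angles = sample_cse_angles(3, rng=np.random.default_rng(0))
assert angles.shape == (3,)
```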

## VI. COMPACT BDI, AIII, and CII: JACOBI ENSEMBLES

The joint probability density of the Jacobi ensemble is (*β* = 1, 2, 4),

In Refs. 12 and 13, Jacobi ensembles *β* = 1, 2, 4 arise from the KAK decompositions of seven compact symmetric spaces, BDI, AIII, CII, DIII, BD, C, and CI. In particular, types BDI, AIII, and CII give multiple Jacobi densities as follows (for integers *p* ≥ *q*):

and the powers of the *x*_{j}’s are fixed to $\beta/2 - 1$. The remaining four cases add four more parameter points, which can be found in Refs. 12 and 13. In this paper, we omit these four cases, as they yield no further results: they only have Cartan’s coordinates (no free parameter for the Cartan involution).

The K_{1}AK_{2} decomposition *G* = *K*_{τ}*AK*_{σ} of the compact types BDI-I, AIII-III, CII-II is exactly the *CS decomposition* (CSD)^{69,70} of orthogonal, unitary, and unitary symplectic matrices, respectively. The decomposition Φ of the symmetric space (Theorem 2.4) is the GSVD coordinate system we discussed in Secs. I B and II C. Assume *r* ≥ *p* ≥ *q* ≥ *s* and *n* = *p* + *q* = *r* + *s* throughout this section. We note that with the KAK decomposition, only the cases *p* = *r*, *q* = *s* of the CSD are obtained. The root system associated with the K_{1}AK_{2} decomposition is the following (1 ≤ *j* < *k* ≤ *s*):

For all three *β*, we have the identical maximal Abelian subgroup *A*,

where $C, S \in R^{s \times s}$ are diagonal matrices with the cosine and sine values of *θ*_{1}, …, *θ*_{s} on their diagonal entries, respectively.

### A. Compact BDI-I, $\beta =1$ Jacobi

With the involution *X* ↦ *I*_{p,q}*XI*_{p,q} on the tangent space of O(*n*), we obtain the symmetric space BDI, *G*/*K* = O(*n*)/(O(*p*) × O(*q*)), where *I*_{p,q} ≔ diag(*I*_{p}, −*I*_{q}). With the two symmetric pairs [O(*n*), O(*p*) × O(*q*)] and [O(*n*), O(*r*) × O(*s*)], we obtain the K_{1}AK_{2} decomposition BDI-I,

This is the real CSD. [Equivalently, one can imagine the GSVD of (1.2).] From (2.3) using (6.1) *β* = 1, we obtain the Jacobian

Using trigonometric identities with the change of variables $x_j = \cos^2\theta_j = \frac{1 + \cos(2\theta_j)}{2}$,

which is the joint density of the *β* = 1 Jacobi ensemble $J^{(1),s}_{\alpha_1,\alpha_2}$ if we let $\alpha_1 = \frac{1}{2}(q - s + 1) - 1$, $\alpha_2 = \frac{1}{2}(p - s + 1) - 1$. This result agrees with Ref. 20, Theorem 1.5, where the squared CSD cosine values of a Haar distributed orthogonal matrix are distributed as the *β* = 1 Jacobi ensemble. Moreover, recall the fact that the QL decomposition *G* = *QL* (a lower triangular analog of the QR decomposition) of an *n* × *n* independent Gaussian matrix *G* yields a Haar distributed orthogonal matrix *Q*. Since the GSVD^{18,19} is equivalent to the combination of the QL decomposition and the CSD, one can take the GSVD of a real independent Gaussian matrix to obtain the same *β* = 1 Jacobi ensemble. Two associated numerical algorithms are as follows (*a* = *q* − *s*, *b* = *p* − *s*):

1. The squared CSD cosine values of a Haar distributed *m* × *m* orthogonal matrix (*m* = 2*s* + *a* + *b*) with row/column partitions (*s* + *a*, *s* + *b*) and (*s*, *s* + *a* + *b*).
2. The squared cosine values whose tangent values are the generalized singular values of real (*s* + *a*) × *s* and (*s* + *b*) × *s* Gaussian matrices.
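The second recipe can be sketched without an explicit GSVD routine: if (*A*, *B*) has GSVD cosines *c*_{j}, the squared cosines are the generalized eigenvalues of the pencil (*A*^{T}*A*, *A*^{T}*A* + *B*^{T}*B*). A minimal sketch under that identity (our function name; SciPy’s symmetric-definite solver does the work):

```python
import numpy as np
from scipy.linalg import eigh

def sample_beta1_jacobi(s, a, b, rng):
    """Squared GSVD cosines of (s+a) x s and (s+b) x s real Gaussian matrices."""
    A = rng.standard_normal((s + a, s))
    B = rng.standard_normal((s + b, s))
    M = A.T @ A
    # generalized eigenvalues of (A^T A, A^T A + B^T B) are the squared cosines
    return np.sort(eigh(M, M + B.T @ B, eigvals_only=True))

x = sample_beta1_jacobi(4, 2, 3, np.random.default_rng(0))
assert x.shape == (4,)
```

The returned values lie in [0, 1] (up to rounding) and follow the *β* = 1 Jacobi density above.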

### B. Compact AIII-III, *β* = 2 Jacobi

Two symmetric pairs of the compact AIII type are [U(*n*), U(*p*) × U(*q*)] and [U(*n*), U(*r*) × U(*s*)]. The K_{1}AK_{2} decomposition of the group *G* is the CSD of unitary matrices, and the decomposition of *G*/*K*_{σ} = U(*n*)/(U(*r*) × U(*s*)) is the complex GSVD described in Sec. I B and Eq. (1.2). Using (2.3) with the root system (6.1), *β* = 2, and the change of variables *x*_{j} = cos^{2} *θ*_{j} as above, we obtain the Jacobian

which is the *β* = 2 Jacobi density $J^{(2),s}_{\alpha_1,\alpha_2}$ with *α*_{1} = *q* − *s*, *α*_{2} = *p* − *s*. Numerically, the following can be utilized to obtain *β* = 2 Jacobi densities (*a* = *q* − *s*, *b* = *p* − *s*):

1. The squared CSD cosine values of a Haar distributed *m* × *m* unitary matrix (*m* = 2*s* + *a* + *b*) with row/column partitions (*s* + *a*, *s* + *b*) and (*s*, *s* + *a* + *b*).
2. The squared cosine values whose tangent values are the generalized singular values of complex (*s* + *a*) × *s* and (*s* + *b*) × *s* Gaussian matrices.

### C. Compact CII-II, *β* = 4 Jacobi

Jacobi densities with *β* = 4 are similarly obtained from the two symmetric spaces Sp(*n*)/(Sp(*p*) × Sp(*q*)) and Sp(*n*)/(Sp(*r*) × Sp(*s*)), both of compact type CII. We identify Sp(*n*) as the quaternionic unitary group, $U(n, H) ≔ \{g \in GL(n, H) \mid g^Dg = I_n\}$. The K_{1}AK_{2} decomposition is the CSD of a quaternionic unitary matrix. Using (2.3) with the root system (6.1), *β* = 4, we obtain the following Jacobian with the change of variables *x*_{j} = cos^{2} *θ*_{j}:

which is the *β* = 4 Jacobi density $J^{(4),s}_{\alpha_1,\alpha_2}$ with *α*_{1} = 2(*q* − *s*) + 1, *α*_{2} = 2(*p* − *s*) + 1. The associated numerical algorithm is the following (*a* = *q* − *s*, *b* = *p* − *s*):

1. The squared CSD cosine values of a Haar distributed *m* × *m* quaternionic unitary matrix (*m* = 2*s* + *a* + *b*) with row/column partitions (*s* + *a*, *s* + *b*) and (*s*, *s* + *a* + *b*).

Again, one can use the GSVD on quaternionic Gaussian matrices to obtain the classical *β* = 4 Jacobi ensemble.

## VII. COMPACT MIXED TYPES: MORE CIRCULAR AND JACOBI

In this section, we show yet more cases in which a single symmetric space leads to multiple random matrix theories. We introduce K_{1}AK_{2} decompositions with two compact symmetric spaces, each from a different Cartan type. The classification of such K_{1}AK_{2} decompositions is studied in Ref. 47, with the computation of the corresponding root systems. As always, the names of these decompositions are combinations of two Cartan types, i.e., AI-II represents (*G*, *K*_{σ}, *K*_{τ}) = (U(2*n*), O(2*n*), Sp(2*n*)).

### A. Compact AI-II

The two compact symmetric spaces are of types AI and AII, U(2*n*)/O(2*n*) and U(2*n*)/Usp(2*n*). A maximal Abelian subalgebra $a \subset p_\sigma \cap p_\tau$ is the set of all matrices diag(*iθ*_{1}, …, *iθ*_{n}, *iθ*_{1}, …, *iθ*_{n}) for $(\theta_1, \dots, \theta_n) \in R^n$. The subgroup *A* is the following:

The root system is given as

Using (2.3), we obtain the Jacobian (*ξ*_{j} = 4*θ*_{j}),

which is the joint probability density of the CUE. Hence, we obtain another sampling method for the CUE.

### B. Compact AI-III, CI-II

The two symmetric spaces in each case are the following:

The subgroup *A* is computed as follows:

where *C*, *S* are *q* × *q* diagonal matrices with the cosine and sine values of the *q* angles *θ*_{1}, …, *θ*_{q} on their diagonals. The imaginary unit *η* is *i* for AI-III (*β* = 1), and *η* = *j*, *k* for CI-II (*β* = 2). [If we select the subgroup *K* of $U(n, H)/U(n)$ to be the unitary group with the imaginary unit *j*, we could also obtain *η* = *i*.] The root system is the following (*β* = 1, 2):

Using (2.3) with the root system above, we obtain the following Jacobian:

where *x*_{j} = sin^{2} 2*θ*_{j} for all *j*. The *β* = 1 case of (7.3) can also be obtained from the CS decomposition approach, with an (*n* + 1) × (*n* + 1) orthogonal matrix and partitions (*p*, *q* + 1) and (*p* + 1, *q*) [see Fig. 4]. The parameters of the *β* = 2 case of (7.3) cannot be obtained by the complex CSD and, thus, fall outside of the classical parameters.

### C. Compact DI-III, AII-III

Another family of K_{1}AK_{2} decompositions arises from the following pairs of compact symmetric spaces (*β* = 2, 4):

Under Cartan’s classification, they are types DI-III and AII-III, respectively. The subgroup *A* can be computed as

where *I*_{2} is the 2 × 2 identity matrix, $J_1 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, and *C*, *S* are *q* × *q* diagonal matrices with the cosines and sines of *θ*_{1}, …, *θ*_{q} on their diagonals. The root system is given as follows (*β* = 2, 4):

Again, using (2.3) with the root system above, we obtain the following Jacobian, with the change of variables *x*_{j} = sin^{2} *θ*_{j} for all *j*:

These are the *β* = 2, 4 Jacobi ensembles. Neither case can be obtained from the classical CSD approach, so they all give non-classical parameters of the Jacobi ensemble. To see this at once, we compare three *β* = 2 Jacobi densities, one from each of Secs. VI B, VII B, and VII C. Figure 6 shows the possible parameters *α*_{1}, *α*_{2} of the *β* = 2 Jacobi ensemble obtained from each approach.

## VIII. NONCOMPACT AI, A, AND AII: HERMITE ENSEMBLES

While Sec. VII contains essentially new random matrix theories, Secs. VIII and IX review the Hermite and Laguerre ensembles for completeness.^{6–9,59}

The joint probability density of the Hermite ensemble is (*β* = 1, 2, 4),

Hermite ensembles with *β* = 1, 2, and 4 (GOE, GUE, and GSE) arise as the eigenvalues of symmetric, Hermitian, and self-dual Gaussian matrices. Hermite ensembles can be thought of as the Gaussian measure endowed on the tangent space of noncompact symmetric spaces of types AI, A, and AII. The connection between these symmetric spaces and the Hermite ensembles is made by Theorem 2.7. The decomposition Ψ (2.6) in Theorem 2.7 is the eigendecomposition of symmetric, Hermitian, and self-dual matrices. The maximal Abelian subalgebra $a$ is the collection of all real diagonal matrices, diag(*h*_{1}, …, *h*_{n}). The restricted root system is the following (1 ≤ *j* < *k* ≤ *n*):

### A. Noncompact AI, *β* = 1 GOE

The noncompact symmetric space of type AI, the dual of the compact type AI, is $G/K = GL(n, R)/O(n)$, represented by the set $S_n$ of all symmetric positive definite matrices. The tangent space $p$ at the identity of $S_n$ is the set of all real symmetric matrices. The Gaussian measure on $p$ is, for $p \in p$, exp(−tr(*p*^{T}*p*)/2)*dp*, where *dp* is the Euclidean measure on $p$. From (2.7) using (8.1), *β* = 1, we obtain (integrating out *dk*)

for the eigenvalues of *p*, *λ*_{j} = *h*_{j}. This is the joint density of the GOE.
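A minimal numerical sketch of this construction (the density exp(−tr(*p*^{2})/2) on symmetric matrices makes diagonal entries N(0, 1) and off-diagonals N(0, 1/2), which symmetrizing a Gaussian matrix realizes; the function name is ours):

```python
import numpy as np

def sample_goe_eigs(n, rng):
    """Eigenvalues of p drawn from exp(-tr(p^2)/2) on real symmetric matrices."""
    G = rng.standard_normal((n, n))
    p = (G + G.T) / 2          # diagonal ~ N(0,1), off-diagonal ~ N(0, 1/2)
    return np.sort(np.linalg.eigvalsh(p))

lam = sample_goe_eigs(5, np.random.default_rng(0))
assert lam.shape == (5,)
```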

### B. Noncompact A, *β* = 2 GUE

The noncompact symmetric space of type A is $G/K = GL(n, C)/U(n)$, represented by $H_n$, the set of all Hermitian positive definite matrices. The tangent space $p$ at the identity of $H_n$ is the set of all complex Hermitian matrices. The Gaussian measure on $p$ is, for $p \in p$, exp(−tr(*p*^{H}*p*)/2)*dp*, where *dp* is the (real) Euclidean measure on $p$. From (2.7) using (8.1), *β* = 2, we obtain

for the eigenvalues of *p*, *λ*_{j} = *h*_{j}. This is the joint density of the GUE.

### C. Noncompact AII, *β* = 4 GSE

The noncompact symmetric space of type AII is $G/K = GL(n, H)/U(n, H)$. We use $U(n, H)$ instead of Sp(*n*) to clearly indicate the quaternionic realization. *G*/*K* can be represented by the set $QH_n$ of all quaternionic self-dual positive definite matrices. Again, the tangent space $p$ at the identity is the set of all quaternionic self-dual matrices. The Gaussian measure on $p$ is, for $p \in p$, exp(−tr(*p*^{D}*p*)/2)*dp*, where *dp* is the (real) Euclidean measure on $p$. From (2.7) using (8.1), *β* = 4, we obtain

for the eigenvalues of *p*, *λ*_{j} = *h*_{j}. This is the joint density of the GSE.
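For *β* = 4, one can work with the standard 2*n* × 2*n* complex image of a quaternion matrix (a sketch under that representation, not the paper's code): Hermitizing preserves the quaternionic structure, so the result is self-dual and its eigenvalues come in Kramers pairs; the *n* distinct values follow the GSE density up to normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Complex 2n x 2n image of a quaternion Gaussian matrix:
# Q = [[A, B], [-conj(B), conj(A)]].  Hermitizing keeps this structure,
# so S is the image of a self-dual quaternion matrix.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = np.block([[A, B], [-B.conj(), A.conj()]])
S = (Q + Q.conj().T) / 2

# Eigenvalues are doubly degenerate (Kramers pairs); keep one per pair.
eigs = np.linalg.eigvalsh(S)
gse_eigs = eigs[::2]
```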

## IX. NONCOMPACT BDI, AIII, and CII: LAGUERRE ENSEMBLES

The joint probability density of the Laguerre ensemble is (*β* = 1, 2, 4)
$$c^{\beta}_{p,q} \prod_{1\le j<k\le q} |\lambda_j - \lambda_k|^{\beta} \prod_{j=1}^{q} \lambda_j^{\frac{\beta}{2}(p-q+1)-1} \exp\Big(-\tfrac{\beta}{2}\sum_{j=1}^{q} \lambda_j\Big) \prod_{j=1}^{q} d\lambda_j$$

Laguerre ensembles *β* = 1, 2, 4 arise from Theorem 2.7 applied to the noncompact symmetric spaces BDI, AIII, CII, DIII, BD, C, and CI. The last four cases, types DIII, BD, C, and CI, are well studied in Ref. 6, and we again omit them as discussed in Sec. VI. In particular, the first three symmetric spaces give the following Laguerre densities (*β* = 1, 2, 4 and *p* ≥ *q*):

as these *λ*_{j} values are the squared singular values of *p* × *q* i.i.d. Gaussian matrices. Equivalently, the eigenvalues of the matrix $A^{\dagger}A \in F^{q \times q}$, where † denotes the conjugate transpose, are frequently used for sampling purposes. The tangent spaces of the noncompact symmetric spaces of types BDI, AIII, and CII are
$$p = \left\{ M = \begin{pmatrix} 0 & X \\ X^{\dagger} & 0 \end{pmatrix} : X \in F^{p\times q} \right\}, \qquad F = R, C, H,$$

and a choice of maximal Abelian subalgebra $a$ is the set with *X* a (nonsquare) diagonal matrix with diagonal entries *h*_{1}, …, *h*_{q}. The KAK decomposition *G* = *KAK* of the noncompact symmetric spaces BDI, AIII, and CII is the *hyperbolic CS decomposition* (HCSD).^{71,72} The decomposition $p=\cup_{k\in K}\, kak^{-1}$ is the *p* × *q* SVD of the upper right *p* × *q* corner. The restricted roots are the following (*β* = 1, 2, 4):
$$\pm h_j \pm h_k \ (1 \le j < k \le q) \text{ with multiplicity } \beta, \qquad \pm h_j \text{ with multiplicity } \beta(p-q), \qquad \pm 2h_j \text{ with multiplicity } \beta-1.$$

### A. Noncompact BDI, *β* = 1 Laguerre

The noncompact symmetric space type BDI is *G*/*K* = O(*p*, *q*)/(O(*p*) × O(*q*)). The tangent space $p$ (9.1) carries the Gaussian measure obtained by endowing the elements of *X* with i.i.d. real Gaussian entries; for $M \in p$, it is $\exp(-\mathrm{tr}(M^{T}M))\,dp$. From (2.7), using (9.2) with *β* = 1, we obtain
$$c_{p,q} \prod_{1\le j<k\le q} |\lambda_j - \lambda_k| \prod_{j=1}^{q} \lambda_j^{\frac{p-q-1}{2}} \exp\Big(-\tfrac{1}{2}\sum_{j=1}^{q} \lambda_j\Big) \prod_{j=1}^{q} d\lambda_j$$

with the change of variables $\lambda_j = h_j^2$. Thus, the values *λ*_{1}, …, *λ*_{q} are the squared singular values of the upper right corner of *M*. The obtained measure is the joint density of the *β* = 1 Laguerre ensemble.
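The *β* = 1 sampling route via $A^{\dagger}A$ mentioned above can be sketched numerically (our illustration): the squared singular values of a real *p* × *q* Gaussian matrix coincide with the eigenvalues of $X^{T}X$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 6, 4

# Squared singular values of a p x q real Gaussian matrix follow the
# beta = 1 Laguerre (real Wishart) joint density.
X = rng.standard_normal((p, q))
lam = np.linalg.eigvalsh(X.T @ X)  # ascending

# The same values via the SVD of X.
sv = np.linalg.svd(X, compute_uv=False)  # descending
```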

### B. Noncompact AIII, *β* = 2 Laguerre

The noncompact symmetric space type AIII is *G*/*K* = U(*p*, *q*)/(U(*p*) × U(*q*)). The tangent space (9.1) carries the Gaussian measure obtained by endowing the elements of *X* with i.i.d. complex Gaussian entries; for $M \in p$, it is $\exp(-\mathrm{tr}(M^{H}M))\,dp$. From (2.7), using (9.2) with *β* = 2, we obtain
$$c_{p,q} \prod_{1\le j<k\le q} |\lambda_j - \lambda_k|^{2} \prod_{j=1}^{q} \lambda_j^{p-q} \exp\Big(-\sum_{j=1}^{q} \lambda_j\Big) \prod_{j=1}^{q} d\lambda_j$$

with the change of variables $\lambda_j = h_j^2$. Again, the values *λ*_{1}, …, *λ*_{q} are the squared singular values of the upper right corner of *M*. The obtained measure is the joint density of the *β* = 2 Laguerre ensemble.
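The complex (*β* = 2) case is sampled the same way (again our sketch), replacing the real Gaussian matrix with a complex one and the transpose with the conjugate transpose.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 5, 3

# Squared singular values of a p x q complex Gaussian matrix follow the
# beta = 2 Laguerre (complex Wishart) joint density.
X = (rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))) / np.sqrt(2)
lam = np.linalg.eigvalsh(X.conj().T @ X)  # real, ascending
sv = np.linalg.svd(X, compute_uv=False)
```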

### C. Noncompact CII, *β* = 4 Laguerre

The noncompact symmetric space type CII is $G/K=U(p,q,H)/(U(p,H) \times U(q,H))$. The tangent space (9.1) carries the Gaussian measure obtained by endowing the elements of *X* with i.i.d. quaternionic Gaussian entries; for $M \in p$, it is $\exp(-\mathrm{tr}(M^{D}M))\,dp$. From (2.7), using (9.2) with *β* = 4, we obtain
$$c_{p,q} \prod_{1\le j<k\le q} |\lambda_j - \lambda_k|^{4} \prod_{j=1}^{q} \lambda_j^{2(p-q)+1} \exp\Big(-2\sum_{j=1}^{q} \lambda_j\Big) \prod_{j=1}^{q} d\lambda_j$$

with the change of variables $\lambda_j = h_j^2$. The values *λ*_{1}, …, *λ*_{q} are the squared singular values of the upper right corner of *M*. The obtained measure is the joint density of the *β* = 4 Laguerre ensemble.
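For *β* = 4, one can again use the complex image of a quaternion matrix (a sketch under that representation): the singular values of the 2*p* × 2*q* image come in pairs, and the *q* distinct squared values follow the *β* = 4 Laguerre density up to normalization.

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 4, 2

# Complex 2p x 2q image of a p x q quaternion Gaussian matrix.
A = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
B = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
X = np.block([[A, B], [-B.conj(), A.conj()]])

# Singular values of the image are doubly degenerate; keep one per pair.
sv = np.linalg.svd(X, compute_uv=False)  # descending, length 2q
lam = sv[::2] ** 2
```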

## ACKNOWLEDGMENTS

We thank Martin Zirnbauer for the lengthy email thread from 2001, where he patiently explained which random matrix ensembles seemed to be covered by symmetric spaces. We thank Eduardo Dueñez for another lengthy email thread back in 2013. We thank Pavel Etingof for suggesting the K_{1}AK_{2} decomposition and pointing us to key references, Bernie Wang for so very much, and the Fall 2020 Random Matrix Theory class (MIT 18.338) for valuable suggestions. We also thank Sigurður Helgason for lively discussions by email. We acknowledge NSF Grant Nos. OAC-1835443, OAC-2103804, SII-2029670, ECCS-2029670, and PHY-2021825 for financial support.

## AUTHOR DECLARATIONS

### Conflict of Interest

The authors have no conflicts to disclose.

## DATA AVAILABILITY

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

## REFERENCES

Alternatively, one can imagine the partial format of the CS decomposition. This is also equivalent to the bi-Stiefel decomposition with another quotient by the orthogonal group on the right.

In Ref. 73, Mehta credits Hsu^{42} for the GOE. In fact, Ref. 42 has the Jacobian for the real symmetric eigenvalue problem and indeed works with *AA*^{T}, where A = randn(m,n), but does not work with *A* + *A*^{T}. No doubt Hsu^{42} could have instantly written down the GOE distribution had he only been asked.

(U(*n*_{1}) × U(*n*_{2}) × U(*n*_{3}))\U(*n*)/(U(*p*) × U(*q*))

The actual development of the Jacobians (2.2) and (2.3) was done the other way around. In Ref. 61, Helgason credits Cartan^{32} for the derivation of these Jacobians, which at the time were computed only for symmetric spaces. The KAK decomposition was discovered later, in the 1950s, and the Jacobians extend identically from the decomposition of *G*/*K* to the decomposition of *G*.