We complete Dyson’s dream by cementing the links between symmetric spaces and classical random matrix ensembles. Previous work has focused on a one-to-one correspondence between symmetric spaces and many, but not all, of the classical random matrix ensembles. This work shows that we can capture all of the classical random matrix ensembles from Cartan’s symmetric spaces through the use of alternative coordinate systems. In the end, we have to let go of the notion of a one-to-one correspondence. We emphasize that the KAK decomposition traditionally favored by mathematicians is merely one coordinate system on the symmetric space, albeit a beautiful one. However, other matrix factorizations, especially the generalized singular value decomposition from numerical linear algebra, reveal themselves to be perfectly valid coordinate systems, so that one symmetric space can lead to many classical random matrix theories. We establish the connection between this numerical linear algebra viewpoint and the theory of generalized Cartan decompositions. This, in turn, allows us to produce yet more random matrix theories from a single symmetric space. These, too, arise from matrix factorizations, though ones that we are not aware have appeared in the literature.

Random matrix theory (RMT) is a vast subject touching many fields of mathematics, science, and engineering. For such a subject, it is helpful to have a means of cataloging the objects to be studied and a theory that covers the objects in the catalog. In 1962, Dyson1–4 was the first to propose a systematic approach to RMT. At the beginning of Ref. 4, he states his noble intent:

To bring together and unify three trends of thought which have grown up independently during the last thirty years.

which he enumerates as (i) group representations including time-inversion, (ii) Weyl’s theory of matrix algebras, and (iii) RMT.

Around a decade later, Dyson hit upon the idea that symmetric spaces should play a key role (Ref. 5, Sec. V). Dyson’s suggestion was taken up in famous papers by Zirnbauer et al.6,7 and others.8,9 These papers mainly focus on the noncompact cases. On the mathematical side, inspired by Katz and Sarnak,10,11 Dueñez detailed connections to RMT for the compact symmetric spaces.12,13

Nonetheless, we felt there was a gap. When one juxtaposes (i) the well-established theory of classical random matrix ensembles with (ii) the RMTs associated with symmetric spaces, ensembles are missing. In particular, only very special Jacobi ensembles (the left side of Fig. 2) seem to make the symmetric space list. More precisely, if one starts with a symmetric space, one has to make what we call a coordinate system choice, what others might call a matrix factorization choice. This choice has been the map Φ : K × A → G/K; (k, a) ↦ kaK of Cartan, which we could call the KAK decomposition. (Although it is often called Cartan’s KAK decomposition, Cartan was not aware of G = KAK.) See Fig. 1.

FIG. 1.

Families of matrix factorizations associated with a symmetric space, its tangent space, and its isometry group: Shown above is the skeleton of five factorizations associated with noncompact (left) and compact (right) symmetric spaces. Each serves as a coordinate system on the respective manifold. Previous approaches (manifold, coordinate system, and measure) are shown in magenta. Examples of the linked factorizations/coordinate systems are shown.


We show that coordinate systems from the generalized Cartan (K1AK2) decomposition associate a single symmetric space to multiple RMTs. Letting go of the historical bias of the KAK decomposition, the full set of Jacobi ensembles (the right side of Fig. 2) emerges, thereby leading to the complete list of classical random matrix ensembles. Of course, there is much mathematical precedent in differential geometry to letting go of any one special coordinate system.

FIG. 2.

The parameter space (α1, α2) ∈ (−1,∞)2 of the β = 2 Jacobi ensemble obtained from Cartan’s coordinates (KAK) (left) and the generalized singular value decomposition coordinates (K1AK2) (right).


The objects that we are interested in are the classical random matrix ensembles. Well-established conventions in random matrix theory agree that the ensembles in this class consist of the Hermite, Laguerre, Jacobi, and circular ensembles built from matrices of integer sizes and involve entries that are real, complex, or quaternion. (Dyson denoted β = 1, 2, 4, and other authors in mathematics denote α = 2/β = 2, 1, 1/2.)

The term “classical random matrix ensembles” may be found in the following well-known references:

  • Chapter 1 of Forrester’s paper14 has the title “Classical Random Matrix Ensembles,” and the even sections (1.2, 1.4, 1.6, and 1.8) are explicitly Hermite, circular, Laguerre, and Jacobi in that order. (Odd sections have discussions related to these ensembles.) Forrester’s comprehensive book15 deals exclusively with the Hermite, Laguerre, Jacobi, and circular ensembles in Chaps. 1–3, where the preface states: “eigenvalue p.d.f. of the various classical β-ensembles given in Chaps. 1–3.” Later, in Chap. 5.4, he further justifies the terminology by pointing out the four weights from classical orthogonal polynomial theory.

  • In Ref. 16, Chap. 4.1 is entitled “Joint distribution of eigenvalues in the classical matrix ensembles” and specifically covers exactly the Hermite, Laguerre, Jacobi, and circular ensembles.

  • The first author’s 2005 Acta Numerica article (Ref. 17, Sec. 4).

If one starts with the list of ten infinite families of Cartan’s symmetric spaces (we will not discuss the finitely many families of the exceptional types) and asks to characterize which classical random matrix ensembles are covered, answers can be found in Ref. 8 (Table 1), Ref. 9 (Table 1) (noncompact cases), and Ref. 13 (Table 1) (compact cases). However, turning the question around, if one starts with the classical random matrix ensembles and asks whether symmetric spaces are adequate to explain all of them, we find that the answer is a big “almost,” as the Jacobi ensembles are not adequately covered. To be precise, the Jacobi densities associated with compact symmetric spaces BDI, AIII, and CII from previous attempts via the KAK decomposition are the following joint probability densities with β = 1, 2, 4 (up to constant) and integers p ≥ q,

(1.1)
$$\prod_{j<k} |x_j - x_k|^\beta \prod_{j=1}^{q} x_j^{\frac{\beta}{2}(p-q+1)-1} (1 - x_j)^{\frac{\beta}{2}-1}, \qquad (x_1, \ldots, x_q) \in [0, 1]^q,$$

where we observe the power of (1 − xj) restricted to β/2 − 1. The possible parameters of (1.1) are described on the left side of Fig. 2. Four additional compact symmetric spaces DIII, BD, C, and CI add four more Jacobi ensembles,13 but they are not sufficient to cover the two-dimensional parameter set of the Jacobi ensembles.

It is always interesting when a branch of applied mathematics reverses direction and provides guidance to pure mathematics. In this work, we focus on the role of the generalized singular value decomposition (GSVD) from numerical linear algebra.18,19

From an applied viewpoint, the Jacobi ensembles are elegantly generated in software with commands such as svdvals(randn(p,s),randn(q,s)) in languages such as Julia, which is computed by taking the GSVD of two i.i.d. normal matrices with the same number of columns.20,21 From a pure viewpoint, this is a pushforward of the uniform measure on the Grassmannian manifold onto a maximal Abelian subgroup A (with a fixed Weyl chamber) along the generalized Cartan (K1AK2) decomposition (Fig. 3).22,23
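The Julia command above can be mirrored numerically as follows: the squared generalized singular values (squared cosines) xj = cj2 of a Gaussian pair (A, B) solve the symmetric generalized eigenproblem ATAv = x(ATA + BTB)v. A minimal sketch, assuming NumPy/SciPy; the function name `jacobi_sample` is our own illustrative choice:

```python
import numpy as np
from scipy.linalg import eigh

def jacobi_sample(p, q, s, rng):
    """Sample the beta = 1 Jacobi ensemble as squared GSVD cosines.

    Mirrors svdvals(randn(p, s), randn(q, s)) in Julia, squared: the
    squared generalized singular values x_j = c_j^2 of the pair (A, B)
    solve the generalized eigenproblem A'A v = x (A'A + B'B) v.
    """
    A = rng.standard_normal((p, s))
    B = rng.standard_normal((q, s))
    x = eigh(A.T @ A, A.T @ A + B.T @ B, eigvals_only=True)
    return np.clip(x, 0.0, 1.0)  # eigenvalues lie in [0, 1]

x = jacobi_sample(7, 5, 3, np.random.default_rng(0))
```

For β = 2 or 4, one would replace the real Gaussian entries with complex or quaternionic ones.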

FIG. 3.

Cartan’s coordinate system (KAK) and GSVD coordinate systems (K1AK2) on the Grassmannian manifold O(n)/(O(n − s) × O(s)).


For example, take a Grassmannian point with any β = 1, 2, 4 from O(n)/(O(n − s) × O(s)) (respectively, with complex or quaternionic unitary groups) and represent it by the n × s orthogonal (respectively, complex or quaternionic unitary) matrix X. [More precisely, we treat the Grassmannian manifold as the quotient Vs(ℝn)/O(s), where Vs(ℝn) is the Stiefel manifold. We are allowed to multiply any O ∈ O(s) on the right side of X.] For any p, q ≥ s satisfying p + q = n, we have the following coordinate system of X arising from the GSVD24 of the first p rows and the last q rows of X (for an alternative viewpoint, see Ref. 25):

(1.2)
$$X = \begin{bmatrix} UC \\ VS \end{bmatrix},$$

where U, V are p × s, q × s orthogonal (respectively, complex or quaternionic unitary) matrices and C, S are s × s diagonal matrices with cosine and sine values (C2 + S2 = I). The deduced joint probability densities21 of the squared cosine values xj = cj2 (p, q ≥ s) are the following (up to constant):

$$\prod_{j<k} |x_j - x_k|^\beta \prod_{j=1}^{s} x_j^{\frac{\beta}{2}(p-s+1)-1} (1 - x_j)^{\frac{\beta}{2}(q-s+1)-1},$$

where the case q = s represents the usual KAK decomposition case (1.1).

As can be seen, the classical Jacobi parameters are quantized as they are integer multiples of β/2. Random matrix models that remove this quantization, thereby going beyond the classical, appear in Refs. 20, 26, and 27. In Sec. VII, we also illustrate that some Jacobi ensembles can arise from symmetric spaces that are outside the traditional quantization (Fig. 6).

This work shows that a symmetric space can be associated with multiple random matrix theories (Fig. 4). Letting go of the arbitrariness of the choice of the KAK decomposition coordinate system allows us to choose other coordinate systems on symmetric spaces, thereby leading us to the complete list of classical random matrix ensembles (Secs. V, VI, VIII, and IX). Many of these coordinate systems are sometimes better known as matrix factorizations, used widely in matrix models of the classical ensembles.15,17,20,26,27 However, in Sec. VII, we compute new families of the Jacobi ensemble parameters from coordinate systems that have not been known before.

FIG. 4.

Examples illustrating the lack of a one-to-one relationship between symmetric spaces and classical random matrix theories: A complex Grassmannian (top) obtains three Jacobi ensembles. A real Grassmannian (bottom) obtains four Jacobi ensembles. In particular, the β = 1 Jacobi ensemble J0,1(1),2 can be obtained from both symmetric spaces. Interestingly, a complex Grassmannian can lead to (top purple) a real RMT in the sense that β = 1. Similarly, a real Grassmannian obtains β = 2 RMT (bottom purple).


This work also endeavors to make the Lie theory more widely accessible by simplifying and modernizing key ideas and proofs in Ref. 28. Cartan’s theory29–32 as developed by Helgason28,33 is a crowning mathematical achievement, and it is our hope to open up this theory for the benefit of all. Indeed, in Ref. 34 (p. 428), Helgason writes about the difficulty of understanding Cartan’s writings:

[Cartan] was one of the great mathematicians of the period, but his papers were quite a challenge. Hermann Weyl, in reviewing a book by Cartan from 1937, writes: “Cartan is undoubtedly the greatest living master in differential geometry… I must admit that I found the book, like most of Cartan’s papers, hard reading.”

In the same vein, while we are admirers of Helgason’s extensive work, we authors must admit that we, in turn, found Refs. 28 and 33 hard reading as well, and this paper attempts to introduce the theory by couching the ideas in terms of what we call ping pong operators.

Summarizing our work, we have the following:

  • We use the coordinate systems of the K1AK2 decomposition that connect a single symmetric space to multiple random matrix theories (Fig. 4), completing the list of associated classical random matrix ensembles.

  • We translate some of the key concepts in Cartan’s theory of symmetric spaces into easier-to-follow linear algebra (Sec. III).

  • We provide coordinate systems (matrix factorizations) of symmetric spaces that have not been discussed in the random matrix context, obtaining new parameter families of the Jacobi ensemble (Sec. VII).

Dyson introduced the β = 1, 2, 4 circular ensembles1,4 in 1962. Earlier expositions on circular ensembles can be found in Hurwitz35 and Weyl.36 Hermite ensembles were introduced by Wigner.37–39 Laguerre and Jacobi ensembles can be found as early as 1939 in the statistics literature in the work of Fisher,40 Roy,41 and Hsu.42 The physics literature first touches upon the idea of the Laguerre and Jacobi ensembles with the 1963 thesis of Leff.43 The following are the joint probability densities (without normalization constants) of the classical random matrix ensembles (β = 1, 2, 4):

  • Circular: $\prod_{j<k} |e^{i\theta_j} - e^{i\theta_k}|^\beta$, $(\theta_1, \ldots, \theta_n) \in [0, 2\pi]^n$;

  • Hermite: $\prod_{j<k} |\lambda_j - \lambda_k|^\beta \prod_{j=1}^n e^{-\lambda_j^2/2}$, $(\lambda_1, \ldots, \lambda_n) \in \mathbb{R}^n$;

  • Laguerre: $\prod_{j<k} |\lambda_j - \lambda_k|^\beta \prod_{j=1}^m \lambda_j^\alpha e^{-\lambda_j/2}$, $(\lambda_1, \ldots, \lambda_m) \in [0, \infty)^m$;

  • Jacobi: $\prod_{j<k} |x_j - x_k|^\beta \prod_{j=1}^m x_j^{\alpha_1}(1 - x_j)^{\alpha_2}$, $(x_1, \ldots, x_m) \in [0, 1]^m$.

In particular, the parameters α, α1, α2 > −1 are quantized as integer multiples of β/2, i.e., of the form (β/2)(N + 1) − 1 for some non-negative integer N.

In this section, we introduce the theory related to the generalized Cartan decomposition. For readers without preliminary knowledge in Lie theory, we recommend skipping to Sec. III, which follows a more modern linear algebra approach.

Let G/Kσ be a Riemannian symmetric space with a real reductive noncompact Lie group G and its maximal compact subgroup Kσ. Let σ be the Cartan involution on g ≔ Lie(G). Then, g = kσ + pσ is the Cartan decomposition. Let τ be another involution on g such that τσ = στ, and let g = kτ + pτ be the ±1 eigenspace decomposition by τ. Denote by Kτ the analytic subgroup of G with tangent space kτ. Let a be a maximal Abelian subalgebra of pτ ∩ pσ and define A ≔ exp(a). We introduce the (noncompact) generalized Cartan decomposition (Ref. 22, Theorem 4.1).

Theorem 2.1
(generalized Cartan decomposition, K1AK2 decomposition). With the above setting, we have the following decomposition of G:
(2.1)
$$G = K_\tau A K_\sigma.$$
That is, for any g ∈ G, we have k1 ∈ Kτ, k2 ∈ Kσ, and a ∈ A such that g = k1ak2.

We often use the equivalent name “K1AK2 decomposition” for simplicity. Note that if τ = σ (i.e., K = Kσ = Kτ), we recover the usual KAK decomposition, G = KAK. The generalized Cartan decomposition in the work of Flensted-Jensen22 was originally intended for the case where G is noncompact. The compact analog was developed by Hoogenboom (Ref. 23, Theorem 3.6).

Theorem 2.2
(generalized Cartan decomposition; compact case). Let G/Kσ and G/Kτ be two compact Riemannian symmetric spaces. Let g = kσ + pσ and g = kτ + pτ be the corresponding eigenspace decompositions of g = Lie(G). Then, for a maximal Abelian subalgebra a of pσ ∩ pτ and A = exp(a), we have
$$G = K_\tau A K_\sigma.$$

From the space of linear functionals a*, we collect the eigenvalues of the adjoint representation (the commutator) of a on g and call these eigenvalues the roots of the K1AK2 decomposition. By fixing the Weyl chamber, we obtain a set of positive roots Σ+. Details of the theory of the K1AK2 decomposition and its root system can be found in Flensted-Jensen,22,44 Hoogenboom,23 Matsuki,45–47 and Kobayashi.48 The K1AK2 decomposition is also studied in the context of spherical harmonics and intertwining functions.49,50 Refine the root space gα of a root α by the ±1 eigenspaces of στ, and let the two dimensions be mα±.

Let dkσ, dkτ be the Haar measures of Kσ, Kτ, respectively. Let dH be the Euclidean measure on a. The Jacobian of the K1AK2 decomposition is the following:

Theorem 2.3
(Jacobian of the K1AK2 decomposition23,44). Let dg be the Haar measure on G, and let H ∈ a. We have the Jacobian and the integral formula corresponding to the change of variables associated with the K1AK2 decomposition,
$$\int_G f(g)\, dg = \int_{K_\tau} \int_{\mathfrak{a}^+} \int_{K_\sigma} f(k_1 \exp(H) k_2)\, J(H)\, dk_1\, dH\, dk_2,$$
where for noncompact G,
(2.2)
$$J(H) = \prod_{\alpha \in \Sigma^+} (\sinh \alpha(H))^{m_\alpha^+} (\cosh \alpha(H))^{m_\alpha^-},$$
and for compact G,
(2.3)
$$J(H) = \prod_{\alpha \in \Sigma^+} (\sin \alpha(H))^{m_\alpha^+} (\cos \alpha(H))^{m_\alpha^-}.$$

Similar results on the KAK decomposition and the restricted roots of symmetric spaces can be found in standard Lie group textbooks.28,33,51–53 In the KAK case, the Jacobian (2.2) reduces to $\prod_{\alpha \in \Sigma^+} (\sinh \alpha(H))^{m_\alpha}$ as we do not have a −1 eigenspace of στ, so that mα = mα+.33,54,55

Theorems 2.1–2.3 are decompositions of the group G. These decompositions can also be applied to the symmetric space G/Kσ. The following map Φ is the K1AK2 decomposition of the Riemannian symmetric space G/Kσ. The map Φ is also called the Hermann action,56,57 nonstandard polar coordinates,58 and non-Cartan parameterization.59 In the KAK case (K = Kσ = Kτ), Helgason called this the polar coordinate decomposition33 and credits Cartan30 for this map. Since the G-invariant measure of G/K inherits the Haar measure of G, the identical Jacobian is obtained for the decomposition of a symmetric space.60 

Theorem 2.4
(K1AK2 decomposition of G/Kσ). Given a K1AK2 decomposition G = KσAKτ with the Riemannian symmetric space G/Kσ, we have the map Φ,
(2.4)
$$\Phi : K_\tau \times A \to G/K_\sigma, \qquad (k, a) \mapsto kaK_\sigma.$$
Suppose H ∈ a, a = exp(H). For the G-invariant measure dx of G/Kσ, dkτ = Haar(Kτ), and the Euclidean measure dH on a, dx = J(H) dkτ dH holds, where the Jacobian J(H) is given in (2.2) if G is noncompact and (2.3) if G is compact.

Remark 2.5
[representing G/K ≅ P: gK (coset) or p ∈ P?]. In the standard KAK decomposition, the Jacobian (2.2) [respectively, (2.3)] only has sinh (respectively, sin) terms, as we discussed above. This result can be found in much of the literature, where some authors28,44,55,61 use ∏ sinh α(H) as the Jacobian, whereas other authors13,54,62 use ∏ sinh(α(H)/2). This gap is due to the difference in the realization of a symmetric space G/K as a subset P ⊂ G. The former uses the right coset representative, i.e., G/K ≅ P as gK ↦ p, where g = pk is its group level Cartan decomposition. Then, the action of G on G/K is given as (g1, g2K) ↦ g1g2K. The latter authors use the map G/K → P such that gK ↦ g(σg)−1, where σ is the group level involution. The G-action is (g1, g2) ↦ g1g2(σg1)−1, g1 ∈ G, g2 ∈ P. In terms of Theorem 2.4, the latter gives the map Φ such that (k, a) ↦ ka2k−1 since
$$ka(\sigma(ka))^{-1} = ka(ka^{-1})^{-1} = ka^2k^{-1},$$
which explains the extra factor 1/2 applied to H, where a = exp(H). Moreover, these two identifications define the map Φ : K × A → P with the same k, a as
(2.5)
$$\Phi(k, a) = kak^{-1} \quad\text{or}\quad \Phi(k, a) = ka^2k^{-1},$$
depending on the author’s notational choice explained above. This coordinate system Φ is sometimes called the polar coordinate decomposition, e.g., see Ref. 33 (p. 402).

Example 2.6

(G/K vs P: a symmetric positive definite matrix). Let us take a look at the two realizations in Remark 2.5 for G/K = GL(n, ℝ)/O(n), where P is the set of all symmetric positive definite matrices. Let S be a fixed positive definite symmetric matrix, with its eigendecomposition S = QΛQT, with Q ∈ O(n). The coset representation of S is QΛ · O(n) ∈ G/K, as QΛ = (QΛQT)Q is the polar decomposition. With the realization of P ≅ G/K, the point in G/K is represented by the matrix S = QΛQT.
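The polar decomposition in Example 2.6 is easy to confirm numerically: the coset representative QΛ factors as (QΛQT) · Q = SQ, a positive definite factor times an orthogonal factor. A minimal check, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
S = G @ G.T + 4 * np.eye(4)      # a symmetric positive definite matrix
lam, Q = np.linalg.eigh(S)       # eigendecomposition S = Q diag(lam) Q^T
Lam = np.diag(lam)

# Coset representative Q*Lam has polar decomposition (Q Lam Q^T) * Q = S * Q:
# symmetric positive definite factor S times orthogonal factor Q.
assert np.allclose(Q @ Lam, S @ Q)
assert np.allclose(Q @ Q.T, np.eye(4))   # Q is orthogonal
```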

Finally, we have the Lie algebra counterpart of Theorem 2.4 when K = Kσ = Kτ.

Theorem 2.7.
For a noncompact Riemannian symmetric space G/K with the Cartan decomposition g = k + p, let a be a maximal Abelian subalgebra of p. We have
(2.6)
$$\psi : K \times \mathfrak{a} \to \mathfrak{p}, \qquad (k, H) \mapsto kHk^{-1},$$
equivalently the decomposition $\mathfrak{p} = \bigcup_{k \in K} k\mathfrak{a}k^{-1}$, with the Jacobian J given as
(2.7)
$$J(H) = \prod_{\alpha \in \Sigma^+} |\alpha(H)|^{m_\alpha},$$
where H ∈ a and Σ is the restricted root system with dimensions mα. The measure on p is the Euclidean measure.

The answer to the title question of this section is that both one and many can be construed as correct. To explain how this is possible requires teasing apart the assumptions behind the words “associated with.” Certainly, Refs. 6, 8, 9, and 13 associate one random matrix theory with one symmetric space. However, the example of the GSVD coordinate systems discussed in Sec. I B associates multiple Jacobi densities with one symmetric space, the Grassmannian manifold. In Ref. 59, another example is illustrated as the “non-Cartan parameterization” for the special case of (G, Kσ, Kτ) = (U(n), O(n), U(p) × U(q)). (A similar approach may be found in Ref. 63.) This is discussed in Sec. VII B.

The reconciliation is that it is indeed true that the required maps (2.4) with K = Kσ = Kτ, i.e., Φ(k, a) = kaK = kak−1 (compact), or the map (2.6), ψ(k, H) = kHk−1 (noncompact), lead to a unique random matrix theory associated with a given symmetric space G/K. This is unique in the sense that any geodesic on the symmetric space G/K can be transformed to a geodesic on A with the above maps.

However, if we relax the condition so that we are allowed to choose Kτ under the generalized Cartan decomposition framework, we can associate multiple random matrix theories to one symmetric space. The GSVD coordinate systems in Sec. I B illustrate this viewpoint. The real Grassmannian manifold G/K = O(n)/(O(n − s) × O(s)) has the map Φ: (k, a) ↦ kaK for K = Kσ = Kτ explicitly written as $X = \begin{bmatrix} UC \\ VS \end{bmatrix} \cdot O(s)$, where U, V are (n − s) × s, s × s orthogonal matrices. On the other hand, if we let Kτ = O(p) × O(q), we have multiple maps Φ: (kτ, a) ↦ kτaK written as $X = \begin{bmatrix} UC \\ VS \end{bmatrix} \cdot O(s)$, where U, V are p × s, q × s orthogonal matrices.

Starting from Sec. V, we discuss (i) random matrices arising from the K1AK2 decompositions of compact symmetric spaces (Theorem 2.4 or 2.2) and (ii) random matrices arising from the Lie algebra decomposition of noncompact symmetric spaces (Theorem 2.7). The associated decompositions are well explained by matrix factorizations in numerical linear algebra. As we pointed out, the resulting Jacobi ensembles cover the full parameter set of the classical Jacobi densities, thereby completing the classification from the classical RMT point of view.

The Jacobian of the KAK (K1AK2) decomposition, equivalently the determinant of the differential of the map Φ : K × A → P (in Theorem 2.4 and Remark 2.5), is computed in several references.28,54,55 The proof of (2.2) can also be found in Refs. 23 and 44. However, the proof can be inaccessible to some audiences. Meanwhile, individual cases of the KAK decomposition, recognized as matrix factorizations, show up in many areas of mathematics, and some were discovered in various formats by specialists in numerical linear algebra. Motivated by random matrix theory (and sometimes perturbation theory in numerical analysis), Jacobians of these factorizations were often computed case-by-case using matrix differentials and wedging of independent elements.15,21,26,64,73

In this section, we provide a generalization of such individual Jacobian computations and compare it to the general technique Helgason proposed. With appropriate translation of terminologies and maps in Lie theory into linear algebra, we observe that both methods are indeed the same process but have been illustrated in different languages for a long time. We start out by introducing some important concepts in Lie theory accessible to an audience with a good background in linear algebra and perhaps some basic geometry. Then, in Table II, we present a line-by-line correspondence between Helgason’s derivation and the proof by matrix differentials.

We will start with a concrete 2 × 2 linear operator so as to establish the notions of the ping pong operator, ping pong vectors, ping pong subspaces, and the relationship to eigenvectors. Then, we will define a “bigger” linear operator adH that acts on spaces of matrices exactly in the manner we are about to describe.

We introduce the 2 × 2 matrix

$$M = \begin{pmatrix} 0 & \alpha \\ \alpha & 0 \end{pmatrix},$$

which we will call a 2 × 2 ping pong operator, and we will call $\binom{1}{0}$ and $\binom{0}{1}$ the ping pong vectors of M, in that M bounces these two vectors into α times the other,

$$M\binom{1}{0} = \alpha\binom{0}{1}, \qquad M\binom{0}{1} = \alpha\binom{1}{0}.$$

Furthermore, M has eigenvectors $\binom{1}{1}, \binom{1}{-1}$, with eigenvalues α, −α. We will call the eigenvalue a root of M.

Also worth pointing out are the matrix exponential and matrix sinh of M,

$$e^M = \begin{pmatrix} \cosh\alpha & \sinh\alpha \\ \sinh\alpha & \cosh\alpha \end{pmatrix}, \qquad \sinh M = \begin{pmatrix} 0 & \sinh\alpha \\ \sinh\alpha & 0 \end{pmatrix},$$

and thus, we see that sinh M is another ping pong operator with scaling sinh α. Figure 5 plots the action of a ping pong matrix and its exponential, with notations that we will use in Secs. III D and III E, i.e., the ping pong operator is denoted adH, pj and kj are the ping pong vectors, and xj and θxj are the eigenvectors. The right side of Fig. 5 shows the action of eM and portrays sinh(M) as a projection of eM on the pj direction.
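The 2 × 2 ping pong relations above can be verified numerically; a minimal sketch, assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm, sinhm

alpha = 0.7
M = np.array([[0.0, alpha], [alpha, 0.0]])   # the 2x2 ping pong operator
e1, e2 = np.eye(2)

# M bounces each ping pong vector to alpha times the other.
assert np.allclose(M @ e1, alpha * e2)
assert np.allclose(M @ e2, alpha * e1)

# Eigenvectors (1, 1) and (1, -1) with eigenvalues alpha and -alpha.
assert np.allclose(M @ [1.0, 1.0], alpha * np.array([1.0, 1.0]))
assert np.allclose(M @ [1.0, -1.0], -alpha * np.array([1.0, -1.0]))

# exp(M) = cosh(alpha) I + (sinh(alpha)/alpha) M, and sinh(M) is another
# ping pong operator with scaling sinh(alpha).
assert np.allclose(expm(M), np.cosh(alpha) * np.eye(2) + np.sinh(alpha) * M / alpha)
assert np.allclose(sinhm(M) @ e1, np.sinh(alpha) * e2)
```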

FIG. 5.

The eigenmatrices xj, θxj and ping pong matrices kj, pj (3.4) in the tangent space g. The operators are illustrated in blue lines. The operator adH and ping pong relationship (left) and the operator eadH on kj to pj (right). The left map shows the factor of αj, which is a building block of the Jacobian ∏j |αj(H)| (2.7). The factor of sinh αj in the right map builds the Jacobian ∏j | sinh αj(H)| (2.2).


We now go beyond 2 × 2 matrices and suggest the more general 2N × 2N ping pong matrix MN, with N roots, α1, …, αN, N pairs of ping pong vectors (k1, p1), …, (kN, pN) along with eigenvectors (x1, y1), …, (xN, yN),

$$M_N = \begin{pmatrix} 0 & \alpha_1 & & & & \\ \alpha_1 & 0 & & & & \\ & & \ddots & & & \\ & & & & 0 & \alpha_N \\ & & & & \alpha_N & 0 \end{pmatrix}, \qquad k_j = e_{2j-1},\quad p_j = e_{2j},\quad x_j = e_{2j-1} + e_{2j},\quad y_j = e_{2j-1} - e_{2j},$$

where the 2j − 1 and 2j positions are 0 or ±1 and all other entries of these vectors are 0. The matrices exp(MN) and sinh MN are block versions of the 2 × 2 case.

We may define the subspaces k and p (using the “mathfrak” Fraktur letters “k” and “p”) to be the span of the kj and pj, respectively. Note that k and p are orthogonal complements as subspaces. A key “ping pong” relationship between these subspaces is that

$$M_N \mathfrak{k} \subseteq \mathfrak{p} \quad\text{and}\quad M_N \mathfrak{p} \subseteq \mathfrak{k}.$$

Thus, if we consider MN|k, the restriction of MN to k, we have an operator from k to p. Evidently, MN|k as a matrix may be obtained by taking the even rows and odd columns of MN. The result is a diagonal matrix with the αj on the diagonal. Similarly, sinh(MN)|k is a diagonal matrix with sinh(αj) on the diagonal. We then get the important result that

$$\det\!\left(\sinh(M_N)\big|_{\mathfrak{k}}\right) = \prod_{j=1}^N \sinh \alpha_j,$$

the product of the hyperbolic sines of the roots.
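This determinant identity can be checked directly; a minimal sketch, assuming NumPy/SciPy (the 1-indexed “even rows, odd columns” becomes rows `1::2`, columns `0::2` in 0-indexed Python):

```python
import numpy as np
from scipy.linalg import block_diag, sinhm

alphas = np.array([0.3, 1.1, 2.0])
blocks = [np.array([[0.0, a], [a, 0.0]]) for a in alphas]
MN = block_diag(*blocks)                 # 2N x 2N ping pong matrix, N = 3

# Restrict sinh(MN) to k: even rows, odd columns (1-indexed).
restriction = sinhm(MN)[1::2, 0::2]
assert np.allclose(restriction, np.diag(np.sinh(alphas)))

# det(sinh(MN)|_k) = product of sinh(alpha_j).
assert np.isclose(np.linalg.det(restriction), np.prod(np.sinh(alphas)))
```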

Given a linear operator L on a vector space with nonzero eigenvalues ±λ, the following lemma constructs a pair of ping pong vectors from L:

Lemma 3.1.
For a linear operator L defined on any vector space, assume that ±λ are both nonzero eigenvalues of L. Let x and y be the corresponding eigenvectors, i.e., Lx = λx and Ly = −λy. Define two vectors k ≔ x + y, p ≔ x − y. Then, k, p are ping pong vectors. Furthermore, we have for the operator exp(L),
$$\exp(L)k = (\cosh \lambda)\, k + (\sinh \lambda)\, p.$$

The proof is a straightforward extension of the discussion in previous paragraphs.

Remark 3.2.

For the reader who wants to know the upcoming significance of this fact for Jacobians of matrix factorizations, it turns out (or maybe as the reader already observed in Sec. II) that the Jacobian will be the product of sinh α’s. Just as the matrix $\sinh\begin{pmatrix} 0 & \alpha \\ \alpha & 0 \end{pmatrix}$ takes one of the ping pong vectors to sinh α times the other, the key piece of the differential map will consist of multiple ping pong relationships, each one sending one ping pong vector to another.

Lie theory picks out operators L that have exactly the properties in Sec. III A. Our vector spaces are now matrix spaces, and our operators are linear operators on a matrix space. We introduce the Lie bracket, denoted by [X, Y], defined as [X, Y] = XY − YX (the commutator). The Kronecker product notation is very helpful in this context. We define the Kronecker product notation as a linear operator on a matrix space. [Many authors would write vec(BXAT) = (A ⊗ B)vec(X), but we omit the “vec” as we believe it is always clear from context. In a computer language such as Julia, one would write kron(A,B) * vec(X) = vec(B*X*A′).]

(3.1)
$$(A \otimes B)X \coloneqq BXA^T.$$

With this, we can express the Lie bracket with Kronecker products,

$$[X, Y] = XY - YX = (I \otimes X - X^T \otimes I)Y.$$

Consider the Lie bracket as a linear operator (determined by X) applied to Y, and call this operator adX (abbreviation for “adjoint”),

$$\mathrm{ad}_X(Y) \coloneqq [X, Y] = (I \otimes X - X^T \otimes I)Y.$$

This will be the important ping pong operator L. The operator exponential of adX (equivalently, the matrix exponential of I ⊗ X − XT ⊗ I) is given in the following lemma:

Lemma 3.3.
For the linear operator adX, the following holds for $e^{\mathrm{ad}_X} \coloneqq \sum_{n=0}^\infty (\mathrm{ad}_X)^n/n!$ and $\sinh \mathrm{ad}_X = (e^{\mathrm{ad}_X} - e^{-\mathrm{ad}_X})/2$:
(3.2)
$$e^{\mathrm{ad}_X} Y = e^X Y e^{-X},$$
(3.3)
$$\sinh(\mathrm{ad}_X) Y = (e^X Y e^{-X} - e^{-X} Y e^{X})/2.$$

Proof.
The proof is straightforward by identity (3.1): $e^X Y e^{-X} = (e^{-X^T} \otimes e^X)Y$ and $e^{\mathrm{ad}_X} Y = \exp(I \otimes X - X^T \otimes I)Y$. It is left to prove $e^{-X^T} \otimes e^X = \exp(I \otimes X - X^T \otimes I)$. Since I ⊗ X commutes with XT ⊗ I, we have
$$\exp(I \otimes X - X^T \otimes I) = \exp(I \otimes X)\exp(-X^T \otimes I) = (I \otimes e^X)(e^{-X^T} \otimes I) = e^{-X^T} \otimes e^X,$$
proving the result. The sinh result follows trivially.□
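Lemma 3.3 can be verified numerically using the column-major “vec” convention; a minimal sketch, assuming NumPy/SciPy:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 3
X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))

vec = lambda M: M.flatten(order="F")   # column-major vec: (A kron B) vec(M) = vec(B M A^T)

ad_X = np.kron(np.eye(n), X) - np.kron(X.T, np.eye(n))   # ad_X = I kron X - X^T kron I

# ad_X really is the commutator [X, Y] = XY - YX.
assert np.allclose(ad_X @ vec(Y), vec(X @ Y - Y @ X))

# (3.2): e^{ad_X} Y = e^X Y e^{-X}.
assert np.allclose(expm(ad_X) @ vec(Y), vec(expm(X) @ Y @ expm(-X)))
```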

In our first example, our vector space is n × n real matrices. Consider

$$\mathfrak{k} = \{X : X^T = -X\} \ \text{(antisymmetric matrices)}, \qquad \mathfrak{p} = \{X : X^T = X\} \ \text{(symmetric matrices)}.$$

The ping pong operator that will bounce k and p around will be adH = I ⊗ H − HT ⊗ I, where H is the diagonal matrix

$$H = \mathrm{diag}(h_1, \ldots, h_n).$$

Note that the operator adH sends an antisymmetric matrix to a symmetric matrix and a symmetric matrix to an antisymmetric matrix.

What does this have to do with Jacobians of matrix factorizations, such as the symmetric positive definite eigenvalue factorization? Consider a perturbation of Q when forming S = QΛQT. An infinitesimal antisymmetric perturbation QTdQ is mapped into a dS, an infinitesimal symmetric perturbation. This is the very linear map from the tangent space of Q to that of S that we wish to understand, so perhaps it is not surprising we would want to restrict our ping pong operator from k to p. We invite the reader to check that the corresponding eigenmatrices and ping pong matrices of adH may be found in the first column of Table I.
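A small numerical illustration of this bouncing between antisymmetric and symmetric matrices, assuming NumPy:

```python
import numpy as np

H = np.diag([1.0, 2.0, 3.0])
ad_H = lambda Y: H @ Y - Y @ H        # the ping pong operator ad_H = [H, .]

K = np.array([[ 0.0,  1.0, -2.0],
              [-1.0,  0.0,  3.0],
              [ 2.0, -3.0,  0.0]])    # antisymmetric (an element of k)
P = K @ K.T                           # symmetric (an element of p)

# ad_H sends k to p and p to k.
assert np.allclose(ad_H(K), ad_H(K).T)      # [H, K] is symmetric
assert np.allclose(ad_H(P), -ad_H(P).T)     # [H, P] is antisymmetric
```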

TABLE I.

Examples of eigenmatrices xl, θxl and ping pong matrices kl, pl. kl = xl + θxl and pl = xl − θxl as defined in (3.4). kl, pl are normalized to have ±1 entries. A block structure on rows/columns j, k and j′ ≔ p + j and k′ ≔ p + k is filled with 0 and ±1.

G/K: GL(n, ℝ)/O(n) | U(n)/O(n) | O(p,q)/(O(p) × O(q)) | O(n)/(O(p) × O(q)); rows: xl, θxl, kl, pl.

We proceed to construct more important general operators L that have the property in the assumption of Lemma 3.1. This is where the theory of Lie groups and symmetric spaces needs to be brought in. Upon doing so, we will obtain two linear spaces of matrices k, p, and also a space a.

For the reader not familiar with Lie groups, one need only imagine a continuous set of matrices that are a subgroup of real, complex, or quaternion matrices. The tangent space g is just a vector space of matrix differentials at the identity. One key example is the compact Lie group O(n) (the group of square orthogonal matrices) and its tangent space at the identity, g = Lie(O(n)): the set of antisymmetric matrices. Another key example is all n × n invertible matrices GL(n, ℝ) (a noncompact Lie group) and its tangent space g = Lie(GL(n, ℝ)), consisting of all n × n matrices.

Cartan noticed that important matrix factorizations start with two ingredients: the tangent space g (at the identity) of a Lie group G and an involution θ on g (i.e., θ2 = Id and θ[X, Y] = [θX, θY]). An example of θ is θ(X) = −XT on g for G = GL(n, ℝ). Among matrices in g, we select two kinds of matrices: the ones fixed by the involution θ and the ones negated by θ. Denote each set by k and p,

$$\mathfrak{k} = \{X \in \mathfrak{g} : \theta X = X\}, \qquad \mathfrak{p} = \{X \in \mathfrak{g} : \theta X = -X\}.$$

[For GL(n, ℝ), these are the antisymmetric and symmetric matrices, respectively.]

The next important player is a ⊂ p. Readers familiar with the singular value decomposition know the special role of diagonal matrices in the SVD as they list the very important “singular values.” Diagonal matrices have the nice property that linear combinations are still diagonal, they commute (the Lie bracket of any two is zero), and they are symmetric (the p of our first example). The generalization of this is to go into p and find a maximal subalgebra where every matrix commutes. This is the maximal subspace a ⊆ p such that for all a1, a2 ∈ a, [a1, a2] = 0.

If H ∈ a, then S = QΛQ^T with Λ = e^H is the eigendecomposition of a symmetric positive definite matrix. In the rest of the section, we will focus on factorizations of the form QΛQ^{−1}, where Λ is the matrix exponential of some H ∈ a. (These are more general than eigendecompositions, as Q need not be orthogonal and Λ need not be diagonal.) In particular, we will compute the Jacobian of perturbations with respect to Q, holding H constant; thus, the Jacobian will necessarily be expressed in terms of H.

From here, we assume that the Lie group G is noncompact. The compact case will be discussed after completing the noncompact case. Pick H ∈ a, and recall that ad_H is a linear operator on g. The operator ad_H will play the role of L, the ping pong operator. We decompose g into the eigenspaces of ad_H. For any eigenpair (α_j, x_j) of ad_H, i.e., ad_H(x_j) = [H, x_j] = α_j x_j, we observe (for α_j ≠ 0, using θH = −H since H ∈ p)

ad_H(θx_j) = [H, θx_j] = θ[θH, x_j] = −θ[H, x_j] = −α_j θx_j,

which implies that the eigenvalues ±α_j always exist in pairs, with the corresponding eigenmatrices x_j and θx_j. This satisfies the assumption of Lemma 3.1, from which we can now construct our ping pong matrices,

k_j ≔ x_j + θx_j ∈ k,   p_j ≔ x_j − θx_j ∈ p, (3.4)

with the ping pong relationship by the operator adH,

ad_H(k_j) = α_j p_j,   ad_H(p_j) = α_j k_j. (3.5)

In addition, the relationship by the operator eadH follows:

e^{ad_H}(k_j) = (cosh α_j) k_j + (sinh α_j) p_j, (3.6)
e^{ad_H}(p_j) = (sinh α_j) k_j + (cosh α_j) p_j. (3.7)

The ping pong matrices kj, pj, eigenmatrices xj, θxj and the relationships (3.5), (3.6) are illustrated in Fig. 5.
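These relationships can be checked concretely. A minimal numerical sketch for g = gl(n,R) with H diagonal, where the elementary matrix E_{jk} is an eigenmatrix of ad_H with eigenvalue h_j − h_k (the particular values of h below are arbitrary illustrations):

```python
import numpy as np

n, j, k_idx = 4, 0, 1
h = np.array([0.7, -0.3, 0.1, 0.5])
H = np.diag(h)                    # H in a (diagonal matrices inside p)
alpha = h[j] - h[k_idx]           # restricted root alpha(H) = h_j - h_k

x = np.zeros((n, n)); x[j, k_idx] = 1.0   # eigenmatrix: ad_H(x) = alpha * x
theta = lambda M: -M.T
tx = theta(x)                              # eigenmatrix for eigenvalue -alpha

ad = lambda A, B: A @ B - B @ A
assert np.allclose(ad(H, x), alpha * x)
assert np.allclose(ad(H, tx), -alpha * tx)

K = x + tx                        # ping pong matrices as in (3.4)
P = x - tx
assert np.allclose(ad(H, K), alpha * P)    # (3.5)
assert np.allclose(ad(H, P), alpha * K)

# e^{ad_H} acts by conjugation with e^H; check the cosh/sinh ping pong
eH, eHinv = np.diag(np.exp(h)), np.diag(np.exp(-h))
assert np.allclose(eH @ K @ eHinv, np.cosh(alpha) * K + np.sinh(alpha) * P)
assert np.allclose(eH @ P @ eHinv, np.sinh(alpha) * K + np.cosh(alpha) * P)
```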

As we mentioned in Remark 3.2 and Sec. III C, the role of the ping pong matrices k_j, p_j is crucial. The map e^{ad_H} (particularly, sinh ad_H) is the main ingredient in constructing the differential map dΦ of the factorization Φ: (Q, Λ) ↦ QΛQ^{−1}. The operator e^{ad_H} is applied to k_j and then projected onto the span of the p_j as in Fig. 5 (right), leaving only the sinh α_j factor.

We now compute the full basis of k and p. The collection ∪_j{x_j, θx_j} is a full basis for the union of eigenspaces with nonzero eigenvalues. Since span({x_j, θx_j}) = span({k_j, p_j}) for any j, ∪_j{k_j, p_j} is another full basis for the eigenspaces with nonzero eigenvalues. Interestingly, we observe θk_j = k_j and θp_j = −p_j, which identifies ∪_j{k_j} and ∪_j{p_j} as subsets of the bases of k and p, respectively. The remaining case is the zero eigenspace. When α_j = 0, there are two possibilities. First, if x_j and θx_j are independent of each other, we can still obtain k_j and p_j as before and add them to ∪_j{k_j} and ∪_j{p_j}. Second, if x_j and θx_j are collinear, θx_j is either x_j or −x_j. If θx_j = x_j, we collect such x_j and name the set K_z. Similarly, if θx_j = −x_j, we put them in P_z. Since we analyzed both the nonzero and zero eigenspaces, we have obtained a full basis of g, namely ∪_j{k_j, p_j} ∪ K_z ∪ P_z. Refining once more, span(∪_j{k_j} ∪ K_z) = k and span(∪_j{p_j} ∪ P_z) = p.

In Sec. III D, we obtained the basis of k and p, in terms of ping pong matrices, by linearly combining eigenmatrices of the operator adH. We now illustrate the relationship of the basis of k and p under eadH, just like we illustrated the operator MN in Sec. III A. In the k1, …, kN and p1, …, pN basis, we have the following:

e^{ad_H} = [ diag(cosh α_j)   diag(sinh α_j) ; diag(sinh α_j)   diag(cosh α_j) ]. (3.8)

We are now ready to carefully investigate the map dΦ using (3.8).

Remark 3.4.
Results in Lie theory imply that the eigenmatrices x_j and θx_j of ad_H are independent of the choice of H ∈ a. In other words, the complete basis of g and of k, p obtained above does not depend on a specific choice of H. Furthermore, the eigenvalues ±α_j are functions of H, and these eigenvalue-assigning functions α̃_j: H ↦ α_j ∈ R are more properly called the restricted roots. It can be inferred from the separation of the basis that k and p together form the whole tangent space g,
g = k ⊕ p. (3.9)

The reader may have noticed that our discussions have focused on the Lie algebras rather than the Lie groups themselves. It is a point of fact that Lie groups are mostly useful to define the factorizations of our interest, but Lie algebras are where the Jacobian "lives," and hence, this is the most important place to concentrate. For the interested reader, the subgroup K of G is picked such that its tangent space is exactly k [one easy way to imagine such a subgroup is to define K ≔ exp(k)], and we now obtain a symmetric space G/K.

It can be proven that for a noncompact Lie group, there exists a unique involution θ such that the subgroup K is the maximal compact subgroup of G. We call θ the Cartan involution, and (3.9) is called the Cartan decomposition. Furthermore, the subset P ≔ exp(p) plays an important role, as its elements serve as representatives of the cosets in G/K. Regarding the identification of G/K with elements of P, refer to Remark 2.5, where we point out, taking G/K = GL(n,R)/O(n) as an example, that an element of G/K has the form of a coset gK, and gg^T may serve as a representative of the coset in P. While some authors use (gg^T)^{1/2}, the key point is that each choice is well defined independent of the choice of representative.

Upon considering the compact cases, it is helpful to make use of a certain duality between compact and noncompact symmetric spaces. We again start with a noncompact Lie group G and the Cartan involution θ. Let g = k ⊕ p be the Cartan decomposition. Then, define a new space,

g_C ≔ k ⊕ ip, (3.10)

where i is the imaginary unit. A result in Lie theory implies that the new vector space g_C is the tangent space of a compact Lie group, say, G_C. In Table I, the first and third columns, labeled GL(n,R)/O(n) and O(p, q)/(O(p) × O(q)), are noncompact tangent spaces. Their compact duals are, respectively, the second and fourth columns, labeled U(n)/O(n) and O(n)/(O(p) × O(q)).

Matrixwise, the ping pong matrices k_j ∈ k, p_j ∈ p of g are brought to a new set of ping pong matrices k_j ∈ k_C, ip_j ∈ p_C in g_C. Let us denote them by k̃_j ≔ k_j and p̃_j ≔ ip_j. The role of the subspace a is now played by ia, replacing ad_H by ad_{iH}. We deduce a set of similar relationships for k̃_j, p̃_j under ad_{iH},

ad_{iH}(k̃_j) = α_j p̃_j,   ad_{iH}(p̃_j) = −α_j k̃_j.

In matrix form,

ad_{iH} = [ 0   −diag(α_j) ; diag(α_j)   0 ], (3.11)

which leads to the compact version of (3.6) and (3.7),

e^{ad_{iH}}(k̃_j) = (cos α_j) k̃_j + (sin α_j) p̃_j,   e^{ad_{iH}}(p̃_j) = −(sin α_j) k̃_j + (cos α_j) p̃_j. (3.12)

At the group level, the symmetric spaces G/K and G_C/K are called duals of each other, and they appear in the same row of standard symmetric space charts. Examples of eigenmatrices x_j, θx_j and ping pong matrices for some symmetric spaces and their duals are presented in Table I.

We provide a generalized algorithm for finding the Jacobian of the decomposition Φ(Q, Λ) = QΛQ^{−1} [as defined in (2.5)], where Λ ∈ A ≔ exp(a) and Q ∈ K. Here, k and p from Sec. III D are the tangent spaces of K and P, respectively. As mentioned, we follow Helgason's derivation (Ref. 28, Theorem 5.8 of Chap. I) and start by directly translating his proof into simple linear algebra terms. In Table II, Helgason's derivation (left) is compared in the same row with the linear algebra version (right). Table II uses the noncompact symmetric space G/K, but the compact case is identical after replacing sinh α_j by sin α_j.

TABLE II.

Line-by-line translation of the classical proof to linear algebra proof.

Classical notation (Ref. 28, p. 187, Proof of Theorem 5.8, Chap. I) | Linear algebra notation (matrix factorizations)
Definitions:
Φ: K × A → G/K | Φ̃: K × A → P
Φ: (k, a) ↦ kaK | Φ̃: (Q, Λ) ↦ QΛQ^{−1} (Λ^{1/2} = a, Q = k)
dτ(g_0): (G/K)_o → (G/K)_{g_0 · o} | dτ̃(g_0): X ↦ g_0 X (θg_0)^{−1}
dπ: g → (G/K)_o | (θk = k for k ∈ K, θp = p^{−1} for p ∈ P)
At k ∈ K, fix a tangent vector dτ(k)T_{iα} | At Q ∈ K, fix a tangent vector dQ
At Id, basis element T_{iα} ∈ k | At Id, basis element Q^{−1}dQ = k_j ∈ k
Derivations:
2 dΦ(dτ(k)T_{iα}, 0)^a | dΦ̃(dQ, 0) = d(QΛQ^{−1}) (with dΛ = 0)
= dπ(2kT_{iα}a) | = dQ ΛQ^{−1} + QΛ dQ^{−1}
= dτ(ka) dπ(2 Ad(a^{−1})T_{iα})^b | = dτ̃(QΛ^{1/2}) Λ^{−1/2}(Q^{−1}dQ Λ + Λ dQ^{−1}Q)Λ^{−1/2}^c
= dτ(ka) dπ(Ad(a^{−1})T_{iα} − Ad(a)T_{iα}) | = dτ̃(QΛ^{1/2}) (Λ^{−1/2} k_j Λ^{1/2} − Λ^{1/2} k_j Λ^{−1/2})
[Let H be such that exp(H) = a = Λ^{1/2}.] | [Note that dτ̃(QΛ^{1/2})X = QΛ^{1/2} X Λ^{1/2} Q^{−1}.]
= dτ(ka) dπ(e^{−ad_H}T_{iα} − e^{ad_H}T_{iα}) | = dτ̃(QΛ^{1/2}) [exp(H^T ⊗ I − I ⊗ H) − exp(I ⊗ H − H^T ⊗ I)] k_j [by (3.3)]
= dτ(ka) dπ(−2 sinh α(H) · α(H)^{−1}[H, T_{iα}]) | = dτ̃(QΛ^{1/2}) (−2 sinh α_j) p_j [by (3.8)]
^a Since Λ^{1/2} = a, we have 2dΦ = dΦ̃.

^b This is (dτ(ka) ∘ dπ)(Ad(a^{−1})T_{iα}).

^c Both dQΛQ^{−1} and QΛdQ^{−1} are tangent vectors at QΛQ^{−1} and should be brought back to the identity (inside the bracket).

From the last line of Table II, we can finish the story in two different directions, depending on the choice of the volume measure. First, if we use a G-invariant measure (the "canonical measure") on P, the measure is invariant under the map dτ̃ (by definition of the invariant measure). Thus, we can disregard dτ̃(QΛ^{1/2}) [or dτ(ka)], so that the Jacobian of dΦ̃ (or dΦ) only depends on the differential map k_j ↦ (sinh α_j)p_j. Since ∪_j{k_j} and ∪_j{p_j} are both orthonormal bases, we obtain the Jacobian (2.2),

∏_{α∈Σ^+} sinh α(H).

Note that the eigenvalues ±α_j belonging to x_j and θx_j share the same corresponding k_j [see (3.4) and above]. Thus, we take only the positive roots Σ^+ above.

The second choice of measure is the Euclidean measure, which is a wedge product of independent entrywise differentials. In this case, the procedure is identical up to the factor sinh α_j, but the map dτ̃(QΛ^{1/2}) [equivalently dτ(ka)] cannot be ignored. One needs to carefully compute the differential map dτ̃(QΛ^{1/2})p_j = QΛ^{1/2}p_jΛ^{1/2}Q^{−1} under the Euclidean measure. We can further use the fact that conjugation by the matrix Q always preserves the Euclidean measure, since the subgroup K is always a set of matrices with an orthogonal/unitary type of property. Thus, one needs to compute the map p_j ↦ Λ^{1/2}p_jΛ^{1/2} and multiply its Jacobian by ∏_{α∈Σ^+} sinh α(H).

Remark 3.5.

For a compact Lie group G, sinh α_j is replaced by sin α_j everywhere. Moreover, the last Jacobian computation step p_j ↦ Λ^{1/2}p_jΛ^{1/2} can be omitted in the compact cases, since Λ^{1/2} is then an orthogonal/unitary matrix. The map dτ̃(Λ^{1/2}) preserves the Euclidean measure, just as dτ̃(Q) does.

In the previous paragraphs, we studied the Jacobian of the usual Cartan decomposition. We now proceed to the generalized Cartan decomposition (Theorems 2.1 and 2.2), its Jacobians (2.2) and (2.3), and the extension of Table II. The derivations are analogous, analyzing subspaces of g, but one should now proceed with four tangent subspaces, k_τ ∩ k_σ, k_τ ∩ p_σ, p_τ ∩ k_σ, and p_τ ∩ p_σ. Earlier work on these Jacobian-related derivations may be found in Refs. 23 and 44. The maximal subspace a is now defined inside p_τ ∩ p_σ. We start with the same strategy: the tangent space g is decomposed into the eigenspaces of the linear operator ad_H with H ∈ a. The eigenvalues ±α_j still come in pairs, but we have two eigenmatrices x_j, τσx_j for the eigenvalue α_j and two eigenmatrices τx_j, σx_j for the eigenvalue −α_j. We define four vectors v_1, v_2, w_1, w_2 with the same roles as k_j and p_j played before,

and these have similar ping pong relationships by adH like kj and pj,

We can similarly extend (3.8) and other relationships and proceed as in Table II to obtain (2.2) and (2.3).

In compact cases, the random matrices can simply be determined from the Haar measure of the compact Lie group G,12,13 since the compactness of G turns the Haar measure into a probability measure. In Secs. V–VII, we discuss random matrix ensembles based on the ten types of Riemannian symmetric spaces in Cartan's classification. For the triple (G, K_σ, K_τ), we start with the cases where G/K_σ and G/K_τ are of the same type in Secs. V and VI. Then, in Sec. VII, we discuss the "mixed types," where G/K_σ and G/K_τ are of different types under Cartan's classification.

Sections VIII and IX discuss classical random matrix ensembles associated with noncompact symmetric spaces. Hermite and Laguerre eigenvalue joint densities arise as a result of (2.2), using Theorem 2.4 on noncompact symmetric spaces. As opposed to compact Lie groups and symmetric spaces, where the Haar measure or G-invariant measure can be normalized by a constant into a probability measure, invariant measures on noncompact manifolds cannot be normalized to one by constants. A normalizing factor S should be introduced to complete the construction of a probability measure. Therefore, random matrices on a noncompact manifold face an innate problem if we proceed analogously to Secs. V and VI:

  • The choice of the probability measure on noncompact G/K is not unique.

In Ref. 13, Dueñez also addressed this problem along the noncompact duals.

As we push the measure forward to the subgroup A, the resulting measure should be a symmetric function of the independent generators of A. Hence, the probability measure I(g) of the random matrix ensemble is the Haar or G-invariant measure on G or G/K, multiplied by some symmetric function S on A,

where g = k1ak2 or g = kak−1 and μ(g) is an invariant measure. Using (2.2), the measure on A is induced,

which means that even though the measure I changes, the measure on A still differs only by a normalization function. The traditional choice of S has been made such that I(g) can be constructed from independent Gaussian distributions endowed on matrix entries. In fact, one could also endow a Gaussian distribution on the Riemannian manifold (symmetric space) itself.65 

An alternative approach that appears in Ref. 6 is to put a probability measure on the tangent space p of the symmetric space. In particular, independent Gaussian distributions endowed on the elements of p give rise to the Hermite and Laguerre ensembles by Theorem 2.7. We will follow this alternative approach.

As discussed in Sec. IV B, the Haar measure of a noncompact group G or a noncompact symmetric space G/K is not a probability measure. However, we can force an analog of a random matrix theory. Imagine, for example, a noncompact K1AK2 decomposition G = KσAKτ with (G,Kσ,Kτ)=(GL(n,R),O(n),O(p,q)). This is called the hyperbolic SVD66 where any real invertible matrix M is factored into the product of an orthogonal matrix O, a positive diagonal matrix Λ, and an indefinite orthogonal matrix V. From the Haar measure and (2.2) of GL(n,R), one obtains the Jacobian,

where the λ_j are the squared diagonal entries of Λ.

One can impose a Gaussian-like density function (although not a probability density) on the group GL(n,R), such as exp(−tr(g I_{p,q} g^T)/2)∏dg_{jk}, where I_{p,q} = diag(I_p, −I_q). In terms of the independent entries of g, this is

∏_{j=1}^{n} ∏_{k=1}^{p} e^{−g_{jk}²/2} ∏_{j=1}^{n} ∏_{k=p+1}^{n} e^{g_{jk}²/2} ∏_{j,k} dg_{jk}. (4.1)

Since the Haar measure of GL(n,R) is |det(g)|^{−n}∏dg_{jk}, (4.1) becomes [after integrating out O(n) and O(p, q)]

where λ_1, …, λ_p ≥ 0 are the first p squared diagonal values of Λ and λ_{p+1}, …, λ_n ≤ 0 are the last q squared diagonal values of Λ multiplied by −1. Extending this approach to find a proper random matrix probability measure on noncompact Lie groups and symmetric spaces, with joint probability densities on the subgroup A, is still an open problem.

The joint probability density of the circular ensemble is (β = 1, 2, 4)

∝ ∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|^β.

Circular ensembles with β = 1, 2, and 4 (COE, CUE, and CSE) arise as the eigenvalues of special unitary matrices. As we discussed in the Introduction, circular ensembles are completely classified by (compact) symmetric spaces of the types AI, A, and AII, respectively.5,13 The K_1AK_2 decomposition associated with each symmetric space recovers the KAK decomposition. The restricted root systems (and dimensions) of AI, A, and AII are given as follows (1 ≤ j < k ≤ n):

(5.1)

Since we have compact symmetric spaces, we use (2.3) from either Theorem 2.2 or 2.4 with these root systems.

The compact symmetric space AI is G/K = U(n)/O(n). The involution on U(n) has no free parameter, and the K_1AK_2 decomposition is equivalent to the KAK decomposition of U(n)/O(n). (In other words, we only have Cartan's coordinate system.) The maximal Abelian torus A is the set of unit complex diagonal matrices,

From the KAK decomposition, we obtain U = O_1DO_2, a factorization of a unitary matrix U into the product of two orthogonal matrices O_1, O_2 ∈ O(n) and a unit complex diagonal matrix D ∈ A. This decomposition first appears in Ref. 67, and we will call it the ODO decomposition. The corresponding Jacobian (up to a constant) from (2.3) using (5.1), β = 1, is (with the change of variables θ_j = 2h_j)

∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|.

This is the joint density of the COE. In other words, the doubled angles on the diagonal of D from the ODO decomposition of a Haar distributed unitary matrix are COE distributed. Moreover, if we identify G/K with the set of unitary symmetric matrices P, the map (2.5) is the factorization S = OΛO^T, the eigendecomposition of a unitary symmetric matrix S with real eigenvectors O. In terms of Remark 2.5, U = O_1DO_2 becomes S = UU^T = O_1D²O_1^T, where Λ = D². To obtain the COE, we can utilize both factorizations:

  • Two times the angles of the unit diagonal values of D from the ODO decomposition of U ∈ Haar(U(n)).

  • The angles of the (unit) eigenvalues of a unitary symmetric matrix obtained from UUT, U ∈ Haar(U(n)).
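The second algorithm can be sketched with SciPy's Haar sampler (a sketch, not the authors' code; scipy.stats.unitary_group draws from the Haar measure on U(n)):

```python
import numpy as np
from scipy.stats import unitary_group

U = unitary_group.rvs(5, random_state=0)   # Haar distributed unitary, U(5)

S = U @ U.T        # unitary symmetric representative of the coset in U(n)/O(n)
assert np.allclose(S, S.T)

# eigenvalues of a unitary symmetric matrix lie on the unit circle;
# their angles follow the COE joint density
eigvals = np.linalg.eigvals(S)
coe_angles = np.sort(np.angle(eigvals))
assert np.allclose(np.abs(eigvals), 1.0)
```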

Remark 5.1.

The second algorithm above has been known since the days of Dyson,1,4 while we are not aware of the first algorithm appearing in the literature.

The symmetric space of compact type A is G/K = U(n) × U(n)/U(n). The restricted root system returns to the usual root system An of the classical semisimple Lie algebra. A maximal torus of U(n) is a Cartan subalgebra of U(n). Weyl’s integration formula agrees with (2.3) obtaining the CUE, which is the eigenvalues of a Haar distributed unitary matrix. The derivation of the CUE can be found in many random matrix textbooks.15,64,68
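The CUE derivation above amounts to a one-line sampler (a minimal sketch, assuming scipy.stats.unitary_group for the Haar draw):

```python
import numpy as np
from scipy.stats import unitary_group

U = unitary_group.rvs(6, random_state=42)             # Haar measure on U(6)
cue_angles = np.sort(np.angle(np.linalg.eigvals(U)))  # a CUE sample
assert cue_angles.shape == (6,)
```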

The involution X ↦ J_n^TX^TJ_n, where J_n ≔ [0, I_n; −I_n, 0], on the tangent space of U(2n) results in the symmetric space U(2n)/Sp(n), where Sp(n) ≔ Sp(2n,C) ∩ U(2n). A choice of maximal Abelian torus A is

Again from the KAK decomposition, we obtain U = Q_1DQ_2, a factorization of a 2n × 2n unitary matrix U into the product of two unitary symplectic matrices Q_1, Q_2 ∈ Sp(n) and a unit complex diagonal matrix D ∈ A. We call this the QDQ decomposition. The corresponding Jacobian from (2.3) using (5.1) (β = 4) is

∏_{1≤j<k≤n} |e^{iθ_j} − e^{iθ_k}|⁴,

with the change of variables θ_j = 2h_j. This is the CSE distribution. Similarly, as in Sec. V A, the eigendecomposition of the unitary skew-Hamiltonian matrix obtained by UJ_nU^TJ_n^T, U ∈ Haar(U(2n)), is equivalent to the map (2.5). Two numerical algorithms for sampling the CSE are as follows:

  • Two times the angles of the first n unit diagonal values of D from the QDQ decomposition of U ∈ Haar(U(2n)).

  • The angles of the first n (unit) eigenvalues of a unitary skew-Hamiltonian matrix obtained by UJnUTJnT with U ∈ Haar(U(2n)).

The joint probability density of the Jacobi ensemble is (β = 1, 2, 4)

J^{(β)}_{α_1,α_2} ∝ ∏_{1≤j<k≤n} |x_j − x_k|^β ∏_{j=1}^{n} x_j^{α_1}(1 − x_j)^{α_2}.

In Refs. 12 and 13, Jacobi ensembles β = 1, 2, 4 arise from the KAK decompositions of seven compact symmetric spaces: BDI, AIII, CII, DIII, BD, C, and CI. In particular, types BDI, AIII, and CII give multiple Jacobi densities as follows (for integers p ≥ q):

and the powers of the x_j's are fixed to β/2 − 1. The remaining four cases add four more parameter points, which can be found in Refs. 12 and 13. In this paper, we omit these four cases as they yield no further results, having only Cartan's coordinates (no free parameter for the Cartan involution).

The K_1AK_2 decompositions G = K_τAK_σ of the compact types BDI-I, AIII-III, and CII-II are exactly the CS decompositions (CSD)69,70 of orthogonal, unitary, and unitary symplectic matrices, respectively. The decomposition Φ of the symmetric space (Theorem 2.4) is the GSVD coordinate system we discussed in Secs. I B and II C. Assume r ≥ p ≥ q ≥ s and n = p + q = r + s throughout this section. We note that with the KAK decomposition, only the cases p = r, q = s are obtained for the CSD. The root system associated with the K_1AK_2 decomposition is the following (1 ≤ j < k ≤ s):

(6.1)

For all three β, we have the identical maximal Abelian subgroup A,

where C, S ∈ R^{s×s} are diagonal matrices with the cosine and sine values of θ_1, …, θ_s on their diagonal entries, respectively.

With the involution X ↦ I_{p,q}XI_{p,q} on the tangent space of O(n), we obtain the symmetric space BDI, G/K = O(n)/(O(p) × O(q)), where I_{p,q} ≔ diag(I_p, −I_q). With the two symmetric pairs [O(n), O(p) × O(q)] and [O(n), O(r) × O(s)], we obtain the K_1AK_2 decomposition BDI-I,

This is the real CSD. [Equivalently, one can imagine the GSVD of (1.2).] From (2.3) using (6.1) β = 1, we obtain the Jacobian

Using trigonometric identities with the change of variables x_j = cos²θ_j = (1 + cos 2θ_j)/2,

which is the joint density of the β = 1 Jacobi ensemble J^{(1)}_{α_1,α_2,s} if we let α_1 = (q − s + 1)/2 − 1 and α_2 = (p − s + 1)/2 − 1. This result agrees with Ref. 20, Theorem 1.5, where the squared CSD cosine values of a Haar distributed orthogonal matrix are distributed as the β = 1 Jacobi ensemble. Moreover, recall that the QL decomposition G = QL (a lower triangular analog of the QR decomposition) of an n × n independent Gaussian matrix G yields a Haar distributed orthogonal matrix Q. Since the GSVD18,19 is equivalent to the combination of the QL decomposition and the CSD, one can take the GSVD of real independent Gaussian matrices to obtain the same β = 1 Jacobi ensemble. Two associated numerical algorithms are as follows (a = q − s, b = p − s):

  • The squared CSD cosine values of a Haar distributed m × m orthogonal matrix (m = 2s + a + b) with row/column partitions (s + a, s + b) and (s, s + a + b).

  • The squared cosine values, where the tangent values are the generalized singular values of real (s + a) × s and (s + b) × s Gaussian matrices.
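Both algorithms can be sketched in SciPy (a sketch assuming SciPy ≥ 1.5 for scipy.linalg.cossin; the matrix-pencil route to the generalized singular values below is one of several equivalent formulations, not the authors' code):

```python
import numpy as np
from scipy.stats import ortho_group
from scipy.linalg import cossin

s, a, b = 3, 2, 1
m = 2 * s + a + b
O = ortho_group.rvs(m, random_state=1)       # Haar distributed orthogonal matrix

# CSD with row partition (s + a, s + b) and column partition (s, s + a + b)
(u1, u2), theta, (v1h, v2h) = cossin(O, p=s + a, q=s, separate=True)
jacobi_sample = np.sort(np.cos(theta) ** 2)  # squared CSD cosine values

# cross-check: the GSVD route, where cos^2(theta) are the eigenvalues of
# (A^T A)(A^T A + B^T B)^{-1} for independent Gaussian A, B
rng = np.random.default_rng(2)
A = rng.standard_normal((s + a, s))
B = rng.standard_normal((s + b, s))
lam = np.sort(np.linalg.eigvals(np.linalg.solve(A.T @ A + B.T @ B, A.T @ A)).real)
# lam is a second, independent beta = 1 Jacobi sample
assert jacobi_sample.shape == (s,) and lam.shape == (s,)
```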

The two symmetric pairs of compact AIII type are [U(n), U(p) × U(q)] and [U(n), U(r) × U(s)]. The K_1AK_2 decomposition of the group G is the CSD of unitary matrices, and the decomposition of G/K_σ = U(n)/(U(r) × U(s)) is the complex GSVD described in Sec. I B and Eq. (1.2). Using (2.3) with the root system (6.1), β = 2, and the change of variables x_j = cos²θ_j as above, we obtain the Jacobian

which is the β = 2 Jacobi density J^{(2)}_{α_1,α_2,s} with α_1 = q − s, α_2 = p − s. Numerically, the following can be utilized to obtain β = 2 Jacobi densities (a = q − s, b = p − s):

  • The squared CSD cosine values of a Haar distributed m × m unitary matrix (m = 2s + a + b) with row/column partitions (s + a, s + b) and (s, s + a + b).

  • The squared cosine values, where the tangent values are the generalized singular values of complex (s + a) × s and (s + b) × s Gaussian matrices.

Jacobi densities with β = 4 are similarly obtained from the two symmetric spaces Sp(n)/(Sp(p) × Sp(q)) and Sp(n)/(Sp(r) × Sp(s)), both of compact type CII. We identify Sp(n) as the quaternionic unitary group, U(n,H) ≔ {g ∈ GL(n,H) | g^Dg = I_n}. The K_1AK_2 decomposition is the CSD of a quaternionic unitary matrix. Using (2.3) with the root system (6.1), β = 4, we obtain the following Jacobian with the change of variables x_j = cos²θ_j:

which is the β = 4 Jacobi density J^{(4)}_{α_1,α_2,s} with α_1 = 2(q − s) + 1, α_2 = 2(p − s) + 1. The associated numerical algorithm is the following (a = q − s, b = p − s):

  • The squared cosine CS values of a Haar distributed m × m quaternionic unitary matrix (m = 2s + a + b) with row/column partitions (s + a, s + b) and (s, s + a + b).

Remark 6.1.

Again, one can use the GSVD on quaternionic Gaussian matrices to obtain the classical β = 4 Jacobi ensemble.

In this section, we show even more cases in which a single symmetric space leads to multiple random matrix theories. We introduce K_1AK_2 decompositions with two compact symmetric spaces, each from a different Cartan type. The classification of such K_1AK_2 decompositions is studied in Ref. 47, with the computation of the corresponding root systems. As always, the names of these decompositions are combinations of two Cartan types, e.g., AI-II represents (G, K_σ, K_τ) = (U(2n), O(2n), Sp(2n)).

The two compact symmetric spaces are types AI and AII, U(2n)/O(2n) and U(2n)/Usp(2n). A maximal Abelian subalgebra a ⊂ p_σ ∩ p_τ is the set of all matrices diag(iθ_1, …, iθ_n, −iθ_1, …, −iθ_n) for (θ_1, …, θ_n) ∈ R^n. The subgroup A is the following:

The root system is given as

(7.1)

Using (2.3), we obtain the Jacobian (ξj = 4θj),

which is the joint probability density of the CUE. Hence, we obtain another sampling method for the CUE.

The two symmetric spaces in each case are the following:

The subgroup A is computed as follows:

where C, S are q × q diagonal matrices with cosine and sine values of q angles θ1, …, θq on their diagonals. The imaginary unit η is i for AI-III (β = 1) and η = j, k for CI-II (β = 2). [If we select the subgroup K of U(n,H)/U(n) to be the unitary group with the imaginary unit j, we could also obtain η = i.] The root system is the following (β = 1, 2):

(7.2)

Using (2.3) with the root system above, we obtain the following Jacobian:

(7.3)

where x_j = sin² 2θ_j for all j. The β = 1 case of (7.3) can also be obtained from the CS decomposition approach, with an (n + 1) × (n + 1) orthogonal matrix and partitions (p, q + 1) and (p + 1, q) [see Fig. 4]. The parameters of the β = 2 case of (7.3) cannot be obtained by the complex CSD and, thus, fall outside the classical parameters.

Another family of K_1AK_2 decompositions arises from the following pairs of compact symmetric spaces (β = 2, 4):

Under Cartan’s classification, they are types DI-III and AII-III, respectively. The subgroup A can be computed as

where I_2 is the 2 × 2 identity matrix, J_1 = [0, 1; −1, 0], and C, S are q × q diagonal matrices with the cosines and sines of θ_1, …, θ_q on their diagonals. The root system is given as follows (β = 2, 4):

(7.4)

Again, using (2.3) with the root system above, we obtain the following Jacobian, with the change of variables x_j = sin²θ_j for all j:

(7.5)

These are the β = 2, 4 Jacobi ensembles. Neither case can be obtained from the classical CSD approach, so they are all nonclassical parameters of the Jacobi ensemble. To see this at once, we compare three β = 2 Jacobi densities, one each from Secs. VI B, VII B, and VII C. Figure 6 shows the possible parameters α_1, α_2 of the β = 2 Jacobi ensemble obtained from each approach.

FIG. 6.

The parameter space (α_1, α_2) ∈ (−1, ∞)² of the β = 2 Jacobi ensemble covered by symmetric spaces. The GSVD coordinate system on the complex Grassmannian manifold (AIII-III) discussed in Sec. VI covers the red dots. A new coordinate system on the quaternionic (respectively, real) Grassmannian manifold discussed in Sec. VII B (respectively, Sec. VII C) of type CI-II (respectively, DI-III) gives the blue (respectively, green) dots.


While Sec. VII contains essentially new random matrix theories, Secs. VIII and IX review the Hermite and Laguerre ensembles for completeness.6–9,59

The joint probability density of the Hermite ensemble is (β = 1, 2, 4)

∝ ∏_{1≤j<k≤n} |λ_j − λ_k|^β exp(−∑_{j=1}^{n} λ_j²/2).

Hermite ensembles with β = 1, 2, and 4 (GOE, GUE, and GSE) arise as the eigenvalues of symmetric, Hermitian, and self-dual Gaussian matrices, respectively. Hermite ensembles can be thought of as the Gaussian measure endowed on the tangent space of noncompact symmetric spaces of the types AI, A, and AII. The connection between these symmetric spaces and the Hermite ensembles is made by Theorem 2.7. The decomposition Ψ (2.6) in Theorem 2.7 is the eigendecomposition of symmetric, Hermitian, and self-dual matrices. The maximal Abelian subalgebra a is the collection of all real diagonal matrices diag(h_1, …, h_n). The restricted root system is the following (1 ≤ j < k ≤ n):

(8.1)

The dual of the compact symmetric space of type AI, the noncompact symmetric space of type AI, is G/K = GL(n,R)/O(n), represented by the set S_n of all symmetric positive definite matrices. The tangent space p at the identity of S_n is the set of all real symmetric matrices. The Gaussian measure on p is, for p ∈ p, exp(−tr(p^Tp)/2)dp, where dp is the Euclidean measure on p. From (2.7) using (8.1), β = 1, we obtain (integrating out dk)

∝ ∏_{1≤j<k≤n} |λ_j − λ_k| exp(−∑_j λ_j²/2) ∏_j dλ_j,

for the eigenvalues λ_j = h_j of p. This is the joint density of the GOE.
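The GOE recipe can be sketched in a few lines (a minimal sketch under the convention exp(−tr(p^Tp)/2), i.e., diagonal entries of variance 1 and off-diagonal entries of variance 1/2):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
G = rng.standard_normal((n, n))

# symmetrize: diagonal ~ N(0, 1), off-diagonal ~ N(0, 1/2),
# matching the density exp(-tr(p^T p)/2) on real symmetric matrices
p = (G + G.T) / 2
goe_eigs = np.linalg.eigvalsh(p)   # a GOE eigenvalue sample (ascending)
assert goe_eigs.shape == (n,)
```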

The noncompact symmetric space of type A is G/K = GL(n,C)/U(n), represented by H_n, the set of all Hermitian positive definite matrices. The tangent space p at the identity of H_n is the set of all complex Hermitian matrices. The Gaussian measure on p is, for p ∈ p, exp(−tr(p^Hp)/2)dp, where dp is the (real) Euclidean measure on p. From (2.7) using (8.1), β = 2, we obtain

∝ ∏_{1≤j<k≤n} |λ_j − λ_k|² exp(−∑_j λ_j²/2) ∏_j dλ_j,

for the eigenvalues λ_j = h_j of p. This is the joint density of the GUE.

The noncompact symmetric space of type AII is G/K = GL(n,H)/U(n,H). We use U(n,H) instead of Sp(n) to clearly indicate the quaternionic realization. G/K can be represented by the set QH_n of all quaternionic self-dual positive definite matrices. Again, the tangent space p at the identity is the set of all quaternionic self-dual matrices. The Gaussian measure on p is, for p ∈ p, exp(−tr(p^Dp)/2)dp, where dp is the (real) Euclidean measure on p. From (2.7) using (8.1), β = 4, we obtain the corresponding joint density for the eigenvalues λ_j = h_j of p. This is the joint density of the GSE.

The joint probability density of the Laguerre ensemble is (β = 1, 2, 4)

∝ ∏_{1≤j<k≤n} |λ_j − λ_k|^β ∏_{j=1}^{n} λ_j^{a} e^{−βλ_j/2}.

Laguerre ensembles β = 1, 2, 4 arise from Theorem 2.7 applied to noncompact symmetric spaces BDI, AIII, CII, DIII, BD, C, and CI. The last four cases of types DIII, BD, C, and CI are well-studied in Ref. 6, and we again omit these cases as discussed in Sec. VI. In particular, the first three symmetric spaces give the following Laguerre densities (β = 1, 2, 4 and pq):

as these λ_j values are the squared singular values of p × q i.i.d. Gaussian matrices. Equivalently, the eigenvalues of the matrix A^†A ∈ F^{q×q} are frequently used for sampling purposes, where † is the conjugate transpose. The tangent spaces of noncompact symmetric spaces of the types BDI, AIII, and CII are

(9.1)

and a choice of maximal Abelian subalgebra a is the set of such matrices with X a (nonsquare) diagonal matrix with diagonal elements h_1, …, h_q. The KAK decomposition G = KAK of the noncompact symmetric spaces BDI, AIII, and CII is the hyperbolic CS decomposition (HCSD).71,72 The decomposition Ψ: (k, a) ↦ kak^{−1} of p is the p × q SVD of the upper right p × q corner. The restricted roots are the following (β = 1, 2, 4):

(9.2)

The noncompact symmetric space of type BDI is G/K = O(p, q)/(O(p) × O(q)). The tangent space p (9.1) has the Gaussian measure given by an i.i.d. Gaussian distribution endowed on the elements of X. For M ∈ p, it is exp(−tr(M^TM))dp. From (2.7) using (9.2), β = 1, we obtain

with the change of variables λ_j = h_j². Thus, the values λ_1, …, λ_q are the squared singular values of the upper right corner of M. The obtained measure is the joint density of the β = 1 Laguerre ensemble.
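The β = 1 Laguerre recipe amounts to squaring the singular values of a Gaussian corner block (a minimal sketch; the dimensions p, q below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
p_dim, q_dim = 7, 4                        # p >= q
X = rng.standard_normal((p_dim, q_dim))    # upper right corner of M in p

# squared singular values of X = eigenvalues of X^T X: a beta = 1 Laguerre sample
lam = np.sort(np.linalg.svd(X, compute_uv=False) ** 2)
assert lam.shape == (q_dim,) and np.all(lam >= 0)
```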

The noncompact symmetric space of type AIII is G/K = U(p, q)/(U(p) × U(q)). The tangent space (9.1) has the Gaussian measure given by an i.i.d. complex Gaussian distribution endowed on the elements of X. For M ∈ p, it is exp(−tr(M^HM))dp. From (2.7) using (9.2), β = 2, we obtain

with the change of variables λ_j = h_j². Again, the values λ_1, …, λ_q are the squared singular values of the upper right corner of M. The obtained measure is the joint density of the β = 2 Laguerre ensemble.

The noncompact symmetric space of type CII is G/K = U(p,q,H)/(U(p,H) × U(q,H)). The tangent space (9.1) has the Gaussian measure given by an i.i.d. quaternionic Gaussian distribution endowed on the elements of X. For M ∈ p, it is exp(−tr(M^DM))dp. From (2.7) using (9.2), β = 4, we obtain

with the change of variables λ_j = h_j². The values λ_1, …, λ_q are the squared singular values of the upper right corner of M. The obtained measure is the joint density of the β = 4 Laguerre ensemble.

We thank Martin Zirnbauer for the lengthy email thread from 2001, where he patiently explained which random matrix ensembles seemed to be covered by symmetric spaces. We thank Eduardo Dueñez for another lengthy email thread back in 2013. We thank Pavel Etingof for suggesting the K1AK2 decomposition and pointing us to key references, Bernie Wang for so very much, and the Fall 2020 Random Matrix Theory class (MIT 18.338) for valuable suggestions. We also thank Sigurður Helgason for lively discussions by email. We acknowledge NSF Grant Nos. OAC-1835443, OAC-2103804, SII-2029670, ECCS-2029670, and PHY-2021825 for financial support.

The authors have no conflicts to disclose.

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

1. F. J. Dyson, "Statistical theory of the energy levels of complex systems. I," J. Math. Phys. 3(1), 140–156 (1962).
2. F. J. Dyson, "Statistical theory of the energy levels of complex systems. II," J. Math. Phys. 3(1), 157–165 (1962).
3. F. J. Dyson, "Statistical theory of the energy levels of complex systems. III," J. Math. Phys. 3(1), 166–175 (1962).
4. F. J. Dyson, "The threefold way. Algebraic structure of symmetry groups and ensembles in quantum mechanics," J. Math. Phys. 3(6), 1199–1215 (1962).
5. F. J. Dyson, "Correlations between eigenvalues of a random matrix," Commun. Math. Phys. 19(3), 235–250 (1970).
6. A. Altland and M. R. Zirnbauer, "Nonstandard symmetry classes in mesoscopic normal-superconducting hybrid structures," Phys. Rev. B 55(2), 1142 (1997).
7. M. R. Zirnbauer, "Riemannian symmetric superspaces and their origin in random-matrix theory," J. Math. Phys. 37(10), 4986–5018 (1996).
8. M. Caselle, "A new classification scheme for random matrix theories," arXiv:cond-mat/9610017 (1996).
9. D. A. Ivanov, "Random-matrix ensembles in p-wave vortices," in Vortices in Unconventional Superconductors and Superfluids (Springer, 2002), pp. 253–265.
10. N. Katz and P. Sarnak, Random Matrices, Frobenius Eigenvalues, and Monodromy (American Mathematical Society, 1999), Vol. 45.
11. N. Katz and P. Sarnak, "Zeroes of zeta functions and symmetry," Bull. Am. Math. Soc. 36(1), 1–26 (1999).
12. E. Dueñez, "Random matrix ensembles associated to compact symmetric spaces," Ph.D. thesis, Princeton University, 2001.
13. E. Dueñez, "Random matrix ensembles associated to compact symmetric spaces," Commun. Math. Phys. 244(1), 29–61 (2004).
14. P. J. Forrester, "Random matrices, log-gases and the Calogero-Sutherland model," in Quantum Many-Body Problems and Representation Theory (Mathematical Society of Japan, 1998), pp. 97–181.
15. P. J. Forrester, Log-Gases and Random Matrices (Princeton University Press, 2010).
16. G. W. Anderson, A. Guionnet, and O. Zeitouni, An Introduction to Random Matrices (Cambridge University Press, 2010), Vol. 118.
17. A. Edelman and N. R. Rao, "Random matrix theory," Acta Numer. 14, 233–297 (2005).
18. C. C. Paige and M. A. Saunders, "Towards a generalized singular value decomposition," SIAM J. Numer. Anal. 18(3), 398–405 (1981).
19. C. F. Van Loan, "Generalizing the singular value decomposition," SIAM J. Numer. Anal. 13(1), 76–83 (1976).
20. A. Edelman and B. D. Sutton, "The beta-Jacobi matrix model, the CS decomposition, and generalized singular value problems," Found. Comput. Math. 8(2), 259–285 (2008).
21. A. Edelman and Y. Wang, "Random hyperplanes, generalized singular values & 'what's my β?'," in 2018 IEEE Statistical Signal Processing Workshop (SSP) (IEEE, 2018), pp. 458–462.
22. M. Flensted-Jensen, "Spherical functions on a real semisimple Lie group. A method of reduction to the complex case," J. Funct. Anal. 30(1), 106–146 (1978).
23. B. Hoogenboom, The Generalized Cartan Decomposition for a Compact Lie Group (Stichting Mathematisch Centrum, Zuivere Wiskunde, 1983).
24. A. Edelman and Y. Wang, "The GSVD: Where are the ellipses?, matrix trigonometry, and more," SIAM J. Matrix Anal. Appl. 41(4), 1826–1856 (2020).
25. Alternatively, one can imagine the partial format of the CS decomposition. This is also equivalent to the bi-Stiefel decomposition with another quotient by the orthogonal group on the right.
26. I. Dumitriu and A. Edelman, "Matrix models for beta ensembles," J. Math. Phys. 43(11), 5830–5847 (2002).
27. R. Killip and I. Nenciu, "Matrix models for circular ensembles," Int. Math. Res. Not. 2004(50), 2665–2701 (2004).
28. S. Helgason, Groups & Geometric Analysis: Radon Transforms, Invariant Differential Operators and Spherical Functions: Volume 1 (Academic Press, 1984).
29. E. Cartan, "Sur une classe remarquable d'espaces de Riemann," Bull. Soc. Math. Fr. 54, 214–264 (1926).
30. E. Cartan, "Sur certaines formes Riemanniennes remarquables des géométries à groupe fondamental simple," Ann. Sci. Ec. Norm. Super. 44, 345–467 (1927).
31. E. Cartan, "Sur une classe remarquable d'espaces de Riemann. II," Bull. Soc. Math. Fr. 55, 114–134 (1927).
32. E. Cartan, "Sur la détermination d'un système orthogonal complet dans un espace de Riemann symétrique clos," Rend. Circolo Mat. Palermo 53(1), 217–252 (1929).
33. S. Helgason, Differential Geometry, Lie Groups, and Symmetric Spaces (Academic Press, 1978).
34. J. Segel, Recountings: Conversations with MIT Mathematicians (CRC Press, 2009), http://www-math.mit.edu/helgason/helgason_interview.pdf.
35. A. Hurwitz, "Ueber die Erzeugung der Invarianten durch Integration," in Mathematische Werke (Springer, 1963), pp. 546–564.
36. H. Weyl, The Classical Groups: Their Invariants and Representations (Princeton University Press, 1946), Vol. 45.
37. In Ref. 73, Mehta credits Hsu42 for the GOE. In fact, Ref. 42 has the Jacobian for the symmetric real eigenvalue problem and indeed works with AAᵀ where A = randn(m,n), but does not work with A + Aᵀ. No doubt Hsu could have instantly written down the GOE distribution had he only been asked.
38. E. P. Wigner, "Characteristic vectors of bordered matrices with infinite dimensions," Ann. Math. 62, 548–564 (1955).
39. E. P. Wigner, "On the distribution of the roots of certain symmetric matrices," Ann. Math. 67, 325–327 (1958).
40. R. A. Fisher, "The sampling distribution of some statistics obtained from non-linear equations," Ann. Eugen. 9(3), 238–249 (1939).
41. S. N. Roy, "p-Statistics or some generalisations in analysis of variance appropriate to multivariate problems," Sankhyā 4, 381–396 (1939).
42. P. L. Hsu, "On the distribution of roots of certain determinantal equations," Ann. Eugen. 9(3), 250–258 (1939).
43. H. Leff, "Statistical theory of energy-level spacing distributions for complex spectra," Ph.D. thesis, University of Iowa, 1963.
44. M. Flensted-Jensen, "Discrete series for semisimple symmetric spaces," Ann. Math. 111, 253–311 (1980).
45. T. Matsuki, "Double coset decompositions of algebraic groups arising from two involutions I," J. Algebra 175(3), 865–925 (1995).
46. T. Matsuki, "Double coset decompositions of reductive Lie groups arising from two involutions," J. Algebra 197(1), 49–91 (1997).
47. T. Matsuki, "Classification of two involutions on compact semisimple Lie groups and root systems," J. Lie Theory 12(1), 41–68 (2002).
48. T. Kobayashi, "A generalized Cartan decomposition for the double coset space (U(n1) × U(n2) × U(n3))\U(n)/(U(p) × U(q))," J. Math. Soc. Jpn. 59(3), 669–691 (2007).
49. B. Hoogenboom, Intertwining Functions on Compact Lie Groups, I (Stichting Mathematisch Centrum, Zuivere Wiskunde, 1983).
50. A. T. James and A. G. Constantine, "Generalized Jacobi polynomials as spherical functions of the Grassmann manifold," Proc. London Math. Soc. s3-29(1), 174–192 (1974).
51. D. Bump, Lie Groups (Springer, 2004).
52. R. Gilmore, Lie Groups, Lie Algebras, and Some of Their Applications (Courier Corporation, 2012).
53. A. W. Knapp, Lie Groups Beyond an Introduction (Springer Science & Business Media, 2013), Vol. 140.
54. A. A. Kirillov, Representation Theory and Noncommutative Harmonic Analysis II: Homogeneous Spaces, Representations and Special Functions (Springer, 1995).
55. A. W. Knapp, Representation Theory of Semisimple Groups: An Overview Based on Examples (Princeton University Press, 2001), Vol. 36.
56. R. Hermann, "Variational completeness for compact symmetric spaces," Proc. Am. Math. Soc. 11(4), 544–546 (1960).
57. A. Kollross, "A classification of hyperpolar and cohomogeneity one actions," Trans. Am. Math. Soc. 354(2), 571–612 (2002).
58. M. R. Zirnbauer and F. D. M. Haldane, "Single-particle Green's functions of the Calogero-Sutherland model at couplings λ = 1/2, 1, and 2," Phys. Rev. B 52(12), 8729 (1995).
59. M. Caselle and U. Magnea, "Random matrix theory and symmetric spaces," Phys. Rep. 394(2–3), 41–156 (2004).
60. The actual development of the Jacobians (2.2) and (2.3) happened the other way around. In Ref. 61, Helgason credits Cartan32 for the derivation of these Jacobians, which at the time were computed only for symmetric spaces. The KAK decomposition was discovered later, in the 1950s, and the Jacobians extend identically from the decomposition of G/K to the decomposition of G.
61. S. Helgason, Differential Geometry and Symmetric Spaces (Academic Press, 1962).
62. A. Terras, Harmonic Analysis on Symmetric Spaces—Higher Rank Spaces, Positive Definite Matrix Space and Generalizations (Springer, 2016).
63. J. An, Z. Wang, and K. Yan, "A generalization of random matrix ensemble, I: General theory," Pacific J. Math. 228(1), 1–17 (2006).
64. M. L. Mehta, Random Matrices (Elsevier, 2004).
65. S. Heuveline, S. Said, and C. Mostajeran, "Gaussian distributions on Riemannian symmetric spaces, random matrices, and planar Feynman diagrams," arXiv:2106.08953 (2021).
66. R. Onn, A. O. Steinhardt, and A. Bojanczyk, "The hyperbolic singular value decomposition and applications," in Proceedings of the 32nd Midwest Symposium on Circuits and Systems (IEEE, 1989), pp. 575–577.
67. H. Führ and Z. Rzeszotnik, "A note on factoring unitary matrices," Linear Algebra Appl. 547, 32–44 (2018).
68. G. Blower, Random Matrices: High Dimensional Phenomena (Cambridge University Press, 2009), Vol. 367.
69. C. Davis and W. M. Kahan, "Some new bounds on perturbation of subspaces," Bull. Am. Math. Soc. 75(4), 863–868 (1969).
70. C. Davis and W. M. Kahan, "The rotation of eigenvectors by a perturbation. III," SIAM J. Numer. Anal. 7(1), 1–46 (1970).
71. E. J. Grimme, D. C. Sorensen, and P. Van Dooren, "Model reduction of state space systems via an implicitly restarted Lanczos method," Numer. Algorithms 12(1), 1–31 (1996).
72. N. J. Higham, "J-orthogonal matrices: Properties and generation," SIAM Rev. 45(3), 504–519 (2003).
73. M. L. Mehta, "On the statistical properties of the level-spacings in nuclear spectra," Nucl. Phys. 18, 395–419 (1960).