Building on work of Müger and Tuset [arXiv:2210.06582 (2022)], we reduce the Mathieu conjecture, formulated by Mathieu in 1997 [Algebre Non Commutative, Groupes Quantiques et Invariants, edited by Alex, J. and Cauchon, G. (Société Mathématique de France, Reims, 1997), Vol. 2, pp. 263–279], for SU(N) to a simpler conjecture in purely abelian terms. We sketch a similar reduction for SO(N). The proofs rely on Euler-style parametrizations of these groups, which we discuss and prove.

In a 1997 paper, Mathieu conjectured the following statement:

Conjecture 1.1

(Ref. 8). Let G be a compact connected Lie group. If f, h are finite-type functions such that $\int_G f^P\,dg = 0$ for all $P\in\mathbb{N}$, then $\int_G f^P h\,dg = 0$ for all large enough P.

In 1998, only one year after the publication of Mathieu’s paper, Duistermaat and van der Kallen4 proved Mathieu’s conjecture for all abelian connected compact groups. While the conjecture has still not been proven for any non-abelian group, some progress has been made. Dings and Koelink3 approached the conjecture for SU(2) by expressing the finite-type functions in terms of explicit matrix coefficients. Heavily influenced by this, Müger and Tuset9 reduced the Mathieu conjecture for SU(2) to a conjecture about certain Laurent polynomials.
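To make the abelian case concrete, the following Python sketch (a toy illustration only, not taken from Refs. 4 or 9; all names are ours) checks the Laurent-polynomial phenomenon on the circle: if the constant term of $f^P$ vanishes for every power P, then the constant term of $f^P h$ vanishes for all large P.

```python
from collections import defaultdict

# Laurent polynomials in one variable represented as {exponent: coefficient}.
def mul(a, b):
    c = defaultdict(complex)
    for m, x in a.items():
        for n, y in b.items():
            c[m + n] += x * y
    return dict(c)

def const_term(a):
    return a.get(0, 0)

f = {1: 1.0, 2: 1.0}     # f(z) = z + z^2; 0 is not in the convex hull of its exponents {1, 2}
h = {-3: 1.0}            # h(z) = z^{-3}
power = {0: 1.0}         # will hold f^P
for P in range(1, 9):
    power = mul(power, f)
    print(P, const_term(power), const_term(mul(power, h)))
# The constant term of f^P is 0 for every P, and that of f^P * h vanishes once P >= 4.
```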

The goal of the present paper is to generalize the paper by Müger and Tuset to the compact matrix groups SU(N) and SO(N), where N ≥ 2. A key ingredient to achieving this will be a generalization of the Euler decomposition. The Euler decomposition for SU(2) has been known for some time but is mostly used by physicists under the name of Euler angles. This is no different in the case of SU(N). Several (similar but not equal) versions of the Euler decomposition for SU(N) exist, see for example Bertini et al.,1 Cacciatori et al.,2 or Tilma and Sudarshan.13 In a similar way there exist several decompositions of SO(N), see for example Refs. 6, 10, and 14.

In our paper, we will reduce the Mathieu conjecture to a conjecture similar to that of Müger and Tuset.9 We start by looking at the matrix coefficients of the generalized Euler decomposition on SU(N) and SO(N), and we find that any finite-type function can be described by a function on $\mathbb{C}^n\times\mathbb{R}^k$. To be more specific, any finite-type function reduces to a function $f:\mathbb{C}^n\times\mathbb{R}^k\to\mathbb{C}$ which can be written as $f(z,x)=\sum_{m} c_m(x)\,z^m$, where $m=(m_1,\ldots,m_n)$ is a multi-index with $m_i\in\sum_{j=1}^{N-1}\frac{1}{j}\mathbb{Z}$ for each $i$, and $c_m(x)$ is a polynomial in $x_1,\ldots,x_k$ and $\sqrt{1-x_1^2},\ldots,\sqrt{1-x_k^2}$. Assuming a conjecture about such functions, we prove the Mathieu conjecture for SU(N) and SO(N). The proof uses the explicit description of the Euler decomposition on SU(N) and SO(N) and the properties of the Haar measure in these parametrizations. In Sec. II we will focus on the group SU(N), while in Sec. III the group SO(N) will be considered. The final part of the paper is dedicated to proving the generalized Euler decomposition we used throughout this paper, with the corresponding explicit description of the Haar measure in this parametrization.

In this paper we will reduce Mathieu’s conjecture for SU(N) and SO(N) with N ≥ 2. We start by recalling Mathieu’s conjecture. To do so, we first introduce the notion of a finite-type function:

Definition 2.1.
Let G be a compact Lie group. A function $f:G\to\mathbb{C}$ is called a finite-type function if it can be written as a finite sum of matrix components of irreducible continuous representations, i.e., if it is of the form
$$f(g)=\sum_{j=1}^{n}\operatorname{Tr}\!\big(a_j\,\pi_j(g)\big),$$
where $(\pi_j, V_j)$ is an irreducible continuous representation of G, and $a_j\in\operatorname{End}(V_j)$.

Conjecture 2.2

(The Mathieu conjecture8). Let G be a compact connected Lie group. If f, h are finite-type functions such that $\int_G f^P\,dg = 0$ for all $P\in\mathbb{N}$, then $\int_G f^P h\,dg = 0$ for all large enough P.

In this section we will focus on SU(N). We will base our parametrization and Haar measure on Refs. 12 and 13. For completeness, we have included an Appendix dedicated to proving the parametrization.

For simplicity, we will define the generators of $\mathfrak{su}(N)$ for $N\in\mathbb{N}$. Let j = 1, 2, …, N − 1 and k = 1, 2, …, 2j and define the matrices $\lambda_j\in\mathfrak{su}(N)$ in the following way
where $\mathbb{1}_j$ is the j × j identity matrix. (In most physics papers the matrices $\{i\lambda_j\}_j$ are called Gell-Mann matrices, see e.g., Refs. 1, 12, and 13.) The matrices $\lambda_1,\ldots,\lambda_{N^2-1}$ span $\mathfrak{su}(N)$. For example, the first eight matrices are given by
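Since the displayed matrices are not reproduced here, the following Python sketch builds the standard generalized Gell-Mann matrices; the ordering and normalization of the $\lambda_j$ used in this paper may differ, so this is only meant to illustrate that one obtains $N^2-1$ traceless Hermitian matrices whose multiples by $i$ span $\mathfrak{su}(N)$.

```python
import numpy as np

def gell_mann(N):
    """Hermitian generalized Gell-Mann matrices; the matrices i*h span su(N)."""
    def E(j, k):                                       # elementary matrix with a 1 at (j, k)
        m = np.zeros((N, N), dtype=complex)
        m[j, k] = 1.0
        return m
    mats = []
    for k in range(N):
        for j in range(k):
            mats.append(E(j, k) + E(k, j))             # symmetric off-diagonal
            mats.append(-1j * (E(j, k) - E(k, j)))     # antisymmetric off-diagonal
    for l in range(1, N):
        d = sum(E(m, m) for m in range(l)) - l * E(l, l)
        mats.append(np.sqrt(2.0 / (l * (l + 1))) * d)  # traceless diagonal
    return mats

basis = gell_mann(3)
assert len(basis) == 3 ** 2 - 1
assert all(np.allclose(h, h.conj().T) and abs(np.trace(h)) < 1e-12 for h in basis)
```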
With this basis of su(N) we have the following lemma:

Lemma 2.3
(Generalized Euler angles). Let N ≥ 2. Define inductively the mapping $F_N: ([0,\pi]\times[0,2\pi]^{N-2})\times([0,\pi]\times[0,2\pi]^{N-3})\times\cdots\times([0,\pi]\times[0,2\pi])\times[0,\pi]\times[0,\tfrac{\pi}{2}]^{\frac{N(N-1)}{2}}\times[0,2\pi]\times\cdots\times[0,\tfrac{2\pi}{N-1}]\to SU(N)$ by $F_1\equiv 1$ and
(2.1) $F_N(\phi_1,\ldots,\phi_{\frac{N(N-1)}{2}},\psi_1,\ldots,\psi_{\frac{N(N-1)}{2}},\omega_1,\ldots,\omega_{N-1}) = \Big[\prod_{2\le k\le N}A^{(k)}(\phi_{k-1},\psi_{k-1})\Big]\begin{pmatrix}F_{N-1}(\phi_N,\ldots,\omega_{N-2}) & 0\\ 0 & 1\end{pmatrix} e^{\lambda_{N^2-1}\omega_{N-1}},$
where $A^{(k)}(x,y)\equiv e^{\lambda_3 x}\,e^{\lambda_{(k-1)^2+1} y}$, and $\psi_j\in[0,\tfrac{\pi}{2}]$, $\omega_j\in[0,\tfrac{2\pi}{j}]$ for all j. Here we denoted the product as
This mapping is surjective. Moreover it is a diffeomorphism on the interior of the hypercube.

Remark 2.4.
This lemma tells us that we have a parametrization of SU(N) up to a set of measure zero. In the case of SU(2), this reduces to the Euler angles parametrization, which is given by $F_2(\phi_1,\psi_1,\omega_1)=e^{\lambda_3\phi_1}\,e^{\lambda_2\psi_1}\,e^{\lambda_3\omega_1}$.
To give further motivation for this parametrization, we can define a Cartan involution θ by
We see that θ ≡ 1 on the subalgebra $\mathfrak{k}$ spanned by $\lambda_1,\ldots,\lambda_{(N-1)^2-1}$ and $\lambda_{N^2-1}$, and θ ≡ −1 on the vector space $\mathfrak{p}\equiv\operatorname{span}_{\mathbb{R}}(\lambda_{(N-1)^2},\ldots,\lambda_{N^2-2})$. In addition, note that $\mathfrak{k}\cong\mathfrak{su}(N-1)\oplus\mathfrak{u}(1)$. Since SU(N) is connected for all N, the corresponding connected Lie group K with $\operatorname{Lie}(K)=\mathfrak{k}$ can be seen as
We choose the maximal abelian subalgebra $\mathfrak{a}\subset\mathfrak{p}$ as $\mathfrak{a}=\mathbb{R}\lambda_{(N-1)^2+1}$. The KAK decomposition7 then gives
Our lemma states that, up to a measure zero set, there exists a subset $L\subset K$ such that SU(N) is diffeomorphic to $LAK$. We also note that by construction SU(N)/K is a symmetric space and is diffeomorphic to the complex projective space $\mathbb{CP}^{N-1}$.

Lemma 2.5.
Let N ≥ 2 and FN be the Euler parametrization of SU(N). The Haar measure dgSU(N) is then given inductively by
and
(2.2)
where $C_n\equiv\frac{(n-1)!\,(n-1)}{2\pi^n}$ for all integers n ≥ 2.
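As a sanity check of the N = 2 case, the following Python sketch samples SU(2) according to the Euler parametrization with density proportional to $\cos(\psi_1)\sin(\psi_1)$ (the density that appears for N = 2 in the Appendix) and verifies left invariance numerically. The conventions $\lambda_3 = i\sigma_3$, $\lambda_2 = i\sigma_2$ are an assumption on our part.

```python
import numpy as np

rng = np.random.default_rng(0)

def F2(phi, psi, omega):
    """SU(2) Euler map, assuming lambda_3 = i*sigma_3 and lambda_2 = i*sigma_2."""
    d = lambda t: np.diag([np.exp(1j * t), np.exp(-1j * t)])       # exp(lambda_3 t)
    r = lambda t: np.array([[np.cos(t), np.sin(t)],
                            [-np.sin(t), np.cos(t)]])              # exp(lambda_2 t)
    return d(phi) @ r(psi) @ d(omega)

def haar_sample(n):
    phi = rng.uniform(0, np.pi, n)
    omega = rng.uniform(0, 2 * np.pi, n)
    psi = np.arcsin(np.sqrt(rng.uniform(0, 1, n)))   # density ~ cos(psi) sin(psi) on [0, pi/2]
    return [F2(a, b, c) for a, b, c in zip(phi, psi, omega)]

f = lambda g: abs(g[0, 0]) ** 2                      # a test matrix coefficient
x = F2(0.3, 0.7, 1.1)                                # an arbitrary fixed group element
gs = haar_sample(200_000)
print(np.mean([f(g) for g in gs]))                   # ~ 0.5 (Schur orthogonality)
print(np.mean([f(x @ g) for g in gs]))               # ~ 0.5 (left invariance)
```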

As mentioned, Lemmas 2.3 and 2.5 are proved in the Appendix. With these lemmas, we can start the discussion of Mathieu’s conjecture. Note that any finite-type function on SU(N) is a sum of products of matrix coefficients of the defining representation, since the matrix coefficients of the irreducible representations of SU(N) are polynomials in those of the defining representation. By the parametrization given in Eq. (2.1), we see that these products consist of (powers of) $\sin(\psi_j)$, $\cos(\psi_k)$, $e^{i\phi_l}$ and $e^{i\omega_m}$. Therefore any finite-type function h can be written as
(2.3)
where $g_{SU(N-1)}\equiv F_{N-1}(\phi_N,\ldots,\phi_{\frac{N(N-1)}{2}},\psi_N,\ldots,\psi_{\frac{N(N-1)}{2}},\omega_1,\ldots,\omega_{N-2})$ is the SU(N − 1) component of $g = F_N(\phi_1,\ldots,\omega_{N-1})$ as in Lemma 2.3, and $(h_{SU(N-1)})_{ij}$ is a family of finite-type functions on SU(N − 1). Also $k_{ij}^p, l_{ij}^p\in\mathbb{Z}$, $m_{ij}^p\in\mathbb{N}$ and $n_{ij}^p\in\{0,1\}$. We can achieve $n_{ij}^p\in\{0,1\}$ by using the equality $\cos^2(\psi_j)+\sin^2(\psi_j)=1$ repeatedly. Note that we sum over both i and j. The sum over i ensures that all possible combinations of different terms appear, while the sum over j allows for different powers of each term. For example, in SU(2), we have the parametrization of the form
so any finite-type function is of the form
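For concreteness (and since the displayed SU(2) formulas are not reproduced above), under the assumed convention $\lambda_3 = i\sigma_3$, $\lambda_2 = i\sigma_2$ the SU(2) parametrization reads
$$F_2(\phi_1,\psi_1,\omega_1)=e^{\lambda_3\phi_1}e^{\lambda_2\psi_1}e^{\lambda_3\omega_1}=\begin{pmatrix} e^{i(\phi_1+\omega_1)}\cos\psi_1 & e^{i(\phi_1-\omega_1)}\sin\psi_1\\ -e^{-i(\phi_1-\omega_1)}\sin\psi_1 & e^{-i(\phi_1+\omega_1)}\cos\psi_1\end{pmatrix},$$
so a product of matrix coefficients such as $u_{11}^2\,\overline{u_{12}}$ equals $e^{i(\phi_1+3\omega_1)}\cos^2(\psi_1)\sin(\psi_1)$, i.e., a single term $e^{ik\phi_1}e^{il\omega_1}\cos^{m}(\psi_1)\sin^{n}(\psi_1)$ of the type appearing in Eq. (2.3).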

Remark 2.6.

We note that if we restrict any finite-type function h to a closed subgroup H of SU(N), then $h|_H$ is also a finite-type function. This can easily be seen from the fact that any irreducible representation (π, V) of SU(N) is finite dimensional, hence $(\pi|_H, V)$ splits into finitely many irreducible representations $(\pi_{H,i}, V_i)$ of H, i.e., $\pi|_H\cong\bigoplus_{i=1}^{M}\pi_{H,i}$. It is then immediate that $h|_H$ is again a finite-type function.

Lemma 2.7.
Let h be a finite-type function on SU(N) as in Eq. (2.3), and N ≥ 2. Then for any $P\in\mathbb{N}$ we have
(2.4)
Here $J_{SU(N)}$ is defined recursively by $J_{SU(1)}\equiv 1$ and, for 2 ≤ n ≤ N, by
where $C_n$ is as in Lemma 2.5, and where $\widetilde{h_{SU(N)}}$ is defined recursively by $\widetilde{h_{SU(1)}}\equiv 1$ and by
(2.5)
Here $S^*\coloneqq S^1\setminus\{1\}$ is chosen to have the function $z_1^{n_1}$ single-valued.

The main ingredients of the Proof of Lemma 2.7 are captured in the following lemma:

Lemma 2.8.
Let $p,q,k,l\in\mathbb{N}_0$ and l > 0. Then
where $S^*\coloneqq S^1\setminus\{1\}$ is chosen such that $z^{\frac{k}{l}}$ is analytic on $\mathbb{C}\setminus\mathbb{R}_+$. In addition

Proof.

Both equalities can be found by using a substitution. The former integral is obtained by setting $z = e^{i\phi}$ and the latter by $x = \sin(\phi)$.□
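The following Python sketch numerically checks two representative identities of this type (our own toy instances, not the precise displayed equalities of Lemma 2.8): the substitution $z=e^{i\phi}$ turns angular integrals into integrals of (fractional) powers of z, and $x=\sin(\phi)$ turns trigonometric integrals into algebraic ones.

```python
import numpy as np
from scipy.integrate import quad

k, l = 3, 2
p, q = 4, 5

# Angular integral of exp(i*(k/l)*phi) over [0, 2*pi] ...
lhs = quad(lambda t: np.cos(k / l * t), 0, 2 * np.pi)[0] \
      + 1j * quad(lambda t: np.sin(k / l * t), 0, 2 * np.pi)[0]
# ... versus the closed form obtained in the variable z = exp(i*phi), with z^{k/l}
# taken analytic off the positive real axis.
rhs = (np.exp(2j * np.pi * k / l) - 1) / (1j * k / l)
print(abs(lhs - rhs))          # ~ 0

# Trigonometric integral versus its image under the substitution x = sin(phi).
lhs2 = quad(lambda t: np.sin(t) ** q * np.cos(t) ** (2 * p + 1), 0, np.pi / 2)[0]
rhs2 = quad(lambda x: x ** q * (1 - x ** 2) ** p, 0, 1)[0]
print(abs(lhs2 - rhs2))        # ~ 0
```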

Proof of Lemma 2.7.
We use induction on N. The N = 2 case is already proven by Müger and Tuset.9 So assume that the proposition is true for N − 1. Consider any finite-type function h. Then we see that $h^P$ can be expanded by using the multinomial expansion twice:
Filling in the measure given by Eq. (2.2) gives
where we denoted $G=[0,\pi]\times[0,2\pi]^{N-2}\times[0,\tfrac{\pi}{2}]^{N-1}\times[0,\tfrac{2\pi}{N-1}]$, which are the intervals in which $\phi_1,\ldots,\phi_{N-1}$, $\psi_1,\ldots,\psi_{N-1}$ and $\omega_{N-1}$ lie, respectively. We note that the integrals over $\omega_{N-1}$ and $\phi_1$ are not over the interval [0, 2π] yet, hence we will make the substitution $\Omega_{N-1} = (N-1)\omega_{N-1}$ and $\Phi_1 = 2\phi_1$. Then $d\Omega_{N-1} = (N-1)\,d\omega_{N-1}$ and $d\Phi_1 = 2\,d\phi_1$. This allows us to make use of Lemma 2.8 to rewrite the integral as
where $X=[0,1]^{N-1}\times(S^*)^{N}$. We are now in a position to use the induction hypothesis, which reduces the integral over $h_{SU(N-1)}$ to the following:
Note that we can pull the factors βij back out, which gives
which is the desired result.□

In other words, we have translated the problem on the non-abelian group SU(N) to the simpler set $[0,1]^{\frac{N(N-1)}{2}}\times(S^*)^{\frac{N(N+1)}{2}-1}$. This is used to translate Mathieu’s conjecture to a complex analysis question in the case of SU(N).

Definition 2.9.
Let $k,l\in\mathbb{N}$ and $f:[0,1]^k\times(S^*)^l\to\mathbb{C}$. We say f is an SU(N)-admissible function if f can be written as
$$f(x,z)=\sum_{m} c_m(x)\,z^m,$$
where $m=(m_1,\ldots,m_l)$ is a multi-index with $m_i\in\sum_{j=1}^{N-1}\frac{1}{j}\mathbb{Z}$, and $c_m(x)\in\mathbb{C}[x_1,(1-x_1^2)^{1/2},\ldots,x_k,(1-x_k^2)^{1/2}]$ is a complex polynomial in the $x_i$ and $\sqrt{1-x_i^2}$. We call the collection of m for which $c_m\neq 0$ the spectrum of f, and it will be denoted by Sp(f).

It is clear that $\widetilde{h_{SU(N)}}$ is an SU(N)-admissible function, so we focus on this class of functions. Motivated by Ref. 9, we make the following conjecture:

Conjecture 2.10.
Let $f:[0,1]^{\frac{N(N-1)}{2}}\times(S^*)^{\frac{N(N+1)}{2}-1}\to\mathbb{C}$ be an SU(N)-admissible function. If
for all $P\in\mathbb{N}$, then 0 does not lie in the convex hull of Sp(f).
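The conclusion of the conjecture is a checkable condition: whether 0 lies in the convex hull of the (in the cases arising here, finitely many) exponent vectors. The following Python sketch, with a made-up spectrum and function names of our own, shows how such a test can be phrased as a linear-programming feasibility problem.

```python
import numpy as np
from fractions import Fraction
from scipy.optimize import linprog

# A hypothetical spectrum with fractional exponents, as allowed for SU(N)-admissible functions.
spectrum = [(Fraction(1, 2), Fraction(-1, 1)),
            (Fraction(1, 1), Fraction(1, 2)),
            (Fraction(-1, 2), Fraction(3, 2))]

def zero_in_convex_hull(points):
    """Is 0 a convex combination of the given points?  (LP feasibility test.)"""
    A = np.array(points, dtype=float).T                 # columns are the spectrum points
    n = A.shape[1]
    A_eq = np.vstack([A, np.ones((1, n))])              # sum_i t_i p_i = 0 and sum_i t_i = 1
    b_eq = np.concatenate([np.zeros(A.shape[0]), [1.0]])
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    return res.status == 0                              # feasible  <=>  0 in the convex hull

print(zero_in_convex_hull(spectrum))                    # False for this particular spectrum
```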

At first sight, this conjecture may seem to have little to do with Mathieu’s conjecture. However

Theorem 2.11.

Assume Conjecture 2.10 is true. Then Mathieu’s conjecture is true for SU(N).

Proof.
Let f, h be finite-type functions of SU(N). Then both are of the form of Eq. (2.3). Assume $\int_{SU(N)} f^P\,dg = 0$ for all $P\in\mathbb{N}$. By Lemma 2.7, this is equivalent to
where $\tilde f$ is defined as in Eq. (2.5). Applying our assumption gives that 0 does not lie in the convex hull of $\operatorname{Sp}(\tilde f)$. Let us write
(2.6)
where the subscript SU(N) indicates it is a finite-type function of SU(N), so that fSU(N−1) is a finite-type function of SU(N − 1). Note that by Lemma 2.7,
(2.7)
where the constants $k_{ij}^p, l_{ij}^q$ are as in Eq. (2.6).
We need to prove that $\int_{SU(N)} f^P h\,dg = 0$ for P large enough. Assume to the contrary that there exist infinitely many P such that $\int_{SU(N)} f^P h\,dg\neq 0$. The goal of the proof is to show that this implies $0\in\operatorname{Conv}(\operatorname{Sp}(\tilde f))$, taking the identity of Eq. (2.7) into account. Because of the linearity of the integral, it is enough to show this for h being a monomial. So let us write
Note that $h_{SU(N-1)}$ is a monomial finite-type function as well. If $\int_{SU(N)} f^P h\,dg\neq 0$, then there is at least one term over which the integral is non-zero. Going through the same calculations as in the previous proof, there is a set of integers $\{\beta_{ij}\}_{i,j}$ such that $\sum_{i,j}\beta_{ij} = P$ and such that
(2.8)
Our goal is to show that the arguments of all the exponential mappings in Eq. (2.8) are zero. To do this, we will make use of properties of the Haar measure. Note that dg is left- and right-invariant, meaning that $\int_{SU(N)} f^P(gy)h(gy)\,dg = \int_{SU(N)} f^P(g)h(g)\,dg = \int_{SU(N)} f^P(yg)h(yg)\,dg$ for any $y\in SU(N)$. This restricts the possible parameters. The rest of the proof will therefore consist of choosing convenient matrices $y\in SU(N)$ to obtain restrictions on these parameters, which will prove the proposition.
But before finding these matrices explicitly, we explain how a construction restricting the parameters $k_{ij}^1,\ldots,k_{ij}^{N-1}, l_{ij}^N$ can be continued to restrict the relevant parameters of $f_{SU(N-1)}$. Note that, due to the Euler parametrization, see Lemma 2.3, any $g\in SU(N)$ can be written as
where $u\in SU(N-1)$, $\xi = e^{\lambda_{N^2-1}\omega_{N-1}}$ for some $\omega_{N-1}\in\mathbb{R}$, and $x = \prod_{2\le k\le N} A^{(k)}(\phi_{k-1},\psi_{k-1})$. In the same way $dg_{SU(N)} = dg_K\cdot dg_{SU(N-1)}\cdot d\omega_{N-1}$ for some form $dg_K$, as can be seen in Lemma 2.3 and Lemma 2.5, respectively (for more details on $dg_K$ we refer to our Proof of Lemma 2.5 and Ref. 5). Specifically, $dg_{SU(N-1)}$ is a Haar measure itself, which means that
for any u′ ∈ SU(N − 1) by using properties of the Haar measure dgSU(N−1). Here we denoted G/K as the following space
Looking at which parameters change in Eq. (2.8) when changing u to u′u or uu′, we see that the following equation must hold
for any u′ ∈ SU(N − 1). This shows that any construction on SU(N) to restrict the parameters kij1, …, kijN−1, lijN can also be applied to SU(N − 1) and the parameters kijN,,kijN(N1)2,lij1,,lijN2 from those finite-type functions, yielding the same result. It is therefore enough to know what the restrictions of kij1, …, kijN−1, lijN are.
Now let us define
(2.9)
which is a k × k matrix, where 2 ≤ n ≤ k and $z\in\mathbb{R}$. Here the diagonal has n − 1 entries equal to $e^{iz}$ and k − n entries equal to 1 (the remaining entry is fixed by the determinant condition). Then $D_{k,n}(z)\in SU(k)$ for all n and z. Recall that by the properties of the Haar measure, the mapping $g\mapsto D_{N,2}(z)g$ leaves the measure invariant. That is to say, the map $L_{D_{N,2}(z)}: G\to G$, given by $L_{D_{N,2}(z)}g = D_{N,2}(z)g$, is measure preserving, i.e.
So note that if $g\in SU(N)$ we have
This means, by bijectivity of the Euler parametrization, that
which shows that the mapping $g\mapsto D_{N,2}(z)g$ is equivalent to sending $\phi_1\mapsto\phi_1+z$. Since $g\mapsto D_{N,2}(z)g$ leaves the Haar measure invariant, this means that sending $\phi_1\mapsto\phi_1+z$ for any $z\in\mathbb{R}$ should leave Eq. (2.8) invariant as well, which is to say the integral does not change if we replace $\phi_1$ with $\phi_1+z$. This can only be the case in Eq. (2.8) if
In the same way we see that the mapping $g\mapsto gD_{N,N}(z)$ is equivalent to sending $\omega_{N-1}\mapsto\omega_{N-1}+z$, which should leave the integral invariant. This can only lead to the same conclusion in Eq. (2.8) if
(2.10)
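Since the displayed definition (2.9) is referenced but not reproduced here, the following Python sketch constructs a matrix of this shape (we assume the one remaining diagonal entry is $e^{-i(n-1)z}$, which is forced by the determinant condition) and illustrates, for N = 2 and the conventions assumed earlier, that left multiplication by $D_{N,2}(z)$ indeed shifts $\phi_1$ by z.

```python
import numpy as np

def D(k, n, z):
    """k x k diagonal matrix with n-1 entries e^{iz}, one entry e^{-i(n-1)z}, and k-n ones."""
    diag = [np.exp(1j * z)] * (n - 1) + [np.exp(-1j * (n - 1) * z)] + [1.0] * (k - n)
    return np.diag(np.array(diag, dtype=complex))

def F2(phi, psi, omega):
    # SU(2) Euler map, assuming lambda_3 = i*sigma_3 and lambda_2 = i*sigma_2
    d = lambda t: np.diag([np.exp(1j * t), np.exp(-1j * t)])
    r = lambda t: np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    return d(phi) @ r(psi) @ d(omega)

M = D(4, 3, 0.4)
assert np.allclose(M.conj().T @ M, np.eye(4)) and np.isclose(np.linalg.det(M), 1.0)

z, phi, psi, omega = 0.7, 0.3, 0.5, 1.2
assert np.allclose(D(2, 2, z) @ F2(phi, psi, omega), F2(phi + z, psi, omega))
```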
As stated before, this construction can be applied to $f_{SU(N-1)}^P h_{SU(N-1)}$ as well, with the matrix $D_{N-1,n}(z)$. Setting n = 2 and n = N − 1 gives the equations
(2.11)
Next, we note that we can write the measure as a product of the Haar measure on SU(2) and other measures in the following way:
where $T(\psi_2,\ldots,\psi_{N-1}) = 2\pi^2 C_N\cos(\psi_{N-1})\sin^{2(N-1)-1}(\psi_{N-1})\prod_{j=2}^{N-2}\sin(\psi_j)\cos^{2j-1}(\psi_j)$. The Euler parametrization also allows us to write
This way we have found a subset isomorphic to SU(2) over which we integrate with the Haar measure associated with the group SU(2). This means that we can use the features of the Haar measure on SU(2), which is to say that for any function $H:SU(N)\to\mathbb{C}$ we have the equation
for any $A\in SU(2)$. In particular, taking $A=\begin{pmatrix} e^{iz} & 0\\ 0 & e^{-iz}\end{pmatrix}$ for any $z\in\mathbb{R}$ gives that the following map is also measure preserving:
Hence sending $\phi_2\mapsto\phi_2+z$ is also an invariance for all $z\in\mathbb{R}$. Looking back at Eq. (2.8), this can only be the case if
Finally, to get an equation for K3, …, KN−1, we see that
where the block matrix is an n × n matrix. In addition, if the block matrix is an m × m matrix with m > n, then we see that
Given that $D_{N,n}(z)$ is identical to $e^{iz}\mathbb{1}_{n-1}$ in the upper left (n − 1) × (n − 1) corner, it commutes with $e^{\lambda_3\phi_k}$ and $e^{\lambda_{(l-1)^2+1}\psi_{l-1}}$ for all k and l ≤ n − 1. This way, we see that
We note that $e^{nz\lambda_3}\in\begin{pmatrix} SU(N-1) & 0\\ 0 & 1\end{pmatrix}$, and by previous arguments the mapping $F_{N-1}(\phi_N,\ldots,\omega_{N-2})\mapsto e^{nz\lambda_3}F_{N-1}(\phi_N,\ldots,\omega_{N-2})$ is equivalent to $\phi_N\mapsto\phi_N - nz$. So in total we see that the mapping $g\mapsto D_{N,n}(z)g$ is equivalent to the mapping $(\phi_{N-1}, \phi_N, \omega_{N-1})\mapsto(\phi_{N-1} + zn,\ \phi_N - zn,\ \omega_{N-1} + z)$. Looking back at Eq. (2.8) this can only be invariant if
Combining this with Eqs. (2.10) and (2.11), we immediately find
(2.12)
Next, we consider $g\mapsto D_{N,N-1}(z)g$. We then see
Therefore we see that the mapping $g\mapsto D_{N,N-1}(z)g$ is equivalent to sending
Note that this transformation contains the mapping $g'\mapsto D_{N-1,2}(-z)D_{N-1,N-1}(z)g'$ where $g'\in SU(N-1)$. By the previous arguments on how to apply our strategy to SU(N − 1), this transformation has already been considered, and as a result we could set some parameters equal to 0. So we can ignore the transformation $g'\mapsto D_{N-1,2}(-z)D_{N-1,N-1}(z)g'$. In other words, the mapping $g\mapsto D_{N,N-1}(z)g$ is equivalent to the mappings
to see what kind of restrictions we can put on the parameters. Looking at Eq. (2.8), this can only happen if
Combining this with Eq. (2.12) we see that
This process can be repeated for $g\mapsto D_{N,m}(z)g$ with m = 2, …, N − 3 increasing, and in the end we find that
Combining everything, we thus see that
As discussed, this procedure can be continued on the remaining parameters of fSU(N−1), which gives in the end
Therefore, we see that
Thus we see that
As P → ∞, the left-hand side tends to 0. Hence there exists a sequence in the convex hull of $\operatorname{Sp}(\tilde f)$ that converges to 0. Since the convex hull is a closed set, this shows that $0\in\operatorname{Conv}(\operatorname{Sp}(\tilde f))$, which contradicts our assumption. Therefore it cannot be that $\int_{SU(N)} f^P h\,dg\neq 0$ for infinitely many P, and thus $\int_{SU(N)} f^P h\,dg = 0$ for large P, as was to be proven.□

Remark 2.12.

One could wonder whether the converse implication of Theorem 2.11 is true as well. It is still an open question whether this is the case.

Having considered the group SU(N), we employ a similar strategy for SO(N). Because of the similarity, we will leave most details to the reader. Since SO(N) ⊂ SU(N) is a closed subgroup, we can describe $\mathfrak{so}(N)$ as a subalgebra of $\mathfrak{su}(N)$. We note that the set
is a basis of $\mathfrak{so}(N)$. For the rest of this paper we will use this basis to describe $\mathfrak{so}(N)$, and with it we can describe the generalized Euler angles for SO(N):

Lemma 3.1
[Generalized Euler angles on SO(N)]. Let N ≥ 2. Define inductively the mapping $\Phi_N: ([0,2\pi]\times[0,\pi]^{N-2})\times([0,2\pi]\times[0,\pi]^{N-3})\times\cdots\times([0,2\pi]\times[0,\pi])\times[0,2\pi]\to SO(N)$ by $\Phi_1\equiv 1$ and
(3.1)
where we denoted the product as
This mapping is surjective. Moreover it is a diffeomorphism on the interior of the hypercube.
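As an illustration (the paper's own basis of $\mathfrak{so}(N)$, given before Lemma 3.1, may be ordered or normalized differently), the following Python sketch constructs the standard skew-symmetric generators and checks that a $\Phi_N$-like product of their one-parameter rotations lands in SO(N).

```python
import numpy as np
from scipy.linalg import expm

def so_basis(N):
    """Skew-symmetric matrices E_{jk} - E_{kj}, j < k: a basis of so(N)."""
    mats = []
    for k in range(N):
        for j in range(k):
            m = np.zeros((N, N))
            m[j, k], m[k, j] = 1.0, -1.0
            mats.append(m)
    return mats

N = 4
B = so_basis(N)
assert len(B) == N * (N - 1) // 2

rng = np.random.default_rng(3)
g = np.eye(N)
for m in B:                                   # a product of one-parameter rotations
    g = g @ expm(rng.uniform(0, 2 * np.pi) * m)
assert np.allclose(g.T @ g, np.eye(N)) and np.isclose(np.linalg.det(g), 1.0)
```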

Lemma 3.2.
Let N ≥ 2 and ΦN be the Euler parametrization of SO(N) as in Lemma 3.1. The normalised Haar measure dgSO(N) with this parametrization is then given inductively by
and
(3.2)
where we denoted $\sin^0(\phi_k) = 1$ and $C_n\equiv\frac{1}{2\pi^{n/2}}\Gamma(\tfrac{n}{2})$ for all n = 2, …, N, where Γ is the Euler Gamma function.
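For N = 3 this reduces to the familiar Euler angles on SO(3). The following Python sketch checks the normalization numerically, using the standard Z-Y-Z convention with Haar density proportional to $\sin(\beta)$ (so that the constant becomes $C_3=\Gamma(3/2)/(2\pi^{3/2})=1/(4\pi)$ in the notation above); the paper's $\Phi_3$ may use a different ordering of generators, so this is only an illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def Rz(t):
    return np.array([[np.cos(t), -np.sin(t), 0], [np.sin(t), np.cos(t), 0], [0, 0, 1]])

def Ry(t):
    return np.array([[np.cos(t), 0, np.sin(t)], [0, 1, 0], [-np.sin(t), 0, np.cos(t)]])

def sample(n):
    a = rng.uniform(0, 2 * np.pi, n)
    b = np.arccos(rng.uniform(-1, 1, n))        # density ~ sin(beta) on [0, pi]
    c = rng.uniform(0, 2 * np.pi, n)
    return [Rz(x) @ Ry(y) @ Rz(z) for x, y, z in zip(a, b, c)]

f = lambda R: R[0, 0] ** 2                      # a test matrix coefficient
x = Rz(0.4) @ Ry(1.0) @ Rz(2.2)
Rs = sample(200_000)
print(np.mean([f(R) for R in Rs]))              # ~ 1/3
print(np.mean([f(x @ R) for R in Rs]))          # ~ 1/3 (left invariance)
```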

As stated before, the proofs proceed analogously to the proofs given in the Appendix, and will hence be omitted. Next we will describe the finite-type functions just as in Sec. II. As in the case of SU(N), the finite-type functions f are sums of products of matrix coefficients, since the matrix coefficients of the irreducible representations of SO(N) are polynomials in those of the defining representation. By the parametrization given in Eq. (3.1), we see that these products consist of (powers of) $\sin(\phi_k)$, $\cos(\phi_k)$, $\exp\!\big(i\phi_{\frac{(N-1)N}{2}+1-\frac{(N-k)(N-k+1)}{2}}\big)$ and finite-type functions on SO(N − 1), where k = 1, …, N − 1. Again, using $\cos^2(\phi_k)+\sin^2(\phi_k)=1$, any finite-type function h can be written as
(3.3)
where $g_{SO(N-1)}\equiv\Phi_{N-1}(\phi_N,\ldots,\phi_{\frac{N(N-1)}{2}})$ is the SO(N − 1) component of $g = \Phi_N(\phi_1,\ldots,\phi_{N(N-1)/2})$ as in Lemma 3.1, and $(h_{SO(N-1)})_{ij}$ is a family of finite-type functions on SO(N − 1). Again, $k_{ij}^p, l_{ij}^p\in\mathbb{Z}$, $m_{ij}^p\in\mathbb{N}$ and $n_{ij}^p\in\{0,1\}$. We sum over both i and j. The sum over i ensures that all possible combinations of different terms appear, while the sum over j allows for different powers of each term.

The SO(N) finite-type functions differ slightly from the SU(N) finite-type functions in Eq. (2.3). We chose to allow (possibly) higher powers of $\cos(\phi_j)$ instead of $\sin(\phi_j)$ because the Haar measure only contains powers of $\sin(\phi_j)$. In addition, there are fewer parameters ranging over [0, 2π], hence we can only write $\phi_1, \phi_N, \ldots, \phi_{N(N-1)/2}$ as exponentials.

In the same way as with SU(N), we can then translate the problem to the analysis of functions on $\mathbb{R}^n\times\mathbb{C}^m$ in the following way:

Lemma 3.3.
Let $h_{SO(N)}$ be a finite-type function on SO(N), as in Eq. (3.3), and N ≥ 2. Then for any $P\in\mathbb{N}$ we have
(3.4)
Here $J_{SO(N)}$ is defined recursively by $J_{SO(2)}\equiv 1$ and, for 3 ≤ n ≤ N, by
where $C_n$ is defined as in Lemma 3.2 and $\widetilde{h_{SO(N)}}$ is defined recursively by $\widetilde{h_{SO(1)}}=1$ and by
(3.5)

Note that Lemma 3.3 is similar to Lemma 2.7, the difference being that the $x_j$ variables range over the interval [−1, 1] instead of [0, 1], which is due to the original intervals being [0, π] instead of [0, π/2] as in the SU(N) case. The proof is identical to the Proof of Lemma 2.7.

Similar to Sec. II, we describe a conjecture that will deal with Mathieu’s conjecture for SO(N):

Definition 3.4.
Let $k,l\in\mathbb{N}$ and $f:[-1,1]^k\times(S^1)^l\to\mathbb{C}$. We say f is an SO(N)-admissible function if f can be written as
$$f(x,z)=\sum_{m} c_m(x)\,z^m,$$
where $m=(m_1,\ldots,m_l)\in\mathbb{Z}^l$ is a multi-index, and $c_m(x)\in\mathbb{C}[x_1,(1-x_1^2)^{1/2},\ldots,x_k,(1-x_k^2)^{1/2}]$ is a complex polynomial in the $x_i$ and $\sqrt{1-x_i^2}$. We will call the collection of m for which $c_m\neq 0$ the spectrum of f, which will be denoted by Sp(f).

Conjecture 3.5.
Let $f:[-1,1]^{\frac{(N-1)(N-2)}{2}}\times(S^1)^{N-1}\to\mathbb{C}$ be an SO(N)-admissible function. If
for all $P\in\mathbb{N}$, then 0 does not lie in the convex hull of Sp(f).

Proposition 3.6.

Assume Conjecture 3.5 is true. Then Mathieu’s conjecture is true for SO(N).

Outline of the proof.
The proof is analogous to the Proof of Theorem 2.11, with some simplifications. Let f, h be finite-type functions of SO(N), and assume $\int_{SO(N)} f^P(g)\,dg = 0$ for all $P\in\mathbb{N}$. Then by applying Lemma 3.3 and Conjecture 3.5 we see that 0 does not lie in the convex hull of $\operatorname{Sp}(\tilde f)$. Now assume that $\int_{SO(N)} f^P h\,dg\neq 0$ for infinitely many $P\in\mathbb{N}$. Because of linearity, we can assume that h is a monomial with respect to each component. In other words
where $h_{SO(N-1)}$ is a monomial finite-type function on SO(N − 1). If $\int f^P h\,dg\neq 0$ then there exists at least one set of parameters $\{\beta_{ij}\}_{i,j}\subset\mathbb{N}_0$, where 1 ≤ i ≤ M and 1 ≤ j ≤ Q, with $\sum_{i,j}\beta_{ij} = P$ such that
Note that dg is a Haar measure, hence $\int_G f(g)\,dg = \int_G f(xg)\,dg$ for any $x\in G$. Sending $g\mapsto xg$ where $x = e^{\psi\lambda_2}$ for any ψ ∈ [0, 2π) is, taking Lemma 3.1 into account, equivalent to replacing $\phi_1$ with $\phi_1+\psi$. Applying this to the above integral, which is invariant under this mapping, we have
for all ψ ∈ [0, 2π). This can only be fulfilled if
Since dgSO(N−1) is also a Haar measure, applying inductively the same strategy to the integral
gives $\sum_{i,j}\beta_{ij}k_{ij}^q + K_q = 0$ for all 1 ≤ q ≤ N − 1. But this means that
Dividing both sides by P and taking the limit P → ∞, we see that the left-hand side goes to 0. Since the convex hull is closed, we must have that 0 lies in the convex hull of $\operatorname{Sp}(\tilde f)$, which is a contradiction. Hence $\int_{SO(N)} f^P h\,dg = 0$ for all P large enough, proving Mathieu’s conjecture.

The author would like to thank Michael Müger for proposing the subject, and the many valuable discussions we had. He also wishes to thank Erik Koelink for feedback and suggestions.

The author has no conflicts to disclose.

Kevin Zwart: Conceptualization (equal); Writing – original draft (equal); Writing – review & editing (equal).

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

In this appendix, we will prove Lemma 2.3 and Lemma 2.5. We note that the proofs are inspired by and/or based on Refs. 1 and 11–13.

Proof of Lemma 2.3.
Since the mapping in Lemma 2.3 is defined inductively, the proof will be by induction on N. We start with N = 2. On SU(2) it can easily be seen that the function
is surjective, and a bijection between $[0,\pi]\times[0,\tfrac{\pi}{2}]\times[0,2\pi]$ and SU(2) up to measure zero sets. So assume the statement has been proven for N − 1, and let us consider N. We note that $\prod_{2\le k\le N-1}A^{(k)}(\phi_k,\psi_k)\in\begin{pmatrix} SU(N-1) & 0\\ 0 & 1\end{pmatrix}$. For simplicity, let us denote the following two matrices
(A1)
(A2)
Then we can write FN(ϕ1, …, ωN−1) as
We start by proving that $F_N$ is injective on the interior of the hypercube. Let $X\equiv F_N(\phi_1,\ldots,\omega_{N-1})$ and $Y\equiv F_N(\phi_1',\ldots,\omega_{N-1}')$ satisfy X = Y. Since $X, Y\in SU(N)$ we have $XY^{-1} = X^{-1}Y = \mathbb{1}_N$. If we look at the matrix component $(XY^{-1})_{N,N}$, we get the following equation:
(A3)
Taking the absolute value on both sides, and using the triangle inequality, we get
Since we know that $B(B')^{-1}\in SU(N-1)$, each matrix element satisfies $|B(B')^{-1}_{ij}|\le 1$. In addition, $\psi_{N-1},\psi_{N-1}'\in[0,\tfrac{\pi}{2}]$, thus $\cos(\psi_{N-1}),\cos(\psi_{N-1}'),\sin(\psi_{N-1}),\sin(\psi_{N-1}')\ge 0$. Therefore we find
Hence we see that all inequalities were equalities. So $\cos(\psi_{N-1}-\psi_{N-1}')=1$, which can only mean $\psi_{N-1}=\psi_{N-1}'$ in the given range. In addition, it means $|B(B')^{-1}_{11}|=1$. So set $B(B')^{-1}_{11}=e^{i\xi}$ for some $\xi\in\mathbb{R}$. Putting it all in Eq. (A3) and writing the real and imaginary parts separately gives
(A4)
Taking the absolute value of the first equation, and again using the triangle inequality, gives
Once again, we see that $|\cos((N-1)(\omega_{N-1}-\omega_{N-1}'))| = |\cos(\omega_{N-1}-\omega_{N-1}'+\xi)| = 1$. But since $\sin^2, \cos^2\ge 0$, we must have that $\cos((N-1)(\omega_{N-1}-\omega_{N-1}')) = \cos(\omega_{N-1}-\omega_{N-1}'+\xi) = 1$ in order to satisfy Eq. (A4). We have two solutions: either $\omega_{N-1}=\omega_{N-1}'$, or $\omega_{N-1}=\frac{2\pi}{N-1}$ and $\omega_{N-1}'=0$ (or vice versa). However, since we are considering the interior of the hypercube, the latter cannot be the case. Therefore we get $\omega_{N-1}=\omega_{N-1}'$, and so ξ = 0. Going back to $XY^{-1} = \mathbb{1}_N$, using what we found, gives
Note that this can be rewritten as $P\begin{pmatrix} B(B')^{-1} & 0\\ 0 & 1\end{pmatrix}P^{-1}=\begin{pmatrix} A^{-1}A' & 0\\ 0 & 1\end{pmatrix}$, where $P\in SO(N)$ is a rotation in the $(e_1, e_N)$-plane. This means that either P = 1 or $(B(B')^{-1})_{1k}=(B(B')^{-1})_{k1}=\delta_{1k}$ for all k = 1, …, N − 1. Either case immediately gives $AB(B')^{-1}(A')^{-1} = \mathbb{1}$. But that would mean $(B')^{-1}(A')^{-1}=(AB)^{-1}$ is the inverse of AB, so $AB = A'B'$.
By the same arguments, considering $X^{-1}Y$ gives $BA = B'A'$. Note that P = 1 can only occur when $\psi_{N-1} = 0$, which would lie at the boundary of the hypercube, which we are not considering. So we can assume $(B(B')^{-1})_{1k}=(B(B')^{-1})_{k1}=\delta_{1k}$ for all k. Since $AB(B')^{-1}(A')^{-1} = \mathbb{1}_N$, it gives $(A(A')^{-1})_{1k}=(A(A')^{-1})_{k1}=\delta_{1k}$ as well. One can do the calculations and see that this is only possible if $\phi_{N-1}=\phi_{N-1}'$ and $\phi_j=\phi_j'$ and $\psi_j=\psi_j'$ for all j = 1, …, N − 2. In other words A = A′ and thus B = B′. This means
Using the induction hypothesis we see that $\phi_j=\phi_j'$, $\psi_j=\psi_j'$ for all $j=N,\ldots,\frac{N(N-1)}{2}$ and $\omega_k=\omega_k'$ for all k = 1, …, N − 2. Therefore, we see that $F_N$ is injective on the interior of the hypercube, and not injective on the boundary.
Next, we consider surjectivity. Let $U\in SU(N)$. If we can show that there exist $\phi_1,\ldots,\omega_{N-1}$ such that $F_N(\phi_1,\ldots,\omega_{N-1})^{-1}U=\mathbb{1}_N$, then $U = F_N(\phi_1,\ldots,\omega_{N-1})$. The last three matrices of the product $F_N(\phi_1,\ldots,\omega_{N-1})^{-1}U$ are of the form
where U is written as U=(uij)i,j=1,,N. Multiplying these three matrices gives a new element in SU(N), call this U′. Doing the multiplication gives
We choose $\psi_1, \phi_1$ in such a way that $u'_{2N}=0$. This is always possible: if $u_{1N} = 0$ then we choose $\psi_1=\frac{\pi}{2}$ and $\phi_1 = 0$; if $u_{2N} = 0$ we choose $\psi_1 = 0$ and $\phi_1 = 0$; if both are equal to zero, we are free to choose what we want, so we choose $\psi_1 = \phi_1 = 0$; and if neither is 0, we choose $\tan(\psi_1)=\left|\frac{u_{2N}}{u_{1N}}\right|$ and $2\phi_1=\arg\frac{u_{1N}}{u_{2N}}$. Since $\phi_1\in[0,\pi)$, these determine $\phi_1, \psi_1$ uniquely.
Filling in our choice of $\psi_1, \phi_1$ shows that the next two matrices in the multiplication $F_N(\phi_1,\ldots,\omega_{N-1})^{-1}U$ will be of the form
If we denote this matrix product by $U''=(u''_{ij})_{i,j=1,\ldots,N}$ we get
To set $u''_{3N}=0$, we consider a few cases. If $u''_{1N}=0$ we set $\psi_2=\frac{\pi}{2}$ and $\phi_2 = 0$. If $u''_{3N}=0$ then $\psi_2 = 0$ and $\phi_2 = 0$. And if neither is 0, we set $\tan(\psi_2)=\left|\frac{u''_{3N}}{u''_{1N}}\right|$ and $\phi_2=\arg\frac{u''_{1N}}{u''_{3N}}$. Since $\psi_2\in[0,\tfrac{\pi}{2}]$ and $\phi_2\in[0,2\pi]$, they are determined uniquely in the last case.
This procedure can be repeated N − 3 more times, defining $\psi_1,\ldots,\psi_{N-2}$, $\phi_1,\ldots,\phi_{N-2}$, and denoting the resulting matrix by $U^{(N-2)}=(u^{(N-2)}_{ij})_{i,j=1,\ldots,N}$. Then the next set of multiplications is given by
Denoting the resulting matrix by $U^{(N-1)}=(u^{(N-1)}_{ij})_{i,j=1,\ldots,N}$ gives the following set of equations:
We now choose $u^{(N-1)}_{1N}=0$. This can be achieved in a similar way as before. The resulting matrix is then
Since $U^{(N-1)}\in SU(N)$, we have $U^{(N-1)}(U^{(N-1)})^{\ast}=\mathbb{1}_N$. If we write $X=(u^{(N-1)}_{ij})_{i,j=1,\ldots,N-1}$ we get
This implies $XX^{\ast} = \mathbb{1}_{N-1}$, which means by finite-dimensionality arguments that $X\in U(N-1)$. We also see that
Since X is invertible, we have $u^{(N-1)}_{Nk}=0$ for k = 1, 2, …, N − 1. Therefore
We recall we were originally looking at $[F_N(\phi_1,\ldots,\omega_{N-1})]^{-1}U=\mathbb{1}_N$. Applying our procedure, the multiplication is reduced to finding the remaining parameters such that
We note that the two left-most matrices commute, hence this is equivalent to
Since $X\in U(N-1)$ we have $\det(X) = e^{i\xi}$ for some ξ ∈ [0, 2π). In addition, we see that
so $u^{(N-1)}_{NN}=e^{-i\xi}$. Choosing $\omega_{N-1}=\frac{\xi}{N-1}$ gives us the equation
By the induction hypothesis, we know $F_{N-1}$ is surjective onto SU(N − 1). Since $\det(e^{-\frac{i\xi}{N-1}}X)=e^{-i\xi}e^{i\xi}=1$, we get that $e^{-\frac{i\xi}{N-1}}X\in SU(N-1)$, hence it can be reached by $F_{N-1}$. Therefore, $F_N$ is surjective.
Finally we want to show that $F_N$ is a diffeomorphism. It is clear that $F_N$ is $C^\infty$, because the exponential map and matrix multiplication are $C^\infty$. On the interior of the hypercube, we see that the inverse is given inductively by
We see that each of these expressions is continuous and differentiable on the image of the interior of the domain of $F_N$. In other words, $F_N$ is a diffeomorphism on the interior of the hypercube.□
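To illustrate the inverse map constructed in the proof, here is a Python sketch for the N = 2 case only (same assumed generator conventions as before): it recovers the Euler angles of an SU(2) matrix in closed form and checks the reconstruction, up to the measure-zero ambiguities discussed above.

```python
import numpy as np
from scipy.stats import unitary_group

def F2(phi, psi, omega):
    d = lambda t: np.diag([np.exp(1j * t), np.exp(-1j * t)])
    r = lambda t: np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    return d(phi) @ r(psi) @ d(omega)

def euler_angles(U):
    """Invert F2: arg(u11) = phi + omega, arg(u12) = phi - omega, |u11| = cos(psi)."""
    psi = np.arccos(min(abs(U[0, 0]), 1.0))
    phi = (np.angle(U[0, 0]) + np.angle(U[0, 1])) / 2
    omega = (np.angle(U[0, 0]) - np.angle(U[0, 1])) / 2
    return phi, psi, omega

U = unitary_group.rvs(2, random_state=42)
U = U / np.sqrt(np.linalg.det(U))              # project a random U(2) matrix onto SU(2)
assert np.allclose(F2(*euler_angles(U)), U)
```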

Proof of Lemma 2.5.
The proof is based on Ref. 1. Before proving the lemma in detail, we outline the general strategy. We will construct the proof in four steps. First, we will show that there is a closed subgroup K such that G/K is a symmetric space. We wish to show
for any measurable function f on G, where $x\in G$ is a representative of $xK\in G/K$ and $k\in K$ is such that g = xk. Here dk is the Haar measure on K and $dg_K$ is the unique G-invariant measure on the symmetric space G/K.5 If the previous equation is true, then it shows that

Second, we construct left-invariant one-forms on G/K, which can be wedged to find $dg_K$. Third, we will show what the top form $dg_K$ looks like explicitly by considering the parametrization of SU(N) as in Lemma 2.3. We end the proof by normalizing the measure to get the Haar measure, which we shall call $dg_{SU(N)}$.
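As a small numerical illustration of the second step (for N = 2 and the same assumed conventions), the sketch below evaluates the Maurer–Cartan form $\omega_g = g^{-1}dg$ by finite differences and checks that it is unchanged under left translation and takes values in $\mathfrak{su}(2)$, which is what makes the wedge of the forms $e^m$ a left-invariant top form.

```python
import numpy as np

eps = 1e-6

def F2(phi, psi, omega):
    d = lambda t: np.diag([np.exp(1j * t), np.exp(-1j * t)])
    r = lambda t: np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
    return d(phi) @ r(psi) @ d(omega)

g0 = F2(0.3, 0.5, 1.1)
dg = (F2(0.3, 0.5 + eps, 1.1) - g0) / eps        # tangent vector along the psi-direction
v = np.linalg.inv(g0) @ dg                       # omega_g applied to that tangent vector

x = F2(0.9, 0.2, 0.4)                            # left-translate by an arbitrary element
w = np.linalg.inv(x @ g0) @ (x @ dg)             # same construction at the point x*g0
print(np.max(np.abs(v - w)))                     # ~ 0  (left invariance)
print(np.max(np.abs(v + v.conj().T)))            # ~ 0  (anti-Hermitian, i.e. in su(2))
```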

Our first step will be to find the subgroup K. Let us consider the group
Note that this subgroup is closed, and is the same subgroup as for the KAK decomposition, as discussed in Remark 2.4. This automatically shows that (G, K) is a Riemannian symmetric pair and thus G/K a symmetric space. To show the identity
where $x\in G$ is a representative of $xK\in G/K$ and $k\in K$, it is enough to show that $|\det(\operatorname{Ad}_G(k))| = |\det(\operatorname{Ad}_K(k))|$.5 Since we are considering matrix Lie groups, $\operatorname{Ad}_G(k)(X) = kXk^{-1}$ for any $X\in\mathfrak{g}$. The Lie algebra of K, which we denoted as $\mathfrak{k}$, is generated by $\lambda_1,\lambda_2,\ldots,\lambda_{(N-1)^2-1},\lambda_{N^2-1}$. Let us denote $\mathfrak{p}\equiv\operatorname{span}_{\mathbb{R}}(\lambda_{(N-1)^2},\ldots,\lambda_{N^2-2})$. We then see that for any $k\in K$ and 1 ≤ l ≤ 2(N − 1),
(A5)
(A6)
where $\lambda_{(N-1)^2-1+l}=\begin{pmatrix} 0 & v\\ -(v)^{\ast} & 0\end{pmatrix}$, $v\in\mathbb{C}^{N-1}$ is a column vector, and $(v)^{\ast}$ is the adjoint of v, i.e., the transpose of the complex conjugate of v. Therefore $\operatorname{Ad}_G(k)\lambda_{(N-1)^2-1+l}\in\mathfrak{p}$ for all l and thus
(A7)
Note that we can identify $\mathfrak{p}\cong\mathbb{R}^{2(N-1)}$. Also recall that U(N) ⊂ SO(2N) by identifying $\mathbb{C}^n\cong\mathbb{R}^{2n}$. By looking at Eq. (A5) it can be seen that $\operatorname{Ad}_G(k)|_{\mathfrak{p}}\in SO(2(N-1))$. Hence we conclude that
Therefore there exists a unique G-invariant measure on G/K and
The rest of the proof will be dedicated to finding $dg_K$. To find $dg_K$ explicitly, we will consider the Maurer–Cartan one-form ω, which, at $g\in G$, is a map $\omega_g: T_gG\to\mathfrak{g}$ defined by
Note that by construction $\omega_g$ is left-invariant, that is to say $(L_x)^{\ast}\omega_g=\omega_g$ for all $g\in G$. In the case of matrix groups, in particular when G = SU(N), $\omega_g$ can be calculated explicitly to be
where $x_1,\ldots,x_{N^2-1}$ is a set of local coordinates of G. We recall that $\operatorname{Tr}(\lambda_j\lambda_k) = 0$ whenever j ≠ k. Using this, we construct one-forms out of $\omega_g$ by defining for m = 1, …, 2(N − 1) the form
Note that $e^m$ is left-invariant, because $\omega_g$ is left-invariant. Let g = xk where $k\in K$ and $x\in G$. Then we see that
Filling this into $e^m$, and noting that $\operatorname{Tr}(w\lambda_{(N-1)^2-1+m})=0$ for all $w\in\mathfrak{k}$, gives
Now let us define the 2(N − 1)-form given by
This form is left-invariant because $e^m$ is left-invariant. In addition, if $k\in K$, then by Eq. (A7) we see
Note that $\det(\operatorname{Ad}_G(k)|_{\mathfrak{p}})\in S^1$, but $\mu_g$ has values in $\mathbb{R}$ for all $g\in G$. Therefore in order for the above equation to hold, one must have $\det(\operatorname{Ad}_G(k)|_{\mathfrak{p}})=\pm 1$ for all $k\in K$. But K is connected and det and $\operatorname{Ad}_K$ are smooth mappings, hence the image is a connected set. Since $\operatorname{Ad}_G(1) = \operatorname{Id}$, we must have $\det(\operatorname{Ad}_G(k)|_{\mathfrak{p}})=1$. Therefore we conclude that
Together with the fact that $\mathfrak{k}$ lies in the kernel of $(e^m)_e$, the form μ can be identified as a top form on the symmetric space G/K. In addition, μ is G-invariant, hence we can conclude that
where
(A8)
If x1, …, x2(N−1) is a parametrization of G/K, we have in general that
where $e^m=\sum_{j=1}^{2(N-1)} e^m_j\,dx_j$. By Lemma 2.3, we have that the set $\phi_{N-1},\psi_{N-1},\ldots,\phi_1,\psi_1$ parametrizes G/K. Hence we get that
(A9)
For the rest of this proof, we will be calculating $\det((e^i_j))$.
To be able to find the matrix $(e^i_j)$, we need to calculate Eq. (A8) explicitly. Let $x\in G$ be a representative of $xK\in G/K$; then $x = \prod_{2\le i\le N}A^{(i)}(\phi_{i-1},\psi_{i-1})$ for given $\phi_j, \psi_j$. If we write recursively $x_{n+1}(\phi_1,\ldots,\phi_n,\psi_1,\ldots,\psi_n)=x_n(\phi_1,\ldots,\phi_{n-1},\psi_1,\ldots,\psi_{n-1})\,e^{\phi_n\lambda_3}e^{\psi_n\lambda_{n^2+1}}$ for $n\le N$, we get
If we label $\omega_{x_l}\equiv x_l^{-1}dx_l$ for 1 ≤ l ≤ N − 1, we get
(A10)
Putting Eq. (A10), with l = N − 1, into Eq. (A8) gives
After a quick calculation, we see
where by the notation O(diag) we mean a diagonal matrix that can be disregarded when taking the trace form with $\lambda_{(N-1)^2+k-1}$. Therefore
Remember that we were interested in $\det((e^i_j))$ where $e^m = \sum_j e^m_j\,dx_j$. The set $\{\psi_{N-1},\phi_{N-1},\ldots,\psi_1,\phi_1\}$ is a set of coordinates of G/K, so tracking down all the $d\psi_{N-1}, d\phi_{N-1},\ldots,d\psi_1, d\phi_1$ gives the following matrix
where Λ is a vector with entries $\Lambda_{j-2}=\lambda_{(N-1)^2+j-1}$, j = 3, …, 2(N − 1), and $\mathbf{0}$ is the 2(N − 2)-dimensional zero vector. If N = 2 we are done and see that $\det((e^i_j)) = \cos(\psi_{N-1})\sin(\psi_{N-1})$. If N ≥ 3 we see that the lower right corner of $(e^i_j)$ is itself a 2(N − 2) × 2(N − 2) matrix. Taking the determinant, and using Tr(AB) = Tr(BA), then gives
Calculating $e^{\psi_{N-1}\lambda_{(N-1)^2+1}}\,\lambda_{(N-1)^2+m-1}\,e^{-\psi_{N-1}\lambda_{(N-1)^2+1}}$ with 3 ≤ m ≤ 2(N − 1), we find the following relation
where
Since the elements $\{\lambda_j\}_{j=1,\ldots,N^2-1}$ are orthogonal with respect to the trace, and $\omega_{x_{N-1}}$ has values in $\mathfrak{su}(N-1)$, we see that only the $\sin(\psi_{N-1})$ part contributes. Filling this in gives
(A11)
To finish the proof, we make the following claim:

Claim 1.

Proof.
Let 1 < l ≤ N − 2 and consider
for a given m. We apply Eq. (A10) and find
(A12)
To be able to evaluate this, we need some relations for the adjoint action. These are given by:
(A13)
(A14)
(A15)
(A16)
where $p,q,n\in\mathbb{N}$ and p > q > 1. Before we fill this in, we recall that each $\lambda_j$ is orthogonal with respect to the trace form and $\omega_{x_l}$ has values in $\mathfrak{su}(l)$. Hence we see that only a few of the terms survive in Eqs. (A13)–(A16), and the only relevant terms are given here:
(A17)
(A18)
(A19)
(A20)
The latter two equations can be written even more compactly, namely
Therefore, we see, due to the linearity of the trace form, that
(A21)
Filling these equations into Eq. (A12) gives
Note that j(m) is either a square number or a square number plus 1, which must mean that $\delta_{l^2,j(m)}$ can only be non-zero if m is odd, and $\delta_{l^2+1,j(m)}$ can only be non-zero if m is even.
Define the 2l × 2l dimensional matrix
To prove the claim, we need to calculate det(XN−1). Swapping four rows and four columns does not change the value of det(XN−1), so we swap the first and second row and column with the (2(N − 2) − 1)-th and the 2(N − 2)-th row and column respectively. Redefining this again as XN−1, we get, using the above equations:
Taking the determinant gives
Recursively continuing the decomposition of the latter determinant gives
(A22)
The last determinant is easily found, for $x_2=e^{\phi_1\lambda_3}e^{\psi_1\lambda_2}$ and so
Since $\lambda_{j(3)} = \lambda_1$ and $\lambda_{j(4)} = \lambda_2$ we can find the final trace by just computing the matrix multiplications, which gives
Filling this into Eq. (A22) gives the result
(A23)
which proves the claim.□

Putting Eqs. (A23) and (A11) into Eq. (A9) gives
Thus this is the Haar measure up to a normalization constant. To get the normalised Haar measure, we need to explicitly integrate over the whole group. The normalisation constant $C_N$ in Eq. (2.2) can be found by noting that the only non-trivial integration is over the $\psi_j$ coordinates, and each integral can be evaluated using the following identity
1. Bertini, S., Cacciatori, S. L., and Cerchiai, B. L., “On the Euler angles for SU(N),” J. Math. Phys. 47(4), 043510-1–043510-13 (2006).
2. Cacciatori, S., Dalla Piazza, F., and Scotti, A., “Compact Lie groups: Euler constructions and generalized Dyson conjecture,” Trans. Am. Math. Soc. 369(7), 4709–4724 (2017).
3. Dings, T. and Koelink, E., “On the Mathieu conjecture for SU(2),” Indagationes Math. 26(1), 219–224 (2015).
4. Duistermaat, J. and van der Kallen, W., “Constant terms in powers of a Laurent polynomial,” Indagationes Math. 9(2), 221–231 (1998).
5. Helgason, S., Differential Geometry and Symmetric Spaces (Academic Press, Inc., 1962).
6. Hoffman, D. K., Raffenetti, R. C., and Ruedenberg, K., “Generalization of Euler angles to N-dimensional orthogonal matrices,” J. Math. Phys. 13(4), 528–533 (1972).
7. Knapp, A. W., Lie Groups Beyond an Introduction, 2nd ed. (Birkhäuser, Boston, 2002), ISBN: 0-8176-4259-5.
8. Mathieu, O., “Some conjectures about invariant theory and their applications,” in Algebre Non Commutative, Groupes Quantiques et Invariants, edited by Alex, J. and Cauchon, G. (Société Mathématique de France, Reims, 1997), Vol. 2, pp. 263–279.
9. Müger, M. and Tuset, L., “More on the Mathieu conjecture for SU(2),” arXiv:2210.06582 (2022).
10. Raffenetti, R. C. and Ruedenberg, K., “Parametrization of an orthogonal matrix in terms of generalized Eulerian angles,” Int. J. Quantum Chem. 4, 625–634 (1969).
11. Spengler, C., Huber, M., and Hiesmayr, B. C., “Composite parameterization and Haar measure for all unitary and special unitary groups,” J. Math. Phys. 53(1), 013501-1–013501-22 (2012).
12. Tilma, T., Byrd, M., and Sudarshan, E., “A parametrization of bipartite systems based on SU(4) Euler angles,” J. Phys. A: Math. Gen. 35(48), 10445–10465 (2002).
13. Tilma, T. and Sudarshan, E., “Generalized Euler angle parametrization for SU(N),” J. Phys. A: Math. Gen. 35(48), 10467–10501 (2002).
14. Wigner, E. P., “On a generalization of Euler’s angles,” in Group Theory and its Applications, edited by Loebl, E. M. (Academic Press, 1968), pp. 119–129, ISBN: 978-1-4832-3188-4.