Any nonsingular function of spin *j* matrices always reduces to a matrix polynomial of order 2*j*. The challenge is to find a convenient form for the coefficients of the matrix polynomial. The theory of biorthogonal systems is a useful framework to meet this challenge. Central factorial numbers play a key role in the theoretical development. Explicit polynomial coefficients for rotations expressed either as exponentials or as rational Cayley transforms are considered here. Structural features of the results are discussed and compared, and large *j* limits of the coefficients are examined.

## I. INTRODUCTION

As a consequence of the Cayley-Hamilton theorem,^{1,2} any nonsingular function^{3} of an *n* × *n* matrix always reduces to a polynomial of order *n* − 1 in powers of the matrix, since the *n*th and higher powers of the matrix can be reduced to a linear combination of lower powers (see Ref. 5 and the earlier literature cited therein). The challenge for anyone wishing to take advantage of this fact is to find a convenient form for the coefficients of the matrix polynomial.

In particular, a nonsingular matrix function of any one component, $\hat{n}\cdot J$, for a unitary, irreducible angular momentum representation of spin *j*, always reduces to a matrix polynomial of order 2*j*. Explicit polynomial coefficients for rotations expressed either as exponentials or as rational Cayley transforms^{1,4} are considered here, to augment some recent studies.^{6–8} Structural features of the results are discussed and compared, including the behavior of the coefficients for large *j*.

The theory of biorthogonal systems^{9–11} is a useful framework to find the coefficients of the matrix polynomial, even in the most general case. For analytic spin functions of $\hat{n}\cdot J$, central factorial numbers^{12,13} play a key role in the construction of the biorthogonal system.

In prior work, Curtright, Fairlie, and Zachos (CFZ) obtained explicit and intuitive results^{6} expressing the standard, exponential rotation matrix for *any* quantized angular momentum *j* as a polynomial of order 2*j* in the corresponding $(2j+1)\times(2j+1)$ spin matrices $\hat{n}\cdot J$ that generate rotations about the axis $\hat{n}$. While many previous studies of this or closely related problems can be found in the literature,^{14–19} none of these other studies succeeded in finding such simple, compact expressions for the coefficients in the spin matrix polynomial, as elementary functions of the rotation angle, as those obtained by CFZ. For each angle-dependent coefficient in the polynomial, the explicit formula found by CFZ involves nothing more complicated than a truncated series expansion for a power of the arcsin function. Although a detailed proof of the CFZ result is not exhibited in Ref. 6, the essential ingredients needed to provide such a proof are in that paper, and indeed, the details of two elementary derivations were subsequently given in Ref. 7.

More recently, Van Kortryk^{8} discovered the corresponding polynomial result for the Cayley rational form of an irreducible, unitary *SU*(2) representation. At first glance, the Cayley transform for a spin representation would seem to follow immediately from the CFZ result just by changing variables from *θ* to a function $\theta(\alpha)$, where *α* parameterizes the transform. For any given numerical value of $\hat{n}\cdot J$, this would be so, obviously, but it is not obviously so for matrix-valued $\hat{n}\cdot J$. Nevertheless, as it turns out, the result for the Cayley transform is actually much simpler than the CFZ result for the exponential. For the Cayley transform, the explicit coefficients involve nothing more complicated than truncated series expansions of already finite polynomials in a real parameter *α*. Remarkably, this is true for the Cayley transform of *any* matrix, not just spin matrices.^{20–22}

In this paper, I compare the CFZ and Van Kortryk results, and I provide a more complete discussion of the latter, including a detailed derivation of the polynomial coefficients for the *SU*(2) Cayley transforms. I show explicitly how the CFZ results are connected to Cayley transforms by Laplace transformations.

## II. METHODOLOGY

### A. A fundamental identity

The fundamental identity for spin $j \in \{0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \dots\}$ is^{7}

where the coefficients $t(n,k)$ are central factorial numbers.^{12,13} (For details, see Appendices A and B.) This identity ensures that all powers of $\hat{n}\cdot J$ higher than 2*j* can be reduced to linear combinations of lower powers. This is a specific illustration of the Cayley-Hamilton theorem, applied to the monomial on the LHS of (1).
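The Cayley-Hamilton reduction behind this identity is easy to confirm numerically. Here is a minimal sketch (my own illustration, not taken from the references) for spin *j* = 1, using the standard matrix *J*_{x}:

```python
import numpy as np

# For spin j = 1 the eigenvalues of any n.J are -1, 0, +1, so Cayley-Hamilton
# gives (n.J - 1)(n.J)(n.J + 1) = (n.J)^3 - (n.J) = 0, i.e. the cube reduces
# to a lower power.  Here n.J = J_x in the standard spin-1 representation.
Jx = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 1.0],
               [0.0, 1.0, 0.0]]) / np.sqrt(2.0)

print(np.allclose(np.linalg.matrix_power(Jx, 3), Jx))
```

The same check works for any axis $\hat{n}$, since only the eigenvalues −*j*, …, *j* matter.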

### B. Vandermonde matrices

The Vandermonde matrix^{23} for spin *j* is

while its inverse^{24} is given by the “Fin du monde” matrix elements,

where $k, l \in \{1, 2, \dots, 2j+1\}$, and where the numerator factors $N_{kl}(j)$ are given by nested sums subject to some exclusions,

These are unfamiliar polynomials^{25} in *l* and *j*. For *k* = 2*j* + 1, the numerator is always 1, as indicated above, and for *k* = 2*j*, it reduces to a single sum giving a result only first order in both *j* and *l*. Thus,

But for other values of *k*, these numerator polynomials quickly get out of hand. For example, for *k* = 2*j* − 1, the resulting double sum yields a polynomial 3rd order in *j* and 2nd order in *l*,

For reasonable values of *j* — up to 100, say — it is easier to just use numerical machine computation to obtain $V^{-1}(j)$ directly from $V(j)$, with no need of the analytic closed-form expressions for the $N_{kl}(j)$.
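That numerical route can be sketched as follows. The function name and the eigenvalue ordering −2*j*, −2*j* + 2, …, 2*j* are my assumptions, chosen to match the conventions described in Sec. II C:

```python
import numpy as np

def vandermonde(j):
    # Vandermonde matrix for spin j: column m+1 holds the m-th powers of the
    # eigenvalues of S = 2*J_3, namely -2j, -2j+2, ..., 2j
    # (rows and columns indexed 1, ..., 2j+1, as in the text)
    twoj = int(round(2 * j))
    s = np.array([2.0 * (-j + m) for m in range(twoj + 1)])
    return np.vander(s, twoj + 1, increasing=True)

V = vandermonde(2)            # spin j = 2, a 5 x 5 matrix
Vinv = np.linalg.inv(V)       # numerical inverse; no closed-form N_kl needed
print(np.allclose(V @ Vinv, np.eye(5)))
```

For *j* up to 100 or so this inversion is immediate in double precision, as the text suggests.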

### C. Biorthogonal matrices

It is useful to have in hand the *dual matrices* which are trace orthonormalized with respect to powers of the spin matrix, $S \equiv 2\hat{n}\cdot J$. Without loss of generality, choose *S* = 2*J*_{3}, since any other choice for $\hat{n}$ merely requires selecting a different basis to diagonalize the spin matrix, thereby obtaining the same eigenvalues as 2*J*_{3}. Thus, the powers are

It is now straightforward to construct orthonormalized dual matrices *T _{n}* such that

The *T _{n}* may also be chosen to be diagonal $(2j+1)\times(2j+1)$ matrices in the basis that diagonalizes *S*. In fact, for any spin *j*, the required entries on the diagonal of *T _{n}* are just the entries in the $(n+1)$th row of the inverted Vandermonde matrix, $V^{-1}(j)$. (Note that here, unlike the conventions in Ref. 6, both rows and columns of the Vandermonde matrix and its inverse are indexed as 1, 2, …, 2*j* + 1.) That is,

This result follows immediately from the fact that the diagonal entries for *S ^{m}* are just the entries in the corresponding column (i.e., the $(m+1)$th column) of the Vandermonde matrix, $V(j)$.

Explicit examples of spin matrix powers and their duals, and the corresponding Vandermonde matrix inverse $V^{-1}(j)$, are given in Appendix C for *j* = 1/2, 1, 3/2, and 2.
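A minimal numerical sketch of such a biorthogonal system, assuming the diagonal conventions just described (the helper names are mine):

```python
import numpy as np

def dual_matrices(j):
    # Diagonal dual matrices T_n for S = 2*J_3: the diagonal of T_n is the
    # (n+1)-th row of the inverse Vandermonde matrix
    twoj = int(round(2 * j))
    s = np.array([2.0 * (-j + m) for m in range(twoj + 1)])
    Vinv = np.linalg.inv(np.vander(s, twoj + 1, increasing=True))
    return np.diag(s), [np.diag(row) for row in Vinv]

S, T = dual_matrices(3 / 2)
# trace orthonormality: tr(T_n S^m) = delta_{nm}
ok = all(np.isclose(np.trace(T[n] @ np.linalg.matrix_power(S, m)),
                    1.0 if n == m else 0.0)
         for n in range(4) for m in range(4))
print(ok)
```

The check works because tr(*T _{n}* *S ^{m}*) is just the (*n*+1, *m*+1) entry of $V^{-1}(j)\,V(j)$.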

### D. Projecting coefficients

Suppose *f* is a nonsingular matrix function for spin *j*. In a basis where *S* = 2*J*_{3}, the matrix $f(S)$ is diagonal and so is its reduction to a polynomial in *S*,

Trace orthogonality of the dual matrices then gives the coefficients as

Assembling the coefficients into a column and using (9) gives

So, the spin matrix polynomial coefficients for $f(S)$ are determined by straightforward matrix multiplication using the inverse Vandermonde matrix.

In terms of the explicit form in (3), for *k* = 1, 2, …, 2*j* + 1,

The coefficients of the highest powers appearing in the spin matrix polynomial are often worked out readily from the general closed-form expressions for the $N_{kl}(j)$, as given in (5) and (6). For the highest three powers in the polynomial,

etc. But again, for reasonable values of *j*, it is probably easiest to just use numerical machine computation to first obtain $V^{-1}(j)$ directly from $V(j)$, and then perform the matrix multiplication in (11).
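The projection step can be sketched numerically as follows, with *S* = 2*J*_{3} diagonal as above; the example function is a rotation about the 3-axis, where the sign convention exp(*iθJ*_{3}) in the exponent is my assumption:

```python
import numpy as np

def spin_poly_coeffs(j, f):
    # Coefficients c_0, ..., c_{2j} with f(S) = sum_m c_m S^m for S = 2*J_3,
    # obtained by the matrix multiplication with the inverse Vandermonde matrix
    twoj = int(round(2 * j))
    s = np.array([2.0 * (-j + m) for m in range(twoj + 1)], dtype=complex)
    V = np.vander(s, twoj + 1, increasing=True)
    return np.linalg.solve(V, f(s))       # same as V^{-1} @ f(s)

# example: reduce exp(i*theta*J_3) at spin j = 1, where J_3 = S/2
theta = 0.7
c = spin_poly_coeffs(1, lambda s: np.exp(1j * theta * s / 2))
S = np.diag([-2.0, 0.0, 2.0])
reconstructed = sum(c[m] * np.linalg.matrix_power(S, m) for m in range(3))
print(np.allclose(reconstructed, np.diag(np.exp(1j * theta * np.diag(S) / 2))))
```

Any other nonsingular *f* may be substituted for the lambda, since only the eigenvalue列 — the column of values *f*(*s*) — enters.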

Indeed, by taking several low values of *j* and their explicit $V^{-1}(j)$, Van Kortryk^{8} was able to deduce the general formula for the coefficients in the Cayley transform polynomial, which he then confirmed for specific higher values of *j*. I provide a proof of the formula in Section IV of this paper.

### E. Lagrange-Sylvester expansions

I note in passing another useful method, albeit equivalent to that involving the inverse Vandermonde matrix.

Reduction of nonsingular matrix functions for spin *j* to polynomials can be efficiently carried out in specific cases using Lagrange-Sylvester expansions.^{26} Consider functions of an *N* × *N* diagonalizable matrix 𝕄 with non-degenerate eigenvalues *λ _{i}*, *i* = 1, …, *N*. On the span of the eigenvectors, there is an obviously correct Lagrange formula, as extended to diagonalizable matrices by Sylvester,

where the projection operators — the so-called Frobenius covariants — are given by products,

From expanding each such product, it is evident that any *f*(𝕄) reduces to a polynomial of order *N* − 1 in powers of 𝕄,

and the function-dependent coefficients can be expressed in terms of the eigenvalues of 𝕄 by expanding projection operators (17) as polynomials in 𝕄.
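A compact numerical sketch of the Lagrange-Sylvester expansion, for a generic small matrix (the helper name `sylvester_f` is mine):

```python
import numpy as np

def sylvester_f(f, M):
    # Lagrange-Sylvester expansion for diagonalizable M with non-degenerate
    # eigenvalues: f(M) = sum_i f(lam_i) * P_i, where the Frobenius covariants
    # are the products P_i = prod_{k != i} (M - lam_k) / (lam_i - lam_k)
    lam = np.linalg.eigvals(M)
    N = M.shape[0]
    out = np.zeros((N, N), dtype=complex)
    for i in range(N):
        P = np.eye(N, dtype=complex)
        for k in range(N):
            if k != i:
                P = P @ (M - lam[k] * np.eye(N)) / (lam[i] - lam[k])
        out += f(lam[i]) * P
    return out

# check: exp of 2*J_1 = sigma_x at spin 1/2, with eigenvalues +/- 1, for which
# exp(sigma_x) = cosh(1)*1 + sinh(1)*sigma_x
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
print(np.allclose(sylvester_f(np.exp, sx),
                  np.cosh(1.0) * np.eye(2) + np.sinh(1.0) * sx))
```

Expanding each product *P _{i}* in powers of 𝕄 recovers the polynomial coefficients, exactly as stated in the text.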

CFZ used the methods described above to obtain the coefficients for the spin matrix polynomial reduction of the exponential form for rotations for several low values of *j*. They then re-expressed the results in a form that surprisingly suggested an immediate generalization, after which they developed the theory to confirm their ansatz, thereby raising it to the level of a theorem with explicit proofs.^{7} I review and illustrate the CFZ formula in Sec. III.

## III. ROTATIONS AS EXPONENTIALS

For comparison to the results of Sec. IV, we first recall the CFZ formula for a rotation through an angle *θ* about an axis $\hat{n}$, valid for any spin $j \in \{0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \dots\}$. The formula reduces the *manifestly* nonsingular exponential function of any spin matrix to an explicit polynomial,

where the angle-dependent coefficients of the various spin matrix powers are given by

Here, $\lfloor \cdots \rfloor$ is the integer-valued floor function, and $\mathrm{Trunc}_n\, f(x)$ is the *n*th-order Taylor polynomial truncation for any $f(x)$ admitting a power series representation,

In addition, $\epsilon(j,k)$ is a binary-valued function of 2*j* − *k* that distinguishes even and odd integers: $\epsilon(j,k) = 0$ for even 2*j* − *k*, and $\epsilon(j,k) = 1$ for odd 2*j* − *k*.

The simplest two nontrivial cases of (19) are well-known: *j* = 1/2 and *j* = 1. Explicitly,

The former involves the Pauli matrices, **J** = **σ**/2, while the latter is sometimes known as the Euler-Rodrigues formula. Several other explicit cases may be found in Refs. 17 and 6.
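The *j* = 1/2 case is easy to verify numerically. This sketch assumes the convention exp(*iθJ*_{3}) for the rotation, which may differ by a sign from the convention used in (19):

```python
import numpy as np

# spin 1/2: with J = sigma/2 one has
#   exp(i*theta*J_3) = cos(theta/2)*1 + 2i*sin(theta/2)*J_3
theta = 1.1
J3 = np.diag([0.5, -0.5])
lhs = np.diag(np.exp(1j * theta * np.diag(J3)))          # exponential, diagonal basis
rhs = np.cos(theta / 2) * np.eye(2) + 2j * np.sin(theta / 2) * J3
print(np.allclose(lhs, rhs))
```

The two angle-dependent coefficients here, cos(*θ*/2) and 2 sin(*θ*/2), are the *j* = 1/2 instances of the general coefficients in (20).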

In practice, for finite *j* of reasonable size, the truncations needed to evaluate (20) are easily obtained as a matter of course by machine computation, for example, by using either *Maple*^{®} or *Mathematica*^{®}. Nevertheless, it is interesting and useful for a systematic analysis that Taylor series for powers of cyclometric functions can be expressed in terms of $t(m,n)$, the so-called *central factorial numbers of the first kind*.^{12,13} Thus for $|z| \le 1$ and non-negative integer *n* (cf. Theorem 4.1.2 in Ref. 13),

Note that the coefficients in these Taylor series are all non-negative. In general, the values of $t(m,n)$ are defined by and obtained from simple polynomials, as described in Appendix A.
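Following the defining polynomials recorded in Appendix A, the numbers $t(n,k)$ can be generated by simple polynomial multiplication. A sketch, where the product form $x^{[n]} = x\,(x + \tfrac{n}{2} - 1)(x + \tfrac{n}{2} - 2)\cdots(x - \tfrac{n}{2} + 1)$ is the standard central factorial and the function name is mine:

```python
import numpy as np

def central_factorial_row(n):
    # t(n, 0), t(n, 1), ..., t(n, n), from the defining polynomial
    # x^[n] = x*(x + n/2 - 1)*(x + n/2 - 2)*...*(x - n/2 + 1) = sum_k t(n,k) x^k
    # (valid for n >= 1)
    p = np.poly1d([1.0, 0.0])                  # start with x
    for i in range(1, n):
        p = p * np.poly1d([1.0, n / 2 - i])    # multiply by (x + n/2 - i)
    return p.coeffs[::-1]                      # ascending powers of x

# e.g. x^[4] = x(x+1)x(x-1) = x^4 - x^2, so t(4,2) = -1 and t(4,4) = 1
print(central_factorial_row(4))
```

Note that t(even, even) come out integral while t(odd, odd) do not, e.g. $x^{[3]} = x^3 - \tfrac{1}{4}x$ gives $t(3,1) = -\tfrac{1}{4}$, consistent with the remarks in Appendix A.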

As firmly established in Refs. 17 and 6, the remaining coefficients in (19) may then be obtained from

Also, as observed in Ref. 6, results (20) display the limit *j* → ∞ for fixed *k* in a beautifully intuitive way. In that limit, the truncation is lifted to obtain trigonometric series for the *periodicized* *θ ^{k}* monomials. But even as *j* → ∞, integer *j* (bosonic) and semi-integer *j* (fermionic) coefficients are clearly distinguished by a relative sign flip for $\theta \in (\pi, 3\pi) \bmod 4\pi$. This is evident upon plotting the first few coefficients for very large spins. For example, $A_{0,1,\dots,5}^{j}(\theta)$ are plotted in Figure 1 for *j* = 69 (darker curves, in blue) and *j* = 137/2 (lighter curves, in red).

## IV. ROTATIONS AS CAYLEY TRANSFORMS

The Cayley rational form of a unitary $SU(2)$ group element is also nonsingular for any irreducible representation with spin *j*. Therefore, it can also be reduced to a spin matrix polynomial of order 2*j*. Thus,

Here, *α* is a *real* parameter, to avoid singularities for *any* $j \in \{0, \tfrac{1}{2}, 1, \tfrac{3}{2}, 2, \dots\}$. The coefficients $A_k^{j}(\alpha)$ are to be determined as functions of *α*. These coefficients can be obtained by rewriting the geometric series $1/(1 - 2i\alpha\, \hat{n}\cdot J)$ for spin *j* as a polynomial in $\hat{n}\cdot J$. Thus, define the auxiliary polynomial

Then clearly,

The simplest two nontrivial cases are again *j* = 1/2 and *j* = 1. Explicitly,

Yet again, the first two of these polynomials involve the Pauli matrices, **J** = **σ**/2, while the last is related to the Cayley transform of the Euler-Rodrigues formula. Appendix E repeats these two examples and also gives the Cayley transforms for *j* = 3/2, 2, 5/2, and 3.
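The *j* = 1/2 Cayley transform can likewise be verified numerically, assuming the form $(1 + 2i\alpha J_3)(1 - 2i\alpha J_3)^{-1}$ for the group element:

```python
import numpy as np

# spin 1/2 Cayley transform: with J = sigma/2 it reduces to
#   ((1 - alpha^2)*1 + 4i*alpha*J_3) / (1 + alpha^2),
# i.e. a first-order polynomial in J_3 with rational coefficients in alpha
alpha = 0.6
J3 = np.diag([0.5, -0.5])
I = np.eye(2)
lhs = np.linalg.inv(I - 2j * alpha * J3) @ (I + 2j * alpha * J3)
rhs = ((1 - alpha**2) * I + 4j * alpha * J3) / (1 + alpha**2)
print(np.allclose(lhs, rhs))
```

As promised, the coefficients are finite rational functions of the real parameter *α*, with no singularities for real *α*.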

The coefficients in the geometric series expansion for arbitrary integer or semi-integer *j* follow directly from the methods in Refs. 6 and 7. The result is^{8}

where the truncation is in powers of *α*, and where the determinant for spin *j* is

These results are readily checked for small values of *j* upon using explicit matrices, say $\hat{n}\cdot J = J_3$. I provide a detailed proof of (34) in Subsection IV A.

But first, note for integer spins, it is always true that $B_0^{j}(\alpha) = 1$, and that the higher coefficients are paired,

while *all* the coefficients are paired for semi-integer spins. In this case,

So, the coefficients in the spin matrix polynomials for bosonic (integer *j*) and fermionic (semi-integer *j*) Cayley transforms are easily distinguished, both quantitatively and qualitatively. The coefficients are paired as indicated because, for either bosonic or fermionic spins, only *even* powers of *α* are produced by the determinant factors in (34). Hence, the parities $A_k^{j}(-\alpha) = (-1)^k A_k^{j}(\alpha)$ and $B_k^{j}(-\alpha) = (-1)^k B_k^{j}(\alpha)$.

Moreover, for any allowed *j*, all the coefficients of *α*^{2k} in (35) are positive. As a consequence, for any allowed *j*, the $A_k^{j}(\alpha)$ and $B_k^{j}(\alpha)$ coefficients have no singularities for real *α*. In particular, for any *j*, the highest two coefficients reduce to

And as *j* increases, the domain where $1/\det(1 - 2i\alpha\, \hat{n}\cdot J)$ achieves significant values collapses towards the origin, at which point it always has unit value. Here is a plot for *j* = 1/2, 1, 3/2, 2, 5/2, and 3 (Figure 2).
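The quantity plotted in Figure 2 is easy to evaluate from the eigenvalues; a sketch, assuming $\hat{n}\cdot J$ diagonal so the determinant factorizes over *m* = −*j*, …, *j*:

```python
import numpy as np

def inv_det(j, alpha):
    # 1/det(1 - 2i*alpha*n.J), evaluated from the eigenvalues m = -j, ..., j.
    # Paired factors (1 - 2i*alpha*m)(1 + 2i*alpha*m) = 1 + 4*alpha^2*m^2
    # make the result real and positive for real alpha.
    m = np.arange(-j, j + 1)
    return (1.0 / np.prod(1 - 2j * alpha * m)).real

print(inv_det(1, 0.0))    # → 1.0 : unit value at the origin, for any j
print(inv_det(1, 0.5))    # already down to 1/(1 + 4*0.25) = 0.5 at alpha = 1/2
```

Increasing *j* multiplies in more factors 1 + 4*α*²*m*² with larger *m*, which is why the support collapses towards *α* = 0.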

Two additional comments are warranted.^{8} First, the determinants in (34) are essentially generating functions of the central factorial numbers $t(m,n)$ (see Appendix A), a fact previously exploited in Refs. 6 and 7 and already emphasized for the Cayley transforms in Ref. 8. For either integer or semi-integer *j*,

with the full determinant obtained for *k* = 0 if *j* is integer. For semi-integer *j*, the same expression is true, only now the full determinant includes an additional term $4^{\,j+\frac{1}{2}}\,\alpha^{2j+1}\left|t(2j+2,1)\right|$. Thus, the general formula for the determinant, for either integer or semi-integer *j*, is

Second, as *j* → ∞ for any fixed *k*, the truncation in (34) is lifted — but with some subtleties to be discussed below — to obtain for small *α*, $\lim_{j\to\infty} B_k^{j}(\alpha) \sim \alpha^k$. But in contrast to the periodicized *θ*-monomials found in Ref. 6, the large *j* behavior here does not make the periodicity of rotations manifest. To exhibit periodicity even for finite values of *j*, *θ* must be expressed, on a case-by-case basis, as cyclometric functions of *α*, and then the branch structure of those cyclometric functions must be invoked.^{8}

### A. A derivation for general matrix resolvents

In this subsection, I obtain the coefficients in the Cayley transform spin matrix polynomial by deriving the expansion coefficients for the resolvent of a general matrix. This is a well-known result in resolvent theory (e.g., see Ref. 20) and has been established many times in the literature (e.g., see Refs. 21 and 22). While there is hardly any need to plow again a field that has been tilled as much as this one, I include here a variant of the standard proof just to make the discussion self-contained. (Also see Appendix D.)

For any finite *N* × *N* matrix *M*, consider the resolvent written as a matrix polynomial,

This is much simpler than the exponential case^{6,7} in that one *need not* differentiate, e.g., to find

so as to obtain an expression linear in *M* multiplying the original polynomial, and subsequently a set of first-order equations for the coefficients *r _{m}*. Rather, it suffices here just to multiply through by 1 − *αM*. Thus,

On the other hand, the Cayley-Hamilton theorem gives

where the polynomial coefficients *d _{n}* are defined by the determinant

If the powers *M ^{m}* for 0 ≤ *m* ≤ *N* − 1 are all linearly independent, as is the case for the spin matrices $\hat{n}\cdot J$, then

It is now straightforward to show the solution to these recursion relations is given by

For $M = 2i\, \hat{n}\cdot J$ with *N* = 2*j* + 1, the *r _{n}* are essentially just renamed $B_n^{j}$. Thus, result (34) is proven.
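A numerical sketch of the solution to the recursion relations, written directly in the truncation form (the function name is mine, and `np.poly` supplies the coefficients of $\det(1 - \alpha M)$ from the eigenvalues):

```python
import numpy as np

def resolvent_coeffs(M, alpha):
    # Coefficients r_0, ..., r_{N-1} with (1 - alpha*M)^{-1} = sum_n r_n M^n,
    # via r_n = alpha^n * Trunc_{N-1-n}[det(1 - a*M)](alpha) / det(1 - alpha*M).
    # np.poly(eigenvalues) returns [1, -e_1, e_2, ...], exactly the d_k in
    # det(1 - a*M) = prod_i (1 - a*lam_i) = sum_k d_k a^k.
    N = M.shape[0]
    d = np.poly(np.linalg.eigvals(M))
    det = sum(d[k] * alpha**k for k in range(N + 1))
    return [alpha**n * sum(d[k] * alpha**k for k in range(N - n)) / det
            for n in range(N)]

# check at spin 1/2, where M = 2i*J_3 = diag(i, -i)
M, a = np.diag([1j, -1j]), 0.3
r = resolvent_coeffs(M, a)
lhs = r[0] * np.eye(2) + r[1] * M
print(np.allclose(lhs, np.linalg.inv(np.eye(2) - a * M)))
```

For this spin-1/2 check the formula gives $r_0 = 1/(1+\alpha^2)$ and $r_1 = \alpha/(1+\alpha^2)$, the same rational coefficients seen in the explicit Cayley examples above.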

### B. Additional exact results and asymptotic behavior for large *j*

The determinants can also be written in terms of Pochhammer symbols: $(x)_n \equiv x(x+1)\cdots(x+n-1) = \Gamma(x+n)/\Gamma(x)$. For example, for integer *j*,

which leads to a nice form for the determinant expressed in terms of analytic functions of both *j* and *α*. Again, for integer *j*,

upon using $\Gamma\!\left(\tfrac{1}{2i\alpha}\right)\Gamma\!\left(-\tfrac{1}{2i\alpha}\right) = 2\pi\alpha/\sinh\!\left(\tfrac{\pi}{2\alpha}\right)$. On the other hand, for semi-integer *j*,

In the limit *j* → ∞, these are the well-known infinite product representations of sinh and cosh. That is to say,

This shows that there is a distinguishable difference between Cayley transforms for bosonic and fermionic spins, even as *j* → ∞, much as there were distinguishable differences in the behavior of the coefficients for the exponential (19), also as *j* → ∞, as illustrated in the graphs of Section III.

Now, consider the truncations for *k* = 1 or 2, say, for integer spin. This choice of coefficients removes from $\det(1 - 2i\alpha\, \hat{n}\cdot J)$ the highest power of *α*, namely, the *α*^{2j} term. But this term is easily computed,

Therefore for integer *j*,

The corresponding coefficients in the Cayley rational form are

as well as $B_2^{j}(\alpha) = \alpha B_1^{j}(\alpha)$. The exact results for these two coefficients, for all integer spins, are then given by

For fixed *α*, the large *j* limits of these follow immediately from

In fact, the limit is just 1 to all orders in $1/\alpha$. This should be obvious, but in case it is not, as a check, the reader may consider the following expansion:

where $\Psi(n, z)$ is the *n*th derivative of the digamma function $\Psi(z)$ (also known as PolyGamma in *Mathematica*). Except for the $O(z^0)$ term, all of the $O(z^k)$ coefficients in this last expansion vanish as *j* → ∞.

Therefore, it follows for integer *j* that

So, as *α* approaches zero, the RHS approaches unity faster than any power of *α* due to the essential singularity in the exponential,

This is the precise meaning of the statement in Ref. 8 that

In fact, the asymptotic form is rapidly approached as *j* is increased.

A similar calculation, again for integer *j*, gives

For the case at hand,

Here are some other facts that were used to get exact expressions as well as asymptotic behaviors for the coefficients $B_1^{j}(\alpha)$ through $B_4^{j}(\alpha)$,

I plot a few examples to illustrate these features, beginning with $B_1^{j}(\alpha)$ and $B_2^{j}(\alpha)$ (Figure 3). A closer look near *α* = 0 is provided by the next graph (Figure 4). I also plot the corresponding results for $B_3^{j}(\alpha)$ and $B_4^{j}(\alpha)$ (Figure 5). Again, a closer look near *α* = 0 is given in the next graph (Figure 6). Similar behavior is exhibited by the higher coefficients.

These plots should be compared to the large *j* behavior of the coefficients for the exponential, given in (20). As the graphs in Section III attest, the asymptotic *j* → ∞ behavior is not closely approached by the $A_k^{j}(\theta)$ for fixed *k* until rather large values of *j* are considered, and for some *θ*, not even then. This is especially significant for *θ* near ±*π*, where $\lim_{j\to\infty} A_k^{j}(\theta)$ has a discontinuity in *θ* for every other value of *k*. By contrast, for fixed *k* as *j* → ∞, the $A_k^{j}(\alpha)$ converge uniformly and monotonically to $A_k^{\infty}(\alpha)$ on any finite interval in *α*, and indeed, the large *j* limit of the coefficients is closely approached for moderate values of *j*. These last points are clearly displayed in plots of the relative error between $A_k^{j}(\alpha)$ and $A_k^{\infty}(\alpha)$ for a few values of *j*. Further numerical studies suggest that, for any fixed *k*, the relative error

#### 1. Large *j* conjectures and proofs for all the coefficients

Here are the first four requisite asymptotic *j* → ∞ results, for integer *j*, as obtained by explicit calculation,

(Semi-integer *j* are different. See below.) Note that the numerators are just truncations of the series for the denominator, $\frac{2\alpha}{\pi}\sinh\left(\frac{\pi}{2\alpha}\right) = \sum_{n=0}^{\infty} \frac{1}{(2n+1)!}\left(\frac{\pi}{2\alpha}\right)^{2n}$. Hence, the obvious conjecture for integer *j* is
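The series identity just quoted for the denominator is elementary to confirm numerically:

```python
from math import factorial, pi, sinh

# (2*alpha/pi) * sinh(pi/(2*alpha)) = sum_n (pi/(2*alpha))^{2n} / (2n+1)!
# -- just the odd series for sinh(x) divided through by its argument x
alpha = 0.9
x = pi / (2 * alpha)
series = sum(x**(2 * n) / factorial(2 * n + 1) for n in range(40))
closed = (2 * alpha / pi) * sinh(x)
print(abs(series - closed) < 1e-12)
```

Truncating this series at finite order reproduces the numerators of the asymptotic coefficients displayed above.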

Given this result, the coefficients of the spin matrix powers in reduction (28) are, as *j* → ∞ *for integer j*,

As an immediate consequence, $\lim_{k\to\infty} \mathrm{asymp}_k(\alpha) = 0$ for all *α* ≠ 0, with the limit approached monotonically from above, while at *α* = 0, $\mathrm{asymp}_k(0) = 1$ for all *k*.

Also, to put it briefly, explicit calculation of the lowest few coefficients reveals the corresponding conjecture for semi-integer *j*: The asymptotic expressions may be obtained from the bosonic spin case by substituting $\frac{2\alpha}{\pi}\sinh\left(\frac{\pi}{2\alpha}\right) \Longrightarrow \cosh\left(\frac{\pi}{2\alpha}\right)$. Thus for the fermionic case, as *j* → ∞ *for semi-integer j*,

While plots of the various functions of *α* in (77) and (78) are qualitatively similar, nevertheless, the bosonic and fermionic Cayley transforms are still easily distinguished, even as *j* → ∞.

Proofs of these conjectures are straightforward using (34), (39), (53), and Proposition 2.7 in Ref. 13. For example, for integer *j*, use result (39) to rewrite (34) for the even-indexed coefficients as

The limit *j* → ∞ now follows from the integer *j* result in (53), written as

and from the asymptotic form of $t(2j+2, 2m)$ as given in Ref. 13, Proposition 2.7 (xxvi), written as

The net result is an expression whose numerator is a truncation of the infinite series for the denominator, as conjectured above,

The odd-indexed coefficients for the bosonic spin case are given by this same expression according to elementary pairing (36). Thus for integer *j*, the conjectured *j* → ∞ form of the coefficients is established.

The result for semi-integer *j* is as conjectured in (78). Alternatively, it may be expressed as a hypergeometric function similar to that in (76).

## V. RELATING EXPONENTIALS AND RATIONAL FORMS BY VARIABLE CHANGES

To finish the comparison of rotations as exponentials and Cayley transforms, for arbitrary *j*, it is instructive to return to an issue briefly mentioned in the Introduction. Namely, why does a map from one form to the other **not** succeed through the use of a numerical-valued change of variable $\alpha(\theta)$ or its inverse $\theta(\alpha)$? After all, a comparison of the exponential and Cayley results for *j* = 1/2, as well as for *j* = 1, as given in (22), (23), (31), and (33), would give the *misleading* impression that this can succeed.

To see the difficulty, take a basis where $\hat{n}\cdot J$ is diagonal, with eigenvalues *M* ∈ { − *j*, − *j* + 1, …, *j* − 1, *j*}, and equate the exponential and Cayley forms acting on a specific eigenstate $|M\rangle$. The result of course is

This is easily solved either for $\alpha(\theta)$ or for the inverse map $\theta(\alpha)$,

So the relation is *M*-dependent, although in this particular instance, it is *in*dependent of the sign of *M*. In general, nontrivial *M*-dependence would be a feature that would arise upon comparing *any* two functions of $\hat{n}\cdot J$ when acting on $|M\rangle$.

Now, it just so happens that a unique map works for *all* the *M*-eigenstates for *j* = 1/2, and another unique map works for all the *M*-eigenstates for *j* = 1. (Note that for *M* = 0, the relation (84) always reduces to 1 = 1 *no matter what* the relation between *α* and *θ*.) But this is fortuitous and peculiar to those two spins, due to $|M|$ having only one nontrivial value in those special cases. For all *j* ≥ 3/2, the map must change with $|M|$. There is *no consistent α*⇄*θ* *relation* that works on all $\hat{n}\cdot J$ eigenstates for *j* > 1. In a sense made precise by (85), there is a “parameter shear” in the relation between *α* and *θ* as the $\hat{n}\cdot J$ spectrum is traversed.
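The parameter shear is easy to exhibit numerically. This sketch assumes the eigenvalue-wise relation $e^{i\theta M} = (1 + 2i\alpha M)/(1 - 2i\alpha M)$, which gives $\alpha(\theta) = \tan(M\theta/2)/(2M)$; the precise sign conventions may differ from those of (84) and (85):

```python
from math import tan, isclose

# Equating the exponential and Cayley forms eigenvalue by eigenvalue gives an
# M-dependent map alpha(theta) = tan(M*theta/2)/(2M), using
# (1 + i*x)/(1 - i*x) = exp(2i*arctan(x)) with x = 2*alpha*M.
theta = 0.8

def alpha(M):
    return tan(M * theta / 2) / (2 * M)

print(isclose(alpha(0.5), alpha(-0.5)))   # independent of the sign of M ...
print(isclose(alpha(0.5), alpha(1.5)))    # ... but not of |M|: parameter shear
```

Since *j* ≥ 3/2 involves both |*M*| = 1/2 and |*M*| = 3/2, no single *α*(*θ*) can serve all eigenstates, exactly as stated in the text.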

Similarly, if one were to equate the coefficients of any given power $(\hat{n}\cdot J)^k$ in the spin matrix polynomial reductions of the exponential and the Cayley transform, the result would be an *α*⇄*θ* relation that depends on *k*, and this result would be *in*consistent (except in the two special cases *j* = 1/2 and *j* = 1) with that obtained from equating the coefficients of some other power of $\hat{n}\cdot J$.

## VI. RELATING EXPONENTIALS AND RATIONAL FORMS BY LAPLACE TRANSFORMS

Some standard results from resolvent theory provide a direct relation between (19) and (28). Consider the Laplace transform and its inverse,

For any Hermitian *N* × *N* matrix, a Laplace transform yields the resolvent from the exponential, and vice versa,

This furnishes another route to derive the CFZ formula starting from the known matrix polynomial expression for the resolvent. Given the ease with which the polynomial coefficients for the resolvent are obtained, this provides perhaps the simplest proof of the CFZ result.

For linearly independent powers *M ^{k}*, 0 ≤ *k* < *N*, as is the case for spin matrices, the matrix polynomial expansion coefficients for the exponential and the resolvent are directly related order-by-order by the Laplace transform,

If the first *k* powers of *M* are not independent, for *k* < *N* − 1, correct relations between the exponential and the resolvent are still obtained by taking $\mathcal{L}$ or $\mathcal{L}^{-1}$, as in (87) and (88), although the final results may admit further simplification. Fortunately, it is not necessary to worry about that situation in the following.
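The basic transform pair underlying this section can be checked numerically; a sketch for the diagonal case $M = 2iJ_3$ at spin 1/2 (the quadrature scheme and tolerances are mine):

```python
import numpy as np

# Laplace transform of the matrix exponential gives the resolvent,
#   int_0^infty e^{-s*t} e^{t*M} dt = (s - M)^{-1},
# checked here for M = 2i*J_3 = diag(i, -i), where everything is diagonal
M = np.diag([1j, -1j])
s = 1.5
t = np.linspace(0.0, 40.0, 200001)
w = np.full_like(t, t[1] - t[0])      # trapezoid-rule weights
w[0] *= 0.5
w[-1] *= 0.5
F = np.array([np.sum(w * np.exp(-s * t) * np.exp(lam * t))
              for lam in np.diag(M)])
print(np.allclose(F, 1.0 / (s - np.diag(M)), atol=1e-6))
```

The integrand decays like $e^{-st}$, so truncating the integral at *t* = 40 is harmless for Re *s* > 0.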

Rescaling variables produces a form of the Laplace transform that is more useful for the direct comparison of (20) and (34). Thus,

It suffices here to consider only the direct transform. The inverse relation is automatic, given the large *α* behavior of the $B_k^{j}(\alpha)$. From (25), along with (26), the required calculation is

These are elementary integrals, of course, resulting in

After combining these results with (25) and (35), and some elementary manipulations of the central factorial numbers, transform (90) yields the expected result,

For example, for spin *j* = 1/2, with the usual 2 × 2 matrices,

and for spin *j* = 1, with 3 × 3 matrices,

## VII. CONCLUDING REMARKS

Polynomial reductions of spin matrix exponentials have long been recognized as important and useful in many different contexts.^{16–19,28–31} Perhaps, the results presented in Refs. 6–8 and discussed further in this paper can help to facilitate these and other applications.

Moreover, Cayley transforms have numerous applications, many of a practical nature. For example, consider Refs. 32 and 33. In the first of these two papers, previous results on attitude representations were generalized using Cayley transforms, for application to guidance and control problems, while in the second paper, the theory of time stepping methods was developed in a systematic manner based on the Cayley transform, for application to discretised differential equations that describe evolution in Lie groups. Both of these papers stress that Cayley transform methods are easier to implement than the evaluation of matrix exponentials.

This relative ease of implementation is borne out in the present paper by direct comparison of (20) and (34), and more indirectly by the relative simplicity of the derivation for the Cayley transform coefficients, as given here, when compared to the proofs for the coefficients in the exponential case, as given in Ref. 7.

## Acknowledgments

I thank T. S. Van Kortryk and C. K. Zachos for discussions. I have also benefited from stimulating views of sea waves approaching the Malecón, La Habana, and I thank the organizers of STARS 2015 for their efforts to make this possible. This work was supported in part by NSF Award No. PHY-1214521, and in part by a University of Miami Cooper Fellowship.

### APPENDIX A: CENTRAL FACTORIAL NUMBERS

For historical reasons, central factorial numbers are defined as the coefficients in simple polynomials.^{12,13} They can be either positive or negative, but only their absolute values are needed for the coefficients of the spin matrix expansions in the main text. Moreover, the $t(\mathrm{even}, \mathrm{even})$ are integers, but the $t(\mathrm{odd}, \mathrm{odd})$ are not integers, and $t(\mathrm{odd}, \mathrm{even}) = 0 = t(\mathrm{even}, \mathrm{odd})$. So, the even and odd cases are best handled separately.

By definition and as elementary consequences thereof (cf. http://oeis.org/A182867),

as well as (cf. http://oeis.org/A008956)

### APPENDIX B: PROOF OF THE FUNDAMENTAL IDENTITY

Here, I provide a demonstration that

where $t(m,n)$ are the central factorial numbers, defined in Appendix A. In a basis where $2\hat{n}\cdot J$ is diagonal, (B1) reduces to a matrix equation

where the Vandermonde matrix for spin *j* is defined as in (2). So, consider the *k*th row on the RHS of (B2),

If *j* is semi-integer, say *j* = *n* + 1/2 for integer *n*, then $t(2+2j, \mathrm{even}) = 0$, and this *k*th row becomes

where the $t(2+2j, 2j+2)$ term was added and subtracted to obtain the complete sum on the RHS of (A3), for *m* = *j* + 1/2 = *n* + 1 and *x* = *j* + 1 − *k*. The sum was then replaced with the product on the LHS of (A3). But the product evaluates to zero because one of the terms in the product always vanishes for *k* ≥ 1, and therefore the *k*th row on the RHS of (B2) is

since $t(2+2j, 2j+2) = 1$. Thus, the *k*th row on the LHS of (B2) is obtained, and the identity is established for semi-integer *j*. If *j* is integer, a similar proof goes through upon using (A1). (See Ref. 7, Appendix B.)

### APPENDIX C: BIORTHOGONAL MATRIX EXAMPLES

Here are more details about the biorthogonal systems of spin matrices described in the main text, for *j* = 1/2, 1, 3/2, and 2.

Spin *j* = 1/2 is deceptively simple. The independent powers of the spin matrix are

and the corresponding trace-orthonormal dual matrices are the same up to a normalization factor. (NB: This is *not* true for any other *j*.)

Spin *j* = 1 is a more interesting example. The independent powers of the spin matrix are given by

and the corresponding trace-orthonormal dual matrices, as well as *V*^{−1}, are given by

Spin *j* = 3/2 is also interesting. The independent powers of the spin matrix are given by

and the corresponding trace-orthonormal dual matrices, as well as *V*^{−1}, are given by

Spin *j* = 2 helps to establish the general pattern.

From the 1st through 5th rows of *V*^{−1}, one extracts, respectively, the diagonal dual matrices *T*_{0} through *T*_{4}.

### APPENDIX D: A DERIVATION USING DIFFERENTIAL EQUATIONS

In this appendix, I derive the coefficients in the Cayley transform spin matrix polynomial by solving differential equations. This parallels the first derivation of the CFZ results, as given in Ref. 7. Although the resolvent analysis for general matrices, as given in the main text, is much simpler, perhaps the discussion here has some pedagogical value in that it emphasizes how much more effort is needed to pursue the same differential equation approach as was used for the exponential case.

For a given spin *j*, let $M = 2i\, \hat{n}\cdot J$ and write

Differentiate with respect to *α* to obtain

That is to say,

or upon rearranging the terms,

Trace projections using the dual matrices (i.e., the linear independence of the *M ^{m}* for 0 ≤ *m* ≤ 2*j*) give the set of first-order equations

These are immediately integrated, with the constants of integration fixed by the requirement that all the *b _{n}* behave properly as *α* → 0, namely, $b_0(\alpha = 0) = 1$ and $b_m(\alpha = 0) = 0$ for *m* ≥ 1. The only additional ingredients needed to establish

are $t(\mathrm{even}, \mathrm{odd}) = 0 = t(\mathrm{odd}, \mathrm{even})$^{12,13} and the relations (39) and (40). The details are spelled out more fully in the following few paragraphs.

while for *semi-integer spin* the equations become

Here, I have incorporated the phases of the central factorial numbers (see Appendix A) to write

Moreover, for integer *j*,

solves (D9) with the correct value at *α* = 0.

Note the second of these is obtained from the first by $k \to k - \tfrac{1}{2}$. The first of these, along with (D17), may be solved by upward recursion starting from *k* = 0, using (D19), with consistency fixing *b*_{2j} = *b*_{2j−1}. The result for integer *j* is

Note that *b*_{0} = 1 when *j* is an integer, and that $\left. d^m b_m / d\alpha^m \right|_{\alpha=0} = m!$ for all *m*, a property that is clear from the original rational form (D1).

Similarly, (D21) may be solved by recursion to obtain for semi-integer *j*,

Again note that $\left. d^m b_m / d\alpha^m \right|_{\alpha=0} = m!$ for all *m*. But in this case, *b*_{0} ≠ 1, since the $O(\alpha^{2j+1})$ term in the denominator is never present in the numerator. Nevertheless, as previously announced,^{8} in view of (39) and (40), it follows that the *same final determinantal form* holds for both integer and semi-integer *j*, namely, (D8). Technically,^{34} how sweet it is!

Alternatively, one may simply substitute (D22) into (D20), and (D23) into (D21), to verify that those difference equations are both satisfied, along with (D17) and (D18), with the proper behavior at *α* = 0.

Since for a given *j* the *b _{n}* are just renamed $B_n^{j}$, result (34) is proven.

### APPENDIX E: EXAMPLES OF CAYLEY TRANSFORMS FOR $j = \tfrac{1}{2}$, 1, $\tfrac{3}{2}$, 2, $\tfrac{5}{2}$, AND 3

## REFERENCES

When I say that $f(M)$ is a *nonsingular function* of a matrix *M*, I mean that all matrix elements of $f(M)$ are finite in a basis where all matrix elements of *M* are finite.

At least they were unfamiliar to me when I first started work on spin matrix expansions. Also, they are unnamed polynomials, as far as I can tell.