We give a polynomial-time algorithm for computing upper bounds on some of the smaller energy eigenvalues of a spin-1/2 ferromagnetic Heisenberg model with an arbitrary graph *G* of underlying interactions. An important ingredient is the connection between Heisenberg models and the symmetric products of *G*. Our algorithms for computing upper bounds are based on generalized diameters of graphs. Computing the upper bounds amounts to solving the minimum assignment problem on *G*, which has well-known polynomial-time algorithms from the field of combinatorial optimization. We also study the possibility of computing lower bounds on some of the smaller energy eigenvalues of Heisenberg models. This amounts to estimating isoperimetric inequalities for the symmetric products of graphs. By using connections with discrete Sobolev inequalities, we show that this can be performed by considering just the vertex-induced subgraphs of *G*. If our conjecture for a polynomial-time approximation algorithm to solve the edge-isoperimetric problem holds, then our proposed method of estimating the energy eigenvalues via approximating the edge-isoperimetric properties of vertex-induced subgraphs will yield a polynomial-time algorithm for estimating the smaller energy eigenvalues of the Heisenberg ferromagnet.

## I. INTRODUCTION

The Heisenberg model (HM) is a quantum theory of magnetism^{1} and is prevalent in many naturally occurring physical systems, such as various cuprates,^{2,3} solid Helium-3,^{4} and, more generally, systems with interacting electrons.^{5} The HM can also be engineered in ultracold atomic gases^{6} and quantum dots.^{7} Given the abundance of the HM, it may be advantageous to obtain a detailed understanding of its spectral structure. Such an understanding would, for example, help us to analyze the feasibility of storing quantum information in HMs via encoding into permutation-invariant quantum error correction codes.^{8–13} Moreover, given the widespread applicability of magnetic materials in classical information processing,^{14,15} quantum magnets based on the HM could similarly enable quantum technologies. In addition, the HM can also be used for quantum computation^{16} and quantum simulation. The HM is also of particular interest in mathematical physics because it is a paradigmatic model of statistical mechanics. For example, the celebrated Mermin-Wagner theorem^{17} was proven for the HM.

The central object in this paper is the Heisenberg Hamiltonian (HH). It is the mathematical embodiment of the HM’s energy level structure, and contains all information necessary to derive every property of the HM. More precisely, the HH for spin-half particles in the absence of an external magnetic field is a matrix given by

where **1** is the identity matrix, $\sigma_i^x$, $\sigma_i^y$, and $\sigma_i^z$ are the usual Pauli matrices acting on the *i*th particle, the sets {*i*, *j*} are included in the sum whenever particles *i* and *j* interact, and *J*_{{i,j}} is an exchange constant which quantifies the strength and nature of the coupling between the particles. Here, we restrict our attention to ferromagnetic HHs, where every exchange constant is non-negative. We write the Hamiltonian in this way because we want the smallest eigenvalue of *Ĥ* to be zero. It is a well-known fact that a ferromagnetic HH can be written as a sum of graph Laplacians. For completeness, we give its proof later in Theorem II.1. Studying the spectrum of the HH is thus equivalent to studying the spectrum of these Laplacians. The field of spectral graph theory deals with determining the eigenvalues of graph Laplacians, and an extensive amount of work has been done on this topic. One can, for example, refer to Chung’s book for a review of the most important results in spectral graph theory.^{18}

Traditionally, most studies of the HM rely on the Bethe ansatz.^{19} In such approaches, the structure of the eigenvectors is assumed and later verified to hold by solving for some of the previously undetermined parameters. This approach has proved hugely successful for 1D Heisenberg models.^{2,20–26} Recently, lower bounds have been proved on the average free energy of the HM on three-dimensional lattices^{27} and also on lattices of any dimension.^{28} However, bounds on the spectrum of the Heisenberg ferromagnet have yet to be directly addressed. Moreover, while certain other 2D HMs have been studied,^{3,29–31} the question of how to address HMs of potentially arbitrary geometry remains unresolved.

In recent years, there has been impressive progress toward determining the spectrum of the HH. The seminal result of Caputo, Liggett, and Richthammer proves Aldous’ spectral gap conjecture,^{32} which implies that the spectral gap of the HH is equal to the spectral gap of the Laplacian representing the graph of interactions of the HM. Since the size of this Laplacian is just the number of the HM’s spins, determining the spectral gap of the HH is straightforward and can be done numerically in polynomial time.^{33} One of the most important developments thereafter was made by Correggi, Giuliani, and Seiringer,^{27,28} who developed important Sobolev inequalities for discrete graphs that are also applicable to the HM. Based on these, they found the right inequalities to obtain lower bounds on the free energy of the HM at finite temperatures. However, the problem of obtaining bounds for the higher eigenvalues of the HH has been largely unaddressed.

In this paper, we utilize relatively recent developments in spectral graph theory to obtain new bounds for the HH’s spectrum. With regard to the upper bounds, we rely on analytical bounds on the eigenvalues of a graph based on its generalized diameters by Chung, Grigor’yan, and Yau.^{34} For the lower bounds, we use Chung and Yau’s Sobolev inequalities on graphs.^{35} There are two innovations provided in this paper. First, we identify a probabilistic polynomial-time algorithm to obtain upper bounds on the HH’s eigenvalues by reducing the computation of a generalized diameter to that of a minimum assignment problem. Second, we provide new discrete Sobolev inequalities that are based on deleting vertices from graphs. These inequalities can be used with Chung and Yau’s Sobolev inequalities to obtain lower bounds on the eigenvalues of the HH. To the best of our knowledge, this is the first time graph-theoretic methods have been directly used to obtain bounds on the eigenvalues of the HH.

We begin our paper by explaining how the HH is connected to the symmetric power of graphs in Sec. II. In a preprint by Rudolph, the connection between graphs and the HH was noted, and the terminology of symmetric power of graphs was coined.^{36} Such graphs, later also known as token graphs,^{37} have been extensively studied in recent years for their graph theoretic properties in Refs. 38–41 among many others. Once we establish the connection of the HH with symmetric powers of graphs, we turn our attention to the elementary problem of determining the spectrum of the mean-field Heisenberg ferromagnet, where every pair of spins interacts with the same exchange constant. Obviously, the SU(2) symmetry of such a model immediately allows one to determine the HH eigenvalues and multiplicities, and the eigenprojectors can be in principle calculated using textbook methods with Clebsch-Gordan coefficients. However, we wish to highlight that by using well-known facts about association schemes, we can already directly identify the eigenprojectors of this HH in terms of Hahn polynomials and generalized adjacency matrices (see Theorem III.1).

Generalized diameters of graphs play a central role in deriving upper bounds on the spectrum of HHs, as we shall see in Sec. IV B. These generalized diameters can be thought of as the widths of a body when it is interpreted to have a given dimension. The most important feature of our algorithms is that they run much more efficiently than algorithms that attempt to directly evaluate the eigenvalues of the HH. We show that computing these generalized diameters is equivalent to the minimum assignment problem, which is solved efficiently using the Kuhn-Munkres algorithm (Ref. 42, p. 52). Together with analytical bounds on the eigenvalues of a graph based on its generalized diameters by Chung, Grigor’yan, and Yau,^{34} we thereby obtain a polynomial-time algorithm for evaluating upper bounds on the eigenvalues of the ferromagnetic HH, which gives us our result in Theorem IV.5.

Isoperimetric inequalities play a central role in deriving lower bounds on the spectrum of HHs in this paper. An isoperimetric inequality essentially gives a lower bound on the minimum boundary size of a body with a fixed volume in a given manifold. Specializing this to graphs, we require a lower bound on the minimum cut-size of a subset of *k* vertices, for every possible choice of *k*. Such bounds are then called edge-isoperimetric inequalities, which we introduce in Sec. V. Based on edge-isoperimetric inequalities of the symmetric products of graphs, we present lower bounds on the eigenvalues of the ferromagnetic HH (see Theorem V.8). Because deriving edge-isoperimetric inequalities on the symmetric product of graphs is potentially difficult, we also derive isoperimetric inequalities on the symmetric product of graphs based on the isoperimetric inequalities on their vertex-induced subgraphs (see Theorem V.6). We introduce some Sobolev inequalities in Sec. V A, and proceed to use our results on isoperimetric inequalities on the symmetric product of graphs to obtain lower bounds on all of the HH eigenvalues based on isoperimetric properties of the associated graphs. For this, we use the Sobolev inequalities of Chung and Yau^{35} and of Ostrovskii^{43} on graphs.

Finally in Sec. VI, we discuss some potential implications of our bounds and algorithms. We then remark on the potential to improve both the upper and lower bounds that we present, by further investigation using a combinatorial approach. We also point out how an advance in the field of approximation algorithms could help to make computing lower bounds for the spectrum of the ferromagnetic HH much more efficient.

## II. GRAPHS AND THE HEISENBERG MODEL

Since we investigate the spectrum of HHs with graphs of varying dimensions, we need to explain what these graphs and their dimensions are. Here, a graph corresponding to a HH comprises vertices from 1 to *n*, which label the particles, and edges {*u*, *v*}, which label the interactions between particles *u* and *v*. A graph’s dimension generalizes the notion of dimension for continuous manifolds. The edge-boundary of any set of vertices *X*, denoted by *∂X*, is the set of edges in *G* with exactly one vertex in *X*. Suppose that, for every *k* ≤ *n*/2, every set *X* of *k* vertices in *G* satisfies the bound |*∂X*| ≥ *ck*^{1−1/δ} for some positive constant *c*. Then, we say that *G* has a dimension of *δ* with isoperimetric number *c*. This is analogous to the situation where a manifold with fixed volume *k* and a surface area of at least *ck*^{1−1/δ} for some positive constant *c* has a dimension of *δ*. The dimension of a physical system is then the dimension of the corresponding graph of interactions.
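To make this definition concrete, the edge-boundary and the constant *c* can be computed by brute force on a small graph. The following sketch (our own code, with the graph given as a plain edge list) checks that the 4-cycle satisfies the bound with δ = 1 and isoperimetric number *c* = 2:

```python
from itertools import combinations

def edge_boundary(edges, X):
    """Edges of G with exactly one endpoint in the vertex set X."""
    X = set(X)
    return [e for e in edges if len(X & set(e)) == 1]

# Example: the 4-cycle 0-1-2-3-0. With delta = 1, the bound reads
# |boundary(X)| >= c * k^0 = c, and exhaustive search gives c = 2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n, delta = 4, 1
c = min(len(edge_boundary(edges, X)) / len(X) ** (1 - 1 / delta)
        for k in range(1, n // 2 + 1)
        for X in combinations(range(n), k))
print(c)  # 2.0
```

The exhaustive minimization over all subsets is exponential in *n*, which is why the analytical bounds developed later are needed for large graphs.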

To understand precisely how the HH is related to graphs, we need to define the symmetric product of a graph. When *k* is a non-negative integer with *k* ≤ *n*, the *k*th symmetric product of a graph *G* with vertices *V* and edges *E*, denoted by *G*^{{k}}, is a graph with the following properties. First, *G*^{{k}} has as its vertices all possible subsets of *V* of size *k*. Second, the edges of *G*^{{k}} are the sets {*X*, *Y*}, where (i) *X* and *Y* are subsets of *V* with *k* vertices, (ii) *X* and *Y* have *k* − 1 common elements, and (iii) their symmetric difference, the union of the sets without their intersection, is an edge in *E*. In short, {*X*, *Y*} is an edge in *G*^{{k}} if and only if the symmetric difference of *X* and *Y* is an edge in *E*, i.e., *X*△*Y* ∈ *E*. Examples of the symmetric product of graphs can be seen in Figs. 1 and 2.
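The construction above can be sketched in a few lines (our own function names; vertices of *G*^{{k}} are represented as frozensets):

```python
from itertools import combinations

def symmetric_product(n, edges, k):
    """Vertices of G^{k}: all k-subsets of {0,...,n-1}.
    {X, Y} is an edge iff the symmetric difference X △ Y is an edge of G."""
    E = {frozenset(e) for e in edges}
    verts = [frozenset(c) for c in combinations(range(n), k)]
    prod_edges = [(X, Y) for X, Y in combinations(verts, 2)
                  if (X ^ Y) in E]
    return verts, prod_edges

# The path 0-1-2; its 2nd symmetric product is again a path:
# {0,1} - {0,2} - {1,2}
verts, pe = symmetric_product(3, [(0, 1), (1, 2)], 2)
print(len(verts), len(pe))  # 3 2
```

Note that condition (ii) is automatic here: for equal-size sets, |*X*△*Y*| = 2 already forces |*X* ∩ *Y*| = *k* − 1.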

Now, we proceed to define the Laplacians of *G*^{{k}}. By denoting |*X*⟩ as a state with the spins labeled by *X* in the up state and the remaining spins in the down state where *X* is a subset of vertices in *G*, the Laplacians of *G*^{{k}} are

Here, each *L*_{k} is the Laplacian of the graph *G*^{{k}} and is a square matrix of size $\binom{n}{k}$. If we interpret *G*^{{k}} as a discrete manifold, the eigenvectors and eigenvalues of *L*_{k} are its normal modes and associated resonance frequencies.

If we normalize the HH so that every non-zero exchange constant is equal to 1, we get the normalized Hamiltonian

This normalized Hamiltonian *Ĥ*_{1} is just a sum of pairwise orthogonal matrices *L*_{k} (Ref. 38, Appendix A), as we can see from the following theorem.

*Let G =* (*V*, *E*) *be a graph with n vertices. Then, Ĥ*_{1} = *L*_{0} + ⋯ + *L*_{n}, *where L*_{k} *are as given in Eq.* (2.1) *and Ĥ*_{1} *is as given in Eq.* (2.2)*.*

This decomposition of the ferromagnetic HH with graph *G* as a sum of pairwise orthogonal matrices, with each matrix associated with the symmetric products of *G*, has already been known for years (Ref. 38, Appendix A).

When *G* is connected, each *L*_{k} has exactly one eigenvalue equal to zero with corresponding eigenvector $\binom{n}{k}^{-1/2}\sum_{X\subseteq V, |X|=k}|X\rangle$ (Ref. 44, Lemma 13.1.1). Hence, the ground state energy of *Ĥ*_{1} is zero with degeneracy *n* + 1, and the ground space is spanned by the Dicke states $|D_k^n\rangle$,^{10} where $|D_k^n\rangle$ is a normalized superposition of all |*X*⟩ for which *X* is a subset of {1, …, *n*} of size *k*. Moreover, for any graph, the Laplacians *L*_{k} and *L*_{n−k} are unitarily equivalent because of the equivalence of *G*^{{k}} and *G*^{{n−k}} under set complementation. To see this, denote $\bar{X}$ as the set complement of *X* ⊂ *V*, and note that $L_{n-k}=U_k L_k U_k^\dagger$, where *U*_{k} is the unitary defined in Eq. (2.3) that maps each |*X*⟩ to $|\bar{X}\rangle$. Hence, it suffices to study the Laplacians *L*_{k} for which $k \le n/2$.

The implication of Caputo, Liggett, and Richthammer’s proof of Aldous’ spectral gap conjecture^{32} is that the spectral gap of every *L*_{k} for *k* = 1, …, *n* − 1 is identical. This renders the problem of finding the spectral gap of HHs trivial because *L*_{1} is effectively a size *n* matrix and its spectral gap can be efficiently solved numerically, for example, by using Spielman and Teng’s celebrated algorithm.^{33}

We denote the eigenvalues of *L*_{k} as *λ*_{j}(*L*_{k}), with *λ*_{1}(*L*_{k}) the spectral gap of *L*_{k} and $\lambda_{\max}(L_k)=\lambda_{\binom{n}{k}-1}(L_k)$ the largest eigenvalue of *L*_{k}. We order these eigenvalues so that $0=\lambda_0(L_k)\le\lambda_1(L_k)\le\cdots\le\lambda_{\binom{n}{k}-1}(L_k)$.

*Proof of Theorem II.1.* Denoting the swap operator that exchanges particles *i* and *j* as *π*_{i,j}, we have the identity $\pi_{i,j} = \frac{1}{2}(\mathbf{1} + \sigma_i^x\sigma_j^x + \sigma_i^y\sigma_j^y + \sigma_i^z\sigma_j^z)$. This allows us to rewrite the normalized HH of *G* = (*V*, *E*) in terms of swap operators so that $\hat{H}_1 = \sum_{\{i,j\}\in E}(\mathbf{1} - \pi_{i,j})$. Let *X* denote any subset of the vertices *V* = {1, …, *n*}. Then, for any distinct *i* and *j* from the set *V*, we have $\pi_{i,j}|X\rangle = |\pi_{i,j}(X)\rangle$, where $\pi_{i,j}(X)$ is the set obtained from *X* by exchanging the roles of *i* and *j*, and hence $\hat{H}_1|X\rangle = m|X\rangle - \sum_{\{i,j\}\in E}|\pi_{i,j}(X)\rangle$, where *m* denotes the number of edges in *E*. Hence, if *Y* is a subset of *V* that has a different size from *X*, then ⟨*Y*|*Ĥ*_{1}|*X*⟩ = 0. This immediately implies that *Ĥ*_{1} can be written as a sum of orthogonal matrices, each of them supported on the space spanned by the |*X*⟩ where *X* has constant size. Next, note that ⟨*X*|*Ĥ*_{1}|*X*⟩ = |*∂X*|, which implies that the diagonal entries of *L*_{k} are given by the sizes of the corresponding edge-boundaries of *k*-sets. Finally, note that if *Y* has the same size as *X*, then ⟨*Y*|*Ĥ*_{1}|*X*⟩ = 0 whenever *X*△*Y* ∉ *E* and ⟨*Y*|*Ĥ*_{1}|*X*⟩ = −1 whenever *X*△*Y* ∈ *E*. This proves the result. □

## III. EXACT SOLUTIONS FOR THE MEAN-FIELD MODEL

We begin with a combinatorial approach for producing the exact solution for a mean-field HM. Such a HM has *n* spins, and every pair of spins interacts with exactly the same exchange constant *J*. In this case, the normalized Hamiltonian is

From the perspective of SU(2) symmetry, this model is trivial. This is because we can write $\hat{H}_1 = -\vec{S}_{\mathrm{tot}}\cdot\vec{S}_{\mathrm{tot}} + \frac{n(n+2)}{4}\mathbf{1}$, where $\vec{S}_{\mathrm{tot}}=\sum_{i=1}^{n}\vec{S}_i$ and $\vec{S}_i=\vec{\sigma}_i/2$. The spectrum along with the degeneracies is directly given by the representations contained in the direct product of *n* spin-1/2 representations,

which can be easily solved using standard techniques. Moreover, the corresponding eigenvectors can in principle be calculated using textbook methods with Clebsch-Gordan coefficients. However, this computation can be fairly tedious. We show how the eigenvalues and eigenprojectors of *Ĥ*_{1} can be alternatively obtained from a combinatorial perspective.

Note that for *Ĥ*_{1}, the graph of interactions is precisely the complete graph on *n* vertices. The symmetric products of the complete graph are the Johnson graphs, for which the spectral problem has been exactly solved using association schemes.^{45,46} Using this connection, we can use prior knowledge of the Johnson schemes to conclude that *L*_{k} has exactly one eigenvalue equal to zero, and its other eigenvalues are *j*(*n* + 1 − *j*) with multiplicities $m_j=\binom{n}{j}-\binom{n}{j-1}$ for *j* = 1, …, *k* (Ref. 47, Sec. 12.3.2). Hence, the positive eigenvalues of *Ĥ*_{1} are

with multiplicities

where $j = 1, \ldots, n/2$.
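The multiplicities $m_j=\binom{n}{j}-\binom{n}{j-1}$ can be sanity-checked against the standard SU(2) dimension count: each multiplet of total spin *s* = *n*/2 − *j* has dimension *n* − 2*j* + 1, and together the multiplets must exhaust the 2^{n}-dimensional Hilbert space. A quick check (our own sketch of this standard identity):

```python
from math import comb

def multiplet_count(n, j):
    """Number of total-spin-(n/2 - j) multiplets in n coupled spin-1/2s."""
    return comb(n, j) - (comb(n, j - 1) if j > 0 else 0)

# Each spin-s multiplet has dimension 2s + 1 = n - 2j + 1, and together
# the multiplets must exhaust the 2^n-dimensional Hilbert space.
for n in range(1, 11):
    total = sum(multiplet_count(n, j) * (n - 2 * j + 1)
                for j in range(n // 2 + 1))
    assert total == 2 ** n
print("multiplicity check passed for n = 1, ..., 10")
```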

What is most remarkable about the connection between association schemes and the mean-field Heisenberg model is that we can assign a combinatorial interpretation to the matrices *L*_{k}. In particular, we can analytically decompose *L*_{k} as a linear combination of eigenprojectors, where each eigenprojector is in turn a linear combination of generalized adjacency matrices. We proceed to explain what these generalized adjacency matrices are. Now the adjacency matrix of *L*_{k} is

Namely, the matrix element of *A*_{k,1} labeled by |*X*⟩⟨*Y*| has a coefficient of 1 if *X* is adjacent to *Y* in *G*^{{k}}, and equal to zero otherwise. Since two vertices in a graph are adjacent if and only if they are a distance of one apart, we can define the generalized adjacency matrices by having

Here, the matrix element of *A*_{k,z} labeled by |*X*⟩⟨*Y*| has a coefficient of 1 if *X* is a distance of *z* from *Y* in *G*^{{k}}, and equal to zero otherwise. We call *A*_{k,z} the *z*th generalized adjacency matrix of the Johnson graph associated with *L*_{k} relating *k*-sets a distance of *z* apart. For completeness, let *A*_{k,0} denote a size $\binom{n}{k}$ identity matrix. Now let

denote a Hahn polynomial [Ref. 48, (18) and (20)]. Then, properties of the Johnson scheme given in Ref. 48 imply that for $k = 1, \ldots, n/2$, the Laplacians *L*_{k} have the spectral decomposition

where

are pairwise orthogonal projectors. To make the spectral decomposition of the normalized mean-field HH explicit, we present the following theorem.

*Let G =* (*V*, *E*) *be a complete graph. Then, a normalized HH on this graph Ĥ*_{1} *has the spectral decomposition*

*when n is odd and*

*when n is even.*

*Proof.* First, note that for any unitary, any index *a*, and any complex coefficients *a*_{u,j}, we have

When *n* is odd, we can write

where *U* is the unitary as defined in (2.3) and *L*_{j} is as given in (3.7). Substituting the decomposition of *L*_{j}, we get

which proves the result for odd *n*. When *n* is even, we have

Substituting the decomposition of *L*_{j} for even *n*, we get

Combining this with the decomposition of *L*_{n/2}, we get the result. □

## IV. UPPER BOUNDS FOR THE HEISENBERG SPECTRUM

### A. Simple two-sided bounds on the largest eigenvalue

We obtain bounds on the largest eigenvalue of ferromagnetic HHs with graphs having dimension *δ* with isoperimetric number *c*, and maximum vertex degree *β*. Note that obtaining bounds on the largest eigenvalue of the normalized HH *Ĥ*_{1} amounts to obtaining bounds on *λ*_{max}(*L*_{k}). Now the largest eigenvalue of the Laplacian of any graph is at least its maximum vertex degree (Ref. 49, p. 149, line 7) and at most twice its maximum vertex degree from Gersgorin’s circle theorem.^{50,51} The upper bound can also be slightly improved over Gersgorin’s circle theorem to be at most the sum of the largest and the second largest vertex degrees [Ref. 49, (6)]. Thus,

for 1 ≤ *k* ≤ *n*/2. Since *Ĥ*_{1} = *L*_{0} + ⋯ + *L*_{n}, we get

### B. Upper bounds from graph diameters

In this subsection, we outline an algorithmic approach for finding upper bounds on the smaller eigenvalues of the HH. This approach relies crucially on generalizations of the diameter of a graph. The diameter of a graph is the greatest distance between any pair of its vertices and intuitively measures the size of the graph. In the case when the graph has the geometry of a hypercube of dimension *d*, its diameter will be the distance between the vertices of the hypercube that are furthest apart. The generalization of the diameter that we will consider allows us to quantify, in the case of the hypercube, the length of its sides. In particular, the *d*-diameter of a *d*-dimensional hypercube will be precisely the length of its side. Intuitively, the *d*-diameter of a body is its width when it is interpreted to have *d* dimensions. The generalized diameters are important because they can give upper bounds on the eigenvalues of a graph Laplacian.^{34,52}

The generalized diameter of a graph quantifies its sparsity. It is then reasonable to expect that the larger the generalized diameter, the smaller the upper bound on the eigenvalues can be, since a sparse graph ought to have smaller eigenvalues than a highly connected graph. In the extreme case when a graph comprises disconnected vertices, its generalized distances are all infinite, and every eigenvalue is equal to zero. Thus, in this case, we would anticipate that the upper bound we get from the diameter is also equal to zero. This is indeed the case. When a graph has *c* distinct connected components, by selecting *c* vertices, one from each of these connected components, the corresponding generalized distance is infinite. This then implies that the upper bound on the eigenvalue *λ*_{c−1} of the corresponding graph Laplacian is zero. Since it is known that a graph with *c* distinct components has a graph Laplacian with exactly *c* zero eigenvalues (Ref. 44, Lemma 13.1.1), in this sense, the bound of (Ref. 34, Corollary 4.4) can be said to be tight.

To understand the generalized diameter of a graph, we need to review the concept of the distance amongst a subset of its vertices. Now, the distance between a pair of vertices *v*_{a} and *v*_{b} is just the length of the shortest path connecting them, which we denote as *d*(*v*_{a}, *v*_{b}). This can be computed using Algorithm 1.

D ← size n matrix of zeros |

for all v ∈ V do |

Perform BFS on v, obtaining a spanning tree T rooted at v. |

for all w ∈ V, w ≠ v do |

D(w, v) ← distance of vertex w to v in T |

end for |

end for |

return D |

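A runnable rendering of Algorithm 1 (our own Python sketch; the graph is an adjacency list built from an edge list, and the explicit BFS queue replaces the spanning-tree bookkeeping):

```python
from collections import deque

def all_pairs_distances(n, edges):
    """Algorithm 1: breadth-first search from every vertex of G."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    D = [[float("inf")] * n for _ in range(n)]
    for s in range(n):                  # one BFS per root vertex
        D[s][s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if D[s][w] == float("inf"):
                    D[s][w] = D[s][u] + 1
                    queue.append(w)
    return D

D = all_pairs_distances(4, [(0, 1), (1, 2), (2, 3)])  # the path 0-1-2-3
print(D[0][3], D[1][3])  # 3 2
```

Vertices in other connected components keep the distance ∞, consistent with the conventions used for generalized diameters above.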

The distance amongst a set of vertices *K* = {*v*_{1}, …, *v*_{k}} is then the minimum pairwise distance between distinct vertices *v*_{a} and *v*_{b} in *K*, which we denote as

The *j*-diameter of a graph *G* = (*V*, *E*) has been defined (Ref. 34, p. 25, last equation) as the maximum distance of subsets *K* with (*j* + 1) vertices, and we denote it as

Now define *d*_{j,k} to be the *j*-diameter of *G*^{{k}}. Whenever *d*_{j,k} ≥ 2, we can obtain upper bounds on the eigenvalues of *L*_{k} from graph-theoretic results of Ref. 34, Corollary 4.4,

Clearly, *d*_{j,k} decreases with increasing *j*, and thus our upper bounds on *λ*_{j}(*L*_{k}) increase with *j*, as one would expect. Now let us see how (4.5) can be tight. Let us consider a graph *G* with *c* connected components and consider *k* = 1 so that *G*^{{1}} = *G*. We claim that the (*c* − 1)-diameter of *G* is infinite. This is because we can pick a set of vertices, with one vertex from each connected component. Since none of these vertices are connected, their pairwise distance is always infinite. Using this value for the generalized diameter, the upper bound in (4.5) for *λ*_{c−1}(*L*_{1}) becomes zero. Since we know from (Ref. 44, Lemma 13.1.1) that *λ*_{c−1}(*L*(*G*)) = 0, the upper bound in (4.5) is tight.

Since the *j*-diameter of *G*^{{k}} may be unwieldy to calculate directly, we outline a polynomial time algorithm to obtain lower bounds on it. At the heart of our algorithm is the fact that the distances between vertices in *G*^{{k}} can be computed using only information about the distances between vertices in *G*. This makes it possible to estimate the *j*-diameter of *G*^{{k}} solely by computing on the graph *G*. Before diving into the specifics of our algorithm, we briefly outline its inner workings.

1. Pick any *j* + 1 distinct vertices *X*_{1}, …, *X*_{j+1} from *G*^{{k}}. Note that each of these vertices is a subset of *V* with *k* elements.
2. Loop over all *a*, *b* such that 1 ≤ *a* < *b* ≤ *j* + 1.
3. Compute *d*(*X*_{a}, *X*_{b}).
4. Exit loop.
5. A lower bound for *d*_{j}(*G*^{{k}}) is the minimum *d*(*X*_{a}, *X*_{b}).

This procedure can in principle be repeated for all possible choices of *X*_{1}, …, *X*_{j+1} to obtain the value of *d*_{j}(*G*^{{k}}) exactly. Since this may be computationally expensive, we propose just to randomly select the vertices *X*_{1}, …, *X*_{j+1} a constant number of times. Obviously, the complexity of such an algorithm depends on the complexity of Step 3 of this procedure, where *d*(*X*_{a}, *X*_{b}) is evaluated.

A direct attack on evaluating *d*(*X*_{a}, *X*_{b}) might seem to take time with complexity *O*(*k*!) and hence not be polynomial in *n*. This is because the distance between *X*_{a} = {*x*_{1}, …, *x*_{k}} and *X*_{b} = {*y*_{1}, …, *y*_{k}} with respect to *G*^{{k}} is the sum of the distances with respect to *G* between *x*_{j} and *y*_{π(j)}, minimized over all permutations *π* that permute *k* symbols. There are then *k*! possible permutations and *k* distances to sum for each permutation. This, however, is not the case since the problem of evaluating *d*(*X*_{a}, *X*_{b}) is actually equivalent to the minimum assignment problem, which can be solved in *O*(*k*^{3}) time using the celebrated Kuhn-Munkres algorithm (Ref. 42, p. 52), after one first computes all pairwise distances in *G*.
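The following sketch (our own code) makes the reduction explicit. For clarity it brute-forces the minimum over all *k*! matchings, which returns the same value that the Kuhn-Munkres algorithm would compute in *O*(*k*^{3}) time and is adequate for small *k*:

```python
from itertools import permutations

def token_distance(Xa, Xb, D):
    """d(Xa, Xb) in G^{k}: move the tokens of X = Xa \\ Z onto Y = Xb \\ Z
    (Z = Xa ∩ Xb), minimizing the total distance travelled. Brute force
    over all matchings; Kuhn-Munkres solves the same problem in O(k^3)."""
    Z = set(Xa) & set(Xb)
    X = sorted(set(Xa) - Z)
    Y = sorted(set(Xb) - Z)
    return min(sum(D[x][y] for x, y in zip(X, perm))
               for perm in permutations(Y))

# D is the distance matrix of the path 0-1-2-3.
D = [[abs(i - j) for j in range(4)] for i in range(4)]
print(token_distance({0, 1}, {2, 3}, D))  # 4
```

Only the vertices outside the intersection *Z* enter the matching, since common tokens need not move.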

We now explain how combinatorial optimization algorithms from graph theory can be used to compute lower bounds on *d*_{j,k} in polynomial time.

1. Algorithm 1 computes all pairwise distances in *G*. This is achieved using breadth-first search from every vertex. Since breadth-first search from any vertex produces a shortest-path tree in linear time (Ref. 42, Theorem 6.4), and there are *n* such vertices, Algorithm 1 runs in *O*(*n*^{2}) time.
2. Algorithm 3 evaluates distances between given vertices in *G*^{{k}}. It turns out that the evaluation of *d*(*X*_{a}, *X*_{b}) is equivalent to the well-known minimum assignment problem in the field of combinatorial optimization. First, evaluate *Z* = *X*_{a} ∩ *X*_{b} and set *X* = *X*_{a}∖*Z* and *Y* = *X*_{b}∖*Z*. Consider a complete bipartite graph where every vertex *x* ∈ *X* is connected to every vertex *y* ∈ *Y* by a weighted edge. The weight of the edge {*x*, *y*} in the bipartite graph is equal to the distance *d*(*x*, *y*) between *x* and *y* in *G*. The problem of computing *d*(*X*_{a}, *X*_{b}) is then equivalent to finding a perfect matching (a set of edges such that every vertex belongs to exactly one edge) on this bipartite graph such that the sum of the weights of the matched edges is minimized. But this is precisely the minimum assignment problem, which can be solved using the Kuhn-Munkres algorithm. We therefore just need to generate the cost matrix for the minimum assignment problem in this algorithm to utilize the Kuhn-Munkres algorithm.

We would be able to easily compute the generalized diameter of *G*^{{k}} exactly, if we only knew how to optimally select *j* + 1 of its vertices in *G*^{{k}}. Without such knowledge, we can use Algorithm 2 to randomly select *j* + 1 vertices in *G*^{{k}}.

X_{1} ← a random k-vertex subset of V |

c ← 1 |

while c ≤ j do |

Y ← a random k-vertex subset of V |

if Y ≠ X_{a} for all a = 1, …, c then |

X_{c+1} ← Y |

c ← c + 1 |

end if |

end while |

return (X_{1}, …, X_{j+1}) |


Z ← X ∩ Y |

a ← |X| − |Z| |

X = {x_{1}, …, x_{a}} ← X∖Z |

Y = {y_{1}, …, y_{a}} ← Y∖Z |

C ← zeros(a) ⊳ initialize a size a matrix |

for all u = 1, …, a do |

for all v = 1, …, a do |

C(u, v) = D(x_{u}, y_{v}) |

end for |

end for |

d ← output of Kuhn-Munkres algorithm on the cost matrix C |

return d |


We completely describe our algorithm to compute upper bounds on the eigenvalues of *L*_{k} in Algorithm 4.

Initialization | |

1 ≤ k ≤ n/2. | |

1 ≤ j < $\binom{n}{k}$. | |

β ← maximum vertex degree of G. | |

μ ← 2kβ | ⊳ upper bound on λ_{max}(L_{k})

D ← Dist(G) | ⊳ From Algorithm 1

end initialization | |

(X_{1}, …, X_{j+1}) ← SEL_{k}(j, V) | ⊳ From Algorithm 2

d ← ∞ | |

for all a, b = 1, …, j + 1: a < b do | |

d_{X_a,X_b} ← Dist(X_{a}, X_{b}, D) | ⊳ From Algorithm 3

if d_{X_a,X_b} < d then | |

d ← d_{X_a,X_b} | |

end if | |

end for | |

if d ≥ 2 then | |

u ← μ(1 − 2/(1 + $\binom{n}{k}^{1/d}$)) | |

else | |

u ← ∞ | |

end if | |

return u |

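Putting the pieces together, Algorithm 4 can be sketched as follows (our own Python rendering; the function name `upper_bound` is ours, a brute-force matching stands in for the Kuhn-Munkres step, so this sketch is only suitable for small *k*):

```python
import random
from collections import deque
from itertools import combinations, permutations
from math import comb, inf

def upper_bound(n, edges, k, j, trials=20):
    """Randomized upper bound on lambda_j(L_k), per Ref. 34, Corollary 4.4."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Algorithm 1: all pairwise distances in G via repeated BFS.
    D = [[inf] * n for _ in range(n)]
    for s in range(n):
        D[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if D[s][w] == inf:
                    D[s][w] = D[s][u] + 1
                    q.append(w)
    # Algorithm 3, with brute-force matching in place of Kuhn-Munkres.
    def dist(Xa, Xb):
        Z = Xa & Xb
        X, Y = sorted(Xa - Z), sorted(Xb - Z)
        return min(sum(D[x][y] for x, y in zip(X, p))
                   for p in permutations(Y))
    beta = max(len(a) for a in adj)
    mu = 2 * k * beta                   # upper bound on lambda_max(L_k)
    best = 0                            # best lower bound on d_{j,k} found
    for _ in range(trials):
        # Algorithm 2: sample j + 1 distinct k-subsets of V.
        chosen = set()
        while len(chosen) < j + 1:
            chosen.add(frozenset(random.sample(range(n), k)))
        d = min(dist(set(a), set(b)) for a, b in combinations(chosen, 2))
        if d >= 2:
            best = max(best, d)
    if best == 0:
        return inf                      # no usable diameter estimate found
    return mu * (1 - 2 / (1 + comb(n, k) ** (1 / best)))

random.seed(7)
edges = [(i, (i + 1) % 6) for i in range(6)]    # the 6-cycle
u = upper_bound(6, edges, k=2, j=1)
print(u)
```

Since every sampled minimum pairwise distance is a lower bound on *d*_{j,k}, keeping the largest value found over the trials yields the tightest (smallest) valid upper bound from (4.5).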

Since there are $\binom{j+1}{2}$ possible pairwise distances amongst *X*_{1}, …, *X*_{j+1} that we must consider, the time complexity of running Algorithm 4 is

This thereby leads to an algorithm that evaluates a lower bound for *d*_{j,k} in time polynomial in *n*, *j*, and *k*, and hence to our formal result, which we give in the following theorem.

*Let G =* (*V*, *E*) *be any graph with n vertices. Let* 2 ≤ *k* ≤ *n/*2 *and* $1 \le j \le \binom{n}{k}-1$*. Then, Algorithm* 4 *can compute an upper bound on λ*_{j}(*L*_{k}) *in O*(*n*^{2}) + *O*(*j*^{2}*k*^{3}) *time.*

Thus, for all *k* and *j* polynomial in *n*, upper bounds on the eigenvalues of the ferromagnetic HH can be computed in time polynomial in *n*. Such an algorithm would outperform a direct solver for Laplacians^{33} whenever *k* ≥ 3.

## V. LOWER BOUNDS FOR THE HEISENBERG SPECTRUM

A property of graphs that we focus on is their associated isoperimetric inequalities. These isoperimetric inequalities on graphs allow us to define the notion of the isoperimetric dimension of a graph. Now let *X* be a set of vertices and *∂X* be its boundary. In this case, the edge boundary of *X* is just the set of edges in *E* with exactly one vertex in *X* and one vertex in *V*∖*X*. Then, the edge-isoperimetric inequality on graphs^{53} is any lower bound of the form

that holds for every vertex subset *X* of size at most half the cardinality of *V*. The utility of these isoperimetric inequalities in the case of continuous manifolds lies in their applicability, for example, to give bounds on the principal frequency of a vibrating membrane.^{54} The rationale behind seeking edge-isoperimetric inequalities for the graphs *G* lies in the fact that such inequalities can yield spectral bounds on the eigenvalues of the normalized Laplacians of *G*,^{35} and hence also of the Laplacians. Since the Heisenberg Hamiltonian is just a direct sum of Laplacians of *G*^{{k}}, edge-isoperimetric inequalities on *G*^{{k}} can then yield bounds on the corresponding energy eigenvalues of the Heisenberg Hamiltonian.
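For small graphs, the left-hand side of such an inequality can be computed exhaustively, which is useful for testing candidate constants *c* and dimensions *δ*. A brute-force sketch (our own code; infeasible beyond small *n*, which is what motivates working with analytical inequalities instead):

```python
from itertools import combinations

def isoperimetric_profile(n, edges):
    """min |boundary(X)| over all k-subsets X, for k = 1..n//2 (exhaustive)."""
    prof = {}
    for k in range(1, n // 2 + 1):
        prof[k] = min(sum(1 for u, v in edges if (u in X) != (v in X))
                      for X in map(set, combinations(range(n), k)))
    return prof

# The 2x3 grid graph (vertices 0,1,2 on top and 3,4,5 below):
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
print(isoperimetric_profile(6, edges))  # {1: 2, 2: 2, 3: 3}
```

Any pair (*c*, *δ*) with *ck*^{1−1/δ} below this profile for all *k* ≤ *n*/2 is a valid edge-isoperimetric inequality for the graph.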

In this section, we prove several technical results relating to the edge-isoperimetric inequalities on the symmetric products of graphs. Roughly speaking, our results allow us to establish the isoperimetric properties of *G*^{{k}} in terms of the isoperimetric properties of certain subgraphs of the graph *G*. In particular, these subgraphs are vertex-induced subgraphs of *G*, where a number of vertices and their corresponding edges are deleted from *G*. Our technical result applies to graphs with a finite number of vertices. In Theorem V.6, we prove that if deleting any *k* − 1 vertices from a finite graph *G* yields a vertex-induced subgraph that has dimension *δ* with isoperimetric number *C*, then a lower bound on the size of the edge-boundary of a subset of vertices Ω in *G*^{{k}} is given in terms of the size of the edge-boundary of Ω in the Johnson graph that is the *k*th symmetric product of the complete graph.

The proof relies crucially on the fact that the size of an edge boundary of a set *X* can be written as a Sobolev seminorm of the indicator function of *X*. This implies that edge-isoperimetric inequalities can be written in terms of the Sobolev seminorm of an indicator function and an appropriate functional of that indicator function, as we shall see in Sec. V A. Also, we use Tillich’s observation of a one-to-one correspondence between edge-isoperimetric inequalities and inequalities relating the Sobolev seminorm of functions and an appropriate functional evaluated on those functions.^{55} Together, these insights allow us to obtain lower bounds on the size of the edge-boundary of the subsets of vertices in *G*^{{k}}.

### A. Sobolev inequalities on graphs

Recall that an edge-isoperimetric inequality for a graph *G* = (*V*, *E*) has the form

$$|\partial X| \;\ge\; C\,|X|^{1-1/\delta} \qquad \text{for all } X\subseteq V \text{ with } |X| = k,$$

where *k* = 1, …, ⌊|*V*|/2⌋. The point of this section is that the size of the edge-boundary |*∂X*| can be written in terms of a discrete Sobolev seminorm, and this allows us to obtain some interesting insights. Namely, given a graph *G* = (*V*, *E*) and a function $f:V\to\mathbb R$ on the vertex set, the discrete Sobolev seminorm of *f* corresponding to the edge set *E* is defined by

$$\|f\|_E \;=\; \sum_{\{u,v\}\in E} |f(u) - f(v)|.$$

Now consider the case where *f* = **1**_{X}, where **1**_{X}:*V* → {0, 1} is the indicator function of *X*, so that for all *X* ⊆ *V*, **1**_{X}(*x*) = 1 if *x* ∈ *X* and **1**_{X}(*x*) = 0 if *x* ∈ *V*∖*X*. Then, it is clear that

$$\|\mathbf{1}_X\|_E \;=\; |\partial X|.$$
We call any inequality which involves the Sobolev seminorm ∥·∥_{E}, such as the one above, a discrete Sobolev inequality.
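A quick sketch confirming that the Sobolev seminorm of an indicator function counts boundary edges, using illustrative names of our own:

```python
def sobolev_seminorm(edges, f):
    """Discrete Sobolev seminorm ||f||_E = sum over edges of |f(u) - f(v)|."""
    return sum(abs(f[u] - f[v]) for u, v in edges)

# On an indicator function the seminorm equals the size of the edge boundary.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
X = {0, 1}
indicator = {v: 1 if v in X else 0 for v in range(4)}
boundary_size = sum(1 for u, v in cycle if (u in X) != (v in X))
print(sobolev_seminorm(cycle, indicator) == boundary_size)  # True
```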

The analytic inequalities of Tillich (Ref. 55, Theorem 2) establish the equivalence between edge-isoperimetric inequalities and discrete Sobolev inequalities for functionals that map Φ_{V} to non-negative real numbers, where Φ_{V} denotes the set of all functions $f:V\to\mathbb R$. To state Tillich’s theorem succinctly, we introduce the following definition.

*Definition V.1.*

Given *C* > 0 and a functional $\rho:\Phi_V\to\mathbb R_+$, we say that *G* is (*C*, *ρ*)-isoperimetric if for every *X* ⊆ *V*, we have $\|\mathbf{1}_X\|_E \ge C\,\rho(\mathbf{1}_X)$.

Because Definition V.1 does not require that |*X*| ≤ |*V*|/2, it imposes an implicit constraint on the choice of feasible functionals *ρ* that can satisfy the discrete Sobolev inequality.

We state Tillich’s result on functionals that are also seminorms in the following theorem.

*Theorem V.2.*

(Ref. 55, Theorem 2). *Let G =* (*V*, *E*) *be a graph, C >* 0*, and ρ be a seminorm on* Φ_{V}*. Then, G is* (*C*, *ρ*)*-isoperimetric if and only if* ∥*f*∥_{E} ≥ *Cρ*(*f*) *for every function* $f:V\to\mathbb R$*.*

Imposing the additional constraint |*X*| ≤ |*V*|/2 would allow us to work with a larger family of seminorms *ρ*, but Theorem V.2 would need appropriate modification, which we do not address in this paper. Working without the constraint |*X*| ≤ |*V*|/2 allowed Tillich to derive edge-isoperimetric inequalities for graphs with a countably infinite number of vertices.

Consider the functionals *g*_{p} and *ρ*_{p} for *p* ≥ 1, where

$$g_p(f) = \Big(\frac{1}{|V|}\sum_{u\in V}\sum_{v\in V}|f(u)-f(v)|^p\Big)^{1/p}, \qquad \rho_p(f) = \Big(\sum_{v\in V}\big|f(v)-\bar f\big|^p\Big)^{1/p},$$

and $\bar f = \frac{1}{|V|}\sum_{v\in V} f(v)$ denotes the mean of *f*. It is then easy to show that

$$g_p(\mathbf 1_X) = \Big(\frac{2\,|X|\,|V\setminus X|}{|V|}\Big)^{1/p}, \qquad \rho_p(\mathbf 1_X)^p = |X|\Big(1-\frac{|X|}{|V|}\Big)^p + |V\setminus X|\Big(\frac{|X|}{|V|}\Big)^p.$$

When *g*_{p} and *ρ*_{p} are evaluated on **1**_{X}, they are invariant under the substitution of *X* with *V*∖*X*.

The discrete Sobolev inequality is closely related to the isoperimetric number and dimension of a graph as given in the following proposition, which is obvious from definitions.

*Proposition V.3.*

*Let G =* (*V*, *E*) *be a graph, C >* 0*, and δ >* 1*. Then, the following are true.*

1. *If V is finite and G is* (*C*, *g*_{δ/(δ−1)})*-isoperimetric, then G has dimension δ with isoperimetric number C.*
2. *If V is finite and G has dimension δ with isoperimetric number C, then G is* (2^{−δ/(δ−1)}*C*, *g*_{δ/(δ−1)})*-isoperimetric.*

Hence, we can address finite-sized graphs with the functionals *ρ*_{p} using the two-sided bounds on *ρ*_{p}(**1**_{X}) in terms of *g*_{p}(**1**_{X}) as given in the following lemma. Note that when *p* = 1, we get *ρ*_{1}(**1**_{X}) = *g*_{1}(**1**_{X}) for any vertex subset *X*.

*Lemma V.4.*

*Let G =* (*V*, *E*) *be a graph, X* ⊆ *V, and p* ≥ 1*. Then,*

$$g_p(\mathbf{1}_X)\Big(\frac{1}{2^{p-1}}\Big)^{1/p} \;\le\; \rho_p(\mathbf{1}_X) \;\le\; g_p(\mathbf{1}_X).$$

*Proof.*

Partitioning *V* into the disjoint subsets *X* and *V*∖*X* yields

$$\rho_p(\mathbf 1_X)^p = |X|\Big(1-\frac{|X|}{|V|}\Big)^p + |V\setminus X|\Big(\frac{|X|}{|V|}\Big)^p.$$

Since $(1-|X|/|V|)^p \le 1-|X|/|V|$ and $(|X|/|V|)^p \le |X|/|V|$ for *p* ≥ 1, we get *ρ*_{p}(**1**_{X}) ≤ *g*_{p}(**1**_{X}). Since both $|X|\big(1-\frac{|X|}{|V|}\big)^p$ and $|V\setminus X|\big(\frac{|X|}{|V|}\big)^p$ are at least $\frac{|X|\,|V\setminus X|}{|V|}\big(\frac12\big)^{p-1}$, we get $\rho_p(\mathbf 1_X)\ge g_p(\mathbf 1_X)\,(1/2^{p-1})^{1/p}$.

The two bounds in Lemma V.4 coincide when *p* = 1, because then we would have *ρ*_{1}(**1**_{X}) = *g*_{1}(**1**_{X}). The scenario *p* = 1 occurs for graphs with infinite dimension; expander graphs are examples of such graphs.
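As a numerical sanity check of Lemma V.4, the closed-form values of *g*_{p} and *ρ*_{p} on indicator functions can be compared directly; this sketch (with our own function names) sweeps small vertex-set sizes and a few integer exponents *p*:

```python
def g_p(x, n, p):
    # g_p evaluated on the indicator of a subset of size x in a vertex set of size n
    return (2 * x * (n - x) / n) ** (1 / p)

def rho_p(x, n, p):
    # rho_p(1_X)^p = |X| (1 - |X|/|V|)^p + |V \ X| (|X|/|V|)^p
    return (x * (1 - x / n) ** p + (n - x) * (x / n) ** p) ** (1 / p)

for n in (6, 11):
    for x in range(n + 1):
        for p in (1, 2, 3, 4):
            lower = g_p(x, n, p) * (1 / 2 ** (p - 1)) ** (1 / p)
            assert lower - 1e-9 <= rho_p(x, n, p) <= g_p(x, n, p) + 1e-9
print("two-sided bound holds for all sampled cases")
```

For *p* = 1 the two sides collapse to the same value, matching the remark above.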

Lemma V.4 implies the following for *C* > 0 and *δ* > 1.

1. If a graph is (*C*, *ρ*_{δ/(δ−1)})-isoperimetric, the graph also has dimension *δ* with isoperimetric number 2^{−δ}*C*.
2. If a graph has dimension *δ* with isoperimetric number *C*, the graph is also (2^{−δ/(δ−1)}*C*, *g*_{δ/(δ−1)})-isoperimetric.

Hence, in what follows, we may apply Theorem V.2 with *ρ* = *ρ*_{p} for *p* ≥ 1.

### B. The symmetric product of finite graphs

Now, we address the edge-isoperimetric problem on the graph *G*^{{k}} when *G* has a finite number of vertices, for a fixed positive integer *k* = 2, …, ⌊|*V*|/2⌋. Again, we rely on the edge-isoperimetric properties of the vertex-induced subgraphs of a graph *G*. A key ingredient of our proof is a bijection between sets, described by the following proposition.

*Proposition V.5.*

*Let V be a countable set and k be an integer such that k =* 1, …, *|V|. Then, the sets* $\mathcal A=\{(W;x): W\subseteq V,\ |W|=k-1,\ x\in V\setminus W\}$ *and* $\mathcal A'=\{(X;x): X\subseteq V,\ |X|=k,\ x\in X\}$ *have the same cardinality.*

*Proof.*

Let $f:\mathcal A\to\mathcal A'$ be defined by *f*(*W*; *x*) = (*W* ∪ {*x*}; *x*) for all *W* ⊆ *V* with |*W*| = *k* − 1 and *x* ∈ *V*∖*W*. The map *f* is invertible, with inverse (*X*; *x*) ↦ (*X*∖{*x*}; *x*), and is therefore a bijection from $\mathcal A$ to $\mathcal A'$. Hence, $\mathcal A$ and $\mathcal A'$ have the same cardinality.

We obtain here a lower bound on |*∂*Ω|, which is the size of the edge boundary of any vertex subset Ω in *G*^{{k}}. Our lower bound on |*∂*Ω| is provided in terms of |*∂*_{J}Ω|, which is the size of the edge boundary of Ω in the Johnson graph *J*(*n*, *k*).
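A small brute-force construction of the Johnson graph *J*(*n*, *k*) (our own illustrative code, not the paper's) confirms its vertex count and its *k*(*n* − *k*)-regularity:

```python
from itertools import combinations

def johnson_graph(n, k):
    """Johnson graph J(n, k): vertices are k-subsets, adjacent iff they share k-1 elements."""
    verts = [frozenset(c) for c in combinations(range(n), k)]
    edges = [(X, Y) for i, X in enumerate(verts)
             for Y in verts[i + 1:] if len(X & Y) == k - 1]
    return verts, edges

verts, edges = johnson_graph(5, 2)
print(len(verts))  # 10, i.e., binomial(5, 2)
degree = {v: 0 for v in verts}
for X, Y in edges:
    degree[X] += 1
    degree[Y] += 1
print(all(d == 2 * (5 - 2) for d in degree.values()))  # True: J(5, 2) is 6-regular
```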

*Theorem V.6.*

*Let G =* (*V*, *E*) *be a graph with n vertices, and let p* ≥ 1 *and C >* 0*. Suppose that every vertex-induced subgraph of G with n − k +* 1 *vertices is* (*C*, *ρ*_{p})*-isoperimetric. Then, for every* Ω ⊆ *V*^{{k}},

$$|\partial\Omega| \;\ge\; \frac{C}{n-k+1}\,\big(2\,|\partial_J\Omega|\big)^{1/p}.$$

The inequality in Theorem V.6 is tight when *p* = 1. To see this, let us consider a trivial scenario where *G* is the complete graph on *n* vertices and *k* = 1. For the complete graph, we can compute the edge boundary of any vertex subset *X* exactly: denoting *x* = |*X*| and *n* = |*V*|, we have |*∂X*| = *x*(*n* − *x*). Recall from (5.7) that $g_1(\mathbf 1_X)=\frac{2x(n-x)}{n}$. Then, the edge-isoperimetric inequality for the complete graph with respect to the seminorm *g*_{1} is equivalent to

$$x(n-x) \;\ge\; C\,\frac{2x(n-x)}{n}.$$

This inequality holds trivially when *x* = 0 or *x* = *n*, and for 1 ≤ *x* ≤ *n* − 1 it is equivalent to *C* ≤ *n*/2. We can therefore conclude that the optimal isoperimetric constant is $C=\frac n2$ for the complete graph, and that the corresponding inequality is saturated by every vertex subset. Substituting this example into Theorem V.6, since *G*^{{1}} = *G* and the only vertex-induced subgraph with *n* − *k* + 1 = *n* vertices is *G* itself, we get for the complete graph

$$|\partial\Omega| \;\ge\; \frac{n/2}{n}\,\big(2\,|\partial_J\Omega|\big) \;=\; |\partial_J\Omega|,$$

which holds with equality because *G*^{{1}} = *J*(*n*, 1).

*Proof of Theorem V.6.*

For every Ω ⊆ *V*^{{k}}, note that $|\partial\Omega| = \|\mathbf 1_\Omega\|_{E^{\{k\}}}$. Two *k*-sets *X* and *Y* are adjacent in the graph *G*^{{k}} if and only if the symmetric difference of *X* and *Y* is an edge in *E*. Hence,

$$|\partial\Omega| \;=\; \sum_{W\subseteq V,\,|W|=k-1} \big\|\mathbf 1_\Omega(W\cup\{\cdot\})\big\|_{E(G[V\setminus W])},$$

where each summand is the Sobolev seminorm, on the vertex-induced subgraph *G*[*V*∖*W*], of the function *x* ↦ **1**_{Ω}(*W* ∪ {*x*}). Applying the assumed discrete Sobolev inequality with *ρ*_{p} on each induced subgraph *G*[*V*∖*W*] for every (*k* − 1)-set *W* with respect to the function **1**_{Ω}(*W* ∪ {·}), we get

$$|\partial\Omega| \;\ge\; C \sum_{W\subseteq V,\,|W|=k-1} \rho_p\big(\mathbf 1_\Omega(W\cup\{\cdot\})\big). \qquad (5.14)$$

Since $\sum_i a_i \ge \big(\sum_i a_i^p\big)^{1/p}$ for non-negative *a*_{i} and all *p* ≥ 1, the inequality (5.14) becomes, after expanding each $\rho_p^{\,p}$ as a sum over *x* ∈ *V*∖*W* and applying the bijection of Proposition V.5,

$$|\partial\Omega| \;\ge\; C\Big(\sum_{X\subseteq V,\,|X|=k}\ \sum_{x\in X}\Big|\mathbf 1_\Omega(X) - \frac{x_{X\setminus\{x\}}}{m}\Big|^p\Big)^{1/p}, \qquad (5.16)$$

where *m* = *n* − *k* + 1 and, for each (*k* − 1)-set *W*, *x*_{W} denotes the number of *y* ∈ *V*∖*W* with *W* ∪ {*y*} ∈ Ω. Every *k*-set *X* appearing in the inequality (5.16) either belongs to Ω or not. Applying simple arithmetic on the right-hand side of (5.16) then yields

$$|\partial\Omega| \;\ge\; C\Big(\sum_{W\subseteq V,\,|W|=k-1}\Big[x_W\Big(1-\frac{x_W}{m}\Big)^p + (m-x_W)\Big(\frac{x_W}{m}\Big)^p\Big]\Big)^{1/p}. \qquad (5.17)$$

Since $x(1-x/m)^p + (m-x)(x/m)^p \ge \frac{2x(m-x)}{m^p}$ for integers 0 ≤ *x* ≤ *m*, the expression (5.17) becomes

$$|\partial\Omega| \;\ge\; \frac{C}{m}\Big(2\sum_{W\subseteq V,\,|W|=k-1} x_W(m-x_W)\Big)^{1/p}.$$

Two distinct *k*-sets are adjacent in the Johnson graph *J*(*n*, *k*) if and only if they share exactly *k* − 1 elements, so $\sum_W x_W(m-x_W) = |\partial_J\Omega|$, and the theorem follows.

The eigenvalues of the combinatorial Laplacian of the Johnson graph *J*(*n*, *k*) for *k* = 0, …, ⌊*n*/2⌋ are *j*(*n* + 1 − *j*) with multiplicities $\binom nj - \binom n{j-1}$, where *j* = 0, …, *k* (Ref. 47, Sec. 12.3.2). If *λ* is the second smallest eigenvalue of the combinatorial Laplacian of a graph, then that graph is $(\frac\lambda 2, g_1)$-isoperimetric (Ref. 44, Lemma 13.7.1). Since the second smallest eigenvalue of the combinatorial Laplacian of the Johnson graph *J*(*n*, *k*) is always *n*, we have $|\partial_J\Omega| \ge \frac n2\, g_1(\mathbf 1_\Omega)$ for every Ω ⊆ *V*^{{k}}. Hence,

$$|\partial\Omega| \;\ge\; \frac{C}{n-k+1}\Big(2\cdot\frac n2\, g_1(\mathbf 1_\Omega)\Big)^{1/p} \;=\; \frac{C\,n^{1/p}}{n-k+1}\, g_p(\mathbf 1_\Omega).$$
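The bound $|\partial_J\Omega| \ge \frac n2\, g_1(\mathbf 1_\Omega)$ invoked above can be verified exhaustively on a small Johnson graph; the following brute-force sketch (our own construction) enumerates every subset Ω of *J*(5, 2):

```python
from itertools import combinations

n, k = 5, 2
verts = [frozenset(c) for c in combinations(range(n), k)]
edges = [(i, j) for i in range(len(verts)) for j in range(i + 1, len(verts))
         if len(verts[i] & verts[j]) == k - 1]
N = len(verts)

# Check |boundary_J(Omega)| >= (n/2) g_1(1_Omega) over all 2^10 subsets Omega.
ok = True
for mask in range(2 ** N):
    inside = [bool(mask >> i & 1) for i in range(N)]
    boundary = sum(1 for i, j in edges if inside[i] != inside[j])
    size = sum(inside)
    g1 = 2 * size * (N - size) / N
    ok = ok and (boundary >= (n / 2) * g1 - 1e-9)
print(ok)  # True
```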

*Corollary V.7.*

*Let G =* (*V*, *E*) *be a graph with n vertices, and let p* ≥ 1 *and C* > 0*. Suppose that every vertex-induced subgraph of G with n − k +* 1 *vertices is* (*C*, *ρ*_{p})*-isoperimetric. Then, G*^{{k}} *is* $\big(\frac{C\,n^{1/p}}{n-k+1},\, g_p\big)$*-isoperimetric and* $\big(\frac{C\,n^{1/p}}{n-k+1},\, \rho_p\big)$*-isoperimetric.*

This corollary plays a central role in Subsection V C.

### C. Lower bounds from isoperimetric considerations

If one were to compute the eigenvalues of *L*_{k} directly, one may quickly run into computational difficulties. The size of the matrix *L*_{k} is $\binom nk$, which in general scales exponentially with *n*; evaluating the eigenvalues of *L*_{k} directly therefore requires both exponential memory and exponential runtime. In view of this problem, our methodology to obtain lower bounds on the eigenvalues of *L*_{k} comes in handy. The algorithms to compute lower bounds that we introduce from graph theory considerably outperform algorithms that directly compute the eigenvalues of *L*_{k}: instead of studying the symmetric products *G*^{{k}}, we restrict our attention to the vertex-induced subgraphs of *G*.

When one deletes vertices from a graph *G* along with the corresponding edges, one obtains a vertex-induced subgraph of *G*. We denote the set of all graphs obtained by deleting exactly *k* − 1 vertices from *G* as $\mathcal V(G,k)$. Clearly, there are $\binom{n}{k-1}$ graphs in the set $\mathcal V(G,k)$. From Corollary V.7, we know that if *C* is less than the isoperimetric number of every graph in $\mathcal V(G,k)$ with corresponding dimension *δ*, then the graph *G*^{{k}} has isoperimetric dimension *δ* with isoperimetric number at least

$$\frac{C\,n^{1-1/\delta}}{n-k+1}.$$
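A minimal sketch of enumerating the set $\mathcal V(G,k)$ of vertex-deleted subgraphs (illustrative names ours):

```python
from itertools import combinations

def vertex_deleted_subgraphs(n, edges, k):
    """All induced subgraphs of G obtained by deleting exactly k - 1 vertices."""
    subgraphs = []
    for W in combinations(range(n), k - 1):
        keep = set(range(n)) - set(W)
        subgraphs.append((keep, [e for e in edges if e[0] in keep and e[1] in keep]))
    return subgraphs

# Path 0-1-2-3 with k = 2: deleting one vertex gives binomial(4, 1) = 4 subgraphs.
path = [(0, 1), (1, 2), (2, 3)]
subs = vertex_deleted_subgraphs(4, path, 2)
print(len(subs))  # 4
```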

We now proceed to outline how lower bounds on the eigenvalues of *L*_{k} can be obtained from geometric considerations on the graphs *G*^{{k}}. To achieve this, we first illustrate how lower bounds on the spectrum of a graph Laplacian can depend only on the graph’s geometry. We begin by introducing some notation. Let *D*_{G} = *∑*_{v∈V}*d*_{v}|*v*⟩⟨*v*| denote the degree matrix of a graph *G* = (*V*, *E*). Let *A*_{G} denote the adjacency matrix of the graph, a matrix with entries equal to either 0 or 1, where ⟨*u*|*A*_{G}|*v*⟩ = 1 if and only if the vertex *u* is adjacent to *v*. Let *L*_{G} denote the Laplacian of the graph, which can be written as *L*_{G} = *D*_{G} − *A*_{G}. In this subsection, we have the following theorem, which is essentially a Chung-Yau type bound^{35} with Ostrovskii’s correction^{43} for unnormalized Laplacians.
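The matrices just defined can be assembled directly; this minimal sketch (our own illustrative code) builds *L*_{G} = *D*_{G} − *A*_{G} and checks two of its defining properties:

```python
def laplacian(n, edges):
    """Combinatorial Laplacian L_G = D_G - A_G as a dense matrix (list of lists)."""
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1   # degree contribution to D_G
        L[v][v] += 1
        L[u][v] -= 1   # adjacency contribution to -A_G
        L[v][u] -= 1
    return L

L = laplacian(3, [(0, 1), (1, 2)])  # path on 3 vertices
print(all(sum(row) == 0 for row in L))  # True: each row of a Laplacian sums to zero
print(L[1][1])  # 2, the degree of the middle vertex
```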

*Theorem V.8.*

*Let a graph G =* (*V*, *E*) *have dimension δ >* 2 *with isoperimetric number c. Let b and β be the minimum and maximum vertex degrees of G*, *respectively. Then,*

$$\lambda_j(L_G) \;\ge\; \frac{b\,c^2}{16\,\beta^{2-2/\delta}}\Big(\frac{\delta-2}{\delta-1}\Big)^2\Big(\frac{j}{2|E|}\Big)^{2/\delta}.$$

To prove this theorem, we define the normalized Laplacian of *G* as

$$\mathcal L_G \;=\; D_G^{-1/2}\, L_G\, D_G^{-1/2}.$$

*Lemma V.9.*

*If the graph has minimum and maximum vertex degrees given by b and β*, *respectively, then for every j,*

$$b\,\lambda_j(\mathcal L_G) \;\le\; \lambda_j(L_G) \;\le\; \beta\,\lambda_j(\mathcal L_G).$$

*Proof.*

Denoting the *i*th largest singular value of a matrix *A* of size *d*_{a} as *s*_{i}(*A*), with $s_1(A)\ge\cdots\ge s_{d_a}(A)$, we have from Ref. 56, Problem III.6.5 the inequalities

$$s_{i+j-1}(AB) \;\le\; s_i(A)\,s_j(B).$$

Writing $L_G = D_G^{1/2}\,\mathcal L_G\, D_G^{1/2}$ and noting that the largest singular values of *D*_{G} and $D_G^{-1}$ are *β* and *b*^{−1}, respectively, the inequalities (5.24) then give the result.

Lower bounds on the eigenvalues of the normalized Laplacian can be obtained from the graph’s Sobolev inequalities, as shown in the seminal work of Chung and Yau.^{35} Because of a gap in the proof in Ref. 35, pointed out by Ostrovskii [Ref. 43, after Eq. (8)], we take Ostrovskii’s correction into account when we prove the corresponding lower bounds on the graph’s Laplacian, which we state explicitly in Theorem V.8.

*Proof of Theorem V.8.*

For a graph *G* = (*V*, *E*), denote the volume of a subset of vertices *X* as vol(*X*) = *∑*_{v∈X}*d*_{v}. Also let vol(*G*) = *∑*_{v∈V}*d*_{v} denote the sum of all vertex degrees in the graph *G*. The isoperimetric inequality we focus on is

$$|\partial X| \;\ge\; c_\delta\,\mathrm{vol}(X)^{1-1/\delta},$$

where vol(*X*) ≤ vol(*V*∖*X*). Note here that vol(*X*) is in general different from the number of vertices in *X*. While |*X*| counts the number of vertices in *X*, the volume vol(*X*) counts the sum of all vertex degrees of vertices in *X*; we may also interpret vol(*X*) as the number of vertices in *X* multiplied by the average degree of the vertices in *X*. The Sobolev inequality on graphs has the form

$$\|f\|_E \;\ge\; A\,\min_{c\in\mathbb R}\Big(\sum_{v\in V}|f(v)-c|^{\frac{\delta}{\delta-1}}\,d_v\Big)^{1-1/\delta}, \qquad (5.26)$$

where *A* depends on *c*_{δ} and *δ*. Chung and Yau proved that when the above Sobolev inequality holds for a graph, the eigenvalues of the graph’s normalized Laplacian satisfy the lower bound

$$\lambda_j(\mathcal L_G) \;\ge\; A\Big(\frac{j}{\mathrm{vol}(G)}\Big)^{2/\delta}.$$

For *δ* > 2, the inequality (5.26) holds with $A=\frac{c_\delta^2}{16}\big(\frac{\delta-2}{\delta-1}\big)^2$ using Ostrovskii’s Sobolev inequality [Ref. 43, (8)]. Using this fact with Lemma V.9, we get

$$\lambda_j(L_G) \;\ge\; \frac{b\,c_\delta^2}{16}\Big(\frac{\delta-2}{\delta-1}\Big)^2\Big(\frac{j}{\mathrm{vol}(G)}\Big)^{2/\delta}.$$

It remains to relate *c*_{δ} to *c*. Let *β* be the maximum vertex degree of *G*. Since *G* has isoperimetric dimension *δ* and isoperimetric number *c*, its vertex subsets *X* satisfy the bound

$$|\partial X| \;\ge\; c\,|X|^{1-1/\delta} \;\ge\; \frac{c}{\beta^{1-1/\delta}}\,\mathrm{vol}(X)^{1-1/\delta},$$

so we may take *c*_{δ} = *c*/*β*^{1−1/δ}. The hand-shaking lemma also implies that vol(*G*) = 2|*E*|, and we get the result.

We can now bound the eigenvalues of *L*_{k} using *b*_{k} and *β*_{k}, which are the minimum and maximum vertex degrees of *G*^{{k}}, respectively. Note that *β*_{1} denotes the maximum number of interacting neighbors that each spin experiences in the Heisenberg ferromagnet. To bound *b*_{k} and *β*_{k}, note that every vertex in *G*^{{k}} is a set of vertices in *G* with *k* elements. Therefore, the vertex degree of {*x*_{1}, …, *x*_{k}} in *G*^{{k}} is just the size of the edge-boundary of {*x*_{1}, …, *x*_{k}} in *G*. Thus, *b*_{k} ≥ *ck*^{1−1/δ} whenever *G* has dimension *δ* with isoperimetric number *c*. Also, when *β* is the maximum vertex degree of *G*, we trivially have *β*_{k} ≤ *kβ*_{1}. Hence, Corollary V.7 implies that $c_k \ge \frac{a_k\,n^{1-1/\delta_k}}{n-k+1}$, where every vertex-induced subgraph of *G* with *k* − 1 deleted vertices has dimension *δ*_{k} with isoperimetric number *a*_{k}. The number of edges in *G*^{{k}} is at most $\beta_k\binom nk/2$, where *n* is the number of spins. Then, if *δ*_{k} > 2 for *k* = 1, …, *n*/2, Theorem V.8 implies that

$$\lambda_j(L_k) \;\ge\; \frac{b_k\,c_k^2}{16\,\beta_k^{\,2-2/\delta_k}}\Big(\frac{\delta_k-2}{\delta_k-1}\Big)^2\Big(\frac{j}{\beta_k\binom nk}\Big)^{2/\delta_k}.$$

To compute *a*_{k}, it suffices to numerically compute the isoperimetric numbers of the graphs $K\in\mathcal V(G,k)$ with vertex set *V*(*K*) and edge set *E*(*K*). To find the isoperimetric number of *K*, we need to solve its corresponding edge-isoperimetric problem (EIP) on *K*, which involves finding min{|*∂X*| : *X* ⊆ *V*(*K*), |*X*| = *j*} for every *j* ≤ |*V*(*K*)|/2. While solving the EIP exactly is NP-hard,^{57,58} we conjecture that there can be approximation algorithms that approximately solve the EIP in polynomial time.

*Conjecture V.10.*

*Let G =* (*V*, *E*) *be a graph. For every k =* 1, …, *|V|/*2*, let e*_{k} *=* min{*|∂X|* : *X* ⊆ *V*, *|X| = k*}*. Then, for every ε >* 0 *and for every k =* 1, …, *|V|/*2*, there exists a polynomial time approximation algorithm that computes* $e_k'$ *such that* (1 − *ε*)*e*_{k} ≤ $e_k'$ ≤ *e*_{k}*.*

A reason why Conjecture V.10 might be true is that for a multitude of different NP-hard problems, there do exist approximation algorithms with efficient runtimes.^{59} If our Conjecture V.10 holds, then lower bounds on the eigenvalues can be evaluated in *O*(poly(*n*)*n*^{k−1}) time with *O*(*n*) memory. In contrast, computing the eigenvalues of *L*_{k} directly in practice requires *O*(*n*^{3k}) time and *O*(*n*^{2k}) memory. Even using the best asymptotic algorithm for matrix multiplication would require at least *O*(*n*^{2k}) time^{60} and *O*(*n*^{2k}) memory.
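The exponential-time baseline that Conjecture V.10 seeks to approximate can be sketched as follows (function and variable names are ours):

```python
from itertools import combinations

def eip_exact(n, edges, k):
    """Exact EIP value e_k = min{|boundary(X)| : |X| = k}, by exhaustive search.

    This exponential-time baseline is what a conjectured polynomial-time
    approximation algorithm would need to match within a factor of 1 - eps.
    """
    return min(sum(1 for u, v in edges if (u in X) != (v in X))
               for X in map(set, combinations(range(n), k)))

cycle6 = [(i, (i + 1) % 6) for i in range(6)]
print(eip_exact(6, cycle6, 1))  # 2
print(eip_exact(6, cycle6, 3))  # 2: three consecutive vertices have boundary 2
```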

## VI. DISCUSSIONS

In this paper, we have obtained several bounds on the spectrum of ferromagnetic HHs, relying on tools from graph theory and matrix analysis. With these bounds on the eigenvalues of the Heisenberg ferromagnet, one can also compute bounds on thermodynamic quantities of the corresponding Heisenberg models, such as the free energy.

With regard to the upper bounds based on graph distances, there remains potential to further tighten our bounds by optimizing over the partitions used in Eq. (4.22) of Corollary 4.4 in Ref. 34. This is, however, beyond the scope of the current paper, and we leave it for future investigation. With regard to the lower bounds based on isoperimetric inequalities, we wish to point out that the edge-isoperimetric problem for the Johnson graph, also known as the problem of Kleitman and West,^{61} remains unsolved. Better edge-isoperimetric inequalities for the Johnson graph will therefore improve the edge-isoperimetric inequalities of the symmetric product of finite graphs given in Corollary V.7. Advances in the theory of the graph expansion properties of vertex-induced subgraphs will also improve the bounds given in this corollary. Directly deriving lower bounds on the combinatorial Laplacian of a graph from discrete Sobolev inequalities could further improve the constants involved in the bound. Moreover, a polynomial-time approximation algorithm for solving the edge-isoperimetric problem for graphs (Conjecture V.10) would, together with the methods already in this paper, yield a polynomial-time algorithm for computing lower bounds on the eigenvalues of the ferromagnetic HH.

To recap, in the spin-half case, the computational basis of the ferromagnetic HM can be represented by binary strings. Each binary string is represented as a vertex, and interactions are represented as edges between the vertices. In the spin-half case, each exchange interaction is equivalent to a swap operator and acts as a transposition on the binary strings. The relationships between different binary strings under the transpositions that correspond to the interactions are thus represented as a graph. Because transpositions leave the Hamming weight of these binary strings invariant, the HH naturally decomposes into a direct sum of graph Laplacians labeled by all the possible Hamming weights from 0 to *n*.
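The construction just recapped can be made concrete: the following sketch (our own illustrative code) builds the fixed-Hamming-weight graph *G*^{{k}} from *G*, with *k*-sets adjacent exactly when their symmetric difference is an edge of *G*:

```python
from itertools import combinations

def symmetric_product(n, edges, k):
    """G^{k}: vertices are k-subsets of {0..n-1}; X ~ Y iff X symmetric-difference Y is an edge of G."""
    E = {frozenset(e) for e in edges}
    verts = [frozenset(c) for c in combinations(range(n), k)]
    prod_edges = [(X, Y) for i, X in enumerate(verts)
                  for Y in verts[i + 1:] if (X ^ Y) in E]
    return verts, prod_edges

# Path 0-1-2 at Hamming weight k = 2: {0,1} ~ {0,2} via edge {1,2}, and {0,2} ~ {1,2} via {0,1}.
verts, pe = symmetric_product(3, [(0, 1), (1, 2)], 2)
print(len(verts), len(pe))  # 3 2
```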

One might wonder how the results here could generalize to the spin-*S* case. We briefly sketch how one might proceed. The computational basis of the ferromagnetic HM can be represented by (2*S* + 1)-nary strings. We can represent these strings as vertices on a graph, and interactions as relationships between the vertices. In this representation, the spin-*S* exchange operator maps a (2*S* + 1)-nary string to a linear combination of (2*S* + 1)-nary strings. Since one can show that the coefficients of this linear combination are non-negative, if all non-zero exchange constants are the same, the coefficients can be rescaled to allow us to interpret them as probabilities of transitions from one vertex to another. Since the spin-*S* exchange operator conserves total spin, the (2*S* + 1)-nary strings naturally partition into disjoint subsets, where strings in different partitions do not interact, and strings in the same partition have interactions that can be represented as a Markov model. Markov models describe stochastic transitions between a set of discrete states and are well studied. We therefore expect the spectrum of the HH to be related to the spectra of the associated Markov models, and that connections between the theory of Markov models and spin-*S* HMs can bring similar insights to bounding the spectrum of spin-*S* HMs.

## ACKNOWLEDGMENTS

Y.O. thanks Robert Seiringer and anonymous referees for their comments and recommendations that have helped to improve this manuscript. Y.O. acknowledges support from the Singapore National Research Foundation under NRF Award No. NRF-NRFF2013-01, the U.S. Air Force Office of Scientific Research under AOARD Grant No. FA2386-18-1-4003, and the Singapore Ministry of Education. This work was supported by the EPSRC (Grant No. EP/M024261/1).