We give a polynomial-time algorithm for computing upper bounds on some of the smaller energy eigenvalues of a spin-1/2 ferromagnetic Heisenberg model with an arbitrary graph G of underlying interactions. An important ingredient is the connection between Heisenberg models and the symmetric products of G. Our algorithms for computing upper bounds are based on generalized diameters of graphs. Computing the upper bounds amounts to solving the minimum assignment problem on G, which has well-known polynomial-time algorithms from the field of combinatorial optimization. We also study the possibility of computing lower bounds on some of the smaller energy eigenvalues of Heisenberg models. This amounts to estimating the isoperimetric inequalities of the symmetric product of graphs. By using connections with discrete Sobolev inequalities, we show that this can be performed by considering just the vertex-induced subgraphs of G. If our conjecture that the edge-isoperimetric problem admits a polynomial-time approximation algorithm holds, then our proposed method of estimating the energy eigenvalues via approximating the edge-isoperimetric properties of vertex-induced subgraphs will yield a polynomial-time algorithm for estimating the smaller energy eigenvalues of the Heisenberg ferromagnet.

The Heisenberg model (HM) is a quantum theory of magnetism1 and is prevalent in many naturally occurring physical systems, such as in various cuprates,2,3 in solid Helium-3,4 and more generally in systems with interacting electrons.5 The HM can also be engineered in ultracold atomic gases6 and quantum dots.7 Given the abundance of the HM, it may be advantageous to obtain a detailed understanding of its spectral structure. Such an understanding would, for example, help us to analyze the feasibility of storing quantum information in HMs via encoding into permutation-invariant quantum error correction codes.8–13 Moreover, given the widespread applicability of magnetic materials in classical information processing,14,15 quantum magnets based on the HM could similarly enable quantum technologies. In addition, the HM can also be used for quantum computation16 and quantum simulation. What is most interesting is the relevance of the HM in mathematical physics because it is a paradigmatic model of statistical mechanics. For example, the celebrated Mermin-Wagner theorem17 was proven for the HM.

The central object in this paper is the Heisenberg Hamiltonian (HH). It is the mathematical embodiment of the HM’s energy level structure, and contains all information necessary to derive every property of the HM. More precisely, the HH for spin-half particles in the absence of an external magnetic field is a matrix given by

\hat{H} = -\sum_{\{i,j\}} J_{\{i,j\}}\,\frac{\sigma_i^x\sigma_j^x + \sigma_i^y\sigma_j^y + \sigma_i^z\sigma_j^z - \mathbb{1}}{2},
(1.1)

where \mathbb{1} is the identity matrix, σ_i^x, σ_i^y, and σ_i^z are the usual Pauli matrices acting on the ith particle, the sets {i, j} are included in the sum whenever particles i and j interact, and J_{\{i,j\}} is an exchange constant which quantifies the strength and nature of the coupling between the particles. Here, we restrict our attention to ferromagnetic HHs, where every exchange constant is non-negative. We write the Hamiltonian in this way because we want the smallest eigenvalue of Ĥ to be zero. It is a well-known fact that a ferromagnetic HH can be written as a sum of graph Laplacians. For completeness, we give its proof later in Theorem II.1. Studying the spectrum of the HH is thus equivalent to studying the spectrum of these Laplacians. The field of spectral graph theory is devoted to determining the eigenvalues of graph Laplacians, and an extensive amount of work has been done on this topic. One can, for example, refer to Chung's book for a review of the most important results in spectral graph theory.18

Traditionally, most studies on the HM rely on the Bethe ansatz.19 In such approaches, the structure of the eigenvectors is assumed and later verified to hold by solving for some of the previously undetermined parameters. This approach has proved hugely successful for 1D Heisenberg models.2,20–26 Recently, lower bounds have been proved on the average free energy of the HM on three-dimensional lattices27 and also on lattices of any dimension.28 However, bounds on the spectrum of the Heisenberg ferromagnet have yet to be directly addressed. Moreover, while certain other 2D HMs have been studied,3,29–31 the question of how to address HMs of potentially arbitrary geometry remains unresolved.

In recent years, there has been impressive progress toward determining the spectrum of the HH. The seminal result of Caputo, Liggett, and Richthammer proves Aldous' spectral gap conjecture,32 which implies that the spectral gap of the HH is equal to the spectral gap of the Laplacian representing the graph of interactions of the HM. Since the size of this Laplacian is just the number of the HM's spins, the spectral gap of the HH can be computed numerically in polynomial time.33 One of the most important developments thereafter was made by Correggi, Giuliani, and Seiringer,27,28 who develop important Sobolev inequalities for discrete graphs, which are also applicable to the HM. Based on this, they find the right inequalities to obtain lower bounds on the free energy of the HM at finite temperatures. However, the problem of obtaining bounds for the higher eigenvalues of the HH has been largely unaddressed.

In this paper, we utilize relatively recent developments in spectral graph theory to obtain new bounds for HH’s spectrum. With regards to the upper bounds, we rely on analytical bounds on the eigenvalues of a graph based on its generalized diameters by Chung, Grigor’yan, and Yau.34 For the lower bounds, we use Chung and Yau’s Sobolev inequalities on graphs.35 There are two innovations provided in this paper. First, we identify a probabilistic polynomial-time algorithm to obtain upper bounds on the HH’s eigenvalues by reducing the computation of a generalized diameter to that of a minimum assignment problem. Second, we provide new discrete Sobolev inequalities that are based on deleting vertices from graphs. These inequalities can be used with Chung and Yau’s Sobolev inequalities to obtain lower bounds on the eigenvalues of the HH. To the best of our knowledge, this is the first time graph-theoretic methods are directly used to obtain bounds on the eigenvalues of the HH.

We begin our paper by explaining how the HH is connected to the symmetric power of graphs in Sec. II. In a preprint by Rudolph, the connection between graphs and the HH was noted, and the terminology of symmetric power of graphs was coined.36 Such graphs, later also known as token graphs,37 have been extensively studied in recent years for their graph theoretic properties in Refs. 38–41 among many others. Once we establish the connection of the HH with symmetric powers of graphs, we turn our attention to the elementary problem of determining the spectrum of the mean-field Heisenberg ferromagnet, where every pair of spins interacts with the same exchange constant. Obviously, the SU(2) symmetry of such a model immediately allows one to determine the HH eigenvalues and multiplicities, and the eigenprojectors can be in principle calculated using textbook methods with Clebsch-Gordan coefficients. However, we wish to highlight that by using well-known facts about association schemes, we can already directly identify the eigenprojectors of this HH in terms of Hahn polynomials and generalized adjacency matrices (see Theorem III.1).

Generalized diameters of graphs play a central role in deriving upper bounds on the spectrum of HHs, as we shall see in Sec. IV B. These generalized diameters can be thought of as the widths of a body when it is interpreted to have a given dimension. The most important feature of our algorithms is that they run much more efficiently than algorithms that attempt to directly evaluate the eigenvalues of the HH. We show that computing these generalized diameters is equivalent to the minimum assignment problem, which is solved efficiently using the Kuhn-Munkres algorithm (Ref. 42, p. 52). Together with analytical bounds on the eigenvalues of a graph based on its generalized diameters by Chung, Grigor’yan, and Yau,34 we thereby obtain a polynomial-time algorithm for evaluating upper bounds on the eigenvalues of the ferromagnetic HH, which gives us our result in Theorem IV.5.

Isoperimetric inequalities play a central role in deriving lower bounds on the spectrum of HHs in this paper. An isoperimetric inequality essentially gives a lower bound on the minimum boundary size of a body with a fixed volume in a given manifold. Specializing this to graphs, we require a lower bound on the minimum cut-size of a subset of k vertices, for every possible choice of k. Such bounds are then called edge-isoperimetric inequalities, which we introduce in Sec. V. Based on edge-isoperimetric inequalities of the symmetric products of graphs, we present lower bounds on the eigenvalues of the ferromagnetic HH (see Theorem V.8). Because deriving edge-isoperimetric inequalities on the symmetric product of graphs is potentially difficult, we also derive isoperimetric inequalities on the symmetric product of graphs based on the isoperimetric inequalities on their vertex-induced subgraphs (see Theorem V.6). We introduce some Sobolev inequalities in Sec. V A, and proceed to use our results on isoperimetric inequalities on the symmetric product of graphs to obtain lower bounds on all of the HH eigenvalues based on isoperimetric properties of the associated graphs. For this, we use the Sobolev inequalities of Chung and Yau35 and of Ostrovskii43 on graphs.

Finally in Sec. VI, we discuss some potential implications of our bounds and algorithms. We then remark on the potential to improve both the upper and lower bounds that we present, by further investigation using a combinatorial approach. We also point out how an advance in the field of approximation algorithms could help to make computing lower bounds for the spectrum of the ferromagnetic HH much more efficient.

Since we investigate the spectrum of HHs with graphs of varying dimensions, we need to explain what these graphs and their dimensions are. Here, a graph corresponding to a HH comprises vertices from 1 to n which label the particles and edges {u, v} which label the interaction between particles u and v. A graph's dimension generalizes the dimension of continuous manifolds. The edge-boundary of any set of vertices X, denoted by ∂X, is the set of edges in G with exactly one vertex in X. Suppose that every set X with k vertices in G satisfies the bound |∂X| ≥ ck^{1−1/δ} for some positive constant c, for every k ≤ n/2. Then, we say that G has a dimension of δ with isoperimetric number c. This is analogous to the situation where a manifold with fixed volume k and a surface area of at least ck^{1−1/δ} for some positive constant c has a dimension of δ. The dimension of a physical system is then the dimension of the corresponding graph of interactions.

To understand how precisely the HH is related to graphs, we need to define the symmetric product of a graph. When k is a non-negative integer with k ≤ n, the kth symmetric product of a graph G with vertices V and edges E, denoted by G{k}, is a graph with the following properties. First, G{k} has as its vertices all possible subsets of V of size k. Second, the edges of G{k} are the sets {X, Y}, where (i) X and Y are subsets of V with k vertices, (ii) X and Y have k − 1 common elements, and (iii) their symmetric difference, the union of the sets without their intersection, is an edge in E. In short, {X, Y} is an edge in G{k} if and only if the symmetric difference of X and Y is an edge in E, i.e., X △ Y ∈ E. Examples of the symmetric product of graphs can be seen in Figs. 1 and 2.
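As an illustration of this definition (our own sketch, not part of the original presentation), the following Python fragment builds the vertex and edge sets of G{k} directly from those of G using the symmetric-difference criterion; the function name symmetric_product is our own choice.

from itertools import combinations

def symmetric_product(vertices, edges, k):
    # Vertices of G{k}: all k-element subsets of V.
    Vk = [frozenset(c) for c in combinations(vertices, k)]
    edge_set = {frozenset(e) for e in edges}
    Ek = []
    # Two k-sets are adjacent if and only if their symmetric difference is an edge of G.
    for X, Y in combinations(Vk, 2):
        if X ^ Y in edge_set:
            Ek.append((X, Y))
    return Vk, Ek

# Example: the 4-cycle 1-2-3-4-1 and its symmetric square.
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]
V2, E2 = symmetric_product(V, E, 2)
print(len(V2), len(E2))  # 6 vertices, 8 edges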

FIG. 1.

On the left is a graph G with six vertices and on the right is its symmetric square G{2}. The symmetric cube G{3} is depicted in Fig. 2.

FIG. 2.

G{3}, the symmetric cube of the graph G depicted in Fig. 1, is shown here.


Now, we proceed to define the Laplacians of G{k}. By denoting |X⟩ as a state with the spins labeled by X in the up state and the remaining spins in the down state where X is a subset of vertices in G, the Laplacians of G{k} are

L_k = \sum_{\substack{X\subseteq V \\ |X|=k}} |\partial X|\,|X\rangle\langle X| \;-\; \sum_{X\triangle Y\in E} \big(|X\rangle\langle Y| + |Y\rangle\langle X|\big).
(2.1)

Here, each Lk is the Laplacian of the graph G{k} and is a matrix of size \binom{n}{k}. If we interpret G{k} as a discrete manifold, the eigenvectors and eigenvalues of Lk are its normal modes and associated resonance frequencies.
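For concreteness, here is a minimal sketch of our own (not the authors' code) that assembles Lk as a dense matrix directly from Eq. (2.1): the diagonal entry at a k-set X is |∂X|, accumulated as the degree of X in G{k}, and off-diagonal entries are −1 for adjacent k-sets.

import numpy as np
from itertools import combinations

def laplacian_Lk(n, edges, k):
    # Vertices of G{k} are the k-subsets of {1, ..., n}.
    Vk = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    index = {X: i for i, X in enumerate(Vk)}
    edge_set = {frozenset(e) for e in edges}
    L = np.zeros((len(Vk), len(Vk)))
    for X, Y in combinations(Vk, 2):
        if X ^ Y in edge_set:          # adjacent iff the symmetric difference is an edge of G
            i, j = index[X], index[Y]
            L[i, j] = L[j, i] = -1.0
            L[i, i] += 1.0             # the diagonal accumulates |boundary of X|
            L[j, j] += 1.0
    return L

# Example: the path 1-2-3 with k = 2; every row of a Laplacian sums to zero.
L2 = laplacian_Lk(3, [(1, 2), (2, 3)], 2)
print(np.allclose(L2.sum(axis=1), 0))   # True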

If we normalize the HH so that every non-zero exchange constant is equal to 1, we get the normalized Hamiltonian

\hat{H}_1 = -\sum_{\{i,j\}\in E} \frac{\sigma_i^x\sigma_j^x + \sigma_i^y\sigma_j^y + \sigma_i^z\sigma_j^z - \mathbb{1}}{2}.
(2.2)

This normalized Hamiltonian Ĥ1 is just a sum of pairwise orthogonal matrices Lk (Ref. 38, Appendix A), as we can see from the following theorem.

Theorem II.1.

Let G = (V, E) be a graph with n vertices. Then, Ĥ1 = L0 + ⋯ + Ln, where the Lk are as given in Eq. (2.1) and Ĥ1 is as given in Eq. (2.2).

This decomposition of the ferromagnetic HH with graph G as a sum of pairwise orthogonal matrices, with each matrix associated with the symmetric products of G, has already been known for years (Ref. 38, Appendix A).

The decomposition of the normalized Hamiltonian as given in Theorem II.1 holds because of its fundamental connections with Laplacians in graph theory (Ref. 44, Chap. 13). Using a graph-theoretic perspective, some trivial properties of this normalized Hamiltonian can be easily seen. For example, when the graph G is connected, each Lk has exactly one eigenvalue equal to zero with corresponding eigenvector \binom{n}{k}^{-1/2}\sum_{X\subseteq V, |X|=k}|X\rangle (Ref. 44, Lemma 13.1.1). Hence, the ground state energy of Ĥ1 is zero with degeneracy n + 1, and the ground space is spanned by the Dicke states |D_k^n\rangle,10 where |D_k^n\rangle is a normalized superposition of all |X⟩ for which X is a subset of {1, …, n} of size k. Moreover, for any graph, the Laplacians Lk and L_{n−k} are unitarily equivalent because of the equivalence of G{k} and G{n−k} under set complementation. To see this, denote \bar{X} as the set complement of X ⊆ V, and note that L_{n-k} = U_k L_k U_k^\dagger, where
U_k = \sum_{X\subseteq V : |X|=k} |\bar{X}\rangle\langle X|.
(2.3)
Hence, it suffices to only study the Laplacians Lk for which k ≤ n/2.

The implication of Caputo, Liggett, and Richthammer’s proof of Aldous’ spectral gap conjecture32 is that the spectral gap of every Lk for k = 1, …, n − 1 is identical. This renders the problem of finding the spectral gap of HHs trivial because L1 is effectively a size n matrix and its spectral gap can be efficiently solved numerically, for example, by using Spielman and Teng’s celebrated algorithm.33 

In this paper, we will focus on obtaining bounds on the eigenvalues of every Lk, which we denote as
\lambda_0(L_k), \lambda_1(L_k), \ldots, \lambda_{\binom{n}{k}-1}(L_k).
We call λ1(Lk) the spectral gap of Lk and \lambda_{\max}(L_k) = \lambda_{\binom{n}{k}-1}(L_k) the largest eigenvalue of Lk. We order these eigenvalues so that
0 = \lambda_0(L_k) \le \lambda_1(L_k) \le \cdots \le \lambda_{\binom{n}{k}-1}(L_k).
(2.4)
Now, we proceed to give the proof of Theorem II.1.

Proof of Theorem II.1.
The first step is to notice that the swap operator of two qubits can be written as
(|0\rangle\otimes|0\rangle)(\langle 0|\otimes\langle 0|) + (|0\rangle\otimes|1\rangle)(\langle 1|\otimes\langle 0|) + (|1\rangle\otimes|0\rangle)(\langle 0|\otimes\langle 1|) + (|1\rangle\otimes|1\rangle)(\langle 1|\otimes\langle 1|)
(2.5)
and is identical to the sum \frac{\sigma_1^x\sigma_2^x + \sigma_1^y\sigma_2^y + \sigma_1^z\sigma_2^z + \mathbb{1}}{2}. Then, denoting the operator that swaps qubits i and j as πi,j, we have the identity
\pi_{i,j} - \mathbb{1} = \frac{\sigma_i^x\sigma_j^x + \sigma_i^y\sigma_j^y + \sigma_i^z\sigma_j^z - \mathbb{1}}{2}.
(2.6)
This allows us to rewrite the normalized HH with a graph G = (V, E) in terms of swap operators so that
\hat{H}_1 = \sum_{\{i,j\}\in E} \big(\mathbb{1} - \pi_{i,j}\big).
(2.7)
Next, we let X denote any subset of vertices V = {1, …, n}. Then, for any distinct i and j from the set V, we have
\pi_{i,j}|X\rangle = \begin{cases} |X\rangle, & i,j\in X, \\ |X\rangle, & i,j\notin X, \\ |X\triangle\{i,j\}\rangle, & \text{exactly one of } i,j \text{ lies in } X. \end{cases}
(2.8)
This allows us to obtain
\sum_{\{i,j\}\in E}\pi_{i,j}|X\rangle = \sum_{\{i,j\}\in\partial X}\pi_{i,j}|X\rangle + \sum_{\{i,j\}\in E\setminus\partial X}\pi_{i,j}|X\rangle = \sum_{\{i,j\}\in\partial X}|X\triangle\{i,j\}\rangle + (m - |\partial X|)\,|X\rangle,
(2.9)
where m denotes the number of edges in E. Hence,
\hat{H}_1|X\rangle = \sum_{\{i,j\}\in E}|X\rangle - \sum_{\{i,j\}\in E}\pi_{i,j}|X\rangle = |\partial X|\,|X\rangle - \sum_{\{i,j\}\in\partial X}|X\triangle\{i,j\}\rangle.
(2.10)
Clearly if Y is a subset of V that has a different size from X, then ⟨Y|Ĥ1|X⟩ = 0. This immediately implies that Ĥ1 can be written as a sum of orthogonal matrices, each of them supported on the space spanned by |X⟩, where the X have a fixed size. Next, note that ⟨X|Ĥ1|X⟩ = |∂X|, which implies that the diagonal entries of Lk are given by the sizes of the corresponding edge-boundaries of k-sets. Finally, note that if Y has the same size as X, then ⟨Y|Ĥ1|X⟩ = 0 whenever X △ Y ∉ E and ⟨Y|Ĥ1|X⟩ = −1 whenever X △ Y ∈ E. This proves the result.
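As a sanity check on Theorem II.1 (a verification sketch of our own, with helper names of our own choosing), one can assemble Ĥ1 from Pauli matrices for a small graph and confirm that its spectrum coincides with the combined spectra of the Laplacians L0, …, Ln.

import numpy as np
from itertools import combinations
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def kron_at(op, site, n):
    # Place a single-qubit operator at position `site` of an n-qubit tensor product.
    mats = [I2] * n
    mats[site] = op
    return reduce(np.kron, mats)

def heisenberg_H1(n, edges):
    # Normalized ferromagnetic HH of Eq. (2.2): sum over edges of -(XX + YY + ZZ - 1)/2.
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for (i, j) in edges:
        a, b = i - 1, j - 1
        term = sum(kron_at(s, a, n) @ kron_at(s, b, n) for s in (sx, sy, sz))
        H += -(term - np.eye(dim)) / 2
    return H

def laplacian_Lk(n, edges, k):
    # Laplacian of G{k} as in Eq. (2.1).
    Vk = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    idx = {X: i for i, X in enumerate(Vk)}
    es = {frozenset(e) for e in edges}
    L = np.zeros((len(Vk), len(Vk)))
    for X, Y in combinations(Vk, 2):
        if X ^ Y in es:
            L[idx[X], idx[Y]] = L[idx[Y], idx[X]] = -1.0
            L[idx[X], idx[X]] += 1.0
            L[idx[Y], idx[Y]] += 1.0
    return L

n, edges = 4, [(1, 2), (2, 3), (3, 4)]          # a path on 4 spins
spec_H = np.sort(np.linalg.eigvalsh(heisenberg_H1(n, edges)))
spec_L = np.sort(np.concatenate([np.linalg.eigvalsh(laplacian_Lk(n, edges, k)) for k in range(n + 1)]))
print(np.allclose(spec_H, spec_L))              # True if the decomposition holds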

We begin with a combinatorial approach for producing the exact solution for a mean-field HM. Such a HM has n spins, and every pair of spins interacts with exactly the same exchange constant J. In this case, the normalized Hamiltonian is

\hat{H}_1 = -\sum_{i=1}^{n}\sum_{j=1}^{i-1} \frac{\sigma_i^x\sigma_j^x + \sigma_i^y\sigma_j^y + \sigma_i^z\sigma_j^z - \mathbb{1}}{2}.
(3.1)

From the perspective of SU(2) symmetry, this model is trivial. This is because we can write \hat{H}_1 = -\mathbf{S}_{\mathrm{tot}}\cdot\mathbf{S}_{\mathrm{tot}} + \frac{n}{2}\left(\frac{n}{2}+1\right)\mathbb{1}, where \mathbf{S}_{\mathrm{tot}} = \sum_{i=1}^{n}\mathbf{S}_i and \mathbf{S}_i = \boldsymbol{\sigma}_i/2. The spectrum along with the degeneracies is directly given by the representations contained in the direct product of n spin-1/2 representations,

1/2 \otimes 1/2 \otimes \cdots \otimes 1/2,

which can be easily solved using standard techniques. Moreover, the corresponding eigenvectors can in principle be calculated using textbook methods with Clebsch-Gordan coefficients. However, this computation can be fairly tedious. We show how the eigenvalues and eigenprojectors of Ĥ1 can be alternatively obtained from a combinatorial perspective.

Note that for Ĥ1, the graph of interactions is precisely the complete graph on n vertices. The symmetric products of the complete graph are the Johnson graphs, for which the spectral problem has been exactly solved using association schemes.45,46 Using this connection, we can use prior knowledge of the Johnson schemes to conclude that Lk has exactly one eigenvalue equal to zero, and its other eigenvalues are j(n + 1 − j) with multiplicities m_j = \binom{n}{j} - \binom{n}{j-1} for j = 1, …, k (Ref. 47, Sec. 12.3.2). Hence, the positive eigenvalues of Ĥ1 are

j(n + 1 - j)
(3.2)

with multiplicities

(n + 1 - 2j)\, m_j,
(3.3)

where j = 1, …, ⌊n/2⌋.

What is most remarkable about the connection between association schemes and the mean-field Heisenberg model is that we can assign a combinatorial interpretation to the matrices Lk. In particular, we can analytically decompose Lk as a linear combination of eigenprojectors, where each eigenprojector is in turn a linear combination of generalized adjacency matrices. We proceed to explain what these generalized adjacency matrices are. Now the adjacency matrix of Lk is

A_{k,1} = \sum_{|X\triangle Y|=2} \big(|X\rangle\langle Y| + |Y\rangle\langle X|\big).
(3.4)

Namely, the matrix element of Ak,1 labeled by |X⟩⟨Y| has a coefficient of 1 if X is adjacent to Y in G{k}, and equal to zero otherwise. Since two vertices in a graph are adjacent if and only if they are a distance of one apart, we can define the generalized adjacency matrices by having

A_{k,z} = \sum_{\substack{X,Y\subseteq\{1,\ldots,n\} \\ |X\triangle Y|=2z}} |X\rangle\langle Y|.
(3.5)

Here, the matrix element of Ak,z labeled by |X⟩⟨Y| has a coefficient of 1 if X is a distance of z from Y in G{k}, and equal to zero otherwise. We call Ak,z the zth generalized adjacency matrix of the Johnson graph associated with Lk, relating k-sets a distance of z apart. For completeness, let Ak,0 denote an identity matrix of size \binom{n}{k}. Now let

h_{k,j}(z) = m_j \sum_{a=0}^{j} (-1)^a \frac{\binom{j}{a}\binom{n+1-j}{a}}{\binom{k}{a}\binom{n-k}{a}} \binom{z}{a}
(3.6)

denote a Hahn polynomial [Ref. 48, (18) and (20)]. Then, properties of the Johnson scheme given in Ref. 48 imply that for k = 1, …, ⌊n/2⌋, the Laplacians Lk have the spectral decomposition

L_k = \sum_{j=1}^{k} j(n+1-j)\, P_{k,j},
(3.7)

where

P_{k,j} = \frac{1}{\binom{n}{k}} \sum_{z=0}^{k} h_{k,j}(z)\, A_{k,z}
(3.8)

are pairwise orthogonal projectors. To make the spectral decomposition of the normalized mean-field HH explicit, we present the following theorem.

Theorem III.1.
Let G = (V, E) be a complete graph. Then, a normalized HH on this graph Ĥ1 has the spectral decomposition
\hat{H}_1 = \sum_{j=1}^{(n-1)/2} j(n+1-j) \sum_{k=j}^{(n-1)/2} \big(P_{k,j} + U_k P_{k,j} U_k^\dagger\big)
(3.9)
when n is odd and
\hat{H}_1 = \sum_{j=1}^{n/2-1} j(n+1-j) \left(\sum_{k=j}^{n/2} P_{k,j} + \sum_{k=j}^{n/2-1} U_k P_{k,j} U_k^\dagger\right) + \frac{n}{2}\left(\frac{n}{2}+1\right) P_{n/2,\,n/2}
(3.10)
when n is even.

Proof.
The proof of this theorem relies on the identity
\sum_{u=0}^{a}\sum_{j=1}^{u} a_{u,j} = \sum_{u=1}^{a}\sum_{j=1}^{u} a_{u,j} = \sum_{j=1}^{a}\sum_{u=j}^{a} a_{u,j},
(3.11)
which holds for all non-negative integers a, and any complex coefficients au,j.
When n is odd, we can write
\hat{H}_1 = \sum_{k=0}^{(n-1)/2} \big(L_k + U_k L_k U_k^\dagger\big),
(3.12)
where U_k is the unitary defined in (2.3) and L_k is as given in (3.7). Substituting the decomposition of L_k, we get
\hat{H}_1 = \sum_{k=0}^{(n-1)/2} \sum_{j=1}^{k} j(n+1-j) \big(P_{k,j} + U_k P_{k,j} U_k^\dagger\big).
(3.13)
Applying (3.11) then yields the result for odd n. When n is even, we have
\hat{H}_1 = \sum_{k=0}^{n/2-1} \big(L_k + U_k L_k U_k^\dagger\big) + L_{n/2}.
(3.14)
By using the techniques used to prove the case for odd n, we get
\hat{H}_1 = \sum_{j=1}^{n/2-1} j(n+1-j) \sum_{k=j}^{n/2-1} \big(P_{k,j} + U_k P_{k,j} U_k^\dagger\big) + L_{n/2}.
(3.15)
Substituting the value of (3.7) for Ln/2, we get the result.
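Equations (3.6)–(3.8) can be checked numerically; the following hedged sketch (our own illustration, with function names of our own choosing) builds the generalized adjacency matrices Ak,z and the Hahn polynomials for a small Johnson graph and verifies that Eq. (3.7) reproduces Lk.

import numpy as np
from itertools import combinations
from math import comb

def johnson_matrices(n, k):
    # Generalized adjacency matrices A_{k,z} of Eq. (3.5): the (X, Y) entry is 1 iff the
    # symmetric difference of X and Y has size 2z.
    Vk = [frozenset(c) for c in combinations(range(n), k)]
    N = len(Vk)
    A = [np.zeros((N, N)) for _ in range(k + 1)]
    for i, X in enumerate(Vk):
        for j, Y in enumerate(Vk):
            A[len(X ^ Y) // 2][i, j] = 1.0
    return A

def hahn(n, k, j, z):
    # Hahn polynomial of Eq. (3.6), with m_j = C(n, j) - C(n, j - 1).
    mj = comb(n, j) - comb(n, j - 1)
    return mj * sum((-1) ** a * comb(j, a) * comb(n + 1 - j, a) * comb(z, a)
                    / (comb(k, a) * comb(n - k, a)) for a in range(j + 1))

n, k = 6, 3
A = johnson_matrices(n, k)
Lk = k * (n - k) * A[0] - A[1]                  # Laplacian of the Johnson graph J(n, k)
recon = sum(j * (n + 1 - j)
            * sum(hahn(n, k, j, z) * A[z] for z in range(k + 1)) / comb(n, k)
            for j in range(1, k + 1))
print(np.allclose(Lk, recon))                   # True: Eq. (3.7) holds for the mean-field model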

We obtain bounds on the largest eigenvalue of ferromagnetic HHs with graphs having dimension δ with isoperimetric number c and maximum vertex degree β. Note that obtaining bounds on the largest eigenvalue of the normalized HH Ĥ1 amounts to obtaining bounds on λmax(Lk). Now the largest eigenvalue of the Laplacian of any graph is at least its maximum vertex degree (Ref. 49, p. 149, line 7) and at most twice its maximum vertex degree by Gersgorin's circle theorem.50,51 The upper bound can also be slightly improved over Gersgorin's circle theorem to be at most the sum of the largest and the second largest vertex degrees [Ref. 49, (6)]. Thus,

c\,k^{1-1/\delta} \le \lambda_{\max}(L_k) \le 2k\beta
(4.1)

for 1 ≤ k ≤ ⌊n/2⌋. Since Ĥ1 = L0 + ⋯ + Ln, we get

c\,\lfloor n/2\rfloor^{1-1/\delta} \le \lambda_{\max}(\hat{H}_1) \le n\beta.
(4.2)

In this subsection, we outline an algorithmic approach for finding upper bounds on the smaller eigenvalues of the HH. This approach relies crucially on generalizations of the diameter of a graph. The diameter of a graph is the maximum, over all pairs of vertices, of the length of the shortest path between them, and it intuitively measures the size of the graph. In the case when the graph has the geometry of a hypercube of dimension d, its diameter will be the length between the vertices of the hypercube that are furthest apart. The generalization of the diameter that we will consider allows us to quantify, in the case of the hypercube, the length of its sides. In particular, the d-diameter of a d-dimensional hypercube will be precisely the length of its side. Intuitively, the d-diameter of a body is its width when it is interpreted to have d dimensions. The generalized diameters are important because they can give upper bounds on the eigenvalues of a graph Laplacian.34,52

The generalized diameter of a graph quantifies its sparsity. It is then reasonable to expect that the larger the generalized diameter, the smaller the upper bound on the eigenvalues can be, since a sparse graph ought to have smaller eigenvalues than a highly connected graph. In the extreme case when a graph comprises disconnected vertices, its generalized distances are all infinite, and every eigenvalue is equal to zero. Thus, in this case, we would anticipate that the upper bound we get from the diameter is also equal to zero. This is indeed the case. When a graph has j distinct connected components, by selecting j vertices, one from each of these connected components, the corresponding generalized distance is infinite. This then implies that the jth smallest eigenvalue of the corresponding graph Laplacian is at most zero. Since it is known that a graph with j distinct components has a graph Laplacian with exactly j zero eigenvalues (Ref. 44, Lemma 13.1.1), in this sense, the bound of (Ref. 34, Corollary 4.4) can be said to be tight.

To understand the generalized diameter of a graph, we need to review the concept of the distance amongst a subset of its vertices. Now, the distance between a pair of vertices va and vb is just the length of the shortest path connecting them, which we denote as d(va, vb). This can be computed using Algorithm 1.

ALGORITHM 1.

Dist(G = (V, E)), Compute pairwise distances in G.

D ← size n matrix of zeros
for all v ∈ V do
  Perform BFS on v, obtaining a spanning tree T rooted at v
  for all w ∈ V, w ≠ v do
    D(w, v) ← distance of vertex w to v in T
  end for
end for
return D
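A minimal Python transcription of Algorithm 1 (our own, for illustration): a breadth-first search from every vertex returns all pairwise distances in G, with math.inf marking disconnected pairs.

from collections import deque
from math import inf

def all_pairs_distances(vertices, edges):
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    D = {u: {v: inf for v in vertices} for u in vertices}
    for s in vertices:                      # BFS from every vertex, O(n(n + m)) overall
        D[s][s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if D[s][w] == inf:
                    D[s][w] = D[s][u] + 1
                    queue.append(w)
    return D

# Example: distances on the 4-cycle.
D = all_pairs_distances([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
print(D[1][3])   # 2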

The distance of a set of vertices K = {v1, …, vk} is then the minimum pairwise distance between distinct vertices va and vb in K, which we denote as

d(K) = \min\{d(v_a, v_b) : 1 \le a < b \le k\}.
(4.3)

The j-diameter of a graph G = (V, E) has been defined (Ref. 34, p. 25, last equation) as the maximum distance of subsets K with (j + 1) vertices, and we denote it as

d_j(G) = \max\{d(K) : K \subseteq V,\ |K| = j+1\}.
(4.4)

Now define dj,k to be the j-diameter of G{k}. Whenever dj,k ≥ 2, we can obtain upper bounds on the eigenvalues of Lk from graph-theoretic results of Ref. 34, Corollary 4.4,

\lambda_j(L_k) \le \lambda_{\max}(L_k)\left(1 - \frac{2}{1 + \binom{n}{k}^{1/(d_{j,k}-1)}}\right).
(4.5)

Clearly, dj,k decreases with increasing j, and thus our upper bounds on λj(Lk) increase with j, as one would expect. Now let us see how (4.5) can be tight. Let us consider a graph G with c connected components and consider k = 1 so that G{1} = G. We claim that the (c − 1)-diameter of G is infinite. This is because we can pick a set of vertices, with one vertex from each connected component. Since none of these vertices are connected, their pairwise distance is always infinite. Using this value for the generalized diameter, the upper bound in (4.5) for λc−1(L1) becomes zero. Since we know from (Ref. 44, Lemma 13.1.1) that λc−1(LG) = 0, the upper bound in (4.5) is tight.

Since the j-diameter of G{k} may be unwieldy to calculate directly, we outline a polynomial time algorithm to obtain lower bounds on it. At the heart of our algorithm is the fact that the distances between vertices in G{k} can be computed using only information about the distances between vertices in G. This makes it possible to estimate the j-diameter of G{k} solely by computing on the graph G. Before diving into the specifics of our algorithm, we briefly outline its inner workings.

  1. Pick any j + 1 distinct vertices X1, …, Xj+1 from G{k}. Note that each of these vertices are subsets of V, each with k elements.

  2. Loop over all a, b such that 1 ≤ a < bj + 1.

  3. Compute d(Xa, Xb).

  4. Exit loop.

  5. A lower bound for dj(G{k}) is the minimum d(Xa, Xb).

This procedure can in principle be repeated for all possible choices of X1, …, Xj+1 to obtain the value of dj(G{k}) exactly. Since this may be computationally expensive, we propose just to randomly select the vertices X1, …, Xj+1 a constant number of times. Obviously the complexity of such an algorithm depends on the complexity of Step 3 of this procedure, where the d(Xa, Xb) is evaluated.

A direct attack on evaluating d(Xa, Xb) might seem to take time with complexity O(k!) and hence not be polynomial in n. This is because the distance between Xa = {x1, …, xk} and Xb = {y1, …, yk} with respect to G{k} is the sum of the distances with respect to G between xj and yπ(j), minimized over all permutations π that permute k symbols. There are then k! possible permutations and k distances to sum for each instance. This, however, is not the case, since the problem of evaluating d(Xa, Xb) is actually equivalent to the minimum assignment problem, which can be solved in O(k3) time using the celebrated Kuhn-Munkres algorithm (Ref. 42, p. 52), after one first computes all pairwise distances in G.

We now explain how combinatorial optimization algorithms from graph theory can be used to compute lower bounds on dj,k in polynomial time.

  1. Algorithm 1 computes all pairwise distances in G. This is achieved using breadth-first search from every vertex. Since breadth-first search from any vertex produces a shortest path tree in linear time (Ref. 42, Theorem 6.4), and there are n such vertices, Algorithm 1 runs in O(n2) time.

  2. Algorithm 3 evaluates distances between given vertices in G{k}. It turns out that the evaluation of d(Xa, Xb) is equivalent to the well-known minimum assignment problem in the field of combinatorial optimization. First, evaluate Z = Xa ∩ Xb and set X = Xa ∖ Z and Y = Xb ∖ Z. Consider a complete bipartite graph in which every vertex x ∈ X is connected to every vertex y ∈ Y by a weighted edge. The weight of the edge {x, y} in the bipartite graph is equal to the distance between x and y given by d(x, y). The problem of computing d(Xa, Xb) is then equivalent to finding the perfect matching (a set of edges such that every vertex belongs to exactly one edge) on this bipartite graph such that the sum of the weights of the matched edges is minimized. But this is precisely the minimum assignment problem, which can be solved using the Kuhn-Munkres algorithm. We therefore just need to generate the cost matrix for the minimum assignment problem in this algorithm to utilize the Kuhn-Munkres algorithm.

We would be able to easily compute the generalized diameter of G{k} exactly, if we only knew how to optimally select j + 1 of its vertices in G{k}. Without such knowledge, we can use Algorithm 2 to randomly select j + 1 vertices in G{k}.

ALGORITHM 2.

SELk(j, V), Select j + 1 distinct vertices in G{k}.

X1 ← a random k-vertex subset of V
c ← 1
while c < j + 1 do
  Y ← a random k-vertex subset of V
  if Y ≠ Xa for all a = 1, …, c then
    Xc+1 ← Y
    c ← c + 1
  end if
end while
return (X1, …, Xj+1)
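The random selection of Algorithm 2 can be transcribed into a few lines of Python (an illustrative sketch of our own, not the authors' code), using rejection sampling until j + 1 distinct k-subsets are found.

import random

def select_vertices(vertex_list, k, j):
    # Returns j + 1 distinct k-subsets of the vertex list; assumes C(n, k) > j
    # so that enough distinct subsets exist.
    chosen = [frozenset(random.sample(vertex_list, k))]
    while len(chosen) < j + 1:
        Y = frozenset(random.sample(vertex_list, k))
        if all(Y != X for X in chosen):
            chosen.append(Y)
    return chosen

# Example: three distinct 2-subsets of {1, ..., 6}.
print(select_vertices(list(range(1, 7)), k=2, j=2))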
ALGORITHM 3.

Dist(X, Y, D), Evaluates the distance between X and Y in G{k}.

Z ← X ∩ Y
a ← |X| − |Z|
{x1, …, xa} ← X ∖ Z
{y1, …, ya} ← Y ∖ Z
C ← zeros(a)        initialize a size a matrix
for all u = 1, …, a do
  for all v = 1, …, a do
    C(u, v) ← D(xu, yv)
  end for
end for
d ← output of Kuhn-Munkres algorithm on the cost matrix C
return d
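Because Algorithm 3 is a minimum assignment problem, SciPy's linear_sum_assignment (a Kuhn-Munkres-type solver) can be used in practice. The sketch below is our own illustration; it assumes that G is connected and that D is the nested dictionary of pairwise distances produced by the BFS sketch given after Algorithm 1.

import numpy as np
from scipy.optimize import linear_sum_assignment

def token_distance(X, Y, D):
    # Distance between the k-sets X and Y in G{k}: a minimum-weight assignment
    # of the elements of X \ Y to the elements of Y \ X, weighted by distances in G.
    Z = X & Y
    xs, ys = sorted(X - Z), sorted(Y - Z)
    if not xs:
        return 0
    C = np.array([[D[x][y] for y in ys] for x in xs], dtype=float)
    rows, cols = linear_sum_assignment(C)       # Kuhn-Munkres / Hungarian method
    return C[rows, cols].sum()

# Example: the distance between {1, 2} and {3, 4} in the symmetric square of the 4-cycle.
D = all_pairs_distances([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
print(token_distance(frozenset({1, 2}), frozenset({3, 4}), D))   # 2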

We completely describe our algorithm to compute upper bounds on the eigenvalues of Lk in Algorithm 4.

ALGORITHM 4.

Upp(j, k, G = (V, E)), Upper bounds on λj(Lk).

Initialization
  1 ≤ k ≤ ⌊n/2⌋
  1 ≤ j < \binom{n}{k}
  β ← maximum vertex degree of G
  μ ← 2kβ        upper bound on λmax(Lk)
  D ← Dist(G)        from Algorithm 1
end initialization
(X1, …, Xj+1) ← SELk(j, V)        from Algorithm 2
d ← ∞
for all a, b = 1, …, j + 1: a < b do
  d_{Xa,Xb} ← Dist(Xa, Xb, D)        from Algorithm 3
  if d_{Xa,Xb} < d then
    d ← d_{Xa,Xb}
  end if
end for
if d ≥ 2 then
  u ← μ(1 − 2/(1 + \binom{n}{k}^{1/(d−1)}))
else
  u ← ∞
end if
return u
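Putting the pieces together, the following is a hedged end-to-end sketch of Algorithm 4 (our own illustration; all_pairs_distances, select_vertices, and token_distance are the helpers sketched earlier in this section, trials is a parameter of our own choosing, and μ and the final formula follow Eq. (4.1) and Eq. (4.5) with the computed d used as a lower bound on dj,k).

from math import comb, inf
from itertools import combinations as pairs

def upper_bound(j, k, vertices, edges, trials=10):
    n = len(vertices)
    beta = max(sum(1 for e in edges if v in e) for v in vertices)   # maximum degree of G
    mu = 2 * k * beta                          # upper bound on lambda_max(L_k), cf. Eq. (4.1)
    D = all_pairs_distances(vertices, edges)   # Algorithm 1
    best = inf
    for _ in range(trials):                    # repeat the random selection a constant number of times
        Xs = select_vertices(vertices, k, j)   # Algorithm 2
        d = min(token_distance(Xa, Xb, D) for Xa, Xb in pairs(Xs, 2))   # Algorithm 3
        if d >= 2:
            u = mu * (1 - 2 / (1 + comb(n, k) ** (1 / (d - 1))))
            best = min(best, u)
    return best

# Example: an upper bound on lambda_1(L_2) for the 6-cycle.
ring = [(i, i % 6 + 1) for i in range(1, 7)]
print(upper_bound(j=1, k=2, vertices=list(range(1, 7)), edges=ring))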

Since there are \binom{j+1}{2} possible pairwise distances amongst X1, …, Xj+1 that we must consider, the time complexity of running Algorithm 4 is

O(n^2) + O(j^2 k^3).
(4.6)

This thereby leads to an algorithm that evaluates a lower bound for dj,k in time polynomial in n, j, and k. This then leads to our formal result, which we give in the following theorem.

Theorem IV.5.

Let G = (V, E) be any graph with n vertices. Let 2 ≤ k ≤ n/2 and 1 ≤ j ≤ \binom{n}{k} − 1. Then, Algorithm 4 can compute an upper bound on λj(Lk) in O(n^2) + O(j^2 k^3) time.

Thus, for all k and j polynomial in n, upper bounds on the eigenvalues of the ferromagnetic HH can be computed in time polynomial in n. Such an algorithm would outperform a direct solver for Laplacians33 whenever k ≥ 3.

A property of graphs that we focus on is their associated isoperimetric inequalities. These isoperimetric inequalities on graphs allow us to define the notion of the isoperimetric dimension of a graph. Now let X be a set of vertices and ∂X be its boundary. In this case, the edge boundary of X is just the set of edges in E with exactly one vertex in X and one vertex in V ∖ X. Then, an edge-isoperimetric inequality on graphs53 is any lower bound of the form

|\partial X| \ge c\,|X|^{1-1/d}
(5.1)

that holds for every vertex subset X of size at most half the cardinality of V. The utility of these isoperimetric inequalities in the case of continuous manifolds lies in their applicability, for example, to give bounds on the principal frequency of a vibrating membrane.54 The rationale behind seeking edge-isoperimetric inequalities for the graphs G lies in the fact that such inequalities can yield spectral bounds on the eigenvalues of the normalized Laplacians of G,35 and hence also of the Laplacians. Since the Heisenberg Hamiltonian is just a direct sum of Laplacians of G{k}, edge-isoperimetric inequalities on G{k} can then yield bounds on the corresponding energy eigenvalues of the Heisenberg Hamiltonian.

In this section, we prove several technical results relating to the edge-isoperimetric inequalities on the symmetric products of graphs. Roughly speaking, our results allow us to establish the isoperimetric properties of G{k} in terms of the isoperimetric properties of certain subgraphs of the graph G. In particular, these subgraphs are vertex-induced subgraphs of G where a number of vertices and their corresponding edges are deleted from G. Our technical result applies to graphs with a finite number of vertices. In Theorem V.6, we prove that if deleting any k − 1 vertices from a finite graph G yields a vertex-induced subgraph that has a dimension δ with isoperimetric number C, then a lower bound on the size of the edge-boundary of a subset of vertices Ω in G{k} is given in terms of the size of the edge-boundary of Ω in the Johnson graph that is the kth symmetric product of the complete graph.

The proof relies crucially on the fact that the size of an edge boundary of a set X can be written as a Sobolev seminorm of the indicator function of X. This implies that edge-isoperimetric inequalities can be written in terms of the Sobolev seminorm of an indicator function and an appropriate functional of that indicator function, as we shall see in Sec. V A. Also, we use Tillich’s observation of a one-to-one correspondence between edge-isoperimetric inequalities and inequalities relating the Sobolev seminorm of functions and an appropriate functional evaluated on those functions.55 Together, these insights allow us to obtain lower bounds on the size of the edge-boundary of the subsets of vertices in G{k}.

Recall that an edge-isoperimetric inequality for a graph G = (V, E) has the form

|\partial X| \ge C\,|X|^{1-1/d}, \qquad \forall X \subseteq V : |X| = k,
(5.2)

where k = 1, …, |V|/2. The point of this section is that the size of the edge-boundary |∂X| can be written in terms of a discrete Sobolev seminorm and this allows us to obtain some interesting insights. Namely, given a graph G = (V, E) and a function f:VR on the vertex set, the discrete Sobolev seminorm of f corresponding to the edge set E is defined by

\|f\|_E = \sum_{\{u,v\}\in E} |f(u) - f(v)|.

Now consider the case where f = 1X, where 1X : V → {0, 1} is an indicator function on X so that for all X ⊆ V, 1X(x) = 1 if x ∈ X and 1X(x) = 0 if x ∈ V ∖ X. Then, it is clear that

|\partial X| = \|1_X\|_E.
(5.3)

We call any inequality which involves the Sobolev seminorm ∥·∥E, such as the one above, a discrete Sobolev inequality.

The analytic inequalities of Tillich (Ref. 55, Theorem 2) establish the equivalence between edge-isoperimetric inequalities and discrete Sobolev inequalities on functionals that map functions from ΦV to non-negative real numbers, where ΦV denotes the set of all functions f : V → ℝ. To state Tillich's theorem succinctly, we introduce the following definition.

Definition V.1.

Given C > 0 and a functional ρ : ΦV → ℝ+, we say that G is (C, ρ)-isoperimetric if for every X ⊆ V, we have \|1_X\|_E \ge C\rho(1_X).

By not requiring that |X| ≤ |V|/2, an implicit constraint on the choice of feasible functionals ρ that can satisfy the discrete Sobolev inequality in Definition V.1 is imposed.

We state Tillich’s result on functionals that are also seminorms in the following theorem.

Theorem V.2

(Ref. 55, Theorem 2). Let G = (V, E) be a graph, C > 0, and ρ be a seminorm on ΦV. Then, G is (C, ρ)-isoperimetric if and only if \|f\|_E \ge C\rho(f) for every function f : V → ℝ.

Imposing the additional constraint |X| ≤ |V|/2 would allow us to work with a larger family of seminorms ρ, but Theorem V.2 would need appropriate modification, which we do not address in this paper. Working without the constraint |X| ≤ |V|/2 allowed Tillich to derive edge-isoperimetric inequalities for graphs with a countably infinite number of vertices.

In this article, we restrict our attention to the functionals gp and ρp for p ≥ 1, where
g_p(f) = \left(\frac{1}{|V|}\sum_{x,y\in V} |f(x)-f(y)|^p\right)^{1/p},
(5.4)
\rho_p(f) = \left(\sum_{x\in V} |f(x) - E(f)|^p\right)^{1/p},
(5.5)
where
E(f) = \frac{1}{|V|}\sum_{v\in V} f(v)
(5.6)
denotes the expectation value of f. It is then easy to show that
g_p(1_X) = \left(\frac{2|X|\,|V\setminus X|}{|V|}\right)^{1/p},
(5.7)
\rho_p(1_X) = \left(\sum_{x\in V}\left|1_X(x) - \frac{|X|}{|V|}\right|^p\right)^{1/p}.
(5.8)
Note that when gp and ρp are evaluated on 1X, they are invariant under the substitution of X with V ∖ X.
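The identities (5.3), (5.7), and (5.8) are straightforward to check numerically; the short sketch below (our own, with function names of our own choosing) evaluates the Sobolev seminorm of an indicator function and the functionals gp and ρp on a toy graph.

def seminorm(f, edges):
    # Discrete Sobolev seminorm ||f||_E.
    return sum(abs(f[u] - f[v]) for u, v in edges)

def g_p(f, vertices, p):
    return (sum(abs(f[x] - f[y]) ** p for x in vertices for y in vertices) / len(vertices)) ** (1 / p)

def rho_p(f, vertices, p):
    mean = sum(f[v] for v in vertices) / len(vertices)
    return sum(abs(f[v] - mean) ** p for v in vertices) ** (1 / p)

# Indicator function of X = {1, 2} on the 4-cycle.
V, E, X = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)], {1, 2}
one_X = {v: 1.0 if v in X else 0.0 for v in V}
print(seminorm(one_X, E))                                                # 2, the edge boundary size of X, Eq. (5.3)
print(g_p(one_X, V, 2) ** 2, 2 * len(X) * (len(V) - len(X)) / len(V))    # both equal 2.0, Eq. (5.7)
print(rho_p(one_X, V, 2) <= g_p(one_X, V, 2))                            # True; cf. Lemma V.4 below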

The discrete Sobolev inequality is closely related to the isoperimetric number and dimension of a graph as given in the following proposition, which is obvious from definitions.

Proposition V.3.

Let G = (V, E) be a graph and C > 0 and δ > 1. Then, the following are true.

  1. If V is finite and G is (C, gδ/(δ−1))-isoperimetric, then G has a dimension of δ with isoperimetric number C.

  2. If V is finite and G has dimension δ with isoperimetric number C, then G is (2^{-\delta/(\delta-1)}C, g_{\delta/(\delta-1)})-isoperimetric.

Hence, we can address finite-sized graphs with the functionals ρp using the two-sided bounds on ρp(1X) in terms of gp(1X) as given in the following lemma. Note that when p = 1, we get ρ1(1X) = g1(1X) for any vertex subset X.

Lemma V.4.
Let G = (V, E) be a graph, X ⊆ V, and p ≥ 1. Then,
\left(\tfrac{1}{2}\right)^{1-1/p} g_p(1_X) \le \rho_p(1_X) \le g_p(1_X).

Proof.
By definition, \rho_p(1_X) = \left(\sum_{x\in V}\left|1_X(x) - \frac{|X|}{|V|}\right|^p\right)^{1/p}. Splitting the summation over V into the disjoint subsets X and V ∖ X yields
\rho_p(1_X) = \left(|X|\left|1 - \frac{|X|}{|V|}\right|^p + (|V| - |X|)\left|\frac{|X|}{|V|}\right|^p\right)^{1/p}.
(5.9)
Since \left(1 - \frac{|X|}{|V|}\right)^p \le 1 - \frac{|X|}{|V|} and \left(\frac{|X|}{|V|}\right)^p \le \frac{|X|}{|V|} for p ≥ 1, we get ρp(1X) ≤ gp(1X). Since both |X|\left|1 - \frac{|X|}{|V|}\right|^p and |V\setminus X|\left|\frac{|X|}{|V|}\right|^p are at least \frac{|X|\,|V\setminus X|}{|V|}\left(\frac{1}{2}\right)^{p-1}, we get \rho_p(1_X) \ge g_p(1_X)\left(1/2^{p-1}\right)^{1/p}.
We remark that Lemma V.4 is tight when p = 1 because then we would have
g_1(1_X) \le \rho_1(1_X) \le g_1(1_X),
(5.10)
which implies that ρ1(1X) = g1(1X). The scenario p = 1 occurs for graphs with infinite dimensions and expander graphs are examples of such graphs.

Lemma V.4 implies the following for C > 0 and δ > 1.

  1. If a graph is (C, ρδ/(δ−1))-isoperimetric, the graph also has dimension δ with isoperimetric number 2^{-1/\delta}C.

  2. If a graph has dimension δ with isoperimetric number C, the graph is also (2^{-\delta/(\delta-1)}C, g_{\delta/(\delta-1)})-isoperimetric.

In what follows, we use Theorem V.2 where ρ = ρp for p ≥ 1.

Now, we address the edge-isoperimetric problem on the graph G{k} when G has a finite number of vertices, for a fixed positive integer k = 2, …, ⌊|V|/2⌋. Again, we rely on the edge-isoperimetric properties of the vertex-induced subgraphs of a graph G. A key ingredient of our proof is a bijection between sets, described by the following proposition.

Proposition V.5.

Let V be a countable set and k be an integer such that k = 1, …, |V|. Then, the sets A = {(W; x) : W ⊆ V, |W| = k − 1, x ∈ V ∖ W} and A′ = {(X; x) : X ⊆ V, |X| = k, x ∈ X} have the same cardinality.

Proof.

Let f : A → A′, where f(W; x) = (W ∪ {x}; x) for all W ⊆ V with |W| = k − 1 and x ∈ V ∖ W. The map f is invertible and is therefore a bijection from A to A′. Hence, A and A′ have the same cardinality.

We obtain here a lower bound on |∂Ω|, which is the size of the edge boundary of any vertex subset Ω in G{k}. Our lower bound on |∂Ω| is provided in terms of |∂JΩ|, which is the size of the edge boundary of Ω in the Johnson graph J(n, k).

Theorem V.6.
Let G = (V, E) be a graph with n vertices, and let p ≥ 1 and C > 0. Suppose that every vertex-induced subgraph of G with n − k + 1 vertices is (C, ρp)-isoperimetric. Then, for every Ω ⊆ V{k},
|\partial\Omega| \ge \frac{C}{n-k+1}\big(2|\partial_J\Omega|\big)^{1/p}.
Note that the inequality in Theorem V.6 is tight for p = 1. To see this, let us consider a trivial scenario where G is the complete graph on n vertices and k = 1. For the complete graph, we can compute the edge boundary of any vertex subset X exactly. Denoting x = |X| and n = |V|, we have |∂X| = x(n − x). Recall from (5.7) that g_1(1_X) = \frac{2x(n-x)}{n}. Then, the edge-isoperimetric inequality for the complete graph with respect to the seminorm g1 is equivalent to
x(n-x) \ge C\,\frac{2x(n-x)}{n}.
(5.11)
This inequality holds trivially when x = 0 or x = n, so let us consider 1 ≤ x ≤ n − 1. In that case, it is equivalent to C ≤ n/2, so the optimal isoperimetric constant for the complete graph is C = n/2. Substituting this example into Theorem V.6, since G{1} = G, we get for the complete graph
|\partial X| \ge \frac{n/2}{n}\big(2|\partial X|\big),
(5.12)
which is equivalent to 1 ≥ 1 and hence the inequality in Theorem V.6 is tight for the complete graph.

Proof of Theorem V.6.
For all Ω ⊆ V{k}, note that |\partial\Omega| = \|1_\Omega\|_{E^{\{k\}}}, where E{k} denotes the edge set of G{k}. Two k-sets X and Y are adjacent in the graph G{k} if and only if the symmetric difference of X and Y is an edge in E. Hence,
|\partial\Omega| = \sum_{\substack{W\subseteq V\\ |W|=k-1}} \; \sum_{\{u,v\}\in E[V\setminus W]} \big|1_\Omega(W\cup\{u\}) - 1_\Omega(W\cup\{v\})\big|.
(5.13)
Applying Theorem V.2 with the seminorm ρp on each induced subgraph G[V ∖ W], for every (k − 1)-set W, with respect to the function 1Ω(W ∪ {·}), we get
|\partial\Omega| \ge \sum_{\substack{W\subseteq V\\ |W|=k-1}} C\left(\sum_{x\in V\setminus W}\left|1_\Omega(W\cup\{x\}) - \frac{\sum_{y\in V\setminus W} 1_\Omega(W\cup\{y\})}{n-k+1}\right|^p\right)^{1/p}.
(5.14)
By subadditivity of the function (·)^{1/p} for all p ≥ 1, the inequality (5.14) becomes
|\partial\Omega| \ge C\left(\sum_{\substack{W\subseteq V\\ |W|=k-1}} \sum_{x\in V\setminus W}\left|1_\Omega(W\cup\{x\}) - \frac{\sum_{y\in V\setminus W} 1_\Omega(W\cup\{y\})}{n-k+1}\right|^p\right)^{1/p}.
(5.15)
By Proposition V.5, we can reorder the summation in (5.15) to get
|\partial\Omega| \ge C\left(\sum_{X\in V^{\{k\}}} \sum_{x\in X}\left|1_\Omega(X) - \frac{\sum_{y\in V\setminus(X\setminus\{x\})} 1_\Omega\big((X\setminus\{x\})\cup\{y\}\big)}{n-k+1}\right|^p\right)^{1/p}.
(5.16)
Each k-set X appearing in the inequality (5.16) either belongs to Ω or not. Applying simple arithmetic on the right-hand side of (5.16) above then yields
C\left(\sum_{X\in\Omega}\sum_{x\in X}\left|\frac{\sum_{y\in V\setminus(X\setminus\{x\})}\big(1 - 1_\Omega((X\setminus\{x\})\cup\{y\})\big)}{n-k+1}\right|^p + \sum_{X\notin\Omega}\sum_{x\in X}\left|\frac{\sum_{y\in V\setminus(X\setminus\{x\})} 1_\Omega((X\setminus\{x\})\cup\{y\})}{n-k+1}\right|^p\right)^{1/p}.
(5.17)
Using the inequality \left(\sum_i x_i\right)^p \ge \sum_i x_i^p for non-negative xi and p ≥ 1, the expression (5.17) is at least
C\left(\sum_{X\in\Omega}\sum_{x\in X}\sum_{y\in V\setminus(X\setminus\{x\})}\frac{1 - 1_\Omega((X\setminus\{x\})\cup\{y\})}{(n-k+1)^p} + \sum_{X\notin\Omega}\sum_{x\in X}\sum_{y\in V\setminus(X\setminus\{x\})}\frac{1_\Omega((X\setminus\{x\})\cup\{y\})}{(n-k+1)^p}\right)^{1/p} = \frac{C}{n-k+1}\left(2\sum_{X\notin\Omega}\sum_{x\in X}\sum_{y\in V\setminus(X\setminus\{x\})} 1_\Omega\big((X\setminus\{x\})\cup\{y\}\big)\right)^{1/p}.
To complete the proof, note that
\sum_{X\notin\Omega}\sum_{x\in X}\sum_{y\in V\setminus(X\setminus\{x\})} 1_\Omega\big((X\setminus\{x\})\cup\{y\}\big) = |\partial_J\Omega|.
The eigenvalues of the combinatorial Laplacian of the Johnson graph J(n, k) for k = 0, …, ⌊n/2⌋ are j(n + 1 − j) with multiplicities \binom{n}{j} - \binom{n}{j-1}, where j = 0, …, k (Ref. 47, Sec. 12.3.2). If λ is the second smallest eigenvalue of the combinatorial Laplacian of a graph, then that graph is (\frac{\lambda}{2}, g_1)-isoperimetric (Ref. 44, Lemma 13.7.1). Since the second smallest eigenvalue of the combinatorial Laplacian of the Johnson graph J(n, k) is always n, we have |\partial_J\Omega| \ge \frac{n}{2}\, g_1(1_\Omega) for every Ω ⊆ V{k}. Hence,
\big(2|\partial_J\Omega|\big)^{1/p} \ge \big(n\, g_1(1_\Omega)\big)^{1/p} = n^{1/p}\, g_p(1_\Omega).
(5.18)
Using (5.18) with Theorem V.6 together with Lemma V.4 yields the following corollary.

Corollary V.7.

Let G = (V, E) be a graph with n vertices, and let p ≥ 1 and C > 0. Suppose that every vertex-induced subgraph of G with n − k + 1 vertices is (C, ρp)-isoperimetric. Then, G{k} is \left(\frac{Cn^{1/p}}{n-k+1}, g_p\right)-isoperimetric and \left(\frac{Cn^{1/p}}{n-k+1}, \rho_p\right)-isoperimetric.

This corollary plays a central role in Subsection V C.

If one were to compute the eigenvalues of Lk directly, one may quickly run into computational difficulties. The reason is that the size of the matrix Lk is \binom{n}{k}, which in general scales exponentially with n. This makes it difficult to evaluate the eigenvalues of Lk unless one is willing to use a computer with exponential memory that runs for exponential time. In view of this problem, our methodology to obtain lower bounds on the eigenvalues of Lk will be handy. The algorithms to compute lower bounds that we introduce from graph theory will considerably outperform algorithms that directly compute the eigenvalues of Lk. Instead of studying the symmetric products G{k}, we restrict our attention to the vertex-induced subgraphs of G.

When one deletes vertices from a graph G along with the corresponding edges, one obtains a vertex-induced subgraph of G. We denote the set of all graphs obtained by deleting exactly k − 1 vertices from G as V(G, k). Clearly, there are \binom{n}{k-1} graphs in the set V(G, k). From Corollary V.7, we know that if C is less than the isoperimetric number of every graph in V(G, k) with corresponding dimension δ, then the graph G{k} has isoperimetric dimension δ with isoperimetric number at least

\frac{C\, n^{1-1/\delta}}{n-k+1}.
(5.19)

We now proceed to outline how lower bounds on the eigenvalues of Lk can be obtained from geometric considerations of the graphs G{k}. To achieve this, we will first illustrate how lower bounds on the spectrum of a graph Laplacian can depend only on the graph's geometry. We begin by introducing some notation. Let D_G = \sum_{v\in V} d_v |v\rangle\langle v| denote the degree matrix of a graph G = (V, E). Let AG denote the adjacency matrix of a graph, which means that it is a matrix with matrix elements equal to either 0 or 1, and where ⟨u|AG|v⟩ = 1 if the vertex u is adjacent to v. Let LG denote the Laplacian of a graph, which can be written as L_G = D_G - A_G. In this subsection, we have the following theorem, which is essentially a Chung-Yau type bound35 with Ostrovskii's correction43 for unnormalized Laplacians.

Theorem V.8.
Let a graph G = (V, E) have dimension δ > 2 with isoperimetric number c. Let b and β be the minimum and maximum vertex degrees of G, respectively. Then,
\lambda_j(L_G) \ge \frac{b\,c^2}{16\,e\,\beta^2}\left(\frac{\delta-2}{\delta-1}\right)^2\left(\frac{j\beta}{18|E|}\right)^{2/\delta}.
(5.20)
When a graph is connected, its degree matrix is non-singular, and we can write the normalized Laplacian of G as
\tilde{L}_G = D_G^{-1/2} L_G D_G^{-1/2}.
(5.21)
The proof of Theorem V.8 relies on the corresponding result for lower bounds on the spectrum of normalized Laplacians. The connection is given by the following lemma.

Lemma V.9.
If the graph has minimum and maximum vertex degrees given by b and β, respectively,
b\,\lambda_j(\tilde{L}_G) \le \lambda_j(L_G) \le \beta\,\lambda_j(\tilde{L}_G).
(5.22)

Proof.
Denoting the ith largest singular value of a matrix A of size da as si(A), with s_1(A) \ge \cdots \ge s_{d_a}(A), we have from Ref. 56, Problem III.6.5 the inequalities
s_i(AB) \le s_i(A)\,s_1(B), \qquad s_i(AB) \le s_1(A)\,s_i(B).
(5.23)
Applying the above inequalities iteratively, we consequently obtain
s_i(\tilde{L}_G) = s_i(D_G^{-1/2} L_G D_G^{-1/2}) \le s_i(L_G)\,s_1(D_G^{-1}), \qquad s_i(L_G) = s_i(D_G^{1/2}\tilde{L}_G D_G^{1/2}) \le s_i(\tilde{L}_G)\,s_1(D_G).
(5.24)
Since the matrices D_G, D_G^{-1}, L_G, and \tilde{L}_G are positive semidefinite, their singular values are equal to their eigenvalues. The largest eigenvalues of D_G and D_G^{-1} are β and b^{-1}, respectively. The inequalities (5.24) then give the result.

Lower bounds on the eigenvalues of the normalized Laplacian can be obtained from the graph’s Sobolev inequalities, as shown in the seminal work of Chung and Yau.35 Because of a gap in the proof in Ref. 35 as shown by Ostrovskii [Ref. 43, after Eq. (8)], we have to take Ostrovskii’s correction into account when we prove the corresponding lower bounds on the graph’s Laplacian which we state explicitly in Theorem V.8.

Proof of Theorem V.8.
For a graph G = (V, E), denote the volume of a subset of vertices X as \mathrm{vol}(X) = \sum_{v\in X} d_v. Also let \mathrm{vol}(G) = \sum_{v\in V} d_v denote the sum of all vertex degrees in the graph G. The isoperimetric inequality we focus on is
|\partial X| \ge c_\delta\,(\mathrm{vol}(X))^{1-1/\delta},
(5.25)
where vol(X) ≤ vol(V ∖ X). Note here that vol(X) is in general different from the number of vertices in X. While |X| counts the number of vertices in X, the volume vol(X) counts the sum of all vertex degrees of vertices in X. We may also interpret vol(X) as the number of vertices in X multiplied by the average degree of the vertices in X. The Sobolev inequality on graphs has the form
\sum_{\{u,v\}\in E} |f(u)-f(v)|^2 \ge A \min_{\mu\in\mathbb{R}}\left(\sum_{v\in V} |f(v)-\mu|^\alpha\, d_v\right)^{2/\alpha},
(5.26)
where \alpha = \frac{2\delta}{\delta-2}. Typically, A depends on cδ and δ. Chung and Yau proved that when the above Sobolev inequality holds for a graph, the eigenvalues of the graph's normalized Laplacian satisfy the lower bound
\lambda_j(\tilde{L}_G) \ge \frac{A}{e}\, 3^{-4/\delta}\left(\frac{j}{\mathrm{vol}(G)}\right)^{2/\delta}.
(5.27)
When δ > 2, the inequality (5.26) holds with A = \frac{c_\delta^2}{16}\left(\frac{\delta-2}{\delta-1}\right)^2 using Ostrovskii's Sobolev inequality [Ref. 43, (8)]. Using this fact with Lemma V.9, we get
\lambda_j(L_G) \ge b\,\frac{c_\delta^2}{16\,e}\left(\frac{\delta-2}{\delta-1}\right)^2\left(\frac{j}{9\,\mathrm{vol}(G)}\right)^{2/\delta}.
(5.28)
It remains to relate cδ to c. Let β be the maximum vertex degree of G. Since G has isoperimetric dimension δ and isoperimetric number c, its vertex subsets X satisfy the bound
|\partial X| \ge c\,\min\{|X|, |V|-|X|\}^{1-1/\delta} \ge \frac{c}{\beta^{1-1/\delta}}\,\min\{\mathrm{vol}(X), \mathrm{vol}(V\setminus X)\}^{1-1/\delta}.
(5.29)
Hence, we can take c_\delta = c/\beta^{1-1/\delta}. The handshaking lemma also implies that vol(G) = 2|E|, and we get the result.
Using Theorem V.8, we can easily obtain lower bounds on the eigenvalues of Lk using bk and βk, which are the minimum and maximum vertex degrees of G{k}, respectively. Note that β1 denotes the maximum number of interacting neighbors each spin experiences in the Heisenberg ferromagnet. To bound bk and βk, note that every vertex in G{k} is a set of vertices in G with k elements. Therefore, the vertex degree of {x1, …, xk} in G{k} is just the size of the edge-boundary of {x1, …, xk} in G. Thus, b_k \ge c\,k^{1-1/\delta} whenever G has dimension δ with isoperimetric number c. Also, when β is the maximum vertex degree of G, we trivially have \beta_k \le k\beta_1. Hence, Corollary V.7 implies that the isoperimetric number ck of G{k} satisfies c_k \ge \frac{a_k\, n^{1-1/\delta_k}}{n-k+1}, where every vertex-induced subgraph of G with k − 1 deleted vertices has dimension δk with isoperimetric number ak. The number of edges in G{k} is at most \beta_k \binom{n}{k}/2, where n is the number of spins. Then, if δk > 2 for k = 1, …, ⌊n/2⌋, Theorem V.8 implies that
\lambda_j(L_k) \ge \frac{c\,k^{1-1/\delta}\, a_k^2}{16\,e\,(k\beta_1)^2}\left(\frac{n^{1-1/\delta_k}}{n-k+1}\right)^2\left(\frac{\delta_k-2}{\delta_k-1}\right)^2\left(\frac{j}{9\binom{n}{k}}\right)^{2/\delta_k}.
(5.30)
To numerically estimate ak, it suffices to numerically compute the isoperimetric numbers of graphs K ∈ V(G, k) with vertex set V(K) and edge set E(K). To find the isoperimetric number of K, we need to solve its corresponding edge-isoperimetric problem (EIP) on K, which involves finding
\min\{|\partial X| : X \subseteq V(K),\ |X| = j\}
(5.31)
for every 1 ≤ j ≤ |V(K)|/2. While solving the EIP exactly is NP-hard,57,58 we conjecture that there can be approximation algorithms to approximately solve the EIP in polynomial time.
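In the absence of the approximation algorithm conjectured below (Conjecture V.10), the edge-isoperimetric profile of a small graph K can at least be computed by exhaustive search; the exponential-time sketch below is our own and is meant only to illustrate the quantities involved.

from itertools import combinations

def edge_isoperimetric_profile(vertices, edges):
    # e_j = min { |boundary(X)| : X subset of V(K), |X| = j }, by brute force.
    profile = {}
    for j in range(1, len(vertices) // 2 + 1):
        profile[j] = min(sum(1 for u, v in edges if (u in X) != (v in X))
                         for X in map(set, combinations(vertices, j)))
    return profile

# Example: the edge-isoperimetric profile of the 3 x 3 grid graph.
V = [(r, c) for r in range(3) for c in range(3)]
E = [(a, b) for a in V for b in V if a < b and abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1]
print(edge_isoperimetric_profile(V, E))   # {1: 2, 2: 3, 3: 3, 4: 4}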

Conjecture V.10.

Let G = (V, E) be a graph. For every k = 1, …, ⌊|V|/2⌋, let ek = min{|∂X| : X ⊆ V, |X| = k}. Then, for every ε > 0 and for every k = 1, …, ⌊|V|/2⌋, there exists a polynomial time approximation algorithm that computes ek′ such that (1 − ε)ek ≤ ek′ ≤ ek.

A reason why Conjecture V.10 might be true is that for a multitude of different NP-hard problems, there do exist approximation algorithms that have efficient runtimes.59 If our Conjecture V.10 holds, then lower bounds on the eigenvalues can be evaluated in O(poly(n) n^{k-1}) time with O(n) memory. In contrast, computing the eigenvalues of Lk directly in practice requires a computer running in O(n^{3k}) time with O(n^{2k}) memory. Even using the best asymptotic algorithm for matrix multiplication would require at least O(n^{2k}) time60 and O(n^{2k}) memory.

In this paper, we obtain many bounds on the spectrum of the ferromagnetic HHs. For this, we rely on tools from graph theory and matrix analysis. Obviously, with these bounds on the eigenvalues of the Heisenberg ferromagnet, one can easily compute bounds on thermodynamic quantities of the corresponding Heisenberg models such as free energy.

With regards to upper bounds based on graph distances, there remains a potential to further tighten our bounds by optimizing over the partitions used in Eq. (4.22) of Corollary 4.4 in Ref. 34. This is, however, beyond the scope of the current paper, and we leave this for future investigation. With regards to the lower bounds based on isoperimetric inequalities, we wish to point out that the edge-isoperimetric problem for the Johnson graph, also known as the problem of Kleitman and West,61 remains unsolved. Given this fact, better edge-isoperimetric inequalities for the Johnson graph will improve the edge-isoperimetric inequalities of the symmetric product of finite graphs given in Corollary V.7. Advances in the theory of the graph expansion properties of vertex-induced subgraphs will also improve the bounds given in this corollary. Directly deriving lower bounds on the combinatorial Laplacian of a graph from discrete Sobolev inequalities can also help to improve the constants involved in the bound. Moreover, a polynomial-time approximation algorithm for solving the edge-isoperimetric problem for graphs (Conjecture V.10) would, together with the methods already in this paper, yield a polynomial-time algorithm for computing lower bounds for the eigenvalues of the ferromagnetic HH.

To recap, in the spin-half case, the computational basis of the ferromagnetic HM can be represented by binary strings. Each binary string is represented as a vertex, and interactions are represented as edges between the vertices. In the spin-half case, each exchange interaction is equivalent to a swap operator and acts as a transposition on the binary strings. The relationships between different binary strings under the transpositions that correspond to the interactions are represented as a graph. Because transpositions leave the Hamming weight of these binary strings invariant, the HH naturally decomposes into a direct sum of graphs labeled by all the possible Hamming weights from 0 to n.

One might wonder how the results here could generalize to the spin-S case. We briefly sketch how one might proceed to achieve this. We can observe that the computational basis of the ferromagnetic HM can be represented by (2S + 1)-ary strings. We can represent these (2S + 1)-ary strings as vertices on a graph, and interactions as relationships between the vertices. In this representation, the spin-S exchange operator maps a (2S + 1)-ary string to a linear combination of (2S + 1)-ary strings. Since one can show that the coefficients of this linear combination are non-negative, if all non-zero exchange constants are the same, the coefficients can be rescaled to allow us to interpret them as probabilities of transitions from one vertex to another vertex. Since the spin-S exchange operator conserves total spin, the (2S + 1)-ary strings naturally partition into disjoint subsets, where strings in different partitions do not interact, and strings in the same partition can have their interactions represented as a Markov model. We expect the spectrum of the HH to thereby be related to the spectrum of the associated Markov models. Markov models describe stochastic transitions between a set of discrete states and are well studied. We therefore expect that connections between the theory of Markov models and spin-S HMs can bring similar insights into bounding the spectrum of spin-S HMs.

Y.O. thanks Robert Seiringer and anonymous referees for their comments and recommendations that have helped to improve this manuscript. Y.O. acknowledges support from the Singapore National Research Foundation under NRF Award No. NRF-NRFF2013-01, the U.S. Air Force Office of Scientific Research under AOARD Grant No. FA2386-18-1-4003, and the Singapore Ministry of Education. This work was supported by the EPSRC (Grant No. EP/M024261/1).

1. W. Heisenberg, Z. Phys. 49, 619 (1928).
2. N. Motoyama, H. Eisaki, and S. Uchida, Phys. Rev. Lett. 76, 3212 (1996).
3. C. Chung, J. Marston, and R. H. McKenzie, J. Phys.: Condens. Matter 13, 5159 (2001).
4. D. Thouless, Proc. Phys. Soc. 86, 893 (1965).
5. S. Blundell, Magnetism in Condensed Matter, Oxford Master Series in Condensed Matter Physics, 1st ed. (Oxford University Press, 2003).
6. L.-M. Duan, E. Demler, and M. D. Lukin, Phys. Rev. Lett. 91, 090402 (2003).
7. H. Tamura, K. Shiraishi, and H. Takayanagi, Jpn. J. Appl. Phys., Part 2 43, L691 (2004).
8. M. B. Ruskai, Phys. Rev. Lett. 85, 194 (2000).
9. H. Pollatsek and M. B. Ruskai, Linear Algebra Appl. 392, 255 (2004).
10. Y. Ouyang, Phys. Rev. A 90, 062317 (2014).
11. Y. Ouyang and J. Fitzsimons, Phys. Rev. A 93, 042340 (2016).
12. Y. Ouyang, Linear Algebra Appl. 532, 43 (2017).
13. Y. Ouyang, preprint arXiv:1904.01458 (2019).
14. B. D. Cullity and C. D. Graham, Introduction to Magnetic Materials (John Wiley & Sons, 2011).
15. D. Jiles, Introduction to Magnetism and Magnetic Materials (CRC Press, 2015).
16. D. P. DiVincenzo, D. Bacon, J. Kempe, G. Burkard, and K. B. Whaley, Nature 408, 339 (2000).
17. N. D. Mermin and H. Wagner, Phys. Rev. Lett. 17, 1133 (1966).
18. F. R. Chung, Spectral Graph Theory (American Mathematical Society, 1997), Vol. 92.
19. H. Bethe, Z. Phys. 71, 205 (1931).
20. F. Haldane, Phys. Lett. A 93, 464 (1983).
21. L. D. Faddeev and L. A. Takhtadzhyan, J. Sov. Math. 24, 241 (1984).
22. T. Koma, Prog. Theor. Phys. 78, 1213 (1987).
23. S. Eggert, I. Affleck, and M. Takahashi, Phys. Rev. Lett. 73, 332 (1994).
24. T. Kennedy, Commun. Math. Phys. 100, 447 (1985).
25. T. Kennedy, J. Phys.: Condens. Matter 2, 5737 (1990).
26. Y. Ogata, Commun. Math. Phys. 348, 847 (2016).
27. M. Correggi, A. Giuliani, and R. Seiringer, Europhys. Lett. 108, 20003 (2014).
28. M. Correggi, A. Giuliani, and R. Seiringer, Commun. Math. Phys. 339, 279 (2015).
29. B. S. Shastry and B. Sutherland, Physica B+C 108, 1069 (1981).
30. B. S. Shastry, Phys. Rev. Lett. 60, 639 (1988).
31. G. Baker, Jr., H. Gilbert, J. Eve, and G. Rushbrooke, Phys. Lett. A 25, 207 (1967).
32. P. Caputo, T. Liggett, and T. Richthammer, J. Am. Math. Soc. 23, 831 (2010).
33. D. A. Spielman and S.-H. Teng, SIAM J. Matrix Anal. Appl. 35, 835 (2014).
34. F. Chung, A. Grigor'yan, and S. Yau, Tsing Hua Lectures on Geometry & Analysis, Hsinchu, 1990–1991 (International Press, 1997), p. 79.
35. F. R. K. Chung and S.-T. Yau, Combinatorics, Probab. Comput. 4, 11 (1995).
36. T. Rudolph, "Constructing physically intuitive graph invariants," e-print arXiv:quant-ph/0206068v1 (2002).
37. R. Fabila-Monroy, D. Flores-Peñaloza, C. Huemer, F. Hurtado, J. Urrutia, and D. R. Wood, Graph Combinatorics 28, 365 (2012).
38. K. Audenaert, C. Godsil, G. Royle, and T. Rudolph, J. Comb. Theory, Ser. B 97, 74 (2007).
39. A. Alzaga, R. Iglesias, and R. Pignol, J. Comb. Theory, Ser. B 100, 671 (2010).
40. K. Yamanaka, E. D. Demaine, T. Ito, J. Kawahara, M. Kiyomi, Y. Okamoto, T. Saitoh, A. Suzuki, K. Uchizawa, and T. Uno, Theor. Comput. Sci. 586, 81 (2015).
41. J. Leaños and A. L. Trujillo-Negrete, Graphs Combinatorics 34, 777 (2018).
42. A. Schrijver, Combinatorial Optimization: Polyhedra and Efficiency (Springer, Berlin-Heidelberg-New York, 2003).
43. M. Ostrovskii, Quaestiones Math. 28, 501 (2005).
44. C. Godsil and G. Royle, Algebraic Graph Theory, Graduate Texts in Mathematics Vol. 207 (Springer-Verlag, New York, 2001).
45. P. Delsarte, "An algebraic approach to the association schemes of coding theory," Ph.D. thesis, Philips Research Laboratories, 1973.
46. E. Bannai and T. Ito, Algebraic Combinatorics (Benjamin/Cummings, Menlo Park, 1984).
47. A. E. Brouwer and W. H. Haemers, Spectra of Graphs (Springer Science & Business Media, 2011).
48. P. Delsarte and V. I. Levenshtein, IEEE Trans. Inf. Theory 44, 2477 (1998).
49. R. Merris, Linear Algebra Appl. 197, 143 (1994).
50. S. Geršgorin, Bulletin de l'Académie des Sciences de l'URSS, Classe des Sciences Mathématiques et naturelles (l'Académie des Sciences de l'URSS, 1931), p. 749.
51. R. S. Varga, Geršgorin and His Circles, 1st ed. (Springer-Verlag, 2004).
52. F. R. Chung, A. Grigor'yan, and S.-T. Yau, Adv. Math. 117, 165 (1996).
53. N. Alon, Combinatorica 6, 83 (1986).
54. L. E. Payne, SIAM Rev. 9, 453 (1967).
55. J.-P. Tillich, Discrete Math. 213, 291 (2000).
56. R. Bhatia, Matrix Analysis (Springer-Verlag, 1997).
57. M. R. Garey, D. S. Johnson, and L. Stockmeyer, Theor. Comput. Sci. 1, 237 (1976).
58. U. Brandes and D. Fleischer, J. Graph Algorithms Appl. 13, 119 (2009).
59. D. S. Hochbaum, Approximation Algorithms for NP-Hard Problems (PWS Publishing Co., 1996).
60. J. Demmel, I. Dumitriu, and O. Holtz, Numerische Math. 108, 59 (2007).
61. L. Harper, Discrete Math. 93, 169 (1991).