We prove a local law and eigenvector delocalization for general Wigner-type matrices. Our methods allow us to reach the best possible interval length and optimal eigenvector delocalization in the dense case, and the first results of this kind in the sparse case, down to $p = \frac{g(n)\log n}{n}$ with $g(n) \to \infty$. We specialize our results to the case of the stochastic block model, and we also obtain a local law for the case when the number of classes is unbounded.

The Stochastic Block Model (SBM), first introduced by mathematical sociologists,22 is a widely used random graph model for networks with communities. In the last decade, there has been considerable activity1,2,8–11,25 in understanding the spectral properties of matrices associated with the SBM and with other generalized graph models, in particular in connection with spectral clustering methods.

Stochastic Block Models represent a generalization of Erdős-Rényi graphs to allow for more heterogeneity. Roughly speaking, an SBM graph starts with a partitioning of the vertices into classes, followed by placing an Erdős-Rényi graph on each class (independent edges, each occurring with the same given probability depending on the class), and connecting vertices in two different blocks by independent edges, again with the same given probability which this time depends on the pair of classes. The random matrix associated with this graph is the adjacency matrix, which is a random block matrix whose entries have Bernoulli distributions, the parameters of which are dictated by the inter- and intra-block probabilities mentioned above.

Specifically, suppose for ease of numbering that [n] = V1 ∪ V2 ∪ ⋯ ∪ Vd for some integer d, with |Vi| = Ni for i = 1, …, d. Suppose that for any pair (k, l) ∈ [d] × [d] with k ≠ l there is a pkl ∈ [0, 1] such that for any i ∈ Vk, j ∈ Vl,

$a_{ij} = \begin{cases} 1, & \text{with probability } p_{kl},\\ 0, & \text{otherwise.} \end{cases}$

Also, if k = l, there is pk such that for any i, j ∈ Vk,

$a_{ij} = \begin{cases} 0, & \text{if } i = j,\\ 1, & \text{with probability } p_k,\\ 0, & \text{otherwise.} \end{cases}$

Each diagonal block is an adjacency matrix of a simple Erdős-Rényi graph, and the off-diagonal blocks are adjacency matrices of bipartite graphs. While there is interest in studying the O(1) variance case [corresponding to all pkl's and pk's being O(1), the “dense” case], special interest is given to the sparse case [when the pkl's and pk's are o(1), and more specifically, when the average vertex degrees, given by npkl as well as npk, grow very slowly with n or are even a large constant].
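To make the construction concrete, here is a minimal numerical sketch of the sampling procedure just described (the helper `sample_sbm` and its arguments are our own illustrative names, not notation from the text):

```python
import numpy as np

def sample_sbm(block_sizes, P, rng):
    """Sample the symmetric adjacency matrix of an SBM.

    block_sizes: list of N_i; P: d x d symmetric matrix of edge probabilities
    (P[k][k] plays the role of p_k, P[k][l] of p_kl); rng: numpy Generator.
    """
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    n = labels.size
    # Edge probability for every pair (i, j), read off from the block labels.
    probs = P[np.ix_(labels, labels)]
    upper = rng.random((n, n)) < probs       # independent coin flips
    A = np.triu(upper, k=1).astype(int)      # keep i < j only; zero diagonal
    return A + A.T                           # symmetrize

rng = np.random.default_rng(0)
P = np.array([[0.5, 0.1],
              [0.1, 0.4]])
A = sample_sbm([30, 20], P, rng)
```

The entries of `A` are Bernoulli with block-dependent parameters, exactly as in the display above.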

The adjacency matrices of SBM graphs are themselves a particular form of general Wigner-type matrices, which have been shown to exhibit universal properties in the dense case in Ref. 5. We detail the connection to the broader field of random matrix universality studies in Sec. I B.

At the same time as the increased interest in the spectra of the SBM, the universality studies in random matrix theory pioneered by Refs. 30 and 13 have been gaining ground at a tremendous pace. The pioneering work on Wigner matrices started in Refs. 30 and 13 has by now been extended to cover generalized Wigner matrices,18 Erdős-Rényi matrices (including sparse ones),14–16 and general Wigner-type matrices.5 All such studies start by proving a local law at “optimal” scale, that is, on intervals of length $(\log n)^{\alpha}/n$ or $n^{-1+\varepsilon}$, which is necessary for the complicated machinery of either Ref. 30 or Ref. 17 to translate the local law into universality of eigenstatistics on “optimal-length” intervals.

In this paper, we prove a main theorem about (dense) generalized Wigner matrices and then apply it to cover sparse generalized Wigner matrices; finally, we show that our results translate to graph models like the SBM with bounded or unbounded number of blocks. We provide below a brief review of universality studies related to graph-based models.

After the original work on Wigner matrices, the first step in the direction of graph models, or graph-based matrices, came with Ref. 31, where the authors proved a local law for Erdős-Rényi graphs. Subsequently, Refs. 16 and 15 superseded these results in the slightly denser cases and showed bulk universality in the regime $p \ge n^{-1/3}$. The sparsity of the model is important here because it makes the problem more difficult. A more recent paper24 refined the results of Refs. 15 and 16 and made them applicable for $p \ge n^{-1+\varepsilon}$ for any (fixed) ε > 0. Subsequently, in a departure from studying adjacency matrices, Ref. 23 proved bulk universality for the eigenvalue statistics of Laplacian matrices of Erdős-Rényi graphs in the regime $p \ge n^{-1+\varepsilon}$ for fixed ε > 0. Very recently, Ref. 21 proved a local law for the adjacency matrix of the Erdős-Rényi graph with $p \ge C\log n/n$ for some constant C > 0.

Finally, Ref. 3 examined a large class of sparse random graph-based matrices (two-atom and three-atom entry distributions), proved a local law down to intervals of length $n^{-1+\varepsilon}$, and deduced (by the same means employed in Ref. 24) a bulk universality theorem. This is different from our results since our sparse matrices have entries that are not necessarily atomic but come from the product of a Bernoulli variable and a potentially continuous one (see Sec. III A); however, the case of Ref. 3 does seem to cover the sparse SBM model for $p \ge n^{-1+\varepsilon}$. As an interesting aside, in the general case, there may not be an asymptotic global law (i.e., a limiting empirical spectral distribution); the cases we study here (SBM with bounded and unbounded number of classes) are specific enough that we can also prove the asymptotic global law. However, as it turns out, in the case of the SBM with an unbounded number of blocks, the prediction in the local law must still be made using the n-step approximation to the global law, not the global law itself, since convergence to the global law is not uniform.

Some of the methods used for examining the universality of these graph models rely on the work in Refs. 4 and 5, where a general (dense) Wigner-type matrix model is considered and universality is proved up to intervals of length 1/n1−ε. We will also appeal to Ref. 4 since it will help establish the existence of limiting distributions and the stability of their Stieltjes transforms.

We should mention the significant body of the literature that deals with global limits for the empirical spectral distributions of block matrices. Starting with the seminal work of Girko,20 the topic was treated in Refs. 19 and 29 from a free probability perspective; more recently, Refs. 6 and 12 have examined the topic again for finitely many blocks (a claim in Ref. 12 that the method extends to a growing number of blocks is incorrect). The global law for stochastic block models with a growing number of blocks was derived in Ref. 34 via graphon theory.

The main difference between the results stemming from the seminal studies of Refs. 30 and 13, respectively, lies in the conditions imposed on the matrix entries: the former approach to universality is based on the “four moment match” condition but imposes relatively weak conditions on the tails, while the latter imposes stronger conditions on the tails. In later studies, these stronger conditions have included bounded moments.5,18,24 While the methods of Ref. 13 have been extended to increasingly more general matrix models, the methods of Ref. 30 have been used to focus on reaching the best (smallest) possible interval lengths for the classical Wigner case, via methods whose basis was set in Ref. 32.

This paper bridges the two approaches to obtain a local law and eigenvector delocalization for dense and sparse general Wigner-type matrices and for the SBM.

Our main result is a local law in the bulk down to interval length $\frac{CK^2\log n}{n}$ for general Wigner-type matrices (see Sec. II) whose entries are compactly supported almost surely in [−K, K], employing some of the ideas from Refs. 30 and 32. Our result is more refined than the one from Ref. 5, where the smallest interval length was $O(n^{-1+\varepsilon})$ and bounded moments were assumed. With additional assumptions (either four-moment matching, as in the case of Ref. 30, or finite moments, as in Ref. 13 and subsequent studies), universality down to this smaller interval length should follow.

In addition to this main result, we also obtain the first local laws for sparse general Wigner-type matrices (see Sec. III A), down to interval length $\frac{CK^2\log n}{np}$. We specialize our results to the sparse SBM with finitely many blocks, where a limiting law exists. Finally, we extend these results to an unbounded number of blocks for the SBM, under certain conditions (see Sec. III B).

It should be said that our local laws for sparse general Wigner-type matrices are not sharp enough to yield universality unless p is $\omega(n^{-\varepsilon})$ for any ε > 0. This is an artifact of the use of the methods of Ref. 30 and is also observable in Ref. 31. It is to be expected that they can be refined (by us or by other researchers) in the near future to a point where universality can be deduced.

Let $M_n \coloneqq (\xi_{ij})_{1\le i,j\le n}$ be a random Hermitian matrix with variance profile $S_n = (s_{ij})_{1\le i,j\le n}$ such that the ξij, 1 ≤ i ≤ j ≤ n, are independent with

$\mathbb{E}\xi_{ij} = 0, \quad \mathbb{E}|\xi_{ij}|^2 = s_{ij},$

and compactly supported almost surely, i.e., |ξij| ≤ K for some $K = o\left(\sqrt{n/\log n}\right)$.

For the variance profile Sn, we assume

$c \le s_{ij} \le 1$

for some constant c > 0. Note that this is equivalent to c ≤ sij ≤ C by scaling. Define $W_n \coloneqq \frac{M_n}{\sqrt{n}}$. The Stieltjes transform of the empirical spectral distribution of Wn is given by

$s_n(z) \coloneqq \frac{1}{n}\operatorname{tr}(W_n - zI)^{-1}.$

We will show that sn(z) can be approximated by the solution of the following quadratic vector equation studied in Ref. 4:

$m_n(z) = \frac{1}{n}\sum_{k=1}^{n} g_n^{(k)}(z),$
(1)
$-\frac{1}{g_n^{(k)}(z)} = z + \frac{1}{n}\sum_{l=1}^{n} s_{kl}\, g_n^{(l)}(z), \quad 1 \le k \le n.$
(2)

From Theorem 2.1 in Ref. 4, Eq. (2) has a unique set of solutions $g_n^{(k)}(z) : \mathbb{H} \to \mathbb{H}$, $1 \le k \le n$, which are analytic functions on the complex upper half plane $\mathbb{H} \coloneqq \{z \in \mathbb{C} : \operatorname{Im}(z) > 0\}$. The unique solution mn(z) in Eq. (1) is the Stieltjes transform of a probability measure ρn with supp(ρn) ⊂ [−2, 2] such that

$\rho_n(x) \coloneqq \lim_{\eta \downarrow 0} \frac{1}{\pi}\operatorname{Im}(m_n(x + i\eta)).$
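For intuition, the system (1) and (2) can be solved numerically by a plain fixed-point iteration, which in practice converges when Im z is bounded away from zero. The sketch below is our own illustrative code; the constant profile $s_{ij} \equiv 1$ is chosen because ρn then reduces to the semicircle law, so the output can be compared with a known density:

```python
import numpy as np

def solve_qve(S, z, iters=2000):
    """Fixed-point iteration for -1/g_k = z + (1/n) sum_l s_kl g_l, Im z > 0."""
    n = S.shape[0]
    g = np.full(n, -1.0 / z)             # initial guess in the upper half plane
    for _ in range(iters):
        g = -1.0 / (z + S @ g / n)
    return g

n = 200
S = np.ones((n, n))                      # constant profile: semicircle law
z = 0.0 + 0.05j
g = solve_qve(S, z)
m = g.mean()                             # m_n(z) = (1/n) sum_k g_n^(k)(z)
rho0 = m.imag / np.pi                    # smoothed density near x = 0
residual = np.abs(-1.0 / g - (z + S @ g / n)).max()
```

At x = 0 the semicircle density equals 1/π, and `rho0` approximates it up to the η = 0.05 smoothing in the Stieltjes inversion above.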

We use the following definition for bulk intervals of ρn.

Definition II.1.

An interval I of a probability density function ρ on $\mathbb{R}$ is a bulk interval if there exists some fixed ε > 0 such that ρ(x) ≥ ε for any x ∈ I.

We obtain the following local law for Mn in the bulk.

Theorem II.2
(Local law in the bulk). Let Mn be a general Wigner-type matrix and ρn be the probability measure corresponding to Eqs. (1) and (2). For any constants δ, C1 > 0, there exists a constant C2 > 0 such that with probability at least $1 - n^{-C_1}$ the following holds. For any bulk interval I of length $|I| \ge \frac{C_2 K^2 \log n}{n}$, the number of eigenvalues NI of Wn in I obeys the concentration estimate
$\left|N_I - n\int_I \rho_n(x)\,dx\right| \le \delta n|I|.$
(3)

As a consequence, we obtain an optimal upper bound for eigenvectors corresponding to eigenvalues of Wn in a bulk interval.

Theorem II.3
(Optimal delocalization of eigenvectors in the bulk). Let Mn be a general Wigner-type matrix. For any constant C1 > 0 and any bulk interval I such that eigenvalue λi(Wn) ∈ I, with probability at least $1 - n^{-C_1}$, there is a constant C2 such that the corresponding unit eigenvector ui(Wn) satisfies
$\|u_i(W_n)\|_\infty \le \frac{C_2 K \log^{1/2} n}{\sqrt{n}}.$

Remark II.4.

Theorems II.2 and II.3 also hold for general Wigner-type matrices whose entries ξij are sub-gaussian with sub-gaussian norm bounded by K. As mentioned in Remark 4.2 in Ref. 27, the proof follows in the same way by using the inequality in Theorem 2.1 in Ref. 28 for sub-gaussian concentration instead of Lemma 1.2 in Ref. 32 for K-bounded entries.

We use standard methods from Ref. 30, adapted to fit the model considered here.

#### 1. Proof of Theorem II.2

For any $0 < \varepsilon < \frac{1}{2}$ and constant C1 > 0, define a region

$D_{n,\varepsilon} \coloneqq \left\{ z \in \mathbb{C} : \rho_n(\operatorname{Re}(z)) \ge \varepsilon,\ \operatorname{Im}(z) \ge \frac{C_3^2 K^2 \log n}{n\delta^6} \right\}$
(4)

for some constant C3 > 0 to be decided later.

Let Wn,k be the matrix Wn with the kth row and column removed, and ak be the kth row of Wn with the kth element removed.

Let $(W_n - zI)^{-1} \coloneqq (q_{ij}^{(n)})_{1\le i,j\le n}$. From Schur’s complement lemma (Theorem A.4 in Ref. 7), we have

$q_{kk} = \frac{1}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k},$

where

$Y_k = a_k^*\,(W_{n,k} - zI)^{-1} a_k.$

Let

$f_n^{(k)}(z) \coloneqq \frac{1}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k},$
(5)

then we can write sn(z) as

$s_n(z) = \frac{1}{n}\operatorname{tr}(W_n - zI)^{-1} = \frac{1}{n}\sum_{k=1}^{n} f_n^{(k)}(z).$
(6)
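The Schur complement identity behind the formula for qkk is exact and easy to check numerically. The sketch below is our own illustrative code (Gaussian entries are chosen purely for convenience; any entry distribution works for this deterministic identity):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, z = 8, 3, 0.2 + 0.5j

M = rng.standard_normal((n, n))
M = (M + M.T) / np.sqrt(2)                # real symmetric (Hermitian) matrix
W = M / np.sqrt(n)

Q = np.linalg.inv(W - z * np.eye(n))      # resolvent (W_n - zI)^{-1}
idx = [i for i in range(n) if i != k]
W_k = W[np.ix_(idx, idx)]                 # W_{n,k}: kth row and column removed
a_k = W[k, idx]                           # kth row of W_n, kth entry removed

Y_k = a_k @ np.linalg.inv(W_k - z * np.eye(n - 1)) @ a_k
q_kk = 1.0 / (W[k, k] - z - Y_k)          # note W[k, k] = xi_kk / sqrt(n)
```

The computed `q_kk` agrees with the (k, k) entry of the resolvent to machine precision.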

We first estimate Yk to derive a perturbed version of (2). Let $(W_{n,k} - zI)^{-1} \coloneqq (q_{ij}^{(n,k)})_{1\le i,j\le n-1}$, and let $S_n^{(k)}$ be the diagonal matrix whose diagonal elements are the kth row of Sn with the kth entry removed. We have

$\mathbb{E}[Y_k \mid W_{n,k}] = \mathbb{E}[a_k^*(W_{n,k} - zI)^{-1} a_k \mid W_{n,k}] = \sum_{i=1}^{n-1} q_{ii}^{(n,k)}\,\mathbb{E}|a_{ki}|^2 = \frac{1}{n}\sum_{i=1}^{n-1} q_{ii}^{(n,k)} s_{ki} = \frac{1}{n}\operatorname{tr}[(W_{n,k} - zI)^{-1} S_n^{(k)}].$
(7)

The following two lemmas give estimates for Yk; the proofs are deferred to Secs. II B 1 and II B 2.

Lemma II.5.
Let $\Sigma_n^{(k)}$ be the diagonal matrix whose diagonal elements are the kth row of Sn. For any k, 1 ≤ k ≤ n, and any fixed z with $\operatorname{Im}(z) \ge \frac{K^2 C_3^2 \log n}{n\delta^6}$,
$\mathbb{E}[Y_k \mid W_{n,k}] = \frac{1}{n}\operatorname{tr}[(W_n - zI)^{-1}\Sigma_n^{(k)}] + O\left(\frac{1}{n\eta}\right),$
where the constant in the $O\left(\frac{1}{n\eta}\right)$ term is independent of z.

A similar estimate holds for Yk itself.

Lemma II.6.
For any constant C > 0, one can choose the constant C3 defined in (4) sufficiently large such that for any k, 1 ≤ k ≤ n, and z ∈ Dn,ε, one has
$Y_k - \frac{1}{n}\operatorname{tr}[(W_n - zI)^{-1}\Sigma_n^{(k)}] = o(\delta^2)$
(8)
with probability at least $1 - n^{-C-10}$.
With the help of Lemmas II.5 and II.6, note that, since $\frac{|\xi_{kk}|}{\sqrt{n}} = o(\delta^2)$,
$\frac{1}{n}\operatorname{tr}[(W_n - zI)^{-1}\Sigma_n^{(k)}] = \frac{1}{n}\sum_{l=1}^{n} s_{kl} f_n^{(l)}(z),$
(9)
and combining (5), (8), and (9), we have
$f_n^{(k)}(z) + \frac{1}{\frac{1}{n}\sum_{l=1}^{n} s_{kl} f_n^{(l)}(z) + z + o(\delta^2)} = 0, \quad 1 \le k \le n,$
(10)
with probability at least $1 - n^{-C-9}$.

The next step involves using the stability analysis of quadratic vector equations provided in Ref. 4 to compare the solutions to (10) and (2). We have the following estimate.

Lemma II.7.
For any constant C > 0, one can choose C3 in (4) sufficiently large such that
$\sup_{1\le k\le n} |f_n^{(k)}(z) - g_n^{(k)}(z)| = o(\delta^2)$
(11)
for all z ∈ Dn,ε uniformly, with probability at least $1 - n^{-C-2}$.
With Lemma II.7, we have that for any C > 0, there exists C3 > 0 in (4) such that
$|s_n(z) - m_n(z)| = \left|\frac{1}{n}\sum_{k=1}^{n} f_n^{(k)}(z) - \frac{1}{n}\sum_{k=1}^{n} g_n^{(k)}(z)\right| = o(\delta^2)$
(12)
uniformly for all z ∈ Dn,ε with probability at least $1 - n^{-C}$.

To complete the Proof of Theorem II.2, we need the following well-known connection between the Stieltjes transform and empirical spectral distribution, as shown, for example, in Lemma 64 in Ref. 30 and also Lemma 4.1 in Ref. 32.

Lemma II.8.
Let Mn be a general Wigner-type matrix and let ε, δ > 0. For any constant C1 > 0, there exists a constant C > 0 such that the following holds: if one has the bound
$|s_n(z) - m_n(z)| \le \delta$
with probability at least $1 - n^{-C}$ uniformly for all z ∈ Dn,ε, then for any bulk interval I with $|I| \ge \max\left\{2\eta, \frac{\eta}{\delta}\log\frac{1}{\delta}\right\}$, where $\eta = \frac{C_3^2 K^2 \log n}{n}$, one has
$\left|N_I - n\int_I \rho_n(x)\,dx\right| \le \delta n|I|$
with probability at least $1 - n^{-C_1}$.
From (12), for any constant C1 > 0, we can choose C3 in (4) large enough such that
$|s_n(z) - m_n(z)| \le \delta$
uniformly for all z ∈ Dn,ε with probability $1 - n^{-C}$, where C is the constant in the assumption of Lemma II.8; Theorem II.2 then follows from Lemma II.8.

#### 2. Proof of Theorem II.3

The proof is based on Lemma 41 from Ref. 30 given below.

Lemma II.9.
Let Wn be an n × n Hermitian matrix and Wn,k be the submatrix of Wn with the kth row and column removed, let ui(Wn) be a unit eigenvector of Wn corresponding to λi(Wn), and let xk be the kth coordinate of ui(Wn). Suppose that none of the eigenvalues of Wn,k are equal to λi(Wn). Let ak be the kth row of Wn with the kth entry removed. Then
$|x_k|^2 = \frac{1}{1 + \sum_{j=1}^{n-1} (\lambda_j(W_{n,k}) - \lambda_i(W_n))^{-2} |u_j(W_{n,k})^* a_k|^2},$
(13)
where uj(Wn,k) is a unit eigenvector corresponding to λj(Wn,k).
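Since the identity (13) is purely deterministic, it can be checked directly on a small random matrix; the following sketch (our own illustrative code) compares both sides for one coordinate:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, i = 8, 0, 3

M = rng.standard_normal((n, n))
W = (M + M.T) / (np.sqrt(2) * np.sqrt(n))   # Hermitian with Wigner scaling

evals, evecs = np.linalg.eigh(W)
lam_i = evals[i]
x_k = evecs[k, i]                           # kth coordinate of u_i(W_n)

idx = [j for j in range(n) if j != k]
W_k = W[np.ix_(idx, idx)]                   # minor W_{n,k}
a_k = W[k, idx]                             # kth row with kth entry removed

mu, V = np.linalg.eigh(W_k)                 # spectrum of the minor
weights = (V.T @ a_k) ** 2                  # |u_j(W_{n,k})^* a_k|^2
rhs = 1.0 / (1.0 + np.sum(weights / (mu - lam_i) ** 2))
```

Here `x_k ** 2` and `rhs` agree up to floating-point error, as (13) predicts.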
Another lemma we need is a weighted projection lemma for random vectors with different variances. It is a slight generalization of Lemma 1.2 in Ref. 32. Note that in the below,
$\mathbb{E}|u_j^* X|^2 = \operatorname{tr}(u_j u_j^* \Sigma),$
and the proof follows verbatim, as in Ref. 32.

Lemma II.10.
Let X = (ξ1, …, ξn) be a K-bounded random vector in $\mathbb{C}^n$ such that $\operatorname{Var}(\xi_i) = \sigma_i^2$, $0 \le \sigma_i^2 \le 1$. Then there are constants C, C′ > 0 such that the following holds. Let H be a subspace of dimension d with an orthonormal basis {u1, …, ud}, and let $\Sigma = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2)$. Then, for any 1 ≥ r1, …, rd ≥ 0,
$\mathbb{P}\left(\left|\sum_{j=1}^{d} r_j |u_j^* X|^2 - \sum_{j=1}^{d} r_j \operatorname{tr}(u_j u_j^* \Sigma)\right| \ge t\right) \le C\exp\left(-\frac{C' t^2}{K^2}\right).$
(14)
In particular, by squaring, it follows that
$\mathbb{P}\left(\left|\sum_{j=1}^{d} r_j |u_j^* X|^2 - \sum_{j=1}^{d} r_j \operatorname{tr}(u_j u_j^* \Sigma)\right| \ge 2t\sqrt{\sum_{j=1}^{d} r_j \operatorname{tr}(u_j u_j^* \Sigma)} + t^2\right) \le C\exp\left(-\frac{C' t^2}{K^2}\right).$
(15)
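The expectation identity behind Lemma II.10, $\mathbb{E}|u_j^* X|^2 = \operatorname{tr}(u_j u_j^* \Sigma)$, can be illustrated by Monte Carlo. This is our own sketch; Rademacher entries scaled by σi are one example of a K-bounded vector (here K = 1):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, trials = 6, 3, 20000

sigma2 = rng.uniform(0.2, 1.0, size=n)            # variances sigma_i^2 in [0, 1]
U, _ = np.linalg.qr(rng.standard_normal((n, d)))  # orthonormal basis u_1..u_d

# K-bounded entries: xi_i = +/- sigma_i, so |xi_i| <= 1 and Var(xi_i) = sigma_i^2.
X = rng.choice([-1.0, 1.0], size=(trials, n)) * np.sqrt(sigma2)

emp = ((X @ U) ** 2).mean(axis=0)                 # Monte Carlo E|u_j^* X|^2
exact = (U ** 2 * sigma2[:, None]).sum(axis=0)    # tr(u_j u_j^* Sigma) = u_j^* Sigma u_j
max_dev = np.abs(emp - exact).max()
```

With 20 000 trials the empirical averages match the trace formula to within the expected Monte Carlo fluctuation.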
Below we show how delocalization follows from Lemmas II.9 and II.10 and Theorem II.2. For any C1 > 0 and any λi(Wn) in the bulk, by Theorem II.2, one can find an interval I centered at λi(Wn) with $|I| = \frac{K^2 C_2 \log n}{n}$ for some sufficiently large C2 such that NI ≥ δ1n|I| for some small δ1 > 0 with probability at least $1 - n^{-C_1-3}$. By the Cauchy interlacing law, we can find a set J ⊂ {1, …, n − 1} with |J| ≥ NI/2 such that |λj(Wn,k) − λi(Wn)| ≤ |I| for all j ∈ J. Let Xk be the kth column of Mn with the kth entry removed. Note that from Lemma II.10, by taking rj = 1, j ∈ J, and $t = C_3 K\sqrt{\log n}$ for some sufficiently large constant C3 (depending on C1 and C′) in (14), and using the assumption sij ≥ c, we have
$\sum_{j\in J} |u_j(W_{n,k})^* X_k|^2 \ge \sum_{j\in J} \operatorname{tr}(u_j(W_{n,k}) u_j^*(W_{n,k})\Sigma) - C_3 K\sqrt{\log n} \ge c|J| - C_3 K\sqrt{\log n} \ge \left(c - \frac{C_3}{\sqrt{C_2\delta_1/2}}\right)|J|$
(16)
with probability at least $1 - n^{-C_1-3}$. By choosing C2 sufficiently large, (16) implies
$\sum_{j\in J} |u_j(W_{n,k})^* X_k|^2 \ge C'|J|$
for some constant C′ > 0 with probability at least $1 - n^{-C_1-3}$. By (13),
$|x_k|^2 = \frac{1}{1 + \sum_{j=1}^{n-1}(\lambda_j(W_{n,k}) - \lambda_i(W_n))^{-2}\left|u_j(W_{n,k})^* \frac{X_k}{\sqrt{n}}\right|^2} \le \frac{1}{1 + \sum_{j\in J}(\lambda_j(W_{n,k}) - \lambda_i(W_n))^{-2}\left|u_j(W_{n,k})^* \frac{X_k}{\sqrt{n}}\right|^2} \le \frac{1}{1 + n^{-1}|I|^{-2}\sum_{j\in J}|u_j(W_{n,k})^* X_k|^2} \le \frac{1}{1 + n^{-1}|I|^{-2} C'|J|} \le \frac{2|I|}{C'\delta_1} \le \frac{K^2 C_4^2 \log n}{n}$
for some constant C4 with probability at least $1 - 2n^{-C_1-3}$. Thus, by taking a union bound, $\|u_i\|_\infty \le C_4 K\sqrt{\frac{\log n}{n}}$ with probability at least $1 - n^{-C_1}$ for all 1 ≤ i ≤ n.

We now prove all the lemmas used in the Proof of Theorem II.2.

#### 1. Proof of Lemma II.5

Let $\eta \coloneqq \frac{C_3^2 K^2 \log n}{n\delta^6}$ and $z \coloneqq x + \sqrt{-1}\cdot\eta$. By (7), it suffices to show that for all 1 ≤ k ≤ n,

$\left|\operatorname{tr}[(W_n - zI)^{-1}\Sigma_n^{(k)}] - \operatorname{tr}[(W_{n,k} - zI)^{-1} S_n^{(k)}]\right| \le \frac{1}{\eta}.$
(17)

We will use the following result known as Lemma 1.1 in Chap. 1 of Ref. 20.

Lemma II.11.
Let $\vec{c} = (c_1, \ldots, c_n)$ be a real column vector and $M_n = (\xi_{ij})_{n\times n}$ be a Hermitian matrix. For any z with Im z > 0 and any 1 ≤ k ≤ n, we have
$\vec{c}^{\,T}(M_n - zI)^{-1}\vec{c} - \vec{c}_k^{\,T}(M_{n,k} - zI)^{-1}\vec{c}_k = \frac{c_k^2 - \vec{\xi}_k^{\,*} R_k (2c_k \vec{c}_k) + \vec{\xi}_k^{\,*} R_k \vec{c}_k\, \vec{c}_k^{\,T} R_k \vec{\xi}_k}{\xi_{kk} - z - \vec{\xi}_k^{\,*} R_k \vec{\xi}_k},$
where $R_k = (M_{n,k} - zI)^{-1}$, $\vec{c}_k$ is the vector $\vec{c}$ with the kth coordinate removed, and $\vec{\xi}_k$ is the kth column of Mn with the kth element removed.

We introduce a real random vector $\vec{c} = (c_1, \ldots, c_n)$ whose coordinates are mean zero, independent variables, also independent of Wn, with Var(ci) = ski for 1 ≤ i ≤ n.

Applying Lemma II.11 to Wn and $\vec{c}$, we have $R_k = (W_{n,k} - zI)^{-1}$ and
$\vec{c}^{\,T}(W_n - zI)^{-1}\vec{c} - \vec{c}_k^{\,T}(W_{n,k} - zI)^{-1}\vec{c}_k = \frac{c_k^2 - a_k^* R_k (2c_k \vec{c}_k) + a_k^* R_k \vec{c}_k\, \vec{c}_k^{\,T} R_k a_k}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k}.$
Taking the conditional expectation with respect to $\vec{c}$, conditioned on Wn, we have
$\mathbb{E}\left[\vec{c}^{\,T}(W_n - zI)^{-1}\vec{c} - \vec{c}_k^{\,T}(W_{n,k} - zI)^{-1}\vec{c}_k \mid W_n\right] = \frac{s_{kk} + a_k^* R_k S_n^{(k)} R_k a_k}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k}.$
Calculating the left-hand side yields
$\operatorname{tr}[(W_n - zI)^{-1}\Sigma_n^{(k)}] - \operatorname{tr}[(W_{n,k} - zI)^{-1} S_n^{(k)}] = \frac{s_{kk} + a_k^* R_k S_n^{(k)} R_k a_k}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k}.$
Since we have
$|s_{kk} + a_k^* R_k S_n^{(k)} R_k a_k| \le 1 + |a_k^* R_k S_n^{(k)} R_k a_k| \le 1 + a_k^*\left((W_{n,k} - xI)^2 + \eta^2 I\right)^{-1} a_k,$
and
$\operatorname{Im}\left(\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k\right) = -\eta\left(1 + a_k^*\left((W_{n,k} - xI)^2 + \eta^2 I\right)^{-1} a_k\right),$

(17) holds. This completes the proof of Lemma II.5.

#### 2. Proof of Lemma II.6

We need a preliminary bound on the number of eigenvalues in a short interval. The following lemma is similar to Proposition 66 in Ref. 30.

Lemma II.12.
For any constant C1 > 0, there exists a constant C2 > 0 such that for any interval $I \subset \mathbb{R}$ with $|I| \ge \frac{C_2 K^2 \log n}{n}$, one has
$N_I(W_n) = O(n|I|)$
(18)
with probability at least $1 - n^{-C_1}$.

Proof.
By the union bound, it suffices to show that the probability that (18) fails is less than $n^{-C_1-1}$ for
$|I| = \eta \coloneqq \frac{C_2 K^2 \log n}{n}$
for some sufficiently large C2. By
$\operatorname{Im}\left(s_n(x + \sqrt{-1}\,\eta)\right) = \frac{1}{n}\sum_{i=1}^{n} \frac{\eta}{\eta^2 + (\lambda_i(W_n) - x)^2},$
(19)
it suffices to show that the event
$N_I \ge Cn\eta$
(20)
and
$\operatorname{Im}\left(s_n(x + \eta\sqrt{-1})\right) \ge C$
(21)
fails with probability at least $1 - n^{-C_1-1}$ for some large absolute constant C > 1. Suppose we have (20) and (21); then by (19),
$\frac{1}{n}\sum_{k=1}^{n} \operatorname{Im}\frac{1}{\frac{\xi_{kk}}{\sqrt{n}} - (x + \eta\sqrt{-1}) - Y_k} \ge C.$
Using the bound $\left|\operatorname{Im}\frac{1}{z}\right| \le \frac{1}{|\operatorname{Im}(z)|}$, this implies
$\frac{1}{n}\sum_{k=1}^{n} \frac{1}{|\eta + \operatorname{Im}(Y_k)|} \ge C.$
(22)
Noting that
$W_{n,k} = \sum_{j=1}^{n-1} \lambda_j(W_{n,k})\, u_j(W_{n,k})\, u_j^*(W_{n,k}),$
where uj(Wn,k), 1 ≤ j ≤ n − 1, form an orthonormal basis of eigenvectors of Wn,k, one has
$Y_k = a_k^*(W_{n,k} - zI)^{-1} a_k = \sum_{j=1}^{n-1} \frac{|u_j^*(W_{n,k}) a_k|^2}{\lambda_j(W_{n,k}) - (x + \eta\sqrt{-1})},$
and hence
$\operatorname{Im} Y_k \ge \eta\sum_{j=1}^{n-1} \frac{|u_j^*(W_{n,k}) a_k|^2}{\eta^2 + (\lambda_j(W_{n,k}) - x)^2}.$
On the other hand, from (20), by the Cauchy interlacing theorem, we can find an index set J with |J| ≥ ηn such that λj(Wn,k) ∈ I for all j ∈ J; then we have
$\operatorname{Im}(Y_k) \ge \frac{1}{2\eta}\sum_{j\in J} |u_j^*(W_{n,k}) a_k|^2 = \frac{1}{2\eta}\|P_{H_k} a_k\|^2,$
(23)
where $P_{H_k}$ is the orthogonal projection onto the subspace Hk spanned by the eigenvectors uj(Wn,k), j ∈ J. From (22) and (23), we have
$\frac{1}{n}\sum_{k=1}^{n} \frac{2\eta}{2\eta^2 + \|P_{H_k} a_k\|^2} \ge C.$
(24)
On the other hand, taking rj = 1, 1 ≤ j ≤ d, d = |J|, and $t = C_4 K\sqrt{\log n}$ for some sufficiently large C4 in (15), and using the assumption sij ≥ c, we have that $\|P_{H_k}(a_k)\|^2 = \Omega(\eta)$ with probability at least $1 - O(n^{-C'C_4^2}) \ge 1 - n^{-C_1-5}$. Taking the union bound over all possible choices of J, we conclude that (24) fails with probability at least $1 - n^{-C_1-1}$ once C is chosen sufficiently large, which proves the claim.
Now we are ready to prove Lemma II.6. From Lemma II.5, it suffices to show
$Y_k - \mathbb{E}[Y_k \mid W_{n,k}] = o(\delta^2), \quad 1 \le k \le n,$
(25)
with probability at least $1 - n^{-C-10}$. We can write
$Y_k = \sum_{j=1}^{n-1} \frac{|u_j^*(W_{n,k}) a_k|^2}{\lambda_j(W_{n,k}) - z},$
where $\{u_j(W_{n,k})\}_{j=1}^{n-1}$ are orthonormal eigenvectors of Wn,k. Moreover,
$\mathbb{E}[Y_k \mid W_{n,k}] = \frac{1}{n}\operatorname{tr}[(W_{n,k} - zI)^{-1} S_n^{(k)}] = \frac{1}{n}\operatorname{tr}\left[\sum_{j=1}^{n-1} \frac{1}{\lambda_j(W_{n,k}) - z}\, u_j(W_{n,k})\, u_j^*(W_{n,k})\, S_n^{(k)}\right] = \frac{1}{n}\sum_{j=1}^{n-1} \frac{\operatorname{tr}[u_j(W_{n,k}) u_j^*(W_{n,k}) S_n^{(k)}]}{\lambda_j(W_{n,k}) - z}.$
Let $X_k = \sqrt{n}\, a_k$, and define
$t_j \coloneqq |u_j(W_{n,k})^* X_k|^2 - \operatorname{tr}[u_j(W_{n,k}) u_j^*(W_{n,k}) S_n^{(k)}].$
It suffices to show that
$Y_k - \mathbb{E}[Y_k \mid W_{n,k}] = \frac{1}{n}\sum_{j=1}^{n-1} \frac{t_j}{\lambda_j(W_{n,k}) - x - \sqrt{-1}\,\eta} = o(\delta^2)$
with probability at least $1 - n^{-C-10}$. The remaining part of the proof goes through in the same way as in the Proof of Lemma 5.2 in Ref. 32, using Lemmas II.10 and II.12. Lemma II.6 follows.

#### 3. Proof of Lemma II.7

We define $g_n(z,x) \coloneqq g_n^{(k)}(z)$ if $x \in \left[\frac{k-1}{n}, \frac{k}{n}\right)$, 1 ≤ k ≤ n, and

$S_n(x,y) \coloneqq s_{ij} \quad \text{if } x \in \left[\frac{i-1}{n}, \frac{i}{n}\right),\ y \in \left[\frac{j-1}{n}, \frac{j}{n}\right).$
(26)

Then (1) and (2) can be written as

$m_n(z) = \int_0^1 g_n(z,x)\,dx,$
(27)
$-\frac{1}{g_n(z,x)} = z + \int_0^1 S_n(x,y)\, g_n(z,y)\,dy,$
(28)

for all x ∈ [0, 1]. Similarly, define $f_n(z,x) \coloneqq f_n^{(k)}(z)$ if $x \in \left[\frac{k-1}{n}, \frac{k}{n}\right)$, 1 ≤ k ≤ n. Then we can write (6) and (10) as

$s_n(z) = \int_0^1 f_n(z,x)\,dx, \qquad -\frac{1}{f_n(z,x)} = z + \int_0^1 S_n(x,y)\, f_n(z,y)\,dy + d_n(z,x),$

where, from (10),

$\|d_n(z)\|_\infty \coloneqq \sup_{x\in[0,1]} |d_n(z,x)| = o(\delta^2)$
(29)

with probability at least $1 - n^{-C-9}$ for any fixed z ∈ Dn,ε.

The following lemma follows from Theorem 2.12 in Ref. 4, which controls the stability of Eq. (28) in the bulk. Here we use the fact that c ≤ sij ≤ 1 to guarantee the assumptions on Sn in Theorem 2.12 in Ref. 4. Define

$\Lambda(z) \coloneqq \sup_{x\in[0,1]} |f_n(z,x) - g_n(z,x)|.$

Lemma II.13.
For any fixed z ∈ Dn,ε, there exist constants λ, C5 > 0 depending on ε but independent of n such that
$\Lambda(z)\,\mathbf{1}_{\{\Lambda(z)\le\lambda\}} \le C_5 \|d_n(z)\|_\infty.$
(30)

Proof.

Since the variances satisfy c ≤ sij ≤ 1, Sn satisfies conditions A1–A3 in Chap. 1 of Ref. 4. In particular, it satisfies condition A3 with L = 1.

From the lower bound ρn(Re(z)) ≥ ε, Lemma 5.4 (i) in Ref. 4 implies
$\sup_{1\le k\le n} |g_n^{(k)}(z)| \le \frac{1}{\varepsilon} < \infty$
for any z with Re(z) ∈ I and Im(z) > 0. The assumptions of Theorem 2.12 in Ref. 4 then hold, and (30) follows from that theorem.

Remark II.14.

Lemma II.13 is a stability result for the solution of (28), which is deterministic and does not require moment assumptions on the random matrix Mn.

From Lemma II.13 and (29), we have, for any fixed z ∈ Dn,ε,
$\Lambda(z)\,\mathbf{1}_{\{\Lambda(z)\le\lambda\}} = o(\delta^2)$
(31)
with probability at least $1 - n^{-C-9}$.

We proceed with a continuity argument, as in the proof of Theorem 3.2 in the bulk (Sec. 3.1 in Ref. 4), to show that (31) holds uniformly for z ∈ Dn,ε with probability at least $1 - n^{-C-2}$.

Now, for any $0 < \varepsilon' < \frac{\lambda}{4}$, we consider a line segment
$L \coloneqq \left\{ x + \sqrt{-1}\,\eta : \eta \in \left[\frac{K^2 C_3^2 \log n}{n\delta^6},\, n\right] \right\}$
for some fixed x with ρn(x) ≥ ε, 0 < ε < 1/2, and let n be large enough such that $\frac{1}{n} < \varepsilon'$ and $\|d_n(z)\|_\infty \le \varepsilon'$. Let Ln consist of n4 evenly spaced points on L. Then we have
$\Lambda(z)\,\mathbf{1}_{\{\Lambda(z)\le\lambda\}} \le \varepsilon'$
(32)
for all z ∈ Ln with probability at least $1 - n^{-C-5}$.
From Theorem 2.1 in Ref. 4, gn(z, x) is the Stieltjes transform of a probability measure; hence, the derivative of gn(z, x) is uniformly bounded by $\frac{1}{|\operatorname{Im}(z)|^2} \le n^2$ for z ∈ Dn,ε. Similarly, for fn(z, x), from (5), for 1 ≤ k ≤ n,
$\left|\frac{\partial f_n^{(k)}(z)}{\partial z}\right| = \frac{\left|1 + \frac{\partial Y_k}{\partial z}\right|}{\left|\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k\right|^2} \le \left|\frac{1 + a_k^*(W_{n,k} - zI)^{-2} a_k}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k}\right| \cdot \frac{1}{\left|\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k\right|}.$
By Theorem A.6 in Ref. 7, for $z = x + \sqrt{-1}\,\eta$,
$\left|\frac{1 + a_k^*(W_{n,k} - zI)^{-2} a_k}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k}\right| \le \frac{1}{\eta},$
and
$\left|\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k\right| \ge \left|\operatorname{Im}\left(\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k\right)\right| = \eta\left(1 + a_k^*\left((W_{n,k} - xI)^2 + \eta^2 I\right)^{-1} a_k\right) \ge \eta.$
Noting that for z ∈ Dn,ε, $\eta \ge \frac{K^2 C_3^2 \log n}{n\delta^6} \ge \frac{1}{n}$, we get
$\left|\frac{\partial f_n^{(k)}(z)}{\partial z}\right| \le \frac{1}{\eta^2} \le n^2, \quad 1 \le k \le n.$
So both fn(z, x) and gn(z, x) are n2-Lipschitz functions in z for z ∈ Dn,ε. It follows that
$|\Lambda(z') - \Lambda(z)| \le 2n^2 |z' - z|,$
for any z, z′ ∈ L. We first claim that
$\Lambda(z)\,\mathbf{1}_{\{\Lambda(z)\le\frac{\lambda}{2}\}} \le 2\varepsilon',$
(33)
for all z ∈ L with probability at least $1 - n^{-C-5}$.
Since ε′ < λ/4, if z ∈ Ln, (33) follows from (32). If z ∈ L∖Ln, choose some z′ ∈ Ln such that |z − z′| ≤ n−3. Suppose $\Lambda(z) \le \frac{\lambda}{2}$, and note that
$|\Lambda(z') - \Lambda(z)| \le 2n^2|z - z'| \le \frac{2}{n},$
(34)
which implies
$\Lambda(z') \le \Lambda(z) + \frac{2}{n} \le \frac{\lambda}{2} + \frac{2}{n} \le \lambda$
with probability at least $1 - n^{-C-5}$. From (32), Λ(z′) ≤ ε′ with probability at least $1 - n^{-C-5}$. From (34),
$\Lambda(z) \le \Lambda(z') + \frac{2}{n} < 2\varepsilon'$
(35)
with probability at least $1 - n^{-C-5}$; therefore, (33) holds.

In the next step, we show that the indicator function in (33) is identically equal to 1. From (33), we have $\Lambda(z) \notin \left(2\varepsilon', \frac{\lambda}{2}\right]$ with probability at least $1 - n^{-C-5}$.

Let E be the event that $\Lambda(z)\,\mathbf{1}_{\{\Lambda(z)\le\frac{\lambda}{2}\}} \le 2\varepsilon'$ for all z ∈ L. Conditioning on E, since Λ(z) is 2n2-Lipschitz in z and L is connected, the image
$\Lambda(L) \coloneqq \{\Lambda(z) : z \in L\}$
is connected. Therefore, Λ(L) is contained either in [0, 2ε′] or in $\left[\frac{\lambda}{2}, \infty\right)$.
From (5), we have, for 1 ≤ k ≤ n,
$|f_n^{(k)}(z)| = \left|\frac{1}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k}\right| \le \frac{1}{|\operatorname{Im}(z)|},$
and since $g_n^{(k)}(z)$ is the Stieltjes transform of a probability measure, for 1 ≤ k ≤ n,
$|g_n^{(k)}(z)| \le \frac{1}{|\operatorname{Im}(z)|},$
which implies
$\Lambda(z) \le \frac{2}{|\operatorname{Im}(z)|}.$
Considering the point $z_n \coloneqq x + \sqrt{-1}\cdot n \in L$, we have
$\Lambda(z_n) \le \frac{2}{\operatorname{Im}(z_n)} = \frac{2}{n} \le 2\varepsilon',$
which implies Λ(zn) ∈ [0, 2ε′]. Hence, for all z ∈ L, Λ(z) ≤ 2ε′ with probability at least $1 - n^{-C-5}$, and the indicator function in (33) is identically equal to 1.
Now we extend the estimate to all z ∈ Dn,ε. Consider n3 line segments
$\left\{ x_k + \sqrt{-1}\,\eta : \eta \in \left[\frac{K^2 C_3^2 \log n}{n\delta^6},\, n\right] \right\}, \quad \rho_n(x_k) \ge \varepsilon,\ 1 \le k \le n^3,$
such that the $n^{-2}$-neighborhoods of the points {xk, 1 ≤ k ≤ n3} cover any bulk interval of ρn. By the 2n2-Lipschitz property of Λ(z) again, we can show that Λ(z) ≤ 4ε′ for all z with ρn(Re(z)) > ε and $\frac{K^2 C_3^2 \log n}{n\delta^6} \le \operatorname{Im} z \le n$, with probability at least $1 - n^{-C-2}$.
On the other hand, for all z with Im(z) > n,
$\|f_n(z) - g_n(z)\|_\infty \le \frac{2}{\operatorname{Im} z} = O\left(\frac{1}{n}\right).$
(36)
Combining these two cases, for all z ∈ Dn,ε, with probability at least $1 - n^{-C-2}$,
$\|f_n(z) - g_n(z)\|_\infty = o(\delta^2).$
This completes the Proof of Lemma II.7.

Let Mn be a sparse general Wigner-type matrix with independent entries $M_{ij} = \delta_{ij}\xi_{ij}$ for 1 ≤ i ≤ j ≤ n. Here, the δij are independent and identically distributed (i.i.d.) Bernoulli random variables taking value 1 with probability $p = \frac{g(n)\log n}{n}$, where g(n) is any function with g(n) → ∞ as n → ∞, and the ξij are independent random variables such that

$\mathbb{E}\xi_{ij} = 0, \quad \mathbb{E}|\xi_{ij}|^2 = s_{ij}, \quad c \le s_{ij} \le 1,$

and, in addition, |ξij| ≤ K almost surely for $K = o\left(\sqrt{g(n)}\right)$.

We can regard this model as the sparsification of a general Wigner-type matrix by uniform sampling. Similar models were considered in Refs. 26 and 33.
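The sparsification can be simulated directly. The sketch below is our own illustrative code (Rademacher-type ξij and a two-value variance profile are arbitrary choices); it checks that the rescaled entries of $M_n/\sqrt{p}$ have variance close to sij and that the spectrum of $W_n = M_n/\sqrt{np}$ stays bounded:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 400, 0.1                        # p stands in for g(n) log(n) / n

# Symmetric variance profile with c <= s_ij <= 1 (here c = 0.5).
s = np.where(rng.random((n, n)) < 0.5, 0.5, 1.0)
s = np.triu(s) + np.triu(s, 1).T

xi = rng.choice([-1.0, 1.0], size=(n, n)) * np.sqrt(s)  # E xi = 0, E xi^2 = s_ij
delta = (rng.random((n, n)) < p).astype(float)          # Bernoulli(p) mask
M = np.triu(delta * xi)
M = M + np.triu(M, 1).T                # sparse Wigner-type matrix

W = M / np.sqrt(n * p)                 # normalization used in the text
evals = np.linalg.eigvalsh(W)
var_check = ((M / np.sqrt(p)) ** 2).mean()   # should be close to s.mean()
```

The rescaled empirical variance matches the average of the profile, consistent with $\mathbb{E}|h_{ij}|^2 = s_{ij}$ for $H_n = M_n/\sqrt{p}$.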

Considering the empirical spectral distribution of $W_n \coloneqq \frac{M_n}{\sqrt{np}}$, we obtain a local law for this model.

Corollary III.1.
Let Mn be a sparse general Wigner-type matrix and let ρn be the probability measure corresponding to Eqs. (1) and (2). For any constants δ, C1 > 0, there exists a constant C2 > 0 such that with probability at least $1 - n^{-C_1}$ the following holds. For any bulk interval I of length $|I| \ge \frac{C_2 K^2 \log n}{np}$, the number of eigenvalues NI of $W_n \coloneqq \frac{M_n}{\sqrt{np}}$ in I obeys the concentration estimate
$\left|N_I - n\int_I \rho_n(x)\,dx\right| \le \delta n|I|.$
(37)

Proof.
Define
$H_n \coloneqq \frac{M_n}{\sqrt{p}} = (h_{ij})_{1\le i,j\le n}.$
Then $\mathbb{E}h_{ij} = 0$, $\mathbb{E}|h_{ij}|^2 = s_{ij}$, and $|h_{ij}| \le \frac{K}{\sqrt{p}} = o\left(\sqrt{\frac{n}{\log n}}\right)$, and (37) follows as a corollary of Theorem II.2 applied to Hn.

The infinity norm of eigenvectors in the bulk can be estimated in a similar way.

Corollary III.2.
Let Mn be a sparse general Wigner-type matrix and $W_n = \frac{M_n}{\sqrt{np}}$. For any constant C1 > 0 and any bulk interval I such that eigenvalue λi(Wn) ∈ I, with probability at least $1 - n^{-C_1}$, there is a constant C2 such that the corresponding unit eigenvector ui(Wn) satisfies
$\|u_i(W_n)\|_\infty \le \frac{C_2 K \log^{1/2} n}{\sqrt{np}}.$

#### 1. Finite number of classes

Our analysis of sparse random matrices applies to the adjacency matrices of sparse stochastic block models.

Consider the adjacency matrix $A_n = (a_{ij})_{1\le i,j\le n}$ of an SBM graph, where An is a random real symmetric block matrix with d2 blocks. Recall that we partition all indices [n] into d sets,

$[n] = V_1 \cup V_2 \cup \cdots \cup V_d$
(38)

such that |Vi| = Ni. We assume aii = 0, 1 ≤ i ≤ n, and that the aij, i ≠ j, are Bernoulli random variables such that if aij is in the (k, l)th block, then aij = 1 with probability pkl and aij = 0 with probability 1 − pkl.

Let $\sigma_{kl}^2 \coloneqq p_{kl}(1 - p_{kl})$. Define $p \coloneqq \max_{k,l} p_{kl}$ and σ2 = p(1 − p). Assume

$p = \frac{g(n)\log n}{n},$

where $\sup_n p < 1$ and g(n) → ∞ as n → ∞. We also assume that

$\frac{N_i}{n} = \alpha_i + o\left(\frac{1}{g(n)}\right),$
(39)
$\frac{\sigma_{kl}^2}{\sigma^2} = c_{kl} + o\left(\frac{1}{g(n)}\right),$
(40)

where αi > 0, 1 ≤ i ≤ d, and ckl ≥ c > 0, 1 ≤ k, l ≤ d, for some constant c. The quadratic vector equation becomes

$m(z) = \sum_{k=1}^{d} \alpha_k g_k(z),$
(41)
$-\frac{1}{g_k(z)} = z + \sum_{l=1}^{d} \alpha_l c_{kl} g_l(z).$
(42)
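The d-dimensional system (41) and (42) can again be solved by a plain fixed-point iteration; the sketch below is our own illustrative code, and the particular values of α and ckl are arbitrary:

```python
import numpy as np

def solve_block_qve(alpha, C, z, iters=2000):
    """Fixed-point iteration for -1/g_k = z + sum_l alpha_l c_kl g_l, Im z > 0."""
    g = np.full(len(alpha), -1.0 / z)      # initial guess in the upper half plane
    for _ in range(iters):
        g = -1.0 / (z + C @ (alpha * g))
    return g

alpha = np.array([0.6, 0.4])               # class proportions alpha_k
C = np.array([[1.0, 0.5],
              [0.5, 1.0]])                 # normalized variances c_kl
z = 0.1 + 0.05j
g = solve_block_qve(alpha, C, z)
m = np.sum(alpha * g)                      # m(z) = sum_k alpha_k g_k(z)
residual = np.abs(-1.0 / g - (z + C @ (alpha * g))).max()
```

The solution stays in the upper half plane, as Theorem 2.1 in Ref. 4 guarantees for the exact solution.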

We state the following local law for sparse SBM.

Corollary III.3.
Let An be the adjacency matrix of a stochastic block model with the assumptions above, and let ρ be the probability measure corresponding to Eq. (41). For any constants δ, C1 > 0, there exists a constant C2 > 0 such that with probability at least $1 - n^{-C_1}$ the following holds. For any bulk interval I of length $|I| \ge \frac{C_2 \log n}{np}$, the number of eigenvalues NI of $\frac{A_n}{\sqrt{n}\sigma}$ in I obeys the concentration estimate
$\left|N_I - n\int_I \rho(x)\,dx\right| \le \delta n|I|.$

Proof.

We have the following well-known Cauchy Interlacing Lemma, appearing, for example, as Lemma 36 from Ref. 30.

Lemma III.4.
Let A, B be symmetric matrices of the same size, where B has rank 1. Then, for any interval I, we have
$|N_I(A + B) - N_I(A)| \le 1,$
(43)
where NI(M) is the number of eigenvalues of M in I.
Let $\tilde{A}_n$ be the matrix whose off-diagonal entries are equal to those of An and
$\tilde{a}_{ii} = p_{kk}$
(44)
if (i, i) is in the kth block.
From Lemma III.4, since rank $\mathbb{E}(\tilde{A}_n) = d$, we have
$|N_I(A_n) - N_I(A_n - \mathbb{E}(\tilde{A}_n))| \le d = o(n|I|).$
Therefore, it suffices to prove the local law for
$W_n = \frac{A_n - \mathbb{E}\tilde{A}_n}{\sqrt{n}\sigma}.$
Let $\frac{A_n - \mathbb{E}\tilde{A}_n}{\sigma} = (\xi_{ij})_{1\le i,j\le n}$. By Schur’s complement, we can write the Stieltjes transform sn(z) of the empirical measure in the following way:
$s_n(z) = \frac{1}{n}\sum_{k=1}^{n} \frac{1}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k}.$
We partition sn(z) into d parts:
$s_n(z) = \sum_{l=1}^{d} \frac{N_l}{n} f_n^{(l)}(z),$
where
$f_n^{(l)}(z) \coloneqq \frac{1}{N_l}\sum_{k\in V_l} \frac{1}{\frac{\xi_{kk}}{\sqrt{n}} - z - Y_k}.$
(45)
The kth diagonal element of $\frac{A_n - \mathbb{E}\tilde{A}_n}{\sigma}$ satisfies $\frac{\xi_{kk}}{\sqrt{n}} = -\frac{p_{kk}}{\sqrt{n}\sigma} = o(1)$. Similarly to (10), we have
$-\frac{1}{f_n^{(l)}(z)} = z + \sum_{m=1}^{d} \frac{N_m}{n} c_{ml} f_n^{(m)}(z) + o(1), \quad 1 \le l \le d,$
(46)
for any z ∈ Dn,ε with probability at least $1 - n^{-C-9}$. Using the assumptions (39) and (40) and the fact that $|f_n^{(l)}| \le \frac{1}{\eta}$, we have
$-\frac{1}{f_n^{(l)}(z)} = z + \sum_{m=1}^{d} \alpha_m c_{ml} f_n^{(m)}(z) + o(1), \quad 1 \le l \le d,$
(47)
for any fixed z ∈ Dn,ε with probability at least $1 - n^{-C-9}$.
Since d is fixed and all coefficients ckl, 1 ≤ k, l ≤ d, in (42) are positive and bounded, from Theorem 2.10 in Ref. 4,
$\sup_{1\le i\le d} |g_i(z)| < \infty, \quad \forall z \in \mathbb{H}.$
Theorem 2.12(i) in Ref. 4 implies that Lemma II.13 holds with $\Lambda(z) \coloneqq \sup_{1\le i\le d} |f_n^{(i)}(z) - g_i(z)|$ for any fixed z ∈ Dn,ε. Similarly to the Proof of Lemma II.7, we have
$|s_n(z) - m(z)| = \left|\sum_{l=1}^{d} \frac{N_l}{n} f_n^{(l)}(z) - \sum_{l=1}^{d} \alpha_l g_l(z)\right| \le \sum_{l=1}^{d} \left|\left(\frac{N_l}{n} - \alpha_l\right) f_n^{(l)}(z)\right| + \sum_{l=1}^{d} \alpha_l \left|f_n^{(l)}(z) - g_l(z)\right| = o(1)$
uniformly for all z ∈ Dn,ε with probability at least $1 - n^{-C}$. Hence, the local law for $\frac{A_n}{\sqrt{n}\sigma}$ is proved.

We have the corresponding infinity norm bound for eigenvectors in the bulk.

Corollary III.5.
Let An be the adjacency matrix of a stochastic block model. For any bulk interval I such that eigenvalue $\lambda_i\left(\frac{A_n}{\sqrt{n}\sigma}\right) \in I$ and any constant C1 > 0, with probability at least $1 - n^{-C_1}$, the corresponding unit eigenvector $u_i\left(\frac{A_n}{\sqrt{n}\sigma}\right)$ satisfies
$\left\|u_i\left(\frac{A_n}{\sqrt{n}\sigma}\right)\right\|_\infty \le C_2\sqrt{\frac{\log n}{np}}$
for some constant C2 > 0.

Proof.

Let $W_n \coloneqq \frac{A_n}{\sqrt{n}\sigma}$. For any λi(Wn) in the bulk, by Corollary III.3, one can find an interval I centered at λi(Wn) with $|I| = \frac{C_2\log n}{np}$ such that NI ≥ δ1n|I| for some small δ1 > 0 with probability at least $1 - n^{-C_1-3}$. We can find a set J ⊂ {1, …, n − 1} with |J| ≥ NI/2 such that |λj(Wn,k) − λi(Wn)| ≤ |I| for all j ∈ J. Let Xk be the kth column of $\frac{A_n}{\sigma}$ with the kth entry removed; then $X_k = \sqrt{n}\, a_k$.

Since Xk is not centered, we need to show that
$\sum_{j\in J} |u_j(W_{n,k})^* X_k|^2 = \|\pi_H(X_k)\|^2 = \Omega(|J|)$
(48)
with probability at least $1 - n^{-C_1-3}$, where H is the subspace spanned by all orthonormal eigenvectors associated with the eigenvalues λj(Wn,k), j ∈ J, and dim(H) = |J|.
Let H1 = H ∩ H2, where H2 is the subspace orthogonal to the vector $\mathbb{E}a_k$. The dimension of H1 is at least |J| − 1. Let $b_k = a_k - \mathbb{E}a_k$; then the entries of bk are centered with the same variances as those of ak. By Lemma II.10, we have
$\|\pi_{H_1}(b_k)\|^2 = \Omega\left(\frac{|J|}{n}\right)$
with probability at least $1 - n^{-C_1-3}$. Moreover,
$\|\pi_H(a_k)\| = \|\pi_H(b_k + \mathbb{E}a_k)\| \ge \|\pi_{H_1}(b_k + \mathbb{E}a_k)\| = \|\pi_{H_1}(b_k)\|,$
which implies (48) holds. The rest of the proof follows from the Proof of Theorem II.3.
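The delocalization bound can also be illustrated numerically. The sketch below uses a single Erdős-Rényi block (the SBM with $d = 1$) with made-up parameters $n$, $p$ and takes $C_2 = 1$ in the bound; it compares the largest entry over the bulk eigenvectors with $\sqrt{\log n/(np)}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters for a quick check; a single-block SBM (d = 1).
n, p = 3000, 0.2
A = (rng.random((n, n)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T

sigma = np.sqrt(p * (1 - p))
W = A / (np.sqrt(n) * sigma)

vals, vecs = np.linalg.eigh(W)
# Bulk eigenvectors: skip the spectral edges and the outlier eigenvalue
# created by the nonzero mean of A.
bulk = vecs[:, n // 4: 3 * n // 4]
max_entry = np.abs(bulk).max()
bound = np.sqrt(np.log(n) / (n * p))  # the bound with C_2 = 1

print(max_entry, bound)
```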

#### 2. Unbounded number of classes

For the Stochastic Block Model, if we allow the number of classes $d$ to grow with $n$, a local law can still be proved under the following assumptions:

$d = o\left(\frac{n}{g(n)}\right),$
(49)
$\sum_{k=1}^{d} \left| \frac{\sigma_{kl}^2}{\sigma^2} - c_{kl} \right| = o\left(\frac{1}{g(n)}\right).$
(50)

We will compare the Stieltjes transform of the empirical spectral distribution to the measure whose Stieltjes transform satisfies the following equations:

$m_n(z) = \sum_{i=1}^{d} \frac{N_i}{n} g_{n,i}(z),$
(51)
$\frac{-1}{g_{n,i}(z)} = z + \sum_{j=1}^{d} \frac{N_j}{n} c_{ij} g_{n,j}(z).$
(52)
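The system (51) and (52) can be solved numerically by a damped fixed-point iteration: for $\operatorname{Im} z > 0$ the map sends the upper half-plane to itself, and the iteration is stable in practice. The sketch below is an illustration, not code from the paper; as a consistency check, the flat profile $c_{ij} \equiv 1$ recovers the semicircle law, whose Stieltjes transform is $m(z) = \frac{-z+\sqrt{z^2-4}}{2}$.

```python
import numpy as np

def solve_sbm_stieltjes(Ns, C, z, iters=1000, damping=0.5):
    """Iterate -1/g_i = z + sum_j (N_j/n) c_{ij} g_j to a fixed point.

    Ns : block sizes N_1, ..., N_d; C : normalized variance profile c_{ij};
    z : spectral parameter with Im z > 0. Returns (m_n(z), g).
    """
    Ns = np.asarray(Ns, dtype=float)
    w = Ns / Ns.sum()
    g = np.full(len(Ns), -1.0 / z, dtype=complex)
    for _ in range(iters):
        g_new = -1.0 / (z + C @ (w * g))
        g = damping * g + (1 - damping) * g_new
    return np.dot(w, g), g

# Consistency check: c_{ij} = 1 recovers the semicircle law.
z = 1.0 + 0.1j
m, _ = solve_sbm_stieltjes([500, 500], np.ones((2, 2)), z)
m_sc = (-z + np.sqrt(z * z - 4 + 0j)) / 2

print(abs(m - m_sc))
```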

We have the following local law for the SBM with an unbounded number of blocks.

Corollary III.6.
Let $A_n$ be an adjacency matrix of an SBM satisfying assumptions (49) and (50). Let $\rho_n$ be the probability measure corresponding to Eqs. (51) and (52). For any constants $\delta, C_1 > 0$, there exists a constant $C_2$ such that with probability at least $1-n^{-C_1}$ the following holds. For any bulk interval $I$ of length $|I| \ge \frac{C_2 \log n}{np}$, the number $N_I$ of eigenvalues of $\frac{A_n}{\sqrt{n}\sigma}$ in $I$ obeys the concentration estimate
$\left| N_I - n\int_I \rho_n(x)\,dx \right| \le \delta n |I|.$
(53)

Proof.
Since $d = o\left(\frac{n}{g(n)}\right)$, recalling the definition of $\tilde{A}_n$ from (44), the Cauchy interlacing law gives
$|N_I(A_n) - N_I(A_n - \mathbb{E}(\tilde{A}_n))| \le d = o(n|I|).$
It thus suffices to prove the statement for the centered matrix $W_n := \frac{A_n - \mathbb{E}\tilde{A}_n}{\sqrt{n}\sigma}$. The proof then follows from Corollary III.3 together with assumption (50).
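The interlacing step can be checked numerically: $\mathbb{E}(\tilde{A}_n)$ is block-constant, hence has rank at most $d$, and a rank-$d$ symmetric perturbation moves the eigenvalue counting function by at most $d$ at every point. A toy example with made-up block sizes and probabilities:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 3-block model; sizes and probabilities are illustrative.
Ns = [200, 300, 500]
P = np.array([[0.6, 0.2, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.4]])
n = sum(Ns)
idx = np.cumsum([0] + Ns)

EA = np.zeros((n, n))  # block-constant mean matrix, rank <= d = 3
A = np.zeros((n, n))
for k in range(3):
    for l in range(3):
        EA[idx[k]:idx[k + 1], idx[l]:idx[l + 1]] = P[k, l]
        A[idx[k]:idx[k + 1], idx[l]:idx[l + 1]] = rng.random((Ns[k], Ns[l])) < P[k, l]
A = np.triu(A, 1)
A = A + A.T

d = np.linalg.matrix_rank(EA)  # = 3 here, since P is nonsingular

ev_full = np.linalg.eigvalsh(A)
ev_cent = np.linalg.eigvalsh(A - EA)

# Eigenvalue counting functions differ by at most rank(EA) everywhere.
ts = np.linspace(ev_cent.min() - 1, ev_full.max() + 1, 200)
gap = max(abs(int(np.searchsorted(ev_full, t)) - int(np.searchsorted(ev_cent, t)))
          for t in ts)
print(d, gap)  # gap <= d
```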

Remark III.7.

Unlike Corollary III.3, Corollary III.6 does not compare the empirical spectral distribution to a limiting spectral distribution $\rho$ independent of $n$. If we assume $\frac{N_i}{n} \to \alpha_i$ with $\alpha_1 \ge \alpha_2 \ge \cdots$ and $\sum_{i=1}^{\infty} \alpha_i = 1$, one can show that $\rho_n$ converges to some $\rho$ (see Sec. 7 in Ref. 34 for further details). But we do not have a local law comparing $N_I$ with $n\int_I \rho(x)\,dx$. In fact, letting $S_n$ be the symmetric function on $[0,1]^2$ representing the variance profile as in (26) and $S$ its pointwise limit, there is no upper bound on the rate of convergence of $\sup_{x,y} |S_n(x,y) - S(x,y)|$.

Remark III.8.

By the same argument as in the Proof of Corollary III.5, the infinity norm bound for eigenvectors from Corollary III.5 still holds for the SBM with an unbounded number of classes.

The authors would like to thank the PCMI Summer Session 2017 on Random Matrices, during which part of this work was performed. This work was supported by NSF Grant No. DMS-1712630.

1. E. Abbe and C. Sandon, "Community detection in general stochastic block models: Fundamental limits and efficient algorithms for recovery," in 2015 IEEE 56th Annual Symposium on Foundations of Computer Science (FOCS) (IEEE, 2015), pp. 670–688.
2. E. Abbe, A. S. Bandeira, and G. Hall, "Exact recovery in the stochastic block model," IEEE Trans. Inf. Theory 62(1), 471–487 (2016).
3. B. and Z. Che, "Spectral statistics of sparse random graphs with a general degree distribution," e-print arXiv:1509.03368 (2015).
4. O. H. Ajanki, L. Erdős, and T. Krüger, "Quadratic vector equations on complex upper half-plane," Mem. Amer. Math. Soc. (to appear); preprint arXiv:1506.05095 (2015).
5. O. H. Ajanki, L. Erdős, and T. Krüger, "Universality for general Wigner-type matrices," Probab. Theory Relat. Fields 169(3-4), 667–727 (2015).
6. K. Avrachenkov, L. Cottatellucci, and A., "Spectral properties of random matrices for stochastic block model," in 2015 13th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt) (IEEE, 2015), pp. 537–544.
7. Z. Bai and J. W. Silverstein, Spectral Analysis of Large Dimensional Random Matrices (Springer, 2010), Vol. 20.
8. F. Benaych-Georges, C. Bordenave, and A. Knowles, "Spectral radii of sparse random matrices," preprint arXiv:1704.02945 (2017).
9. F. Benaych-Georges, C. Bordenave, and A. Knowles, "Largest eigenvalues of sparse inhomogeneous Erdős-Rényi graphs," Ann. Prob. (to appear); preprint arXiv:1704.02953 (2017).
10. G. Brito, I. Dumitriu, S. Ganguly, C. Hoffman, and L. V. Tran, "Recovery and rigidity in a regular stochastic block model," in Proceedings of the Twenty-Seventh Annual ACM-SIAM Symposium on Discrete Algorithms (Society for Industrial and Applied Mathematics, 2016), pp. 1589–1601.
11. A. Coja-Oghlan, "Graph partitioning via adaptive spectral techniques," Combinatorics, Probab. Comput. 19(2), 227–284 (2010).
12. X. Ding, "Spectral analysis of large block random matrices with rectangular blocks," Lith. Math. J. 54(2), 115–126 (2014).
13. L. Erdős, B. Schlein, and H.-T. Yau, "Semicircle law on short scales and delocalization of eigenvectors for Wigner random matrices," Ann. Probab. 37(3), 815–852 (2009).
14. L. Erdős, H.-T. Yau, and J. Yin, "Universality for generalized Wigner matrices with Bernoulli distribution," J. Combinatorics 2(1), 15–81 (2011).
15. L. Erdős, A. Knowles, H.-T. Yau, and J. Yin, "Spectral statistics of Erdős-Rényi graphs II: Eigenvalue spacing and the extreme eigenvalues," Commun. Math. Phys. 314(3), 587–640 (2012).
16. L. Erdős, A. Knowles, H.-T. Yau, and J. Yin, "Spectral statistics of Erdős-Rényi graphs I: Local semicircle law," Ann. Probab. 41(3B), 2279–2375 (2013).
17. L. Erdős, S. Péché, J. A. Ramírez, B. Schlein, and H.-T. Yau, "Bulk universality for Wigner matrices," Commun. Pure Appl. Math. 63(7), 895–925 (2010).
18. L. Erdős, H.-T. Yau, and J. Yin, "Bulk universality for generalized Wigner matrices," Probab. Theory Relat. Fields 154(1-2), 341–407 (2012).
19. R. R. Far, T. Oraby, W. Bryc, and R. Speicher, "Spectra of large block matrices," preprint arXiv:cs/0610045 (2006).
20. V. L. Girko, Theory of Stochastic Canonical Equations (2001), Vol. 2.
21. Y. He, A. Knowles, and M. Marcozzi, "Local law and complete eigenvector delocalization for supercritical Erdős-Rényi graphs," Ann. Prob. (to appear); preprint arXiv:1808.09437 (2018).
22. P. W. Holland, K. B., and S. Leinhardt, "Stochastic blockmodels: First steps," Soc. Networks 5(2), 109–137 (1983).
23. J. Huang and B. Landon, "Spectral statistics of sparse Erdős-Rényi graph Laplacians," Ann. Inst. H. Poincare Probab. Statist. (to appear); e-print arXiv:1510.06390v1 (2015).
24. J. Huang, B. Landon, and H.-T. Yau, "Bulk universality of sparse random matrices," J. Math. Phys. 56(12), 123301 (2015).
25. F. Krzakala, C. Moore, E. Mossel, J. Neeman, A. Sly, L. Zdeborová, and P. Zhang, "Spectral redemption in clustering sparse networks," Proc. Natl. Acad. Sci. U. S. A. 110(52), 20935–20940 (2013).
26. K. Luh and V. Vu, "Sparse random matrices have simple spectrum," preprint arXiv:1802.03662 (2018).
27. S. O'Rourke, V. Vu, and K. Wang, "Eigenvectors of random matrices: A survey," J. Comb. Theory, Ser. A 144, 361–442 (2016).
28. M. Rudelson and R. Vershynin, "Hanson-Wright inequality and sub-Gaussian concentration," Electron. Commun. Probab. 18(82), 1–9 (2013).
29. D. Shlyakhtenko, "Gaussian random band matrices and operator-valued free probability theory," Banach Cent. Publ. 43(1), 359–368 (1998).
30. T. Tao and V. Vu, "Random matrices: Universality of local eigenvalue statistics," Acta Math. 206(1), 127–204 (2011).
31. L. V. Tran, V. H. Vu, and K. Wang, "Sparse random graphs: Eigenvalues and eigenvectors," Random Struct. Algorithms 42(1), 110–134 (2013).
32. V. Vu and K. Wang, "Random weighted projections, random quadratic forms and random eigenvectors," Random Struct. Algorithms 47(4), 792–821 (2015).
33. P. M. Wood, "Universality and the circular law for sparse random matrices," Ann. Appl. Probab. 22(3), 1266–1300 (2012).
34. Y. Zhu, "A graphon approach to limiting spectral distributions of Wigner-type matrices," preprint arXiv:1806.11246 (2018).