We consider quasiperiodic operators on ℤd with unbounded monotone sampling functions (“Maryland-type”), which are not required to be strictly monotone and are allowed to have flat segments. Under several geometric conditions on the frequencies, the lengths of the segments, and their positions, we show that these operators enjoy Anderson localization at large disorder.
I. INTRODUCTION
This paper can be considered a direct continuation of the earlier publication.12 We consider quasiperiodic Schrödinger operators on ℤd,
where ω = (ω1, …, ωd) is the frequency vector and Δ is the discrete Laplacian,
We will consider the regime of large disorder, which, after rescaling, corresponds to small ɛ > 0. The function f, which generates the quasiperiodic potential, is a non-decreasing function, given by
and is extended into ℝ by 1-periodicity. As usual for quasiperiodic operators, the numbers {1, ω1, …, ωd} are assumed to be linearly independent over ℚ.
Potentials of form (1.2) will be called Maryland-type, after the classical Maryland model with f(x) = tan(πx) (we refer the reader to the following, certainly not exhaustive, list of related publications1,4–11). In Ref. 12, Sec. VI, we considered a case where f was not strictly monotone and was allowed to have a single flat piece of sufficiently small length h; in particular, |h| < minj |ωj|. In this situation, we can treat this flat piece as a single isolated resonance. In the present paper, we consider several more elaborate situations where the resonance is not isolated but has, in some sense, finite multiplicity.
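As a concrete illustration, the following minimal Python sketch builds a Maryland-type sampling function with one flat segment by composing tan(πx) with a continuous non-decreasing reparametrization that collapses [a, a + L] to a single point. The parameters a and L are hypothetical, chosen only for illustration; any [a, a + L] ⊂ (−1/2, 1/2) works.

```python
import math

def f_flat(x, a=-0.1, L=0.2):
    """A Maryland-type sampling function with one flat segment.

    Built as f = tan(pi * g(x)), where g is continuous, non-decreasing,
    collapses [a, a+L] to a single point c, and is linear elsewhere.
    The parameters a, L are illustrative, not taken from the paper.
    """
    x = x - math.floor(x + 0.5)           # 1-periodic reduction to [-1/2, 1/2)
    c = a + L / 2.0                       # image of the collapsed segment
    if x <= a:
        g = -0.5 + (x + 0.5) * (c + 0.5) / (a + 0.5)
    elif x <= a + L:
        g = c                             # flat piece: f is constant here
    else:
        g = c + (x - a - L) * (0.5 - c) / (0.5 - a - L)
    return math.tan(math.pi * g)
```

By construction f_flat is continuous, non-decreasing, flat exactly on [a, a + L], and blows up to ±∞ at ±1/2, matching condition (f1) below.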
The main result of this paper is Theorem 5.2, where we establish some sufficient conditions under which the operator H admits Anderson localization. We also list several particular examples and refinements in Sec. VI. The simplest new example (see Theorem 6.1 for a precise statement) is as follows. On , consider operator (1.1) with the function f satisfying
Additionally, assume that ω is Diophantine (see Sec. II B for the definition) and that [a − 2ω, a + L + 2ω] ⊂ (−1/2, 1/2). Suppose that f is Lipschitz monotone outside [a, a + L], with some regularity conditions similar to Ref. 12. Assume also that L is not a rational multiple of ω. Then, operator (1.1) has Anderson localization for 0 < ɛ < ɛ0(ω, L).
We would also like to mention the simplest possible class of operators that is not covered by our approach. Suppose that f is constant on an interval [a, a + L] ⊂ (−1/2, 1/2), and suppose that the set
S = S(x) = {n ∈ ℤd: x + ω · n ∈ [a, a + L] mod 1}
has an unbounded connected component for some x (here, we define connectedness on the graph with nearest neighbor edges). For example, this will happen if 0 < ω1 < ω2 < L. Our methods cannot cover such operators. In fact, it seems possible that such models do not demonstrate Anderson localization around the energy f(a).
Our approach, essentially, is based on assuming the opposite of the above: that is, all connected components of the set S are bounded and are sufficiently far away from each other. If we are able to surround each component by a layer of lattice points n such that f(x + ω · n) has good monotonicity properties, then, after a partial diagonalization, their monotonicity will “propagate,” in a weaker form, into the interior of the set S. Afterward, we can apply a modified version of the main result of Ref. 12 to finalize the diagonalization; see Proposition 2.1.
II. PRELIMINARIES: REGULARITY OF f AND CONVERGENCE OF PERTURBATION SERIES
While the functions f under consideration will have flat pieces, a necessary assumption for all proofs below is the existence of sufficiently many pieces with good control of monotonicity. The corresponding regularity conditions and convergence results are summarized in this section.
It will be convenient not to exclude the case x + ω · n ∈ 1/2 + ℤ, where exactly one value of the potential in operator (1.1) becomes infinite: say, f(x + ω · n). In this case, the natural limiting object is the operator on ℓ2(ℤd∖{n}) obtained from (1.1) by enforcing the Dirichlet condition ψ(n) = 0. The results of Ref. 12, which we will be using, extend “continuously” into these values of x (see Ref. 12, Remark 4.14), with a reasonable interpretation of infinities, if one adds an infinite eigenvalue with an eigenvector en. Here, {en: n ∈ ℤd} is the standard basis in ℓ2(ℤd). We will also use the notation {ej: j = 1, 2, …, d} for the standard basis in ℝd. For a subset A ⊂ ℤd, it will be convenient to use the notation ℓ2(A) for the subspace of functions in ℓ2(ℤd) supported on A. For example, ℓ2([0, N]) = ℓ2{0, 1, 2, …, N}.
A. Creg-regularity
Similarly to Ref. 12, we will always assume the following:
- (f1)
f: (−1/2, 1/2) → ℝ is continuous, non-decreasing, f(−1/2 + 0) = −∞, f(1/2 − 0) = +∞, and f is extended by 1-periodicity into ℝ.
Suppose that f satisfies (f1). Let Creg > 0 and x0 ∈ (−1/2, 1/2). We say that f is Creg-regular at x0, if the following holds:
- (cr0)
The pre-image f−1((f(x0) − 2, f(x0) + 2)) ∩ (−1/2, 1/2) is an open interval [denoted by (a, b)], and f is a one-to-one map between (a, b) and (f(x0) − 2, f(x0) + 2).
- (cr1) Let Dmin(x0) be the smallest value of f′ on (a, b) (for points where f′ does not exist, consider the smallest of the derivative numbers). Then, (2.1)
- (cr2) Define (a1, b1) = f−1((f(x0) − 1, f(x0) + 1)) ⊂ (a, b) and let g be the function defined on (b1, a1 + 1) and extended by continuity to g(±1/2) = 0 (recall that we also assume f(x + 1) = f(x), so that the interval (b1, a1 + 1) is essentially (−1/2, 1/2)∖(a1, b1) together with the point 1/2 = −1/2 mod 1). Then, under the same conventions on the existence of derivatives, the analogous bound holds for g. For convenience, we will require Dmin(x0) ≥ 1 as a necessary condition for Creg-regularity. This condition can always be achieved by rescaling.
B. The frequency vector
The frequency vector ω ∈ ℝd is called Diophantine if there exist Cdio, τdio > 0 such that
‖n · ω‖ ≥ Cdio|n|−τdio for all n ∈ ℤd∖{0},
where ‖·‖ denotes the distance to the nearest integer.
Without loss of generality, we will always assume that 0 < ω1 < … < ωd < 1/2. The set of Diophantine vectors with the above property will be denoted by , implying that the dependence on d will be clear from the context. We will only use this definition with τdio > d + 1.
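The Diophantine lower bound can be probed numerically over a finite range of n. The sketch below scans ‖n · ω‖ · |n|∞^τ for a sample frequency vector; the components (√2 − 1, √3 − 1) are standard test values, not taken from the paper, and a finite scan only estimates Cdio.

```python
import itertools, math

def torus_norm(t):
    """Distance from t to the nearest integer."""
    return abs(t - round(t))

def diophantine_constant(omega, tau, N):
    """Smallest value of ||n . omega|| * |n|_inf^tau over 0 < |n|_inf <= N.

    For a Diophantine omega this stays bounded away from 0 as N grows;
    this scan only gives a finite-range estimate of C_dio.
    """
    d = len(omega)
    best = math.inf
    for n in itertools.product(range(-N, N + 1), repeat=d):
        if all(c == 0 for c in n):
            continue
        dot = sum(c * w for c, w in zip(n, omega))
        norm = max(abs(c) for c in n)
        best = min(best, torus_norm(dot) * norm ** tau)
    return best
```

Here τ = 4 satisfies the requirement τdio > d + 1 for d = 2.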
C. Operators with convergent perturbation series
In Ref. 12, it was shown that if f is Creg-regular on (−1/2, 1/2) and ω is Diophantine, then the Rayleigh–Schrödinger perturbation series (see below) converges for sufficiently small ɛ > 0. However, one can also apply the construction from Ref. 12 in the case where Creg and Dmin themselves depend on ɛ, under some additional restrictions on the off-diagonal terms of H(x). In Ref. 12, Sec. VI, an example of such an operator was considered. In this section, we will describe a slightly more general class of operators for which the construction from the end of Ref. 12, Sec. VI can be applied, virtually, without any changes. The main results of the present paper will be obtained by reducing various operators to the class described in this section.
We will consider long range quasiperiodic operators with variable hopping terms. A quasiperiodic hopping matrix is, by definition, a matrix with elements of the following form:
where the generating functions are Lipschitz and 1-periodic, satisfying the self-adjointness condition
Let also
Define Range(Φ) to be the smallest number L ≥ 0 such that Φnm ≡ 0 for |m − n| > L. We will only consider hopping matrices of finite range. Note that (2.3) can be reformulated as the following covariance property:
Fix some R > 0, and suppose that Φ1, Φ2, … is a family of quasiperiodic hopping matrices with Range(Φk) ≤ kR, defined by a family of functions as above. The class of operators we would like to consider will be of the following form:
where
One can easily check that, assuming
the part Φ = ɛΦ1 + ɛ2Φ2 +⋯ defines a bounded operator on ℓ2(ℤd).
The central object of Ref. 12 is the Rayleigh–Schrödinger perturbation series, which is a formal series of eigenvalues and eigenvectors
where, in a small departure from the notation of Ref. 12, we assume that
Under the above assumptions, we consider the eigenvalue equation
as equality of coefficients of two power series in the variable ɛ,
Assuming that f is strictly monotone, the above system of equations has a unique solution satisfying (2.7).
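For a finite truncation, the leading coefficients of such a series can be checked against exact diagonalization. The sketch below uses the textbook second-order Rayleigh–Schrödinger formula for a diagonal matrix D perturbed by a self-adjoint hopping matrix Φ with zero diagonal; the matrices and ɛ are illustrative, not the paper's notation.

```python
import numpy as np

# Diagonal part: well-separated diagonal entries; off-diagonal part:
# a hypothetical self-adjoint hopping matrix with zero diagonal.
D = np.diag([0.0, 3.0, 7.0])
Phi = np.array([[0.0, 1.0, 0.5],
                [1.0, 0.0, 1.0],
                [0.5, 1.0, 0.0]])
eps = 1e-3

def rs_second_order(n):
    """Second-order Rayleigh-Schrodinger expansion of the eigenvalue
    attached to the basis vector e_n:
    E_n = f_n + eps^2 * sum_{m != n} |Phi_nm|^2 / (f_n - f_m) + O(eps^3).
    (The first-order term vanishes since Phi has zero diagonal.)"""
    f = np.diag(D)
    corr = sum(Phi[n, m] ** 2 / (f[n] - f[m])
               for m in range(len(f)) if m != n)
    return f[n] + eps ** 2 * corr

# Exact eigenvalues; ascending order matches the diagonal order here.
E_exact = np.linalg.eigvalsh(D + eps * Phi)
```

The mismatch between the two is O(ɛ3), far below the ɛ2 corrections themselves.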
It will also be convenient to consider a graph Γ(x0) whose set of vertices is ℤd, with an edge between m and n if Φmn(x0) ≠ 0. The length of that edge is the smallest j > 0 such that (Φj)mn(x0) ≠ 0.
We can now describe the class of operators with convergent perturbation series, which will be used in this paper. Let I1, …, Ik ⊂ (−1/2, 1/2) be disjoint closed intervals and μ1, …, μk > 0. We will only consider integer values of μj in applications, but the argument can be easily extended to μj ∈ [0, +∞). We will make the following assumptions.
- (conv1)
f satisfies (f1) from Sec. II A. Additionally, f is Creg-regular on (0, 1)\(I1 ∪…∪Ik) with Dmin ≥ 1.
- (conv2)
f′ ≥ ɛμj on Ij. Here, f is allowed to be non-differentiable at some values of x, but then the inequality is required for all derivative numbers. As a consequence, f is strictly monotone on (−1/2, 1/2).
- (conv3)
Suppose that x0 + ω · n ∈ Ij. Any edge of Γ(x0) that starts at n has a length of at least μj + 1.
The proof of the following result can be done along the same lines as the argument in Ref. 12, Proof of Theorem 6.2.
Here, we use an additional refinement of Ref. 12, by introducing ‖φ‖ɛ instead of considering ‖φ‖∞ and ‖φ′‖∞ separately. The reason is that differentiating φ is “cheaper” than differentiating the denominators, and the latter has already been accounted for in (conv2).
One can also consider the perturbation series in finite volume. By “finite volume,” we mean the case when the operator is restricted to a finite box in . The results of Ref. 12, as well as Proposition 2.1, do not extend directly into that case (in the language of Ref. 12, certain loops required for cancellation are forbidden). However, in all our applications, the volume will be fixed before choosing a small ɛ, in which case one can apply the regular perturbation theory of isolated eigenvalues.
Since the components of ω are rationally independent and f is Creg-regular at x0 + ω · n, we have f(x0 + ω · n) separated from the other diagonal entries by a constant that only depends on ω, Creg, and the size of the box. Then, the argument follows from Proposition 2.3. The differentiability follows from differentiating the perturbation series in the variable x0, which, in turn, follows from (cr2).□
The exact same bounds on the derivatives also hold for the infinite volume eigenvectors [in that case, δ = δ(Cdio, τdio, d)]. As a consequence, for most practical purposes involving at most one derivative, regular eigenvalues can be considered to be small perturbations of the original diagonal entries, in both finite and infinite volumes. In a finite volume, the constants depend on the size of the box but can be made independent of the location of the box.
The following is a simple consequence of rational independence.
Suppose that f satisfies (f1). For any M > 0, there exists W = W(f, M, ω) such that, for any box B with |B| ≤ M, all eigenvalues of HB(x), except for at most one, are bounded by W + ɛ‖φ‖∞ in absolute value.
The only possibility for the operator to have a very large eigenvalue is for some x0 + ω · n to be close to 1/2 + ℤ. However, this implies that all the remaining values x0 + ω · m, where m ≠ n is from the same box, are bounded away from 1/2 + ℤ. Ultimately, the dependence on ω is only through Cdio and τdio.□
III. SOME PROPERTIES OF SCHRÖDINGER EIGENVECTORS
In this section, we will summarize some basic properties of the eigenvectors of discrete Schrödinger operators in bounded domains. Let C ⊂ ℤd be a finite subset. Denote the Laplace operator on ℓ2(C) (with the Dirichlet boundary conditions on ∂C) by
Let A, B ⊂ ℤd. We will define the points n ∈ A ∪ B that are directly reachable from B using the following recurrent definition:
- Any point n ∈ B is directly reachable in 0 steps.
- Let n ∈ A ∪ B, m ∈ A, and |m − n| = 1, and suppose that the set {n} ∪ {m′: |m′ − n| = 1, m′ ≠ m} is contained in A ∪ B and only consists of points directly reachable in at most k − 1 steps. Then, we declare m to be directly reachable in at most k steps.
The above definition can be understood as follows. The eigenvalue equation
allows us to determine the value ψ(m) from the values ψ(n) and ψ(m′), |m′ − n| = 1, m′ ≠ m,
where n ∈ A ∪ B is some point with |n − m| = 1. A directly reachable point, therefore, is a point m ∈ A ∪ B such that ψ(m) can be determined from the values of ψ on B using a finite sequence of such operations.
We will say that B satisfies the direct unique continuation property (DUCP) for A if all points of A are directly reachable from B.
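The recurrent definition above is effectively a breadth-first search and can be implemented directly. The sketch below does this in d = 2 with nearest-neighbor edges; the sets A and B in the test are hypothetical examples, not configurations from the paper.

```python
def directly_reachable(A, B):
    """Return the set of points of A u B directly reachable from B.

    Implements the recurrent definition: a point m in A becomes reachable
    once some neighbor n (itself reachable) has all of its other neighbors
    reachable as well -- then the eigenvalue equation at n determines
    psi(m) from already-known values.
    """
    AB = set(A) | set(B)
    reached = set(B)

    def nbrs(p):
        i, j = p
        return [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

    changed = True
    while changed:
        changed = False
        for n in list(reached):
            for m in nbrs(n):
                if m in AB and m not in reached:
                    others = [q for q in nbrs(n) if q != m]
                    # all other neighbors of n must lie in A u B
                    # and be reachable already
                    if all(q in AB and q in reached for q in others):
                        reached.add(m)
                        changed = True
    return reached
```

For example, the origin is reachable once B contains a neighbor n together with all other neighbors of n, but not if B contains the neighbor alone.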
We will need the following very coarse quantitative version of the unique continuation. Better modifications to this lemma can be made in particular situations.
By induction: in each step, we can only increase the value of the eigenfunction by a factor of 2d + |E| + W.□
In Lemma 3.3, one can generalize the Laplace operator in the following way: the weight of any edge connecting two adjacent points of B can be replaced by any number between −1 and 1. The reason is that, in the process of calculating the new values ψ(m), m ∈ A∖B, one would not have to divide by these weights.
This argument will be particularly useful in combination with Proposition 2.6 since it states that, while we may not be able to control the largest eigenvalue on a box of given size, we can control all remaining eigenvalues in terms of ω, f, and the size of the box.
We will also need some information about the eigenfunction decay, which follows from elementary perturbation theory.
In other words, if the values of V are clustered, then the corresponding eigenfunctions decay exponentially away from the clusters. The proof immediately follows from the standard perturbation theory.13 The statement is only meaningful for η ≫ 1; otherwise, one can absorb the bound into C(A, {Aj}). The constant does not depend on V or on η, as long as (3.2) is satisfied. In particular, it does not depend on the energy range inside a cluster; additionally, any dependence on the number of lattice points in each cluster can also be absorbed into C.
In Proposition 3.7, one can replace the Laplacian by a weighted Laplacian as long as the weights are bounded above by 1 in absolute value, with the same constants. One can also replace it by a long range operator if the distance function is modified accordingly.
The following statement, which also follows from perturbation theory for 2 × 2 matrices, is useful.
Apply partial 2 × 2 diagonalization to each edge that connects a pair of vertices from different clusters. By doing this, one will reduce all weights of those edges to at most O(ɛ2η−1). Afterward, additional perturbation of the whole operator of size O(ɛ2η−1) would completely eliminate those edges. Finally, one can restore the entries inside of each cluster to their original form HA′ by another perturbation of the order O(ɛ2η−1). Note that the constants do depend on the size of the domain and geometry of clusters; however, they can be chosen uniformly in the magnitude of the potential (in fact, large values of the potential only make the situation easier) as long as the clusters are η-separated.□
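One step of this partial 2 × 2 diagonalization is a Jacobi (Givens) rotation. The sketch below applies it to a hypothetical 3 × 3 example (not from the paper) in which cluster {0} is η-separated from cluster {1, 2} and all couplings have size ɛ, and shows the inter-cluster entries reappearing only at size O(ɛ2η−1).

```python
import numpy as np

def jacobi_rotation(A, p, q):
    """Orthogonal rotation in the (p, q) plane that zeroes the entry A[p, q]
    of a symmetric matrix A (one step of partial 2 x 2 diagonalization).
    Assumes A[p, p] != A[q, q], which holds for eta-separated clusters."""
    theta = 0.5 * np.arctan(2.0 * A[p, q] / (A[p, p] - A[q, q]))
    G = np.eye(len(A))
    c, s = np.cos(theta), np.sin(theta)
    G[p, p] = G[q, q] = c
    G[p, q], G[q, p] = -s, s
    return G.T @ A @ G

eps, eta = 1e-3, 1.0
A = np.array([[0.0, eps,       eps],
              [eps, eta,       eps],
              [eps, eps, eta + 1.0]])
A1 = jacobi_rotation(A, 0, 1)    # kill the (0,1) inter-cluster edge
A2 = jacobi_rotation(A1, 0, 2)   # kill the (0,2) inter-cluster edge
# the previously killed edge reappears, but only at size O(eps^2 / eta)
residual = max(abs(A2[0, 1]), abs(A2[0, 2]))
```

Since the rotations are orthogonal, the spectrum is preserved exactly, while the remaining inter-cluster coupling is quadratically small.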
We will also need the following property related to the differentiation of eigenvalues with respect to a parameter. The following is often referred to as Hellmann–Feynman variational formula; see Ref. 13, Remark II.2.2.
The remaining term in the right-hand side of (3.3) is E(t)·(d/dt)‖ψ(t)‖2, which is identically zero due to the normalization of ψ(t).□
In particular, if A(t) = A0 + f(t)⟨ej, ·⟩ej, then E′(t) = f′(t)|⟨ej, ψ(t)⟩|2.
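This rank-one case of the Hellmann–Feynman formula is easy to verify numerically: the derivative of a simple eigenvalue equals f′(t) times the squared j-th component of the normalized eigenvector. The matrix A0 and the function f below are illustrative choices, not taken from the paper.

```python
import numpy as np

# A(t) = A0 + f(t) <e_j, .> e_j with j = 0; the lowest eigenvalue of A(t)
# is simple here, so the Hellmann-Feynman formula applies.
A0 = np.array([[0.0, 0.1, 0.0],
               [0.1, 3.0, 0.1],
               [0.0, 0.1, 6.0]])

def f(t):
    return 2.0 * t              # so f'(t) = 2

def lowest_eig(t):
    E, V = np.linalg.eigh(A0 + np.diag([f(t), 0.0, 0.0]))
    return E[0], V[:, 0]        # eigh returns eigenvalues in ascending order

t = 0.3
E, psi = lowest_eig(t)
hf_derivative = 2.0 * psi[0] ** 2    # Hellmann-Feynman: f'(t) |<e_0, psi>|^2
h = 1e-5
fd = (lowest_eig(t + h)[0] - lowest_eig(t - h)[0]) / (2 * h)  # central difference
```

The central finite difference agrees with the variational formula to far better than the step size.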
IV. MATRIX FUNCTIONS WITH MARYLAND-TYPE DIAGONAL ENTRIES
Let f satisfy (f1) from Sec. II. We will say that f is locally Lipschitz monotone at x0 ∈ (−1/2, 1/2) if
This condition is a weaker version of Creg-regularity at x0 with Dmin ≥ 1 and will be sufficient for the construction in the current section. Let also
where {·} denotes the fractional part. Assume that f, ft are extended into ℝ by 1-periodicity. It is easy to see that f1 is locally Lipschitz monotone at all x0 ∈ (−1/2, 1/2). Moreover, if ft is locally Lipschitz monotone at x0, then all ft′, t′ ≥ t, are also locally Lipschitz monotone at x0.
In the following theorem, we consider N × N matrix-valued functions whose diagonal entries are locally Lipschitz monotone. Let {e1, …, eN} denote the standard basis in ℝN.
- Suppose that f is locally Lipschitz monotone at x0 − dj. Then, A(x) has a unique eigenvalue Ej(x) and a unique ℓ2-normalized real eigenvector ψj(x) satisfying(4.2)
There exists a continuous 1-periodic real orthogonal matrix function U(x) such that U−1(x)A(x)U(x) is a diagonal matrix and the jth column of U(x) coincides with ψj(x) from (1) for all x where (1) is applicable.
One needs to specify the exact meaning of “simple” in the case when one of the matrix entries becomes infinite. It is easy to see that, as x approaches dj, exactly one eigenvalue of At(x) approaches ±∞, and the remaining eigenvalues remain bounded. In fact, they approach the eigenvalues of At with the jth row and column removed and thus can be extended continuously through dj. Thus, we require all those finite eigenvalues to be simple. Equivalently, we require the distances between eigenvalues of At(x) to be bounded from below by some constant ϰ > 0 for all t ∈ [0, 1] and all x.
Part (1) follows from the standard perturbation theory of isolated eigenvalues; see also Proposition 2.3. Note that, once we fix a real ℓ2-normalized branch, it must be close to ej or −ej, and requiring it to be close to ej completely determines this branch.
A similar argument can be applied to x0 near the endpoints. Since one (and only one) eigenvalue of At(x) approaches infinity, the corresponding branch of eigenvector must approach ej or −ej as x → ±1/2 + dj, uniformly in t. Since for t = 1, it approaches +ej from both sides, the same holds for all t. In other words, it cannot suddenly change from a vector close to ej to a vector close to −ej due to continuity in t and the fact that it has to be close to ej or −ej for any particular t.□
Under the assumptions of Theorem 4.1, one can take t → +∞ in (4.1). In this case, ψj(x, t) will be well defined for all t ≥ 0 and uniformly approach ej as t → ∞. As a consequence, if the family {At(x)} satisfies the assumptions of Theorem 4.1, there is a unique family {Ut(x)} of real orthogonal matrices such that Ut(x) diagonalizes At(x) and Ut(x) → I as t → +∞. In this notation, we will have U(x) = U0(x). One can use this as an alternative definition of U(x).
Fix L > 0, N ∈ ℕ, and a ∈ ℝ. Let J be an N × N matrix satisfying the following properties: Jij = 0 for |i − j| > 1 and Jij = Jji. Assume that Ji,i+1 ≥ 1 for all applicable values of i and that Jii ∈ [a, a + L] for all i. Then, the spectrum of J is simple. Moreover, if Ei ≠ Ej are two eigenvalues of J, then |Ei − Ej| ≥ c(N, L) > 0.
The fact that the spectrum is simple is standard and follows from the fact that any solution to the eigenvalue equation is uniquely determined from its value at one point. The rest follows from compactness.□
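A quick numerical sanity check of this separation (a sketch with illustrative sizes; the observed gap is of course not the constant c(N, L) of the proposition):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_jacobi(N, a, L):
    """Symmetric tridiagonal matrix with J[i,i+1] >= 1 and J[i,i] in [a, a+L]."""
    diag = a + L * rng.random(N)
    off = 1.0 + rng.random(N - 1)      # off-diagonal entries >= 1
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# An eigenvector of such a matrix is determined by its first component via
# the three-term recurrence, so each eigenspace is one-dimensional and the
# spectrum is simple; numerically the minimal gap stays away from 0.
J = random_jacobi(8, a=-1.0, L=2.0)
E = np.linalg.eigvalsh(J)
min_gap = np.min(np.diff(E))
```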
By rescaling, one can obtain a version of the proposition with |Ei − Ej| ≥ c(N, ɛL)ɛ, provided that |Ji,i+1| > ɛ > 0.
In Proposition 4.4, one can relax the requirement Jii ∈ [a, a + L] for the endpoint values J11 and JNN: one can remove this restriction for one of them or for both if one of them goes to +∞ and the other to −∞.
In higher dimensions, Proposition 4.4 will be often used in combination with Proposition 3.9: if a cluster from the latter proposition has a linear shape (or is somewhat one-dimensional), then Proposition 4.4 guarantees that its eigenvalues will be cɛ-separated (after restricting the operator to the cluster). The presence of coupling terms with other clusters will shift the eigenvalues by at most O(ɛ2) and will thus preserve the separation condition.
V. THE MOVING BLOCK CONSTRUCTION: GENERAL SCHEME
As a reminder, the standard basis of ℤd (and ℝd) is denoted by {e1, e2, …, ed}. The translation operator on functions ψ: ℤd → ℂ is defined by
Since we will use the following specific translations more often, we also denote
Consider operator (1.1) on ℓ2(ℤd), and assume that f satisfies (f1). Recall that, without loss of generality, we assumed that
Fix some x0 ∈ ℝ and Creg > 0. The construction below will depend on the choice of x0; however, any x0 would be equally applicable. A lattice point n ∈ ℤd will be called regular for H(x) if f is Creg-regular at x + ω · n. As discussed in Corollary 2.4 and Remark 2.5, for sufficiently small ɛ > 0, H(x) has an eigenvalue close to f(x + ω · n) and the corresponding eigenvector close to en at any regular point n. The same also holds for a restriction of H onto any box containing n. The smallness of ɛ depends on the size of the box. Additionally, “close” in this case also applies to derivatives in x. A lattice point that is not regular for H(x) will be, naturally, called singular for H(x). Let
For the purposes of the construction, the point with infinite potential will be considered regular. Since the components of ω are rationally independent, there is at most one such point. We will always assume that singular points are contained in a finite range of energies:
(gen0) There is Ereg > 0 such that f is Creg-regular at any x with |f(x)| ≥ Ereg.
Let S ⊂ ℤd be a finite subset possibly containing some singular points. Let also R ⊃ S be another finite subset, with the following properties:
(gen1) S ∪ (S + e1) ⊂ R ∩ (R + e1).
(gen2) For every x ∈ [x0, x0 + ω1], all points of (R ∪ (R + e1))\(S ∪ (S + e1)) are regular for H(x).
Let also
We can represent R′ as a disjoint union R′ = R− ⊔ R+ ⊔ R0. Consider the following family of operators on ℓ2(R′):
where the potential part of the operator (without the Laplacian) is restricted onto the domains R±. For the values x ∈ [x0, x0 + ω1], we will linearly interpolate,
Assume the following condition:
(gen3) The matrix families defined above satisfy the assumptions of Theorem 4.1 for all x ∈ [x0, x0 + ω1].
Technically, in order to apply Theorem 4.1, the family has to be defined (possibly with infinite entries) and be periodic for all x ∈ ℝ. The exact form of the extension will not matter. For the sake of clarity, assume that for x ∈ [x0 + ω1, x0 + 1], the off-diagonal parts are linearly interpolated so that the resulting family is continuous, and then extended 1-periodically to ℝ. For the later construction [see the definition of the operator below], we will only use the values x ∈ [x0, x0 + ω1].
From the definitions, we have
where T: ℓ2(R0 ∪ R+) → ℓ2(R− ∪ R0) is the restriction of the translation map (which is a bijection between R0 ∪ R+ and R− ∪ R0). Denote by UR′(x), x ∈ [x0, x0 + ω1], the result of applying Theorem 4.1 to this family. For x = x0 and x = x0 + ω1, UR′(x) splits into a direct sum,
The two summands have real entries and, due to (5.2), diagonalize the same operator, up to the identification of R− ∪ R0 with R0 ∪ R+. Therefore, they consist of the same eigenvectors, up to the ordering and the choice of the signs. On the other hand, (gen3) guarantees that both approach the identity as t → +∞; see Remark 4.3. Therefore, they must be related in the same way as the operators (5.2) they diagonalize,
Extend UR′(x) into ℓ2(ℤd) by identity
Then, we have
We start from the family UR′(x) defined for x ∈ [x0, x0 + ω1]. One can easily check that there is a unique extension of this family to all x ∈ ℝ satisfying
This above construction will be called the diagonalization of a moving block, which is reflected in the superscript mb. Note that the singular set also has the following covariance property:
As a consequence, if the above-mentioned set S from the moving block construction contains singular points for H(x0), they will naturally correspond to singular points in S − e1 for H(x0 + ω1). In the above construction, as x increases, the set S “moves” along with speed 1/ω1 and traces a part of Ssing, thus justifying the name of the construction.
It will also be convenient to use the following language: a normal operator on ℓ2(ℤd) is supported on R′ if ℓ2(R′) is its invariant subspace and the operator acts as the identity on ℓ2(ℤd∖R′). The above construction implies that the resulting operator is supported on R′ for x ∈ [x0, x0 + ω1] and is supported on R′ − me1 for x ∈ [x0 + mω1, x0 + (m + 1)ω1], where m ∈ ℤ.
The operator family constructed above takes care of only a part of Ssing(x) that corresponds to a finite block S that moves as x increases. We now need to cover the whole singular set by such blocks and multiply the corresponding operators. At the same time, this family is not a quasiperiodic operator family: its entries are not even periodic in x. We will need to modify this family in order for it to be quasiperiodic. First, note that the set Ssing(x) is not only translationally covariant (5.7) but also translationally invariant,
Therefore, with each moving block, we actually have a “train” of blocks, with a translation by 1. Thus, it is natural to consider
We will later require [see (gen4)] that the supports of these operators are all disjoint. As a consequence, the above product will be well defined.
Unlike the single-block family, the family V1 is actually a periodic function of x and is quasiperiodic with respect to translations by ω1,
We now enforce the full covariance (quasiperiodicity) condition. Let ω′ = (ω2, …, ωd). For any , the family x ↦ T(0,n′)V1(x + n′ · ω′)T−(0,n′) describes a different moving block (in fact, a different train of moving blocks described above). Let
Each factor in (5.10) contains countably many copies of V1. If well defined, the product is a fully quasiperiodic operator family,
In order for the above products to be well defined, we will require the last condition:
The above construction will be referred to as a covariant family of moving blocks. The set S will be called the base block.
One may need more than one such family to take care of all Ssing(x), in which case we will require all involved operators to commute with each other (in other words, all sets R′ under consideration will not overlap with each other). For simplicity, we will now consider the case with only one covariant moving block. Let
We will now outline the general strategy of the proof of localization. Our goal is to show that, under some additional conditions on the sets R0 (which will impose some separation conditions on S), the operator family H2(x) will satisfy the assumptions of Proposition 2.1. For the convenience of referencing, we will denote the steps of the strategy by (s1)–(s5).
- (s1) Quasiperiodicity (5.11) implies that H2(x) is also a quasiperiodic operator family, with diagonal entries defined by a new function f2. For sufficiently small ɛ (depending on the size of R′), the function f2 will be, say, Creg/2-regular at any x where the original f was Creg-regular; see Corollary 2.4 and Remark 2.5.
- (s2)
As a consequence, our attention should be directed toward the diagonal entries of H2 corresponding to singular points for H(x). Assume that S ∪ (S + e1) is away from the boundary of R0, so that we avoid boundary effects. Then, for any m ∈ S ∪ (S + e1), f2(x + ω · m) is the exact eigenvalue of the block operator that corresponds to the lattice point m. Here, we use the relation between the eigenvalues of the block operator and lattice points of R′ provided in Theorem 4.1.
- (s3)
Suppose that, for a single block, we have a subset B ⊂ R0\(S ∪ (S + e1)) that satisfies DUCP for S ∪ (S + e1). For example, if R0 is a box whose boundary is not too close to S ∪ (S + e1), then we can surround S ∪ (S + e1) by a layer of thickness two inside R0; see Remark 3.6. For any eigenvalue of , corresponding to a lattice point m ∈ S ∪ (S + e1), apply Lemma 3.3 to the corresponding eigenfunction and conclude that it must be non-trivially supported on B. Since every point of B is regular, the corresponding diagonal entry of the operator is Lipschitz monotone. Proposition 3.10 guarantees that there is a non-trivial positive contribution to the derivative of each singular eigenvalue from some diagonal entries of B. The contribution from the remaining diagonal entries of to the derivative is also non-negative.
- (s4)
The previous part almost guarantees (conv2), with one caveat: the interpolation process in the definition of the block operators creates some x-dependent off-diagonal entries, which also contribute to the derivatives of singular eigenvalues, not necessarily in a positive way. However, these entries are outside of R0. If we require S ∪ (S + e1) to be away from the boundary of R0, we can make this contribution arbitrarily small, independent of the lower bound on the contribution from B described above. This will complete the verification of (conv2). The particular selection of the set B does not affect the definition of the operators; it only affects our choice of what to use as a lower bound on the derivatives. Thus, it can be chosen in an x-dependent way as long as the ultimate lower bounds are uniform in x.
- (s5)
In order to verify (conv3), note that, after diagonalization, singular points will only be coupled with points outside of R0. The strength of this coupling (i.e., in the language of Sec. II, the length of the corresponding edges) is controlled in terms of the decay of the corresponding eigenfunctions, which, in turn, can be obtained using Propositions 3.7 and 3.11 [the latter is used to control the derivatives, as explained in (conv4)]. Ultimately, one can make it bounded by an arbitrarily large power of ɛ, by requiring the separation to be large enough.
We will summarize the above construction in the following theorem.
f satisfies (f1) and (gen0).
There exists a box R ⊃ S such that the family of covariant moving blocks, constructed above, satisfies (gen1)–(gen4).
Define R0, R′ as in (5.1). We will require .
Ssing(x) is contained in the support of the operator family constructed above in (5.10). Due to (gen2), this means that Ssing(x) is contained in the union of the copies of the sets S ∪ (S + e1) that appear in its definition.
The eigenvalues of the block operator are uniformly csepɛ-separated for all x ∈ [x0, x0 + ω1]. Moreover, this property holds with f replaced by ft from (4.1), uniformly in t ∈ [0, 1].
There is additional dependence of ɛ0 on f, besides the dependence through Creg and Ereg. This dependence can be made explicit; we will specify it here in order to avoid overloading the statement of the main result. Consider the smallest box B such that . Then, ɛ0 depends on f through the quantity W(f, M, ω) from Proposition 2.6. Note that the dependence on ω can be reduced to dependence on Cdio, τdio, and the dependence on M is accounted for in the dependence on S. See also Remark 3.6.
B(x) ⊃ (S ∪ (S + e1)) and .
For any box R ⊃ S with , the set
The choice of B(x) can be made independently of R. Indeed, we can always choose two non-overlapping layers of thickness two around S and then pick one of them with the smaller maximal value of the potential. For example, let B1(x) be the smallest box such that and B2(x) be the smallest box such that . Then, for any x, we can either pick B = B1(x) or B = B2(x).
As a consequence, each singular diagonal entry of H2(x) located on S ∪ (S + e1) has only outgoing edges of length bounded by Cɛr−2, with a derivative bound Cɛr−3. It would be sufficient to show that the assumption (conv2) is satisfied with μ independent of r. Note that we already have μ = 1 for each regular eigenvalue.
One can naturally extend the result of Theorem 5.2 to the case where one requires more than one type of blocks to cover Ssing, assuming that they do not “interact” with each other.
VI. PARTICULAR CASES AND REFINEMENTS OF THEOREM 5.2
As the title suggests, in this section, we will discuss several more concrete examples of the construction from Theorem 5.2. In each particular case, one can make some optimizations in choosing the unique continuation sets and obtain weaker requirements on the separation between different blocks.
A. Example 1: The case d = 1
We will illustrate the above procedure, first, on the case where f has a single flat segment and is Creg-regular outside of a neighborhood of it. Assume the following.
(z1) f(x) = E for x ∈ [a, a + L] = [b − L/2, b + L/2] ⊂ (−1/2, 1/2).
(z2) L is not an integer multiple of ω. Let L = pω + z, p ∈ ℤ, z ∈ (0, 1). Let also β = min{z, 1 − z}.
(z3) f is Creg-regular outside of [a − β, a + L + β].
(z4) Let M = ⌈L/2ω⌉ + 2. Assume that the points
are contained inside the interval (−1/2, 1/2).
Condition (z4) is the most important one and has to be present in some form. It manifests the separation condition (2) from Theorem 5.2. Let also
Let HM(x) be the restriction of H(x) onto the interval [−M, …, M + 1], a (2M + 2) × (2M + 2) matrix. The asymmetry is caused by the fact that we will consider H(x) for x ∈ [b, b + ω].
For x ∈ [b, b + ω], consider the matrix HM(x) and modify it as follows: the entries that couple −M with −M + 1 are linearly interpolated from ɛ at x = b to 0 at x = b + ω, and the entries coupling M and M + 1, respectively, in the opposite way, from 0 at x = b to ɛ at x = b + ω. Denote the resulting operator by HM′(x). Denote also by UM(x) the diagonalizing operator obtained in Theorem 4.1 for HM′(x). Proposition 4.4 combined with Proposition 3.9 guarantees that the family UM(x) satisfies the assumptions of Theorem 4.1 (note that we cannot apply Proposition 4.4 directly since, after rescaling, the range of energies will be of size ɛ^{−1}; however, one can use Proposition 3.9 to treat the large eigenvalues corresponding to lattice points near the edges of the interval).
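The boundary modification just described can be sketched numerically. In the following, the helper name h_m_prime and the toy sampling function are hypothetical; only the interpolation rule for the two edge couplings comes from the text:

```python
import numpy as np

def h_m_prime(x, b, omega, eps, f, M):
    """Sketch of the modified finite-volume matrix H_M'(x).

    Sites are n = -M, ..., M + 1, giving a (2M + 2) x (2M + 2) matrix with
    diagonal f(x + n*omega) and hopping eps; the two boundary couplings are
    linearly interpolated over x in [b, b + omega] as described in the text.
    (The name h_m_prime and the toy f below are illustrative only; the 1-
    periodicity of f is ignored in this sketch.)
    """
    sites = np.arange(-M, M + 2)
    H = np.diag([float(f(x + n * omega)) for n in sites])
    H += eps * (np.eye(2 * M + 2, k=1) + np.eye(2 * M + 2, k=-1))
    t = (x - b) / omega                  # 0 at x = b, 1 at x = b + omega
    H[0, 1] = H[1, 0] = eps * (1 - t)    # couples -M and -M+1: eps -> 0
    H[-2, -1] = H[-1, -2] = eps * t      # couples M and M+1:   0 -> eps
    return H
```

At x = b the left edge coupling equals ɛ and the right one vanishes; at x = b + ω the roles are exchanged, and the matrix stays symmetric throughout.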
Let ψ be an eigenfunction corresponding to one of the lattice points on S ∪ (S + e1) = [−M + 3, M − 2], normalized in, say, ℓ^2([−M, M + 1]), with eigenvalue Ej(x), j ∈ [−M + 3, M − 2] (here, the eigenvalues are identified with lattice points through Theorem 4.1). By Proposition 3.7, ψ is supported on [−M + 3, M − 2] up to O(ɛ). Moreover, ψ(−M + 2) = O(ɛ) and ψ(M + 1) = O(ɛ^2). The values f(x + mω), m ∈ S, are all equal to E, except possibly f(x + (M − 2)ω).
One can check that, under the assumptions of Theorem 6.1, the derivative of the integrated density of states of the family H(x) has a spike of height ≍ ɛ^{−2} and width ≍ ɛ^2 around energy E. One can produce larger spikes by combining multiple flat intervals at different energies; see Examples 5 and 6.
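The order of the spike can be recorded by the standard heuristic relating the density of states to the speed of the eigenvalue branches (a sketch, not the paper's derivation; E_j(x) denotes the eigenvalue branches as above):

```latex
\frac{dN}{dE'}(E') \;\asymp\; \sum_{j}\left.\frac{1}{|E_j'(x)|}\right|_{E_j(x)=E'}
\;\asymp\; \varepsilon^{-2}
\qquad \text{for } |E' - E| \lesssim \varepsilon^{2},
```

so a spike of height ≍ ɛ^{−2} over a window of width ≍ ɛ^2 carries ɛ-independent total mass, as it must for the integrated density of states.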
B. A higher-dimensional version
The approach from Subsection VI A can be extended to several higher-dimensional examples.
Example 2: Let d ≥ 2. Suppose that f and ω1 satisfy all assumptions from Subsection VI A (with ω = ω1). Additionally, assume the following:
(z5) For any x ∈ [a, a + L] and any , , and |n|1 ≤ 6, f is Creg-regular at x + n · ω.
Proposition 3.7 implies that any singular eigenfunction, restricted to , will be O(ɛ^2). The rest of the set B is one step away from S, and therefore any eigenfunction will ultimately have at least cɛ of its mass somewhere on B. One may have to treat the right-most point of S separately, but the argument does not change from Theorem 6.1, in view of Propositions 4.4 and 3.9.
Once UM(x) is constructed for x ∈ [b, b + ω1], we repeat the steps in Sec. V in order to construct the operator . Condition (z3) will guarantee that the supports of different copies of will not overlap.□
C. Several additional cases
In this subsection, we outline several additional cases, which, ultimately, are covered by Theorem 5.2.
Example 3: One can consider multiple intervals of the type described in Theorem 6.1, as long as they are separated enough that the supports of the corresponding copies of the operator U1 are disjoint. Similarly, one can combine multiple “non-interacting” cases of Theorem 6.3.
Example 4: More interestingly, if two intervals from Theorem 6.1 are not sufficiently disjoint, one can consider them as one singular set and apply the argument from the general theorem. One can also do it in the setting of Theorem 6.3, with a strengthened version of (z5).
Example 5: The following example can be called “a chain of intervals.” Let d = 1 and I1, …, IN ⊂ (−1/2, 1/2) be a collection of disjoint intervals,
and for j = 1, …, N − 1. Assume also that, for sufficiently large r = r(N) (r = 2N should be enough), we have
Then, the assumptions of Theorem 5.2 are satisfied, and we have Anderson localization for small ɛ.
Suppose that, on top of all these assumptions, Ij+1 = Ij + ω, and assume for simplicity that N = 2k − 1. Then, it is easy to see that f2′(x) ≍ ɛ^{2k} for x ∈ Ik. As a result, the derivative of the integrated density of states has a spike of height ≍ ɛ^{−2k} and width ≍ ɛ^{2k} around Ek.
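One way to sanity-check these exponents is by mass conservation (a heuristic sketch, with notation as above):

```latex
f_2'(x) \;\asymp\; \varepsilon^{2k} \quad (x \in I_k)
\qquad\Longrightarrow\qquad
N'(E) \;\asymp\; \varepsilon^{-2k}
\ \text{ on a window of width } \asymp \varepsilon^{2k},
```

so the spike around E_k again carries total mass of order one, uniformly in ɛ, as required for a distribution function.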
Example 6: The following example is a combination of Examples 2 and 5. It can be called “a tree of intervals.” Let I1, …, IN be a collection of disjoint intervals,
Suppose that, for some x ∈ I1, we have, say, x2 ≔ x + ω2 ∈ I2. Additional shifts by ω2, …, ωd may produce points, say, x3 = x2 + ω3 ∈ I3, …. Assume that, for any x ∈ I1, we are guaranteed to escape all intervals after, say, q steps in the directions perpendicular to e1 and would only be able to come back to I1 after r additional steps; in other words, for x ∈ I1, f is regular at x + ω′ · n′ for all n′ ∈ ℤ^{d−1} with q ≤ |n′| ≤ q + r. One can refine this in each particular case if the number of steps needed to escape the system of intervals depends on the direction. In all cases, we require a “layer” of thickness r that consists of regular points and surrounds the collection of singular points described above; the thickness of the layer depends on the number of steps required to escape the singular intervals. For r sufficiently large, the assumptions of Theorem 5.2 are satisfied. In particular, to establish the separation condition (5) from Theorem 5.2, note that each interval produces a block with cɛ-separated eigenvalues due to Proposition 4.4. Different intervals are located at different energies and therefore do not affect each other (each lies in its own cluster). The coupling between these intervals satisfies the assumptions of Proposition 3.9 and therefore also does not affect the separation condition. After the conjugation, we will have, for the new diagonal function f2(x),
where μj depends on the number of steps in the directions perpendicular to e1 required to escape the interval (a more careful analysis shows that it equals twice the number of steps). As a consequence, the derivative of the integrated density of states of H will have spikes of order ɛ^{−μj} around the energies close to Ej.
ACKNOWLEDGMENTS
The research of L.P. was partially supported by the EPSRC, Grant Nos. EP/J016829/1 and EP/P024793/1. R.S. was partially supported by the NSF, Grant No. DMS–1814664. I.K. was partially supported by the NSF, Grant No. DMS–1846114.
At various stages of their careers, all authors were deeply inspired by the groundbreaking work of Jean Bourgain. This paper is dedicated to his memory.
DATA AVAILABILITY
Data sharing is not applicable to this article as no new data were created or analyzed in this study.